Describes a model to analyze software systems and determine areas of risk. Discusses limitations of typical test design methods and provides an example of how to use the model to create a high-volume automated testing framework.
Software Risk Analysis
1. Software Risk Analysis Data definition and verification key to mitigating risk By Brett Leonard [email_address]
2.
3. Most software organizations test only the known variations because they use written specifications as the basis for their test cases.
4. The adoption of test factories makes the problem worse by forcing experienced testers to spend their time coordinating the activities of junior testers.
5. Coverage of unknown or undefined variables can be accomplished by using high-volume automated testing. Use this risk analysis model to facilitate conversation and to map areas of risk within an application.
7. Software Risk Analysis Model - Interface The Interface Process Group involves programs and frameworks that facilitate communication between programs and/or systems.
8. Software Risk Analysis Model - Data Data can be discrete (non-changing or reference data) or continuous (changing). An example of discrete data would be settings of a program that are generally left unchanged. Specific transaction-level data, like dollar amounts and transaction types, are examples of continuous data.
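The distinction above can be illustrated with a small sketch. This is not from the deck; the account types and dollar amounts are invented examples, but they show how combining discrete (reference) data with continuous (transaction-level) data defines the variation space a test suite has to cover:

```python
from itertools import product

# Discrete (reference) data: settings that rarely change.
account_types = ["checking", "savings", "money_market"]

# Continuous (transaction-level) data: values sampled from a range,
# including boundary and invalid values.
dollar_amounts = [0.00, 0.01, 99.99, 1_000_000.00, -5.00]

# Crossing both kinds yields the variation space tests must cover.
variations = list(product(account_types, dollar_amounts))
print(len(variations))  # 3 account types x 5 amounts = 15 variations
```

Even this tiny example shows why requirement-based tests, which typically pick one or two of these combinations, leave most of the space unexplored.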
9. Software Risk Analysis Model - Process The Process group includes modules and programs that control and manipulate data – these represent the main functions of the application.
10. Software Risk Analysis Model - Variables Each process group has known and unknown variables
11. Software Risk Analysis Model – Where's the risk? These variables interact with each other to introduce risk to your software products.
12. Software Risk Analysis Model – Focus is on known variations Most groups focus tests on the known intersection of all three process groups.
13. Software Risk Analysis Model – Typical test design We can't blame them – that is what they are taught... Typical Test Design Process Limitations: - Assumes the system requirements are correct and complete – most of the time they are not. - Does not involve decomposition of existing components. - Allows testers to be “lazy” and derive tests only from written requirements. - Many issues will not be caught because they result from interactions between undefined areas – not known by the system analyst or developer – and only manifest when specific variations are hit.
14. Software Risk Analysis Model – Test factory Test Factory Process [diagram: experienced testers coordinating tiers of junior testers] In recent times, the “Test Case Factory” has been adopted by large companies trying to leverage offshore resources. An experienced onshore resource does the analysis and creates test requirements and scenarios; inexperienced testers then build the test cases.
15. Software Risk Analysis Model – Test factory Limitations of the test factory 1. Experienced testers spend their valuable time coordinating the activities of junior testers when they should be identifying risks in the system so that test cases can be targeted beyond the original requirements. 2. Work packages are not easy to put together for complex tests. As a result, low-power tests are sent to junior testers while the burden of designing and building complex tests falls on experienced testers. 3. Junior testers' knowledge of the system is limited to the test cases they are assigned; when they execute, they lack system knowledge and will likely find mostly incidental issues. 4. A disproportionate amount of time and effort is spent defining and coordinating low-power test cases, which can leave a large number of them in the test suite that must be executed to keep project managers happy.
16. Software Risk Analysis Model – How to use How to use the risk analysis model? 1. The goal should be to understand the system under development as much as possible – using the process groups can help decompose the system into smaller components. 2. Developers and testers should drive the focus from the known to the unknown, expanding coverage to include as many meaningful data variations as possible in the test process – regardless of what the requirements define. 3. One way to shift the focus from known to unknown variations is to analyze the known and ask questions that force us and others to think about the possible unknown. 4. Testing should focus on elements and process areas that have the greatest potential for visible, high-impact issues.
17. Software Risk Analysis Model – Data variations are key Data variations are the key to mitigating risk 1. Varying discrete and continuous data can uncover unknown data variations missed by requirement-based tests. 2. Deep analysis and questioning of the system's components and how they interrelate will allow us to derive data variations that can lead to failures. 3. Developers can help by pointing in the direction of unknown or untested variations. Testers can facilitate this process by managing the communication with developers.
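One way to act on point 1 above is to derive variations mechanically from a single known-good value. The heuristics below (empty input, whitespace, over-length strings, non-ASCII) are illustrative assumptions, not a list from the deck, but they show how analysis of the known can generate probes into the unknown:

```python
def boundary_variations(valid_value: str, max_len: int):
    """Derive data variations around one known-valid input.

    Starting from a 'known' value, produce candidates that probe the
    'unknown': empty input, whitespace, over-length strings, and
    non-ASCII characters. (Illustrative heuristics only.)
    """
    return [
        valid_value,                                       # known-good baseline
        "",                                                # empty
        "   ",                                             # whitespace only
        valid_value * (max_len // len(valid_value) + 2),   # exceeds max_len
        valid_value + "\u00e9",                            # non-ASCII suffix
    ]

cases = boundary_variations("GOLD", max_len=8)
print(len(cases))  # 5 candidate variations from one known value
```

Each candidate is cheap to produce, which is exactly what makes this kind of derivation a good fit for high-volume automation later in the deck.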
18. Software Risk Analysis Model – Developers role? What can developers do? 1. Document potential risk areas – identify discrete data variations, identify continuous data variations, and identify where data is found and displayed in the system. 2. Unit test with data likely to produce failure – flush out issues relating to the data/interface and process/interface groups early in the test process. 3. Document the data variations used in unit testing. 4. Document unit test procedures – help testers avoid “reinventing the wheel” and ensure smooth, continuous testing as responsibilities shift.
19. Software Risk Analysis Model – Testers role? What can testers do? 1. Understand the system under test – create a mind map of the system, and ask questions early in the design/development phase about your understanding of the elements within the process groups. 2. Analyze and test the validity of the known data variations. 3. Test data – identify and set aside test data that can be used during unit, system, integration, and acceptance testing. 4. Collaborative test planning – create integrated test teams with representatives from testing, development, and business; discuss relevant data variations and create an integrated data strategy. 5. Perform system testing and check assumptions before the formal test period begins. 6. Provide the development team with customer focus and direction.
20. Software Risk Analysis Model – Automated Testing Automated testing (specifically high-volume automated testing) can help mitigate the risk resulting from unknown data variations. After a thorough analysis of the system, areas should be identified that may benefit from high-volume automated testing. Here is an example: Suppose you were interested in testing the back-end functionality of a web subscription service. For a subscription to be completed, information must be typed in through a website. The subscription process involves a number of pages, and each subscription takes approximately 5 minutes to complete. You are not concerned with the front end (web pages) but want to make sure that the database is populated correctly once the information is submitted. This is a very good case for high-volume automated testing!
21. Software Risk Analysis Model – Automated Testing Let's break this system into its component parts: Interface: Web GUI (HTTP/SOAP/XML) -> XML Midware Component (ODBC) Data: Web GUI (Text/XML) -> XML Midware (SQL) -> Database Process: Web GUI Text Validation -> Package to XML -> XML Validation -> XML Conversion to SQL -> Update Database Looking at this analysis, one way to test the system is to bypass the Web GUI and send data directly to the middleware component. This avoids time-consuming front-end data input and allows us to fully test the back end.
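The bypass idea can be sketched in a few lines: build the XML record the Web GUI would normally produce, so a test can submit it straight to the middleware. The element names (`subscription`, `subscriber`, `email`, `plan`) are assumptions for illustration, not the real schema:

```python
import xml.etree.ElementTree as ET

def build_subscription_xml(subscriber: str, email: str, plan: str) -> bytes:
    """Build the XML record the Web GUI would normally package,
    so tests can send it to the middleware without the front end.
    (Element names are hypothetical.)"""
    root = ET.Element("subscription")
    ET.SubElement(root, "subscriber").text = subscriber
    ET.SubElement(root, "email").text = email
    ET.SubElement(root, "plan").text = plan
    return ET.tostring(root, encoding="utf-8")

payload = build_subscription_xml("Jane Doe", "jane@example.com", "annual")
print(b"<subscription>" in payload)  # True
```

The resulting bytes could then be posted to the middleware endpoint (for example with `urllib.request`) exactly the way the front-end web code would submit them.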
22. Software Risk Analysis Model – Automated Testing Simple architecture for high-volume automated testing:
23. Software Risk Analysis Model – Automated Testing How does the architecture work? 1. The test data is stored in an XLS file so that it can be easily changed by non-technical people. 2. The test engine takes the data and creates the necessary XML records. 3. The test engine sends the XML data to the middleware component the same way the front-end web code would. 4. The middleware performs the database update process and sends an XML file back to the test engine. 5. The test engine parses the XML and determines whether the update occurred successfully. 6. The test engine can then perform a SQL inquiry against the database to make sure the data was updated correctly (optional). This process can take a 5-minute manual transaction and reduce it to a few seconds, greatly increasing the number of data variations that can be tested.
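The six steps above can be sketched end to end. This is a minimal stand-in, not the real framework: CSV text replaces the XLS workbook, and an in-memory SQLite table plus a stub function replace the middleware and production database, so the control flow (read row -> build XML -> submit -> parse reply -> verify via SQL) is runnable in one piece:

```python
import csv
import io
import sqlite3
import xml.etree.ElementTree as ET

# Step 1 stand-in: test data editable by non-technical people
# (the real engine would read an XLS workbook instead).
TEST_DATA = """subscriber,plan
Jane Doe,annual
John Roe,monthly
"""

def build_xml(row: dict) -> bytes:
    # Step 2: turn one data row into an XML record.
    root = ET.Element("subscription")
    for field, value in row.items():
        ET.SubElement(root, field).text = value
    return ET.tostring(root)

def fake_middleware(xml_bytes: bytes, db: sqlite3.Connection) -> bytes:
    # Steps 3-4 stand-in: the real middleware would receive this XML
    # over the wire, update the database, and reply with XML status.
    rec = ET.fromstring(xml_bytes)
    db.execute("INSERT INTO subs VALUES (?, ?)",
               (rec.findtext("subscriber"), rec.findtext("plan")))
    return b"<result><status>OK</status></result>"

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE subs (subscriber TEXT, plan TEXT)")

failures = 0
for row in csv.DictReader(io.StringIO(TEST_DATA)):
    reply = fake_middleware(build_xml(row), db)
    # Step 5: parse the XML reply and check the status.
    if ET.fromstring(reply).findtext("status") != "OK":
        failures += 1
    # Step 6 (optional): SQL inquiry to confirm the update landed.
    count, = db.execute("SELECT COUNT(*) FROM subs WHERE subscriber = ?",
                        (row["subscriber"],)).fetchone()
    if count != 1:
        failures += 1

print(failures)  # 0
```

Because the loop body is just data transformation and a request/response round trip, scaling it from two rows to thousands is what turns the 5-minute manual transaction into a few seconds per variation.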
24.
25. The interface involves components that facilitate communication between areas of the system (example: ODBC facilitates communication between applications and databases)
26. In a software development project there are known or defined areas of the system and unknown or undefined areas of the system.
27. Many failures can be traced to unknown or undefined areas of a system.
28. Using the Risk Analysis Model can help identify areas within the system that contain risk.
29. Typical test design focuses on requirements and by definition avoids unknown or undefined areas of the system.
30. Test factories exacerbate the issue by forcing experienced engineers to coordinate and review junior engineers' work, which leaves less time for deep system analysis.