Software Testing: A Brief Primer
About Software Testing
This piece is an attempt to capture the essence of software testing in a single document. It should help beginners with frequently asked questions, or bring them up to speed in the shortest possible time.
What is Software Testing
At a broad level, software testing checks/verifies that "the software conforms to the requirement document/specification".
Verification and Validation are two terms often confused in software testing. Verification is the testing of test items for conformance with the requirement specifications. Software testing is a subset of verification, which also uses techniques such as reviews, analysis, inspections and walkthroughs. Validation is the process of checking that what has been specified is what the user actually wanted.
What is a bug
The term bug is popularly traced to an incident in which an actual insect (a moth) was found trapped inside the hardware of an early computer and caused a malfunction. In software testing, the term refers to a deviation from the expected behavior of the application.
Other activities often associated with software testing are static analysis and dynamic analysis. Static analysis investigates the source code of software, looking for problems and gathering metrics without actually executing the code. Dynamic analysis looks at the behavior of software while it is executing, to provide information such as execution paths and test coverage.
Requirement Specifications based Testing
The fundamental first step toward software testing is the availability of requirement specifications. The software under test can be as small as a calculator app or as large as a distributed, multi-tier enterprise application.
Depending on the complexity of the Test Application, there can be a single or multiple Requirement Specification documents. Some of the most commonly used include:
SRS (Software Requirements Specification): a baseline document detailing how the Application Under Test is supposed to behave.
Design Specification: describes the architecture of the Application Under Test.
Once the requirement specifications are frozen, test documentation starts. A high-level test design includes test strategy, test planning, test case design, test execution and reporting.
A Test Strategy defines the organizational-level testing goals and approaches. Its framework applies at the level of the test organization, which may serve one or several projects/products.
A Test Plan defines the entire scope and process for testing the Application Under Test. It captures what needs to be tested, what does not need to be tested (equally important), at what level the items will be tested, in what sequence they will be tested, how the test strategy applies to the testing of each item, and what the test environment is.
A test plan may be project-wide, or may in fact be a hierarchy of plans relating to the various levels of specification and testing.
One of the best practices in the software development life cycle is to include the testers in the requirements review before the requirements are frozen. This helps in the early detection of potential bugs that could otherwise resurface later due to incorrect requirements.
Test Case Development
Once the test plan is completed, the next stage of test design is to list the test cases that need to be executed. A number of test cases will be identified for each item to be tested at each level of testing. Each test case will specify how the implementation of a particular requirement or design decision is to be tested and the criteria for success of the test.
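To make the shape of a test case concrete, here is a minimal sketch in Python. The record fields (identifier, requirement under test, steps, success criteria) mirror the elements named above; the field names and the sample login case are illustrative, not taken from any particular standard.

```python
from dataclasses import dataclass

# Illustrative test case record; fields follow the elements described
# in the text: what is tested, how, and the criteria for success.
@dataclass
class TestCase:
    case_id: str        # unique identifier, e.g. "TC-LOGIN-001"
    requirement: str    # requirement or design decision being verified
    steps: list         # ordered actions the tester performs
    expected: str       # criteria for success of the test

tc = TestCase(
    case_id="TC-LOGIN-001",
    requirement="REQ-4.2: valid credentials grant access",
    steps=["Open login page", "Enter valid user/password", "Click Login"],
    expected="User lands on the dashboard page",
)
print(tc.case_id, "->", tc.expected)
```

In practice such records live in a test management tool or a Test Case Document rather than in code, but the structure is the same.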
The test cases may be documented with the test plan, as a section of a software specification, or in a separate document called a Test Case Document.
Acceptance Test Specification: This specifies the test cases for acceptance testing of the software.
System Test Specification: includes the test cases for system integration and testing. It can be published either as a separate document or with the system test plan.
Software Integration Test Specification: specifies the test cases for each stage of integration of tested software components.
Unit Test Specifications: This specifies the test cases for testing of individual units of software. These may form sections of the Detailed Design Specifications.
Concept of Positive and Negative Testing:
Positive Testing checks that the software does what it should.
Negative Testing checks that the software doesn't do what it shouldn't. It covers the negative scenarios that a user familiar with the application would normally never attempt.
Example: I used to enter long file names, euro characters, spaces and the like in validation fields, and that is where most of the bugs surfaced.
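The positive/negative distinction can be sketched against a small validator. The function and its rules (a length limit, no embedded spaces) are hypothetical, invented only to illustrate the two kinds of checks:

```python
# Hypothetical field validator; the rules (1-20 chars, no spaces)
# are made up for illustration.
def validate_username(name: str) -> bool:
    return 0 < len(name) <= 20 and " " not in name

# Positive test: the software does what it should.
assert validate_username("alice") is True

# Negative tests: the software rejects what it shouldn't accept --
# the overlong, space-laden, or empty inputs a casual user would
# never think to try.
assert validate_username("x" * 200) is False   # overlong input
assert validate_username("two words") is False # embedded space
assert validate_username("") is False          # empty field

print("positive and negative checks passed")
```

Notice that the negative cases outnumber the positive one; that ratio is typical of where bugs hide.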
White-box Testing: also known as glass-box testing, this is testing of the actual code and is normally done by developers.
Unit Testing, in which each unit of the software is tested to verify that the detailed design for the unit has been correctly implemented.
Integration Testing, in which progressively larger groups of tested software components corresponding to elements of the architectural design are integrated and tested until the software works as a whole.
System Testing, in which the software is integrated to the overall product and tested to show that all requirements are met.
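Of the levels above, unit testing is the easiest to show in code. The sketch below assumes a small, hypothetical pricing helper as the unit under test; a runner such as pytest would discover the `test_` functions automatically, but here they are called directly to keep the example self-contained:

```python
# Unit under test: a hypothetical pricing helper from the detailed design.
def add_vat(price: float, rate: float = 0.2) -> float:
    """Return the price with value-added tax applied, rounded to cents."""
    return round(price * (1 + rate), 2)

# Unit tests verifying that the detailed design was correctly implemented.
def test_default_rate():
    assert add_vat(100.0) == 120.0

def test_zero_rate():
    assert add_vat(50.0, rate=0.0) == 50.0

# Run the unit tests directly.
test_default_rate()
test_zero_rate()
print("unit tests passed")
```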
A further level of testing is also concerned with requirements:
What is User Acceptance Testing?
UAT (User Acceptance Testing): the testing upon which acceptance of the completed software is based. It often uses a subset of the system tests, witnessed by the customers for the software or system.
Once each level of software specification has been written, the next step is to design the tests. An important point here is that the tests should be designed before the software is implemented, because if the software was implemented first it would be too tempting to test the software against what it is observed to do (which is not really testing at all), rather than against what it is specified to do.
Within each level of testing, once the tests have been executed, the test results are evaluated. If a problem is encountered, then either the tests are revised and executed again, or the software bug is fixed and the tests are executed again. This is repeated until no problems are encountered, at which point development can proceed to the next level of testing.
Testing does not end following the conclusion of acceptance testing. Software has to be maintained to fix problems which show up during use and to accommodate new requirements. Software tests have to be repeated, modified and extended. The effort to revise and repeat tests consequently forms a major part of the overall cost of developing and maintaining software.
Regression Testing: the term is often confused with re-testing, but regression testing refers to the repetition of earlier successful tests in order to make sure that changes to the software have not introduced new bugs.
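A minimal sketch of the idea, assuming a hypothetical `slugify` function under maintenance: earlier successful tests are kept as a baseline of (input, expected) pairs and re-run after every change, and any mismatch is a regression.

```python
# Function under maintenance (illustrative only).
def slugify(title: str) -> str:
    return title.strip().lower().replace(" ", "-")

# Baseline of previously passing cases; a change to slugify() must not
# break any of them.
REGRESSION_BASELINE = [
    ("Hello World", "hello-world"),
    ("  Trim Me  ", "trim-me"),
]

def run_regression():
    """Re-run every baseline case; return the list of failures."""
    return [(inp, expected, slugify(inp))
            for inp, expected in REGRESSION_BASELINE
            if slugify(inp) != expected]

print("regressions:", run_regression())  # → regressions: []
```

In a real project the baseline is the accumulated automated suite itself, not a hand-kept list, but the principle is identical.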
What are Test Procedures
The final stage of test design is to implement a set of test cases as a test procedure, specifying the exact process to be followed to conduct each of the test cases. This is a fairly straightforward process, which can be likened to designing units of code from higher-level functional descriptions.
For each item to be tested, at each level of testing, a test procedure will specify the process to be followed in conducting the appropriate test cases. A test procedure cannot leave out steps or make assumptions. The level of detail must be such that the test procedure is deterministic and repeatable.
Test procedures should always be separate items, because they contain a great deal of detail which is irrelevant to software specifications. If test automation tools are used, test procedures may be coded directly as automated test scripts.
When tests are executed, the outputs of each test execution should be recorded in a Test Report. These results are then assessed against the criteria in the test specification to determine the overall outcome of each test. If test automation tools are used, the report can be created and the results assessed automatically according to criteria specified in the test script.
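The record-then-assess step can be sketched as follows; the report layout and field names are illustrative, not a standard format:

```python
import json

# Raw outcomes captured during execution (illustrative data).
results = [
    {"case_id": "TC-001", "expected": "OK", "actual": "OK"},
    {"case_id": "TC-002", "expected": "OK", "actual": "ERROR"},
]

# Assess each result against its pass criterion.
for r in results:
    r["verdict"] = "PASS" if r["actual"] == r["expected"] else "FAIL"

# Summarize into a simple test report.
report = {
    "total": len(results),
    "passed": sum(r["verdict"] == "PASS" for r in results),
    "cases": results,
}
print(json.dumps(report, indent=2))
```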
Extreme Cases in Software Testing
There are certain extreme cases which I encountered during my career in software testing and would like to share in the current document:
All Passes without Execution
Reporting all tests as passed, and sending the report, without actually executing them. This was very common when I started my career; the number of instances I encounter now is smaller, but they still occur. The worst thing that can happen is the product failing at the customer's premises. This can cause huge losses to the organization and can also damage the career of the individual tester who signed off on it.
Lowest-severity bug that becomes first priority/BLOCKER (from a release perspective): has anyone come across a Microsoft product that says "Win" instead of "Windows"? You won't be able to find one. Why? Because while a tester might log it as the lowest severity, for the vendor (Microsoft) it becomes priority 1/BLOCKER and has to be fixed before a release is made.
The opposite is also typical: you log a Severity 1 crash bug (a test blocker), but management treats it as the lowest priority. Why?
In one instance, a vendor released a version of an OS with a known bug: after installing the OS on a new machine, if you pulled out the hard disk cable, inserted it back and booted the system, the OS would crash and become completely unrecoverable, requiring a full re-install. The vendor still released it. Why? Because the vendor did not expect the end user to do that.
A couple of URLs that could come in handy: