We advocate a middle ground between using no software assertions at all (a common practice that is fortunately becoming less common) and the opposite extreme of placing assertions on every statement in a program. Our compromise is to place assertions only at locations where traditional testing is unlikely to uncover software defects. Once testing is completed, the embedded assertions may be removed or deactivated.
Assertions can be extremely powerful testing aids; however, they are expensive to derive, instrument (insert), and execute. Thus, if we can find ways to better isolate where they are needed, we can not only improve the likelihood of fault detection at those places, but also avoid the cost of using assertions where they are less helpful.
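As a minimal illustration of our own (not an example from the literature), the sketch below embeds an assertion that checks a loop invariant. In Python, `assert` statements are skipped when the interpreter runs with the `-O` flag, which corresponds to deactivating the assertions once testing is complete.

```python
def running_totals(values):
    """Return the cumulative sums of `values`."""
    totals = []
    acc = 0
    for v in values:
        acc += v
        totals.append(acc)
        # Embedded assertion: the latest total must equal the sum of all
        # values consumed so far. Disabled when run with `python -O`.
        assert acc == sum(values[:len(totals)]), "cumulative-sum invariant violated"
    return totals

print(running_totals([1, 2, 3]))  # [1, 3, 6]
```

The assertion here duplicates the computation it checks, which hints at why assertions are costly to derive and execute, and why placing them only where testing is weakest is attractive.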
To date, assertion localization has been, at best, performed in an ad hoc manner: developers have traditionally sprinkled code with assertions in places that happened to interest them. Our model is that assertions should be placed where the test cases that will be applied to the code are unlikely to detect faults. In our opinion, this makes assertion placement more systematic and enhances the ability of software testing to detect faults.
Predicting where faults may hide is an expensive process because there are so many considerations that must be factored in. We employ sensitivity analysis to gather this information. Sensitivity analysis is a dynamic approach for predicting where faults will hide from test cases [8]. Sensitivity analysis predicts the likelihood that a test scheme will: