Selective Re-Test Techniques

What is Selective Re-test?

Selective re-test is an approach to regression testing that uses a minimal, specific set of test cases drawn from the existing test suites. Since selective re-testing is a form of regression testing, let's have a brief overview of regression testing before proceeding further.

What is Regression Testing?

It's a testing technique that evaluates a software product after it has undergone bug removal or fixation. Because fixing bugs requires adding, deleting, or modifying code or features, the changes may affect existing functionality and make the product behave unexpectedly. It therefore becomes necessary to re-test the product to verify and validate the correctness of its existing functionality.

Now, coming back to selective re-testing. Since a software product needs to be examined after the bug-resolving corrective actions have been implemented, any of the following approaches may be used to examine the integrity of the existing features and functionality:

  • Re-testing the whole software product.
  • Testing only the modified functionality or code, without touching the unaltered portions of the software product.
  • Selective re-test, i.e. choosing a minimal number of test cases from the existing test suites to focus on the affected areas.
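A minimal Python sketch of how the first and third approaches differ (the test names, module names, and coverage mapping below are hypothetical; the second approach would instead require new, change-specific tests rather than a selection from the suite):

```python
# Hypothetical regression-testing scenario: each test is mapped to the
# modules it exercises, and a bug fix has touched the "auth" module.
coverage = {
    "test_login":    {"auth", "session"},
    "test_logout":   {"auth"},
    "test_checkout": {"cart", "payment"},
    "test_search":   {"catalog"},
}
changed = {"auth"}

# Retest-all: run the entire suite.
retest_all = set(coverage)

# Selective re-test: keep only the existing tests that exercise a
# changed module.
selective = {t for t, mods in coverage.items() if mods & changed}

print(sorted(retest_all))  # runs all four tests
print(sorted(selective))   # runs only the two auth-related tests
```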


Generally, there are three different ways to select test cases from the existing test suites for selectively re-testing a software product; each is described below.

Coverage Technique:

Based on the test coverage criteria, this technique identifies the modified components of the software program that are covered by the test suite, and accordingly selects the relevant test cases from the existing test suites.
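A minimal sketch of coverage-based selection, assuming per-test line-coverage data is available (the test names and line numbers below are hypothetical):

```python
# Coverage-based selection: keep every existing test that covers at
# least one modified line of the program.
test_coverage = {          # test name -> set of covered line numbers
    "t1": {10, 11, 12},
    "t2": {12, 13},
    "t3": {20, 21},
    "t4": {11, 20},
}
modified_lines = {11, 20}  # lines changed by the fix

selected = sorted(t for t, lines in test_coverage.items()
                  if lines & modified_lines)
print(selected)  # ['t1', 't3', 't4']
```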

Minimization Technique:

Similar to the coverage technique, but it is carried out using a minimal set of test cases that still covers the modified components.
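The minimization step can be sketched as a greedy set cover over the same kind of coverage data (test names and line numbers are hypothetical; greedy selection yields a small, though not always optimal, subset):

```python
# Minimization: greedily pick the fewest tests that still cover every
# modified line.
test_coverage = {          # test name -> set of covered line numbers
    "t1": {10, 11, 12},
    "t2": {12, 13},
    "t3": {20, 21},
    "t4": {11, 20},
}
modified_lines = {11, 20}

uncovered = set(modified_lines)
minimized = []
while uncovered:
    # Pick the test that covers the most still-uncovered modified lines.
    best = max(test_coverage, key=lambda t: len(test_coverage[t] & uncovered))
    if not test_coverage[best] & uncovered:
        break  # remaining modified lines are covered by no test
    minimized.append(best)
    uncovered -= test_coverage[best]

print(minimized)  # ['t4'] - a single test covers both modified lines
```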

Safe Technique:

It selects every test case that may cause the modified or updated version of the software to generate output different from that of the original version, i.e. every potentially modification-revealing test case.
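To illustrate what "modification-revealing" means, the brute-force sketch below runs hypothetical test inputs through an old and a new version of a function and flags the ones whose outputs differ; a safe technique must select at least these tests (a superset of them), without actually having to run the whole suite:

```python
# Hypothetical bug fix: the discount threshold changes from > 100 to >= 100.
def discount_old(total):
    return total * 0.9 if total > 100 else total

def discount_new(total):
    return total * 0.9 if total >= 100 else total

test_inputs = {"t_small": 50, "t_boundary": 100, "t_large": 150}

# A test is modification-revealing if the two versions disagree on it.
revealing = sorted(t for t, x in test_inputs.items()
                   if discount_old(x) != discount_new(x))
print(revealing)  # ['t_boundary'] - only the boundary case differs
```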

Apart from the above-mentioned techniques, test cases may also be selected using two further methods.

  • Data flow coverage technique: This approach selects the test cases that exercise the interactions or flows of data affected by the changed functionality or features.
  • Ad-hoc test technique: Test cases are selected more or less randomly, keeping in mind the time constraints associated with the software product.

Metrics to Measure and Compare the Techniques:

Below are the categories given by Rothermel [ROTH96a] to compare and assess the effectiveness of selective re-test techniques.

  • Inclusiveness: The extent to which a technique selects the test cases that cause the modified program to produce output different from that of the original program (the modification-revealing test cases).
  • Precision: Contrary to inclusiveness, this metric assesses a technique's ability to omit the test cases that cannot produce output different from that of the original program.
  • Efficiency: Measures the feasibility of a technique in terms of its computational cost.
  • Generality: Reflects a technique's ability to handle realistic and diverse language constructs, complex modifications, and realistic testing applications.
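Given a hypothetical selection, inclusiveness and precision can be computed as fractions over the modification-revealing and non-revealing test sets (all test names below are made up for illustration):

```python
# Inclusiveness and precision of a hypothetical test-case selection.
all_tests = {"t1", "t2", "t3", "t4", "t5"}
revealing = {"t1", "t4"}          # modification-revealing tests
selected  = {"t1", "t2", "t4"}    # tests chosen by the technique

# Inclusiveness: fraction of modification-revealing tests selected.
inclusiveness = len(selected & revealing) / len(revealing)

# Precision: fraction of non-revealing tests the technique omitted.
non_revealing = all_tests - revealing
precision = len(non_revealing - selected) / len(non_revealing)

print(inclusiveness)        # 1.0 - both revealing tests were selected
print(round(precision, 2))  # 0.67 - 2 of the 3 non-revealing tests omitted
```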
