Testing, like counting, is not finite: it must be applied for as long as there is a need. There are several dimensions to this. We can consider testing complete when the tester fulfils the requirements by running the complete set of test cases, or when the tester's intuition suggests that testing has covered almost the entire application. When the complexity is low, the testing process may not take a very long route.
Let's analyse the different approaches that can be applied to judge when to stop testing.
Bugs are hard to deal with. Even after a software application is deployed, end users may report a bug or defect that the testers overlooked. It is therefore good practice to carry out the testing process while maintaining a checklist. However, as they say, every problem has a solution: we can devise a strategy to answer the question of when to stop testing.
There are situations where time and budget pose a constraint: the project deadline is close, or a few high-priority bugs must be catered for first. There is no definitive point at which one can say that testing is complete. It is a situation-based decision: one may simply decide to stop, or reach a consensus when code coverage is good, the number of open defects and their impact are low, and few high-severity bugs remain. There is no defined rule for when one should stop testing; it takes the ingenuity of the tester to reach that consensus. It is a matter of experience and the application of knowledge.
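The consensus criteria above (good code coverage, few open high-severity bugs, time remaining in the schedule) can be sketched as a simple decision check. This is only an illustrative sketch: the function name, field names, and thresholds are assumptions for this example, not a standard, and real exit criteria vary from project to project.

```python
def should_stop_testing(coverage_pct, open_high_severity_bugs,
                        days_to_deadline,
                        min_coverage=90.0, max_high_severity=0):
    """Return True when common, project-specific exit criteria are met.

    Stop when the deadline leaves no room to continue, or when code
    coverage is good and no high-severity bugs remain open.
    Thresholds here are illustrative assumptions, not fixed rules.
    """
    if days_to_deadline <= 0:
        return True  # time/budget constraint forces a stop
    return (coverage_pct >= min_coverage
            and open_high_severity_bugs <= max_high_severity)


# 92% coverage, no high-severity bugs open, 5 days left -> stop
print(should_stop_testing(92.0, 0, 5))    # True
# 75% coverage, 3 high-severity bugs open -> keep testing
print(should_stop_testing(75.0, 3, 10))   # False
```

In practice, such a check would only support the team's judgment call, not replace it: the tester still weighs experience and context before declaring testing done.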