Abstract
In the literature, we find several criteria that consider different aspects of the program to guide testing, a fundamental activity for software quality assurance. They address two important questions: how to select test cases that reveal as many faults as possible, and how to evaluate a test set T and decide when to end testing. Fault-based criteria, such as mutation testing, use mutation operators to generate alternatives for the program P being tested. The goal is to derive test cases capable of producing different behaviors in P and its alternatives. However, this approach usually does not allow testing the interaction between faults, since each alternative differs from P by a single modification. This work explores the use of Genetic Programming (GP), a field of Evolutionary Computation, to derive alternatives for testing P, and introduces two GP-based procedures for the selection and evaluation of test data. The procedures are related to the two questions above, which most testing criteria and tools address. A tool, named GPTesT, is described, and results from an experiment using this tool are also presented. The results show the applicability of our approach and allow comparison with mutation testing.
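To make the fault-based idea concrete, the following is a minimal illustrative sketch (not taken from the paper; the program `double` and the mutation shown are hypothetical examples). An alternative differs from P by a single modification, and a test input is useful when it makes P and the alternative behave differently:

```python
def double(x):          # program P under test
    return x + x

def double_mutant(x):   # alternative: arithmetic operator `+` mutated to `*`
    return x * x

def kills(x):
    """A test input 'kills' the alternative when P and it disagree."""
    return double(x) != double_mutant(x)

print(kills(2))  # False: x = 2 cannot distinguish P from this alternative
print(kills(3))  # True:  x = 3 exposes the single-operator change
```

Because each alternative is produced by one such small change, interactions between multiple faults are not exercised, which motivates using GP to evolve richer alternatives.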
Cite this article
Emer, M.C.F., Vergilio, S.R. Selection and Evaluation of Test Data Based on Genetic Programming. Software Quality Journal 11, 167–186 (2003). https://doi.org/10.1023/A:1023772729494