More than 90% of all electronic components produced are used in embedded systems. Embedded systems usually have to fulfil functional as well as temporal requirements. Most embedded systems used in vehicles are subject to temporal requirements, both for reasons of operational comfort (short reaction times of the vehicle MMI to driver commands) and because of the requirements of the technical processes controlled within the vehicle. Examples are engine control systems, vehicle dynamics control systems such as ABS and ESP, and airbag control systems. Often these systems are also safety-relevant.
For embedded systems, testing is the most important quality assurance measure; it typically consumes 50% of the overall development effort and budget. Essential to good test quality is the systematic design of test cases, which defines the kind and scope of the test. Test case design is difficult to automate. For functional testing, automatic generation of test cases is usually not possible, since formal specifications are rarely applied in industrial practice. Structural testing is also difficult to automate because of the limits of symbolic execution. Furthermore, no specialized methods and tools exist for testing the temporal behavior. Therefore, in most cases test cases have to be defined manually.
A promising approach to automating test case design is the Evolutionary Test. It can be applied to testing the temporal behavior of systems as well as to structural testing. Evolutionary testing uses metaheuristic search techniques, such as evolutionary algorithms and simulated annealing, to generate test cases. The input domain of the system under test forms the search space in which test data fulfilling the test objectives under consideration are sought. The Evolutionary Test is generally applicable, since it adapts itself to the system under test.
For testing the temporal behavior of systems, evolutionary testing searches for input situations with the longest or shortest execution times. First, random test data are generated and the system is executed with them. The execution time measured for each test datum evaluates its suitability (fitness evaluation). Test data with long or short execution times (depending on whether the worst-case or best-case execution time is sought) are selected and combined in order to obtain test data with even longer or shorter execution times (recombination). Following natural processes, random changes are also carried out (mutation). The generated test data are added to the already existing data and a new test run is started. The test terminates when an error in the temporal behavior is detected or a specified termination criterion has been reached. If a violation of the system's predetermined temporal limits has been detected, the test was successful and the system needs to be corrected (Wegener et al. 1997).
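The search loop described above can be sketched as a simple evolutionary algorithm. This is a minimal illustration, not the actual test environment described in the paper: the function and parameter names (`evolve_wcet`, `sut`, the population size, the mutation rate) are assumptions made for the sketch, and fitness is measured naively with a wall-clock timer rather than the instrumented execution-time measurement an industrial tool would use.

```python
import random
import time

def execution_time(sut, datum):
    """Fitness evaluation: run the system under test once and time it."""
    start = time.perf_counter()
    sut(datum)
    return time.perf_counter() - start

def evolve_wcet(sut, lower, upper, dims, pop_size=12, generations=8,
                mutation_rate=0.2):
    """Search for an input with a long execution time (WCET estimate).

    For best-case execution time, the selection would prefer short
    times instead (sort ascending).
    """
    # 1. Initial random test data drawn from the SUT's input domain.
    pop = [[random.uniform(lower, upper) for _ in range(dims)]
           for _ in range(pop_size)]
    best = max(pop, key=lambda d: execution_time(sut, d))
    for _ in range(generations):
        # 2. Selection: test data with the longest execution times survive.
        scored = sorted(pop, key=lambda d: execution_time(sut, d),
                        reverse=True)
        parents = scored[:pop_size // 2]
        offspring = []
        while len(parents) + len(offspring) < pop_size:
            # 3. Recombination: one-point crossover of two parents.
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, dims) if dims > 1 else 0
            child = a[:cut] + b[cut:]
            # 4. Mutation: occasional random change of one component.
            if random.random() < mutation_rate:
                i = random.randrange(dims)
                child[i] = random.uniform(lower, upper)
            offspring.append(child)
        # 5. New test run with the combined test data.
        pop = parents + offspring
        candidate = max(pop, key=lambda d: execution_time(sut, d))
        if execution_time(sut, candidate) > execution_time(sut, best):
            best = candidate
    return best
```

In a real application the termination criterion would also include detecting a violation of the system's temporal limits, at which point the test has succeeded in revealing an error.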
To automate structural testing, each program structure represents a test objective for which a test datum is sought; e.g., to achieve full branch coverage, a test datum has to be found for each single branch. To guide the search toward program areas that have not yet been executed, the fitness functions are based on the branch predicates of the system under test (Jones et al. 1998). A test control manages all the test objectives, starts an optimization for each objective, calculates the fitness values of the generated test data on the basis of the executed program structures, and defines an efficient schedule for processing all the objectives. Our test environment supports, among others, statement testing, branch testing, condition testing, and path testing. In principle, the test terminates when all the test objectives have been considered. The coverage reached and the corresponding test data are presented to the tester.
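A fitness function based on a branch predicate can be illustrated with the standard branch-distance objective from the search-based testing literature: it is zero when the predicate holds (the target branch is taken) and otherwise grows with how far the operands are from satisfying it, giving the search a gradient. This sketch is an assumption for illustration, not the fitness functions of the authors' tool.

```python
def branch_distance(a, b, op):
    """Distance-to-satisfaction of the branch predicate `a op b`.

    Returns 0 when the predicate is true; otherwise a positive value
    that shrinks as the operands approach satisfying it, guiding the
    search toward the not-yet-executed branch.
    """
    K = 1  # constant penalty so strict comparisons are not 0 at equality
    if op == "==":
        return abs(a - b)
    if op == "!=":
        return 0 if a != b else K
    if op == "<":
        return 0 if a < b else a - b + K
    if op == "<=":
        return 0 if a <= b else a - b
    if op == ">":
        return 0 if a > b else b - a + K
    if op == ">=":
        return 0 if a >= b else b - a
    raise ValueError("unsupported operator: " + op)
```

For example, to cover the true branch of `if x == 42:`, a test datum `x` is assigned the fitness `branch_distance(x, 42, "==")`, which the evolutionary search minimizes; `x = 40` (distance 2) is fitter than `x = 10` (distance 32), and `x = 42` reaches the objective.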
Because evolutionary tests are fully automated, the system can be tested with a large number of different input situations. Usually, several thousand test data sets are generated and executed within only a few minutes. The prerequisites for applying evolutionary tests are minimal: only an interface specification of the system under test is needed in order to guarantee the generation of valid input values.
The application of evolutionary tests in several case studies has proved successful, and first industrial applications in the field of engine electronics have yielded very good results. The effectiveness and efficiency of the test process can be clearly improved by Evolutionary Tests, which thus contribute to quality improvement and to the reduction of development costs. The application scope of Evolutionary Tests goes beyond the work described in this paper; additional application fields are, for instance, safety and robustness tests.
Harmen Sthamer has a degree in Electronics and Communication from the Polytechnic of Hannover, Germany. He has an MSc in Electronic Production Engineering and a Ph.D. in Software Technology from the University of Glamorgan, GB. Currently he is working as a scientist in the Software-Technology Laboratory of DaimlerChrysler, Research and Technology, on systematic and evolutionary software testing methods for the verification of software-based systems. Harmen Sthamer is author or co-author of several papers and has presented at national and international conferences, e.g. the IEE Conferences on Genetic Algorithms. He is a member of the Seminal Network, UK.
Joachim Wegener has a degree in Computer Science from Technical University Berlin, Germany. He is manager of Adaptive Technologies in Software Engineering at DaimlerChrysler, Research and Technology. He was involved in the development of the classification-tree editor CTE and the test system TESSY. He is currently working on the design of software development processes for Mercedes-Benz as well as on systematic and evolutionary software testing methods for the verification of embedded systems. He is a member of SAE International, the Seminal Network and the German Computer Society Special Interest Group on Testing, Analysis and Verification.