Combinatorial testing (CT) is an effective test design technique, considered a testing best practice. CT provides automatic test plan generation, but requires a manual definition of the test space in the form of a combinatorial model, consisting of parameters, their respective values, and constraints on the value combinations. The theory of CT dates back to the mid-1980s; however, when we started applying it to IBM products over a decade ago, we encountered several real-world challenges that limited its applicability in practice. In this talk I will describe three such real-world challenges and our solutions for overcoming them, which enabled wide use of CT across IBM testing services.
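To make the modeling terminology concrete, here is a minimal Python sketch of a combinatorial model and a naive greedy pairwise test plan generator. The parameter names, values, and constraint are purely hypothetical, and the brute-force algorithm below stands in for the far more scalable BDD-based generation discussed in the talk.

```python
from itertools import combinations, product

# A tiny hypothetical combinatorial model: parameters, their values, and a
# constraint on value combinations. Names are illustrative, not from the talk.
model = {
    "OS":      ["Linux", "Windows", "zOS"],
    "Browser": ["Firefox", "Chrome"],
    "DB":      ["DB2", "Postgres"],
}

def valid(test):
    # Constraint: zOS deployments are only tested with DB2.
    return not (test["OS"] == "zOS" and test["DB"] != "DB2")

# The valid test space: every value combination satisfying the constraint.
space = [t for t in (dict(zip(model, vals)) for vals in product(*model.values()))
         if valid(t)]

def pairs_covered(tests):
    # All 2-way (pairwise) parameter-value interactions a set of tests covers.
    covered = set()
    for t in tests:
        for (p1, v1), (p2, v2) in combinations(sorted(t.items()), 2):
            covered.add((p1, v1, p2, v2))
    return covered

all_pairs = pairs_covered(space)

def greedy_pairwise(space):
    # Naive greedy generation: repeatedly pick the test covering the most
    # still-uncovered pairs, until every valid pair is covered.
    remaining, plan = set(all_pairs), []
    while remaining:
        best = max(space, key=lambda t: len(pairs_covered([t]) & remaining))
        plan.append(best)
        remaining -= pairs_covered([best])
    return plan
```

On this toy model the valid space has 10 tests and 15 valid pairs, and the greedy generator covers all of them with a 6-test plan, which is the intuition behind CT's test reduction.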
First, I will describe the scalability challenge we encountered when modeling large test spaces with dozens of parameters and constraints. Our solution is based on representing the combinatorial model as a Binary Decision Diagram (BDD), and on a CT test generation algorithm that operates directly over BDDs. Second, I will describe challenges arising from users' requirement to incorporate existing tests (in some cases, only existing tests) into the CT test generation process. Our solution is to provide interaction-based test plan minimization and enhancement.
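The minimization side can be illustrated with a small sketch, assuming hypothetical existing tests; the interaction-based minimization in IBM FOCUS is considerably more involved. The idea: drop any existing test whose pairwise interactions are already covered by the rest of the plan.

```python
from itertools import combinations

# Hypothetical manually written tests supplied by users (illustrative only).
existing = [
    {"OS": "Linux",   "Browser": "Firefox", "DB": "DB2"},
    {"OS": "Linux",   "Browser": "Chrome",  "DB": "Postgres"},
    {"OS": "Windows", "Browser": "Firefox", "DB": "Postgres"},
    {"OS": "Linux",   "Browser": "Firefox", "DB": "Postgres"},
]

def pairs_covered(tests):
    # All 2-way parameter-value interactions a set of tests covers.
    covered = set()
    for t in tests:
        for (p1, v1), (p2, v2) in combinations(sorted(t.items()), 2):
            covered.add((p1, v1, p2, v2))
    return covered

def minimize(tests):
    # Interaction-based minimization sketch: remove each test whose pairs
    # are all covered by the remaining tests, preserving pairwise coverage.
    plan = list(tests)
    for t in list(plan):
        rest = [u for u in plan if u is not t]
        if pairs_covered([t]) <= pairs_covered(rest):
            plan = rest
    return plan
```

Here the fourth test is redundant pairwise (each of its pairs appears in another test), so minimization shrinks the plan from 4 tests to 3 while covering exactly the same interactions.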
Finally, I will describe the evolution challenge for combinatorial models and test plans. As the system under test evolves, e.g., due to iterative development processes and bug fixing, so does its test space. Thus, in the context of CT, evolution translates into frequent manual updates to the model definition, and corresponding adaptations of the test plan. Our solution is twofold. We suggest a first syntactic and semantic differencing technique for combinatorial models. We further suggest a first co-evolution approach for combinatorial models and test plans, considering the tradeoff between maximizing fine-grained reuse and minimizing total test plan size.
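A purely syntactic flavor of such model differencing can be sketched as follows, over illustrative model versions; the semantic differencing presented in the talk additionally compares the test spaces the models denote, which the syntactic view below cannot capture.

```python
# Two hypothetical versions of a combinatorial model, as parameter-to-values
# maps (constraints omitted for brevity in this sketch).
old = {"OS": ["Linux", "Windows"],        "DB": ["DB2"]}
new = {"OS": ["Linux", "Windows", "zOS"], "Browser": ["Firefox"]}

def model_diff(old, new):
    # Syntactic diff: parameters added/removed between versions, plus
    # per-parameter value additions and removals for shared parameters.
    added_params   = sorted(set(new) - set(old))
    removed_params = sorted(set(old) - set(new))
    value_changes  = {}
    for p in set(old) & set(new):
        added   = sorted(set(new[p]) - set(old[p]))
        removed = sorted(set(old[p]) - set(new[p]))
        if added or removed:
            value_changes[p] = (added, removed)
    return added_params, removed_params, value_changes
```

For these versions the diff reports that "Browser" was added, "DB" was removed, and "OS" gained the value "zOS"; a co-evolution step would then decide which existing tests remain reusable under the new model.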
We have implemented all of the ideas described above in the IBM Functional Coverage Unified Solution (IBM FOCUS), an industrial-strength CT tool. IBM FOCUS is at the core of the IBM IGNITE Quality and Test platform, IBM's solution for testing services of client applications.
The talk will cover works published in ISSTA'11, ICSE'13, ICSE'17, and ESEC/FSE'18.
Rachel Tzoref-Brill is a Research Staff Member at IBM Research in Haifa. She received her Ph.D. in computer science from Tel Aviv University, Israel, and her M.Sc. and B.Sc. degrees from the Technion, Israel Institute of Technology. She won an ACM SIGSOFT Outstanding Doctoral Dissertation Award. Her research interests include cloud testing, software test generation, combinatorial testing, applications of AI to software engineering, and empirical software engineering. She received an IBM Corporate Award for combinatorial testing innovations. She authored a book chapter on advances in combinatorial testing in the book series 'Advances in Computers' (2019), and served on the program committees of ESEC/FSE 2020 and ICSE 2021.