A parallel evolutionary algorithm for prioritized pairwise testing of software product lines
GECCO'14 presentation. Published in: Software.
  1. A Parallel Evolutionary Algorithm for Prioritized Pairwise Testing of Software Product Lines
     Roberto E. Lopez-Herrejon*, Javier Ferrer**, Francisco Chicano**, Evelyn Nicole Haslinger*, Alexander Egyed*, Enrique Alba**
     * Johannes Kepler University Linz, Austria
     ** University of Malaga, Spain
  2. Software Product Lines
      Software Product Lines (SPLs) are families of software products
       • Each product has a different feature combination
       • Multiple economic and technological advantages: increased software reuse, faster time to market, better customization
      Fact: SPLs typically have a large number of products
  3. Software Product Lines Testing
      Challenge: how to test a Software Product Line effectively?
      Important factors to consider: avoiding repeated tests, staying within economic and technical constraints
      State of the art: extensive work on SPL testing, but the use of SBSE techniques remains largely unexplored
  4. Our Contributions
      Formalization of the prioritization testing scheme proposed by Johansen et al.
      Implementation with the Parallel Prioritized product line Genetic Solver (PPGS)
      Comprehensive evaluation and comparison against a greedy approach
  5. Prioritization Motivation
      Key ideas
       • Each feature combination represents an important product of the SPL
       • Each relevant product is given a positive integer value reflecting its priority: market importance, implementation costs, ...
  6. Feature Models Example
      Feature models are the de facto standard to model all valid feature combinations of a product line
      [Figure: example feature model showing root, mandatory, optional, inclusive-or, and exclusive-or relations]
  7. Feature List and Feature Set
      Example Feature List (FL): Aircraft, Wing, Engine, Materials, High, Shoulder, Low, Piston, Jet, Metal, Wood, Plastic, Cloth
  8. Feature Set Example
      Selected = {Aircraft, Wing, High, Engine, Piston, Materials, Cloth}
      Unselected = {Shoulder, Low, Jet, Metal, Wood, Plastic}
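The selected/unselected split on slide 8 can be modeled as a partition of the feature list. The sketch below is illustrative only: the `is_feature_set` helper is an assumed name, not part of the PPGS code, and it checks only the partition, not validity against the feature model's constraints.

```python
# Illustrative model of slide 8: a feature set partitions the feature
# list into selected and unselected features. The helper name
# is_feature_set is an assumption, not from the PPGS implementation.
FEATURE_LIST = {"Aircraft", "Wing", "Engine", "Materials", "High", "Shoulder",
                "Low", "Piston", "Jet", "Metal", "Wood", "Plastic", "Cloth"}

def is_feature_set(selected, unselected, feature_list=FEATURE_LIST):
    """A feature set is a valid partition when the two parts are
    disjoint and together cover the whole feature list."""
    return selected.isdisjoint(unselected) and (selected | unselected) == feature_list

selected = {"Aircraft", "Wing", "High", "Engine", "Piston", "Materials", "Cloth"}
unselected = {"Shoulder", "Low", "Jet", "Metal", "Wood", "Plastic"}
print(is_feature_set(selected, unselected))  # True
```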
  9. Terminology (3)
      Examples of valid feature sets over the feature list Aircraft, Wing, Engine, Materials, High, Shoulder, Low, Piston, Jet, Metal, Wood, Plastic, Cloth
      The example model has 315 valid feature sets
  10. Prioritized Product
      A prioritized product pairs a product with its priority weight, e.g. pp1 = [p1, 17]
  11. Pairwise Configuration
      Examples: pc1 = [{Plastic}, {Cloth}], pc2 = [{High, Wood}, {}]
      The example model has 240 pairwise configurations
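Pairwise-configuration enumeration can be sketched as follows. Every unordered pair of features yields four candidate configurations [selected, unselected]; a real tool would keep only those valid under the feature model, which is presumably how the example arrives at 240 rather than the raw 312 candidates for 13 features. The helper name is illustrative.

```python
from itertools import combinations

# Sketch of candidate pairwise-configuration enumeration (the function
# name candidate_pairwise_configs is an assumption, not from the paper).
def candidate_pairwise_configs(features):
    for a, b in combinations(features, 2):
        yield ({a, b}, set())   # both features selected
        yield ({a}, {b})        # a selected, b unselected
        yield ({b}, {a})        # b selected, a unselected
        yield (set(), {a, b})   # both unselected

# 13 features -> C(13, 2) = 78 pairs -> 312 candidates before filtering
# against the feature model.
```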
  12. Weighted Pairwise Configuration
      The weight of a pairwise configuration is the sum of the weights of the prioritized products that cover it
      Example: for pc1 = [{Plastic}, {Cloth}], wpc1.w = pp0.w + pp2.w = 17 + 15 = 32
      [Figure: example products with weights 17, 17, 15, 15, 13, 13, 6, 6]
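The weight computation on slide 12 can be sketched as below. Only the weights 17 and 15 come from the slide; the products' feature contents are invented for illustration, and `covers`/`wpc_weight` are hypothetical helper names.

```python
# Sketch of the weighted-pairwise-configuration computation; names and
# the example products' feature contents are illustrative assumptions.
def covers(product_features, pc):
    """A product covers pc = (sel, unsel) if it selects every feature
    in sel and none of the features in unsel."""
    sel, unsel = pc
    return sel <= product_features and not (unsel & product_features)

def wpc_weight(prioritized_products, pc):
    """prioritized_products: list of (selected-feature-set, weight) pairs."""
    return sum(w for feats, w in prioritized_products if covers(feats, pc))

# Hypothetical products carrying the weights 17 and 15 from the slide.
pp0 = ({"Aircraft", "Wing", "High", "Engine", "Piston", "Materials", "Plastic"}, 17)
pp2 = ({"Aircraft", "Wing", "Low", "Engine", "Jet", "Materials", "Plastic"}, 15)
pc1 = ({"Plastic"}, {"Cloth"})
print(wpc_weight([pp0, pp2], pc1))  # 32
```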
  13. Prioritized Pairwise Covering Array
      Example of a ppCA: p1, p2, p5, plus new products (figure)
      Challenge: find a ppCA with the minimum number of feature sets
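A minimal sketch of the coverage a ppCA must reach, assuming the formulation implied by the slides: a suite's weighted coverage is the summed weight of the weighted pairwise configurations covered by at least one product, relative to the total weight, and a ppCA is a suite reaching 100%. Function and parameter names are illustrative.

```python
# Sketch (assumed formulation, not the paper's exact definition) of the
# weighted coverage achieved by a suite of products.
def weighted_coverage(suite, weighted_pcs):
    """suite: iterable of selected-feature sets;
    weighted_pcs: list of ((sel, unsel), weight) pairs."""
    def covered(pc):
        sel, unsel = pc
        return any(sel <= p and not (unsel & p) for p in suite)
    total = sum(w for _, w in weighted_pcs)
    return sum(w for pc, w in weighted_pcs if covered(pc)) / total
```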
  14. PPGS Algorithm
      [Figure: overview of the PPGS algorithm]
  15. Parameter Setting
      Parameter               Setting
      Crossover type          one-point
      Crossover probability   0.8
      Selection strategy      binary tournament
      Population size         10
      Mutation probability    0.1
      Termination condition   1,000 evaluations
      Implemented in the jMetal framework
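The operators named in the table can be modeled minimally as below. This is a sketch over bit-string individuals, not the actual PPGS/jMetal implementation; function names are illustrative.

```python
import random

# Illustrative sketches of one-point crossover and binary tournament
# selection, the operators listed on slide 15.
def one_point_crossover(a, b, p=0.8, rng=random):
    """With probability p, swap the tails of two parents at a random cut."""
    if rng.random() < p:
        cut = rng.randrange(1, len(a))
        return a[:cut] + b[cut:], b[:cut] + a[cut:]
    return a[:], b[:]

def binary_tournament(population, fitness, rng=random):
    """Pick two random individuals and keep the fitter one."""
    x, y = rng.sample(population, 2)
    return x if fitness(x) >= fitness(y) else y
```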
  16. Evaluation
      Compared against Prioritized-ICPL (pICPL), proposed by Johansen et al. (2012), which uses data parallelization
      Three different weight priority assignment methods
      Different percentages of selected products, ranging from 5% up to 50%
  17. Weight Priority Assignment Methods
      1. Measured values: 16 real SPL examples with code and feature model available; non-functional properties measured (e.g. footprint)
      2. Ranked-based values: based on how dissimilar two products are; the more dissimilar, the higher the chance of covering more pairs
      3. Random values: drawn from a [Min..Max] range
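Two of these assignment methods can be sketched as follows. The slide does not name the dissimilarity metric, so Jaccard distance over selected-feature sets is an assumption, and both function names are illustrative.

```python
import random

# Sketches of the random and ranked-based weight assignment methods on
# slide 17; the Jaccard-distance dissimilarity is an assumed metric.
def random_weights(products, lo=1, hi=100, rng=random):
    """Random values drawn from a [lo..hi] range, one per product."""
    return {p: rng.randint(lo, hi) for p in products}

def dissimilarity(sel_a, sel_b):
    """Jaccard distance between two products' selected-feature sets;
    more dissimilar products have a higher chance of covering more pairs."""
    union = sel_a | sel_b
    return 1 - len(sel_a & sel_b) / len(union) if union else 0.0
```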
  18. Experimental Corpus
      Problem instances G1 = 160 fm × 2 priority assig. × 3 percentages = 960
      Problem instances G2 = 59 fm × 2 priority assig. × 3 percentages = 354
      Problem instances G3 = 16 fm × 1 priority assig. = 16
      Total independent runs = 1,330 × 2 algorithms × 30 indep. runs = 79,800

                                        G1         G2        G3         Summary
      Number of Feature Models          160        59        16         235
      Number of Products                16-1K      1K-80K    32-≈3E24   16-≈3E24
      Number of Features                10-56      14-67     6-101      6-101
      Weight Priority Assignment        RK,RD      RK,RD     M          (RK Ranked-Based, RD Random, M Measured)
      Prioritized Products Percentage   20,30,50   5,10,20   ≈0.0-100
      Problem Instances                 960        354       16         1330
  19. Wilcoxon Test (1)
      Confidence level 95%
      We show the mean and standard deviation of the number of products required to cover from 50% up to 100% of the total weighted coverage
      We highlight where the difference is statistically significant
      Group G1 (fewer than 1,000 products): PPGS yields smaller suites; pICPL runs faster
  20. Wilcoxon Test (2)
      Group G2 (from 1,000 to 80,000 products): PPGS yields test suites of smaller sizes and runs faster than pICPL
  21. Wilcoxon Test (3)
      Group G3 (Measured Values, 32 to ≈3E24 products): PPGS yields smaller suites; pICPL runs faster
  22. Â12 Measure
      Â12 is an effect size measure: e.g. a value of 0.3 means that algorithm A obtains lower values than algorithm B for a measure M 70% of the time
      Lower values mean PPGS obtains smaller test suites
      PPGS obtains smaller test suites most of the time and shows the best performance; pICPL produces smaller test suites only in a few cases
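The Â12 statistic follows the standard Vargha-Delaney definition, which can be computed directly; the pairwise-counting form below is a common equivalent of the rank-based formula.

```python
# Vargha-Delaney A12 effect size: the probability that a value drawn
# from sample a is larger than one drawn from sample b, counting ties
# as one half. A12 = 0.3 means a's values are lower 70% of the time.
def a12(a, b):
    wins = sum((x > y) + 0.5 * (x == y) for x in a for y in b)
    return wins / (len(a) * len(b))

# If a holds PPGS test-suite sizes and b holds pICPL sizes, a12 < 0.5
# indicates PPGS tends to produce smaller suites.
print(a12([1, 2, 3], [4, 5, 6]))  # 0.0
```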
  23. Threats to Validity
      Parameter setting: we use standard values for our parameters
      Experimental corpus
       • Selection of feature models: counteracted by selecting a large number of feature models with different characteristics and different provenance
       • Selection of priority values: counteracted by using 3 different approaches based on ranked values, random values, and values measured from actual non-functional properties
  24. Related Work
      Extensive work on software testing prioritization and SBSE, but not in SPLs
      Some SPL and SBSE testing examples
       • Garvin et al. – simulated annealing
       • Ensan et al. – genetic algorithm with a fitness function based on cyclomatic complexity
       • Henard et al. – genetic algorithm with a fitness function based on a dissimilarity metric
  25. Conclusions and Future Work
      Our contributions
       • Formalization of a prioritization scheme and its implementation in PPGS
       • Evaluation and comparison against the state-of-the-art greedy algorithm pICPL
      Planned future work
       • Evaluate alternative representations of individuals to obtain better performance
       • Analyze adaptable stopping conditions for PPGS based on characteristics of the feature models
  26. Acknowledgements
      Spanish Ministry of Economy and Competitiveness, FEDER
