Many teams all over the world suffer from the “Testing Swiss Cheese Syndrome”, so I believe it is time to share the information we have collected. By the end of this presentation, you should be able to make a first diagnosis of your own testing activities and reflect on suitable medication.
To introduce the syndrome, think about all the testing activities going on during application development: usually some subset of unit testing, integration testing, functional testing, automated testing, manual testing, exploratory testing, and so on.
It is very unlikely that a single testing activity covers the whole application. That is where the similarity with a slice of Swiss cheese comes in: you can imagine holes in your test coverage, areas that no single testing activity has covered.
The Swiss Cheese Syndrome happens when you don’t have an aggregated view of all these testing activities. In that case, the testing holes join forces to create tunnels where bugs and regressions can stay hidden until production.
Swiss Testing Day 2013 - How to avoid the testing swiss cheese syndrome
1. How to avoid the
Testing Swiss Cheese Syndrome
Marc Rambert
@MarcRambert
2. Our journey
New testing challenges
Testing Swiss Cheese Syndrome
Medication: Test/Code Coverage
How to cure this syndrome
Conclusion & Q & A
Marc Rambert - Kalistick - 11/02/2008 2
3. New Testing Challenges?
- Time to market: from 1 release per year to 1 per quarter, month, or day
- Adapt testing to Speed & Quality
- Being Agile
- From one QA team to a whole team doing testing
- Adapt to time & cost constraints
- While managing risks
4. From 1 testing team
to a whole team doing testing
Mike Cohn: Test Pyramid
Founding member of the Scrum alliance
5. The “Testing Swiss Cheese Syndrome”!
[Slide diagram: overlapping testing activities (Exploratory, User Acceptance, Manual, Automated, Integration, Functional, Unit, Regression) covering application version X and its modifications. The gaps between them are testing ‘holes’: high risks. A bug is caught only where an activity covers it.]
“Testing is an investment and the investment we make at one layer should be influenced by how well testing has been done at the other layers” (Mike Cohn)
6. Where are the testing gaps?
“Dev are from Mars & Test are from Venus”
[Slide diagram: for application release X, mapping the requirements to what was done by dev and what was tested. Risks? OK? Effectiveness?]
8% to 16% of all bugs are created by fixes
Capers Jones, “Software Quality in 2011: a survey of the state of the art”
7. Lots of questions,
little information
[Slide diagram: for each application ‘build’ (v x.1.11.1.2), questions on both sides: Modifications? Impacts? Risks? Test focus? versus Test coverage? Test relevance? Risk coverage? How much testing? Go live?]
8. Medication:
Capture & Aggregate Test Coverage
[Slide diagram: coverage from automated functional tests, integration tests, and unit tests, layered over the application and its changes; the remaining holes = risks]
=> Which manual tests are relevant?
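The aggregation step above can be sketched with plain set operations. This is a minimal illustration with hypothetical file names and line numbers: each testing activity reports the set of (file, line) pairs it exercised, and subtracting their union from the application’s executable lines exposes the holes that no activity ever touched.

```python
# Hypothetical per-activity coverage: sets of (file, line) pairs exercised.
unit_cov        = {("billing.py", 10), ("billing.py", 11), ("auth.py", 5)}
integration_cov = {("billing.py", 11), ("api.py", 3)}
functional_cov  = {("api.py", 3), ("auth.py", 5)}

# Aggregate all testing activities into one view.
aggregated = unit_cov | integration_cov | functional_cov

# All executable lines of the application (in practice, from static analysis).
app_lines = {("billing.py", 10), ("billing.py", 11), ("billing.py", 12),
             ("auth.py", 5), ("api.py", 3), ("api.py", 4)}

# Holes = code never exercised by any testing activity.
holes = app_lines - aggregated
print(sorted(holes))  # → [('api.py', 4), ('billing.py', 12)]
```

Any line left in `holes` is exactly the kind of untested code a relevant manual test should target first.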
9. What is Test/Code Coverage,
or, more effective: the Test Footprint
[Slide diagram: the test team runs manual and automated tests (Test 1, Test 2, Test 3) against application build/version X; each test leaves its own footprint in the code]
Detect Code Changes
Tests/Code Coverage
Testing Holes
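A footprint, as the slide describes it, maps each test to the exact code lines it stimulated. A quick sketch (hypothetical test names and lines) shows why this is more useful than a single coverage percentage: inverting the mapping answers the complementary question, which tests exercise a given piece of code?

```python
from collections import defaultdict

# Hypothetical footprints: test name -> set of (file, line) pairs it stimulated.
footprints = {
    "test_1": {("cart.py", 8), ("cart.py", 9)},
    "test_2": {("cart.py", 9), ("checkout.py", 4)},
    "test_3": {("checkout.py", 4), ("checkout.py", 5)},
}

# Invert the mapping: for each code line, which tests use it?
tests_by_line = defaultdict(set)
for test, lines in footprints.items():
    for line in lines:
        tests_by_line[line].add(test)

print(sorted(tests_by_line[("cart.py", 9)]))  # → ['test_1', 'test_2']
```

With this inverted index, detecting code changes immediately tells you which tests are affected, and lines with an empty test set are the testing holes.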
10. Improve your Testing Strategy
[Slide diagram legend: functional tests executed (2), functional tests not yet executed (3), automated tests executed (1), shown over the code modifications and the overall application code base]
11. A quick way to get it
12. How to cure this syndrome
Identify relevant tests
to fill the gaps
[Slide diagram: aggregated test coverage from functional, integration, and unit tests, mapped onto the application and its software changes, reveals the testing gaps]
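Identifying the relevant tests to fill the gaps can be sketched as a scoring step (hypothetical test names and data): intersect each test’s footprint with the detected code changes and rank the tests by how many changed lines they touch, so the most impacted tests run first.

```python
# Hypothetical footprints: test name -> set of (file, line) pairs it covers.
footprints = {
    "test_login": {("auth.py", 5), ("auth.py", 6)},
    "test_pay":   {("billing.py", 10), ("billing.py", 11)},
    "test_api":   {("api.py", 3)},
}

# Code changes detected in the new build.
changed = {("billing.py", 10), ("billing.py", 11), ("auth.py", 6)}

# Score each test by how many changed lines its footprint touches.
scores = {t: len(fp & changed) for t, fp in footprints.items()}

# Keep only impacted tests, highest score first.
ranked = sorted((t for t, s in scores.items() if s), key=scores.get, reverse=True)
print(ranked)  # → ['test_pay', 'test_login']
```

Tests with a score of zero (here `test_api`) are irrelevant to this change and can be skipped, which is where the time savings come from.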
13. Impact on Testing Effectiveness
– Team 1
• After executing planned campaigns
• Selected 10 tests with the highest impact (Test Scoring)
⇒ 3 regressions were found.
Best ratio ever: Bugs/Tests executed
– Team 2
• Adopted Test Coverage & Test Scoring to improve its testing
activities
⇒ KPI: number of bugs found within 3 months after deployment: −50%
14. Conclusion + Q&A
– Different kinds of testers with different tools
– Adopt an aggregated view of all testing activities spread
along the application life-cycle
– Nurture collaboration between Dev & Test
Code Coverage is not only for developers
Test Footprint is more than coverage
More questions: @MarcRambert
Editor's Notes
Kalistick provides a unique answer: whatever test is executed, manual or automated, Kalistick captures its footprint. The footprint is the application code that is stimulated when that test runs. So for each test we have a precise identification of every line of code it uses, and for each part of the code we can see which tests use it. As soon as a test is executed, its footprint is captured and added to a knowledge base dedicated to the application. The second unique part of our technology is a scanner that detects, for each version, the modifications made by the development team. Our platform then analyses these changes and identifies all the tests concerned, in order to find the relevant tests to eliminate the regressions and defects linked to these evolutions or fixes. Our system shows what is already stimulated by certain tests, for example the automated tests or the tests done by the development teams. All this information is correlated by our analysis engine to produce a score that lets our customers save time by avoiding useless tests, and prioritise execution so that bugs are detected as early as possible, starting with the most impacted tests. Finally, at any time we report the “testing holes”, that is, the code that has never been tested and for which our system knows no test.