2. CONFIDENTIAL
SPEAKER
DZMITRY HUMIANIUK
EPAM Systems, Project Manager
10 years with EPAM Systems; a developer in the past.
Leads the development of initiatives within the Testing Competence Center.
Report Portal – Product Owner.
18. CONFIDENTIAL
HOW OFTEN DO YOU RUN YOUR AUTOMATION?
WHAT IS THE PASS RATE OF YOUR TESTS?
DOES THE TEAM REVIEW ALL THE RESULTS?
HOW MANY TESTS FAILED BECAUSE OF PRODUCT BUGS?
HOW MUCH DOES YOUR AUTOMATION BLOCK DELIVERY?
21. 21CONFIDENTIAL
ИДЕИ В ОСНОВЕ РЕПОРТ ПОРТАЛАLIVE VISIBILITY В
СОСТОЯНИЕ
АВТОМАТИЗАЦИИ
СОКРАЩЕНИЕ
ЗАТРАТ НА РАЗБОР
РЕЗУЛЬТАТОВ
ПОДДЕРЖКА
ПОПУЛЯРНЫХ
ТУЛОВ В
АВТОМАТИЗАЦИИ
КРАСИВЫЕ
ГРАФИКИ ПРЯМО
ИЗ КОРОБКИ
МОЖЕТ РАБОТАТЬ С
СУЩЕСТВУЮЩЕЙ
АВТОМАТИЗАЦИЕЙ
СОЛЮШЕНЫ
ПОМОГАЮТ ДЕЛАТЬ
ДЕНЬГИ
31. CONFIDENTIAL
“FIRE DETECTOR” METRICS

Metric: Execution Frequency
Why it is important to monitor: To understand and evaluate how often test cases are executed.
What decision can be made: Goal: increase the execution ratio per day/build. Configure automated execution of TA after commits (smoke, acceptance) and regressions right after nightly builds.
Trigger: 1 execution per week or less (???)
Target: 1 per commit
Interpretation: Every code change should be verified not to affect functionality or the rest of the code base.
Periodicity: Each day
Measures: 1. Test executions count
Unit of measure: Number

Metric: % of results analyzed
Why it is important to monitor: To understand and evaluate the percentage of reviewed failure reports. A failure report that is never reviewed provides no value.
Trigger: <60%
Target: 100%
Interpretation: Failed test cases should be reviewed.
Periodicity: Each day
Measures: 1. No. of not analyzed failed test cases (TI); 2. No. of failures due to product bugs (PB); 3. No. of failures due to automation issues (AB); 4. No. of failures due to system issues (SI)
Formula: 100% − TI / (PB + AB + SI + TI)
Unit of measure: %
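The “% of results analyzed” formula above is straightforward to compute from the four defect counts. A minimal sketch (not part of the deck; the function name and the zero-failure convention are assumptions):

```python
def percent_analyzed(ti: int, pb: int, ab: int, si: int) -> float:
    """100% - TI / (PB + AB + SI + TI), expressed as a percentage.

    TI = not analyzed, PB = product bugs, AB = automation issues,
    SI = system issues (abbreviations from the slide).
    """
    total_failed = pb + ab + si + ti
    if total_failed == 0:
        return 100.0  # nothing failed, so nothing is left to analyze
    return 100.0 * (1 - ti / total_failed)

# Example: 5 failures still "to investigate" out of 50 total failures
print(percent_analyzed(ti=5, pb=30, ab=10, si=5))  # 90.0
```

The <60% trigger from the table would then simply be `percent_analyzed(...) < 60.0`.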
32. CONFIDENTIAL
“FIRE DETECTOR” METRICS

Metric: % of product issues (stability of automation)
Why it is important to monitor: To understand and evaluate the percentage of failed tests that correspond to product issues.
Trigger: Trend not being positive
Target: >90%
Interpretation: The trend of failures due to test automation instability (automation issues, system issues, not analyzed failed test cases) should decrease.
Measures: 1. No. of not analyzed failed test cases (TI); 2. No. of failures due to product bugs (PB); 3. No. of failures due to automation issues (AB); 4. No. of failures due to system issues (SI)
Formula: PB / (PB + AB + SI + TI)
Unit of measure: %

Metric: % of automation issues
Why it is important to monitor: To understand and evaluate the percentage of failed tests that correspond to test automation issues (code issues, invalid test cases).
Trigger: Trend not being positive
Target: <5%
Interpretation: The trend of failures due to test automation instability should decrease.
Measures: 1. No. of not analyzed failed test cases (TI); 2. No. of failures due to product bugs (PB); 3. No. of failures due to automation issues (AB); 4. No. of failures due to system issues (SI)
Formula: AB / (PB + AB + SI + TI)
Unit of measure: %
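The product-issue and automation-issue percentages share one denominator, so both can be computed in a single pass. A hedged sketch (function and key names are hypothetical; the thresholds in the comments come from the table):

```python
def failure_ratios(ti: int, pb: int, ab: int, si: int) -> dict:
    """Return the share of failures attributed to each root cause, in %."""
    total = pb + ab + si + ti
    if total == 0:
        return {"product": 0.0, "automation": 0.0}
    return {
        "product": 100.0 * pb / total,     # target from the table: > 90%
        "automation": 100.0 * ab / total,  # target from the table: < 5%
    }

r = failure_ratios(ti=2, pb=90, ab=4, si=4)
print(r["product"], r["automation"])  # 90.0 4.0
```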
33. CONFIDENTIAL
“FIRE DETECTOR” METRICS

Metric: % of system issues
Why it is important to monitor: To understand and evaluate the percentage of failed tests that correspond to system issues (no connectivity, lack of resources, etc.).
Trigger: Trend not being positive
Target: <5%
Interpretation: The trend of failures due to system-issue instability should decrease.
Measures: 1. No. of not analyzed failed test cases (TI); 2. No. of failures due to product bugs (PB); 3. No. of failures due to automation issues (AB); 4. No. of failures due to system issues (SI)
Formula: SI / (PB + AB + SI + TI)
Unit of measure: %

Metric: Execution Time
Why it is important to monitor: To understand and evaluate the time of each execution cycle.
Triggers: Regression cycles > 15h; acceptance cycles > 1h
Targets: Regression < 8h; acceptance < 1h
Interpretation: Execution time should not go over the described limits.
Measures: 1. Execution duration of regression, hours (Reg); 2. Execution duration of acceptance, hours (Acp)
Formula: Reg < 15h; Acp < 1h
Unit of measure: Hours
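The Execution Time triggers above amount to two simple threshold checks. A sketch of such an alerting rule (the threshold constants come straight from the slide; everything else, including the function name, is an assumption):

```python
# Trigger thresholds from the "Execution Time" row of the table.
REG_TRIGGER_H = 15.0  # regression cycles longer than this fire an alert
ACP_TRIGGER_H = 1.0   # acceptance cycles longer than this fire an alert

def execution_time_alerts(reg_hours: float, acp_hours: float) -> list:
    """Return a human-readable alert for each exceeded trigger."""
    alerts = []
    if reg_hours > REG_TRIGGER_H:
        alerts.append(f"regression cycle {reg_hours}h exceeds {REG_TRIGGER_H}h")
    if acp_hours > ACP_TRIGGER_H:
        alerts.append(f"acceptance cycle {acp_hours}h exceeds {ACP_TRIGGER_H}h")
    return alerts

print(execution_time_alerts(16.5, 0.5))  # one regression alert
```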
34. CONFIDENTIAL
METRICS SPECIFIC TO QA AUTOMATION

Percent Automatable (PA): # of automatable test cases (ATC) / # of total test cases (TC). Category: Quality
Regression Frequency: How often does automated regression run? Category: Quality
Automation Progress (AP): # of test cases actually automated (AA) / # of automatable test cases (ATC). Category: Progress
Percent of Automated Testing Coverage (PTC for Automation): Automation coverage (AC) / total coverage (C) (KLOC, FP, etc.). Category: Coverage
Automated Testing Window: 1) How much time does it take to run QA automation? 2) How much system/lab time is required to run the selected test(s)? Category: Cost / time
Testing Results Analysis Window: How much time does it take to do data analysis? Category: Cost / time
Auto-Test Development Time: # of automated tests / time to develop. Category: Cost / time
Auto-Test Support Time: # of automated tests / time to support. Category: Cost / time
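The two ratio metrics at the top of the list, Percent Automatable (PA) and Automation Progress (AP), can be sketched directly from their definitions (variable names follow the slide's abbreviations; the zero-denominator handling is an assumption):

```python
def percent_automatable(atc: int, tc: int) -> float:
    """PA = ATC / TC: share of all test cases that can be automated, in %."""
    return 100.0 * atc / tc if tc else 0.0

def automation_progress(aa: int, atc: int) -> float:
    """AP = AA / ATC: share of automatable cases actually automated, in %."""
    return 100.0 * aa / atc if atc else 0.0

print(percent_automatable(atc=400, tc=500))  # 80.0
print(automation_progress(aa=300, atc=400))  # 75.0
```

Note that AP is measured against the automatable subset, not the full suite, so a team can reach AP = 100% while PA stays well below it.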