Independent Formal Verification of Safety-Critical Systems’ User Interfaces: a space system case study
1.
Independent Formal Verification of Safety-Critical Systems’ User Interfaces:
a space system case study
NASA IVV Workshop
September, 2013
Manuel Sousa1, José Creissac Campos1
Miriam Alves2 and Michael D. Harrison3
1 Dept. Informática/Universidade do Minho & HASLab/INESC TEC, Portugal
2 Institute of Aeronautics and Space - IAE, São José dos Campos, Brazil
3 Queen Mary University of London & Newcastle University, UK
* This work is funded by the ERDF - European Regional Development Fund through the ON.2 – O Novo Norte Operational Programme, within the Best Case project (ref. N-01-07-01-24-01-26).
2.
foreword
Dependable device + User = Dependable system?
The impact of users on a system is hard to anticipate
users behave in unexpected ways
users’ behaviour is changed by (adapts to) the device
users must understand the device
We have been working on approaches to consider the user during the formal verification of interactive systems
10.
IVY analysis of EV subsystem
The system was modelled from the operations manual
model reflects knowledge provided to the operator
properties used to express expected behaviour
A three-layered model was built (sketched below):
Each type of variable modelled as an interactor
Each screen modelled as an interactor
Navigation between screens modelled on top of that
Values displayed modelled as attributes
Buttons modelled as actions
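A minimal MAL-like sketch of this layering (screenTMT, selectScreen, ValueRange and Screens are illustrative names introduced here; tmtVariable, monitTMT and BD1_A echo names used elsewhere in the deck; the composition syntax is indicative of IVY's MAL, not quoted from the actual model):

interactor tmtVariable                      # layer 1: one interactor per type of variable
  attributes
    value : ValueRange                      # displayed value modelled as an attribute
    colour : {green, yellow, red}
    characteristic : {Fixed, Blink}
  actions
    setValue(ValueRange)                    # updates/buttons modelled as actions

interactor screenTMT                        # layer 2: one interactor per screen
  includes
    tmtVariable via BD1_A                   # a screen aggregates its variables

interactor main                             # layer 3: navigation between screens
  includes
    screenTMT via monitTMT                  # properties can then refer to monitTMT.BD1_A.colour
  attributes
    currentScreen : Screens
  actions
    selectScreen(Screens)
  axioms
    [selectScreen(_s)] currentScreen' = _s  # navigation changes the current screen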
11.
From manual to model
How the colouring scheme works (from the operations manual):
“Blinking yellow: For a critical variable, when the current
value of the variable is in non acknowledged alert (value
within the alert range), there is no acknowledged alarm in the
variable, and the previous criterion [non acknowledged alarm
criterion] is not satisfied. If over the same critical variable an
acknowledged alarm exists, then Fixed Red prevails. For a non
critical variable, when the current value of the variable is in
non acknowledged alarm (value within the alarm range).”
12.
From manual to model
critical
& ((_v >= infAlarmLim & _v < infAlertLim) | (_v <= supAlarmLim & _v > supAlertLim))
& (alarmState != AlaRec & alarmState != AlaNRec)
13.
From manual to model
!critical
& ((_v < infAlarmLim) | (_v > supAlarmLim))
14.
From manual to model
[setValue(_v)]
  (((critical & ((_v >= infAlarmLim & _v < infAlertLim) |
                 (_v <= supAlarmLim & _v > supAlertLim))) |
    (!critical & ((_v < infAlarmLim) | (_v > supAlarmLim)))) &
   (alarmState != AlaRec & alarmState != AlaNRec))        # conditions for blinking yellow
  ->
  value' = _v & colour' = yellow & error' = Lim &
  alertState' = AleNRec & characteristic' = Blink &
  keep(supAlertLim, infAlertLim, supAlarmLim, infAlarmLim,
       unity, critical, alarmState)                       # setting blinking yellow
15.
From manual to model
(alarmState != AlaNRec & alarmState != AlaRec) becomes
(alarmState = AlaRec)
17.
analysis
Can a variable be in alarm?
Trying to prove otherwise…
False as expected, but…
the counterexample highlights a situation where the variable colour is fixed red under an acknowledged alert condition – this should not be possible.
AG(monitTMT.BD1_A.colour = green -> !EX (monitTMT.BD1_A.colour = red))
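The “trying to prove otherwise” step might, for instance, be phrased as a property claiming the variable never enters an alarm state (an illustrative formula, not quoted from the study; AlaNRec, AlaRec and the qualified name come from the model above):

AG(monitTMT.BD1_A.alarmState != AlaNRec & monitTMT.BD1_A.alarmState != AlaRec)

With a property of this kind the model checker is expected to return false, and it is the accompanying counterexample trace that surfaces unexpected situations such as the fixed-red one above.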
18.
analysis
the manual does not state what happens to a non-critical alert
the model becomes non-deterministic (sketched below)
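A sketch of how such a gap surfaces in a MAL model (the guard below is assumed by mirroring the critical-variable alert band; it is not quoted from the actual model): attributes that an axiom neither assigns nor keeps are left unconstrained, so the model checker may give them any value.

[setValue(_v)]
  (!critical & ((_v >= infAlarmLim & _v < infAlertLim) |
                (_v <= supAlarmLim & _v > supAlertLim)))
  ->
  value' = _v &
  keep(supAlertLim, infAlertLim, supAlarmLim, infAlarmLim, unity,
       critical, alarmState)
  # colour', characteristic' and alertState' are neither assigned nor kept,
  # so the checker is free to choose their next values: non-determinism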
19.
conclusions/lessons learnt
It was possible to build a relevant model independently (without a deep understanding of the system) and still provide insights to the client
This particular model captures understanding of the system from the operations manual/requirements document perspective
Incomplete or inconsistent information leads to unexpected system behaviour
Computer-aided verification of user interfaces is crucial for critical, complex systems
Results can help:
improve requirements / manuals
define test cases
improve system dependability
As we add complexity to the models, verification time becomes a problem – but interesting results are possible with manageable models
Explain where we come from… Our starting point is that when you place a dependable device in front of the user, you are not guaranteed to have a dependable system. Users behave unexpectedly: firing a gun. Users adapt: landing gear.
Explain the basic ideas. Typical pattern: consistency.
Explain the basic ideas
In this paper: AniMAL. Discuss representations. Scenarios generator.
PTGS = Preparation and Testing Ground System. EV = Flight Events Sequence Network, responsible for testing and preparing one of the rocket’s electrical sub-networks. CR = Electric Control Network, responsible for the testing, simulation and analysis of the automatic launch sequence.
Attributes and actions of the screens; axioms of the descriptions.
EV subsystem: main, because of the navigation and the constraints to synchronize all the interactors; tmtVariables, because of the values of the variables and the control of the alarms and alerts triggered.
Effort.
A MacBook Pro with an Intel Core 2 Duo P8800 at 2.66 GHz with 8 GB of RAM, and a PC with an Intel Core i7 960 at 3.20 GHz with 24 GB of RAM. The machines have different operating systems, Mac OS X and Windows Server 2008 R2 Standard respectively.