2. Lecture outline
• Course information
– Examination: project
• Embedded systems
– Non-functional requirements
• Real-time systems
– Hard vs. soft
• Safety-critical systems
– Dependability attributes
• Example application area
– Automotive electronics
Lecture 1/2
3. Course information
• Contact
– Paul Pop, course leader and examiner
• Email: paul.pop@imm.dtu.dk
• Phone: 4525 3732
• Office: building 322, office 228
• Webpage
– CampusNet
– http://eselab.imm.dtu.dk/cgi-bin/wiki.cgi/SCESCourse/Home
• FeedBack page
– anonymously add feedback about the course
4. Course information, cont.
• Lectures
– Language: English
– 12 lectures + 1 invited lecture (from industry)
– Lecture notes
• available on CampusNet as a PDF file the day before
– Reading materials
• available on CampusNet as PDFs the day before
• Examination
– Project: 70% report + 20% presentation + 10% opposition
• 5 ECTS points
5. Course information, cont.
• Course literature (available as PDFs via CampusNet or DTV)
1. Laprie et al.,
Fundamental Concepts of Dependability
2. Barry W. Johnson,
An Introduction to the Design and Analysis of Fault-Tolerant Systems
3. Neil Storey,
Safety Critical Computer Systems,
Addison Wesley (selected chapters)
4. Hermann Kopetz,
Real-time Systems:
Design Principles for Distributed Embedded Applications,
Springer (selected chapters)
5. Giorgio Buttazzo,
Hard Real-time Computing Systems:
Predictable Scheduling Algorithms and Applications,
Springer (selected chapters)
6. Project, cont.
• Topic categories
1. Literature survey
• See the “references” and “further reading” in the course literature
2. Tool case-study
• Select a commercial or research tool and
use it on a case-study
3. Software implementation
• Implement a technique,
e.g., error detection or fault-tolerance technique
– Suggested topics on the course website:
http://eselab.imm.dtu.dk/cgi-bin/wiki.cgi/SCESCourse/Project
7. Project, cont.
• Examples of last year’s projects
– Worst case execution time analysis—
Theory and application
– Scheduling Anomalies
– A Fault-Tolerant Scheduling Algorithm for
Real-Time Period Tasks with Possible Software Faults
– Mars Climate Orbiter failure
– ARIANE 5: Flight 501 Failure
– London Ambulance Service
– Hamming Correcting Code Implementation in
Transmitting System
– Application of a Fault Tolerance to a Wind Turbine
8. Project, cont.
• Milestones
– Sept. 21: Group registration and topic selection
• Email to paul.pop@imm.dtu.dk
– Oct. 26: Project report draft
• Upload draft to CampusNet
– Nov. 23: Report submission
• Upload final report to CampusNet
– Dec. 4: Project presentation and oral opposition
• Upload presentation to CampusNet
9. Project, cont.
• Project registration
– E-mail Paul Pop, paul.pop@imm.dtu.dk (deadline: Sept. 21)
• Subject: 02229 registration
• Body:
– Name student #1, CPR number, e-mail
– Name student #2, CPR number, e-mail
– Name student #3, CPR number, e-mail
– Project title
– Project details
• Registration is followed by project approval
• Notes
– Groups of up to 3 persons
– Contact me if you can’t find project partners
10. Project presentation & opposition
• Presentation of project (deadline: Dec. 5)
– 15 min. + 5 min. questions
• Oral opposition
– Read the draft report
– Prepare at least one question per group member
• Ask the questions after the presentation
11. Project deliverables
1. Literature survey
– Written report (~5000 words)
• Structure: title, authors, abstract, introduction, body, conclusions, references
2. Tool case-study
– Case-study files
– Report
• Document your work
3. Software implementation
– Source code with comments
– Report
• Document your work
• Deadline for draft: Oct. 26
• Deadline for final version: Nov. 23
12. Project: important dates
[Calendars for Sept.–Dec. 2007 marking the milestones: Register (Sept. 21), Upload draft (Oct. 26), Upload final report (Nov. 23), Present & oppose (Dec.)]
13. Embedded systems
• Computing systems are everywhere
• Most of us think of “desktop” computers
– PC’s
– Laptops
– Mainframes
– Servers
• But there’s another type of computing system
– Far more common...
14. Embedded systems, cont.
• Embedded computing systems
– Computing systems embedded within electronic devices
– Hard to define: nearly any computing system other than a desktop computer
– Billions of units produced yearly, versus millions of desktop units
– Perhaps 50 per household and per automobile
[Figure callouts: “Computers are in here... and here... and even here. Lots more of these, though they cost a lot less each.”]
15. A “short list” of embedded systems
Anti-lock brakes
Auto-focus cameras
Automatic teller machines
Automatic toll systems
Automatic transmission
Avionic systems
Battery chargers
Camcorders
Cell phones
Cell-phone base stations
Cordless phones
Cruise control
Curbside check-in systems
Digital cameras
Disk drives
Electronic card readers
Electronic instruments
Electronic toys/games
Factory control
Fax machines
Fingerprint identifiers
Home security systems
Life-support systems
Medical testing systems
Modems
MPEG decoders
Network cards
Network switches/routers
On-board navigation
Pagers
Photocopiers
Point-of-sale systems
Portable video games
Printers
Satellite phones
Scanners
Smart ovens/dishwashers
Speech recognizers
Stereo systems
Teleconferencing systems
Televisions
Temperature controllers
Theft tracking systems
TV set-top boxes
VCR’s, DVD players
Video game consoles
Video phones
Washers and dryers
Our daily lives depend on embedded systems
17. What is an embedded system?
• Definition
– an embedded system is a special-purpose computer system,
part of a larger system which it controls.
• Notes
– A computer is used in such devices primarily as a means to
simplify the system design and to provide flexibility.
– Often the user of the device is not even aware that a
computer is present.
18. Characteristics of embedded systems
• Single-functioned
– Dedicated to perform a single function
• Complex functionality
– Often have to run sophisticated algorithms or multiple algorithms.
• Cell phone, laser printer.
• Tightly-constrained
– Low cost, low power, small, fast, etc.
• Reactive and real-time
– Continually reacts to changes in the system’s environment
– Must compute certain results in real-time without delay
• Safety-critical
– Must not endanger human life or the environment
19. Functional vs. non-functional requirements
• Functional requirements
– output as a function of input
• Non-functional requirements:
– Time required to compute output
– Reliability, availability, integrity,
maintainability, dependability
– Size, weight, power consumption, etc.
20. Real-time systems
• Time
– The correctness of the system behavior depends not only on
the logical results of the computations, but also on the time
at which these results are produced.
• Real
– The reaction to the outside events must occur during their
evolution. The system time must be measured using the
same time scale used for measuring the time in the
controlled environment.
23. Hard vs. soft
• Definitions
– A real-time task is said to be hard if missing its deadline may
cause catastrophic consequences on the environment under
control.
– A real-time task is said to be soft if meeting its deadline is
desirable for performance reasons, but missing its deadline
does not cause serious damage to the environment and
does not jeopardize correct system behaviour.
• Definition
– A real-time system that is able to handle hard real-time
tasks is called a hard real-time system.
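The distinction can be made concrete in code. A minimal sketch (hypothetical task names, deadlines, and finishing times, not from the lecture): only for a hard task does a missed deadline count as a system failure.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    deadline_ms: float  # relative deadline
    hard: bool          # True: missing the deadline is a system failure

def classify_completion(task: Task, finish_ms: float) -> str:
    """Classify one task instance: met, or the consequence of a miss."""
    if finish_ms <= task.deadline_ms:
        return "met"
    # A miss is catastrophic for a hard task; for a soft task it only
    # degrades performance and the system remains correct.
    return "failure" if task.hard else "degraded"

control = Task("actuator_control", deadline_ms=5.0, hard=True)
display = Task("status_display", deadline_ms=100.0, hard=False)

print(classify_completion(control, 6.2))    # failure
print(classify_completion(display, 150.0))  # degraded
```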
24. Hard vs. soft, cont.
• Examples of hard activities
– Sensory data acquisition
– Detection of critical conditions
– Actuator servoing
– Low-level control of critical system components
– Planning sensory-motor actions that tightly interact with the
environment
• Examples of soft activities
– The command interpreter of the user interface
– Handling input data from the keyboard
– Displaying messages on the screen
– Representation of system state variables
– Graphical activities
– Saving report data
25. Murphy’s laws
• Murphy’s general law
– “If something can go wrong, it will go wrong”
Major Edward A. Murphy, Jr., US Air Force, 1949
• Murphy’s constant
– Damage to an object is proportional to its value.
• Troutman postulates
– Any software bug will tend to maximize the damage.
– The worst software bug will be discovered six months after the field test.
• Green’s law
– If the system is designed to be tolerant to a set of faults,
there will always exist an idiot skilled enough to cause a nontolerated fault.
• Corollary
– Dummies are always more skilled than measures taken to keep them from harm.
• Johnson’s first law
– If a system stops working, it will do so at the worst possible time.
• Sodd’s second law
– Sooner or later, the worst possible combination of circumstances will happen.
• Corollary
– A system must always be designed to resist
the worst possible combination of circumstances
26. Genesis Space Capsule
• Genesis capsule
– Cost: $260 million
– Collected samples of the solar wind over a three-year period
– Crashed in Sept. 2004 due to the failure of its parachutes
• Reason for the crash
– The accelerometers were all
installed backwards. The craft’s
autopilot never got a clue that it
had hit an atmosphere and that
hard ground was just ahead.
27. Mars Orbiter
• One of the Mars Orbiter probes crashed into the planet in 1999.
• It turned out that the engineers who built the Mars Climate
Orbiter had provided a data table in pound-force rather than
newtons, the metric measure of force.
• NASA flight controllers at the Jet Propulsion Laboratory in
Pasadena, Calif., had used the faulty table for their navigation
calculations during the long trip from Earth to Mars.
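The size of the unit error is easy to quantify: one pound-force is 4.448222 newtons, so every figure read from the mislabeled table understated the force by a factor of about 4.45. A sketch (hypothetical function name and table value, for illustration only):

```python
LBF_TO_N = 4.448222  # 1 pound-force in newtons

def impulse_newton_seconds(table_value: float, table_uses_lbf: bool) -> float:
    """Interpret a thruster-impulse table entry, converting only when the
    caller knows the table was recorded in pound-force seconds."""
    return table_value * LBF_TO_N if table_uses_lbf else table_value

value = 10.0                                    # hypothetical table entry
assumed = impulse_newton_seconds(value, False)  # what navigation believed (N*s)
actual = impulse_newton_seconds(value, True)    # physical reality (N*s)
print(actual / assumed)  # ~4.45: the per-firing navigation error
```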
28. Lockheed Martin Titan 4
• In 1998, a LockMart Titan 4 booster carrying a $1 billion
LockMart Vortex-class spy satellite pitched sideways and
exploded 40 seconds after liftoff from Cape Canaveral, Fla.
• Reason: fried wiring that apparently had not been inspected.
The guidance systems were without power for a fraction of a
second.
29. Therac-25
• Therac-25:
– among the most serious computer-related accidents to date (at least
nonmilitary and admitted)
– machine for radiation therapy (treating cancer)
– between June 1985 and January 1987 (at least) six patients received
severe overdoses (two died shortly afterward; two more might have died
from the overdose but died of their cancer first; the other two had
permanent disabilities)
– scanning magnets are used to spread the beam and vary the beam
energy
– dual-mode: electron beams for surface tumors, X-ray for deep tumors
31. Denver Airport
• Denver International Airport, Colorado: intelligent luggage
transportation system with 4000 “Telecars”, 35km rails,
controlled by a network of 100 computers with 5000
sensors, 400 radio antennas, and 56 barcode readers.
Price: $186 million (BAE Automated Systems).
• Due to SW problems, about one year of delay, which cost $1.1
million per day (1993).
• Abandoned in 2005 to save $1 million per month on
maintenance
32. Reliability
• Definition
– Reliability is the probability of a component, or system, functioning
correctly over a period of time under a given set of operating
conditions.
• Notes
– “Functioning correctly” means operating as defined within its specification
– Assumes:
• the system was functioning correctly at the beginning of the period
• no maintenance is carried out during the period
– Reliability varies with time
• The probability of operating correctly over one year is
much lower than over a month
– Important where continuous uninterrupted operation is essential
• Flight-critical aircraft system
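The time dependence becomes explicit under the constant-failure-rate (exponential) model that is common in the dependability literature; this is a sketch, not from the slide, and the failure rate below is a made-up illustration.

```python
import math

def reliability(t_hours: float, failure_rate: float) -> float:
    """R(t) = exp(-lambda * t): probability of failure-free operation
    over [0, t], assuming a constant failure rate lambda (per hour)."""
    return math.exp(-failure_rate * t_hours)

lam = 1e-5       # hypothetical: one failure per 100,000 hours on average
month = 30 * 24  # hours
year = 365 * 24

print(round(reliability(month, lam), 4))  # 0.9928
print(round(reliability(year, lam), 4))   # 0.9161 -- lower over the longer period
```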
33. Availability
• Definition
– The availability of a system is the probability that the system will be
functioning correctly at any given time.
• Notes
– Relates to a particular point in time, not a period as reliability does
– Average availability
• Example: if during 1000 hours the system is out of operation for 1 hour,
the average availability is 999/1000 = 0.999
– Important
• High availability systems: telephone exchanges have just a few hours of
“downtime” during their life-time
• Safety-critical systems: a nuclear reactor shutdown system is employed
infrequently, but it has to work correctly when needed
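The example arithmetic in code, together with the steady-state form A = MTBF / (MTBF + MTTR) often used in practice (a standard identity from the dependability literature, not stated on the slide):

```python
def average_availability(uptime_hours: float, total_hours: float) -> float:
    """Fraction of a period during which the system was operational."""
    return uptime_hours / total_hours

def steady_state_availability(mtbf: float, mttr: float) -> float:
    """A = MTBF / (MTBF + MTTR): mean time between failures vs. mean
    time to repair."""
    return mtbf / (mtbf + mttr)

# The slide's example: 1 hour of downtime in a 1000-hour period
print(average_availability(999, 1000))        # 0.999
# Equivalent view: fails on average every 999 h, repaired in 1 h
print(steady_state_availability(999.0, 1.0))  # 0.999
```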
34. Failsafe operation
• Definition
– A system is failsafe if it adopts “safe” output states in the
event of failure and inability to recover.
• Notes
– Example of failsafe operation
• Railway signaling system: failsafe corresponds to all the lights on red
– Many systems are not failsafe
• Fly-by-wire system in an aircraft: the only safe state is on the ground
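The railway example as a sketch (hypothetical signal names and API): on a detected, unrecoverable fault the output stage forces every signal to the safe state, regardless of what was requested.

```python
SAFE_ASPECT = "red"  # the safe output state for a railway signal

def drive_signals(unrecoverable_fault: bool, requested: dict) -> dict:
    """Failsafe output stage: pass requests through in normal operation,
    force all signals to red when a fault cannot be recovered from."""
    if unrecoverable_fault:
        return {signal: SAFE_ASPECT for signal in requested}
    return dict(requested)

requests = {"S1": "green", "S2": "yellow"}
print(drive_signals(False, requests))  # normal: requests pass through
print(drive_signals(True, requests))   # failsafe: everything red
```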
35. System integrity
• Definition
– The integrity of a system is its ability to detect faults in its
own operation and to inform the human operator.
• Notes
– The system will enter a failsafe state if faults are detected
– High-integrity system
• Failure could result in large financial loss
• Examples: telephone exchanges, communication satellites
36. Safety-critical systems
• Definitions
– Safety is the property that a system will not endanger
human life or the environment.
– A safety-related system is one by which the safety of the
equipment or plant is ensured.
• A safety-critical system is:
– a safety-related system, or
– a high-integrity system
37. Developing safety-critical systems
• Process steps, from requirements to the completed system:
– Hazard and risk analysis
– Specification
– Architectural design
– Module design
– Module construction and testing
– System integration and testing
– System verification
– System validation
– Certification
38. Preliminary topics
• Introduction
• Fundamental concepts: faults, types, models;
error detection
• Dependability analysis
• Fault-tolerance, techniques
• Hazard and risk analysis
• Scheduling, fundamental concepts
• Time, clock synchronization
• Periodic scheduling, schedulability analysis
• System architecture and design
39. Example application area:
automotive electronics
• What is “automotive electronics”?
– Vehicle functions implemented with electronics
• Body electronics
• System electronics (chassis, engine)
• Information/entertainment
40. Automotive electronics market
[Chart: Cost of Electronics / Car ($), 1998–2005, scale $0–$1400]
Market ($ billions): 1998: 8.9, 1999: 10.5, 2000: 13.1, 2001: 14.1,
2002: 15.8, 2003: 17.4, 2004: 19.3, 2005: 21.0
More than 25% of the total cost
of a car is electronics