Binseng Wang, ScD, CCE – Vice President, Performance Management & Regulatory Compliance, ARAMARK Healthcare’s Clinical Technology Services
Clinical engineering (CE) professionals have realized for some time that the “preventive maintenance” (PM) that they have been performing for many years is no longer able to prevent any failures, although some safety and performance inspections (SPIs) can help detect hidden and potential failures that affect patient safety. To help CE professionals decide whether they should continue to perform scheduled maintenance (SM) or not, a systematic method for determining maintenance effectiveness has been developed. This method uses a small set of codes to classify failures found during repairs and SM (PMs and SPIs). Analysis of the failure patterns and their effects on patients and users allows CE professionals to compare the effectiveness of different maintenance strategies, and justify changes in strategies, such as decreasing SM, deploying statistical sampling, or even eliminating SM.
2. What is your definition of PM?
• Preventive Maintenance (or Preventative Maintenance)
• Predictive Maintenance
• Planned Maintenance or Proactive Maintenance
• Percussive Maintenance: the fine art of whacking the crap out of an electronic device (or anything else) to get it to work again. (Manny Roman, DITEC Ink)
• Percussive Management: the fine art of managing people with 2"x4" boards (or whatever else heavy is handy) but not killing them, aka waterboarding. (Censored by HS & HR…)
3. How do you currently decide on PM?
• OEM said to do it
• Joint Commission said to do it (100% for life support & less for non-life support)
• Our state licensing code (or CMS rules) requires 100% PM on everything
• Even a single injury or death would be unacceptable -> total, absolute safety
• That is what and how we have always done it for the last >20-30 years!
Remember the roast beef!
4. Good News and Bad News
• Good News
  – No significant changes to TJC Med Equip Mgmt standards from 2010
• Even Better News
  – CMS accepted TJC standards in lieu of "according to OEM recommendations"
• Bad News
  – Both CMS and TJC are going to scrutinize maintenance programs (strategies) more carefully
  – How do you prove your non-OEM maintenance strategy is not shortchanging patient safety?!
5. Table of Contents
• Introduction
  – How do you convince surveyors that your maintenance program is effective?
• Evidence-Based Maintenance (the Plan-Do-Check-Act cycle)
  – Maintenance planning (plan)
  – Maintenance implementation (do)
  – Maintenance monitoring (check)
  – Maintenance improvement (act)
• Discussion and Conclusions
  – Implementation lessons
  – Conclusions
6. Acknowledgement
• The data presented here were collected by dozens of BMETs at hospitals managed by ARAMARK Healthcare under the leadership of the following Technology Managers:
  – Jim Fedele
  – Len Barnett
  – Tim Huffman, Steve Zellers
  – Bob Pridgen, Bob Wakefield, Allan Williams
  – Chad Granade
  – Bobby Stephenson
  – Dana Lesueur
  – Steve Cunningham
  – Bob Helfrich
  – Scott Newman
  – Jared Koslosky
7. REFERENCES
• B. Wang, E. Furst, T. Cohen, O.R. Keil, M. Ridgway, R. Stiefel, Medical Equipment Management Strategies, Biomed Instrum & Techn, May/June 2006, 40:233-237
• B. Wang, Evidence-Based Maintenance, 24x7 magazine, April 2007
• B. Wang, Evidence-Based Medical Equipment Maintenance Management, in L. Atles (ed.), A Practicum for Biomedical Technology & Management Issues, Kendall-Hunt, 2008
• M. Ridgway, Optimizing Our PM Programs, Biomed Instrum & Techn, May/June 2009, 244-254
• M. Ridgway, L.R. Atles & A. Subhan, Reducing Equipment Downtime: A New Line of Attack, J Clin Eng, 34:200-204, 2009
8. Related Publications
• Wang B, Fedele J, Pridgen B, Rui T, Barnett L, Granade C, Helfrich R, Stephenson B, Lesueur D, Huffman T, Wakefield JR, Hertzler LW & Poplin B. Evidence-Based Maintenance: I - Measuring maintenance effectiveness with failure codes, J Clin Eng, July-Sept 2010, 35:132-144
• Wang et al. Evidence-Based Maintenance: II - Comparing maintenance strategies using failure codes, J Clin Eng, Oct-Dec 2010, 35:223-230
• Wang et al. Evidence-Based Maintenance: III - Enhancing patient safety using failure code analysis, J Clin Eng, Apr-June 2011, 36:72-84
9. How do you convince surveyors that your maintenance program is effective?
• Adopted "risk"-based inclusion criteria
  – Good intentions (plans) do not guarantee good outcomes
• PM completion per TJC requirements
  – Most "PMs" do not prevent failures but only find failures that already occurred. Process ≠ outcome.
• Fast repair turnaround time
  – Depending on mission criticality and the availability of back-ups, some failures and turnaround times are NOT acceptable to users
• Repeat work orders < certain threshold
  – Reasonable threshold depends on the type of failure
• Failed PMs < certain threshold
  – idem
11. Table of Contents
• Introduction
  – How do you convince surveyors that your maintenance program is effective?
• Evidence-Based Maintenance (the Plan-Do-Check-Act cycle)
  – Maintenance planning (plan)
  – Maintenance implementation (do)
  – Maintenance monitoring (check)
  – Maintenance improvement (act)
• Discussion and Conclusions
  – Implementation lessons
  – Conclusions
13. Maintenance Monitoring (check)
• Process Measures (do the right thing right!)
  – SPI/PM completion rates (TJC)
  – Maintenance logs (CMS)
  – Repair call response or turn-around time
  (Did you earn your diploma by day-dreaming every day in class, i.e., perfect attendance?)
• Outcome/Effectiveness Measures (evidence)
  – Uptime
  – Global failure rate
  – Patient incidents (including "near misses")
  – Failure codes
  – Repeated repairs
  – Others: MTBF, customer satisfaction, etc.
(Wang et al., CE Benchmarking, JCE, Jan-Mar 2008)
16. Maintenance Categories
Failure patterns vs. maintenance strategies:
• Proactive maintenance: tasks undertaken before a failure occurs to prevent the equipment from failing. Proactive maintenance must be technically feasible and worth doing. Typically useful for failure patterns A, B, and C.
• Reactive ("default") maintenance: actions undertaken after a failure has occurred (to restore the equipment to original performance standards). Typically useful for failure patterns D, E, and F.
[Figure: failure rate vs. time curves for the failure patterns]
17. Failure Codes – Equipment Failures
Maintenance type: Scheduled maintenance (SM), including inspection, calibration, and preventive maintenance
FAILURE CODE | DESCRIPTION
EF | Evident failure, i.e., a problem that can be detected--but was not reported--by the user without running any special tests or using specialized test/measurement equipment.
HF | Hidden failure, i.e., a problem that could not be detected by the user unless running a special test or using specialized test/measurement equipment.
PF | Potential failure, i.e., a failure that is either about to occur or in the process of occurring but has not yet caused the equipment to stop working or problems to patients or users.
NPF | No problem found.
18. Failure Codes – Equipment Failures
Maintenance type: Corrective maintenance (CM), including repairs performed for failures detected during SM
FAILURE CODE | DESCRIPTION
UPF | Unpreventable failure, evident to user, typically caused by normal wear and tear but is unpredictable.
USE | Failure induced by use, e.g., abuse, abnormal wear & tear, accident, or environment issues. Does NOT include use error (typically no equipment failure).
PPF | Preventable and predictable failure, evident to user.
SIF | Service-induced failure, i.e., failure induced by corrective or scheduled maintenance that was not properly completed or a part that was replaced and had premature failure ("infant mortality").
CND | Cannot duplicate. Includes use errors. Same as NPF.
FFPM | Failure found during PM (to avoid duplication of codes).
19. Failure Codes – Peripheral Failures
Maintenance type: CM or SM
FAILURE CODE | DESCRIPTION
BATT | Battery failure, i.e., battery(ies) failed before the scheduled replacement time.
ACC | Accessory (excluding batteries) failure evident to user, typically caused by normal wear and tear.
NET | Failure in or caused by network, while the equipment itself is working without problems. Applicable only to networked equipment.
NOTE: Any resemblance to prior works by A. Subhan, P. Thorburn, and M. Ridgway is NOT mere coincidence.
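The three code tables above can be collected into a single lookup. A minimal sketch, where the dictionary structure and function name are mine (the codes and condensed descriptions come from the tables):

```python
# Hedged illustration only: structure and names are assumptions, the
# codes are the ones defined in the slides above.

FAILURE_CODES = {
    "SM": {   # found during scheduled maintenance (inspection, calibration, PM)
        "EF": "Evident failure: detectable by the user but not reported",
        "HF": "Hidden failure: detectable only with special tests/equipment",
        "PF": "Potential failure: in progress but not yet causing problems",
        "NPF": "No problem found",
    },
    "CM": {   # found during corrective maintenance (repairs)
        "UPF": "Unpreventable failure (normal wear and tear, unpredictable)",
        "USE": "Failure induced by use, abuse, accident, or environment",
        "PPF": "Preventable and predictable failure",
        "SIF": "Service-induced failure (incl. 'infant mortality' of parts)",
        "CND": "Cannot duplicate (incl. use errors); same as NPF",
        "FFPM": "Failure found during PM",
    },
    "EITHER": {   # peripheral failures, recorded during CM or SM
        "BATT": "Battery failed before the scheduled replacement time",
        "ACC": "Accessory failure (excluding batteries)",
        "NET": "Network-caused failure on networked equipment",
    },
}

def describe(code):
    """Look up a code's description regardless of maintenance type."""
    for group in FAILURE_CODES.values():
        if code in group:
            return group[code]
    raise KeyError(code)
```

Keeping the codes in one place like this makes it easy to populate CMMS pick-lists and to validate recorded work orders later.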
20. Failure Codes Data Collection
Hospital | Total #Staffed Beds | Total #Equipment | Teaching Nature | Starting Date | #Work Orders
A | 161 | 5,200 | Non-Teaching | 9/1/08 | 12,892
B | 256 | 2,800 | Non-Teaching | 3/1/09 | 6,265
C | 360 | 4,500 | Non-Teaching | 4/1/09 | 9,205
D | 415 | 6,800 | Non-Teaching | 10/1/08 | 18,201
E | 586 | 9,200 | Minor Teaching | 11/1/09 | 12,733
F | 169 | 3,200 | Major Teaching | 11/1/09 | 5,414
G | 159 | 3,300 | Minor Teaching | 11/1/09 | 5,396
H | 193 | 2,400 | Non-Teaching | 2/1/10 | 3,402
I | 439 | 6,600 | Minor Teaching | 8/1/08 | 17,391
J | 335 | 5,300 | Non-Teaching | 1/1/08 | 18,293
K | 169 | 3,000 | Minor Teaching | 11/1/09 | 5,616
L | 318 | 5,500 | Minor Teaching | 8/1/08 | 14,762
M | 370 | 4,700 | Non-Teaching | 3/1/09 | 7,087
TOTAL | 3,930 | 62,500 | | | 136,657
21. Failure Codes Data – Single equipment type from a single hospital
• 24 consecutive months of SM data
[Chart: estimated probability for each SM failure code (NPF, ACC, BATT, EF, HF, PF) – Single Channel Infusion Pumps, SM only, Hospital D, 316 units]
Remember the Law of Large Numbers!
22. Failure Codes Data – Single equipment type from a single hospital
• 24 consecutive months of CM data
[Chart: estimated probability for each CM failure code (CND, UPF, ACC, BATT, USE, SIF, PPF) – Single Channel Infusion Pumps, CM only, Hospital D, 316 units]
Remember the Law of Large Numbers!
23. Annual Failure Probability (AFP)
AFP is the probability of finding a particular class of failure (e.g., HF) during a year, calculated as below:
• SM failure codes (EF, PF & HF):
  – #codes/#SMs completed
• CM failure codes (UPF, USE, PPF & SIF):
  – #codes/#CMs completed * ETFR, where ETFR = #CMs/year/#units (equipment type failure rate)
• ACC & BATT:
  – Combine SM and CM probabilities as calculated above
• No Fail(ure):
  – No Fail = 1 – sum (all other failure probabilities)
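The AFP arithmetic above can be sketched in a few lines. All counts below are hypothetical and the function names are mine, not from the study:

```python
# A minimal sketch of the AFP calculation, assuming invented counts
# for one equipment type over one year.

def afp_sm(code_count, sms_completed):
    """AFP for an SM-detected code (EF, PF, HF): #codes / #SMs completed."""
    return code_count / sms_completed

def afp_cm(code_count, cms_completed, cms_per_year, num_units):
    """AFP for a CM-detected code (UPF, USE, PPF, SIF):
    (#codes / #CMs completed) * ETFR, with ETFR = #CMs/year / #units."""
    etfr = cms_per_year / num_units  # equipment type failure rate
    return (code_count / cms_completed) * etfr

# Hypothetical year: 316 pumps, 300 SMs completed, 80 CMs completed
units, sms, cms = 316, 300, 80
afp = {
    "HF": afp_sm(12, sms),               # 12 hidden failures found during SM
    "UPF": afp_cm(30, cms, cms, units),  # 30 unpreventable failures during CM
}
afp["No Fail"] = 1 - sum(afp.values())   # remainder is "no failure"
```

Note that the CM-side scaling by ETFR converts a per-repair proportion into a per-unit annual probability, which is what makes the SM and CM codes comparable on one chart.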
24. Failure Codes Data – Single equipment type from a single hospital
• Combining SM & CM data -> Annual Failure Probability (AFP)
[Chart: estimated AFP per unit – Single Channel Infusion Pumps, Hospital D, 316 units; codes No Fail, UPF, ACC, BATT, USE, EF, SIF, HF, PF, PPF, with an inset magnifying SIF, HF, PF, PPF]
25. Failure Codes Data – Single equipment type from a single hospital
• Comparing AFP from 2 consecutive years
[Chart: estimated AFP per unit, Year 1 vs. Year 2 – Single Channel Infusion Pumps, Hospital D, 316 units; codes No Fail, UPF, ACC, BATT, USE, EF, SIF, HF, PF, PPF, with an inset magnifying SIF, HF, PF, PPF]
26. Failure Codes Data – Single equipment type from a single hospital
[Chart: estimated AFP per unit – Vital Signs Monitor, Hospital A, 174 units; codes No Fail, UPF, ACC, BATT, USE, EF, SIF, HF, PF, PPF]
27. Failure Codes Data – Single equipment type from a single hospital
[Chart: estimated AFP per unit – Portable Patient Monitors, Hospital C, 170 units; codes No Fail, UPF, ACC, BATT, USE, EF, SIF, HF, PF, PPF, with an inset magnifying SIF, HF, PF, PPF]
28. Failure Codes Data – Single equipment type from multiple hospitals
[Chart: estimated AFP per unit – General Purpose Electrosurgical Unit (ESU); one series per hospital with unit counts (A-3, B-18, C-21, D-24, E-21, F-8, G-10, H-8, I-25, I-23, K-13, L-37, M-25) plus the mean; codes No Fail, UPF, ACC, BATT, USE, EF, SIF, HF, PF, PPF, with an inset magnifying SIF, HF, PF, PPF]
29. Failure Codes Data – Single equipment type from multiple hospitals
[Chart: estimated AFP per unit – Electronic Thermometer; one series per hospital with unit counts (C-70, D-362, E-531, G-170, H-95, I-378, I-226, K-32, L-183, M-48) plus the mean; codes No Fail, UPF, ACC, BATT, USE, EF, SIF, HF, PF, PPF, with an inset magnifying SIF, HF, PF, PPF]
30. Failure Codes Data – Single equipment type from multiple hospitals
[Chart: estimated AFP per unit – Battery-Powered Mon/Pace/Defibrillator; one series per hospital with unit counts (A-32, B-30, C-42, D-60, E-70, F-25, G-42, H-23, I-81, I-55, K-44, L-52, M-57) plus the mean; codes No Fail, UPF, ACC, BATT, USE, EF, SIF, HF, PF, PPF, with an inset magnifying SIF, HF, PF, PPF]
31. Using Failure Codes Data
• Analyses performed in two ways:
  A. Comparing data obtained using different maintenance strategies within each equipment class -> determine effectiveness of maintenance strategies
  B. Considering all data for each class of equipment (regardless of maintenance strategy adopted) -> evaluate the effectiveness of CE activities, comparing current activities (SPI/PM, repairs, etc.) versus potential activities (i.e., impact of CE on equipment failures)
32. A. Maintenance Strategies Comparison
Two ways to compare maintenance strategies:
• Data from different sites (lateral comparisons)
  – Advantage: no need to wait for data collection (assuming the same failure codes are adopted)
  – Disadvantage: there could be differences in brand/model and/or accessories, user care, etc.
• Data from same site (longitudinal studies)
  – Advantage: no differences in brand/model and/or accessories, user care, etc.
  – Disadvantage: need to wait for data collection
33. (Lateral) Comparison of Maintenance Strategies
• Types of maintenance strategies adopted at different sites:
  – F3 - Fixed schedule full service or inspection every 3 months
  – F6 - Fixed schedule full service or inspection every 6 months
  – F12 - Fixed schedule full service or inspection every 12 months
  – Samp - Statistical sampling
  – R/R - Repair or replace
34. Battery-powered defibrillator/monitor/pacemaker
• Any detectable differences?
[Chart: estimated AFP per unit by strategy (F3-80, F6-327); codes No Fail, UPF, ACC, BATT, USE, EF, SIF, HF, PF, PPF, with an inset magnifying SIF, HF, PF, PPF]
35. Vital Signs Monitor
• Any detectable differences?
[Chart: estimated AFP per unit by strategy (Samp-147, F12-655, R/R-71); codes No Fail, UPF, ACC, BATT, USE, EF, SIF, HF, PF, PPF, with an inset magnifying SIF, HF, PF, PPF]
36. Pulse Oximeters
• Any detectable differences?
[Chart: estimated AFP per unit by strategy (Samp-149, F12-464, R/R-206); codes No Fail, UPF, ACC, BATT, USE, EF, SIF, HF, PF, PPF, with an inset magnifying SIF, HF, PF, PPF]
37. Sequential & Intermittent Compression Devices
• Any detectable differences?
[Chart: estimated AFP per unit by strategy (Samp-278, F12-722); codes No Fail, UPF, ACC, BATT, USE, EF, SIF, HF, PF, PPF, with an inset magnifying SIF, HF, PF, PPF]
38. Single-channel infusion pumps
• Any detectable differences?
[Chart: estimated AFP per unit by strategy (Samp-542, F12-1150); codes No Fail, UPF, ACC, BATT, USE, EF, SIF, HF, PF, PPF, with an inset magnifying SIF, HF, PF, PPF]
39. Radiant Infant Warmers
• Any detectable differences?
[Chart: estimated AFP per unit by strategy (F6-69, F12-91, Samp-19); codes No Fail, UPF, ACC, BATT, USE, EF, SIF, HF, PF, PPF, with an inset magnifying SIF, HF, PF, PPF]
40. Electronic Thermometers
• Any detectable differences?
[Chart: estimated AFP per unit by strategy (F12-231, R/R-1862); codes No Fail, UPF, ACC, BATT, USE, EF, SIF, HF, PF, PPF, with an inset magnifying SIF, HF, PF, PPF]
41. Answer to Surveyor Question
• How do you prove your non-OEM maintenance strategy is not shortchanging patient safety?!
• Compare AFPs between "according to OEM recommendations" and "my maintenance strategy":
  – No difference (difference < SD): I should be allowed to use "my maintenance strategy"
  – Difference found: change maintenance strategy and monitor again => Maintenance Improvement
• In general, statistical sampling is preferable to Repair/Replace ("run to failure") as you can monitor trends instead of waiting for annual reviews.
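The "difference < SD" rule above can be sketched numerically. This sketch assumes a binomial standard deviation for each estimated probability; the counts and helper names are hypothetical, not from the study:

```python
# Hedged sketch of comparing a hidden-failure AFP estimate under two
# maintenance strategies, treating each estimate as a binomial proportion.
import math

def afp_with_sd(failures, n):
    """Estimated failure probability and its binomial standard deviation."""
    p = failures / n
    return p, math.sqrt(p * (1 - p) / n)

def detectably_different(p1, sd1, p2, sd2):
    """The slide's rule of thumb: a difference smaller than the combined
    SD of the two estimates is treated as no difference."""
    return abs(p1 - p2) >= math.sqrt(sd1**2 + sd2**2)

# Hypothetical: 6 HF in 327 SMs under an OEM F6 schedule vs. 3 HF in 147
# sampled SMs under a statistical-sampling strategy
p_oem, sd_oem = afp_with_sd(6, 327)
p_samp, sd_samp = afp_with_sd(3, 147)
print(detectably_different(p_oem, sd_oem, p_samp, sd_samp))  # prints False
```

With these invented counts the difference is well inside the combined SD, so the alternative strategy would be kept and monitoring would continue.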
42. Table of Contents
• Introduction
  – How do you convince surveyors that your maintenance program is effective?
• Evidence-Based Maintenance (the Plan-Do-Check-Act cycle)
  – Maintenance planning (plan)
  – Maintenance implementation (do)
  – Maintenance monitoring (check)
  – Maintenance improvement (act)
• Discussion and Conclusions
  – Implementation lessons
  – Conclusions
43. Maintenance Improvement
• Maintenance Revision & Continual Improvement
  – Inventory classification revision
  – SM frequency revision
  – Work instruction (tasks) revision
• ...while continuing to monitor effectiveness (evidence) and efficiency using:
  – Uptime
  – Failure rate
  – Patient incidents (including "near misses")
  – Failure codes
  – Others: MTBF, customer satisfaction, etc.
  – Financial indicators
44. B. Evaluation of CE Activities
Grouping of failure codes by CE action
Failure Code | CE Responsibility | Action Class
NPF | none | None or review
UPF | advise Purchasing | FUTURE
ACC | guide users and Purchasing | INDIRECT
BATT | guide users and Purchasing | INDIRECT
NET | work with IT | INDIRECT
USE | guide users and Facilities | INDIRECT
EF | guide users | INDIRECT
SIF | educate staff and advise OEMs | DIRECT
HF | review SM program | DIRECT
PF | review SM program | DIRECT
PPF | review SM program | DIRECT
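Assuming the code-to-class mapping in the table, rolling per-code AFPs up into the chart groups can be sketched as below. The AFP numbers are invented, loosely shaped like the defibrillator results:

```python
# Hedged sketch: the mapping follows the table above; the AFP values are
# illustrative assumptions, not study data.

ACTION_CLASS = {
    "UPF": "FUTURE",
    "ACC": "INDIRECT", "BATT": "INDIRECT", "NET": "INDIRECT",
    "USE": "INDIRECT", "EF": "INDIRECT",
    "SIF": "DIRECT", "HF": "DIRECT", "PF": "DIRECT", "PPF": "DIRECT",
}

def group_afp(afp_by_code):
    """Sum per-code AFPs into DIRECT / INDIRECT / FUTURE / No Failure."""
    groups = {"DIRECT": 0.0, "INDIRECT": 0.0, "FUTURE": 0.0}
    for code, p in afp_by_code.items():
        cls = ACTION_CLASS.get(code)
        if cls:
            groups[cls] += p
    groups["No Failure"] = 1.0 - sum(groups.values())
    return groups

# Invented per-code AFPs for one equipment type
afp = {"UPF": 0.09, "ACC": 0.20, "BATT": 0.06, "USE": 0.01, "EF": 0.01,
       "SIF": 0.01, "HF": 0.005, "PF": 0.005}
grouped = group_afp(afp)   # e.g., DIRECT ends up a small slice (~2%)
```

The point of the grouping is visible even in toy numbers: the DIRECT slice (what SM revision can influence) is much smaller than the INDIRECT and FUTURE slices.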
45. Battery-powered defibrillator/monitor/pacemaker
[Chart: estimated AFP per unit – Battery-Powered Mon/Pace/Defibrillator, with the codes grouped into CE future, CE indirect, and CE direct; codes No Fail, UPF, ACC, BATT, USE, EF, SIF, HF, PF, PPF, with an inset magnifying SIF, HF, PF, PPF]
46. Failure Code Grouping Results
• Battery-Powered Mon/Pace/Defibrillator: Direct 2%, Indirect 28%, Future 9%, No Failure 61%
• Vital Signs Monitors: Direct 2%, Indirect 47%, Future 16%, No Failure 35%
• Pulse Oximeters: Direct 1%, Indirect 22%, Future 6%, No Failure 71%
• Single-Channel Infusion Pumps: Direct 3%, Indirect 56%, Future 24%, No Failure 17%
47. Using the Risk-Management Approach to Determine Impact
• Risk is defined as "the combination of the probability of occurrence of harm and the severity of that harm." (ISO/IEC Guide 51:1999 and ISO 14971:2007)
• Calculated risk = probability * severity [of harm]
• The "risk-based criteria" should actually be called "severity-based criteria," due to the lack of probability!
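As a toy illustration of calculated risk (and of why severity alone misleads), the sketch below multiplies assumed AFPs by assumed severity weights and ranks the results; every number here is invented:

```python
# Hedged sketch of risk = probability x severity, using the AFP as a
# deliberately exaggerated probability of harm (per the next slide) and
# a 0..1 severity weight. All values are hypothetical.

def annual_risk(afp, severity):
    """Risk = probability of harm (approximated by AFP) x severity of harm."""
    return afp * severity

failures = {          # code: (assumed AFP, assumed severity of harm)
    "HF":   (0.005, 0.80),   # rare but potentially severe
    "BATT": (0.060, 0.50),
    "ACC":  (0.200, 0.10),   # frequent but usually mild
}
risks = {code: annual_risk(p, s) for code, (p, s) in failures.items()}
ranking = sorted(risks, key=risks.get, reverse=True)  # largest risk first
```

Even in this toy example the highest-severity code (HF) is not the highest-risk one once probability is included, which is the slide's argument against severity-only "risk" criteria.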
48. Estimation of Risk
• Estimation of the Probability of Harm
  – A very exaggerated estimate of the probability is the AFP (because it ignores other protective mechanisms)
• Estimation of the Severity of Harm
  – The severity is assigned between 0% and 100%, depending on the impact on the patient (no harm - death)
Figure adapted from Reason (2000), Duke Univ. MC: patientsafetyed.duhs.duke.edu/module_e/swiss_cheese.html
52. Mean Values of Probability & Risks
• Why are you chasing the smallest slices if there are "low-hanging fruits" (larger slices) out there?
• Mean AFP for 22 Equipment Types: Direct 3%, Indirect 22%, Future 16%, No Failure 59%
• Mean Annual Risk for 22 Equipment Types: Direct 2.6, Indirect 14.2, Future 11.6
53. Performance Improvement
NOT just maintenance improvement
FAILURE GROUP | FAILURE TYPE | PERFORMANCE IMPROVEMENT ACTIONS
Direct | Service-induced failures (SIF); failures non-evident to (hidden from) users (HF); deteriorations in progress that are likely to become failures, i.e., potential failures (PF); preventable and predictable failures (PPF) | Review and revise maintenance program, e.g., increase frequency, add new tasks, and change strategy.
Indirect | Accessory failures (ACC); battery failures (BATT); network failures (NET); failures induced by abuse, accidents, or environment issues (USE); failures evident to users but not reported (EF) | Provide training to users, feedback to Purchasing, and assistance to facility managers in reducing power line issues, water and air quality, HVAC, humidity control, etc.
Future | Unpreventable failures (UPF) | Improve selection in future acquisitions, favoring more reliable products and standardization.
54. CE Impact Analysis - Conclusions
• CE impact is reaching its limits, i.e., significant investments of resources are needed for small gains in reducing risks.
• However, much higher impact (reduction of risks) can be achieved by broadening the horizon and helping users, Facilities, and Purchasing, i.e., CE should NOT focus solely on what it alone can do (i.e., SM).
• The NIBP monitor example shows that the old myth of zero (negligible) "PM yield" needs to be abandoned. We need to consider the frequency and the severity of ALL the failures (ALL risk), not just those managed by CE.
• In essence: reach out of your comfort zone (maintenance) to bring more impact to patient care/risk using your expertise!
55. Table of Contents
• Introduction
  – How do you convince surveyors that your maintenance program is effective?
• Evidence-Based Maintenance (the Plan-Do-Check-Act cycle)
  – Maintenance planning (plan)
  – Maintenance implementation (do)
  – Maintenance monitoring (check)
  – Maintenance improvement (act)
• Discussion and Conclusions
  – Implementation lessons
  – Conclusions
56. Implementation Lessons (aka how we made it work)
• Put failure codes at the top of selectable choices (e.g., by adding numbers to the front of the codes, so they "float" to the top: 1NPF).
• Encourage staff to discuss questionable codes and HF with the manager to ensure coding accuracy.
• Monthly verification and corrections:
  – Missing codes (work orders without codes)
  – Logically-wrong codes (e.g., HF in repairs)
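The monthly verification step above can be sketched as a small consistency check. The field names, the SM-only/CM-only sets, and the sample records are illustrative assumptions (BATT, ACC, and NET are valid for both work-order types and are therefore not checked):

```python
# Hedged sketch of the monthly verification: flag work orders with missing
# codes and logically wrong ones (e.g., an SM-only code such as HF
# recorded on a repair).

SM_ONLY = {"EF", "HF", "PF", "NPF"}                    # valid only during SM
CM_ONLY = {"UPF", "USE", "PPF", "SIF", "CND", "FFPM"}  # valid only during CM

def verify(work_orders):
    """Return (missing, wrong): IDs without a code, and IDs whose code
    contradicts the work-order type ('SM' or 'CM')."""
    missing, wrong = [], []
    for wo in work_orders:
        code = wo.get("code")
        if not code:
            missing.append(wo["id"])
        elif wo["type"] == "CM" and code in SM_ONLY:
            wrong.append(wo["id"])
        elif wo["type"] == "SM" and code in CM_ONLY:
            wrong.append(wo["id"])
    return missing, wrong

orders = [
    {"id": 1, "type": "SM", "code": "NPF"},
    {"id": 2, "type": "CM", "code": "HF"},   # HF in a repair: logically wrong
    {"id": 3, "type": "CM", "code": None},   # missing code
]
print(verify(orders))  # -> ([3], [2])
```

Running a check like this monthly, before the AFP roll-up, keeps coding errors from contaminating the effectiveness analysis.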
57. Conclusions
• Clinical Engineering must evolve together with healthcare
  – Follow progress of medical equipment design and manufacturing (TJC's 10-year root-cause analysis (RCA) of sentinel events indicates most of them are due to use errors and communication problems)
  – Incorporate the mission-criticality concept
  – Adopt the separation of risk and maintenance needs (high risk ≠ high maintenance, but low incidence of failed SM ≠ no SM needed)
  – Learn from Reliability-Centered Maintenance (RCM) experience accumulated in industrial maintenance (but without fully adopting it)
  – Progress from subjective, intuitive craftsmanship to scientific, evidence-based engineering
58. Conclusions (cont.)
• Refocus resources from "scheduled maintenance" (SM: SPIs and PMs) to higher-impact tasks, e.g., use-error tracking, "self-identified" failures and repairs ("rounding"), user training, and working with Facilities and Purchasing.
• It is always a balancing act:
  – Needs (mission, safety, revenue, etc.)
  – Re$ource$ (human, technical, financial, etc.)
  (That's why it is engineering: find the best "balanced" solution.)
59. Bottom Line
• Evidence-Based Maintenance (EBMaint) allows us to prove to CMS and TJC that we are NOT shortchanging patient safety when we deviate from OEM recommendations (effectiveness).
• EBMaint allows us to move beyond complying with CMS requirements and TJC standards to enhance user satisfaction and patient safety.
• EBMaint motivates us to continually review and improve equipment maintenance strategies.
• EBMaint also helps to prove to the healthcare organizations that we are using their limited resources in the most productive manner (efficiency).
60. THANK YOU!
• Please contact us if you have any
questions or suggestions
Binseng Wang, ScD, CCE, fAIMBE, fACCE
• Vice President, Performance Mgmt & Regulatory Compliance
• Telephone: 704‐948‐5729
• Email: wang‐binseng@aramark.com