Chapter 5 Program Evaluation and Research Techniques
Charlene R. Weir
Evaluation of health information technology (health IT)
programs and projects can range from a simple user
satisfaction survey for a new menu to a full-scale analysis of
usage, cost, compliance, and patient outcomes, including
observation of use and data about patients' rates of
improvement.
Objectives
At the completion of this chapter the reader will be prepared to:
1. Identify the main components of program evaluation
2. Discuss the differences between formative and summative evaluation
3. Apply the three levels of theory relevant to program evaluation
4. Discriminate program evaluation from program planning and research
5. Synthesize the core components of program evaluation with the unique characteristics of informatics interventions
Key Terms
Evaluation
Formative evaluation
Logic model
Program evaluation
Summative evaluation
Abstract
Evaluation is an essential component in the life cycle of all
health IT applications and the key to successful translation of
these applications into clinical settings. In planning an
evaluation the central questions regarding purpose, scope, and
focus of the system must be asked. This chapter focuses on the
larger principles of program evaluation with the goal of
informing health IT evaluations in clinical settings. The reader
is expected to gain sufficient background in health IT
evaluation to lead or participate in program evaluation for
applications or systems.
Formative evaluation and summative evaluation are discussed.
Three levels of theory are presented, including scientific theory,
implementation models, and program theory (logic models).
Specific scientific theories include social cognitive theories,
diffusion of innovation, cognitive engineering theories, and
information theory. Four implementation models are reviewed:
PRECEDE-PROCEED, PARiHS, RE-AIM, and quality
improvement. Program theory models are discussed, with an
emphasis on logic models.
A review of methods and tools is presented. Relevant research
designs are presented for health IT evaluations, including time
series, multiple baseline, and regression discontinuity. Methods
of data collection specific to health IT evaluations, including
ethnographic observation, interviews, and surveys, are then
reviewed.
Introduction
The outcome of evaluation is information that is both useful at
the program level and generalizable enough to contribute to the
building of science. In the applied sciences, such as informatics,
evaluation is critical to the growth of both the specialty and the
science. In this chapter program evaluation is defined as the
“systematic collection of information about the activities,
characteristics, and results of programs to make judgments
about the program, improve or further develop program
effectiveness, inform decisions about future programming,
and/or increase understanding.”1 Health IT interventions are
nearly always embedded in the larger processes of care delivery
and are unique for three reasons. First, stakeholders' knowledge
about the capabilities of health IT systems may be limited at the
beginning of a project. Second, the health IT product often
changes substantially during the implementation process. Third,
true implementation often takes 6 months or longer, with users
maturing in knowledge and skills and external influences, such
as new regulations or organizational initiatives, occurring over
that period. Identification of the unique contribution of the
health IT application therefore is often difficult and evaluation
goals frequently go beyond the health IT component alone. In
this chapter the health IT component of evaluation is integrated
with overall program evaluation; unique issues are highlighted
for evaluating health IT itself. The chapter is organized into
three sections: (1) purposes of evaluation, (2) theories and
frameworks, and (3) methods, tools, and techniques.
Purposes of Evaluation
The purpose of evaluation determines the methods, approaches,
tools, and dissemination practices for the entire project being
evaluated. Therefore identifying the purpose is a crucial first
step. Mark, Henry, and Julnes have provided four main
evaluation purposes, which are listed in Box 5-1.
Usually an evaluation project is not restricted to just one of
these purposes. Teasing out which purposes are more important
is a process for the evaluator and the involved stakeholders. The
following sections represent a series of questions that can
clarify the process.
Formative versus Summative Evaluation
Will the results of the evaluation be used to improve the
program or to focus on determining whether the goals of the
program have been met? This question refers to a common
classification of evaluation activities that fall into two types:
(1) formative evaluation and (2) summative evaluation. The
difference is in how the information is used. The results of the
formative evaluation are used as feedback to the program for
continuous improvement.2,3 The results of the summative
evaluation are used to evaluate the merit of the program.
Formative evaluation is a term coined by Scriven in 1967 and
expanded on by a number of other authors to mean an
assessment of how well the program is being implemented and
to describe the experience of participants.4 Topics for formative
evaluation include the fidelity of the intervention, the quality of
implementation, the characteristics of the organizational
context, and the types of personnel. Needs assessments and
feasibility analyses are included in this general category.
Box 5-2 outlines several questions that fall into the category
of formative evaluation for health IT.
Box 5-1 Main Purposes of Program Evaluation
• Program and organizational improvement
• Assessment of merit or worth
• Knowledge development
• Oversight and compliance
Adapted from Mark M, Henry G, Julnes G. Evaluation: An
Integrative Framework for Understanding, Guiding and
Improving Policies and Programs. San Francisco, CA: Jossey-
Bass; 2000.
In contrast, summative evaluation refers to an assessment of the
outcomes and impact of the program. Cost-effectiveness and
adverse events analyses are included in this category. Some
questions that fall into the summative evaluation category are
listed in Box 5-3.
Dividing the evaluation process into the formative and
summative components is somewhat arbitrary, as they can be
and often are conducted concurrently. They do not necessarily
differ in terms of methods or even in terms of the content of the
information collected. Formative evaluation is especially
important for health IT products where the overall goal is
improvement. Because health IT products are “disruptive
technologies,” they both transform the working environment and
are themselves transformed during the process of
implementation.5 Many writers in the informatics field have
noted the paucity of information on implementation processes in
published studies. In a meta-analysis of health IT by researchers
at RAND, the authors noted:
In summary, we identified no study or collection of studies,
outside of those from a handful of health IT leaders that would
allow a reader to make a determination about the generalizable
knowledge of the system's reported benefit. This limitation in
generalizable knowledge is not simply a matter of study design
and internal validity. Even if further randomized, controlled
trials are performed, the generalizability of the evidence would
remain low unless additional systematic, comprehensive, and
relevant descriptions and measurements are made regarding how
the technology is utilized, the individuals using it, and the
environment it is used in.6(p4)
Box 5-2 Questions to Pose during Formative Evaluation
• What is the nature and scope of the problem that is being
addressed by health IT?
• What is the extent and seriousness of the need?
• How well is the technology working and what is the best
way to deliver it?
• How are participants (and users) experiencing the
program?
• How did the intervention change after implementation?
Box 5-3 Questions to Pose during Summative Evaluation
• To what degree were the outcomes affected by the
product?
• What is the cost effectiveness of the product?
• What were the unintended consequences of the product?
Although written in 2006, this statement is still relevant today.
Generalizability and Scope
Will the results of the evaluation be used to inform stakeholders
of whether a particular program is “working” and is in
compliance with regulations and mandates? This question refers
to issues of the generalizability and scope of the project. It is
also a question of whether the evaluation is more of a program
evaluation or a research study. An evaluation of a locally
developed project usually would be considered a program
evaluation. In contrast, if the program was designed to test a
hypothesis or research question and described, measured, or
manipulated variables that could be generalized to a larger
population, then the results of the evaluation study are more
like research. However, both approaches use systematic tools
and methods. For example, if the program to be evaluated is a
local implementation of alerts and decision support for
providers at the point of care to evaluate skin breakdown, then
the stakeholders are the administrators, nurses, and patients who
are affected by use of the decision support program. The
evaluation questions would address the use of the program, the
impact on resources, satisfaction, and perhaps clinical
outcomes. The evaluation would likely use a before and after
format and a more informal approach to assess whether the
decision support worked. However, if the evaluation question is
whether or not computerized guidelines affect behavior in
general and under what conditions, then the specific
stakeholders matter less and the ability to generalize beyond the
contextual situation matters more. A more formal, research-
based approach is then used. This question is really about what
is to be learned. Vygotsky called these two approaches
“patterning” versus “puzzling.”7 In the patterning approach to
evaluation the comparison is between what went before at the
local level, whereas in the puzzling approach the task is to do
an in-depth comparison between different options and puzzle
through the differences.
Another way to address this issue is to imagine that evaluation
activities fall along a continuum from “program evaluation” to
“evaluation research.” Program evaluation tends to have a wide
scope, using multiple methods with a diverse range of
outcomes. Evaluation research tends to be more targeted, using
more selected methods and fewer outcomes. On the program
evaluation end of the continuum, evaluation can encompass a
range of activities including but not limited to program model
development, needs assessment, tracking and performance
monitoring, and continuous quality improvement. On the
research end of the continuum, activities include theory testing,
statistical evaluation of models, and hypothesis testing. At both
ends of the continuum and in between, however, evaluators can
use a variety of research designs, rigorous measurement
methods, and statistical analyses.
Program Continuance versus Growth
Will the results of the evaluation be used to make a decision
about continuing the program as is or about expanding it to a
larger or different setting if it has generalizable knowledge?
Answering the question of whether a program will be continued
requires a focus on the concerns and goals of stakeholders as
well as special attention to the contextual issues of cost, burden,
user satisfaction, adoption, and effectiveness. Answering this
question also requires an assessment about the manner in which
the program was implemented and its feasibility in terms of
resources and efforts. Does the program require ongoing and
intense training of staff and technicians? Does it have unique
hardware requirements that impose a one-time or ongoing cost?
Does the program have “legs” (i.e., can it exist on its own once
implemented)? Are the benefits available immediately, or do
they accrue over the long term?
One specific example to determine whether the program should
be continued is to assess whether or not it contributes to the
institution being a “learning organization.”8,9 This approach
focuses on performance improvement and the following four
areas of concern (modified for health IT):
1.What are the mental models and implicit theories held by
the different stakeholders about the health IT product?
2.How does the health IT product promote mastery or
personal control over the work environment?
3.What is the system-level impact of the health IT product?
How does the intervention support a system thinking approach?
4.How does the health IT product create a unified vision of
the information environment? Addressing this question requires
understanding how individuals view the future computerized
environment and whether or not they have come to a shared
vision.
Theories and Frameworks
The use of theory in evaluation studies is controversial among
evaluators as well as in the informatics community. On the one
hand, some authors note that evaluation studies are local,
limited in scope, and not intended to be generalizable. On the
other hand, other authors argue that theory is necessary to frame
the issues adequately, promote generalizable knowledge, and
clarify measurement. This author would argue that theory
should be used for the latter reason. Theoretical perspectives
clarify the constructs and methods of measuring constructs as
well as bring forward an understanding of mechanisms of
action.
For the purposes of this chapter, theoretical perspectives will be
divided into three levels of complexity. At the most complex
level are the social science, cognitive engineering, and
information science theories. These theories are well
established, have a strong evidence base, and use validated
measures and well-understood mechanisms of action. This
chapter discusses some well-known social science theories as
well as two health IT–specific adaptations of these theories that
have significant validation. At the next level are the program
implementation models, which are less complex and basically
consist of a conceptual model. The models are often used to
describe processes but few studies attempt to validate the
models or test models against each other. Finally, at the most
basic level are the program theory models, which are program
specific and intended to represent the goals and content of a
specific project. All evaluations for health IT products should
develop a program theory model to guide the evaluation process
itself. The descriptions below are brief and are intended to
provide an overview of the possibilities at each level.
Social Science Theories
There are myriad theories relevant to both the design of
interventions and products and the structure of the evaluation. A
more detailed description of such theories as they apply to
informatics is found in Chapter 2. A short description is
provided here for the purpose of context. These social science
theories include social cognitive theories, diffusion of
innovation theory, cognitive engineering theories, and
information theories.
Social Cognitive Theories
The social cognitive theories include the theory of planned
behavior,10 its close relative the theory of reasoned action,11
and Bandura's social cognitive theory.12 These theories predict
intentions and behavior as a function of beliefs about the value
of an outcome, the likelihood that the outcome will occur given
the behavior, and the expectations of others and self-efficacy
beliefs about the personal ability to engage in the activity. The
empirical validation of these theories is substantial and they
have been used to predict intentions and behavior across a wide
variety of settings.
Diffusion of Innovations Theory
Another very commonly used model in informatics is diffusion
of innovations theory by Rogers.13,14 In this model
characteristics of the innovation, the type of communication
channels, the duration, and the social system are predictors of
the rate of diffusion. The central premise is that diffusion is the
process by which an innovation is communicated through
certain channels over time among the members of a social
system. Individuals pass through five stages:
knowledge, persuasion, decision, implementation, and
confirmation. Social norms, roles, and the type of
communication channels all affect the rate of adoption of an
innovation. Characteristics of an innovation that affect the rate
of adoption include relative advantage as compared to other
options; trialability, or the ease with which it can be tested;
compatibility with other work areas; complexity of the
innovation; and observability, or the ease with which the
innovation is visible.
Cognitive Engineering Theories
The cognitive engineering theories have also been widely used
in informatics, particularly naturalistic decision making
(NDM),15–17 control theory,18 and situation awareness
(SA).19 These theories focus more on the interaction between
the context and the individual and are more likely to predict
decision making, perception, and other cognitive variables.
NDM is a broad and inclusive paradigm. SA is narrower and is
particularly useful in supporting design.
SA combines the cognitive processes of orientation, attention,
categorization or sense-making, and planning into three levels
of performance. These activities are thought to be critical to
human performance in complex environments. Endsley refers to
a three-level system of awareness: (1) perception, (2)
comprehension, and (3) projection. She defines shared SA as the
group understanding of the situation.19 For example, in one
recent study of health IT, higher SA was significantly
associated with more integrated displays for intensive care unit
(ICU) nursing staff.20,21 Table 5-1 presents the core
components of SA and associated definitions.
TABLE 5-1 Levels of Situational Awareness
Level 1. Perception of the elements in the environment: What
is present, active, salient, and important in the environment?
Attention is driven by task needs.
Level 2. Comprehension of the current situation: Classification
of the event is a function of activation of long-term memory.
Meaning is driven by the cognitive processes of classification
and task identification.
Level 3. Projection of future status: Expectations of outcomes
in the future, driven by implicit theories and knowledge about
the causal mechanisms underlying events.
Information Theories
One of the most influential theories is information theory
published in 1948 by Claude Shannon.22 Shannon focused on
the mathematical aspects of the theory. Weaver, a mathematician,
focused on the semantic meaning of the theory.23 Information
theory identifies the degree of uncertainty in messages as a
function of the capacity of the system to transmit those
messages given a certain amount of noise and entropy. The
transmission of information is broken down into source, sender,
channel, receiver, and destination. Because information theory
is essentially a theory of communication, information is defined
relative to three levels of analysis:
1.At the technical or statistical level, information is defined
as a measure of entropy or uncertainty in the situation. The
question at this level is: How accurately are the symbols used in
the communication being transmitted?
2.At the semantic level, information is defined as a reduction
of uncertainty at the level of human meaning. The question here
is: How well do the symbols that were transmitted convey the
correct meaning?
3.At the effectiveness level, information is defined as a
change in the goal state of the system. The question here is:
How well does the perceived meaning effect the desired
outcome?24
This simple framework can provide an effective evaluation
model for any system that evaluates the flow of information.
For example, Weir and McCarthy used information theory to
develop implementation indicators for a computerized provider
order entry (CPOE) project.25
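The statistical level above treats information as entropy. A minimal sketch in Python (the probabilities are invented for illustration and are not from the chapter) shows the computation:

```python
from math import log2

def shannon_entropy(probabilities):
    """Shannon entropy H = -sum(p * log2 p) in bits: the
    uncertainty (information content) of a message source."""
    return -sum(p * log2(p) for p in probabilities if p > 0)

# A fair coin is maximally uncertain for two symbols: 1 bit.
print(shannon_entropy([0.5, 0.5]))  # 1.0

# A skewed source (e.g., an alert that fires 90% of the time)
# carries less information per message than a fair coin.
print(round(shannon_entropy([0.9, 0.1]), 3))  # 0.469
```

A source that always sends the same symbol has zero entropy, which matches the intuition that a message with no uncertainty conveys no information.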
Information foraging theory is a relatively new theory of
information searching that has proven very useful for analyzing
web searching.26 Information foraging theory is built on
foraging theory, which studies how animals search for food.
Pirolli and Card noticed similar patterns in the processes used
by animals to search for food and the processes used by humans
to search for information on the Internet. The basic assumption
is that all information searches are goal directed and constitute
a calibration between the energy cost of searching and the
estimated value of information retrieved. The four concepts
listed in Box 5-4 are important to measure. Empirical work on
foraging theory has validated its core concepts.27
Box 5-4 Four Concepts of Foraging Theory
1. Information and its perceived value
2. Information patches or the temporal and spatial location of
information clusters
3. Information scent or the presence of cues about the value
and location of information
4. Information diet or the decision to pursue one source over
another
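The cost-value calibration behind the "information diet" decision can be sketched as a simple rate comparison. The decision rule and all numbers below are illustrative assumptions, not part of foraging theory's formal treatment:

```python
def best_source(sources):
    """Pick the information source with the highest expected
    value-to-cost ratio (the 'information diet' decision).
    Each source is (name, expected_value, search_cost)."""
    return max(sources, key=lambda s: s[1] / s[2])

# Hypothetical sources: a strong information 'scent' effectively
# lowers the expected cost of searching a patch.
sources = [
    ("pubmed_abstracts", 8.0, 4.0),   # rate 2.0
    ("colleague_email",  3.0, 1.0),   # rate 3.0
    ("full_guideline",   9.0, 6.0),   # rate 1.5
]
print(best_source(sources)[0])  # colleague_email
```

Note that the highest-value source is not chosen; the forager prefers the source with the best return per unit of search effort.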
Information Technology Theories
Two well-developed information technology theories are
specifically used in the IT domain. As is true of many IT
theories, they are compiled from several existing theories from
the basic sciences to improve their fit in an applied setting.
Information System Success
An IT model that integrates several formal theories is DeLone
and McLean's multifactorial model of IT success developed with
the goal of improving scientific generalization.28 Their theory
was originally developed in 1992 and revised in 2003 based on
significant empirical support. The model is based on Shannon
and Weaver's communication theory23 and Mason's information
“influence” theory.29 DeLone and McLean used Shannon and
Weaver's three levels of information: (1) technical (accuracy
and efficiency of the communication system), (2) semantic
(communicating meaning), and (3) effectiveness (impact on
receiver). These three levels correspond to DeLone and
are different than the variables associated with later
intentions.30,32,33 Specifically, the perceived work
effectiveness constructs (perceived usefulness, extrinsic
motivation, job fit, relative advantage, and outcome
expectations) were found to be highly predictive of intentions
over time. In contrast, variables such as attitudes, perceived
behavioral control, ease of use, self-efficacy, and anxiety were
predictors only of early intentions to use.
Second, the authors found that the variables predictive of
intentions to use are not the same as the variables predictive of
usage behavior itself.30 They found that the “effort factor
scale” (resources, knowledge, compatible systems, and support)
was the only construct other than intention to significantly
predict usage behavior. Finally, these authors found that the
model differed significantly depending on whether usage was
mandated or by choice. In settings where usage was mandated,
social norms had a stronger relationship to intentions to use
than the other variables.
FIG 5-2 An example of the unified theory of acceptance and
use of technology.
(From Venkatesh V, Morris M, Davis G, Davis F. User
acceptance of information technology: toward a unified view.
MIS Quart. 2003;27[3]:425-478.)
Program Implementation Models
Program implementation models refer to generalized,
large-scale implementation theories that are focused on
performance improvement and institution-wide change. Four
models are reviewed here: PRECEDE-PROCEED, PARiHS,
RE-AIM, and quality improvement.
PRECEDE-PROCEED Model
The letters in the PRECEDE-PROCEED model represent the
following terms: PRECEDE (Predisposing, Reinforcing, and
Enabling Constructs in Educational Diagnosis and Evaluation)
and PROCEED (Policy, Regulatory, and Organizational
Constructs in Educational and Environmental Development).
This model was originally developed to guide the design of system-level
educational interventions as well as evaluating program
outcomes. It is a model that addresses change at several levels,
ranging from the individual to the organization level. According
to the model the interaction between the three types of variables
produces change: predisposing factors that lay the foundation
for success (e.g., electronic health records or strong leadership),
reinforcing factors that follow and strengthen behavior (e.g.,
incentives and feedback), and enabling factors that activate and
support the change process (e.g., support, training,
computerized reminders, and templates or exciting content). The
model has been applied in a variety of settings, ranging from
public health interventions, education, and geriatric quality
improvement and alerting studies.34 Figure 5-3 illustrates the
model applied to a health IT product.
Promoting Action on Research Implementation in Health
Services (PARiHS)
The PARiHS framework outlines three general areas associated
with implementation (Fig. 5-4):
FIG 5-4 The PARiHS model.
(From Kitson AL, Rycroft-Malone J, Harvey G, McCormack B,
Seers K, Titchen A. Evaluating the successful implementation
of evidence into practice using the PARiHS framework:
theoretical and practical challenges. Implement Sci. 2008;3:1.)
1.Evidence: The strength and perceived quality of the evidence
supporting the practice change
2.Context: Enhancing leadership support and integrating with
the culture through focus groups and interviews
3.Facilitation of the implementation process: Skill level and
role of facilitator in promoting action as well as frequency of
supportive interactions35
These three elements are defined across several subelements,
with higher ratings suggestive of more successful
implementation. For instance, a high level of evidence may
include the presence of randomized controlled trials (i.e., the
gold standard in research), high levels of consensus among
clinicians, and collaborative relationships between patients and
providers. Context is evaluated in terms of readiness for
implementation, including consideration of the context's
culture, leadership style, and measurement practices. High
ratings of context may indicate an environment focused on
continuing education, effective teamwork, and consistent
evaluation and feedback. Finally, the most successful
facilitation is characterized by high levels of respect for the
implementation setting, a clearly defined agenda and facilitator
role, and supportive flexibility.36 Successful implementation
(SI) is thus conceptualized as a function (f) of evidence (E),
context (C), and facilitation (F), or SI = f(E, C, F).37 The
PARiHS framework suggests a continuous multidirectional
approach to evaluation of implementation. Importantly,
evaluation is a cyclic and interactive process instead of a linear
approach.
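The relation SI = f(E, C, F) specifies no particular functional form. The sketch below assumes, purely for illustration, a 0-to-1 rating for each element and a multiplicative combination, so that one very weak element drags down the whole score:

```python
def si_score(evidence, context, facilitation):
    """Toy successful-implementation score: each PARiHS element
    rated on a 0-1 continuum. The multiplicative form (an
    assumption, not prescribed by PARiHS) reflects that a weak
    element, e.g., facilitation near 0, undermines the effort."""
    for rating in (evidence, context, facilitation):
        if not 0 <= rating <= 1:
            raise ValueError("ratings must be between 0 and 1")
    return evidence * context * facilitation

# Strong evidence and context cannot compensate for absent facilitation.
print(round(si_score(0.9, 0.8, 0.1), 3))  # 0.072
```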
Reach Effectiveness Adoption Implementation Maintenance
(RE-AIM)
RE-AIM was designed to address the significant barriers
associated with implementation of any new intervention and is
particularly useful for informatics. Most interventions meet
with significant resistance and any useful evaluation should
measure the barriers associated with the constructs (Reach,
Effectiveness, Adoption, Implementation, and Maintenance).
These constructs serve as a good format for evaluation.38,39
First, did the intervention actually reach the intended target
population? In other words, how many providers had the
opportunity to use the system? Or, how many patients had
access to a new website? Second, for effectiveness, did the
intervention actually do what it was intended to do? Did the
wound care decision support actually work as intended every
time? Did it identify the patients it was supposed to identify?
Or, did the algorithms miss some key variables in real life?
Third, for adoption, what proportion of the targeted staff,
settings, or institutions actually used the program? What was
the breadth and depth of usage? Did they use it for all relevant
patients or only for some? Fourth, for implementation, was the
intervention the same across settings and time? With most
health IT products there is constant change to the software, the
skill level of users, and the settings in which they are used.
These should be documented and addressed in the evaluation.
Finally, for maintenance, some period of time should be
identified a priori to assess maintenance and whether usage
continues and by whom and in what form. For health IT
interventions it is especially useful to look for unintended
consequences as well as workarounds during implementation
and maintenance in particular.
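Several of the RE-AIM questions reduce to simple proportions. The sketch below uses hypothetical rollout counts (not from the chapter) to compute reach and adoption as described above:

```python
def proportion(numerator, denominator):
    """Return a proportion, guarding against an empty target group."""
    return numerator / denominator if denominator else 0.0

# Hypothetical CPOE rollout figures.
target_providers = 200   # intended target population
exposed_providers = 180  # had the opportunity to use the system
active_providers = 120   # actually used it for relevant patients

reach = proportion(exposed_providers, target_providers)
adoption = proportion(active_providers, exposed_providers)
print(f"Reach: {reach:.0%}, Adoption: {adoption:.0%}")
# Reach: 90%, Adoption: 67%
```

Effectiveness, implementation fidelity, and maintenance generally require richer data than counts, but tracking these two proportions over time already exposes gaps between access and actual use.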
Quality Improvement
Evaluation activities using a quality improvement framework
are often based on Donabedian's classic structure-process-
outcome (SPO) model for assessing healthcare quality.40,41
Donabedian defines structural measures of quality as the
professional and organizational resources associated with the
provision of care, such as IT staff credentials, CPOE systems,
or staffing ratios. Process measures include the tasks and
decisions imbedded in care, such as the time to provision of
antibiotics or the proportion of patients on deep vein thrombosis
(DVT) prevention protocols. Finally, outcomes are defined as
the final or semifinal measurable outcomes of care, such as the
number of amputations due to diabetes, the number of patients
with DVT, or the number of patients with drug-resistant
pneumonia. These three categories of variables are thought to
be mutually interdependent and reinforcing. A model for patient
safety and quality research design (PSQRD) expands on this
well-known, original SPO model.42 For more details on this
model, see Chapter 20.
Program Theory Models
Program theory models are the most basic and practical of the
evaluation models. They are the implicit theories of
stakeholders and participants of the proposed program. A
detailed program theory identifies variables, the timing of
measures and observations, and the key expectations that reflect
their understandings. Most importantly, a program theory model
serves as a shared vision between the evaluator team and the
participants, creating a unified conceptual model that guides all
evaluation activities.
Six Steps
Program theory evaluation is recommended practice for all
program evaluation and an important approach regardless of
whether the program is the implementation of a new
documentation system or an institution-wide information
system. Invariably, various participants will have different ideas
about what the program is and why it works.43 Creation of a
program theory model serves to align the various stakeholders
into a single view. The Centers for Disease Control and
Prevention's six-step program evaluation framework is one of
the best examples in current use.44 It recommends the following
six steps:
1.Engage stakeholders to ensure that all partners have
contributed to the goals of the program and the metrics to
measure its success.
2.Describe the program systematically to identify goals,
objectives, activities, resources, and context. This description
process involves all stakeholders.
3.Focus the evaluation design to assess usefulness,
feasibility, ethics, and accuracy.
4.Gather credible evidence by collecting data, conducting
interviews, and measuring outcomes using a good research
design.
5.Justify conclusions using comparisons against standards,
statistical evidence, or expert review.
6.Ensure use and share lessons learned by planning and
implementing dissemination activities.
Logic Models
A logic model is a representation of components and
mechanisms of the program as noted by the authors of the W.K.
Kellogg Foundation's guide to logic models:
Basically, a logic model is a systematic and visual way to
present and share your understanding of the relationship among
the resources you have to operate your program, the activities
you plan, and the changes or results you hope to achieve. The
most basic logic model is a picture of how you believe your
program will work. It uses words and/or pictures to describe the
sequence of activities thought to bring about change and how
these activities are linked to the results the program is expected
to achieve.45(p1)
The basic structure of a logic model is illustrated in Figure 5-5.
The logic model starts with a category of “inputs,” which
include staff, resources, prior success, and stakeholders. In
health IT the inputs are the programs, software, the IT staff,
hardware, networks, and training. “Outputs” include the
activities that are going to be conducted, such as training,
implementation, software design, and other computer activities.
Participation refers to the individuals involved. Outcomes are
divided into short-term, medium-term, and long-term outcomes (or
simply short-term versus long-term). The goal is to make sure that the
components of the program are easy to see and the mechanisms
are made explicit.
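The inputs, outputs, and outcomes chain described above can be captured as a plain data structure that the evaluation team reviews with stakeholders. This is a minimal sketch in which every entry is a hypothetical example for a health IT implementation.

```python
# Minimal sketch of a logic model as a data structure, following the
# inputs -> outputs -> outcomes chain. All entries are hypothetical.

logic_model = {
    "inputs": ["software", "IT staff", "hardware", "networks",
               "training resources"],
    "outputs": {
        "activities": ["train users", "configure the system",
                       "go-live support"],
        "participation": ["nurses", "physicians", "pharmacists"],
    },
    "outcomes": {
        "short_term": ["staff can use the system confidently"],
        "medium_term": ["documentation compliance improves"],
        "long_term": ["patient outcomes improve"],
    },
}

def summarize(model):
    """Render the model as the linear chain stakeholders review."""
    chain = ["INPUTS: " + ", ".join(model["inputs"]),
             "ACTIVITIES: " + ", ".join(model["outputs"]["activities"]),
             "PARTICIPATION: " + ", ".join(model["outputs"]["participation"])]
    for horizon in ("short_term", "medium_term", "long_term"):
        chain.append(horizon.upper() + ": " +
                     ", ".join(model["outcomes"][horizon]))
    return "\n".join(chain)

print(summarize(logic_model))
```

Writing the model down this explicitly makes it easy to check that every planned activity maps onto at least one expected outcome.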
The specific methods used to apply a model are not prescribed
(although creating a logic model is recommended). Use of a
wide range of methods and tools is encouraged. Engaging the
stakeholders is the first step and that process could involve
needs assessment, functionality requirements analyses,
cognitive task analyses, contextual inquiry, and ethnographic
observation, to name a few approaches. In all cases, the result is
the ability to provide a deep description of the program, the
expected mechanisms, and the desired outcomes. Once there is
agreement on the characteristics of the program at both the
superficial level and the deeper structure, designing the
evaluation is straightforward. Agreement among stakeholders is
needed not only to identify concepts to measure, but also to
determine how to meaningfully measure them.
In a health IT product implementation, agreement, especially
about goals and metrics, should be made early in the program
implementation process. Because health IT products are unique,
creating a shared vision and a common understanding of the
meaning of the evaluation can be challenging. Using an
iterative implementation process, in which design and
implementation go hand in hand, can mitigate this problem
because stakeholders' visions are revisited repeatedly
throughout the process.
Methods, Tools, and Techniques
The need for variety in methods is driven by the diversity in
population, types of projects, and purposes that are
characteristic of research in informatics. Many evaluation
projects are classified as either qualitative or quantitative. This
division may be somewhat artificial and limited but using the
terms qualitative and quantitative to organize methods helps to
make them relatively easy to understand. A central thesis of this
section is that the choice of method should fit the question and
multiple methods are commonly used in evaluations.
Quantitative versus Qualitative Questions
Because evaluation is often a continuous process throughout the
life of a project, the systems life cycle is used to organize this
discussion. Evaluation activities commonly occur during the
planning, implementation, and maintenance stages of a project
(see Chapter 2 for more details about these stages and others in
the systems life cycle). At each stage both quantitative and
qualitative questions might be asked. At the beginning of a
project the goal is to identify resources, feasibility, values,
extent of the problem to be solved, and types of needs.
FIG 5-5 Example of a logic model structure.
(From University of Wisconsin–Extension, Cooperative Extension,
Program Development and Evaluation website. Logic model.
http://www.uwex.edu/ces/pdande/evaluation/evallogicmodel.html.
2010.)
The methods used to answer these questions are
essentially local and project specific. During the project,
evaluation questions focus on the intensity, quality, and depth
of the implementation as well as the evolution of the project
team and community. Finally, in the maintenance phase of the
project the questions focus on outcomes, cost–benefit value, or
overall consequences.
Table 5-2 presents a matrix that outlines the different stages of
a project and the types of questions that might be asked during
each stage. The questions are illustrations of possible
evaluation questions and loosely categorized as either
qualitative or quantitative.
Qualitative Methods
Many individuals believe that qualitative methods refer to
research procedures that collect subjective human-generated
data. However, subjective data can be quantitative, such as the
subjective responses to carefully constructed usability
questionnaires used as outcome end points. Diagnostic codes are
another example of quantitative forms of subjective data.
Qualitative methods, rather, refer to procedures and methods
that produce narrative or observational descriptive data that are
not intended for transformation into numbers. Narrative data
refer to information in the form of stories, themes, meanings,
and metaphors. Collecting this information requires the use of
systematic procedures where the purpose is to understand and
explore while minimizing bias. There are several very good
guides to conducting qualitative research for health IT
evaluation studies.46
TABLE 5-2 Evaluation Research Questions by Stage of Project
and Type of Question

Planning
  Qualitative:
    What are the values of the different stakeholders?
    What are the expectations and goals of participants?
  Quantitative:
    What is the prevalence of the problem being addressed by the
    health IT product?
    What are the resources available?
    What is the relationship between experiencing a factor and
    having a negative outcome?
    What are the intensity and depth of use of health IT tools?

Implementation
  Qualitative:
    How are participants experiencing the change?
    How does the health IT product change the way individuals
    relate to or communicate with each other?
  Quantitative:
    How many individuals are participating?
    What changes in performance have been visible? What is the
    compliance rate?
    How many resources are being used during implementation
    (when and where)?

Maintenance
  Qualitative:
    How has the culture of the situation changed?
    What themes underscore the participants' experience?
    What metaphors describe the change?
    What are the participants' personal stories?
  Quantitative:
    Is there a significant change in outcomes for patients?
    Did compliance rates increase (for addressing the initial
    problem)?
    What is the rate of adverse events?
    Is there a significant change in efficiency?
    Is there a correlation between usage and outcomes?

IT, Information technology.
Structured and Semistructured Interviews
In-person interviews can be some of the best sources of
information about an individual's unique perspectives, issues,
and values. Interviews vary from a very structured set of
questions conducted under controlled conditions to a very
informal set of questions asked in an open-ended manner.
Typically, evaluators audio-record the interviews and conduct
thematic or content coding on the results. For example, user
interviews that focus on how health IT affects workflow are
especially useful. Some interviews have a specific focus, such
as in the critical incident method.47 In this method an
individual recalls a critical incident and describes it in detail
for the interviewer. In other cases the interview may focus on
the individual's personal perceptions and motivations, such as in
motivational interviewing.48 Finally, cognitive task analysis
(CTA) is a group of specialized interviews and observations
where the goal is to deconstruct a task or work situation into
component parts and functions.49–51 A CTA usually consists of
targeting a task or work process and having the participant walk
through or simulate the actions, identifying the goals,
strategies, and information needs. These latter methods are
useful for user experience studies outlined in Chapter 21.
Observation and Protocol Analysis
An interview may not provide enough information and therefore
observing users in action at work is necessary to fully
understand the interactions of context, users, and health IT.
Observation can take many forms, from using a video camera to
a combination of observation and interview where individuals
“think aloud” while they work. The think-aloud procedures need
to be analyzed both qualitatively for themes and quantitatively
for content, timing, and frequency.52
Interviews, CTAs, and observation are essential in almost every
health IT evaluation of users. Technology interventions are not
uniform and cannot simply be inserted into the workflow in a
“plug and play” manner. In addition, the current literature
lacks clarity about health IT's mechanisms of action and even
about its key components.
Thus understanding the user's response to the system is
essential. For more information about the user experience, see
Chapter 21.
Ethnography and Participant Observation
Ethnography and participant observation are derived from the
field of anthropology, where the goal is to understand the larger
cultural system through observer immersion. The degree of
immersion can vary, as can some of the data collection methods,
but the overall strategy includes interacting with all aspects of
the context. Usually ethnography requires considerable time,
multiple observations, interviews, and living and working in the
situation if possible. It also includes reading historical
documents, exploring artifacts in current use (e.g., memos,
minutes), and generally striving to understand a community.
These methods are particularly useful in a clinical setting,
where understanding the culture is essential.46,53
Less intensive ethnographic methods are also possible and
reasonable for health IT evaluations. Focused ethnography is a
method of observing actions in a particular context. For
example, nurses were observed during patient care hand-offs
and their interactions with electronic health records were
recorded. From these observations design implications for hand-
off forms were derived.54,55
Quantitative Methods
Research Designs
Quantitative designs range from epidemiologic, descriptive
studies to randomized controlled trials. Three study designs
presented may be particularly useful for health IT and clinical
settings. Each of these designs takes advantage of the
conditions that are commonly found in health IT projects,
including automatically collected data, the ubiquitous use of
pre-post design, and outcome-based targets for interventions.
Time Series Analysis
This design is an extension of the simple pre-post format but
requires multiple measures in time prior to and after an
intervention such as health IT. Evidence of the impact is found
in the differences in mathematical slopes between measures
during the pre-test and post-test periods. This design has
significantly more validity than a simple pre-post one-time
measure design and can be very feasible in clinical settings
where data collection is automatic and can be done for long
periods with little increase in costs. For example, top-level
administrators might institute a decision support computerized
program to improve patient care for pain management. A
straightforward design is to measure the rate of compliance to
pain management recommendations during several periods about
12 months prior and several periods up to 12 months after
implementation, controlling for hospital occupancy, patient
acuity, and staffing ratios. This design is highly recommended
for health IT implementations where data can be captured
electronically and reliably over long periods.56,57
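The pain-management example above can be sketched as a simple interrupted time series comparison: fit separate least-squares slopes to the pre- and post-implementation periods and note any immediate level change. All monthly compliance rates below are hypothetical, and a real analysis would add the controls mentioned in the text (occupancy, acuity, staffing).

```python
# Hedged sketch of an interrupted time series comparison.
# Monthly compliance rates are hypothetical illustration data.

def slope(xs, ys):
    """Ordinary least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

months = list(range(24))  # 12 months pre, 12 months post
pre = [0.42, 0.43, 0.41, 0.44, 0.45, 0.43,
       0.44, 0.46, 0.45, 0.44, 0.46, 0.45]
post = [0.55, 0.58, 0.60, 0.61, 0.63, 0.64,
        0.66, 0.67, 0.69, 0.70, 0.71, 0.73]

pre_slope = slope(months[:12], pre)
post_slope = slope(months[12:], post)
level_change = post[0] - pre[-1]  # immediate jump at implementation

print(f"pre slope {pre_slope:.4f}, post slope {post_slope:.4f}, "
      f"level change {level_change:.2f}")
```

Evidence of impact comes from both the level change at implementation and the difference in slope between the two periods, which is far more convincing than a single pre-post comparison.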
Regression Discontinuity Design
A regression discontinuity design is similar to a time series
analysis but is a formal statistical analysis of the pattern of
change over time for two groups. This design is particularly
suited to community engagement interventions, such as for low
vaccination rates in children or seat belt reminders. Participants
are divided into two nonoverlapping groups based on their
prescores on the outcome of interest. The example used above
of compliance with pain management guidelines in an inpatient
surgical unit may also be applicable. Providers with greater than
50% compliance to pain guidelines are put in one group and
those with less than 50% compliance with pain guidelines are in
the other group. Those with the lowest compliance receive a
decision support intervention such as a computerized alert and a
decision support system to assess and treat pain while the rest
do not. Post measures are taken some time after the intervention
and the difference between the predicted scores of the low
compliance group and their actual scores as compared to the
nonintervention group are noted. The validity of this design is
nearly as high as a randomized controlled trial, but this design
is much easier to implement because those who need the
intervention receive it. However, this design requires large
numbers, which may not be available except in a system-wide or
multisite implementation.58
FIG 5-6 Example of a multiple baseline study.
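The cutoff logic of a regression discontinuity design can be sketched in a few lines: fit a regression on each side of the 50% compliance cutoff and estimate the treatment effect as the gap between the two fitted lines at the cutoff. All provider scores below are simulated illustration data; a real analysis would use far more providers and formal inference.

```python
# Hedged sketch of a regression discontinuity estimate.
# Providers below the 50% pre-compliance cutoff receive the alert;
# the effect is the gap between the fitted lines at the cutoff.
# All (pre, post) score pairs are simulated illustration data.

def fit_line(xs, ys):
    """Return (intercept, slope) from ordinary least squares."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

cutoff = 0.50
treated = [(0.30, 0.55), (0.35, 0.58), (0.40, 0.62), (0.45, 0.66)]
control = [(0.55, 0.58), (0.60, 0.62), (0.70, 0.70), (0.80, 0.79)]

a_t, b_t = fit_line([p for p, _ in treated], [q for _, q in treated])
a_c, b_c = fit_line([p for p, _ in control], [q for _, q in control])

# Predicted post-scores at the cutoff, approached from each side
effect = (a_t + b_t * cutoff) - (a_c + b_c * cutoff)
print(f"estimated effect at cutoff: {effect:.3f}")
```

A positive gap at the cutoff suggests the low-compliance providers improved beyond what their pre-scores alone would predict.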
Multiple Baseline with Single Subject Design
This design adds significant value to the standard pre-post
comparison by staggering implementation systematically (e.g.,
at 3-month intervals) over many settings but the measurement
for all settings starts at the same time. In other words,
measurement begins at the same time across five clinics but
implementation is staggered every 3 months. Figure 5-6
illustrates the pattern of responses that might be observed. The
strength of the evidence is high if outcomes improved after
implementation in each setting and they followed the same
pattern.59
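The evidential logic of the multiple baseline design can be sketched as a simple check: with staggered start months, each clinic's scores should improve only after its own implementation date. The clinic names, start months, and monthly scores below are all hypothetical.

```python
# Hedged sketch of a multiple baseline check across staggered clinics.
# Start months and monthly scores are hypothetical illustration data.

clinics = {
    "clinic_a": {"start": 3,
                 "scores": [40, 41, 40, 55, 57, 58,
                            59, 60, 61, 60, 62, 61]},
    "clinic_b": {"start": 6,
                 "scores": [42, 41, 43, 42, 41, 42,
                            56, 58, 59, 60, 61, 62]},
    "clinic_c": {"start": 9,
                 "scores": [39, 40, 41, 40, 39, 41,
                            40, 42, 41, 57, 58, 60]},
}

def improved_after_start(clinic):
    """True if the mean after this clinic's own start month
    exceeds its own baseline mean."""
    s, scores = clinic["start"], clinic["scores"]
    baseline = sum(scores[:s]) / s
    after = sum(scores[s:]) / len(scores[s:])
    return after > baseline

results = {name: improved_after_start(c) for name, c in clinics.items()}
print(results)
```

When every clinic shows the same jump, and only after its own staggered start, maturation and outside events become much less plausible explanations for the improvement.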
Instruments
Data are commonly gathered through the use of instruments.
User satisfaction, social network analyses, and cost-
effectiveness tools are discussed briefly below.
User Satisfaction Instruments
User satisfaction is commonly measured as part of health IT
evaluations but is a complex concept. On the one hand it is
thought to be a proxy for adoption and on the other hand it is
used as a proxy for system effectiveness. In the first conception,
the constructs of interest would be usability and ease of use and
whether others use it (usability and social norms). In the second
conception, the constructs of interest would refer to how well
the system helped to accomplish task goals (usefulness). One of
the most common instruments for evaluating user satisfaction is
the UTAUT.30 This well-validated instrument assesses
perceived usefulness, social norms and expectations, perceived
effort, self-efficacy, ease of use, and intentions to use.
Reliability for these six scales ranges from 0.92 to 0.95.
A second measure of user satisfaction focuses on service quality
(SERVQUAL) and assesses the degree and quality of IT service.
Five scales have been validated: reliability, assurance,
tangibles, empathy, and responsiveness. These five scales have
been found to have reliability of 0.81 to 0.94.60
A third measure is the system usability scale (SUS), which is
widely used outside health IT.61 It is a 10-item questionnaire
applicable to any health IT product. Bangor and colleagues
endorsed the SUS above other available instruments because it
is technology agnostic (applicable to a variety of products) and
easy to administer and the resulting score is easily
interpreted.62 The authors provide a case study of product
iterations and corresponding SUS ratings that demonstrate the
sensitivity of the SUS to improvements in usability.
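SUS scoring follows a fixed rule: odd-numbered items contribute (response − 1), even-numbered items contribute (5 − response), and the sum is multiplied by 2.5 to yield a 0 to 100 score. The sketch below implements that standard rule; the example responses are hypothetical.

```python
# Standard SUS scoring rule: odd items score (r - 1), even items
# score (5 - r), and the total is scaled by 2.5 to a 0-100 range.
# The example responses below are hypothetical.

def sus_score(responses):
    """responses: ten 1-5 Likert answers, item 1 first."""
    if len(responses) != 10:
        raise ValueError("SUS has exactly 10 items")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

print(sus_score([4, 2, 4, 1, 5, 2, 4, 1, 4, 2]))  # one respondent: 82.5
```

In practice, scores from many respondents are averaged, and changes in the mean SUS score across product iterations track improvements in usability.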
Social Network Analysis
Methods that assess the linkages between people, activities, and
locations are likely to be very useful for understanding a
community and its structure. Social network analysis (SNA) is a
general set of tools that calculates the connections between
people based on ratings of similarity, frequency of interaction,
or some other metric. The resultant pattern of connection is
displayed as a visual network of interacting individuals. Each
node is an individual and the lines between nodes reflect the
interactions. Although SNA uses numbers to calculate the form
of the networked display, it is essentially a qualitative
technique because the researcher must interpret the patterns of
connections and describe them in narrative form. Conducting an
SNA is useful if the goal is to understand how an information
system affected communication between individuals. It is also
useful to visualize nonpeople connections, such as the
relationship between search terms or geographical distances.63
For example, Benham-Hutchins used SNA to examine patient
care hand-offs from the emergency department to inpatient
areas, finding that each hand-off entailed 11 to 20 healthcare
providers.64
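The core of an SNA like the hand-off study above is a graph built from observed interactions, with simple counts such as degree (number of distinct contacts) pointing to the hubs of the network. The sketch below uses hypothetical roles and interactions; the interpretation of the resulting pattern remains a qualitative task.

```python
# Minimal SNA sketch: build an undirected graph from observed
# hand-off interactions and count each person's degree.
# Roles and interactions are hypothetical illustration data.

from collections import defaultdict

interactions = [
    ("ED nurse", "inpatient nurse"),
    ("ED nurse", "ED physician"),
    ("ED physician", "hospitalist"),
    ("inpatient nurse", "hospitalist"),
    ("inpatient nurse", "pharmacist"),
]

adjacency = defaultdict(set)
for a, b in interactions:
    adjacency[a].add(b)
    adjacency[b].add(a)

# Degree: who sits at the hub of the hand-off network?
degree = {person: len(links) for person, links in adjacency.items()}
for person, d in sorted(degree.items(), key=lambda kv: -kv[1]):
    print(person, d)
```

In a before-and-after evaluation, comparing the two networks shows whether an information system concentrated or dispersed communication among providers.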
Cost-Effectiveness Analysis
Cost-effectiveness analysis (CEA) attempts to quantify the
relative costs of two or more options. Simply measuring
additional resources, start-up costs, and labor would be a
rudimentary cost analysis. A CEA differs from a cost–benefit
analysis, which expresses both costs and benefits in monetary
terms. A
simple CEA shows a ratio of the cost divided by the change in
health outcomes or behavior. For example, a CEA might
compare the cost of paying a librarian to answer clinicians'
questions as compared to installing Infobuttons per the number
of known questions. Most CEA program evaluations will assess
resource use, training, increased staff hiring, and other cost-
related information. This would not necessarily be a full
economic analysis, which would require consultation with an
economist. The specific resources used could be delineated in
the logic model, unless it was part of hypothesis testing in a
more formal survey. The reader is directed to a textbook if
further information is needed.37
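The librarian-versus-Infobuttons comparison above reduces to a ratio of cost over effect. The sketch below computes cost per question answered for each option; all costs and counts are hypothetical.

```python
# Hedged sketch of a simple cost-effectiveness ratio comparing two
# options for answering clinicians' questions. All costs and counts
# are hypothetical illustration values.

def cost_effectiveness(cost, questions_answered):
    """Cost per question answered (lower is better)."""
    return cost / questions_answered

librarian = cost_effectiveness(cost=60_000, questions_answered=4_000)
infobuttons = cost_effectiveness(cost=25_000, questions_answered=2_500)

print(f"librarian: ${librarian:.2f}/question, "
      f"infobuttons: ${infobuttons:.2f}/question")
```

The ratio makes the options comparable even when neither benefit is expressed in dollars, which is what distinguishes a CEA from a cost–benefit analysis.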
Conclusion and Future Directions
Evaluation of health IT programs and projects can range from
simple user satisfaction for a new menu to full-scale analysis of
usage, cost, compliance, patient outcomes, observation of
usage, and data about patients' rate of improvement. Starting
with a general theoretical perspective and distilling it to a
specific program model is the first step in evaluation. Once
overall goals and general constructs have been identified, then
decisions about measurement and design can be made. In this
chapter evaluation approaches were framed, focusing on health
IT program evaluation to orient the reader to the resources and
opportunities in the evaluation domain. Health IT evaluations
are typically multidimensional, longitudinal, and complex.
Health IT interventions and programs present a unique
challenge, as they are rarely independent of other factors.
Rather, they are usually embedded in a larger program. The
challenge is to integrate the goals of the entire program while
clarifying the impact and importance of the health IT
component. In the future, health IT evaluations should become
more theory driven and the complex nature of evaluations will
be acknowledged more readily.
As health IT becomes integrated at all levels of the information
context of an institution, evaluation strategies will necessarily
broaden in scope. Outcomes will not only include those related
to health IT but span the whole process. The result will be
richer analyses and a deeper understanding of the mechanisms
by which health IT has its impact. The incorporation of theory
into evaluation will also result in more generalizable knowledge
and the development of health IT evaluation science. Health
practitioners and informaticists will be at the heart of these
program evaluations due to their central place in healthcare, IT,
and informatics departments.
References
1.
Patton, MQ: Utilization-Focused Evaluation. 4th ed, 2008, Sage,
London.
2.
Ainsworth, L, Viegut, D: Common Formative Assessments.
2006, Corwin Press, Thousand Oaks, CA.
3.
Fetterman, D: Foundations of Empowerment Evaluation. 2001,
Sage Publications, Thousand Oaks, CA.
4.
Scriven, M: The methodology of evaluation. In Stake, R (Ed.):
Curriculum Evaluation, American Educational Research
Association, Monograph Series on Evaluation, No 1. 1967, Rand
McNally, Chicago.
5.
Christensen, C: The Innovator's Dilemma. 2003,
HarperBusiness, New York, NY.
6.
Shekelle, P: Costs and Benefits of Health Information
Technology. 2006, Southern California Evidence-Based Practice
Center, Santa Monica, CA.
7.
Vygotsky, L: Mind and Society. 1978, Harvard University
Press, Cambridge, MA.
8.
Preskill, H, Torres, R: Building capacity for organizational
learning through evaluative inquiry. Evaluation. 5(1), 1999, 42–
60.
9.
Senge, P: The Fifth Discipline: The Art and Practice of the
Learning Organization. 1990, Doubleday, New York, NY.
10.
Ajzen, I: The theory of planned behavior. Organ Behav Hum
Decis Process. 50, 1991, 179–211.
11.
Fishbein, M, Ajzen, I: Belief, Attitude, Intention, and Behavior:
An Introduction to Theory and Research. 1975, Addison-
Wesley, Reading, MA.
12.
Bandura, A: Human agency in social cognitive theory. Am
Psychol. 44, 1989, 1175–1184.
13.
Rogers, EM: Diffusion of Innovations. 1983, Free Press, New
York, NY.
14.
Cooper, RB, Zmud, RW: Information Technology
Implementation Research: A Technological Diffusion Approach.
Management Science. 36, 1990, 123–139.
15.
Klein, G: An overview of natural decision making applications.
In Zsambok, CE, Klein, G (Eds.): Naturalistic Decision Making.
1997, Lawrence Erlbaum Associates, Mahwah, NJ.
16.
Kushniruk, AW, Patel, VL: Cognitive and usability engineering
methods for the evaluation of clinical information systems. J
Biomed Inform. 37(1), 2004, 56–76.
17.
Zsambok, C, Klein, G: Naturalistic Decision Making. 1997,
Lawrence Erlbaum, Mahwah, NJ.
18.
Åström, K, Murray, R: Feedback Systems: An Introduction for
Scientists and Engineers. 2008, Princeton University Press,
Princeton, NJ.
19.
Endsley, M, Garland, D: Situation Awareness Analysis and
Measurement. 2000, Lawrence Erlbaum, Mahwah, NJ.
20.
Koch, S, Weir, C, Haar, M, et al.: ICU nurses’ information
needs and recommendations for integrated displays to improve
nurses’ situational awareness. J Am Med Inform Assoc. 2012,
March 21 [e-pub].
21.
Koch SH, Weir C, Westenskow D, et al. Evaluation of the effect
of information integration in displays for ICU nurses on
situation awareness and task completion time: a prospective
randomized controlled study. J Am Med Inform Assoc. In press.
22.
Shannon, C: A mathematical theory of communication. Bell
Syst Tech J. 27, 1948, 379–423, 623-656.
23.
Shannon, C, Weaver, W: The Mathematical Theory of
Communication. 1949, University of Illinois Press, Urbana, IL.
24.
Krippendorf, K: Information Theory: Structural Models for
Qualitative Data. 1986, Sage Publications, Thousand Oaks, CA.
25.
Weir, CR, McCarthy, CA: Using implementation safety
indicators for CPOE implementation. Jt Comm J Qual Patient
Saf. 35(1), 2009, 21–28.
26.
Pirolli, P, Card, S: Information foraging. Psychol Rev. 106(4),
1999, 643–675.
27.
Pirolli, P: Information Foraging Theory: Adaptive Interaction
with Information. 2007, Oxford University Press, Oxford,
United Kingdom.
28.
DeLone, W, McLean, E: The DeLone and McLean model of
information systems success: a ten-year update. J Man Inf Syst.
19(4), 2003, 9–30.
29.
Mason, R: Measuring information output: a communication
systems approach. Information & Management. 1(5), 1978, 219–
234.
30.
Venkatesh, V, Morris, M, Davis, G, Davis, F: User acceptance
of information technology: toward a unified view. MIS Quart.
27(3), 2003, 425–478.
31.
Venkatesh, V, Davis, F: A theoretical extension of the
technology acceptance model: four longitudinal field studies.
Manage Sci. 46(2), 2000, 186–204.
32.
Agarwal, R, Prasad, J: A conceptual and operational definition
of personal innovativeness in the domain of information
technology. Inform Syst Res. 9(2), 1998, 204–215.
33.
Karahanna, E, Straub, D, Chervany, N: Information technology
adoption across time: a cross-sectional comparison of pre-
adoption and post-adoption beliefs. MIS Quart. 23(2), 1999,
183–213.
34.
Green, L, Kreuter, M: Health Program Planning: An Educational
and Ecological Approach. 4th ed, 2005, Mayfield Publishers,
Mountain View, CA.
35.
Stetler, CB, Damschroder, LJ, Helfrich, CD, Hagedorn, HJ: A
guide for applying a revised version of the PARiHS framework
for implementation. Implement Sci. 6, 2011, 99,
doi:10.1186/1748-5908-6-99.
36.
Kitson, A, Harvey, G, McCormack, B: Enabling the
implementation of evidence based practice: a conceptual
framework. Qual Health Care. 7(3), 1998, 149–158.
37.
Kitson, AL, Rycroft-Malone, J, Harvey, G, McCormack, B,
Seers, K, Titchen, A: Evaluating the successful implementation
of evidence into practice using the PARiHS framework:
theoretical and practical challenges. Implement Sci. 3, 2008, 1.
38.
Gaglio, B, Glasgow, R: Evaluation approaches for dissemination
and implementation research. In Brownson, R, Colditz, G,
Proctor, E (Eds.): Dissemination and Implementation Research
in Health: Translating Science to Practice. 2012, Oxford
University Press, New York, NY, 327–356.
39.
Glasgow, R, Vogt, T, Boles, S: Evaluating the public health
impact of health promotion interventions: the RE-AIM
framework. Am J Public Health. 89(9), 1999, 1922–1927.
40.
Donabedian, A: Explorations in Quality Assessment and
Monitoring: The Definition of Quality and Approaches to Its
Assessment. Vol. 1. 1980, Health Administration Press, Ann
Arbor, MI.
41.
Donabedian, A: The quality of care: how can it be assessed?
JAMA. 260, 1988, 1743–1748.
42.
Brown, C, Hofer, T, Johal, A, et al.: An epistemology of patient
safety research: a framework for study design and
interpretation. Part 1: conceptualising and developing
interventions. Qual Saf Health Care. 17(3), 2008, 158–162.
43.
Friedman, C: Information technology leadership in academic
medical centers: a tale of four cultures. Acad Med. 74(7), 1999,
795–799.
44.
Centers for Disease Control and Prevention: framework for
program evaluation in public health. MMWR. 48(RR11), 1999,
1–40.
45.
W.K. Kellogg Foundation: Using Logic Models to Bring
Together Planning, Evaluation, and Action: Logic Model
Development Guide. 2004, W.K. Kellogg Foundation, Battle
Creek, MI.
46.
Patton, M: Qualitative Research and Evaluation Methods. 3rd ed,
2001, Sage Publications, Newbury Park, CA.
47.
Flanagan, JC: The critical incident technique. Psychol Bull.
51(4), 1954, 327–358.
48.
Miller, WR, Rollnick, S: Motivational Interviewing: Preparing
People to Change. 2nd ed, 2002, Guilford Press, New York, NY.
49.
Crandall, B, Klein, G, Hoffman, R: Working Minds: A
Practitioner's Guide to Cognitive Task Analysis. 2006, MIT
Press, Cambridge, MA.
50.
Hoffman, RR, Militello, LG: Perspectives on Cognitive Task
Analysis. 2009, Psychology Press Taylor and Francis Group,
New York, NY.
51.
Schraagen, J, Chipman, S, Shalin, V: Cognitive Task Analysis.
2000, Lawrence Erlbaum Associates, Mahwah, NJ.
52.
Ericsson, K, Simon, H: Protocol Analysis: Verbal Reports as
Data. Rev. ed, 1993, MIT Press, Cambridge, MA.
53.
Kaplan, B, Maxwell, J: Qualitative research methods for
evaluating computer information systems. In Anderson, JG,
Aydin, CE, Jay, SJ (Eds.): Evaluating Health Care Information
Systems: Approaches and Applications. 1994, Sage, Thousand
Oaks, CA, 45–68.
54.
Staggers, N, Clark, L, Blaz, J, Kapsandoy, S: Nurses’
information management and use of electronic tools during
acute care handoffs. Western J Nurs Res. 34(2), 2012, 151–171.
55.
Staggers, N, Clark, L, Blaz, J, Kapsandoy, S: Why patient
summaries in electronic health records do not provide the
cognitive support necessary for nurses’ handoffs on medical and
surgical units: insights from interviews and observations.
Health Informatics J. 17(3), 2011, 209–223.
56.
Harris, A, McGregor, J, Perencevich, E, et al.: The use and
interpretation of quasi-experimental studies in medical
informatics. J Am Med Inform Assoc. 13, 2006, 16–23.
57.
Ramsay, C, Matowe, L, Grill, R, Grimshaw, J, Thomas, R:
Interrupted time series designs in health technology assessment:
lessons from two systematic reviews of behavior change
strategies. Int J Technol Assess. 19(4), 2003, 613–623.
58.
Lee, H, Monk, T: Using regression discontinuity design for
program evaluation. American Statistical Association–Proceedings
of the Survey Research Methods Section. 2008, ASA. Retrieved
from http://www.amstat.org/sections/srms/Proceedings/.
59.
Shadish, WR, Cook, TD, Campbell, DT: Experimental and
Quasi-Experimental Designs for Generalized Causal Inference.
2002, Houghton Mifflin, Boston, MA.
60.
Pitt, L, Watson, R, Kavan, C: Service quality: a measure of
information systems effectiveness. MIS Quart. 19(2), 1995,
173–188.
61.
Sauro, J: Measuring usability with the system usability scale
(SUS). Measuring Usability LLC.
http://www.measuringusability.com/sus.php. February 2, 2011.
Accessed September 27, 2012.
62.
Bangor, A, Kortum, P, Miller, JT: An empirical evaluation of
the system usability scale. Int J Hum-Comput Int. 24(6), 2008,
574–594.
63.
Durland, M, Fredericks, K (Eds.): New Directions in
Evaluation: Social Network Analysis. 2005, Jossey-Bass/AEA,
Hoboken, NJ.
64.
Benham-Hutchins, MM, Effken, JA: Multi-professional patterns
and methods of communication during patient handoffs. Int J
Med Inform. 79(4), 2010, 252–267.
Discussion Questions
1. Of the levels of theory discussed in this chapter, what
level would be most appropriate for evaluation of electronic
health records? Would the level of theory be different if the
intervention was for an application targeting a new scheduling
system in a clinic? Why?
2. What is the difference between program evaluation and
program evaluation research?
3. Assume that you are conducting an evaluation of a new
decision support system for preventative alerts. What kind of
designs would you use in a program evaluation study?
4. Using the life cycle as a framework, explain when and why
you would use a formative or summative evaluation approach.
5. What are the basic differences between a research study
and a program evaluation?
6. Review the following article: Harris A, McGregor J,
Perencevich E, et al. The use and interpretation of quasi-
experimental studies in medical informatics. J Am Med Inform
Assoc. 2006;13:16-23. Explain how you might apply these
research designs in structuring a program evaluation.
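The quasi-experimental designs referenced in question 6, such as the interrupted time series, lend themselves to a simple worked sketch. The following is an illustrative Python calculation with separate ordinary least-squares fits before and after the intervention; the monthly error counts and the go-live month are hypothetical, not drawn from the Harris article:

```python
# Illustrative interrupted time series sketch: fit separate lines to the
# pre- and post-intervention segments and report the change in level and
# slope at the intervention point. Data here are hypothetical.

def linfit(xs, ys):
    """Ordinary least-squares slope and intercept for one segment."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def interrupted_time_series(series, t0):
    """Change in level (at month t0) and in slope after the intervention."""
    pre = [(t, y) for t, y in enumerate(series) if t < t0]
    post = [(t, y) for t, y in enumerate(series) if t >= t0]
    b1, b0 = linfit([t for t, _ in pre], [y for _, y in pre])
    a1, a0 = linfit([t for t, _ in post], [y for _, y in post])
    level_change = (a0 + a1 * t0) - (b0 + b1 * t0)
    slope_change = a1 - b1
    return level_change, slope_change

# Hypothetical example: 12 flat pre-implementation months, then a drop and
# a continuing decline after go-live at month 12.
errors = [20.0] * 12 + [14.0 - 0.5 * m for m in range(12)]
level, slope = interrupted_time_series(errors, 12)
print(f"level change: {level:.1f}, slope change: {slope:.2f}")
```

With these hypothetical data the intervention is associated with an immediate drop of 6 errors per month at go-live and a further decline of 0.5 errors per month thereafter. A full analysis would also account for autocorrelation in the monthly series, one of the pitfalls discussed by Ramsay et al.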
Case Study
A 410-bed hospital has been using a homegrown provider order
entry system for 5 years. It has recently decided to put in bar
code administration software to scan medications at the time of
delivery in order to decrease medication errors. The administration
is concerned about medication errors, top-level administration
is concerned about meeting The Joint Commission accreditation
standards, and the IT department is worried that the scanners
may not be reliable and may break, increasing their costs. The
plan is to have a scanner in each patient's room; nurses will
scan the medication when they get to the room and also scan
their own badges and the patient's arm band. The application
makes it possible to print out a list of the patients with their
scan patterns and the nurses sometimes carry this printout
because patients' arm bands can be difficult to locate or nurses
do not want to disturb patients while they are sleeping. The bar
code software was purchased from a vendor and the facility has
spent about a year refining it. The IT department is responsible
for implementation and has decided that it will implement the
system in each of the four inpatient settings, one at a time, at
6-month intervals.
The hospital administration wants to conduct an evaluation
study. You are assigned to be the lead on the evaluation.
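The staggered, unit-by-unit rollout in this case study is effectively a multiple-baseline design: units that have not yet gone live can serve as concurrent controls for units that have. A minimal sketch of how pre/post comparisons might be tabulated, assuming hypothetical unit names, go-live months, and monthly error counts:

```python
# Minimal multiple-baseline summary for a staggered rollout. Unit names,
# go-live months, and error counts are hypothetical; real data would come
# from the hospital's medication incident reports.

def pre_post_means(monthly_errors, go_live_month):
    """Mean monthly error count before and after a unit's go-live."""
    pre = monthly_errors[:go_live_month]
    post = monthly_errors[go_live_month:]
    return sum(pre) / len(pre), sum(post) / len(post)

# Staggered go-lives, 6 months apart, over a 30-month observation window.
go_live = {"ICU": 6, "Med-Surg": 12, "Telemetry": 18, "Oncology": 24}
errors = {unit: [20.0] * m + [12.0] * (30 - m) for unit, m in go_live.items()}

for unit, m in go_live.items():
    pre_mean, post_mean = pre_post_means(errors[unit], m)
    print(f"{unit}: pre {pre_mean:.1f}/month, post {post_mean:.1f}/month")
```

If error rates fall in each unit only after that unit's go-live, and not before, the staggered pattern strengthens the inference that the bar code system, rather than a hospital-wide trend, produced the change.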
Discussion Questions
1. What is the key evaluation question for this project?
2. Who are the stakeholders?
3. What level of theory is most appropriate?
4. What are specific elements to measure by stakeholder
group?
Pageburst Integrated Resources
As part of your Pageburst Digital Book, you can access the
following Integrated Resources:
Bibliography and Additional Readings
Web Resources