2. Summary of Current Projects
• Program Evaluation Projects
– Project Weed & Seed
– Project Fairchild Tropical Gardens
• Student Development Projects
– Project Social Networking
– Project Writing
• Teaching Research Methods/Statistics Projects
– Project RLE
3. Program Evaluation Philosophy
• Utilization-focused evaluation (Patton, 1986)
– Evaluations are situation specific
• Comprehensive evaluation designs
– Formative and summative evaluation
– Qualitative and quantitative data
• Useful and meaningful data
– Simple versus sophisticated analyses
• Faculty-student evaluation teams
– Students as evaluation apprentices
4. What is Program Evaluation?
• “Program evaluation is the systematic collection of
information about the activities, characteristics, and
outcomes of programs for use by specific people to
reduce uncertainties, improve effectiveness, and
make decisions with regard to what those programs
are doing and affecting” (Patton, 1986).
• “Evaluation research is the systematic application
of social research procedures for assessing the
conceptualization, design, implementation, and
utility of social intervention programs” (Rossi &
Freeman, 1993).
5. Types of Evaluation
• Formative Evaluation: focuses on identifying the
strengths and weaknesses of a program or
intervention.
– Consists of implementation (process) and progress
evaluation
– Occurs during the entire life of the program/intervention
– Is performed to monitor and improve the
program/intervention
• Summative Evaluation: focuses on determining
the overall effectiveness or impact of a program or
intervention.
– Also called impact evaluation
– Assesses if the project/intervention met its stated goals
6. Preparing to Conduct an Evaluation
• Identify the program and its stakeholders
– Get a description of the program (e.g., the curriculum)
– Meet with all stakeholders (survey or interview
them)
• Become familiar with information needs
– Who wants the evaluation?
– What is the focus of the evaluation? What
resources do I have available to me?
– Why is an evaluation wanted?
– When is the evaluation wanted?
7. Program Provider/Staff Issues
• Expectation of a “slam-bang effect”.
• Fear that evaluation will inhibit
creativity/innovation in the program.
• Fear that the program will be terminated.
• Fear that information will be misused.
• Fear that evaluation will drain resources.
• Fear of losing control of the program.
• Fear among program staff that they are
being monitored.
8. What is Program Theory?
• Program theory identifies key program
elements and how they relate to each other.
• Program theory helps us decide what data
we should collect and how we should
analyze it.
• It is important to develop an evaluation plan
that measures the extent and nature of each
individual element.
9. Inadequate Program Evaluation Models
• Social Science Research Model: form two
random groups, providing one with the
service and using the other as a control
group.
• Black-Box Evaluation: an evaluation that
only looks at the outputs and not the internal
operations of the program.
• Naturalistic Model: utilizing only qualitative
methods to gather lots of data.
10. Theory-Driven Model
• Theory-driven evaluations are more likely than
methods-driven evaluations to discover program
effects because they identify and examine a larger
set of potential program outcomes (Chen & Rossi, 1980).
• Theory-driven evaluations are not limited to one
method (e.g., quantitative or qualitative), one data
source (e.g., program participants, artifacts,
community indexes, program staff), or one type of
analysis (e.g., descriptive statistics, correlational
analyses, group difference statistics).
• Theory-driven evaluations utilize mixed-methods
and derive their data from multiple sources.
11. Improvement-Focused Model
• Program improvement is the focus.
• Utilizing this type of model, evaluators can
help program staff to discover discrepancies
between program objectives and the needs
of the target population, between program
implementation and program plans, between
expectations of the target population and the
services actually delivered, or between
outcomes achieved and outcomes projected
(Posavac & Carey, 1997).
12. Goals of an Evaluation
• Implementation Goals
– Equipment needs, staff hiring and training
• Intermediate Goals
– Program is delivered as planned
• Outcome Goals
– Is the program effective?
13. Questions to Ask in an Evaluation
• (1) Does the program match the values of the
stakeholders/needs of the people being served?
• (2) Does the program as implemented fulfill the
plans?
• (3) Do the outcomes achieved match the goals?
• (4) Is there support for program theory?
• (5) Is the program accepted?
• (6) Are the resources devoted to the program being
expended appropriately?
14. Creating an Evaluation Plan
• Step 1: Creating a Logic Model
• Step 2: Reviewing the Literature
• Step 3: Determining the Methodology
• Step 4: Present a Written Proposal
15. Step 1: Creating a Logic Model
• Review program descriptions
– Is there a program theory?
– Who do they serve?
– What do they do?
• Meet with stakeholders
– Program personnel
– Program sponsors
– Clients of program
– Other individuals/organizations impacted by the
program
16. Step 1: Creating a Logic Model
• Logic models depict assumptions about the
resources needed to support program
activities and produce outputs, and the
activities and outputs needed to realize the
intended outcomes of a program (United
Way of America, 1996; Wholey, 1994).
• The assumptions depicted in the model are
called program theory.
17. Sample Logic Model
• INPUTS: Resources dedicated to or consumed by the program
• ACTIVITIES: What the program does with the inputs to fulfill its mission
• OUTPUTS: The direct products of program activities
• OUTCOMES: Benefits for participants during and after program activities
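As an illustration of how these four components might be captured when planning an evaluation, here is a minimal Python sketch; the LogicModel class and the tutoring-program entries are hypothetical examples, not part of the slides.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class LogicModel:
    """Containers for the four logic model components described above."""
    inputs: List[str] = field(default_factory=list)      # resources dedicated to or consumed by the program
    activities: List[str] = field(default_factory=list)  # what the program does with the inputs
    outputs: List[str] = field(default_factory=list)     # direct products of program activities
    outcomes: List[str] = field(default_factory=list)    # benefits for participants during/after the program

# Hypothetical example (not from the slides): an after-school tutoring program
tutoring = LogicModel(
    inputs=["funding", "tutors", "classroom space"],
    activities=["weekly tutoring sessions", "parent workshops"],
    outputs=["number of sessions delivered", "students served"],
    outcomes=["improved grades", "greater school engagement"],
)
print(tutoring.outcomes)
```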
18. Step 2: Reviewing the Literature
• Important things to consider:
– In what ways is your program similar to other
programs?
– What research designs were utilized?
– How were participants sampled?
– Can previous measures be adopted?
– What statistical analyses were performed?
– What were their conclusions/interpretations?
• Creating Hypotheses/Research Questions
19. Step 3: Determining the Methodology
• Sampling Method
– Probability vs. Non-probability (see the sampling sketch after this list)
• Research Design
– Experimental, Quasi-experimental, Non-experimental
• Data Collection
– Ethics, Implementation, Observations, Surveys,
Existing Records, Interviews/Focus Groups
• Statistical Analysis
– Descriptive, Correlational, Group Differences
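To make the probability-sampling option above concrete, here is a minimal Python sketch that draws a simple random sample from a participant roster; the roster, seed, and sample size are illustrative assumptions only.

```python
import random

# Hypothetical roster of program participants (illustrative only)
roster = [f"participant_{i:03d}" for i in range(1, 201)]

random.seed(42)                       # fixed seed so the draw is reproducible
sample = random.sample(roster, k=30)  # simple random sample without replacement

print(len(sample), sample[:5])
```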
20. Step 4: Present a Written Proposal
• Describe the specific purpose of the
evaluation
– Specific goals, objectives and/or aims of the
evaluation
• Describe the evaluation design
– Include theories/support for design
– Methodology – participants, measures,
procedure
• Describe the evaluation questions
– Hypotheses and proposed analyses
• Present a detailed work plan and budget
21. Ethics in Program Evaluation
• Sometimes evaluators will have to deal with
ethical dilemmas during the evaluation
process
• Some potential dilemmas
– Programs that can’t be done well
– Presenting all findings (negative and positive)
– Proper ethical considerations (e.g., informed consent)
– Maintaining confidentiality of clients
– Competent data collectors
• AEA ethical principles
22. Data Collection: Implementation Checklists
• Implementation checklists are used to
ascertain if the program is being delivered as
planned.
• Include questions after each program
chapter/section for the program deliverer to
complete.
• This can then be used to create a new
variable: Level of implementation (none, low,
high).
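One way to derive that implementation-level variable from checklist responses is sketched below; the none/low/high labels follow the slide, but the function name and the 50% cut-point are illustrative assumptions.

```python
def implementation_level(items_delivered, items_planned, low_cutoff=0.5):
    """Classify how fully a program section was delivered as none, low, or high."""
    if items_planned <= 0:
        raise ValueError("items_planned must be positive")
    proportion = items_delivered / items_planned
    if proportion == 0:
        return "none"
    return "low" if proportion < low_cutoff else "high"

# Example: a facilitator reports delivering 7 of 10 planned activities in a section
print(implementation_level(7, 10))  # -> "high"
print(implementation_level(3, 10))  # -> "low"
print(implementation_level(0, 10))  # -> "none"
```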
23. Data Collection: Observations
• What should be observed?
– Participants, Program staff
• Utilize trained observers.
– Your observers should be trained on how to
observe staff and/or clients.
• Use standardized behavioral checklists.
– You should have a standard checklist that
observers can use while observing.
24. Data Collection: Surveys
• To create or not to create?
– Finding a valid/reliable instrument
• What can surveys measure?
– Facts and past behavioral experiences
– Attitudes and preferences
– Beliefs and predictions
– Current/future behaviors
• Types of questions
– Closed versus Open-ended questions
• How to administer?
25. Data Collection: Existing Records
• School records
– GPA, absences, disciplinary problems
• Health records
– Relevant health information
• National surveys
– National and state indices (e.g., census data)
26. Data Collection: Focus Groups/Interviews
• Focus groups or individual interviews can be
conducted with program staff and/or clients.
• Can be used to obtain information on
program effectiveness and satisfaction.
• Can also show if client needs are not being
met.
27. Statistical Analysis: Quantitative
• Descriptive Statistics
– What are the characteristics of our clients?
– What % attended our program?
• Correlational Statistics
– What variables are related to our outcomes?
– How is implementation related to our outcomes?
• Group Difference Statistics
– Is our program group different from our
comparison group?
– Are there group differences on outcomes?
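These three families of analyses can be run with standard Python tools; the sketch below uses pandas and SciPy on made-up attendance and outcome data (all variable names and numbers are hypothetical).

```python
import pandas as pd
from scipy import stats

# Hypothetical evaluation data: group membership, attendance, and an outcome score
df = pd.DataFrame({
    "group":      ["program"] * 5 + ["comparison"] * 5,
    "attendance": [10, 8, 9, 7, 10, 0, 0, 0, 0, 0],
    "outcome":    [78, 74, 80, 70, 82, 65, 60, 68, 62, 64],
})

# Descriptive statistics: what do our clients' outcome scores look like?
print(df["outcome"].describe())

# Correlational statistics: is attendance related to the outcome?
r, p = stats.pearsonr(df["attendance"], df["outcome"])
print(f"r = {r:.2f}, p = {p:.3f}")

# Group-difference statistics: does the program group differ from the comparison group?
program = df.loc[df["group"] == "program", "outcome"]
comparison = df.loc[df["group"] == "comparison", "outcome"]
t, p = stats.ttest_ind(program, comparison)
print(f"t = {t:.2f}, p = {p:.3f}")
```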
28. Statistical Analysis: Qualitative
• Transcribe interviews/focus groups/observations
– Should be done verbatim in an organized
fashion
• Summarizing all open-ended questions
– Summarize and keep a tally of how many
participants give each response
• Coding and analyzing all qualitative data
– Utilize a theoretical framework for coding (e.g.,
Grounded Theory)
– Use a qualitative software package to organize
data (e.g., NUD*IST, NVivo)
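The tally of open-ended responses mentioned above can be kept with the Python standard library; the responses below are invented for illustration, and this first-pass count is not a substitute for theory-guided coding.

```python
from collections import Counter

# Hypothetical open-ended responses to "What did you like most about the program?"
responses = [
    "the group discussions",
    "the group discussions",
    "one-on-one time with staff",
    "the group discussions",
    "flexible scheduling",
    "one-on-one time with staff",
]

tally = Counter(responses)
for response, count in tally.most_common():
    print(f"{count:2d}  {response}")
```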
29. Writing the Evaluation Report
• This should be as detailed as possible. It
should include both the formative and
summative evaluation findings, as well as an
action plan for improving the program/design.
• Should be written in an easy-to-understand
format (don’t be too technical); more technical
information can go in an appendix.
• Include a lot of graphical displays of the
data.
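For the graphical displays recommended above, a simple bar chart often suffices; the sketch below uses matplotlib with made-up pre/post means (the numbers and file name are hypothetical).

```python
import matplotlib.pyplot as plt

# Hypothetical pre- and post-program means on one outcome measure
labels = ["Pre-program", "Post-program"]
means = [62.5, 74.0]

fig, ax = plt.subplots()
ax.bar(labels, means)
ax.set_ylabel("Mean outcome score")
ax.set_title("Outcome scores before and after the program (hypothetical data)")
fig.savefig("outcome_means.png", dpi=150)
```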
30. Presenting the Results
• You should present the results of the
evaluation to all key stakeholders.
• Professional presentation (PowerPoint,
handouts).
• Don’t just present findings (both positive and
negative); explain them.
• Present an action plan for possible changes.
31. Working as a Program Evaluator
• Get more experience
– Take classes
• EP 533 (basic intro), EP 651/652 (Seminar), EP 670
(internship, can take up to 9 hours), EP 693
(Independent Study) as well as many others
• Evaluation Certificate (12 hrs)
– Workshops
• Get Involved
32. References
• Chen, H.T., & Rossi, P.H. (1980). The multi-goal, theory-driven
approach to evaluation: A model linking basic and applied social
science. Social Forces, 59, 106-122.
• Julian, D.A. (1997). The utilization of the logic model as a system level
planning and evaluation device. Evaluation and Program Planning, 20,
251-257.
• Patton, M.Q. (1986). Utilization-focused evaluation (2nd ed.). Newbury
Park, CA: Sage Publications.
• Posavac, E.J., & Carey, R.G. (2003). Program evaluation: Methods and
case studies (6th ed.). New Jersey: Prentice Hall.
• Rossi, P.H., & Freeman, H.E. (1993). Evaluation: A systematic
approach (5th ed.). Newbury Park, CA: Sage Publications.
• United Way of America. (1996). Measuring program outcomes: A
practical approach (Item No. 0989). Author.
33. Websites
• Evaluators’ Institute
– http://www.evaluatorsinstitute.com/
• Guide to program evaluation
– http://www.mapnp.org/library/evaluatn/fnl_eval.htm
• Evaluating community programs
– http://ctb.lsi.ukans.edu/tools/EN/part_1010.htm
• Evaluation bibliography
– http://www.ed.gov/about/offices/list/ope/fipse/biblio.html
• Higher Ed center evaluation resources
– http://www.edc.org/hec/eval/links.html
• The Evaluation Center
– http://www.wmich.edu/evalctr/
34. Websites
• American Evaluation Association (AEA)
– http://www.eval.org/
• Southeast Evaluation Association (SEA)
– http://www.bitbrothers.com/sea/
• Using logic models
– http://edis.ifas.ufl.edu/WC041
• Resources for evaluators
– http://www.luc.edu/faculty/eposava/resource.htm
• Various program evaluation publications (all pdf)
– http://www.uwex.edu/ces/pdande/evaluation/evaldocs.html
• Evaluation toolkit – Kellogg Foundation
– http://www.wkkf.org/Programming/Overview.aspx?CID=281
35. My Contact Information
Jennifer Ann Morrow, Ph.D.
Assistant Professor of Evaluation, Statistics, and
Measurement
Department of Educational Psychology and
Counseling
The University of Tennessee
Knoxville, TN 37996
Email: jamorrow@utk.edu
Office Phone: 865-974-6117