WHO IS LISTENING TO WHO, HOW WELL AND WITH WHAT EFFECT?

DANIEL TICEHURST1
OCTOBER 16TH, 2012

“Just weighing a pig doesn’t fatten it. You can weigh it all the time, but it’s not making the hog fatter.”
President Obama, Green Bay town hall meeting, June 11th 2009
http://pifactory.wordpress.com/2009/06/16/just-weighing-a-pig-doesnt-fatten-it-obama-hint-on-testing/

1 Project Director, Performance Management and Evaluation, HTSPE Ltd.
ACKNOWLEDGEMENTS
I’d like to thank the following people for their comments on early drafts: Andrew Temu,
James Gilling, Jonathan Mitchell, Rick Davies, Harold Lockwood, Mike Daplyn, David
Booth, Simon Maxwell, Ian Goldman, Owen Barder, Natasha Nel, Susie Turrall and
Patricia Woods. Particular thanks go to Martine Zeuthen for her support throughout and to
Larry Salmen whose comments and writings encouraged me to start and keep going. For
their support to editing the first and final drafts, special thanks to Michael Flint, Clive
English and Sarah Leigh-Hunt.
CONTENTS
Executive Summary
1. Introduction
   a. What are Results?
   b. What are the practical differences between Monitoring and Evaluation?
2. The Starter Problem
3. The Value of Monitoring in Understanding Beneficiary Values
4. The Need for Feedback Loops
5. The Importance of Institutions
6. Main Observations of Current Practice
7. Conclusions
Who is Listening to Who, How Well and with What Effect? | Daniel Ticehurst 1
EXECUTIVE SUMMARY
I am a so-called Monitoring and Evaluation (M&E) specialist although, in truth, my passion is monitoring. Hence I dislike the collective term ‘M&E’: I see them as very different things. I also question the setting up of Monitoring and especially Evaluation units on development aid programmes: the skills and processes necessary for good monitoring should be an integral part of management, and evaluation should be seen as a different function. I often find that ‘M&E’ experts over-complicate the already challenging task of managing development programmes. The work of a monitoring specialist is to help instil an understanding of what a good monitoring process looks like and, based on this, to support those responsible for managing programmes to work together in following this process through, so as to drive better performance, not just comment on it.
I have spent most of my 20 years in development aid working on long term assignments
mainly in various countries in Africa and exclusively on ‘M&E’ across the agriculture and
private sector development sectors. Of course, just because I have done nothing else but
‘M&E’ does not mean I excel at both. However, it has meant that I have had opportunities
to make mistakes and learn from them and the work of others.
The purpose of this paper is to stimulate debate on what makes for good monitoring. It draws on my reading of history and my perceptions of current practice in the development aid sector and, to a lesser extent, the corporate sector. I dwell on the history deliberately, as it throws up some good practice and relevant lessons. This is particularly instructive given the resurgence of the aid industry’s focus on results and recent claims about scant experience in involving intended beneficiaries2 and establishing feedback loops.3 The main audience I have in mind is not those associated with managing or carrying out evaluations. Rather, this paper is aimed at managers responsible for monitoring (be they directors in Ministries, managers in consulting companies or NGOs, or civil servants in donor agencies who oversee programme implementation) and aims to improve a neglected area.
Human behaviour is unpredictable and people’s values vary widely. In the development context, the challenge lies in how to understand the assumptions development aid programmes make about their beneficiaries. Ultimately, understanding behaviours and decisions is what economics is all about.4 One of its tasks is to show how ignorant we often are in imagining what we can design to bring about change.5
As Hayek explains, our inability to discuss seriously what really explains underlying problems in development is often due to timidity about soiling our hands by moving from purely scientific questions into questions of value.
Both Hayek and Harford argue that a subtle process of trial-and-error can produce a
highly successful system. Certainly, there are no reliable models of behaviour that can
predict the results of development aid programmes with certainty. Development aid
programmes are delivered in complex and highly unpredictable environments and thus
2 People or institutions who are meant to benefit from a particular development initiative.
3 The Sorry State of M&E in Agriculture: Can People-centred Approaches Help? Lawrence Haddad, Johanna Lindstrom and Yvonne Pinto. Institute of Development Studies, 2010.
4 The Undercover Economist. Tim Harford. Abacus, an imprint of Little, Brown Book Group, 2006.
5 The Fatal Conceit. Friedrich von Hayek. University of Chicago Press, 1991.
are associated with, and subject to, all kinds of ‘jinks’ and ‘sways’. These are often
overlooked and/or under-estimated in how they influence the results sought and
ultimately, how they are monitored and evaluated.
Furthermore, as Rondinelli has stated, the way programmes are designed and monitored
sits uncomfortably with these complexities:
“the procedures adopted for designing and implementing aid interventions often
become ever more rigid and detailed at the same time as recognising that
development problems are more uncertain and less amenable to systematic
design, analysis and monitoring.” 6
This highlights the need to find ways of understanding values: appreciating and learning about, through feedback, the opinions of beneficiaries in terms of their assessment of the
relevance and quality of the aid received. For development to have an impact on
poverty reduction, the learning process must incorporate and use the perspectives
of beneficiaries.
As Barder comments, and as a recent Harvard Business Review article makes explicit, approaches to gauging client feedback are under-developed for two key reasons7:
• Beneficiaries and institutions are often simply not asked for their opinions, because monitoring emphasises, for example, enabling subsequent impact assessment and/or limits its enquiry to ‘tracking’ effort and spend; and
• when they are asked, the beneficiaries’ assessment of the performance of those providing support or services is seldom validated with them and/or fed back in the form of remedial actions. So why bother providing feedback in the first place?
In the business world, realising that customer retention is more critical than ever,
companies have ramped up their efforts to listen to customers. Many, however, struggle to convert their findings into practical prescriptions. Some are addressing that challenge by creating feedback loops that start at the front line, such as Pfizer, which uses approaches similar to what development aid refers to as participatory storytelling. Unlike development aid, however, the concept of participation is applied to allowing opportunities for front-line staff, in addition to their customers or beneficiaries, to tell their stories. Many companies have succeeded at retaining customers by asking them for simple feedback, and then empowering front-line employees to act swiftly on that feedback. The importance of
understanding staff and client or customer satisfaction was highlighted through the
balanced scorecard by Kaplan and Norton. 8
6 Development Projects as Policy Experiments: An Adaptive Approach to Development Administration. Dennis A. Rondinelli. Development and Underdevelopment Series. Methuen and Co Ltd, 1983.
7 http://www.owen.org/blog/4018, 2010, and “Closing the Customer Feedback Loop”, by Rob Markey, Fred Reichheld and Andreas Dullweber, Harvard Business Review, December 2009.
8 The Balanced Scorecard, developed by Robert Kaplan and David Norton in the early 1990s, is a performance management tool used by managers to keep track of the execution of activities by the staff within their control and to monitor the consequences arising from these actions. Its ‘balanced’ nature comes from being built around four perspectives: Financial (how do we look to shareholders?), Customer (how do we look to our customers?), Internal Business Process (what must we excel at?) and Learning and Growth (how can we continue to improve and create value?).
In the field of what is called Monitoring and Evaluation (M&E), few efforts try to understand behaviour. Often they tend to control expenditure, analyse other numbers and assess developmental change, but not so much values and opinions. I maintain that
trying to assess 'profound' and lasting developmental impacts, in the absence of effective
feedback loops, is impractical and of limited use. I further argue that this should be a core
feature of any monitoring system and, for practical management reasons, should not be
the sole domain of evaluation.
I do not want to come across as being too black and white or dogmatic about what
constitutes Monitoring as opposed to Evaluation. Although opinions differ as to what
extent Evaluation is independent of and/or relates to Monitoring, I find it useful to define
the main source of differences according to: a) the responsibilities and primary users of
the information generated; b) their objectives; c) their requirements for comparative
analysis (across time, people and space); and d) their reference periods.
I see monitoring as having three inter-related parts:
• one that controls expenditures in the context of cataloguing activities, involving a participatory approach between those responsible for delivering the support and the finance team;
• another that tracks and analyses the reach to intended beneficiaries of the support these activities make available (ie, outputs), and how this varies; and
• one that gauges how and to what extent beneficiaries respond to this support – their assessment of its quality, relevance and ultimately usefulness – and also how this varies among them.
The questions associated with the third component, I maintain, should not be held in abeyance pending an evaluation. Doing so begs very real questions as to the extent to which managers are accountable for the quality and relevance of the support if they are not listening to beneficiary opinion and response. Monitoring needs to be less about periodically surveying socio-economic impacts, irrespective of approach, but also more than just cataloguing ‘outputs and activities’ and controlling ‘spend’.9
That others may see as evaluation what I refer to as the third component of any good monitoring system gives me hope: good monitoring practice involves getting outside the office, listening to beneficiaries and taking what they say on board, re-adjusting accordingly, and closing the feedback loop by letting them know what you have done with their feedback.
Of course, evaluations do this as well: understanding the values and behaviours of beneficiaries is an aim they both share. The difference between how monitoring and evaluation try to achieve this understanding is one of approach: who does this, how often, why, with what type of comparisons across people and places, and for whom?
Monitoring can and should ultimately drive better performance and involve participatory
processes including, but not limited to, those between the intervention and intended
beneficiaries (be they the poor themselves or institutions that serve them, depending on
9 Such surveys perhaps need doing, but not by those attached to programmes.
the outcome sought).10 Having the ability to listen to and understand how and in what ways beneficiaries respond to development programmes, and to feed this information back to decision-makers, should not be judged by academic standards alone.
I do not see the problem as an absence of tools or methods. They are there. Beneficiary Assessment is one stand-out example, and it is not new: the approach was first developed in the late 1980s and described in 1995.11 Another is Casley and Kumar’s Beneficiary Contact Monitoring (BCM) which, alongside beneficiary assessments, is the equivalent of what I describe as the third component of a monitoring system.12 Such assessments, I argue, can better enable improvements in the quality and usefulness of monitoring.
I hope this paper provides a more balanced understanding of, and interest in, Monitoring in the face of a growing preoccupation with trying to evaluate results, including and especially impacts. I’d like to believe that it could also help take advantage of a similar movement by focussing more on taking into account, and learning from, the views of beneficiaries in assessing the value of investments in aid and how well they are delivered. Doing this should be treated as an integral element of monitoring.
I am a ‘fan’ of logframes and value the need to develop results-chains. The major strength
of the approach is that it provides an opportunity to collect evidence and think through a
programme’s theory of change.
However, it is important to distinguish between the logical framework – the matrix which summarises the structure of a programme and how this is broken down among the hierarchy of objectives – and the approach – the process by, and the evidence with which, this is defined. With this in mind, my qualms are about how easily logical frameworks can be: a) mis-used, through being developed without adequate participation of all stakeholders, through not balancing logical thinking with deeper critical reflection, and through organisations filling in the boxes merely to receive funding; and b) mis-managed, through not being treated as an iterative reference point for programmes to keep up to speed with realities by providing opportunities for beneficiary assessments. There is nothing intrinsic to the process associated with developing logframes that explains the need for a separate approach built around theories of change.13
Currently, M&E processes and systems in public sector development aid at higher levels
(Outcomes & Impacts) tend to be over-prescriptive and focussed on measuring pre-
defined indicators within politically defined time periods – ie, elections. The really
challenging questions are not how to do better monitoring, but rather (a) what are the
bureaucratic pressures that lead to civil servants behaving in certain ways and (b) how to
change them. 14 Typically, political time periods of five years ‘force’ over-ambition and
10 As with Michael Quinn Patton’s view on utilisation-focussed evaluation, the bottom-line objective for monitoring is how it really makes a difference to improving programme performance so as to enhance prospects for bringing about lasting change.
11 “…an approach to information gathering which assesses the value of an activity as it is perceived by its principal users; . . . a systematic inquiry into people’s values and behaviour in relation to a planned or on-going intervention for social, institutional and economic change.” Lawrence F. Salmen, Beneficiary Assessment: An Approach Described, Social Development Paper Number 10 (Washington, D.C.: World Bank, July 1995), p. 1.
12 Project Monitoring and Evaluation in Agriculture. Dennis J. Casley and Krishna Kumar. Johns Hopkins University Press, 1987.
13 http://web.mit.edu/urbanupgrading/upgrading/issues-tools/tools/ZOPP.html
14 Pers. comm., Simon Maxwell.
therefore the premature measurement of developmental results. The systems civil
servants are obliged to set up are limited in providing information which can help to:
A. Damp down politically-inspired over-ambition regarding outcomes and especially
impacts that may inadvertently undermine the case for development aid;
B. Safeguard against the Law of Unintended Consequences (or at least illuminate
where these are happening through testing the assumptions during
implementation); and
C. Take account of alternative views (“theories of change”) especially those of
beneficiaries and field staff regarding the quality and relevance of their support in
order to help ensure the delivery of results – the true purpose of monitoring.
This can be accomplished by establishing feedback loops based on beneficiary perceptions of the quality of project/programme services and ‘products’ (beneficiaries may be the general population and/or local institutions) and their ‘results’. These in turn require:
1) Opportunities to encourage often poor and vulnerable beneficiaries and front line
staff to express their views;
2) Sufficient real-time flexibility in project/programme design to permit
incorporation of feedback;
3) Commitment by managers and those responsible for the oversight of
implementation to monitoring programme consequences, intended, positive or
otherwise; and
4) Assurances by those with the authority to allocate resources that they will validate feedback among beneficiaries and then incorporate remedial actions in projects/programmes.
The rationale of this paper is to explain some of the reasons why monitoring does not, yet
could with effect and at reasonable cost, do the following:
1. Make effective contributions in delivering significant development results
that matter most to beneficiaries; and
2. Better understand the ‘theory’ underlying aid programmes through
monitoring processes and establishing feedback loops, in real time, with
beneficiaries.15
15 This paper uses the term beneficiary in a collective sense: in relation to either the poor themselves (for aid programmes that deliver support directly to them); or the institutions that serve them (for programmes that support, for example, partner country ministries, NGOs and markets, formal and/or informal).