WHO IS LISTENING TO WHO, HOW WELL AND WITH WHAT EFFECT? 1

DANIEL TICEHURST

OCTOBER 16TH, 2012




“Just weighing a pig doesn’t fatten it. You can weigh it all the time, but it’s not making the hog fatter.”

President Obama, Green Bay town hall meeting, June 11th 2009
http://pifactory.wordpress.com/2009/06/16/just-weighing-a-pig-doesnt-fatten-it-obama-hint-on-testing/




1    Project Director, Performance Management and Evaluation, HTSPE Ltd.
ACKNOWLEDGEMENTS
I’d like to thank the following people for their comments on early drafts: Andrew Temu,
James Gilling, Jonathan Mitchell, Rick Davies, Harold Lockwood, Mike Daplyn, David
Booth, Simon Maxwell, Ian Goldman, Owen Barder, Natasha Nel, Susie Turrall and
Patricia Woods. Particular thanks go to Martine Zeuthen for her support throughout and to
Larry Salmen whose comments and writings encouraged me to start and keep going. For their support in editing the first and final drafts, special thanks to Michael Flint, Clive English and Sarah Leigh-Hunt.




                                                              CONTENTS
Executive Summary..........................................................................................................................2

1. Introduction..................................................................................................................................7

           a. What are Results?.............................................................................................................7

           b. What are the practical differences between Monitoring and Evaluation?.......................8

2. The Starter Problem....................................................................................................................10

3. The Value of Monitoring in Understanding Beneficiary Values .................................................14

4. The Need for Feedback Loops.....................................................................................................15

5. The Importance of Institutions....................................................................................................18

6. Main Observations of Current Practice ......................................................................................20

7. Conclusions .................................................................................................................................25




EXECUTIVE SUMMARY
I am a so-called Monitoring and Evaluation (M&E) specialist, although my passion is monitoring. Hence I dislike the collective term ‘M&E’: I see them as very different things. I
also question the setting up of Monitoring and especially Evaluation units on development
aid programmes: the skills and processes necessary for good monitoring should be an
integral part of management; and evaluation should be seen as a different function. I often
find that ‘M&E’ experts over-complicate the already challenging task of managing
development programmes. The work of a monitoring specialist is to help instil an
understanding of the scope of what a good monitoring process looks like. Based on this, it
is to support those responsible for managing programmes to work together in following
this process through so as to drive better, not just comment on, performance.

I have spent most of my 20 years in development aid working on long term assignments
mainly in various countries in Africa and exclusively on ‘M&E’ across the agriculture and
private sector development sectors. Of course, just because I have done nothing else but
‘M&E’ does not mean I excel at both. However, it has meant that I have had opportunities
to make mistakes and learn from them and the work of others.

The purpose of this paper is to stimulate debate on what makes for good monitoring. It
draws on my reading of history and perceptions of current practice, mainly in the development aid sector and to a lesser extent the corporate sector. I dwell on the history deliberately as it throws up some
good practice and relevant lessons. This is particularly instructive regarding the
resurgence of the aid industry’s focus on results and recent claims about scant experience
in involving intended beneficiaries2 and establishing feedback loops.3 The main audience I
have in mind are not those associated with managing or carrying out evaluations. Rather,
this paper is aimed at managers responsible for monitoring (be they directors in Ministries,
managers in consulting companies, NGOs or civil servants in donor agencies who
oversee programme implementation), in the hope of improving a neglected area.

Human behaviour is unpredictable and people’s values vary widely. In the development
context, the challenges lie in how to understand the assumptions development aid
programmes make about their beneficiaries. Ultimately, understanding behaviours and
decisions is what economics is all about.4 One of its tasks is to show how ignorant we sometimes are when imagining what we can design to bring about change.5

As Hayek explains, our frequent inability to discuss seriously what really explains underlying problems in development is due to timidity about soiling our hands by going from purely scientific questions into value questions.

Both Hayek and Harford argue that a subtle process of trial-and-error can produce a
highly successful system. There are certainly no reliable models of behaviour that can predict the results of development aid programmes. Development aid
programmes are delivered in complex and highly unpredictable environments and thus
2    People or institutions who are meant to benefit from a particular development initiative.
3    The Sorry State of M&E in Agriculture: Can People-centred Approaches Help? Lawrence Haddad, Johanna Lindstrom and Yvonne Pinto. Institute of Development Studies, 2010.
4    The Undercover Economist. Tim Harford. Abacus, an imprint of Little, Brown Book Group, 2006.
5    The Fatal Conceit. Friedrich von Hayek. University of Chicago Press, 1991.
are associated with, and subject to, all kinds of ‘jinks’ and ‘sways’. These are often
overlooked and/or under-estimated in how they influence the results sought and, ultimately, how they are monitored and evaluated.

Furthermore, as Rondinelli has stated, the way programmes are designed and monitored
sits uncomfortably with these complexities:

       “the procedures adopted for designing and implementing aid interventions often
       become ever more rigid and detailed at the same time as recognising that
       development problems are more uncertain and less amenable to systematic
       design, analysis and monitoring.” 6

This highlights the need to find ways of understanding values: appreciating and learning about, through feedback, beneficiaries’ opinions of the relevance and quality of the aid received. For development to have an impact on
poverty reduction, the learning process must incorporate and use the perspectives
of beneficiaries.

As Barder comments, and as a recent Harvard Business Review article makes explicit, approaches to gauging client feedback are under-developed for two key reasons7:

•      Either beneficiaries and institutions are simply not asked for their opinions, because monitoring emphasises, for example, enabling subsequent impact assessment and/or limits its enquiry to ‘tracking’ effort and spend; or, if they are asked,

•      their response to the performance of those providing support or services is seldom validated with them and/or fed back in the form of remedial actions. So why bother providing feedback in the first place?

In the business world, realising that customer retention is more critical than ever, companies have ramped up their efforts to listen to customers. Many, however, struggle to convert their findings into practical prescriptions. Some are addressing that challenge by creating feedback loops that start at the front line, such as Pfizer, which uses approaches similar to what development aid refers to as participatory storytelling. Unlike development aid, however, the concept of participation is applied to allowing opportunities for front-line staff, in addition to their customers or beneficiaries, to tell their stories. Many companies have succeeded in retaining customers by asking them for simple feedback, and then empowering front-line employees to act swiftly on that feedback. The importance of understanding staff and client or customer satisfaction was highlighted through the balanced scorecard by Kaplan and Norton.8


6    Dennis A. Rondinelli. Development Projects as Policy Experiments: An Adaptive Approach to Development Administration. Development and Underdevelopment Series. Methuen and Co Ltd, 1983.
7    http://www.owen.org/blog/4018, 2010, and “Closing the Customer Feedback Loop”, by Rob Markey, Fred Reichheld and Andreas Dullweber, Harvard Business Review, December 2009.
8    The Balanced Scorecard, developed by Robert Kaplan and David Norton in the early 1990s, is a performance management tool used by managers to keep track of the execution of activities by the staff within their control and to monitor the consequences arising from these actions. Its ‘balanced’ nature comes from being built around four perspectives: Financial (how do we look to shareholders?), Customer (how do we look to our customers?), Internal Business Process (what must we excel at?) and Learning and Growth (how can we continue to improve and create value?).
In the field of what is called Monitoring and Evaluation (M&E), few efforts try to understand behaviour. More often they control expenditure, analyse other numbers and assess developmental change, but pay little attention to values and opinions. I maintain that
trying to assess 'profound' and lasting developmental impacts, in the absence of effective
feedback loops, is impractical and of limited use. I further argue that this should be a core
feature of any monitoring system and, for practical management reasons, should not be
the sole domain of evaluation.

I do not want to come across as being too black and white or dogmatic about what
constitutes Monitoring as opposed to Evaluation. Although opinions differ as to what
extent Evaluation is independent of and/or relates to Monitoring, I find it useful to define
the main source of differences according to: a) the responsibilities and primary users of
the information generated; b) their objectives; c) their requirements for comparative
analysis (across time, people and space); and d) their reference periods.

I see monitoring as having three inter-related parts (a minimal sketch of how these might fit together follows the list):

•      one that is about controlling expenditures in the context of cataloguing activities, which involves a participatory approach between those responsible for delivering the support and the finance team;

•      another that tracks and analyses the reach of the support these activities make available to intended beneficiaries (ie, outputs), and how this varies; and

•      one that gauges how and to what extent beneficiaries respond to this support – their assessment of its quality, relevance and ultimately usefulness – and also how this varies among them.
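
To make the three parts more concrete, here is a minimal, purely illustrative sketch of how a programme team might hold the three kinds of information side by side. The class and field names (ActivityRecord, OutputRecord, BeneficiaryResponse) are hypothetical and are not taken from the paper or any particular monitoring system; the only point is that the third kind of record sits alongside, not after, the first two.

```python
# Illustrative sketch only: names and fields are invented for this example.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ActivityRecord:            # part 1: expenditure control against catalogued activities
    activity: str
    budgeted: float
    spent: float

@dataclass
class OutputRecord:              # part 2: the reach of the support (outputs) to intended beneficiaries
    activity: str
    output: str
    beneficiaries_reached: int
    target: int

@dataclass
class BeneficiaryResponse:       # part 3: how beneficiaries assess and respond to the support
    beneficiary_group: str
    quality_rating: int          # e.g. 1 (poor) to 5 (very useful)
    comment: str

@dataclass
class MonitoringRecord:
    activities: List[ActivityRecord] = field(default_factory=list)
    outputs: List[OutputRecord] = field(default_factory=list)
    responses: List[BeneficiaryResponse] = field(default_factory=list)

    def summary(self) -> str:
        # A management summary that reports all three parts together.
        spent = sum(a.spent for a in self.activities)
        reached = sum(o.beneficiaries_reached for o in self.outputs)
        ratings = [r.quality_rating for r in self.responses]
        avg = sum(ratings) / len(ratings) if ratings else float("nan")
        return (f"spend: {spent:.0f}; beneficiaries reached: {reached}; "
                f"average beneficiary rating: {avg:.1f} from {len(ratings)} responses")
```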

The questions associated with the third component, I maintain, should not be held in
abeyance pending an evaluation. Doing so begs very real questions as to the extent to
which managers are accountable for the quality and relevance of the support if they are
not listening to beneficiary opinion and response. Monitoring needs to be less than periodically surveying socio-economic impacts, whatever the approach, but also more than just cataloguing ‘outputs and activities’ and controlling ‘spend’.9

That what I refer to as the third component of any good monitoring system others may see as evaluation gives me hope: good monitoring practice involves getting outside the office, listening to beneficiaries and taking what they say on board, re-adjusting accordingly, and closing the feedback loop by letting them know what you have done with their feedback.

Of course, evaluations do this as well. But understanding the values and behaviours of
beneficiaries is an approach they both share. The difference between how monitoring and
evaluation try to achieve this understanding is based on approach: who does this, how
often, why, with what type of comparisons across people and places, and for whom?

Monitoring can and should ultimately drive better performance and involve participatory
processes including, but not limited to, those between the intervention and intended
beneficiaries (be they the poor themselves or institutions that serve them, depending on

9    Such surveys perhaps need doing but not by those attached to programmes.
the outcome sought).10 Having the ability to listen and understand how and in what ways
beneficiaries respond to development programmes, and to feed this information back to decision-makers, should not be judged by academic standards alone.

I do not see the problem as an absence of tools or methods. They are there. Beneficiary Assessment is one stand-out example and is not new: the approach was first developed in the late 1980s and described in 1995.11 Another is Casley and Kumar’s Beneficiary Contact Monitoring (BCM), which, alongside beneficiary assessments, is the equivalent of what I describe as the third component of a monitoring system.12 Such assessments, I argue, can better enable improvements in the quality and usefulness of monitoring.

I hope this paper provides a more balanced understanding of, and interest in, Monitoring in the face of a growing preoccupation with trying to evaluate results, including and especially impacts. I’d like to believe that it could also help take advantage of a similar movement by focussing more on taking into account, and learning from, the views of beneficiaries in assessing the value of investments in aid and how well they are delivered. Doing this should be treated as an integral element of monitoring.

I am a ‘fan’ of logframes and value the need to develop results-chains. The major strength
of the approach is that it provides an opportunity to collect evidence and think through a
programme’s theory of change.

However, it is important to distinguish between the logical framework – the matrix which summarises the structure of a programme and how this is broken down across the hierarchy of objectives – and the approach – the process by, and the evidence with which, this is defined. With this in mind, my qualms are about how easily logical frameworks can be: a) mis-used, by being developed without adequate participation of all stakeholders, by not balancing logical thinking with deeper critical reflection, and by organisations filling in the boxes simply to receive funding; and b) mis-managed, by not being treated as an iterative reference point for programmes to keep up to speed with realities through providing opportunities for beneficiary assessments. There is nothing intrinsic to the process associated with developing logframes that explains the need for a separate approach built around theories of change.13

Currently, M&E processes and systems in public sector development aid at higher levels
(Outcomes & Impacts) tend to be over-prescriptive and focussed on measuring pre-defined indicators within politically defined time periods – ie, elections. The really
challenging questions are not how to do better monitoring, but rather (a) what are the
bureaucratic pressures that lead to civil servants behaving in certain ways and (b) how to
change them. 14 Typically, political time periods of five years ‘force’ over-ambition and
10   As with Michael Quinn Patton’s view on utilisation-focussed evaluation, the bottom line objective for monitoring is how it really makes a difference to improving programme performance so as to enhance prospects for bringing about lasting change.
11   “…an approach to information gathering which assesses the value of an activity as it is perceived by its principal users; . . . a systematic inquiry into people’s values and behaviour in relation to a planned or on-going intervention for social, institutional and economic change.” Lawrence F. Salmen, Beneficiary Assessment: An Approach Described, Social Development Paper Number 10 (Washington, D.C.: World Bank, July 1995), p. 1.
12   Project Monitoring and Evaluation in Agriculture. Dennis J. Casley and Krishna Kumar, 1987. Johns Hopkins University Press.
13   http://web.mit.edu/urbanupgrading/upgrading/issues-tools/tools/ZOPP.html
14   Pers. comm., Simon Maxwell.
therefore the premature measurement of developmental results. The systems civil
servants are obliged to set up are limited in providing information which can help to:

A.     Damp down politically-inspired over-ambition regarding outcomes and especially
       impacts that may inadvertently undermine the case for development aid;

B.     Safeguard against the Law of Unintended Consequences (or at least illuminate
       where these are happening through testing the assumptions during
       implementation); and

C.     Take account of alternative views (“theories of change”) especially those of
       beneficiaries and field staff regarding the quality and relevance of their support in
       order to help ensure the delivery of results – the true purpose of monitoring.

This can be accomplished by establishing feedback loops based on beneficiary perceptions of the quality of project/programme services and ‘products’ (beneficiaries may be the general population and/or local institutions) and their ‘results’. These in turn require the following (a minimal sketch of such a loop follows the list):

1)     Opportunities to encourage often poor and vulnerable beneficiaries and front line
       staff to express their views;

2)     Sufficient real-time flexibility in project/programme design to permit incorporation of feedback;

3)     Commitment by managers and those responsible for the oversight of
       implementation to monitoring programme consequences, intended, positive or
       otherwise; and

4)     Assurances by those with authority to allocate resources, at all levels, to validate feedback among beneficiaries and then incorporate remedial actions in projects/programmes.
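
As a purely illustrative complement to these four requirements, the sketch below walks one piece of feedback through the loop described here: collected from a beneficiary group or front-line staff, validated with beneficiaries, turned into a remedial action, and reported back so the loop is closed. The states, function names and example data are my own shorthand, not taken from the paper or any existing system.

```python
# Illustrative only: a hypothetical walk-through of one feedback item moving
# around the loop (collect -> validate -> act -> report back). Names are invented.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class FeedbackItem:
    source: str                       # beneficiary group or front-line staff (requirement 1)
    view: str
    validated: bool = False           # interpretation confirmed with beneficiaries (requirement 4)
    remedial_action: Optional[str] = None
    reported_back: bool = False       # loop closed: beneficiaries told what changed

@dataclass
class FeedbackLoop:
    items: List[FeedbackItem] = field(default_factory=list)

    def collect(self, source: str, view: str) -> FeedbackItem:
        item = FeedbackItem(source=source, view=view)
        self.items.append(item)
        return item

    def validate(self, item: FeedbackItem) -> None:
        # In practice this means checking the interpretation with beneficiaries themselves.
        item.validated = True

    def act(self, item: FeedbackItem, action: str) -> None:
        # Requires real-time flexibility in design and management commitment (requirements 2 and 3).
        if not item.validated:
            raise ValueError("validate feedback with beneficiaries before acting on it")
        item.remedial_action = action

    def report_back(self, item: FeedbackItem) -> None:
        item.reported_back = True     # let beneficiaries know what was done with their feedback

    def open_items(self) -> List[FeedbackItem]:
        return [i for i in self.items if not i.reported_back]

# Example walk-through with invented data
loop = FeedbackLoop()
fb = loop.collect("women farmers, district A", "extension visits clash with market days")
loop.validate(fb)
loop.act(fb, "reschedule extension visits to non-market days")
loop.report_back(fb)
print(len(loop.open_items()))         # 0: the loop has been closed for this item
```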

The rationale of this paper is to explain some of the reasons why monitoring does not – yet could, with effect and at reasonable cost – do the following:

1.     Make effective contributions in delivering significant development results
       that matter most to beneficiaries; and

2.     Better understand the ‘theory’ underlying aid programmes through
       monitoring processes and establishing feedback loops, in real time, with
       beneficiaries.15




15   This paper uses the term beneficiary in a collective sense: in relation to either the poor themselves (for aid programmes that deliver support directly to them); or the institutions that serve them (for programmes that support, for example, partner country ministries, NGOs and markets, formal and/or informal).
