Posts filed under ‘Evaluation reporting’
Here is an interesting paper from ALNAP looking at how the understanding and use of evaluation in humanitarian action can be improved:
Harnessing the Power of Evaluation in Humanitarian Action: An initiative to improve understanding and use of evaluation (pdf)
The paper sets out a framework for improving the understanding and use of evaluation in four key areas:
Capacity Area 1: Leadership, culture and structure
• Ensure leadership is supportive of evaluation and monitoring
• Promote an evaluation culture
• Increase the internal demand for evaluation information
• Create organisational structures that promote evaluation
Capacity Area 2: Evaluation purpose and policy
• Clarify the purpose of evaluation (accountability, audit, learning)
• Clearly articulate evaluation policy
• Ensure evaluation processes are timely and form an integral part of the decision-making cycle
• Emphasise quality not quantity
Capacity Area 3: Evaluation processes and systems
• Develop a strategic approach to selecting what should be evaluated
• Involve key stakeholders throughout the process
• Use both internal and external personnel to encourage a culture of evaluation
• Improve the technical quality of the evaluation process
• Assign high priority to effective dissemination of findings, including through new media (video, web)
• Ensure there is a management response to evaluations
• Carry out periodic meta-evaluations and evaluation syntheses, and review recommendations
Capacity Area 4: Supporting processes and mechanisms
• Improve monitoring throughout the programme cycle
• Provide the necessary human resources and incentive structures
• Secure adequate financial resources
• Understand and take advantage of the external environment:
- Use peer networks to encourage change
- Engage with media demands for information
- Engage with donors on their evaluation needs
I’ve written previously on the issue of how to make sure that evaluation results are used (or at least considered…). Here is a new publication, Making Evaluations Matter: A Practical Guide for Evaluators (pdf), from the Centre for Development Innovation that goes into this issue in much greater depth.
They give four general reasons why evaluations are often not used – evaluations that:
- fail to focus on intended use by intended users and are not designed to fit the context and situation;
- do not focus on the most important issues, resulting in low relevance;
- are poorly understood by stakeholders;
- fail to keep stakeholders informed and involved during the process and when design alterations are necessary.
I think the first and last reasons are particularly pertinent. We often don’t have enough insight into how evaluation results will be used – and we also fail to inform and involve stakeholders during the actual evaluation.
Here is an interesting paper from the International Initiative for Impact Evaluation that focuses on how the results of impact evaluations influence policy:
A main conclusion of the paper is as follows:
“The paper concludes that, ultimately, the fulfillment of policy change based on the results of impact evaluations is determined by the interplay of the policy influence objectives with the factors that affect the supply and demand of research in the policymaking process.”
I am always interested in new ways to present evaluation results.
Here is a very engaging and accessible format to present evaluation results – photostories.
This photostory (pdf) tells the story of an evaluation of a reconciliation programme in Kenya.
One of the challenges faced in evaluation is presenting evaluation findings in a way that facilitates their use, as I’ve written about before.
Now here’s an interesting idea – presenting evaluation results in an interactive map. This example comes from the monitoring, evaluation and communications work for an agriculture development program in Afghanistan. Here is a screenshot of the map:
Interactive map produced by Jasha Levenson of Cartametrix.
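For anyone curious about the mechanics, here is a minimal sketch – my own illustration, not the Cartametrix implementation – of how evaluation findings could be placed on an interactive map using the Python folium library. The site names, coordinates and findings are invented for the example.

```python
# Illustrative sketch only: plot hypothetical evaluation findings on an interactive map.
# Assumes the folium library; site names, coordinates and findings are made up.
import folium

sites = [
    {"name": "Site A", "lat": 34.52, "lon": 69.18, "finding": "Yields up against baseline"},
    {"name": "Site B", "lat": 34.35, "lon": 62.20, "finding": "Training targets partially met"},
]

# Centre the map roughly on the programme area
m = folium.Map(location=[34.5, 66.0], zoom_start=6)

for site in sites:
    folium.Marker(
        location=[site["lat"], site["lon"]],
        popup=f"{site['name']}: {site['finding']}",  # shown when the marker is clicked
        tooltip=site["name"],
    ).add_to(m)

# Produces a self-contained HTML page that can be shared or embedded alongside the report
m.save("evaluation_map.html")
```

The resulting HTML file stands on its own, so it can be emailed to stakeholders or embedded in a web page next to the written report.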
As I’ve written about previously, evaluation reports are notoriously under-read and underutilized. Aside from the executive summary, evaluators need to find ways of presenting their key findings in a summarized format that makes them attractive to their publics.
Beyond the predictable PowerPoint summary (which can still serve a purpose), some of the techniques I have used – and that were well received by these publics – are as follows:
Multimedia video: using interviews, graphs and quotes in a video to bring the evaluation results “to life” (see this post for an example)
Scorecard or “snapshot”: highlighting the key findings graphically on one page. See this example:
Summary sheet: summarizing the main findings, conclusions and recommendations in a fact sheet of 2-4 pages. See this example: Summary Sheet (pdf)
Findings table: summarizing the main findings, particularly useful where the evaluation is responding to pre-set objectives and indicators, as per this example (see also the sketch below):
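For the findings table in particular, here is a minimal sketch – purely illustrative, not taken from the example above – of how such a table could be assembled with the Python pandas library, using invented objectives, indicators and findings.

```python
# Illustrative sketch only: build a findings table keyed to pre-set objectives and indicators.
# Assumes the pandas library; all objectives, targets and findings below are invented.
import pandas as pd

findings = pd.DataFrame(
    [
        {"Objective": "Improve access", "Indicator": "Households reached",
         "Target": "5,000", "Result": "4,600",
         "Finding": "Largely achieved; remote districts under-served"},
        {"Objective": "Build capacity", "Indicator": "Staff trained",
         "Target": "120", "Result": "135",
         "Finding": "Exceeded; follow-up coaching recommended"},
    ]
)

# Export to HTML so the table can be dropped into a report annex or web page
findings.to_html("findings_table.html", index=False)
```

The same table can just as easily be written out to CSV or Excel for an annex, or styled further before sharing.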
I’m always interested to learn of new methods to summarize evaluation findings, so if you have any more ideas, please share them!
I recently conducted a one-day training workshop for the staff of Gellis Communications on communications evaluation. We looked at several aspects, including:
- How to evaluate communication programmes, products and campaigns;
- How to use the “theory of change” concept;
- Methods specific to communication evaluation including expert reviews, network mapping and tracking mechanisms;
- Options for reporting evaluation findings;
- Case studies and examples on all of the above.
Gellis Communications and I are happy to share the presentation slides used during the workshop – just see below. (These were combined with practical exercises – write to me if you would like copies.)
As I’ve written about before, the way in which we present evaluation findings – usually in a long, indigestible report – certainly has its limitations. For some time I’ve been thinking that, with the developments in multimedia, there must be better ways than the written document to communicate evaluation findings – and here it is! We’ve just completed a multimedia video report on the evaluation of the LIFT France conference:
I’ve written previously about what is recommended when putting together a *good* evaluation report.
I came across an interesting fact sheet from the Bruner Foundation on “Using evaluation findings (pdf)”. On page three the authors list eight points to avoid when writing evaluation reports, summarised as follows:
1. Avoid including response rates and problems with your methodology as part of your findings.
2. Avoid reporting both numbers and percents unless one is needed to make the other clear.
3. Avoid listing, in a sentence or a table, all of the response choices for every question on a survey or record review protocol.
4. Avoid reporting your results with excessive precision.
5. Avoid feeling compelled to keep your results in the same order as they appeared on the survey or the interview protocol.
6. Avoid compartmentalizing your results.
7. Avoid feeling compelled to use all of the information you collected.
8. Avoid including any action steps or conclusions that are not clearly developed from your findings.
Disturbed by the state of affairs in evaluation, Professor Cronbach and colleagues wrote 95 theses on reform in evaluation (inspired by Martin Luther’s 95 theses of 1517). They speak of the need for:
“A thoroughgoing transformation. Its priests and patrons have sought from evaluation what it cannot, probably should not, give.”
Although written 28 years ago, the 95 theses (pdf) make many pertinent points that are still valid today.
Here are several favourites that have stood the test of time (no. 75 is my favourite):
9. Commissioners of evaluations complain that the messages from evaluations are not useful, while evaluators complain that the messages are not used.
35. “Evaluate this program” is often a vague charge because a program or a system frequently has no clear boundaries.
49. Communication overload is a common fault; many an evaluation is reported with self-defeating thoroughness.
75. Though the information from an evaluation is typically not used at a foreseeable moment to make a foreseen choice, in many evaluations a deadline set at the start of the study dominates the effort.
95. Scientific quality is not the principal standard; an evaluation should aim to be comprehensible, correct and complete, and credible to partisans on all sides.