Posts filed under ‘General’
For those readers interested in humanitarian assessment and evaluation, this webinar that I am co-hosting may be of interest:
Webinar – Humanitarian Standards – too much of a good thing? on Feb 28, 2013 2:00 PM GMT
Are you interested in driving up the quality and accountability of humanitarian action? The Joint Standards Initiative (JSI) is an exciting collaboration between HAP International, the Sphere Project and People In Aid to work out how to improve standards coherence and, in turn, the quality of humanitarian programmes. This webinar is part of a series of stakeholder consultation events to hear the humanitarian community's views on the use, utility and relevance of humanitarian standards. John Cosgrave will present highlights from two related papers he has written for JSI on this subject, and Robert Schofield (JSI Coordinator) and Glenn O’Neil (JSI Consultant) will facilitate a discussion with webinar participants.
Register here for the webinar:
Download John Cosgrave’s thinkpiece (pdf): Humanitarian Standards – too much of a good thing?
After registering, you will receive a confirmation email containing information about joining the webinar.
Having worked for some years as an evaluator, been part of many different teams and seen other evaluators in action, I’ve had the benefit of seeing how people do evaluation in many different ways. I’ve also had occasion to see evaluators do things – well – in not quite the right way – acting, shall we say, a little pompous, like the illustration above. So I’ve put together the following list of seven things an evaluator should avoid saying… of course, I’ve never been guilty of any of these :~}
1. “For me, the Terms of Reference are only a rough guide for us” I once heard an evaluator say this and the client nearly fell off their chair. Of course, a terms of reference has to be commented on and modified, normally in the inception report, but it remains key to the evaluation. No one likes “evaluation creep”, where the evaluation goes everywhere but fails to answer the questions – which are, of course, often in the terms of reference.
2. “I’ve already written the report in my mind” Ah the number of times I’ve heard this gem when coming out of a first meeting with a client… Even before a scrap of evidence has been collected…
3. “Interesting, in my opinion this is what happened…” I’ve been guilty of this one, where the evaluator elaborates their theory of what works and what doesn’t to a poor interviewee. Whenever I’ve tried it, the person has, nine times out of ten, replied “that’s not how it happened…”.
4. “Don’t worry, I’ve got a long plane trip coming up, I’ll write your report then…” We are all short of time, but a client expects the work to be done seriously… even if you are catching up on the report during that flight, should you say so out loud?
5. “So our initial results are…” Not so much the evaluator’s fault but the pressure on evaluators to deliver initial results before the findings are in. We should avoid jumping to conclusions in the early days of an evaluation as often we find that our initial hunches may be wrong…
6. “This program is so #%$%! Who is running this thing?” As an evaluator you may come across programmes and projects that are less than ideally run. But it helps to be a little diplomatic, as you may be talking to someone who set up and/or manages what you are evaluating. There may even be connections to the programme or project that you are not aware of.
7. “The way this evaluation is managed is just rubbish!” And I’ve also heard this – the evaluator openly and widely criticising the evaluation commissioner who has… employed them… In general, I think part of the success of an evaluation comes down to good collaboration between the evaluators, the commissioner and the programme/project being evaluated.
Know of any more things to avoid saying? Please send them in!
Above fabulous drawing entitled “pompous bastard” by TannerMorrow.
An ongoing debate focuses on how NGOs can measure the impact of their work. The International Initiative for Impact Evaluation (3ie) and Oxfam have recently produced a very interesting paper on this subject:
Using examples from campaigns and other programmes, the paper sets out the challenges and options in evaluating impact and proposes four options for improving impact evaluation:
1) partnering with research institutions to rigorously evaluate “strategic” interventions;
2) pursuing more evidence informed programming;
3) using what evaluation resources they do have more effectively;
4) making modest investments in additional impact evaluation capacity.
View the paper (pdf)
This is the fourth annual survey of the UK project management market, and there are similarities between this population and others working as self-employed consultants in the communications and evaluation fields.
The survey looks at rewards (salaried vs self-employed), working conditions and the economic climate, as well as some issues specific to project managers.
It raises some very interesting points for those interested in the failure of communications and effective monitoring.
A new evaluation website has been launched: Eval Central. It brings together on one website, feeds from different blogs and sites that focus on evaluation (including this one!). Here is an explanation from its creator:
“This experimental site integrates feeds from a variety of evaluation blogs in order to develop a single evaluation news source that can run with very little overhead. Essentially this site is based on a directory but by opting for aggregate feeds, rather than a static list of links, the site becomes a dynamic source for readers on the lookout for new evaluation content. When a reader clicks on any single post, they are taken directly to the source blog.”
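The aggregation approach described above can be sketched in a few lines of Python. This is a minimal illustration, not Eval Central's actual implementation: the feed contents and field names are invented for the example, and a real aggregator would fetch live feeds over HTTP rather than parse inline strings.

```python
# Sketch of a feed aggregator: parse several RSS feeds and merge their
# items into one date-sorted stream, with each item keeping a link back
# to the source blog post.
import xml.etree.ElementTree as ET
from email.utils import parsedate_to_datetime

# Inline sample feeds stand in for URLs fetched over HTTP.
SAMPLE_FEEDS = [
    """<rss><channel><title>Blog A</title>
         <item><title>Post 1</title><link>http://a.example/1</link>
               <pubDate>Mon, 01 Feb 2010 10:00:00 GMT</pubDate></item>
       </channel></rss>""",
    """<rss><channel><title>Blog B</title>
         <item><title>Post 2</title><link>http://b.example/2</link>
               <pubDate>Tue, 02 Feb 2010 09:00:00 GMT</pubDate></item>
       </channel></rss>""",
]

def aggregate(feeds):
    """Merge items from several RSS feed documents, newest first."""
    items = []
    for xml_text in feeds:
        channel = ET.fromstring(xml_text).find("channel")
        source = channel.findtext("title")
        for item in channel.findall("item"):
            items.append({
                "source": source,
                "title": item.findtext("title"),
                "link": item.findtext("link"),
                "date": parsedate_to_datetime(item.findtext("pubDate")),
            })
    # Sort like a blog front page: most recent post on top.
    return sorted(items, key=lambda i: i["date"], reverse=True)

for entry in aggregate(SAMPLE_FEEDS):
    print(entry["source"], "-", entry["title"], "->", entry["link"])
```

The point of the design is visible in the last loop: the reader sees one combined stream, but every entry carries the link back to its source blog.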
Below is an extract of the farewell speech of Ms. Inga-Britt Ahlenius, Under-Secretary-General for Internal Oversight Services at the UN. I particularly like her statement that if evaluators do not “tell it as it is” – then no one else will…
“I am aware that some of you are facing challenges to the independence of your work; management in some cases would like to continue to maintain control over the ambit of your work. They want good news, not bad news. So when you have bad news, you learn to tell the bad news in clever ways. Let me tell you a little story.
There is the old story of the Lion King who calls all his subjects to his rather smelly cave and asks them to tell him how his room smells. Nobody dares to do anything, until the dog steps up, sniffs the room and tells the King honestly that it smells. The King devours the dog for his insolence. The monkey then decides to be smarter and tells the King the room smells like roses. The King devours the monkey for his dishonesty and sycophancy. Lastly, with all else in the room trembling with fear, the sly fox steps up and tells the King that he has had a cold for the past few days and cannot smell. The King rewards the fox by making him Prime Minister of his Kingdom.
Now, regardless of the moral of this story – we in this room are NOT to be sly foxes. We are mandated to be dogs! So the question is – how do we survive as dogs when the King asks you if his room smells?
To those of you who are facing hard challenges to your operational independence, and to your professional integrity as evaluators, I would like to remind you of a quote by Dag Hammarskjöld which I now and then have reason to repeat. You will find it engraved in the pavement of Dag Hammarskjöld Plaza at 47th Street and First Avenue –
“Never for the sake of peace and quiet, deny your own experience or convictions”.
Because if you, in your position as the United Nations’ evaluators do not “tell it as it is”, what you believe to be correct, then it is unlikely that anybody else in the UN will. I urge you – do not deny your convictions as evaluators!”
Here’s one for you our readers.
Benchpoint is currently designing a survey for a client. Most of the questions use 5-point Likert scales, with a neutral midpoint of “Neither satisfied nor dissatisfied”.
However, the client wishes to have one question with a 10-point numerical scale, where 9 is extremely satisfied and 0 is extremely dissatisfied.
We say we should stick to the same scale throughout the survey, and that a 5-point descriptive scale is better than a 10-point numerical scale.
What do our readers think?
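One practical wrinkle with mixing scales is that the two question types end up on different ranges, so any side-by-side reporting needs a conversion. A minimal sketch, assuming the 10-point scale runs 0–9 and the 5-point scale is coded 1–5 (the function name is our own, not from any survey tool):

```python
# Hypothetical helper: linearly map a rating on a 0-9 numeric scale
# onto a 1-5 range so it can be reported alongside the 5-point items.
def rescale_0_9_to_1_5(rating: float) -> float:
    """Map 0 -> 1.0 (extremely dissatisfied) and 9 -> 5.0 (extremely satisfied)."""
    if not 0 <= rating <= 9:
        raise ValueError("rating must be between 0 and 9")
    return 1 + 4 * rating / 9

print(rescale_0_9_to_1_5(0))    # 1.0
print(rescale_0_9_to_1_5(9))    # 5.0
print(rescale_0_9_to_1_5(4.5))  # 3.0 (the neutral midpoint)
```

Note one quirk the conversion exposes: an even-numbered 0–9 scale has no single middle point a respondent can tick, whereas the 5-point scale has an explicit neutral option – one more reason to keep the scales consistent.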
Claremont Graduate University (California, USA) has announced its Summer 2010 Professional Development Workshop Series on a variety of topics in evaluation and applied research. From August 20-23, you can participate in them directly in California or join them online for interactive webcasts:
Information and Registration>>