Posts filed under ‘Evaluation tools (surveys, interviews..)’
Bond, the UK alliance of NGOs, has produced an interesting guide on advocacy evaluation:
The guide looks at the challenges of influencing power holders (usually done through activities grouped under the umbrella of “advocacy”) but comes to the conclusion that evaluation is feasible:
it is possible to tell a convincing story of an organisation’s contribution to change through their influencing and campaigning work by breaking down the steps of the process that led to change, and looking at how an organisation has created change at each step.
The guide also sets out these steps and provides examples of advocacy evaluation tools from NGOs including Oxfam, CARE, Transparency International amongst others.
I’ve written previously about the Likert scale and surveys – and received hundreds of enquiries about it. A reader has now pointed me towards this excellent article on survey questions and Likert scales, which adds some interesting points to the discussion.
From my previous post, I listed the following best practices on using the Likert Scale in survey questions:
- More than seven points on a scale is too many.
- Numbered scales are difficult for people to interpret.
- Labelled scales need to be as accurate as possible.
And here are some further points to add drawn from this article:
- Be careful with the choice of words for labels:
“Occasionally” has been found to be very different than “seldom” but relatively close in meaning to “sometimes” (quote from article)
- Include a “don’t know” option for points where people may simply not have an opinion:
“Providing a “don’t know” choice significantly reduced the number of meaningless responses.”
- People will respond more often to those items on the left hand side of the scale:
“There is evidence of a bias towards the left side of the scale”
On that last point, I always write my scales left to right – bad to good… This means that people may tend to select the “bad” ratings more readily. I haven’t found that to be the case (respondents often seem over-positive in their ratings, I feel), but I stand corrected…
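Pulling those practices together, here is a minimal sketch (in Python, with invented labels and responses) of how a labelled 5-point question with a “don’t know” option might be tallied – keeping “don’t know” out of the average so it cannot distort the result:

```python
from collections import Counter

# Hypothetical 5-point labelled scale, following the practices above.
SCALE = {
    "Very dissatisfied": 1,
    "Dissatisfied": 2,
    "Neither satisfied nor dissatisfied": 3,
    "Satisfied": 4,
    "Very satisfied": 5,
}

def summarise(responses):
    """Count every answer, but average only the scored labels,
    excluding "don't know" so it cannot skew the mean."""
    counts = Counter(responses)
    scored = [SCALE[r] for r in responses if r in SCALE]
    mean = sum(scored) / len(scored) if scored else None
    return counts, mean

responses = ["Satisfied", "Don't know", "Very satisfied",
             "Dissatisfied", "Satisfied"]
counts, mean = summarise(responses)
print(counts["Don't know"])  # 1 answer kept out of the average
print(mean)                  # (4 + 5 + 2 + 4) / 4 = 3.75
```

The same tallying works whatever labels you choose – the point is simply that “don’t know” is counted but never scored.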
EU Manual: Evaluating Legislation and Non-Spending Interventions in the Area of Information Society and Media
A very interesting manual published by the European Union:
Despite the wordy title, the manual is really about how to evaluate the effects of legislation and initiatives taken by governments (in this case the regional body, the EU).
The toolbox on page 72 is well worth a look.
Using Google Analytics to track the relative value of your Offer
Some lessons for the communications Evaluation profession
It is some time since I looked at my Google Analytics account. A pity, because it can reveal some dramatic insights into global trends. And the quality and mine-ability of the data is improving month by month.
I wanted to see what was happening in Benchpoint’s main marketplace: specialist online surveys of employee opinion in large companies. So I looked up “employee surveys”. I was surprised (and shocked) to see that Google searches for this term had declined since their peak in 2004 to virtual insignificance.
This was worrying, because our experience is that the sector is alive and well, with growing competition.
On the whole, we advise against general employee surveys, preferring surveys which gain insight into specific areas.
So I contrasted this with a search for “Employee Engagement”, on its own. The opposite trend! This search term has enjoyed steady growth, with the main interest coming from India, Singapore, South Africa, Malaysia, Canada and the USA, in that order.
“Employee engagement surveys”, which first appeared in Q1 2007, also shows a contrarian trend, with most interest coming from India, Canada, the UK and the USA.
Looking at the wider market, here is the chart for the search term “Surveys” – a steady decline since 2007
But contrast this with searches for “Survey Monkey”
Where is all this leading us? Google is remarkably good at recording what’s cool and what’s not, in great detail and in real time. There are plenty of geeks out there who earn good money doing it for the big international consumer companies. And what it tells us is that, more than ever, positioning is key.
Our own field, “Communications Evaluation”, is fairly uncool. Maybe we need to invent a new, sexier descriptor for what we do?
But note, on the chart below, the peaks in the autumn of 2009 and 2010, when the AMEC Measurement Summits were held. Sudden spikes in interest.
This blog and Benchpoint hold the copyright on “Intelligent measurement”, which is holding its own in the visibility and coolness stakes – with this blog giving it a boost way back in 2007…
- Get a Google Analytics account and start monitoring the keywords people are using to search for your business activity, and adapt your website accordingly
- As an interest group/profession, we probably need to adopt a different description of what we do if we wish to maintain visibility and influence. Suggestions anyone? Discuss!
Sorry for such a long post!
I am always interested in learning of different ways to represent data visually.
Well, here is something that will fascinate you if you are also interested in the many possibilities of displaying data.
Visual-literacy.org has produced a fantastic “Periodic Table of Visualization Methods” (reduced version shown above). Inspired by the standard periodic table from chemistry, they have listed virtually every possible type of data visualization and categorised them. The only type I see missing is the “word cloud”.
Here is an interesting article from the Economist about political polling in the US. The article discusses the increasing difficulties in conducting polls or surveys that assess voting intentions in the US.
Most polling companies, in the US and elsewhere, conduct their surveys by calling phone landlines (fixed lines). But fewer and fewer people are using landlines – the article states that some 25% of US residents only have a mobile phone these days. Polling companies often don’t call mobile phones, for various reasons mostly related to cost. So the conclusion is: be careful when looking at survey results based on this traditional approach.
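The arithmetic of that coverage gap is easy to illustrate. A toy calculation (all numbers invented, except the 25% mobile-only share from the article): if mobile-only residents lean differently from landline households, a landline-only poll misses that shift entirely:

```python
# Toy illustration of coverage bias in a landline-only poll.
mobile_only_share = 0.25     # residents a landline poll cannot reach
landline_support = 0.50      # support among landline households (invented)
mobile_only_support = 0.60   # support among mobile-only residents (invented)

# What the landline-only poll reports:
poll_estimate = landline_support

# True population support, weighting each group by its size:
true_support = ((1 - mobile_only_share) * landline_support
                + mobile_only_share * mobile_only_support)

bias = poll_estimate - true_support
print(round(true_support, 3))  # 0.525
print(round(bias, 3))          # -0.025: the poll understates support
                               # by 2.5 percentage points
```

The bias disappears only if the unreachable group happens to answer exactly like the reachable one – which is precisely what pollsters can no longer assume.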
Interestingly, the article did not mention the growth of surveying via the Internet – or the possibility of surveying using smartphones.
This article from the FiveThirtyEight blog provides more insight into the issue – it mentions the growth of Internet polling and is not so pessimistic about the future of traditional surveys.
For evaluation, the debate is interesting as often we use surveying as a tool – and many of the points discussed are relevant to the surveying undertaken for large-scale evaluations.
Using the “theory of change” in evaluation has proven very useful for me – it basically maps out, from activities through to impact, how a given intervention would bring about change.
Here’s one for you our readers.
Benchpoint is currently designing a survey for a client. Most of the questions have 5-point Likert scales, with a midpoint of:
Neither satisfied nor dissatisfied
However the client wishes to have one question with a 10 point numerical scale where 9 is extremely satisfied and 0 is extremely dissatisfied.
We say we should stick to the same scale throughout the survey, and that a 5-point descriptive scale is better than a 10-point numerical scale.
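Mixing scales also complicates the analysis: to compare the odd question with the rest, its 0–9 answers have to be collapsed back into five bands anyway. A minimal sketch of one possible mapping (the two-points-per-band grouping and the labels are our assumptions, not the client’s):

```python
# Hypothetical collapse of a 0-9 numerical answer into five bands.
LABELS = ["Extremely dissatisfied", "Dissatisfied", "Neutral",
          "Satisfied", "Extremely satisfied"]

def to_five_point(score):
    """Map a 0-9 score onto one of five bands,
    two adjacent points per band."""
    if not 0 <= score <= 9:
        raise ValueError("score must be between 0 and 9")
    return LABELS[score // 2]

print(to_five_point(0))  # Extremely dissatisfied
print(to_five_point(5))  # Neutral
print(to_five_point(9))  # Extremely satisfied
```

Any such mapping is arbitrary at the band edges – one more argument for keeping a single scale throughout.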
What do our readers think?
I recently conducted a one day training workshop for the staff of Gellis Communications on communications evaluation. We looked at several aspects including:
- How to evaluate communication programmes, products and campaigns;
- How to use the “theory of change” concept;
- Methods specific to communication evaluation including expert reviews, network mapping and tracking mechanisms;
- Options for reporting evaluation findings;
- Case studies and examples on all of the above.
Gellis Communications and I are happy to share the presentation slides used during the workshop – just see below. (These were combined with practical exercises – write to me if you would like copies.)
Online tools, such as corporate websites, members’ directories or portals, increasingly play an important role in communications strategies. And of course, they are increasingly important to evaluate.
I just concluded an evaluation of an online tool, created to facilitate the exchange of information amongst a specific community. The tool in question, the Central Register of Disaster Management Capacities is managed by the United Nations Office for the Coordination of Humanitarian Affairs.
The evaluation methodology that I used for evaluating this online tool is interesting as it combines:
- Content analysis
- Network mapping
- Online survey
- Expert review
- Web metrics
And for once, you can dig into the methodology and findings as the evaluation report is available publicly: View the full report here (pdf) >>