Posts filed under ‘Evaluation tools (surveys, interviews..)’
I’m currently in Brussels for some evaluation training with Gellis Communications, and the use of the Likert scale in surveys came up in our discussions. As I’ve written about before, the Likert scale (named after its creator, pictured above) is a widely used response scale in surveys. My earlier post covered the importance of labelling the points on the scale and of not using too many points (most people can’t place their opinion on a scale of more than seven). Here are several other issues that have come up recently:
To use an even or odd scale: there is an ongoing debate as to whether a Likert scale should have an odd number of points (five, for example) or an even number (four, for example). Some advocate an odd scale, where respondents have a “neutral” middle point, whereas others prefer to “force” people into a negative or positive position with an even scale (e.g. four points). In addition, the evidence on offering a “don’t know” option is mixed. I personally believe that a “don’t know” option is essential on some scales, where people may simply not have an opinion. However, studies are inconclusive as to whether such an option increases the accuracy of responses.
Left to right or right to left: I always advocate displaying scales from negative to positive, left to right. It seems more logical to me, and some automated survey software marks your answers and calculates the responses for graphs on this basis, i.e. treating the first point as the lowest. But I’ve heard others argue for the opposite – positive to negative, left to right – on the grounds that people will click on the first point by default in online surveys, which I personally don’t believe. I’ve not yet found any academic reference supporting either way, but looking at the examples in academic articles, 95% are written negative to positive, left to right – some evidence in itself!
Mr Likert you have a lot to answer for!
As I’ve written about previously, checklists are an often overlooked and underrated tool for evaluation projects.
In this post, Rick Davies explains the “weighted checklist” and how it can be used in evaluation. “Weighted” means that each item or attribute on the checklist is given an importance – more, the same or less compared to the other items – and this is tallied up into the overall assessment (see an example here).
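To give a rough sketch of how the tallying might work – the item names and weights below are invented for illustration, not taken from Rick Davies’s example:

```python
# Hypothetical weighted checklist: each item has a weight reflecting its
# relative importance, and each is either met (True) or not met (False).
checklist = {
    "clear objectives stated": {"weight": 3, "met": True},
    "baseline data collected": {"weight": 2, "met": False},
    "stakeholders consulted":  {"weight": 1, "met": True},
}

def weighted_score(items):
    """Overall assessment = weighted share of items met."""
    total = sum(i["weight"] for i in items.values())
    met = sum(i["weight"] for i in items.values() if i["met"])
    return met / total

print(f"Overall score: {weighted_score(checklist):.0%}")  # prints "Overall score: 67%"
```

The key idea is simply that a heavily weighted item moves the overall assessment more than a lightly weighted one.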
It seems that people creating surveys don’t always pay attention to granularity issues, as I’ve written about before… Here is another example, from a survey by people who should know better (The Guardian newspaper, no less)…
Now what’s wrong with this? It’s highly unusual to place an MBA at the same level as a PhD: an MBA is at the same educational level as an MA or MSc – a master’s degree. This would make analysis difficult afterwards, as you cannot correctly separate the different levels of education. That’s a granularity issue – placing items at the correct level – not to mention that the scale above gives the impression that an MBA has the same value as a PhD…
Content analysis is a research method to analyse and categorise all sorts of texts and images (from photos to interview transcripts to newspaper articles) with the aim of identifying trends and patterns.
This is usually a labour-intensive task, but it has recently been made easier by smart software that appears to be getting better and better. Here is one tool I came across, Wordle, which searches through a text and gives more prominence to the words appearing most frequently – creating a “word cloud”. A simple idea that can be graphically quite revealing – here is a wordle of this very blog you are reading (click on it to see a larger version):
It’s quite interesting to see the main words that emerge – “communications”, “audience”, “survey”, “analysis” and “materials” – which, for the most part, are accurate descriptions of the focus of the blog.
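The counting that underlies such a cloud is simple content analysis. Here is a minimal sketch (the sample text is invented; a real run would use the full blog corpus):

```python
import re
from collections import Counter

# Hypothetical sample text standing in for the blog corpus.
text = "Communications surveys need analysis; survey analysis informs communications."

# Lower-case, keep only alphabetic words, and ignore very short words.
words = [w for w in re.findall(r"[a-z]+", text.lower()) if len(w) > 3]
counts = Counter(words)

# The most frequent words would get the most prominence in the cloud.
for word, n in counts.most_common(3):
    print(word, n)
```

A tool like Wordle adds stop-word removal and layout on top, but frequency counting is the core of it.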
Try creating your own word cloud on Wordle>>
I discovered Wordle via the Many Eyes website, an interesting site about visualisation.
I’ve written before about survey responses and the use of “don’t know” as an option on a Likert scale. What I said was that in some situations a person may not have an opinion on a subject – and cannot say whether they agree or disagree – so it may be wise to include a “don’t know” option. Well, I just read an interesting article suggesting that people who respond “don’t know” may actually have an opinion – it’s just that they may need more time to develop confidence in, or awareness of, their choice. The article gives an example of how the opinion of undecided people can be accurately predicted by creative means:
In a recent study, 33 residents of an Italian town initially told interviewers that they were undecided about their attitude toward a controversial expansion of a nearby American military base. But researchers found that those people’s opinions could be predicted by measuring how quickly they made automatic associations between photographs of the military base with positive or negative words.
I’ve written previously about resources for putting together surveys and I’m often asked about the “wrongs and rights” of writing questions for surveys.
So I’ve put together a factsheet (pdf) that has 12 hints on writing better survey questions using real examples – both good and bad.
In an earlier post, I also listed other useful guides about surveys.
No doubt you have heard of the Millennium Development Goals (MDGs), eight broad goals on poverty, ill-health, etc., that all countries have agreed to try to reach by 2015.
From a monitoring and evaluation point of view, what is interesting is that these goals are broad, sweeping statements, such as:
Goal 1: Eradicate Extreme Hunger and Poverty
Goal 3: Promote Gender Equality and Empower Women
One could ask: how can these broad goals possibly be monitored and evaluated?
As detailed on this MDGs monitoring website, what has been done is to set specific indicators for each goal, for example:
Goal 3: Promote Gender Equality and Empower Women
Description: Eliminate gender disparity in primary and secondary education, preferably by 2005, and in all levels of education no later than 2015
3.1 Ratios of girls to boys in primary, secondary and tertiary education
3.2 Share of women in wage employment in the non-agricultural sector
3.3 Proportion of seats held by women in national parliament
So from broad goals, the MDGs move to two to seven specific indicators per goal that are being monitored. That’s an interesting approach, as we often see broad goals set by organisations with no attempt then made to actually detail any indicators.
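Part of what makes these indicators monitorable is that each is a simple, computable quantity. As an illustration, indicator 3.1 is just a ratio – the enrolment figures below are invented, not real MDG data:

```python
# Hypothetical enrolment figures for indicator 3.1:
# ratio of girls to boys in primary education.
girls_enrolled = 480_000
boys_enrolled = 520_000

# A ratio of 1.00 would indicate gender parity in enrolment.
gender_parity_ratio = girls_enrolled / boys_enrolled
print(f"Girls-to-boys ratio (primary): {gender_parity_ratio:.2f}")  # prints 0.92
```

Tracking the indicator then means recomputing this ratio from each year’s enrolment statistics and watching whether it moves towards 1.00 by 2015.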
The MDGs monitoring website plays an active role in monitoring these indicators, combining quantitative data (statistics) and qualitative data (case studies) – also an interesting demonstration of how such indicators can be tracked.
Following are some useful resources for writing surveys:
A brief guide to questionnaire development (pdf) – very good guide with some interesting points on types of questions to use.
A guide to good survey design (pdf): Very comprehensive guide including “pitfalls with questions” (go to page 59).
Online survey design guide: a whole website dedicated to writing better online surveys.
20 tips for writing better surveys – good tips on managing surveys and improving response rates.
Measuring networks can have many applications: how influence works, how change happens within a community, how people meet, etc. I’m interested in measuring networks as an indicator of how contacts are established amongst people, particularly at events and conferences, as I’ve written about previously.
In this area, there is a new resource page available on social network analysis and evaluation from M&E news. The page contains many useful resources and examples of network analysis and evaluation for non-profit organisations, education, events and research and development – including one from myself.
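As a toy sketch of what measuring a contact network can involve – the names and contact pairs below are invented – one of the simplest measures is each person’s degree, i.e. how many contacts they established at the event:

```python
# Hypothetical contacts made at a conference (undirected pairs).
contacts = [("Ana", "Ben"), ("Ana", "Carla"), ("Ben", "Carla"), ("Ana", "Dev")]

# Degree: how many contacts each person made.
degree = {}
for a, b in contacts:
    degree[a] = degree.get(a, 0) + 1
    degree[b] = degree.get(b, 0) + 1

# The best-connected attendees surface at the top.
for person, d in sorted(degree.items(), key=lambda kv: -kv[1]):
    print(person, d)
```

Full social network analysis goes much further (centrality measures, clusters, visualisation), but even this simple count gives an indicator of who is building connections.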
(Above image is from a network analysis of a conference, further information is available here>> )