Posts filed under ‘Training evaluation’
During the recent European Evaluation Conference, I saw a very interesting presentation on going beyond the standard approach to training evaluation.
Dr. Jan Ulrich Hense of LMU München presented his research on “Kirkpatrick and beyond: A comprehensive methodology for influential training evaluations” (view Dr Hense’s full presentation here).
As I’ve written about before (well, in a post four years ago…), Donald Kirkpatrick developed a model for training evaluation that focuses on evaluating four levels of impact: reaction, learning, behaviour and results.
Dr Hense offers a new perspective on this model – we could say an updated approach. What’s more, he has tested his ideas in a real training evaluation in a corporate setting.
I particularly like how he considers the “input” aspect (e.g. participants’ motivation) and the context of the training (both of which can strongly influence its outcomes).
View Dr Hense’s presentation on his website.
We often evaluate conferences with their participants just after the event, measuring mostly reactions and learning, as I’ve written about previously.
Wouldn’t it be more interesting to try and measure the longer-term impact of a conference? This is what the International AIDS Society has done for one of its international conferences, measuring impact 14 months after the event – you can view the report (pdf) here.
Their overall assessment of impact was as follows:
“AIDS 2008 had a clear impact on delegates’ work and on their organizations, and that the conference influence has extended far beyond those who attended, thanks to networking, collaboration, knowledge sharing and advocacy at all levels.”
As regular readers will know, I am interested in network mapping and have undertaken projects where I used network mapping to assess networks that emerged as a result of conferences.
Here is quite an interesting tool, Net-Map, an interview-based mapping tool. The creators of this tool state that it is a “tool that helps people understand, visualize, discuss, and improve situations in which many different actors influence outcomes”.
Read more about the tool and view many of the illustrative images here >>
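To give a flavour of what such an actor map can look like in data terms, here is a minimal sketch of my own in Python using the networkx library – it is not part of the Net-Map tool itself, and the actors, links and influence scores are invented for illustration:

```python
# A toy "influence network" in the spirit of Net-Map: actors are nodes,
# directed links show who influences whom, and each actor carries an
# influence score (in a Net-Map interview, the height of an "influence
# tower"). All names and numbers are invented for illustration.
import networkx as nx

G = nx.DiGraph()

# Actors with their perceived influence on the outcome (0 = none, 5 = high).
actors = {"Ministry": 5, "Donor": 4, "NGO": 3, "Community group": 2}
for actor, influence in actors.items():
    G.add_node(actor, influence=influence)

# Directed links, labelled by the kind of relationship.
G.add_edge("Donor", "NGO", link="funds")
G.add_edge("Ministry", "NGO", link="regulates")
G.add_edge("NGO", "Community group", link="advises")

# Two simple questions a map can answer: who is most influential,
# and who is most connected?
for actor in G.nodes:
    print(actor, "- influence:", G.nodes[actor]["influence"],
          "- links:", G.degree(actor))
```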
In the work I do to evaluate conferences and events, I have put together what I believe is a “neat” way of displaying the main results of an evaluation: an event scorecard. In the evaluation of the LIFT conference, held every year in Geneva, Switzerland, the scorecard summarises both qualitative and quantitative results from the survey of attendees. Above you can see a snapshot of the scorecard.
If you are interested, you can view the full scorecard by clicking on the thumbnail image below:
And for the really keen, you can read the full LIFT08 evaluation report (pdf).
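For those curious about the mechanics, the scorecard itself is straightforward to assemble: quantitative ratings are averaged per question and paired with selected quotes. Here is a toy sketch of that step in Python – the questions, ratings and quote are invented, and this is my illustration rather than the actual LIFT workflow:

```python
# Toy sketch: turning raw survey ratings into scorecard lines.
# Questions, ratings and the quote are invented for illustration.
from statistics import mean

ratings_by_question = {
    "Overall satisfaction (1-5)": [4, 5, 3, 4, 5],
    "Quality of speakers (1-5)": [5, 4, 4, 5, 3],
    "Networking opportunities (1-5)": [3, 4, 2, 4, 3],
}

# A representative open-text comment chosen for each question, if any.
selected_quotes = {
    "Overall satisfaction (1-5)": "A great mix of people and ideas.",
}

for question, ratings in ratings_by_question.items():
    line = f"{question}: {mean(ratings):.1f} (n={len(ratings)})"
    if question in selected_quotes:
        line += f' | "{selected_quotes[question]}"'
    print(line)
```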
Greetings from Tashkent, Uzbekistan from where I write this post. I’m here for an evaluation project and off to Bishkek, Kyrgyzstan now.
I’ve just spent a week in Armenia and Georgia (pictured above) for an evaluation project where I interviewed people from a cross-section of society. These are both fascinating countries, if you ever get the chance to visit… During my work there, I wondered: what do people think about evaluators? For this type of on-site evaluation, we show up, ask some questions – and leave – and they may never see us again.
From this experience and others, I’ve tried to interpret how people see evaluators – and I believe they see us in multiple ways, including:
The auditor: you are here to check and control how things are running. Your findings will mean drastic changes for the organisation. Many people see us in this light.
The fixer: you are here to listen to the problems and come up with solutions. You will be instrumental in changing the organisation.
The messenger: you are simply channelling what you hear back to your commissioning organisation. But this is an effective way to pass a message or an opinion to the organisation via a third party.
The researcher: you are interested in knowing what works and what doesn’t. You are looking at what causes what. This is for the greater science and not for anyone in particular.
The tourist: you are simply visiting on a “meet and greet” tour. People don’t really understand why you are visiting and talking to them.
The teacher: you are here to tell people how to do things better. You listen and then tell them how they can improve.
We may have a clear idea of what we are trying to do as evaluators (e.g. to assess the results of programmes and see how they can be improved), but we also have to be aware that people will see us in many different ways and from varied perspectives – which just makes the work more interesting…
Evaluators often use interviews as a primary tool to collect information. Many guides and books exist on interviewing – but not so many for evaluation projects in particular. Here are some hints on interviewing based on my own experiences:
1. Be prepared: No matter how wide-ranging you would like an interview to be, you should as a minimum note down some subjects you would like to cover or particular questions to be answered. A little bit of structure will make the analysis easier.
2. Determine what is key for you to know: Before starting the interview, you might have a number of subjects to cover. It is wise to decide what is key – what are the three to four things you would like to know from every person interviewed? You will often get side-tracked during an interview, and later, going through your notes, you may discover that you forgot to ask about a key piece of information.
3. Explain the purpose: Before launching into questions, explain in broad terms the nature of the evaluation project and how the information from the discussion will be used.
4. Take notes as you discuss: Even if it is just the main points. Do not rely on your memory – after you have done several interviews, you may mix up some of the responses. Once the interview has concluded, try to expand your notes on the main points raised. Of course, recording and then transcribing interviews is recommended but not always possible.
5. Take notes about other matters: It’s important to note down not only what a person says but how they say it – look out for body language, signs of frustration, enthusiasm, etc. I would normally note any points of this nature at the end of my interview notes. This also helps anyone else who reads your notes to understand the context.
6. Don’t offer your own opinion or indicate a bias: Your main role is to gather information and you shouldn’t try to defend a project or enter into a debate with an interviewee. Remember, listening is key!
7. Have interviewees define terms: If someone says “I’m not happy with the situation”, you have understood that they are not happy but not much more. Have them define what they are not happy about. The same goes if an interviewee says “we need more support” – ask them to define what they mean by “support”.
8. Ask for clarification, details and examples: Such as “why is that so?”, “can you provide me with an example?”, “can you take me through the steps of that?” etc.
Hope these hints are of use.
Organisations often focus on evaluating the “outputs” of their activities (what they produce) and not on “outcomes” (what their activities actually achieve), as I’ve written about before. Many international organisations and NGOs have now adopted a “results-based management” approach involving the setting of time-bound measurable objectives which aim to focus on outcomes rather than outputs – as outcomes are ultimately a better measure of whether an activity has actually changed anything or not.
Has this approach been successful? A new report from the UN’s development agency (UNDP) indicates that the focus is still on outputs rather than outcomes, as the link between the two is not clear. As they write:
“The attempt to shift monitoring focus from outputs to outcomes failed for several reasons…For projects to contribute to outcomes there needs to be a convincing chain of results or causal path. Despite familiarity with tools such as the logframe, no new methods were developed to help country staff plan and demonstrate these linkages and handle projects collectively towards a common monitorable outcome.”
Interestingly, they highlight the lack of clarity in linking outputs to outcomes – in showing a causal path between the two. For example, it is difficult to show that something I planned and implemented (e.g. a staff training programme – an output) led to a desirable result (e.g. better performance of the organisation – an outcome).
One conclusion we can draw from this study is that we do need more tools to help us establish the link between outputs and outcomes – that would certainly be a great advance.
Read the full UN report here >>
Further to my earlier post on ten tips for better web surveys, the email that people receive inviting them to complete an online survey is an important factor in persuading people to complete the survey – or not. Following are some recommended practices and a model email to help you with this task:
1. Explain briefly why you want their input: it’s important that people know why you are asking for their opinion or feedback on a given subject. State this clearly at the beginning of your email, e.g. “As a client of XYZ, we would appreciate your feedback on products that you have purchased from us”.
2. Tell people who you are: it’s important that people know who you are (so they can assess whether they want to contribute or not). Even if you are a marketing firm conducting the research on behalf of a client, this can be stated in the email as a boilerplate message (see example below). In addition, the name and contact details of a “real” person signing off on the email will help.
3. Tell people how long it will take: quite simply, “this survey will take you some 10 minutes to complete”. But don’t underestimate – people do get upset if you tell them it will take 10 minutes and 30 minutes later they are still going through your survey…
4. Make sure your survey link is clickable: survey software often generates very long links for individual surveys. You can often get around this by masking the link, like this: “click to go to survey >>”. However, some email systems do not render masked links correctly, so you may be better off copying the full link into the email, as in the example below. Also send your email invitation to yourself as a test, so you can click on the survey link to make sure it works.
5. Reassure people about their privacy and confidentiality: people have to be reassured that their personal data and opinions will not be misused. A sentence covering these points should be found in the email text and repeated on the first page of the web survey (also check local legal requirements on this issue).
6. Take care with the “From”, “To” and “Subject”: If possible, the email address in the “From” field should be that of a real person – if your survey comes from email@example.com, it may end up in many people’s spam folders. The “To” field should contain an individual email address only – we still receive invitations where we can see hundreds of addresses in the “To” field, which doesn’t instill confidence as to how personal data will be used. The “Subject” is important too – you need something short and straight to the point (see example below). Avoid spam-triggering terms such as “win” or “prize”.
7. Keep it short: It’s easy to fall into the trap of over-explaining your survey and burying the link somewhere in the email text or right at the bottom. Keep your text brief – most people will decide in seconds whether they want to participate – and they need to be able to see quickly why they should, for whom, how long it will take and how (“Where is the survey link?!”).
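Several of these points – one recipient per “To”, a real person in the “From”, a short subject, the full link in plain text – lend themselves to automation. Here is a minimal sketch using Python’s standard library; the addresses, SMTP host and survey link are placeholders of my own, not real services:

```python
# Minimal sketch of sending individual survey invitations, following the
# tips above. Addresses, SMTP host and survey link are placeholders.
import smtplib
from email.message import EmailMessage

SURVEY_LINK = "https://example.com/survey?id={token}"        # placeholder
recipients = [("Jane Doe", "jane@example.com", "token123")]  # placeholder

for name, address, token in recipients:
    msg = EmailMessage()
    msg["From"] = "Glenn O'Neil <glenn@example.com>"  # a real person, not a no-reply
    msg["To"] = address                               # one recipient per email
    msg["Subject"] = "Your feedback on the XYZ Summit"  # short, to the point
    msg.set_content(
        f"Dear {name},\n\n"
        "We would appreciate your feedback on the XYZ Summit. "
        "This survey will take some 10 minutes to complete. "
        "All replies are anonymous and will be treated confidentially.\n\n"
        f"{SURVEY_LINK.format(token=token)}\n"  # full link in plain text
    )
    with smtplib.SMTP("smtp.example.com") as server:  # placeholder host
        server.send_message(msg)
```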
Model email invitation:
On behalf of XYZ, we thank you for your participation in the XYZ Summit.
We would very much appreciate your feedback on the Summit by completing a brief online survey. This survey will take some 10 minutes to complete. All replies are anonymous and will be treated confidentially.
To complete the survey, please click here >>
If this link does not work, please copy and paste the following link into your browser:
Thank you in advance; your feedback is very valuable to us.
tel: +1 123 456 789
Benchpoint has been commissioned by XYZ to undertake this survey. Please contact Glenn O’Neil of Benchpoint Ltd. if you have any questions: firstname.lastname@example.org
The following article from Quirks Marketing Research Review also contains some good tips on writing email invitations.
Dr Michael Scriven of the Evaluation Centre of Western Michigan University describes the different types of checklists and how good checklists are put together. In particular, I like his list of the seven values of checklists, which I summarise as follows:
- Reduce the chance of forgetting to check something important
- Are easier for the lay stakeholder to understand and evaluate
- Reduce the “halo effect” – they force an evaluator to look at all criteria and not be overwhelmed by one highly valued feature
- Reduce the “Rorschach effect” – the tendency to see what one wants to see in a mass of data – by making evaluators look at all dimensions
- Avoid criteria being counted twice or given too much importance
- Summarise a huge amount of professional knowledge and experience
- Assist in evaluating what we cannot explain
As Dr Scriven points out, checklists are very useful tools for thinking through the “performance criteria” of all kinds of processes, projects or occurrences, e.g. what are the key criteria that make a good trainer – and which criteria are more important than others?
Often in evaluation, we are asked to evaluate projects and programmes from several different perspectives: that of the end user, the implementer, or an external specialist or “expert”. I always favour the idea that evaluation represents the target audience’s point of view – as is often the case in evaluating training or communications programmes, we are trying to explain the effects of a given programme or project on target audiences. However, a complementary point of view from an “expert” can often be useful. A simple example: imagine you are assessing a company website – a useful comparison would be the feedback from site visitors set against that of an “expert” who examines the website and gives his/her opinion.
However, the opinions of “experts” are often mixed in with feedback from audiences and come across as unstructured opinions and impressions. A way of avoiding this is for “experts” to use checklists – a structured way to assess the overall merit, worth or importance of something.
Now, many would consider checklists a simple tool not worthy of discussion. But a checklist is often a representation of a huge body of knowledge or experience: e.g. how do you determine and describe the key criteria for a successful website?
Most checklists used in evaluation are criteria-of-merit checklists: a series of criteria are established, rated on a standard scale (e.g. very poor to excellent), and weighted equally or not (e.g. one criterion counting the same as, or more than, the next). Here are several examples where checklists could be useful in evaluation (a small scoring sketch follows these examples):
- Evaluating an event: you determine “success criteria” for the event and have several experts use a checklist and then compare results.
- Project implementation: a team of evaluators are interviewing staff/partners on how a project is being implemented. The evaluators use a checklist to assess the progress themselves.
- Evaluating services/products: commonly used, where a checklist is used by a selection panel to determine the most appropriate product/services for their needs.
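As mentioned above, here is a small scoring sketch in Python showing how a criteria-of-merit checklist can combine a standard scale with unequal weights – the criteria, weights and ratings are invented for illustration:

```python
# Toy criteria-of-merit checklist: each criterion is rated on a standard
# 1 (very poor) to 5 (excellent) scale and carries a weight reflecting
# its relative importance. Criteria, weights and ratings are invented.
criteria = [
    # (criterion, weight, expert's rating 1-5)
    ("Ease of navigation", 3, 4),
    ("Quality of content", 5, 3),
    ("Visual design",      2, 5),
]

total_weight = sum(weight for _, weight, _ in criteria)
weighted_sum = sum(weight * rating for _, weight, rating in criteria)

# Normalise back to the 1-5 scale so scores are comparable across experts.
print(f"Overall merit: {weighted_sum / total_weight:.2f} out of 5")
```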
This post by Rick Davies, which discusses the use of checklists in assessing the functioning of health centres, is actually what got me thinking about this subject.