Evaluation is everything

“Everything we do, we should evaluate. Evaluation is everything.”

The fifth and final meeting of Evaluation Exchange was held on 21st March at Project Ability’s project space at 103 Trongate, hoping for inspiration from the artistic space!

We began by reviewing everyone’s progress over the life of the project. Nearly everyone had moved forward. Most had at least devised a plan and some were already conducting focus groups, arranging interviews and designing questionnaires. Even for those who had not seen much concrete progress, there was a real perception that involvement in Evaluation Exchange had increased their own and their organisation’s focus on the importance of evaluation.

“There’s no doubt about it, if I had not come to this we wouldn’t have got anywhere. People are talking about it. People are aware of how important it is.”

There were also plans to further emphasise the importance of evaluation internally: one group member planned to showcase their evaluation work at an upcoming conference, while another expected the techniques and knowledge gained at Evaluation Exchange to be used across their organisation to achieve consistent application of outcomes.

After this general review, we split into small groups to discuss specific concerns and receive feedback about our individual evaluations.

The second half of the session was designed to reflect on the success of Evaluation Exchange and what lessons we had learned and could share about the process.

Supporting peer support

Based on discussions throughout the project, Emma had written up some lessons for supporting peer support. We had seven ‘headlines’:

  • The knowledge really is in the room
  • Dis-own it
  • Keep it concrete
  • Build trust and confidence
  • Don’t be the middleman
  • Stay regular and clear on commitment
  • Chill out

After discussion about whether this was an accurate and complete reflection of what had helped the group to work, we added:

  • Set it up
  • Use processes that work

When the group was happy with the headlines we moved on to prioritise which themes were most important. We used a diamond ranking exercise, where categories are rated in order of importance using a diamond template, resulting in a single most important category, two categories in second place, three in third, two in fourth and one in fifth (the least important) – see picture below.

Diamond ranking is also a useful way of ensuring that all views in the room are included and the final ordering is reached by consensus. This works by splitting into pairs (although with more time, or where there is likely to be significant disagreement, you can start with individuals) and each pair prioritising the categories. The pairs then merge into two groups of four and negotiate a new order based on their original rankings, then the groups of four come together to negotiate what was, for our group, the final order.
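
For anyone who prefers to see the shape of the exercise written down, here is a minimal sketch in Python of the diamond template and a simple check that a proposed ranking fits it. The ordering shown is hypothetical, not the group’s actual result – the exercise itself was done on paper.

```python
# A minimal sketch, purely illustrative: the diamond template is nine items
# arranged in tiers of 1, 2, 3, 2 and 1, from most to least important.

DIAMOND_SHAPE = [1, 2, 3, 2, 1]  # tier sizes, top (most important) first

def is_valid_diamond(tiers):
    """Check that a proposed ranking fits the 1-2-3-2-1 diamond template."""
    return [len(tier) for tier in tiers] == DIAMOND_SHAPE

# One pair's ranking of the nine headlines (a hypothetical ordering,
# not the group's actual consensus).
pair_a = [
    ["The knowledge really is in the room"],
    ["Build trust and confidence", "Keep it concrete"],
    ["Set it up", "Use processes that work", "Stay regular and clear on commitment"],
    ["Dis-own it", "Don't be the middleman"],
    ["Chill out"],
]
assert is_valid_diamond(pair_a)
```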

The final part of this session was for the group to rate how well Evaluation Exchange had performed on each of the elements we had identified as being important to a successful peer support group. We did this by using a traffic light system, where green meant Great, amber meant OK and red meant Not So Good. As you can see from the picture, the project did pretty well!

Self-evaluating… ourselves!

We also wanted to look at how well Evaluation Exchange had worked in areas other than peer support. Specifically, we wondered what participants had learned from the experience, whether it had met their expectations and whether it had had any wider impact on their organisation. To look at this explicitly, we asked the members of the group to interview each other using three questions. These were:

  • What has been the key learning point for you from being part of Evaluation Exchange?
  • Has Evaluation Exchange met your initial expectations?
  • Have you made an impact with your organisation because of your participation in Evaluation Exchange?

The answers overall were very positive, showing that Evaluation Exchange had improved participants’ perceptions of evaluation, increased their knowledge, and helped them to progress with their evaluations. It had also, in some cases, been a catalyst for improved knowledge and prioritisation of evaluation at an organisational level. These quotes give a flavour of what the group thought of being involved.

“[Evaluation Exchange] has led to focus and a clear plan. It’s been beneficial meeting members of other organisations. Maybe surpassed expectations.”

“I learnt a lot from this group that I wouldn’t necessarily have got from a training session.”

“Creativity and fun. I’m now sure that is my approach. […] I used to think of evaluation as a black hole, but now think it can be fun.”

“Feel that there is recognition by the Council and Capability Scotland because of the association with IRISS and ESS.”

What now?

So you can see from this brief overview that the project has been, by all accounts, an all-round success! We are thrilled about how well it went and excited about looking in more detail at what went well, what the group has learned and what we can share.

Watch this space for further reflections on what we’ve learnt from the process, our own evaluation of Evaluation Exchange, and a tool to help people wanting to set up, run or participate in a peer support network.

Record the Unexpected

One of the early criticisms of the outcomes approach was that organisations would find supporting evidence for their positive outcome statements regardless of the reality of the situation, and would ignore everything else. This fear proved largely unfounded, but keeping the potential pitfall in mind isn’t a bad idea. We must never allow our outcomes to become a straitjacket that prevents us from capturing the reality of our work.

More and more funders are telling us that they don’t want the reporting fairy tale: Once upon a time we did this and everyone lived happily ever after! They want the real picture of the difference their support is making… the successes, the challenges and the learning. Alongside expressing this desire, some of the more progressive funders have started to address the issue of ‘Funding Assurance’. Working towards the “Harmonising Reporting” principles, helping their funded organisations to improve their evaluation skills, or simply being clear about what they want to know are just some of the ways that funders are responding to this key challenge expressed by voluntary organisations.

What we are really talking about here is trust and trust requires honesty from funder and funded alike. So when it comes to reporting… tell the whole story. Show the progress you have made towards achieving your outcomes, record and report on all unexpected outcomes from your work, talk about the challenges you faced along the way and critically… demonstrate the learning that will inform your future plans as well as your future evaluations.

Tom Scott

Tom is the Training Officer for Evaluation Support Scotland, a charity that works with voluntary organisations and funders so that they can evaluate what they do, learn from that evaluation and deliver better services. You can find evaluation tools, support guides and templates on the ESS website.

Evaluation is a journey

The third meeting of Evaluation Exchange was held on 23rd January 2013 at TouchBase, hosted by Sense Scotland in Glasgow. It’s a great, fully accessible space and I’d thoroughly recommend considering it for meetings.

Everyone highlighted positive progress overall, and we were keen to repeat our successful format from the previous meeting, which had capitalised on the knowledge in the room and put peer support at the heart of the process. We quickly split into small groups, with a short period of dedicated time for each person to get support and feedback on their evaluation.

Emerging from these discussions were issues such as capturing unintended outcomes, tensions between different types of outcomes (e.g. soft/hard, individual/service, service/funder) and data (administrative/statistical/qualitative/stories), the need to be pragmatic, the question of attribution (how do we know it was us that made the difference?) and how to present the results of our evaluations.

We then reviewed the evaluation steps: moving from defining an outcome, to identifying indicators, to deciding how to measure those indicators. You can look at these support guides from ESS to refresh these steps. Through this lens, we worked through another outcome from one of the Evaluation Exchange members, this time one from PLUS Stirling’s Direct Short Break service, which provides short breaks to families and carers using an online booking system and wants to know whether children and families feel in control of the service.

Credible self-evaluation?

During this process we started to ask questions about the credibility of our evaluations – how can we be sure they will be seen as robust, credible and impartial? Particularly if, as self-evaluators, we are undertaking them ourselves rather than getting an outside perspective. We agreed that we could increase the credibility of our work by:
•    Being open and transparent about what we have done in our evaluation
•    Reporting any known shortcomings or gaps in our work
•    Ensuring that all relevant viewpoints and data sources are included
•    Actively encouraging honest responses from the people we support and other stakeholders
•    Consistently highlighting learning points for improvement as well as positive results
•    Having confidence ourselves that we have been as impartial as possible

We agreed to focus on this as a topic in a future session of Evaluation Exchange.

What makes good peer support?

We were interested in exploring different ways of receiving peer support. The group felt that in general they preferred an informal method of support that could be either face-to-face or via the internet.

The group included email in their definition of ‘the internet’ – and they felt that day-to-day they were more likely to email one or two colleagues informally than post on an open forum. They felt that the biggest benefits of peer support came through small personal relationships, which were cultivated through the building up of trust and shared goals. They were also keen on the idea of being able to access support as it was needed, rather than blocking out time for meetings on an on-going basis.

That said, the group also felt that regular meetings and belonging to a formal, closed group had helped them progress their evaluations by keeping it on their priority list as they knew that they had another meeting coming up, where they would be asked how they were getting on.

There was no appetite for an open group, which would lack some of the features that made a closed group with a static membership successful: continuity, established trust, familiarity with each other’s work, shared understanding and language, and the fact that personal/professional relationships had been built over time. It was acknowledged that over a long time period a closed group can carry risks, as members drop out and move on and the membership is not renewed.

Seeing the Big Picture – How are we doing?

In order to evaluate the session in a creative way, Tom showed the group a map with a number of features and asked everyone to “choose a vehicle, location or anything else that represents where you are on the evaluation journey”. We used The Big Picture map, produced by SCVO.

Here are some of the responses:

  • Pink car next to the Library and Bank – “I chose that car because I feel like I am gathering everything I need just now like information and support”
  • Hot air balloon – “I am with others, it’s objective, I feel pleasant and supported… and I know there is a direction even though I don’t have the map!”
  • Train at the station – “I am deciding on our direction… and who gets off!”
  • Boat on the river – “I have come by a meandering way but I will get to where I’m going”
  • Hot air balloon – “I’ve got the group, I’ve got the objectives and I’ve got other staff and volunteers… I’m clear on the timescales and the direction – what a doddle!”
  • Double decker bus – “I’m on the journey – not quite sure where I’m going yet but I’m not on the roundabout going in circles”.

Why not look at the map and reflect on where you are on your own evaluation journey and how you can move forward?

 

Discovering the Story

Analysis is a scary word. Qualitative analysis is a scary phrase. Upon reading this phrase I am immediately transported to a lonely desk in a darkened side room where I sit hunched with head in hand… poring over interview responses and client quotes at 11pm on a Sunday. Perhaps your picture is not quite as bleak! My picture has certainly improved over the years as I have learned more about evaluation and analysis and how valuable the process can be in telling the story of my clients. In evaluation, analysis is the part of the process that needs time set aside to allow us to discover the meaning behind the messages. Taking the time to make sense of the information we collect improves our reports and leads to better activities and outcomes for our clients. It can also provide a welcome morale boost by reminding us, in rich colour, of the difference we are making.

We have collected data from our clients using multiple methods and now a report is due. We need to discover the story so that we can tell it. This process often starts with theme generation. This is where you sift through a collection of responses (any qualitative data) and identify similarities. The responses can then be grouped by their commonality – this is called coding. Typically this word-based analysis is done by counting word repetition or identifying key words in the context of our outcomes. Another word-based analysis technique that is useful for the voluntary sector is looking for key indigenous terms. The idea is that human experience is marked by ‘tribal vocabulary’, or using words and phrases in a ‘localised’ way. This method fits well with the idea of participatory appraisal – people telling their own story in their own way using their own language.
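
If your responses happen to be stored electronically, the word-repetition step can even be roughed out in a few lines of code. The sketch below (in Python, with made-up responses and an illustrative stop-word list) simply counts how often each word appears across a set of responses to suggest candidate themes – a starting point for coding, not a substitute for actually reading what people said.

```python
# A minimal sketch of the word-repetition step: count how often each word
# appears across a set of responses, ignoring very common words, to surface
# candidate themes for coding. Responses and stop words are illustrative.

from collections import Counter
import re

STOP_WORDS = {"the", "a", "an", "and", "i", "to", "of", "it", "was", "is", "me", "my"}

def candidate_themes(responses, top_n=10):
    """Return the most frequently repeated words across all responses."""
    words = []
    for response in responses:
        words += [w for w in re.findall(r"[a-z']+", response.lower())
                  if w not in STOP_WORDS]
    return Counter(words).most_common(top_n)

responses = [
    "I feel more confident speaking to staff",
    "The group helped my confidence and I made friends",
    "Staff listened to me and I felt supported",
]
print(candidate_themes(responses))  # e.g. 'staff' and 'confidence' appear repeatedly
```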

While there are other methods of qualitative analysis the ones I described above are simple, clear and not overly time consuming. If you have more time there are plenty of ways to expand your analysis. Scrutinising your information by comparing it to other sets will provide more insights and noting information that doesn’t fit easily into identified themes can provide evidence for unexpected outcomes. Also, examining in detail why a defined outcome appears unsupported can yield interesting results… much can be learned from a text by what is not mentioned. All of the above will add value but your evaluation has to be appropriate to the size and scope of your organisation. Don’t overburden yourself with good intentions! Also, don’t do this on your own! Allowing different people to formulate ideas independently and then come to agreement as a group adds rigour to your findings. The “Evaluation Springboard” website has a basic ‘how-to’ guide that includes some tips on making sense of collected information. Finally, remember to keep a copy of your outcomes handy and you won’t go far wrong!

Tom Scott

Tom is the Training Officer for Evaluation Support Scotland, a charity that works with voluntary organisations and funders so that they can evaluate what they do, learn from that evaluation and deliver better services. You can find advice on analysing and reporting and other supportive resources such as guides and templates on the ESS website.

It’s beginning to look a lot like… peer support

 

On 5th December the Evaluation Exchange group met again for a half-day session, hosted by Capability Scotland (thanks, Sue!). We had a slightly reduced number attending as we were unfortunately beset by winter illnesses, but those who attended felt the day was really productive.

Making progress

Everyone felt that their evaluations had moved on since the first meeting, and that membership of the peer support network had in some respects prompted this. This was partly due to the on-going contact between participants (through the online space Basecamp) and partly due to the motivation of knowing that we’d be reporting back on our progress at the next meeting.

“It’s helpful to have contact in between […] gentle prompts.”

For others, involvement in the project had sparked new ways of thinking about evaluation for themselves and their organisations – particularly those who were now thinking about undertaking user-led evaluations.

“[User-led evaluation is] a revelation, we’ve never done it before […] a big shift, but in a very good way.”

Participation in Evaluation Exchange had also generated enthusiasm for evaluation within members’ organisations. One organisation had received great interest in the project following a mention in their staff newsletter, while discussions at a team meeting in another organisation had led to a desire to do peer evaluation within that organisation.

Supporting each other

We were keen to make sure that the expertise of the participants was used to the fullest in the meetings in order to really embed the peer support element of this project. At this session this was accomplished in three ways. First the participants split off into small groups where each individual was allocated a set amount of time to present the challenges or issues they were encountering in their project and receive advice, feedback and ideas from the others in their group. Participants knew to expect this in advance and were encouraged to think about where they most wanted input. This was very successful and the group was keen to repeat this at all future sessions.

“Really exciting looking at others’ outcomes – you see with clear eyes.”

Second, we had planned a structured sharing exercise, where the group listed a number of aspects of evaluation they wanted to know more about, then rated their own expertise on each of these. Those with more expertise in each area were then asked to share this with the group which generated further discussion. As part of this, the group members wrote up on a flip chart all the ‘creative’ evaluation methods they knew of or had used and we then had time for people to ask about those they wanted to know more about.

 

Third, during the input on developing soft outcomes (which we had agreed, at the end of the previous session, to have as a theme for session 2) we used a soft outcome that one of the group members wanted to measure as an illustration of how to turn an outcome into measurable indicators. Tom explained these key steps in developing indicators:

  • Define your outcome – it should include the elements WHO, WHAT, HOW
  • Ask yourself “If this outcome happened what would it look like in real life?”
    • Supplementary questions could include: what would we see/know/do/think etc. as a result of this outcome being achieved?
  • Write down all the things you come up with, without censorship
    • Some of these may need to be broken down further before you can use them as indicators
  • Choose around 5 priority indicators to measure, trying to take a spread of those that:
    • Fit best
    • Are really important
    • Are easy to measure

We also discussed making a list of all the potential stakeholders that you might want to get a view from and ensuring that the key ones are covered by your indicators.
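
To make the steps above concrete, here is a small written-up example, sketched in Python purely for structure. The outcome, the brainstormed list and the shortlist are entirely hypothetical and invented for illustration – they do not describe any Evaluation Exchange member’s project.

```python
# A hypothetical write-up of the indicator-development steps: an outcome with
# its WHO / WHAT / HOW elements, the uncensored brainstorm, and a shortlist of
# around five priority indicators chosen for fit, importance and ease of
# measurement, with a note of which stakeholder or source each draws on.

outcome = {
    "who": "young carers attending the weekly group",        # WHO
    "what": "feel less isolated",                             # WHAT
    "how": "through peer activities and one-to-one support",  # HOW
}

brainstorm = [
    "attend the group regularly",
    "say they have someone to talk to",
    "make plans with other group members outside sessions",
    "report feeling less alone in follow-up conversations",
    "families notice a change in mood at home",
]

priority_indicators = {
    "say they have someone to talk to": "young carers",
    "attend the group regularly": "service records",
    "make plans with other group members outside sessions": "young carers",
    "report feeling less alone in follow-up conversations": "key workers",
    "families notice a change in mood at home": "families",
}
```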

The group was really enthused by this process and, in evaluating the day, said it was one of the most useful elements. So we agreed that over the course of the future sessions we would work as a group on one outcome from each participant. At January’s session we plan to take this to the next step and, once we have defined indicators, move on to the question of how to measure each one.


Benefits of peer support for self-evaluation

A key outcome of the project is to look at how peer support can be valuable in enabling organisations to be self-evaluative. At this early stage this (rather than training style input or expert advice from IRISS or ESS) does seem to be a driver in the participants’ perceived success of the project. In discussing the usefulness of peer support the group mentioned that sharing with others and coming together as a group helps in several ways:

  • Helps to think things through with others
  • Having other people question you (positively) helps you think more clearly or differently about your evaluation issues and challenges
  • You can apply other people’s processes to your own evaluation
  • Contact in between meetings helps to prompt you to keep thinking about your evaluation and not let it slip off the radar
  • Similarly, knowing you have to bring something to discuss with the group keeps the evaluation on your priority list

It’s also important to note, however, that some of the benefits mentioned (including those above) relate to the fact that participating in the group gives people a space to reflect on evaluation and protected time to work on it. This, while unsurprising, provides food for thought when we come to look at how peer support networks can become self-sustaining and what might be needed to allow their creation and existence in the longer term.

I’m also starting to think about other issues related to a successful peer support network for self-evaluation. Do such networks need to be restricted to a certain size to work? Are face-to-face meetings crucial for success and, if so, how important are online (or other) means of keeping in touch in between meetings? Do groups of this type, by their nature, have a limited lifespan, given that the evaluations being carried out will eventually be completed? These are some of my first thoughts and it would be great to hear other reflections on this.

A place for ‘Creative Evaluation’?

 

Robust, dispassionate, scientific

 

These are not words typically associated with creative methods of evaluation… tools that allow people to contribute to evaluation in an active manner using participatory, visual or creative verbal means. Why then am I writing a blog encouraging the use of these tools? Why am I not leaving the relationship map, evaluation wheel and video diary far behind me?

I am an advocate of creative evaluation because one size never fits all. As evaluators we have to consider the abilities and needs of the people whose experience or service we want to evaluate, the environment that we are working in and what we really want to know. How can we really capture people’s views, particularly if they are very young, have communication difficulties or come from a ‘seldom heard’ group? How can we ensure that we are ‘working with’ rather than ‘doing to’ the groups with whom our evaluations are concerned? Is a validated questionnaire delivered by an independent party, or another ‘traditional’ approach the best or only method? Which methods are really going to tell us something meaningful and useful about the difference we are making?

If the goal is to gather a large amount of anonymous information then a questionnaire is the best tool for that job. If we need to explore personal issues in a controlled 1-2-1 setting then an interview is the appropriate method. If we want to make discoveries about people’s journeys, if we want to understand the contribution of our work to our desired outcomes, if we want to tell our combined story for our peers, our funders and the policy makers then creative evaluation has a part to play.

Creative tools can give you meaningful data if they are clearly linked to your outcomes. Consistent use of these tools will allow people supported by services to reflect on their experience in a way that is comfortable and accessible for them. Molly Engle of Oregon State University develops this point in her blog, Evaluation is an everyday activity. If your work is process focussed in nature then your evaluation should be also. If your work is exploratory and unpredictable… (and fun?) then your evaluation should be also.

 

This blog post was contributed by Tom Scott, Evaluation Support Scotland

Tom is the Training Officer for Evaluation Support Scotland (ESS), a charity that works with voluntary organisations and funders so that they can evaluate what they do, learn from that evaluation and deliver better services. You can find many examples of creative evaluation tools and other supportive resources such as guides and templates on the ESS website. ESS is partnering with IRISS on Evaluation Exchange.

Evaluation Exchange has started – So what?

The first meeting of Evaluation Exchange, the peer support group for self-evaluators facilitated by IRISS (Institute for Research and Innovation in Social Services) and ESS (Evaluation Support Scotland), was held on Wednesday 31st October and was, by all accounts, a really successful, productive meeting.

Our ten peer-evaluators come from the third and statutory sectors and are working on evaluating projects in fields ranging from substance misuse to supporting people with communication difficulties. You can find out more about them here.

The day was split into three main sections: establishing the group and its purpose, general discussion and training around evaluation and specific work on individual evaluation projects.

Getting to know you…

From the outset it was clear that everyone was really engaged with and enthusiastic about the peer support aspect of the group, with people talking about being excited to be open and honest, to work in partnership, to discover answers together and to hear from new people and new perspectives. Challenges were also acknowledged, mainly under the banner of keeping focused and finding the time to prioritise evaluation amid competing demands. Nevertheless, we felt that it was important to make time outside of our meetings to reflect on what we are learning, to check in with each other and to work on our evaluations.

 

We also agreed some criteria for judging the success of the programme. We agreed that our participation would have been successful if:

•    We have a robust evaluation
•    We have an evaluation system in place
•    We can ‘crack outcomes’
•    We have new ideas to take forward in our organisations
•    We can streamline monitoring and evaluation
•    We are energised and excited about evaluation
•    We can motivate or enthuse our colleagues

Evaluation – so what?

Members of the group had a number of reasons for wanting to undertake evaluations including:

•    Showing how well we are doing

“We need to show how good we are”

•    Show we are making a difference to people’s lives

“I feel very strongly, we KNOW it’s making a difference so we want it to continue. So we NEED an evidence base.”

•    Seeing where we might need to get better

“[Evaluation] stops you assuming what is working well for you. You can get complacent when everyone says it’s a great project but this helps you break it down and shine a light on what’s really going on.”

However, for everyone in the group (perhaps unsurprisingly since they had applied to take part in a peer support network!) evaluation was seen to be really important.

“There’s no getting away from it, especially in social work, we work purely on an evidence base”

 “I’m passionate about evaluating our work.”

During the session facilitated by Tom from ESS, the So What? game particularly struck a chord with the group. The game is designed to help you decide whether or not an issue you have decided to evaluate is really an outcome – unless you can answer the question “So what?” convincingly, you are not measuring an outcome but rather an output or an indicator.

This process generated a lot of discussion that touched on some of the more challenging aspects of evaluation. The group talked about the difficulties in making connections between different levels of outcomes – personal, organisational and those from funders or government – and in turn this led on to the fundamental question of who the evaluation is being done for: is it for our own organisations, to help us improve the services we provide, or is it to meet reporting requirements or secure funding? And what is the correct balance between these needs? This also sparked discussion around the challenges of measuring ‘softer’ outcomes and how best to really include the voices of those we are working with in evaluation.

“Makes you wonder if it’s all about the people you work with?”

“Evaluations should make sense to the people we work with – that should be at the centre. But is it? Or is government/funders? Reconciling service and service user outcomes is difficult, it’s a process.”

Specific projects

Each member of the group had come prepared to present for 5 minutes on their project, which we did before splitting into three smaller groups with the purpose of clarifying evaluation questions, deciding what is needed to answer those questions and how Evaluation Exchange can help. The success of this part of the session was mixed. While the work on individual projects led to interesting discussion and reflection, we felt that at this stage clarity on evaluation questions was not wholly achieved. Given the different stages of thinking that the group members began at, this was likely inevitable. The facilitators were also wary of their role in the small groups being perceived as ‘the expert’ and this undermining the crucial peer support aspect of the project.

What are we going to do next?

We worked with the group to identify priorities for future sessions. We agreed that at each future session we would split the time between working on individual projects and addressing key evaluation challenges. Throughout the first session a number of issues and challenges in relation to evaluation were raised and, for the next session on 5th December, the group has agreed to concentrate on:

•    Evaluation methods – traditional and creative
•    Involving service users in evaluation

And in between times we will, of course, be thinking about what we learned at the first meeting, keeping in touch via Basecamp – our shared online space – or however else we choose, and working on our evaluation projects. We will also be reflecting on the following two questions:

1.    How good is my organisation at evaluation?

To help you answer that question you could consider these four statements:

•    We know what to evaluate because we have clear outcomes for our organisation or project
•    We have appropriate systems for collecting information / data about our outcomes
•    We can analyse and report on our outcomes
•    We use learning from evaluation in our ongoing work

2.    If your participation in Evaluation Exchange were to have an impact on your organisation (not just you) what’s one thing that might change or be different in your organisation or colleagues?

We invite you to do likewise!

Join the Evaluation Exchange!

Do you work in social services? Do you have a service or project you need to evaluate? Do you need support and assistance? IRISS and Evaluation Support Scotland (ESS) are inviting practitioners to join a peer support group which, over the next 6 months, will assist you to plan and/or undertake an evaluation related to your work. This will build your (and your organisation’s) capacity to self-evaluate and, in the process, help us learn, understand and share ‘what works’ in peer support for self-evaluation.

Is this project for you?

Yes! If you work within social services (statutory or independent) and you have a service, project or other piece of work that you currently need to evaluate, we would like to hear from you. We need people with more than an interest in evaluation – though that will certainly help – as this is a practical project aimed at supporting you to successfully undertake an evaluation that you are already thinking about as part of your work.

You should also be happy to discuss and share ideas about your evaluation with others in order to aid your learning and theirs. Through this you will be contributing to a wider understanding across the social services sector about the role of peer support in self-evaluation. Any issues of anonymity and confidentiality will be addressed and respected within the sessions and during reporting.

What’s in it for you?

By the end of the process each participating organisation will have increased their capacity for self-evaluation and should have at least produced a plan for a specific piece of evaluation work, which will be informed by peer review and support.

What’s the commitment?

Along with an interest in evaluation and a belief in the value of peer support, you must come with a specific, current evaluation task relating to your work. You will attend and actively contribute to five facilitated sessions. These will take place approximately monthly, with provisional plans for the initial full-day session on Wednesday 31st October, and half days in November, December, January and February. Between sessions you will need to be able to dedicate some time to moving forward with your evaluation.

We will need your help to reflect on the success of this project, so in addition to sharing your thoughts and recommendations on the process, we would like you to produce a short summary of your experience of participation at the end of the process.

We assume you’ll want to share your learning with your organisation and there will also be opportunities for you to get involved in sharing your thoughts on the process more widely.

What to do next?

Due to the nature of the project, there is a limited number of places available for participants. If you would like to take part please complete this expression of interest form, by 21st September 2012, telling us a bit more about you and why you want to take part.

If you have any questions please contact Emma Collins on 0141 559 5059 / 07545 696163 or emma.collins@iriss.org.uk or read more about this project.