Contribution Analysis

I recently attended a session on contribution analysis. I was on a fact-finding mission. At the end, I came away thinking it wasn’t anything new: nothing it involved was something I hadn’t done before as a researcher or evaluator. It just wasn’t called contribution analysis!

On further reflection, however, I saw that it might offer certain benefits – but before going any further….

What is contribution analysis?

  • It was ‘invented’ by John Mayne at the end of the 1990s, when he was working for the Office of the Auditor General in Canada. He has since become an independent advisor on public sector performance.
  • It works towards, or back from, evidencing the impact of a project, programme or policy – and fits nicely with the shift to focusing on impact (or ‘outcomes’) as we move away from an evaluation culture where we list all the things we have done or produced! If you are smart – and depending on what your project outcomes are – you should be able to link your outcomes to higher strategic ones, e.g. those in Scotland’s National Performance Framework. Policymakers will love you?!
  • It acknowledges right up front that your project, programme or policy is only part of a more complicated landscape – with multiple partners, different initiatives targeting the same groups, and spheres of direct and indirect influence all playing a part. You can also use it to explain why your project has, for example, limited impact due to factors outwith your control, and to postulate alternative explanations of change. In short, it doesn’t ignore the rest of the world.
  • Importantly, Mayne recognised the limitations we work under: you can’t always prove causality by testing cause and effect in controlled conditions in the positivist tradition – however, YOU CAN provide evidence and reduce uncertainty about the difference you are making.
  • Where projects, programmes or policies don’t have clear outcomes or objectives, targets or timelines, it can help redress this – by making them explicit and thereby possible to evaluate.

So how do you do it?

Mayne identified a number of steps – which seem to have been slightly amended or reinterpreted by a range of people – but essentially these are:

1. Set out the attribution problem to be addressed – this could be either: identify the outcome you hope to achieve, OR ask whether your project/programme/policy has achieved your stated outcome.

2. Develop a theory of change or logic model – this involves developing a results chain to make explicit the theory of change upon which the project, programme or policy is based, e.g. if I invest time, money and human resources in delivering information, mentoring schemes, role models and taster events targeted at pupils from schools where few go on to higher education (HE), I can inform pupils and their influencers about the benefits of higher education, raise their aspirations, motivate them to achieve better exam results at school and successfully apply to university – thereby widening access to higher education (the ultimate outcome)!

Results chains can get complicated – with multiple chains or ‘nested logic models’ – but that is for another day.

This should also involve identifying the risks and assumptions you have made, e.g. that your funding will continue and policymakers believe widening access to HE is a good thing; that you can retain good staff; that your activities are well received and pupils (and parents) react as you expect; that your investment is significant enough to sustain positive change in behaviour; and that changes outside your sphere of influence – such as the introduction or raising of course fees, or a reduction in university places – don’t happen. Agreeing risks and assumptions will be an important part of assessing whether implementation has occurred as expected – with lessons to be learnt if assumptions prove incorrect. You might also consider ways of reducing risks.

It’s been said that contribution analysis is good for confirming or reviewing a recognised theory of change – but not so good at creating or uncovering one! If so, does this mean it’s not suitable for trial or experimental projects?

3. Gather evidence to substantiate your theory of change or logic model – using data you have been gathering, whether statistical or qualitative. You can also draw on others’ evidence, e.g. to back up your assumptions about how people react to activities similar to your own – perhaps in the same or slightly different environments?

4. Assemble a contribution or performance story based on the evidence – this should assess where the evidence is strong, credible, weak or contested.

5. Seek out additional evidence

6. Revise and strengthen your contribution story using the additional evidence – your final version should be short as well as plausible (and perhaps with a visual representation of your theory of change). Policymakers seem to like this, preferring stories to reading vast tomes of research.
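Purely as an illustration – contribution analysis itself is a qualitative exercise, and all the names and ratings below are hypothetical – the steps above could be sketched as a simple data structure: each link in the results chain carries its evidence and an assessed strength, and the weakest links show where steps 5 and 6 should focus.

```python
# A minimal, hypothetical sketch of a results chain for contribution analysis.
# Link names and strength ratings are illustrative, based on the
# widening-access example in step 2 above.

from dataclasses import dataclass, field

@dataclass
class Link:
    claim: str                                     # one step in the theory of change
    evidence: list = field(default_factory=list)   # supporting evidence gathered so far
    strength: str = "weak"                         # "strong", "credible", "weak" or "contested"

# The results chain from the widening-access example (step 2)
chain = [
    Link("Deliver mentoring, role models and taster events"),
    Link("Pupils and influencers learn the benefits of HE"),
    Link("Aspirations and exam results improve"),
    Link("More pupils apply successfully to university"),
    Link("Access to higher education is widened"),   # the ultimate outcome
]

# Steps 3-4: attach evidence and reassess each link's strength
chain[0].evidence.append("Attendance records for taster events")
chain[0].strength = "strong"
chain[1].evidence.append("Pre/post awareness survey of pupils")
chain[1].strength = "credible"

# Steps 5-6: the contribution story is only as plausible as its weakest
# links, so list the claims that still need more evidence
weak_links = [link.claim for link in chain if link.strength in ("weak", "contested")]
print(f"{len(weak_links)} link(s) still need more evidence")
# prints: 3 link(s) still need more evidence
```

In practice the ‘strength’ judgements would be agreed with partners rather than assigned in code; the point of the sketch is only that each claim in the chain, its evidence and its credibility are tracked together.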

Different uses

The session also revealed that people have used contribution analysis in different ways. Some have used it simply to evaluate projects (often retrospectively), while others have used it as a planning tool: identifying with partners the outcomes they want to achieve, and agreeing a theory of change and results chain to shape their project before it starts.

In the second case, they also identify key measures for evaluating the different points in the chain, building in a system for ‘quality assurance’, ‘performance management’ or ‘continuous improvement’ to track progress over time and inform future design. How ‘informative’ it is will depend, of course, on the type of data collected…. Nevertheless, this approach offers the benefit of integrating your planning, monitoring and evaluation data into one set. You can also build your evidence year on year.

Of course, this assumes that you have the luxury of long-term funding as it may take time for you to build your evidence. Having said that, contribution analysis can be useful in that you have identified interim steps – or short, medium and longer-term goals – which you can report against rather than trying to demonstrate impact prematurely.

The challenges – of which there are many

  • Contribution analysis doesn’t remove the difficulties of identifying what it is you want to achieve, your theory of change, and the indicators you use to measure it. If you’re working with others to plan, it doesn’t reduce the challenges of partnership working, nor the time and commitment involved.
  • Sometimes your measures will be constrained and compromised by the data that is available – which may be a weak proxy or an ill-fitting indicator!
  • Contribution analysis may lead to you discovering evidence that makes you want to revise your key outcome – e.g. you might find that it’s better for more women to leave their husbands to protect them from domestic abuse, even though your project is funded to keep families together…. This may not sit well with your funders!
  • As mentioned, it can take a very long time to generate, gather and build credible evidence to attribute positive change – which can be frustrating and problematic, especially if the pressure is on to cut projects, or activities within them, to make savings. Those at the session who had used contribution analysis said they were nowhere near this stage – and didn’t know of anyone else who was either! Linking your impact to financial data to provide an indicator of value for money is yet another challenge!

To sum up

Contribution analysis is emergent practice, but I like that it provides a theoretical framework and set of methods – the end results of which focus on impact and are more likely to be accepted as credible by policymakers and funders. I like that it builds others’ spheres of influence into your contribution story – with Steve Montague to be name-checked for developing thinking in this area. Having said all of that, it doesn’t make the doing any easier!

REFERENCES

Mayne J (1999) Addressing attribution through contribution analysis: using performance measures sensibly

Montague S (2000) Three spheres of performance governance: spanning the boundaries from single-organisation focus towards a partnership. http://evi.sagepub.com/content/13/4/399

Scottish Evaluation Network event, 18 June 2013, Contribution analysis: a half day workshop on the practicalities of using contribution analysis. Contributions from Lisa Cohen (NHS Health Scotland) and Sarah Morton (University of Edinburgh)