What came to the forefront whilst I was researching the background?

I have just completed my first week working on a project with IRISS exploring the links between evidence and innovation in the context of Scotland’s social services.  This blog marks the first of my weekly attempts to chart my progress and experiences of researching this emerging topic. I use the word emerging here, because what has already become apparent is that, although ‘evidence’ and ‘innovation’ have both been explored extensively across the academic, policy and practice literatures, their complex and evolving relationship with one another has not yet been considered in the same detail.  This makes for a project that is both timely and challenging.

Dedicating this week to researching the key debates and ‘buzz words’ associated with evidence and innovation, and their application to social services, has reinforced for me the importance of context. It has led me to conclude that the relationship between evidence and innovation cannot be adequately reflected on, in a way that is meaningful to practitioners, without the topic being situated within the current context of social service policy and practice in Scotland. Of particular relevance for the evidence and innovation debate is the drive to reform Scotland’s public services in general, and social services in particular. This imperative is present in influential reports including the Christie Report, the Scottish Government’s response to it in Renewing Scotland’s Public Services, and its report on social services, Changing Lives. Each of these highlights the difficult situation faced by public services in the wake of a financial crisis which has simultaneously reduced public sector resources and increased the demand for them, as the effects of the recession are felt by society’s most vulnerable. This, combined with an ageing population and recruitment difficulties, has culminated in a challenging climate for social services.

The government’s response to these complex demands has been to present an improvement agenda, and there are two aspects of this agenda which I see as crucial to the contextualisation of the evidence and innovation project at IRISS. First, that improvement will be stimulated through radical innovation (rather than incremental innovation); and second, that improvements will also be achieved by tightening oversight and accountability, through the application of transparent and rigorous audit mechanisms. It is easy to get swept along in the power of these separate discourses. ‘Innovation’ seems to have evolved into a term that has only positive connotations, and has become the ‘go-to’ concept for those attempting to improve organisations and services. The drive towards greater accountability for organisations funded by public money has been a persuasive discourse since the 1980s New Public Management revolution and, arguably, has only been strengthened by recent high-profile illustrations of mismanagement and error in the public sector (the NHS debt and the Daniel Pelka case are just two examples).

However, upon closer inspection, the government’s suggestion that social service reform should be pursued through greater accountability and radical innovation is problematic. There is a potential antagonism here between the pursuit of innovation and tighter regulatory regimes, particularly where the latter are based on narrow, high-stakes performance criteria that are used publicly to facilitate competition, to scrutinise and to pass judgement. Innovation risks being stifled in a climate where practitioners have to ‘watch their backs’ and ensure that they are achieving particular performance targets (Munro, 2004; Brown, 2010). This issue may be intensified if it is radical innovation that is being sought. Regulatory regimes that can serve to create a ‘blame culture’ may undermine the desire and willingness to implement radical innovations, as social service practitioners may fear the repercussions for their clients and themselves if something goes wrong.

Given this potential antagonism present within the government’s reform discourse, and the likelihood that both radical innovation and greater accountability will continue to propel the improvement agenda, it is important that this context firmly underpins the IRISS project exploring the links between evidence and innovation. This context suggests that there is likely to be a pivotal role for ‘evidence’ throughout the innovation process, not least as a way of mediating some of the potential risks and concerns this poses for service users and the workforce.  Having established this, other questions begin to surface.  What kinds of evidence are integral to innovation?  Collected by whom? And used when and how?  These are just some of the questions I will be pursuing over the forthcoming weeks.

However, what also emerges from this consideration of context is that evidence alone is unlikely to be enough to secure successful innovation in a social service context. Evidence may well prompt, inform and evaluate innovation, but there is clearly an important role for the government and service leaders in the development of a culture where “well thought through, well managed risks” are an accepted feature of the transformational process (Brown, 2010, p. 1222). Come to think of it, there might be a role for evidence here too, as a way of developing our understanding of the culture and conditions necessary for successful innovation in a social service setting. Undoubtedly, new links between evidence and innovation will continue to emerge as I begin the second week of this thought-provoking project.

Jodie Pennacchia
The University of Nottingham
Follow: @jpennacchia

References

Brown, L. (2010), Balancing Risk and Innovation to Improve Social Work Practice, British Journal of Social Work, 40(4), pp. 1211–1228.

Munro, E. (2004), The impact of audit on social work practice, British Journal of Social Work, 34(8), pp. 1073–1095.

Contribution Analysis

I recently attended a session on contribution analysis. I was on a fact-finding mission. At the end, I came away thinking it wasn’t anything new, and that it didn’t involve anything I hadn’t done before as a researcher or evaluator. It just wasn’t called contribution analysis!

On further reflection, however, I saw that it might offer certain benefits – but before going any further:

What is contribution analysis?

  • It was ‘invented’ by John Mayne at the end of the 1990s, when he was working for the Office of the Auditor General in Canada. He’s since become an independent advisor on public sector performance.
  • It works towards – or back from – evidencing the impact of a project, programme or policy, and fits nicely with the shift to focusing on impact (or ‘outcomes’) as we move away from an evaluation culture where we list all the things we have done or produced! If you are smart – and depending on what your project outcomes are – you should be able to link your outcomes to higher strategic ones, e.g. those in Scotland’s National Performance Framework. Policymakers will love you!
  • It acknowledges right up front that your project, programme or policy is only part of a more complicated landscape – with multiple partners, different initiatives targeting the same groups, and spheres of direct and indirect influence all playing a part. You can also use it to explain why your project has, for example, limited impact due to factors outwith your control, and to postulate alternative explanations of change. In short, it doesn’t ignore the rest of the world.
  • Importantly, Mayne recognised the limitations we work under – that you can’t always prove causality by testing cause and effect in controlled conditions in the positivist tradition – however, YOU CAN provide evidence and reduce uncertainty about the difference you are making.
  • Where projects, programmes or policies don’t have clear outcomes or objectives, targets or timelines, it can help redress this – by making them explicit and thereby possible to evaluate.

So how do you do it?

Mayne identified a number of steps – which seem to have been slightly amended or reinterpreted by a range of people – but essentially these are:

1. Set out the attribution problem to be addressed – this could either be: identify the outcome you hope to achieve, OR ask whether your project/programme/policy has achieved its stated outcome.

2. Develop a theory of change or logic model – this involves developing a results chain to make explicit the theory of change upon which the project, programme or policy is based, e.g. if I invest time, money and human resources in delivering information, mentoring schemes, role models and taster events targeted at pupils from schools where few go on to higher education (HE), I can inform pupils and their influencers about the benefits of higher education, raise their aspirations, motivate them to achieve better exam results at school and successfully apply to university – thereby widening access to higher education (the ultimate outcome)!

Results chains can get complicated – with multiple chains or ‘nested logic models’ – but that is for another day.

This should also involve identifying the risks and assumptions you have made, e.g. that your funding will continue and that policymakers believe widening access to HE is a good thing; that you can retain good staff; that your activities are well received and pupils (and parents) react as you expect; that your investment is significant enough to sustain positive change in behaviour; and that changes outside your sphere of influence, such as the charging or raising of course fees or a reduction in university places, don’t happen. Agreeing risks and assumptions will be an important part of assessing whether implementation has occurred as expected – with lessons to be learnt if assumptions are incorrect. You might also consider ways of reducing risks.
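
To make this concrete, here is a minimal sketch of how the hypothetical widening-access results chain and its assumptions might be written down so that each link can later be tested against evidence. It is purely illustrative (in Python) and not part of contribution analysis itself or any particular tool – the links and assumptions are simply the example above restated as data.

```python
# Illustrative only: the widening-access results chain from the example above,
# written down link by link, with the assumptions and risks recorded alongside
# so they can be checked rather than left implicit.

results_chain = [
    ("inputs: time, money and staff",
     "activities: information, mentoring, role models, taster events"),
    ("activities",
     "pupils and their influencers informed about the benefits of HE"),
    ("informed pupils",
     "raised aspirations and motivation"),
    ("raised aspirations",
     "better exam results and successful university applications"),
    ("successful applications",
     "widened access to higher education (the ultimate outcome)"),
]

assumptions = [
    "funding continues and policymakers still back widening access",
    "good staff can be retained",
    "activities are well received and pupils (and parents) react as expected",
    "the investment is large enough to sustain a change in behaviour",
    "no external shocks, e.g. higher course fees or fewer university places",
]

for cause, effect in results_chain:
    print(f"{cause}  -->  {effect}")

print("\nAssumptions and risks to monitor:")
for assumption in assumptions:
    print(" -", assumption)
```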

It’s been said that contribution analysis is good for confirming or reviewing a recognised theory of change – but not so good at creating or uncovering one! If so, does this mean it’s not suitable for trial or experimental projects?

3. Gather evidence to substantiate your theory of change or logic model – using data you have been gathering, whether statistical or qualitative. You can also draw on others’ evidence, e.g. to back up your assumptions about how people react to activities similar to your own – perhaps in the same or slightly different environments.

4. Assemble a contribution or performance story based on the evidence – this should assess where the evidence is strong/credible/weak or contested.

5. Seek out additional evidence – particularly for links in the chain where your evidence is weak or contested.

6. Revise and strengthen your contribution story using the additional evidence – your final version should be short as well as plausible (and perhaps with a visual representation of your theory of change). Policymakers seem to like this, preferring stories to reading vast tomes of research.
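
Steps 3 to 6 can be thought of as attaching evidence to each link in that chain, rating how strong it is, and then condensing the result into a short narrative. The sketch below is again hypothetical (the links, evidence sources and ratings are invented for the widening-access example) and simply shows the shape of a contribution story, not a prescribed format.

```python
# Hypothetical sketch: evidence attached to each link in the results chain,
# rated for strength, then condensed into a short contribution story.

evidence_by_link = {
    "activities -> informed pupils": {
        "evidence": "attendance records; post-event surveys",
        "strength": "strong",
    },
    "informed pupils -> raised aspirations": {
        "evidence": "pupil interviews; published studies of similar schemes",
        "strength": "credible",
    },
    "raised aspirations -> exam results and applications": {
        "evidence": "school attainment data (other initiatives also at work)",
        "strength": "weak/contested",
    },
}

def contribution_story(evidence):
    """Summarise each link and flag where additional evidence is needed (step 5)."""
    lines = []
    for link, item in evidence.items():
        lines.append(f"{link}: {item['evidence']} [{item['strength']}]")
        if "weak" in item["strength"] or "contested" in item["strength"]:
            lines.append("  -> seek additional evidence for this link")
    return "\n".join(lines)

print(contribution_story(evidence_by_link))
```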

Different uses

The session also revealed that people have used contribution analysis in different ways. Some have used it to simply evaluate projects (often retrospectively) while others have used it as a planning tool to identify with partners the outcomes they want to achieve, agreeing a theory of change and results chain to determine their project before it starts.

In the second case, they also identify key measures for evaluating the different points in the chain, building in a system for ‘quality assurance’, ‘performance management’ or ‘continuous improvement’ to track progress over time and inform future design. How ‘informative’ it is will depend on the type of data collected, of course. Nevertheless, this approach offers the benefit of integrating your planning, monitoring and evaluation data into one set. You can also build your evidence year on year.
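
Used as a planning tool in this way, each point in the chain simply acquires an agreed measure that is revisited as data accumulates. A rough sketch of that year-on-year tracking, with entirely made-up measures and figures for the same hypothetical example:

```python
# Hypothetical sketch: agreed measures for points in the chain, tracked
# year on year so that the evidence base builds over time.

measures = {
    "pupils attending events": {2013: 420, 2014: 510},
    "share reporting intention to apply to HE": {2013: 0.55, 2014: 0.63},
    "applications from target schools": {2013: 95, 2014: 120},
}

for measure, by_year in measures.items():
    trend = " -> ".join(f"{year}: {value}" for year, value in sorted(by_year.items()))
    print(f"{measure}: {trend}")
```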

Of course, this assumes that you have the luxury of long-term funding as it may take time for you to build your evidence. Having said that, contribution analysis can be useful in that you have identified interim steps – or short, medium and longer-term goals – which you can report against rather than trying to demonstrate impact prematurely.

The challenges – of which there are many

  • Contribution analysis doesn’t remove the difficulties of identifying what it is you want to achieve, your theory of change and the indicators you use to measure it. If you’re working with others to plan, it doesn’t make the challenges of partnership working, or the time and commitment involved, any less.
  • Sometimes your measures will be constrained and compromised by the data that is available – which may be a weak proxy or ill-fitting indicator!
  • Contribution analysis may lead to you discovering evidence that makes you want to revise your key outcome – e.g. you might find out that it’s better for more women to leave their husbands to protect them from domestic abuse, even though your project is funded to keep families together. This may not sit well with your funders!
  • As mentioned, it can take a very long time to generate, gather and build credible evidence to attribute positive change – which can be frustrating and problematic, especially if the pressure is on to cut projects, or activities within these, to make savings. Those at the session who had used contribution analysis said they were nowhere near this stage – and didn’t know of anyone else who was either! Linking your impact to financial data to provide an indicator of value for money is yet another challenge!

To sum up

Contribution analysis is emergent practice, but I like that it provides a theoretical framework and set of methods which focus on impact and whose end results are more likely to be accepted as credible by policymakers and funders. I like that it builds others’ spheres of influence into your contribution story – with Steve Montague to be name-checked for developing thinking in this area. Having said all of that, it doesn’t make the doing any easier!

References

Mayne, J. (1999), Addressing attribution through contribution analysis: using performance measures sensibly.

Montague, S. (2000), Three spheres of performance governance spanning the boundaries from single-organisation focus towards a partnership: http://evi.sagepub.com/content/13/4/399

Scottish Evaluation Network event, 18 June 2013, Contribution analysis: a half day workshop on the practicalities of using contribution analysis. Contributions from Lisa Cohen (NHS Health Scotland) and Sarah Morton (University of Edinburgh).