Valuing, and Improving the Value of, Case Studies

Case studies written by individuals and organisations trying out innovations can be valuable learning tools. The trouble is that they are not always produced, valued, or presented in a way that maximises their value. In hierarchies of evidence, case studies often find themselves towards the bottom (Johnson & Austin, 2005; Nutley, Powell & Davies, 2012). One reason for this is the perception that case studies generate a lot of complex, situated data which does not necessarily provide definitive answers beyond the immediate context under discussion. This may not be popular in an era of government, policy-making and practice that favours unequivocal evidence to inform decisive decision-making.

And yet, in the world of social service innovation, where risk is a core concern (Mulgan, 2007; Brown, 2010), case studies are a way of building a more detailed understanding of the innovation process and of the context(s) of interest. This may be particularly useful if the innovation is later scaled, mitigating some of the risks of perverse outcomes when the programme is introduced to a new context.

My experience of researching the relationships between evidence and innovation has further highlighted the value of case studies. I have found good case studies to be a vital source of learning on this topic. But ‘good’ is the operative word here: I have also found shortcomings in many of the case studies I have encountered. So my aim here is to use examples from the IRISS project to explore some of the things we might need from case studies, and how we might encourage practitioners and organisations to do more of these things.

In order to illustrate what I have found useful in a case study I will refer to one example. Participle’s Interim Report on The Life Programme documents the implementation of an innovative approach to working with troubled families in Swindon, and later in Wigan, Lewisham and Colchester. This case study enabled useful learning because it was comfortable acknowledging the initial shortcomings of the programme, and the glitches that occurred during implementation and scaling. For instance, Participle note that, to begin with, their programme did not pay sufficient attention to young children’s experiences within the family, which may have posed risks for child protection. They also documented the complexities they uncovered when it came to scaling up The Life Programme, which stemmed from inadequate knowledge of the contexts of transfer (p. 22). In both cases Participle discuss how these problems were resolved, and in both cases the resolution included gathering further evidence.

Given that the remit of the IRISS project was to explore the links between evidence and innovation, this provided important practice examples of what the innovation process is like, and of how evidence is implicated in it. However, documenting and sharing innovations can have much broader benefits than this. There is evidence that one of the barriers to innovation and evidence-informed practice is a lack of applicable examples of these in use in a social service context (IRISS, 2010, pp. 3-4). The Participle example contains important learning about the complexities of the innovation process, a process so often depicted as necessary, rational and straightforward in government discourse. It provides practitioners with an alternative to this sometimes rose-tinted view. They could follow the innovation journey of other organisations and use this to reflect on their own decisions and processes.

However, case studies that document the innovation process do not always exist, and where they do, they are not always presented in a way that enables the maximum amount of learning. Organisations can be reluctant to describe the innovation process in all of its complexity, which would mean drawing out the difficulties that arise along the way. The Participle case study was, arguably, a rare find in this regard: it was the only social service-specific case study I found that engaged significantly with the difficulties of the innovation process. Yet even here there were other shortcomings. This, and other case studies, offered very little detail about evidence use. Indeed, throughout the IRISS project it has proved tricky to untangle when and how evidence is being used during the innovation process. It is hardly surprising, then, that our understanding of the relationships between evidence and innovation is currently based more on theory than on practice examples.

There may be legitimate reasons why social service case studies are failing to adequately document the innovation process. In part, this surely relates to the wider context of social service reform in Scotland, and across the UK, which is increasingly geared towards high-stakes accountability (The Scottish Government, 2011; The Munro Report, 2011). In such a context, individuals and organisations may not be comfortable documenting their failings, in the interests of promoting a better understanding of the complexities of evidence use and innovation, for fear that this may damage their reputation or funding. It may also be that, given the complex nature of these issues, some individuals and organisations do not feel equipped to capture the complexity of the process while still presenting this information in an accessible way.

We need individuals and organisations to share case studies of their innovations, and to do so in an honest, detailed way. I am using ‘case study’ rather loosely here to refer to any form of storytelling about the innovation process, not only by researchers but by practitioners and those who use services. This means shaking off long-standing ideas and ideals about what counts as evidence, who the ‘expert’ is and who can carry out research. Perhaps encouraging and facilitating organisations and individuals to document innovation processes in ‘real time’ would be one approach here. Not only would this benefit future work on evidence and innovation, but it would also be extremely valuable to other individuals and organisations attempting their own innovations.

So two important questions to consider going forward are: how can individuals and organisations be encouraged and supported to provide quality descriptions and analyses of the innovation process, and how can this information be better shared with other individuals and organisations?

References:

Brown, L (2010) Balancing Risk and Innovation to Improve Social Work Practice, British Journal of Social Work, 40(4), pp. 1211-1228.

Johnson, M & Austin, M (2005) Evidence-based Practice in the Social Services: Implications for Organizational Change, University of California, Berkeley.
Accessed 23/09/13: http://www.cfpic.org/children/pdfs/EvidBasedPractFinalFeb05.pdf

Mulgan, G (2007) Ready or Not? Taking Innovation in the Public Sector Seriously, London: NESTA.

Nutley, S, Powell, A & Davies, H (2012) What counts as good evidence? St Andrews: Research Unit for Research Utilisation.