Valuing, and Improving the Value of, Case Studies

Case studies written by individuals and organisations trying out innovations can be valuable learning tools. The trouble is that they are not always produced or valued, and when they are produced it is not always in a way that maximises their value. In hierarchies of evidence, case studies often find themselves towards the bottom (Johnson & Austin, 2005; Nutley, Powell & Davies, 2012). One reason for this is the perception that case studies generate a lot of complex, situated data which does not necessarily provide definitive answers beyond the immediate context of discussion. This may not be popular in a Government, policy-making and practice era that favours unequivocal evidence to inform decisive decision-making.

And yet, in the world of social service innovation, where risk is a core concern (Mulgan, 2007; Brown, 2010), case studies are a way of building a more detailed understanding of the innovation process and of the context(s) of interest. This may be particularly useful if the innovation is later scaled, mitigating some of the risks of perverse outcomes when the programme is introduced to a new context.

My experience of researching the relationships between evidence and innovation has further highlighted the value of case studies. I have found good case studies to be a vital source of learning on this topic. But 'good' is the operative word here. I have also experienced some shortcomings in the case studies I have encountered. So my aim here is to use examples from the IRISS project to explore some of the things we might need from case studies, and how we might encourage practitioners and organisations to do more of these things.

In order to illustrate what I have found useful in a case study I will refer to one example. Participle's Interim Report on The Life Programme documents the implementation of an innovative approach to working with troubled families in Swindon, and later in Wigan, Lewisham and Colchester. This case study enabled useful learning because it was comfortable acknowledging the initial shortcomings of the programme, and the glitches that occurred during implementation and scaling. For instance, Participle note that, to begin with, their programme did not pay sufficient attention to young children's experiences within the family, which may have posed risks for child protection. They also documented the complexities they uncovered when it came to scaling up The Life Programme, which stemmed from inadequate knowledge of the contexts of transfer (p. 22). In both cases Participle discuss how these problems were resolved, and in both cases this included the gathering of further evidence.

Given the remit of the IRISS project exploring the links between evidence and innovation, this provided important practice examples of what the innovation process is like, and how evidence is implicated in it. However, documenting and sharing innovations can have much broader benefits than this. There is evidence that one of the barriers to innovation and evidence-informed practice is a lack of applicable examples of these in use in a social service context (IRISS, 2010, pp. 3-4). The Participle example contains important learning about the complexities of the innovation process; a process which is so often depicted as necessary, rational and straightforward in Government discourse. It provides practitioners with an alternative to this, at times, rose-tinted view. They can follow the innovation journey of other organisations and use this to reflect on their own decisions and processes.

However, case studies that document the innovation process do not always exist, and where they do they are not always presented in a way that enables the maximum amount of learning. Organisations can be reluctant to describe the innovation process in all of its complexity, which would mean drawing out the difficulties that arise along the way. The Participle case study was, arguably, a rare find in this regard. It was the only social service specific case study I found that engaged significantly with the difficulties of the innovation process. Yet even here there were other shortcomings. This, and other case studies, offered very little detail about evidence use. Indeed, throughout the IRISS project it has proved tricky to untangle when and how evidence is being used during the innovation process. It is hardly surprising, then, that our understanding of the relationships between evidence and innovation is currently based more on theory than on practice examples.

There may be legitimate reasons why social service case studies are failing to adequately document the innovation process. In part, this surely relates to the wider context of social service reform in Scotland, and across the UK, which is increasingly geared towards high-stakes accountability (The Scottish Government, 2011; The Munro Report, 2011). In such a context, individuals and organisations may not be comfortable documenting their failings, even in the interests of promoting a better understanding of the complexities of evidence use and innovation, for fear that this may affect their reputation or funding. It may also be the case that, given the complex nature of these issues, some individuals and organisations do not feel equipped to engage in discussions which capture the complexity of the process whilst still presenting this information in an accessible way.

We need individuals and organisations to share by providing case studies of their innovations, and to share in an honest, detailed way. I am using 'case study' rather loosely here to refer to any form of storytelling about the innovation process, not only by researchers but by practitioners and those who use services. This means shaking off long-standing ideas and ideals about what counts as evidence, who is the 'expert' and who can carry out research. Perhaps encouraging and facilitating organisations and individuals to engage in 'real time' documenting of innovation processes would be one approach here. Not only would this benefit future work on evidence and innovation, it would also be extremely valuable to other individuals and organisations attempting their own innovations.

So two important questions to consider going forward are: how can individuals and organisations be encouraged and supported to provide quality descriptions and analyses of the innovation process, and how can this information be better shared with other individuals and organisations?
References:

Brown, L (2010) Balancing Risk and Innovation to Improve Social Work Practice, British Journal of Social Work, 40(4), pp. 1211-1228.

Johnson, M & Austin, M (2005) Evidence-based Practice in the Social Services: Implications for Organizational Change, University of California, Berkeley.
Accessed 23/09/13: http://www.cfpic.org/children/pdfs/EvidBasedPractFinalFeb05.pdf

Mulgan, G (2007) Ready or Not? Taking Innovation in the Public Sector Seriously, London: NESTA.

Nutley, S; Powell, A; Davies, H (2012) What counts as good evidence? St Andrews: Research Unit for Research Utilisation.

What (when, why, where, for whom) works?

This week I have been reviewing the literature on evidence, and its role in informing policy and practice in the context of Scotland's social services. Evidence-based approaches have been borrowed from medicine, where they have existed for centuries, although they have only been explicitly labelled 'evidence-based' since the early 1990s (Claridge & Fabian, 2005). This approach to the delivery of services can be defined as "the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individuals" (Sackett, Richardson, Rosenberg & Haynes, 1997, quoted in Johnson & Austin, 2005, p. 4). In light of this definition, the idea of utilising evidence-based approaches in social services, which has had a presence in UK Government discourse since the mid-1990s, seems logical. The aim is to increase the efficiency and effectiveness of social services, in order to improve the level of care they are able to provide. This remains a popular goal given the context of reduced public sector budgets across the UK (The Scottish Government, 2011). It also accords with the move towards greater and more transparent accountability in the spending of public money, which has been a powerful public sector reform discourse since the 1980s (Munro, 2004). This has led to the view that policy and commissioning decisions, as well as everyday social service practice, should be based on, or informed and enriched by, the 'best' available evidence.

Again this might seem logical, obvious even. As Oliver Letwin, Minister for Government Policy, said of evidence-based policy and practice at the 2013 launch of the What Works Centres:

once you've decided that you're trying to achieve something… it does make abundant sense to try and find out whether the thing you are doing to achieve it has actually shown that it is capable of achieving it, and then to adjust it or remove it if it hasn't, and reinforce it if it has… this is blindingly obvious stuff and I just feel ashamed, on behalf of not just our country but actually every country in the world almost, that this is regarded as revolutionary. It ought to be regarded as entirely commonplace.

And yet evidence-based policy and practice remain elusive goals in many social service contexts (Johnson & Austin, 2005, p. 12). My aim here is to isolate one possible explanation for this apparent difficulty, which is particularly relevant to the social work profession. For the purposes of this discussion I will use the What Works Centre view of evidence as a product or output upon which to base a judgment or decision, although I am aware that this definition is subject to debate and criticism.

Difficulties in implementing evidence-based approaches may stem from the presence of hierarchies of research design in the area of evaluation and efficacy research. These hierarchies tend to prioritise experimental studies, or systematic reviews of existing experimental studies on a particular intervention (Johnson & Austin, 2005; The Social Research Unit at Dartington, 2013). However, this hierarchy may be problematic if we are trying to encourage the use of evidence in a social work context. First, from the perspective of evidence-based policy, experimental studies such as Randomised Controlled Trials (RCTs) can be extremely useful for telling us whether or not an intervention works; i.e. whether it achieves the outcomes we have assessed it against. However, this research design may be less adept at answering a host of other questions which need to be answered if a social work intervention is to be rolled out across services as best practice (Cartwright, 2013). For instance, we need to understand why this intervention has or has not worked, and to understand, in greater detail, how it works. We also need to understand who the intervention has worked for, and when and where. The who question enables us to identify which client group(s) the intervention has been used with, and the extent to which their characteristics and needs can be viewed as typical across a range of social work clients. Issues of spatiality and temporality are particularly important if we seek to scale up an intervention. Put another way, what we are trying to get at is both the causal ingredient – i.e. the thing to which we can attribute the effectiveness of our intervention – and the necessary support factors required for this causal relationship to hold (Otto & Ziegler, 2008; Cartwright, 2013).
These support factors could include organisational culture, the individual attributes of the practitioners involved, the practitioner-client dynamics, the availability of the necessary resources and so forth. RCTs alone cannot provide us with all of this information. We require a methodological mix of research, capable of providing a more holistic picture, including evidence regarding the transportability of an intervention. Thus there is a need for greater recognition of the role of other study designs, and indeed of mixed-method evaluations, for successful evidence-based policy in social work.

Greater encouragement of mixed-method approaches may also be helpful in addressing a second concern about the research design hierarchy, which pertains to its impact on evidence-based practice. There may be a disjuncture between the ontological and epistemological positions which underpin experimental methods – that is, the views about what kinds of things there are in the world, and how we can come to 'know' them – and the ontological, epistemological, professional and value-orientated views of social workers. In order to make decisions, social workers will draw on, and piece together, a variety of different types of knowledge and evidence (Collins & Daly, 2011). This will include case notes from a range of professionals, their own observations and previous knowledge and experiences, and service user views. The interpretive skills of social workers are therefore paramount in making sense of the complex, multi-faceted and, often, incomplete picture they have of a client or situation (Shonkoff, 2000; Otto & Ziegler, 2008). These are skills that may be more closely aligned with interpretive research traditions, which seek to understand and interpret the actions and meanings of agents, and advocate the existence of multiple truths. If social workers are to make better use of research-based evidence, it needs to be capable of improving their understanding of complex and fluctuating scenarios.

That is not to suggest that RCTs have no role to play; they do. They are useful for telling us what works, and we should encourage greater quantitative literacy amongst social workers so that they can read the latest evidence on interventions and reach their own conclusions about them. However, on their own RCTs may be insufficient to develop evidence-based approaches in social work. Not only are they unable to provide adequate detail of the conditions needed for causal factors to operate, which is crucial in a social service context characterised by diversity and contextual nuance, but such research may also seem ontologically and epistemologically removed from the professional foundations of social work. For evidence-based approaches to thrive we need practitioners on side. So, as well as continuing with RCTs which look at what works, and encouraging quantitative literacy amongst the social service workforce, we also need to encourage the use of other, and mixed, methodological approaches (Cartwright, 2013). Not only will this make better use of the proficient interpretive skills social workers already have (Nutley et al, 2010, pp. 135-6), it will also provide the well-rounded, nuanced evidence that is required for evidence-based approaches to be more valuable in a social work context.

In relation to the Evidence and Innovation Project at IRISS, this discussion suggests that there may be a role for innovation in informing and inspiring new methodological approaches and combinations, in order to improve the effectiveness and take-up of evidence-based approaches in a social work context. It might also be the case that the view of evidence underpinning the 'What Works' agenda – i.e. one which situates evidence as a product or output upon which to base a decision or judgment – has limitations. In the context of the Evidence and Innovation Project, it may be important to explore other, broader, and potentially 'innovative' understandings of what evidence is and can be used for. These issues will guide my thinking over the forthcoming weeks and will be returned to in future blog posts.

Jodie Pennacchia
The University of Nottingham
Follow: @jpennacchia

References

Cartwright, N (2013), Knowing what we are talking about: why evidence doesn't always travel, Evidence & Policy, 9(1), pp. 97-112.

Claridge, J and Fabian, T (2005), History and development of evidence-based medicine, World Journal of Surgery, 29(5), pp. 547-553.

Cnaan, R and Dichter, M E (2008), Thoughts on the Use of Knowledge in Social Work Practice, Research on Social Work Practice, 18(4), pp. 278-284.

Collins, C and Daly, E (2011), Decision making and social work in Scotland: The role of evidence and practice wisdom, Glasgow: The Institute for Research and Innovation in Social Services.

Johnson, M and Austin, M (2005), Evidence-based Practice in the Social Services: Implications for Organisational Change, University of California, Berkeley.

Munro, E (2004), The impact of audit on social work practice, British Journal of Social Work, 34(8), pp. 1073-1095.

Nutley, S; Morton, S; Jung, T; Boaz, A (2010), Evidence and policy in six European countries: diverse approaches and common challenges, Evidence & Policy, 6(2), pp. 131-144.

Otto, H and Ziegler, H (2008), The Notion of Causal Impact in Evidence-Based Social Work: An Introduction to the Special Issue on What Works?, Research on Social Work Practice, 18(4), pp. 273-276.

Shonkoff, J (2000), Science, Policy, and Practice: Three Cultures in Search of a Shared Mission, Child Development, 71(1), pp. 181-187.