This week I have been reviewing the literature on evidence, and its role in informing policy and practice in the context of Scotland's social services. Evidence-based approaches have been borrowed from medicine, where they have existed for centuries, although they have only been explicitly labelled 'evidence-based' since the early 1990s (Claridge & Fabian, 2005). This approach to the delivery of services can be defined as "the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individuals" (Sackett, Richardson, Rosenberg, & Haynes, 1997, quoted in Johnson & Austin, 2005, p. 4). In light of this definition, the idea of utilising evidence-based approaches in social services, which has had a presence in UK Government discourse since the mid-1990s, seems logical. The aim is to increase the efficiency and effectiveness of social services, in order to improve the level of care they are able to provide. This remains a popular goal given the context of reduced public sector budgets across the UK (The Scottish Government, 2011). It also accords with the move towards greater and more transparent accountability in the spending of public money, which has been a powerful public sector reform discourse since the 1980s (Munro, 2004). This has led to the view that policy and commissioning decisions, as well as everyday social service practice, should be based on, or informed and enriched by, the 'best' available evidence.
Again this might seem logical, obvious even, a point Oliver Letwin, Minister for Government Policy, made when speaking about evidence-based policy and practice at the 2013 launch of the What Works Centres.
However, difficulties in implementing evidence-based approaches may arise from the presence of hierarchies of research design in the area of evaluation and efficacy research. These hierarchies tend to prioritise experimental studies, or systematic reviews of existing experimental studies on a particular intervention (Johnson & Austin, 2005; The Social Research Unit at Dartington, 2013). This hierarchy may be problematic if we are trying to encourage the use of evidence in a social work context. From the perspective of evidence-based policy, experimental studies such as Randomised Controlled Trials (RCTs) can be extremely useful in telling us whether or not an intervention works; that is, whether it achieves the outcomes we have assessed it against. However, this research design is less adept at answering a host of other questions that need to be answered if a social work intervention is to be rolled out across services as best practice (Cartwright, 2013). For instance, we need to understand why an intervention has or has not worked, and to understand in greater detail how it works. We also need to understand who the intervention has worked for, and when and where. The 'who' question enables us to identify which client group(s) the intervention has been used with, and the extent to which their characteristics and needs can be viewed as typical across a range of social work clients. Issues of spatiality and temporality are particularly important if we seek to scale up an intervention.

Put another way, what we are trying to get at is both the causal ingredient, the thing to which we can attribute the effectiveness of our intervention, and the necessary support factors required for this causal relationship to hold (Otto & Ziegler, 2008; Cartwright, 2013). These support factors could include organisational culture, the individual attributes of the practitioners involved, practitioner-client dynamics, the availability of the necessary resources, and so forth. RCTs alone cannot provide us with all of this information. We require a methodological mix of research, capable of providing a more holistic picture, including evidence regarding the transportability of an intervention. There is therefore a need for greater recognition of the role of other study designs, and indeed of mixed-method evaluations, in successful evidence-based policy in social work.
In relation to the Evidence and Innovation Project at Iriss, this discussion suggests that there may be a role for innovation in informing and inspiring new methodological approaches and combinations, in order to improve the effectiveness and take-up of evidence-based approaches in a social work context. It might also be the case that the view of evidence underpinning the 'What Works' agenda, which situates evidence as a product or output upon which to base a decision or judgment, has limitations. In the context of the Evidence and Innovation Project, it may be important to explore other, broader, and potentially 'innovative' understandings of what evidence is and can be used for. These issues will guide my thinking over the forthcoming weeks and will be returned to in future blog posts.
Jodie Pennacchia
The University of Nottingham
Follow: @jpennacchia
References
Collins, C. and Daly, E. (2011), Decision making and social work in Scotland: The role of evidence and practice wisdom, Glasgow: The Institute for Research and Innovation in Social Services.

Johnson, M. and Austin, M. (2005), Evidence-based Practice in the Social Services: Implications for Organisational Change, University of California, Berkeley.

Munro, E. (2004), The impact of audit on social work practice, British Journal of Social Work, 34(8), pp. 1073-1095.