The Alexi Project covers 16 CSE services, all participating in the evaluation of the 'Hub and Spoke' model. It's a big, complex project, with a lot of diversity in the organisations involved, the local authorities they work with, the service landscape and the populations they serve. The evaluation team have chosen to adopt a realist approach to evaluating the model because it recognises and values what can be learned from this complexity.
Whether you are a spoke worker, a hub manager or simply interested in CSE policy and research, the term 'realist evaluation' might be completely new. So we will be introducing some of the key elements of a realist approach over five blog posts, which will hopefully help you understand the kind of research we are undertaking.
To begin, then, let's think about questions.
Realist evaluation is not a methodology or a set of tools; it’s more like a set of beliefs about evaluation and the types of question we should be asking.
Lots of evaluation is directed toward answering the question 'Does this project or intervention work?' Local and central governments are particularly likely to ask this question - for example through the Cabinet Office's 'What Works Network', which is made up of seven independent What Works Centres, each with its own policy area. Such a public commitment to using evidence for policy making is hugely important, but 'what works' is not usually a good enough question to guide researchers toward the kinds of answers that will really help policy makers.
As well as measuring impact and looking for evidence of improvement, realist evaluators want to know what caused the positive (or negative) change. In other words ‘What is it, specifically, about this intervention that has this effect?’
Let's take the example of a mentoring programme. If we don't consider how and why such a programme 'works', evaluators could gather data on a young person's self-esteem 'before' and 'after' 10 mentoring sessions and then say that any improvements were caused by the mentoring. But of course, the improvement in self-esteem could actually be because the young person joined a football team at the same time as they started the mentoring, was consistently praised for their effort on the pitch, and was feeling increasingly good about themselves as a result. In reality, both would be likely to have an effect, but the point is that we need to think about attribution: how we can know that an improvement is the result of one thing rather than another.
The hub and spoke model is trying to achieve a set of outcomes.
However, we are not simply looking for evidence that these have, or have not, been achieved. We are also trying to understand why and how things have changed in each hub and spoke service in relation to these outcomes. Perhaps three-quarters of the services have managed to retain a distinctive identity and role focused on CSE. Is that because their multi-agency partners had a long-standing respect for the role of the voluntary sector and valued their distinct contribution? Or because spoke workers were co-located in other voluntary sector organisations which affirmed their way of working? Maybe both were important, or the cause was something else entirely.
Understanding the why and how, as well as the what, means that the lessons learned from this project can be much more useful. It can help us understand what conditions would need to be in place to achieve similar outcomes if the programme were 'rolled out' elsewhere.
So the final evaluation report won't be asking 'Did the hub and spoke model work?' Instead, it will seek to understand any significant patterns in how the model worked in different places, and why that might be - with the goal of helping service providers, funders and commissioners design similar projects better in the future.
Look out for the next blog post in the series: 'There's nothing so practical as a good theory'.