Whether you are a spoke worker, a hub manager, or interested in CSE policy and research, the term ‘realist evaluation’ might be completely new. So we will be introducing some of the key elements of a realist approach over five blog posts that will hopefully help you understand the kind of research we are undertaking.
The first post in the series looked at the types of questions realist evaluators ask, and concluded that we need to know why and how change happens if we are to really learn from programmes like the hub and spoke model. In this post we're going to look at why the evaluation team are generating and testing theories.
Think back on the last week - what kinds of things did your service do, for and with young people? Maybe you were sitting in McDonald's listening carefully to a young person talk about her 'boyfriend' and how he must love her even though he passes her off to other men. Or perhaps you were in a strategy meeting advocating for a family that needs extra support to cope with the impact of abuse and having to go to court. Whatever you were doing, there was probably a good reason for what you did, and the way you tried to do it.
All services, programmes and interventions have some kind of ‘programme theory’ - some ideas about how they are meant to cause positive change - but we’re not always brilliant at articulating these. That's why people are increasingly using ‘theory of change’ or ‘logic model’ approaches, which are tools that help us communicate these programme theories. Here are some examples if you haven't come across them before.
Realist evaluation is part of this family of theory-based approaches to research: developing theories about how the project is meant to work at the start, using these to direct the data we collect, and then refining them at the end on the basis of the new knowledge we've generated.
In the Alexi Project we began by commissioning a review of the evidence on hub and spoke models. This, together with data from the evaluation of the phase one services, helped the team develop some initial theories about the model, which were further developed over the course of the second year. As we have entered the final year of the project we have a number of theories related to each of the six project outcomes, and data from the phase three services is therefore being used to test and refine these 'candidate theories'.
Here's an example of a draft theory that has emerged from years one and two, and is currently being tested.
This theory is based on analysis of interview data from eight services involved in phases one and two - for example, this reflection from a Children's Services Manager.
“It’s gone very well, it’s been very good. That’s mainly because I, I suppose you could say I’ve learnt to deal with my anxiety (laughs). I have… When I spoke to you last year I was really anxious, I worried a lot about not having control, not being a direct manager… it’s trust, I’ve learnt to trust [spoke worker], and I feel confident in her and the wider [charity] team” (Children’s Services manager)
Embedded in these theories are insights about the contexts in which services are working (e.g. initial concerns from statutory teams) and the mechanisms that actually create positive change (demonstrable skills and expertise of spoke workers). This should help commissioners and service providers plan more effectively, if and when they use hub and spoke models in the future. For example, if this theory is upheld by the data collected this year, it might suggest that spoke workers will only win the trust of statutory teams in new areas if they have enough experience of working on CSE cases. This, in turn, might affect the recruitment policies of future services, so that newer workers stay in the hub while the more experienced are sent out as new spoke workers.
Or in other words - there's nothing so practical as a good theory.
Look out for the next blog post in the series: 'Interventions don't work'.