It’s increasingly becoming ‘business as usual’ in the mental health sector to speak of the role of lived experience when conducting evaluations. I’m particularly passionate about the role of lived experience in evaluation, as someone who at various times has worn the hats of participant, evaluator and commissioner.
I love evaluation because it has the potential to identify what works and spread that. When things aren’t working as well as they could, it also provides the opportunity for reflection and learning. This couldn’t be more important in the mental health system, where there are pockets of excellence in a system that otherwise often lets down, disempowers and actively harms consumers.
But if we speak the language of lived experience involvement in evaluation without genuinely understanding and committing to what that means, evaluation too becomes at best ineffective and at worst harmful. It offers the promise of lived experience involvement and control, and delivers a reality without either. This replicates an all too common consumer experience in the mental health system.
I recently spoke at the TheMHS 2021 Conference about the potential role of lived experience in evaluations. This is part one of a two-part article series that draws on this recent presentation to identify what the role of lived experience could be, and some of the challenges evaluators and commissioners face in getting this work right. Part 2 on the role of power in lived experience evaluation is available here.
When to involve people with lived experience in evaluation
Evaluations are often complex, but when all the complexity is stripped away, there are four core activities: establishing key evaluation questions and methods, collecting data, analysing and interpreting data, and reporting and disseminating findings. These can happen iteratively, concurrently, or in a linear sequence.
Currently, lived experience involvement in evaluation is generally focused on data collection. People with lived experience are data points: they provide information to evaluations by sharing their stories, personal data and experiences. Often, though not always, this makes evaluations extractive. They draw on our experiences without facilitating our input into the other fundamental processes of conducting an evaluation.
One understanding of evaluation is that it assesses whether an intervention has been implemented to anticipated standards, or has delivered expected outcomes. If we confine lived experience engagement to data collection, people with lived experience are always dancing on a stage built by someone else. We don’t get a real say in whether evaluations are evaluating the things that matter to us, doing it in a way that works for us, and generating findings and recommendations with meaningful implications for us.
There are opportunities to support lived experience involvement – and leadership – throughout the evaluation process. I’ve provided some suggestions below. This list is by no means exhaustive – I’d love to hear more about your experiences and what’s worked for you, so please feel free to get in touch.
Establishing key evaluation questions and methods
- Include people with identified lived experience on the commissioning and evaluation team (including in leadership roles where they have genuine decision-making power)
- Connect with lived experience peer researchers and evaluators
- Involve people with lived experience in determining what the evaluation scope and purpose should be – what do they want to know about the program, its implementation and outcomes?
- Involve people with lived experience (e.g. program participants) in determining the evaluation criteria that matter to them – if the program were functioning as it should, what would that look like?
- Involve people with lived experience in determining how they want to be involved in the evaluation
Collecting data
- Involve people with lived experience in the selection and design of data collection tools
- Use lived experience peer researchers – we bring many strengths to data collection
- Involve people with lived experience in participant recruitment
- Collect data from people with lived experience – understand their experiences and outcomes
- Ensure everyone who has contributed to your project understands how their contributions will be used, and what the evaluation has found based on their experiences
Analysing and interpreting data
- Develop analytical processes and frameworks with people with lived experience
- Involve people with lived experience in analysis processes – making sense of the data, identifying themes and gaps
Reporting data and disseminating findings and recommendations
- Involve people with lived experience in identifying where the outputs should be shared and what formats they should take
- Involve people with lived experience in writing, recording and reviewing project outputs
- Use other methods of reporting and dissemination beyond a traditional, jargon-laden report
- Involve people with lived experience in disseminating project outputs
- Share the findings of any evaluation that’s drawn on the wisdom and experiences of people with lived experience
While I’ve said ‘involve’ several times in that list, wherever I use it, know that I also mean lived experience leadership. Part 2 of this article series will focus more on the power challenges built into evaluation.
In evaluation land, we can get bogged down in relying on our standard data collection methods, such as surveys, interviews and focus groups. We structure our project governance approaches around endless meetings. These approaches are often chosen out of convenience – we’re familiar with them, and they work in an evaluation approach where decisions about what matters are typically held by a small group of commissioners and evaluators.
But we often don’t spend the time thinking about whether these approaches work for people with lived experience, and genuinely allow for the kind of shared power and decision-making conjured up by the term ‘lived experience in evaluation’. Think about other ways that might allow accessible engagement and involvement with a project. This article, framed around remote design in COVID times, has some great ideas (and potential pitfalls) for engaging people in non-traditional ways in design, equally applicable to much evaluation work.
It’s important to remember that it’s not enough to just identify these opportunities and chuck a person with lived experience in there. Meaningful lived experience engagement and leadership in evaluation means fundamentally examining the role that power plays in how you set up and conduct an evaluation. That is explored more in Part 2 of this article series.
Thank you to the commenter in the chat after my presentation who inspired the article’s title.