At the 2018 Australasian Evaluation Society conference in Launceston, I gave a paper reflecting on my experiences of conducting evaluation in the mental health sector while having a lived experience.

I experience mental illness, and for some time I have also worked on projects in the mental health sector – as a consultant, as an employee in a not-for-profit, and in a lived experience advisory capacity. I am also studying the role of lived experience in mental health evaluations.

One of the reasons I have been drawn to this topic is that I feel like there is a gap in the evidence. When you google depression, you get this:

[Image: Google image search results for ‘depression’]

Incidentally, when you google ‘evaluator’, you get lots of smiley people with clipboards. I don’t find it that surprising, then, that when the evaluation literature talks about including people with a lived experience in evaluation, it dichotomises people into ‘evaluators’ and ‘beneficiaries’; it doesn’t adequately account for occasions where those with lived experience also hold evaluation agency. Even in the way Fetterman expresses empowerment evaluation, perhaps the closest formalised theory of what I am talking about, evaluators are the ‘coaches’ of evaluation skills. So I’m keen to explore this more in my research. Alas, I have only just started my research, so I’m not really in a position to meaningfully discuss it at this stage.

What I can do is provide my reflections on undertaking this kind of work – some ‘practice-based evidence’.

Values

Evaluation is literally a process of judgement. We must acknowledge that our personal experiences and values colour how we make these judgements. Several people at the AES conference this year presented approaches for actively surfacing these values within the evaluation, so they don’t sit as unacknowledged forces and tensions in the project.

This is helpful because I don’t think we, as evaluators, are generally forced to confront our values-based assumptions and biases in an evaluation. The processes we do have for acknowledging assumptions (like declaring a financial conflict of interest) are not necessarily values-laden, and they are woefully inadequate for the kind of personal experiences that colour our judgements in the cases I am writing about.

We have no universal truths, simply different viewpoints on those truths, and we have to get better at explicitly incorporating those viewpoints into our approaches, especially in situations where strong power dynamics are at play.

Power

Power is central in evaluation – and something we probably don’t talk about enough. It has been so heartening to see the focus on transforming power relationships as part of the ‘transformation’ theme in Launceston this year.

Evaluations don’t spring out of nowhere – they relate to programs, and they are funded and commissioned by someone. These relationships can never sit independently from the evaluation. So:

  • Who really controls the evaluation?
  • How does one account for and deal with variation in power and influence among participants and between participants and the evaluator?

Something I’m interested in is that evaluation is often tied to institutional lines, e.g. who funds it, who do we report to? One of the benefits, I think, of lived experience involvement is being able to have a much more dynamic approach to implementation and impact – constantly learning and evolving. But we are often confined by institutional constraints in terms of how we can actually use that knowledge.

To achieve meaningful participation, evaluators must acknowledge power dynamics – if we don’t, we are failing in our duty to the communities we hope to support through our work. I want to avoid the curse of co-design – everything these days seems to be labelled ‘co-design’, and it rarely is. I would contend that mislabelling something as co-design is actively harmful, as it papers over those power dynamics.

This matters in my context – and in those of many others working in lived experience evaluation – because stigma is still such a dominant force in our work. And more than that, self-stigma can have a tenacious hold on participants, leading them to undervalue their knowledge and experiences in the company of ‘experts’. Those of us with lived experience need to remember that we are the experts of our own stories.

Language

We know that one of the reasons peer work in mental health is effective is that the worker can act as a ‘translator’ between the clinical world and consumers. Similarly, there is a role for translators in mental health evaluation.

Evaluation is a world of jargon – we are the only people I’ve ever met who are hung up on the difference between ‘outcomes’ and ‘outputs’. Mental health has its own jargon and acronyms (the Department of Health literally has a dictionary).

Having someone who can span those boundaries is helpful. But it’s important they don’t become a pressure point in a project – solely responsible for communicating in each direction. I’ve seen this happen. It links back to what I wrote before about power – it is incumbent on an evaluation team to engage in two-way communication and empower communities, rather than relying on translators to emerge. This is especially true because those translators are likely to have an element of privilege – the very capacity to act as a translator – which means they may not be representative of their whole community.

Self-care

This is perhaps the most important point. For me, as an evaluator with lived experience, this means being constantly self-aware, constantly in a conversation with myself about how I am feeling, where my boundaries are, and if they are being tested. Finding a trusted relationship in my team to debrief is also vital. I use other self-care strategies too, but the point of these is that they are deeply personal. I am really excited that John Stoney and Emma Williams created space for this conversation at the AES conference in Launceston – and am keen to keep working with them on what this means.

 

I’m just starting my research journey in this space, and I’m keen to connect with others doing similar work – so please reach out!