Evidence and Elephants

A blog post by Dr Emily Tweed, Clinical Lecturer in Public Health, University of Glasgow

You might have heard the parable about a group of blindfolded people and an elephant. Each of them feels their way around a different part of the elephant, so they disagree about whether they’ve encountered a snake (the trunk), a tree (the leg), a spear (the tusk), and so on: in some versions, they come to blows over it. This story is reputed to be thousands of years old and has been used to illustrate all sorts of ideas. One of the most common is that our understanding of the world is limited by our individual experience, and that a greater understanding requires multiple perspectives.

As someone with a researcher-practitioner role, this parable resonates with me in relation to how we use evidence. Sometimes, there’s a risk that by approaching an issue from a particular standpoint – whether methodological, disciplinary, or ideological – we fail to grasp how it is part of a bigger whole.

This is especially likely given existing conventions and norms about what constitutes evidence, or good evidence. How many of you have seen slides or papers referring to the ‘pyramid of evidence’, with randomised controlled trials at the top, and wondered how we can possibly move forward in a field where relatively few interventions or policies can be tested in a trial?

Take drug-related deaths, for instance. To understand the terrible increase that Scotland has seen in people dying from the acute effects of drug use, we need a diverse range of evidence types. Randomised controlled trials might be able to tell us which specific treatments work best to improve a highly selected set of outcomes in a particular population, often under carefully controlled conditions. However, they can’t tell us much about the context in which people use drugs in the real world (their ‘risk environment’); how social networks influence protection and harms; how drug markets have changed in response to Covid-19; the factors that affect the adoption of different legislative approaches to tackling drug-related harms; or many other relevant questions besides.

Answering these questions requires us to draw on multiple ways of knowing, matching the method to what we’re trying to find out, and piecing it all together in view of each approach’s strengths and limitations. Rather than a pyramid, our model of evidence probably needs to resemble a web – one that attempts to represent the extraordinarily complex interplay of factors contributing to the drug-related deaths emergency.

We value randomised controlled trials because they are one of the most reliable ways of eliminating all the ‘noise’ that might interfere with the ‘signal’ of whether something works or not. But sometimes the noise is what’s interesting – all those contextual factors that contribute to harms, benefits, decisions and outcomes. Focusing too much on experimental approaches narrows our view so that we only consider what can be tested, rather than looking more broadly at what might contribute. It’s this tendency that has resulted in an evidence base skewed towards individual-level interventions to improve health and wellbeing, rather than towards changes in societal processes and institutions (like education, housing, policing, and urban planning).

There’s a mismatch between the evidence we want and the evidence we have. Our commitment to being evidence-informed is laudable, but we sometimes forget how powerfully our decisions are shaped by the evidence that’s missing. To go back to the starting analogy: if all we are using is our hands, and all we can feel is a tail, then no wonder we might be getting things wrong – we’re trying to tackle a snake when in reality it’s an elephant.

Fortunately, there’s an increasing appreciation among funders, researchers, practitioners, and policymakers of the value of doing things differently – of using interdisciplinary approaches; evaluating ‘upstream’ policy decisions as well as individual treatments; and conceptualising public health challenges as complex systems with multiple contributing factors rather than single linear causes. In my role, I’ve been trying to pay more attention to the ‘missing evidence’, and how it might be influencing my decisions – and to consider which ways of knowing are best suited to different pieces of the puzzle. We may never be able to see the whole elephant, but we can at least get a better idea of how to nudge it in the right direction.

About the author

Emily Tweed is a clinical lecturer in public health at the MRC/CSO Social and Public Health Sciences Unit, University of Glasgow. Her research and practice interests are in health inequalities, with a particular focus on the use of routine administrative data to understand the broader determinants of health.


Categories: Blog series, Importance of Evidence

Published on: March 30, 2022
