by Daniel Jacobson
CN: mental illness, stigma, diagnoses
‘Approximately 1 in 4 people in the UK will experience a mental health problem each year.’
This statement comes from the mental health charity Mind, but similar statistics can be found wherever you look. Its ubiquity is almost self-explanatory: it’s easy to remember and quote, easy to picture in a population, and doesn’t scare people, whilst simultaneously confirming its gravitas.
Unfortunately, it is possible that the comfortable simplicity of these statistics is a side effect of laziness on the part of the people whose opinions carry the most weight. Mind say that 1 in 4 experience a problem each year; Jeremy Corbyn says that a quarter of us will experience a mental disorder ‘during our lives’; and the NHS, from whom Mind obtain their statistic, say that, in 2007, 23% of the UK population ‘had at least one psychiatric disorder’, implying prevalence at that particular point in time. All three have stressed mental health as a priority; all three are saying slightly different things.
“If this much of our collective knowledge of mental health is this limited, there must be an explanation”
Regardless of whether these discrepancies arise from laziness or misunderstanding, even the ‘one in four’ figure itself is open to debate. According to the BBC Radio 4 podcast More or Less, the origins of this statistic go back to The World Health Report 2001, published by the World Health Organisation. However, not only does none of the report’s citations specify ‘1 in 4’, but the report itself states that ‘some 450 million people suffer from a mental or behavioural disorder’, which, against the world population of 2001, suggests a worldwide figure closer to 1 in 13. This led Jamie Horder, a researcher at King’s College London, to suggest that the figure owes its prevalence simply to being a ‘nice number’. For a statistic quoted this widely, that is particularly disconcerting. If this much of our collective knowledge of mental health is this limited, there must be an explanation.
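The gap between the report’s own figure and ‘1 in 4’ can be checked with back-of-the-envelope arithmetic. A minimal sketch, assuming a 2001 world population of roughly six billion (an estimate for illustration, not a figure taken from the report):

```python
# Rough check of the worldwide figure implied by The World Health Report 2001.
# The 2001 world population below is an assumed round figure, not from the report.
world_population_2001 = 6.0e9
sufferers = 450e6  # 'some 450 million people', per the report

one_in_n = world_population_2001 / sufferers
print(f"Roughly 1 in {one_in_n:.0f} people worldwide")  # → Roughly 1 in 13 people worldwide
```

Whatever population estimate one plugs in, the result sits an order of magnitude away from 1 in 4 — the report simply cannot be the source of that figure.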
The term ‘mental health statistics’ is, for now, an oxymoron. There is still no effective way of quantifying mental illness: it is too personal, too complex, too broad, too vague, and too stigmatised a subject. In the studies above, the only methods of collecting mental health data for a community have been qualitative, and that in itself can be problematic.
“There is still no effective way of quantifying mental illness: it is too personal, too complex, and too stigmatised a subject”
The most common way of collecting this sort of data is through surveys. Whilst surveys are the most effective way of garnering a sufficiently sized sample, for a subject like mental illness they may be the worst possible approach. They demand a combination of self-diagnosis and an awareness of social norms in order to reliably report your own behavioural ‘abnormalities’ – a combination that nobody really has.
Survey responses are also swayed hugely by social factors, as the WHO report itself shows. It concluded a worldwide occurrence of all major psychiatric disorders of 24%, but this ranged from 7.3% in Shanghai to 52.5% in Santiago, Chile. Such a spread far from conforms to the ‘1 in 4’ figure in any meaningful way, and it highlights the difficulty of collecting meaningful mental health statistics: the diversity may be heavily informed by cultural disparities that alter diagnoses. A culture might have particularly strong taboos against discussing mental health, or might not recognise a mental state we would class as a ‘disorder’ as being outside the norm. If we impose our own culturally informed definitions and norms on a culture that does not share them, the results may not be relevant to the contexts they are applied to.
The other prevalent way of collecting data on mental health is through professional assessment, generally achieved using checklists, such as the one at the centre of Jon Ronson’s The Psychopath Test. These checklists, however, do not tell the whole story: characteristics may be wrongly included or missed out, and each included characteristic is given equal weighting in diagnosis. A known example is autism, which has been reported to affect five times as many boys as girls. This has led to many girls being ‘false negatives’ – going undiagnosed when the disorder is present – simply because the checklists were built around how autism presents in boys.
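To see why equal weighting matters, here is a hypothetical illustration (not any real diagnostic instrument): a naive checklist diagnosis simply counts ticked items against a threshold, so every item – however specific, and however biased towards one presentation of a condition – counts the same.

```python
# Hypothetical checklist scoring, for illustration only -- not a real
# diagnostic instrument. Every item carries equal weight, so a patient
# whose condition presents atypically can miss the threshold entirely.
def checklist_diagnosis(answers, threshold):
    """Return True if at least `threshold` checklist items are ticked."""
    return sum(answers.values()) >= threshold

# A genuine but atypical presentation: only two of four (made-up) items
# are ticked, so the equal-weight count produces a false negative.
patient = {
    'item A': True,
    'item B': True,
    'item C': False,
    'item D': False,
}
print(checklist_diagnosis(patient, threshold=3))  # → False
```

A weighted score, or items validated across different presentations of the condition, would avoid this failure mode – but that is exactly the nuance a flat checklist discards.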
“Charities, politicians, and government organisations, whilst displaying the right goals, are reporting fake results”
A general lack of public understanding can play an important role in the false reporting of statistics. Misreporting of specific details can completely subvert a study’s conclusion, and this carelessness manifests itself in a game of Chinese whispers. Charities, politicians, and government organisations, whilst displaying the right goals, are reporting fake results. Such figures can be deployed tactically in an effort to increase funding or secure political popularity. In words attributed to Benjamin Disraeli, ‘There are three kinds of lies: lies, damned lies, and statistics.’ For a society to be well-informed, it must receive reliable information from sources dedicated to offering it.
One message to bear in mind is that none of this is the fault of ‘statistics’ or ‘data’ itself, but of the means by which the data were collected, or of false interpretations by those who feel entitled to make them. Data collection and analysis are only ever half of the story; the other half relies on truthful, unbiased reporting. These studies have the capacity not only to reveal more about the true mechanisms behind mental illness, but to destigmatise the topic as a whole. The first half has its problems, but greater care in study design has the potential to fix them. Until we can trust the statistics we receive, however, our learning is stunted.
Header image by Neil Conway