The issue of social contagion of mental illness has gotten a lot of attention lately. Is it true? Does mental illness spread through populations (particularly youth) like a virus? As with many things, the reality is probably a bit more complicated than the narrative would allow. But what about the evidence? Did a recent study find that mental illness spreads via social contagion in populations of kids?
Published in JAMA Psychiatry, a new study found that adolescents exposed to classmates with mental health diagnoses experienced a 5% elevated risk of developing a mental health problem themselves. The authors concluded that this suggests peer-network transmission of mental illness. This was a big study of over 700,000 youth…which, paradoxically, is part of the problem.
Big samples are mostly good, don’t get me wrong. But they create an odd kind of problem. In essence (and yes, a bit of statistical inside baseball), social science and medical statistics try to ascertain whether a correlation or other effect is “statistically significant.” That is, the statistics try to determine whether an observed result could be due to sampling error (the groups differed by random chance). If you’ve ever taken a college stats class you may remember talk of p-values, and that’s what a p-value estimates: how likely a result at least this large would be if sampling error alone were at work. In a sample of 700,000 people, sampling error is assumed to be minimal, so it becomes much easier to flag smaller and smaller effects as “statistically significant,” since they are unlikely to be due to sampling error.
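To make that concrete, here’s a quick back-of-the-envelope sketch (mine, not from the study; plain Python with a normal approximation) showing how the very same near-zero correlation flips from “not significant” to “highly significant” purely because the sample grows:

```python
import math

def corr_p_value(r: float, n: int) -> float:
    """Two-sided p-value for testing rho = 0, using the t-statistic for a
    Pearson correlation and a normal approximation (fine at these n)."""
    t = r * math.sqrt((n - 2) / (1 - r * r))
    return math.erfc(abs(t) / math.sqrt(2))  # ~ 2 * P(Z > |t|)

for n in (200, 700_000):
    print(f"r = .01 at n = {n:>7,}: p = {corr_p_value(0.01, n):.2g}")

# r = .01 at n =     200: p = 0.89   -> "not significant"
# r = .01 at n = 700,000: p = 6e-17  -> "highly significant," same tiny effect
```

Nothing about the effect changed between those two lines; only the sample size did.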
The problem is that error doesn’t come only from sampling: noise can enter a study through multiple other routes, none of which influence the p-value. The way questions are phrased can change people’s answers, participants may guess the hypotheses and adjust their responses, and people’s answers across different measures naturally drift together a little for trivial reasons. Researchers themselves often inject various biases into studies.
As a consequence, it’s long been known that observing a correlation of exactly .00 is actually rather rare. You tend to get a pretty big hum of nonsense correlations between .00 and .10, trailing off and becoming less common as you approach .20. But in big samples, correlations very near .00 can become “statistically significant” even when there is a high probability that they are noise, not true effects.
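You can watch that hum emerge in a toy simulation. Suppose the true effect is always exactly zero, but each study picks up a small methodological bias of the kind just described (the bias SD of .04 here is an assumption for illustration, not an estimate from any real dataset):

```python
import random, statistics

random.seed(1)

def observed_r(n: int = 700_000, bias_sd: float = 0.04) -> float:
    """Observed correlation when the true effect is zero: a small random
    methodological bias plus sampling error (the SD of a null correlation
    is roughly 1 / sqrt(n))."""
    bias = random.gauss(0, bias_sd)            # noise the p-value ignores
    sampling = random.gauss(0, 1 / n ** 0.5)   # noise the p-value models
    return bias + sampling

rs = [abs(observed_r()) for _ in range(10_000)]
print(f"median |r|: {statistics.median(rs):.3f}")  # ~.027, despite no true effect
print(f"share between .00 and .10: {sum(r < 0.10 for r in rs) / len(rs):.0%}")  # ~99%
```

Under those assumptions, the vast majority of these pure-noise “effects” would still clear p < .05 at a sample of 700,000.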
Unfortunately, most social scientists and medical scholars somehow don’t know this or don’t care. There are probably lots of reasons for that. Most haven’t thought much about effect sizes; it’s not really to their advantage to question “statistically significant” findings (got to publish, publish, publish!); and people naturally want to think their studies are important even when, most often, they are not.
So that 5%? As effect sizes go, that’s pretty clearly noise in my opinion. The figure is an odds ratio (1.05), which can be converted to a raw correlation of about .013: basically, as close to zero as you can get without being zero. There’s a high probability that an effect that small is simply statistical noise, not a true relationship in the real world. Or put simply, this paper does not provide evidence for the social transmission of mental illness in youth.
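For those who want to check my math, the conversion follows the standard two-step chain: log odds ratio to a standardized mean difference (Cohen’s d), then d to r. A quick sketch:

```python
import math

# Reproducing the .013: the usual two-step conversion from an odds ratio
# to a correlation (log OR -> Cohen's d -> r).
odds_ratio = 1.05                                  # the "5% elevated risk"
d = math.log(odds_ratio) * math.sqrt(3) / math.pi  # d = ln(OR) * sqrt(3) / pi
r = d / math.sqrt(d ** 2 + 4)                      # r = d / sqrt(d^2 + 4)
print(f"d = {d:.3f}, r = {r:.3f}")                 # d = 0.027, r = 0.013
```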
Scholars need to get better at this. There are lots and lots of studies whose likely-noise results are falsely peddled as meaningful. In effect, social and medical science is producing a lot of misinformation. This should be a major scandal, yet we keep doing it.
As for “social contagion,” my best guess is that this term has caused a lot of confusion. I see little evidence that human behavior spreads like a virus. That’s not to say that the increase in mental health problems across all age groups (it’s not just teens; we need to stop thinking of it that way) doesn’t involve social factors. But those factors are probably more complicated than “monkey see, monkey do.”
Either way, even taken at face value, this tiny effect is merely a correlation, not evidence of “contagion.” Classmates likely share a number of social circumstances…same neighborhoods, similar levels of poverty, similar families, same teachers, same bullies, etc. They likely also share the same cluster of teachers, administrators, psychiatrists, and school psychologists involved in their diagnoses, all carrying the same biases. The data, as they stand, give no reason to prioritize “contagion” over these other explanations, even if we again set aside that such a weak result is likely a false positive in the first place.
I hate to use the word “reckoning,” as it is overused, but social science research really is in need of one. This paper is yet another example, among countless others, of how we use weak data with weak results to make overblown claims that mislead the public. This is absolutely routine in our field. And the public, who often pay for this work only to be misled by it, deserve better.