Social Media, Mental Health and Attention
A new study using objective measures of social media use finds little evidence for a correlation with mental health or attention.
A new study just dropped on the relationship between social media use, mental health, and attentional control. This one was conducted by Chloe Jones and colleagues at Curtin University in Australia. All in all, it’s a fairly typical self-report study except for one innovation: it uses objective rather than self-reported measures of social media usage.
Participants were 425 undergraduate students. They filled out surveys related to their mental health, completed a task that measured their attention, and supplied data from their phones (via “screentime” or “app usage” logs) for time spent on TikTok, Instagram, Snapchat, Facebook, and Twitter/X. That last bit reflects a recent innovation in some studies: collecting usage data directly from devices rather than relying on unreliable self-report.
Using objective data also cuts down on one source of noise, namely single-responder bias. Single-responder bias occurs when data on both a hypothesized predictor and an outcome are collected from the same person; the same response tendencies (say, a generally negative self-view) can then color both measures and produce spurious correlations. So, it’s great the authors did this.
The authors then ran 30 bivariate correlations between the outcome measures (attention, total mental health symptoms, anxiety, depression, and stress) and social media use (total time and each of the 5 platforms individually). Of these 30 correlations, only 4 were statistically significant, and all had very weak effect sizes (r = .11 or .12). Overall, that’s a pretty poor showing for the “social media is bad” hypothesis. Even those 4 significant correlations could be due to chance (more on this in a second).
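To put those effect sizes in perspective, here’s a quick back-of-envelope check in Python. The r values and sample size come from the paper as summarized above; the arithmetic is just the standard significance test for a Pearson correlation, not anything the authors ran:

```python
# Quick check on how weak r = .11-.12 is with n = 425.
# Values of r and n come from the study as reported above; the test
# itself is the standard t-test for a Pearson correlation.
from scipy import stats

n = 425
for r in (0.11, 0.12):
    t = r * (n - 2) ** 0.5 / (1 - r**2) ** 0.5    # t statistic for a Pearson r
    p = 2 * stats.t.sf(abs(t), df=n - 2)          # two-tailed p-value
    print(f"r = {r:.2f}: variance explained = {r**2:.1%}, t = {t:.2f}, p = {p:.3f}")

# Output (approximate):
# r = 0.11: variance explained = 1.2%, t = 2.28, p = 0.023
# r = 0.12: variance explained = 1.4%, t = 2.49, p = 0.013
```

In other words, even the “significant” associations account for barely more than 1% of the variance, and their p-values hover in the .01–.02 range, which matters for the multiple-comparisons point below.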
As such, this paper adds to the growing evidence that current concerns about social media bear more resemblance to a moral panic than to a public health concern founded on solid, clear evidence. That said, I do have some criticisms of the paper.
1) I do wish scholars would stop relying on bivariate correlations. We know there are certain variables we really need to control for, including sex, personality (particularly neuroticism), and family abuse/neglect. The latter is particularly true given evidence Mike Males has presented from CDC data suggesting that the real issue for depressed teens is family abuse, not screens. It may also be worth controlling for exposure to suicides in the individual’s social sphere, such as family and friends, as well as exposure to bullying, and marginalized status, as marginalized teens may turn to social media for help when they are picked on. We really need more focus on standardized regression coefficients from properly controlled data, and less focus on bivariate correlations (a sketch of what such a model could look like follows this list).
2) The authors don’t correct for the accumulation of Type I error across multiple analyses. I suspect that with a Bonferroni correction, even those 4 out of 30 significant correlations would have become non-significant (see the quick check after this list).
3) This is a common problem, but the abstract displays negativity bias. That is to say, the abstract mentions only the 4 correlations that were significant (hence “bad news”), and fails to mention that there were 26 other correlations that were non-significant. As such, reading the abstract alone (which, unfortunately, is what most people do) gives the impression that the authors found more consistent, albeit small, results than they actually did. Sadly, too many authors do this with their abstracts, and we need a culture throughout social science in which abstracts are clearer about null results.
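On point 1, here is a minimal sketch of what extracting standardized regression coefficients from properly controlled data could look like. All column names (social_media_hours, neuroticism, and so on) are hypothetical placeholders, not variables from the Jones et al. dataset:

```python
# Sketch of standardized regression coefficients from a controlled model
# (point 1). All column names are hypothetical placeholders, NOT
# variables from the Jones et al. dataset.
import pandas as pd
import statsmodels.api as sm

def standardized_betas(df: pd.DataFrame, outcome: str, predictors: list[str]):
    """Z-score the outcome and predictors so the fitted coefficients
    are standardized betas, then fit an ordinary least squares model."""
    cols = [outcome] + predictors
    z = (df[cols] - df[cols].mean()) / df[cols].std()
    X = sm.add_constant(z[predictors])
    return sm.OLS(z[outcome], X, missing="drop").fit()

# Hypothetical usage:
# model = standardized_betas(df, "depression",
#     ["social_media_hours", "sex", "neuroticism",
#      "family_abuse", "bullying_exposure"])
# print(model.summary())  # the beta on social_media_hours is the figure of interest
```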
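And on point 2, the correction itself is trivial arithmetic, assuming a family of 30 tests at the conventional alpha of .05:

```python
# Bonferroni correction for the 30 tests run in the paper (point 2).
n_tests = 30
alpha = 0.05
per_test_alpha = alpha / n_tests
print(f"Per-test significance threshold: {per_test_alpha:.5f}")  # 0.00167

# The strongest reported correlations (r = .11-.12, n = 425) have
# two-tailed p-values around .013-.023 (see the earlier sketch), far
# above .00167 -- so none of the 4 would survive the correction.
```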
Ultimately, this is just one study, done with college students. It suggests that social media use has little to do with either mental health or attention. However, it doesn’t tell us much directly about whether, say, cell-phone bans in schools would be useful for teens. At the moment, the evidence in that realm is pretty messy, and I’d say the data we have don’t look promising for these sorts of interventions. Yet we really need some rigorous, preregistered randomized controlled trials. Unfortunately, policy makers have already put the cart before the horse and created sunk costs, so getting good, objective data now may be difficult.