Another Study Finding Cellphone Bans Don't Work is Misleadingly Hyped
Please God, send your Holy Asteroid to make the bad science stop...
I really don’t want to make this substack about complaining about bad social media/smartphone studies all the time. There’s Dungeons and Dragons, tanks, and other science stuff to talk about. But there’s also a whack-a-mole element to bad science, particularly during a moral panic like the one we’re in now.
The latest entry is an unpublished study that was irresponsibly hyped by some news media, by advocates of cellphone bans in schools, and presumably by the authors themselves. The hype was that the study presented causal proof that cellphone bans improve student learning. In reality, of course, it showed the opposite, but critical thinking is low and nonsense is high. So here’s a quick rundown of everything wrong with the narrative around this study.
1) It bears repeating that the study is unpublished. Yes, I know peer-review has many, many, many flaws. To be very clear, this is not meant to be a ringing endorsement of peer-review as it currently exists. But eliminating scientific scrutiny altogether is worse. Did we learn nothing from the cold fusion debacle? This misuse of unpublished studies is becoming a widespread problem. When I first started seeing preprints (i.e., unpublished studies online), people assured me scientists were just sharing them to get feedback from other scholars. That would be fine. But now we’re clearly seeing them used for press attention and policy decisions. This is not good. We’re advancing a lot of unreliable stuff here as if it were “science”.
2) The effects were basically zero. ABC News hyped the study results as “surprising”, but the only surprise for cellphone ban fans was that the bans didn’t do much at all. I don’t love the statistical approach the authors use, for a lot of reasons, but even taking it at face value, the effect sizes are essentially zero (about r = .02 for grades and r = .08 for attendance1), below the level we’d expect from statistical noise/unreliable results. Put concretely: if all you knew about a given kid was whether their school had a cellphone ban, you could predict that kid’s standardized testing scores about 0.04% better than chance. Yes, that’s four one-hundredths of a percent better than chance. And that’s assuming the effect is “real”; it’s just as likely statistical noise.
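For readers who want to check the arithmetic behind that 0.04% figure: squaring a correlation coefficient gives the proportion of variance in the outcome that knowing the predictor accounts for. The r values are the ones quoted above from the study; the helper function below is just an illustrative sketch, not anything from the paper itself.

```python
def variance_explained_pct(r: float) -> float:
    """Percent of outcome variance accounted for by a correlation r (i.e., r-squared as a percentage)."""
    return (r ** 2) * 100

# r = .02 for grades -> about 0.04% of variance
print(f"grades: {variance_explained_pct(0.02):.2f}%")
# r = .08 for attendance -> about 0.64% of variance
print(f"attendance: {variance_explained_pct(0.08):.2f}%")
```

So even the larger of the two reported effects leaves more than 99% of the variance in attendance unexplained by ban status.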
3) Even these null results contrast with the federal NAEP standardized testing scores. Two years after Florida introduced state-wide cellphone bans in schools, their national scores plummeted to their worst levels in 20 years. I’m not saying that’s causal, but it does conflict with all the narratives from administrators and teachers that things were so much better in schools after the bans, all the kids were learning, holding hands, singing Kumbaya with flowers in their hair, etc. And clearly, cellphones were not the problem with Florida schools. Experiment tried…and failed.
4) This study is not a causal study. It appears the authors mainly compared schools that had higher or lower phone usage before the ban (honestly, their contrast is so weird and convoluted it’s difficult to make sense of). That is not a randomized controlled design, and there are obvious confounds: schools that complied with the state law more quickly or efficiently, or that had different pre-ban phone usage, likely differ from slower-complying or higher-usage schools in many ways beyond the ban itself2.
5) The paper is weirdly coy about what school district they tested. They keep saying LUSD, but never clearly define what district this is, unless I missed it somehow.
6) I suspect it’s Orlando. I think that’s important, because I accidentally introduced a confound into this study. About a year ago I began an investigative report using public records requests with this district. The data they supplied made it clear that schools got worse, not better, after the bans, whether on bullying, mental health, or the overuse of suspensions. In doing so, I created a historical artifact. This new paper suggests that suspensions went down in the second year of the ban, but there’s good reason to suspect that Orange County (or even other school districts they talked to) got embarrassed by that suspension data. They may have responded accordingly, using suspensions less or even just cleverly moving some kids off the books so they wouldn’t appear as often in official statistics.
Ultimately, the reality is that, like other hyped studies, this one, even taken at face value, provides better evidence against cellphone bans than for them. That fits with mounting data that cellphone bans in schools are not working and that the hype around them is largely moral panic, (often willfully) ignorant of the data.
Unfortunately, during moral panics, critical thinking tends to drop to zero. The hype around this truly unimpressive study is just another example.
To be fair, the results the authors present are a Gish gallop of confusing numbers. At the invitation of one of the authors, I ran their paper through ChatGPT to assist with effect size calculations and to see if I was off base. I put these calculations online and asked other statisticians to check them in case I got them wrong. The feedback I got was that they seemed accurate. I’m still open to the possibility that I’m missing something here, and happy to edit if that’s the case. It would really help if authors presented their results more clearly than they often do, particularly with standardized effect size estimates in terms of r, d, or OR. This should be a basic transparency issue.
Honestly, it’s just a real mess of a contrast.


