False Links Between Screen Time and Cognitive Development

Why social science needs to stop marketing trivial effects as meaningful.

A new report, covered breathlessly by Time Magazine, dramatically warns parents that screen time is associated with decreased cognitive development in young children.  The study in question was published in JAMA Pediatrics.  But do the data from the study actually provide evidence for such dire warnings?

The study uses a parent-report format, with mothers reporting on both the development and the screen time usage of their children.  Data were collected on just over 2400 children.  That’s an impressively large sample, but, ironically, it is exactly what sets the study up for its critical error, as we’ll see in a moment.  The design has other issues as well.  Getting both the predictor (screen time) and the outcome (development) from the same person (mom) creates bias.  Such single-source bias can produce small correlations that reflect the self-report method itself rather than “real things” that happen in the real world.  While parent report is a helpful source of information for assessing and tracking children’s development, in a clinical setting the assessment of children’s development would also include more objective measures.

The authors of the study also acknowledged that they did not look at the type or quality of the programming the children were watching.  This is a serious flaw common to screen-time research.  Singing along to nursery rhymes and copying the actions, for instance, could encourage language and motor skills.  Not all screen time is equal, and it should not be treated as though it were.

Results of the study suggest that screen time correlates at approximately r = .06 with reduced development at 5 years of age.  This effect was “statistically significant.”  But here comes the study’s important mistake.  As noted above, getting both points of data from the same source (mom) means the design can produce small but spurious correlations that reflect methodological error, not “real” correlations that exist in the “real world.”  And with large sample sizes (such as, say, 2400 kids), those tiny correlations can become “statistically significant” even if they don’t reflect anything going on in the real world, as the rough calculation below illustrates.
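To make that concrete, here is a minimal back-of-the-envelope sketch (not the authors’ actual analysis) using the approximate figures quoted above: a correlation of r = .06 and a sample of roughly 2400 children.

```python
# Illustrative sketch, not the study's analysis: why a tiny correlation can
# still be "statistically significant" when the sample is very large.
from math import sqrt
from scipy import stats

r, n = 0.06, 2400                      # reported correlation, approximate sample size
t = r * sqrt((n - 2) / (1 - r ** 2))   # t-statistic for a Pearson correlation
p = 2 * stats.t.sf(t, df=n - 2)        # two-tailed p-value

print(f"n = {n}: t = {t:.2f}, p = {p:.4f}")   # roughly t = 2.94, p = .003 -> "significant"

# The identical r = .06 with only 100 children would give p of roughly .55,
# nowhere near significance -- the "finding" is largely a product of sample size.
```

With 2400 children, even a near-zero correlation clears the conventional p < .05 bar; with 100 children, the same correlation wouldn’t come close.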

Let’s put the size of this correlation into perspective.  In statistical terms, a correlation of r = .06 reflects 0.36% shared variance.  In layperson’s terms (less precise, of course), if all we knew about these kids was their screen time, we’d be able to predict their cognitive development about 0.36% better than a coin toss.  That’s about one third of one percent, not thirty-six percent.  And that assumes the effect is real, which, as noted above, it probably isn’t, given the methodological issues.
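For anyone who wants to check the arithmetic, the 0.36% figure is simply the correlation squared:

```python
# The arithmetic behind the "0.36% shared variance" figure quoted above.
r = 0.06
shared_variance = r ** 2                               # r-squared: proportion of variance shared
print(f"{shared_variance:.4f} -> {shared_variance * 100:.2f}% of variance")
# 0.0036 -> 0.36%: about one third of one percent, not thirty-six percent.
```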

A recent article by Amy Orben and Andrew Przybylski in Nature Human Behaviour put this nicely into perspective.  This supposed effect of screen time on cognitive development is smaller than the association between eating potatoes, or wearing eyeglasses, and decreased mental health.  We don’t warn people about potatoes or eyeglasses, though, because those correlations are obvious nonsense, and this one is too.  So “statistical significance” doesn’t equal anything we should worry about, or even anything that’s actually real.  Parents, policy makers and the media need to take these kinds of claims with a grain of salt.

Social science has a widespread problem with using press releases to misinform the public on the basis of poor-quality research.  Scientists can sometimes identify such results as meaningless, but newsmakers and parents without statistical expertise may be less able to do so.  Unfortunately, this misinformation appears at times to be supported by professional guilds such as the American Psychological Association and the American Academy of Pediatrics.  Parents should be informed that such organizations are neither government agencies nor science organizations, but professional guilds that exist to protect and market their practitioners.  Fortunately, government reviews, such as a recent one in the UK, tend to be more candid, acknowledging that the data to support our moral panics over screens remain limited and that more and better-quality research is required before we can be confident about such big claims.

News media, likewise, particularly those reporting on science, need to be more alert to avoiding “Death by Press Release.”  They have, unfortunately, been suckered before, as with a report linking games like Grand Theft Auto to decreased empathy in boys that turned out to be based on fatally flawed data.  By reporting on these studies uncritically, some news organizations are contributing to moral panic and misinforming rather than informing parents.

The bottom line is that this new study actually provides better evidence against the idea that screens cause cognitive delays in toddlers than for it.  Effect sizes as tiny as 0.36% of variance should not be considered “evidence” for anything.  Unfortunately, claims were made about effects that the actual data can’t support.  We all need to do better than this.

Original Article Published on Psychology Today
