My attention was caught a little while ago by a story on the BBC news website about the effects of video gaming on children. The paper it refers to (Przybylski, 2014) is published in Pediatrics. The paper reports secondary data analysis of a large ESRC survey, which divided children into four groups according to their engagement with video games. One group played no games at all, one played video games for less than an hour a day, one played for between one and three hours a day, and a fourth played video games for over three hours a day.
Findings showed that the second group reported better adjustment. That is, playing video games for less than an hour a day was linked to better life satisfaction scores, more pro-social behaviour, and fewer internalizing and externalizing problem behaviours, compared to (a) playing no games and (b) playing for more than three hours a day. There were no significant differences between playing a moderate amount (1-3 hours) and the other groups.
Then I saw this tweet.
— BBC Oxford (@BBCOxford) August 4, 2014
And it got me thinking about what we mean by the word “agree” when we’re referring to scientific findings. We could look at agreement in terms of scientific merit – the problems of relying entirely on self-report measures, and the correlational nature of the findings, might lead me to be hesitant in my agreement.
I could be pedantic at this point and disagree with the tweet, because there was no claim made about “producing” well-adjusted children. But the evidence, clearly and transparently reported in the paper, suggests that playing video games for under an hour a day was associated with better adjustment.
More interesting is to consider the invitation to agree as grounded in readers’ own experiences. The tweet might be inviting anecdote. So I want to use this tweet as an opportunity to talk about the difference between anecdote and evidence.
You might well hear a scientist say at some point:
The plural of anecdote is not data
This is because anecdotes and experience are prone to bias. So is science – but it is arguably less biased than reliance on personal experience. This is why we do science, in fact – to try as far as possible to uncover truths about the world around us that are free of such bias. Research evidence – and the above study is a good case in point – draws on a representative sample (here, 5,000 children) to answer its question.
Furthermore, these 5,000 children are each an anonymous row of data on a spreadsheet, not children with whom the researchers had any emotional connection, or in whose psychological adjustment they had any personal investment. The argument of science is that taking such a sample gives a “truer” answer than asking one or two people about their experience.
And yes, it is true that some theories we have today in Psychology are rooted in the observations of one or two children. A famous example of this is of course Piaget (1959) and his careful records of his children growing up, showing us how children (might) develop. This is a good example of empiricism – of observation. Piaget’s early records do not lay claim to cause and effect relationships. To determine why development happens, cue the scientific method (as Piaget himself argued, e.g., Piaget, 1932).
OK. Good so far. But what about qualitative research? Here, couldn’t each data set be summarized as a collection of anecdotes? Researchers are often explicitly interested in interviewees’ stories, and those count as data. Well, yes – eventually. The data set is a set of organized “stories” that are then submitted to rigorous analysis by researchers, to build up a picture that is grounded in all the available data.
In sum, I heartily encourage students to question empirical papers: do the claims match up to the data being presented? What alternative explanations might there be for the findings? But questioning agreement from the point of view of your own experience, I do not encourage. This paper showed that spending less than an hour a day video-gaming was associated with good psychological adjustment among 5,000 children, and this result was statistically significant. Among those children, there will be several for whom this association does not hold. Scientific methods allow us to draw conclusions based on a representative picture: if you disagree with them, more than an anecdote is needed.