The Myth of the “Digital Native” Generation

By Hillary Miller August 08, 2017


“Digital native” has been a buzzword since the early 2000s, when it was first used to describe the generation born after roughly 1980 that grew up immersed in digital technology. The logic was that “digital natives” had an inherently better grasp on technology than their parents, who adapted to these tools later in life. This native digital fluency supposedly impacted everything from the way the younger generation interacted with information to their ability to multitask.

However, a new study suggests that the very notion of a digital native is a myth, and holding onto it could be hampering children’s ability to learn.

Origins of the Digital Native Concept

The idea of a digital native first came into the popular consciousness through an article entitled “Digital Natives, Digital Immigrants,” written by Marc Prensky in 2001. Prensky argued that the digital native generation was defined by those who had never known a world where digital and online technologies were not an integral part of life. Digital immigrants had come of age in the analogue world and migrated, by choice or by necessity, to the digital world.

Millennials of the digital age view computers, tablets, and smartphones as an extension of themselves, while digital immigrants often view these same technologies as a threat to the way of life to which they are accustomed.

Prensky’s 2010 book, “Teaching Digital Natives,” had a significant impact on the education and instructional design field. The book explained that the generational divide between digital natives and digital immigrants went further than their level of comfort with technology; it affected the way students interacted with information and absorbed knowledge. Digital natives, Prensky argued, learned differently from any prior generation, and education had to change in order to reach them.

“A Yeti with a Smartphone”

The new study, published in the journal Teaching and Teacher Education this June, challenged the long-accepted conclusions of Prensky’s work. Lead author Paul A. Kirschner found no evidence that there was a significant difference between millennials and older generations in terms of their skill in utilizing technology and their ability to multitask. Kirschner concluded that a true “digital native,” who was inherently better with technology simply by virtue of their age, was a myth: about as real as “a yeti with a smartphone.”

Kirschner posits that not only are these assumptions wrong, they're also harmful to the education of children and young adults. Prensky's idea of the digital native started a paradigm shift in educational theory, based on the notion that these learners were fundamentally different from earlier generations. But with no empirical evidence to back up these claims, Kirschner asserts that the corresponding shift amounts to a dangerous fad rather than a science-based cognitive theory.

The study took particular aim at the claim that today's students can handle multiple streams of information better than their parents. Asking students to adapt to new uses of technology at the same time they're learning core content through interactive multimedia may be hampering their ability to learn at all.

Implications of Kirschner’s Findings

It’s a foregone conclusion that the average American student has been exposed to more personal technology than their parents had at the same age. But this exposure doesn’t necessarily affect the way students use technology, or the way they learn best.

Kirschner isn’t the first to suggest that mere exposure to technology doesn’t always correspond to skill. The New Media Consortium’s Horizon Report found that millennials most often use technology in the same way as their parents: as passive media consumers. Educators are finding that growing up in the digital age is far from a guarantee that students are able to effectively leverage technology for research, collaboration, and content creation. Evidence is building that true digital literacy has to be taught explicitly, even to the youngest generation.

The full study, entitled “The myths of the digital native and the multitasker,” is available on ScienceDirect.


3 Comments

  1. Hey Hillary,

    Great article! I think that the modern parent, who is indeed a “digital native” (one born around the 1980s), is looking for a bit of reassurance that they are not damaging their child by allowing them to use screens. Do you think we’ll see evidence of this soon?

    Best wishes,
    John

    • Hi John,

      That’s an interesting question! The prevailing theory suggests that, like most things in life, moderation is key. There’s no hard evidence to suggest that allowing children to use age-appropriate technology in small doses is harmful to their development. In fact, when parents participate in digital activities along with their children and provide guided interaction about how the activities on a screen connect with the real world, it can be a great opportunity for learning. (University of Edinburgh professor Lydia Plowman expands on this concept in an article from BBC: http://www.bbc.co.uk/guides/z3tsyrd)

      Of course, screen time should never be a replacement for human interaction, physical activity, or any of the other cornerstones of a child’s development. But — at least in my opinion — we’ll soon see more evidence that limited, structured engagement with screens is actually more beneficial than, say, passively watching television (like many in our generation often did!).

      All the best,
      Hillary