EdTech and Education in the 2010s: A Lost Decade or a Period of Progress?

By Henry Kronk
December 23, 2019

In the final months of the current decade, U.S. students received two lackluster report cards. First, in October, the National Assessment of Educational Progress (NAEP, the Nation’s Report Card) delivered 4th and 8th grade test results in math and reading. While scores shifted by a few points in every category, the results were largely unchanged compared to 2017.

Then, in November, the Programme for International Student Assessment (PISA) released its results from 2018. American students scored average in reading and science and below average in math.

These results led many stakeholders to issue a familiar refrain. The New York Times headline read “‘It Just Isn’t Working’: PISA Test Scores Cast Doubt on U.S. Education Efforts.”

Following the NAEP results, a group of 10 education organizations issued a joint agenda, stating, “we want to express the urgency that we see with the faltering reading results and dispiriting stagnant math results.”

In March of this year at an education policy conference, Secretary of Education Betsy DeVos said, “Over the past 40 years, federal taxpayer spending on education has increased about 180 percent, amounting to over $1.2 trillion cumulatively. And yet, we’re still 24th in reading, 25th in science, and 40th in math when compared to the rest of the world. Doing the same thing — and more of it — won’t bring about new results.”

In other words, many comments ranged from “what we’re doing isn’t working” to “spending more on education won’t deliver better results.”

With EdTech and Education, Many Are Quick to Point Out Faults

Some have voiced similar concerns about edtech. Bloomberg ran an editorial in September arguing that “Classroom Technology Doesn’t Make the Grade: Schools are spending billions on digital tools. Students have little to show for it.”

In March, Forbes’ Derek Newton mused, “What If Online Education Simply Doesn’t Work?” (True, one might say that online education is distinct from edtech, but there is overlap.)

Relating the use of edtech in American classrooms to student achievement on massive standardized tests does not afford many concrete conclusions, if any at all. But the two are related, and, while these tests were delivered over the past ten years, edtech was deployed in American primary and secondary schools on a massive scale. Americans have spent huge sums to bring edtech into classrooms. By the 2015-16 school year, roughly half of American students were issued their own (1:1) devices in school. That number has only continued to climb.

Image: Illustration of lightbulbs representing education and edtech. PhotoTechno, iStock.

Edtech developers raised a collective $1.45 billion in 2018, making it a record year. 2019 is looking like it may break this record again. Bloomberg values the American and European edtech market at over $100 billion.

So: after such a history of average and more or less unchanging test scores, are these claims true? Have these dollars (taxpayer and otherwise) spent on public education and edtech been squandered?

No one, of course, has been able to definitively and convincingly answer in the affirmative or negative.

But there are several points that undermine the conclusion that American public education and/or the impacts of edtech are hopelessly mediocre.

Inequality Is Growing in U.S. Schools

American students have not been as uniform as the broad test scores indicate. Inequality has grown significantly in U.S. schools, and it far outstrips that witnessed in other countries.

The PISA results for low-performing students remain the same as they were 30 years ago. But American educators have managed to raise scores for the top and middle portions of their classes.

Furthermore, common wisdom indicates that some schools—certain charters, those in wealthier districts, or specially designated magnets—manage to educate their learners better. They have better funding, draw more inspired educators, or just have some special mojo coming out of the water fountains.

But the PISA test scores firmly disprove this idea. As the Hechinger Report’s Jill Barshay writes, “the vast majority of educational inequality in America is inside each school, according to the PISA test score report [emphasis added]. Statisticians mathematically teased out inequality between schools versus within each school and found that, in the U.S., only 20 percent of the variation in student performance is between schools. The remaining 80 percent is inside each school.”

(Barshay got in touch with a couple of experts and explains this phenomenon in far more depth in her weekly column, Proof Points.)

To be sure, income inequality is also growing outside of schools. This year, it reached its highest level since the Census Bureau began tracking it.

Many Highlight Education and EdTech Failures Over Successes

It turns out that many other countries also struggle to improve their students’ test scores and conduct a fair amount of handwringing every time they fail to do so.

In New Zealand, students tested above average in reading and math, but the country has been on a downward trajectory. New Zealand media trumpeted the latter fact over the former.

Many voices also focus on the relative position of their country’s students before they get to the actual scores. The U.K. scored just below the U.S. in reading, but the BBC still celebrated the fact that the U.K. had been on an upward trajectory relative to other countries.

Standardized Tests Are Not Gold Standards of Assessment

While many education stakeholders and members of the public take the results of the PISA and NAEP assessments to be ironclad, there is a healthy debate among experts and researchers about their methodology and how their results are communicated.

A 2016 methodological critique of PISA found the test exhibited “an inconsistent rationale, opaque sampling, unstable evaluative design, measuring instruments of questionable validity, opportunistic use of scores transformed by standardization, reverential confidence in statistical significance, an absence of substantively significant statistics centered on the magnitudes of effects, a problematic presentation of findings and questionable implications drawn from the findings for educational norms and practice.”

What’s more, many have taken issue with how NAEP communicates its results. What, for example, does ‘proficiency’ actually mean?

The Brookings Institution’s Tom Loveless describes the common misconception that the term indicates performing at grade level:

Proficient on NAEP means competency over challenging subject matter. This is not the same thing as being “on grade level,” which refers to performance on local curriculum and standards. NAEP is a general assessment of knowledge and skills in a particular subject.

Equating NAEP proficiency with grade level is bogus. Indeed, the validity of the achievement levels themselves is questionable. They immediately came under fire in reviews by the U.S. Government Accountability Office, the National Academy of Sciences, and the National Academy of Education. The National Academy of Sciences report was particularly scathing, labeling NAEP’s achievement levels as “fundamentally flawed.”

To conclude, many members of the public present claims about the failure of edtech and education policy as fact rather than opinion. In many cases, these discussions require more nuance and understanding. Sounding the alarm isn’t necessarily the best response.

There is no doubt that American public education and its use of edtech have a great deal of room for growth. Some students in some districts in some schools—many, in fact—are failed daily by the institutions that should be serving them.

But for others—also many—the current situation is working. As Jill Barshay’s column in particular highlights, certain groups of students are showing strong growth. It might be more productive for the public to seek ways to extend this progress to everyone instead of demonizing teachers, administrators, edtech developers, and policymakers.

Featured Image: VectorMine, iStock.

