By Cait Etherington August 05, 2017
Teachers are far from perfect. Ask any parent with school-age children, and they will soon start sharing stories about unfair or inaccurate grading, negligence, and poor communication. Most students, even those in elementary school, are also happy to tell stories about teachers who fail to make the grade. Human teachers, it seems, are the target of a great deal of criticism, and this means that the idea of replacing them with algorithms naturally enjoys at least some support. After all, if we could replace human teachers, would we not also eliminate their errors, subjective biases, and oversights and, better yet, find a way to tailor each educational experience so that every child gets exactly what he or she needs to excel?
Theoretically, this is true, but there is one problem. For every awful teacher, there are many teachers who are inspiring, supportive, and transformative. Yes, we all have memories of teachers we despised but also teachers who touched our lives in lasting ways. The future of teaching, then, appears to rest on the ability to harness new technologies in a manner that will permit teachers to spend more time doing what they do best and less time doing what they sometimes don’t do very well at all.
As a variety of robust learning management systems have already shown, there are things that algorithms can do that teachers cannot. For example, compare a university professor to the Canvas learning management system. For years, university professors have struggled to police whether or not students are doing their assigned readings, relying on various methods, from quizzes to probing in-class questions to weekly summaries, to do so. Then Canvas arrived. If a professor posts readings on the platform, it is possible to see not only which students have opened the reading and when (yes, anyone who opens the readings only three minutes before the start of class is now fully exposed) but also how long each student has spent reading the required texts. Similarly, while asking students to produce weekly summaries has long been a commonplace practice, especially in the liberal arts, with Canvas, keeping track of these short assignments and knowing exactly when they were submitted is now much easier.
The takeaway is clear: While an algorithm may not be well equipped to craft a carefully thought-out syllabus for a philosophy, literature, or history course, it is better equipped than a human teacher to determine which students are completing the readings and when. And as new technologies continue to transform learning management platforms (e.g., educators will increasingly be able to gather data on which parts of assigned readings students focus on), algorithms will increasingly also be able to offer insight on content and even use this data to make smart choices about which content requires the most attention in a given course and which content may not be relevant at all.
Beyond surveillance and gathering data on how students interact with content, algorithms can also serve another purpose in the classroom: answering routine questions. As reported in an earlier eLearningInside News article on Jill Watson, a virtual teaching assistant who is part of the teaching team at Georgia Tech, algorithms are also well positioned to anticipate and respond to students’ routine questions (e.g., “What are the readings this week?” or “When is the final exam?”). Here, there is some good news for students: Unlike a human teacher, a virtual teacher won’t get frustrated if you keep asking the same routine question throughout the semester.
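The core of such a system can be sketched as keyword matching against a list of frequently asked questions. This is only a toy illustration, not how Jill Watson actually works (the Georgia Tech assistant was built on IBM's Watson platform), and the FAQ entries here are invented.

```python
# Invented FAQ entries: each maps a set of keywords to a canned answer.
FAQ = {
    frozenset({"readings", "week"}):
        "This week's readings are posted under Module 3.",
    frozenset({"final", "exam", "when"}):
        "The final exam is on December 12.",
}

def answer(question):
    """Return the FAQ answer whose keywords best overlap the question,
    or defer to the human teacher when nothing matches."""
    words = set(question.lower().replace("?", "").split())
    best_keys, best_score = None, 0
    for keys in FAQ:
        score = len(keys & words)
        if score > best_score:
            best_keys, best_score = keys, score
    if best_keys is None:
        return "Let me forward that to the instructor."
    return FAQ[best_keys]

print(answer("When is the final exam?"))  # → The final exam is on December 12.
```

Note the fallback branch: a routine question gets an instant answer no matter how many times it is asked, while anything unrecognized is handed back to the human teacher, which is exactly the division of labor the article describes.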
The limits of algorithms are relatively clear. First, for all their benefits, they are not necessarily able to make decisions about original content. Indeed, while an algorithm can help a human teacher understand what content students are interacting with and with what level of difficulty, relying on an algorithm to craft a syllabus would likely be a disaster. The other problem, which is more troubling, is affective. Algorithms might “know” what students are reading, when and how long they are interacting with course materials, and even be able to respond to routine questions, but the bottom line is that teaching is about emotional intelligence. If a student wrote to Jill Watson to explain that their father had just passed away and that they would be missing the final exam, for example, it seems unlikely that Jill would be well equipped to send an appropriate and compassionate response.
Moving forward, it seems unlikely that algorithms will replace teachers. However, as recent developments in machine learning have already demonstrated, at least some of the tasks that teachers have never quite mastered will likely be augmented by algorithms that, in many respects, have the capacity to “know” students in a different and even more accurate manner. The next step for machines and human teachers will be to learn how to build productive “teaching teams” that make the most of both machine and human strengths.