By Cait Etherington February 14, 2017
When Apple first launched Siri in 2011, many people were skeptical. Why talk when you can type? Moreover, for many adults, Siri’s limited understanding and trite responses were simply frustrating. More than five years later, Siri is still “alive” and well, and an entire generation of children is now growing up with Siri. Although Siri’s ability to engage in complex conversations continues to leave much to be desired, “she” has accomplished one thing: since Siri’s arrival, consumers have become increasingly comfortable with the idea of talking to or with their phones, tablets, and computers. For this reason, Siri may be best understood not as an endpoint but rather as a transitional technology, something designed to ease the shift between two interfaces (in this case, the keyboard and voice-activated assistants). In other words, what Siri has been doing, whether or not we know it, is helping us adjust to and accept future forms of communication, which will rarely entail the use of an old-fashioned keyboard. This may explain why we are now seeing the arrival of Siri-type assistants in eLearning courses too. Ready to meet Jill Watson, virtual teaching assistant at Georgia Tech? Like it or not, Jill may soon transform the eLearning environment.
Chatbots, both text-based and voice-activated, have been used in finance and other industries for several years now, but in 2016 they started to make waves in higher education too. That year, Professor Ashok Goel at Georgia Tech added a new teaching assistant to his team. Her name was Jill Watson, and the course was Knowledge Based Artificial Intelligence (KBAI), which is offered every semester and is a requirement in Georgia Tech’s online master of science in computer science program. With about 300 students posting close to 10,000 messages over the course of the semester, Professor Goel was looking for help, and so were his eight human teaching assistants. Enter Jill Watson. Jill could answer most routine questions, and unlike the other teaching assistants in Professor Goel’s class, she was available 24/7. This meant that night owls could be assured of having their questions answered in real time, even if they were completing course modules at 3:00 am.
As Professor Goel explains, “One of the secrets of online classes is that the number of questions increases if you have more students, but the number of different questions doesn’t really go up. Students tend to ask the same questions over and over again.” This made anticipating what Jill would need to answer relatively easy. Goel’s team then wrote code enabling Jill to field the routine questions that come up every semester. However, as one member of Goel’s team admits, initially, like many new teachers, Jill had a lot to learn. “A student asked about organizing a meet-up to go over video lessons with others, and Jill gave an answer referencing a textbook that could supplement the video lessons — same keywords — but different context,” she explained. “So we learned from mistakes like this one, and gradually made Jill smarter.” By the end of the course, Jill was answering questions with 97% accuracy, as high as or higher than most human teaching assistants.
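Georgia Tech has not published Jill’s internals (she was built on IBM Watson), but the general approach described here — matching an incoming question against a bank of past questions and deferring to a human when the match is weak — can be sketched in a few lines. Everything in this example (the function names, the sample FAQ entries, the 0.5 confidence threshold) is illustrative, not Jill’s actual code; note how a simple word-overlap score can produce exactly the “same keywords, different context” mistake the team describes unless low-confidence matches are routed to a human.

```python
# Hypothetical sketch of keyword-overlap FAQ matching with a confidence
# threshold. Names and numbers are illustrative; Jill Watson's real
# implementation is not public.

def tokenize(text):
    """Lowercase a question and split it into a set of words."""
    return set(text.lower().replace("?", "").split())

def best_answer(question, faq, min_confidence=0.5):
    """Return the stored answer whose question overlaps most with the
    incoming one, or None when confidence is too low (defer to a human TA)."""
    q_words = tokenize(question)
    best, best_score = None, 0.0
    for known_q, answer in faq.items():
        k_words = tokenize(known_q)
        # Jaccard similarity: shared words / total distinct words
        score = len(q_words & k_words) / len(q_words | k_words)
        if score > best_score:
            best, best_score = answer, score
    return best if best_score >= min_confidence else None

faq = {
    "When is assignment 1 due?": "Assignment 1 is due Sunday at 11:59 pm.",
    "Where can I find the video lessons?": "Video lessons are posted online.",
}

# A near-verbatim repeat of a known question clears the threshold:
print(best_answer("when is assignment 1 due", faq))
# A novel question shares only incidental words, falls below the
# threshold, and is handed off to a human rather than mis-answered:
print(best_answer("Can we organize a meet-up to discuss the videos?", faq))
```

Raising `min_confidence` trades coverage for accuracy: the bot answers fewer questions but makes fewer context mistakes, which is one plausible way a team could “gradually make Jill smarter” over a semester.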
In traditional eLearning environments, students are tethered to their keyboards and screens. While this is not necessarily a problem, it does create certain limits. For example, hands-on activities, like fixing a car or appliance or building a piece of furniture, have traditionally been difficult to teach in an eLearning space, since students must read or view material first and only then attempt to put their skills into practice. As voice assistants become more popular, we will increasingly be able to do as we learn. No longer tethered to a keyboard and screen, learners will be able to put theory into practice in real time. Eliminating the lag between absorbing and applying knowledge is expected to increase the value of eLearning and accelerated learning on myriad levels, but especially in the trades, where voice-activated assistants promise to recreate the conditions of traditional apprenticeships, perhaps with an even higher level of quality control and guidance.
For students and employees with physical disabilities, for the visually impaired, and for anyone suffering from a keyboard-related injury, such as tendinitis or carpal tunnel syndrome, the arrival of voice-activated eLearning also promises to expand access. Among other benefits, there is hope that, moving forward, the eLearning space will increasingly prove able to meet the needs of auditory learners in both educational settings and training contexts. Some educators, including those who work with young people who have one or more speech pathologies, have also started to embrace voice-activated learning assistants as a way to improve students’ speech, since voice-activated assistants invariably require one to speak clearly. This, of course, is also where some educators and trainers are worried. Because voice-activated chatbots struggle to understand slurred speech and many accents, there are concerns that while opening up access for some, they may also close off access to others. For this reason, there is a move to ensure that as chatbots appear as teaching assistants in a growing number of eLearning environments, they offer both text and voice-activated options, at least during this period of transition.