In a recent op-ed about online proctoring, ProctorU CEO Scott McFarland made some concerning claims, arguing that proctoring online exams is “essential” and “indispensable.” Many were quick to voice their skepticism about the head of a proctoring company making such a claim.
So I need to dig deeper into this article from @elearninginside, but several glaring problems on the surface. First of which is letting a proctoring company president write an article like this in the first place. Sigh…. https://t.co/mQaXaqo0pf
— Rage Against the AI Cyberocracy (@grandeped) March 25, 2021
One important detail that McFarland left out is that the exams and tests themselves are not essential. Not only that, he skipped over some of the biggest concerns with proctoring and did not accurately represent the research happening in this area.
So how can I make such a brash counterclaim that tests, quizzes, and exams are not essential? McFarland argues that proctoring is not optional if “the goal of an exam or test or assignment is to measure learning or skill mastery.”
This is where the big reveal comes: quizzes, tests, exams, assignments – none of those can measure learning or skill mastery. Not directly, at least.
Assessments, and the Proctoring Systems That Monitor Them, Do Not Measure Mastery
The core problem here is that we really don’t know exactly how the brain learns information or skills. And even for what we do know, we have no way to directly observe learning as it happens in the brain. That would be painful and dangerous. So we have to rely on something external to the brain to serve as evidence that learning happened.
Snake oil is both nutritious and delicious, says snake oil salesman. Thanks to @grandeped for this thread. https://t.co/vOD2WBGKJ0
— Charles Logan (@charleswlogan) March 25, 2021
All forms of tests and assignments are designed to serve as this evidence, acting as a proxy for direct observation. The idea is that if you really learned something, you can take that knowledge and answer questions about it, or describe it, or build a project with it, or something along those lines. There is a wide range of assignment options that work well as proxies, but exams and tests are usually questionable at best. This is especially true when they rely on one of the most popular formats: the multiple-choice question.
Multiple-choice questions are a very low-level assessment option because the answer is always right there in front of students. Learners might know the right answer, or they might be really good at guessing the right answer, or the instructor might ask a question that actually relates to something they learned elsewhere rather than in class. Not surprisingly, some students learn how to game the questions to figure out which option is correct. I used to do this all the time in school. I would pass tests I never studied for just because I could often recognize the patterns teachers used to come up with the “wrong” option choices. (People tend to lie differently than they tell the truth, in my experience.) At best, tests can measure how well we take the test, but there is no guarantee they measure what we actually learned.
Academic integrity is a mirage at best, a prayer lifted up to the assessment gods asking that, somehow, all the questions actually connect with what was learned, and that all of the students didn’t just get lucky in guessing or gaming one way or the other.
Tests Do Not Replicate How Students Apply Knowledge in the Real World
Even if that prayer is answered, there is an even bigger problem with tests: they are not realistic. How many of us sit around answering test questions (of any kind) all day long for our jobs? If you look to the areas of universal design for learning and authentic assessment, you can find better ways to assess learning. This involves creating real-world assignments that match what students will see on the job or in life.
I have been teaching online at a public university for over 10 years, and I never give tests, quizzes, or exams. Most of the program I teach in was designed without tests, undergraduate and graduate courses alike. Obviously, every subject is different and not all classes can be like mine. But the main activities in my courses revolve around projects that have students creating artifacts closely matching what they would do in a real-life position in our field. Often students work through four stages for each project, spanning the entire semester. Weekly discussion questions (based on opinion and application) keep students interacting along the way.
No tests, no papers, no proctoring, no plagiarism detectors … just students applying what they (hopefully) learn.
https://twitter.com/hypervisible/status/1379501433763078145
And since I work with students weeks and weeks before they turn in projects, due dates become guideposts and grades become moot: students can keep getting feedback until they know they will earn the score they want.
Isn’t this kind of the way real jobs work as well: interacting as we learn, rather than facing punitive setbacks at the first mistake?
At least that is how good jobs should work.
This Is Hardly the Only Issue with Proctoring
No one would deny that cheating happens, nor is anyone saying we should not be concerned about it. There is, however, a growing list of concerns that McFarland did not mention. To that end, there are plenty of people you can read and follow if you want to learn about the bias and privacy issues with proctoring: Safiya Noble, Chris Gilliard, Audrey Watters, and many others.
However, I know that most instructors have no control over whether their institution has a contract with a proctoring company. So I am focusing here on individual responses. But please keep in mind that some institutions have taken a close look at this issue and decided to discourage the use of proctoring tools while encouraging authentic assessment. That can happen.
“But wait!” one might say, “McFarland gave a LOT of evidence that cheating is happening. Like, A LOT of cheating. We have to catch those cheaters, right? Won’t they just cheat on authentic assessment as well?”
As someone who has worked in online education for decades as an instructional designer, faculty member, and researcher, I have followed the research and conversation on cheating for quite a while. There really isn’t much consensus on how much cheating is going on. McFarland claims that 50% of students cheat and that less than 1% of cheating is reported. Does that mean 5,000% of students are cheating? McFarland also cites a recent 800% increase in “confirmed violations of academic integrity.”
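To spell out the arithmetic behind that rhetorical question (a back-of-the-envelope reading that simply takes both of McFarland’s figures at face value, not a calculation from the op-ed itself): if the 50% of students known to cheat reflects only the less than 1% of cheating that actually gets reported, then scaling up would imply

\[
\frac{50\%}{1\%} = 5{,}000\%
\]

of students cheating, which is obviously impossible. The two figures cannot both describe the same population in the way the op-ed stacks them together.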
In fact, the research is all over the place. You will see numbers anywhere between 2% and 95%. As one research paper puts it: “the precipitating factors of academic misconduct vary across the literature … The research of academic integrity is often unsystematic and the reports are confusing.”
Then there is McFarland’s claim that “a separate, peer-reviewed research paper published in May of 2020 in the Journal of the National College Testing Association also confirmed the link between online classes and dishonesty.” But that is not what the paper said at all. That paper looks at the differences between proctored and unproctored exams, and it makes a lot of claims about how online learning has the potential for more dishonesty. But it does not confirm a link between dishonesty and online courses, because it was not designed to look for one.
Research on Student Cheating Is All Over the Place
McFarland also mentions the Dendir and Maxwell paper on cheating, which is an important study. But please keep in mind that all of the issues Dendir and Maxwell quickly dismiss in Section 5 (“Caveats and Limitations”) are actually huge factors that could significantly affect their numbers. I always caution people to be careful with that paper because Dendir and Maxwell did not address the limitations well enough. I pointed out several problems with multiple-choice questions above that they did not explore, and their point about students being comfortable with the technology hearkens back to the “digital natives” myth that was never that accurate. Not to mention students themselves are speaking out about the intense and disruptive discomfort they experience with online proctoring.
The main reason for all of this confusion about the research is that there is little consistency in the time frame, the definition of what counts as “cheating,” and how the frequency of cheating is measured. Sometimes students are asked to report one semester’s worth of cheating. In other cases, the time frame covers their entire time at college. When asked what forms of cheating occur, researchers often don’t ask if the behavior was considered cheating at the time (some faculty allow open-book exams, asking others for help, and other activities that are often counted as “cheating” in the literature). Then there is little research into the actual frequency. Did a student “cheat” on 2 out of 10 assignments one semester (a 20% rate), or on 5 out of 50 in the same semester (a 10% rate)? There is a big difference, yet both students would count the same in a “how many students cheated” statistic.
The research is usually aimed at seeing how many students cheated, not finding out the likelihood that they will cheat on your specific test this Friday.
The reason for this is that those numbers probably wouldn’t be as scary as “5,000% of students cheat!” This is important because the real concerns most critics have with proctoring technology are the problems with racism, ableism, and privacy violations that students have reported. If you think that most of the students in your course would probably cheat, you kind of shrug at those problems and say “well, I have to do something.” But if someone were to tell you there was a less-than-5% chance any given student would cheat on your specific exam, then suddenly, the problems you subject students to do not seem worth it.
Or, at least, they shouldn’t.
Matt Crosslin, Ph.D., is currently an instructional designer in the online course development industry, as well as part-time faculty at a public university in Texas. He is also the lead author of the book Creating Online Learning Experiences: A Brief Guide to Online Courses, from Small and Private to Massive and Open. He has been involved in online education for over 20 years. He also blogs occasionally at EduGeek Journal, watches or reads a lot of SciFi and Fantasy, and sometimes paints or draws something.
Featured Image: Jeswin Thomas, Unsplash.