Researchers Designed an Automated Digital Feedback System that Improved Students’ Grades

By Henry Kronk
July 06, 2019

Whether it’s teaching, testing, assessing, or any number of other aspects of education, many researchers are currently investigating how digital tools might scale proven educational practices for larger student bodies. A group of researchers from the University of Alberta has recently developed and tested an automated digital feedback system for tests. They published their results in Frontiers on June 28.

It has been well established that students benefit from feedback on their tests. On a basic level, it gives learners a target to shoot for on the next assessment or on the final exam. Providing it is also a beneficial exercise for instructors themselves.

Feedback Is Known to Improve Learning, but It’s Difficult to Provide at Scale

When dealing with a class of hundreds of learners, however, or maybe even thousands in a MOOC, it would take a small army of teaching assistants to provide adequate and timely feedback for every learner.

The team of researchers from the Faculty of Education and the Department of Educational Psychology at the University of Alberta, led by Associate Professor Okan Bulut, recognized this reality. As a result, they decided that the best way to create a digital feedback tool was to automate it.

The team had the benefit of working with an established education course at UA. In all, 776 “pre-service teachers” participated in their study.

The researchers set about designing their automated digital feedback tool, which they called ExamVis. To do this, they helped instructors establish a blueprint of the course’s two midterm exams.

For each midterm subsection, they asked the instructor to identify key concepts and provide prewritten feedback based on varying levels of success on the exam. Students who scored above a certain level received only positive feedback.
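
The paper describes this banded, prewritten feedback, but the article does not reproduce ExamVis’s code. As a rough sketch of the logic, the hypothetical Python below maps each subsection score to the message for the highest band it reaches; all thresholds, subsection names, and feedback strings are illustrative assumptions, not taken from the study.

```python
# Hypothetical sketch of banded, prewritten feedback per exam subsection.
# Thresholds, subsection names, and messages are illustrative placeholders.

FEEDBACK_BANK = {
    "classroom_assessment": {
        0.85: "Excellent grasp of classroom assessment concepts.",
        0.60: "Solid foundation; review how to design formative assessments.",
        0.00: "Revisit the unit on reliability and validity of classroom tests.",
    },
}

def feedback_for(subsection: str, score: float) -> str:
    """Return the prewritten message for the highest band the score reaches."""
    bands = FEEDBACK_BANK[subsection]
    for threshold in sorted(bands, reverse=True):
        if score >= threshold:
            return bands[threshold]
    return bands[0.0]

print(feedback_for("classroom_assessment", 0.90))  # top band: positive feedback only
```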

ExamVis, a Digital Feedback Tool

The team then designed two styles of digital feedback report: one short and one long. Both reports broke a student’s score down into numerical figures and visual representations and provided prewritten teacher feedback; the long report went into much greater detail.
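
Continuing the same hypothetical sketch (and reusing feedback_for from above), the short and long reports might differ only in whether the prewritten feedback is appended to the per-subsection scores. Every name here is an assumption for illustration; the real ExamVis reports also included visual breakdowns, which this text-only version omits.

```python
def build_report(scores: dict[str, float], extended: bool = False) -> str:
    """Assemble a plain-text score report; the extended version adds feedback.

    Real ExamVis reports also carried visual score breakdowns; this
    simplified sketch produces text only.
    """
    lines = [f"{name}: {score:.0%}" for name, score in scores.items()]
    if extended:
        lines += [feedback_for(name, score) for name, score in scores.items()]
    return "\n".join(lines)

print(build_report({"classroom_assessment": 0.72}, extended=True))
```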

To deliver the feedback, the system required that students take the exam on a university computer. When it came time for midterms, students were allowed to book an exam time on a school computer at their convenience.

The research team ran two studies in total. In the first, students were presented with a short report immediately upon completing the exam and then had the option to view the longer one once every student had finished. (This was done so that early test-takers couldn’t share their results with their peers.) The second study provided students only with a long report, sent to their email after completion.

The researchers then separated students into test and control groups based on which reports they chose to view and assessed their final exam scores against their midterm results.

The Results

As the authors write, “Overall, the results of Study 2 indicated that students who were provided with extended score reports with personalized feedback performed better than those who had access to both short and extended score reports.”

This was found by applying a hierarchical linear regression model to the results. Comparing the academic achievement of the two test groups, the researchers determined that the longer digital feedback accounted for 68% of the improvement. The researchers did not, however, say how much better the second cohort performed compared to the first.
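
“Hierarchical linear regression” typically means entering predictor blocks in stages and examining the change in explained variance. The article does not reproduce the authors’ model specification, so the sketch below, with invented data and variable names, only illustrates that general approach using statsmodels.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Invented illustrative data: one row per student.
# long_report = 1 means the student received the extended report.
df = pd.DataFrame({
    "final_score":   [78, 85, 91, 70, 88, 92, 75, 83],
    "midterm_score": [72, 80, 86, 65, 84, 90, 70, 78],
    "long_report":   [0, 1, 1, 0, 1, 1, 0, 1],
})

# Step 1: baseline model with prior achievement only.
base = smf.ols("final_score ~ midterm_score", data=df).fit()
# Step 2: add the feedback condition and compare explained variance.
full = smf.ols("final_score ~ midterm_score + long_report", data=df).fit()

print(f"R^2 without feedback condition: {base.rsquared:.3f}")
print(f"R^2 with feedback condition:    {full.rsquared:.3f}")
```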

The authors conclude: “Our findings from both studies revealed that the score reports helped the students perform better on subsequent exams. However, a long-standing challenge in the feedback literature still remains. Even well-crafted feedback that is tailored to students’ strengths and weaknesses, elaborates on deficient areas, and is administered via a computer to minimize possible harmful effects on students’ self-esteem, does not translate in immediate adoption, processing, or feedback-seeking.”

Read the full report here.

Featured Image: Wikimedia Commons.