Few Colleges Deploying Edtech Products Consult Scientific Research, Study Finds, But Other Factors Might Be More Important
By Henry Kronk
June 29, 2019
One of the central issues pertaining to technology and education today is that few people—if any—know whether a new edtech product or service will improve learning outcomes. It might also have unintended consequences. While many researchers have investigated the decision-making process behind edtech implementation at the K-12 level, few have done so in higher ed. Assistant Director and Senior Researcher Fiona Hollands and Research Assistant Maya Escueta, both of the Center for Benefit-Cost Studies in Education (CBCSE) at Columbia University, have a new article on the subject.
For “How research informs educational technology decision-making in higher education: the role of external versus internal research,” Hollands’ team interviewed 45 edtech ‘decision-makers’ from a wide variety of institutions around the U.S. To gather as broad a set of perspectives as possible, the team spoke with decision-makers from 2- and 4-year institutions organized as public, private non-profit, and for-profit.
How Decision-Makers Decide on Edtech: From “Reasonably Rational” to the “Garbage Can Model”
These decision-makers had previously implemented a range of edtech products and services in their institution. These included learning management systems (LMS), online course resources, analytics tools, adaptive courseware, and more.
They stated their primary research question as follows: “Do educational technology decision-makers in higher education use research to inform decisions about acquiring and using educational technology to facilitate teaching and learning and, if so, how?”
Before Hollands et al. examined what information institutions gather, they looked at their decision-making processes. The most rational processes began by identifying specific issues and then seeking solutions to them.
On the other end of the spectrum, some interviewees displayed the garbage can model of decision-making. The model was theorized by Michael D. Cohen, James G. March, and Johan P. Olsen in 1972 to describe how some organizations make decisions. It generally involves putting the cart before the horse. As Cohen et al. wrote, “Recent studies of universities, a familiar form of organized anarchy, suggest that such organizations can be viewed for some purposes as collections of choices looking for problems, issues and feelings looking for decision situations in which they might be aired, solutions looking for issues to which they might be an answer, and decision makers looking for work.”
In other words, the garbage can model involves taking a potential solution and treating it as a bin into which one can throw as many issues as possible. Cohen et al. believed that “organized anarchies” like universities exhibited this decision-making process all too often.
‘Research’ Is a Subjective Term
Every decision-maker Hollands et al. interviewed said they researched a range of potential edtech products before signing on the dotted line. For some, this constituted only a “background information-gathering process.” A majority (80%) also went through demonstrations of the products or piloted them.
Many also read case studies provided by the vendors, checked out user reviews, surveyed their own communities, and employed a variety of other techniques. When it came to applying “scientifically rigorous” information to their search, however, just 18% read “scholarly papers or journals about educational technology strategies” and only 16% conducted their own scientific comparison studies.
This tends to track with edtech implementation at the K-12 level, according to some research. A 2017 working group that surveyed 500 decision-makers in K-12 districts found that, “Only 11% said they would not buy or adopt a program if peer-reviewed research was absent.”
Although the phrasing of the questions differed, the working group also found that just 7% of respondents would turn down an edtech product if proper research were not available.
Among the interviewed decision-makers of the current study, the most common form of research involved “conducting student, staff, and faculty interviews, surveys, or focus groups about educational technology issues.” Forty percent of respondents used this technique.
Nearly a third of respondents also considered company-provided efficacy studies. As Hollands et al. write, “only one interviewee was able to provide a definition of efficacy that reflects the definition of efficacy research provided by the National Science Foundation and U.S. Department of Education’s Institute for Education Sciences (U.S. Department of Education, Institute for Education Sciences and National Science Foundation 2013), suggesting that the efficacy studies alluded to might not involve experimental or quasi-experimental methods.”
‘Our Student Body Is Unique; Outside Research Doesn’t Apply’
This picture of decision-making may not seem flattering. But many interviewees also offered several reasons why they didn’t make scientifically rigorous material a top priority.
There often isn’t much of it to begin with. And when a study relating to the product in question does exist, it might apply to a different context or grade level, or its sample size might be too small to support trustworthy conclusions.
Interviewees reported that, in some cases, research existed on the technology they were considering and they had no issue with it per se. Still, they didn’t refer to it or put stock in its findings, because they believed their student body was unique and the research therefore didn’t apply to their institution. The authors report that every person they interviewed believed this to some degree.
Others, furthermore, said that research might not be the most important factor in successful edtech implementation.
In the final section of their study, the authors acknowledge that they have implicitly assumed that solid research is a good—if not the best—means of determining how well an edtech product might work. They conclude: “Despite these critiques, we acknowledge that there is no rigorous evidence to show that educational technology decisions based on experimental or quasi-experimental research guarantee better teaching and learning outcomes than those based on less rigorous, internally-conducted research and pilot studies.”
They describe how numerous decision-makers they interviewed made a similar argument. Many also said that acceptance and “buy-in” among the community were a more important factor in a product’s success than research demonstrating its effectiveness.
Read the full study here. Access is open to the public.
Featured Image: Matthieu Joannon.