Classroom products that have undergone peer-reviewed research have cleared a lofty bar for proving their merits. But that doesn’t mean that K-12 officials will be impressed with the evidence at hand.
A recently released survey, in fact, has found that just 11 percent of district administrators and teachers said they would flatly reject buying or adopting an ed-tech product if it lacked peer-reviewed research behind it.
The findings were part of a project led by a working group of researchers, school officials and others studying the uses of evidence in educational technology. The working group emerged from a symposium staged earlier this year by Jefferson Education Accelerator, a commercial project that pairs education companies with school districts and independent researchers; and Digital Promise, a nonprofit that tries to promote the effective use of research and technology in schools.
That top-line finding does not mean school district officials are inclined to ignore peer-reviewed research outright—but for many, it was clearly not their top priority, said Michael J. Kennedy, an associate professor of special education at the University of Virginia, who led the project.
Forty-one percent of respondents said they give “strong consideration” to whether an ed-tech product is backed by peer-reviewed research, and another 41 percent said that research meeting that standard is something they consider, but it’s not essential.
The remaining 7 percent of those surveyed said they will buy or adopt products without strong research behind them.
Other factors appeared to matter much more to K-12 officials weighing an ed-tech product than whether it had been peer-reviewed.
For instance, 38 percent of survey respondents said the cost of the digital product is extremely important to them, and another 19 percent said it was very important. In addition, 38 percent said the extent to which the ed-tech tool meshes within existing district initiatives or products is extremely important, and 27 percent said it is very important.
The survey is not nationally representative, but it had a fairly broad reach. There were 515 respondents, and they came from 17 different states. Twenty-four percent were district tech supervisors; 22 percent, assistant superintendents; 7 percent, superintendents; 27 percent, teachers; and 10 percent, principals. They came from a mix of urban (31 percent), suburban (26 percent), and rural (23 percent) schools, as well as schools with a mixed makeup (20 percent).
The survey was conducted via an online platform, through a link disseminated through social media, discussion boards, and members of the working group.
Companies Managing Risks
It would be tempting for academic scholars and others to “wag a finger” at school districts for not paying more attention to peer-reviewed research, Kennedy said in an interview. But doing so ignores the many competing pressures that K-12 educators and administrators have to weigh in making ed-tech purchases, including fighting to keep costs low and figuring out if products meet specific classroom needs, said Kennedy, a former elementary and special education teacher.
Kennedy believes the survey results show that many district officials are interested in the evidence backing up products, but they aren’t sure how it can be applied to day-to-day school needs.
“There’s a disconnect between what researchers think is high-quality research and what school districts think,” he said. Despite school officials’ interest in weighing evidence, for many, their attitude is, “when push comes to shove, I’m buying what I’m going to buy,” said Kennedy.
District officials and teachers want to know how the ed-tech product will help their school- and classroom-specific needs, or, “what does the product do, and how does it do it?” Kennedy said. That may mean they want to know if an ed-tech product can mesh with interoperability standards, and how much useful data it can churn out.
Kennedy also pointed to a number of familiar obstacles that get in the way of applying academic research on products—particularly ed-tech tools—to classroom settings.
For one, the process of conducting research, going through peer review, and publishing the results can take years, by which time the landscape of ed tech may have changed. Another barrier: Peer-reviewed research may apply to one ed-tech product in one setting, but that doesn’t mean it’s relevant to the needs of teachers with different classroom populations and circumstances.
Kennedy also believes the survey results underscore the risks that ed-tech companies, particularly startups, face in arranging independent research on their own products.
Companies that subject their products to exhaustive research that yields positive results have a potentially powerful asset they can use to sell their products, he said. But if the research produces lackluster results, it can be a huge blow to companies’ work and their reputations. If school officials don’t place a high value on research to begin with, Kennedy asked, why would entrepreneurs take that risk?
For that reason, Kennedy believes that the greatest driver for ed-tech entrepreneurs to subject their products to rigorous review will need to come from the K-12 community.
“If school officials put their foot down,” he said, “that’s the only stick that will push tech developers to seek some evidence for their products.”