Does this ed-tech product work?
That question is at the core of many ed-tech implementations in school districts. Nearly 150 school leaders learned Wednesday about a free tool developed so they can conduct the research to answer it themselves. The leaders were gathered here for an “executive summit” at the Future of Education Technology Conference, and this was one of the sessions many attended.
“There’s no common definition of what it means [for a product] to ‘work,’” said Fatima Jibril, senior Education Pioneer fellow at the U.S. Department of Education’s Office of Educational Technology. “For some, it means usage data…but is [the ed-tech] really moving the needle? That’s what we’re seeking to help schools and school districts understand.”
The goal of the free Ed Tech RCE Coach is to help schools conduct their own evaluations to get actionable results, Jibril said. “RCE” stands for rapid-cycle evaluation, and the “coach” is actually an online tool designed to walk districts and schools through the process of evaluating the educational technology used in classrooms.
The website guides educators through steps to identify the question they want to answer with their test, then shows them how to set up and run the ed-tech evaluation and analyze the results to reach a conclusion.
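The analysis step at the end of that process can be illustrated with a minimal sketch. The data, function name, and normal-approximation interval below are illustrative assumptions, not the RCE Coach’s actual implementation, which runs these computations online:

```python
from statistics import NormalDist, mean, stdev

def mean_difference_ci(treated, comparison, confidence=0.95):
    """Estimate a pilot's effect as the difference in mean scores,
    with a normal-approximation confidence interval."""
    diff = mean(treated) - mean(comparison)
    se = (stdev(treated) ** 2 / len(treated)
          + stdev(comparison) ** 2 / len(comparison)) ** 0.5
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    return diff, (diff - z * se, diff + z * se)

# Hypothetical assessment scores from a pilot group and a comparison group
pilot_scores = [72, 85, 78, 90, 66, 81, 77, 88]
comparison_scores = [70, 74, 69, 82, 65, 73, 71, 76]
effect, (low, high) = mean_difference_ci(pilot_scores, comparison_scores)
print(f"Estimated effect: {effect:.1f} points (95% CI: {low:.1f} to {high:.1f})")
```

A confidence interval that excludes zero is the kind of “actionable result” the tool aims for; an interval straddling zero suggests the evidence is not yet conclusive.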
Questions Districts Ask to Evaluate Products
Using the rapid-cycle evaluations, schools can examine educational technology they already use, or run a pilot with a product they are considering.
Among the questions districts have used the tool to explore are:
- Does use of eSpark in middle school ELA courses lead to improved iReady assessment scores?
- How do students who used eSpark do, compared to students who used Lexia?
- Does Dreambox Online Math increase fall NWEA math scores among Title I elementary summer program participants compared to students from the same schools and grade levels who did not use Dreambox?
- Does a communication strategy aimed at parents increase home use of First-in-Math?
After they run their own evaluations, districts have the option to publish their results anonymously on the website. They do not need to share their findings, however.
“People are apprehensive about being the first to post their results, but hopefully we’ll get to that point,” said Alex Resch, the associate director of human services research at Mathematica Policy Research, which developed the process for the U.S. education department.
Lessons Learned from Early Adopters
In the rapid-cycle evaluation tool’s first year of testing, the team that rolled it out learned some important lessons, said Rebecca Griffiths, a senior researcher at the Center for Technology in Learning at SRI International, which has also worked on the project.
“RCE Coach needs to fit with current practice, address real needs, and fit with the way that districts think about evaluation,” she said.
In large districts where there is a centralized unit that does evaluations—but where that group is asked to do more tests than it can manage—the RCE Coach can be used for capacity-building, she said. “They could see themselves running with RCE Coach and teaching others how to use it,” she said.
While randomized trials are the most reliable way to learn about a product’s effectiveness, the researchers said, randomization means some students in a control group would not have access to the same ed-tech solution, “and schools had reservations about that,” said Griffiths.
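The mechanics of random assignment are simple to sketch; the objection schools raised is not the procedure itself but that students landing in the control group go without the tool. The roster and function name below are hypothetical:

```python
import random

def randomize(students, seed=None):
    """Randomly split a roster into treatment and control groups of
    (near-)equal size -- the design behind a randomized trial."""
    rng = random.Random(seed)
    shuffled = students[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# Hypothetical roster of 30 students, split 15 and 15
roster = [f"student_{i}" for i in range(30)]
treatment, control = randomize(roster, seed=42)
print(len(treatment), len(control))
```

Because chance, not teacher or student choice, decides who uses the product, any later difference in outcomes between the two groups can be attributed to the product with much more confidence than in a self-selected pilot.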
School districts have expressed “lots of interest in using the RCE Coach in evaluating the impact of devices,” to figure out whether their 1-to-1 implementation is working, she said. “This is a tough one for us.”
It’s not simply a matter of whether the devices work; reaching a valid conclusion would require looking at the role of the devices, the professional development provided, the type of software involved, and how it is implemented, Griffiths explained.
“It’s what you do with it that’s going to help students learn. That’s where we’d encourage” schools to focus, she said.
It’s important to decide what outcomes a school wants to focus on. One principal, for instance, wanted to see if the evaluation could tell her whether an ed-tech product was increasing students’ engagement and excitement about what they were learning.
“You can decide what outcomes are important to you, and generate evidence that is really relevant to your district,” said Griffiths, who noted that it’s critical for schools to have a champion to follow through on the use of the rapid-cycle evaluation tool.
Monitoring Usage as a Metric
“We believe there’s great potential to the use of usage data,” Griffiths said. It can help schools understand how technology is being used, and the difference in its impact between high-volume users and low-volume users, for instance.
“However, there’s a wide variety of practices in place for how those data are captured by developers, how they’re reported, and how useful they are,” she said. “We see this as an emergent set of practices.”
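One way to use such data, once a developer exports it, is to split students at the median of logged usage and compare outcomes between heavier and lighter users. The records and function name below are hypothetical, and the comparison is descriptive rather than causal, since high-usage students may differ in other ways:

```python
from statistics import mean, median

def compare_by_usage(records):
    """Split students at the median of logged usage minutes and compare
    mean outcomes between high- and low-usage groups."""
    cut = median(minutes for minutes, _ in records)
    high = [score for minutes, score in records if minutes >= cut]
    low = [score for minutes, score in records if minutes < cut]
    return mean(high), mean(low)

# Hypothetical (usage_minutes, assessment_score) pairs from a vendor export
records = [(120, 82), (45, 70), (200, 88), (30, 68), (90, 75), (150, 84)]
high_avg, low_avg = compare_by_usage(records)
print(f"High-usage mean: {high_avg:.1f}, low-usage mean: {low_avg:.1f}")
```

A gap between the two groups is a prompt for further investigation, not proof the product works; that caution matches the researchers’ description of usage analysis as an emergent set of practices.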
The RCE Coach team produced templates for the collection of usage data, and created a guide for how schools can start conversations with ed-tech developers about usage data. They even produced the content for an email to help facilitate discussing the subject with developers.