Are Interim Assessments Living Up to Their Billing? New Review Aims to Find Out

Staff Writer

Interim assessments have become a sought-after option in recent years as districts look for products that offer more immediate data on students' academic progress than year-end tests provide.

But are they delivering the high-quality, reliable information needed for districts to use these tests meaningfully? A new effort by organizations known for their independent reviews of education products and policy will look to answer that question.

Nonprofits EdReports and the National Center for the Improvement of Educational Assessment announced today they will collaborate to review commercially available interim assessment products.

The analysis will focus on judging testing providers’ products against the claims they make to school districts.

The goal is to empower local districts by giving them information that can help inform instructional decisions, said Eric Hirsch, executive director of EdReports, which provides free reviews of instructional materials.

“[Some districts have] talked about abandoning their aligned curriculum because they would get results from interim assessment that would say their kids were way off the mark,” Hirsch said. “It really raises questions about, how well are these assessments aligned to standards in the first place? Are they measuring what they’re [purporting] to?”

Interim assessments come in many varieties, but they’re typically given at various points during an academic year or during a specific course, with the goal of diagnosing a student’s current understanding and informing where a teacher should focus next. They differ from end-of-year summative exams, which generally aim to measure students’ mastery of skills and track whether they meet grade-level expectations.

Unlike state standardized exams, which undergo a federal peer review process, interim assessment products receive little external evaluation.

Pandemic Created New Needs

It’s especially important for districts to better understand how an interim test aligns with state standards and supports teachers as educators scramble to pinpoint and address the learning students missed during the pandemic, Hirsch said.

And it could also prove critical to have more data on these types of tests as some suggest replacing end-of-year summative exams — which currently serve as the basis of the country’s accountability system — with these more frequent exams.

NWEA, a nonprofit provider well-known for its computer-adaptive formative test, MAP, announced in 2019 that it was working to develop an interim product that aimed to replace end-of-year testing with three tests taken in the fall, winter, and spring.


“This review’s focus on quality is so important,” said Laura Slover, CEO of CenterPoint Education Solutions, a nonprofit that supports schools, districts, and states in designing and implementing aligned assessments.

“So much attention has been focused on quantity — how often, how many items,” she added. This review could instead “help districts make sure they are selecting tests that provide valid, relevant data to inform instruction.”

The reviews won’t look at whether products could serve the same function as summative tests and support a new accountability system, Hirsch said. But they will assess whether interim tests are doing a good job of providing the services they claim to offer.

For example, the designer of an interim test could claim that it’s predictive, providing data on whether students are on track to perform well in the future. Or, it could be marketed as providing descriptive results, showing students’ academic standing, with the aim of helping teachers make content decisions.

In many cases, “apples-to-apples comparisons” of different interim tests aren’t appropriate, said Erika Landl, a senior associate at the Center for Assessment.

“You really have to understand what they were designed to do, and evaluate those different tools relative to their intended [use].”

The reviews will assess interim tests in three areas: how well the assessments align to college- and career-ready state standards and meet expectations for fairness and accessibility; their technical quality, judged by the types of information they provide on student performance and how that information is intended to be used; and the clarity and utility of their score reports and supporting resources.

An Independent Review

Having a third-party review will be helpful, said Peter Leonard, executive director of student assessment and multi-tiered systems of support for the Chicago Public Schools, especially because the onus is normally on school districts to choose or develop their own curriculum and assessments.

Until now, most of the research he and other district administrators have relied on when choosing interim testing products has come from the providers themselves, Leonard said.

The most important thing to come out of the review will be telling districts “what purposes these assessments can serve,” Leonard said. “Because that then informs how we use it as a district.”

“One thing we’ve seen with interim assessments is that districts, schools, and educators, they tend to use them for purposes beyond their design,” he said.

EdReports and the Center for Assessment are slated to release reviews for two testing products in early 2023, starting with Curriculum Associates’ i-Ready exam and Smarter Balanced’s Interim Assessment Blocks.

Participation in the reviews is voluntary. The organizations said they contacted “several” assessment providers and these two publishers agreed to be reviewed.

Follow EdWeek Market Brief on Twitter @EdMarketBrief or connect with us on LinkedIn.
