Evaluating the Assessments

Forget whether or not students should be able to opt out of high-stakes testing – are the tests actually effective in measuring student achievement? A new report from the Fordham Institute examines four high-stakes assessments – ACT Aspire, PARCC, Smarter Balanced, and the Massachusetts state exam (MCAS) – against the Council of Chief State School Officers' (CCSSO) Criteria for Procuring and Evaluating High-Quality Assessments. Researchers found that, overall, PARCC and Smarter Balanced aligned most closely with the CCSSO criteria.

Assessments were judged on three specific areas: 

  • Content: Do the assessments place strong emphasis on the most important content for college and career readiness (CCR), as called for by the Common Core State Standards and other CCR standards?
  • Depth: Do they require all students to demonstrate the range of thinking skills, including higher-order skills, called for by those standards?
  • Overall Strengths and Weaknesses: What are the overall strengths and weaknesses of each assessment relative to the examined criteria for ELA/Literacy and mathematics?

PARCC and Smarter Balanced earned ratings of an excellent or good match across all areas for ELA/Literacy and mathematics. ACT Aspire and MCAS both did well on the quality of their items and the depth of knowledge they assessed, but the researchers found they did not adequately assess—or may not assess at all—some of the priority content reflected in the Common Core.

“Going forward, the new tests—and states deploying them—would benefit from additional analyses. For instance, researchers need to carefully investigate the tests’ validity and reliability evidence…We need more evidence about the quality of these new tests, whether focused on their content (as in our study) or their technical properties,” writes one of the report’s authors, Morgan Polikoff, assistant professor of education at the University of Southern California's Rossier School of Education. “It is my hope that, over time, the market for state tests will reward the programs that have done the best job of aligning with the new standards. Our study provides one piece of evidence to help states make those important decisions.”

Read Evaluating the Content and Quality of Next Generation Assessments, by Nancy Doorey and Morgan Polikoff, Thomas B. Fordham Institute (February 2016)

At the AAP PreK-12 Learning Group’s upcoming Content in Context conference, key sessions will address the brave new world of assessments and what’s next for states after ESSA.
