As the school year wraps up, children across California are sitting down for year-end standardized testing. The Golden State is one of 11 states that fully participate in the Smarter Balanced assessment consortium, established in 2009 to create higher-quality, comparable tests across the country. As part of our deeper learning grantmaking, the Hewlett Foundation has invested in improving the quality of student assessment at all levels, from formative measures in the classroom to statewide summative testing – including Smarter Balanced and PARCC, another state-led consortium.

While many states have made headway in measuring the knowledge and skills that matter, through tests that are more accessible to different learners and that provide better feedback to educators, other states have opted out of PARCC and Smarter Balanced. In this interview, Education Program Officer Chris Shearer discusses a detailed analysis of the current state of state assessments, completed by consulting group Education First.

What have we learned about the state of state assessments in this report?

In a nutshell, we have seen a shift from nearly every state in the country belonging to PARCC or Smarter Balanced at the outset to fewer than half today. While 40 percent of states are still using consortia test items, test quality does not appear to be a prominent enough factor in many states as they begin to contract separately for test creation. Indeed, federal peer reviews have found that fewer than half of these non-consortia states meet the criteria for a high-quality test. Finally, taking advantage of new flexibility under the Every Student Succeeds Act, an increasing number of states are relying on the SAT and ACT for testing in high school.

What is the implication of states dropping out of assessment consortia?

That’s the $64,000 question. Testing has become a hot-button issue, and the report shows that Smarter Balanced has dropped from 32 participating states in 2010 to 13 last year, while PARCC has declined from its initial 27 states to just seven. Can these test consortia survive?

However, the work that states did together has “moved the market,” regardless of whether they decide to continue to collectively govern state test design and financing in the future. States led the way, commissioning much-improved test questions, which have fundamentally upgraded what vendors can provide and what local agencies now expect in terms of quality. It is now much easier to ascertain what students know and are able to do. And states can compare their student results with one another much more easily. Survival of the consortia as “brands” may not be as important as the resulting nationwide jump in quality and sophistication they made possible.

What is happening, or could happen, with those states that are still in a consortium?

Testing has become a very hot-button topic since SBAC and PARCC first rolled out. I think states that can weather the testing storms, i.e., the backlash against the consortia, will still be well ahead of their peers if they stay together. Yes, test publishers can now offer many of these same test questions to any state “a la carte,” allowing a state to go it alone and dodge some controversy. However, by remaining banded together, states will reap tangible cost benefits, retain the collective capacity to keep improving test quality, and save time when making such changes. Some vanguard states – including a group on the West Coast, anchored by California’s massive student population – are acting on this argument.

The other big thing to note is the collective state attention to accessibility and testing accommodations for students with diverse learning styles. These next-generation tests would be hard for any one state to create alone; the consortia make it possible for many states to pool resources and expertise to build them together, so that they can reach the widest variety of students.

What kind of support will states that “go it alone” need to make their assessments successful?

These states would benefit from embedding some of the proven consortia items into their new assessments for purposes of comparability – and some already are. They will also want to commission an independent third-party review of their assessment to determine quality and alignment to their state education standards. Fortunately, there are a number of expert nonprofits to help states out with both item selection and external quality reviews.

How are grantmakers thinking about these results?

Several foundations that have long supported states and nonprofits in improving the quality of student assessments will be watching how the market settles out. Education First’s report tells us that many states will soon be signing new contracts with vendors, and the vast majority will see high churn among their political leadership – upcoming gubernatorial, legislative, and school board elections abound.

In my opinion, the highest-leverage, near-term opportunity for foundations is to ensure that there is both strong capacity and demand for quality test questions and for independent reviews of state assessment quality – and for supportive nonprofits to help states make good use of their test results for improvement. Grantmakers also need to keep a strong focus on transparency regarding achievement gaps and continue to foster testing innovation.