By Lisa Berdie
When I talk about the use of data in education (and specifically school accountability), I tend to get one of two reactions: either it's seen as a pet project, at best peripheral to, and at worst a distraction from, school improvement strategies; or I'm seen as a cog in an (evil) technocratic takeover of public education. And when I start to talk about how district policies and technical changes are undermining Colorado's school accountability system, people think I'm mostly being melodramatic. But I'm convinced I'm not making a big deal out of nothing. I'm convinced we should be talking more about data and its role in school accountability systems and improvement strategies (do I sound like a broken record yet?).
Too often these issues are hidden in seemingly benign district practices and technical decisions by policymakers. For example, Chalkbeat recently released an article: "How Colorado districts comply with state law speaks volumes about their views on testing." The Chalkbeat article could have easily been titled "Districts facilitate opt-out," or, from my perspective, "Districts prevent public accountability for student outcomes." This might sound like hyperbole, but I think it's time we talk about how the collection and use of data can compel us either to address or to ignore challenges and opportunities for Colorado students.
First, consider how districts can compromise our use of data and the critical conversations we should be having about school performance. In the article above, Chalkbeat focused on how some district practices, in Boulder specifically, discouraged high school students from taking state standardized tests: for the second year in a row, Boulder high schools continued instruction for ninth-grade students who opted out of PARCC assessments, effectively penalizing students who took the state assessments because they had to catch up on the content later (1).
Perhaps one of the few things most educators can agree on as a clear benefit of No Child Left Behind (NCLB) was the extent to which its standardized assessment requirements highlighted massive opportunity gaps faced by students of color and students living in poverty. While reasonable people can disagree on how to address these gaps, it is impossible to ignore that the data exist.
And we learn a lot from this data. But if the data is compromised when students opt out, we miss the opportunity for a critical conversation. Let’s look at Boulder as an example:
The Boulder stereotype: lefty, affluent, white. Families that public education generally serves well, with the financial capital to provide additional resources and opportunities in and out of school, and the political capital to advocate for opportunities. But the city and district are not as homogeneous as some would believe. State standardized test results show that even in idyllic Boulder, there are communities the public education system simply underserves. And I emphasize: without state testing data, we're at a loss to understand who exactly Boulder's education system underserves and the work that needs to be done to address the gaps.
Here's a conversation we miss having without data. Denver consistently (and rightfully) gets called out for having one of the largest achievement gaps in the country (2). The gap in high school math between non-FRL white students and FRL students of color in Denver in 2014? 50 percentage points. Boulder isn't far behind, with a 49 percentage point difference between the same two groups of students.
In the recent past, all student groups have shown greater growth in reading and writing in Denver than in Boulder. This means that similar students (i.e., students who score similarly on state standardized tests) have been making more progress (scoring better the next year than students with similar past scores) in Denver than in Boulder, across all student groups.
The most recent data here are now two years old. There are some points of comparison we can draw between TCAP and PARCC, mainly relative percentile analyses (3). Yet, given the extremely low participation rates in Boulder at the high school level (and therefore non-representative samples), these comparisons are rather pointless. This means that our ability to talk about equity, about academic achievement, and, most importantly, about the practices that underlie the data is seriously compromised.
It's not only at the parent or district level that decisions risk silencing vital conversations about equity and student achievement, conversations that are, at their core, about our kids' civil rights. The Colorado Department of Education is proposing new District and School Performance Frameworks that use a "combined subgroup" aggregating the performance results of English learners, students of color, students with disabilities, and students eligible for free/reduced-price lunch. Each of these groups is currently accounted for separately in the Colorado frameworks, which allows districts, schools, researchers, and advocacy groups to talk about inequity in a precise, action-oriented way. Combining these subgroups would fill conversations about equity, student achievement, and improvement strategies with generalities, dismissing the unique strengths and opportunities for growth of each individual group.
Ahead of State Board action on this issue slated for tomorrow's board meeting, A+ and 21 other organizations have been working to communicate the incredibly problematic nature of this proposal. It blurs our ability to understand which systems have effectively targeted instruction to address the needs of different student groups; instructional strategies are different for English Language Learners and for students on IEPs, for example. It communicates low expectations for students by suggesting that students who fall into multiple subgroups will disadvantage schools and districts because of expected low scores. And it limits taxpayer accountability, given that each of these student groups receives funding, regardless of whether students fall into multiple groups.
According to the Accountability Work Group that CDE convened to rework the accountability system, "the purpose of Colorado's state school and district accountability system is to provide valid and actionable information to enable districts, schools and stakeholders to evaluate and improve the effectiveness of their programs, with the goal that all students, regardless of background or learning need, meet state expectations for academic achievement and growth and are prepared for postsecondary and workforce success" (emphasis mine).
Creating a “combined subgroup” seems to undermine this stated purpose of accountability. At every level of aggregation we lose insight into what is actually going on with students, so rather than being valid and actionable, a combined subgroup seems to blur what the data means. It makes evaluation and problem identification more difficult. It assumes all students of color, students with disabilities, and students living in poverty have the same assets and struggles.
This move on the accountability side of the department goes hand in hand with the lack of publicly available PARCC data (see previous posts from A+ here and here). Boulder's practice of effectively encouraging high school students to opt out of state testing and CDE's combined subgroup proposal indicate a troubling trend. Neither supports an education system that asks hard questions about equity and takes bold action to improve student achievement. Both serve to obscure the state of our public education. Both threaten Colorado's ability to make research-based choices to improve our education system. Both make identifying trends and improving programs a less precise endeavor. Is that what we want for Colorado's kids?
Correction: A previous version of this blog post incorrectly stated that the purpose of the accountability system was defined by CDE. It is in fact the purpose statement from the Accountability Work Group, the group convened by CDE to make recommendations to CDE and the State Board of Education for the 2016 release of School and District Performance Frameworks.
1. I certainly hear the arguments against standardized testing. Some resonate with me more than others: a point-in-time assessment provides limited information about student performance (agreed! Data points must be contextualized, and multiple measures should be used to understand school quality and student learning); state standardized tests make little sense for high school upperclassmen (yep, I agree students should have more flexibility to take higher-level courses and to focus on personally meaningful tests like the ACT/SAT, AP, and ASVAB); a focus on standardized assessment results can narrow programming and curriculum (that's why you also see A+ advocating for equitable access to non-core academic subjects like art); standardized assessments cause undue stress for students and teachers alike; and standardized assessments take away critical instructional time (the state has taken steps to limit time spent on state standardized assessments; district-mandated assessments are another question altogether). Yes, these criticisms bear weight. But I think what we learn from the data, on both a student and a systems level, should be a bigger part of the conversation (and it is mostly ignored by the opt-out movement).
2. We've explained that Denver's growing opportunity gaps are due to the fact that while FRL students have made gains in Denver, non-FRL students have made much larger gains.
3. See our analyses that compare schools and districts based on percent of students at proficiency benchmarks on TCAP v. PARCC, and the state’s analysis of schools and districts based on mean scale scores on TCAP v. PARCC.