Guest post by John Spencer
John Spencer, whom I met online after reading his blog, is a middle school computers and journalism teacher, passionate about authentic learning, social justice, and critical thinking in a digital world. Follow him at his blog.
Reblogged from John Spencer’s blog on Arizona Stories from School.
Arizona is one of many states that use student scores to evaluate teachers. This post tells the story of how this insidious decision to rely on an unreliable system affects one teacher in Arizona. The system in place in Arizona is no different from the one that will be used to rate every teacher in Georgia by 2014. It's not a pretty picture.
The Problem with VAM Scores
Before the district kill-and-drill benchmark test, a student says to me, “You look stressed.”
“I’m fine,” I lie, offering a grin that looks more like a grimace.
“I normally blow off the test, but my mom says you could lose your job if our scores are bad, so I’ll do my best.”
“How do you know about that?” I ask.
“My mom works in the cafeteria at Heatherbrae and she told me our scores decide if you’re a good teacher.”
“I’m not worried,” I lie. But the truth is I am. I’m terrified. I have tried my hardest not to teach to the test and I know that my students are not prepared. It’s a gamble I’ve won before, but somehow this feels different.
Why VAM Scores Fail
In theory, value-added measures make sense. We need to know which teachers are good and which teachers are lousy. So, what better way to look at teacher effectiveness than seeing the value that they add to student learning? Instead of focusing on the subjective observations of principals, VAM promises to offer an objective criterion for accountability.
Unfortunately, it doesn't work. For one, it assumes that learning and achievement are the same thing. They're not. A multiple-choice test is one of the worst ways to assess student learning. In addition, VAM scores are based upon a growth model that doesn't account for changes in class make-up in a transient population. I have had thirteen students leave and nine students join since the first quarter. I don't teach the same class that I taught in the first quarter.
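To make the churn problem concrete, here is a minimal sketch with hypothetical numbers and a deliberately simplistic pre/post growth calculation (not Arizona's actual VAM formula). When the class that takes the post-test is not the class that took the pre-test, a naive growth average says as much about who moved in and out as it does about the teaching:

```python
# Hypothetical illustration of how roster churn distorts a naive growth score.
# The names, scores, and formula are invented for this sketch; they are not
# Arizona's actual VAM model.
from statistics import mean

fall = {"ana": 40, "ben": 55, "cam": 62, "dee": 48}    # fall pre-test roster
spring = {"ana": 70, "cam": 75, "eli": 50, "fay": 45}  # ben and dee left; eli and fay arrived

# Naive growth: compare class averages, ignoring who is actually in the class.
naive_growth = mean(spring.values()) - mean(fall.values())

# Matched growth: compare only students present for BOTH tests, i.e. the
# students this teacher actually taught across the full year.
stayers = fall.keys() & spring.keys()
matched_growth = mean(spring[s] for s in stayers) - mean(fall[s] for s in stayers)

print(f"naive growth:   {naive_growth:+.2f}")    # +8.75, skewed by roster churn
print(f"matched growth: {matched_growth:+.2f}")  # +21.50, same-student comparison
```

In this toy example the naive score badly understates the growth of the students the teacher taught all year; with different churn it could just as easily overstate it. Either way, the number reflects the roster as much as the teacher.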
But I’m less concerned with the flaws of VAM as an evaluation tool than I am with the bad policies it creates. See, our district has to follow state policies enacted as a result of federal teacher evaluation requirements under Race to the Top. As a result, the district needs time to look at student growth to figure out the VAM scores before deciding if a teacher will receive a contract.
So now we have a pre-test in the first week of school (so much for team building) and a post-test at the close of the third quarter. Here are some of the results:
- A compacted curriculum, where we rush to cover and review standards through the entire fourth quarter without ever questioning depth of mastery. Here’s to two days to learn linear equations!
- An increase in test preparation over project-based learning.
- Impatient teachers. I mention this because I snapped at kids who weren’t learning at breakneck speed when, at other times, I would have sat with them in small groups and helped them master the content through scaffolding and critical thinking.
- Low staff and student morale. We had the highest number of teacher absences and the highest number of referrals in the four weeks leading up to the post-test.
- A decrease in administrators’ leadership in conducting evaluations and setting the tone for curriculum decisions.
Under No Child Left Behind, we used test scores to judge schools. That didn’t work, so now we have Race to the Top, where we use test scores to judge schools and teachers. Explain to me again how this is a step forward.
What do you think of John’s experience with using test scores to evaluate his ability as a teacher? What do you think of the results he posted here?