When 98 Percent Are Proficient, the Measure Is Broken

It’s no wonder that most teachers and principals see evaluations as a hoop to jump through.
In Florida, 98 percent of teachers are rated effective; in New York, 95 percent; in Tennessee, 98 percent; in Michigan, 98 percent. New Jersey implemented a new evaluation system in 2014, and 97 percent of teachers were rated ‘effective’ or ‘highly effective.’ The 3 percent of teachers rated ineffective was considered a significant increase over the evaluation system it replaced, which had rated 0.8 percent of teachers as ineffective.
Read that last part again: rating 3 percent of teachers as ineffective was considered a significant increase. The previous system found fewer than 1 percent ineffective.
Results like these are what make evaluations worthless. When 98 percent of teachers are rated proficient, it means one of two things: either the bar is set too low, or the measuring stick is measuring the wrong thing entirely.
I suspect it’s both.
We’ve built elaborate observation systems, trained evaluators, created rubrics and frameworks—and the end result is that nearly everyone passes. What exactly are we learning? What decisions are being improved?
If a measurement system can’t distinguish between different levels of performance, it’s not a measurement system. It’s theater. And expensive theater at that.
The question isn’t how to make observations more rigorous. It’s whether observations, as currently conceived, are even the right tool for the job.
This article is a response to “Teacher Observations Have Been a Waste of Time and Money” from the Brookings Institution.