What was the role of the Washington Education Association in bringing about the legislature's refusal to add student test score/student growth data as a component of determining teacher/principal effectiveness? WEA opposed the inclusion of test scores in the evaluation protocols. It should be noted, however, that seven very conservative Republican members of the Senate Majority Coalition also voted with WEA on this issue and thereby prevented the Senate from passing SB 5246.
The question now is, “Why did WEA oppose the inclusion of student test score data in teacher/principal evaluations?”
Opponents of WEA will be quick to suggest that WEA chose this course in order to make it easier to “protect bad teachers.” But are there legitimate concerns about the viability and validity of using student test scores in this fashion? The answer is “Yes.”
First, these test scores may or may not accurately reflect what a student does or doesn’t know on the day of the test. It is clear, however, that the scores can tell us nothing at all about why a student knows or doesn’t know a particular fact. The Educational Testing Service (the folks who make a living designing, publishing, and scoring standardized tests) has very clearly stated that half of the factors that influence student learning are outside the control of schools and teachers. Even in a best-case scenario, we can tell little about why a student learned or didn’t learn. You can read more about this at:
Secondly, the use of these scores involves a mathematical extrapolation of their meaning that cannot be supported by the data. Those extrapolations are necessary to create the impression that students’ test scores reflect the effectiveness of their teachers. When the New York Dept of Education made teacher effectiveness data, based upon students’ test scores, available to the New York Post, it included a warning. Single-test extrapolations for a math teacher carried a 35-point margin of error, meaning the actual rating could be 35 points higher or 35 points lower. So a math teacher judged by these extrapolations to be at 50% might actually be at 85% or at 15%. For reading teachers, the margin of error is worse: 53 points. So a teacher rated at 50% might actually be at +103% or at –3%. Such disparities make any meaningful use of this data impossible.
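To see how wide those ranges really are, here is a minimal sketch of the margin-of-error arithmetic described above. The `rating_interval` helper is hypothetical; the 35- and 53-point margins are simply the figures cited for the New York data.

```python
def rating_interval(rating, margin):
    """Range of possible 'true' percentile ratings,
    given a reported rating and its margin of error."""
    return (rating - margin, rating + margin)

# Math teacher rated at 50% with a 35-point margin of error:
print(rating_interval(50, 35))  # (15, 85)

# Reading teacher rated at 50% with a 53-point margin of error:
print(rating_interval(50, 53))  # (-3, 103)
```

A 70-point (math) or 106-point (reading) spread on a 0–100 scale is the disparity the article is pointing to: the interval is wider than most of the scale itself.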
You can read more about this at: