UFT to file suit to prevent release of incorrect teacher data
Union charges that Teacher Data Reports are marred by missing and wrong information and rely on an unproven methodology
October 20, 2010
The UFT on Oct. 20 announced it would seek an injunction in New York State Supreme Court to prevent the city’s Department of Education from publicizing Teacher Data Reports, which are based on student scores on state tests. The union said the reports, in addition to relying on tests that the state itself has disavowed, are characterized by missing and erroneous information, and rely on a methodology which national experts consider unproven.
UFT President Michael Mulgrew said, “First, after years of boasting by the Department of Education about our kids’ progress, the state declares that the tests the DOE has been citing are basically useless. Now the DOE wants to make public a group of reports based on these faulty tests, reports that also feature other incomplete and inaccurate student data. Parents have been misled enough.”
The DOE’s Teacher Data Reports (TDRs) cover approximately 12,000 teachers of English Language Arts and mathematics in grades 4 through 8. The TDRs attempt to calculate a “value-added” score for each teacher: the difference between what the teacher’s students were predicted to score on the test and what they actually scored. The TDRs use a complicated algorithm that tries to account for outside factors affecting children’s learning, such as poverty, race and gender, and then compare the teacher’s result (the average of this difference across the children in the class) with that of a DOE-determined peer group of teachers.
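The structure of that calculation can be sketched in a few lines of code. This is a minimal illustration of the general approach the article describes (predict each student's score, average the actual-minus-predicted differences per teacher, compare to a peer group); the teacher labels, scores, and predictions below are invented for the example, and this is not the DOE's or its vendor's actual algorithm.

```python
from statistics import mean

# Hypothetical student records: each has an actual test score and a
# predicted score produced by a model that controls for prior scores,
# poverty, and other outside factors (the model itself is not shown here).
students = [
    {"teacher": "A", "actual": 78, "predicted": 74},
    {"teacher": "A", "actual": 65, "predicted": 70},
    {"teacher": "B", "actual": 82, "predicted": 80},
    {"teacher": "B", "actual": 71, "predicted": 66},
]

def value_added(records):
    """Average (actual - predicted) score difference for each teacher."""
    by_teacher = {}
    for r in records:
        by_teacher.setdefault(r["teacher"], []).append(r["actual"] - r["predicted"])
    return {t: mean(diffs) for t, diffs in by_teacher.items()}

scores = value_added(students)
# A teacher's report then compares this average against a peer group;
# here the "peer group" is simply the mean across all teachers shown.
peer_mean = mean(scores.values())
relative = {t: round(v - peer_mean, 2) for t, v in scores.items()}
```

In this toy example teacher A's students scored 0.5 points below prediction on average and teacher B's 3.5 points above, so B ends up above the peer mean and A below it; the real reports apply the same comparison after far more elaborate statistical adjustment.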
The union will charge in its lawsuit that the TDRs are “unreliable, often incorrect, subjective analyses dressed up as scientific facts,” and that the methodology’s calculation of individual teachers’ value-added is “a complex and largely subjective guessing game on the part of the DOE.”
Incorrect data, troubled methodology
New York City teachers have found multiple mistakes in their reports, including reports on students and even entire classes that the teachers never taught. Other inaccuracies include the inclusion of students who were taught for part of the year by a different teacher, inconsistencies in accounting for students who need special help, and other problems with data collection and evaluation.
Other problems with the NYC reports include a high margin of error (as much as 54 points on a 100-point scale) and unpredictable swings that can place a teacher in the top tier one year and near the bottom the next.
President Mulgrew added: “The UFT, which recently negotiated with the State Education Department to develop a new teacher evaluation system that limits the role of standardized tests, took part in the DOE’s ‘value-added’ experiment with the hope that it would produce a product that would help teachers improve their craft. Unfortunately, this has not happened.”
National studies show limits of value-added approach
On a national level, the U.S. Department of Education's research arm, the Institute of Education Sciences (IES), analyzed likely error rates in value-added measures of teachers and schools. Using data from seven different tests, IES found that even if three years of teacher results are used in the value-added calculations, one in four (26%) teachers will be mistakenly identified as needing improvement when they are actually within the "average" performance range. The same percentage (26%) of high-performing teachers would be missed entirely.
In a study that looked across five large urban districts, the Economic Policy Institute found that fewer than one-third of the teachers who ranked in the top 20% in their first year stayed in the top 20% the next year; another third moved all the way to the bottom. The reverse was also true: only about a third of those in the lowest 20% stayed in that group the following year.
A study of value-added methodology by researchers from Stanford and UC Berkeley found that teachers “appeared to be significantly more effective when teaching upper-track courses than the same teacher appeared when teaching lower-track courses.” It also found that teachers’ rankings were “significantly and negatively correlated with the proportions of students they had who were English learners, free lunch recipients, or Hispanic, and were positively correlated with the proportions of students they had who were Asian or whose parents were more highly educated.”
TDRs by district in New York City
Most researchers find that value-added data are more reliable at the school and district levels. When the NYC TDRs were averaged by district, the reports showed that the area where teachers as a group added the most value was District 16 in central Brooklyn. In English Language Arts, the districts where teachers added the least value, according to the city’s TDR measure, were the city’s charter schools (District 84) and District 1 on Manhattan’s Lower East Side.
Since there is no national standard for “value-added,” each jurisdiction produces its own algorithm. Here is a sample “value-added” algorithm from the vendor the DOE has used for this program.