Friday, February 21, 2014

Teacher ratings: Best and worst in Charlotte region

The teacher effectiveness ratings released this week provide rich material for analysis and debate. I just got the spreadsheet from the N.C. Department of Public Instruction, and I'll be poring through it to see what trends emerge.

But I couldn't resist a quick search to see which schools fall at the top and the bottom for our area. First I looked for those with the highest percentage of teachers who exceeded the state's goals for student growth on state exams. I eliminated those with fewer than 20 teachers, where percentages are so dramatically swayed by one or two individuals. That shuts out most elementary schools, because state testing starts in third grade and only fourth- and fifth-grade teachers have students with a previous year's scores to base growth projections on.

Here's what came up:

1. Weddington Middle (Union County), 77.8 percent of 45 teachers.
2. Highland School of Technology (Gaston County magnet), 73.9 percent of 23 teachers.
3. South Point High (Gaston County), 68.9 percent of 45 teachers.

4. Ridge Road Middle (CMS), 60.9 percent of 46 teachers.
5. Marvin Ridge High (Union County), 60.5 percent of 38 teachers.
6. Mount Holly Middle (Gaston County), 59.3 percent of 27 teachers.
7. Lake Norman High (Iredell-Statesville), 53.6 percent of 56 teachers.
8-9. Winkler Middle (Cabarrus County), 52.5 percent of 40 teachers.
8-9. Porter Ridge High (Union County), 52.5 percent of 40 teachers.
10. South Charlotte Middle (CMS), 51.4 percent of 35 teachers.

Statewide, 23 percent of teachers exceeded the target.

I also sorted for schools with the highest percentage of teachers who failed to meet the growth target. Again eliminating schools with fewer than 20 teachers, they are:

1. Hopewell High (CMS), 56.8 percent of 44 teachers.
2. Vance High (CMS), 54 percent of 50 teachers.
3. Harding High (CMS), 53.2 percent of 47 teachers.
4. Statesville High (Iredell-Statesville), 51.5 percent of 33 teachers.
5. North Meck High (CMS), 50 percent of 44 teachers.
6. West Meck High (CMS), 48.4 percent of 62 teachers.
7. Friday Middle (Gaston County), 47.8 percent of 23 teachers.
8. Grier Middle (Gaston County), 46.2 percent of 26 teachers.
9. Hunter Huss High (Gaston County), 45.9 percent of 37 teachers.
10. Independence High (CMS), 44.3 percent of 61 teachers.

Twenty-one percent of all N.C. teachers fell short of the growth target.

My search included district and charter schools in Mecklenburg, Union, Cabarrus, Iredell, Catawba, Lincoln and Gaston counties.
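
For anyone who wants to run the same kind of sort on the DPI spreadsheet, here's a rough sketch of the steps in Python. The file name and column names below are placeholders for whatever the actual download uses, so treat it as an outline rather than a ready-made script.

```python
# Rough sketch only: the file name and column names are placeholders,
# not DPI's actual headers.
import pandas as pd

ratings = pd.read_excel("dpi_teacher_effectiveness.xlsx")  # hypothetical file name

# Drop schools with fewer than 20 rated teachers, where one or two
# individuals can swing the percentages dramatically.
big_enough = ratings[ratings["teachers_rated"] >= 20]

# Schools with the highest share of teachers exceeding the growth target...
top10 = big_enough.sort_values("pct_exceeded", ascending=False).head(10)

# ...and the highest share failing to meet it.
bottom10 = big_enough.sort_values("pct_not_met", ascending=False).head(10)

print(top10[["school", "district", "teachers_rated", "pct_exceeded"]])
print(bottom10[["school", "district", "teachers_rated", "pct_not_met"]])
```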

I'm intrigued by these numbers, but I want to be clear that this is not a definitive picture of academic quality at these schools. It's worth noting that all schools on the "worst" list had teachers with top ratings, and most on the "best" list had teachers who fell short. There's still plenty of room for debate on whether the value-added formula can really turn student test scores into a meaningful measure of how good a teacher is. But these ratings are shaping education decisions and teachers' careers, so they're worth exploring.

48 comments:

Wiley Coyote said...

As I said in the last article, the data is useless.

...All teachers in North Carolina receive a rating for Standard 6. For some teachers, however, this rating is populated with school-level growth and not data from the performance of the individual teacher’s students. For the 2012-13 school year, we do not include ratings for teachers with school-level growth measures. The number of teachers in the count for Standard 6 represents only those teachers with individual estimates of student growth.

Ettolrahc said...

Hey Mint Hill folks, are you still not wondering why the elected officials let CMS sell you on supporting bonds for a new high school, then made your students go to Independence, all without any fight?

Independence High (CMS), 44.3 percent of 61 teachers.

Anonymous said...

It is important to note that this data is taken strictly from test scores, which means that somewhere around 40% or more of teachers in any given school are not included in the data. Let's rename it "schools making growth (or not) in math and language arts".

Wiley Coyote said...

9:25

If this data is from tests, then why is this in the list of criteria?:

4. For the 2011-12 school year and after, principals could elect to complete abbreviated evaluations for career-status teachers. Career-status teachers, however, must receive a full evaluation at least once during the five-year license renewal cycle. An abbreviated evaluation consists of ratings on Standards 1, 4, and 6.

It seems to me principals are doing some evaluations.

Anonymous said...

This is only the value-added data from test scores, but teachers' evaluations contain subjective ratings on five other standards.

Wiley Coyote said...

sub·jec·tive
[ səb jéktiv ]


1. not impartial: based on somebody's opinions or feelings rather than on facts or evidence

2. existing by perception: existing only in the mind and not independently of it


...I rest my case.

Anonymous said...

"It's worth noting that all schools on the "worst" list had teachers with top ratings, and most on the "best" list had teachers who fell short."

Most important point made, but you miss the point. Maybe we ought to implement evaluation marks for journalists rather than teachers...I can assure you there are many more incompetent journalists out there than teachers.

The real question that needs investigation: is it more probable that a teacher in a certain school (perhaps a Title I school) will NOT make growth compared to teachers at other schools (especially schools with kids from wealthy families)? Although there may be exceptions (which aren't the focus in statistics), from a quick glance it certainly looks as though this is the case.

This is important, because if true, VAM would be no different than using proficiency outcomes to judge teachers...a total sham. Using growth calculations was supposed to be fair. I don't believe that for a second.

Anonymous said...

These ratings are pretty worthless in CMS considering the students had already taken their "real" finals and knew that these tests didn't count for anything.
In helping score the written responses, we saw many students didn't bother to answer the questions or wrote that it wasn't worth it to answer because the test didn't count.

Drew Polly said...

Ann-
Thank you for sharing this information objectively. There clearly are issues with the value-added model, but the legislators in Raleigh continue to insist on some kind of quantitative analysis. There are also issues with EOG tests, which are summative rather than ongoing assessments, as well as issues in the growth model: a teacher who focuses on test prep to maximize scores can post great test scores even while students perform below grade level, and the next year that catches up. This is especially the case in mathematics on the grade 3-5 EOG tests.

Thanks again for your coverage of this topic.

Drew Polly

Mark Twain said...

If CMS would let Mint Hill change the school boundaries to reflect the town population as well as stop the "choice crap" it would remove the trouble students from the mix of what Independence has to teach and drastically improve test scores. Independence can be a good school but they have to remove the trash. Pipe dream ... !

Anonymous said...

The teacher evaluation form is not as simple as the 6 standards indicate. The public should study the details involved to be rated as distinguished. http://teachingislearning2013.com/sites/default/files/resources/O_CMS_User%20Guide_NC_Teacher_Eval_Rubric_2012.pdf
In order to be distinguished in several standards, teachers must be presenting, teaching, modeling, etc. at a district or state level. In order to do this, a teacher would be absent from the classroom! Teachers should be evaluated by what they do IN CLASS for their students. The teacher evaluation instrument in NC is horrible.

Anonymous said...

The kids were told the exams would not count in their grades. These are for 2012-2013. Would the people commenting here put out 100 percent on an exam that did not count? Let's see what happens when they count.

Anonymous said...

Value Added Measure makes "Voodoo Economics" look like a viable, research-based economic solution.

Has anyone ever seen the algorithms used in VAM? It would make Good Will Hunting blush.

Anonymous said...

Remember, a school can score very well on tests but find itself at the bottom of this list if it didn't meet a projected increase. On the old EOCs, schools like Providence found it much harder to earn bonus money. This needs to be made clear when putting these lists out. Regardless, the state exams did not count as part of the students' grades and the kids knew it. Amazing that these are being used for any analytical purpose.

Anonymous said...

On the worst list, the schools at the top (Hopewell) did worse than the ones numbered lower (Independence). The lower down, the better. Real confusing.

Anonymous said...

Not to mention that these ratings are total bullS#)* in the first place!

JeanTate said...

Almost all researchers advise that it would not be reliable to base teacher effectiveness ratings on anything less than three years' worth of data. However, even with three years' worth of data, a Dept. of Ed. study from 2010 states:

"Type I and II error rates for teacher-level analyses will be about 26 percent if three years of data are used for estimation. This means that in a typical performance measurement system, more than 1 in 4 teachers who are truly average in performance will be erroneously identified for special treatment, and more than 1 in 4 teachers who differ from average performance by 3 months of student learning in math or 4 months in reading will be overlooked. In addition, Type I and II error rates will likely decrease by only about one half (from 26 to 12 percent) using 10 years of data." (Scholchet & Chiang, July2010).

The V in Value-added is for voodoo statistics. Just because some educrat creates a "formula" doesn't mean the ratings mean what they say. Numbers for numbers' sake.
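
To see what error rates like that mean in practice, here's a toy simulation; the noise level and cutoff are invented for illustration and this is not the Department of Education's actual calculation. Even after averaging three years of noisy yearly estimates, a truly average teacher still gets flagged a sizable share of the time:

```python
# Toy illustration only: how often does a truly AVERAGE teacher get flagged
# when yearly value-added estimates are noisy? The noise level and cutoff
# below are invented for illustration, not taken from EVAAS or DPI.
import random

random.seed(1)

TRIALS = 100_000
YEARS = 3
NOISE_SD = 1.0       # assumed year-to-year noise in the yearly estimate
FLAG_CUTOFF = 0.6    # assumed threshold for "differs from average"

flagged = 0
for _ in range(TRIALS):
    # The teacher's true effect is 0 (exactly average); each year adds noise.
    yearly_estimates = [random.gauss(0.0, NOISE_SD) for _ in range(YEARS)]
    if abs(sum(yearly_estimates) / YEARS) > FLAG_CUTOFF:
        flagged += 1

print(f"Truly average teachers flagged anyway: {100 * flagged / TRIALS:.1f}%")
```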

Susan B. Harden said...

What is missing from any report on teacher rating is their response to their scores. We have no context - each classroom is unique. Teachers will describe variation from year to year, yet we look at these aggregate scores and make gross generalizations. We need to stop this or there won't be any good teachers left in the system.
This is an excerpt from a post by a 3rd-grade NC teacher on Diane Ravitch's blog today: http://dianeravitch.net/2014/02/22/a-third-grade-teacher-in-north-carolina-what-it-is-like-to-teach-in-my-state/
You want to talk about the test? The test is skewed to white upper/middle class students who have had certain experiences. My students have never seen the ocean. They have never touched a seashell before my class. These students don’t have gardens, haven’t seen deer in the wild and many of them don’t ride in cars because their parents don’t have one. Their parents don’t talk to them. Not because they don’t care, but because they are working two and three jobs just to try to survive. These babies are being watched by slightly older babies who use Disney and Nick as babysitters. My students need to be immersed into museums and places in our state. They need to feel the sand between their toes at a beach and feel the cold mountain air blow in their face. They need to visit a real farm, not a pumpkin patch and smell the earth when it has been freshly turned by a plow. They need to see works of fine art and go to the symphony. They need to go to a fine dining restaurant and learn the proper etiquette for eating out. You want to equalize the gap? THAT is how to do it. NOT through testing. They need experiences.

I have two important questions. Where is the student accountability in this? Also where is parent accountability? When you have students who flat refuse to do what you ask them, how is that MY fault? I have had classes where the majority of my students were labeled oppositional defiant, autistic, ADHD, bi-polar, etc. I have had students in my class who couldn’t speak English or even read in their native language, but I am supposed to get them ON grade level? Did I teach them? YES. Did they grow? YES. However, try as I might, they did not get on grade level. I never quit teaching them, but what happens when teachers no longer want little Johnny or Susie because it affects their salary? What about the parents who make excuses for their children’s lack of performance? Explain to me how it is my fault that they have not raised their child in a manner that would allow them to succeed. How is it my fault they argue and scream at the teacher instead of doing their work. How is it my fault that they refuse to complete assignments? Parents blame the teacher because obviously it is their fault-the legislature says so.

When teachers can no longer teach, when they no longer have the respect of society, how long do you think they will stay in their job? I guess we will see soon. I LOVE my students, I LOVE teaching, but what I am doing now is a pale comparison to what I used to do and I would not classify it as teaching.

I spend hundreds of dollars a month on my class. Money as a single mom that I really don’t have, but if I don’t spend that money, my students don’t have pencils, paper, or tissues or other supplies. Parents feel it is MY responsibility to provide these supplies. Schools cannot give out what they do not have, budgets have been cut and schools have to make choices between staff and supplies.

I love North Carolina. This is the only state that I have ever lived in and I cannot imagine leaving but I will be hard pressed to continue to do what I love because I cannot pay my bills. I had to tell my high school senior that I have no money to help her with college. Not even for her textbooks. She doesn’t have her driver’s license because I have been unable to afford to put her on my insurance. I will very soon be faced with the choice of moving to another state or choosing a new career. I never thought that my own state would force me into that kind of decision.

Shamash said...

JeanTate.

Good point.

Even the SAS folks admit that it takes a lot of "history" to get good results.

It's in nearly every paper I've seen about the EVAAS written by SAS.

Given this caveat from the inventors of the model, I don't know WHY everyone is so quick to jump on the "results" as meaningful.

They should give it a few years.

And that's if you actually believe the model accurately measures anything at all.

Shamash said...

Susan B Harden,

"I have two important questions. Where is the student accountability in this? Also where is parent accountability?"

The educrats will probably NEVER address these questions.

It's not politically correct today to even try.

As for "equality"...

Well that's for the domesticated cows that these models were originally designed for.

The rest of us are just a bit too unique and complicated for our "growth" to be measured from various "inputs" like livestock.

If you don't include culture and family, you're missing a large component of education.

The "gap" will be with us until both parents and students have decided they will do something about it and not just sit back and wait for others to do it for them.

And for some, if not many, kids, the "gap" will always be there.

And that's the truth no one wants to talk about.

Instead we keep throwing good money after bad, hoping to finally spend enough to make everyone "equal".

Pamela Grundy said...

As I stated in the article about the ratings, in my opinion they are a huge waste of time and money. Like the tests on which they are based, they will do far more harm than good. To start with, those tests are deeply flawed. In addition, the idea that even the fanciest mathematical formula can use standardized test scores to accurately assess what happens in every classroom in the state is absurd.

However, even though these "ratings" are highly problematic, it's important to report on them now, so that those flaws can be exposed before the system becomes more deeply entrenched. I'm not a fan of "best" and "worst" lists because despite disclaimers about the problematic nature of the information, they tend to take on a life of their own. But since leaders from both political parties seem determined to press on with this flawed and damaging strategy, the more public discussion about its problems the better.

Anonymous said...

Let me be clear when I say this. The scores that you see posted are absolutely false! They are based upon student scores from last year that TEACHERS graded! I myself spent hours and hours grading these. I tended to grade low while many of my colleagues graded high. This state is progressively moving toward a system that is based upon false statistics. Who in business would want their pay based upon a system where you sell 40 cars per month, but are only given credit for 15? This will cause teachers to leave; they realize that the state has lost its mind when it comes to testing. If this sad little state can develop a test that actually measures progress instead of the false category of "Growth," then I don't mind having my pay based upon this.

Anonymous said...

Wiley Coyote:
You asked a very valid question: "Where are the criteria for the test?"
Answer: SAS refused to release this information and the methods for determining growth for years because they knew the statistics were based upon an inaccurate model. The model and algorithms are used for cattle production and are adjusted for human beings. This state is wasting millions upon millions on SAS and other systems and it will produce no significant results nor improve teaching.

Anonymous said...

So who made the decision to go with the SAS tests? DPI? Legislators? A former Governor? And how do we get the train back on the track?

To quote a famous individual who was way ahead of his time, "Jane! Stop this crazy thing!!!" ~~ George Jetson

Anonymous said...

http://www.youtube.com/watch?v=5DxDEoDojJ4

Anonymous said...

SAS isn't designing the tests, but they are crunching the results to create the value-added ratings. I am almost certain NC is using three years' worth of test scores, but the previous two years are with very different state exams, so it's reasonable to question how well they can project results on the new tests.

SAS actually has released the formulas, though they're incomprehensible to most of us. I blogged about it in December, but I just now checked the link in that post that used to go to the formulas and it no longer works. But here's the post:

http://obsyourschools.blogspot.com/2013/12/academic-growth-formula-not-secret-just.html

Anonymous said...

CMS is using three years of testing, but not on the same tests.

Shamash said...

Ann,

I'm going to do more than one post on this...

Here's an example of a critique of the "3 tests" in which the tests are NOT THE SAME.

http://www.serve.org/uploads/docs/EBE%20Responses/500_Teacher%20evaluation%208.6.09.pdf

See pages 6 and 7...

"The value-added approach makes an assumption that tests can be equated from year to year or across subjects such that a scale score one year means the same thing the next year.

The extent to which differences in tests or forms of tests affect value-added scores is another consideration in interpreting value-added scores of teachers.

Sass (2008) in a study using Florida data reported that ". . . it is clear that different tests result in different teacher rankings" (p. 5).

----------

There are several "responses" to critiques such as this which mention a dampening effect of errors when more years of tests are used with THREE YEARS typically being the MINIMUM.

SAS is typically unclear on whether the tests need to be the SAME, though.

I found a report from HISD which seems to verify the randomness of results from the SAS model.

It equates getting a bonus based on the EVAAS with winning the lottery, since it's so random.

That's coming up next...

Shamash said...

The SAS Education Value-Added Assessment System (SAS® EVAAS®) in the Houston Independent School District (HISD): Intended and Unintended Consequences

http://files.eric.ed.gov/fulltext/EJ971428.pdf

Page 4...

"Even though the district reported that the majority of teachers favor the ASPIRE program
overall (Harris, 2011), researchers found evidence suggesting that HISD teachers have aversions
towards the program’s SAS® EVAAS®component (Collins, in progress).

In terms of reliability, those
receiving merit monies attached to their SAS® EVAAS® output often compare winning the rewards
to “winning the lottery,” given the random, “chaotic,” year-to-year instabilities they see. Such
consistencies are also well noted in literature (Baeder, 2010; Baker, Barton, Darling-Hammond,
Haertel, Ladd, Linn et al., 2010; Haertel, 2011; Koedel & Betts, 2007; Papay, 2010).

Teachers do not
seem to understand why they are rewarded, especially because they profess that they do nothing
differently from year to year as their SAS® EVAAS® rankings “jump around.”

Along with the highs
come much-appreciated monetary awards, but for what teachers did differently from one year to the
next remains unknown."


------------

So, due to some jumping around of the results, the teachers aren't even sure why they get rewarded.

Sounds like a REAL EFFECTIVE way to "improve" teaching to me.

Of course, we ALL know that there is no way these tests can actually give feedback on teaching techniques, so what can a reward (or punishment) actually accomplish?

Since it can't be directly tied to any SPECIFIC BEHAVIOR on the part of the teacher.

(This stuff gets more ridiculous the more I learn about it...)

Shamash said...

And here's the Sass (NOT SAS) study mentioned before about the instability of model results:

http://www.urban.org/UploadedPDF/1001266_stabilityofvalue.pdf

Where we earlier found (page 5):

"Nonetheless, it is clear
that different tests result in different teacher
rankings."

We also find this gem (page 3):

"McCaffrey et al. demonstrate that much of
the variation in estimated teacher effects is in
fact due to independent student-level variation in
test performance over time, rather than changes
in true teacher productivity."

-----------

Well, now ain't that a big DUH!

It's the STUDENTS (not the TEACHERS) who are INDEPENDENTLY performing differently on those tests.

Who would have guessed?

Pamela Grundy said...

Regarding the value-added ratings, this is my understanding.

The published ratings are based on only the 2013-14 tests.

The "growth" is determined by matching the scores on those tests against a gigantic database of tests that goes back many, many years. How that happens is extremely complicated.

I think value-add was also done in 2012-13. Those are separate numbers. So some teachers may now have two years of numbers.

No teachers are supposed to incur consequences based on value-add until they have three consecutive years of numbers (at which point the numbers are supposedly "more" reliable but according to studies still not particularly reliable). And I think they get to pick the best two years out of those three. I'm not clear if the clock started for some teachers last year, or if it's starting this year.

No matter which, it continues to be a huge waste of time and money.

Shamash said...

Based on accounts of how erratic those numbers are, maybe THREE YEARS OF CONSISTENT RESULTS from EVAAS would be a better measure.

I'm betting that will rarely, if ever, happen.

Wiley Coyote said...

Value added can't go back "many, many years".

It only goes back as far as the student due to actual versus predicted scores. It is suggested that value added shouldn't be used until the third grade and above due to that fact.

But, you're right. There are so many variables with value added that it's ridiculous to even use it.

Pamela Grundy said...

I'll rephrase. First the amount of an individual student's "growth" is determined through the enormous database that goes back many, many years. Then the EVASS formula is used to determine how much of that "growth" can be attributed to the teacher. That second number, such as it is, is the "value" that teacher supposedly "added" to the student's score.
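
To illustrate the general idea only (the real EVAAS formulas are far more elaborate, and I'm not claiming this is them), a stripped-down sketch with made-up numbers might look like this: predict each student's score from prior scores, then treat the average surprise in a teacher's classroom as that teacher's "added value."

```python
# Deliberately simplified caricature of a growth/value-added calculation.
# All numbers and the prediction rule are made up; the real EVAAS model is
# far more complicated.
from statistics import mean

# Hypothetical data: (prior_score, actual_score, teacher)
students = [
    (420, 436, "Teacher A"),
    (405, 410, "Teacher A"),
    (450, 449, "Teacher B"),
    (398, 421, "Teacher B"),
]

def predicted(prior_score):
    # Stand-in prediction rule; the real model draws on years of statewide data.
    return prior_score + 10

residuals_by_teacher = {}
for prior, actual, teacher in students:
    residuals_by_teacher.setdefault(teacher, []).append(actual - predicted(prior))

for teacher, residuals in residuals_by_teacher.items():
    print(teacher, "estimated effect:", round(mean(residuals), 1))
```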

Anonymous said...

The only logical solution is for parents to have their children Opt Out of these tests. It is easy to do and thousands of parents across the country already refuse to allow their children to be subjected to these worthless tests. There are many websites with advice for parents. No test results = no data. Simple enough.

Pamela Grundy said...

If you are interested in information on opting out in North Carolina you might take a look at the opting out section of the MecklenburgACTS.org website.

Anonymous said...

So are they evaluating the teacher based on how their class did last year, this year, and next year, or are they tracking each student for three consecutive years?

I was a first-year teacher last year and met expected growth, but it looked like, in the bottom box of the EVASS ratings the Observer linked to, not all our teachers were counted. I'm guessing new teachers were excluded because it was their first year of ratings?

We are being assigned a score of effectiveness based on these things....Wouldn't it be great if someone would actually explain it all to us?

Anonymous said...

Oops, EVAAS.....Freudian Slip!

Pamela Grundy said...

I'm always making that mistake too. Hmmmmm....

Shamash said...

I don't see how the EVAAS can be used to track "growth" for first year students.

Especially since they (SAS) say they need at least three years of data to dampen the noise in their measurements.

Of course, that doesn't mean some moronic educrats won't try to misapply a tool.

Pamela Grundy said...

There certainly has been a lack of communication with teachers, which is appalling.

The three years is for three consecutive years of teacher "effectiveness" ratings. The theory is that while a teacher rating from a single year may be off (actually, studies show enormous swings from year to year) if there are three consecutive years of "effectiveness" rating, then the swings can be averaged out and a teacher's overall ranking is more likely to be accurate. Studies, however, have not been kind to this perspective, as data remains pretty much all over the map.

At MecklenburgACTS.org, we've got an expert working on a layperson-friendly explanation of these (which should be far better than mine) and we hope to have it available in the next few weeks.

Anonymous said...

And now Michelle Rhee, one of America's top EduFrauds, has proposed a new statistic in the VAM model: Grit!

That's right. She believes we can come up with a metric and algorithm to measure a student's "grit."

God help us.

Craig S. Smith said...

Change in Policy for Determining Educator Effectiveness Status:


http://blogofcraigsmith.blogspot.com/2013/12/change-in-policy-for-determining.html

Anonymous said...

Thanks, Craig -- that's helpful!

Anonymous said...

What a mess, teaching in this state must be awful.

Anonymous said...

Almost as bad as being the parent of students.

Anonymous said...

When will the parents and teachers get together and tell all these people to go to hell? No more tests. No more intrusion from Raleigh. My son's school is losing its best teachers. Why is NC so bad? Why did only the youngest teachers get a raise? I am much better at my job in my 30s than I was in my 20s. What is going on here?

Anonymous said...

As a middle school teacher in CMS, I find this article misleading and confusing. Let's look at your "good" list: #10 is South Charlotte Middle, with 51.4% exceeding growth. However, on the "bad" list, Grier Middle School has 46.2% of teachers who failed to meet the growth standard. Isn't it possible that these schools aren't that different? For example, couldn't 42% of SCMS teachers have failed to meet the target? On the other hand, couldn't Grier Middle have another 50% of teachers who exceeded growth? This is a sad excuse for journalism and a gross manipulation of data.