
Saturday, March 8, 2014

CMS high school teachers hosed on ratings?

The irony was obvious last spring: Charlotte-Mecklenburg Schools didn't trust the state's new final exams enough to count them toward high school students' grades. But those tests were created to rate teachers, so teachers not only had to give the exams but also had to spend hours grading the new open-ended questions.

Now the results of the teacher effectiveness ratings are in,  and they indicate something went awry in rating CMS high school teachers.

Across the state and in CMS,  more than three-quarters of all teachers met or exceeded the goal for student gains.  But when I broke that out by grade level,  more than 80 percent of teachers in CMS elementary,  middle and K-8 schools met or exceeded the goal,  compared with just over 60 percent of CMS high school teachers.

[Photo: Erlene Lyde at West Charlotte]

I also ran the numbers for more than 12,700 non-CMS high school teachers around the state,  and 78 percent of them met or exceeded the target.
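
For readers who want to replicate that kind of breakdown, here's a minimal sketch of the arithmetic in Python. The file name, column names and rating labels are my own stand-ins; the actual DPI data file is formatted differently.

    import pandas as pd

    # Hypothetical file and column names -- the real DPI export differs.
    ratings = pd.read_csv("teacher_effectiveness.csv")

    # Flag teachers who met or exceeded the expected-growth goal.
    ratings["met_goal"] = ratings["rating"].isin(
        ["Meets Expected Growth", "Exceeds Expected Growth"])

    # Percent meeting or exceeding the goal, broken out by school level.
    by_level = ratings.groupby("school_level")["met_goal"].mean().mul(100).round(1)
    print(by_level)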

It's possible that these numbers reveal a real shortcoming unique to CMS high school teachers. But a handful of teachers and principals I spoke with questioned the results on two grounds: the validity of the tests and the fact that CMS teens knew they had no stake in scoring high. Erlene Lyde, a West Charlotte High teacher and vice president of the Charlotte-Mecklenburg Association of Educators, put it most bluntly: "Flawed data generated from flawed tests administered in flawed conditions and graded using a flawed scoring mechanism."

Thus, the perpetual challenge: I think it's important to analyze and report on education data, but at the same time to question what the numbers really mean.

I'm still not sure how well the EVAAS formulas from SAS Institute turn student test scores into meaningful measures of school growth and teacher value. But the EVAAS site for looking up school growth ratings is one of the best public data presentations I've seen.  It's a simple matter to look up schools and make comparisons in a number of different ways  (fellow geeks, check out the scatterplot option under comparison reports).

School growth and teacher effectiveness are both based on students'  year-to-year progress on state exams.  As you'd expect,  schools that score well on one measure are likely to look good on the other.  But they're not identical for a number of reasons.  One of them is that school growth is based only on End-of-Grade and End-of-Course exams,  while the teacher ratings include more tests.
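
The EVAAS models themselves are more elaborate, but the core idea of a growth measure can be illustrated with a toy calculation: predict each student's current score from the prior year's score, then see whether a school's students, on average, beat that prediction. The sketch below is my own simplification with made-up numbers, not the SAS formula.

    import numpy as np
    import pandas as pd

    # Made-up data: one row per student, prior-year and current-year scale scores.
    df = pd.DataFrame({
        "school":  ["A", "A", "A", "B", "B", "B"],
        "prior":   [140.0, 150.0, 160.0, 145.0, 155.0, 165.0],
        "current": [152.0, 158.0, 171.0, 146.0, 157.0, 168.0],
    })

    # Fit a statewide line predicting this year's score from last year's.
    slope, intercept = np.polyfit(df["prior"], df["current"], 1)
    df["expected"] = intercept + slope * df["prior"]

    # A school's growth is how far its students land above or below expectation.
    growth = (df["current"] - df["expected"]).groupby(df["school"]).mean()
    print(growth)  # positive = exceeded expected growth, negative = fell short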

Some of you asked excellent questions about the details when those ratings first came out. I asked Jennifer Preston of the N.C. Department of Public Instruction to clarify which tests were used and how student results were assigned to teachers. Here's her report, for those who are interested in diving deep:

  • The Department of Public Instruction and the SAS Institute were able to provide teacher-level value-added data for a pretty expansive list of grades/subjects and courses.  They are: Reading/ELA in Grades 4 – 8, Mathematics in Grades 4 – 8, Science in Grades 5 – 8, Social Studies in Grades 5 – 8, Biology, Earth/Environmental Science, Chemistry, Physics, English I, English II, English III, Algebra I/Math I, Geometry, Algebra II/Integrated Math III, World History, Civics and Economics, United States History, American History I, and American History II.  These estimates are all based on the administration of End-of-Grade assessments, End-of-Course assessments, and NC Final Exams.  North Carolina has also had a well-established Career and Technical Education assessment program for many years; teachers of more than twenty-five Career and Technical Education courses received individual value-added scores.  
  • In order to ensure that all value-added estimates are fair and valid, we do have some safeguards in place around minimum student counts. For End-of-Grade Assessments in Science, End-of-Course Assessments, NC Final Exams, and the CTE State Assessments, a teacher must be connected to at least ten students and the equivalent of six "full students," defined as students with 100% instructional responsibility claimed by one teacher. This point is most easily explained with examples. Let's say that an Exceptional Children's teacher has claimed 20 students at 10% instructional responsibility for each one. While the teacher is connected to ten students, he is only connected to the equivalent of two "full students" (20 students x 10% each = 2 full students). The teacher will not have a value-added score because he is connected to fewer than six "full students." A different Exceptional Children's teacher has claimed 20 students at 50% instructional responsibility for each one. This teacher is connected to at least ten students, and is connected to the equivalent of 10 "full students." He will have a value-added score. Each of the students must have at least three prior test scores (in any grade/subject or course) in order to be used in the analysis. For End-of-Grade Assessments in English Language Arts and Mathematics (Grades 4 – 8), a teacher must be connected to six "full students," using the same terminology as described above. These business rules are to ensure the quality of the value-added data – if a value-added estimate is calculated using a very small number of students, it's simply not valid. While a bit complicated, these rules simply reflect the reality of teaching today – there are lots of cases in which teachers share instructional responsibility for students and work as a team to provide them with the services they need.
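
For the numerically inclined, those rules boil down to two thresholds: the raw headcount of claimed students and the sum of the claimed responsibility fractions. Here's a minimal sketch of that check in Python, using DPI's own two examples; the function name and data shapes are mine, not the state's.

    def has_value_added_score(claims, require_headcount=True):
        """Apply the minimum-count rules described above.

        claims: list of (student_id, responsibility) pairs, where
        responsibility is the fraction of instructional responsibility
        claimed (1.0 = 100%).  Students are assumed to already have
        the required three prior test scores.

        require_headcount: True for EOC, NC Final Exam, CTE and EOG
        science tests, where the ten-student minimum applies; False
        for EOG reading and math in grades 4-8, where only the
        six-"full student" threshold applies.
        """
        headcount = len(claims)
        full_students = sum(resp for _, resp in claims)
        if require_headcount and headcount < 10:
            return False
        return full_students >= 6.0

    # DPI's two examples:
    ec_teacher_1 = [(i, 0.10) for i in range(20)]  # 20 students at 10% each
    ec_teacher_2 = [(i, 0.50) for i in range(20)]  # 20 students at 50% each
    print(has_value_added_score(ec_teacher_1))  # False: only 2 "full students"
    print(has_value_added_score(ec_teacher_2))  # True: 10 "full students"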


Wednesday, September 25, 2013

Student's view: Don't grade me on teacher tests

Leave it to a teenager to put a fresh spin on a topic. At a school board meeting earlier this month, Celia Collias, a junior at Myers Park High, joined a group urging Charlotte-Mecklenburg Schools to resist adding more state exams that are being created to rate teachers.

Her argument:  It's not fair to count those exams toward student grades.  After all,  if they really measure teacher effectiveness,  a low score just means we had a bad teacher,  right?

She's far from the first person to question the tests,  known as Measures of Student Learning. Local parents,  teachers and advocates have aired doubts about the value of the exams and the time they take away from other classroom work.  So have board members and Superintendent Heath Morrison.  State and federal education officials are still mulling whether to delay the plan to add more MSLs this year.

Still, Celia's analysis made me smile. She highlighted a kind of Catch-22: The tests are supposed to measure teacher effectiveness. But of course student effort (not to mention intelligence, preparation and mood that day) shapes the scores. Officials say the exams should count toward final grades to motivate students to give them their best shot. So if students try hard and still get a lousy score, their grades drop, even if it's the teacher's fault.

The folks who support value-added ratings for teachers  --  and there are many who do, all across the country  --  would say that's oversimplified.  They say they can create formulas that tease out the teacher's contribution to student success or failure.  But it's not clear whether regular people  --  not to mention teachers whose careers are at stake  --  believe them.
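
To make "tease out the teacher's contribution" concrete: in its simplest textbook form, a value-added model regresses students' current scores on their prior scores plus an indicator for each teacher, and reads the teacher coefficients as the "value added." The sketch below illustrates that bare-bones version with invented numbers; it is emphatically not the EVAAS formula, which SAS does not publish.

    import numpy as np

    # Invented data: prior score, current score, and teacher for six students.
    prior   = np.array([140.0, 150.0, 160.0, 145.0, 155.0, 165.0])
    current = np.array([151.0, 159.0, 170.0, 144.0, 154.0, 166.0])
    teacher = np.array([0, 0, 0, 1, 1, 1])  # 0 = teacher A, 1 = teacher B

    # Design matrix: intercept, prior score, and an indicator for teacher B.
    X = np.column_stack([np.ones_like(prior), prior,
                         (teacher == 1).astype(float)])

    # Ordinary least squares: current ~ intercept + prior + teacher effect.
    coef, *_ = np.linalg.lstsq(X, current, rcond=None)
    print(f"teacher B's estimated effect vs. teacher A: {coef[2]:+.2f} points")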

Two years ago,  CMS officials made a valiant effort to create a value-added formula and explain it to employees and the public.  I think it's fair to say they failed.  Backlash was strong,  including parents threatening to keep their kids home on testing days.  Key players,  including Superintendent Peter Gorman and performance pay director Andy Baxter,  left CMS and the new crew quickly dropped the effort.

Dr. William Sanders and the Cary-based SAS Institute say they have a formula that works. It's well regarded in national education circles, and N.C. education officials have hired the company to crunch state test scores for teacher evaluations. But the rest of us can't examine that formula because it's proprietary -- it's how SAS earns its income. Morrison has raised doubts about pinning his teachers' evaluations to a formula that can't be fact-checked.

So stay tuned.  The quest to create better teacher evaluations is an important one.  We'll be hearing plenty more about this.  And Celia and her classmates will be waiting to learn whether new state exams will shape their grades this year.

Sunday, May 26, 2013

New tests bring twists, frustrations

It's hard to miss the irony:  Charlotte-Mecklenburg Schools didn't have enough confidence in new state exams to stake students'  grades on them,  yet student performance on those tests will be used to evaluate teachers.

That's not necessarily a sign of hypocrisy from the district.  Local officials had a choice about counting the tests toward student grades,  but the state has mandated that value-added ratings generated by the SAS Institute's EVAAS system be part of teacher evaluations.  Still unclear is whether lawmakers will use scores from the new exams to assign letter grades to schools this year.

CMS leaders aren't saying the new exams are bad.  They're just saying there are too many unknowns this year,  with teachers having little information about what would be on the new tests and how to prepare students.  (Those of us who have been around awhile know the state has a history of discovering glitches after kids take a new test,  and these have not been field-tested.)

Rather than risk a student failing a class,  which could potentially jeopardize or delay graduation,  CMS decided the state exams won't count toward grades this year.  That's frustrating to some teachers,  who believe students will put little effort into an exam that can only benefit or harm their instructor.  To top it off,  teachers have to spend unpaid time scoring new items on the tests.

CMS created a parent guide to explain the exams students are taking now (some exams started earlier in May and some will run through June). In addition to the familiar terms -- end-of-grade exams in elementary and middle schools, end-of-course exams in high school -- you'll now hear about "common exams," sometimes called MSLs, for Measures of Student Learning. The difference is that EOGs and EOCs will be used to grade schools, while common exams will only be used for teacher evaluations.

I'll be curious to hear what parents and teachers think as the exam period plays out.  If there's one consolation for those who think this is too much testing,  it's that the new state program doesn't include the K-2 tests CMS tried in 2010,  requiring adults to administer the tests one student at a time.  However,  officials do expect an early-grades reading test in 2014.