12 Winters Blog

The fallacy of testing in education

Posted in October 2015 by Ted Morrissey on October 18, 2015

For the last several years education reformers have been preaching the religion of testing as the lynchpin to improving education (meanwhile offering no meaningful evidence that education is failing in the first place). Last year, the PARCC test (Partnership for Assessment of Readiness for College and Careers) made its maiden voyage in Illinois. Now teachers and school districts are scrambling to implement phase II of the overhaul of the teacher evaluation system begun two years before by incorporating student testing results into the assessment of teachers’ effectiveness (see the Guidebook on Student Learning Objectives for Type III Assessments). Essentially, school districts have to develop tests, kindergarten through twelfth grade, that will provide data which will be used as a significant part of a teacher’s evaluation (possibly constituting up to 50 percent of the overall rating).

To the public at large — that is, to non-educators — this emphasis on results may seem reasonable. Teachers are paid to teach kids, so what’s wrong with seeing if taxpayers are getting their money’s worth by administering a series of tests at every grade level? Moreover, if these tests reveal that a teacher isn’t teaching effectively, then what’s wrong with using recently weakened tenure and seniority laws to remove “bad teachers” from the classroom?

Again, on the surface, it all sounds reasonable.

But here’s the rub: The data generated by PARCC — and every other assessment — is all but pointless. To begin with, the public at large makes certain tacit assumptions: (1) The tests are valid assessments of the skills and knowledge they claim to measure; (2) the testing circumstances are ideal; and (3) students always take the tests seriously and try to do their best.


But none of these assumptions are true most of the time — and I would go so far as to say that all of them being true for every student, on every test, practically never happens. In other words, when an assessment is given, the assessment itself is invalid, and/or the testing circumstances are less than ideal, and/or nothing is at stake for students so they don’t try their best (in fact, it’s not unusual for students to deliberately sabotage their results).

For simplicity’s sake, let’s look at the PARCC test (primarily) in terms of these three assumptions; and let’s restrict our discussion to validity (mainly). There have been numerous critiques of the test itself that point out its many flaws. But let’s just assume PARCC is beautifully designed and actually measures the things it claims to measure. There are still major problems with the validity of its data. Chief among them is the fact that there are too many factors beyond a district’s and — especially — a classroom teacher’s control for the data to be meaningful.

For the results of a test — any test — to be meaningful, the test’s administrator must be able to control the testing circumstances to eliminate (or at least greatly reduce) factors which could influence and hence skew the results. Think about when you need to have your blood or urine tested — to check things like blood sugar or cholesterol levels — and you’re required to fast for several hours beforehand to help ensure accurate results. Even a cup of tea or a glass of orange juice could throw off the process.

That’s an example that most people can relate to. If you’ve had any experience with scientific testing, you know what lengths have to be gone to in hopes of garnering unsullied results, including establishing a control group — that is, a group that isn’t subjected to whatever is being studied, to see how it fares in comparison to the group receiving whatever is being studied. In drug trials, for instance, one group will receive the drug being tested, while the control group receives a placebo.

Educational tests rarely have control groups — a group of children from whom instruction or a type of instruction is withheld to see how they do compared to a group that’s received the instructional practices intended to improve their knowledge and skills. But the lack of a control group is only the beginning of testing’s problems. School is a wild and woolly place filled with human beings who have complicated lives, and countless needs and desires. Stuff happens every day, all the time, that affects learning. Class size affects learning, class make-up (who’s in the class) affects learning, the caprices of technology affect learning, the physical health of the student affects learning, the mental health of the student affects learning, the health of the teacher affects learning (and in upper grades, each child has several teachers), the health and circumstances of the student’s parents and siblings affect learning, weather affects learning (think “snow days” and natural disasters); sports affects learning (athletes can miss a lot of school, and try teaching when the school’s football or basketball team is advancing toward the state championship); ____________ affects learning (feel free to fill in the blank because this is only a very partial list).


And let me say what no one ever seems to want to say: Some kids are just plain brighter than other kids. We would never assume a child whose DNA renders them five-foot-two could be taught to play in the NBA; or one whose DNA makes them six-foot-five and 300 pounds could learn to jockey a horse to the Triple Crown. Those statements are, well, no-brainers. Yet society seems to believe that every child can be taught to write a beautifully crafted research paper, or solve calculus problems, or comprehend the principles of physics, or grasp the metaphors of Shakespeare. And if a child can’t, then it must be the lazy teacher’s fault.

What is more, let’s look at that previous sentence: the lazy teacher’s fault. Therein lies another problem with the reformers’ argument for reform. The idea is that if a student underachieves on an exam, it must be the fault of the one teacher who was teaching that subject matter most recently (i.e., that school year). But learning is a synergistic effect. Every teacher who has taught that child previously has contributed to their learning, as have their parents, presumably, and the other people in their lives, and the media, and on and on. But let’s just stay within the framework of school. What if a teacher receives a crop of students who’d been taught the previous year by a first-year teacher (or a student teacher, or a substitute teacher who was standing in for someone on maternity or extended-illness leave), versus a crop of students who were taught by a master teacher with an advanced degree in their subject area?

Surely — if we accept that teaching experience and education contribute to teacher effectiveness — we would expect the students taught by a master teacher to have a leg up on the students who happened to get a newer, less seasoned, less educated teacher. So, from the teacher’s perspective, students are entering their class more or less adept in the subject depending on the teacher(s) they’ve had before. When I taught in southern Illinois, I was in a high school that received students from thirteen separate, curricularly disconnected districts, some small and rural, some larger and more urban — so the freshman teachers, especially, had an extremely diverse group, in terms of past educational experiences, on their hands.

For several years I’ve been an adjunct lecturer at University of Illinois Springfield, teaching in the first-year writing program. UIS attracts students from all over the state, including from places like Chicago and Peoria, in addition to students from nearby rural schools, and everything in between (plus a significant number of international students, especially from India and China). In the first class session I have students write a little about themselves — just answer a few questions on an index card. Leafing through those cards I can quickly get a sense of the quality of their educational backgrounds. Some students are coming from schools with smaller classes and more rigorous writing instruction, some from schools with larger classes and perhaps no writing instruction. The differences are obvious. Yet the expectation is that I will guide them all to be competent college-level writers by the end of the semester.

The point here, of course, is that when one administers a test, the results can provide a snapshot of the student’s abilities — but it’s providing a snapshot of abilities that were shaped by uncountable and largely uncontrollable factors. How, then, does it make sense (or, how, then, is it fair) to hang the results around an individual teacher’s neck — either Olympic-medal-like or albatross-like, depending?

As I mentioned earlier, validity is only one issue. Others include the circumstances of the test, and the student’s motivation to do well (or their motivation to do poorly, which is sometimes the case). I don’t want to turn this into the War and Peace of blog posts, but I think one can see how the setting of the exam (the time of day, the physical space, the comfort level of the room, the noise around the test-taker, the performance of the technology [if it’s a computer-based exam like the PARCC is supposed to be]) can impact the results. Then toss in the fact that most of the many exams kids are (now) subjected to have no bearing on their lives — and you have a recipe for data that has little to do with how effectively students have been taught.

So, are all assessments completely worthless? Of course not — but their results have to be examined within the complex context in which they were produced. I give my students assessments all the time (papers, projects, tests, quizzes), but I know how I’ve taught them, and how the assessment was intended to work, and what the circumstances were during the assessment, and to some degree what’s been going on in the lives of the test-takers. I can look at their results within this web of complexities, and draw some working hypotheses about what’s going on in their brains — then adjust my teaching accordingly, from day to day, or semester to semester, or year to year. Some adjustments seem to work fairly well for most students, some not — but everything is within a context. I know to take some results seriously, and I know to disregard some altogether.


Mass testing doesn’t take into account these contexts. Even tests like the ACT and SAT, which have been administered for decades, are only considered as a piece of the whole picture when colleges are evaluating a student’s possible acceptance. Other factors are weighed too, like GPA, class rank, teacher recommendations, portfolios, interviews, and so on.

What does all this mean? One of the things it means is that teachers and administrators are frustrated with having to spend more and more time testing, and more and more time prepping their students for the tests — and less and less time actually teaching. It’s no exaggeration to say that several weeks per year, depending on the grade level and an individual school’s zeal for results, are devoted to assessment.

The goal of assessment is purported to be to improve education, but the true goals are to make school reform big business for exploitative companies like Pearson, and for the consultants who latch onto the movement remora-like, for example, Charlotte Danielson and the Danielson Group; and to implement the self-fulfilling prophecy of school and teacher failure.

(Note that I have sacrificed grammatical correctness in favor of non-gendered pronouns.)

Principals unwitting soldiers in Campbell Brown’s army

Posted in August 2014, Uncategorized by Ted Morrissey on August 17, 2014

(This is a long post — and for that, my apologies. But it’s important, and I encourage you to take your time and read it thoroughly.)

Because of my interest in the subject (as demonstrated in my blog posts over the past few months), I was invited to participate in a video roundtable via Skype with administrators from several schools about implementing the Danielson Framework for Teacher Evaluation, and I found many of the comments, well, bewildering. Even though it was a select group, I strongly suspect that their attitudes and approaches are representative of administrators not only in Illinois but across the country — as the Danielson Framework has been adopted by numerous states. Before I go any further I must stress that these are all good people who are trying to do their job as they understand it from the State Board of Education, their own local school boards and the public at large. Around the video table were a superintendent of a K-12 district, building principals of elementary, middle, junior high and high schools, and even a K-12 curriculum director, along with three teachers — elementary, junior high and high school (yours truly). I’m going to try to represent their words accurately, but without attribution since their comments were not on the record. In fact, as the two-hour video chat became more heated, several people were speaking with a good deal of candor, and clearly their remarks were not intended for all ears. (By the way, kudos to the tech folks who brought us all together — it worked far better than I would have suspected.)

I considered not writing about the video conference at all, but ultimately felt that I owe it to the profession that I’ve devoted my adult life to (as I enter my 31st year in the classroom), a profession that has been beleaguered in recent years by powerful forces on every side: attacking teachers’ integrity, our skills, our associations, our job security, our pensions. We feel we have so many enemies, we don’t even know where to focus our attention.

What is more, most teachers are afraid to speak candidly with their own administrators and they’re especially afraid to speak out about what’s going on in their buildings. In spite of education reformers blanketing the media with the myth of “powerful teachers unions,” the truth is that associations like the National Education Association and American Federation of Teachers aren’t all that powerful — if they were, would teachers be in the plight we are now? — and individual teachers are very vulnerable. Nontenured teachers can be terminated without cause, and tenured teachers can be legally harassed right out of the profession. In fact, it happens all the time. Moreover, teachers tend to be naturally non-confrontational, which is why they chose to go into teaching in the first place. People with more aggressive personalities will seek other kinds of professions. As a result, we’ve been lambs to the slaughter at the hands of reformers, legislators, school board members, administrators … at the hands of anyone who wants to take a whack at us. Rather than fight back, it’s easier to keep quiet and bear it, or to move on.

I’ve been writing about educational issues for the past several months — the unfair termination of young teachers, the inherent flaws of the Danielson Framework, the way the Framework affects teachers, and my issues with PARCC and the Common Core. My posts have been garnering hundreds of hits, and a few online likes, but many, many private, under-the-radar thumbs-ups and thank-yous. Teachers appreciate that someone is speaking out, but they’re not only afraid to speak out themselves, they’re even afraid to be seen agreeing with my point of view. If this isn’t evidence of the precariousness of being a teacher and the overall weakness of “teachers unions,” I don’t know what is.

Public Opinion and the Rarefied Air of Excellence

Much of the round-table discussion had to do with the Framework’s insistence that very, very few teachers rank in the top category (identified as “Excellent” in many districts’ plans). Before Danielson, districts tended to have three-tier evaluation instruments, which were often labeled as “Excellent,” “Satisfactory” and “Unsatisfactory.” Danielson adds a tier between “Excellent” and “Satisfactory”: “Proficient.” Many veteran teachers who had consistently received an excellent rating under the previous model were downgraded to merely proficient under Danielson. This downgrading was predicted as early as two years ago when the new instrument emerged on the educational horizon.  I didn’t want to believe it would be that severe, but it has been this past year, the year of implementation, with very few teachers being rated as excellent. For the record, I was rated as proficient — not as excellent for the first time since I was a nontenured teacher, more than 25 years ago.

In fact, as I wrote in a previous post, the Illinois Administrators Academy offered a special workshop this past summer to train administrators how to deliver the unpleasant news that a veteran teacher has been downgraded to proficient — the downgrading was so pervasive across the state. The Framework was originally developed by Charlotte Danielson in 1996 as a way to evaluate first-year teachers, so it made perfect sense that a single-digit percentage would be deemed as excellent. The Framework has undergone three revisions since then and now purports to be an instrument that can assess every teacher, K-12, every subject, and even nonclassroom professionals like librarians and school nurses. Nevertheless, the notion that very, very few teachers will rate as excellent has clung tenaciously to the Framework throughout each revision.

I asked the administrators why that aspect of the Framework remains even though the Framework’s purpose has been expanded dramatically since it was conceived in the mid-1990s. I was told by the K-12 superintendent that the Framework has gained such wide acceptance in large part due to that very aspect. Under previous evaluation instruments, 90% of teachers were judged to be excellent, and the public doesn’t accept that as true. In fact, the public believes (and therefore school boards, too, since they, like the public at large, are almost always noneducators with no classroom experience) that the traditional bell curve should apply to teachers. The bell curve, or Gaussian function, is of course the statistical representation that says the fewest examples of anything, qualitatively speaking, are at either extreme of the gathered samples, and the vast majority (let’s say 80%) fall somewhere in the middle, from below average to above average.
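Since the superintendent’s argument leans on the bell curve, it’s worth making the numbers concrete. A short sketch (assuming a standard normal distribution — an idealization, since nobody actually measures teacher quality this cleanly) shows where an 80-percent-in-the-middle figure comes from:

```python
import math

# Share of a normal (Gaussian) population lying within k standard
# deviations of the mean, via the error function: erf(k / sqrt(2)).
def share_within(k):
    return math.erf(k / math.sqrt(2))

print(f"within 1.00 sd: {share_within(1.00):.0%}")  # about 68%
print(f"within 1.28 sd: {share_within(1.28):.0%}")  # about 80% -- the "vast majority in the middle"
```

The sketch only shows that “the vast majority in the middle” is a property of random samples — which is exactly the assumption that fails for career teachers, who are anything but a random sample.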

According to the superintendent, then, the public believes that the bell curve should apply to experienced, career teachers as well — that only a small percentage are truly excellent, and the vast majority fall somewhere in the middle (to use Danielson terms, in the satisfactory to proficient range). First of all, who cares what the uninformed public thinks? In our country we have a fascination with asking pedestrians on the street what they think of global warming, heightened military involvement in the Middle East, and allowing Ebola victims to enter the country. John Oliver of “Last Week Tonight” did a segment on this phenomenon that went viral on social media.

Assuming this is true — that the public believes only a small percentage of teachers are excellent based, unconsciously, on the principle of the bell curve (and I’m willing to believe that it is true) — the belief yet again speaks to the ignorance of the “man on the street.” In this instance, the bell curve is being fallaciously applied. If you take a random sampling of people (let’s say, you go to the mall at Christmas time and throw a net around a random group of shoppers) and task them with teaching some random topic to a random group of students, then, yes, the bell curve is likely to be on target. In that group of shoppers, lo and behold, you netted a couple of professional teachers, so they’re able to teach the material pretty effectively; another much larger group of shoppers who are decently educated and reasonably articulate could do a passable job imparting the information; and a smaller group on the other extreme would really make a botch of it.

But career teachers are not a random sampling of shoppers at the mall. They’re highly educated professionals who have devoted their lives to teaching, who have constantly worked to improve their craft, and who have honed their skills via thousands of contact hours with students. It stands to reason, in fact, that career teachers should be excellent at what they do after all that training and experience. No one, I suspect, would have an issue with the statement that all Major League baseball players are excellent at baseball — some may be bound for Cooperstown and some may go back to the minors or to some other career altogether after a season or two, but they’re all really, really good at playing baseball compared to the average person. Why is it so hard to believe that 90% of career teachers are excellent at what they do?

The Fallacy of the Bell Curve and Nontenured Teachers

Unfortunately, the acceptance of the bell-curve fallacy has an even more devastating impact when applied to teachers in the beginning of their careers. One administrator shared that her board expects a few nontenured teachers to be terminated every spring, that the board implies the administrators aren’t doing their jobs if every nontenured teacher is retained. I was dumbfounded by this statement. It’s barely a figurative comparison to say that it’s like having to sacrifice a virgin or two to appease the gods at the vernal equinox. It’s no wonder that many young teachers feel as if they’re performing their highly complex duties with a Damoclesian sword poised above their tender necks. I know firsthand one young teacher who resigned last spring after two years in the classroom to pursue another career option because she’d seen the way other young teachers were treated and had already experienced some administrative harassment. And this was a teacher who by all accounts was doing well in the classroom (in a specialized area in which there aren’t a lot of qualified candidates). She didn’t even know what she wanted to do for a living, but it would have to be better (and professionally safer) than teaching, she believed. I have to believe she’s right.

But, again, in the case of young teachers, the bell curve is being applied erroneously. Generally speaking, when teachers are hired, administrators are drawing from an applicant pool in the hundreds. They’re college educated, trained in their field, and they’ve passed their professional exams. They often have to go through multiple rounds of interviewing before being offered a position. Of course, even after all of this, there can be young teachers who have chosen their profession poorly and are in fact not cut out for teaching — but school board members shouldn’t just assume a certain number should be cut from the herd to make room for potentially more effective young professionals — and if that sort of pressure is being applied to administrators, to be the bearers of the bloody hatchet every spring, that is grossly unfair, too.

The Danielson Group’s Indoctrination

The evaluation training that administrators have to undergo, all forty hours of it, indoctrinates them to the Danielson Framework’s ethos that excellent is all but unattainable, and it has led to all kinds of leaping logic and gymnastic semantics. An idea that was expressed multiple times in various ways during the roundtable was that proficient really means excellent, and a rating of excellent really means something beyond excellent — what precisely is unclear, but it has to do with teachers going above and beyond (above what? beyond where? … no one seems to know or be able to articulate). The Framework was often referred to as “fact-based” and “objective,” yet administrator after administrator couldn’t put into words what distinguishes a “proficient” teacher from an “excellent” one. It’s just a certain feeling — which is the very definition of subjectivity. The Framework for Teacher Evaluation approach is fact-laden, but it is far from fact-based.

The Danielson model is supposed to be an improvement over previous ones in part because it requires evaluators to observe teachers more than in the past. In the old system, typically, tenured teachers were observed one class period every other year. Now they’re observed one class period plus several pop-in visits, which may last only a few minutes, every other year. The Framework recommends numerous visits, even for veteran teachers, but in practicality evaluators are doing well to pop in a half dozen times or so because they have so many teachers to evaluate. Nevertheless, the increased frequency seems to give administrators the sense that they have a secure hold on the behaviors of their teachers and know with confidence what they’re doing in their classes. This confidence, frankly, is troubling. Let’s be generous and say that a principal can observe a teacher for a total of three class periods (one full period, plus bits of four or five other ones). Meanwhile, the typical teacher teaches, say, six periods per day for 180 days, which equals 1,080 periods. Three class periods represent less than one percent (0.3 percent, rounding up) of that teacher’s time with students during the year. How in the world can an evaluator say with confidence Teacher A is excellent and Teacher B is really close, but definitely only proficient based on seeing them teach less than one percent of the time?
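The arithmetic in that paragraph can be checked in a few lines (using the figures assumed above: six periods a day, 180 school days, roughly three observed periods):

```python
# How much of a teacher's year does an evaluator actually see?
periods_per_year = 6 * 180   # six periods a day, 180 school days = 1,080 periods
observed_periods = 3         # one full period plus assorted pop-in visits
fraction = observed_periods / periods_per_year
print(f"{fraction:.2%} of the teacher's class periods")  # 0.28% -- under a third of one percent
```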

Yet one principal said with confidence, bravado even, that he could observe two high-performing teachers who had always been rated as excellent in the past, and based on his Danielson-style observations he could differentiate between the excellent high-performing teacher and the proficient high-performing teacher, because, he said, the excellent teacher was doing something consistently, whereas the proficient teacher was doing that something only some of the time — what that something is was left undefined. If a writer submitted an academic article to a peer-reviewed journal and was drawing rock-solid conclusions based on observing anything 0.3% of the time … well, let us say that acceptance for publication would be unlikely.

The same standards of logic should be applied to judging teachers’ careers and assessing their worth to the profession. Period.

The Portfolio Conundrum

The confident administrator may point to another component of the Danielson model that is supposed to be an improvement over the previous approach: a portfolio prepared by the teacher. Teachers are supposed to provide their evaluator with evidence regarding their training and professionalism (especially for Danielson domains 1 and 4, “Planning and Preparation” and “Professional Responsibilities”), but there are some inherent problems with this approach and a lot of confusion. As far as confusion, principals seem to be in disagreement about how much material teachers should provide them. Some suggest only a few representative items, but the whole idea is for the portfolio to fill in the blanks for the evaluator, to make the evaluator aware of professional behaviors and activities that he or she can’t observe in the classroom (especially when they’re observing a teacher less than one percent of their time with students!). However, if teachers hand in thick portfolios, filled with evidence, the overburdened principal (and I’m not being sarcastic here) hardly has time to pore through dozens of portfolios that look like they were prepared by James Michener (I debated between Michener and Tolstoy) — which leaves teachers in a conundrum: Do they turn in a modest amount of evidence, thereby selling themselves short, or do they submit copious amounts of evidence that won’t be read and considered by their evaluator anyway?

And it’s a moot question, of course, since nearly all teachers are going to be lumped into the proficient category to satisfy the public’s erroneous bell-curve expectations.

The Undervaluing of Content

I’ll add one bit more from the conversation because it leads to another important point — perhaps the most important — and that is one principal’s statement that he mainly focuses on a teacher’s delivery of the material and not the validity of the content because he usually doesn’t have the background in the subject area. In larger school districts, there may be department chairs who are at least in part responsible for evaluating teachers in their department (so an English teacher evaluates an English teacher, or a math teacher, a math teacher, etc.), but the vast majority of evaluations, for tenured and nontenured teachers alike, are performed by administrators outside of the content area. This, frankly, has always been a problem and largely invalidates the entire teacher evaluation system, but when the system was mainly benign, no one fussed too much about it (not even me). Now, however, when tenure and seniority laws have been weakened, and principals are programmed to be niggardly with excellent ratings, the fact that evaluators oftentimes have no idea if the teacher is dispensing valid knowledge or not undermines the whole approach.

Not to mention, the Danielson Framework claims to place about fifty percent of a teacher’s effectiveness on his or her knowledge of the subject. The portfolios are supposed to help with this dilemma (the portfolios that aren’t being read with any sort of care because of time issues). I’m dubious, though, that this is a legitimate concern of the framers of the Danielson Framework because they definitely privilege an approach to teaching that places the burden of knowledge production with the students. That is, ideally teachers are facilitating their students’ acquisition of knowledge through self-discovery, but they’re not imparting that knowledge to them directly. Indeed, excellent teachers do very little direct teaching at all.

This devaluation of content-area knowledge has been a growing trend for several years, and it’s not surprising that administrators are easily swayed toward this mindset. After all, teachers who go into administration have made the choice to pursue further knowledge outside their subject-area field. Very, very few administrators have a master’s degree in their original content area in addition to their administrative degrees and certificates. In theory, they may accept the idea that broader and deeper knowledge in your subject area is important, but they can’t truly understand just how valuable (even invaluable) it is since they didn’t teach as someone with an advanced degree in their field. They’re only human after all, and none of us can truly relate to an experience we haven’t had ourselves.

Campbell Brown and Her Unwitting Campbell Brown-shirts

We didn’t talk about this during the video round-table, but it seems clear to me that none of the administrators had any sense of the role they’re playing in the larger scheme of things. The players are too numerous and the campaign too complex to get into here in any depth, but there’s unquestionably a movement afoot to privatize education — that is, to take education out of the hands of trained professionals and put it in the hands of underpaid managers so that corporations can reap obscene profits, and turn traditional public schools into barely funded welfare institutions. The well-to-do will be able to send their sons and daughters to these corporate-backed charter schools, and middle-class parents can dig their infinite hole of financial debt even deeper in an effort to keep up and send their children to the private, corporate schools as well.

Campbell Brown and the Partnership for Educational Justice were behind the lawsuit that made teacher tenure unconstitutional in California (the Vergara decision), and they’re at it again in New York (Wright v. New York). The Danielson Framework, wielded by brainwashed administrators, is laying the groundwork for Vergara-like lawsuits across the land. Imagine how much easier it will be for Brown and partners in “reform” like David Boies to make the case that public schools are failing because, see, only a handful of teachers are performing at the top of their field. The rest, 90-something percent, are varying shades of mediocre, with powerful teachers unions shielding their mediocrity from public view.

Superintendents and principals have drunk the Campbell Brown-colored Kool-Aid. In this instance the metaphor is especially apropos because there are already movements underway to dismiss traditionally trained administrators as underqualified. In Illinois, the State Board of Education is changing from certificates to licenses and in the process requiring additional training to become an administrator. The change is recent, but there are already insinuations that administrators with the traditional training will be deemed underqualified compared to their newly licensed colleagues.

Moreover, what does it say about a principal as recruiter of young talent when a significant number of his new hires have to be terminated year after year? What a waste of money and resources, and what a disservice to children! And what does it say about a principal as educational leader of his building when he can’t even shape the majority of his veteran teachers into excellent practitioners? Clearly, he’s not especially excellent either. And all those well-paid superintendents who hired all those lackluster principals, well … And all those publicly elected boards of education who hired all those lackluster superintendents, well … the gross mismanagement of taxpayer dollars is bordering on criminal fraud.

As I see it, the Partnership for Educational Justice’s grand scheme is to have principals help them dismantle professional associations like the NEA and AFT via their use of the Danielson Framework, state by state. Then they’ll systematically replace public schools with corporate-backed charter schools which will be staffed by undertrained, low-paid “teachers,” and instead of principals, each school/franchise will be overseen by a manager — just as it works in the corporate world now. Instead of boards of education who answer to taxpayers there will be boards of directors who answer to shareholders. Brilliant.

So every time principals sign an evaluation that undervalues their teachers, they’re also signing their own resignation letter. It’s all right: they’ll look quite fetching in their Brown-shirts as they wait in the unemployment line.


Here’s my beef with PARCC and the Common Core

Posted in August 2014, Uncategorized by Ted Morrissey on August 9, 2014

Beginning this school year students in Illinois will be taking the new assessment known as PARCC (Partnership for Assessment of Readiness for College and Careers), which is also an accountability measure — meaning that it will be used to identify the schools (and therefore teachers) who are doing well and the ones who are not, based on their students’ scores. In this post I will be drawing from a document released this month by the Illinois State Board of Education, “The top 10 things teachers need to know about the new Illinois assessments.” PARCC is intended to align with the Common Core, which around here has been rebranded as the New Illinois Learning Standards Incorporating the Common Core (clearly a Madison Avenue PR firm wasn’t involved in selecting that name — though I’m surprised funds weren’t allocated for it).

This could be a very long post, but I’ll limit myself to my main issues with PARCC and the Common Core. The introduction to “The top 10 things” document raises some of the most fundamental problems with the revised approach. It begins, “Illinois has implemented new, higher standards for student learning in all schools across the state.” Let’s stop right there. I’m dubious that rewording the standards makes them “higher,” and from an English/language arts teacher’s perspective, the Common Core standards aren’t asking us to do anything different from what we’ve been doing since I started teaching in 1984. There’s an implied indictment in the opening sentence, suggesting that until now, the Common Core era, teachers haven’t been holding students to particularly high standards. I mean, logically, if there were space into which the standards could be raised, then they had to be lower before the Common Core. It’s yet another iteration of the war-cry: Teachers, lazy dogs that they are, have been sandbagging all these years, and now they’re going to have to up their game — finally!

Then there’s the phrase “in all schools across the state,” that is, from the wealthiest Chicago suburb to the poorest downstate school district, and this idea gets at one of the biggest problems — if not the biggest — in education: grossly inequitable funding. We know that kids from well-to-do homes attending well-to-do schools do significantly better in school — and on assessments! — than kids who are battling poverty and all of its ill effects. Teachers associations (a.k.a. unions) have been among the many groups advocating to equalize school funding via changes to the tax code and other laws, but money buys power, and powerful interests block funding reform again and again. So until the money spent on every student’s education is the same, no assessment can hope to provide data that isn’t more about economic circumstances than student ability.

As if this disparity in funding weren’t problematic enough, school districts have been suffering cutbacks in state funding year after year, resulting in growing deficits, teacher layoffs (or non-replacement of retirees), and other direct hits to instruction.

According to the “The top 10 things” document, “[a] large number of Illinois educators have been involved in the development of the assessment.” I have no idea how large a “large number” is, but I know there’s a big difference between involvement and influence. From my experience over the last 31 years, it’s quite common for people to present proposals to school boards and the public clothed in the mantle of “teacher input,” but they fail to mention that the input was diametrically opposed to the proposal.

The very fact that the document says in talking point #1 that a large number of educators (who, by the way, are not necessarily the same as teachers) were involved in PARCC’s development tells us that PARCC was not developed by educators, and particularly not by classroom teachers. In other words, this reform movement was neither initiated nor orchestrated by educators. Some undefined number of undefined “educators” were brought on board, but there’s no guarantee that they had any substantive input into the assessment’s final form, or even endorsed it. I would hope that the teachers who were involved were vocal about the pointlessness of a revised assessment when the core problems (pun intended), like inadequate funding, are not being addressed. At all.

“The top 10 things” introduction ends with “Because teachers are at the center of these changes and directly contribute to student success, the Illinois State Board of Education has compiled a list of the ten most important things for teachers to know about the new tests.” In a better world, the sentence would be Because teachers are at the center of these changes and directly contribute to student success … the Illinois State Board of Education has tasked teachers with determining the best way to assess student performance. Instead, teachers are being given a two-page handout, heavy on snazzy graphics, two to three weeks before the start of the school year. In my district, we’ve had several inservices over the past two years regarding Common Core and PARCC, but our presenters had practically no concrete information to share with us because everything was in such a state of flux; as a consequence, we left meeting after meeting no better informed than we were after the previous one. Often the new possible developments revised or even replaced the old possible developments.

The second paragraph of the introduction claims that PARCC will “provide educators with reliable data that will help guide instruction … [more so] than the current tests required by the state.” I’ve already spoken to that so-called reliable data above, but a larger issue is that this statement assumes teachers are able to analyze all that data provided by previous tests in an attempt to guide instruction. It happens, and perhaps it happens in younger grades more so than in junior high and high school, but by and large teachers are so overwhelmed with the day-to-day — minute-to-minute! — demands of the job that there’s hardly time to pore through stacks of data and develop strategies based on what they appear to be saying about each student. Teachers generally have one prep or planning period per day, less than an hour in length. The rest of the time they’re up to their dry-erase boards in kids (25 to 30 or more per class is common). In that meager prep time and whatever time they can manage beyond that, they’re writing lesson plans; grading papers; developing worksheets, activities, tests, etc.; photocopying worksheets, activities, tests, etc.; contacting or responding to parents or administrators; filling out paperwork for students with IEPs or 504s; accommodating students’ individual needs, those with documented needs and those with undocumented ones; entering grades and updating their school websites; supervising hallways, cafeterias and parking lots; coaching, advising, sponsoring, chaperoning. . . .

Don’t get me wrong. I’m a scholar as well as a teacher. I believe in analyzing data. I’d love to have a better handle on what my students’ specific abilities are and how I might best deliver instruction to meet their needs. But the reality is that that isn’t a reasonable expectation given the traditional educational model — and it’s only getting worse in terms of time demands on teachers, with larger class sizes, ever-changing technology, and — now — allegedly higher standards.

Educational reformers are so light on classroom experience they haven’t a clue how demanding a teacher’s job is at its most fundamental level. In this regard I think education suffers from the fact that so many of its practitioners are so masterful at their job that their students and parents and board members and even administrators get the impression that it must be easy. Anyone who is excellent at what she or he does makes it look easy to the uninitiated observer.

I touched on ever-changing technology a moment ago; let me return to it. PARCC is intended to be an online assessment, but, as the document points out, having it online in all schools is unrealistic, and that “goal will take a few more years, as schools continue to update their equipment and infrastructure.” The goal of its being online is highly questionable in the first place. The more complicated one makes the assessment tool, the less cognitive processing space the student has to devote to the given question or task. Remember when you started driving a car? Just keeping the darn thing on the road was more than enough to think about. In those first few hours it was difficult to imagine that driving would become so effortless that one day you’d be able to drive, eat a cheeseburger, sing along with your favorite song, and argue with your cousin in the backseat, all simultaneously. At first, the demands of driving the car dominated your cognitive processing space. When students have to use an unfamiliar online environment to demonstrate their abilities to read, write, calculate and so on, how much will the online environment itself compromise the cognitive space they can devote to the reading, writing and calculating processes?

What is more, PARCC implies that schools, which are already financially strapped and overspending on technology (technology that has never been shown to improve student learning and may very well impede it), must channel dwindling resources — whether local, state or federal — to “update their equipment and infrastructure.” These are resources that could, if allowed, be used to lower class sizes, re-staff libraries and learning centers, and offer more diverse educational experiences to students via the fine arts and other non-core components of the curriculum. While PARCC may not require, per se, schools to spend money they don’t have on technology, it certainly encourages it.

What is even more, the online nature of PARCC introduces all kinds of variables into the testing situation that are greatly minimized by the paper-and-pencil tests it is supplanting. Students will need to take the test in computer labs, classrooms, and other environments that may or may not be isolated and insulated from the rest of the school, or even in off-site settings. Granted, the sites of traditional testing have varied somewhat — you can’t make every setting precisely equal to every other — but it’s much, much easier to come much, much closer with paper than with an online test. Desktop versus laptop computers (in myriad models), proximity to Wi-Fi, speed of connection (which may vary minute to minute), how much physical space can be put between test-takers — all of these are issues specific to online assessments, and they all will affect the results.

So my beef comes down to this about PARCC and the Common Core: Hundreds of millions of dollars have been spent rewording standards and developing a new assessment that won’t actually help improve education. Here’s what would help teachers teach kids:

1. Equalize funding and increase it.

2. Lower class sizes, kindergarten through 12th grade, significantly — maximum fifteen per class, except for subjects that benefit from larger classes, like music courses.

3. Treat teachers better. Stop gunning for their jobs. Stop dismantling their unions. Stop driving them from the profession with onerous evaluation tools, low pay and benefits, underfunded pensions, too many students to teach to do their job well, and ridiculous mandates that make it harder to educate kids. Just stop it.

But these common sense suggestions will never fly because no one will make any money off of them, let alone get filthy rich, and education reform is big business — the test developers, textbook companies, technology companies, and high-priced consultants will make sure the gravy train of “reform” never gets derailed. In fact, the more they can make it look like kids are underachieving and teachers are underperforming, the more secure and more lucrative their scam is.

Thus PARCC and Common Core … let the good times roll.