12 Winters Blog

Fatal flaws of the Danielson Framework

Posted in March 2014, Uncategorized by Ted Morrissey on March 23, 2014

The Danielson Group’s “Framework for Teaching Evaluation Instrument” has been sweeping the nation, including my home state of Illinois, in spite of the fact that the problems with the Group, the Framework, the Instrument, and even Ms. Danielson herself are as obvious as a Cardinals fan in the Wrigley Field bleachers. There have already been some thorough critiques of the Danielson Group, its figurehead, the Framework, and how it’s being used destructively rather than constructively. For example, Alan Singer’s article at the Huffington Post details some of the most glaring problems. I encourage you to read the article, but here are some of the highlights:

[N]obody … [has] demonstrated any positive correlation between teacher assessments based on the Danielson rubrics, good teaching, and the implementation of new higher academic standards for students under Common Core. A case demonstrating the relationship could have been made, if it actually exists.

[I]n a pretty comprehensive search on the Internet, I have had difficulty discovering who Charlotte Danielson really is and what her qualifications are for developing a teacher evaluation system … I can find no formal academic resume online … I am still not convinced she really exists as more than a front for the Danielson Group that is selling its teacher evaluation product. [An article archived at the Danielson Group site describes the “crooked road” of her career, and I have little doubt that she’d be an interesting person with whom to have lunch — but in terms of practical classroom experience as a teacher, her CV, like most educational reformers’, is scant on information.]

The group’s services come at a cost, which is not a surprise, although you have to apply for their services to get an actual price quote. [Prices appear to range from $599 per person to attend a three-day workshop to $1,809 per person to participate in a companion four-week online class. For a Danielson Group consultant, the fee appears to be $4,000 per consultant/per day when three or more days are scheduled, and $4,500 per consultant/per day for one- to two-day consultations (plus travel, food and lodging costs). There are fees for keynote addresses, and several books are available for purchase.]

As I’ve stated, you should read Mr. Singer’s article in its entirety, and look into the Danielson Group and Charlotte Danielson yourself. The snake-oil core of their lucrative operation quickly becomes apparent. One of the chief purposes of the Danielson Framework, which allegedly works in conjunction with Common Core State Standards, is to turn students into critical readers who are able to dissect text, comprehending both its explicit and implicit meanings. What follows is my own dissection of the “Framework for Teaching Evaluation Instrument” (2013 edition). For now, I’m limiting my analysis to the not quite four-page Introduction, which, sadly, is the least problematic part of the Framework.  The difficulties only increase as one reads farther and farther into the four Domains.  (My citations refer to the PDF that is available at DanielsonGroup.org.)

First of all, the wrongheadedness of teacher evaluation

Before beginning my dissection in earnest, I should say that, rubrics aside, the basic idea of teacher evaluation is ludicrous: sporadic observations, very often by superiors who aren’t themselves qualified to teach your subject, result in nothing especially accurate or useful. As I’ve blogged before, other professionals — physicians, attorneys, business professionals, and so on — would never allow themselves to be assessed as teachers are. For one thing, and this is a good lead-in to my analysis, there are as many styles of teaching as there are of learning. There is no “best way” to teach, just as there is no “best way” to learn. Teachers have individual styles, just as tennis players do, and effective ones know how to adjust their style depending on their students’ needs.

But let us not sell learners short: adjusting to a teacher’s method of delivery is a human attribute — the one that allowed us to do things like wander away from the savanna, learn to catch and eat meat, and survive the advance of glaciers — and it is well worth fine-tuning before graduating from high school. I didn’t attend any college classes or hold any jobs where the professor or the employer adjusted to fit me, at least not in any significant ways. Being successful in life (no matter how one chooses to define success) depends almost always on one’s ability to adjust to changing circumstances.

In essence, forcing teachers to adopt a very particular method of teaching tends to inhibit their natural pedagogical talents, and it’s also biased toward students who do, in fact, like the Danielsonesque approach, which places much of the responsibility for learning in students’ laps. Worse than that, however, a homogeneous approach — of any sort — gives students a very skewed sense of the world in which they’re expected to excel beyond graduation.

In fairness, “The Framework for Teaching Evaluation Instrument” begins with a quiet little disclaimer, saying in the second sentence, “While the Framework is not the only possible description of practice, these responsibilities seek to define what teachers should know and be able to do in the exercise of their profession” (3). That is, there are other ways to skin the pedagogical cat. It’s also worth noting that the Danielson Group is seek[ing] to define — it doesn’t claim to have found The Way, at least not explicitly. Nevertheless, that is how untold numbers of legislators, reformers, consultants and administrators have chosen to interpret the Framework. As the Introduction goes on to say, “The Framework quickly found wide acceptance by teachers, administrators, policymakers, and academics as a comprehensive description of good teaching …” (3).

Teachers, well, maybe … though I know very, very few who didn’t recognize it as baloney from the start. Administrators, well, maybe a few more of these, but I didn’t hear of any who were loudly singing its praises once it appeared on the Prairie’s horizon. Academics … that’s pretty hard to imagine, too. I’ve been teaching high-school English for 31 years, and I’ve been an adjunct at both private and public universities for 18 years — and I can’t think of very many college folk who would embrace the Danielson Framework’s tactics. Policymakers (and the privateer consultants and the techno-industrialists who follow remora-like in their wake) … yes, the Framework fits snugly into their worldview.

Thus, the Group doesn’t claim the Framework is comprehensive, but they seem to be all right with others’ deluding themselves into believing it is.

The Framework in the beginning

The Introduction begins by explaining each incarnation of the Framework, starting with its 1996 inception as “an observation-based evaluation of first-year teachers used for the purpose of licensing” (3). The original 1996 edition, based on research compiled by Educational Testing Service (ETS), coined the performance-level labels of “unsatisfactory,” “basic,” “proficient,” and “distinguished” — labels which have clung tenaciously to the Framework through successive editions and adoptions by numerous state legislatures. In Illinois, the Danielson Group’s Framework for Teaching is the default evaluation instrument if school districts don’t modify it. Mine has … a little. The state mandates a four-part labeling structure, and evaluators have been trained (brainwashed?) to believe that “distinguished” teachers are as rare as four-leaf clovers … that have been hand-plucked and delivered to your doorstep by leprechauns.

In my school, it is virtually (if not literally) impossible to receive a “distinguished” rating, which leads to comments from evaluators like “I think you’re one of the best teachers in the state, but according to the rubric I can only give you a ‘proficient.'” It is the equivalent of teachers telling their students that they’re using the standard A-B-C-D scale, that they want them to do A-quality work and to strive for an A in the course, but that, alas, virtually none of them are going to be found worthy and will have to settle for a B (“proficient”): Better luck next time, kids. Given the original purpose of the Framework — to evaluate first-year teachers — it made perfect sense to cast the top level of “distinguished” as all but unattainable, but it makes no sense to place that level beyond reach for high-performing, experienced educators. Quite honestly, it’s demeaning and demoralizing, and it erodes respect for the legitimacy of both the evaluator and the evaluation process.

Then came (some) differentiation

The 2007 edition of the Framework, according to the Introduction, was improved by providing modified evaluation instruments for “non-classroom specialist positions, such as school librarians, nurses, and counselors,” that is, people who “have very different responsibilities from those of classroom teachers”; and, as such, “they need their own frameworks, tailored to the details of their work” (3). There is no question that this differentiation is important. The problem, however, is that it implies “classroom teacher” is a monolithic position, and nothing could be further from the truth. Thus, having one instrument that is to be used across grade levels and ability levels, not to mention across vocational, academic, and fine arts courses, is, simply, wrongheaded.

As any experienced teacher will tell you, each class (each gathering of students) has a personality of its own. On paper, you may have three sections of a given course, all with the same sort of students as far as age and ability; yet, in reality, each group is unique, and the lesson that works wonderfully for your 8 a.m. group may be doomed to fail with your 11 a.m. class, right before lunch, or your 1 p.m. after-lunch bunch — and on and on and on. So the Danielson-style approach, which is heavily student directed, may be quite workable for your early group, whereas something more teacher directed may be necessary at 11:00.

Therefore, according to the Danielson Group, I may be “distinguished” in the morning, but merely “proficient” by the middle of the day (and let us not speak of the last period). The evaluator can easily become like the blind man feeling the elephant: depending on which piece he experiences, he can have very different impressions about what sort of thing, what sort of teacher, he has before him. Throw into the mix that evaluators, due to their training, have taken “distinguished” off the table from the start, and we have a very wobbly Framework indeed.

Enter Bill and Melinda Gates

The 2011 edition reflected revisions based on the Group’s 2009 encounter with the Bill and Melinda Gates Foundation and its Measures of Effective Teaching (MET) research project, which attempted “to determine which aspects of a teacher’s practice were most highly correlated with high levels of student progress” (4). Accordingly, the Danielson Group added more “[p]ossible examples for each level of performance for each component.” They make it clear, though, that “they should be regarded for what they are: possible examples. They are not intended to describe all the possible ways in which a certain level of performance might be demonstrated in the classroom.” Indeed, the “examples simply serve to illustrate what practice might look like in a range of settings” (4).

I would applaud this caveat if not for the fact that it’s embedded within an instrument whose overarching purpose is to make evaluation of a teacher appear easy. Regarding the 2011 revisions, the Group writes, “Practitioners found that the enhancements not only made it easier to determine the level of performance reflected in a classroom … but also contributed to judgments that are more accurate and more worthy of confidence” (4-5). Moreover, the Group says that changes in the rubric’s language helped to simplify the process:  “While providing less detail, the component-level rubrics capture all the essential information from those at the element level and are far easier to use in evaluation than are those at the element level” (4).

I suspect it’s this ease-of-use selling point that has made the Framework so popular among policymakers, who are clueless as to the complexities of teaching and who want a nice, tidy way to assess teachers (especially one designed to find fault with educators and rate them as average to slightly above average). But it is disingenuous, on the part of Charlotte Danielson and the Group, to maintain that a highly complex and difficult activity can be easily evaluated and quantified. In a 2012 interview, Ms. Danielson said that her assessment techniques are “not like rocket science,” whereas “[t]eaching is rocket science. Teaching is really hard work. But doing that [describing what teaching “looks like in words”] isn’t that big a deal. Honestly, it’s not. But nobody had done it.”

It’s downright naive — or patently deceptive — to say that a highly complex process (and highly complex is a gross understatement) can be easily and simply evaluated. Well, it can be done, but not with any accuracy or legitimacy.

Classic fallacy of begging the question

I want to touch on one other inherent flaw (or facet of deception) in the Danielson Framework, and that is its bias toward “active, rather than passive, learning by students” (5). Speaking of the Framework’s alignment with the Common Core, the Group writes, “In all areas, they [CCSS] place a premium on deep conceptual understanding, thinking and reasoning, and the skill of argumentation (students taking a position and supporting it with logic and evidence).” I concur that these are worthy goals — ones I’ve had as an educator for more than three decades — but I don’t concur that they can be observed by someone popping into your classroom every so often, perhaps skimming through some bits of documentary evidence (so-called artifacts), and I certainly don’t concur that any of it can be done easily.

The Group’s reference to active learning, if one goes by the Domains themselves, seems to be the equivalent of students simply being active in an observable way (via small-group work, for example, or leading a class discussion), but learning happens in the brain and signs of it are rarely visible. Not to get too far afield here, but the Framework is intersecting at this point with introverted versus extroverted learning behaviors. Evaluators, perhaps reflecting a cultural bias, prefer extroverted learners because they can see them doing things, whereas introverted learners may very well be engaged in far deeper thinking, far deeper comprehension and analysis — which is, in fact, facilitated by their physical inactivity.

And speaking of “evidence,” the Introduction refers to “empirical research and theoretical research” (3), to “analyses” and “stud[ies]” (4), and to “educational research” that “was fully described” in the appendix of the 2007 edition (3). But beyond these vague allusions (to data that must be getting close to a decade old), there are no citations whatsoever. In other words, the Danielson Group is making all sorts of fantastic claims void of any evidence, which I find the very definition of “unsatisfactory.” This tactic, of saying practices and policies are based on research (“Research shows …”), is common in education; yet citations, even vague ones, rarely follow — and when they do, the sources and/or methodologies are dubious, to put it politely.

I plan to look at the Danielson Framework Domains in subsequent posts, and I’m also planning a book about what’s really wrong in education, from a classroom teacher’s perspective.

tedmorrissey.com

’Tis the season–to traumatize young teachers

Posted in March 2014, Uncategorized by Ted Morrissey on March 7, 2014

Illinois has many seasons–bow season, shotgun season … and every March is “traumatizing young teachers” season, as school administrators across the state dismiss nontenured teachers. They’re not even required to give a reason for the dismissal, and oftentimes they don’t. Teachers are left devastated, humiliated, and profoundly confused about whether they’ve chosen the right professional path after all.

A few years ago the Illinois legislature, in one of the opening salvos in its campaign to destroy and demoralize educators, extended to four years the length of time that teachers can be let go without cause. That means young professionals (or older ones entering the profession later in life) can be dedicated, hard-working teachers who are establishing themselves in their communities and developing collegial relationships for one, two, three, even four years before they’re blindsided by the administrator’s news that they won’t be coming back the following year.

Sometimes, of course, there have been issues raised, and the teacher has not corrected them to the administrator’s satisfaction; and sometimes the school district’s desperate financial situation has led to the dismissal. Too often, though, the young and developing educators are sacked without any warning whatsoever–they’ve fallen prey to the caprices of an administration that has no one to answer to, except perhaps school board members, who tend to know only what administrators tell them since they rarely have direct contact with the teaching staff.

The situation has been exacerbated in the past year by the state’s mandate of a new model for evaluating teachers. It is more complicated and more labor-intensive than the tools most districts had been using. The increased complications and time commitments have not led to a better approach to evaluation, however. They’ve only opened the door for even more nebulous assessments of a teacher’s performance. Teacher evaluation is a rich subject in itself, too rich a subject to discuss here–but the bottom line is that teaching is far too complex an endeavor to be reasonably evaluated by a single rubric that is used across grade levels, disciplines, and teaching assignments. In fact, it’s insulting to the profession that so many people believe such a model can be devised and successfully implemented. Physicians, attorneys, engineers, business professionals–and politicians!–would never allow themselves to be evaluated the way that a teacher’s worth is determined.

But no matter how simple or how complex the evaluation process is, its usefulness and fairness depend on the sagacity and integrity of the evaluating body. Unfortunately, sagacity and integrity are not prerequisites for becoming an administrator or a school board member. There are good administrators out there, of course, and well-meaning board members; but administrators and board members come in all stripes, just like the human population as a whole. Yet there are no checks and balances built into the process. Young teachers who are dismissed unfairly, and the professional associations that represent their interests, have no recourse. No recourse at all.

In other words, there is no evaluation of the evaluator, whose sagacity and integrity, apparently, are assumed by the Illinois General Assembly … in all of its sagacity and integrity.

When there is an unfair and unwarranted dismissal, a shockwave goes through the faculty and the student body, one almost as palpable as the shock of an accidental death. Other nontenured teachers become like deer in hunting season and worry that they’ll be next–if not this spring, maybe the next, or the next, or the next, or the next. Tenured teachers are angered, saddened, and frustrated by the loss of a valuable colleague and trusted friend. It greatly diminishes their respect for their superiors and their goodwill in working with them. It disrupts students’ focus, and it teaches them a hard lesson about the perils of choosing a career in education. And once a district becomes known as one that mistreats young professionals, word spreads virulently and the best and brightest don’t bother to apply.

Who, in their right mind, would want to work for an administration and board that will dismiss them without reason after a year, or two, or three, or four of hard work and dedication? Who, in their right mind, would chance subjecting their spouse and possibly children to the trauma of a lost job beyond their control?

Young teachers have mainly debt (nowadays from colossal student loans) and very little savings. It’s frightening to be jobless, especially when it’s due to no fault of their own–at least, no fault they’ve been made aware of. Yet teachers must continue to teach for the remainder of the school year, while also looking for new employment. They are often–magnanimously–given the option of resigning instead of being dismissed, but it’s a thin disguise that likely fools no one in their search for another teaching job. They find themselves in a very difficult situation when interviewing elsewhere because the question inevitably comes up: “Why did you leave such-and-such school?” What, then, do they say that won’t compromise either their honesty or their chances of landing another job?

The fact that we as a society allow this devastating unfairness to be visited on our young teachers every spring is just another indication of how little we value education, educators and–for that matter–the children they’ve dedicated themselves to educating.

tedmorrissey.com