The myth of ‘best practices’ in education
Last Wednesday I began my thirty-fourth year as a schoolteacher. To be sure, teaching has changed in those years, and kids have, too — although neither as much as one might think. One thing, however, has been amazingly consistent: the number of people who, year upon year, insist that my peers and I adopt a method which they bill as a “best practice” — some technique that they know will improve my teaching because, well, how could it not? It’s a best practice.
Not once — in all those innumerable workshops, inservices and presentations — has a purveyor of a best practice offered a shred of evidence that what they’re promoting will actually lead to better (let alone, the best) teaching. It’s always offered under the implied guise of common sense. It’s the epitome of the logical fallacy of begging the question: Dear Teacher, accept the fact that what you’ve been doing (whatever it may be) hasn’t been as effective as what I’m about to tell you to do. Trust me — I’m a presenter.
And teaching is, allegedly, an evidence-based profession. Schools claim that what they’re doing is “evidence-based,” but oftentimes, if there is something like evidence out there, it’s contrary to what’s being prescribed. On the one hand, I don’t really blame folks for not presenting the evidence to support their claims of the effectiveness of the practice they’re advocating, because (as I’ve written about before) testing in education is fraught with problems. It’s extremely difficult, if not impossible, to generate data which can be reliably analyzed. In any given testing situation, there are simply too many variables to control, and many of them are literally beyond the control of educators. Students are not rats confined to the tiny world of a lab where researchers can effect whatever conditions they’re studying. Imagine scientists sending their rats home each night and asking them to return the next morning for continued research; and imagine that periodically the group of rats they’ve been studying is replaced by a whole new group of rats whose histories are a total mystery. (Apologies for comparing students to rats — for what it’s worth, I like rats … and students.)
All right, so I don’t blame purveyors of best practices for not presenting their (nonexistent) evidence; however, I do blame them for suggesting, implicitly, that evidence does exist. It must, right? Otherwise how could they say some technique, some approach is “best” (or at least “better”)?
The reality is, best practices are a myth. Forget good, better, best; let’s turn, instead, to effective versus ineffective (and even that paradigm is nebulous). Effectiveness must be considered on a case-by-case basis. That is, we want all students to benefit as a result of our efforts, but what works for Bobby versus what works for Suzie on any given day, at any given moment, for any given skill or knowledge acquisition, may constitute completely opposite approaches; and tomorrow the reverse may be true. And quite honestly, whether an approach is effective or ineffective may be unknowable, in the moment and even in the long term. The learning takes place in the student’s mind, and the mind is a murky, complicated place. Hopefully the skill or knowledge is identifiable and assessable (via a quiz or test or paper or project), but it may not be, especially in the humanities, which are more concerned with creative and critical applications than are the sciences or the vocational areas, where right-or-wrong, black-or-white distinctions are the rule rather than the exception.
Generally the purveyor of a best practice is able to communicate the technique in a few bullet points on a handout or a PowerPoint, but the differences — the vast differences — between grade levels, subject matters, demographics of students, backgrounds and knowledge-levels of teachers, etc., etc., etc. make such simplistic declarations ridiculous. Imagine going to an agricultural convention and telling an assembled group of farmers that you have for them a best practice, and here it is in six bullet points. You’re welcome. No matter what they’re growing, where they’re growing it, what sorts of equipment they have at their disposal, what the climate models are suggesting, how the markets are trending — This is it, brother: Just follow these six steps and your yields will be out of this world. Trust me — I’m a presenter.
The farmers would be nonplussed, to put it mildly. Plug in professionals from any other arena — business owners, attorneys, medical doctors, engineers — and the ridiculousness of it (that a single set of practices will improve what they’re doing, regardless of individual situations) becomes clear. It’s so clear, in fact, that I can’t imagine any presenter doing it — telling a room full of surgeons, for instance, to do this one simple procedure all the time, no matter the patient’s history, no matter their lab work, no matter how they’re responding on the table — and yet it happens to educators all the time.
Almost without fail, techniques that are presented as best practices are observable. It’s about what you say to students or what they say to you; what you write on the chalkboard; what you write in lesson plans or curricular outlines. It simplifies the process of evaluating teachers’ performances if the evaluator can look for a few concrete actions from every teacher: from the kindergarten teacher to the calculus teacher, from the welding teacher to the reading teacher, from the teacher of gifted students to the teacher of exceptional students. It makes assessment so much simpler if everyone is singing from the same hymnal.
I deliberately used the word performances in the previous paragraph because so often that’s what evaluation boils down to: a performance for the audience-of-one, the evaluator. We often hear the term “high-stakes testing” in the media (that is, standardized tests whose results have significant consequences for test-takers and their schools), but we have also entered into a time of “high-stakes evaluating” for teachers, performance assessments that directly impact their job security. Teachers quickly learn that if their evaluator claims x, y and z are best practices, they’d better demonstrate x, y and z when they’re being observed — but quite possibly only when they’re being observed, because in truth they don’t believe in the validity or the practicality of x, y and z as a rule.
In such cases, teachers are not trying to be insubordinate, or mocking, or rebellious; they’re trying to teach their charges in the most effective ways they know how (based on the training of their individual disciplines and their years of experience in the classroom), and they disagree with the practices which are being thrust upon them. Teachers do not take an oath equivalent to doctors’ Hippocratic oath, but conscientious teachers have, in essence, taken a personal and professional vow to do no harm to their students; thus they find themselves in a conundrum when their judgments about what’s effective and what isn’t are in conflict with the best practices by which they’re being evaluated. For teachers who care about how well they’re teaching — and that’s just about every teacher I’ve had the privilege to know in the last thirty-four years — it’s a source of stress and anxiety and even depression. More and more teachers every year find that the only way to alleviate that stress in their lives is to leave the profession.
Again, much of the problem is derived from the need for observable behaviors. I like to think my interactions with students in the classroom are positive and effective, but, as a teacher of literature and especially as a teacher of writing, I know my most important and most valuable work is all but invisible. My greatest strengths, I believe, are in developing questions and writing prompts that navigate students’ interactions with a text, and (even more so) in responding to the students’ work. When a student hands in an essay based on a prompt I’ve given them about a text, it is essentially a diagram of how their mind worked as they read and analyzed the text (a novel, or story, or poem, or film) — a kind of CAT scan, if you will. My task is to interpret the workings of their mind (in what ways did it work well, and in what ways did it veer off the path), and then, once I’ve interpreted their mind-at-work, to provide comments which explain my interpretations and (here’s the really, really hard part) comments which will alter their mental processes so that next time they’ll write a more effective essay. In short, I’m trying to get them to think better and to express their thoughts better. (I should point out that to do all of this, I also have to possess a thorough understanding of the text under consideration — a text perhaps by Homer or Shakespeare or Keats or Joyce or Morrison.)
It’s the most important thing I do, and no one observing me in the classroom will ever see it. If my students improve in their reading and thinking and writing and speaking, it will largely be because of my skill in interacting with them productively, brain to brain, on the page. The process is both invisible and essential. This is what teaching English is; this is what English teachers do. And we are not unique, by any means, in the profession. Yet our value — our very job security — is based on behaviors that are secondary or even tangential to the most profound sorts of interactions we have with our students.
I know that purveyors of best practices mean well (for-profit educational consultants aside). They are good, smart people who sincerely believe in what they’re advocating, and frequently a kernel or two of meaningful advice can be derived from the presentation, but we need to stop pretending that there’s one method that will improve all teaching, regardless of the myriad factors which come into play every time a teacher engages a group of students. It makes teaching seem simple, and teaching is many, many, many things, but simple isn’t one of them.
Here’s my beef with PARCC and the Common Core
Beginning this school year students in Illinois will be taking the new assessment known as PARCC (Partnership for Assessment of Readiness for College and Careers), which is also an accountability measure — meaning that it will be used to identify the schools (and therefore teachers) who are doing well and the ones who are not, based on their students’ scores. In this post I will be drawing from a document released this month by the Illinois State Board of Education, “The top 10 things teachers need to know about the new Illinois assessments.” PARCC is intended to align with the Common Core, which around here has been rebranded as the New Illinois Learning Standards Incorporating the Common Core (clearly a Madison Avenue PR firm wasn’t involved in selecting that name — though I’m surprised funds weren’t allocated for it).
This could be a very long post, but I’ll limit myself to my main issues with PARCC and the Common Core. The introduction to “The top 10 things” document raises some of the most fundamental problems with the revised approach. It begins, “Illinois has implemented new, higher standards for student learning in all schools across the state.” Let’s stop right there. I’m dubious that rewording the standards makes them “higher,” and from an English/language arts teacher perspective, the Common Core standards aren’t asking us to do anything different from what we’ve been doing since I started teaching in 1984. There’s an implied indictment in the opening sentence, suggesting that until now, the Common Core era, teachers haven’t been holding students to particularly high standards. I mean, logically, if there was space into which the standards could be raised, then they had to be lower before Common Core. It’s yet another iteration of the war-cry: Teachers, lazy dogs that they are, have been sandbagging all these years, and now they’re going to have to up their game — finally!
Then there’s the phrase “in all schools across the state,” that is, from the wealthiest Chicago suburb to the poorest downstate school district, and this idea gets at one of the biggest problems — if not the biggest — in education: grossly inequitable funding. We know that kids from well-to-do homes attending well-to-do schools do significantly better in school — and on assessments! — than kids who are battling poverty and all of its ill effects. Teachers’ associations (a.k.a. unions) have been among the many groups advocating to equalize school funding via changes to the tax code and other laws, but money buys power, and powerful interests block funding reform again and again. So until the money being spent on every student’s education is the same, no assessment can hope to provide data that isn’t more about economic circumstances than student ability.
As if this disparity in funding weren’t problematic enough, school districts have been suffering cutbacks in state funding year after year, resulting in growing deficits, teacher layoffs (or non-replacement of retirees), and other direct hits to instruction.
According to the “The top 10 things” document, “[a] large number of Illinois educators have been involved in the development of the assessment.” I have no idea how large a “large number” is, but I know there’s a big difference between involvement and influence. From my experience over the last 31 years, it’s quite common for people to present proposals to school boards and the public clothed in the mantle of “teacher input,” but they fail to mention that the input was diametrically opposed to the proposal.
The very fact that the document says in talking point #1 that a large number of educators (who, by the way, are not necessarily the same as teachers) were involved in PARCC’s development tells us that PARCC was not developed by educators, and particularly not by classroom teachers. In other words, this reform movement was neither initiated nor orchestrated by educators. Some undefined number of undefined “educators” were brought on board, but there’s no guarantee that they had any substantive input into the assessment’s final form, or even endorsed it. I would hope that the teachers who were involved were vocal about the pointlessness of a revised assessment when the core problems (pun intended), like inadequate funding, are not being addressed. At all.
“The top 10 things” introduction ends with “Because teachers are at the center of these changes and directly contribute to student success, the Illinois State Board of Education has compiled a list of the ten most important things for teachers to know about the new tests.” In a better world, the sentence would be Because teachers are at the center of these changes and directly contribute to student success … the Illinois State Board of Education has tasked teachers with determining the best way to assess student performance. Instead, teachers are being given a two-page handout, heavy on snazzy graphics, two to three weeks before the start of the school year. In my district, we’ve had several inservices over the past two years regarding Common Core and PARCC, but our presenters had practically no concrete information to share with us because everything was in such a state of flux; as a consequence, we left meeting after meeting no better informed than we were after the previous one. Often the new possible developments revised or even replaced the old possible developments.
The second paragraph of the introduction claims that PARCC will “provide educators with reliable data that will help guide instruction … [more so] than the current tests required by the state.” I’ve already spoken to that so-called reliable data above, but a larger issue is that this statement assumes teachers are able to analyze all that data provided by previous tests in an attempt to guide instruction. It happens, and perhaps it happens in younger grades more so than in junior high and high school, but by and large teachers are so overwhelmed with the day-to-day — minute-to-minute! — demands of the job that there’s hardly time to pore through stacks of data and develop strategies based on what they appear to be saying about each student. Teachers generally have one prep or planning period per day, less than an hour in length. The rest of the time they’re up to their dry-erase boards in kids (25 to 30 or more per class is common). In that meager prep time and whatever time they can manage beyond that, they’re writing lesson plans; grading papers; developing worksheets, activities, tests, etc.; photocopying worksheets, activities, tests, etc.; contacting or responding to parents or administrators; filling out paperwork for students with IEPs or 504s; accommodating students’ individual needs, those with documented needs and those with undocumented ones; entering grades and updating their school websites; supervising hallways, cafeterias and parking lots; coaching, advising, sponsoring, chaperoning. . . .
Don’t get me wrong. I’m a scholar as well as a teacher. I believe in analyzing data. I’d love to have a better handle on what my students’ specific abilities are and how I might best deliver instruction to meet their needs. But the reality is that that isn’t a reasonable expectation given the traditional educational model — and it’s only getting worse in terms of time demands on teachers, with larger class sizes, ever-changing technology, and — now — allegedly higher standards.
Educational reformers are so light on classroom experience they haven’t a clue how demanding a teacher’s job is at its most fundamental level. In this regard I think education suffers from the fact that so many of its practitioners are so masterful at their job that their students and parents and board members and even administrators get the impression that it must be easy. Anyone who is excellent at what she or he does makes it look easy to the uninitiated observer.
I touched on ever-changing technology a moment ago; let me return to it. PARCC is intended to be an online assessment, but, as the document points out, having it online in all schools is unrealistic, and that “goal will take a few more years, as schools continue to update their equipment and infrastructure.” The goal of its being online is highly questionable in the first place. The more complicated one makes the assessment tool, the less cognitive processing space the student has to devote to the given question or task. Remember when you started driving a car? Just keeping the darn thing on the road was more than enough to think about. In those first few hours it was difficult to imagine that driving would become so effortless that one day you’d be able to drive, eat a cheeseburger, sing along with your favorite song, and argue with your cousin in the backseat, all simultaneously. At first, the demands of driving the car dominated your cognitive processing space. When students have to use an unfamiliar online environment to demonstrate their abilities to read, write, calculate and so on, how much will the online environment itself compromise the cognitive space they can devote to the reading, writing and calculating processes?
What is more, PARCC implies that schools, which are already financially strapped and overspending on technology (technology that has never been shown to improve student learning and may very well impede it), must channel dwindling resources — whether local, state or federal — to “update their equipment and infrastructure.” These are resources that could, if allowed, be used to lower class sizes, re-staff libraries and learning centers, and offer more diverse educational experiences to students via the fine arts and other non-core components of the curriculum. While PARCC may not require, per se, schools to spend money they don’t have on technology, it certainly encourages it.
What is even more, the online nature of PARCC introduces all kinds of variables into the testing situation that are greatly minimized by the paper-and-pencil tests it is supplanting. Students will need to take the test in computer labs, classrooms and other environments that may or may not be isolated and insulated from other parts of the school, or even in off-site settings. Granted, the sites of traditional testing have varied somewhat — you can’t make every setting precisely equal to every other setting — but it’s much, much easier to come much, much closer with paper and pencil than when trying to do the test online. Desktop versus laptop computers (in myriad models), proximity to Wi-Fi, speed of connection (which may vary minute to minute), how much physical space can be inserted between test-takers — all of these are issues specific to online assessments, and they all will affect the results of the assessment.
So my beef comes down to this about PARCC and the Common Core: Hundreds of millions of dollars have been spent rewording standards and developing a new assessment that won’t actually help improve education. Here’s what would help teachers teach kids:
1. Equalize funding and increase it.
2. Lower class sizes, kindergarten through 12th grade, significantly — maximum fifteen per class, except for subjects that benefit from larger classes, like music courses.
3. Treat teachers better. Stop gunning for their jobs. Stop dismantling their unions. Stop driving them from the profession with onerous evaluation tools, low pay and benefits, underfunded pensions, more students than they can possibly teach well, and ridiculous mandates that make it harder to educate kids. Just stop it.
But these common sense suggestions will never fly because no one will make any money off of them, let alone get filthy rich, and education reform is big business — the test developers, textbook companies, technology companies, and high-priced consultants will make sure the gravy train of “reform” never gets derailed. In fact, the more they can make it look like kids are underachieving and teachers are underperforming, the more secure and more lucrative their scam is.
Thus PARCC and Common Core … let the good times roll.
Not speaking about Danielson Framework per se, but
Sir Ken Robinson has several TED Talks regarding education, and his “How to Escape Education’s Death Valley” is an especially appropriate follow-up to my last post about the Danielson Group’s Framework for Teaching Evaluation Instrument. Robinson, who is very funny and engaging, doesn’t reference Charlotte Danielson and her group per se, but he may as well. The Danielson Group’s Framework, which has been adopted as a teacher evaluation instrument in numerous states, including Illinois, is emblematic — in fact, the veritable flagship — of everything that’s wrong with education in America, according to Robinson.
Treat yourself to twenty minutes of Robinson’s wit and wisdom:
Fatal flaws of the Danielson Framework
The Danielson Group’s “Framework for Teaching Evaluation Instrument” has been sweeping the nation, including my home state of Illinois, in spite of the fact that the problems with the Group, the Framework, the Instrument, and even Ms. Danielson herself are as obvious as a Cardinals fan in the Wrigley Field bleachers. There have already been some thorough critiques of the Danielson Group, its figurehead, the Framework, and how it’s being used destructively rather than constructively. For example, Alan Singer’s article at the Huffington Post details some of the most glaring problems. I encourage you to read the article, but here are some of the highlights:
[N]obody … [has] demonstrated any positive correlation between teacher assessments based on the Danielson rubrics, good teaching, and the implementation of new higher academic standards for students under Common Core. A case demonstrating the relationship could have been made, if it actually exists.
[I]n a pretty comprehensive search on the Internet, I have had difficulty discovering who Charlotte Danielson really is and what her qualifications are for developing a teacher evaluation system … I can find no formal academic resume online … I am still not convinced she really exists as more than a front for the Danielson Group that is selling its teacher evaluation product. [An article archived at the Danielson Group site describes the “crooked road” of her career, and I have little doubt that she’d be an interesting person with whom to have lunch — but in terms of practical classroom experience as a teacher, her CV, like most educational reformers’, offers scant information.]
The group’s services come at a cost, which is not a surprise, although you have to apply for their services to get an actual price quote. [Prices appear to range from $599 per person to attend a three-day workshop to $1,809 per person to participate in a companion four-week online class. For a Danielson Group consultant, the fee appears to be $4,000 per consultant per day when three or more days are scheduled, and $4,500 per consultant per day for one- or two-day consultations (plus travel, food and lodging costs). There are also fees for keynote addresses, and several books are available for purchase.]
As I’ve stated, you should read Mr. Singer’s article in its entirety, and look into the Danielson Group and Charlotte Danielson yourself. The snake-oil core of their lucrative operation quickly becomes apparent. One of the chief purposes of the Danielson Framework, which allegedly works in conjunction with Common Core State Standards, is to turn students into critical readers who are able to dissect text, comprehending both its explicit and implicit meanings. What follows is my own dissection of the “Framework for Teaching Evaluation Instrument” (2013 edition). For now, I’m limiting my analysis to the not quite four-page Introduction, which, sadly, is the least problematic part of the Framework. The difficulties only increase as one reads farther and farther into the four Domains. (My citations refer to the PDF that is available at DanielsonGroup.org.)
First of all, the wrongheadedness of teacher evaluation
Before beginning my dissection in earnest, I should say that, rubrics aside, the basic idea of teacher evaluation is ludicrous: sporadic observations, very often by superiors who aren’t themselves qualified to teach your subject, result in nothing especially accurate or useful. As I’ve blogged before, other professionals — physicians, attorneys, business professionals, and so on — would never allow themselves to be assessed as teachers are. For one thing, and this is a good lead-in to my analysis, there are as many styles of teaching as there are of learning. There is no “best way” to teach, just as there is no “best way” to learn. Teachers have individual styles, just as tennis players do, and effective ones know how to adjust their style depending on their students’ needs.
But let us not sell learners short: adjusting to a teacher’s method of delivery is a human attribute — the one that allowed us to do things like wander away from the savanna, learn to catch and eat meat, and survive the advance of glaciers — and it is well worth fine-tuning before graduating from high school. I never attended a college class or held a job where the professor or the employer adjusted to fit me, at least not in any significant way. Being successful in life (no matter how one chooses to define success) depends almost always on one’s ability to adjust to changing circumstances.
In essence, forcing teachers to adopt a very particular method of teaching tends to inhibit their natural pedagogical talents, and it’s also biased toward students who do, in fact, like the Danielsonesque approach, which places much of the responsibility for learning in students’ laps. Worse than that, however, a homogeneous approach — of any sort — gives students a very skewed sense of the world in which they’re expected to excel beyond graduation.
In fairness, “The Framework for Teaching Evaluation Instrument” begins with a quiet little disclaimer, saying in the second sentence, “While the Framework is not the only possible description of practice, these responsibilities seek to define what teachers should know and be able to do in the exercise of their profession” (3). That is, there are other ways to skin the pedagogical cat. It’s also worth noting that the Danielson Group is seek[ing] to define — it doesn’t claim to have found The Way, at least not explicitly. Nevertheless, that is how untold numbers of legislators, reformers, consultants and administrators have chosen to interpret the Framework. As the Introduction goes on to say, “The Framework quickly found wide acceptance by teachers, administrators, policymakers, and academics as a comprehensive description of good teaching …” (3).
Teachers, well, maybe … though I know very, very few who didn’t recognize it as baloney from the start. Administrators, well, maybe a few more of these, but I didn’t hear any who were loudly singing its praises once it appeared on the Prairie’s horizon. Academics … that’s pretty hard to imagine, too. I’ve been teaching high-school English for 31 years, and I’ve been an adjunct at both private and public universities for 18 years — and I can’t think of very many college folk who would embrace the Danielson Framework tactics. Policymakers (and the privateer consultants and the techno-industrialists who follow remora-like in their wake) … yes, the Framework fits snugly into their worldview.
Thus, the Group doesn’t claim the Framework is comprehensive, but they seem to be all right with others’ deluding themselves into believing it is.
The Framework in the beginning
The Introduction begins by explaining each incarnation of the Framework, starting with its 1996 inception as “an observation-based evaluation of first-year teachers used for the purpose of licensing” (3). The original 1996 edition, based on research compiled by Educational Testing Service (ETS), coined the performance-level labels of “unsatisfactory,” “basic,” “proficient,” and “distinguished” — labels which have clung tenaciously to the Framework through successive editions and adoptions by numerous state legislatures. In Illinois, the Danielson Group’s Framework for Teaching is the default evaluation instrument if school districts don’t modify it. Mine has … a little. The state mandates a four-part labeling structure, and evaluators have been trained (brainwashed?) to believe that “distinguished” teachers are as rare as four-leaf clovers … that have been hand-plucked and delivered to your doorstep by leprechauns.
In my school, it is virtually (if not literally) impossible to receive a “distinguished” rating, which leads to comments from evaluators like “I think you’re one of the best teachers in the state, but according to the rubric I can only give you a ‘proficient.’” It is the equivalent of teachers telling their students that they’re using the standard A-B-C-D scale, and that they want them to do A-quality work and to strive for an A in the course, but, alas, virtually none of them are going to be found worthy and will have to settle for the B (“proficient”): Better luck next time, kids. Given the original purpose of the Framework — to evaluate first-year teachers — it made perfect sense to cast the top level of “distinguished” as all but unattainable, but it makes no sense to place that level beyond reach for high-performing, experienced educators. Quite honestly, it’s demeaning and demoralizing, and it erodes respect for the legitimacy of both the evaluator and the evaluation process.
Then came (some) differentiation
The 2007 edition of the Framework, according to the Introduction, was improved by providing modified evaluation instruments for “non-classroom specialist positions, such as school librarians, nurses, and counselors,” that is, people who “have very different responsibilities from those of classroom teachers”; and, as such, “they need their own frameworks, tailored to the details of their work” (3). There is no question that this differentiation is important. The problem, however, is that it implies “classroom teacher” is a monolithic position, and nothing could be further from the truth. Thus, having one instrument that is to be used across grade levels and ability levels, not to mention across vocational, academic and fine arts courses, is, simply, wrongheaded.
As any experienced teacher will tell you, each class (each gathering of students) has a personality of its own. On paper, you may have three sections of a given course, all with the same sort of students as far as age and ability; yet, in reality, each group is unique, and the lesson that works wonderfully for your 8 a.m. group may be doomed to fail with your 11 a.m. class, right before lunch, or your 1 p.m. after-lunch bunch — and on and on and on. So the Danielson-style approach, which is heavily student directed, may be quite workable for your early group, whereas something more teacher directed may be necessary at 11:00.
Therefore, according to the Danielson Group, I may be “distinguished” in the morning, but merely “proficient” by the middle of the day (and let us not speak of the last period). The evaluator can easily become like the blind man feeling the elephant: depending on which piece he experiences, he can have very different impressions about what sort of thing, what sort of teacher, he has before him. Throw into the mix that evaluators, due to their training, have taken “distinguished” off the table from the start, and we have a very wobbly Framework indeed.
Enter Bill and Melinda Gates
The 2011 edition reflected revisions based on the Group’s 2009 encounter with the Bill and Melinda Gates Foundation and its Measures of Effective Teaching (MET) research project, which attempted “to determine which aspects of a teacher’s practice were most highly correlated with high levels of student progress” (4). Accordingly, the Danielson Group added more “[p]ossible examples for each level of performance for each component.” They make it clear, though, that “they should be regarded for what they are: possible examples. They are not intended to describe all the possible ways in which a certain level of performance might be demonstrated in the classroom.” Indeed, the “examples simply serve to illustrate what practice might look like in a range of settings” (4).
I would applaud this caveat if not for the fact that it’s embedded within an instrument whose overarching purpose is to make evaluation of a teacher appear easy. Regarding the 2011 revisions, the Group writes, “Practitioners found that the enhancements not only made it easier to determine the level of performance reflected in a classroom … but also contributed to judgments that are more accurate and more worthy of confidence” (4-5). Moreover, the Group says that changes in the rubric’s language helped to simplify the process: “While providing less detail, the component-level rubrics capture all the essential information from those at the element level and are far easier to use in evaluation than are those at the element level” (4).
I suspect it’s this ease-of-use selling point that has made the Framework so popular among policymakers, who are clueless as to the complexities of teaching and who want a nice, tidy way to assess teachers (especially one designed to find fault with educators and rate them as average to slightly above average). But it is disingenuous, on the part of Charlotte Danielson and the Group, to maintain that a highly complex and difficult activity can be easily evaluated and quantified. In a 2012 interview, Ms. Danielson said that her assessment techniques are “not like rocket science,” whereas “[t]eaching is rocket science. Teaching is really hard work. But doing that [describing what teaching “looks like in words”] isn’t that big a deal. Honestly, it’s not. But nobody had done it.”
It’s downright naive — or patently deceptive — to say that a highly complex process (and highly complex is a gross understatement) can be easily and simply evaluated. Well, it can be done, but not with any accuracy or legitimacy.
Classic fallacy of begging the question
I want to touch on one other inherent flaw (or facet of deception) in the Danielson Framework, and that is its bias toward “active, rather than passive, learning by students” (5). Speaking of the Framework’s alignment with the Common Core, the Group writes, “In all areas, they [CCSS] place a premium on deep conceptual understanding, thinking and reasoning, and the skill of argumentation (students taking a position and supporting it with logic and evidence).” On the one hand, I concur that these are worthy goals — ones I’ve had as an educator for more than three decades — but I don’t concur that they can be observed by someone popping into your classroom every so often, perhaps skimming through some bits of documentary evidence (so-called artifacts), and I certainly don’t concur that it can be done easily.
The Group’s reference to active learning, if one goes by the Domains themselves, seems to be the equivalent of students simply being active in an observable way (via small-group work, for example, or leading a class discussion), but learning happens in the brain and signs of it are rarely visible. Not to get too far afield here, but the Framework is intersecting at this point with introverted versus extroverted learning behaviors. Evaluators, perhaps reflecting a cultural bias, prefer extroverted learners because they can see them doing things, whereas introverted learners may very well be engaged in far deeper thinking, far deeper comprehension and analysis — which is, in fact, facilitated by their physical inactivity.
And speaking of “evidence,” the Introduction refers to “empirical research and theoretical research” (3), to “analyses” and “stud[ies]” (4), and to “educational research” that “was fully described” in the appendix of the 2007 edition (3). But beyond this vague allusion (to data which must be getting close to a decade old), there are no citations whatsoever. In other words, the Danielson Group is making all sorts of fantastic claims void of any evidence, which I find the very definition of “unsatisfactory.” This tactic of saying practices and policies are based on research (“Research shows …”) is common in education; yet citations, even vague ones, rarely follow — and when they do, the sources and/or methodologies are dubious, to put it politely.
I plan to look at the Danielson Framework Domains in subsequent posts, and I’m also planning a book about what’s really wrong in education, from a classroom teacher’s perspective.