Danielson Framework criticized by Charlotte Danielson
I’ve been writing about the Danielson Framework for Teacher Evaluation for a couple of years, and in fact my “Fatal Flaws of the Danielson Framework” has been my most-read and most-commented-on post, with over 5,000 hits to date. I’ve also been outspoken about how administrators have been misusing the Framework, resulting in demoralized teachers and unimproved (if not diminished) performance in the classroom. (See in particular “Principals unwitting soldiers in Campbell Brown’s army” and “Lowered teacher evaluations require special training.”) At present, teachers are preparing — at great cost in time and money — to embark on the final leg of the revamped teacher evaluation method with the addition of student performance into the mix (see ISBE’s “Implementing the Student Growth Component in Teacher and Principal Evaluation”). I’ve also written about this wrongheaded development: “The fallacy of testing in education.”
Imagine my surprise when I discovered an unlikely ally in my criticism of Charlotte Danielson’s much lauded approach: Charlotte Danielson herself. The founder of the Danielson Framework published an article in Education Week (April 18 online) that called for the “Rethinking of Teacher Evaluation,” and I found myself agreeing with almost all of it — or, more accurately and more egocentrically, I found Charlotte Danielson agreeing with me, for she is the one who has changed her tune.
My sense is that Ms. Danielson is reacting to widespread dissatisfaction among teachers and principals with the evaluation process that has been put in place based on her Framework. Her article appeared concurrently with a report from the Network for Public Education, based on a survey of nearly 3,000 educators in 48 states, that is highly critical of changes in teacher evaluation and cites those changes as a primary reason that teachers are exiting the profession in droves and that young people are choosing not to go into education in the first place. For example, the report states, “Evaluations based on frameworks and rubrics, such as those created by Danielson and Marzano, have resulted in wasting far too much time. This is damaging the very work that evaluation is supposed to improve . . .” (p. 2).
Ms. Danielson does not, however, place the blame on her Framework, at least not directly. She does state what practically all experienced teachers have known all along when she writes, “I’m deeply troubled by the transformation of teaching from a complex profession requiring nuanced judgment to the performance of certain behaviors that can be ticked off a checklist.” This is a change from her earlier position that good teaching could be easily defined and identified. In a 2012 interview, Ms. Danielson said that her assessment techniques are “not like rocket science,” whereas “[t]eaching is rocket science. Teaching is really hard work. But doing that [describing what teaching “looks like in words”] isn’t that big a deal. Honestly, it’s not. But nobody had done it.”
Instead of her Framework, then, Ms. Danielson places the lion’s share of the blame on state legislators, who oversimplified her techniques in adopting them, and — especially — on administrators who are not capable of using the Framework as it was intended. She writes, “[F]ew jurisdictions require their evaluators to actually demonstrate skill in making accurate judgments. But since evaluators must assign a score, teaching is distilled to numbers, ratings, and rankings, conveying a reductive nature to educators’ worth and undermining their overall confidence in the system.”
Amen, Sister Charlotte! Testify, girlfriend!
Ms. Danielson’s critique of administrators is a valid one, especially considering that evaluators were programmed, during their Danielson training, to view virtually every teacher as less than excellent, which put even the best-intentioned evaluators in a nitpicking mode, looking for any reason, no matter how immaterial to effective teaching, to find a teacher lacking and score them “proficient” instead of “excellent.” In her criticism of administrators, Ms. Danielson has touched upon what is, in fact, a major shortcoming of our education system: The road to becoming an administrator is not an especially demanding one — particularly in terms of academic rigor — and once someone has achieved administrative status, there tends to be no apparatus in place to evaluate their performance, including (as Ms. Danielson points out) their performance in evaluating their teachers.
Provided that administrators can keep their immediate superior (if any) content, as well as the seven members of the school board (who are almost never educators themselves), they can appear to be effective. That is, as long as administrators do not violate the terms of the contract, and as long as they are not engaging in some form of obvious harassment, teachers have no way of lodging a complaint or even offering constructive criticism. Therefore, if administrators are using the Danielson Framework as a way of punishing teachers — giving them undeservedly reduced evaluations and thus exposing them to real harm, including the loss of their job regardless of seniority — there is no way for teachers to protect themselves. They cannot appeal an evaluation. They can write a letter to be placed alongside the evaluation explaining why it is unfair or invalid, but that letter does not trigger a review. The evaluator’s word is final.
According to the law of averages, not all administrators are excellent; and not all administrators use the evaluation instrument (Danielson or otherwise) excellently. Some administrators are average; some are poor. Some use the evaluation instrument in a mediocre way; some use it poorly. Hence you can quite easily have an entire staff of teachers whose value to the profession is completely distorted by a principal who is, to put it bluntly, bad at evaluating. And there’s not a thing anyone can do about it.
Another crucial point that Charlotte Danielson makes in her Education Week article is that experienced teachers should not be evaluated via the same method as teachers new to the field: “An evaluation policy must be differentiated according to whether teachers are new to the profession or the district, or teach under a continuing contract. . . . Once teachers acquire this status [i.e. tenure], they are full members of the professional community, and their principal professional work consists of ongoing professional learning.” In other words, experienced teachers, with advanced degrees in their content area and a long list of professional accomplishments, shouldn’t be subjected to the same evaluation procedure as someone who is only beginning their career and has much to learn.
In fact, using the same evaluation procedure creates a very odd dynamic: You oftentimes have an administrator who has had only a limited amount of classroom experience (frequently fewer than ten years, and perhaps only two or three) and whose only advanced degree is the one that allows them to be an administrator (whereby they mainly study things like school law and school finance), sitting in judgment of a teacher who has spent twenty or thirty years honing their teaching skills and who has an advanced degree in their subject area. What can the evaluator possibly say in their critique that is meaningful and appropriate? It is commonplace to find this sort of situation: A principal who was a physical education or driver’s education teacher, for perhaps five years, is now sitting in an Advanced Placement Chemistry classroom evaluating a twenty-year veteran with a master’s degree or perhaps even a Ph.D. in chemistry. The principal feels compelled to find something critical to say, so all they can do is nitpick. They can’t speak to anything of substance.
What merit can there be in a system that makes evaluators omnipotent judges of teachers in subject areas that the evaluators themselves literally are not qualified to teach? It isn’t that veteran teachers don’t have anything to learn. Far from it. Teaching is a highly dynamic, highly challenging occupation; and the successful teacher is constantly learning, growing, self-reflecting, and networking with professional peers. The successful principal makes space for the teacher to teach and for the student to learn, and they protect that space from encroachment by anyone whose design is to impede that critical exchange.
Ms. Danielson offers this alternative to the current approach to evaluation: “An essential step in the system should be the movement from probationary to continuing status. This is the most important contribution of evaluation to the quality of teaching. Beyond that, the emphasis should be on professional learning, within a culture of trust and inquiry. . . . Experienced teachers in good standing should be eligible to apply for teacher-leadership positions, such as mentor, instructional coach, or team leader.”
Ironically, what Ms. Danielson is advocating is a return to evaluation as most teachers knew it prior to adoption of the Danielson Framework.
(Grammar alert: I have opted to use the gender-neutral pronouns they, their, etc., even when they don’t agree in number with their antecedents.)
Lowered teacher evaluations under the Danielson Framework require special training
In an earlier post I analyzed the “Danielson Framework for Teacher Evaluation,” which has become the adopted model in numerous states, including Illinois, and I pointed out some of its many flaws. One of the aspects of Danielson that has been troubling to teachers from the beginning is its insistence that virtually no teacher is excellent (distinguished, outstanding). When the Framework was designed in 1996, it was intended to rate first-year teachers, so it made sense that very, very few would be rated in the top category. The Framework was revised three times (2007, 2011, and 2013) in an effort to make it an evaluation tool for all educators and even non-classroom professionals (like librarians and school nurses). Nevertheless, the idea that virtually no teacher is capable of achieving the top echelon (however it may be labeled in a district’s specific evaluation instrument) has clung to the Framework.
In my district, we were told of the Danielson Framework a full two years before it was implemented, and from the start we were informed that it was all but impossible to achieve an “excellent” rating, even for teachers who had consistently been rated at the top level for several evaluation cycles in the pre-Danielson era. After a full year of its being used, it seems that administrators’ predictions were true (or were made to be true), and almost no one (or literally no one) received an excellent rating. We were encouraged to compile a substantial portfolio of evidence, or artifacts, to help ensure that our assessment would be more comprehensive than under the previous evaluation approach. I foolishly (in retrospect) spent approximately six hours pulling together my portfolio and writing a narrative to accompany it. As it turned out, the portfolio was never discussed, and given when it was retrieved relative to the appointed hour of my conference, it could only have been glanced at.
As predicted, I was deemed “proficient.” It was a nearly surreal experience to be complimented again and again only to be informed at the end that I didn’t rate as “excellent” because the Danielson Framework makes it exceptionally difficult for a teacher to receive a top rating. There were literally no weaknesses noted — well, there were comments in the “weakness” areas of the domains, but they were phrased as “continue to …” In other words, I should improve by continuing to do what I’ve been doing all along. In fairness, I should note that the evaluator had numerous teachers to evaluate, and therefore observations to record, portfolios to read, and summative evaluations to write — so I’m certain the pressure of deadlines figured into the process. Nevertheless, it’s the system that’s in place, and my rating stands as a reflection of my merits as a teacher and my value to the district and the profession — there’s no recourse for appeal, nor, I suppose, purpose in it.
I was feeling a lot of things when I left my evaluation conference: angry, humiliated, defeated, underappreciated, naive, deceived (to list a few). Moreover, I had zero respect for the Danielson Framework and (to be honest) little remained for my evaluator — though it seems that from the very beginning evaluators are trained (programmed) to give “proficient” as the top mark. After a year of pop-in observations in addition to the scheduled observation, the preparation of a portfolio based on the four domains, a conference, and the delivery of my official evaluation, I literally have no idea how to be a better teacher. Apparently, according to the Framework, I’m not excellent, and entering my fourth decade in the classroom I’m clueless how to be excellent in the World According to Charlotte Danielson (who, by the way, has very little classroom experience).
If the psychological strategy at work is that denying veteran teachers a top rating will make them strive even harder to achieve it next time around, the concept is inherently flawed, especially when there are no concrete directions for doing things differently. As I said in my previous post on Danielson, it would be like teachers telling their students that they should all strive for an “A” and do “A”-quality work — even though in the end the best they can get on their report card is a “B.” Or business owners telling their salespeople to strive for through-the-roof commissions, even though no matter how many sales they make, they’re all going to get the same modest paycheck. In the classroom, students would quickly realize that the person doing slightly above average work and the person doing exceptional work are both going to get a “B” … so there’s no point in doing exceptional work. On the job, salespeople would opt for the easiest path to the same result.
Under Danielson, it will take great personal and professional integrity to resist the common-sense urge to be the teacher that one’s evaluation says one is — to resist being merely proficient if that, in practice, is the best ranking that is available.
My experience regarding the Danielson Framework is not unique in my school, and clearly it’s not unique in Illinois as a whole. Each year administrators must participate in an Administrators Academy workshop, and one workshop being offered by the Sangamon County Regional Office of Education caught my eye in particular: “Communicating with Staff Regarding Performance Assessment,” presented by Dr. Susan Baker and Anita Plautz. The workshop description says,
“My rating has always been “excellent” [sic] and now it’s “basic”. [sic] Why are you doing this to me?” When a subordinate’s performance rating declines from the previous year, how do you prepare to deliver that difficult message? How do you effectively respond to a negative reaction from a staff member when they [sic] receive a lower performance rating? This course takes proven ideas from research and weaves them into practical activities that provide administrators with the tools needed to successfully communicate with others in difficult situations. (Sangamon Schools’ News, 11.3, spring 2014, p. 11; see here to download)
Apparently, then, school administrators are giving so many reduced ratings to teachers that they could benefit from special coaching on how to deliver the bad news so that the teacher doesn’t go postal right there in their office (I was tempted). In other words, the problem isn’t an instrument and an approach that consistently undervalues and humiliates experienced staff members; the problem, rather, is rhetorical — how do you structure the message to make it as palatable as possible?
While I’m at it, I have to point out the fallacious saw of citing “research,” and in this description even “proven ideas,” which is so common in education. The situation that this workshop speaks to, with its myriad dynamics, is unique and only recently a pervasive phenomenon. Therefore, if there have been studies that attempt to replicate the situation created by the Danielson Framework, they must be recent ones and could at best suggest some preliminary findings — they certainly couldn’t prove anything. If the research is older, it must concern some other communication situation, from which the workshop presenters are extrapolating strategies for the Danielson situation, and they shouldn’t be trying to pass it off as proof. As a literature person, I’m also amused by the word “weaves” in the description, as it is often a metaphor for fanciful storytelling — and the contents of the alluded-to research must be fanciful indeed. (By the way, I don’t mean to imply that Dr. Baker and Ms. Plautz are deliberately trying to mislead — they no doubt intend to offer a valuable experience to their participants.)
What is more, a lowered evaluation is not just a matter of hurting one’s pride. With recent changes in tenure and seniority laws in Illinois (and likely other states), evaluations could be manipulated to supersede seniority and remove more experienced teachers in favor of less experienced ones — which is why speaking out carries a certain amount of professional risk even for seasoned teachers.
My belief is that the Danielson Framework and the way that it’s being used are part of a calculated effort to cast teachers as expendable cogs in a broken wheel. Education reform is a billions-of-dollars-a-year industry — between textbook publishers, software and hardware developers, testing companies, and high-priced consultants (like Charlotte Danielson) — and how can cash-strapped states justify spending all those tax dollars on reform products if teachers are doing a damn fine job in the first place? It would make no sense.
It would make no sense.
Not speaking about Danielson Framework per se, but
Sir Ken Robinson has several TED Talks regarding education, and his “How to Escape Education’s Death Valley” is an especially appropriate follow-up to my last post about the Danielson Group’s Framework for Teaching Evaluation Instrument. Robinson, who is very funny and engaging, doesn’t reference Charlotte Danielson and her group per se, but he may as well. The Danielson Group’s Framework, which has been adopted as a teacher evaluation instrument in numerous states, including Illinois, is emblematic — in fact, the veritable flagship — of everything that’s wrong with education in America, according to Robinson.
Treat yourself to twenty minutes of Robinson’s wit and wisdom:
Fatal flaws of the Danielson Framework
The Danielson Group’s “Framework for Teaching Evaluation Instrument” has been sweeping the nation, including my home state of Illinois, in spite of the fact that the problems with the Group, the Framework, the Instrument, and even Ms. Danielson herself are as obvious as a Cardinals fan in the Wrigley Field bleachers. There have already been some thorough critiques of the Danielson Group, its figurehead, the Framework, and how it’s being used destructively rather than constructively. For example, Alan Singer’s article at the Huffington Post details some of the most glaring problems. I encourage you to read the article, but here are some of the highlights:
[N]obody … [has] demonstrated any positive correlation between teacher assessments based on the Danielson rubrics, good teaching, and the implementation of new higher academic standards for students under Common Core. A case demonstrating the relationship could have been made, if it actually exists.
[I]n a pretty comprehensive search on the Internet, I have had difficulty discovering who Charlotte Danielson really is and what her qualifications are for developing a teacher evaluation system … I can find no formal academic resume online … I am still not convinced she really exists as more than a front for the Danielson Group that is selling its teacher evaluation product. [An article archived at the Danielson Group site describes the “crooked road” of her career, and I have little doubt that she’d be an interesting person with whom to have lunch — but in terms of practical classroom experience as a teacher, her CV, like those of most educational reformers, offers scant information.]
The group’s services come at a cost, which is not a surprise, although you have to apply for their services to get an actual price quote. [Prices appear to range from $599 per person to attend a three-day workshop to $1,809 per person to participate in a companion four-week online class. For a Danielson Group consultant, the fee appears to be $4,000 per consultant per day when three or more days are scheduled, and $4,500 per consultant per day for one- or two-day consultations (plus travel, food, and lodging costs). There are also fees for keynote addresses, and several books are available for purchase.]
As I’ve stated, you should read Mr. Singer’s article in its entirety, and look into the Danielson Group and Charlotte Danielson yourself. The snake-oil core of their lucrative operation quickly becomes apparent. One of the chief purposes of the Danielson Framework, which allegedly works in conjunction with the Common Core State Standards, is to turn students into critical readers who are able to dissect a text, comprehending both its explicit and implicit meanings. What follows is my own dissection of the “Framework for Teaching Evaluation Instrument” (2013 edition). For now, I’m limiting my analysis to the not-quite-four-page Introduction, which, sadly, is the least problematic part of the Framework. The difficulties only increase as one reads farther into the four Domains. (My citations refer to the PDF that is available at DanielsonGroup.org.)
First of all, the wrongheadedness of teacher evaluation
Before beginning my dissection in earnest, I should say that, rubrics aside, the basic idea of teacher evaluation is ludicrous: sporadic observations, very often by superiors who aren’t themselves qualified to teach your subject, result in nothing especially accurate or useful. As I’ve blogged before, other professionals — physicians, attorneys, business professionals, and so on — would never allow themselves to be assessed as teachers are. For one thing, and this is a good lead-in to my analysis, there are as many styles of teaching as there are of learning. There is no “best way” to teach, just as there is no “best way” to learn. Teachers have individual styles, just as tennis players do, and effective ones know how to adjust their style depending on their students’ needs.
But let us not sell learners short: adjusting to a teacher’s method of delivery is a human attribute — the one that allowed us to do things like wander away from the savanna, learn to catch and eat meat, and survive the advance of glaciers — and it is well worth fine-tuning before graduating from high school. I never attended a college class nor held a job where the professor or the employer adjusted to fit me, at least not in any significant ways. Being successful in life (no matter how one chooses to define success) depends almost always on one’s ability to adjust to changing circumstances.
In essence, forcing teachers to adopt a very particular method of teaching tends to inhibit their natural pedagogical talents, and it’s also biased toward students who do, in fact, like the Danielsonesque approach, which places much of the responsibility for learning in the students’ lap. Worse than that, however, a homogeneous approach — of any sort — gives students a very skewed sense of the world in which they’re expected to excel beyond graduation.
In fairness, “The Framework for Teaching Evaluation Instrument” begins with a quiet little disclaimer, saying in the second sentence, “While the Framework is not the only possible description of practice, these responsibilities seek to define what teachers should know and be able to do in the exercise of their profession” (3). That is, there are other ways to skin the pedagogical cat. It’s also worth noting that the Danielson Group is seek[ing] to define — it doesn’t claim to have found The Way, at least not explicitly. Nevertheless, that is how untold numbers of legislators, reformers, consultants and administrators have chosen to interpret the Framework. As the Introduction goes on to say, “The Framework quickly found wide acceptance by teachers, administrators, policymakers, and academics as a comprehensive description of good teaching …” (3).
Teachers, well, maybe … though I know very, very few who didn’t recognize it as bologna from the start. Administrators, well, maybe a few more of these, but I didn’t hear any who were loudly singing its praises once it appeared on the Prairie’s horizon. Academics … that’s pretty hard to imagine, too. I’ve been teaching high-school English for 31 years, and I’ve been an adjunct at both private and public universities for 18 years — and I can’t think of very many college folk who would embrace the Danielson Framework’s tactics. Policymakers (and the privateer consultants and the techno-industrialists who follow remora-like in their wake) … yes, the Framework fits snugly into their worldview.
Thus, the Group doesn’t claim the Framework is comprehensive, but they seem to be all right with others’ deluding themselves into believing it is.
The Framework in the beginning
The Introduction begins by explaining each incarnation of the Framework, starting with its 1996 inception as “an observation-based evaluation of first-year teachers used for the purpose of licensing” (3). The original 1996 edition, based on research compiled by the Educational Testing Service (ETS), coined the performance-level labels of “unsatisfactory,” “basic,” “proficient,” and “distinguished” — labels which have clung tenaciously to the Framework through successive editions and adoptions by numerous state legislatures. In Illinois, the Danielson Group’s Framework for Teaching is the default evaluation instrument if school districts don’t modify it. Mine has … a little. The state mandates a four-part labeling structure, and evaluators have been trained (brainwashed?) to believe that “distinguished” teachers are as rare as four-leaf clovers … that have been hand-plucked and delivered to your doorstep by leprechauns.
In my school, it is virtually (if not literally) impossible to receive a “distinguished” rating, which leads to comments from evaluators like “I think you’re one of the best teachers in the state, but according to the rubric I can only give you a ‘proficient.’” It is the equivalent of teachers telling their students that they’re using the standard A-B-C-D scale and want them to do A-quality work and strive for an A in the course, but that, alas, virtually none of them are going to be found worthy and will have to settle for a B (“proficient”): Better luck next time, kids. Given the original purpose of the Framework — to evaluate first-year teachers — it made perfect sense to cast the top level of “distinguished” as all but unattainable, but it makes no sense to place that level beyond reach for high-performing, experienced educators. Quite honestly, it’s demeaning and demoralizing, and it erodes respect for the legitimacy of both the evaluator and the evaluation process.
Then came (some) differentiation
The 2007 edition of the Framework, according to the Introduction, was improved by providing modified evaluation instruments for “non-classroom specialist positions, such as school librarians, nurses, and counselors,” that is, people who “have very different responsibilities from those of classroom teachers”; and, as such, “they need their own frameworks, tailored to the details of their work” (3). There is no question that the differentiation is important. The problem, however, is that it implies “classroom teacher” is a monolithic position, and nothing could be further from the truth. Thus, having one instrument that is to be used across grade levels and ability levels, not to mention across vocational, academic, and fine arts courses, is, simply, wrongheaded.
As any experienced teacher will tell you, each class (each gathering of students) has a personality of its own. On paper, you may have three sections of a given course, all with the same sort of students as far as age and ability; yet, in reality, each group is unique, and the lesson that works wonderfully for your 8 a.m. group may be doomed to fail with your 11 a.m. class, right before lunch, or your 1 p.m. after-lunch bunch — and on and on and on. So the Danielson-style approach, which is heavily student directed, may be quite workable for your early group, whereas something more teacher directed may be necessary at 11:00.
Therefore, according to the Danielson Group, I may be “distinguished” in the morning, but merely “proficient” by the middle of the day (and let us not speak of the last period). The evaluator can easily become like the blind man feeling the elephant: Depending on which piece he experiences, he can have very different impressions about what sort of thing, what sort of teacher, he has before him. Throw into the mix that evaluators, due to their training, have taken “distinguished” off the table from the start, and we have a very wobbly Framework indeed.
Enter Bill and Melinda Gates
The 2011 edition reflected revisions based on the Group’s 2009 encounter with the Bill and Melinda Gates Foundation and its Measures of Effective Teaching (MET) research project, which attempted “to determine which aspects of a teacher’s practice were most highly correlated with high levels of student progress” (4). Accordingly, the Danielson Group added more “[p]ossible examples for each level of performance for each component.” They make it clear, though, that “they should be regarded for what they are: possible examples. They are not intended to describe all the possible ways in which a certain level of performance might be demonstrated in the classroom.” Indeed, the “examples simply serve to illustrate what practice might look like in a range of settings” (4).
I would applaud this caveat if not for the fact that it’s embedded within an instrument whose overarching purpose is to make evaluation of a teacher appear easy. Regarding the 2011 revisions, the Group writes, “Practitioners found that the enhancements not only made it easier to determine the level of performance reflected in a classroom … but also contributed to judgments that are more accurate and more worthy of confidence” (4-5). Moreover, the Group says that changes in the rubric’s language helped to simplify the process: “While providing less detail, the component-level rubrics capture all the essential information from those at the element level and are far easier to use in evaluation than are those at the element level” (4).
I suspect it’s this ease-of-use selling point that has made the Framework so popular among policymakers, who are clueless as to the complexities of teaching and who want a nice, tidy way to assess teachers (especially one designed to find fault with educators and rate them as average to slightly above average). But it is disingenuous, on the part of Charlotte Danielson and the Group, to maintain that a highly complex and difficult activity can be easily evaluated and quantified. In a 2012 interview, Ms. Danielson said that her assessment techniques are “not like rocket science,” whereas “[t]eaching is rocket science. Teaching is really hard work. But doing that [describing what teaching “looks like in words”] isn’t that big a deal. Honestly, it’s not. But nobody had done it.”
It’s downright naive — or patently deceptive — to say that a highly complex process (and highly complex is a gross understatement) can be easily and simply evaluated — well, it can be done, but not with any accuracy or legitimacy.
Classic fallacy of begging the question
I want to touch on one other inherent flaw (or facet of deception) in the Danielson Framework, and that is its bias toward “active, rather than passive, learning by students” (5). Speaking of the Framework’s alignment with the Common Core, the Group writes, “In all areas, they [CCSS] place a premium on deep conceptual understanding, thinking and reasoning, and the skill of argumentation (students taking a position and supporting it with logic and evidence).” On the one hand, I concur that these are worthy goals — ones I’ve had as an educator for more than three decades — but I don’t concur that they can be observed by someone popping into your classroom every so often, perhaps skimming through some bits of documentary evidence (so-called artifacts), and I certainly don’t concur that it can be done easily.
The Group’s reference to active learning, if one goes by the Domains themselves, seems to be the equivalent of students simply being active in an observable way (via small-group work, for example, or leading a class discussion), but learning happens in the brain and signs of it are rarely visible. Not to get too far afield here, but the Framework is intersecting at this point with introverted versus extroverted learning behaviors. Evaluators, perhaps reflecting a cultural bias, prefer extroverted learners because they can see them doing things, whereas introverted learners may very well be engaged in far deeper thinking, far deeper comprehension and analysis — which is, in fact, facilitated by their physical inactivity.
And speaking of “evidence,” the Introduction refers to “empirical research and theoretical research” (3), to “analyses” and “stud[ies]” (4), and to “educational research” that “was fully described” in the appendix of the 2007 edition (3). Yet beyond these vague allusions (to data which must be getting close to a decade old) there are no citations whatsoever. In other words, the Danielson Group is making all sorts of fantastic claims void of any evidence, which I find the very definition of “unsatisfactory.” This tactic of saying that practices and policies are based on research (“Research shows …”) is common in education; yet citations, even vague ones, rarely follow — and when they do, the sources and/or methodologies are dubious, to put it politely.
I plan to look at the Danielson Framework Domains in subsequent posts, and I’m also planning a book about what’s really wrong in education, from a classroom teacher’s perspective.