From: "George E. Demetrion" <sophocles5@juno.com>
Date: Sat, 16 Aug 2003 17:33:53 -0400 (EDT)
To: Multiple recipients of list <nifl-aalpd@literacy.nifl.gov>
Subject: [NIFL-AALPD:600] Using Research and Reason in Education--A Review

In Review

George Demetrion
Independent Scholar
gdemetrion@msn.com

Stanovich, P.J. and Stanovich, K.E. (2003). Using Research and Reason in Education: How Teachers Can Use Scientifically Based Research to Make Curricular & Instructional Decisions. Partnership for Reading.

In working through this text, three things stand out: (a) an array of reasonable and useful statements about science as applicable to education; (b) an unfortunate anti-scientific polemic as a rhetorical form of dismissing arguments and schools of thought that do not fit into the authors' framework; and (c) a limited view of science. My commentary follows these points.

What I Find Reasonable, Even as I Would Qualify Some of the Following Statements

A good portion (but not all) of the Introduction (pp. 1-2), such as the following quotes:

1. "[L]earning outcomes will serve as the basis of assessment instruments" (p. 1).

2. "Instructional methods should be appropriate for the designed curriculum" (p. 1).

3. "While testing seems the most straightforward [way of evaluating student learning], it is not necessarily the clear indicator of good educational practice that the public seems to think it is" (p. 2).

4.
"[C]omparing averages or other indicators of overall performance from tests across classrooms, schools, or school districts takes no account of the resources and support provided to a school, school district, or individual professional" (p. 2).

Additional Statements Like the Following

5. "Education is informed by formal scientific research through the use of archival research-based knowledge such as found in peer-review journals" (p. 4).

6. "Scientific thinking in practice is what characterizes reflective teachers--those who inquire into their own practice and who examine their own classrooms to find out what works best for them and their students" (p. 4).

7. "We need tools for evaluating the credibility of these many and varied sources [about education]; the ability to recognize research-based conclusions is especially important" (p. 6).

8. "Empiricism is the practice of relying on observation" (p. 11).

9. "Empiricism pure and simple is not enough….[P]ure, unstructured observation of the natural world will not lead to scientific knowledge" (p. 12).

10. "Proponents of an educational practice should be asked for evidence; they should also be willing to admit that contrary data will [I would say, may] lead them to abandon the practice" (p. 13). I would add several additional qualifiers, even while accepting the general logic of the authors' broad point.

11. "True science is held tentatively and is subject to change based on contrary evidence" (p. 13).

12. "The principle of converging evidence is applied in situations requiring a judgment about where the 'preponderance of evidence' points" (p. 15).

13. "A particular experimental result is never equally relevant to all competing theories" (p. 15).

14. "[W]e need to look for a convergence of results, not just consistency from one method [of research]. Convergence increases our confidence in the external and internal validity of our conclusions" (p. 21).

15.
"[T]he key is to use both [qualitative and quantitative methods] where it is most effective" (p. 25).

16. "Scientific knowledge is not infallible knowledge, but knowledge that has at least passed some minimal tests [so would much knowledge in other academic disciplines]. The theories behind research-based practice can be proven wrong, and therefore contain a mechanism for growth and advancement" (p. 26).

17. "Researchers use many different methods to arrive at their conclusions, and the strengths and weaknesses of these methods vary. Most often, conclusions are drawn only after a slow accumulation of data from many studies" (p. 26). (Even so, new knowledge may emerge because paradigms shift. In that respect, scientific change can also be cataclysmic.)

18. "Effective teachers engage in scientific thinking in their classrooms in a variety of ways….[I]terative testing of hypotheses that are revised after the collection of data…can be seen when teachers plan for instruction: they evaluate their students' previous knowledge, develop hypotheses about the best methods for attaining lesson objectives, develop a teaching plan based on those hypotheses, observe results, and base further instruction on the evidence collected" (p. 32).

19. "Researchers and educators are kindred spirits [that is, they can be!] in their approach to knowledge, an important fact that can be used to forge a coalition to bring hard-won research knowledge to light in the classroom."

In acknowledging science as a highly important way of knowing, I largely accept the previous statement, with only limited qualification. Obviously, others would find additional aspects of the report reasonable and even highly valuable. While assenting to these points, in terms of application to a field like education, I would contend that it is the SCIENTIFIC METHOD RATHER THAN SCIENCE, PER SE, that is of primary importance.
Viewed thusly, science represents a fruitful metaphor for knowledge that, within its realm, adds much value to a collective understanding of education that has cultural as well as technical dimensions.

Rhetorical Polemic/Anti-Scientism or Non-Scientific Statements and Arguments

It is the confluence between so many reasonable statements and an unfortunate use of polemics that I find especially disturbing in a report titled "Using Research and Reason in Education." For example, scientific "mechanisms that evaluate claims about teaching methods" are falsely polarized against a complaint about an "'anything goes'" approach. We learn that this "'anything goes' mentality" is "a fertile environment for gurus to sell untested educational 'remedies' that are not supported by an established research base" (p. 4). One wonders who the intended target is for this gross caricature.

This kind of polarizing terminology is also reflected in the U.S. Department of Education's Strategic Plan. In addition, E.D. Hirsch used "guru" terminology in his polemical speech to the California State Board of Education in 1997. In short, references to the "guru principle" are part of the lexicon of neo-conservative educational political rhetoric. It has now been inserted into a federally supported report titled "Using Research and Reason in Education."

Also, without specifically making the case, the authors contrast the "political" orientation that they claim has dominated educational practice, with its "factions and interest groups," to the supposedly surer ground of science (p. 5). The report likewise points to the "contentious disputes about conflicting studies that plague education and other behavioral sciences." These disputes, the authors claim, amount to little more than a "'he-said, she-said' debate." These debates, the authors note, are documented in the academic journals--the very journals that the authors view as lacking validity.
It is the hope of the authors that these "disputes" will be dampened by a greater reliance on statistical "meta-analysis" (p. 18). The concern here is two-fold. (a) The authors do not provide any evidence for this simplistic depiction of conflicting academic discourse. What they fail to do is to directly examine the arguments of the various perspectives. They simply view the disputes as part of the pre-scientific woodwork, preventing the field from establishing the cumulative informational basis that "normal" science (my quotes, referring to Kuhn's "The Structure of Scientific Revolutions") would facilitate. (b) What the authors ignore is the knowledge base that underlies the "conflicts." Rather than embodying the peculiarity of personal bias, these conflicts reflect well-developed schools of thought that have their own scholarly bases of support.

This dismissive tactic of reducing this rich body of collective work on education to a "plague" of "contentious disputes" is both anti-intellectual and anti-science. It is also a political bias embedded in the Department's Strategic Plan and reflected in the neo-conservative literature on education for over a decade. As it is anything but objective or neutral, of all the places it does not belong, it is in a report titled "Using Research and Reason in Education."

It is unfortunate that the authors did not refer to Donna C. Mertens' (1997) important textbook "Research Methods in Education and Psychology." In this publication, Mertens describes three distinctive paradigms of research. The first is the positivist/post-positivist tradition, characteristic of the Stanovich paper. The second is the interpretive/constructivist paradigm, of which Sharan B. Merriam's (2001) text "Qualitative Research and Case Study Applications in Education" is a good example. The third is the critical/emancipatory paradigm, as reflected in the critical theory of Henry Giroux.
Ignoring the many issues about educational research that texts like these bring out is an unfortunate gap. It will not suffice either as science or scholarship. It is this research community that the authors, in fact, should be addressing in their pointed criticism. Instead, they set up straw men to attack fads, gurus, and pseudoscience. This rhetorical ploy, which allows the authors to make an end-run around a broad body of scholarship, needs to be examined very closely.

Limited View of Science

This is evident in a variety of ways. Let's take the analogy to medicine, with which the authors open the body of their document. The first line is instructive: "When you go to the family physician with a medical complaint, you expect that the recommended treatment has proven to be effective with many other patients who have had the same symptoms" (p. 3). So it is with educational research. The doctor is to the patient what the researcher is to the professional educator, who, in turn, will dispense the right medicine to the class. Providing "treatment" is the operative word.

The problems I have with this are several-fold.

(a) The analogy of medicine and education. Both are applied areas, to be sure, but one is rooted much more firmly in the natural and physical sciences than the other. Granted, the functioning of the brain, which has educational relevance, is an important biological consideration. So is the role of culture in shaping educational practice, educational ideas, and even what counts as legitimate knowledge. Then, what of all the studies in the growing field of education, the cumulative work of 100 years of scholarship? Much of this scholarship is organized into schools of thought that have shaped discourse on education throughout the 20th century. This literature cannot be passively consumed. Its frames of reference require the critical thinking and creative experimentalism of the educator to operationalize in a given context.
The educational researcher most certainly cannot ignore the scholarly literature base.

(b) The mechanistic interpretation of medicine and science. Things are not so black and white even in this realm as sometimes portrayed in the positivist and neo-positivist research tradition. The issue is not "anything goes," but that there may be a range of options in a medical setting that could result in the desired objective. One thinks of the variety of treatment plans for cancer or depression, a range within which patients can make informed decisions.

Consider the following personal example. I take blood pressure medication. My dosage at this stage is very low. I could terminate the treatment and my blood pressure would remain within an acceptable range. I had a mild heart attack a few years ago and take several medications, so the thought of being able to drop one appealed to me a great deal. My doctor and I discussed the options. By dropping the medication I would save a little money and perhaps purify my body a bit by taking on a more organic approach to wellness. I probed him further for information (and consider, first off, what that probing itself says). He said that according to the latest research, statistically, the medication itself has a slight positive effect among those who have had a heart attack, even if the blood pressure is within the acceptable range without it--not much, but just a slight positive effect. He thought one of the issues was the cost. It was not. What I was concerned about was the long-term impact of using a drug if I didn't need it. The dosage was so low, he said, that that effect would be virtually nil. I continued to probe, not so much for new words, but to read his body language. It became clear that, all things considered, he thought my best option was to stay on the drug, based on a statistical variance of a very slight degree. I had to read him in order to make that discernment.
Based on my interpretation of both his explicit and implicit message, I decided to stay with the medicine. Within the given parameters, I decided my treatment. It wasn't anything goes. No guru principle was at work. It was an informed decision where the alternative decision may have been just as valid.

(c) After the 19th and 20th century revolutions in evolutionary biology and quantum physics, science may no longer be as straightforward as sometimes portrayed in the positivist research tradition. Recent books on chaos theory, and a new study by scientist Steven Strogatz titled "Sync: The Emerging Science of Spontaneous Order," point to system-wide processes in which the whole is greater than the sum of its parts. Strogatz's hypothesis is that synchronization happens in a variety of realms in the natural world. This is not my field of expertise, so I defer to others more qualified than I to elaborate on this idea. I simply make the point here that science may not be as straightforward as some think.

I also found a number of problems within the main text of "Using Research and Reason in Education" in terms of its interpretation of science. These include:

1. The authors' "first pass" claim that peer-reviewed scientific research journals provide the primary "source credibility" for new methods in education (p. 7). It is not that such source credibility is not potentially a valuable source of information. I simply would not want to privilege that particular source as a first-line basis of credibility. The issue of why methods (or materials) work extends beyond what I can discuss in this message. Let it suffice here to argue that "source credibility" has as much to do with the skill of the teacher and the particular dynamics of the class as with what may be verified in a scientific journal. That's not anti-science. That's giving teachers their due.

2.
Similarly, the claim that unless an idea or practice has "adequate documentation in the peer-reviewed literature of a scientific discipline," the field should exercise caution and wariness (p. 7). This is another untested and dubious assumption. In their effort to flesh out the logic of this argument, the authors draw a sharp contrast between professional educational journals that are magazines of opinion and journals where primary reports of research, or reviews of research, are peer reviewed. In this, the authors compare the American Educational Research Journal with Phi Delta Kappan. While the latter "contain(s) stimulating discussions of educational issues," it is not "a peer-review journal of original research" (p. 8). The authors are even more concerned about information taken from the Internet, where there is a glut of information without proven reliability. One assumes that their comments would apply to the electronic listservs where practitioners speak directly to each other without the intermediary of the scientist. One does wonder, however, why the scientists in our field do not participate in these discussions and engage in rigorous dialogue with others.

3. There are a number of more technical criticisms, which I will bypass, except for one single point. On p. 19 the authors state, "Scientific thinking is based on the ideas of comparison, control, and manipulation" (the terms were italicized in the original). They go on to argue that "these characterizations of scientific investigation must be arranged to work in concert." On this latter point, I tend to agree. What I do not grant is their first premise. One may say that scientific thinking may include comparison, control, and manipulation, but those are specific methodologies rather than a more global reflection of the essence of the scientific method. As specialized methodologies, their relevance is determined by the problem at hand.
Where the authors more accurately point to scientific thinking is near the end of their report. Its essence is hypothesis formation, based on a particular problem or set of problems: "iterative testing of hypotheses that are [then] revised after the collection of data" (p. 32). The data is not any old data, but data that in some way is relevant to the hypothesis. To put this more strongly, it is first and foremost the hypothesis or idea that directs where and what to look for in terms of confirmable or disconfirmable data. The problem and the initial ideas that emerge will profoundly affect what is viewed as the relevant evidence in any given case. Refinement comes both with idea modification and with analysis of observable data. In Dewey's terminology, both the various hypotheses that emerge in a particular line of inquiry and the accompanying data take on the role of propositions. The investigation proceeds through an analysis and refinement of these propositions in an increasingly controlled way, until a solution relevant for that case emerges in the form, however provisional, of a judgment. I would like to suggest that, regardless of the problem at hand, this is the scientific method. Specific methodologies and areas of research emphasis will depend on the problem at hand, including where things stand in the evolution of a particular inquiry project.