
Entries for April 2006

April 28, 2006

About SAGES -1: The genesis of the program

As might be expected, some people at Case are all atwitter about the snide op-ed in a newspaper supposedly called the New York Times by someone supposedly called Stephen Budiansky. (Note to novice writers hoping to develop their snide skills: Putting words like 'supposedly' in front of an easily discernible fact is a weak attempt at sarcasm, insinuating that something is sneaky when no cause for suspicion exists. Like the way Budiansky says "SAGES (this supposedly stands for Seminar Approach to General Education and Scholarship)" when he has to know this for a fact, since he says he has been reading the SAGES website.)

But my aim here is not to dwell on the shallowness of Budiansky's article and make fun of it, although it is a good example of the kind of writing that uses selective quotes and data to support a questionable thesis, and a snippy tone to hide its lack of meaningful content. My purpose here is to articulate why I think SAGES has been the best educational idea I have been associated with in all my years in education at many different institutions. It is clear to me that many people, even those at Case, have not quite understood what went into it and why it is such an important innovation, and this essay seeks to explain it.

I have been involved with SAGES since its inception in the summer of 2002, when I was appointed by then College of Arts and Sciences Dean Sam Savin to a task force charged with investigating how to improve the college's general education requirements (GER). American colleges have these kinds of requirements in order to ensure that students gain breadth of knowledge outside their chosen majors. Case's GER were fairly standard in that they required students to select a distribution of courses from a menu classified under different headings, such as Global and Cultural Diversity.

The task force felt that these kinds of requirements, while better than nothing, did not have any cohesive rationale and resulted in students just picking courses so that they could check off the appropriate boxes. The task force wondered how we could make the distribution requirements more meaningful and part of a coherent educational philosophy. In the process of studying this question, we learned of other problems that were long-standing but just lurking beneath our consciousness. Almost all these problems are endemic to many universities, not just Case.

One of these was that students entering Case tended to come in with a sense of identity tied to a specific program rather than to the school as a whole. They saw themselves primarily as engineering students or nursing students or arts students and so on, rather than as Case students. This fragmented identity was aided by the fact that in the first year they had no common experience that transcended these disciplinary boundaries. So we wondered what we could do to help create a sense of oneness among the student body, a sense of overall belonging.

Another problem we identified was that it was quite possible, even likely, for a first year student to spend the entire year in large lecture classes where they were largely passive recipients of lectures. This could result in students feeling quite anonymous and alone, and since this was also their first year away from home, it was not felt to be a good introduction to college life, let alone good for the emotional and intellectual health of the student. Furthermore, we know that first impressions can be very formative. When college students spend their first year passively listening in class, we feared that this might become their understanding of their role in the university, and that it would become harder to shift them into the mode of active engagement that was necessary when they got into the smaller upper-division classes in their majors.

Another problem was that students at Case did not seem to fully appreciate the knowledge-creation role that is peculiar to the research function of universities. While they had chosen to attend a research university, many did not seem to know what exactly constituted research, how it was done, or what its value was.

Another thing that surprised us was that even some seniors told us that there was not a single faculty member they had encountered during their years at Case whom they felt they knew well, in the sense that if they walked into that faculty member's office, he or she would know the student's name and something about them. We felt that this was a serious deficiency, because faculty-student interactions in and out of the class should play an important role in a student's college experience. We felt that it was a serious indictment of the culture of the university that a student could spend four years here and not know even one faculty member well.

Another very serious problem that was identified was that many students were graduating with poor writing and presentation skills. The existing writing requirement was being met by a stand-alone English course that students took in their first year. Students in general (not just at Case) tend to dislike these stand-alone 'skills' courses, and one can understand why. They are not related to any of the 'academic' courses and are thus considered inferior, merely an extra hoop to be jumped through. The writing exercises are necessarily de-contextualized since they are not organically related to any course of study. Students tend to treat such courses as irritants, which makes the teaching of such courses unpleasant as well. But worse, it was clear that a one-shot writing course cannot produce the kinds of improvement in writing skills that are desired.

Furthermore, some tentative internal research seemed to suggest that the one quality that universities seek above all to develop in their students, the level of 'critical thinking' (however that vague term is defined), was not only not being enhanced by the four years spent here; there were alarming hints that it was actually decreasing.

And finally, the quality of first year advising that students received was highly variable. While some advisors were conscientious about their role and tried hard to get to know their students, others hardly ever met them, except for the minute or two it took to sign their course registration slips. Even the conscientious advisors found it hard to get to know students on the basis of a very few brief meetings. This was unsatisfactory because, in addition to helping students select courses, the advisor is also the first mentor a student has and should be able to help the student navigate the university and develop a broader and deeper perspective on education and learning and life. This was unlikely to happen unless the student and advisor got to know each other better.

Out of all these concerns, SAGES was born. It sought to provide a comprehensive and cohesive framework that, one way or another, addressed all the above issues.

The task force decided on a small-class seminar format early on because we saw that this would enable students to engage more, speak and write more, get more feedback from the instructor and fellow students, and thus develop crucially important speaking, writing and critical thinking skills.

Since good writing develops only with a lot of practice of writing and revising, we decided that one writing-intensive seminar was not enough. Furthermore, students need to like and value what they are writing if they are going to persevere in improving their writing. In order to achieve this it was felt that the writing should be embedded in courses that dealt with meaningful content that the students had some choice in selecting. So we decided that students should take a sequence of four writing intensive seminars consisting of a common First Seminar in their first semester, and a sequence of three thematic seminars, one in each subsequent semester, thus covering the first two years of college.

The need for a common experience for all students was met by having the First Seminar be based on a common theme (which we chose to be The Life of the Mind), with at least some readings and out-of-class programs experienced in common by all first year students. The idea was that this would provide students with intellectual fodder to talk about amongst themselves in their dorms and dining halls and other social settings, irrespective of what majors they were considering or who they happened to be sitting next to. The common book reading program for incoming students was initiated independently of SAGES but fit naturally into this framework. The First Seminar was also designed to get students familiar with academic ways of thinking, provide an introduction to what a research university does and why, and provide opportunities for them to access the rich variety of cultural resources that surround the university.

It was also decided that the First Seminar instructor would serve as the first year advisor, so that the advisor and student would get to know each other well over the course of that first semester, enabling the kind of stronger relationship that makes mentoring more meaningful.

The University Seminars that followed the First Seminar were grouped under three categories (the Natural World, the Social World, and the Symbolic World), and students were required to select one from each category over the next three semesters. Each of these areas of knowledge has a different culture, investigates different types of questions, uses different rules for evidence and for how that evidence is used in arriving at conclusions, has different ways of constructing knowledge, and develops different modes of thinking and expression. These seminars would be built around topics selected by the instructor and designed to help students better understand the way that practitioners in those worlds view knowledge.

By taking one from each group based on their own interests, it was hoped that students would learn how to navigate the different academic waters that they encounter while at college. Taken together, we hoped they would complement each other and provide students with a global perspective on the nature of academic discourse.

In order to avoid the content overload that eventually engulfs many university courses, it was decided that the University Seminars would have no pre-requisites and also could not serve as pre-requisites for other courses. This freed instructors from the oft-lamented burden of having to 'cover' a fixed body of material at the expense of student participation. Now they were free to explore any question to any depth they wished.

For example, in my own SAGES course The Evolution of Scientific Ideas (part of the Natural World sequence) we explore the following major questions: What is science and can we distinguish it from non-science? What is the process that causes new scientific theories to replace the old? In the process of investigating these questions, I hope that students get a better understanding of how scientists see the world and interact with it. And I do not feel pressure to cover any specific scientific revolution. I can freely change things from year to year depending on the interests of the students.

A senior capstone experience was added to provide students with an opportunity to have a culminating activity and work on a project of their own choosing that would enable them to showcase the investigative, critical thinking, speaking, and writing skills developed over their years at Case.

Next in this series: Implementation issues

POST SCRIPT: Talk on Monday about Abu Ghraib and Guantanamo

On Monday, May 1 at 4:00pm in Strosacker Auditorium, two speakers will appear: Janis Karpinski, who was a Brigadier General and commanding officer of the Abu Ghraib prison when the prisoner torture and abuse scandal erupted, and who feels that she was made a scapegoat for that atrocity and demoted; and James Yee, who was the U.S. Army Muslim chaplain at Guantanamo and was arrested for spying and later cleared.

It should be interesting to hear their sides of the story.

The talk is free and open to the public. More details can be found here.

April 27, 2006

Dover's dominoes-6: Religion in schools

(See part 1, part 2, part 3, part 4, part 5.)

Contrary to what Bill O'Reilly and other hysterical people allege, there is no 'war on Christianity' or 'war on Christians' in the US. To say so is to just be silly and to disqualify yourself from being taken seriously. Nor is there a blanket ban on teaching about god and religion in public schools. The latter assertion is based on a common misunderstanding about the US constitution and it is worth exploring. The relevant question is how you teach about god and religion and for what purposes. (Usual disclaimer whenever I am discussing the implications of rulings by the courts: I am not a lawyer but would love to play one on TV.)

The issue of whether religious beliefs can be taught in public schools is governed by the establishment clause of the First Amendment (which I have discussed before here) which states that "Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances."

A key interpretation of this clause was provided in 1947 by Justice Hugo Black in the case of Everson v. Board of Education (330 U.S. 1, 15-16 (1947)), where he wrote: "The 'establishment of religion' clause of the First Amendment means at least this: Neither a state nor the Federal Government can set up a church. Neither can pass laws which aid one religion, aid all religions, or prefer one religion over another."

This was further clarified in Epperson v. Arkansas, 393 U.S. 97 (1968), which said that the "First Amendment mandates government neutrality between religion and religion, and between religion and nonreligion." (Kitzmiller, p. 91)

This was further clarified in 1971 in the case Lemon v. Kurtzman (403 U.S. 602, 612-613 (1971)) which said that to pass constitutional muster, a law must pass all three of the following tests:

First, the statute must have a secular legislative purpose;

Second, its principal or primary effect must be one that neither advances nor inhibits religion;

Finally, the statute must not foster "an excessive government entanglement with religion."

In the Dover case, both sides agreed that only the first two prongs were relevant to that issue. The first prong of the Lemon test explains why IDC proponents are so anxious to have their beliefs accepted as part of science. If that can be achieved, then they can meet the first prong, since the teaching of science in a science class clearly has a secular purpose. Hence Judge Jones' conclusion that IDC is not science must be seen as a serious blow to their ambitions.

Judge Jones also invoked another US Supreme Court precedent (County of Allegheny v. ACLU (1989)) which said that "School sponsorship of a religious message is impermissible because it sends the ancillary message to members of the audience who are nonadherents 'that they are outsiders, not full members of the political community, and an accompanying message to adherents that they are insiders, favored members of the political community.'"

This is another reason why a secular public sphere is to be preferred. (See here for an earlier discussion of this issue.) If you have a public sphere (in schools or elsewhere) in which one particular religious view is favored or endorsed, then it sends a message that others who are not of this particular religious persuasion are not full members. The Allegheny case resulted in the following 'endorsement test' that could be applied to any laws. It says that there is a "prohibition against government endorsement of religion" and that it "preclude[s] government from conveying or attempting to convey a message that religion or a particular religious belief is favored or preferred." (Kitzmiller, p. 15)

As Judge Jones pointed out (p. 46) all these precedents imply that "[T]he Establishment Clause forbids not just the explicit teaching of religion, but any governmental action that endorses or has the primary purpose or effect of advancing religion."

There would be no problem under these guidelines (as I read the law) with a philosophy course that examined, in a neutral way, the religious beliefs of people. There would be no problem in discussing in a history or social studies course the role that Christianity played in the American political process or the role that Islam played in the development of the Middle East. In fact, it would be hard to keep religion out and teach those topics in a meaningful way.

A problem only arises if you use a course to promote religion in general or a specific religious point of view. Now we see more clearly why the El Tejon policies were problematic. It is not how a course is labeled (whether science or philosophy) that is at issue, it is how the course is taught. The El Tejon course was explicitly advocating a particular religious point of view, that of young Earth creationism. And the people at the Discovery Institute (rightly I think) saw that this would be easily ruled unconstitutional. And since the course dragged in IDC ideas as well, a negative ruling on this case would be interpreted as meaning that IDC ideas should not be allowed even in philosophy classes, which would be a huge public relations setback for them.

This is why they must have breathed a huge sigh of relief when the El Tejon school board decided to cancel the course.

Next in this series: Another domino falls in Ohio

POST SCRIPT: First amendment freedoms and the Simpsons

A survey of 1,000 Americans has found that just one in 1,000 people could name all five freedoms guaranteed by the First Amendment (religion, speech, press, assembly, and petitioning the government for a redress of grievances), while 22% of Americans could name all five Simpsons characters.

I am shocked by this result. The Simpsons have been on TV for 17 years. Surely more than 22% should be able to name all five characters?

April 26, 2006

Bush, language, and ideas

The professor of English came into the classroom and gave the assembled students an essay and asked them to critique it. The students went at it with gusto, gleefully pointing out the many grammatical errors, the poor choice of words, the terrible syntax, the non sequiturs, the poor construction of the argument, the awkward metaphors, the lack of attributions and citations, and so on. They were unanimous in concluding that it was an extremely poor piece of writing.

When they were done, the professor quietly told them that he was the author of the piece. The students were stunned into embarrassed silence by this revelation. They sank even further into their seats as the professor said that he had worked long and hard over many days at writing it.

The professor finally said: "The reason it took me so long to write this was that it was really difficult for me to incorporate into it all the errors that you pointed out. What amazes me is that all of you seem to so easily write this way all the time!"

I was reminded of this old joke when George Bush recently made a statement defending Donald Rumsfeld against calls for his resignation as Secretary of Defense. Bush said: "I hear the voices, and I read the front page, and I know the speculation. But I’m the decider. And I decide what is best. And what’s best is for Don Rumsfeld to remain as secretary of defense."

This was the latest in the never-ending stream of Bushisms (see here and here for regularly updated lists) which have been the source of much amusement and parody. For example, watch this Daily Show clip about the exploits of the comic book superhero The Decider and listen to this clever parody based on the Beatles' song I am the walrus, which has the lyrics:

I am me and Rummy's he, Iraq is free and we are all together

See the world run when Dick shoots his gun, see how I lie

I'm lying. . .
Sitting on my own brain, waiting for the end of days

Corporation profits, Bloody oil money

I'm above the law and I'll decide what's right or wrong
I am the egg head, I'm the Commander, I'm the Decider

Koo-Koo-Kachoo. . .

While I am amused by these things, I must admit that I am also puzzled by Bush's repeated run-ins with the English language.

The reason for my bemusement is that Bush is the product of a rich and well-educated family. His father was President, his grandfather was a senator from Connecticut and a trustee of Yale University. Bush himself was educated at expensive private prep schools, majored in history at Yale, and obtained an MBA at Harvard. All his life, he has presumably been surrounded by people who have been well educated and would not have routinely made the kinds of speaking mistakes he does. Even if he was not a good student and shrugged off his teachers and classes secure in the knowledge that his privileged background and connections would enable him to get ahead, surely the way that the people around him spoke would have rubbed off on him more than it seems to have done.

First some caveats. You do not need a high level of formal education to speak or write well. There are many, many people who left school early and yet have developed a command of language because they love words, read a lot, and care about what they say. It also goes without saying that you also do not need to come from a wealthy background in order to learn how to speak well. And, conversely, not being able to speak well does not mean that you have nothing important to say. Formally correct language and quality of content are not correlated.

No, the puzzle with Bush is how he could have spent his entire formative years surrounded by formally educated people who belonged to affluent society and would have spoken grammatically correct English, and yet still speak the way he does.

At one time, I thought that he was faking it. I thought that he deliberately cultivated this down-home, aw-shucks, country-boy language, swagger, and attitude as a political strategy to appeal to 'regular' voters and avoid revealing his elitist background. After all, recall that 'rancher Bush', the person who seems to be perpetually clearing brush in Texas, bought his Crawford ranch only in 2000 when he was running for president.

But now I am not so sure. As the anecdote that begins this essay points out, it is hard for someone who speaks and writes well to make the kinds of mistakes that not-so-literate people do. Writers like Mark Twain and William Faulkner and Charles Dickens took a lot of trouble to recreate the language of their uneducated characters to make them sound authentic. To be convincing at it requires a writer to have a good ear for language and to study closely the speech patterns of the people he is writing about. There is no indication whatsoever that George Bush is willing to put in the hard work that this would require, although he loves to talk about 'hard work' and what hard work it is to be president.

There is also something artless in his Bushisms that makes them sound natural. Someone who was faking a lack of formal education would likely say more obvious things like "ain't" and drop his g's. Except for saying "nukular," Bush's Bushisms are quite original and have a genuine air of spontaneity, not the kinds of things that sound like they were planned ahead. Could a speechwriter have come up with something like "I hear the voices, and I read the front page, and I know the speculation. But I’m the decider."? It is not crazy-wrong or stupid-wrong. Instead it sounds like something that a small child would say (like "gooder" instead of "better"), because children say things that they extrapolate from correct speech.

For another example, consider this quote from Bush speaking to the press on May 30, 2005:

"It seemed like to me they based some of their decisions on the word of -- and the allegations -- by people who were held in detention, people who hate America, people that had been trained in some instances to disassemble -- that means not tell the truth."

The problem is not that he misspoke and used the word "disassemble" when he meant 'dissemble.' All of us make slips like that, and people who are always in the spotlight and being recorded, as he is, are bound to be caught doing so numerous times. The problem is that after using the wrong word, he proceeds to patronizingly explain to the listeners, like a smug child would, what the fairly common word he should have used means. If you listen to the audio (scroll down to May 31, 2005) of him speaking, you will hear him stretch the word 'disassemble' out, as if he was proud of using it.

(Amusingly, I read that "dissemble" was Dictionary.com’s “Word of the Day” for May 30, 2005, the very day that Bush mangled it. Did he learn that word from that site that day, decide to take it out for a test drive, and crash? The fact that he gave the definition suggests that he had just learned a new word and assumed that it might be new to others too.)

So what could be the cause of all this? It could be that Bush has some kind of cognitive disability. It may be something that developed later in life because it seems to be getting worse with time. (Just yesterday I heard him on the radio pronounce the word 'heinous' as 'heinious.') I saw a video clip of him in a debate when he was running for governor of Texas and he was very articulate. I do not recall hearing that he was famous for Bushisms when he was younger.

This is not a trivial issue about verbal gaffes and slips. Inaccurate use of language can often signal a speaker's desire to hide the truth, to dissemble. When we seek to deceive, it requires us to use words in ways that hide their meaning, and we can end up saying idiotic things. Careless language can also be a sign of sloppy thinking, of ideas churning around in the mind of the speaker in an incoherent mess, the fractured speech reflecting the fact that he is speaking before his mind has formulated the thought. Neither of these things is comforting when we are talking about someone who has the kind of power that Bush does, whose words matter, and who can create, and has created, so much misery and destruction.

As George Orwell wrote in his classic essay Politics and the English Language:

Now, it is clear that the decline of a language must ultimately have political and economic causes: it is not due simply to the bad influence of this or that individual writer. But an effect can become a cause, reinforcing the original cause and producing the same effect in an intensified form, and so on indefinitely. A man may take to drink because he feels himself to be a failure, and then fail all the more completely because he drinks. It is rather the same thing that is happening to the English language. It becomes ugly and inaccurate because our thoughts are foolish, but the slovenliness of our language makes it easier for us to have foolish thoughts.

In this day and age, we cannot really afford to have political leaders who indulge their foolish thoughts.

POST SCRIPT: How the eye evolved

For a long time, it was argued that the eye was too complex, precise, and subtle to have evolved by natural selection. It was the prime example used by anti-evolutionists in the period soon after Darwin's ideas were introduced.

(Kirk Cameron still invokes the eye as an example of design, though why he bothered to do so just after his friend's dazzling revelation of the much more superbly, and clearly intelligently, designed banana is a mystery. See the Post Script to this post to understand what I am talking about.)

Evolutionists have countered this by pointing out how the eye might have evolved, the stages that it would have gone through along the way, and the fact that some organisms still exist that display these interim forms because further evolution was not necessary for them.

This video provides a nice step-by-step explanation of the evolutionary process, with visual aids, showing how each succeeding step provides additional benefits to the organism.

April 25, 2006

Dover's dominoes-5: A Dover domino falls in California

(See part 1, part 2, part 3, part 4.)

The first domino to fall as a result of the Dover verdict was in California, where a teacher had decided to create a new optional philosophy class that would promote IDC ideas. This decision was interesting because the people behind it seemed to have drawn the lesson from the Dover ruling that while it was problematic to teach IDC ideas in science classes, it was acceptable to teach them in philosophy courses. As the LA Times reports (link to original article no longer working):

At a special meeting of the El Tejon Unified School District on Jan. 1 [2006], at which the board approved the new course, "Philosophy of Design," school Supt. John W. Wight said that he had consulted the school district's attorneys and that they "had told him that as long as the course was called 'philosophy,' " it could pass legal muster, according to the lawsuit.

The course description is revealing:

Philosophy of Intelligent Design: "This class will take a close look at evolution as a theory and will discuss the scientific, biological, and Biblical aspects that suggest why Darwin's philosophy is not rock solid. This class will discuss Intelligent Design as an alternative response to evolution. Topics that will be covered are the age of the earth, a world wide flood, dinosaurs, pre-human fossils, dating methods, DNA, radioisotopes, and geological evidence. Physical and chemical evidence will be presented suggesting the earth is thousands of years old, not billions. The class will include lecture discussions, guest speakers, and videos. The class grade will be based on a position paper in which students will support or refute the theory of evolution."

The biography of the teacher who proposed and would teach the course says that she has a B.A. degree in Physical Education and Social Science, with an emphasis in Sociology and Special Education. There is no mention of science or philosophy expertise. The reading list and guest speakers seemed to be weighted heavily towards young-Earth creationist ideas.

Some parents objected to this course and immediately filed suit to stop it. Once again, the Discovery Institute had a mixed response, reflecting the confusion in the IDC camp after the Dover verdict. On the one hand they cried foul, saying that those opposed to IDC were hypocritically moving the goalposts after their Dover victory:

Clearly, American’s United for Separation of Students from Science is singing a different tune now than they did last year during the Dover trial.

Then they wanted to outlaw mentioning intelligent design in science classes. Now they want to ban it from all classes.

Then, they said intelligent design was an okay topic for philosophy classes. Now, they claim intelligent design is not suited for any classes.

Then, the Rev. Barry Lynn, executive director of Americans United for Separation of Church and State, was saying specifically about intelligent design that: "when it comes to matters of religion and philosophy, they can be discussed objectively in public schools, but not in biology class."

On the other hand, about the same time, the Discovery Institute itself made a presentation to the El Tejon school board urging that the course be dropped. They were obviously concerned that this school policy, like that of the Dover board, would be another clumsy attempt that would set back their careful strategy even further because this course mixed young earth creationism with IDC. As their attorney Casey Luskin said (echoing Johnny Cash's Folsom Prison Blues):

There is a legal train coming at you and we can see it coming down the tracks. Unfortunately this course was not formulated properly in the beginning, and students were told it would promote young earth creationism as fact. Thus, the only remedy at this point to avoid creating a dangerous legal precedent is to simply cancel the course.
. . .
[I]f you do not cancel this course, and if you let this lawsuit go forward, you are going to lose and there will be a dangerous legal precedent set which could threaten the teaching of intelligent design on the national level. Such a decision would also threaten the scientific research of many scientists who support intelligent design.

Because of the young earth creationist history of this course, this course is not legally defensible and it should be cancelled.

The 'legal train' Luskin spoke of that was bearing down on them was presumably powered by the locomotive of Judge Jones' Dover decision, showing how seriously the people at the Discovery Institute viewed that result. On January 17, 2006, less than three weeks after the course was authorized, it was cancelled.

Luskin was concerned that by co-mingling young Earth creationism (which the Supreme Court had already ruled in 1987 was a religious belief) with IDC, the courts would again rule against the school board and that IDC would be dragged down along with creationism. Since the whole Discovery Institute strategy had been to carefully formulate IDC so as to cleanse it of any creationist taint, they must have been tearing their hair out at the people of Dover and El Tejon, who were clumsily mixing the two up again. With friends like these, they definitely don't need any enemies.

This is always the problem with such stealth strategies. They depend for success on the followers being given nods and winks that tell them not to take the words at face value, that they are merely codes. This works as long as you are dealing with those already in the know, who understand how and why the game is being played this way, and who are willing to trust what you say. But there has always been some tension between the smooth-talking sophisticated IDC strategists and their more plain-spoken base, who were getting increasingly impatient with this careful avoidance of any mention of god. From the latter group's point of view, the US is a Christian country, god belongs everywhere, and if IDC is meant to get god back in the schools, then why not come right out and say so? What is the point of all this tap-dancing around the mention of god?

The El Tejon school board clearly thought they had a winning strategy by explicitly labeling theirs as a philosophy course. Furthermore it was optional. So on the surface, it seemed as if there should be no problem with it. After all, even the most die-hard evolution supporter and anti-IDC militant would concede that religious ideas might be perfectly appropriate for a philosophy, history, or social studies course. It would be hard for any teacher of those subjects to avoid discussing the influence of religion in the social and political histories of nations, and many do so all the time without any legal challenge.

The only people who might express some concern about the El Tejon plan are philosophy and social studies teachers worried that their curricula are being converted into dumping grounds for the pet causes of religious activists. But while they might find that annoying, it would not be a basis for rejection on legal or constitutional grounds.

So, despite the links with creationism, why was the Discovery Institute so nervous about this course going forward that it sought to have it cancelled? To understand this, one has to look at a long series of precedents set by the US Supreme Court on the question of the separation of church and state, especially as it pertains to education. Judge Jones in his Dover ruling traced the history of these rulings, which I will review in the next posting.

POST SCRIPT: Proof of Intelligent Design revealed!

Forget the bacterial flagella. Who knows what those are anyway? The proof of intelligent design has been right in front of us all the time, in the produce section of the grocery store no less, but we just did not realize it. Want to see why two evangelists refer to it as 'the atheist's nightmare'? Go here, drag the cursor to the 3:30 mark, watch until the 4:36 mark, and the proof will be revealed. Oh, I've been so blind. (Thanks to Aetiology.)

The clip is from a TV show hosted by an evangelist named Kirk Cameron. His guest who revealed this evidence for god had an English accent. While I was watching the clip I was reminded of the Monty Python sketch where John Cleese is an Army sergeant training his recruits on how to defend themselves if an assailant should attack them armed with fresh fruit.

If I tried to create a parody of intelligent design, I could not come up with something more hilarious than what is shown in this clip. Which raises the questions: Do these people have no sense of how ridiculous such arguments make them seem? How do they expect to be taken seriously? I wonder how the Discovery Institute people view such laughable attempts at providing 'arguments' in support of their cause.

April 24, 2006

Reagan's welfare queen

Former President Reagan had a tendency to invoke anecdotes, his own guesses, and even made-up stuff as 'evidence' for his preferred political positions. For example, he is famous for saying things like "Trees cause more pollution than automobiles do" (to support his position that more stringent automobile emission controls were not necessary) or (presumably to oppose any gun control legislation) that "In England, if a criminal carried a gun, even though he didn't use it, he was not tried for burglary or theft or whatever he was doing. He was tried for first degree murder and hung if he was found guilty." When his spokesperson was told that this statement about English gun law was just false, he said: "Well, it's a good story, though. It made the point, didn't it?"

As a result of this practice, Reagan's assertions were sometimes treated as less than credible. One such claim that he used to support his attempts to discredit the welfare system was his story of the 'welfare queen,' about a woman who lived in a mansion and drove a Cadillac and wore mink coats, all on income derived from defrauding the welfare system by using false names and imaginary children.

This welfare queen story provoked considerable skepticism since he never actually provided any specifics about who she was and the details kept changing. Even after he died in 2004, an obituary by David Shribman in the Boston Globe on June 6, 2004 said: "[Reagan] was a colorful character, both a spinner of anecdotes (many of which were apocryphal) and a spawner of anecdotes. He once said that scientific studies suggested that a substantial amount of air pollution was produced by trees, and he loved to tell stories about a "welfare queen," never identified, who unlawfully used food stamps to buy alcohol."

But in this latter case, Reagan was right, at least partially. A colleague of mine, alerted by one of his students, pointed me to the story of Dorothy Woods, a woman who seemed to fit Reagan's description. A search of the Lexis-Nexis database turns up a New York Times article from December 21, 1980 that says:

She is reported to own Rolls-Royce, Mercedes Benz and Cadillac automobiles, and the Superior Court documents say she allegedly filed fraudulent public assistance claims for 38 nonexistent children
. . .
Investigators also said that records had been found in her home that indicated that the couple owned 100 to 120 rental units, property in Chicago, and other real estate including two homes on Prospect Boulevard in Pasadena, each estimated to be worth $250,000 to $300,000.

The couple lived a lavish lifestyle and it seems that she and her husband were otherwise wealthy people. It is not clear why they also resorted to welfare fraud. Clearly, the amount she stole from welfare ($377,000 over seven years) would not have been enough to purchase all these things and sustain this kind of lifestyle, so Reagan's implication that it was welfare fraud that enabled otherwise poor people to live like this was not correct. This welfare scam seemed like extra pocket money for rich people who got even greedier and defrauded the government. In that sense, she is not much different from other rich people who defraud the government in other ways.

Woods was sentenced in 1983 to eight years in prison. Queen Latifah is reportedly starring in a film based on her life to be released soon.

The evolution of such stories is interesting because it shows how dangerous and slippery and unreliable our own memories are and how we must be vigilant and especially on our guard against uncritically accepting as 'facts' those things that fit our preconceptions.

Let's speculate and see if we can recreate the genesis of this particular story. Given that Ms. Woods lived in Pasadena (close to Hollywood where Reagan lived and worked), this story must have been reported locally and Reagan must have seen or heard or read about it. Since it involved welfare fraud, and Reagan was opposed to the welfare program, this story would have resonated with him and he would have remembered it. All of us are prone to remembering those things that support our preconceptions. At the time, the story would not have made national headlines because its main point "Rich people defraud the government!" is hardly news.

But over time, the discrepant details of the story, which showed that this was a fraud perpetrated by a rich person, must have disappeared from his mind, and what remained was the story that he wanted to believe, that the welfare system was too lax and too generous and that it enabled shiftless, lazy, poor people to live luxuriously at taxpayers' expense. Again, this is a natural tendency to which we are all prone. I have written before about the ability of our memories to play such tricks on us.

Many people (including me) were skeptical of this story and dismissed it, since we knew that Reagan was cavalier with facts in general. We knew that he was opposed to the welfare system and so we did not take his welfare queen story seriously, similar to the way we dismissed his statements about polluting trees.

How can we prevent this kind of sloppiness? This is where the academic practice of providing supporting evidence and documentation for assertions is important. If you can cite sources for your assertions with names, dates, places, and numbers, those assertions become more credible. If you do that, then people who disagree with you are obliged to investigate the information you provide to see if it really means what you say it does. If, however, you simply say things like "I heard that. . ." and "People say. . ." then the listener really has no idea if you should be taken seriously or not, and they will go by your reputation for reliability or lack of it.

In general, the burden of proof rests with the person who makes the assertion to provide at least some positive evidence in support of it, as otherwise the people who challenge it are left with the impossible task of proving a negative.

I will discuss this question of the burden of proof more thoroughly in a future posting, since it is an important factor in weighing the merits of competing claims.

POST SCRIPT: Politicians live in a different world

I have written before about how surprised I am at the willingness of politicians to be bribed by lobbyists with petty things like golf games and meals and tickets to sporting events. I asked why they would do this since surely they could afford these things on their own salaries.

I was reminded of just how out of touch I am with the lifestyles of the rich and famous by the recent story of Katherine Harris, a congresswoman from Florida and now also the Republican candidate for US Senator. Political junkies may remember her as that state's Secretary of State in 2000, at the heart of the shenanigans during that infamous election. It has been alleged that she was instrumental in helping George Bush win that state.

Harris' current senate campaign has been hit by one setback after another, described as a 'train wreck,' with staffers abandoning it in droves. The latest scandal is a dinner that she had with the lobbyist Mitchell Wade, one of those who pleaded guilty to bribing Republican congressman Duke Cunningham of San Diego, who had to resign his seat and go to jail.

Apparently the dinner for Wade and Harris (paid by Wade) cost $2,800 as the Orlando Sentinel reports, and adds "House rules forbid members from accepting gifts worth $50 or more[.]"

My first reaction to this story was to question how a meal for two people could possibly cost so much. What could you possibly eat that was so expensive?

My problem in understanding was that when I think of a 'meal' or 'dinner', I think of food. I hardly ever drink alcohol, and it is apparently this that can be the big ticket item. Certain wines can be really expensive, and this is what apparently drove up the price.

Since I could not, for the life of me, tell the difference between a bottle of wine that costs $10 and one that costs $1,000, plying me with such things would be a waste. But I wonder how many of our congresspersons can really tell the difference either? Are they simply flattered by the mere fact that someone is paying so much to please them? But if their palates could not tell them that the wine was really expensive, wouldn't that require the briber to point out to the bribee the price of the wine in order to make the attempted bribe work? "Here, have another glass of this $2,000 wine." Wouldn't that be somewhat tacky?

April 21, 2006

Dover's dominoes-4: How IDC lost in the Dover case

(See part 1, part 2, part 3.)

The stage was thus set in Dover, PA for what turned out to be an unequal contest in the courtroom of US District Judge John E. Jones III. Matthew Chapman, a great-great-grandson of Charles Darwin, attended the trial and provides an amusing description of its proceedings, the personalities involved, and the events in the town of Dover leading up to the trial. In his account God or Gorilla: A Darwin descendant at the Dover monkey trial in the February 2006 issue of Harper's Magazine, he describes how the plaintiffs' team of lawyers, headed by the ACLU, seemed to have the resources and materials at their fingertips, while the Thomas More lawyers looked inadequately prepared and under-resourced, even having to borrow the expert audio-visual services available to the plaintiffs.

Describing the plaintiffs' (i.e., the people challenging the school board's IDC policy) legal team, Chapman writes: "Here then was a team of highly skilled professionals operating in an atmosphere of frictionless amiability. Here was a collegiate machine," while looking at the defense team "one was reminded more of a dysfunctional family with a frequently absent father." (The 'father' in this case was Thomas More head Richard Thompson, who would be there for a few days and then disappear for a week.)

But more serious than the imbalance in legal resources was the fact that by introducing IDC ideas explicitly into their policy, the Dover school board exposed them to close scrutiny and, under cross-examination, those ideas did not fare well. As I have written earlier, one of the expert witnesses who did appear on behalf of the defense was biochemist Michael Behe, probably the main scientist of the IDC movement, author of Darwin's Black Box and creator of the five cases of 'irreducible complexity' on which the credibility of IDC hinges. Behe was cross-examined in a way that he never encounters when he speaks with IDC-friendly audiences and journalists and TV talk show hosts. As a result, he was forced into several damaging admissions, to the extent of even admitting that changing the definition of science to include IDC would also result in astrology being considered science. The judge repeatedly quoted his testimony as evidence for why he ruled against the defendants, which is somewhat ominous for IDC ideas if they should venture into the courtroom again.

As almost everyone knows by now, the judge ruled overwhelmingly in favor of the plaintiffs, arguing that the Dover action was unconstitutional. He was unsparing in his criticism of the school board, saying "The breathtaking inanity of the Board’s decision is evident when considered against the factual backdrop which has now been fully revealed through this trial. The students, parents, and teachers of the Dover Area School District deserved better than to be dragged into this legal maelstrom, with its resulting utter waste of monetary and personal resources." (p. 138)

He was also clearly angered by the outright lying by some of the Board members in their testimony, saying "The citizens of the Dover area were poorly served by the members of the Board who voted for the ID Policy. It is ironic that several of these individuals, who so staunchly and proudly touted their religious convictions in public, would time and again lie to cover their tracks and disguise the real purpose behind the ID Policy." (p. 137)

But what was most damaging to the IDC case was the fact that the judge had taken the time to analyze closely the important question of whether IDC was science or religion, a question that he could have avoided if he wished, since the unconstitutionality of the Board's actions did not depend on it. And his ruling that IDC was not science may end up being the most significant part of the verdict for the Discovery Institute's long-term goal of slowly bringing it into the schools. He said (p. 64):

After a searching review of the record and applicable caselaw, we find that while ID arguments may be true, a proposition on which the Court takes no position, ID is not science. We find that ID fails on three different levels, any one of which is sufficient to preclude a determination that ID is science. They are: (1) ID violates the centuries-old ground rules of science by invoking and permitting supernatural causation; (2) the argument of irreducible complexity, central to ID, employs the same flawed and illogical contrived dualism that doomed creation science in the 1980's; and (3) ID’s negative attacks on evolution have been refuted by the scientific community. As we will discuss in more detail below, it is additionally important to note that ID has failed to gain acceptance in the scientific community, it has not generated peer-reviewed publications, nor has it been the subject of testing and research.

Equally damaging (in a practical sense) was the judge's decision that the Dover school board should pay for the court costs of the plaintiffs, a pretty large bill for a small school board and one that will have a chilling effect on the aspirations of other school boards to try similar actions.

Although a single verdict by a US District Court judge (unlike rulings by appeals courts or the US Supreme Court) carries with it no formal legal weight outside his district, a comprehensive and broad verdict like this tends to be very influential if a similar case should occur elsewhere. To see how this happens, one can go back to the 1982 ruling in McLean v. Arkansas Board of Education, where the judge in that case ruled that legislating equal time for creationism in biology class was unconstitutional. This was again a US District Court ruling but one which was very detailed and comprehensive.

That ruling by Judge William Overton was influential in creating a similar result in neighboring Louisiana, and it was the latter case (Edwards v. Aguillard) that resulted in the US Supreme Court ruling against creationism in 1987, adopting much of the reasoning used by Judge Overton in McLean. In fact, the McLean ruling was influential even in the Dover case.

There is no doubt that the Dover verdict was a slam-dunk victory for the plaintiffs and a devastating defeat for the IDC side, much worse than they had feared. On September 30, 2005, before the trial, key IDC theorist William Dembski had made the following prediction on his blog:

As I see it, there are three possible outcomes:


  1. The Dover policy, in which students are informed that the ID textbook Of Pandas and People is in their library, is upheld.

  2. The Dover policy is overturned but the scientific status of ID is left unchallenged.

  3. The Dover policy is not only overturned but ID is ruled as nonscientific.

For what it’s worth, my subjective probabilities are that outcome 1. has about a 20% probability, outcome 2. has about a 70% probability, and outcome 3. has less than a 10% probability.

While Dembski was pessimistic about the outcome of the trial being favorable for their side (outcome 1), he still expected to salvage the notion that IDC could be science. The result being overwhelmingly outcome 3 may have unnerved Dembski so much that just six days after the Dover verdict, he announced that he was suspending his blog indefinitely. His blog has since reopened under new management, seemingly run by his more frequent commenters.

Judge Jones' verdict, which reads like a 139-page monograph on the history of attempts to overthrow evolutionary ideas in science classes and replace them with religious ones, will and should be read by everyone interested in church-state separation issues. I can imagine other judges are likely to read it for guidance if they should have to rule on similar cases.

The verdict immediately set off a series of actions in other parts of the country, as its implications were evaluated.

Next in this series: A domino falls in California.

April 20, 2006

Dover's dominoes-3: The Dover school board battles the Discovery Institute

(See part 1, part 2 here.)

The Dover school board members were encouraged to adopt their policy by the offer, when they encountered the inevitable legal challenge, of legal representation by the Thomas More Law Center, based in Michigan, which was "created in 1999 by Thomas Monaghan, founder of Domino's Pizza and a philanthropist for conservative Catholic causes." The center supports all kinds of religion-based social policies, and was eager to take on the teaching of evolution theory in schools. To give you some idea of how extreme this group's views are, the center's president and chief counsel, Richard Thompson, believes that:

Christianity is under siege from all quarters, but especially from the federal courts, the American Civil Liberties Union, and what Thompson calls the "homosexual lobby."

The ACLU and the courts are "basically cleansing America of religion and particularly Christianity," Thompson said. "It’s almost like a genocide. It’s a sophisticated genocide."

So it is clear that Thompson is a charter member (along with Bill O'Reilly) of the crazy cult that believes that it is Christians who are persecuted in the US. Anyone who uses the word "genocide" to describe the situation of Christians in the US clearly needs to lie down and take a nap until the fever passes.

The Thomas More center and the Dover school board were itching for a fight with those they saw as secular Darwinists. "Bring it on!" seemed to be their cry. Needless to say, the somewhat more sophisticated strategists at the Seattle-based Discovery Institute were not happy with their erstwhile allies in Dover shouting loudly about their blatantly religious motives. They could see their cautious, delicately-balanced, and expensive long-range plan, which depended upon carefully avoiding any mention of religion, falling apart because of the clumsy blundering of the Dover board, aided and even egged on by the Thomas More lawyers.

But once that die was cast and the Dover policy adopted, the Discovery Institute was placed in a quandary. The Thomas More center did not have the legal resources to mount the kind of sophisticated arguments necessary in such a case. Should the Discovery Institute completely disassociate themselves from the Dover school board's actions and distance themselves from the case as it went down to likely defeat? Or should they throw themselves into the fray as well, provide their own expert witnesses, pour all their considerable financial and legal resources into the case, and hope to snatch victory from the jaws of an otherwise almost certain defeat?

In the end they waffled, initially agreeing to be part of the case, and then backing out when the Thomas More Law Center did not want the Discovery Institute's own lawyers representing their clients. This caused bad feelings on both sides which spilled out into the open, as The Toledo Blade reported on March 30, 2006:

In fact, when Mr. Thompson decided to defend the Dover intelligent design policy, he angered the group most associated with intelligent design: the Discovery Institute, a conservative think-tank based in Seattle.

“We were incredibly frustrated by arrogance and bad legal judgment of goading the [Dover] school district to keep a policy that the main organization supporting intelligent design was opposed to,” says John West, the associate director of the Discovery Institute’s Center for Science and Culture.

The Thomas More Center acted “in the face of opposition from the group that actually represents most of the scientists who work on intelligent design.’’
. . .
In fact, these two prominent supporters of intelligent design couldn’t be much more at odds.

Mr. Thompson says the Discovery Institute bailed out on the Dover Board of Education when three of its experts refused to testify at the last minute, after the deadline for recruiting witnesses had passed.

But Mr. West says the whole thing was the More Center’s fault. Mr. Thompson wouldn’t let Discovery Institute fellows have separate legal representation.

The Discovery Institute has never advocated the teaching of intelligent design, and told the Dover board to drop its policy, Mr. West says. It participated in the trial only reluctantly.

“We were in a bind,” Mr. West says. “Our ideals were on trial even though it was a policy we didn’t support.”

Richard Thompson countercharges that the Discovery Institute people are essentially wimps, people who just talk a tough game but don't put their beliefs on the line when it counts:

Mr. Thompson says the Discovery Institute’s strategy is to dodge a fight as soon as one appears imminent.

“The moment there’s a conflict they will back away . . .they come up with some sort of compromise.” But in Dover “they got some school board members that didn’t want compromise.”

This intramural battle between two groups supposedly on the same pro-IDC side did not augur well for the trial that was scheduled when some Dover parents, led by Tammy Kitzmiller, challenged the constitutionality of the school board's decision.

The stage was now set for the courtroom confrontation.

Next in this series: Why the Dover school board lost the case.

April 19, 2006

Dover's dominos-2: The Dover school board undermines the Wedge strategy

(See part 1 here.)

The reason that Judge Jones' verdict in the Dover trial is likely to be so influential is the exhaustive nature of the testimony he heard and the depth, comprehensiveness, and scope of his ruling. In essence, the trial provided a place for IDC ideas to get a close examination under controlled conditions.

Prior to the trial, the case for and against IDC had been waged in the media, in legislative hearings, and in debates. As someone who has participated in many such things, I know that such forums can be places where key ideas get examined and focused. But this happens only if the participants want it to. Otherwise, skilled practitioners in those forums can evade tricky questions by diverting attention elsewhere, turning them into exercises in public relations and question-begging.

But in a trial, with its fairly strict rules of evidence, it becomes much harder to make unchallenged statements. If you assert something, you have to be able to back it up and you cannot evade the issues easily since you will be cross-examined.

To be frank, the Dover trial was from the beginning a bad situation for the IDC people, especially the strategists at the Discovery Institute. Their whole approach up to that point had been to run a stealth campaign, based on a clever public relations strategy. They carefully avoided talk of god as much as possible (at least in public). They did not even insist on teaching intelligent design in schools. Instead they adopted the strategy of asserting that evolution was 'just a theory,' that it had problems, that there was a controversy over some of its basic tenets, and that good science and teaching practices required that students be exposed to the nature of the controversy.

Their slogan "teach the controversy" had a certain appeal, since people have an intuitive sense of fairness, and the assertion that students should hear all sides of an issue is sure to strike a responsive chord. Thus opinion polls tend to consistently show majorities in favor of "teaching all sides" or "teaching the controversy."

Presumably, the long-term strategy of the Discovery Institute was to first have the ideas of evolution undermined in this way, then introduce IDC as an alternative to the now-discredited theory of evolution, then bring god back into science education, and finally put god (and prayer) back into public schools everywhere. They saw this as a slow, incremental advance, taking many years to reach its goal.

The nature of this long-term plan is not entirely speculation on my part. The basic elements are outlined in the Discovery Institute's "Wedge Strategy" document which placed the blame for society's decline on the advancement of materialistic thought, of which they claim Darwinism was a major component. The document says:

"The proposition that human beings are created in the image of God is one of the bedrock principles on which Western civilization was built. Its influence can be detected in most, if not all, of the West's greatest achievements, including representative democracy, human rights, free enterprise, and progress in the arts and sciences.

Yet a little over a century ago, this cardinal idea came under wholesale attack by intellectuals drawing on the discoveries of modern science. Debunking the traditional conceptions of both God and man, thinkers such as Charles Darwin, Karl Marx, and Sigmund Freud portrayed humans not as moral and spiritual beings, but as animals or machines who inhabited a universe ruled by purely impersonal forces and whose behavior and very thoughts were dictated by the unbending forces of biology, chemistry, and environment. This materialistic conception of reality eventually infected virtually every area of our culture, from politics and economics to literature and art.

The Wedge Strategy document says, among other things, that they seek "To replace materialistic explanations with the theistic understanding that nature and human beings are created by God."

But the former members of the Dover school board had no patience for this kind of subtlety and the slow, long-range plan envisaged by the Discovery Institute. They wanted god back in their schools and they wanted it now. So they created their own policy, which required students in biology classes to have a statement read to them that said, in part:

Because Darwin's Theory is a theory, it continues to be tested as new evidence is discovered. The Theory is not a fact. Gaps in the Theory exist for which there is no evidence. A theory is defined as a well-tested explanation that unifies a broad range of observations.

Intelligent Design is an explanation of the origin of life that differs from Darwin's view. The reference book, Of Pandas and People, is available for students who might be interested in gaining an understanding of what Intelligent Design actually involves.

In one stroke, the religious members of the Dover school board, thinking they were advancing god's work, destroyed the entire stealth strategy of the Discovery Institute. By explicitly naming and introducing IDC into the science class, they were inviting a court challenge and thus exposing it to direct judicial review, something the Discovery Institute had been carefully avoiding.

What is worse, they even advocated a book, Of Pandas and People, which had a blatantly creationist pedigree. The book has been around a long time, and in its earlier incarnations it freely used the word 'creationism.' The reason this book was a problem was that creationism (which roughly stood for the idea that god directly intervened in the creation of the world and its living things) had already been ruled by the US Supreme Court in 1987 to be a religious belief that had no place in public schools. After this setback, a 'new' edition of the book came out which seemed to differ from the earlier versions mainly in that someone had used the 'search and replace' function of their word processor to remove all references to the word 'creationism' and replace them with 'intelligent design.'

It was because of that same court decision that the Discovery Institute had carefully avoided any mention of creationism in its work. In fact, the entire wedge strategy was based on tailoring a policy that avoided all the features of religion mentioned in the landmark 1987 decision, and thus would hopefully pass future constitutional scrutiny.

But the Dover board's action made a hash of that strategy, because it mixed creationism, IDC, and opposition to Darwin into one entangled mess. To make it worse, the advocates of this Dover policy made no secret of the motives for their actions, and in school board meetings and other public forums spoke about how they were doing this so as to bring god back into the schools. (See Matthew Chapman's article God or Gorilla in the February 2006 issue of Harper's Magazine for interesting insights into what was going on in that small town before and during the trial.)

These actions were going to come back and haunt them during the trial.

Next in this series: Dover leads to intramural fights within the creationist camp.

April 18, 2006

Dover's dominos-1: Why Intelligent Design Creationism will lose

The Scottish poet Robert Burns in his poem To a Mouse cautioned those who place too much faith in detailed plans for the future. He said:

The best laid schemes o' Mice an' Men, 

Gang aft agley.

When historians of the future write about the demise of Intelligent Design Creationism (IDC), they will likely point to the Dover, PA court decision as the moment when the carefully thought-out plans and strategy of the IDC movement ganged agley in a big way.

If you recall, US District Judge John E. Jones III ruled on December 20, 2005 (Kitzmiller v. Dover) that the then Dover school board had acted unconstitutionally in its attempts to undermine the credibility of evolutionary theory in its biology class and in its attempt to promote IDC as a viable alternative. (See here for a previous posting giving the background to this topic.)

That case raised many fascinating issues, and the final ruling clarified and put in perspective much of what had been clouding the role of intelligent design, science, religion, schools, and the US constitution. This series of posts, which begins today, will analyze that decision and the ripples it has caused throughout the country. I had been meaning to analyze the decision and its broader implications in depth for some time but kept getting sidetracked by other issues.

I should emphasize that what I think will disappear is the pretense that IDC is a scientific theory, to be treated on a par with natural selection. The underlying idea behind IDC (that god intervenes somehow in the world) will remain because that is an important component of any god-based religious system and has been around for a long time.

From time immemorial, people have invoked god to explain the things that seemed inexplicable. The advancement of science has merely resulted in the items in the list of inexplicable things changing with time, while the fundamental idea behind it has not changed. One can understand the seemingly unshakeable appeal of this idea for most religious believers. What would be the point of believing in a god who either could not or did not intervene in the workings of the world? The fact that such interventions cannot be demonstrated conclusively will not dissuade devout religious believers from their beliefs.

The formulation of what is now called intelligent design goes at least as far back as Thomas Aquinas in the 13th century. Theologian John Haught's testimony (Kitzmiller, p. 24) described Aquinas' views as follows:

Wherever complex design exists, there must be a designer; nature is complex; therefore nature must have had an intelligent designer. Aquinas was explicit that this intelligent designer "everyone understands to be God."

Christian theologian and apologist William Paley elaborated on this in 1802 in his book Natural Theology when he illustrated the inference of design by god by the example of finding a watch in a field:

[W]hen we come to inspect the watch, we perceive. . . that its several parts are framed and put together for a purpose…the inference we think is inevitable, that the watch must have had a maker.

Paley continues that since nature is far more complex than watches, "The marks of design are too strong to be got over. Design must have had a designer. That designer must have been a person. That person is GOD."

IDC theorist William Dembski's 2003 formulation of intelligent design follows this trend, with the example of the watch updated to Mount Rushmore:

[W]hat about [Mount Rushmore] would provide convincing circumstantial evidence that it was due to a designing intelligence and not merely to wind and erosion? Designed objects like Mount Rushmore exhibit characteristic features or patterns that point to an intelligence. Such features or patterns constitute signs of intelligence. (emphasis in original)

(I have already written about the fallacy that lies behind this argument and will not repeat it here.)

This whole line of reasoning was based on the premise that the existence of seemingly designed objects or smart animals and people necessarily required an even smarter designer. That kind of regressive argument inevitably led you to believe in something like a god.

Darwin's theory was a direct challenge to this idea because it showed how life could bootstrap itself from primitive forms to increasingly complex and sophisticated ones. It revealed how you can have the appearance of design without any need for an actual designer. Thus the most intuitive argument for the existence of a higher being has been removed.

People like Daniel Dennett, author of Darwin's Dangerous Idea, and Richard Dawkins have relentlessly driven home the point that evolutionary theory has made belief in a god fundamentally unnecessary. As Dawkins says in his book The Blind Watchmaker (p. 6):

An atheist before Darwin could have said, following Hume: "I have no explanation for complex biological design. All I know is that God isn't a good explanation, so we must wait and hope that somebody comes up with a better one." I can't help feeling that such a position, though logically sound, would have left one feeling pretty unsatisfied, and that although atheism might have been logically tenable before Darwin, Darwin made it possible to be an intellectually fulfilled atheist.

I have written before of the story of the scientist-mathematician Laplace and his book The System of the World. Napoleon is said to have noted that god did not appear in it, to which Laplace is supposed to have replied, "I have no need for that hypothesis." Given the state of science at that time, Laplace might have felt fully justified in saying so about the inanimate physical world, but he would have been hard pressed to justify his claim for living things. Darwin was the one who later closed that gap.

Much of the religious opposition to Darwin's theory has been based on the claim that it promotes atheism. This is not quite correct. There are, after all, many religious biologists. What Darwin's theory does is remove whatever remaining necessity people might have felt for a god hypothesis, leaving it up to the individual to decide whether to believe in a god or not. Clearly, this removal of a major argument for the existence of god is likely to result in greater atheism, but the goal of those who teach evolutionary theory is not to promote atheism. It is to teach the best science.

Another reason that I think IDC ideas will fade away is that the five examples of 'irreducible complexity' that IDC advocates promote as giving proof of god's existence (because it is asserted that they cannot be explained by evolutionary processes) will eventually get chipped away and be explained by evolutionary theory. These five items are identified by IDC biochemist Michael Behe in his book Darwin's Black Box as the bacterial flagella and cilia, blood clotting mechanism, protein transport within a cell, evolution of the immune system, and metabolic pathways.

The process of explaining their creation using natural selection is already well underway with two of the five examples, blood clotting and the bacterial flagella.

The reason that I am confident that all these items will eventually be explained by science is the history of science itself. A similar process has happened with past seemingly inexplicable examples in nature (the stability of the solar system, the human eye, etc.) that have subsequently been explained. To be sure, one can always come up with new unexplained phenomena since no scientific theory ever explains everything, but once you start shifting your target, your case for god becomes progressively less persuasive.

This is why most sophisticated theologians warn against believing in such a 'god of the gaps.' They argue that doing so results in the maneuvering space for god's actions becoming steadily smaller. It also seems a bit strange that god would choose to act only in such esoteric situations.

Next in this series: The Dover school board inadvertently wrecks the IDC strategy.

April 17, 2006

On writing-5: The three stages of writing

(See part 1, part 2, part 3, and part 4 in the series.)

I believe that part of the reason students end up plagiarizing, either inadvertently or otherwise, is that they underestimate the time it takes to write. This is because they think that writing only occurs when they are actually putting words on paper or typing on a keyboard.

But writing really involves three phases: prewriting, writing, and post-writing.

Pre-writing probably takes the most time and often does not involve the physical act of writing at all. This is the time when the author is mulling things over in his mind, sorting ideas out, trying to find the main point he is trying to make, asking what kinds of evidence are necessary and what documents should be read for background, and seeking out those sources of information. It also involves (for some) sketching out an outline and making rough notes. It is during this process of slow digestion that you start the important task of synthesizing the ideas that you have got from many sources and making something of your own.

Prewriting is best done in a conscious but unrushed manner. For me, most of this prewriting is done in my head while doing other things such as walking or driving somewhere or doing routine chores or in those moments before falling asleep or just after waking. During those times, I am thinking of what to write, even to the extent of mentally creating actual lines of text, examples, and turns of phrase. I do this deliberately and consciously. In the SAGES Peer Writing Crew blog, Nicole Sharp says she thinks about writing while walking between classes, composing sentences in her head. This is an example of using time wisely, because some of the best ideas come to us when we are not consciously trying to generate them. It is to avoid interrupting this kind of prewriting that I have resisted carrying a cell phone or a Blackberry.

I think students may not appreciate how important this pre-writing phase is to writing. When given an assignment, they may wait until shortly before it is due and set aside a large block of time that they think is sufficient to write the five-page paper or whatever it is that is required. But then they hit a block and don't know what to say or how to say it, because they have not gone through the important pre-writing phase. Without being aware of it, they are trying to compress the pre-writing and writing phases into one. But when you try to do that, it is hard to find your own perspective on a topic. So you end up taking ideas from one or a few sources, mashing them together, and paraphrasing them to make the result look like your own, thus running the risk of plagiarizing.

Instructors are partly to blame for this. We may not be informing students of the importance of prewriting, and in fact may be undermining that practice by giving short deadlines that do not really allow much time for the kind of thoughtful contemplation it requires. I am not sure how to structure writing assignments in my courses so that students get in the habit of prewriting but it is definitely something I am going to pay more attention to in my next course.

The post-writing phase is equally important, but equally neglected. It involves much more than simply editing the work. Editing for me means simply tightening things up, checking the grammar, improving word choice, and avoiding stylistic ugliness. The more important aspect of post-writing is that once the writing phase has put my ideas into concrete form, I can keep returning to the text, probing it, looking to see how to make it better. This may involve restructuring the argument, providing more evidence, finding a fresh image to capture an idea, inventing a telling metaphor, or looking for better sources. I like to let time percolate through the words I have written, creating a richer text.

All these things are best done in a conscious but unrushed manner. As with the prewriting phase, most of this post-writing takes place in my mind while I am doing other things. But this requires that we set aside time for it after the writing phase. If we are rushing to meet a deadline, this will not occur.

It is only the writing and editing parts that actually take up any 'real' time. All the other things can be done while living one's life doing other things.

The pre-writing phase takes up the most time for me, followed by the post-writing phase, with the actual writing taking up the least time. When people ask me how long it took me to write either of my books, it is hard for me to answer. I usually say about six months because that is the time the actual writing phase took, and this is what people usually mean by 'writing.' But the prewriting phase that led up to each lasted much, much longer.

The same thing holds for these blog entries. The entire week's entries take me about five to ten hours total of actual writing, depending on the topic. But before I write them, I have done a lot of pre-writing on each topic, doing research, collecting notes and creating the structure in my mind, all done in bits and pieces scattered over time, so that when I actually sit down and write (the writing phase), the words and ideas come fairly easily.

I also write almost all the week's entries during the weekend prior to their posting. One reason for this practice is that the weekend is when I have more time to write. But the main reason is that after the writing is done, I have time to let my thoughts simmer and do some post-writing in my mind, enabling me to polish the entries during the week, before I actually post them.

The exceptions to this rule occur when something comes up in the news during the week that I feel impelled to respond to immediately, like the call center item last week or the Tiktaalik discovery the previous week. But even in these cases, the reason I can respond so promptly is that these topics have touched on something that I either care about a lot or know quite a bit about, which means that I have pretty much done the prewriting in my mind already, although I did not have a plan to actually write about it. I still leave some time for post-writing, even in these cases, usually by completing the writing the night before the morning posting.

But since students working on a short deadline do not have, or are not aware of the need to create, the time for pre- or post-writing, they end up producing work that is of lower quality than they are capable of. The challenge for instructors and students is how to help students become aware of the immense importance of the prewriting and post-writing phases, and how to structure assignments and deadlines so that students get used to doing this work and have the time to do it.

Peter Elbow, in his book Everyone Can Write, gives some valuable advice. He recommends that writers create two distinct mindsets when writing. One mindset is a very accepting one, where any idea that comes into one's head, any sentence, any image or metaphor, is accepted by the author as being wonderful and written down or stored away for use. This attitude is great in the prewriting phase, because it enables you to generate a lot of ideas.

The second mindset is the critical one, where we evaluate what we have written and ask whether it is worth retaining, whether it should be improved upon, or whether it could be phrased better. This is best done in the post-writing phase.

Many of us get stuck in our writing because we are trying to do both things simultaneously. An idea comes into our head and we immediately start to analyze or critique it wondering whether it should be included or not. This blocks our progress and we get stuck.

Of course, none of these distinctions can be really rigid. When we are critiquing an idea in the post-writing phase, that might generate a new idea, and we have to switch back to the accepting mode. But being aware that the accepting attitude and the critical attitude are to be adopted separately, as the need arises, can prevent one from having that awful feeling of thinking that one has 'nothing to say.' We all have something to say. It is just that we do not know if it is worth saying. It helps to postpone that judgment.

Realizing that we need to say whatever is on our minds and only later judge whether it is worth saying is a good habit to cultivate.

This series of postings on writing is, in itself, an illustration of how writing grows. I had initially only meant to write about the plagiarism issue, triggered by the Ben Domenech fiasco in the Washington Post. But as I wrote about it, the topic branched off into many related areas, and ideas occurred to me that were not there when I started.

So I guess the lesson to be taken from all this is that you should just start writing about anything you care about, and see where it goes. You will probably be surprised at where you end up.

POST SCRIPT: Where the religious people are

Ever wondered where Catholics are most concentrated in the US? How about Mennonites? Jews? Muslims? Lutherans? Well, now you can find out with this series of maps that shows, county by county, the density of populations of the various religious denominations.

It did not provide a breakdown for atheists. This is because the numbers came from the membership lists of religious institutions in each area, and atheists don't have formal groups. What was interesting, though, was that there were a surprisingly large number of counties where the total proportion of religious adherents of any stripe was less than 50%.

April 14, 2006

Squeezing workers to the limit

So there you are in a fast food drive-through, waiting for the people in the car ahead of you to place their order. They do so and move on, and you slowly move up to the speaker. It takes about 10 seconds for this shifting of cars to take place. Haven't you wondered what the person at the other end of the speaker is doing with that 10 seconds of downtime? Me, neither.

But the good folks at fast food corporate headquarters care. They worry that the employee may be goofing off, perhaps drinking some water, thinking about their children or friends, what to make for dinner later, perhaps even thinking about how they can climb out of this kind of dead-end job. Committed as the corporate suits are to maximizing employee productivity, they feel that those 10 seconds between cars could be put to better use than to allow idle thoughts. But how?

Enter the internet. What if you outsourced the order taking to someone at a central location, who then enters the order into a computer and sends it back via the internet to the store location where you are? The beauty of such a situation is that the person at the central location could be taking an order from another store somewhere else in the country in the 10 second interval that was previously wasted. Genius, no?

Sound bizarre? This is exactly what McDonald's is experimenting with in California. The New York Times on April 11, 2006 reports on the way the process works and on one such call center worker, 17-year-old Julissa Vargas:

Ms. Vargas works not in a restaurant but in a busy call center in this town [Santa Maria], 150 miles from Los Angeles. She and as many as 35 others take orders remotely from 40 McDonald's outlets around the country. The orders are then sent back to the restaurants by Internet, to be filled a few yards from where they were placed.

The people behind this setup expect it to save just a few seconds on each order. But that can add up to extra sales over the course of a busy day at the drive-through.

What is interesting about the way this story was reported was that it was focused almost entirely on the technology that made such a thing possible, the possible benefits to customers (saving a few seconds on an order) and the extra profits to be made by the company. "Saving seconds to make millions" as one call center executive put it.

There was no discussion of the possible long-term effects on the workers, or the fact that the seconds are taken from the workers' lives while the millions are made by the corporation and its top executives and shareholders. This is typical of the way the media tend to underreport the perspective of the workers, especially low-paid ones.

Look at the working conditions under which the call center people work, all of which are reported as if they were nifty innovations in the business world, with no hint that there might be anything negative about these practices:

Software tracks [Ms. Vargas'] productivity and speed, and every so often a red box pops up on her screen to test whether she is paying attention. She is expected to click on it within 1.75 seconds. In the break room, a computer screen lets employees know just how many minutes have elapsed since they left their workstations
. . .
The call-center system allows employees to be monitored and tracked much more closely than would be possible if they were in restaurants. Mr. King's [the chief executive of the call center operation] computer screen gives him constant updates as to which workers are not meeting standards. "You've got to measure everything," he said. "When fractions of seconds count, the environment needs to be controlled."

This is the brave new world of worker exploitation. But in many ways it is not new. It is merely an updated version of what Charlie Chaplin satirized in his 1936 film Modern Times, where workers are given highly repetitious tasks and closely monitored so that they can be made to work faster and faster.

The call center workers are paid barely above minimum wage ($6.75 an hour) and do not get health benefits. But not to worry, there are perks! They do not have to wear uniforms, and "Ms. Vargas, who recently finished high school, wore jeans and a baggy white sweatshirt as she took orders last week." And another plus, she says, is that after work "I don't smell like hamburgers."

Nowhere in the article was there any sense of whether it is a good thing to push workers to the limit like this, to squeeze every second out of their lives to increase corporate profit. Nowhere in the article was there any sign of the journalist asking whether it is ethical or even healthy for employees to be under such tight scrutiny, where literally every second of their work life is monitored. It is an example of how the media have internalized the notion that what is good for corporate interests must be good for everyone. Just because you work for a company, does that mean it owns every moment of your workday? Clearly, what these call centers want are people who are facsimiles of machines. They are not treating workers as human beings who have needs other than to earn money.

In many ways, all of us are complicit in the creation of this kind of awful working situation, by demanding low prices for goods and unreasonably quick service while not looking closely at how those prices are driven down and that speed achieved. How far are we willing to go in squeezing every bit of productivity from workers at the low end of the employment scale just so that the rest of us can save a few cents and a few seconds on a hamburger and also help push up corporate profits? As Voltaire said many years ago, "The comfort of the rich depends upon the abundance of the poor."

The upbeat article did not totally ignore what the workers thought about this, but even here things were just peachy. "Ms. Vargas seems unfazed by her job, even though it involves being subjected to constant electronic scrutiny." Yes, a 17-year-old woman straight out of high school may not be worn out by this routine yet. In fact the novelty of the job may even be appealing. Working with computers may seem a step up from flipping hamburgers at the store. But I would like to hear what she says after a year of this kind of work.

This kind of story, with its cheery focus on the benefits accruing to everyone except the worker, and its callous disregard for what the long-term effects on the workers might be, infuriates me.

I have been fortunate to always work in jobs where I had a great deal of autonomy and where just thinking and even day-dreaming are important parts of the work, because that is how ideas get generated, plans are formulated, and programs are envisaged. But even if people's jobs do not require much creativity, that is not a reason to deny them their moments of free thought.

April 13, 2006

On writing-4: The role of originality

(See part 1, part 2, and part 3 in the series.)

So why do people sometimes end up plagiarizing? There are many reasons. Apart from the few who deliberately set out to do it because they are too lazy to do any actual writing of their own and lack any compunction about plagiarizing, I believe most end up doing it out of the fear that they are expected to say something that is interesting, original, and well written, usually (in the case of classroom assignments) about topics that they have little or no interest in.

This is a highly inflated and unrealistic expectation. I doubt that more than a few college or high school teachers really expect a high level of originality in response to classroom assignments, though that does not mean one should not try to achieve it.

A misplaced emphasis on originality creates unrealistic expectations that can cause insecure writers to plagiarize. I think that students who end up plagiarizing make the mistake of thinking that they must start by coming up with an original idea. Few people (let alone students who usually have very little writing experience) can reach such a high standard of originality. This is why they immediately hit a wall, lose a lot of time trying to get an idea, and in desperation end up plagiarizing by finding others who have said something interesting or relevant and "borrowing" their work. But since they want the reader to think that they have done the writing, they sometimes hide the borrowing by means of the 'pointless paraphrase' I wrote about previously.

Originality in ideas is often something that emerges from the writing and is not prior to the writing. A blindingly original idea may sometimes strike you, but this will be rare even for the most gifted and original writers. Instead, what you will usually find is a kind of incremental originality that emerges naturally out of the act of writing, where you are seemingly doing the mundane task of putting together a clear piece of writing using other people's (cited) ideas. If you are writing about things that interest you, then you will be surprised to find that the very act of writing brings about something original, where you discover new relationships between old ideas.

As an instructor, what I am really looking for in student writing is something that just meets the single criterion of being well written. As for being interesting, all I want is to see that at least the writer is interested in the topic, and the evidence for that takes the form of the writer making the effort to try and convince the reader of the writer's point of view. This seems like a modest goal but if followed can lead to pretty good writing.

In my experience, the most important thing is for writers to be interested enough in the topic that they want to say something about it, so the first condition for good writing is that the writer must care about the topic. The second condition is that the writer cares enough about it to want to make the reader care too. Once these two factors are in place, originality (to a greater or lesser degree) follows almost automatically from them.

It took me a long time to understand this. I had never written much in the earlier stages of my career (apart from scientific papers) because I was waiting for great new ideas to strike me, ideas that never came. But there came a time when I felt that a topic I cared a lot about (the nature of science) was one in which the point of view I held was not being articulated clearly enough by others. I began writing about it, not because I had an original idea, but because I felt a need to synthesize the ideas of many others into a simpler, more clearly articulated, position that I felt was missing from the discussion. In the process of creating that synthesis, some papers and my first book Quest for Truth: Scientific Progress and Religious Beliefs emerged. What turned out to be original (at least slightly) in them was the application of the ideas of certain classical philosophers and historians of science to the contemporary science-religion debate, something that I had not had in mind when I started writing. That feature emerged from the writing.

My second book The Achievement gap in US education: Canaries in the mine followed that same pattern. I was very concerned about what I felt were great misunderstandings about the causes of the achievement gap between black and white students in the US and how to deal with it. I felt that my experience and interests in science and education and politics and learning theory put me in a good position where I could bring ideas from these areas together. I did not have anything really original in mind when I started writing but whatever is original in the book emerged from the act of writing, the attempt to create a synthesis.

The same applies to these blog entries. I write about the things I care about, trying to make my point clear, without seeking to be original. After all, who can come up with original ideas five times per week? But very often I find that I have written things that I had not thought about prior to the writing.

To be continued. . .

POST SCRIPT: Is there no end to the deception?

One of the amazing things about the current administration is how brazen they are about misleading the public. The latest is that President Bush rushed to declare that "We have found [Iraq's] weapons of mass destruction" in the form of mobile biological weapons laboratories, even while some intelligence investigators were finding that there was nothing to that charge.

The defense being offered by the administration's spokespersons that these negative findings had not reached the president makes no sense. Before making a serious charge, it is the President and his staff's responsibility to check what information is being gathered and processed. To shoot off his mouth when there was no urgency to do so is to be irresponsible at best and deceitful at worst.

Kevin Drum of Washington Monthly is maintaining a list of the more egregious examples of things the administration knew were not true or for which there were serious doubts, but went ahead and declared them as 'facts' anyway, to justify decisions that they had already made about attacking Iraq.

He is up to #8 and there is no reason to think that the list will not keep growing.

April 12, 2006

Atheism and Agnosticism

In an interview, Douglas Adams, author of The Hitchhiker's Guide to the Galaxy, who called himself a "radical atheist," explains why he uses that term (thanks to onegoodmove):

I think I use the term radical rather loosely, just for emphasis. If you describe yourself as "Atheist," some people will say, "Don't you mean 'Agnostic'?" I have to reply that I really do mean Atheist. I really do not believe that there is a god - in fact I am convinced that there is not a god (a subtle difference). I see not a shred of evidence to suggest that there is one. It's easier to say that I am a radical Atheist, just to signal that I really mean it, have thought about it a great deal, and that it's an opinion I hold seriously…

People will then often say "But surely it's better to remain an Agnostic just in case?" This, to me, suggests such a level of silliness and muddle that I usually edge out of the conversation rather than get sucked into it. (If it turns out that I've been wrong all along, and there is in fact a god, and if it further turned out that this kind of legalistic, cross-your-fingers-behind-your-back, Clintonian hair-splitting impressed him, then I think I would chose not to worship him anyway.) . . .

And making the move from Agnosticism to Atheism takes, I think, much more commitment to intellectual effort than most people are ready to put in. (italics in original)

I think Adams is exactly right. When I tell people that I am an atheist, they too tend to suggest that surely I must really mean that I am an agnostic. (See here for an earlier discussion of the distinction between the two terms.) After all, how can I be sure that there is no god? In that purely logical sense they are right, of course. You cannot prove a negative, so there is always the chance not only that a god exists but also that, if you take radical clerics Pat Robertson and Jerry Falwell seriously, he has a petty, spiteful, vengeful, and cruel personality.

When I say that I am an atheist, I am not making that assertion based on logical or evidentiary proofs of non-existence. It is that I have been convinced that the case for no god is far stronger than the case for god. It is the same reasoning that makes me convinced that quantum mechanics is the theory to use for understanding sub-atomic phenomena, or that natural selection is the theory to be preferred for understanding the diversity of life. There is always the possibility that these theories are 'wrong' in some sense and will be superseded by other theories, but those theories will have to have convincing evidence in their favor.

If, on the other hand, I ask myself what evidence there is for the existence of a god, I come up empty. All I have are the assurances of clergy and assertions in certain books. I have no personal experience of it and there is no scientific evidence for it.

Of course, as long time readers of this blog are aware, I used to be quite religious for most of my life, even an ordained lay preacher of the Methodist Church. How could I have switched? It turns out that my experience is remarkably similar to that of Adams, who describes why he switched from Christianity to atheism.

As a teenager I was a committed Christian. It was in my background. I used to work for the school chapel in fact. Then one day when I was about eighteen I was walking down the street when I heard a street evangelist and, dutifully, stopped to listen. As I listened it began to be borne in on me that he was talking complete nonsense, and that I had better have a bit of a think about it.

I've put that a bit glibly. When I say I realized he was talking nonsense, what I mean is this. In the years I'd spent learning History, Physics, Latin, Math, I'd learnt (the hard way) something about standards of argument, standards of proof, standards of logic, etc. In fact we had just been learning how to spot the different types of logical fallacy, and it suddenly became apparent to me that these standards simply didn't seem to apply in religious matters. In religious education we were asked to listen respectfully to arguments which, if they had been put forward in support of a view of, say, why the Corn Laws came to be abolished when they were, would have been laughed at as silly and childish and - in terms of logic and proof -just plain wrong. Why was this?
. . .
I was already familiar with and (I'm afraid) accepting of, the view that you couldn't apply the logic of physics to religion, that they were dealing with different types of 'truth'. (I now think this is baloney, but to continue...) What astonished me, however, was the realization that the arguments in favor of religious ideas were so feeble and silly next to the robust arguments of something as interpretative and opinionated as history. In fact they were embarrassingly childish. They were never subject to the kind of outright challenge which was the normal stock in trade of any other area of intellectual endeavor whatsoever. Why not? Because they wouldn't stand up to it.
. . .
Sometime around my early thirties I stumbled upon evolutionary biology, particularly in the form of Richard Dawkins's books The Selfish Gene and then The Blind Watchmaker and suddenly (on, I think the second reading of The Selfish Gene) it all fell into place. It was a concept of such stunning simplicity, but it gave rise, naturally, to all of the infinite and baffling complexity of life. The awe it inspired in me made the awe that people talk about in respect of religious experience seem, frankly, silly beside it. I'd take the awe of understanding over the awe of ignorance any day.

What Adams is describing is the conversion experience that I described earlier when suddenly, switching your perspective seems to make everything fall into place and make sense.

For me, like Adams, I realized that I was applying completely different standards for religious beliefs than I was for every other aspect of my life. And I could not explain why I should do so. Once I jettisoned the need for that kind of distinction, atheism just naturally emerged as the preferred explanation. Belief in a god required much more explaining away of inconvenient facts than not believing in a god.

POST SCRIPT: The Gospel According to Judas

There was a time in my life when I would have been all a-twitter over the discovery of a new manuscript that sheds a dramatically different light on the standard Gospel story of Jesus and Judas. I would have wondered how it affected my view of Jesus and god and my faith.

Now this kind of news strikes me as an interesting curiosity, but one that does not affect my life or thinking at all. Strange.

April 11, 2006

On writing-3: Why do people plagiarize?

(See part 1 and part 2 in the series.)

Just last week, it was reported that twenty-one Ohio University engineering graduates had plagiarized their master's theses. Why would they do that?

I think it is rare that people deliberately set out to use other people's words and ideas while hiding the source. Timothy Noah in his Chatterbox column has a good article in Slate in which he points to Harvard's guidelines for students, which state that unintentional plagiarism is a frequent culprit:

Most often . . . the plagiarist has started out with good intentions but hasn't left enough time to do the reading and thinking that the assignment requires, has become desperate, and just wants the whole thing done with. At this point, in one common scenario, the student gets careless while taking notes on a source or incorporating notes into a draft, so the source's words and ideas blur into those of the student.

But lack of intent is not a valid defense against the charge of plagiarism. That has not prevented even eminent scholars like Doris Kearns Goodwin from trying to invoke it. But as Noah writes, the American Historical Association's (AHA) and the Organization of American Historians' (OAH) statement on plagiarism is quite clear on this point:

The plagiarist's standard defense-that he or she was misled by hastily taken and imperfect notes-is plausible only in the context of a wider tolerance of shoddy work. . . . Faced with charges of failing to acknowledge dependence on certain sources, a historian usually pleads that the lapse was inadvertent. This excuse will be easily disposed of if scholars take seriously the injunction to check their manuscripts against the underlying texts prior to publication.

Noah cites many authorities that say that citing the source does not always absolve you from the charge of plagiarism either.

Here's the MLA Guide:

Presenting an author's exact wording without marking it as a quotation is plagiarism, even if you cite the source [italics Chatterbox's].

Here's the AHA and the OAH:

Plagiarism includes more subtle and perhaps more pernicious abuses than simply expropriating the exact wording of another author without attribution. Plagiarism also includes the limited borrowing, without attribution, of another person's distinctive and significant research findings, hypotheses, theories, rhetorical strategies, or interpretations, or an extended borrowing even with attribution [italics Chatterbox's].

Noah gives an example of this. In the original FDR, My Boss, the author Grace Tully writes:

Near the end of the dinner Missy arose from her chair to tell me she felt ill and very tired. I urged her to excuse herself and go upstairs to bed but she insisted she would stay until the Boss left. He did so about 9:30 and within a few minutes Missy suddenly wavered and fell to the floor unconscious.

Doris Kearns Goodwin in her book No Ordinary Time writes:

Near the end of the dinner, Grace Tully recalled, Missy arose from her chair, saying she felt ill and very tired. Tully urged her to excuse herself and retire to her room, but she insisted on staying until the president left. He did so at 9:30 p.m. and, moments later, Missy let out a piercing scream, wavered and fell to the floor unconscious.

Is this plagiarism? After all, she cites the original author in the text itself, and the wording has been changed slightly. Yes, plagiarism has occurred says Noah, citing Harvard's guidelines:

If your own sentences follow the source so closely in idea and sentence structure that the result is really closer to quotation than to paraphrase . . .you are plagiarizing, even if you have cited the source [italics Chatterbox's].

The whole point of a paraphrase is to make a point more clearly, to emphasize or clarify something that may be hidden or obscure in the original text. Russ Hunt gives a good example of the wrongful use of the paraphrase, which he takes from Northwestern University's website The Writing Place:

Original

But Frida's outlook was vastly different from that of the Surrealists. Her art was not the product of a disillusioned European culture searching for an escape from the limits of logic by plumbing the subconscious. Instead, her fantasy was a product of her temperament, life, and place; it was a way of coming to terms with reality, not of passing beyond reality into another realm. 
Hayden Herrera, Frida: A Biography of Frida Kahlo (258)

Paraphrase

As Herrera explains, Frida's surrealistic vision was unlike that of the European Surrealists. While their art grew out of their disenchantment with society and their desire to explore the subconscious mind as a refuge from rational thinking, Frida's vision was an outgrowth of her own personality and life experiences in Mexico. She used her surrealistic images to understand better her actual life, not to create a dreamworld (258).

As Hunt says:

What is clearest about this is that the writer of the second paragraph has no motive for rephrasing the passage other than to put it into different words. Had she really needed the entire passage as part of an argument or explanation she was offering, she would have been far better advised to quote it directly. The paraphrase neither clarifies nor renders newly pointed; it's merely designed to demonstrate to a sceptical reader that the writer actually understands the phrases she is using in her text.

I think that this kind of common excuse, that the authors did not know they were plagiarizing because they had used the 'pointless paraphrase' or because they cited the source, is disingenuous. While they may not have been aware that this kind of paraphrasing technically does constitute plagiarism, it is hard to imagine that the perpetrators were not aware that they were doing something wrong.

The lesson, as I see it, is to always prefer the direct quote with citation to the 'pointless paraphrase.' Changing wording here and there purely for the sake of thinking that doing so makes the passage one's own should be avoided.

POST SCRIPT: Discussing controversial ideas

Chris Weigold, who is a reader of this blog and also a Resident Assistant in one of Case's dorms, has invited me to a free-wheeling discussion about some controversial propositions that I have discussed previously in my blog as well as those that I will probably address in the future, such as:

  • Should military service be mandatory for all citizens?
  • Should everyone be required to work in a service-oriented job for two years?
  • Is torture warranted in some situations?
  • Why shouldn't Iran be allowed to become a nuclear power?
  • Should hospitals be allowed to refuse to keep a patient on life-support if the patient cannot pay?
  • Is patriotism a bad thing?
  • Are atheists more moral than religious people?
  • Why is killing innocent people in war not considered wrong?
  • If we can experiment on non-human animals, why not on humans?
  • How do people decide which religion is right?
or any other topic that people might raise.

The discussion takes place in the Clarke Tower lobby from 8:00-9:30pm on Wednesday, April 12, 2006. All are welcome.

April 10, 2006

The politics of V for Vendetta (no spoilers)

I believe that V for Vendetta will go down in film history as a classic of political cinema. Just as Dr. Strangelove captured the zeitgeist of the cold war, this film does it for the perpetual war on terrorism.

The claim that this film is so significant may sound a little strange, considering that the film's premise is based on a comic book series written two decades ago and set in a futuristic Britain. Let me explain why I think that this is something well worth seeing.

The basic premise of the film (I have not read the original comics so cannot compare it to them) is that England has come to be governed by a High Chancellor, an authoritarian leader who seized power in a landslide election as a response to a biological attack that killed many people. The ruthless leader has arrogated to himself all powers and considers himself to be above the law. The leadership is virulently homophobic and is in league with Christian extremists and corrupt clerics. Arbitrary arrest, denial of due process, and torture are routine, and simply owning a copy of the Koran is liable to get you executed. People's privacy is routinely violated by sophisticated listening devices that can even capture the conversations of people in their homes. Television news and programming are controlled to keep people amused with silliness while at the same time keeping them in a state of constant fear. There are color-coded curfews enforced by secret police goons.

Ordinary citizens are told that all these intimidatory and intrusive measures are necessary to protect them from harm from terrorists and that they should trust their leaders. This message is wrapped up in patriotic slogans and flag-waving, and repeated ad nauseam by bloviating pundits in the media.

Any of this seem remotely familiar?

There suddenly emerges a mysterious man named V, a throwback to an earlier era with his costume of a mask with a mocking grin, cape, tights, boots, and long-haired wig, who is highly skilled with knives, classical swordplay, and martial arts. V seeks to awaken the public, to prod them to realize what is happening and rise up and overthrow this oppression masked as benevolence. He does this by spectacularly blowing up London landmarks to the strains of Tchaikovsky's 1812 Overture. Naturally, the authorities immediately label V a terrorist and remind people that this is why they need to have even more faith and trust in the government.

The main plotline is the engrossing cat-and-mouse game between V and the High Chancellor and his agents. Think Batman versus a Cheney/Bush hybrid.

V is an enigmatic character, to put it mildly, and not merely because of his mask. Although ruthless in his methods, he is also a courtly romantic who likes to watch the 1934 swashbuckling film The Count of Monte Cristo starring Robert Donat, listen to romantic songs on his Wurlitzer jukebox, and surround himself with books and traditional artwork.

I think that this film will serve as a touchstone. Those who see the current encroachment on civil liberties and the creation of a government that considers its leaders above the law as a dangerous thing will like the film because it serves as a warning that if they do not take back the government, they will face the same situation as the people in the film. Those who think that Bush is next to god in terms of infallibility and benevolence, and should be given all the powers he asks for (and even those he doesn't ask for but merely takes secretly) will hate the film. [UPDATE: For a view from someone who hates the film's message, read this. Unfortunately there are a lot of spoilers, so it is better read after seeing the film. But in my view, if a film can elicit this kind of response in such people, it must be really good.]

Since the plot is based on a futuristic comic book, its basic premise is fantastical and has to be accepted simply as a metaphor for the larger political point the film is trying to make. What made this film so compelling was that the characters so gripped you that you were willing to suspend disbelief. And the film kept moving so fluidly that you never felt your interest flagging. There were action scenes (with some violence) but these were not allowed to dominate, the focus always being on advancing the story.

The cast was superb, with good performances from Stephen Rea as the Police Commissioner and Stephen Fry as a TV variety show host. Hugo Weaving as V managed to convey emotion even from behind a mask, and Natalie Portman, whose character's path accidentally crosses that of V and who thus gets drawn into the action, had a much better vehicle for her performance here than in the previous films in which I've seen her, Star Wars I and Closer.

There were some interesting philosophical issues raised, such as the role of violence. Is V a revenge-seeking monster or a righteous seeker of justice? Or both? Is he a 'terrorist' as the authorities claim him to be? Or is that merely a convenient label used by governments to demonize those who challenge their exclusive use of force? And what of V's politics? He is an anarchist of sorts who never really articulates a political philosophy of his own, except that he hates what exists and what the authoritarian rulers did to him personally. The film offers no pat answers to these questions.

(For an excellent review of the film, read James Wolcott. For a good analysis of the film by someone who has actually read the original comic book series, see here but be warned that you should only read the latter after seeing the film, as this analysis contains a lot of spoilers.)

Two lines of dialogue stood out for me as capturing the basic political message of the film. One was when Portman quotes her father: "Artists use lies to show the truth, while politicians use lies to cover it." The other was when V says: "People should not be afraid of their governments. Governments should be afraid of their people." The latter is the tagline of the film.

The Wachowski brothers, who wrote the screenplay, also created the Matrix films. I saw only the first film in that earlier series and was not impressed, except for the special effects gimmickry. The story seemed unnecessarily confused, contrived, and mystical. With V, the screenwriters seem to have wisely focused on creating a strong narrative arc and characters who were believable, once you accepted the comic book premise. I don't know the politics of the writers, or whether they were deliberately trying to draw parallels with the Cheney/Bush regime, but that message was there for anyone willing to see it.

I am anxious for the film to come out on DVD so that I can see it again. If it is still in theaters near you, I encourage you to see it.

POST SCRIPT: The Shooting Party

While on the subject of political films, over the weekend I also saw on DVD the 1984 film The Shooting Party starring James Mason and John Gielgud. This is a Merchant-Ivory-style, slow-tempo examination of British upper-class life and mores.

The film takes place in 1913 at the country estate of an English lord (Mason) where people have gathered for a shooting weekend. I realized with a start that this is exactly the kind of canned hunt that Dick Cheney goes on, where tame birds are sent into the air to be slaughtered by a line of hunters, some of whom secretly compete to see who can get the most, although such competition is frowned upon by 'real' gentlemen. One of the guests, an arrogant but insecure lord who shoots in this 'ungentlemanly' manner, gets so carried away that he ends up shooting one of the party in the face.

Who would have expected Cheney's shooting of someone in the face during a canned bird hunt to have been anticipated more than twenty years ago in a film? Or that the ruling class in twenty first century America would try to recreate the blood sport practices of the decaying British aristocracy of a century ago? What next? Cheney taking up fox hunting?

The shooting of birds in the film is a metaphor for the senseless slaughter that would begin the following year with World War I. Some of the people in the shooting party are almost looking forward to the war as an adventure, an opportunity to earn honor, not realizing that wars create their own dynamic and end up as killing fields, bereft of glory, merely sordid tales of blood and grief, tears and bereavement, pain and misery.

I wonder whether people like Cheney and his neoconservative allies, who probably saw the invasion of Iraq as a glorious adventure and themselves as conquering heroes, ever see films like this and understand their underlying message: that wars are not canned hunts, in which the people of the 'enemy nation' are birds to be slaughtered while the killers are bathed in adulation. Probably not.

James Mason was always one of my favorite actors. Has there been anyone who could convey so much nuance and meaning with that soft, hesitant voice? And his scene with John Gielgud, who plays an animal rights activist who disrupts the hunt, is a little gem, showcasing the talents of two great actors, masters of their craft, casually displaying the talents that made them such a joy to watch.

April 08, 2006

Don't miss V for Vendetta!

I don't normally post on the weekends but last night I saw the film V for Vendetta and it blew me away. It is a brilliant political thriller with disturbing parallels to what is currently going on in the US. It kept me completely absorbed.

I'll write more about it next week but this is just to urge people to see it before it ends its run.

April 07, 2006

Tiktaalik bridges the gap

As you can imagine, the world of science has been abuzz since yesterday's release of a paper in the prestigious science journal Nature heralding the discovery of a 375-million-year-old transitional fossil between fish and land animals, which has been named Tiktaalik. The fossil seems to provide evidence for the key evolutionary idea that land animals evolved from fish.

Several well-preserved skeletons of the fossil fish were uncovered in sediments of former stream beds in the Canadian Arctic, 600 miles from the North Pole, it is being reported on Thursday in the journal Nature. The skeletons have the fins and scales and other attributes of a giant fish, four to nine feet long.

But on closer examination, scientists found telling anatomical traits of a transitional creature, a fish that is still a fish but exhibiting changes that anticipate the emergence of land animals - a predecessor thus of amphibians, reptiles and dinosaurs, mammals and eventually humans. . .

The scientists described evidence in the forward fins of limbs in the making. There are the beginnings of digits, proto-wrists, elbows and shoulders. The fish also had a flat skull resembling a crocodile's, a neck, ribs and other parts that were similar to four-legged land animals known as tetrapods. . .

Embedded in the pectoral fins were bones that compare to the upper arm, forearm and primitive parts of the hand of land-living animals. The scientists said the joints of the fins appeared to be capable of functioning for movement on land, a case of a fish improvising with its evolved anatomy. In all likelihood, they said, Tiktaalik flexed its proto-limbs primarily on the floor of streams and may have pulled itself up on the shore for brief stretches.

In their journal report, the scientists concluded that Tiktaalik is an intermediate between the fish Panderichthys, which lived 385 million years ago, and early tetrapods.

For those of us who have long accepted natural selection and evolution as the theoretical prism through which to understand how the diversity of life came about, this discovery comes as a welcome, but not revolutionary, development since it seems to be one more confirmation of a major theory.

But what of those people who reject evolution and think that each species was an act of special creation? Should they treat this new discovery as a counterexample to their model and thus be led to reject it?

Early ('naïve') versions of falsificationist theories of scientific development would argue that they should. In that model, advocated by philosopher of science Karl Popper, while no number of confirming instances can prove a theory right, a single counterexample can prove a theory wrong and warrant its elimination from the scientific canon.

Some people will argue that Tiktaalik is just that kind of counterexample and that it should serve as the death knell of creationism. Michael J. Novacek, a paleontologist at the American Museum of Natural History in Manhattan, is quoted as saying: "We've got Archaeopteryx, an early whale that lived on land and now this animal showing the transition from fish to tetrapod. What more do we need from the fossil record to show that the creationists are flatly wrong?"

But he misunderstands how these arguments work, because the naïve falsificationist model, while having an appealing intellectual simplicity, was soon shown not to really describe how science actually progresses. It turns out that there are many ways in which a theory can survive a single counterexample, or even several. I expect that the reaction to the Tiktaalik discovery will provide us with a ringside seat, in real time, to see these defenses brought out.

Committed creationists will take one of two tacks. One argument is to assert that this new fossil is "really" just a fish or "really" just an animal, thus forcing it into an existing category, leaving the 'gap' unfilled. For example, this is how the creationist website Answers in Genesis dismisses the claim that the earlier Archaeopteryx was a transitional form between reptile and bird, saying: "Archaeopteryx was genuine. . . as shown by anatomical studies and close analysis of the fossil slab. It was a true bird, not a ‘missing link’."

This problem is inevitable because of the way we classify things, requiring that they fit into discrete and identifiable boxes that can be labeled and treated as distinct categories. But in reality we are dealing with a continuum, and we have to choose how to label each item, and the pressure is to put it into a pre-existing box rather than create a new one. For example, is the newly discovered fossil a fish? Or a land animal? It is actually neither, but the way we structure our evolutionary scheme and our language seems to push us toward that kind of choice.

If the attempt to force the new discovery into a pre-existing category does not work and a fossil is widely accepted as being transitional, creationists can take a different tack and argue that there are now new gaps, and that no fossils exist to fill them. This is what has been done in the past with previous finds. As the New York Times article points out:

One creationist Web site (emporium.turnpike.net/C/cs/evid1.htm) declares that "there are no transitional forms," adding: "For example, not a single fossil with part fins part feet has been found. And this is true between every major plant and animal kind."

I think it was Ernst Mayr who said that trying to satisfy people who demand missing links before they will be convinced that evolution occurred is to pursue a chimera. When you find a link to fill the 'gap' between two species, your opponents now have two new 'gaps' they can ask you to fill, where before they had only one. After all, if your theory predicts that species A evolved from species Z, and you discover a transitional fossil M, critics can now ask where the transitional fossils are between A and M and between M and Z. And so on.

The new fossil Tiktaalik is called a fishapod because it is both fish and tetrapod. But creationists can soon begin to ask "What about the gap between fish and fishapod? Or between fishapod and tetrapod?" So paradoxically, the more intermediate fossils that are found, the more 'gaps' in the fossil record that will be created.
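The bookkeeping behind this is simple enough to spell out with a toy sketch in Python (the list and the names in it are placeholders of my own, not anything from the Nature paper): filling a gap replaces one gap with two, so the number of 'gaps' a critic can point to is always one more than the number of known forms in the chain.

    # A lineage represented crudely as an ordered list of known forms.
    chain = ["fish", "tetrapod"]

    def gaps(chain):
        # Every adjacent pair of known forms is a 'gap' that can be demanded.
        return len(chain) - 1

    print(gaps(chain))  # 1 gap: fish -> tetrapod

    # Discovering a transitional form fills one gap but creates two new ones.
    chain.insert(1, "fishapod")
    print(gaps(chain))  # 2 gaps: fish -> fishapod, fishapod -> tetrapod

    chain.insert(1, "something between fish and fishapod")
    print(gaps(chain))  # 3 gaps, and so on: each new find adds a net one gap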

It is a little like colors. We know that the colors of pigments form a continuum, going smoothly from one to another depending on how the primary pigments are mixed. But historically, and out of a need to be able to communicate with one another, we have classified them into distinct colors: red, green, blue, yellow, brown, black, etc., suggesting that colors are discrete and separable. When we encounter colors that do not fit into existing categories, we have 'gaps.' We sometimes invent new names like magenta, cyan, taupe, and beige to describe these transitional colors. But that simply creates new gaps. What is the shade of color between magenta and red? Between magenta and blue? We can never eliminate all the gaps in a continuum. Trying to do so only creates more of them.

Scientific evidence alone can never prove which theory is true. But what it can do is convince some people that one side is more plausible or fruitful or useful than the other. Thomas Kuhn, in his book The Structure of Scientific Revolutions, argues that switching allegiance from one scientific theory to another is often not a reasoned decision but something closer to a conversion experience. But this conversion experience, unlike Paul's conversion to Christianity on the road to Damascus, may not have a dramatic cause.

Scientific conversions often occur because of incremental changes in the available evidence. As new evidence comes to light, holding on to an old theory becomes more difficult. At some point one reaches a tipping point that causes one to completely switch one's perspective. It is like the way a see-saw or teeter-totter works when it is near balance: even a small change can flip its orientation completely. While the process leading up to the change may be gradual, the change itself is sudden.

So it is for individuals and scientific theories. A single new element added to the mix can cause the switch. Suddenly you see things in a new light and cannot imagine why the old idea ever appealed to you, and defending it seems pointless. And when that happens, you go back and re-evaluate all that you had believed before in the light of this new viewpoint.

Tiktaalik will not sway those who are deeply committed to creationist ideas because they have many ways with which to justify retaining their beliefs. But somewhere, there are people who are reading about it and saying to themselves, "Hmmm. . . You know, maybe I should look into this evolution business more closely." And it is those people who will eventually switch.

April 06, 2006

Precision in language

Some time ago, a commenter to this blog sent me a private email expressing this view:

Have you ever noticed people say "Do you believe in evolution?" just as you would ask "Do you believe in God?" as if both schools of thought have equal footing? I respect others' religious beliefs as I realize I cannot disprove God just as anyone cannot prove His existence, but given the amount of evidence for evolution, shouldn't we insist on asking "Do you accept evolution?"

It may just be semantics, but I feel that the latter wording carries an implied affirmation just as "Do you accept that 2+2=4?" carries a different meaning than "Do you believe 2+2=4?"

I guess the point I'm trying to make is that by stating something as a belief, it opens the debate to the possibility that something is untrue. While this may be fine for discussions of religion, shouldn't the scientific community be more insistent that a theory well supported by physical evidence, such as evolution, is not up for debate?

It's a good point. To be fair, scientists themselves are partly responsible for this confusion because we also say that we "believe" in this or that scientific theory, and one cannot blame the general public for picking up on that terminology. What is important to realize, though, is that the word 'believe' is being used by scientists in a different sense from the way it is used in religion.

The late and deeply lamented Douglas Adams, author of The Hitchhiker's Guide to the Galaxy, who called himself a "radical atheist," puts it nicely (thanks to onegoodmove):

First of all I do not believe-that-there-is-not-a-god. I don't see what belief has got to do with it. I believe or don't believe my four-year old daughter when she tells me that she didn't make that mess on the floor. I believe in justice and fair play (though I don't know exactly how we achieve them, other than by continually trying against all possible odds of success). I also believe that England should enter the European Monetary Union. I am not remotely enough of an economist to argue the issue vigorously with someone who is, but what little I do know, reinforced with a hefty dollop of gut feeling, strongly suggests to me that it's the right course. I could very easily turn out to be wrong, and I know that. These seem to me to be legitimate uses for the word believe. As a carapace for the protection of irrational notions from legitimate questions, however, I think that the word has a lot of mischief to answer for. So, I do not believe-that-there-is-no-god. I am, however, convinced that there is no god, which is a totally different stance. . .

There is such a thing as the burden of proof, and in the case of god, as in the case of the composition of the moon, this has shifted radically. God used to be the best explanation we'd got, and we've now got vastly better ones. God is no longer an explanation of anything, but has instead become something that would itself need an insurmountable amount of explaining…

Well, in history, even though the understanding of events, of cause and effect, is a matter of interpretation, and even though interpretation is in many ways a matter of opinion, nevertheless those opinions and interpretations are honed to within an inch of their lives in the withering crossfire of argument and counterargument, and those that are still standing are then subjected to a whole new round of challenges of fact and logic from the next generation of historians - and so on. All opinions are not equal. Some are a very great deal more robust, sophisticated and well supported in logic and argument than others.

When someone says that they believe in god, they mean that they believe something in the absence of, or even counter to, the evidence, and even to reason and logic. When scientists say they believe a particular theory, they mean that they believe that theory because of the evidence and reason and logic, and the more evidence there is, and the better the reasoning behind it, the more strongly they believe it. Scientists use the word 'belief' the way Adams says, as a kind of synonym for 'convinced,' because we know that no scientific theory can be proven with 100% certainty and so we have to accept things even in the face of this remaining doubt. But the word 'believe' definitely does not carry the same meaning in the two contexts.

This can lead to the kind of confusion the commenter warns about, but what can we do about it? One option, as was suggested, is to use different words, with scientists avoiding the word 'believe.' I would have agreed with this some years ago, but I have become increasingly doubtful that we can control the way words are used.

For example, there was a time when I used to be on a crusade against the erroneous use of the word 'unique'. The Oxford English Dictionary is pretty clear about what this word means:

  • Of which there is only one; one and no other; single, sole, solitary.
  • That is or forms the only one of its kind; having no like or equal; standing alone in comparison with others, freq. by reason of superior excellence; unequalled, unparalleled, unrivalled.
  • Formed or consisting of one or a single thing
  • A thing of which there is only one example, copy, or specimen; esp., in early use, a coin or medal of this class.
  • A thing, fact, or circumstance which by reason of exceptional or special qualities stands alone and is without equal or parallel in its kind.

It means, in short, one of a kind, so something is either unique or it is not. There are no in-betweens. And yet, you often find people saying things like "quite unique" or "very unique" or "almost unique." I used to try and correct this but have given up. Clearly, people in general think that unique means something like "rare" and I don't know that we can ever change this even if we all become annoying pedants, correcting people all the time, avoided at parties because of our pursuit of linguistic purity.

Some battles, such as the one over the word unique, are, I believe, lost for good, and I expect the OED to add the new meaning of 'rare' some time in the near future. It is a pity, because we would then be left with no word carrying the unique meaning of 'unique', but there we are. We would have to say something like 'absolutely unique' to convey the meaning once reserved for just 'unique.'

In science too we often use words with precise operational meanings while the same words are used in everyday language with much looser meanings. For example, in physics the word 'velocity' is defined operationally: an object moves along a ruler and, at two points along its motion, you take ruler readings and clock readings, where the clocks are located at the points where the ruler readings are taken and have been previously synchronized. The velocity of the moving object is then the number you get when you take the difference between the two ruler readings and divide it by the difference between the two clock readings.
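To make the operational recipe concrete, here is a minimal sketch in Python. The numbers are invented purely for illustration; nothing here comes from an actual measurement.

    # Two (ruler reading, clock reading) pairs, taken by clocks that sit at
    # the points where the ruler readings are made and that have been
    # synchronized beforehand. Values are made up (meters and seconds).
    x1, t1 = 2.0, 0.0    # first ruler reading and clock reading
    x2, t2 = 11.0, 3.0   # second ruler reading and clock reading

    # The operational definition: difference of ruler readings divided by
    # difference of clock readings.
    velocity = (x2 - x1) / (t2 - t1)
    print(velocity)  # 3.0, i.e. 3 meters per second

Nothing more is being claimed by the definition than this procedure; 'velocity' is simply the name given to the number it produces.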

Most people (especially sports commentators) have no idea of this precise meaning when they use the word velocity in everyday language, and often use the word synonymously with speed or, even worse, acceleration, although those concepts have different operational meanings. Even students who have taken physics courses find it hard to use the word in its strict operational sense.

Take, for another example, the word 'theory'. By now, as a result of the intelligent design creationism (IDC) controversy, everyone should be aware that the way this word is used by scientists is quite different from its everyday use. In science, a theory is a powerful explanatory construct. Science depends crucially on its theories because they are the things that give it its predictive power. "There is nothing so practical as a good theory," as Kurt Lewin famously said. But in everyday language, the word theory is used to mean 'not factual,' something that can be false or ignored.

I don't think that we can solve this problem by putting constraints on how words can be used. English is a wonderful language precisely because it grows and evolves and trying to fix the meanings of words too rigidly would perhaps be stultifying. I now think that we need to change our tactics.

I think that once the meanings of words enter mainstream consciousness we will not be successful in trying to restrict their meanings beyond their generally accepted usage. What we can do is to make people aware that all words have varying meanings depending on the context, and that scientific and other academic contexts tend to require very precise meanings in order to minimize ambiguity.

Heidi Cool has a nice entry where she talks about the importance of being aware of when you are using specialized vocabulary, and the need to know your audience when speaking or writing, so that some of the pitfalls arising from the imprecise use of words can be avoided.

We have to realize though that despite our best efforts, we can never be sure that the meaning that we intend to convey by our words is the same as the meaning constructed in the minds of the reader or listener. Words always contain an inherent ambiguity that allows the ideas expressed by them to be interpreted differently.

I used to be surprised when people read the stuff I wrote and got a different meaning than I had intended. No longer. I now realize that there is always some residual ambiguity in words that cannot be overcome. While we can and should strive for maximum precision, we can never be totally unambiguous.

I agree with philosopher Karl Popper when he said, "It is impossible to speak in such a way that you cannot be misunderstood." The best we can hope for is some sort of negotiated consensus on the meanings of ideas.

April 05, 2006

On writing-2: Why do we cite other people's work?

In the previous post on this topic, I discussed the plagiarism case of Ben Domenech, who had lifted entire chunks of other people's writings and had passed them off as his own.

How could he have done such a thing? After all, all high school and college students get the standard lecture on plagiarism and why it is bad. And even though Domenech was home schooled, it seems unlikely that he thought this was acceptable practice. When he was confronted with his plagiarism, his defense was not one of surprise that it was considered wrong but merely that he had been 'young' when he did it or that he had got permission from the author to use their words or that the offending words had been inserted by his editors.

The cautionary lectures that students receive about plagiarism are usually cast in a moralistic way, that plagiarism is a form of stealing, that taking someone else's words or ideas without proper attribution is as morally reprehensible as taking their money.

What is often overlooked in this kind of approach is that there are many other reasons why writers and academics cite other people's works when appropriate. By focusing too much on this stealing aspect, we tend to not give students an important insight into how scholarship and research works.

Russ Hunt at St. Thomas University argues that writers cite others for a whole complex of reasons that have little to do with avoiding charges of plagiarism:

[P]ublished scholarly literature is full of examples of writers using the texts, words and ideas of others to serve their own immediate purposes. Here's an example of the way two researchers opened their discussion of the context of their work in 1984:

To say that listeners attempt to construct points is not, however, to make clear just what sort of thing a 'point' actually is. Despite recent interest in the pragmatics of oral stories (Polanyi 1979, 1982; Robinson 1981), conversations (Schank et al. 1982), and narrative discourse generally (Prince 1983), definitions of point are hard to come by. Those that do exist are usually couched in negative terms: apparently it is easier to indicate what a point is not than to be clear about what it is. Perhaps the most memorable (negative) definition of point was that of Labov (1972: 366), who observed that a narrative without one is met with the "withering" rejoinder, "So what?" (Vipond & Hunt, 1984)

It is clear here that the motives of the writers do not include prevention of charges of plagiarism; moreover, it's equally clear that they are not. . .attempting to "cite every piece of information that is not a) the result of your own research, or b) common knowledge." What they are doing is more complex. The bouquet of citations offered in this paragraph is informing the reader that the writers know, and are comfortable with, the literature their article is addressing; they are moving to place their argument in an already existing written conversation about the pragmatics of stories; they are advertising to the readers of their article, likely to be interested in psychology or literature, that there is an area of inquiry -- the sociology of discourse -- that is relevant to studies in the psychology of literature; and they are establishing a tone of comfortable authority in that conversation by the acknowledgement of Labov's contribution and by using his language --"withering" is picked out of Labov's article because it is often cited as conveying the power of pointlessness to humiliate (I believe I speak with some authority for the authors' motives, since I was one of them).

Scholars -- writers generally -- use citations for many things: they establish their own bona fides and currency, they advertise their alliances, they bring work to the attention of their reader, they assert ties of collegiality, they exemplify contending positions or define nuances of difference among competing theories or ideas. They do not use them to defend themselves against potential allegations of plagiarism.

The clearest difference between the way undergraduate students, writing essays, cite and quote and the way scholars do it in public is this: typically, the scholars are achieving something positive; the students are avoiding something negative. (my italics)

I think that Hunt has hit exactly the right note.

When you cite the works of others, you are strengthening your own argument because you are making them (and their allies) into your allies, and people who challenge what you say have to take on this entire army. When you cite reputable sources or credible authorities for facts or ideas, you become more credible because you are no longer alone and thus not easily dismissed, even if you personally are not famous or a recognized authority.

To be continued. . .

POST SCRIPT: It's now Daylight Saving Time. Do you know where your spiritual plane is?

It seems like idiotic statements attributing natural events to supernatural causes are not restricted to Christian radical clerics like Pat Robertson. Some Sri Lankan Buddhist clergy are challenging him for the title of Religious Doofus.

Since Sri Lanka sits very close to the equator, the length of the day is essentially the same all year round, so there is no need for the 'spring-forward-fall-back' biannual adjustment made in the US. Sri Lankan time used to be 5.5 hours ahead of Universal Time (UT), but in 1996 the government made a one-time shift to 6.5 hours in order to have sunset arrive later and save energy. The influential Buddhist clergy were not happy with the change, and as a compromise the clocks were later adjusted again to just 6.0 hours ahead of UT. Now the government is thinking of going back to the original 5.5 hours.

Some of the country's Buddhist clergy are rejoicing at the prospect of a change because they say Sri Lanka's "old" time fitted better with their rituals.

They believe a decade living in the "wrong" time has upset the country's natural order with terrible effect.

The Venerable Gnanawimala says the change moved the country to a spiritual plane 500 miles east of where it should be.

"After this change I feel that many troubles have been caused to Sri Lanka. Tsunamis and other natural disasters have been taking place," he says.

This is what happens when you mix religion and the state. You now have to worry about what your actions are doing to the longitudinal coordinates of your nation's spiritual plane.

April 04, 2006

No more daft women!

Evan Hunter, who was the screenwriter on Alfred Hitchcock's 1963 film The Birds recalled an incident that occurred when he was discussing the screenplay with the director.

I don't know if you recall the movie. There's a scene where after this massive bird attack on the house Mitch, the male character, is asleep in a chair and Melanie hears something. She takes a flashlight and she goes up to investigate, and this leads to the big scene in the attic where all the birds attack her. I was telling [Hitchcock] about this scene and he was listening very intently, and then he said, "Let me see if I understand this correctly. There has been a massive attack on the house and they have boarded it up and Mitch is asleep and she hears a sound and she goes to investigate?'' I said, "Well, yes,'' and he said, "Is she daft? Why doesn't she wake him up?''

I remembered this story when I was watching the film The Interpreter with Nicole Kidman and Sean Penn. The Kidman character accidentally overhears something at the UN that puts her life at risk. After she complains to government agent Penn that no one seems to be bothered about protecting her from harm, Penn puts her on round-the-clock surveillance. So then what does Kidman do? She sneaks around, giving the slip to the very people assigned to protect her, and refuses to tell Penn where she went, to whom she spoke, and about what, putting herself and other people at risk, with some even dying because of her actions. Hitchcock would have said, "Is she daft?"

This is one of my pet peeves about films, where the female character insists on doing something incredibly stupid that puts her and other people at peril. Surely in this day and age we have gone beyond the stale plot device of otherwise smart women behaving stupidly in order to create drama? Surely writers have more imagination than that? Do directors really think that viewers won't notice how absurd that is?

According to Hunter, Hitchcock was always exploring the motivations of characters, trying to make their actions plausible. Hunter says:

[Hitchcock] would ask surprising questions. I would be in the middle of telling the story so far and he would say, "Has she called her father yet?" I'd say, "What?'' "The girl, has she called her father?'' And I'd say, "No.'' "Well, she's been away from San Francisco overnight. Does he know where she is? Has she called to tell him she's staying in this town?'' I said, "No.'' And he said, "Don't you think she should call him?'' I said, "Yes." "You know it's not a difficult thing to have a person pick up the phone.'' Questions like that.

(Incidentally, the above link has three screenwriters reminiscing about working with Hitchcock: Arthur Laurents, who wrote Rope (1948); Joseph Stefano, who wrote Psycho (1960); and Evan Hunter. It is a fascinating glimpse behind the scenes of how a great director envisages and sets about creating films. The last quote actually reads in the original: "Yes, you know it's not a difficult thing to have a person pick up the phone.'' I changed it because my version makes more sense, and the original is a verbatim transcript of a panel discussion, in which such punctuation errors can easily occur.)

More generally, I hate it when characters in films and books behave in ways that are unbelievable. The problem is not with an implausible premise, which is often necessary to create a central core for the story. I can even accept the violation of a few laws of physics. For example, I can accept the premise of Superman that a baby with super powers (but susceptible to kryptonite) arrives on Earth from another planet and is adopted by a family and needs to keep his identity secret. I can accept of Batman that a millionaire like Bruce Wayne adopts a secret identity in order to fight crime.

What I cannot stand is when these characters and the people around them act implausibly, when the stories built on the premise have logical holes that you can drive a Batmobile through. The Batmobile, for example, is a flashy vehicle, to say the least, easily picked out in traffic. And yet nobody in Gotham thinks of following it back to the Batcave to see who this mysterious hero is. Is the entire population of that city daft?

And how exactly is the Bat-Signal that the Police Commissioner shines into the sky supposed to work? You don't need a physics degree to realize that shining a light, however bright, into the sky is not going to create a sharp image there. And what if it's daytime? And if there are no clouds? (It's been a long time since I read these comics. Maybe later editions fixed these problems. But even as a child these things annoyed me.)

And don't get me started on Spiderman going in and out of his apartment window in a building in the middle of a big city in broad daylight without anyone noticing.

As a fan of films, it really bugs me when filmmakers don't take the trouble to write plots that make sense, and have characters who don't behave the way that you would expect normal people to behave. How hard can it be to ensure this, especially when you have the budget to hire writers to create believable characters and a plausible storyline?

If any directors are reading this, I am willing to offer my services to identify and fix plot holes.

So please, no more daft women! No more ditzy damsels in distress! No more Perils of Pauline!

POST SCRIPT: CSA: Confederate States of America

I saw this film last week (see the post script to an earlier posting), just before it ended its very short run in Cleveland. It looks at what history would have been like if the South had won the Civil War. Imagine, if you will, an America very much like what we have now except that owning black slaves is as commonplace as owning a dishwasher.

What was troubling is that although this is an imagined alternate history presented in a faux documentary format, much of it is plausible based on what we have now. What was most disturbing for me was seeing in the film racist images and acts that I took to be the over-the-top imaginings of the screenwriter about what might have happened in this alternate history, and then finding out that they actually happened in the real history.

Although the film is a clever satire in the style of This is Spinal Tap, I could not really laugh because the topic itself is so appalling. It is easy to laugh at the preening and pretensions of a rock band. It is hard to laugh at people in shackles.

But the film was well worth seeing, disturbing though it was.

April 03, 2006

On writing-1: Plagiarism at the Washington Post

If you blinked a couple of weeks ago, you might have missed the meteor that was the rise and fall of the career of Ben Domenech as a blogger for WashingtonPost.com.

This online version of the newspaper is apparently managed independently of the print edition and has its own Executive Editor Jim Brady. For reasons that are not wholly clear, Brady decided that he needed to hire a "conservative" blogger for the website.

The problem with this rationale for the hiring was that no "liberal" counterpart blogger existed at the paper. They did have a popular blogger in Dan Froomkin, someone with a journalistic background, who wrote about politics for the Post and who had on occasion been critical of the Bush White House. As I have written earlier, Glenn Greenwald has pointed out that anything but unwavering loyalty to Bush has become the basis for labeling someone a liberal, and maybe Brady had internalized this critique, prompting him to hire someone who could be counted upon to support Bush in all his actions.

For reasons that are even more obscure, rather than choose someone who had serious journalistic credentials for this new column, Brady selected the untested 24-year-old Ben Domenech. It is true that Domenech was something of a boy wonder, at least on paper. He had been home-schooled by his affluent and well-connected Republican family. He then went to William and Mary and wrote for their student newspaper, The Flat Hat. He dropped out of college before graduating and co-founded a conservative website called Redstate, where he wrote under the pseudonym Augustine.

His father was a Bush political appointee and his new online column for the Washington Post (called Red America) said in its inaugural posting on March 21 that young Ben "was sworn in as the youngest political appointee of President George W. Bush. Following a year as a speechwriter for HHS Secretary Tommy Thompson and two as the chief speechwriter for Texas Senator John Cornyn, Ben is now a book editor for Regnery Publishing, where he has edited multiple bestsellers and books by Michelle Malkin, Ramesh Ponnuru, and Hugh Hewitt."

Not bad for a 24-year-old without a college degree. And his bio lists even more accomplishments. But getting his own column at WashingtonPost.com was the peak. Soon after that, things started going downhill very rapidly.

His decline began when bloggers looked into his writings and found that, as Augustine, he had written a column on the day of Coretta Scott King's funeral calling her a Communist. This annoyed a lot of people, who then started looking more closely at his other writings. It was then that someone discovered that he had plagiarized. And the plagiarism was not subtle. Take, for example, this excerpt from his review of the film Bringing Out the Dead.

Instead of allowing for the incredible nuances that Cage always brings to his performances, the character of Frank sews it all up for him.

But there are those moments that allow Cage to do what he does best. When he's trying to revive Mary's father, the man's family fanned out around him in the living room in frozen semi-circle, he blurts out, "Do you have any music?"

Now compare it with an earlier review posted on Salon.com:

Instead of allowing for the incredible nuance that Cage always brings to his performances, the character of Frank sews it all up for him. . . But there are those moments that allow Cage to do what he does best. When he's trying to revive Mary's father, the man's family fanned out around him in the living room in frozen semi-circle, he blurts out, "Do you have any music?"

Or this sampling from P. J. O'Rourke's book Modern Manners, which also found its way into Domenech's columns:

O'Rourke, p.176: Office Christmas parties • Wine-tasting parties • Book-publishing parties • Parties with themes, such as "Las Vegas Nite" or "Waikiki Whoopee" • Parties at which anyone is wearing a blue velvet tuxedo jacket

BenDom: Christmas parties. Wine tasting parties. Book publishing parties. Parties with themes, such as "Las Vegas Nite" or "Waikiki Whoopee." Parties at which anyone is wearing a blue velvet tuxedo jacket.

O'Rourke: It's not a real party if it doesn't end in an orgy or a food fight. • All your friends should still be there when you come to in the morning.

BenDom: It's not a real party if it doesn't end in an orgy or a food fight. All your friends should still be there when you come to in the morning.

These are not the kinds of accidental plagiarisms that anyone can fall prey to, where a turn of phrase that appealed to you when you read it a long time ago comes out of you when you are writing and you do not remember that you got it from someone else. These examples are undoubtedly deliberate cut-and-paste jobs.

Once the charges of plagiarism were seen to have some credibility, many people went to Google and the floodgates were opened, Kaloogian-style, with bloggers all over poring over his writings. Within the space of three days a torrent of further examples of plagiarism poured out. These new allegations dated back to his writings at his college newspaper and then later for National Review Online, and Domenech was found to have lifted material from Salon and even from National Review Online, the latter being the same publication for which he was writing, which adds the sin of ingratitude to the dishonesty.

On March 24, just three days after starting his Washington Post column, Ben Domenech resigned under pressure. Soon after, he also resigned as book editor at Regnery.

What can we learn from this? One lesson, seemingly, is that people can get away with plagiarism for a short while, especially if they are writing in obscurity for little-known publications. While he was writing for his college newspaper and even for his own website, no one cared to look closely into his work. Even his future employers at WashingtonPost.com did not seem to have checked him out carefully. Apparently his well-connected family and sterling Bush loyalty were enough to satisfy them that he was a good addition to their masthead.

But as soon as a writer becomes high profile, the chances are very high these days that any plagiarism will come to light.

At one level, this is a familiar cautionary tale reminding everyone to cite other people's work when using it. For us in the academic world, where plagiarism is a big no-no, the reasons for citing are not just that there are high penalties if you get caught not doing it. The more important reasons arise from the very nature of scholarly academic activity, which I shall look at in a future posting.

To be continued. . .