
Entries in "Ethics and morality"

May 07, 2011

Ethics of atheists

Via Machines Like Us, I came across this article by researchers Gregory Paul and Phil Zuckerman that challenges the view among some religious people that atheists have poor ethics.

A growing body of social science research reveals that atheists, and non-religious people in general, are far from the unsavory beings many assume them to be. On basic questions of morality and human decency — issues such as governmental use of torture, the death penalty, punitive hitting of children, racism, sexism, homophobia, anti-Semitism, environmental degradation or human rights — the irreligious tend to be more ethical than their religious peers, particularly compared with those who describe themselves as very religious. [My italics]

As individuals, atheists tend to score high on measures of intelligence, especially verbal ability and scientific literacy. They tend to raise their children to solve problems rationally, to make up their own minds when it comes to existential questions and to obey the golden rule. They are more likely to practice safe sex than the strongly religious are, and are less likely to be nationalistic or ethnocentric. They value freedom of thought.

Atheists may not be the most ethical people around, but we can make a strong case that we are much more ethical than a certain prominent Christian theologian who likes to claim that without a god there can be no objective morality, and then proceeds to justify genocide and rape because his god commanded them.

If that is where objective morality takes you, then I am really glad to be a moral relativist.

April 03, 2009

The stem cell issue-2: The ethics

Yesterday, I discussed the science involved in stem cell research. Today I want to discuss the ethics.

The ethical problems associated with stem cell research arise because, although the fertilized eggs were created not for research but to help infertile couples, the method of in vitro fertilization for the treatment of infertility has not been perfected, and more fertilized eggs are created than can actually be used to generate pregnancies. The question of what to do with these extra frozen, stored embryos is problematic.

If the extra ones are not needed for future implantation in a womb, then the options are to destroy them, preserve them forever, or use them for research. Those favoring stem cell research argue that preserving them forever is not realistic, that they will have to be thrown away eventually, and that using them for research is better than destroying them without any benefit being obtained, even though the resulting blastocyst must be destroyed in order to produce the stem cell lines.

Those opposed to stem cell research (and abortion) have a simple and clear argument: Life begins at the instant when an egg is fertilized, and no human action is permissible thereafter to prevent that egg from being eventually born. So once an egg is fertilized, whether in the uterus or outside, then we have a human life and using a blastocyst for research is effectively destroying life. This is a secular argument, even though many, or even the majority, of those who support it may have religious reasons for their stand, such as the idea that god inserts the soul at the moment of conception when the egg is fertilized. They argue that if such a position requires the preservation of unused embryos indefinitely, then we should do so, however impractical that might be.

Those who support a woman's right to terminate a pregnancy and/or the use of embryonic stem cells for research have more difficulty in justifying their position because drawing a clear line as to when 'life' begins or a clump of cells becomes 'human' is hard. One thing they are agreed upon is that a human being is much more than a fertilized egg or a bunch of cells such as a blastocyst. But where does one draw the line?

One line is that until such time as the fetus can exist independently outside the womb, it is not a human being. Right now that time corresponds roughly to the third trimester of the pregnancy, but as technology improves, it is likely to shift earlier. Others argue that any organism (human or otherwise) must have some higher level of capacity, such as a brain, before its life becomes worthy of protection from harm. After all, when it comes to the question of death, society seems to have decided that when the brain stops functioning one is effectively dead and one no longer needs to take steps to keep the body alive. And as the Terri Schiavo case tragically illustrated, what we mean by a functioning brain is more than just brain stem functions that maintain basic body processes and some reflexes. It means that the parts of the brain responsible for memory and cognition, the parts that give us our personality and make us who we are, must be functioning. Once a person has reached the stage of being in what is known as a 'persistent vegetative state', that person is considered to be effectively dead.

In this debate, both sides usually ignore the need for consistency across species. Why should only human life be so valued? What makes us superior and worthy of special consideration? If life is precious and life begins with a fertilized egg or with higher brain function, then what about the lives of other species? After all, we kill animals, even though they are fully functioning living things with a level of brain function that we would undoubtedly value if a human had it. We even think nothing of eating them after killing them. Why should we have one standard for humans and another for nonhuman animals?

One can take a speciesist position and simply assert as a given that human beings are superior to other species, and that we therefore have a right to do what we like to other animal forms while treating human life as sacrosanct. But that is hard to justify on general moral or ethical grounds. There is no clear marker that justifies treating humans as special, unless you invoke ideas such as the claim that humans have souls and other animals do not. That is an argument based on a particular religious viewpoint and should have no place in determining public policy, which should always be based on secular arguments.

In my opinion, the position taken by ethicists such as Peter Singer is the most consistent moral and ethical one, since it does not give humans special privileges. They take a utilitarian position: what one should seek is the minimization of suffering. Since suffering involves sentience, this requires that an organism have at least some primitive brain function and a developing nervous system before it can be said to have the possibility of suffering. So it would be acceptable to destroy any system of cells (whether from a human or non-human animal) as long as it has not yet reached the stage where it has the ability to suffer, or has passed that stage at the end of life.

Even if we never fully achieve the consistency it demands of us, the utilitarian argument that we should aim for a net reduction of global suffering seems to me a workable ethical principle on which to base decisions like these. Hence it is ethically allowable to use embryonic stem cells from a blastocyst (before the cells themselves have reached the capacity to suffer) in order to do research that reduces the suffering of actual living organisms.

Of course, this raises other potential problems that are sure to come down the road. Is it ethical, for example, to deliberately produce blastocysts purely for the purpose of research, as opposed to using those that are the by-products of infertility treatments? If, for example, one wanted to study the early development of a disease that had a genetic basis, would it be ethical to take an egg and sperm from people who have that disease and create a fertilized egg purely in order to study the early onset of that disease or to develop treatments for it?

These are very tough questions but ones that are going to come at us thick and fast in the near future as science and technology inexorably advance.

POST SCRIPT: God will decide if and when and how the world will end

Two days ago, I suggested that religious people make unreliable allies in the battle to save the environment because of their belief in god's plan. Right on cue, during hearings last week on cap-and-trade policies to reduce carbon emissions, a member of the US Congress quoted the Bible (Genesis 8:21,22 and Matthew 24:31) to support his belief that the future of the Earth is part of god's plan. Yes, god has our back, based on what he supposedly told Noah after the flood. So don't worry, burn those fossil fuels because Jesus has it covered!

September 01, 2008

Taking advantage of people's poverty

(Since today is the Labor Day holiday and I am traveling, I am reposting an old item, edited slightly because I can never stop tinkering with what I have written. New posts will begin again tomorrow.)

I read in the paper recently of an incident in which the wealthy son of an industrialist and his friends were about to enter a Los Angeles restaurant. Outside the restaurant was a homeless man, and the youth offered him $100 to pour a can of soda over himself. The homeless man did so, and the crowd of rich people laughed uproariously at this, paid him, and went on their way.

This story infuriated me, as I am sure it will most people who hear it. It seemed that these people were humiliating the man, taking advantage of his poverty for their warped sense of what is amusing.

But at some level, I feel that I am not being consistent. In earlier postings I have said that we should not concern ourselves with, or interfere in, what consenting adults do. And in this case we have what seems, at least on the surface, to be a purely consensual transaction between two adults. The homeless man was not forced to pour the soda over himself. He did so because he wanted to obtain $100. So on one level, one can view this incident as his being paid for a job. And as things go, there are far more disgusting things one can be asked to do than pour a soft drink over oneself. In fact, as a society, we pay lots of people to do things for us that we would shrink from doing ourselves. We pay them to go into sewers, to execute people, to clean public toilets, and so on, and we do not feel repelled by this. So why did I find this particular story so repellent?

Perhaps it was because we consider the homeless man to have been in too weak a position to freely give consent. After all, $100 was a lot of money to him. To offer very poor people what is to them a lot of money in return for doing acts that we ourselves would not do seems to offend our sense of fairness. But it is not only poor people who can be tempted in this way.

Many years ago, I saw the film The Magic Christian starring Peter Sellers and Ringo Starr, with the former as a millionaire who enjoyed seeing what he could get people to do out of greed. The point the film was making was that people at any level of society would do almost anything, even wading through a disgusting mixture of urine and excrement, provided the price was right.

At that time I thought that the film was an overly cynical representation of human motivation but now I am not so sure. Some of the reality shows on TV seem to indicate that money and fame (however fleeting) are enough for many people to overcome their normal sense of propriety and self-respect. It is a disturbing thing to ask oneself the question as to what one might be willing to do if the price were high enough.

This is why I feel that it is so important that everyone be paid a living wage and have the minimum living requirements of food, clothing, and shelter, so that they are not forced to trade their dignity in exchange for these basic necessities of life. If they do have the basic necessities and are yet willing to do things in exchange for further riches, then that is up to them.

But clearly the homeless man was not in that position and perhaps the reason we are so repelled by this story is that there was no redeeming purpose at all for the action, unlike the situation where people do jobs that society requires but which we might find personally distasteful. Here the whole point seemed to be to flaunt rich people's power over the poor and to gain enjoyment from the humiliation of another human being.

But what constitutes humiliation is also tricky. What for one person is a humiliating act is for another person a chance to proudly flaunt their lack of concern for society's expectations and mores. If the homeless man thought there was a market for his actions and decided to be entrepreneurial and launch a career by offering to pour soda over himself to anyone who would pay, would the action now become respectable, just another job that many of us personally would not do but is otherwise acceptable?

After all, some comedians are willing to have pies thrown in their face as part of their act. And reality shows like Fear Factor show that people are willing to do the grossest things just to be on TV. The only difference between these things and the homeless man story seems to be that the homeless man was destitute and the event was spontaneous, not planned and scripted.

It seems as if all these questions come back, in some essential way, to the idea of justice as fairness as the only sound basis for constructing society. Under those conditions, the only power that one person has over another is that which is freely yielded.

But the soda-pouring episode still angers me.

POST SCRIPT: The world's cheapest car

The Tata company of India introduces its $2,500 Nano. Its engineers show off the car and explain how they managed to produce a nice-looking and seemingly safe car at such a low price.

August 14, 2008

The etiquette of food

After grappling with some heavy moral issues involving the treatment of animals and the eating of meat, I want to look at a related but lighter topic: the etiquette of food restrictions in the host-guest relationship.

Sometimes I wonder if we have gone too far in being accommodating of people's food restrictions, to the extent of creating a sense of entitlement. As someone who organizes meal-based meetings at work where I feel obliged to ask people in advance what restrictions they have, I am sometimes surprised by the specificity of some requests ("I would like wraps", "I would like fresh fruits and vegetables", etc.).

This raises an interesting question that I have been thinking about: How far should we go, as both guests and hosts, in specifying and meeting dietary restrictions or preferences?

Michael Pollan says in The Omnivore's Dilemma (2006) that during the time he was a vegetarian, he felt that he had in a subtle way become alienated from other people.

Other people now have to accommodate me, and I find this uncomfortable: My new dietary restrictions throw a big wrench into the basic host-guest relationship. As a guest, if I neglect to tell my host in advance that I don't eat meat, she feels bad, and if I do tell her, she'll make something special for me, in which case I'll feel bad. (p. 314)

Whenever we invite people to our home for a meal or as house guests, we always ask them whether they have any dietary restrictions. We get the usual spectrum of requests: no pork, no beef, or vegetarian. But there are more severe restrictions that we have not had to deal with as yet: vegan, strict kosher, no wheat products, allergies to specific foods such as peanuts, salt-free or sugar-free diets, etc.

These restrictions can be split into four classes: those based on medical reasons, those based on religious reasons, those based on political/ethical/moral/environmental reasons, and those based on personal preferences. The etiquette question is this: for which, if any, of these categories of restrictions is it appropriate for a guest to request accommodation, and which ones should a host be obliged to meet?

As a host, I feel obliged to ask people what restrictions they have and to try to accommodate them, irrespective of the class of restrictions to which they belong. But I realize that I am laying myself wide open to a potentially awkward situation. Suppose someone says that they have some restriction that would require very elaborate and unfamiliar food preparation on my part. What should I do? Go to extraordinary lengths to meet it, such as preparing a separate meal? At what point does a food request become so onerous that I can feel comfortable declining to meet it?

Similarly, from the point of view of a guest, what is a reasonable request to make of a host to accommodate your preferences? Should people who have very specific and restrictive needs simply decline invitations because they feel that they are imposing too heavy a burden on their host?

Pollan says that, "On this matter I'm inclined to agree with the French, who gaze upon any personal dietary prohibition as bad manners."

Perhaps this is the way we should go. Hosts should stop asking guests what restrictions they have and prepare whatever the host wants. Guests who choose to attend should decline their host's offer to specify dietary limitations, and simply eat and drink what they can from whatever is offered, even if it ends up being just some vegetables and fruit and water. And neither party should feel offended or put out.

(Of course, this suggestion only applies to single-meal events. The situation with houseguests who are staying for some time is different and then some accommodations must be made.)

Some might feel that it is easy for me to advocate this policy since I am an omnivore and thus can eat anything, and that I might view this differently if I were someone who had strong food restrictions and might be faced with having a very restricted choice of food items to eat at a dinner party.

But I have had to deal with something roughly equivalent. In Sri Lanka, dinner parties would often start late, say around 9:00 pm, and they would sometimes serve dinner close to midnight. (Unlike in America where the meal forms either the beginning or the middle of an evening of conversation, in Sri Lanka the end of the meal often signifies the end of the party.) Although I get very hungry by that late hour, I did not tell the host that I would like my own dinner to be served early. Instead, if I suspected dinner would be served late, I got in the habit of eating at home before going for the party. That way, I did not care when the meal was served or even what was served. I simply ate what I felt like from whatever was offered whenever it was offered.

Those who have dietary restrictions or preferences that will likely result in them not being able to eat much from what is offered might consider doing the same thing.

These kinds of etiquette issues may have arisen because we have forgotten that the only reason to accept an invitation to someone else's home is to enjoy their company and the company of their other guests, not to treat their home as a restaurant to obtain food that is acceptable to you. The refreshments on offer should not be a consideration.

I wonder how Miss Manners might respond to this question.

POST SCRIPT: Interesting graphic designs

How to tell if you are in the right place. (Thanks to Progressive Review.)

June 20, 2008

Bioethical dilemmas

(This series of posts reviews in detail Francis Collins's book The Language of God: A Scientist Presents Evidence for Belief, originally published in 2006. The page numbers cited are from the large print edition published in 2007. The complete set of these posts will be archived here.)

In the Appendix of Francis Collins's book The Language of God: A Scientist Presents Evidence for Belief (2006), he tackles the difficult ethical issues raised by advances in science and medicine, especially in the field of molecular biology. His own major contributions to the mapping of the human genome have undoubtedly made him acutely conscious of these issues. Collins describes the science and the issues arising from it very clearly, and this Appendix is well worth reading.

Having mapped out the entire human genome, scientists are now in the position of being potentially able to identify the presence of genes that may predispose people to certain diseases or behaviors long before those things have manifested themselves in observable ways. This ability has, of course, some obvious advantages in the prevention and treatment of diseases.

For example, breast cancer has a hereditary component that can be identified by the presence of a dangerous mutation in the gene BRCA1 on chromosome 17. This mutation, which also creates a greater risk for ovarian cancer, can be carried by fathers as well, even though they themselves may not have the disease. In those families in which breast cancer is prevalent, knowing who has the mutated gene and who hasn't may influence how closely they are monitored and what treatments they might be given.

As time goes by, our genetic predisposition to more and more hereditary diseases will be revealed. But is this an unqualified good thing?

On the plus side, having this knowledge may enable those people at risk to take steps (diet, exercise, preventative treatment) that can reduce their risk of actually contracting the disease. After all, genes are usually not the only (or even the main) factor in causing disease and we often have some degree of control over the other risk factors for diseases such as diabetes or blood clotting.

We may also be able to treat more genetic diseases by actually changing an individual's genes, although currently the only changes being made are to the genes in the somatic cells (the ones that make up our bodies) and not the ones in the 'germ' line cells (the ones that are passed on to children via the egg and sperm). At present, there is a scientific and medical consensus that influencing the genes of future generations by changing the germ line is not something we should do.

Furthermore, our bodies' reaction to drugs is also often affected by our genes. That knowledge can be used to individualize treatment, to determine which drug should be given to which patient, and even to design drugs that take maximum advantage of an individual's genetic makeup. This kind of personalized medicine lies in our future.

But there are negatives to this brave new world of treatment. Should everyone have their DNA mapped to identify potential risk factors? And who should have access to a person's genetic information?

Some people may prefer not to know the likelihood of what diseases they are predisposed to, especially in those cases where nothing much can be done to avert the disease or what needs to be done would diminish by too much the quality of life of the individual. Furthermore, they may fear that this information could be used against them. If they have a predisposition for a major disease and this knowledge reaches the health insurance companies, the latter may charge them higher premiums or even decline to cover them at all. After all, the profit-making basis on which these companies run makes them want to only insure the pool of healthy people and deny as much coverage as possible to those who actually need it.

It works the other way too. If someone knows they have a potential health problem but the insurance companies don't, they may choose health (and life) insurance policies that work to their advantage.

So genetic information can become a pawn in the chess game played between the individual and the health (and life) insurance agencies.

This is, by the way, another major flaw of the current employer-based private health insurance schemes in the US. If we had a single-payer, universal health care system as is the case in every other developed country, and even in many developing countries, this problem regarding genetic knowledge would not even arise. Everyone would be covered automatically irrespective of their history, the risk would be spread over the entire population, and the only question would be the extent to which the taxpayers wanted to fund the system in order to cover treatment. That would be a matter determined by public policy rather than private profit. There would still be ethical issues to be debated (such as on what basis to prioritize and allocate treatment) but the drive to minimize treatment to maximize private profit would be absent, and that is a huge plus.

There are other issues to consider. What if we find a gene that gives its bearer a propensity to commit crimes or other forms of antisocial behavior? Would it be wrong to use this knowledge to preventively profile and incarcerate people? It has to be emphasized that our genes are almost never determinants of behavior; at best they provide small probabilistic estimates. But as I have written before, probability and statistics are not easy to understand, and the knowledge that someone has a slightly greater chance of committing a crime can, if publicly known, be a stigma that person can never shake, however upstanding and moral a person he or she tries to be.
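To make that point about probabilities concrete, here is a minimal back-of-the-envelope sketch. The numbers are purely hypothetical, invented only to illustrate the arithmetic and not drawn from any real study: even a gene variant that raised someone's lifetime chance of offending by half would still leave the overwhelming majority of carriers entirely blameless, which is exactly the kind of base-rate arithmetic that is easy to lose sight of.

```python
# Purely hypothetical numbers, chosen only to illustrate how a small
# probabilistic increase in risk plays out in absolute terms; they are
# not real crime or genetics statistics.
base_rate = 0.010               # assumed lifetime offense rate without the variant
carrier_rate = 0.015            # assumed (slightly higher) rate with the variant
carriers_screened = 1_000_000   # people flagged by a hypothetical screening program

expected_offenses = carrier_rate * carriers_screened
never_offend = carriers_screened - expected_offenses
extra_attributable = (carrier_rate - base_rate) * carriers_screened

print(f"Carriers flagged as 'higher risk':          {carriers_screened:,}")
print(f"Expected ever to offend:                    {expected_offenses:,.0f}")
print(f"Expected never to offend:                   {never_offend:,.0f}")
print(f"Offenses attributable to the variant:       {extra_attributable:,.0f}")
print(f"Share of flagged people who never offend:   {never_offend / carriers_screened:.1%}")
```

With these made-up numbers, 98.5 percent of the million people flagged would never commit a crime, and only 5,000 of the 15,000 expected offenses would even be attributable to the variant. Profiling on that basis would stigmatize vast numbers of people for essentially no gain.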

There is also the question of what to do with people who want to use treatments that have been developed for therapeutic purposes in order to make themselves (or their children) bigger, taller, stronger, faster, better-looking, and even smarter (or so they think) so that they will have an advantage over others. That thought-provoking film Gattaca (1997) envisions a future where parents create many fertilized eggs, examine the DNA of each, and select for implantation in the uterus only those that contain the most advantageous genetic combinations. Collins points out that while this is theoretically possible, in practice it cannot be used to select for more than two or three genes. Even then, there is no guarantee that environmental effects as the child grows up will not swamp the effects of the carefully selected genes. (p. 354)

Collins argues, and I agree with him, that these are important ethical decisions that should not be left only to scientists but should involve the entire spectrum of society. He appeals to the Moral Law as general guidance for dealing with these issues (p. 320). In particular he advocates four ethical principles (formulated by T. L. Beauchamp and J. F. Childress in their book Principles of Biomedical Ethics, 1994) that we might all be able to agree on in making such decisions. They are:

  1. Respect for autonomy – the principle that a rational individual should be given freedom in personal decision making, without undue outside coercion.
  2. Justice – the requirement for fair, moral, and impartial treatment of all persons.
  3. Beneficence – the mandate to treat others in their best interest.
  4. Nonmaleficence – "First do no harm" (as in the Hippocratic Oath).

These are good guidelines, though many problems will undoubtedly arise when such general secular ethical principles collide with the demands of specific religious beliefs and cultural practices. When supposedly infallible religious texts become part of the discussion, it makes it almost impossible to seek underlying unifying moral and ethical principles on which to base judgments.

POST SCRIPT: Brace yourself

Matt Taibbi warns that this presidential election is going to be very rough.