
Entries in "Logic of science"

September 12, 2011

The logic of science-17: Some residual issues

(For previous posts in this series, see here.)

Reader Jeff asked three good questions about some of the issues I discussed in my series on the logic of science that I would like to address here. What follows are his questions and my responses.

"First, in Part II you discuss the concepts of Know-How and Know-Why. I am curious as to what extent these concepts might be applied to understanding the differences between the Hard Sciences (Physics, Chemistry, &c.) and the Soft Sciences (Psychology, Sociology, &c.) Are what we call Soft Sciences sciences at all?"

Science has considerable prestige as a provider of reliable knowledge, and as a result many fields of study aspire to that label. But the issue of what distinguishes science from non-science is as yet unresolved. Aristotle's know-how/know-why distinction ceased to be viable as a means of distinguishing science from non-science when Newton came along. His laws of motion and gravity were spectacularly successful in explaining the motion of objects, especially in the solar system. He thus provided the 'know-why' that had been previously missing from the purely empirical field of astronomy, lifting it into the realm of science.

But Newton's laws had serious know-why deficiencies of their own, because they offered no explanation for why distant inanimate objects exerted forces on each other. Up until then, forces were believed to be exerted by contact, and the introduction of mysterious forces that acted at a distance was something of an embarrassment. But the immense achievement of unifying our understanding of celestial and terrestrial motion led many to deem what Newton had done to be unquestionably science, despite the lack of know-why for its major elements. Know-why ceased to be a requirement for science. This development was in some sense inevitable, because we now realize that every theory is based on some other theory and that at some point we just have to say 'and that's just the way it seems to be', without being able to elucidate any further.

The search for better ways to distinguish science from non-science went on. To be able to definitely say whether something belongs in some category or not (whether it be science or anything else) requires one to specify both necessary and sufficient conditions for belonging in that category. We can specify some necessary conditions for science. It needs, for example, to be empirical, predictive, and materialistic, and Thomas Kuhn added the condition that it also work within a paradigm. But suitable sufficient conditions are much harder to come by.

If a theory fails to meet the necessary conditions threshold, it means that it is definitely not science, which is why so-called 'intelligent design theory' has been deemed to be not science. But meeting the necessary threshold only allows us to conclude that the theory could be science, not that it definitely is.

This inability to say definitely that something is a science has not proven to be a problem for those areas (the so-called 'hard sciences' such as almost all areas of physics, chemistry, and biology) that are, by broad consensus, unambiguously considered to be science, because nobody except philosophers of science cares whether they meet any criteria or not. But it has proven problematic for the soft sciences, where there is no such unanimity. The scientific status of some areas of physics (such as string theory) has also been challenged on the grounds that it has as yet not generated any predictions that can be tested empirically.

"Second, in Part VII you use the electron as an example of a universal claim that can never be proven because we can never test each and every electron in the universe. I wondered if it would be possible to make the claim that any particle that does not have the mass and charge of an electron is not an electron, in the same way that we can state that any atom that does not have solely a single proton is not Hydrogen?"

You can define away the immediate problem by saying that the electron is a particle having a set number of properties. But this simply defers the problem. It does not let us off the hook because we cannot say (for example) that every hydrogen atom has one of these particles because we cannot test each and every atom to see if that is true. We simply have to make the universal claim that it does, and that cannot be proven either.

"Third, in Part X you write “that however much data may support a theory, we are not in a position to unequivocally state that we have proven the theory to be true.” Where does this leave Laws such as the laws of gravity and thermodynamics? Do we no longer speak of Laws as such?"

The terms 'law' and 'theory' do not have any ranking order epistemologically in that there is no sense in which a law is truer than a theory. For example, Newton's laws of motion are known to have limited validity and not be true when it comes to the very small or the very fast, while Einstein's theories of special and general relativity are believed to have no violations.

What gets called a 'law' and what gets called a 'theory' differ in what they imply, though accidents of historical naming can also play a role. A law tends to be an empirical universal generalization of observed relationships between measurable quantities. So the law of conservation of energy says that if we were to measure the sum of all the energy components of a closed system at one time, that total will remain the same if we measure all the components at another time. Newton's laws of motion give us the relationships between forces and mass and acceleration. Boyle's law gives us the relationship between the pressure and volume of a gas. These are all empirical generalizations and none of them try to explain why these relationships hold true.
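To illustrate how such a law functions as a bare empirical relationship, Boyle's law can be applied numerically without any account of why it holds. The following is a minimal sketch in Python; the pressure and volume readings are hypothetical numbers chosen for illustration, not data from the post.

```python
# Boyle's law as a bare empirical relation: for a fixed amount of gas
# at constant temperature, the product pressure * volume stays constant.
# The readings below are hypothetical, for illustration only.
p1, v1 = 100.0, 2.0   # initial pressure (kPa) and volume (litres)
k = p1 * v1           # the empirically constant product

p2 = 400.0            # the gas is compressed to a new pressure
v2 = k / p2           # the law predicts the new volume
print(v2)             # 0.5: quadrupling the pressure quarters the volume
```

Note that the calculation says nothing about molecules or collisions; that explanatory story belongs to the kinetic theory described below.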

A theory, on the other hand, consists of a more complicated explanatory structure that specifies the elements of the system that it deals with, as well as how those elements behave and the relationships among them. A theory might be able to explain what undergirds a law, though it rarely proves it, because of the many extra assumptions that are needed. So, for example, the kinetic theory of gases tells us what elements comprise an ideal gas and how they interact with each other and their container. Using that theory, we can understand where Boyle's law comes from. Similarly, Noether's theorem tells us that the conservation of energy is connected to the invariance of physical laws under time translations, i.e., the fact that the laws of science do not change with time.

September 01, 2011

The logic of science-16: Summary and some concluding thoughts


The roots of religion lie in deep evolutionary history. The book Why We Believe in God(s) by J. Anderson Thomson with Clare Aukofer (2011) marshals evidence from psychology and neuroscience to argue that the tendency to believe in supernatural agencies has no survival value by itself, but that it exists as a by-product of qualities that evolved for other purposes and do have survival value, such as the tendency to detect agency behind natural events.

There is no question that believing in the existence of a god satisfies a need for some people. But in our modern enlightened times, and especially for sophisticated believers, it is embarrassing to say that one believes in a god merely because it fills an emotional vacuum. People feel that they need to justify their beliefs in a way that would pass muster with modern science and so they try to find more acceptable reasons based on logic and reason and empirical evidence.

But as this series has discussed, logic and reason alone cannot establish the existence of an entity. The only case in which a purely logical argument can be used is if the negation of a proposition leads to a logical contradiction, showing that the proposition is true. In the case of the existence of god, this would require one to show that the proposition that there is no god leads to a logical contradiction. This is clearly not the case. Assuming that there is no god does not cause any logical problems whatsoever.

The next question is whether the assumption of the non-existence of god leads to any empirical contradiction. But empirical evidence is manifestly within the realm of science, and so this question is subject to the investigative methods of science. Does assuming that there is no god lead to any contradiction with the observable world? Again the answer is no. Science has proven itself quite capable of making the world intelligible without the need to invoke any supernatural agency. Furthermore, the attempt by religious believers to find some phenomenon that is currently unexplained by science (the origin of life, for example) and attribute it to god also fails. A scientist investigating a phenomenon is never faced with only two possible theories, as was the case with the question of whether the square root of 2 is a rational number, and hence ruling out one theory by contradiction does not make any of the alternatives true.

The reasoning is simple. In science, the choice is never between theory A and the negation of theory A, as it was with whether the square root of two is a rational number. Scientific conflicts are always three-cornered fights that involve comparing theories A and B with data. Since the number of potential theories to explain any given phenomenon is infinite, the reductio ad absurdum method cannot be used: even if one of them could be proven wrong, it would not follow that any of the others is right. This was articulated by Pierre Duhem long ago.

Unlike the reduction to absurdity employed by geometers, experimental contradiction does not have the power to transform a physical hypothesis into an indisputable truth; in order to confer this power on it, it would be necessary to enumerate completely the various hypotheses which may cover a determinate group of phenomena; but the physicist is never sure that he has exhausted all the imaginable assumptions. The truth of a physical theory is not decided by heads and tails. (The Aim and Structure of Physical Theory, Pierre Duhem 1906, translated by Philip P. Wiener, 1954, p. 190)

Science is a lot more complicated than mathematics. To have any empirical proposition accepted as true, one must provide sufficient positive evidence in support of it, not merely argue against a competing theory. Those arguing in favor of the existence of god have failed to do that. The failure is quite spectacular given the immense powers attributed to their god. What this series has tried to show is that the verdict of science when it comes to the existence of god is an overwhelming "No!"

Some sophisticated religious apologists try to argue that the question of the existence of god is outside the realm of empirical evidence and thus outside the range of science. This raises the question of what is the use of such a god. A recent television series Curiosity on the Discovery channel had one program that dealt with the question Did God Create the Universe?, arising from Stephen Hawking's recent assertion that god was not necessary to understand the universe. In a discussion after the program, cosmologist Sean Carroll asked Catholic theologian John Haught how the world would look different if there was no god. This is, of course, the key question. If there is no difference, then god is superfluous. If one can point to a specific difference, then that means that there are empirical implications for god's existence and thus it is a question that can be investigated by science. Haught's reply? If there is no god, the universe itself would not exist!

This reply aptly captures the poverty of theology. How does Haught know this? He cannot, of course. It is just another example of theology simply making stuff up to find something for god to do. Theology really is nothing more than the field that manufactures excuses for why we see no evidence for god. As H. L. Mencken said, "A theologian is like a blind man in a dark room searching for a black cat which isn't there - and finding it!"

In this series on the logic of science, I have said that science is not in the business of proving things to be true or disproving them either. Science is in the business of figuring out what works best in any given situation, using the logical and evidentiary methods that it has found useful. The same reasoning that has led to scientific success is what leads naturally to atheism. As population biologist J. B. S. Haldane said, "My practice as a scientist is atheistic. That is to say, when I set up an experiment I assume that no god, angel or devil is going to interfere with its course; and this assumption has been justified by such success as I have achieved in my professional career. I should therefore be intellectually dishonest if I were not also atheistic in the affairs of the world."

Scientific knowledge is always tentative and subject to change in the light of new evidence, and science never claims to have the ultimate truth. Some religious apologists seize on this truism to argue that in the absence of absolute truth, scientific knowledge, like religion, is just another form of faith on an equal footing with it, and thus that the knowledge obtained from each has equal standing. That this is a false equivalence can be seen by posing the following question: If you had to roll back all the knowledge gained in the last 500 years in just one field, which would you choose to erase: what we have learned from religion/theology or what we have learned from science?

I think the answer is obvious.

August 25, 2011

The logic of science-15: Truth by logical contradiction


Theologians often try to claim that they can arrive at eternal truths about god by using pure logic. In some sense, they are forced to make this claim because they have no evidence on their side, but it is worthwhile to examine whether it is possible to arrive at any truth purely logically. If so, we can see whether that method can be co-opted by science, thus bypassing the need for evidence.

In mathematics, there is one way to prove that something is true using just logic alone and this is the method known as reductio ad absurdum or reduction to absurdity. The way it works is like this. Suppose you think that some proposition is true and want to prove it. You start by assuming that the negation of that proposition is true, and then show that this leads to a logical contradiction or a result that is manifestly false. This would convincingly prove that the starting assumption (the negation of the proposition under consideration) was false and hence that the original proposition was true.

The most famous example of this kind of proof is the simple, short, and elegant proof of the proposition that √2 (the square root of 2) is NOT a rational number. I believe that everyone should know this beautiful proof and so I will give it here.

This proof starts by assuming that the negation of that proposition is true, i.e., that the square root of two IS a rational number. You can then show that this assumption leads to a logical contradiction, as follows.

A rational number is one that can be written as the ratio of two integers. For example, the number 1.5 is rational because it can be written as 6/4, 12/8, 3/2, and so on. Similarly, 146.98 is a rational number because it can be written as 14698/100. By contrast, the famous number π=3.1415927… is not a rational number. It cannot be written as the ratio of two integers since its decimal expansion does not terminate AND there is no repeating pattern of digits.

(As a slight digression, to see why an infinite but repeating pattern is a rational number, take the number 4.3151515… where the sequence 15 is repeated indefinitely. Call this number y. If we multiply y by 10, we get 10y=43.151515… If we multiply y by 1000, we get 1000y=4315.151515… Subtracting 10y from 1000y, we get 990y=4272 exactly, since the repeating numbers after the decimal points are equal in both cases. Hence y=4.3151515… =4272/990 exactly and is thus a rational number. Similar reasoning can be applied with any repeating sequence.)
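The arithmetic in the digression above can be verified with exact fractions. Here is a minimal sketch using Python's standard-library fractions module; the specific numbers are the ones from the text.

```python
from fractions import Fraction

# y = 4.3151515... expressed exactly as the ratio derived above:
# 990y = 4272, so y = 4272/990.
y = Fraction(4272, 990)

# Subtracting 10y from 1000y leaves an exact integer, because the
# repeating tails after the decimal point cancel.
assert 1000 * y - 10 * y == 4272

print(y)         # 356/165 would be wrong; Fraction reduces it to 712/165
print(float(y))  # approximately 4.3151515...
```

The Fraction type automatically reduces 4272/990 to lowest terms (712/165), which is the same cancellation of common factors used in the √2 proof that follows.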

So IF √2 is a rational number, then it can be written as the ratio a/b, where a and b are integers. We then make sure that the ratio has been 'simplified' as much as possible by getting rid of all common factors. For example in the case of 146.98 discussed above, the ratio 14698/100 can be simplified to 7349/50 by cancelling the only common factor that the numerator and denominator share, which is the number 2. In the case of 1.5, the ratio we would use is 3/2, since the others have common factors.

So our starting assumption becomes that √2=a/b, where a and b are integers that do not have any common factors. We can now multiply each side by itself to get 2=a²/b². Hence a²=2b². This implies that a² is an even number (because it has a factor of 2). But if the square of a number is even, the number itself must be even (if a were odd, then a² would also be odd). Hence a=2c, where c is also an integer. This leads to (2c)²=2b², and thus b²=2c². This implies that b² is an even number and hence b is also an even number. Thus b also has a factor of 2, and we have arrived at the conclusion that a and b both have the common factor 2. But if a and b have a common factor, this contradicts what we did at the start of the proof, where we got rid of all their common factors. We have thus arrived at a logical contradiction. Hence our starting assumption that the square root of 2 is rational must be wrong. Since there are only two possible alternatives (the square root of 2 is either rational or not rational), we can conclude that it is not rational.
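The argument above is purely logical, but its conclusion can be sanity-checked by brute force. A finite search proves nothing on its own (which is exactly why the logical proof is needed), but a sketch like the following Python search at least shows that no counterexample turns up:

```python
# A brute-force sanity check of the conclusion above (not a proof --
# only the logical argument establishes it): search for integers a, b
# with a*a == 2*b*b, which would make sqrt(2) equal to a/b.
def find_rational_sqrt2(limit):
    for b in range(1, limit + 1):
        for a in range(1, 2 * b + 1):  # a/b = sqrt(2) < 2, so a < 2b
            if a * a == 2 * b * b:
                return (a, b)
    return None

print(find_rational_sqrt2(1000))  # None: no such pair up to the limit
```

However large the limit is made, the search comes back empty; the proof by contradiction tells us why it always must.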

Note that we have proven a result to be true without appealing to any experimental data or the 'real' world. As far as I am aware, the only way to prove that a proposition is true using pure logic alone is of this nature, to show that the negation of the proposition leads to a logical contradiction of this sort.

Philosophers and theologians down the ages have tried to apply the reductio ad absurdum argument to prove the existence of god using logic alone. But the problem is that assuming that there is no god does not lead to a logical contradiction. So instead they appealed to what they felt was manifestly true, that the assumption that god did not exist meant that the existence and properties of the universe were wholly inexplicable. Almost all arguments for the existence of god are at some level appeals to this kind of incredulity.

But this is not a logical contradiction, since they are after all appealing to the empirical properties of the universe. In days gone by when much of how the world works must have seemed deeply mysterious, this subtle equating of empirical incredulity with logical contradiction may have passed without much notice. Even if what was shown was not strictly a logical contradiction, if the negation of a proposition 'god exists' seemed to lead to an obvious disagreement with data in that the properties of the world could not be explained, the negation of the proposition could be rejected, thus proving the original proposition to be true and that god exists.

But those arguments no longer hold, since science has explained much of how the world works. Assuming that god does not exist no longer leads to either a logical or an empirical contradiction.

Next: Some concluding thoughts

August 23, 2011

The logic of science-14: The rational progress of science


Karl Popper's model of falsification makes the scientific enterprise seem extremely rational and logical. It also implies that science progresses along the path to truth by successively eliminating false theories. Hence it should not be surprising that practicing scientists like it and still hold on to it as their model of how science works. In the previous post in this series, I discussed how Thomas Kuhn's work cast serious doubt on the validity of Popper's falsification model of scientific progress, replacing it with a seemingly more subjective process in which scientists switch allegiance from an old theory to a new one based on many factors, some of them subjective, a transition that has some of the elements of a gestalt switch. This conclusion was disturbing to many.

Another historian and philosopher of science, Imre Lakatos, was one of those concerned that Kuhn's model of gestalt switches implied a certain amount of irrationality in the way that scientists choose a new paradigm over the old or pick problems to work on. In his major work The Methodology of Scientific Research Programmes (1978), he argued that scientists are rational in the way they choose paradigms, and he proposed a new model (which he called 'methodological falsificationism') that he contrasted with Popper's older model (which he called 'naïve falsificationism') and that he claimed solved some of its difficulties.

In Popper's naïve falsification model, when there is disagreement between the predictions of a theory and observations or experiment, the theory must be abandoned. Kuhn and Lakatos agree with Duhem that when such a disagreement occurs, it is not obvious where to place the blame for the failure so summarily discarding the theory is unwarranted. In such situations Duhem appealed to the vague 'good sense' of the individual scientists and of the collective scientific community to determine what to do. Kuhn refined this by saying that the choice of which direction to proceed is based on whether the scientific community perceives the existing paradigm to be in a crisis or not, and that when there is a crisis, the revolutionary switch to a new paradigm is akin to a gestalt switch, whose precise mechanism is hard to pin down, in which individual scientists suddenly see things in a new way.

Lakatos agrees with Kuhn (and disagrees with Popper) that experimental tests are never simply a contest between theory and experiment. At the very least they are three-cornered fights between an old paradigm, a new emerging rival, and experiment. But he disagrees with Kuhn that a crisis within the old paradigm is necessary for scientists to switch their allegiance to a new one (p. 206). He argues that a new theory is acceptable over its predecessor if it (1) explains all the previous successes of the old theory; (2) predicts novel facts that the old theory would have forbidden or would not even have considered; and (3) some of its novel predictions are confirmed. (p. 227)

Lakatos says that Kuhn places too much reliance on vague psychological processes to explain scientific revolutions and that the process is more rational: scientists proceed in a systematic way in choosing between competing theories. In Lakatos' model of methodological falsificationism, he emphasizes that experimental data are never free of theory. An experimental result in its raw form is simply a sensory observation, such as a dot on a screen, a pointer reading on a meter, a click of a Geiger counter, a track in a bubble chamber, a piece of bone, etc., none of which has any obvious meaning by itself. In order to give these observations some meaning, we have to use theories that interpret the raw sensory experience. For example, a fossil bone is useless unless one can determine what animal it belonged to and how old it is, all of which requires the use of other theories. In addition, we have to assume that our knowledge about the other elements surrounding the raw data is unproblematic.

Meanwhile, a theoretical prediction is never the product of a single theory but consists of a combination of four components: the basic theory being investigated, the initial conditions, various auxiliary hypotheses that are needed to actually implement the theory, plus the invocation of ceteris paribus (roughly meaning "all other things being equal") clauses. For example, to understand the origins of the Solar System, we need Newton's laws but we also need to make assumptions about the initial state of the gas (the initial conditions), that the laws have not changed since the time the Earth was formed (an auxiliary hypothesis), and that no other unknown factors played a role in the formation (the ceteris paribus clauses).

Lakatos said that when there is a disagreement between a theoretical prediction and experimental data (where the two are interpreted in these more complex ways), scientists use both a 'negative heuristic' and 'positive heuristic' to systematically investigate and isolate the cause of this disagreement and that this process is what makes science rational.

The ‘negative heuristic’ says that one must deflect attention away from the ‘hard core’ theory when there is an inconsistency between predictions and experiment. In other words, scientists look for the culprit in all the factors other than the basic theory. The 'positive heuristic' consists of "a partially articulated set of suggestions or hints on how to change, develop the 'refutable variants' of the research program, how to modify, sophisticate, the 'refutable' protective belt." (p. 243) So the positive heuristic tells scientists how to systematically investigate the initial conditions, auxiliary hypotheses, ceteris paribus clauses, etc., in short everything other than the basic theory. These two strategies protect the basic theory from being easily overthrown. This is important because good theories are hard to come by and one must not discard them too hastily.

Lakatos claims that this process rationally determines how scientists select problems to work on and how they resolve paradigm conflicts (contrasting it with Kuhn’s suggestion that scientists intuitively know what to do in such situations). In some sense, Lakatos seems to be fleshing out the rules of operation that Kuhn refers to but does not elaborate.

Lakatos argues that as long as a basic theory is fruitful and the negative and positive heuristics provide plenty of avenues for people to investigate, thus steadily producing new facts that both advance knowledge and are useful (a state of affairs that he calls a progressive problemshift), the basic theory will be retained. This is why Newtonian physics, one of the most fruitful theories of all time, is still with us even though it would be considered falsified by Popper's criterion. It is only when the theory runs out of steam, when all these avenues of investigation are more or less exhausted and do not seem to offer much opportunity to discover novel facts, that we have what he calls a degenerating problemshift. At that point, scientists start abandoning their allegiance to the old theory and seek a new one, eventually leading to a scientific revolution.

Next: Truth by logical contradiction

August 17, 2011

The logic of science-13: How 'good sense' emerges in science



The philosopher of science Pierre Duhem said in his book The Aim and Structure of Physical Theory (1906, translated by Philip P. Wiener, 1954) that despite the fact that there is no way to isolate any given theory from all other theories, scientists are saved from sterile discussions about which theory is best because the collective 'good sense' of the scientific community can arrive at verdicts based on the evidence, and these verdicts are widely accepted. In adjudicating the truth or falsity of theories this way, the community of scientists are like a panel of judges in a court case (or a panel of doctors dealing with a particularly baffling set of symptoms), weighing the evidence for and against before pronouncing a verdict, once again showing the similarities of scientific conclusions to legal verdicts. And like judges, we have to try to leave our personal preferences at the door, which, as Duhem pointed out, is not always easy to do.

Now nothing contributes more to entangle good sense and to disturb its insight than passions and interests. Therefore, nothing will delay the decision which should determine a fortunate reform in a physical theory more than the vanity which makes a physicist too indulgent towards his own system and too severe towards the system of another. We are thus led to the conclusion so clearly expressed by Claude Bernard: The sound experimental criticism of a hypothesis is subordinated to certain moral conditions; in order to estimate correctly the agreement of a physical theory with the facts, it is not enough to be a good mathematician and skillful experimenter; one must also be an impartial and faithful judge. (p. 218)

This is why the collective judgment of the community, in which individual biases get diluted, carries more weight than the judgment of a single member, like the way that major legal decisions are made by a jury or a panel of judges rather than a single person.

Duhem's idea that we are ultimately dependent on the somewhat vague collective 'good sense' of the scientific community to tell us what is true and what is false may be disturbing to some as it seems to demote scientific 'truth', reducing it from being objectively determined by data to an act of collective judgment, however competent the community making that judgment is. Surely there must be more to it than that? After all, science has achieved amazing things. Our entire modern lives are made possible because of the power of scientific theories that form the foundation of technology. In short, science works exceedingly well. How can it work so well if the theories we have developed were not true in some objective sense?

Such feelings are so strong that people continue to try and find ways to show that scientific theories, if not absolutely true now, are at least progressing along the road to truth. Popper's idea of falsification seemed, at least initially, to provide a mechanism to understand how this steady progress might be occurring.

It was Thomas Kuhn who delivered the most devastating critique of Karl Popper's idea that scientific theories can be falsified if a key prediction of the theory turns out to be contradicted by experiment. In Kuhn's landmark book The Structure of Scientific Revolutions (1962), he pointed out that falsification fails in two ways. One way is an extension of Duhem's argument: it is never the case that a pure theoretical prediction based on a single theory is compared with a piece of empirical data, and in the event of disagreement, there are always other linked theories that can be blamed. Secondly, even if we accept the idea of falsification at face value, it does not describe actual scientific practice. Kuhn's book contains a wealth of examples showing how scientists live and work quite comfortably, for decades and sometimes even for centuries, with a theory that has been contradicted by data in a few instances, until finally discarding the theory or resolving the contradictions. As long as a theory seems to be generally working well, scientists are not too perturbed by the occasional disagreement, seeing such cases as merely unsolved problems and not as falsifying events. In fact, he points out that new theories almost always have very little evidence in support of them and disagree with a lot of data. If Popper's model were applied rigorously, every theory would be falsified almost from the get-go.

So how do old theories get rejected and replaced by new ones? Kuhn says that during the period of 'normal science', most scientists work within a given scientific 'paradigm' (which consists of a basic theory plus the rules of operation), picking problems that promise to elucidate the workings of the paradigm. They are not looking to overthrow the paradigm but to stretch its boundaries. In the process, they sometimes encounter problems that resist solutions. If these discrepancies multiply and if a few key ones turn out to be highly resistant to attack by even the best practitioners in the field, science enters a period of crisis in which people start seriously investigating alternative theories. At some point, individual scientists start switching allegiance to a promising new theory that seems to solve some outstanding and vexing problems that the old one failed to solve and this process can begin to snowball. Kuhn suggests that the switch from seeing the old theory as true to seeing it as false and needing to be replaced by the new one is similar to a gestalt switch, a sudden realization of a new truth that is not driven purely by logic.

Kuhn's views aroused considerable passions. Some anti-science people (religious and non-religious alike) have seized on his idea that scientific revolutions are not driven purely by objective facts to extend his views well beyond what he envisaged and claim that science is an irrational activity and that scientific knowledge is just another form of opinion and has no claim to privileged status. Kuhn spent a good part of the rest of his life arguing that this was a distortion of his views and that scientific knowledge had justifiable claims to being more reliable because of the ways that science operated.

Next: The rational progress of science

August 10, 2011

The logic of science-12: The reasoned consensus judgment of science

(For previous posts in this series, see here.)

The previous post illustrated a crucial difference between science and religion that explains why scientists can resolve disagreements amongst themselves as to which theory should be considered true but religious people cannot agree as to which god is the one true god. In competition between scientific theories, after some time the weight of evidence is such that one side concedes that their theory should be rejected, resulting in a consensus verdict. In religion, since evidence plays no role, and reason and logic are invoked only when they support your own case and discarded by appealing to faith when reason goes against you, there is no basis for arriving at agreement. It would be unthinkable for a scientist to argue in favor of his or her theory by denying evidence and logic and telling people that they must have faith in the theory for it to work.

Science can come to a consensus not because all individual scientists on the losing side change their minds. Some of them can be as dogged as the most fervent believer in god in holding on to their beliefs, and as inventive in finding new reasons for belief, though they will never resort to appealing to supernatural forces or faith. The key difference is that over time, the advocates of a failing theory become less influential, more marginalized, and eventually die out. The next generation of research students choose their areas of study when they are older and more aware of the field, and tend to avoid signing on to failing theories, so that those declining theories eventually fade from the scene, to be found only in historical archives. Unlike in the case of religion, there is no institutional structure dedicated to perpetuating old theories, nor is there a sacred text that must be adhered to. As much as scientists admire the works of Isaac Newton and Albert Einstein and Charles Darwin, they do not treat them as divinely inspired. Science has moved on since those works were written and their original theories have been modified and elaborated on, even if they still bear their names. Every generation of students is taught the current version of accepted theories, not the original ones.

Religions, however, are forced to conform to ancient texts. Furthermore, children are not allowed to choose their religious beliefs at a more mature age, the way that research scientists choose which theories they want to work with. Religions indoctrinate the next generation of impressionable children with those ancient beliefs when they are very young, thus ensuring that those beliefs persist. Moreover, there is a vast industry (churches, priests, theologians, etc.) whose very livelihood depends on those ancient religious ideas being perpetuated. Scientists can shift their allegiance from one theory to another without losing their jobs. A theologian or priest cannot. Can you imagine a pope saying that after some thought he has come to the conclusion that there is no god or that Buddhism is the true religion? Hence even though the evidence against the existence of god is far more overwhelming than that against old and rejected scientific theories, theologians will cling to their old ideas, never conceding that they are wrong, invoking more and more ad hoc hypotheses to justify their beliefs.

This is why science progresses but religions are stuck in a rut, the only progress in the latter being the new excuses that need to be invented to explain why there is no evidence for god, as science makes god increasingly unnecessary as an explanatory concept. In fact, the field of theology largely consists of explaining why there is no evidence for god. Religious believers have the wiggle room to do this because pure logic is never sufficient to eliminate a theory. This is why believers in god who argue that, because logical or evidentiary arguments cannot disprove the existence of god, it is therefore reasonable to believe that god exists, are saying something meaningless.

In science too, we cannot eliminate the phlogiston theory of combustion or the ether or the geocentric model of the solar system by logic or evidence. So how are scientists able to say with such confidence that some theories (like gravity) are true and that others (like ether or phlogiston) are false? Pierre Duhem (The Aim and Structure of Physical Theory, 1906, translated by Philip P. Wiener, 1954) said that we have to appeal to the collective 'good sense' of the scientific community as a whole to arrive at a judgment of which theory is better. It is the community of professionals working in a given scientific area that is the best judge of how to weigh the evidence and decide whether a theory is right or wrong, true or false, rather than any individual member of that community, since individual scientists are like any other people, prone to personal failings that can cloud their judgment unless they exercise great vigilance over themselves.

Next: How 'good sense' emerges in science

August 04, 2011

The logic of science-11: The problem with falsification

(For previous posts in this series, see here.)

In the previous post, I discussed Karl Popper's idea of using falsification as a demarcation criterion to distinguish science from non-science. The basic idea is that for a theory to be considered scientific, it has to make risky predictions, with the potential that a negative result would require us to abandon the theory, i.e., declare it to be false. If you cannot specify a test with the potential that a negative result would be fatal to your theory, then according to Popper's criterion, that theory is not scientific.

Of course, I showed that falsification cannot be used to identify true theories by eliminating all false alternatives, because there is no limit to the theories that can be invented to explain any given set of phenomena. But steadily eliminating more and more false theories surely has to be a good thing in its own right. Falsificationism is highly popular among working scientists because it enables them to claim that science progresses by closing down blind alleys.

But there is a deeper problem with the whole methodology of falsificationism, and it is this: even if prediction and data disagree, we cannot infer with absolute certainty that the theory is false, because of the interconnectedness of scientific knowledge. Pierre Duhem pointed out over a century ago that in science one is never comparing the predictions of a single theory with experimental data, because the theories of science are all inextricably tangled up with one another. As Duhem said (The Aim and Structure of Physical Theory, 1906, translated by Philip P. Wiener, 1954, p. 199, italics in original):

To seek to separate each of the hypotheses of theoretical physics from the other assumptions on which this science rests is to pursue a chimera; for the realization and interpretation of no matter what experiment in physics imply adherence to a whole set of theoretical propositions.

The only experimental check on a physical theory which is not illogical consists in comparing the entire system of the physical theory with the whole group of experimental laws, and in judging whether the latter is represented by the former in a satisfactory manner.

In other words, since every scientific theory is always part of an interconnected web of theories, when something goes wrong and data does not agree with the prediction, one can never pinpoint with certainty exactly which theory is the culprit. Is it the one that is ostensibly being tested or another one that is indirectly connected to the prediction? One cannot say definitively. All one knows is that something has gone wrong somewhere. Duhem provides an illuminating analogy for the difficulty facing a scientist, saying that the work of a scientist is more similar to that of a physician than to that of a watchmaker.

People generally think that each one of the hypotheses employed in physics can be taken in isolation, checked by experiment, and then, when many varied tests have established its validity, given a definitive place in the system of physics. In reality, this is not the case. Physics is not a machine which lets itself be taken apart; we cannot try each piece in isolation and, in order to adjust it, wait until its solidity has been carefully checked. Physical science is a system that must be taken as a whole; it is an organism in which one part cannot be made to function except when the parts that are most remote from it are called into play, some more so than others, but all to some degree. If something goes wrong, if some discomfort is felt in the functioning of the organism, the physicist will have to ferret out through its effect on the entire system which organ needs to be remedied or modified without the possibility of isolating this organ and examining it apart. The watchmaker to whom you give a watch that has stopped separates all the wheelworks and examines them one by one until he finds the part that is defective or broken. The doctor to whom a patient appears cannot dissect him in order to establish his diagnosis; he has to guess the seat and cause of the ailment solely by inspecting disorders affecting the whole body. Now, the physicist concerned with remedying a limping theory resembles the doctor and not the watchmaker. (p. 187)
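Duhem's point can be put in the language of elementary logic. The notation below is my own gloss, not Duhem's: a prediction P never follows from the hypothesis under test H alone, but only from H taken together with a collection of auxiliary assumptions.

```latex
% The prediction P follows only from the hypothesis H taken
% together with auxiliary assumptions A_1, ..., A_n:
(H \land A_1 \land \cdots \land A_n) \rightarrow P

% If the prediction fails, modus tollens negates only the
% conjunction as a whole:
\neg P \;\Rightarrow\; \neg(H \land A_1 \land \cdots \land A_n)
\;\equiv\; (\neg H \lor \neg A_1 \lor \cdots \lor \neg A_n)
```

Logic alone delivers only the disjunction: at least one of H, A_1, ..., A_n is false, but nothing in the inference says which one.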

Duhem is arguing that one can never deduce whether any individual scientific theory is false, even in principle. This seems to fly in the face of direct human experience. Anyone with even a cursory knowledge of scientific history knows that individual scientific theories have routinely been pronounced wrong and been replaced by new ones. How could this happen if we cannot isolate a single theory for comparison with data? How can scientists decide which of two competing theories is better at explaining data if a whole slew of other theories are also involved in the process? Is Duhem saying that we can never arrive at any conclusion about the truth or falsity of any scientific theory?

Not quite. What he goes on to say is that, like a physician, a scientist has to exercise a certain amount of discerning judgment in identifying the source of the problem, all the while being aware that one does not know for certain. Duhem argues that this is where the reasoned judgment of the scientific community as a whole plays a role in determining the outcome, overcoming the limitations imposed by strict logic. While there may be a temporary period in which scientists argue over the merits of competing theories,

In any event this state of indecision does not last forever. The day arrives when good sense comes out so clearly in favor of one of the two sides that the other side gives up the struggle even though pure logic would not forbid its continuation… Since logic does not determine with strict precision the time when an inadequate hypothesis should give way to a more fruitful assumption, and since recognizing this moment belongs to good sense, physicists may hasten this judgment and increase the rapidity of scientific progress by trying consciously to make good sense within themselves more lucid and more vigilant. (Duhem, p. 218, my italics.)

In the next post, I will discuss the importance that the consensus judgment of expert communities plays in science.

August 02, 2011

The logic of science-10: Can scientific theories be proven false?

(For previous posts in this series, see here.)

In the previous post in this series, I wrote about the fact that however much data may support a theory, we are not in a position to unequivocally state that we have proven the theory to be true. But what if the prediction disagrees with the data? Surely then we can say something definite, that the theory is false?

The philosopher of science Karl Popper, who was deeply interested in the question of how to distinguish science from non-science, used this idea to develop his notion of falsifiability. He suggested that what makes a theory scientific was that it should make predictions that can be tested, saying that "the criterion of the scientific status of a theory is its falsifiability, or refutability, or testability." (Conjectures and Refutations: The Growth of Scientific Knowledge, 1963, p. 48)

Popper's motivation for doing this was his opposition to the claims of the supporters of Marxism, Freudian psychoanalysis, and Jungian psychology that their respective theories were scientific. He said that those theories seemed to be so flexible that almost anything that happened could be claimed to be in support of the theory. While supporters of these theories used these alleged successes as demonstrating the strength of their theories, Popper argued the opposite: that a theory that could never be proven wrong was not scientific. A scientific theory was one that made risky predictions that laid bare the possibility that a negative result would require the discarding of the theory. A theory whose predictions could never be contradicted by any conceivable data was not a scientific theory.
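At face value, the falsifying inference is just the classical rule of modus tollens (a standard textbook rendering, not Popper's own notation): if theory T entails prediction P, and P fails to be observed, then T is false.

```latex
% Modus tollens: from T -> P and not-P, infer not-T
\frac{T \rightarrow P \qquad \neg P}{\neg T}
```

The asymmetry Popper exploited is that the reverse inference, from an observed P to the truth of T, is not logically valid, which is why failed predictions seemed to carry a force that successful ones could not.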

Popper also said that humans were born with an innate tendency to make conjectures, to construct a universal theory based on whatever data was at hand, and that we held on to that theory until it was refuted (or falsified) by new data, whereupon we replaced it with a new universal theory. This process of conjectures and refutations went on all the time and was how science functioned. He claimed that this model also solved the problem of induction, why we expected that things that had always happened in the past would continue to happen in the future, when logically there was no reason to infer that.

Although Popper's main goal was to solve what was known as the demarcation problem, i.e., the ability to distinguish science from non-science, his idea of falsifiability seemed also to advance us toward the goal of distinguishing truth from falsehood, because if a prediction disagrees with the data, then we can conclude that the theory is false. This feature seems to give us some hope that we can arrive at a true theory by a back-door mechanism. If we can enumerate all the possible alternatives to a theory and prove that all but one are false, then the one remaining theory must be true. To quote Sherlock Holmes in The Sign of Four, "When you have eliminated the impossible, whatever remains, however improbable, must be the truth."

But that option also proves to be illusory, for a purely practical reason. In science, one can never be sure that one has exhausted all the alternatives. There is no limit to the number of theories that can be postulated to explain any given set of phenomena and so showing one or any number of them to be false does not prove that any of the remaining alternatives are true.

This is the fatal flaw of the arguments of almost all religious believers, especially the creationists and the intelligent designers. Their strategy is to argue that there are only two possible explanations for some phenomenon, an intervention by god or an explanation based on naturalistic science. For example, in explaining the diversity of life, the competing theories are said to be evolution by natural selection or a designer/god. They would then seek some phenomenon that had not been convincingly explained by the scientific theory that encompasses it, declare that the scientific theory had thus been falsified, and triumphantly conclude that the phenomenon must be the work of a god. But that is a false dichotomy. Even in the highly unlikely event that some day in the future the theory of evolution by natural selection experiences a serious enough crisis that scientists suspect it to be false, this would not imply that 'god did it' would be accepted as the true explanation. There will be no shortage of other scientific theories competing to replace the theory of evolution, all of them having at least some supporting evidence.

This kind of flawed argument is what religious believers advance even now, with the current candidates for god's intervention being the origin of life, the origin of the universe, the mind, consciousness, intelligence, morality, etc. They have no choice but to pursue this fundamentally flawed strategy because they have no positive evidence for god.

Next: Are theories falsifiable?

July 29, 2011

The logic of science-9: Can scientific theories be proven true?

(For previous posts in this series, see here.)

In mathematics, the standard method of proving something is to start with the axioms and then apply the rules of logic to arrive at a theorem. In science, the parallel exercise is to start with a basic theory that consists of a set of fundamental entities and the laws or principles that are assumed to apply to them (all of which serve as the scientific analogues of axioms) and then apply the rules of logic and the techniques of mathematics to arrive at conclusions. For example, in physics one might start with the Schrödinger equation and the laws of electrodynamics and a system consisting of a proton and electron having specific properties (mass, electric charge, and so on) and use mathematics to arrive at properties of the hydrogen atom, such as its energy levels, emission and absorption spectra, chemical properties, etc. In biology, one might start with the theory of evolution by natural selection and see how it applies to a given set of entities such as genes, cells, or larger organisms.

The kinds of results obtained in science using these methods are not referred to as theorems but as predictions. In addition to the mathematical ideas of axioms, logic, and proof, in science we are also dealing with the empirical world, and this gives us another tool for determining the validity of our conclusions, and that is data. This data usually comes either in the form of observations for those situations where conditions cannot be repeated (as is sometimes the case in astronomy, evolution, and geology) or, more commonly, in the form of experimental data that is repeatable under controlled conditions. The comparison of these predictions with experimental data or observations is what enables us to draw conclusions in science.

It is here that we run into problems with the idea of truth in science. While we can compare a specific prediction with experimental data and see if the prediction holds up or not, what we are usually more interested in is the more basic question of whether the underlying theory that was used to arrive at the prediction is true. The real power of science comes from its theories because it is those that determine the framework in which science is practiced. So determining whether a theory is true is of prime importance in science, much more so than the question of whether any specific prediction is borne out. While we may be able to directly measure the properties of the entities that enter into our theory (like the mass and charge of particles), we cannot directly test the laws and theories under which those particles operate and show them to be true. Since we cannot treat the basic theory as an axiom whose truth can be established independently, this means that the predictions we make do not have the status of theorems and so cannot be considered a priori true. All we have are the consequences of applying the theory to a given set of entities, i.e., its predictions, and the comparisons of those predictions with data. The results of these comparisons are the things that constitute evidence in science.

So what can we infer about the truth or falsity of a theory using such evidence? For example, if we find evidence that supports a proposition, does that mean that the proposition is necessarily true? Conversely, if we find evidence that contradicts a proposition, does that mean that the proposition is necessarily false?

To take the first case, if a prediction agrees with the results of an experiment, does that mean that the underlying theory is true? It is not that simple. The logic of science does not permit us to make that kind of strong inference. After all, any reasonably sophisticated theory allows for a large (and usually infinite) number of predictions. Only a few of those may be amenable to direct comparison with experiment. The fact that those few agree does not give us the luxury of inferring that any future experiments will also agree, a difficulty known as the problem of induction. So at best, those successful predictions will serve as evidence in support of our theory and suggest that it is not obviously wrong, but that is about it. The greater the preponderance of evidence in support of a theory, the more confident we are about its validity, but we never reach a stage where we can unequivocally assert that a theory has been proven true.
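The logical point here is the textbook fallacy of affirming the consequent (again a standard rendering in my own notation, not terminology from this post): from a theory T entailing a prediction P, and P being observed, it does not follow that T is true.

```latex
% Invalid inference (affirming the consequent):
T \rightarrow P, \quad P \quad \not\Rightarrow \quad T

% Any number of rival theories can entail the same prediction:
T' \rightarrow P, \quad T'' \rightarrow P, \quad \ldots
```

Confirmed predictions can therefore raise our confidence in T without ever proving it.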

So we arrive at a situation in science that is analogous to that in mathematics with Gödel's theorem, in that the goal of being able to create a system such that we can find the true theories of science turns out to be illusory.

Next: Can we prove a scientific theory to be false?

July 27, 2011

The logic of science-8: The power of universal claims in science

(For previous posts in this series, see here.)

In the previous post in this series, I argued that in the case of an existence claim, the burden of proof is upon the person making the assertion. In the absence of a preponderance of evidence in its favor, the claim can be dismissed. As has often been said, "What can be asserted without proof can be dismissed without proof". The basis for this stance is the practical one that proving the non-existence of an entity (except in very limited circumstances) is impossible. Hence if we do NOT have a preponderance of evidence in favor of the existence of an entity, we conclude that it is not there.

In the case of a universal claim, however, the situation is reversed and the default position is that the claim is assumed to be true unless evidence is provided that refutes it. So in this case, the burden of proof is on the person disputing the assertion, again for eminently practical reasons.

As an example, the universal claim that all electrons have identical masses and charges can never be proven to be true with just supporting evidence, because we cannot measure the properties of every electron in the universe. But once a few of them have been shown to have the same mass and charge, the universal claim that all of them do is presumed to be true unless someone comes up with evidence that disputes it. This is why the proposition "All electrons have the same mass and charge and behave identically in interactions with other particles" is believed to be a true proposition. In this case, the absence of evidence against the universal proposition can be taken as evidence that no such contrary evidence exists at all.

In science, negative evidence can be powerful in the same way that it can be in the legal setting, as in the famous Sherlock Holmes story of the inferences that can be drawn from the dog that did not bark in the night. Since there is a belief that dogs always bark when unexpected events occur in the night, we can infer from a silent dog that nothing untoward happened. In science, we believe that natural laws are invariably followed without exception. For example, the strongly held scientific belief that there exist only two kinds of electric charge is based entirely on this argument, because no evidence has been produced that we need a third kind of electric charge. Similarly, any universal claim about the properties of an entity whose existence has already been established is taken to be true unless evidence is provided that contradicts the claim.

The laws of science are (as far as I am aware) always phrased as universal claims. There are a number of such laws, for example energy, momentum, and angular momentum conservation, baryon number conservation, and CPT conservation (where C stands for 'charge conjugation', P for 'parity', and T for 'time reversal'), all of which are believed to be true purely because no violations have been observed. Anyone who challenges the validity of those laws has the burden of proof to provide evidence of such a violation. This approach is so routine in science that no one even bothers to state it explicitly.

The contradiction of a universal claim is done by means of an existence claim. For example, it used to be considered that something called CP was also conserved in every reaction. Why did we believe this universal claim? Because no reaction violating it had ever been seen. But some scientists suspected that it might be violated under certain conditions. Postulating such a reaction constituted a new existence claim. This was not initially accepted since no one had seen a violation of CP. But then one rare reaction was detected that did violate CP and this was confirmed in subsequent experiments. It was only then that the universal claim that CP was never violated was accepted as not being true, because some researchers produced evidence in support of their existence claim of such violations. Now, without further evidence, we are justified in believing the universal claim that this same reaction will violate CP every time it happens, until someone finds evidence for the claim that on occasion it does not.

So in science this interplay of existence and universal claims, and the different ways they are established, goes on all the time and forms an integral part of the way that scientific knowledge is constructed.

'God exists' is an existence claim and the burden of proof lies with those who assert it. In the absence of such evidence, the scientific conclusion is that god does not exist. Similarly, 'god does not exist' is a universal claim, and the burden of proof lies with those who deny it; they must again provide evidence that god exists. Since they have not produced any such evidence, the scientific conclusion is that 'god does not exist' is a true statement.

It necessarily follows from the above discussion that in science the word 'true' is used provisionally and not absolutely. In the case of an existence claim, 'true' is taken to mean that it is supported by a preponderance of evidence. In the case of universal claims, 'true' is used as an abbreviation for 'not yet shown to be contradicted by evidence'. It is always within the realm of possibility that someone might come along with data that suggests that there exists a particle that seems to behave identically to the electron but has (say) a different mass. In fact, that has actually happened. The scientific community responded with further experimentation that confirmed the existence of this new particle, now called the muon, and it is now considered a true proposition that muons exist, that all muons have the same mass and charge, and that they behave just like electrons except for their different mass.

Maybe one day there will be a preponderance of evidence for the existence of god. But until such time, a perfectly valid scientific conclusion is that god does not exist.

Next: Proofs as used in science

July 21, 2011

The logic of science-7: The burden of proof in science

(For previous posts in this series, see here.)

The logic used in arriving at scientific conclusions closely tracks the legal maxim that 'the burden of proof rests on who asserts'. It should be noted that the word proof as used here does not correspond to the way it is used in mathematics, but is more along the lines used in law. As commenter Eric pointed out in response to the previous post in this series, in the legal arena there are two standards for proof. In criminal cases, there is the higher bar of proving beyond a reasonable doubt, but in civil cases the standard is one based on the preponderance of evidence. So if the preponderance of evidence is in favor of one position, it is assumed to be true even if it has not been proven beyond a reasonable doubt. Scientific propositions are judged to be true not because they have been proven to be logically and incontrovertibly true (which is impossible to do) or because they have been established by knowledgeable judges to be beyond a reasonable doubt (which is not impossible but is too high a bar to result in productive science), but because the preponderance of evidence favors them. Evidence plays a crucial role here, as it does in legal cases.

Scientific claims can be both existence claims and universal claims, and these two types of propositions are proved in different ways. In science the burden of proof in existence claims lies, as in legal claims, with those who make the claim. If they cannot meet the standard of proof, the claim is presumed to be false. With universal claims, however (once at least some positive evidence has been provided in support of existence), the burden of proof lies with those trying to show that the claim is false. In the legal context, a witness who swears to tell the truth is assumed to be always telling the truth, a universal claim. A lawyer who wishes to make the point that a witness is not truthful is the one who is assumed to be making an assertion and thus has the burden of proof to show that the witness has lied.

For an example of proof of an existence claim in science, the claim that an entity called an electron exists has to be supported by evidence that shows that an entity with the postulated properties of an electron (such as its mass and charge) has been, or at least can be, detected in experiments. The reason that I say 'can be' is that in some cases, if there is strong circumstantial evidence in favor of the existence of an entity, a provisional verdict in favor of existence may be granted, pending more direct confirmation. The most famous case of this may be the 'ether', which was postulated to exist on the basis of circumstantial evidence that it should exist, until it was shown that the theory of relativity undermined all the evidence in its favor and its existence was rejected. The neutrino is an example of something that was granted provisional existence and was later directly detected.

The reason for these rules about how to judge the truth of existence and universal claims is simply because without them science would be unworkable. In most cases of scientific interest, it is impossible to prove that an existence claim is false and without these rules we would be swamped with existence claims for non-existent entities. The film Avatar, for example, postulated the existence of a valuable mineral called Unobtainium on another planet called Pandora somewhere in the universe. How could one possibly prove that such a mineral (or even the planet) does not exist? One cannot. Thus originates the scientific rule that to establish that a proposition of existence is true, one has to provide positive evidence in support of it. In the absence of such evidence, a perfectly justifiable scientific conclusion is that the proposition is false and that it does not exist.

This rule is hardly controversial. It is used in everyday life by everyone because it would be impossible to live otherwise. To not have such a rule is to open oneself to an infinite number of mythical entities. To allow for the existence of something in the absence of a preponderance of evidence in support of its existence means believing in the existence of unicorns, leprechauns, pixies, dragons, centaurs, mermaids, fairies, demons, vampires, and werewolves.

This is why it is perfectly valid to conclude that there is no god. 'There is a god' is an existence claim and the burden of proof lies with those making the claim. Since no one has produced a preponderance of evidence in support of it, the claim is not to be taken seriously. Religious apologists who try to argue that god exists using logic alone without producing a preponderance of evidence in its favor are not being scientific and have entered the evidence-free realm of theology, in which one starts with whatever one wants to believe and then manufactures reasons for believing in it, even if that same reasoning is not applied to any other sphere of life.

Religious 'logic' is beautifully illustrated by this cartoon.

[Image: religiouslogic.jpg]

Next in the series: The power of universal claims in science

July 18, 2011

The logic of science-6: The burden of proof in law

(For previous posts in this series, see here.)

For a long time, religion claimed to reveal eternal truths. No one except true believers seriously says that anymore, because science has become the source of reliable knowledge while religion is increasingly seen as being based on evidence-free assertions. So some believers try to devalue the insights science provides by restricting what can be called truth to only those statements that reach the level of mathematical proof; because such a high bar can rarely be attained, everything else becomes a matter of opinion. They can then claim that scientific statements and religious statements merely reflect the speaker's opinion, nothing more.

But science uses criteria other than proof for making judgments about truth. In making such judgments, scientists act more like judges in legal cases than mathematicians deriving proofs. For example, in legal proceedings, the usual practice is to follow the legal principle ei incumbit probatio qui dicit, non qui negat, which I am told (not knowing Latin myself) translates as "the burden of proof rests on who asserts, not on who denies", where the assertion is of a positive nature and not a negative one. So if someone is accused of committing a crime, the burden of proof is on the accuser and not the defendant. This principle is more popularly stated in English as the presumption that a person is innocent until proven guilty beyond a reasonable doubt.

This principle is considered such a fundamental aspect of a civilized society that it is enshrined in Article 11 of the Universal Declaration of Human Rights which states that: "Everyone charged with a penal offence has the right to be presumed innocent until proved guilty according to law in a public trial at which they have had all the guarantees necessary for their defence." Of course many countries (including the US) routinely violate this principle when it suits them, while still smugly claiming to uphold the basic principles of human rights.

A point to note is that technically the only outcomes in a legal proceeding are "guilty" (i.e., proved beyond a reasonable doubt) and "not guilty" (not proved beyond a reasonable doubt). The defendant is never proven to be innocent, and has no obligation to prove innocence. Indeed, the defendant is not even obliged to provide any kind of defense at all. This can of course lead to undesirable situations where the jury suspects that a defendant is indeed guilty of the crime but feels obliged to bring in a verdict of not guilty because the case has not met the 'proved beyond a reasonable doubt' standard, which is why the phrase 'proven innocent' is not appropriate for not guilty verdicts. But this kind of undesirable outcome is the price we pay for trying to have the fairest possible system, even if it leads to public outcries of the sort seen following the not guilty verdicts in the O. J. Simpson and Casey Anthony murder trials, which were mistakenly interpreted by the public as statements that innocence had been proven, when all they meant was that the presumption of innocence had not been overturned.

One could have an alternative system in which a person is presumed guilty until proven innocent, shifting the entire burden of proof onto the defendant. There is nothing logically wrong with such a system, but in practice it would be unworkable, since there are many more people who are innocent of a crime than there are those who are guilty. Furthermore, it is often difficult, if not impossible, to prove innocence. For example, if I am asleep alone at home, it would be very difficult for me to prove that I was not robbing a nearby convenience store at that time, which is why the 'presumed innocent until proven guilty' standard seems to be a better one. So there are good reasons for having the burden of proof be on the person who asserts a positive claim and not on the person who denies it as the method of arriving at legal verdicts or 'truths'.

Unless one agrees on which of the two frameworks (presumed innocent until proven guilty beyond a reasonable doubt or presumed guilty until proven innocent) to use in making legal judgments, it may be impossible to agree on a verdict. But whatever system one chooses, the basic structure is that there is a default position that is assumed to be true unless shown otherwise, so that proof of only one position is required.

Similar considerations apply in arriving at scientific truths.

Next: The burden of proof in science

July 15, 2011

The logic of science-5: The problem of incompleteness

(For previous posts in this series, see here.)

As I discussed in the previous post in this series, our inability to show that an axiomatic system is consistent (i.e., free of contradictions as would be evidenced by the ability to prove two theorems each of which contradicted the other) is not the only problem. Godel also showed that such systems are also necessarily incomplete. In other words, for all systems of interest, there will always be some truths of that system that cannot be proven as theorems using only the axioms and rules of that system. So the tantalizing goal that one day we might be able to develop a system in which every true statement can be proven to be true also turns out to be a mirage. Neither completeness nor consistency is attainable.

Belief in god depends upon ignorance for its very existence and some religious people have seized on Godel's theorem to try and argue that 'god exists' is one of these true statements that cannot be proved. This is a misunderstanding of what Godel proved but is typical of attempts by religious people who seize upon and use important results in science and mathematics (especially those that impose some limits to knowledge, such as the uncertainty principle) to justify the unjustifiable.

The fact is that you cannot simply assert that any proposition you choose belongs in that niche that Godel discovered. The true yet unprovable statements have to be constructed within that particular system to meet certain criteria and are thus dependent on the axioms used, and a statement that is true but unprovable in one system need not be so in another one. Simply by adding a single new axiom to a system, statements that were formerly unprovable cease to be so while new true but unprovable statements emerge. Whenever religious people invoke Godel's theorem (or the uncertainty principle or information theory) in support of their beliefs, you should be on your guard and investigate if what they say is actually what the science says.

So what can we do in the face of Godel's implacable conclusion that we cannot construct an axiomatic system in which the theorems are both complete and consistent? At this point, pure mathematicians and scientists part company. The former have basically decided that they are not concerned with the truth or falsity of their theorems (and hence of the axioms) but only with whether the conclusions they arrive at (the theorems) are the necessary logical conclusions of their chosen axioms and rules of logic. Even a statement such as '2+2=4', which most people might regard as a universal truth that cannot be denied, is seen by them as merely the consequence of certain starting assumptions, and one cannot assign any absolute truth value to it. So pure mathematicians concern themselves with the rigor of proofs, not with whether the theorems resulting from them have any meaning that could be related to truth in the empirical world. Mathematical proofs have become disconnected from absolute truth claims.

For the scientist dealing with the empirical world, however, questions of truth remain paramount. It matters greatly to them whether some result or conclusion is true or not. While the methods of proof that have been developed in mathematics are used extensively in science, scientists have had to look elsewhere than proofs to try and establish the truth or falsity of propositions. And that 'elsewhere' lies with empirical data, or the 'real world' as some like to call it. This is where the notion of evidence plays an essential role in science. So while in mathematics the statement '2+2=4' is simply a theorem based on a particular set of axioms, in science its empirical truth or falsity has to be judged by how well real objects (apples, chairs, etc.) conform to it.
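The mathematician's view of '2+2=4' as a consequence of chosen axioms, rather than an absolute truth, can be made concrete in a proof assistant. As a hedged sketch (assuming Lean 4 with its core arithmetic library, not anything from the original post), the statement is provable purely by unfolding the definitions of the natural numbers, with no appeal to apples or chairs:

```lean
-- '2+2=4' as a theorem: both sides reduce to the same term under the
-- definitions of the natural numbers, so reflexivity closes the proof.
example : 2 + 2 = 4 := rfl

-- The truth is axiom-relative: in a different system (here, arithmetic
-- modulo 3 via Fin 3), the 'same' statement 2 + 2 = 4 is replaced by a
-- different claim with a different answer.
example : (2 + 2 : Fin 3) = 1 := by decide
```

The point of the second example is the one made above: change the starting assumptions and the theorem changes with them, which is why pure mathematicians assign no absolute truth value even to '2+2=4'.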

This dependence on data raises a problem similar to that of the consistency problem in mathematics that Godel highlighted. We can see if '2+2=4' is true for many sets of objects by bringing the actual objects in and counting them but we obviously cannot do so for everything in the universe. So how can we know that this result holds all the time, that it is a universal truth? Such a concern may well seem manifestly overblown for a simple and transparent assertion like '2+2=4' but many (if not most) results in science are not obviously and universally true and so they can be challenged. For example, for a long time the tobacco industry challenged the conclusion that smoking causes cancer by pointing out that there exist some smokers who do not get cancer.

So however much the data we obtain supports some proposition, how can we be sure that there does not exist some undiscovered data that will refute it? This does not mean that we cannot be definitive in science. But the justification of scientific conclusions depends upon a line of reasoning that is different from those involving direct proofs, as will be seen in subsequent posts.

Next: The logic of science and the logic of law

July 13, 2011

The logic of science-4: Truth and proof in mathematics

(For previous posts in this series, see here.)

Within mathematics, Euclidean geometry is the prototypical system that demonstrates the power of proof and serves as a model for all axiomatic systems of logic. In such systems, we start with a set of axioms (i.e., basic assumptions) and a set of logical rules, both of which seem to be self-evidently true. By applying the rules of logic to the axioms, we arrive at certain conclusions, i.e., we prove what are called theorems. Using those theorems we can prove yet more theorems, creating a hierarchy of theorems, all ultimately resting on the underlying axioms and the rules of logic. Do these theorems correspond to true statements? Yes, but only if the axioms with which we started out are true and the rules of logic that we used are valid. Those two necessary conditions have to be established independently.

So how does one do that? While we may all be able to agree on the validity of the rules of logic if they are transparent, simple, and straightforward (though there are subtle pitfalls even there), establishing the truth of the axioms is not always easy, because things that seem to be obviously true may turn out to be not so.

Furthermore, even assuming for the moment that one knows that the axioms are true and the rules of logic are valid, there are still problems. For example, how can we know that all the theorems that we can prove correspond exactly to all the true statements that exist? Is it possible that there could be true statements that can never be reached however much we may grow the tree of theorems? This is known as the problem of completeness.

There is also another problem, known as the problem of consistency. Since the process of proving theorems is open-ended, in that there is no limit to how many we can potentially prove, how can we be sure that if we keep going and prove more and more theorems we won't eventually prove a new theorem that directly contradicts one that we proved earlier, thus resulting in the absurdity that a statement and its negation have both been proven?

To address this, we rely upon a fundamental principle of logic that 'truth cannot contradict truth', and thus we believe that it can never happen that two true statements contradict each other. Thus establishing the truth of the axioms and using valid rules of logic guarantees that the system is consistent, since any theorem that is based on them must be true and thus no two theorems can contradict each other. Conversely, if we ever find that we can prove as theorems both a statement and its negation, then the entire system is inconsistent and this implies that at least one of the axioms must be false or a rule of logic is invalid.

There is usually little doubt about the validity of the rules of logic that are applicable in a mathematical system (if they are simple and transparent enough) and thus a true set of axioms implies a consistent system of theorems and vice versa. Hence we can at least solve the problem of consistency if we can establish the truth of the axioms, though the completeness problem remains open.

(Those who are familiar with these issues will recognize that we are approaching the terrain known as Godel's theorem. While I will discuss its main results, for those seeking to understand it in more depth I can strongly recommend an excellent little monograph Godel's Proof by Ernest Nagel and James R. Newman, and the clever and entertaining (but much longer) Godel, Escher, Bach: An Eternal Golden Braid by Douglas R. Hofstadter.)

So how do we establish the truth of the axioms? If the system we are dealing with consists of a finite number of objects, we may be able to prove the axioms to be true by exhaustively applying them to every object in the system and checking that they hold in every case. Even if the axioms do not relate to a set of objects, we may be able to construct a model system of objects in which the elements of the model correspond to the elements in the axioms, and thus repeat the above process. So, for example, we can take the axioms involving points and lines and so forth in Euclidean geometry (which are abstractions that have purely mathematical relationships with each other) and build a model system of real objects (such as points and lines in space that can be drawn on paper) and see if the axioms apply to the properties of such real objects in real space. Similarly, we can see if the abstract rules for adding numbers correspond to what we get if we add up real objects together.
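The exhaustive checking described above is mechanical enough to be sketched in code. Here is a minimal illustration (my own construction, not from the post) using a finite model: the integers modulo 5 under addition. Because the model is finite, each axiom can be applied to every element, which is exactly what becomes impossible once the system is infinite.

```python
# Verifying axioms by exhaustion on a finite model: integers mod 5
# under addition. Every axiom is checked against every element.

N = 5
elements = range(N)

def add(a, b):
    """Addition in the model: ordinary addition wrapped modulo N."""
    return (a + b) % N

# Axiom 1: closure -- the sum of any two elements is again in the system.
closure = all(add(a, b) in elements for a in elements for b in elements)

# Axiom 2: associativity -- (a+b)+c equals a+(b+c) for every triple.
assoc = all(add(add(a, b), c) == add(a, add(b, c))
            for a in elements for b in elements for c in elements)

# Axiom 3: identity -- some element e satisfies e+a = a for every a.
identity = any(all(add(e, a) == a for a in elements) for e in elements)

print(closure, assoc, identity)
```

For five elements the triple-nested associativity check needs only 125 cases; for the infinitely many points and lines of geometry, or all the integers, no such enumeration terminates, which is the catch discussed in the next paragraph.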

The catch is that for most systems of interest (such as points and lines in geometry and the integers in number theory), the number of elements in the system is infinite and it is not possible to exhaustively check if (for example) every point and every line that can be drawn in space satisfy the axioms. So then how can we know if the axioms are true? It is not enough that the axioms may look so simple and intuitive that they can be declared to be 'obviously' true. It has been shown that even the most seemingly simple and straightforward mathematical concept, such as that of a 'set', can produce contradictions that destroy the idea that a system is consistent, so we have to be wary of using simplicity and transparency as our sole guide in determining the truth of axioms.

One might wonder why we are so dependent on such a pedestrian method as applying each axiom to every element of the system to establish the truth of axioms and the consistency of systems. Surely we can apply more powerful methods of reasoning to show whether a set of axioms is true even if they involve an infinite number of elements? One would think so except that Godel proved that this could not be done except for very simple systems that do not cover the areas of most interest to mathematicians. Godel "proved that it is impossible to establish the internal logical consistency of a very large class of deductive systems - number theory, for example - unless one adopts principles of reasoning so complex that their internal consistency is as open to doubt as that of the systems themselves." (Nagel and Newman, p. 5, my italics.)

In other words, the price we pay for using more powerful reasoning methods to prove the consistency of some axiomatic system is that we lose transparency and simplicity in the rules of logic used in constructing those very methods and now they cannot be assumed or shown to be valid. As the old saying goes, what we gain on the swings, we lose on the roundabouts. As a result, we arrive at Godel's melancholy conclusion that Nagel and Newman state as "no absolutely impeccable guarantee can be given that many significant branches of mathematical thought are entirely free of internal contradiction." In other words, Godel proved that the goal of proving consistency cannot be achieved even in principle.

This is quite a blow to the idea of determining absolute truth because if we cannot show that a system is consistent, how can we depend upon its results?

Next in the series: The problem of incompleteness

July 11, 2011

The logic of science-3: The demise of infallibility

(For previous posts in this series, see here.)

The idea of scientific infallibility, that the knowledge generated by science should be true and unchanging, suffered a series of blows in the late 19th and early 20th centuries that saw the repeated overthrow of seemingly well-established scientific theories with new ones. Even the venerable Newtonian mechanics, long thought to be unchallengeable, was a casualty of this progress. Aristotle's idea that scientific truths were infallible, universal, and timeless, fell by the wayside, to be replaced with the idea that they were provisional truths, the best we had at the current time, and assumed to be true only until something better came along.

But despite that reduction in status, it is important to realize that for the practicing scientist, the question of 'truth' remains paramount. But what the word 'true' means depends on the context.

One form that this commitment to truth takes is that it requires scientists to be truthful when reporting the results of their work, because others depend upon it. The whole structure of scientific knowledge is created cumulatively, each person building on the work of others, and this requires trust in the work of other people because it is not always feasible to independently verify every claim of other scientists. Because scientific knowledge is so interdependent, falsehoods in one area can do serious damage to that structure.

This does not mean that scientists are more truthful as persons. But it does mean that being dishonest is not a good career strategy because you will likely be found out, especially if your work has important consequences. Scientists are not usually suspicious of the work of other scientists and do not reflexively check their work. But the interdependence of knowledge means that a falsehood or error in one area will eventually be detected because people will try to use that knowledge in new areas and will encounter inexplicable results. When the sources of the error are investigated, it will eventually be traced back to the original perpetrator. This is almost always how scientific errors and frauds are discovered.

As a minor example, in my own research experience I once uncovered an error published by others years before, because I could not get agreement with data when I used their results. Similarly, a published error of my own was discovered by others after a lapse of time, for the same reason. It is because of this kind of interdependence that science is largely, but not invariably, self-correcting. This is also why in academia, where the search for true knowledge is the prime mission, people who knowingly publish or otherwise propagate falsehoods or commit many errors suffer serious harm to their reputations and are either marginalized or drummed out of the profession. Some recent spectacular cases of deliberate fraud are those of Jan Hendrik Schon and Woo Suk Hwang. So in the search for knowledge, accurately reporting honestly obtained data and making true statements about one's work is a prime requirement.

But there is another, more philosophically elusive, search for truth that is also important, and that is determining the truth of scientific theories. It matters greatly whether the theory of special relativity is true or not or whether some chemical is a carcinogen or not. To get those things wrong can have serious consequences extending far beyond any individual scientist. But it is important to realize that in such cases, truth is always a provisional inference made on the basis of evidence, similar to the verdict arrived at in a legal case. And just as a legal judgment can be overturned on the basis of new evidence, so can such scientific truths be overturned, thus eliminating the idea of infallibility.

So how does one arrive at provisional truths in science? In establishing the truth of a scientific proposition, scientists use reasoning and logical arguments that are closely similar to, but not identical with, mathematical and legal reasoning. Being aware of the similarities and distinctions is important to avoid claiming scientific justification for claims that are not valid, as often happens when religious people try to co-opt science in support of their beliefs in god and the afterlife.

The first issue that I would like to discuss is the relationship between truth and proof, because in everyday language truth and proof are considered to be almost synonymous. The idea of 'proof' plays an important role in establishing truth because most of us associate the word proof as being conclusive, and it is always more authoritative if we are able to say that we have proven something to be true or false.

The gold standard of proof comes from mathematics and much of our intuitive notions of proof come from that field so it is worthwhile to see how proof works there, what its limitations are when applied even within mathematics, and what further limitations arise when we attempt to transfer those ideas into science.

Next: Truth and proof in mathematics

July 07, 2011

The logic of science-2: Determining what is true

(For previous posts in this series, see here.)

An important question in any area of knowledge is being able to identify what is true and what is false. The search for what is true and the ability to know when we have discovered truth is, after all, the Holy Grail of epistemology, because we believe that those things that are true are of lasting value while false statements are ephemeral, usually a waste of time and at worst harmful and dangerous.

Aristotle tried to make a clear distinction between those things that we feel we know for certain and are thus unchanging, and those things that are subject to change. The two categories were variously distinguished as knowledge versus opinion, reality versus appearance, or truth versus error. Aristotle made the crucial identification that true knowledge consisted of scientific knowledge, and his close association of scientific knowledge with truth has persisted through the ages. It also made the ability to distinguish between scientific knowledge and other forms of knowledge, now known as the demarcation problem, into an important question since this presumably also demarcates truth from error. (This brief summary of this history is taken from the essay The Demise of the Demarcation Problem by Larry Laudan which should be referred to for a fuller treatment.)

Aristotle said that scientific knowledge was based on foundations that were certain and thus was infallible. Since he identified scientific knowledge with true knowledge, it followed that scientific knowledge had to be unchanging because how could truth ever become false?

The second characteristic of scientific (and hence true) knowledge was that it should consist of not just 'know-how' but also of 'know-why'. 'Know-how' knowledge was considered to be the domain of craftsmen and engineers. Such people can (and do) successfully build boats, bridges, houses, and all manner of valuable and important things without needing an understanding of the underlying theoretical principles on which they work. The electrician I call to identify and fix problems in my house has plenty of know-how and does his work quickly and efficiently without having to understand, or even know about, Maxwell's equations of electrodynamics (the know-why), whereas any scientist would claim that the latter was essential for really understanding the nature of electricity.

It is for this reason that Ptolemaic and early Copernican astronomy were not considered scientific during their time even though they made highly accurate predictions of planetary motions. Their work was not based on an understanding of the laws that governed the motion of objects but on purely empirical correlations, and thus lacked 'know-why'. If, for example, a new planet were to have been discovered, existing knowledge would not have been of much help to them in predicting its motion. Hence astronomy was considered to be merely know-how and astronomers to be a species of craftsmen.

The arrival of Isaac Newton and his laws of motion provided the underlying principles that governed the motion of planets. These laws not only explained the existing extensive body of data on planetary motions, they could also predict the motion of any newly discovered planet, and even led to the prediction of the existence of an actual new planet (Neptune) and where it would be located. Newton's theories provided the 'know-why' that shifted astronomy into the realm of science.

It was thought that it was this know-why element that made us confident that scientific knowledge was true and based on certain foundations. After all, even if a boat builder finds that all the wood he has encountered floats in water, this does not mean that the proposition that all wood will always float is necessarily true since it is conceivable that some new wood might turn up that sinks. But the scientific principle that all objects with a lower density than water will float while those with a higher density will sink seems to be on a much firmer footing since that knowledge penetrates to the core of the phenomenon of sinking and floating and gets at its root cause. It seems to have certain foundations.
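The contrast between the boat builder's empirical rule and the density principle can be made concrete with a small sketch. The numbers below are illustrative assumptions of my own (approximate handbook densities), not data from the post:

```python
# The 'know-why' principle: an object floats when its density is below
# that of water (about 1000 kg/m^3). Densities here are rough,
# illustrative figures.

WATER_DENSITY = 1000.0  # kg/m^3

def floats(density_kg_m3):
    """True if an object of the given density floats in water."""
    return density_kg_m3 < WATER_DENSITY

# Oak (~700 kg/m^3) floats, as the boat builder expects. But lignum
# vitae, an unusually dense wood (~1250 kg/m^3), sinks -- exactly the
# 'new wood that turns up and sinks' that defeats the purely empirical
# rule 'all wood floats', while leaving the density principle intact.
print(floats(700.0), floats(1250.0))
```

The point is the one made above: the empirical generalization fails on the dense wood, but the principle that penetrates to the root cause correctly handles the exception.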

As a consequence of the appreciation that 'know-why' knowledge has greater value, science now largely deals with abstract laws, principles, causes, and logical arguments. Empirical data is still essential, of course, but mainly as a means of testing and validating those ideas. Many of these basic ideas are somewhat removed from direct empirical test and thus determining if they are true requires considerably more effort. For example, I can easily determine if the pen lying on my desk will float or sink in water by just dropping it in a bucket. But establishing the truth of a scientific proposition, say about the role that relative densities play in sinking and floating, is not that easy.

So given the primacy of scientific principles and laws in epistemology, and since the discovery of eternal truths is to be always preferred over falsehood, an elaborate structure has grown around the whole exercise of how to establish the truth and falsity of scientific propositions, often requiring the construction of expensive and specialized equipment to determine the empirical facts relating to those propositions, and extensive long-term study of esoteric subjects to relate the propositions to the data.

Next in the series: The demise of infallibility

July 06, 2011

The logic of science-1: The basic ideas

In the course of writing these blog posts, especially those dealing with religion, atheism, science, and philosophy, I have often appealed to the way that principles of logic are used in science in making my points. But these are scattered over many posts and I thought that I should collect and archive the ideas into one set of posts (despite the risk of some repetition) for easy reference and clarity. Besides, I haven't had a multi-part series of posts in a long time, so I am due.

Learning about the principles of logic in science is important because you need a common framework in order to adjudicate disagreements. A big step towards resolving arguments can be taken by either agreeing to a common framework or deciding that one cannot agree and that further discussion is pointless. Either outcome is more desirable than going around in circles endlessly, not realizing what the ultimate source of the disagreement is.

When people seek definite knowledge, they turn to science, not religion. For all its claims of revealing timeless truths, religion completely fails to deliver the goods. Nobody except religious fanatics seeks answers to empirical questions in their religious texts, whereas the power and reliability of science is such that people accept completely counter-intuitive things as true, as long as a scientific consensus can be invoked in support of them. For example, the idea that stars are flaming hot gases is by no means self-evident, and yet everyone now accepts it. The idea that entire continents move is also accepted even though we cannot sense it directly. How does science get such persuasive authority? In this series of posts, I will examine how it can be so successful.

A good example of how the logic of science works is to see how the advance of science has made it quite obvious that there is no god. But it is important to be clear about how that conclusion is reached. Science has not proved that there is no god, can never prove that there is no god, and does not need to prove that there is no god. So why is it that so many scientists are so confident that god does not exist? It is really very simple. While the logic of science is such that it can never prove the non-existence of whatever entity that one might like to postulate, what it has shown is that god is an unnecessary explanatory concept for anything. It is just like the ether or caloric or phlogiston, scientific concepts that ceased to be necessary explanatory concepts, making them effectively non-existent. God has joined the ether, caloric, and phlogiston in the trash heap of discarded knowledge.

You would think that this simple point would be easy to understand. But as the cartoon below by Jesus and Mo shows, religious people somehow don't seem to get this simple point, perhaps because it throws their own arguments for a loop. They seem to willfully misunderstand it, perhaps so that they can continue to argue against straw men. So let me repeat it for emphasis: Science has not proved, and can never prove, that there is no god. Science is not in the business of proving and disproving things. What it has shown is that god is an unnecessary explanatory concept.

[Image: Jesus&Mo-proof.jpg]

A big source of confusion about the logic of science comes from religious believers in their efforts to create some wiggle room for them to claim that believing in god is rational. What they try to argue is that even if there is no evidence for god, it is still reasonable to believe in he/she/it. Some religious people claim that since we cannot logically or empirically prove that god exists or does not exist, taking either point of view is an act of faith on an equal footing.

This is flat-out wrong. The logic of science is different from the logic of mathematics or the logic of philosophy because evidence is an essential ingredient in science. In science, logic does not remain in the abstract but is applied to data. When it comes to empirical questions such as whether any entity (including god) exists, the role of logic is to draw inferences from evidence. In the absence of evidence in favor of existence, the presumption is nonexistence.

We believe in the existence of horses because there is evidence for them. We do not believe in the existence of unicorns (or leprechauns, pixies, dragons, centaurs, mermaids, fairies, demons, vampires, werewolves) because there is no evidence for them even though we cannot logically prove they do not exist. It really is that simple. Anyone who argues that it is as reasonable to believe in god as it is to not believe in god is forced, by their own logic, to assert that it is as rational to believe in the existence of unicorns, etc. as it is to not believe in them.

The only time one encounters this type of 'logic' is from people who are defending god, the afterlife, and all the other forms of magical thinking that they cannot bear to give up and cannot defend in any other way.

So what follows in this series of posts is my attempt to clarify some of the underlying logical principles on which science functions, and to show why, applying the logic of science, the only reasonable conclusion has to be that god does not exist. I have few illusions that it will persuade religious people to give up belief. As the TV character House said, "Rational arguments don't usually work on religious people. Otherwise there would be no religious people."

My goals are more limited: to enable atheists to more effectively expose the fallacious arguments of religious believers, and to facilitate more meaningful discussions about the role of science in arriving at firm conclusions about things. Over time, as religious believers find their assertions firmly challenged by others in every sphere of life, we will see an accelerating erosion of belief.

Next in the series: Determining truth