Thomas Kuhn
Thomas Kuhn is probably best understood as a historian of
science rather than a philosopher of science. His book, The Structure of Scientific Revolutions, however, is a significant
text for an understanding of the philosophy of science. Prof. Kasser described
a pattern Kuhn discovered in the history of science—“normal science punctuated
by periods of revolution.” Kuhn, according to the lecturer, dealt logical
positivism its most severe blow.
Popper and the positivists focused heavily on the scientific
method and the rationale by which science increases understanding about the
world. However, for Kuhn, the underlying method and logic was not nearly as
important as understanding how scientific views are adopted and modified. Kuhn
believed that the way to study science itself is to evaluate the activities
that scientists spend most of their time doing. Science generally had been
taught only in terms of its successes, and Kuhn described the way this history
is presented to future scientists as something akin to brainwashing. Thus,
science textbooks are filled with heroes, hyperbole, and drama about experiments. Kuhn argued that
science is governed by a paradigm:
1. A paradigm is, first and foremost, an object of consensus.
2. Exemplary illustrations of how scientific work is done are particularly important components of a paradigm. Scientific education is governed more by examples than by rules or methods.
Paradigms create consensus concerning the way in which work
should be done in a particular field, and this is unique to science.
Puzzle-solving is the work of normal science. The paradigm “identifies
puzzles, governs expectations, assures scientists that each puzzle has a
solution, and provides standards for evaluating solutions.” It is generally
assumed to be correct, and doing science involves fitting observations about
the behavior of nature into the categories of the paradigm.
Thus, the paradigm is a test for the scientist in that
failing to solve a puzzle reflects poorly upon the scientist rather than on the
paradigm itself. Sometimes, a crisis occurs in a particular scientific
community when its members lose their “faith” in the paradigm. According to
Kuhn, these crises often occur as a result of anomalies and puzzles that scientists
have repeatedly failed to answer. Thus, this is a kind of crisis of confidence.
Kuhn argued that Popper treated this state of crisis as the normal state of science.
Not so, according to Kuhn: if that were true, science would fail to accomplish
anything. Sometimes, paradigms may be abandoned in favor of new ones. Kuhn
argued that this is a good thing for understanding and for science as long as
it occurs rather rarely.
A lot of Kuhn’s assertions can be boiled down to “his
insistence that rival paradigms cannot be judged on a common scale. They are
incommensurable. This means they cannot be compared via a neutral or
objectively correct measure.” Therefore, changing paradigms resembles something
of a “conversion experience.” Since individual psychology has a lot to do with
how individuals “convert” to a new paradigm, Hungarian philosopher Imre Lakatos
referred to Kuhn’s model of science as “one of mob psychology.”
Imre Lakatos was the first to try to reconcile the
rationalism of the “received view” and Kuhn’s “historicism.” His methodology of
scientific research programs attempts to incorporate both Popper’s openness to criticism
and Kuhn’s attachment to theories. Methodological rules retroactively judge
research programs as either progressive or degenerating. Paul Feyerabend, another
significant philosopher of science, views the normal science at the heart of Kuhn’s
model as dull, mindless activity. “In arguments alternately sober and outlandish, Feyerabend
defends scientific creativity and epistemological anarchism.”
Sociology and postmodernism have also provided some insight
into science. One researcher believed that science was often reduced to semantic
absurdities. He entered a bogus scientific “white paper” as a presentation to a
scientific symposium. These are supposed to be reviewed for originality and
quality. His phony, meaningless presentation was accepted and he went through
with the ruse undetected, only to reveal the deception later in an attempt to
be constructive.
All right, at this point, I’m ready to wrap things up.
However, there is still an immense amount of material covered in the lecture
series that I haven’t even mentioned. There are arguments about how values and
objectivity influence science. Most importantly there is a lot of discussion
about language and how language influences our construction and understanding
of reality. This has been a major movement within philosophy. Consider that
the Massachusetts Institute of Technology houses its philosophy and
linguistics programs in the same department. Unfortunately, my meager skills
prevent me from condensing this material into anything worthy of being called a
summary.
While these subjects and others are vitally important to a
full understanding of the philosophy of science, I choose instead to devote the
remainder of my final installment to subjects with which I am more familiar due
to my own academic background: probability and Bayesian Theory.
The history of probability is quite interesting: its basic
mathematical theory came about only around the year 1660. This might have been
because people did not consider probability something that could be theorized
about effectively. It also might have been the result of the Christian notion
that everything is determined by God’s will. However, it was the great Blaise
Pascal who really got probability theory going when someone asked him to solve
some problems concerning dividing up gambling stakes fairly. It quickly spread
through the fields of business and law.
Probability is critical to the conception of evidence in the
modern sense. Probability was first associated with testimony: Opinions were
considered probable if they were “grounded in reputable authorities.”
Probability gradually came to bear on the “causes” studied by natural
sciences like physics and astronomy, and it was further utilized in “low sciences”
like medicine. Such sciences relied on testimony until the Renaissance, when
diagnosis came to be distinguished from authority and testimony on one
side and from dissection and deduction used as proof on the other.
The 19th century saw the rise of probabilistic and
statistical thinking, which undermined deterministic trends. Governments kept
better records of births, deaths, and crimes and began to see patterns that were
predictive. Statistics moved from disciplines like sociology into the hard
sciences like physics. This helped give rise to quantum mechanics, which holds that
the universe is governed by statistical laws.
The mathematics that underlies probability theory is
relatively straightforward. All probabilities are given as a value between 0
and 1. A necessary truth is assigned a probability of 1. If events
A and B are mutually exclusive, the probability that one or the other will
occur is the sum of their singular probabilities. Thus, if there is a 30%
chance that you will eat pizza for dinner and a 40% chance that you will eat
spaghetti for dinner, there is a 70% chance that you will have either. It is
more complicated when events are not mutually exclusive. So the chance that you
will have pizza or spaghetti (when you might also eat both) is the chance of
pizza plus the chance of spaghetti minus the chance of both.
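To make that arithmetic concrete, here is a minimal sketch in Python (the dinner probabilities are the ones from the example above; the 10% chance of having both is a made-up figure for illustration):

def prob_either_exclusive(p_a, p_b):
    """Mutually exclusive events: P(A or B) = P(A) + P(B)."""
    return p_a + p_b

def prob_either_general(p_a, p_b, p_both):
    """General case (inclusion-exclusion): P(A or B) = P(A) + P(B) - P(A and B)."""
    return p_a + p_b - p_both

# Mutually exclusive dinners: 30% pizza, 40% spaghetti, so a 70% chance of either.
print(prob_either_exclusive(0.3, 0.4))     # ~0.7

# If you might eat both (a made-up 10% chance), subtract the overlap
# so that it is not counted twice.
print(prob_either_general(0.3, 0.4, 0.1))  # ~0.6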
As probability theory builds in complexity, there are three main ways to
interpret the mathematics. Frequency theories put probability in a real-world
context, and this is the most common interpretation of probability in
statistics. “Probabilities could be construed as actual relative
frequencies.” This, however, creates a problem: the account
is “too empiricist” in that it ties scientific research too closely to
actual experience:
A coin that has been tossed an odd
number of times cannot, on this view, have a probability of .5 of coming up
heads. In addition, a coin that has been tossed once and landed on heads has,
on this view, a probability of 1 of landing on heads. Such single-case
probabilities are a real problem for many conceptions of probability. One might
go with hypothetical limit frequencies: The probability of rolling a seven
using two standard dice is the relative frequency that would be found if the
dice were rolled forever. We saw an idea like this in the pragmatic vindication
of induction. This version might not be empiricist enough. The empiricist will
want to know how our experience in the actual world tells us about worlds in
which, for example, dice are rolled forever without wearing out.
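As a small illustration of the hypothetical-limit idea, here is a simulation sketch in Python (the roll counts and the random seed are arbitrary choices of mine; in practice we can only ever approximate the limit):

import random

def relative_frequency_of_seven(n_rolls, seed=0):
    """Estimate P(two dice sum to 7) as a relative frequency over n_rolls."""
    rng = random.Random(seed)
    sevens = sum(1 for _ in range(n_rolls)
                 if rng.randint(1, 6) + rng.randint(1, 6) == 7)
    return sevens / n_rolls

# The observed relative frequency drifts toward the theoretical value
# 6/36 ≈ 0.167 as the (hypothetical) number of rolls grows.
for n in (100, 10_000, 1_000_000):
    print(n, relative_frequency_of_seven(n))

Of course, the empiricist's worry above is precisely that no finite run of this kind can tell us what the limit would be if the dice were rolled forever.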
Logical theories treat probabilities as statements about
evidential relationships. Probability, thus, gives “partial” or
“incomplete” evidence similar to the way deduction provides conclusive
evidence. Just like with deduction, probabilistic evidence must be consistent.
If we have assigned a probability of 0.8 to p, then we must assign 0.2 to
‘not p.’ “Having coherent beliefs is not sufficient for getting the world
right, but having incoherent beliefs is sufficient for having gotten part of it
wrong. Probabilistic coherence is a matter of how well an agent’s partial
beliefs hang together.” On the other hand, if the evidence presents no
reason to prefer one outcome to another, the outcomes should be regarded as equally
probable (the principle of indifference). “The mathematics of probability does not require this principle, and
it turns out to be very troublesome. There are many possible ways of
distributing indifference, and it’s hard to see that rationality requires
favoring one of these ways.”
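Here is a minimal sketch of the coherence constraint for a pair of complementary beliefs (the small tolerance value is my own choice to handle floating-point rounding):

def coherent_pair(p, p_not, tol=1e-9):
    """Basic coherence check: each degree of belief lies in [0, 1]
    and P(p) + P(not p) equals 1."""
    in_range = 0.0 <= p <= 1.0 and 0.0 <= p_not <= 1.0
    return in_range and abs((p + p_not) - 1.0) < tol

print(coherent_pair(0.8, 0.2))  # True: these partial beliefs hang together
print(coherent_pair(0.6, 0.6))  # False: incoherent, like the rain example below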
Bayesian conceptions of probabilistic reasoning combine a
subjectivist interpretation of probability statements with the demand that
rational agents revise their degrees of belief in accordance with Bayes’s
Theorem. Bayesianism attempts to combine the positivists’ demand for rules
governing rational choice with a Kuhnian interpretation of values and
subjectivity. In the process, Bayesianism has revitalized philosophy of science
with respect to confirmation and evidence.
Bayes’s theorem begins with a subjective interpretation of
probability statements, meaning that they characterize the degrees of belief of
the person making them. Partly this
resembles gambling behavior: “the more unlikely you think a statement is, the
higher the payoff you would insist on for a bet on the truth of the statement.
Your degrees of belief need not align with any particular relative frequencies,
and they need not obey any principle of indifference.” The main requirement is
probabilistic coherence.
The Dutch book argument is designed
to show the importance of probabilistic coherence. To say that a Dutch book can
be made against you is to say that, if you put your degrees of belief into
practice, you could be turned into a money pump. If I assign a .6 probability
to the proposition that it will rain today and a .6 probability to the
proposition that it will not rain today, I do not straightforwardly contradict
myself. The problem emerges when I
realize that I should be willing to pay $6 for a bet that pays $10 if it rains,
and I should be willing to pay $6 for a bet that pays $10 if it does not rain.
At the end of the day, whether it rains or not, I will have spent $12 and
gotten back only $10. It seems like a failing of rationality if acting on my
beliefs would cause me to lose money no matter how the world goes. It can be
shown that if your degrees of belief obey the probability calculus, no Dutch
book can be made against you.
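A quick sketch of that arithmetic in Python, using the $10 payoff and the degrees of belief from the example above:

def dutch_book_net(p_rain, p_not_rain, payoff=10.0):
    """Net result of acting on both degrees of belief, whatever the weather does.
    Each bet costs (degree of belief x payoff); exactly one of the two bets pays off."""
    total_paid = p_rain * payoff + p_not_rain * payoff
    return payoff - total_paid

# Incoherent beliefs (0.6 + 0.6 > 1): a guaranteed loss of $2, rain or shine.
print(dutch_book_net(0.6, 0.6))  # -2.0

# Coherent beliefs (0.6 + 0.4 = 1): no sure loss can be extracted this way.
print(dutch_book_net(0.6, 0.4))  # 0.0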
However, some rather ridiculous beliefs can maintain
probabilistic coherence. Bayesianism therefore adds a theory of how evidence should be
handled, which helps make it a serious theory of scientific rationality. The
first element of this theory is the idea that confirmation raises the
probability of a hypothesis. “E confirms H just in case E raises the prior
probability of H. This means that the probability of H given E is higher than
the probability of H had been: P(H/E) > P(H). E disconfirms H if P(H/E) <
P(H).” All of this is done within a subjective interpretation of probability.
The second element critical to Bayesianism is that beliefs
should be updated in accordance with Bayes’s Theorem. Non-Bayesians acknowledge
the truth of Bayes’s theorem but don’t find it as useful as Bayesians.
The classic statement of the theorem is: P(H/E) = [P(E/H) × P(H)] / P(E).
The more unexpected a given bit of evidence is on its own (a low P(E)) and the
more expected it is according to the hypothesis (a high P(E/H)), the more
strongly the evidence confirms the hypothesis.
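As a small worked example, here is a sketch of an update using the theorem (the prior, likelihood, and evidence probabilities below are made-up numbers, chosen only to show a case in which E confirms H):

def posterior(p_e_given_h, p_h, p_e):
    """Bayes's Theorem: P(H/E) = P(E/H) * P(H) / P(E)."""
    return p_e_given_h * p_h / p_e

# Made-up numbers: the hypothesis makes the evidence likely (0.9),
# the prior degree of belief is modest (0.3), and the evidence
# itself is fairly surprising (0.4).
p_h_given_e = posterior(p_e_given_h=0.9, p_h=0.3, p_e=0.4)

print(round(p_h_given_e, 3))   # 0.675
print(p_h_given_e > 0.3)       # True: P(H/E) > P(H), so E confirms H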
The course began by asking what it is that makes science
special from a philosophical perspective. It is unclear how much we would like
to separate scientific theorizing from everyday theorizing. Contrary to those who
would dismiss philosophy, it is hopefully apparent that philosophical inquiry
exists on a continuum with scientific inquiry, each helping to understand,
clarify, challenge, and enlarge the other. It is quite obvious that
controversy will continue to exist about this matter.
Course notes for the lecture conclude, quite eloquently:
Philosophy, especially philosophy
of science, is hard. It compensates us only with clarity, with the ability to
see that the really deep problems resist solutions. But clarity is not such
cold comfort after all. As Bertrand Russell argued, it can be freeing. When
things go well, philosophy can help us to see things and to say things that we
wouldn’t have been able to see or to say otherwise.
I know that this has been quite a saga, quite an undertaking
for this insignificant little blog. However, I hope, at the very least, that it
plants a seed in someone’s mind: science is a useful tool, but not
the be-all and end-all of understanding. It rests on certain axioms about the
material world which should never be ignored. Keep thinking. And, as always,
happy learning!
I am so glad this one is over.