Tuesday, August 19, 2014

Political Theory: The Classic Texts and Their Continuing Relevance



I recently completed the course offered by the Modern Scholar on Political Theory. Like all offerings by Recorded Books, this course comprised seven CDs, each with two approximately 30-minute lectures, for a total of seven hours of lecture time. Despite being a bit more formulaic than its rival, The Great Courses, selections from the Modern Scholar bookshelf are always dense with information and efficient in their delivery. This course on political theory was no exception.

The instructor was Professor Joshua Kaplan, who earned his master’s in the subject at the University of Chicago and his doctorate at UC-Santa Cruz. He has taught political theory to students at the University of Notre Dame since 1987. Professor Kaplan is well spoken, with even diction, and articulates in the way only an expert in the field can.

Dr. Kaplan describes political theory as “slow food in a fast food world.” We crave simple, easy answers but often analysis is highly complex. In general, political theory does not dictate a set of actions or behaviors that lead to statistically probable outcomes. Instead, it gives us perspective, helps us think about the issues, and gives us the ability “to act with purpose and vision.”

To enlighten the student with a survey of thought across the nations, kingdoms, and states and the times in which they held sway, the professor examines a number of essential classical texts. The first text he recommends, highly recommends I might add, is an essay by George Orwell titled “Politics and the English Language.” While he does refer to this text, I regrettably did not complete this reading during the course due to constraints imposed by other obligations. I am so fond of Orwell that after I do read the piece I might post an entry examining it by itself, or in light of the completed course. Another recommendation, which I have previously read several times, is Oedipus the King by the Greek tragedian Sophocles.

An important early point that Dr. Kaplan makes is that “The first thing to understand about political theory is that it is not a collection of doctrines or assertions about politics, but rather a way of understanding the significance of political events.” This is important to keep in mind during the seven-hour lecture series. There are as many ways of looking at politics as there are politicians—or at least it often seems that way.

I found this course somewhat nostalgic as the bulk of texts and material was contained within my wide-ranging and near useless (only joking, folks!) undergraduate education in the liberal arts. After a brief recounting of how horrible was poor Oedipus’s life the professor moves on to less morbid Greeks. (Yes, according to my doctoral education that is the correct way to render the possessive of a singular proper noun ending in the letter “s.” I do believe it is a sad comment on the English language, myself, but pick up a copy of The Elements of Style by Strunk and White and please prove me wrong. I’ll enjoy it!)

Plato begins the discussion proper—and doesn’t he always? I must admit, I was very glad that I paid attention during my Introduction to Philosophy course way back in 1995. Those Greek gentlemen, particularly Plato and Aristotle, have oriented me time and again. You know, an annoying party trick just might be to wait until the discussion turns political (or philosophical, or scientific, artistic, etc.—just wait until you are in need of a refill) and then with a surreptitious interjection, and in your most refined voice proclaim, “Well you know, it all goes back to Plato’s Republic…” Coincidentally, that is the path of the course in Lectures 2 and 3; let us remember the lesson of that great text—that justice is within ourselves. Then, we move on—still in ancient Greece, however, we now consider the historian Thucydides who chronicled the Peloponnesian War. Rather than review the tangled web of that conflict, let us simply say that Thucydides considered himself a realist, espousing something along the lines of, “might makes right.”

Next, Dr. Kaplan reviews Aristotle’s Politics, which (I think rather naively) holds that the citizen is in partnership with all other citizens—kind of like a team—in which the good of the polis is superior to what’s good for any individual. Next, the course considers Machiavelli’s The Prince, which essentially asserts that it is often beneficial to the ruler and the state to behave like Richard Nixon when necessary and when power is at stake. (For a good crash course in Machiavellian politics taken to the extreme for drama’s sake, check out the thrilling Netflix show House of Cards; it is based on a British series of the same name, which suffers, however, from the absence of Kevin Spacey and Robin Wright.)

Now, we move on to the idea of social contract theory, first espoused in Leviathan by Thomas Hobbes, essentially the father of modern political science. Hobbes’s thinking was foundational in what we now know as Western political thought. He introduced such revolutionary and dangerous ideas as the “equality of all men” (apologies to the better half of the species, as we now know that all men are created equal and all are equally inferior to women) and representative government.

Another seminal social contract theorist was Jean-Jacques Rousseau, who had a pretty rough upbringing. Rousseau was a truly revolutionary thinker whose concepts about the right to private property and the nature of human beings had a bit of influence in starting the French Revolution. He was also somehow influential on Cambodian madman Pol Pot, a genocidal maniac who surely must have severely misread some of Rousseau.

From social contract theory, Dr. Kaplan considers the revolutionary ideas espoused anonymously in The Federalist Papers. This is another area in which my undergraduate education provided a bit of background as I did my senior thesis on “The Political Philosophy of James Madison.” I won’t digress much at present but there is both an elitist as well as a careful protectiveness of the nascent republic throughout Madison’s thought. It might do many Americans much good to undertake a quick review of at least some of Madison’s careful thought.

Two of the most fascinating lectures in the entire series are given on the penultimate disc. Both concern Alexis de Tocqueville and his work Democracy in America. Despite its concern with American democracy, de Tocqueville’s audience was in France. His book was very generous toward the US and thus it became popular and made him something of a celebrity. Essentially, de Tocqueville described America in usually glowing terms in an attempt to reassure the French that a fading aristocracy and societal equality would not threaten utter ruin: Just look at the United States!

Next, Professor Kaplan turned his discussion to the always-controversial Karl Marx. Yet, he prefaced his lecture with one of the most brilliant caveats I have ever heard given regarding Marx. Kaplan said, “Karl Marx is one of the most difficult political theorists for us to read and understand. One problem is that we feel compelled to take sides when we read Marx, to reject him or to convert to Marxism. We don’t run into this problem when we read Aristotle, for example, but it is hard for us to accept the idea that we can simply learn from Marx without signing up or rejecting him out of hand.” I could not agree more, and it makes me feel much better about the months-long experience of reading all 140 lbs of Das Kapital when I was studying macroeconomic theory for pre-dissertation research several years ago. (Don’t worry, I was a Keynesian from the start! But I still had to wade through the entirety of Adam Smith’s Wealth of Nations as well as John Stuart Mill’s On Liberty.) Back to Marx: while communism was a stark over-reach, Marx’s criticisms of the exploitation of laborers by the owners of the means of production were spot-on.

Dr. Kaplan closes the lecture series with a discussion of game theory, the predominant mode of political analysis during the 20th century. Classic examples of non-cooperation such as the “Problem of the Commons” and the “Prisoner’s Dilemma” gave mathematically minded analysts insight into rational choice behavior. Game theory also might have contributed to the absurd escalation of the nuclear arms race. All in all, however, it proved to be a very useful tool for providing insight into human decision making.
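The logic of the Prisoner’s Dilemma can be sketched in a few lines of Python. The payoff numbers below (years in prison) are the standard textbook ones, not figures taken from the course:

```python
# Prisoner's Dilemma payoffs: years in prison for (my_move, their_move).
# Lower is better; these numbers are illustrative.
PAYOFF = {
    ("cooperate", "cooperate"): 1,   # both stay silent
    ("cooperate", "defect"):    10,  # I stay silent, my partner confesses
    ("defect",    "cooperate"): 0,   # I confess, my partner stays silent
    ("defect",    "defect"):    5,   # both confess
}

def best_response(their_move):
    """The move that minimizes my sentence, given the other prisoner's move."""
    return min(["cooperate", "defect"], key=lambda m: PAYOFF[(m, their_move)])

# Defecting is the rational choice no matter what the other prisoner does...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"

# ...yet mutual defection (5 years each) is worse for both than
# mutual cooperation (1 year each): individually rational, jointly bad.
assert PAYOFF[("defect", "defect")] > PAYOFF[("cooperate", "cooperate")]
```

That gap between individually rational choices and a jointly worse outcome is exactly the pattern that made such models relevant to arms-race analysis.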

I would definitely recommend this course to anyone who has the opportunity to listen. It seems more like a history course than a theory course, but that is primarily due to its survey nature. It does help if you, like me, have some familiarity with most of the theorists and their works prior to the discussion. I think you would enjoy it just as much if you lacked such knowledge, but you would find yourself checking Wikipedia quite often to deepen your understanding of certain theoretical perspectives, timeframes, and personalities. A very enjoyable course from a knowledgeable professor! Happy learning!

Sunday, August 17, 2014

Philosophy of Science - Part III


Thomas Kuhn
Thomas Kuhn is probably best understood as a historian of science rather than a philosopher of science. His book, The Structure of Scientific Revolutions, however, is a significant text for an understanding of the philosophy of science. Prof. Kasser described a pattern Kuhn discovered in the history of science—“normal science punctuated by periods of revolution.” Kuhn, according to the lecturer, dealt logical positivism its most severe blow.

Popper and the positivists focused heavily on the scientific method and the rationale by which science increases understanding of the world. For Kuhn, however, the underlying method and logic were not nearly as important as understanding how scientific views are adopted and modified. Kuhn believed that the way to study science is to evaluate the activities that scientists spend most of their time doing. Science generally had been taught in terms of its successes only; Kuhn described this history being taught to future scientists as akin to brainwashing. Thus, science textbooks are filled with heroes, hyperbole, and drama about experiments. Kuhn argued that science is governed by a paradigm:

1. A paradigm is, first and foremost, an object of consensus.
2. Exemplary illustrations of how scientific work is done are particularly important components of a paradigm. Scientific education is governed more by examples than by rules or methods.


Paradigms create consensus concerning the way work should be done in a particular field, and this is unique to science. Puzzle-solving is the work of normal science. The paradigm “identifies puzzles, governs expectations, assures scientists that each puzzle has a solution, and provides standards for evaluating solutions.” It is generally assumed to be correct, and doing science involves fitting observations about the behavior of nature into the categories of the paradigm.

Thus, the paradigm is a test for the scientist in that failing to solve a puzzle reflects poorly upon the scientist rather than on the paradigm itself. Sometimes a crisis occurs in a particular scientific community when its members lose their “faith” in the paradigm. According to Kuhn, these crises often occur as a result of anomalies and puzzles that scientists have repeatedly failed to answer; thus it is a kind of crisis of confidence. Kuhn argued that Popper treated this as the normal state of science. Not so, according to Kuhn: if it were, science would fail to accomplish anything. Sometimes paradigms may be abandoned in favor of new ones. Kuhn argued that this is a good thing for understanding and for science, as long as it occurs rather rarely.

A lot of Kuhn’s assertions can be boiled down to “his insistence that rival paradigms cannot be judged on a common scale. They are incommensurable. This means they cannot be compared via a neutral or objectively correct measure.” Therefore, changing paradigms resembles something of a “conversion experience.” Since individual psychology has a lot to do with how individuals “convert” to a new paradigm, Hungarian philosopher Imre Lakatos referred to Kuhn’s model of science as “one of mob psychology.”

Imre Lakatos was the first to try to reconcile the rationalism of the “received view” with Kuhn’s “historicism.” His methodology of scientific research programs attempts to incorporate both Popper’s openness to criticism and Kuhn’s attachment to theories; its methodological rules retroactively judge research programs as either progressive or degenerative. Paul Feyerabend, another significant philosopher of science, viewed Kuhn’s model of normal science as dull, mindless scientific activity. “In arguments alternately sober and outlandish, Feyerabend defends scientific creativity and epistemological anarchism.”

Sociology and postmodernism have also provided some insight into science. One researcher believed that science was often reduced to semantic absurdities. He submitted a bogus scientific “white paper” as a presentation to a scientific symposium. Such submissions are supposed to be reviewed for originality and quality. His phony, meaningless presentation was accepted, and he carried the ruse through undetected, only revealing the deception later in an attempt to be constructive.

All right, at this point, I’m ready to wrap things up. However, there is still an immense amount of material covered in the lecture series that I haven’t even mentioned. There are arguments about how values and objectivity influence science. Most importantly, there is a lot of discussion about language and how language influences our construction and understanding of reality. This has been a major movement within philosophy; consider that the Massachusetts Institute of Technology houses its philosophy and linguistics programs in the same department. Unfortunately, my meager skills at condensing such material prevent me from reducing it to something worthy of being called a summary.

While these subjects and others are vitally important to a full understanding of the philosophy of science, I choose instead to devote the remainder of my final installment to subjects with which I am more familiar due to my own academic background: probability and Bayesian Theory.

The history of probability is quite interesting: its basic mathematical theory came about only around the year 1660. This might have been because people did not consider probability something that could be theorized about effectively. It also might have been the result of the Christian notion that everything is determined by God’s will. However, it was the great Blaise Pascal who really got probability theory going when someone asked him to solve some problems concerning dividing up gambling stakes fairly. It quickly spread through the fields of business and law.

Probability is critical to the conception of evidence in the modern sense. Probability was first associated with testimony: opinions were considered probable if they were “grounded in reputable authorities.” Probability gradually changed enough to come to bear on the “causes” studied by natural sciences like physics and astronomy and was further utilized in “low sciences” like medicine. Such sciences relied on testimony until the Renaissance, when diagnosis was established as something distinct both from authority and testimony on one side and from dissection and deduction used as proof on the other.

The 19th century saw the rise of probabilistic and statistical thinking, which undermined deterministic trends. Governments kept better records of births, deaths, and crimes and began to see predictive patterns. Statistical thinking moved from disciplines like sociology into hard sciences like physics, eventually contributing to quantum mechanics, which holds that the universe is governed by statistical laws.

The mathematics that underlies probability theory is relatively straightforward. All probabilities are given as values between 0 and 1. A necessary truth is assigned the probability of 1. If events A and B are mutually exclusive, the probability that one or the other will occur is the sum of their individual probabilities. Thus, if there is a 30% chance that you will eat pizza for dinner and a 40% chance that you will eat spaghetti for dinner, there is a 70% chance that you will have one or the other. It is more complicated when events are not mutually exclusive: the chance that you will have pizza or spaghetti (when you might also eat both) is the chance of pizza plus the chance of spaghetti minus the chance of both.
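Both addition rules can be checked directly in Python; the 0.10 chance of eating both pizza and spaghetti is a number I have invented for the non-exclusive case:

```python
p_pizza = 0.30      # chance of pizza for dinner (from the example above)
p_spaghetti = 0.40  # chance of spaghetti

# Mutually exclusive events: P(A or B) = P(A) + P(B).
p_either_exclusive = p_pizza + p_spaghetti
assert abs(p_either_exclusive - 0.70) < 1e-9

# Non-exclusive events: P(A or B) = P(A) + P(B) - P(A and B).
p_both = 0.10  # hypothetical chance of eating both
p_either = p_pizza + p_spaghetti - p_both
assert abs(p_either - 0.60) < 1e-9
```

Subtracting the overlap keeps the "both" outcome from being counted twice, which is all the inclusion-exclusion rule amounts to.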

As probability theory builds in complexity, there are three ways to interpret the mathematics. Frequency theories put probability in a real-world context, and this is the most common use of probability within a statistical setting. “Probabilities could be construed as actual relative frequencies.” This, however, creates the problem that the account is “too empiricist,” in that it connects scientific research too closely to actual experience:

A coin that has been tossed an odd number of times cannot, on this view, have a probability of .5 of coming up heads. In addition, a coin that has been tossed once and landed on heads has, on this view, a probability of 1 of landing on heads. Such single-case probabilities are a real problem for many conceptions of probability. One might go with hypothetical limit frequencies: The probability of rolling a seven using two standard dice is the relative frequency that would be found if the dice were rolled forever. We saw an idea like this in the pragmatic vindication of induction. This version might not be empiricist enough. The empiricist will want to know how our experience in the actual world tells us about worlds in which, for example, dice are rolled forever without wearing out.
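A quick simulation illustrates both halves of the quoted worry: a single toss forces an actual relative frequency of 0 or 1, an odd number of tosses can never yield exactly 0.5, and only in the long run does the frequency settle near 0.5. This sketch assumes a fair coin and uses Python’s random module:

```python
import random

random.seed(42)  # fixed seed so the run is repeatable

def relative_frequency(n_tosses):
    """Fraction of heads observed in n fair-coin tosses."""
    heads = sum(random.random() < 0.5 for _ in range(n_tosses))
    return heads / n_tosses

# A single toss forces a "probability" of 0.0 or 1.0 on the actual-frequency view.
assert relative_frequency(1) in (0.0, 1.0)

# An odd number of tosses can never give exactly 0.5.
assert relative_frequency(101) != 0.5

# But over a long run the frequency drifts toward 0.5.
long_run = relative_frequency(100_000)
assert 0.49 < long_run < 0.51
```

The hypothetical-limit version of the view imagines extending the last computation forever, which is precisely where the empiricist objection quoted above gets its grip.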

Logical theories treat probability statements as claims about evidential relationships between statements. Probability thus provides “partial” or “incomplete” evidence, much as deduction provides conclusive evidence. Just as with deduction, probabilistic evidence must be consistent: if we have assigned a probability of 0.8 to p, then we must assign 0.2 to ‘not p.’ “Having coherent beliefs is not sufficient for getting the world right, but having incoherent beliefs is sufficient for having gotten part of it wrong. Probabilistic coherence is a matter of how well an agent’s partial beliefs hang together.” On the other hand, if the evidence presents no reason to prefer one outcome to another, the outcomes should be regarded as equally probable. “The mathematics of probability does not require this principle, and it turns out to be very troublesome. There are many possible ways of distributing indifference, and it’s hard to see that rationality requires favoring one of these ways.”

Bayesian conceptions of probabilistic reasoning combine a subjectivist interpretation of probability statements with the demand that rational agents revise their degrees of belief in accordance with Bayes’s Theorem. Bayesianism attempts to combine the positivists’ demand for rules governing rational choice with a Kuhnian interpretation of values and subjectivity. In the process, Bayesianism has revitalized philosophy of science with respect to confirmation and evidence.

Bayesianism begins with a subjective interpretation of probability statements: they characterize the degrees of belief of a particular person. Degrees of belief partly resemble gambling behavior: “the more unlikely you think a statement is, the higher the payoff you would insist on for a bet on the truth of the statement. Your degrees of belief need not align with any particular relative frequencies, and they need not obey any principle of indifference.” What matters above all is probabilistic coherence.

The Dutch book argument is designed to show the importance of probabilistic coherence. To say that a Dutch book can be made against you is to say that, if you put your degrees of belief into practice, you could be turned into a money pump. If I assign a .6 probability to the proposition that it will rain today and a .6 probability to the proposition that it will not rain today, I do not straightforwardly contradict myself.  The problem emerges when I realize that I should be willing to pay $6 for a bet that pays $10 if it rains, and I should be willing to pay $6 for a bet that pays $10 if it does not rain. At the end of the day, whether it rains or not, I will have spent $12 and gotten back only $10. It seems like a failing of rationality if acting on my beliefs would cause me to lose money no matter how the world goes. It can be shown that if your degrees of belief obey the probability calculus, no Dutch book can be made against you.
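The arithmetic of the rain example generalizes to a small sketch. The pricing rule follows the passage above: I pay my degree of belief times $10 for a bet that returns $10 if its proposition turns out true:

```python
def dutch_book_result(p_rain, p_no_rain, payout=10.0):
    """Net dollars after buying both bets, given that exactly one pays off.

    I pay (degree of belief) * payout for each bet; whatever the weather,
    exactly one of the two bets returns `payout`, so the net result is the
    same whether it rains or not.
    """
    total_cost = (p_rain + p_no_rain) * payout
    return payout - total_cost

# Incoherent beliefs: P(rain) = 0.6 and P(no rain) = 0.6 sum to 1.2.
# I spend $12 and get back $10, a guaranteed $2 loss either way.
assert abs(dutch_book_result(0.6, 0.6) - (-2.0)) < 1e-9

# Coherent beliefs summing to 1 cannot be pumped this way.
assert abs(dutch_book_result(0.6, 0.4)) < 1e-9
```

The guaranteed loss depends only on the degrees of belief summing to more than 1, not on anything about the weather, which is the point of the argument.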

However, some rather ridiculous sets of beliefs can still be probabilistically coherent. Bayesianism therefore adds a theory of how evidence should be handled, which helps make it a serious theory of scientific rationality. The first element of this theory is the idea that confirmation raises the probability of a hypothesis. “E confirms H just in case E raises the prior probability of H. This means that the probability of H given E is higher than the probability of H had been: P(H/E) > P(H). E disconfirms H if P(H/E) < P(H).” All of this is understood within a subjective interpretation of probability.

The second element critical to Bayesianism is that beliefs should be updated in accordance with Bayes’s Theorem. Non-Bayesians acknowledge the truth of Bayes’s theorem but don’t find it as useful as Bayesians do.
The classic statement of the theorem is: P(H/E) = P(E/H) × P(H) / P(E).
The less expected a given bit of evidence is on its own, and the more expected it is according to the hypothesis, the more strongly that evidence confirms the hypothesis.
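Both the theorem and the confirmation condition are easy to verify numerically. The prior, likelihood, and evidence probability below are hypothetical values chosen for illustration:

```python
def posterior(p_e_given_h, p_h, p_e):
    """Bayes's theorem: P(H/E) = P(E/H) * P(H) / P(E)."""
    return p_e_given_h * p_h / p_e

p_h = 0.2          # prior degree of belief in the hypothesis
p_e_given_h = 0.9  # evidence is very likely if the hypothesis is true
p_e = 0.3          # but fairly unlikely overall

p_h_given_e = posterior(p_e_given_h, p_h, p_e)

# Updating raises the belief from 0.2 to 0.6...
assert abs(p_h_given_e - 0.6) < 1e-9
# ...so E confirms H in the Bayesian sense: P(H/E) > P(H).
assert p_h_given_e > p_h
```

Shrinking P(E) or growing P(E/H) pushes the posterior higher, which is the "unexpected in general, expected under the hypothesis" idea stated above.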

The course began by asking what it is that makes science special from a philosophical perspective. It is unclear how sharply we should separate scientific theorizing from everyday theorizing. Despite those who would dismiss philosophy, it is hopefully apparent that philosophical inquiry exists on a continuum with scientific inquiry: each helps in understanding, clarifying, challenging, and enlarging the other. It is quite obvious that controversy will continue to exist about this matter.

Course notes for the lecture conclude, quite eloquently:

Philosophy, especially philosophy of science, is hard. It compensates us only with clarity, with the ability to see that the really deep problems resist solutions. But clarity is not such cold comfort after all. As Bertrand Russell argued, it can be freeing. When things go well, philosophy can help us to see things and to say things that we wouldn’t have been able to see or to say otherwise. 

I know that this has been quite a saga, quite an undertaking for this insignificant little blog. However, I hope, at the very least, that it plants the seed in someone’s mind that science is a useful tool but not the be-all and end-all of understanding. It rests on certain axioms about the material world which should never be ignored. Keep thinking. And, as always, happy learning!

I am so glad this one is over.

Monday, August 11, 2014

Philosophy of Science - Part II




Einstein
Albert Einstein solved a central problem of science with his theory of special relativity, and it was this achievement that spurred philosophers of science to deepen their understanding of the field. A principle of relativity is the idea that unaccelerated motion can be described only with respect to a specific frame of reference.

Einstein was associated with two specific principles that are germane to the present thread of philosophy of science: 1) the relativity of unaccelerated motion and 2) the non-relativity of the speed of light. Einstein re-examined the assumptions scientists made about space and time. Depending on their frames of reference, two observers traveling relative to one another will have different opinions about when an event happened, or whether one event happened before another. Einstein asserted that the question “when did this event happen,” asked apart from any frame of reference, is scientifically meaningless.

In a similar way, different observers in motion (as described above) will measure an object’s length differently. “All can be right provided we reject the notion that the object’s length is independent of the reference frame from which it is measured.” A lot of what Einstein accomplished hinged on his linking such concepts very tightly to experience and measurement, while he denied that they had “legitimate use when disconnected from experience and measurement.”

P. W. Bridgman expanded upon the philosophical ramifications of Einstein’s innovations with operationalism, which entails defining each scientific concept solely in terms of the operations required to detect and/or measure specific instances of the phenomenon.

Classical Empiricism
To understand better the connections between experience and meaning that operationalism attempts to convey, we must examine the philosophical history of “reflection about experience, language, and belief.”  Locke, Berkeley, and Hume created a common tradition of empiricism—“the idea that experience sets the boundaries of, and provides the justification for, our claims to knowledge.” The classical empiricist began to reject the very notion of a useful metaphysics and to build knowledge upon the bedrock of that which can be observed by the senses and deduced by the intellect from these observations.

Broadly speaking, empiricism refers to the notion that all that we can know about the universe is only that which can be communicated by sensory input—observation of the external world. Locke wanted to determine the boundaries of our knowledge by investigating its sources. He asserted, “Nothing is in the mind that was not first in the senses.” Thus, for Locke, whatever the mind entertains when it thinks is an idea. These ideas are mind-dependent. We perceive sights, sounds, textures, odors, etc., but we do not perceive actual objects. Also according to Locke, simple ideas come from experience, and then our innate mental powers, such as pattern-recognition and abstraction, elaborate upon them. “Abstraction lets us focus on a part of a presented idea… and these parts can be recombined to form ideas of things never presented in experience, such as unicorns.”

Despite his rejection of the label, many of Berkeley’s colleagues considered him a skeptic because he denied the existence of matter. Berkeley argued that we have no idea of material substance. First, we have no direct experience of matter; we struggle even to describe what matter alone might look or feel like. Second, Berkeley argued that it is impossible for us to construct a legitimate idea of matter through our power of abstraction: without knowledge of a thing’s properties, how might we imagine it at all? Finally, Locke had admitted some confusion about how ideas are produced in us; according to Berkeley, “God simply produces ideas in us directly.” God does not need the intermediary of matter to produce ideas in us. According to Berkeley, our patterns of experience are the world itself, and science is reducible to the development of rules which predict our experiences.

Finally, David Hume demonstrated the devastating effects of the pure skepticism espoused within, if not by, Berkeley’s philosophy. Hume sought to bring experimentation to philosophy. This proved difficult, however, because of the lack of “testable” hypotheses. According to Hume, we have no inkling of causation, of how one event might be observed to “make” another event happen. Experience simply shows us the correlation: the first thing happens, then the next. The connection between them is not something we experience. There is a lack of continuity as well: the only thing experience provides us with is that which is “currently perceived by us.”

This position is, as a starting point, deeply skeptical. Thus, Hume asserted that “all meaningful statements must concern either relations of ideas, as in logic and mathematics, or matters of fact, as in the empirical sciences. This influential dichotomy is known as Hume’s fork.” Hume thus felt that he was engaged in a kind of psychology, attempting to determine principles about the human mind much as Newton had done for the cosmos.

Logical Positivism and the Closely Related Logical Empiricism
Logical positivism and logical empiricism together form neopositivism, a movement within Western philosophy that turned to verificationism in an effort to make philosophy a bit more like science. Under verificationism, only statements that are verifiable either logically or empirically are considered cognitively meaningful. The Berlin Circle and the Vienna Circle propounded logical positivism starting in the late 1920s, attempting to reduce confusion of language and method between philosophy and science.

The logical positivists were responsible for making philosophy of science a much more serious undertaking within the overall study of philosophy. Logical positivists were influenced by Einstein’s work and other physics developments but largely disregarded 20th century German philosophy which was otherwise somewhat influential in other subfields.

Logical positivists were less worried than Popper had been about pseudoscience and focused more on metaphysics and philosophy’s potential for impeding progress in physics. Like Auguste Comte, the logical positivists were largely disappointed with efforts at a metaphysics—this accounts for the “positivist” part of the name. The “logical” part concerns their belief that mathematical logic offered tools which could give rise to a stronger version of empiricism and a weaker version of metaphysics. “This new version of empiricism grasped the other option presented by Hume’s fork. For the positivists, the philosopher deals in relations of ideas, not matters of fact.” It is the job of philosophy to clarify linguistic problems and to exhibit the relationships between scientific statements and experience.

Basically, logical positivism holds that every cognitively meaningful statement falls into one of two categories: 1) analytic statements or 2) claims about experience. Cognitively meaningful statements are literally true or false; thus commands and questions are not statements subject to this dichotomy. Analytic statements have to do with Hume’s relations of ideas. They are true or false based on their meaning alone, not on facts about the world. Thus, analytic statements are knowable a priori; empirical evidence is not required to know the truth of logical and mathematical propositions. Such analytic truths also hold necessarily. For example, “It is not merely true that no bachelor is married; it must be true.” It is true in all possible worlds. Metaphysics, at this point, is disregarded as a kind of “poetry” not beholden to truth or falsity.

As previously mentioned, logical positivists defined meaningfulness in terms of verification. “To be cognitively meaningful is to be either true or false; thus, a statement is meaningful if there is the right sort of method for testing truth or falsehood.” Analytic statements, on the other hand, are demonstrated true or false by mathematical or logical proof.

While operationalism and classical empiricism focus on the relation of a term to experience, the verificationism of the logical positivists makes empirical meaningfulness a matter of a whole statement’s ability to face experience, an approach made possible partly by advances in logic. As you can see, this is where philosophy really begins to take its linguistic turn. A term can thus get its meaning from its role in making meaningful statements rather than being independently meaningful. Verification of a synthetic statement entails finding observations that speak to its truth. However, requiring observations that conclusively prove a synthetic statement is too stringent; a weaker version of verification must suffice.

For logical positivists, theories need not get the world right, only experience. Acupuncture is one example: one can respect the reliable (at least within certain domains) predictions that acupuncture theory makes and the cures that result, while declining to take seriously the theory’s assertions concerning energy channels and other pseudoscience. Thus, for the logical positivists, the connections between theoretical terms are critical, but for deriving observations, not for objectively describing reality. Often, the statements that compose a scientific theory do not have to be true to be “good.” They do not attempt to describe the world but, instead, permit us to infer “this from that.” “They can still play a needed role in a theory’s ability to take observational inputs and generate true observational outputs. This is the instrumental conception of scientific theories.”
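A minimal sketch of this instrumental conception (my own illustration, with made-up numbers, not the course’s): the toy “theory” below reliably turns observational inputs into an observational prediction by way of an internal theoretical term that need not describe anything real.

```python
# A toy "instrumental" theory. It routes observational inputs through an
# internal theoretical quantity ("qi_level") to an observational output.
# On the instrumentalist reading, the theory is "good" if the input->output
# predictions hold up, regardless of whether qi_level names anything real.
# (All numbers here are invented purely for illustration.)

def predicts_relief(needle_sites: int, session_minutes: int) -> bool:
    # Theoretical intermediate: no claim that this quantity exists in nature.
    qi_level = 0.3 * needle_sites + 0.05 * session_minutes
    # Observational output: does the theory predict symptom relief?
    return qi_level > 2.0

print(predicts_relief(needle_sites=8, session_minutes=30))  # True
print(predicts_relief(needle_sites=2, session_minutes=10))  # False
```

The internal variable does real inferential work—it lets us infer “this from that”—yet nothing in the prediction’s success requires that it objectively describe the world.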


Wednesday, August 6, 2014

Philosophy of Science - Part I



Philosophy of Science

WARNING: The following review of materials for a course in the philosophy of science is necessarily long and exhaustive, replete with the confusing jargon of both philosophy and science, as well as the unique jargon that emerges when the concepts of each field are unleashed upon the other. It will require patience and time to wade through these waters, but for those interested in this essential inquiry, the reward will be well worth the effort.—S.I.P. Blogger

“Getting the Course” [Just skip the first 3 paragraphs if you only want to see material review.]

Around this time last year, my brother was looking for some intellectually challenging audio material to listen to during his daily commute, which is an hour and a half one way. We’ve both been frequent listeners to audiobooks of non-fiction and literature; however, our true commute-driven audio passion has been for university lectures, particularly those produced by The Teaching Company under the series title The Great Courses. I have reviewed other courses in this series frequently on this blog. Ironically, I discovered The Teaching Company and their excellent lectures in my last semester before graduating with my Bachelor of Arts degree.

Of course, I quickly introduced the lectures to my brother and we have been loyal customers ever since. At the time, The Teaching Company was the only source of recorded lectures readily available. These days, there are numerous sources of such lectures: The Modern Scholar, iTunes U, podcasts, and individual university offerings such as MIT’s Open Courseware—just to name a few. I mention all of this only to explain to you that neither of us is an amateur when it comes to consumption of recorded academic lectures and coursework. So, when my brother was looking for something challenging last year, we did some research and comparison and finally settled on The Great Courses’ Philosophy of Science. The outline indicated that the course addressed topics we had both encountered but not understood or studied in previous settings: logical positivism, the problem of demarcation, Karl Popper, axioms, a theory of everything, etc.

After getting the CDs, I waited about a week and then asked him how the course was going. Now, not to bloat his ego, but we’re talking about someone who has an IQ in the 160 range and a nearly eidetic memory. “It’s rough, man,” he responded. “I hate to say it, but I’m just going to have to quit it. It’s just too abstract and complex—I find it hard to follow.” So, he did quit it. The first time either of us had shied away from a course. I was thoroughly intimidated. The course just sat around for the next year. Then, finally, about a month ago, I decided it was time to slay this beast. “Good luck, man,” said my brother, when I told him of my intention.

“Experiencing the Course”

On 8/3/2014 I completed The Great Courses series on Philosophy of Science, which was presented by Professor Jeffrey L. Kasser, PhD. He is an Assistant Professor of Philosophy at Colorado State University. His undergraduate degree is from Rice University and his doctoral study was completed at the University of Michigan. My completion where my brother failed was, however, a far cry from a celebratory occasion. Understanding the philosophy of science requires one to move in a circular (perhaps elliptical?) orbit around the same pertinent questions that have plagued the study from its earliest days.

At first, the 18 hours might sound slim for a full course on the philosophy of science. I can assure you it is not.

With the production style of The Teaching Company, ideas come quickly and elaboration even quicker. These lectures are planned and efficient. If you went to a university where the standard sixteen-week course was 3 classes of 50-minute lectures per week, with alternating Fridays, then you remember very easily that this does not translate into the following relationship: [(50 min x 3 days) + (50 min x 2 days)] / 2 = a mean average of 125 minutes per week x 16 wks = 2,000 min / 60 min/hr = 33.3 hours of instruction. That is quite a fantasy. I attended three state schools and five private colleges during my extended academic career and always found the same thing: At least the first 10 minutes were concerned with review of the previous lecture material. Another 15 minutes were typically used for answering student questions throughout the lecture or at the end of class, as well as for various administrative matters (announcements, anecdotes, etc.). Finally, there were almost always two entire class sessions devoted to review for the midterm and final exams and two for the actual conduct of those exams.

All of this means that the typical classroom student loses (let’s be conservative) around 20 minutes of lecture/instruction time for each class—bringing the actual time down to 30 minutes per class session. Then we must account for the total of 4 completely lost classes concerned with examinations (two 50-minute sessions each for the midterm and the final: 100 + 100 = 200 min). Thus, we arrive at the following calculation of actual time devoted to lecture/instruction in a typical 3-semester-hour credit course: [(30 min x 3 days) + (30 min x 2 days)] / 2 wks = a mean average of 75 minutes per week x 16 wks = 1,200 min – 200 min (the 4 classes devoted to exams) = 1,000 min / 60 min/hr = 16.7, or approximately 17 hours of lecture/instruction time in the typical course. This explains why we all had to study so hard and make outside office appointments with professors to review critical course concepts.
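For anyone who wants to check the back-of-envelope figures above, the same arithmetic can be expressed in a few lines (same assumptions: a 16-week semester with alternating 3-class and 2-class weeks):

```python
# Reproduce the post's own arithmetic for nominal vs. effective lecture hours.
def semester_hours(minutes_per_class, exam_minutes_lost=0, weeks=16):
    # Alternating weeks of 3 and 2 classes average out per week.
    avg_per_week = (minutes_per_class * 3 + minutes_per_class * 2) / 2
    total_minutes = avg_per_week * weeks - exam_minutes_lost
    return total_minutes / 60

print(round(semester_hours(50), 1))       # 33.3 nominal hours
print(round(semester_hours(30, 200), 1))  # 16.7 effective hours
```

The gap between 33.3 and roughly 17 hours is the whole point: The Teaching Company’s 18 hours of uninterrupted lecture is not slim at all.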

The course consists of 36 different thirty-minute lectures, each of which builds upon the last and is available as a set of DVDs, audio-only CD packages (our medium of choice), or audio download. Each lecture addresses a different aspect of the philosophy of science. Lecture titles include the following:
1) Science and Philosophy, 2) Popper and the Problem of Demarcation, 3) Further Thoughts on Demarcation, 4) Einstein, Measurement, and Meaning, 5) Classical Empiricism, 6) Logical Positivism and Verifiability, 7) Logical Positivism, Science, and Meaning, 8) Holism, 9) Discovery and Justification, 10) Induction as Illegitimate, 11) Some Solutions and a New Riddle, 12) Instances and Consequences, 13) Kuhn and the Challenge of History, 14) Revolutions and Rationality, 15) Assessment of Kuhn, 16) For and Against Method, 17) Sociology, Postmodernism, and Science Wars, 18) (How) Does Science Explain? 19) Putting the Cause Back in “Because,” 20) Probability, Pragmatics, and Unification, 21) Laws and Regularities, 22) Laws and Necessity, 23) Reduction and Progress, 24) Reduction and Physicalism, 25) New Views of Meaning and Reference, 26) Scientific Realism, 27) Success, Experience, and Explanation, 28) Realism and Naturalism, 29) Values and Objectivity, 30) Probability, 31) Bayesianism, 32) Problems with Bayesianism, 33) Entropy and Explanation, 34) Species and Reality, 35) The Elimination of Persons? and 36) Philosophy and Science.

I included this long list for a quite obvious reason: Simply looking at the titles of lectures in the course provides quite a bit of information about what you can expect as you progress through the 18 hours of philosophical and scientific material.

The first thing that stands out is that the titles of the first and last lectures in the series are the same words, simply transposed. What you can deduce from this is that philosophy not only influences scientific practice but that the relationship is “give and take,” with science having a strong influence on developments in philosophy as well. Another glance at the titles will reveal some of the major names involved in the development of a philosophy of science: Popper, Einstein, and Kuhn. Finally, by looking at the last several lecture titles, we can reasonably predict that probability, and particularly Bayesian probability, figures prominently in the later trends of thinking in the philosophy of science.

We often place an intrinsic faith in science, seen in our acceptance and integration of technology into our daily lives. Yet we also know that science is sometimes done poorly and that its theories are sometimes accepted erroneously. So it’s good to think about science—how it relates to our society and culture, as well as to us individually. A course in the philosophy of science attempts to use the tools of philosophy to reflect on these things.

Furthermore, philosophy can be used to evaluate the assumptions upon which science is based. Should we accept as axiomatic the assertion that there is a material world and, furthermore, that we can know anything about it (that is, predict its behavior based on historical observations)? Many scientists would obviously argue that we should and we can, based primarily on the value judgment that doing so has proven useful—that it’s pragmatic—based on the outcomes of scientific advancement.

In their recent book The Grand Design, physicists Stephen Hawking and Leonard Mlodinow (2011) argued that philosophy is dead. Indeed, Dr. Kasser allows that most of the philosophy of science was done prior to the 20th century and that what remains today is largely semantics. However, a truly curious and skeptical mind should be cautious about accepting statements like Hawking’s and Mlodinow’s.

Yes, our materialist perspective has proven very useful in examining our little corner of the universe, and yes, it is the best explanation we have now. Yet failing to keep an open mind about the possibility that we are wrong (and there is always, always the possibility that we are wrong) can lead to the kind of dogmatism in science that has been so vehemently criticized in religion. The first time this really hit home for me was (no, not when I was watching The Matrix) several years ago when I happened upon Dr. Nick Bostrom’s (2003) paper, “Are You Living in a Computer Simulation?” Bostrom’s thesis is presented so clearly in the paper’s abstract that there could be no better summary:

This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation. It follows that the belief that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation.

Note that the definition of posthuman is somewhat contentious. Bostrom generally refers to posthumanity as a stage at which a civilization has developed the technological capability to exceed current material and energy constraints. While this topic is a slight digression from the main focus at hand, the point I wish to make is that philosophy still has plenty to say about science and technology and their applications.

“Philosophers & Philosophy in Science”

Chances are, if you’ve ever had a course or series of courses in research methods, you will have had at least some exposure to our first prominent philosopher of science: Karl Popper. (Please resist the urge to call him John Popper, the singer/songwriter and supernatural harmonica player for Blues Traveler.) Karl Popper taught at the London School of Economics, and he was originally from the lively intellectual city of Vienna. Popper’s unique insight was the concept of falsifiability. Inductive reasoning presents a difficulty in that no matter how many confirmatory observations a scientist makes, he or she can never prove something to be universally true, such as “all ravens are black.” However, according to Popper, our hypothesis can be falsified: one observation of a white raven falsifies the hypothesis that all ravens are black.
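Popper’s asymmetry between confirmation and refutation is easy to sketch (a toy illustration of my own, not from the course):

```python
# No number of confirming observations proves a universal hypothesis,
# but a single counterexample refutes it. (Toy data for illustration.)

def falsified(hypothesis, observations):
    """A universal hypothesis is falsified by any counterexample."""
    return any(not hypothesis(obs) for obs in observations)

all_ravens_black = lambda raven: raven["color"] == "black"

sightings = [{"color": "black"}] * 10_000  # confirmations: still not proof
print(falsified(all_ravens_black, sightings))                         # False
print(falsified(all_ravens_black, sightings + [{"color": "white"}]))  # True
```

Ten thousand black ravens leave the hypothesis standing but unproven; the single white raven settles the matter decisively. That decisiveness is what makes falsification, for Popper, the engine of science.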

This was also Popper’s answer to the demarcation problem: we’re doing science if and only if our hypotheses are falsifiable. This helps us further distance genuine scientific inquiry from pseudoscience. Unfortunately, drawing this distinction continues to be a problem for much of the general public—such as people who buy “magnetic energy bracelets.”

In the concluding post, Philosophy of Science II, I will look at the following topics and how they relate to our basis of the philosophy of science: Einstein, Classical Empiricism, Logical Positivism (such as A. J. Ayer’s), Holism, Hume and Induction, Kuhn’s historical perspective, sociology of science, postmodernism, laws, reduction and physicalism, scientific realism, probability, Bayesianism, and entropy.