Friday, March 13, 2009

Aristotle and Modern Science: Common Sense vs. Expert Sense

Recently reading Aristotle's Physics I came across this passage: 

Why not suppose, then, that the same is true of the parts of natural organisms [i.e. that nature acts not for something, i.e. for some final cause, but of necessity]? On this view, it is of necessity that, for example, the front teeth grow sharp and well adapted for biting, and the back ones broad and useful for chewing food; this result was coincidental, not what they were there for. The same will be true of all the other parts that seem to be for something. On this view, then, whenever all the parts come about coincidentally as though they were for something, these animals survived, since their constitution, though coming about by chance, made them suitable. Other animals, however, were differently constituted and so were destroyed; indeed they are still being destroyed, as Empedocles says of the man-headed calves. (Physics II.8, 198b23-33) 
What we have here is an articulation of the evolution of species by natural selection. Say some animals have some parts well adapted ("proper") for biting - sharp teeth. These sharp teeth are not there because that's what they are for, but rather coincidentally. What happens is that these parts get there coincidentally (a modern biologist would say this is "random variation"), and the organisms that had these adaptations would survive, while the ones that did not (like Empedocles' man-headed calves) would not. 

This, folks, is natural selection in essence. Let's compare it to Darwin's definition: 
 
Can it, then, be thought improbable, seeing that variations useful to man [in the breeding of animals] have undoubtedly occurred, that other variations useful in some way to each being in the great and complex battles of life, should occur in the course of many successive generations. If such do occur, can we doubt (remember that many more individuals are born than can possibly survive) that individuals having any advantage, however slight, over others, would have the best chance of surviving and of procreating its kind? On the other hand, we may feel sure that any variation in the least degree injurious would be rigidly destroyed. This preservation of favourable individual differences and variations, and the destruction of those which are injurious, I have called Natural Selection, or the Survival of the Fittest. (Origin, 107)

Now of course Darwin knew of this Aristotle passage - he quotes it in a footnote on the first page of the second edition - but he didn't think Aristotle understood its significance. The significance of this passage is that variation can and does occur, and the principle of "selection" - which variations will be reproduced and which will not - is their fitness for survival. Aristotle obviously says the same thing, but with a big difference: Aristotle didn't think chance could work like this. 

In fact, Aristotle rejects this argument precisely because we speak of "chance" as something that hardly ever takes place, while teeth are "normally" or usually there. In other words, animals having teeth is a very normal occurrence. If it is normal, then it is the very opposite of the coincidental or the chancy, which by definition is abnormal. Thus we need to look for another cause. For Aristotle this is the "final" cause, i.e. that an animal needs this or that part to become a fully mature animal.

Now, I can imagine Stephen Jay Gould's response to this: if you want to get on board with Darwin, you need to expand how you think about "chance" to include lots and lots of time (thousands, millions, of years), and lots and lots of interactions at the genetic level. If you look at chance events over millions of years, instead of just a couple, or a few hundred, you can imagine how chance variations can come up, and while they seem like chance variations from a certain perspective, they take on a statistical regularity over the long haul. This "regularity" is enough, if looked at in the right perspective, for natural selection by random variation to make sense. Aristotle of course did not entertain this perspective (the reasons for this are not merely historical, but also philosophical). How could you entertain it, Aristotle might say, when by definition chance is not something that is regular? Chance by necessity is rare! But then again, as Daniel Dennett says, perhaps this was a case of mistaking a lack of imagination for an insight into necessity. 
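To see how chance at the level of the individual can turn into regularity at the level of the population, a toy simulation helps. Here is a minimal sketch of my own (not something Gould, Darwin, or Dennett wrote); the population size, mutation rate, and fitness advantage are made-up illustrative numbers, but the pattern is the point: a rare, slightly advantageous variant reliably spreads once you give it enough generations.

```python
import random

POP_SIZE = 1000        # organisms per generation
MUTATION_RATE = 1e-4   # chance an offspring coincidentally gains the variant
ADVANTAGE = 1.05       # slight reproductive edge for carriers
GENERATIONS = 10_000   # "lots and lots of time"

def run_once(seed: int) -> float:
    """Return the frequency of the variant after GENERATIONS of chance plus selection."""
    rng = random.Random(seed)
    carriers = 0
    for _ in range(GENERATIONS):
        freq = carriers / POP_SIZE
        # Selection: probability an offspring's parent is a carrier,
        # weighted by the carriers' slight reproductive advantage.
        p_parent = (freq * ADVANTAGE) / (freq * ADVANTAGE + (1 - freq))
        # Chance variation: a non-carrier lineage may still produce a carrier.
        p_offspring = p_parent + (1 - p_parent) * MUTATION_RATE
        carriers = sum(rng.random() < p_offspring for _ in range(POP_SIZE))
    return carriers / POP_SIZE

# Each run is driven entirely by chance, yet the variant ends up common
# (usually fixed) in run after run: chance looks "regular" over the long haul.
for seed in range(3):
    print(f"run {seed}: final frequency of the variant = {run_once(seed):.2f}")
```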

So why oh why did it take over 2000 years, from Aristotle to Darwin, for humans to recognize this fact? Was it Aristotle's commitment to "final causes" in nature (an idea that, judging from the scientists I've read on it, is very little understood by most scientists today)? Was it an "essentialism" in Aristotle, Platonism, Islam, and Christianity, modes of thinking that dominated our culture for those thousands of years? Was it that Darwin finally threw off the shackles of dogmatism that plagued everyone from Plato to Paley? This, no doubt, is the standard story biologists tell. 

Yet I have an alternative hypothesis, something I have not done enough research on to really defend here. My hypothesis is this: in order to entertain random variation there had to be the historical development of population science. This historical development only arose with the modern state and notions of managing large populations. The very idea of "population" is something almost completely foreign to Aristotle's time. When you ask a question about a species you ask the question about a typical example of the species, not about the "distribution of attributes across populations," which is what statistics like the birth rate and the death rate track. This only happens with the advent of the modern nation-state, where territory and population take on a completely different sense than they ever had, and where you understand the nation in terms of its population, rather than its ethnicity, or its king, etc. 

One might retort - 'So what. So that's the development. It's no secret that science and ideas develop over time, and that there are antecedents to ideas. So population science was antecedent to Darwinian evolution. Big deal.' The big deal, however, is a question of perception. What type of perception goes into population studies? How has this perception altered our very everyday senses and perceptions of our world? Do we understand, on the whole, more or less about the world now that we see in "populations"? I don't mean expert scientists, but common people. If we are, presumably, "enlightened" individuals, "moderns" who are no longer bound by the shackles of dogma and religion (unless you happen to be one of those creationist fundamentalists), then we should actually know more about the world than people did, say, 300 years ago. But is that the case? 

I would say not. In fact, we're probably more ignorant of our world, as a general rule, than people were 300 years ago - back when people actually engaged with the things around them, instead of relying on technology or experts to give them all of their "knowledge" ("giving" here merely means: making things we use, without our having the slightest idea how they work). I would actually say that even if Aristotle is completely deficient in terms of "scientific" knowledge, he does give us something, in terms of common understanding, that really does help us know our world. 

Asking the question "why," for Aristotle, can be answered almost entirely by your senses. You can ask: "why does this bird build a nest like this?" If you look at the nest, you can look at the four ways we speak of "cause": you can find the "material" cause, the "efficient" cause, the "formal" cause, and the "final" cause all with your eyes. Paradoxically, modern science actually takes your eyes out of the equation - and replaces them with an equation - algorithms for variation, models of bird behavior, etc. Your eyes are actually deceptive - 'you think this table looks solid? Well my friend, you must know that it is made up of billions of atoms, and most of the volume of an atom - the space between the electrons and the nucleus - is complete void, empty space. If it weren't for the electromagnetic force that attracts and repels, friends, we'd fall right through this floor!' This type of explanation, like an evolutionary one might be for a bird's nest, is much more "accurate" than Aristotle's four ways of speaking about causes, but at the same time, for our normal interaction with the world, it is almost useless.

Population studies, and most science today, are the sole purview of the well-trained, well-funded policy arms of national governments. The "knowledge" of populations and the "management" of populations go hand in hand. Today, as Sajay Samuel says polemically, the polis or people are the subjects of experiment by the "experts." And this has been true since the beginning of population studies. Biology is not merely a way of seeing the world, but a way of making the world. 

Now, I'm not saying we go back to Aristotle. What I am saying is that Aristotle's attitude - that we begin with our common sense, our common understanding - is essential. We as modern people have given up our knowledge to a science that has become so obscure that most people have almost no knowledge about their surroundings. Perhaps we need to re-think the value of this common sense? 


Thursday, March 12, 2009

Arguments - In and Out of Science

I've just started what I hope will be a really good refresher for me on current science by the New York Times science correspondent, Natalie Angier (The Canon). So far it's pretty interesting, and I'm really excited to learn a bit about new experiments going on, and her writing style, while a bit verbose, is in general really good. 

One thing really annoys me though. Her first chapter is about the scientific "critical thinking" mindset. It is important to start here, because as she points out (and many scientists say), science is not about a set of facts, but about a way of thinking. This is different from other ways of thinking, which get called "opinion." This of course is a distinction that goes back to Plato, but her book seems to make it a distinction between science and all other types of thinking. E.g., she quotes Andrew Knoll of Harvard: 
"In politics, you can say, I like George Bush, or I don't like George Bush, or I do or don't like Howard Dean or John Kerry or Mr. Magoo... You don't need a principled reason for that political opinion. You don't need evidence that someone else can replicate to justify your opinion. You don't need to think of alternative explanations that would render your opinion invalid..." 
Of course, after this, science comes in with its methods of control and institutional checks and balances, peer review, etc. She then has a few pages on the way science is critical of itself, which culminates in a few pages about the uncertainty of science. Uncertainty is one of the most important aspects of science, because it's precisely in uncertainty that there is a motivation for searching and working and discovering (incidentally, this is also one of Plato's contributions: the philosopher is precisely that person who desires wisdom, but does not have it, and thus is continually impelled toward it). I would say kudos - the best theories out there are ones that have just enough certainty to keep working them out, but not enough to shut down debate. Those are normally the most productive theories.

The problem, she and some scientists say, is that this creates a poor public image. "How do you convey the need for uncertainty in science, the crucial role it plays in nudging research forward and keeping standards high, without undermining its credibility?" This is an excellent question, but the real issue has less to do with science's uncertainty than with people's standards for argumentation. If you consider argument to be either mere opinion or scientific, then you're setting up a false dichotomy. 

In other words, if you go around saying there are two ways of thinking - "critical scientific" thinking and "opinions" that supposedly do not need "evidence" - then you'll always have this problem. Andrew Knoll is just plain wrong that we do not need to justify our views. "Critical scientific" thinking is a species of the "critical thinking" genus. The Greek word krinein, from which we derive "critical," means to separate or divide (our word "decision" comes from the Latin caedere, to cut), and this is exactly what we do with "critical" thinking: we separate out good reasons from bad, cut certain perspectives while keeping others (winnow, if you like). This is a process that is much broader than modern institutionalized science. It happens in the everyday (should I go to this store or that?), and ought to happen whenever we think politically and in communities (where, sadly, it does not often occur). We separate this from that based on communal standards, principles we hope are true, and a whole lot of cultural history and knowledge gained through thousands of years of experience as humans. 

The solution to this PR problem is merely to note what science can and cannot do, and what political argument should but does not do. Scientific findings are a basis for making reasoned arguments about what one ought to do. They are not idols we must serve in order to appease the gods of modern style, because ultimately that merely shifts responsibility to an impersonal jumble of information instead of to actual people (like you and me, and our nation's leaders), who have to do the real work of decision. Science ought to keep its standards high, to attain the most certainty it can; then politicians and thinkers, citizens and individuals, must take responsibility for their actions in the light of how they see their lives playing out. The question "how ought we to be in our world?" is clearly helped by critical scientific thinking (although it is not always helped by science in general - atom bomb, anyone?), but that thinking is not the whole of the question.

Ultimately what really irks me about both scientific rhetoric on this point, and about people's annoyance at science's uncertainty, is that both seem to have an attitude that one must find authority somewhere else. 

'You can't argue for your opinion, because it's just opinion! You don't have the scientific method to back you up!'  

'We can't trust you scientists, because you guys get it wrong!'

Apparently I have been under the illusion that that little Enlightenment dictum, "think for yourself!", still applies. 

Wednesday, March 11, 2009

Epistemology and Love: Wendell Berry and "Science"

I've been reading Wendell Berry's fiction as of late, reading a few essays, and have listened to one of his interviews. I am really fascinated by his phrase "the way of ignorance." He and Wes Jackson use this phrase to indicate a certain epistemological humility, on the one hand, and a certain affection for their localities on the other. Berry thinks that one of the major problems with modern techno-science (and these almost always go together) is that it has no respect for localities, prefers to distance itself from what it studies, generalizes and abstracts to a damaging effect, and loves to reduce and dissect everything it comes in contact with. The effect, Berry thinks, is the destruction of land through the use of methods not suitable to that locality, the destruction of community through the mechanizing of labor and the conglomerating of land, and ultimately the destruction of knowledge through the over-specialization of the scientists themselves, and through the loss of a knowledge base in farming communities.

Clearly, Berry is against a mainstream understanding of "knowledge" if he thinks science destroys it. But what does he think knowledge is? From reading his short stories and novels so far, I can only say that he believes knowledge to be connected intimately with love. Love is attachment. Love is a focused care, a watching and waiting, a giving of space and a giving of time. Love is when you see yourself in the thing you're studying, when you see every consequence of your knowledge as a consequence for your person. I think Berry's epistemology, and indeed an epistemology of love, is best seen in Shakespeare's 73rd sonnet:

That time of year thou mayest in me behold
When yellow leaves, or none, or few, do hang
Upon those boughs which shake against the cold,
Bare ruined choirs where late the sweet birds sang.
In me thou see'st the twilight of such a day,
As after sunset fadeth in the west,
Which by and by the black night doth take away,
Death's second self that seals up all in rest.
In me thou see'st the glowing of such fire,
That on the ashes of his youth doth lie,
As the death-bed whereon it must expire,
Consumed with that which it was nourished by.
This thou perceiv'st, which makes thy love more strong,
To love that well which thou must leave ere long.

I think this sonnet typifies Berry's approach, and I must say I'm starting to think of knowledge in this way. The poem, in three quatrains and a couplet, moves like this: first, we hear the poet is in decline. The phrase "bare ruined choirs" is one of the most beautiful in the language for speaking of decline in particular, because we are speaking of seasons, of fall, where soon winter will take over, a sort of death. Next, we meet another sort of death, the death of a day - "death's second self," i.e. night, which gives rest as much as it "seals up" in a tomb. Yet third we find the most interesting aspect of this: the "glowing" of the poet is actually that thing which leads to his decline, as he is "consumed with that which [he] was nourished by." This has always reminded me of the life cycle: the agent of our growth is the agent of our decline, since on the one hand, as we replace our cells (popularly said to take about seven years for the whole body) and grow, we are also being undone, because cell replication degrades a little each time the cells replicate. This is merely the process of growing old.

But when we get to the couplet, we realize that even as we see this in the world, we still love the world. 

Sunday, March 01, 2009

Plantinga v. Dennett

I just finished listening to the Plantinga/Dennett encounter at the APA this year. I have to say I was a bit disappointed in both Plantinga and Dennett, for various reasons. I was disappointed in Plantinga because he did not really engage Dennett directly much, nor did he really answer a basic issue that Dennett often brings up, i.e. the charge of dualism (skyhooks and cranes, as it were). Dennett on the other hand was so ad hominem and rhetorical that it was hard even to consider him a worthy participant in a debate of a philosophic nature. He is great at getting you to imagine things in different ways, but he treated Plantinga with disrespect, which was unfortunate. He used so much rhetoric and anecdote that he seemed like a sophist trying just to win the argument, and if you're a philosopher, that is supposedly the last thing you'd want to do. 

Be that as it may, some interesting things came up right off the bat. Plantinga basically started by arguing that theism and Darwinian evolution are compatible, but that metaphysical naturalism and Darwinian evolution are not (which is a really old saw for Plantinga - another unfortunate thing about the encounter). The former thesis he argues by saying that evolutionary theory does not rule out that evolution was guided by God, since "random variation" is something of a misnomer (there is a cause of everything), and there is no reason, from a theistic point of view, that God could not have guided this variation or have been the cause of the variations. This of course brings up the "problem of evil," since natural selection seems like an extremely harsh thing for a supposedly all-benevolent, all-powerful God to use. He didn't really spend much time on this, because it is one of those issues that are perennial (at least since the Enlightenment), and which he's dealt with in other places.

This latter thesis, that Darwinian evolution and metaphysical naturalism are incompatible, he argues using a probabilistic argument. Naturalism has a built-in "defeater." By saying that we can completely account for human thought from evolutionary origins, you undercut any notion that our thinking is fully reliable. The reason for this is that in order to aid survival, the content of our beliefs does not have to be true; our behavior only has to be adaptive (and if you define the "true" as the "adaptive" it would be just a big fat tautology, and would really mean nothing). In other words, Plantinga thinks that this type of naturalism does not really aim at truth but at some sort of adaptation, which can come at the expense of our confidence that we can get at the truth. Here I quote Darwin, a quote that Plantinga likes, since Darwin was aware of this issue: 
    
With me the horrid doubt always arises whether the convictions of man's mind, which has been developed from the mind of the lower animals, are of any value or at all trustworthy. Would any one trust in the convictions of a monkey's mind, if there are any convictions in such a mind? (Letter to William Graham)

In other words, the issue is that adaptive behavior doesn't require that our belief claims be true, just that our cognitive faculties work in such a way as to enable our species to survive and reproduce. His formula is that the probability that our cognitive faculties are reliable (R), given naturalism (N) and the evolutionary origins of all our thought (E), is low: P(R|N&E) is low. If this is the case, then any claim we make in the realm of naturalism is self-defeating, because if our cognitive faculties merely arise from adaptive behaviors, no guarantee is made that they actually function reliably, and this would hold true for naturalistic evolutionary claims themselves. 
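Spelled out as a chain (this is my own schematic restatement of the paragraph above, not Plantinga's wording), the argument runs roughly:

```latex
% R = our cognitive faculties are reliable
% N = metaphysical naturalism
% E = all our thought has evolutionary origins
\[
  P(R \mid N \,\&\, E)\ \text{is low}
  \;\Longrightarrow\;
  \text{a defeater for belief in } R
  \;\Longrightarrow\;
  \text{a defeater for every belief those faculties produce, including } N \,\&\, E \text{ itself.}
\]
```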

Now, in this encounter, Dan Dennett didn't really address this directly. Instead, he kind of went in a roundabout fashion to argue that it is simply silly to consider anything other than naturalistic causes for things (this is his famous "skyhooks or cranes" argument). He said, 'Listen, it's true that theism and Darwinism are compatible - but so what? Let's start a new religion, and call it Supermanism. We'll say Superman created the world, and there are certain evidences of this throughout the earth. Is this plausible? No, of course not. The same with theism.' Now, later in his rebuttal Plantinga did mention he wasn't trying to argue from an empirical standpoint that God's "footprint" is in the world, as it were. In other words, he isn't trying to say that we should somehow replace scientific research with research using theistic "causes." Dennett's point, however, when turned toward "intelligent design," is a good one. As far as science is concerned, putting an intelligent designer into your equation does nothing for the science itself. It's a gratuitous (his word) addition, with no point whatsoever. So what's the harm? 

The real issue Dennett gets at, eventually (after much rhetorical baiting), is that by making claims like Plantinga's, we undercut the epistemic responsibility of thinking humans. This responsibility is to make only those claims we can actually empirically verify using the best methods we know, but on the flip side, this gives us freedom to do as much as possible with science at our side. To do anything less is to give in to irrationality, and that is morally culpable. 

I think one of the things that gets Dennett's goat about Plantinga (and I don't really know, since I have not read enough of Dennett's responses to Plantinga, and he did not directly address this here) is that Plantinga argues that most of the time, humans do not really accept religion based on argument and evidence, nor do they need to. In other words, Plantinga says that argument isn't everything, that there are other ways of apprehending truth than argumentation. If you are a religious person, you certainly understand what Plantinga is talking about. No argument from intelligent design, nor any cosmological argument, made you believe. Belief is not even primarily cognitive when it comes to religious belief: it has much to do with trusting others with actions, with your fears, wishes, hopes, etc. There may be something cognitive about all these, but there is something more too. Dennett did not directly talk about this, but I think this is a huge hurdle for him, because he really seems to have the position that one must argue for everything that one believes. Of course, I'm not sure about this. Some philosophers, like Donald Davidson, argue that we assume that most of our beliefs are true without argument, and that is probably right (beliefs about our everyday life, e.g.). Dennett might just think the big questions need to be argued for. 

And here Dennett may have a good point. It seems we really should have evidence for all our beliefs, right? This encounter was very uninspiring because this is really the crux of the debate between religion and a version of science (metaphysical naturalism), but it was hardly touched upon. Dennett did bring it up when he talked about epistemic responsibility, and Plantinga did as well, when he talked about the reliability of naturalism, but in both cases they didn't really confront the issue head-on. 

By the end, Dennett finally brought out his main idea in dealing with religion: we ought to study it scientifically so we can see its contingent history (its evolutionary history), how it arises, how religious belief is formed, etc., and in this way we can unmask it. By unmasking it we "demystify" it, so to speak, and ultimately (I suppose the hope is) we can get rid of it. It's funny, because Dennett never seems to acknowledge that the social sciences have been doing this since the early 1800s. But the problem for Dennett, and for those who want to demystify religion, is precisely the issue that Plantinga brings up: most religious people do not accept their religion based on argument, so why would they get rid of it on those grounds? There are lots of really good reasons people reject religion, and we can see this in the West especially clearly. The Christian church, for instance, has lost much ground in the West not merely because of science, but just as much because of politics: the church has been horrible on issues that really matter to people, like sexuality, the poor, child abuse, gender roles, race, environmentalism, etc. Does it really matter to people whether they have the right view (at the time) of the big bang, or of which ancestor we came from? Probably not. More important is action: how are their lives? That's the important question. 

In Dennett, as in Dawkins and other metaphysical naturalists, I don't see responsiveness to these types of questions. Not that I necessarily agree with Plantinga either. I think his argument about the unreliability of our thinking on a naturalistic account is rather thin. Right now, in fact, I'm reading Dennett's Consciousness Explained, and in his very Wittgensteinian procedure, he is just great at showing how we can really imagine how things work, if we change the way we think about those pictures that have been holding us captive for so long in philosophy. I really like how Dennett tries to get us to see something new, rather than rely on philosophical logic (in the way Plantinga does). This way of thinking is really interesting, because it is creative of new thought, and instead of focusing on how things have to be, he focuses on how things could be (Dennett often charges philosophers with mistaking a lack of imagination for an insight into necessity - which is very true). 

My only complaint is that Dennett then himself lacks imagination. It's not that we need to postulate God in order to explain science, or explain the natural world. But there are compelling reasons for thinking that religion really does add something to our lives, something that we would lose if we restricted our view of the world to evolutionary scientific thinking. On the flip side, evolutionary thinking is creative and exciting, so we should not get rid of that, either (although I do have some huge reservations about "memes" and other such evolutionary-psychological ways of looking at culture - which is part of why I am reading Dennett now).

In short, humans have such a variety of ways of life, and such a variety of ways of thinking, that we shouldn't be too quick to restrict these ways of thinking. Certainly we should call out beliefs that produce great harm. But the problem is, we are in the midst of all these ethical questions, and so just taking one top-down approach (such as metaphysical naturalism) would only obscure all of these issues. In addition, ethical questions are not just about genealogy, about where our beliefs come from (and naturalistic thinking is not the only way to do genealogy - there's also historicism), but about what to do now. To reduce the variety of human thinking and imagination just because you want to use one way of thinking that works well in science is short-sighted, and will ultimately not work, even if you wanted it to. As the ancient philosophers recognized, theory by itself really doesn't do anything for us: it's the practice of theory that does something. 

And so far, I haven't heard how a metaphysical naturalist would practice metaphysical naturalism. 

Tuesday, February 10, 2009

Head in the air or nose to the ground: Mysterious Skin Reconsidered

I recently re-watched Gregg Araki's "Mysterious Skin," a film about two boys dealing with one traumatic event: sexual abuse at the hands of their Little League coach. Both Neil and Brian have reactions to this event that are deeply disturbing, and by the end of the film, we get the sense that their lives have been irreparably damaged. The actual narration in the last scene, and how it is filmed, are both very instructive. But before we get to that, let me point out the ways they react to this event.

Brian is the "typical" geek-type character, someone who is bad at baseball, has huge glasses, whose mother coddles him to a fault, and who in general doesn't seem to be able to fit in socially. His narration of events revolves around explaining these experiences of "time-loss." He has the first one when he's 8, and every time he has this experience, his nose bleeds. The summer of 1981 includes not merely these time-loss and bloody-nose episodes, but an "actual" spaceship over their house. Clearly his memory has faded into fantasy. 

As he grows older, this fantasy becomes more and more a part of his daily experience. His dream world - the return of the repressed, no doubt a Lacanian would say - gets filled with visions of aliens and exam tables, and eventually he watches a T.V. show that talks of alien abductions. He finds another person with a similar tale (an excellent Mary Lynn Rajskub), and together they commiserate about their numerous abductions. Through this fantasy world, Brian attempts to explain something un-explainable in his past, something that left him so vacant and empty (as Neil says later on) that Brian fills this gap with an account that appeals to something transcendent or beyond, an other world. This is not so different from various accusations against religion, by the likes of Marx, or Nietzsche. Nietzsche of course says this type of appeal is the ultimate nihilism, for it refuses to affirm life here and now.

Yet this affirmation is just what Neil's character provides. Instead of throwing himself into some explanation of an unknowable, traumatic event, he is fully aware of his relationship with his coach, and fully aware that it's "fucked up," as he puts it. At the same time, the way he deals with it is almost an immersion in its materiality, in its physicality. This immersion partly means he becomes a call boy, and fills the needs of various men in the town of Hutchinson, Kansas. We see various experiences in this vein, like when he's 15, and when he goes after the one guy he never had. Later, when he moves to NYC, there are other, more disturbing scenes: his first time in NYC (and his first experience with a condom); a moving and ultimately life-altering encounter with a man with AIDS; and finally, the one event that brings this destructive and disastrous life to a head, an experience of being raped. Throughout all of these experiences, Neil maintains a deep and moving distance, partly because we often see his moment of orgasm and, at the same time, the indifference that led him to these actions. This cognitive dissonance is enough to show how this immersion in sensuality almost destroys him. 

The final scene is the most fascinating in light of the story. Brian and Neil meet, and Neil takes him to Coach's house (now inhabited by someone else). They sneak in, and sit in the room where it all happened to Brian and Neil. Neil remembers everything, and as he narrates the entire event, Brian leans on him, and eventually lies in his lap, and his nose starts to bleed. Outside (it is Christmas Eve) a choral group starts singing "Silent Night," and the camera moves to an overhead shot. The voice-over has Neil talking about how sorry he was this stuff happened, and how he wondered if there was any escape from this world. He wonders (as the camera moves upward, and the only illuminated space is Brian and Neil on the couch) whether they might become two angels, who leave their bodies, and disappear. 

As I reflected on this, I realized that this film does a wonderful job of contextualizing two possible (although extreme) responses to intense suffering, and how these two responses ultimately lead to something of a "gnostic" answer. What I mean by this is the basic answer that "gnosticism" gives to suffering and the world: denigration and flight. For gnosticism (whether the gospel of Thomas, or the gospel of Judas, or any of the various modern forms) the body is the problem, and situations like sexual abuse just point this out all the more. Gnosticism is both an other-worldly flight, because of its ultimate goal of release from this present body, and a gross materialism, which can conceive of matter and the body merely as decay and something to be spent.

For Brian, it was a matter of spiritualizing the experience (not so different from many Christians who emphasize dying and going to heaven, although this is not the orthodox Christian position), and in this spiritualization he becomes something of an asexual person who is visibly uncomfortable with himself. His body becomes a problem for him, or rather the problem, since it seems to him that the encounter with something alien has completely altered his life. Brian denigrates his body precisely because it becomes something alien to him, understood only in its relation to his fantasy world (again, there are some very interesting parallels to many strains in contemporary Christianity).

For Neil, it was a matter of complete immersion in the "materiality" of his body. This is completely different from Brian's response, but ends up being similar in that Neil clearly distances himself from his body in order to endure these experiences. In submitting to the bodies of others, Neil similarly denigrates his body, but the difference is that he also uses his body as a tool, and in some sense, practically disrespects it even more than Brian did.

These I am calling "gnostic" responses, and in my view, they are ultimately inadequate (which also seems to be the view that Araki takes with that last scene - the escapism Neil advocates is clearly a dream). What would be adequate? This is for another post, but at this point I would say from a theological perspective that this is why Jesus' resurrection and the language of "new creation" are so important. The entire idea of the resurrection of the body is that it affirms the body - against the "other-worldly" response of Brian and gnosticism - and opposes the dualism of material/spirit. On the other hand, the idea of new creation argues that this new creation is firmly planted within the old, and that creation is thus understood from the perspective of the possibility of renewal - against the gross materialism of Neil and gnosticism. 


Saturday, January 24, 2009

Mythos and Logos: A productive tension

It's a platitude in the history of philosophy that Thales of Miletus was the first "philosopher," and that a philosopher is defined as someone who explains the world in terms of logos (or "reason") instead of mythos (or "myth"). Now, Jean-Pierre Vernant is right that this translation of "reason" as opposed to myth is too simplistic - the old cosmologies and cosmogonies were filled with their own "reason" - but in essentials this is right. The question is: what is the status of myth? 

This is a question that filled the heads of ancient scholars, especially the grammarians of Alexandria. They developed full-scale allegories to explain the battles of the gods in terms of principles rather than persons. This seeped into the Judeo-Christian tradition (the most obvious example being Philo of Alexandria), and there have been plenty of thinkers in Christian history who have tried to balance these two, but who have ultimately had the logos reign over the mythos, and made sure Christianity was understood as the "true" philosophy. 

In our contemporary era we too have something of a battle between these. I find myself at this point in my faith and thought to be stuck at the fulcrum, so to speak, of myth and reason. On the one hand, there are excellent explanations of the world through principles instead of stories. On the other hand, reducing our understanding of the world to principles instead of stories is a choice that is not necessarily self-evident, and seems, to me at least, to lose something valuable about the world. After all, why do we model our knowledge of the world on the natural sciences, and on knowledge about things? Why don't we model our knowledge of the world on how we know people? In my view, that is exactly the kind of knowledge religion offers. 

How do we know people? First, we 'get to know' someone by hearing their story. When we become friends with someone, the first things we want to know are things like where they're from (their "geography"), who their parents and family are (their "genealogy"), their interests, goals, and ideals (their "axiology", or what they value), and major events in their lives (their "history"). Second, we get to know someone by seeing them, hearing their voice, and by acting with them (i.e. doing things). Once we've gained knowledge through these avenues, we then say that we know them - more or less. Now, if an evolutionary biologist were to challenge this knowledge based on the canons of scientific methodology, we would probably say that they're crazy. We'd say that the biologist might know that person in general, but not that person in particular. Even then, this "general" knowledge would be pretty much worthless for our purposes - being friends, or family, etc. 

Religion, it seems to me, is modeled on the notion of knowing people. That's what the old Greek "myths" are all about. That's what the Christian "myths" (stories) are all about as well. Now, one of the major differences between Greek myth and Christian myth is that Christianity focuses much more on history than on genealogy. It does focus on the latter (especially the Adam and Eve story), but nowhere to the degree of Greek religion (there is no "Theogony" in Christianity). There is a personification of God, precisely for the reason that religion sees knowledge in terms of people rather than things. And the history of Israel, and the history of the early community surrounding Jesus, is a history that precisely does not attempt to reduce humans to things, or even the community to a "thing" (as the social sciences would do). In early Imperial Rome this only makes sense. Because the Romans were so good at reducing people to property (a huge share of the people in the Roman empire were enslaved), it only makes sense that human dignity was important.

It is often precisely the personification of God that annoys many scientists. Scientists pride themselves on reducing every entity in the world to things, which can be explained by laws (since they all model themselves on physics). There is nothing wrong with this reduction. It is clear from the history of science this produces wonderful things in the world - vaccines, better agricultural techniques, and lots of other wonderful technology. In fact, treating every entity in the world as a thing, methodologically, helps you to do very interesting things, and there is no doubt that our world would be much harder without it.

But can we discount knowledge of people as untrue, because it does not reduce them to things? That, in my view, is what people attempt to do when they extol Darwin for changing the world. There is an assumption that after Darwin, we can no longer hold to the old "myths" (the God of Christianity, or at least of 19th-century Natural Theology Christianity), because they are so unconvincing "for thinking people" (as Ernst Mayr says). "Thinking" people must consider knowledge in terms of science, or else they are not "thinking." Hence religion cannot make sense to the thinking person, because it personifies everything, making things unexplainable. 

And here is the rub: there is something ultimately inscrutable about every person. When we "know" people we are not saying we have exhausted all the possibilities of that person, that we have gotten to the point where we know everything, so we can anticipate everything. It is the same with God. Christianity never purports to actually know everything about God, because ultimately we do not. There is something fundamentally unknowable about God in principle. Clearly, in terms of evolutionary biology, this is unacceptable. Even if we concede there is plenty of mystery in the world, and have some type of reverence for this mystery, this is not a mystery in principle, but rather in fact. We expect eventually to plumb the depths of this mystery, while religion never assumes it will plumb the depths of the mystery of God. 

Knowledge of people, and knowledge of things. In my view, this is not an either/or, but a both/and. We need both, because both are, because we experience both. The one is not reducible to the other, although I would not say they are complementary either. Instead, they are in tension. Hopefully though, instead of a tension where each side waits for an apocalyptic annihilation of the other, the tension is productive of thought. Religion needs science to remind it that it does indeed personify things, and hence can make idols out of them (and here I'm thinking of the so-called "health and wealth" gospel); on the other hand, science needs religion to remind it that there is more to the world than things, and that its reduction of people to things is not absolute. I expect there always to be a tug of war between religion and science. If they both stick to their guns we can expect plenty of fruitful thought for many years to come.

Wednesday, January 21, 2009

The rhetoric of the absolute

As of late I have been reading Walter Brueggemann's "Theology of the Old Testament: Testimony, Dispute, Advocacy." However, this morning something really annoyed me, a tendency in much post-Derridean critical thinking. Following Derrida's critique of people like Heidegger, Plato, and Hegel, thinkers sometimes bring out a rather simplified picture of western thought as "totalizing" and "absolutizing." In my view, this is tremendously wrong: not just because it over-generalizes in its own way, but because it seriously misreads so much of western philosophy. Derrida's deconstruction, I would argue, is quite egregious in this. 

First Brueggemann. In explaining the concept of "countertestimony" in the Old Testament (i.e., those texts of the Old Testament that seem to challenge Yahweh's sovereignty), Brueggemann sets up an opposition between a "Jewish" way of thinking and a "mode of reason" associated with the West, "rooted in Plato", which tries to settle all disputes, and hence "to stop the political discourse that was sponsored by the Sophists" (330). What Brueggemann is pointing to here is the difference between "eristics" and "dialectics" for Plato. E.g., in the Meno, Socrates contrasts the "contentious and eristical wise men" with a more "gentle" form of discourse, "dialectic" (75d). For Brueggemann, and I would add a number of other contemporary Biblical scholars (Elisabeth Schüssler-Fiorenza comes to mind here), this is a contrast between an "open" rhetorical and sophistical movement and a "closed" philosophical and absolutizing movement. Derrida certainly was fond of pointing to the chinks in the supposed armor of Platonic "realism." 

But is this what goes on in Plato? Hardly. One of the most consistent mistakes in understanding Plato is forgetting that he wrote dialogues, and that "Plato" never appears as a speaker in any one of them. There is a good reason for this: Plato's Academy, as many later writers attest (especially Cicero), was actually the most open of all the ancient philosophical schools (especially in comparison with the Epicureans), because so much of their philosophical way of life was rooted in dialectic. Dialectic is a communal process that philosophers in Plato's school engaged in, and it was primarily an askesis, or a self-transformative practice, that aimed at enabling students to transform themselves into people who submitted to reasonable discourse, instead of "eristics" - which in their view was argument for argument's sake. In other words, the point of dialectic was to transform individuals into those who could recognize the force of the better argument, and participate in argumentation in order to search for the truth. 

Furthermore, it is clear from later dialogues such as the Parmenides (which is a thorough-going critique of the supposed "theory of forms") that there is no such thing as "Plato's doctrine" (written or, as the esoterics would have it, unwritten), and that the dialogues were not meant as a "system" of philosophy. From various testimonies in the Hellenistic period, Plato's Academy never had one over-arching "doctrine," like the Epicureans or Stoics (the Peripatos did not either), but included a multitude of perspectives. The philosophical way of life for Plato's Academy was thus philosophers learning how to dialogue with one another. I suspect it was rather the Imperial Period - especially the Neoplatonists - that codified a "Platonic" doctrine. This period was marked by commentary - which the earlier Hellenistic period was not.

Be that as it may, this brings up a tendency in contemporary thought, out of a somewhat Derridean lineage, to generalize certain aspects of a thinker without, it seems, a clear interpretation of that thinker. Derrida, in my view, is the Socrates of the early dialogues. That Socrates asked the impossible: define "piety," "poetry," "virtue," etc. He was never asking to actually get a definition - the terms he sets up for the definition are too impossible by half - and so most of the early dialogues end in aporia, in a puzzle. Derrida does this by looking at the impossible in texts, the little contradictions, the peculiarities, those parts of the text that seem designed to be misunderstood. He does this, it seems, for a very particular purpose: like Socrates, to engage us in active re-interpretation. Derrida too is, like Socrates, repetitive and annoying (Socrates himself mentions this fact in the Apology), and if you read too much Derrida, you start to be annoying. Nevertheless, Derrida's philosophical purpose seems to me to be right in line with the Academic (in the sense of Plato's school) way of thinking. Deconstruction is "justice," as Derrida says. In other words, deconstruction is a practice, an askesis that tries to move one out from under a self-satisfied "knowledge" of our philosophical tradition, toward a constant re-engagement with that tradition. 

The problem comes in with the "disciples" of the "deconstructionist." Too many thinkers take the easy way out, and refuse to deconstruct Derrida himself, or resort to the Pythagorean "ipse dixit" - Derrida tells us that Heidegger is absolutizing, Hegel is a totalist, Plato squashes all political debate. From the beginning, though, Derrida has always pointed to deconstruction as a method, like the Platonic dialectic, that does not issue in doctrines, but is an end in itself. Furthermore, the reason it is an end in itself is that it seems, to me at least, to be primarily a practice. Philosophy is a way of life, and as for other thinkers before - the entire ancient tradition, but also thinkers like Wittgenstein, Hegel, and Heidegger - philosophy is meant to issue forth into an entire life. And as it was for Plato, so it is for Derrida: that life is a life of justice.

Saturday, October 04, 2008

Of Ambiguity

Plato exiled poets in the Republic. The problem, Socrates says in Book III and again in Book X, is that they are imitators, and what they imitate is often the worst in humans. Moreover, since poetry does not truck in knowledge, the poets do not know what they are doing (so the argument goes in the Ion). The truth is not the point of poetry. 

Philosophers do not like ambiguity. In fact, some even think the basic cause of our philosophizing is ambiguity, the need to make things clear, to enlighten us about a certain conceptual usage, to make explicit a certain implied consequence of a commitment (of course philosophical prose might belie this intention - Kant no doubt ignored his editor). Ambiguity is the cause of a whole host of problems, one might say. Conceptual fuzziness does not help one to get along. All the things we wish to do with thinking are subverted by it, like setting up parameters for scientific inquiry (which ultimately helps us to control nature better), or setting up rules for political discourse, or trying to explain religious belief, etc. Ambiguity makes all of this more, not less, difficult.

I recently watched director Michael Haneke's first three films, which apparently form a trilogy: The Seventh Continent, Benny's Video, and 71 Fragments of a Chronology of Chance. They all deal with similar themes: death (a family suicide in The Seventh Continent, a teen murder in Benny's Video, and a triple homicide/suicide in 71 Fragments), distances between people, and material objects (in all three films many of the scenes are dominated not by faces, as is normal, but by hands, feet, and things). 

Haneke, in addition, is the master of teaching us how to look, how to notice things we don't normally notice in an image. Haneke is a master of the precise image, the image that tells us everything we need to know, and hints at even more. As he says in one of the interviews connected with the DVDs, his intention is to get the scene just long enough that we really see what's happening, without assuming we immediately know. E.g., in 71 Fragments there is a scene of a character practicing ping-pong very seriously. It's a nine-minute shot, and he's doing one thing: hitting ping-pong balls. Haneke imagines an audience doing this: seeing the shot, and saying 'I know what that's about.' Then waiting for the scene to end, and realizing it won't, getting bored; then getting a bit upset, because it hasn't ended; and after being upset and bored, realizing that there is something to look at, and starting to actually see what's there. I think I had this exact experience when I watched it.

Another very important aspect of these films is the lack of explanation. We never know why the family commits suicide in The Seventh Continent, why Benny kills the girl, or why Maximillian B. opens fire at the bank. These are all completely unexplained, and Haneke wants it that way. As opposed to famous film explanations - I'm thinking of Hitchcock's Psycho, with that deflating scene at the end trying to explain Bates, or Don Siegel's 1956 version of Invasion of the Body Snatchers, with an explanation and wrap-up at the end - Haneke never lets anything be explained. The ambiguity of the motives is an essential aspect of these films, one that really allows the viewer to feel emotion. One of the problems with films like Psycho is that the emotion conjured up by that last view of Bates, who is talking in a voice like his mother's, is completely evaporated in the very next scene by an explanation. Explanation guides you, directs you, forces you into one particular view of the matter at hand, so that any emotion you may have felt is immediately lost. 

Which is exactly what Plato wanted philosophy to do. The problem with the poets is just that - they guide you to whatever particular emotion you happen to have, not to the correct emotion, the emotion that leads to the truth, to the correct behavior, etc. 

Contrast this with what Haneke does. He juxtaposes exactness of images with ambiguity of narration. Sensations of course are the most particular thing we can have, and so instead of guiding our thought, he lays out the possibility of reactions through sensations - which of course is the point of a film image as opposed to a theater production (although, as Jean Renoir has said, much film is actually theater). The ambiguity of the narration then takes up where these sensations leave off, and leaves us, as viewers, participants in the production, in the story itself. Why does the family commit suicide? We have no idea, but as they are flushing money down the toilet, as they are sitting watching T.V., slowly dying, we are barraged with questions from our own selves - how could they do this? How much like their lives mine has been! What would drive a father and mother to help their 8-year-old daughter kill herself? What does this say about our society (it was a true story)? These questions are productive, they produce reflection, instead of answers. There are no answers. 

And perhaps that's the most important aspect of philosophy - to question. Ambiguity is not something to be afraid of. It's something to stimulate, to move you, to make you wonder. Why? Because it asks you to think. That's the point, right? 

Tuesday, September 30, 2008

The Truth of an Illusion

Yesterday, in my introduction to philosophy classes, I taught my students about the three big theories of truth: the correspondence theory, the pragmatic theory, and the coherence theory. These three theories represent the basic positions on what truth is, although hermeneutic theorists like Heidegger and Gadamer certainly offer their own visions, albeit in not so formal a manner. In any case, I was listening to Terry Gross this afternoon, and she had on Bill Maher and Larry Charles, the star and director of "Religulous." One of the things that Terry Gross asked was whether or not religion, even if one admits that it is a bunch of stories, can still be useful. Maher thought no, for ethical reasons (because of all the bad things that have been justified using religion), while Larry Charles said no, for theoretical reasons: the stories are not true, so one should not believe them. 

Of course this brings up a question for me: what is Larry Charles' view of truth? I would hazard a guess it's the correspondence theory. Like most of the so-called "new atheists" (Hitchens, Dawkins, Harris), I would assume most atheists in this mold probably think of truth as the correspondence of our ideas to reality. If they do correspond, then everything is good (enter modern science); if they don't, everything is bad (enter religion). This basic view entails a whole bunch of theories, like the theory of representation: to "know" something is to represent it faithfully, to re-describe "reality" in language. So when biologists analyze something into the language of evolutionary biology, they are representing reality faithfully in a different language - one that hopefully helps us to understand the world a bit better. 

There is only one small problem with the correspondence theory of truth, a problem that, when it is pointed out, makes correspondence theorists merely shout louder: they beg the question of what "reality" is. In other words, if you define "true" by reality (in this theory, it is reality that makes a proposition or view true), you have not answered the question of what reality is like. If you try to answer that question, you immediately have to come back and say that this definition of reality is the true one, and not that. But if you use the conclusion as a premise, and then the premise as the conclusion, you're merely begging the question. 

In the case of Larry Charles' views, this would mean that however he defines "reality" (which probably excludes certain types of "supernatural" phenomena - an "immanent frame," as Charles Taylor puts it), he is merely assuming that it is true, and then defining what counts as true based on this assumption. Why should truth be based on that particular assumption rather than another? If it's not argued for and defended, we'll never know. And to me, that is one of the major drawbacks of the correspondence theory of truth. Most of these theorists do not argue for their picture of reality, because they want "reality" to be some inert thing that is absolutely untouched by anything human. It is just "there" and there's nothing you can do about it, is the attitude. 

One of my basic problems with this attitude is that if you do not argue for your view of reality (and you can't with the correspondence theory - it would be arguing in a vicious, not a virtuous, circle), then you're apt not only to be a totalizer, but to lump together all things that merely "seem" similar. Why? Because of a habit of thought. If you don't argue for reality, if you don't think about your own assumptions toward it, how well will you be able to reach behind yourself and see how your own history, social location, economic standing, etc., help to color how you think of other things? If you are not used to doing this, why would you be able to be hermeneutically sensitive in your definitions of anything? 

Take religion. The very idea that there is one thing called "religion" is ludicrous. First, the history of the term is interesting. It developed in the 17th century precisely to describe a certain war, and so deciding what is religious or not is from the outset framed in terms of Protestant and Catholic disputes (also, since this is the case, for most of history no one thought of themselves as "religious"). But how can you lump all these things together? Religions are so different as to be unrecognizable to one another, and even within religions the diversity is so great that I probably have more in common with atheists than I do with a lot of Christians. The standard of Protestant Christianity becomes the standard for all "religion" (a standard that is also a misrepresentation of Protestant Christianity). Yet someone like Larry Charles thinks that because his assumption about reality does not include this so-called "religion," this "religion" has no place in life. 

To me, this is just a case of "the superstition of science scoff[ing] at the superstition of faith," to quote James Anthony Froude (himself a famous apostate and Carlyle biographer). To hold a view of reality without argument is just as egregious as believing in so-called "myth." In either case, one is standing on a foundation that ultimately does not have recourse to reasons and argumentation. This very well may be the human condition, but in my view we ought just to be up front about it, and change a vicious circle into a virtuous one by going beyond any such "correspondence" theory of truth. 

And if Maher and Charles want to know what a Christian like me thinks about "religion," then I offer this quote from Kierkegaard: "to stand on one leg and prove God's existence is a very different thing from going down on one's knees and thanking him."

Monday, September 29, 2008

Absolute and Relative - Truth, Morality, Anything...

While questions of "absolute right and wrong" are not as pressing these days as they were, say, in the 90's (the heyday of groups like Focus on the Family and other "moral majority" groups), thinking about "absolutes" is still an interesting and fruitful exercise. I recently came across one of Richard Rorty's arguments on this point. And like typical Rorty, it is grand, dismissive, and extraordinarily interesting. 

Rorty begins by mentioning that many people attack him on this particular point: that he denies there is any concept that we call "truth," and is thereby a relativist. What he denies, instead, is the dichotomy "reality-appearance", and the attendant correspondence theory of truth that this dichotomy implies. His detractors think that any theory of truth besides the correspondence theory leads us on the path toward relativism (especially the pragmatic theory, like Rorty's). 

But that doesn't mean he doesn't believe in truth, or so he maintains. Truth is surely an absolute notion. He gives two examples: we don't say "true for me but not for you," or "true then, but not now." Clearly, the geocentric view of the solar system is untrue, and never was true - absolutely, and with no preconditions. But, he then says, "justified for me but not for you" is a common locution, and one most of us are quite happy to go along with (although not for everything). And the thing is, "justification" is the application of truth - or at least it goes along with it quite strongly, as William James points out. In fact, Rorty argues, justification always accompanies any claim to truth. From this Rorty draws a conclusion I've often thought about myself. 

His conclusion is this: granted that truth is an absolute notion, the application of this concept is always relative to the situation we are in. The criterion for applying the concept "true" is relative to where we are, our limitations, and our expectations. At the same time, the nature of truth is certainly absolute. But if this is the case, what is the point of a theory of the nature of truth? Rorty sees none. If we only encounter truth in application to relative situations, what is the point of specifying "absolute" truth? We never encounter it, so even if we saw it we wouldn't know we were looking at it. 

I have to be honest here. I should have written this like eight years ago, when I first encountered James Dobson's "Right versus Wrong" campaign. At the time I thought, "sure, in general we have a good idea of right and wrong. But absolute right and wrong? How could we ever know that? We are never in situations that are clear enough, never in situations that present themselves to us so cleanly. Who has ever had to make a decision a) with full information, and b) with full moral certitude? How would we even get this certitude, since by definition 'absolute' means something unconditioned by how we think about it? But what could that be?" 

I believe this line of thinking comes from a specific source for me, and on this Rorty agrees. Orthodox monotheists (Jews, Christians, and Muslims) basically view God in this way. They say, "God has indeed been revealed to us - but we can never fully capture God conceptually, given our limitations." Even the most fundamentalist Christians recognize this point, at least in principle. And indeed, this is a point that has been driven home to me throughout my life. We use the language of Scripture, but we also recognize that we fall short of the intellectual capacity to understand it. Calvin makes this point repeatedly when he says that God "condescends to us," speaking to us like a wet-nurse speaks to a child, with concepts like the Trinity, salvation, and so on. 

What does this mean philosophically? I'm not entirely sure, except that perhaps Christianity ought not to be so hostile to pragmatists, or at least to their notion of truth. Perhaps there is something we can learn from Rorty and other pragmatists, who insist on the "useful" as the basic category of thinking. 

Thursday, September 25, 2008

Ad venalicium: on the "free" in free trade

Arthur Hugh Clough once said that "thou shalt not covet - but tradition approves all forms of competition." An apt saying in a strange time, no doubt. And in this strange time I have been trying to reformulate my own view of things with big capital letters, like the "Economy," and "Free Trade," and all such other concepts. I do believe my views have shifted a lot since my youth, and I figured I might set them out clearly here - at the very least as an exercise for myself. 

Growing up I listened to Rush Limbaugh with my parents. I remember driving in the summer, from noon to three, hearing him denounce Bill Clinton and trumpet his view of economics. His view is rather simple and elegant: enable "free enterprise" and the rising water will float everyone's boat. This is "supply-side" economics (the "trickle-down" stuff is not really an economic position), i.e. if you bolster the producers and employers, you will bolster everyone. This does have prima facie plausibility, if you think about the reasons a business might expand. There are two basic reasons a business expands: the cost of production goes down, or there is an increase in demand. The main reasons for a decrease in the cost of production would be paying employees less, figuring out better techniques of production, or the lessening of other "exogenous" factors (exogenous just means those factors that do not have to do with the market per se). The worst exogenous factor, according to Limbaugh and most supply-siders, is government intervention - in the form of taxes and regulation. And so the argument goes: instead of decreasing production costs by paying employees less (although this too is argued - against the minimum wage, for example), one should get the government off the backs of the employers. 

At the heart of this view is the basic neoclassical economic view that the market is the most efficient instrument for the allocation of scarce resources. The reason is the principle of marginal utility, or marginality. This principle basically states, all things being equal, that (from the demand side) as an individual consumes more and more of something, it becomes less desirable (or, as economists say, its "utility" is lessened). Thus, after the first can of caviar that I have, there will be decreasing returns on my enjoyment of additional cans. This principle makes sense: the more you have of something, the more "normal" it becomes, the less of a treat it is, the less desirable in itself it becomes. 
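
To make the principle concrete, here is a minimal sketch with made-up numbers (the utility function is purely illustrative - my own assumption, not any economist's actual model):

```python
# A toy model of diminishing marginal utility (illustrative numbers only).
# Each additional can of caviar adds less enjoyment than the one before.

def marginal_utility(n: int) -> float:
    """Hypothetical utility gained from the n-th can of caviar."""
    return 10.0 / n  # the first can is worth 10, the second 5, the third ~3.3, ...

total = 0.0
for can in range(1, 6):
    gain = marginal_utility(can)
    total += gain
    print(f"can {can}: extra utility {gain:.1f}, total utility {total:.1f}")
```

Total utility still rises, but each step up is smaller - which is exactly why the next can is worth less to me than the last one was. 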

From the supply side, the principle of marginality is similar. The idea is that as a producer expands production to meet the increasing demand, each additional unit costs more to produce than the ones before. Consider the caviar. As I eat more and more caviar, and the caviar fishermen expand their operation, it will take them more and more resources and energy to get enough caviar to fill my demand. At the same time, the price will go down, since the supply goes up, and my desire goes down. 

The outcome of these two movements is what economists call a "competitive equilibrium." It means that demand has come to equal supply. Alfred Marshall was perhaps the most famous, and one of the earliest, economists to talk about this. Now, if one thinks about this, it's great. It means that, left on its own, the market does everything we need. It allocates things based on what individuals want, and all the productivity in an economy is focused accordingly. It also means that there is no way to make one player better off without making another worse off (what economists call a "Pareto optimum"). It is a world where the market - not the government or some other thing - allocates all the resources out there in the most efficient manner possible. 
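
As a toy illustration of how these two movements meet (the linear curves and numbers below are my own invented example, not real data or any particular economist's model), one can watch supply rise to meet falling demand:

```python
# A minimal sketch of a competitive equilibrium, assuming toy linear curves.
# Demand slopes down (diminishing marginal utility); supply slopes up
# (rising marginal cost). The market "clears" where the two cross.

def demand(price: float) -> float:
    """Hypothetical quantity consumers want at a given price."""
    return max(0.0, 100.0 - 2.0 * price)

def supply(price: float) -> float:
    """Hypothetical quantity producers offer at a given price."""
    return 3.0 * price

# Raise the price until the excess demand disappears.
price = 0.0
while demand(price) > supply(price):
    price += 0.01

print(f"equilibrium price ~ {price:.2f}, quantity ~ {supply(price):.1f}")
# With these toy curves: 100 - 2p = 3p, so p = 20 and the quantity is 60.
```

Everything that follows in this post is about why the "all things being equal" lurking behind curves like these never actually holds. 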

There is only one slight problem, a small caveat that makes a big difference. This is the caveat "all things being equal" (ceteris paribus). What does this mean? It means that the very concept of a market is an abstraction, a utopia, a pie-in-the-sky world where no one really lives. The "market" correcting itself is a fiction - a pious fiction, perhaps, but a fiction nonetheless. Let me explain why. 

The major reason is that this world is filled, not with pure competition, but with "oligopolies." This is a nice economic term for an asymmetrical allocation of productivity among producers. Monopolies and "duopolies" are examples of this, and of course most free-marketers would say that you need really strong anti-trust laws to make sure these don't occur (almost the only place they say the government should intervene). However, in our economy, and in the international economy, it is clear that oligopolies are in fact the norm, not the exception. Even though we heap praise on small business as the backbone of the economy, there could never really be a market equilibrium, for the very real reason that there is no perfect competition, and there never will be. The reasons are simple: history, geography, and politics. 

Let's take an example: Microsoft. There is a concept economists now use called "path dependencies." This is a fancy way of saying that you only have so many options. Microsoft is not considered a monopoly, because it's not as if they make all the software or hardware. They just make the main operating system people have to use. Imagine you're a small business owner who depends on computers. You can either go with Mac - which has much less software and compatibility - or with Microsoft, which has the basic system most people use. Under perfect competition this would not be the case. You would have lots of choices, and you would pick the one that is the most efficient. Instead, because of a contingent thing like Apple's poor marketing in the 80's, you have only one real choice, and not even the most efficient one. This is a path dependency. 

The second major thing about oligopolies is that they involve asymmetries not merely in production and allocation, but in real political power. Political power is one of those things that neoclassical economists don't like to talk about (it cannot be quantified), but in the real world (not the world of ceteris paribus) political power, and power asymmetries in general, really affect the outcome of the market. This of course is seen with the recent Wall Street debacle, and with organizations such as the IMF and WTO, which continually reinforce the trade advantages of developed countries (for instance, by not demanding the dismantling of agricultural subsidies). In fact, even with regional trade agreements that are supposed to be "free," like NAFTA, there is no free trade. Barriers are lessened to a degree (and disproportionately so for less-developed countries), but the fact remains that for all the free-trade talk, there is very little free trade at all. 

Which brings me to the title of this post. I'm not a socialist. In theory, free markets are great. What I take issue with is the notion that any market really is free, or really can be. Supply-siders might retort, "it's because of government!" But it is a fundamental misrecognition of power to think that somehow those who hold it (such as the powerful players in oligopolistic markets) will just give it up for an ideal that has never been seen in the history of the world (the ideal of market equilibrium). 

Then again, maybe I'm wrong, and we should say with Nietzsche that "the lie is a condition of life." Or maybe we should buck up and admit that human decision-making has more to do with the allocation of resources than we acknowledge, and then figure out what good judgment entails. Just an idea. 

Wednesday, August 13, 2008

Look and See the resemblances - Reading Wittgenstein

§§31-80. These sections of the Philosophical Investigations introduce two notions important to Wittgenstein's view of the entire philosophical enterprise. The first is his notion that if you want to understand something you must look and see (§66). The second is that of "family resemblances" (§67). 

Look and see: Why does Wittgenstein discuss philosophy as a task of 'look and see'? In his discussion of "ostensive definition," which carries over from the first 30 sections, Wittgenstein notices that there is often a serious problem with the word "this." If one considers language as a collection of "names" that point to "objects" (logical atomism, as it were), the word 'this' - the "most" ostensive word you can think of - starts to seem like "the only genuine name." But of course how could it be? There is no one definite object that "this" points to, and hence it is always in need of a supplemental definition. 

Wittgenstein thinks that in this entire discussion there is a certain "subliming" of our language. He says, about "this" being the only genuine name, that "this queer conception springs from a tendency to sublime [sublimieren] the logic of our language" (§38). This verb would be better translated "sublimate," as in the chemical process of a solid turning into a gas - not the "sublime" in the 19th-century sense, i.e. that thing - like an abyss - that reaches the limits of our language and extends beyond them. The sublime in that sense is a bit too dramatic. Instead, Wittgenstein is trying to point out that whenever philosophers have a hard time fitting a particular thing (like the word "this") into a certain way of thinking (such as "ostensive definition") - when language "goes on holiday," as he says - they resort to "subliming" this language, i.e. turning something concrete and ordinary into a much more serious affair.

It is in this context that W. launches a more detailed discussion of "naming," and repeats in §49 what he said earlier: that "naming is so far not a move in the language-game" (see my last post). It culminates in a discussion of brooms. The question is whether, when I say that my broom is in the corner, it is a more "fundamental" analysis to say that in the corner there is a broomstick with a brush fitted on to it. In other words, does analysis give us something better than just plain old "broom"? And if I said, "bring me the broomstick and the brush which is fitted on to it," wouldn't the answer be, "Do you want the broom? Why do you put it so oddly?" (§60)? The point here is that what matters is the use of language in the language game, and so no, the analysis of the broom into stick and brush is not better at all; it's just a different language game. 

And then Wittgenstein anticipates an objection - and here we get what he means with the phrase 'look and see.' The objection might be 'but what is the essence of a language-game?' "You take the easy way out" one might say, "you talk about all sorts of language-games, but have nowhere said what the essence of a language-game, and hence of language, is" (§65). W.'s rejoinder is that phenomena are related to each other in many different ways. If we're interested in an "essence", or what is at least in "common," we have to "look and see whether there is anything in common to them all" (§66). This is essential to how W. sees language, because as his next analysis of the concept of "game" shows, there are many similarities between football and handball (and other such games), but many differences as well. 

Family Resemblances. But if one must 'look and see' to understand a concept such as "game" and the relationships between the many different types of games, what is one looking at? W. says here that he "can think of no better expression to characterize these similarities than 'family resemblances'" (§67). E.g., if we think about the concept of number, we get things like cardinal numbers, rational numbers, etc., and they all have similarities to each other, and also differences. Does that mean there is a single "essence?" No. Instead, W. uses the metaphor of a thread: "we extend our concept of number as in spinning a thread we twist fiber on fiber. And the strength of the thread does not reside in the fact that some one fiber runs through its whole length, but in the overlapping of many fibers" (§67). 
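
To make the "thread" image concrete, here is a small toy sketch (my own construction, not Wittgenstein's - the games and the features I assign them are just illustrative assumptions): the games overlap with one another in criss-crossing ways, yet no single feature runs through them all.

```python
# A toy illustration of "family resemblances" among games.
# The similarities overlap and criss-cross, but no one "fiber"
# runs the whole length of the thread.

from itertools import combinations

games = {
    "chess":               {"board", "competition", "skill", "winning and losing"},
    "patience":            {"cards", "skill", "luck", "winning and losing"},
    "snakes and ladders":  {"board", "luck", "amusement", "winning and losing"},
    "ring-a-ring-a-roses": {"amusement", "played together"},
}

# Overlapping similarities between pairs of games.
for a, b in combinations(games, 2):
    shared = games[a] & games[b]
    print(f"{a} / {b}: {shared or 'nothing in common'}")

# ...but no single feature - no "essence" - is common to all of them.
print("common to every game:", set.intersection(*games.values()) or "nothing")
```

The concept "game" holds together through these overlaps, not through some one feature that every game shares. 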

But what does this mean for our conception of things like games? It means that, in a certain measure, there is no boundary to concepts like this. There is as much difference between basketball and solitaire as there is similarity, but we would be hard pressed to say that the one is a game and the other isn't. Now, this does not mean you cannot "draw" a boundary, but it does mean we don't need a boundary in order to use the concept (§68). 

This notion of not having sharp boundaries to our concepts certainly does not feed into the physics envy of philosophers (although I'm not sure how many philosophers still feel this - according to Rorty, it's more the scientists who now have "philosopher-envy"), because it means that one's concepts are not, in themselves, all that exact. But for W., it does not matter, because the point of "defining" something is secondary to what we are doing with it. In other words, if one is to point to a certain conceptual space, the issue is not how clearly it is demarcated, but rather how this space is employed (§71). When we look at what is common in things we are trying to show how this conceptual space is used in similar ways, not how this conceptual space "is" ontologically. 

Now, the philosopher in me is quite uncomfortable with W.'s project. In a certain sense, I would not want to give up all ontological claims. What about claims to justice and a vision of a new world? Would these be accommodated if the point were simply to "look and see"? The notion of an ideal can be quite critical of the present age, and hence progressive, while focusing all one's attention on the thing in front of you can be quite the opposite. Of course, it is still too early in my reading to really know how W.'s project might affect this, and so I guess I will look and see. 
 

Friday, August 08, 2008

Labeling and Classification - Reading Wittgenstein

[Preface: I've decided to tackle the Philosophical Investigations again. The first time I tried, I got 86 pages in, and then stopped (not sure why). But after reading Robert Brandom's Articulating Reasons: An Introduction to Inferentialism, I thought it best to look back to W. But I must warn my readers that I may be coming to this text with specifically "inferentialist" concerns, which may not be fair to Wittgenstein. So, with this in mind, I offer some thoughts on the text, my reading of it (to the best of my ability).]

§§1-30: Labeling and Classification.  
I begin quoting §13: "When we say: 'Every word in language signifies something' we have so far said nothing whatever; unless we have explained exactly what distinction we wish to make." 

From the very beginning of the book, W. is trying to get at how mistaken it is to think of language as a set of labels we attach to things - a correlation of sounds and objects - which we then combine (or "names," as §1 has it) into larger and larger units. A simple correlation, e.g. saying that "naming something is like attaching a label to a thing" (§15), really does nothing for us. And why? Because merely to name something is not yet to make a move in what W. calls a "language game" (§22). 

So what is a language game? W. says that "I shall also call the whole, consisting of language and the actions into which it is woven, a 'language game.'" (§7). Hence in these first thirty sections we hear this word "training" [Abrichten/Unterricht] quite a bit, because when we say "language" we're not merely talking about labeling or naming, but doing something. And so when you teach someone a particular word, you train them in the use of the word, and this "use" is set within an entire complex of various actions and words. And so when you tell someone to do something with a 'rod and lever,' "given the whole rest of the mechanism" (§6), they can do something. 

But if this holism is the case, then clearly when we train people in language, we have definite limits and bounds to a certain language game. So, if I want to teach a child how to cook, the notion of "measurement" takes on a very specific use that may or may not be the same as if I were teaching them how to do scientific experiments. Grouping words together into certain "kinds" (W.'s examples are words like "slab" and numerals) is thus essential to any endeavor we undertake. From this it follows that "how we group words into kinds will depend on the aim of classification, - and on our own inclination" (§17). 

And so we have a major distinction here between labeling and classification. When we analyze a word we are not analyzing how this word "names" something; we are not asking how we first "entertain" a notion. Instead, we are asking about "the part which uttering these words plays in the language-game" (§21) - or rather, how this word is used - and on the basis of this use we come to something like an understanding through analysis.

Now, this brings up the last issue I'll deal with, and that is this word Lebensform, "form of life." In §23 W. brings up the fact that his use of language game is meant to emphasize how when we speak of language, we are talking about a certain activity, and there are lots of different types of activities (giving orders, reporting an event, play-acting, praying, etc). A form of life, in W.'s terms, seems to be just the things we do, and language is similar to the tools in a tool-box, with many different uses, depending on the objective of what is being done. "Classification" comes into play as we engage in these activities. 

This seems all too clear to me, and I'll give an example. I've met a few people who are not from the U.S. who complain about this thing Americans do - when we see someone we say "how's it going," or "how are you?" It is in the form of a question, but as my friends have complained, we don't really want to know how people are doing. In actual fact, this phrase is really just a greeting; for whatever reason, we've developed it as just something you say, in a declarative way. But that doesn't preclude the sentence from ever being used as a genuine question. We can certainly imagine times when the "proper" thing is not done and someone genuinely answers the question - much to our surprise, most likely.

And so Wittgenstein is right - "one has already to know (or be able to do) something in order to be capable of asking a thing's name" (§30). In order to label one must have first classified, and this classification occurs according to the language game (language plus action) it is a part of.