Saturday, October 04, 2008

Of Ambiguity

Plato exiled the poets from his ideal city in the Republic. The problem, Socrates says in Book III and again in Book X, is that they are imitators, and what they imitate is often the worst in humans. Moreover, since poetry does not traffic in knowledge, the poets do not know what they are doing (so the argument goes in the Ion). The truth is not the point of poetry.

Philosophers do not like ambiguity. In fact, some even think the basic cause of our philosophizing is ambiguity: the need to make things clear, to enlighten us about a certain conceptual usage, to make explicit a certain implied consequence of a commitment (of course philosophical prose might belie this intention - Kant no doubt ignored his editor). Ambiguity is the cause of a whole host of problems, one might say. Conceptual fuzziness does not help one to get along. All the things we wish to do with thinking are subverted by it, like setting up parameters for scientific inquiry (which ultimately helps us to control nature better), or setting up rules for political discourse, or trying to explain religious belief, etc. Ambiguity makes all of this more, not less, difficult.

I recently watched director Michael Haneke's first three films, which apparently form a trilogy: The Seventh Continent, Benny's Video, and 71 Fragments of a Chronology of Chance. They all deal with similar themes: death (a family suicide in The Seventh Continent, a teen murder in Benny's Video, and a triple homicide/suicide in 71 Fragments), distances between people, and material objects (in all three films many of the scenes are dominated not by faces, as is normal, but by hands, feet, and things).

Haneke, in addition, is a master of teaching us how to look, how to notice things we don't normally notice in an image. He is a master of the precise image, the image that tells us everything we need to know, and hints at even more. As he says in one of the interviews included with the DVDs, his intention is to make the scene just long enough that we really see what's happening, without assuming we immediately know. E.g., in 71 Fragments there is a scene of a character practicing ping-pong very seriously. It's a nine-minute shot, and he's doing one thing: hitting ping-pong balls. Haneke imagines an audience doing this: seeing the shot and saying, 'I know what that's about'; then waiting for the scene to end, and realizing it won't, getting bored; then getting a bit upset, because it hasn't ended; and after being upset and bored, realizing that there is something to look at, and starting to actually see what's there. I think I had this exact experience when I watched it.

Another very important aspect of these films is the lack of explanation. We never know why the family commits suicide in The Seventh Continent, why Benny kills the girl, or why Maximilian B. opens fire at the bank. These are all completely unexplained, and Haneke wants it that way. As opposed to famous film explanations - I'm thinking of Hitchcock's Psycho, with that deflating scene at the end trying to explain Norman Bates, or Don Siegel's 1956 version of Invasion of the Body Snatchers, with an explanation and wrap-up at the end - Haneke never lets anything be explained. The ambiguity of the motives is an essential aspect of these films, one that really allows the viewer to feel emotion. One of the problems with films like Psycho is that the emotion conjured up by that last view of Bates, talking in a voice like his mother's, completely evaporates in the very next scene with its explanation. Explanation guides you, directs you, forces you into one particular view of the matter at hand, so that any emotion you may have felt is immediately lost.

Which is exactly what Plato wanted philosophy to do. The problem with poetry is just that: it guides you to whatever particular emotion you happen to have, not to the correct emotion, the emotion that leads to the truth, to the correct behavior, etc.

Contrast this to what Haneke does. He juxtaposes exactness of images with ambiguity of narration. Sensations, of course, are the most particular things we can have, and so instead of guiding our thought, he lays out the possibility of reactions through sensations - which of course is the point of a film image as opposed to a theater production (although, as Jean Renoir has said, much film is actually theater). The ambiguity of the narration then takes up where these sensations leave off, and leaves us viewers as participants in the production, in the story itself. Why does the family commit suicide? We have no idea, but as they are flushing money down the toilet, as they are sitting watching T.V., slowly dying, we are barraged with questions from our own selves: how could they do this? How much my life has been like theirs! What would drive a father and mother to help their eight-year-old daughter kill herself? What does this say about our society (it was a true story)? These questions are productive; they produce reflection, instead of answers. There are no answers.

And perhaps that's the most important aspect of philosophy - to question. Ambiguity is not something to be afraid of. It's something to stimulate you, to move you, to make you wonder. Why? Because it asks you to think. That's the point, right?

Tuesday, September 30, 2008

The Truth of an Illusion

Yesterday, in my introduction to philosophy classes, I taught my students about the three big theories of truth: the correspondence theory, the pragmatic theory, and the coherence theory. These three theories represent the basic positions on what truth is, although hermeneutic theorists like Heidegger and Gadamer certainly offer their own visions, albeit in not so formal a manner. In any case, I was listening to Terry Gross this afternoon, and she had on Bill Maher and Larry Charles, the star and director of "Religulous." One of the things Terry Gross asked was whether religion, even if one admits that it is a bunch of stories, can still be useful. Maher thought no, for ethical reasons (because of all the bad things that have been justified using religion), while Larry Charles said no, for theoretical reasons: the stories are not true, so one should not believe them.

Of course this brings up a question for me: what is Larry Charles' view of truth? I would hazard a guess that it's the correspondence theory. Like most of the so-called "new atheists" (Hitchens, Dawkins, Harris), atheists in this mold probably think of truth as the correspondence of our ideas to reality. If they do correspond, then everything is good (enter modern science); if they don't, everything is bad (enter religion). This basic view entails a whole bunch of theories, like the theory of representation: to "know" something is to represent it faithfully, to re-describe "reality" in language. So when a biologist analyzes something into the language of evolutionary biology, they are representing reality faithfully in a different language - one that hopefully helps us to understand the world a bit better.

There is only one small problem with the correspondence theory of truth, a problem that, when it is pointed out, makes correspondence theorists merely shout louder: they beg the question of what "reality" is. In other words, if you define "true" by reality (in this theory, it is reality that makes a proposition or view true), you have not answered the question of what reality is like. If you try to answer that question, you immediately have to come back and say that this definition of reality is the true one, and not that. But if you use the conclusion as a premise, and then the premise as the conclusion, you're merely begging the question.

In the case of Larry Charles' views, this would mean that however he defines "reality" (which probably excludes certain types of "supernatural" phenomena - an "immanent frame," as Charles Taylor puts it), he is merely assuming it is true, and then defining what things count as true based on this assumption. Why should truth be based on that particular assumption rather than another? If it's not argued for and defended, we'll never know. And to me, that is one of the major drawbacks of the correspondence theory of truth. Most of these theorists do not argue for their picture of reality, because they want "reality" to be some inert thing that is absolutely untouched by anything human. It is just "there" and there's nothing you can do about it, is the attitude.

One of my basic problems with this attitude is that if you do not argue for your view of reality (and you can't with the correspondence theory - it would be arguing in a vicious, not a virtuous, circle), then you're apt not only to be a totalizer, but to lump together all things that just "seem" similar. Why? Because of a habit of thought. If you don't argue for reality, if you don't think about your own assumptions toward it, how well would you be able to reach behind yourself and see how your own history, social location, economic standing, etc., help to color how you think of other things? If you are not used to doing this, why would you be able to be hermeneutically sensitive in your definitions of anything?

Take religion. The very idea that there is one thing called "religion" is ludicrous. First, the history of the term is interesting. It developed in the seventeenth century precisely to describe a certain kind of war, and so trying to decide what is religious or not is from the outset put in terms of Protestant and Catholic disputes (also, since this is the case, for most of history no one thought of themselves as "religious"). But how can you do that? Religions are so different as to be unrecognizable, and even within religions the diversity is so great that I probably have more in common with atheists than I do with a lot of Christians. The standard of Protestant Christianity becomes the standard for all "religion" (a standard which is itself a misrepresentation of Protestant Christianity). Yet someone like Larry Charles thinks that because his assumption about reality does not include this so-called "religion," this "religion" has no place in life.

To me, this is just a case of "the superstition of science scoff[ing] at the superstition of faith," to quote James Anthony Froude (himself a famous apostate and Carlyle biographer). To have a view of reality without argument is just as egregious as believing in so-called "myth." In either case, one is standing on a foundation that ultimately does not have recourse to reasons and argumentation. This very well may be the human condition, but in my view, we ought just to be up front about it, and change a vicious circle into a virtuous one, going beyond any such "correspondence" theory of truth.

And if Maher and Charles want to know what a Christian like me thinks about "religion," then I offer this quote from Kierkegaard: "to stand on one leg and prove God's existence is a very different thing from going down on one's knees and thanking him."

Monday, September 29, 2008

Absolute and Relative - Truth, Morality, Anything...

While questions of "absolute right and wrong" are not as pressing these days as they were, say, in the 90's (the heyday of groups like Focus on the Family and other "moral majority" groups), thinking about "absolutes" is an interesting and fruitful exercise. I recently came across one of Richard Rorty's arguments on this point. And like typical Rorty, it is grand, dismissive, and extraordinarily interesting.

Rorty begins by mentioning that many people attack him on this particular point: that he denies there is any concept that we call "truth," and is thereby a relativist. What he denies, instead, is the dichotomy "reality-appearance", and the attendant correspondence theory of truth that this dichotomy implies. His detractors think that any theory of truth besides the correspondence theory leads us on the path toward relativism (especially the pragmatic theory, like Rorty's). 

But that doesn't mean he doesn't believe in truth, or so he maintains. Truth is surely an absolute notion. He gives two examples: we don't say "true for me but not for you," or "true then, but not now." Clearly, the geocentric view of the solar system is untrue, and never was true, absolutely and with no preconditions. But, he then says, "justified for me but not for you" is a common locution, and one most of us are quite happy to go along with (although not for everything). And the thing is, "justification" is the application of truth - or at least it goes along with it quite strongly, as William James points out. In fact, Rorty argues, justification does indeed seem to be something that always goes along with any claim to truth. From this Rorty draws a conclusion I've often myself thought about. 

His conclusion is this: granted that truth is an absolute notion, the application of this concept is always relative to the situation we are in. The criterion for applying the concept "true" is relative to where we are, our limitations, and our expectations. At the same time, the nature of truth is certainly absolute. But if this is the case, what is the point of a theory of the nature of truth? Rorty sees none. If we only encounter truth in application to relative situations, what is the point of specifying "absolute" truth? We never encounter it, so even if we saw it we wouldn't know that we were looking at it.

I have to be honest here. I should have written this like eight years ago, when I first encountered James Dobson's "Right versus Wrong" campaign. At the time I thought, "sure, in general we have a good idea of right and wrong. But absolute right and wrong? How could we ever know that? We are never in situations that are clear enough, never in situations that present themselves to us so cleanly. Who has ever had to make a decision a.) with full information, and b.) with full moral certitude? How would we even get this certitude, since by definition 'absolute' means something unconditioned by how we think about it? But what could that be?"

I believe this line of thinking comes from a specific source for me, and on this Rorty agrees. Orthodox monotheists (Jews, Christians, and Muslims) actually view God in this way. They say, "God has indeed been revealed to us - but we can never fully capture God conceptually, given our limitations." Even the most fundamentalist Christians recognize this point, at least in principle. And indeed, this is a point that has been driven home to me throughout my life. We use the language of Scripture, but we also recognize that our intellectual capacity falls short of understanding it. Calvin makes this point repeatedly when he says that God "condescends to us," speaking to us in the language of a wet-nurse, with concepts like the Trinity, salvation, etc.

What does this mean philosophically? I'm not entirely sure, except that perhaps Christianity ought not to be so hostile to pragmatists, or at least to their notion of truth. Perhaps there is something we can learn from Rorty and other pragmatists, who insist on the "useful" as the basic category of thinking.

Thursday, September 25, 2008

Ad venalicium: on the "free" in free trade

Arthur Hugh Clough once said that "thou shalt not covet - but tradition approves all forms of competition." An apt saying in a strange time, no doubt. And in this strange time I have been trying to reformulate my own view of things with big capital letters, like the "Economy," and "Free Trade," and all such other concepts. I do believe my views have shifted a lot since my youth, and I figured I might set them out clearly here - at the very least as an exercise for myself.

Growing up I listened to Rush Limbaugh with my parents. I remember driving in the summer, from noon to three, hearing him denounce Bill Clinton and trumpet his view of economics. His view is rather simple and elegant: enable "free enterprise," and the rising of the water will float everyone's boat. This is "supply-side" economics (the "trickle-down" stuff is not really an economic position), i.e., if you bolster the producers and employers, you will bolster everyone. This does have prima facie plausibility, if you think about the reasons a business might expand. There are two basic reasons a business expands: the cost of production goes down, or there is an increase in demand. The main reasons for a decrease in the cost of production would be paying employees less, figuring out better techniques of production, or the lessening of other "exogenous" factors (exogenous just means those factors that do not have to do with the market per se). The worst exogenous factor, according to Limbaugh and most supply-siders, is government intervention - in the form of taxes and regulation. And so the argument goes: instead of decreasing production costs by paying employees less (although this too is argued - against the minimum wage, for example), one should get the government off the backs of the employers.

At the heart of this view is the basic neoclassical economic view that the market is the most efficient instrument for the allocation of scarce resources. The reason for this is the principle of marginal utility, or marginality. This principle basically states, all things being equal, that (from the demand side) as an individual consumes more and more of something, it becomes less desirable (or, as economists say, its "utility" is lessened). Thus, after the first can of caviar that I have, there will be decreasing returns on my enjoyment of additional cans. This principle makes sense: the more you have of something, the more "normal" it becomes, the less of a treat it is, the less desirable for itself it becomes.
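To see the shape of this in miniature, here is a small sketch in Python (the logarithmic utility curve and all the numbers are my own illustrative assumptions, not anything the economists are committed to). Each additional can of caviar adds less utility than the one before:

import math

def total_utility(cans):
    # Total enjoyment from `cans` cans of caviar; log(1 + n) is a
    # conventional stand-in for a curve with diminishing returns
    # (an illustrative assumption, not an empirical claim).
    return math.log(1 + cans)

# Marginal utility: the extra enjoyment the n-th can adds.
for n in range(1, 6):
    marginal = total_utility(n) - total_utility(n - 1)
    print(f"can {n}: marginal utility = {marginal:.3f}")

Running this prints a shrinking number for each successive can - which is all the principle claims on the demand side.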

From the supply side, the principle of marginality is similar. The idea is that as a producer expands production to meet increasing demand, each additional unit costs more to produce than the ones before. Consider the caviar. As I eat more and more caviar, and the caviar fishermen expand their operation, it will take them more and more resources and energy to get enough caviar to fill my demand. At the same time, the price will go down, since the supply goes up and my desire goes down.

The outcome of these two movements is what economists call a "competitive equilibrium." It means that demand has equaled supply. Alfred Marshall was perhaps the most famous, and one of the earliest, economists to talk about this. Now, if one thinks about this, it's great. It means that, left on its own, the market does everything we need. It allocates things based on what individuals want, and all the productivity in an economy is focused. It also means that there is no way to increase one player's welfare without decreasing another's (what economists call a "Pareto optimum"). It is a world where the market - not the government or some other thing - allocates all the resources out there in the most efficient manner possible.
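To make the equilibrium idea concrete, here is a toy sketch in Python (the linear curves and every number are my own assumptions, chosen purely for simplicity). Quantity demanded falls as price rises, quantity supplied rises with price, and the market "clears" at the price where the two meet:

# Toy linear market (all numbers are illustrative assumptions).
def quantity_demanded(price):
    # Demand falls with price (diminishing marginal utility).
    return 100 - 2 * price

def quantity_supplied(price):
    # Supply rises with price (rising marginal cost).
    return 10 + 4 * price

# Equilibrium: 100 - 2p = 10 + 4p  =>  6p = 90  =>  p = 15, q = 70.
price = 90 / 6
assert quantity_demanded(price) == quantity_supplied(price)
print(f"equilibrium: price = {price}, quantity = {quantity_demanded(price)}")

At that single clearing point nothing is wasted - which is exactly the picture the next paragraph calls into question.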

There is only one slight problem, a small caveat that makes a big difference. This is the caveat "all things being equal" (ceteris paribus). What does this mean? It means that the very concept of a market is an abstraction, a utopia, a pie-in-the-sky world where no one really lives. The "market" correcting itself is a fiction - a pious fiction, perhaps, but a fiction nonetheless. Let me explain why.

The major reason is that this world is filled not with pure competition, but with "oligopolies." This is a nice economic term for an asymmetrical allocation of productivity between producers. Monopolies and "duopolies" are examples of this, and of course most free-marketers would say that you need really strong anti-trust laws to make sure these don't occur (almost the only place they say the government should intervene). However, in our economy, and the international economy, it is clear that oligopolies are in fact the norm, not the exception. Even though we heap praise on small business as the backbone of the economy, there could never really be market equilibrium, because of the very real problem that there is no perfect competition, and there never will be. The reasons are simple: history, geography, and politics.

Let's take an example: Microsoft. There is a concept economists now use called "path dependencies." This is a fancy way to say that you only have so many options. Microsoft is not considered a monopoly, because it's not as if they make all the software or hardware. They just make the main operating system people have to use. Imagine you're a small business owner who is dependent on computers. You can either go with Mac - which has much less software and compatibility - or with Microsoft, which has the basic system most people use. In perfect competition this would not be the case. You would have lots of choices, and you would pick the one that is the most efficient. Instead, because of a contingent thing like the poor marketing of Apple in the 80's, you only have one choice, and not even the most efficient one. This is a path dependency.

The second major thing about oligopolies is that they are asymmetries not merely in production and allocation, but in real political power. Political power is one of those things that neoclassical economists don't like to talk about (it cannot be quantified), but in the real world (not the world of ceteris paribus), political power, and power asymmetries in general, really affect the outcome of the market. This of course is seen in the recent Wall Street debacle, and in organizations such as the IMF and WTO, which continually reinforce the trade advantages of developed countries (by, for example, not demanding the dismantling of agricultural subsidies). In fact, even with regional trade agreements that are supposed to be "free," like NAFTA, there is no free trade. Barriers are lessened to a degree (and disproportionately for less-developed countries), but the fact remains that for all the free-trade talk, there is very little free trade at all.

Which brings me to the title of this post. I'm not a socialist. In theory, free markets are great. What I take issue with is the notion that any market really is free, or really can be. Supply-siders might retort, 'it's because of government!' But it is a fundamental misrecognition of power to think that somehow those in power (such as the powerful in oligopolistic markets) will just give it up for an ideal that has never been seen in the history of the world (the ideal of market equilibrium).

Then again, maybe I'm wrong, and we should say with Nietzsche that "the lie is a condition of life." Or maybe we should buck up and say that humans making decisions have more to do with the allocation of resources than we acknowledge, and then we should figure out what good judgment entails. Just an idea.

Wednesday, August 13, 2008

Look and See the resemblances - Reading Wittgenstein

§§31-80. These sections of the Philosophical Investigations introduce two important conceptions of Wittgenstein's view of the entire philosophical enterprise. The first is his notion that if you want to understand something you must look and see (§66). The second is that of "family resemblances" (§67). 

Look and see: Why does Wittgenstein discuss philosophy as a task of 'look and see'? In his discussion of "ostensive definition" that carries over from the first 30 sections, Wittgenstein notices that often there is a serious problem with the word "this." If one considers language as a collection of "names" that point to an "object" (logical atomism, as it were), the word 'this' - which is the "most" ostensive word you can think of - starts to seem like "the only genuine name." But of course how could it be? There is no one definite object "this" points to, and hence it is always in need of a supplemental definition. 

Wittgenstein thinks that in this entire discussion there is a certain "subliming" of our language. He says, about "this" being the only genuine name, "this queer conception springs from a tendency to sublime [sublimieren] the logic of our language" (§38). This verb would be better translated "sublimate," as in the chemical process of a solid turning into a gas - not the "sublime" in the 19th-century sense, i.e., that thing - like an abyss - that reaches the limits of our language and extends beyond them. Sublime in that sense is a bit too dramatic. Instead, Wittgenstein is trying to point out that philosophers, whenever they have a hard time fitting a particular thing (like the word "this") into a certain way of thinking (such as "ostensive definition") - when language "goes on holiday," as he says - resort to "subliming" this language, i.e., making something concrete and ordinary into a much more serious affair.

It is in this context that W. launches a more detailed discussion of "naming," and in §49 repeats what he said earlier: that "naming is so far not a move in the language-game" (see my last post). It culminates in a discussion of brooms. The question is: if I say my broom is in the corner, is it a more "fundamental" analysis to say that in the corner is a broomstick with a brush fitted on it? In other words, does analysis give us something better than just plain old "broom"? And if I said, 'bring me the broomstick and the brush which is fitted on to it,' wouldn't the other person answer, "Do you want the broom? Why do you put it so oddly" (§60)? The point here is that what matters is the use of language in the language game, and so no, the analysis of the broom into stick and brush is not better at all; it's just a different language game.

And then Wittgenstein anticipates an objection - and here we get what he means by the phrase 'look and see.' The objection might be: 'but what is the essence of a language-game?' "You take the easy way out," one might say; "you talk about all sorts of language-games, but have nowhere said what the essence of a language-game, and hence of language, is" (§65). W.'s rejoinder is that phenomena are related to each other in many different ways. If we're interested in an "essence," or at least in what is "common," we have to "look and see whether there is anything in common to them all" (§66). This is essential to how W. sees language, because, as his next analysis of the concept of "game" shows, there are many similarities between football and handball (and other such games), but many differences as well.

Family Resemblances. But if one must 'look and see' to understand a concept such as "game" and the relationships between the many different types of games, what is one looking at? W. says here that he "can think of no better expression to characterize these similarities than 'family resemblances'" (§67). E.g., if we think about the concept of number, we get things like cardinal numbers, rational numbers, etc., and they all have similarities to each other, and also differences. Does that mean there is a single "essence?" No. Instead, W. uses the metaphor of a thread: "we extend our concept of number as in spinning a thread we twist fiber on fiber. And the strength of the thread does not reside in the fact that some one fiber runs through its whole length, but in the overlapping of many fibers" (§67). 

But what does this mean for our conception of things like games? It means that, in a certain measure, there is no boundary to concepts like this. There is as much difference between basketball and solitaire as there is similarity, but we would be hard pressed to say that the one is a game and the other isn't. Now, this does not mean you cannot "draw" a boundary, but it does mean we don't need a boundary in order to use the concept (§68).

This notion of not having sharp boundaries to our concepts certainly does not feed into the physics envy of philosophers (although I'm not sure how many philosophers still feel this - according to Rorty, it's more the scientists who now have "philosopher-envy"), because it means that one's concepts are not, in themselves, all that exact. But for W., it does not matter, because the point of "defining" something is secondary to what we are doing with it. In other words, if one is to point to a certain conceptual space, the issue is not how clearly it is demarcated, but rather how this space is employed (§71). When we look at what is common in things we are trying to show how this conceptual space is used in similar ways, not how this conceptual space "is" ontologically. 

Now, the philosopher in me is quite uncomfortable with W.'s project. In a certain sense, I would not want to give up all ontological claims. What about claims to justice and a vision of a new world? Would these be accommodated if the point were to "look and see"? The notion of an ideal can be quite critical of the present age, and hence progressive, while focusing all one's attention on the thing in front of you can be quite the opposite. Of course, it is still too early in my reading to really know how W.'s project might affect this, and so I guess I will look and see.
 

Friday, August 08, 2008

Labeling and Classification - Reading Wittgenstein

[Preface: I've decided to tackle the Philosophical Investigations again. The first time I tried, I got 86 pages in, and then stopped (not sure why). But after reading Robert Brandom's Articulating Reasons: An Introduction to Inferentialism, I thought it best to look back to W. But I must warn my readers that I may be coming to this text with specifically "inferentialist" concerns, which may not be fair to Wittgenstein. So, with this in mind, I offer some thoughts on the text, my reading of it (to the best of my ability).]

§§1-30: Labeling and Classification.  
I begin by quoting §13: "When we say: 'Every word in language signifies something' we have so far said nothing whatever; unless we have explained exactly what distinction we wish to make."

From the very beginning of the book, W. is trying to get at how mistaken it is to think of language as a collection of labels we attach to things, or as a correlation of sounds and objects, etc., which we then combine as labels (or names, as §1 has it) into larger and larger units. A simple correlation - e.g., saying that "naming something is like attaching a label to a thing" (§15) - really does nothing for us. And why? Because merely to name something is not yet to make a move in what W. calls a "language game" (§22).

So what is a language game? W. says: "I shall also call the whole, consisting of language and the actions into which it is woven, a 'language game'" (§7). Hence in these first thirty sections we hear the word "training" [Abrichten/Unterricht] quite a bit, because when we say "language" we're not merely talking about labeling or naming, but about doing something. And so when you teach someone a particular word, you train them in the use of the word, and this "use" is set within an entire complex of various actions and words. And so when you tell someone to do something with a 'rod and lever,' "given the whole rest of the mechanism" (§6), they can do something.

But if this holism is the case, then clearly when we train people in language, we have definite limits and bounds to a certain language game. So, if I want to teach a child how to cook, the notion of "measurement" takes on a very specific use that may or may not be the same as if I were to teach them how to do scientific experiments. Grouping words together into certain "kinds" (W.'s examples are words like "slab" and the numerals) is thus essential to any endeavor we undertake. From this it follows that "how we group words into kinds will depend on the aim of classification, - and on our own inclination" (§17).

And so we have a major distinction here between labeling and classification. When we analyze a word, we are not analyzing how this word "names" something; we are not asking how we first "entertain" a notion. Instead, we are asking about "the part which uttering these words plays in the language-game" (§21), or rather how this word is used, and based on this use we come to something like an understanding through analysis.

Now, this brings up the last issue I'll deal with, and that is the word Lebensform, "form of life." In §23 W. points out that his use of "language game" is meant to emphasize that when we speak of language, we are speaking of a certain activity, and there are lots of different types of activities (giving orders, reporting an event, play-acting, praying, etc.). A form of life, in W.'s terms, seems to be just the things we do, and language is similar to the tools in a tool-box, with many different uses, depending on the objective of what is being done. "Classification" comes into play as we engage in these activities.

This seems all too clear to me, and I'll give an example. I've met a few people who are not from the U.S. who complain about this thing Americans do - when we see someone we say "how's it going," or "how are you?" It is in the form of a question, but as my friends have complained, we don't really want to know how people are doing. In actual fact, this phrase is really just a greeting, and for whatever reason we've developed it as just something you say, in a declarative way. But that doesn't preclude this sentence from ever being used as a question. We can certainly imagine that there are times when the "proper" thing is not done and someone genuinely answers the question - much to our surprise, most likely.

And so Wittgenstein is right - "one has already to know (or be able to do) something in order to be capable of asking a thing's name" (§30). In order to label one must have first classified, and this classification occurs according to the language game (language plus action) it is a part of. 

Tuesday, August 05, 2008

Cognitive Scientist seeks real physicalist - must love dogs

One of my favorite lines in any pop song, by the band Of Montreal, is "it's like we weren't made for this world - although I wouldn't want to meet someone who was..." They're singing about a broken relationship, but in a certain way this is a basic attitude humans have toward their environment, toward their lives, toward a whole host of things. The traditional Christian notion that we are merely sojourners in this world takes this intuition and runs with it, as do most versions of radical politics (although the latter is concerned with reshaping the world so that it is made for us).

This is an idea lost in much discussion of cognitive science these days. Recently on the blog "The Immanent Frame," in a discussion of David Brooks's op-ed "The Neural Buddhists," Edward Slingerland ("Let's get clear about materialism") argues that the real issue cognitive science, and materialism in general, brings up is that it contradicts what he calls our "deeply seated" intuition of folk dualism. This dualism says that we are not our bodies or brains, that somehow we transcend the material world, and in so transcending this world we are "free" and "responsible" beings. Slingerland argues that one of the big problems with someone like Brooks is that Brooks thinks that saying brain state X corresponds to some Y proves that Y is real. Brooks then goes on to say that religion is most likely going to become more compelling because of this, although the Bible may be on its way out. But Slingerland points out the problem with this: saying that brain state X corresponds to Y does not prove anything. All it says is that there is a brain state X. Come on!

The real problem Slingerland highlights - along with, I would suspect, a bunch of cognitive scientists (including the likes of Steven Pinker) - is that correlating brain states to behaviors leads to the "my brain made me do it" argument. In other words, humans are no longer responsible for their actions, because humans are no longer free in a meaningful sense. Empirically, it seems, we are just material; and if we are, then the world causes our behavior just as much as it causes the behavior of mushrooms, dogs, and ocean waves (let's not even discuss that this is no "discovery" - Kant said the same thing qua natural law).

And so Slingerland ends with what he thinks will be the "real" challenge, given the "consensus" of the cognitive science community: "how to get our intuitive notions of free will and moral responsibility to peaceably coexist with a materialist conception of the person."

But is there a real problem here? Do notions of "freedom" and "responsibility" depend whatsoever on a theory of the human person, materialist or what have you? Do you need to say that there is a faculty called "will," and that only on this basis can you have responsibility? I say no. There is absolutely no reason we need any such theory, because our practice already entails holding each other responsible. We don't need such a conception of "person," because in living in any community we continually challenge each other on how we live up to that community's norms - which even Slingerland, who appeals to a "consensus" in the scientific community, realizes.

Now, if you are not convinced, if you think that correlating brain states with "undesirable behavior Y" (as Slingerland says) does make a difference, then we have a fundamental disagreement as to who or what we are answerable to. What I mean is that saying "free will" is an illusion because certain experiments show us that we justify our actions after the fact (such as the ones Benjamin Libet conducted - subjects told to push a button showed brain signals connected with pushing it about half a second before they were 'conscious' of deciding to do it), or because certain mental states occur when such-and-such environmental factors are in play, is basically saying that we are answerable to the things in the world. Slingerland calls this "objective and measurable." The things in the world are then normative vis-a-vis our mental states, and apparently our ethical states as well (as some cognitive scientists argue). This conception of "answerable" owes much to the traditional analogy of knowledge as perception: "knowing" something is very similar to "seeing" something. E.g., "I know that there is gravity" is similar to saying "I know my computer is white." Both can be verified - one by experiments, the other by observation.

But this analogy for knowledge is neither the only analogy, nor the most compelling analogy. A much better analogy - something that corresponds to the late 20th century "linguistic turn" - is the analogy for knowledge of "discursive practice." The hinge of this analogy is that knowledge is something that we can use as a premise or a conclusion in an argument (Brandom). In other words, the real importance of knowledge is not whether it matches some inert lifeless "fact" or "world," but how it plays out in games of inference, of commitments, and of judging. Ultimately this makes more sense, because instead of being answerable to an inert world, we become answerable to each other. This is more compelling, because this is more pressing, and indeed, this is a presupposition for any discussion on the topic. Even Slingerland, who talks of "undesirable behavior Y", presupposes these types of language games. Cognitive science can never show us what is undesirable, precisely because in taking "material" as what one is answerable to, all questions of 'desire' are moot. What would it mean to desire a cognitive process? It's just there, that's all.

Of course, this does not mean that somehow we have a mystical soul or spirit that is immaterial. There are not just two options here. One can eschew any theory of human nature, and do just fine. One can talk about narrative instead (which is what assuming a discursive analogy for knowledge would lead one to do). One can say 'I am such and such a person,' or 'we are such and such a society,' based on history, based on how we have been, and how we are, with each other. Then, based on these notions, we can propose new projects, we can appeal to old memories - projects and memories that we call "ideals," notions of how we ought to be, and how to get there - without the illusion that we have always "been" only one thing.

But this just means we acknowledge that a description of how we do things, such as the one cognitive science tries to give, is nothing we really want to reconcile ourselves to. We are always fighting against this. And if one day, with the help of cognitive science, we finally reconciled ourselves to the way the world is, I doubt anyone would want to live here.

Friday, August 01, 2008

Lessing's Ditch and Theology

The well-known problem of "Lessing's Ditch" has been bothering me lately. The basic idea of the problem is that there can be no "necessary" truths based on "contingent" events. In other words, the ditch is the gap one must jump from contingent happenings (i.e., historical revelation) to truth that is necessary (i.e., divine revelation). How can one assert the ultimate truth of something that is particular, that happens at one time and one time only? I really have no good answer to this. It has been particularly bothersome because, as someone who will be a pastor in a congregation in a few months, I have to figure out how to deal with very contingent historical material that is presented in a book that the community takes to be more than contingent.

One good example. In 1 Corinthians Paul admonishes women to keep their hair long, because that's what women do. Now, from all the historical research on this particular passage, it appears that in the ancient world a woman's fecundity was thought to be intimately connected with the length of her hair: the longer the hair, the more babies she could produce. In a world where the average length of life was normally less than 30 years, and in which famine and death were the norm, one can very well understand why Paul went along with this. It would be analogous to a pastor today saying, 'we know that smoking is really bad when you're pregnant, so don't do it.' That's good advice, but of course it's historical: it was not known 80 years ago.

So, what to do with this? One might dismiss this particular passage, and the justification would be quite good: this is an archaic belief that we now know is not the case. But then the claim that this is divine revelation of an ultimate kind starts to become murky. Drawing a line, of course, is not really that difficult. Theologians for centuries have had "rules for reading," i.e., rules for how to take problematic passages and interpret them in the "correct" theological way. I suppose a passage like this would need a rule for reading. Perhaps something like this: 'if the author is pointing to an untranslatable practice based on the medical knowledge of the time, then one ought not to take it too seriously.'

Of course, that does nothing at all for the problem of Lessing's ditch. But what it does do is point out something essential about reading texts like this. Let's say it points out the way interpretation works, i.e., the "hermeneutical circle." This "circle," especially as articulated by Heidegger (and perhaps as thought through in Wittgenstein's notion of a form of life), is essential to any interpretation of such complex passages. The basic notion is that there is what Heidegger called a "fore-structure" to all understanding. When we try to understand something, it's not just a "subject" (us) and an "object" (a particular text), where the subject - without any presuppositions attached - just "reads" the object in a flat, one-way manner. Instead, it's a circle: we have presuppositions that are necessary for us to be able to read at all, and we bring these presuppositions to particular texts. These presuppositions include our language, our cultural history, our life history, etc. They are absolutely essential aspects of reading anything, and hence cannot be thrown away. At the same time, the more we attend to the "matter itself" (as Heidegger puts it), the more we find that the text throws us back on ourselves, and forces us sometimes to adjust our interpretation, and sometimes to allow it to change our presuppositions. This process, this hermeneutical circle, is essential to any working out of problems such as the one from Corinthians. We hear this text historically, and realize that our understanding of it (in the sense of an understanding that forces us to act, or to change our behavior) is predicated upon our ability to match our presuppositions with the text, and the ability of the text to alter those same presuppositions. But of course, when it comes to the "hair" example, this would hardly alter our presuppositions - we are too imbued with modern medicine to even be able to hear this text fully.

This also, I think, helps with Lessing's ditch. It helps because it collapses the problem. The real issue isn't the transition from "contingent" to "necessary" truth. The real issue is how one's understanding interacts with a text which one's understanding and tradition claim is ultimate. The real problem is not the historical events themselves, but rather how our understanding interacts with these historical events and makes sense of the claim they have on us now. This is less a philosophical and more a hermeneutical problem. This is the direction people like Hans Frei and George Lindbeck go. And it is the only direction I see myself going.

Wednesday, April 23, 2008

It's Reality, Stupid!

It sometimes seems to me that we in the U.S. typically live in a reality vacuum. Reading a story in the NY Times about the exit polls from the PA primary, you get the sense that when we "analyze" (I use this term very loosely) the results of a contest, what we are really doing is expanding on a fictional plot which has received its bare-bones outline from the interaction of pollsters making up questions to ask and a few people answering them (40 precincts in the Edison/Mitofsky poll). The networks then look at this outline and, depending on their predilection for mystery (will Clinton make a comeback?), romance (she put up a good fight, but will lose in the end), or adventure (the ongoing drama will kill the Democratic chances), finish the plot.

The interesting thing about these polls is that the "Reality" they point to is a reality created rather than a reality reflected. In some ways it's really similar to Lucretius' argument against religion (an argument made by a few others, like Marx), i.e., that religions are made by humans in order to deal with a reality that is too harsh for them. The reality in this case is that we have no clue why people vote the way they do, or what the outcome will be. We don't want it to be true that we cannot categorize human action, because that would really make us question a whole host of other assumptions, like the economy (as if this were not really the biggest fiction of all), or energy, or "America" - assumptions that we have, pious fictions as it were, that allow us to go about our day.

Again, perhaps this is another instance of Calvin's insistence that we are a factory of idols.

Tuesday, April 15, 2008

Beautiful Idolatry

I just finished reading Charles Bock's debut novel Beautiful Children. It's about a whole lot of crazy stuff: runaway kids, strippers, pornographers, marriages falling apart, young boys discovering their sexuality, drugs, and a rape. It's certainly difficult to read, in some sense because the subject matter - the Las Vegas that tourists don't see - is difficult. I'm not sure how accurate the novel's Las Vegas is (Bock grew up there), but the world is convincing enough.

I was most struck by one interesting line of imaginative thought. Cheri Blossom is a stripper whose boyfriend (named "Ponyboy") convinced her to get breast implants (she even ends up putting sparklers in her nipples because they are fake), and who also gets her to be in an amateur porn film (although she leaves before everything goes down with "Rod Erectile"). One of the ways she "gets through" the night as a stripper (and other situations) is to pretend she's in a film. She plays a naughty Catholic school girl, and she has these dialogues with her nun teacher (sometimes she strips even in the movie), and these conversations act as something like her moral compass (e.g., when she has mercy on Ponyboy in recognition of his self-destructiveness). It is very interesting that she needs this, and there is something very true in it: in order to deal with any situation, we as humans create images and narratives, place ourselves in them, and attempt to construct some meaning out of a horrible situation.

This seems to my mind very similar to Calvin's conception of idolatry: he thinks we almost have a compulsion for religion, and so we end up creating the idols we need in order to function. I think there is an analogy here: Cheri's idol isn't a "god" in the sense that we normally understand it. Yet it is something that stands apart from her current situation, a kind of narrative or image that she relies on.

This is a larger point, I suppose: that we are a factory of idols, that in some way we live on images to show us something that is not ourselves precisely so we can be ourselves.

Monday, March 17, 2008

loss

the sense of loss is like the sense of a color. Red is a bright pink insect, twenty legs that crawl and leave a sticky residue, so that when you try to brush it off it will not go. It just stays there, like an elephant's egg incubating and hatching a two-hundred-pound lie that one must plaster over with truth, like inscribing words on a stone, 'TRVTH GOODNESS AND BEAVTY.'