An idle self-scolding:
1. Everyone – not all the time, but every once in a while, acknowledge the fact that human beings occasionally produce something noble, beautiful, or virtuous without oppressing anyone. Writing like Eeyore is not doing us any favors.
2. English faculty – it’s okay to get excited once in a while about silly things (Archie comics, zombie movies, Happy Meal toys), but it’s gotten out of hand; try to spend more time with grown-up material.
3. Philosophers – if you are working on problems that only interest people with PhDs in philosophy, there’s a decent chance you are spinning your wheels. Get real.
4. Historians – just because it hasn’t been studied before doesn’t automatically mean it’s worth studying. You’ve got to make the case that it’s worth someone’s time.
5. Classicists – You’re doing a great job; keep up the good work!
6. Everyone again – bitching about not getting enough respect is a losing strategy for getting respect. Write about something important in a way that lots of people can understand. If you get it right, you’ll get respect. (That’s right, by the way: we can get things right, and we can get them wrong – if, that is, we are actually saying anything. It’s not all just interpretation. What? You think I’m wrong about that? Good. Point made.)
Recently I sat in on a debate about gay marriage. It failed to achieve the noble end of giving everyone in the audience a more complete picture of the arguments and concerns surrounding the issue, but it did at any rate give me the chance to organize my own thoughts on the matter.
It seems to me that, as an institution, marriage has three goods attached to it. First, it recognizes the value we place on a loving, meaningful bond between people who declare their lifelong commitment to one another. Second, it recognizes the value we place on creating stable home environments in which children can be raised to become good people and good citizens. I’ll call these two goods the social goods of marriage, and I think rational people must agree that these goods can be met by gay couples as well as by straight couples. Clearly, gay couples long to have their loving, meaningful commitment to one another recognized through the institution of marriage. And while some people argue that the homes of gay couples are not the best place in which children can be raised, the evidence is very far from clear. Moreover, even if there were child-rearing disadvantages to gay marriage, those disadvantages could well result from the general resentment or resistance toward gay marriage that’s found throughout our society. And if this is so, then such evidence should only spur our efforts to ease that resistance, just as we are spurred to ease the resistance to mixed-race couples or single-parent households.
So the first two goods of marriage provide no reason against gay marriage; indeed, there is every reason to think that these goods can be promoted by allowing gay marriage. But the third good of marriage is different. Many people in our society view marriage as providing a religious good. In their view, God has instituted a holy bond between members of the opposite sex: man and woman complete one another in a profoundly important way that members of the same sex cannot. The fundamental, biological difference between the sexes exists because of a purpose God has for human beings, which is to wed, complete one another, and produce more human beings. Marriage is the societal recognition of this religious good, in addition to the two social goods, and from this perspective, making marriage available to same-sex couples in fact repudiates any societal recognition of this particular religious good.
I’m going to argue that religious people need to give way on this good, but first I want to stress that this religious good is in fact very important to a great many of our fellow citizens, and demanding that they give way on it is asking them to give up on something they value quite highly. This needs to be more widely appreciated, I think. It is not at all like asking an atheist to give way on prayers at school assemblies, or saying “so help me God” at the end of an oath. Having to kowtow to such religious observances is fairly small potatoes; those are very minor concessions in comparison to demanding that religious folks give way on their view of a divinely-instituted bond between human beings. In fact, I can’t think of any other institution that has such a great and equal mix of social and religious significance. It is uniquely a big deal.
Okay; so why do I think that, despite this cost, the religious folks need to give way on this issue? I think they need to for two reasons. First, the cost of denying the social goods of marriage to gay couples is even greater than the cost to religious folks of giving way. This is becoming more and more obvious to a lot of people. As gay couples out themselves, and as there is greater public understanding of their love and commitment to one another, and their familiar humanity, it feels increasingly wrong to refuse to recognize their bonds through marriage. I’m taking this as a social fact. Second, if gay marriage is generally made available and legal, religious folks can still find a lesser way of saving some of the religious good they see in marriage. This lesser way is that religious institutions can always make a distinction between a secular marriage and one that is recognized in the church and through their ceremonies. (In fact, some religions do this already when they refuse to recognize the marriages, baptisms, and rituals performed in other religions or denominations.) This is not an easy fix; there are many complex ramifications that would need to be sorted out, just as there are between religious hospitals and insurance programs and the Affordable Care Act. But though tricky, this is the only way to go. There can no longer be any denial of the social goods of marriage to same-sex couples, and allowing religious institutions to preserve their own sense of marriage is about the best we can do in preserving the good they find in marriage.
When I started as an Assistant Professor, people started calling my kind of academic reading and writing “RESEARCH”, and they encouraged me to do the same. At least, this is how I remember it. It seemed to me strange and awkward, because in my mind “RESEARCH” required surveys at the very minimum, and perhaps also processing numbers and running statistics and boiling liquids and writing on clipboards. But I did (and do) none of these things. I read and read and read; and then I walk around in a daze for a while; and then I write; and read some more; and walk around some more; and write; and so on, until somehow someone publishes something. Repeat. That’s been my method; twenty years ago it seemed bizarre to call it “RESEARCH”.
Time erodes most opinions, and now I can only barely remember how bizarre it seemed to me so long ago. In fact, “RESEARCH” has become for me so watered down as to mean “those activities academics can document in such ways as to duly impress, or fail to unimpress, university administrators.” But recently I came across someone who insisted on making some distinction between “RESEARCH” and “SCHOLARSHIP” on some university form or other, and it got me thinking about whether there is indeed a distinction between the two.
I think there is. In my mind, and apart from my own professional cynicism, “RESEARCH” means finding out new stuff (mainly facts). Scientists do this as they run experiments and find hitherto unknown correlations or causal connections; social scientists, same; archival historians do this as they search through records and piece together what must have happened. But it seems to me that for the most part philosophers, and some historians, and scholars of literature – “humanists” – are not in this kind of business. I have seen plenty of cases where someone claims that their area of research is (say) virtue ethics, or late Renaissance literature, or how some people get relegated to the margins in canonical works. But for the most part this does not require discovering new stuff. It mainly requires looking at the stuff we have in new and (hopefully) interesting and revealing ways.
The ideal for such humanists is not research, but “SCHOLARSHIP”. To be a humanist scholar, one needs to read a great deal, think deeply and humanely about it, and pick up on interesting patterns or glaring exceptions to patterns commonly thought to exist. It is rare to find such scholars (the rarity I’ll try to explain next). Lately I have been finding lectures on YouTube by scholars like Robert Brandom, Peter Brown, and Anthony Grafton, and they are scholarly exemplars. They possess a synoptic knowledge across broad domains, and they have the intelligence to sort the significant from the insignificant and issue meaningful opinions about important matters, with fairness and grace.
Usually I listen to these lectures as I am doing my exercises. It’s maybe this association that leads me to identify a third kind of activity, in addition to research and scholarship, which I am calling “PUSH-UPS”. Of course, we do push-ups or sit-ups not because they are in themselves valuable (duh!), but because they improve our strength in some way. This presents the most charitable description I can think of for what many scholars are engaged in prior to embarking on either research or scholarship. I’m sure that many scientists are not really discovering new facts; they engage in the outward form of research activity, but what they discover is either spurious or too trivial to be dignified by the designation “new stuff”. Similarly, many humanists perform the outward activities of scholarship, but what they discover and write up is hardly interesting, compelling, or general. But what can be said favorably of such academics is that they are young. Hopefully, with time and confidence, they will become scholars or researchers. Right now, they are in some kind of disciplinary training, like push-ups, that will yield valuable results down the road.
That’s optimistic, of course. There is every likelihood that academics will get caught up in the illusion that their push-ups are in fact ends in themselves and they may forget or never come to know that there is genuine scholarship and research. Universities as institutions demand quick return on investment in their faculty, and that certainly promotes a lot of push-ups, and what we do repeatedly becomes habitual. If it weren’t for tenure we’d have practically no researchers or scholars, as no one would ever have the chance of advancing from training to the real thing. As it is, with tenure, some push-up skills mature into genuinely valuable work. The price we pay is that sometimes it doesn’t happen. But I can’t see any more effective way to promote scholarship and research.
I confess that I write this as someone who has done his fair share of push-ups. I can’t say I have become anything more than a rudimentary scholar. But I’m glad to see now the merit in a distinction I was encouraged to forget some time ago.
“In his middle to late thirties (over the years 1679-85), Gottfried Wilhelm Leibniz spent more than three years in his visits to a silver mining region in the Harz mountains. He believed he could devise new and more efficient ways of pumping water out of the deep shafts, enabling miners to dig even deeper and extract more silver from the earth. Had he succeeded, he would have doubled his salary and freed himself from the drudgery of his service to the House of Brunswick…
Wow, that is a mouthful, and the talk does feature some elaborate (if not baroque) constructions with complex concepts. But the underlying ideas are clear and powerful. Robert Brandom argues (his talk proper starts at about 9:44) that there may seem to be irreconcilable differences between a genealogical approach to philosophy (which identifies all of the social, historical, and psychological causes leading up to an idea) and a more purely rationalistic approach (which considers the reasons behind the idea). Basically, it seems like a thorough-going naturalism will make it impossible to take ideas seriously. But this only “seems” so. In fact, trying to sharply demarcate naturalistic causes from reasons in the formation of ideas rests upon a naive view of concepts and meanings. This is the point that Quine was making to Carnap, and Hegel was making to Kant. We should understand ideas as complicated structures that reflect both causes and reasons. In Hegel’s philosophy, this view means reading history in the way that judges of common law read the past as precedent – looking for a rational trajectory behind the mess of daily details, and making use of it for the future.
I found the lecture to be brilliant, and worthy of attention. Plus, Brandom looks like Dumbledore, which is cool. I wonder if grad students at Pitt call him “Brandomdore,” or maybe “Brandalf.”
Arthur Schopenhauer did not have much use for Fichte. He thought Fichte’s mistakes arose from the fact that Fichte did away with Kant’s realm of things in themselves, leaving human consciousness free to just spin in any direction without any friction from anything external to it. And, to make matters worse, he did so without any compelling reason, and in prose that left his readers baffled. In Schopenhauer’s words:
[Fichte] declared everything to be a priori, naturally without any evidence for such a monstrous assertion; instead of evidence, he gave sophisms and even crazy sham demonstrations whose absurdity was concealed under the mask of profundity and of the incomprehensibility ostensibly arising therefrom. (Parerga and Paralipomena, 1.13)
I think Schopenhauer discovered the way not to read Fichte – namely, as a philosopher arguing cogently for a conclusion based on reason and evidence. But then how should we read Fichte? My sense is that Fichte is best read as providing a metaphysics for a certain mood. I would like to try to articulate that mood, and then see how well his speculative and moral philosophy generates it.
Though we live in a cynical age, try for a moment to adopt the view of someone who regards utopia as possible, and perhaps even necessary. Forget for now that we live in a broken and fractured age, with conflict, injustice, inequalities, and absurdity. For a moment, try believing that all these fractures can be healed or repaired. How? Believe that through the dedicated use of our reason we can establish a political community, replete with a scientific understanding of nature, that can bring us into an understanding with one another and into living in harmony with nature’s boundaries and requirements. Through intelligence and justice, we can make whole what is now fractured. Having this belief, and having the temperament to act on it, is what I shall call the mood of Enlightened Optimism, and this is precisely the mood of Fichte.
Now what kind of metaphysics would generate such Enlightened Optimism? If what we see as now fractured can be brought into wholeness, then there must be a wholeness that is possible. Moreover, this wholeness is not merely accidental; it is not the case that it just so happens that there turns out to be a way to make everything whole. Rather, this possibility of wholeness is guaranteed. The wholeness is thus a pre-condition for our fractured world; the possibility of wholeness places constraints upon the kinds of fractures our world can have. Our world can be fractured only in such ways as wholeness is still a possibility. So this means that our disunity proceeds from a prior unity – not prior in time, but prior in possibility. Unity is ontologically more fundamental than disunity.
But then, if unity is fundamental, why should there be any disunity at all? Perhaps it is because the unity attains its own unity only through recognizing itself through something other than itself. That is to say, the unity marks off its own limits, and brings itself identity, through something other than itself. Thus the fracturing is the way for the unity to come to itself. In this fracturing the unity is in its own funhouse, catching true glimpses of itself alongside distortions, curved images, and broken images.
What this means is that our own consciousness is the striving to bring unity out of what is fractured. It is imperative for us to do so, for what we are arises out of the negotiation between unity and disunity that must eventually resolve itself in unity. Moreover, to retain our mood of Enlightened Optimism, we must not see ourselves as pawns in some cosmic game that is beyond our control. Instead, we must see the postulation of the original unity, to which we are returning, as our own postulation. That is, we freely, absolutely freely, posit unity. In our efforts to repair the fractures, and restore unity, we are fulfilling our absolute freedom.
Okay. You may now return to Earth. Schopenhauer was right to find much to complain about in Fichte’s philosophy. But insofar as such “crazy sham demonstrations” encourage us to believe that our freedom consists in restoring unity to a fractured world, it must be admitted that it might be just crazy enough to work.
In the summer and autumn of 1665, a German expatriate in London exchanged a series of fascinating letters with a renegade Dutch Jew. The expatriate was Henry Oldenburg, who was serving as secretary of the newly-formed Royal Society of London. The Royal Society of London for the Improvement of Natural Knowledge – which, if formed today, probably would be styled far less handsomely as “RS-LINK” – was a science club of sorts. It provided gentlemen with the occasion to assemble and share their discoveries, puzzlements, and wonders – without their conversation degenerating into disputes over politics and religion. In the earliest history of the Society, Thomas Sprat described it as a respite from insanity: “Their first purpose was no more, then onely the satisfaction of breathing a freer air, and of conversing in quiet one with another, without being ingag’d in the passions, and madness of that dismal Age”.
See more at: http://www.3quarksdaily.com/3quarksdaily/2014/01/sea-battles-beasties-in-the-blood-and-the-summer-of-1665.html
Or, in other words: is there really any difference between the universe and its mirror image? I remember once reading one of Richard Feynman’s Lectures on Physics in which he took on this problem. True to form, Feynman found a funny, imaginative, and perfectly clear way of spelling it out:
Imagine that we were talking to a Martian, or someone very far away, by telephone. We are not allowed to send him any actual samples to inspect; for instance, if we could send light, we could send him right-hand circularly polarized light and say, “That is right-hand light—just watch the way it is going.” But we cannot give him anything, we can only talk to him. He is far away, or in some strange location, and he cannot see anything we can see. For instance, we cannot say, “Look at Ursa major; now see how those stars are arranged. What we mean by ‘right’ is …” We are only allowed to telephone him.
Now we want to tell him all about us. Of course, first we start defining numbers, and say, “Tick, tick, two, tick, tick, tick, three, …,” so that gradually he can understand a couple of words, and so on. After a while we may become very familiar with this fellow, and he says, “What do you guys look like?” We start to describe ourselves, and say, “Well, we are six feet tall.” He says, “Wait a minute, what is six feet?” Is it possible to tell him what six feet is? Certainly! We say, “You know about the diameter of hydrogen atoms—we are 17,000,000,000 hydrogen atoms high!” That is possible because physical laws are not invariant under change of scale, and therefore we can define an absolute length. And so we define the size of the body, and tell him what the general shape is—it has prongs with five bumps sticking out on the ends, and so on, and he follows us along, and we finish describing how we look on the outside, presumably without encountering any particular difficulties. He is even making a model of us as we go along. He says, “My, you are certainly very handsome fellows; now what is on the inside?” So we start to describe the various organs on the inside, and we come to the heart, and we carefully describe the shape of it, and say, “Now put the heart on the left side.” He says, “Duhhh—the left side?” Now our problem is to describe to him which side the heart goes on without his ever seeing anything that we see, and without our ever sending any sample to him of what we mean by “right”—no standard right-handed object. Can we do it? (http://www.feynmanlectures.caltech.edu/I_52.html)
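Feynman’s “17,000,000,000 hydrogen atoms high” is easy to check. A quick sanity check in Python, using approximate constants I’ve supplied (the atom’s diameter taken as twice the Bohr radius; these values are not from the lecture itself):

```python
# Rough check of Feynman's "we are 17,000,000,000 hydrogen atoms high".
# Assumed approximate constants (my own, not from the lecture):
FOOT_IN_METERS = 0.3048
BOHR_RADIUS_M = 5.29e-11              # Bohr radius, in meters
ATOM_DIAMETER_M = 2 * BOHR_RADIUS_M   # hydrogen atom diameter, roughly

height_m = 6 * FOOT_IN_METERS         # "we are six feet tall"
atoms_high = height_m / ATOM_DIAMETER_M
print(f"{atoms_high:.2e}")            # → 1.73e+10, i.e. about 17 billion
```

So the figure in the quote is right on the money, give or take the fuzziness of an atom’s “diameter.”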
Feynman thought we would have to go to some pretty extreme lengths to explain “left” and “right” to the Martian:
In short, we can tell a Martian where to put the heart: we say, “Listen, build yourself a magnet, and put the coils in, and put the current on, and then take some cobalt and lower the temperature. Arrange the experiment so the electrons go from the foot to the head, then the direction in which the current goes through the coils is the direction that goes in on what we call the right and comes out on the left.” So it is possible to define right and left, now, by doing an experiment of this kind.
It turns out we could just tell the Martian how to build one of these cool tops:
It’s interesting that Kant thought (a) that there is no intrinsic mathematical difference between left-hand and right-hand, or clockwise vs. counterclockwise, and (b) that the fact that we can distinguish the two shows that space itself is not fully described by mathematical laws, and must possess essentially some intuitive component. Meaning: spatial objects aren’t just formulae, but have to be “seen” to be grasped. I don’t think anything coming out of the physics of the natural world would challenge Kant’s claim (a), since these physical distinctions are not purely mathematical. Still, Kant would find these results interesting, I’m sure. And Feynman would have liked those tops.
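Kant’s claim (a) can be made concrete: a right-handed arrangement of points and its mirror image have exactly the same internal, relational description – every pairwise distance matches – yet no rotation maps one onto the other. A small Python sketch (the tripod point sets are my own hypothetical example, not Kant’s or Feynman’s):

```python
import math
from itertools import combinations

# A "right-handed" tripod of points, and its mirror image (z negated).
right = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
left = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, -1)]

def pairwise_distances(points):
    # The purely relational, "mathematical" description: all internal distances.
    return sorted(math.dist(p, q) for p, q in combinations(points, 2))

def orientation(points):
    # Sign of the triple product of the three edge vectors from the origin.
    # Rotations preserve this sign; reflections flip it.
    (ax, ay, az), (bx, by, bz), (cx, cy, cz) = points[1], points[2], points[3]
    det = (ax * (by * cz - bz * cy)
           - ay * (bx * cz - bz * cx)
           + az * (bx * cy - by * cx))
    return 1 if det > 0 else -1

# Identical internal geometry...
print(pairwise_distances(right) == pairwise_distances(left))  # True
# ...but opposite handedness: no rotation turns one into the other.
print(orientation(right), orientation(left))  # 1 -1
```

From the “inside” (the distances), the two hands are indistinguishable, which is Kant’s point; telling them apart takes something over and above the internal relations.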
“Sit in your local coffee shop and your laptop can tell you a lot, especially if you wield your search terms adeptly. But if you want deeper, more local knowledge, you will still have to take the narrower path that leads between the lions and up the stone stairs. There – as in great libraries around the world – you will use all the new sources, all the time. [...] But these streams of data, rich as they are, will illuminate rather than eliminate the unique books and prints and manuscripts that only the library can put in front of you. For now, and for the foreseeable future, if you want to piece together the richest possible mosaic of documents and texts and images, you will have to do it in those crowded public rooms where sunlight gleams on varnished tables, as it has for more than a century, and knowledge is still embodied in millions of dusty, crumbling, smelly, irreplaceable manuscripts and books” (Anthony Grafton, “Codex in Crisis,” Worlds Made by Words).
I am a great believer in technology’s capacity to build our native skills, and so lately I have been augmenting my talents for world domination through playing Sid Meier’s Civilization IV. (For some reason, Sid Meier thinks it’s important that Sid Meier’s Civilization IV be known as “Sid Meier’s Civilization IV,” but I’m not typing that whole thing anymore, and shall henceforth refer to it as “SidCiv.”) SidCiv is an automated version of the war board games I watched my brother play when I was a kid. You have to build cities, and a wide range of classes of people (settlers, workers, soldiers of various types, religious leaders, scientists, etc.), institutions and buildings for city infrastructure, and great cultural monuments. You can win in four ways: (a) be the first to establish a space program, (b) win a diplomatic victory by establishing the United Nations and passing a resolution proclaiming your victory, (c) at the end of the year 2050, be the richest, most advanced, and strongest civilization, or (d) take over the entire world. You may choose to lead different empires (British, Greek, Russian, etc.); you may choose the geography of the globe, as well as the sea level; you may choose the difficulty of your computer-managed opponents. At the end, your world-governing abilities are ranked from Augustus Caesar at the top to Dan Quayle at the very bottom. (Poor Dan Quayle; so far, this is the only association my children have for him. More on this below.)
The game is a totally mind-absorbing challenge, forcing you to multi-task while building an empire along economic, military, and cultural fronts. While the game draws upon actual and historical figures and buildings and technologies, SidCiv freely departs from our world’s actual arrangement. So you’re playing along and are suddenly informed that Euclid has been born in Tokyo and the Taj Mahal has been built in London. You might find Archimedes in one of your cities, and consuming him yields the innovation of Chemistry. If you’re Japan you have access to samurais; if you’re Russia you get cossacks. In conflicts there are cavalry pitted against catapults, and tanks against archers. Cities without aqueducts soon become filthy, vermin-infested plague holes, so you’d best take care of your populace, and eventually they’ll celebrate “We love the monarch!” day. Every achievement along the way comes with a pithy quote read by Leonard Nimoy. His impersonation of Sputnik is hilarious.
I’ve played multiple times, and thus have learned some tricks. For example, when I am invading other countries, I like to use marines. They show up later in SidCiv, as you need first to acquire combustion and industrialism and assembly line production, but boy are they worth it: your best friend or worst enemy, as the slogan goes. Then once I take over a city (selecting the benevolent “Install a new governor,” rather than the dismal invitation to “Burn, baby burn!”), I quickly establish some institution that will turn the population toward my favor and start to spread my culture to the surrounding countryside. So I build a theater, as they are fast and cheap and effective. Before long, my newly-conquered citizenry is setting aside days to celebrate me.
I play at low settings, and have won every time, twice with diplomatic victories, and the rest with time victories. So you might think I would rock as a world leader. Alas, you would be wrong. I have been awarded “Dan Quayle” status with such steady frequency that I’m worried Sid Meier will soon put my name below Dan’s in the ranking. I used to care, and maybe someday I’ll invest more thought into smarter ways to play. Until then, I am whiling away hours as Dan Quayle presiding over an army of theater-crazed marines.
Answer: by not working very well. I’ll explain.
My son spends a lot of time playing Minecraft. It’s a brilliant game that operates in two modes: creative mode, in which you can build all sorts of structures and even simple circuits by collecting raw materials and re-shaping them; and play mode, in which you and other creatures try to kill each other. (I call it “brilliant” because of the sizable ratio between its simplicity and the amount of stuff you can do with it. By this measure, writing and pocket knives are about the most brilliant inventions ever.)
But you can make Minecraft even cooler by downloading different “mods” which add new animals or events or features to the Minecraft land. The trick is that you have to figure out how to download a mod and plug it into your game. My son was immediately baffled, and so I tried to help him. Now I’m nothing like an expert, but I’m generally pretty good at solving this sort of problem. (I’m even better at electrical or plumbing problems.) But I soon was baffled as well and gave up. He kept at it, figured it out, and by now has incorporated several mods. He has learned a lot.
There’s nothing special computers are offering here. They are basically presenting an environment of raw materials which need to be cleverly exploited in order for us to get what we want. In ye olden days, this environment was presented by “the world” or “the garage” or “the broken bike”, which similarly provided promising potential for those willing to exercise some cleverness. The reason I’m so good (well, not a total failure) at electrical work is that when I was 10 or so my brother gave me a shoebox full of toggle switches, and within a couple weeks everything in my room was toggle-switched. I learned how some things work, but more importantly I learned that, for many problems, I could figure it out. That, I hope, is what my son has learned from the difficulty of incorporating mods into Minecraft.
“The lecturer pumps laboriously into sieves. The water may be wholesome, but it runs through. A mind must work to grow,” wrote Charles W. Eliot. But this sentiment has been used to support all sorts of classroom activities and projects which only provide fake problems to be solved by ad hoc teams. A large part of the education provided by higher ed (I suspect) has to do not with these ersatz engagements, but with the obstacles and problems thrown up by higher ed institutions (and by early-adulthood life generally). How do I satisfy the Gen Ed requirements? What are they, anyway? How can I complete this major? How do I convince the Financial Aid office that in fact the check did not arrive? Big institutions are better than broken bicycles at providing the appropriate sorts of challenges for adult life. And computers, of course, make the problems even more difficult, since one can no longer simply rely on genuine communication with an intelligent human, but must now figure out how to get a stupid system to accept a certain cluster of data.
So bring on the advantages, speed, and efficiencies of computers into education. They can only make our problems that much more difficult, and thereby make us smarter.
Al-Ghazali (1058-1111) was a Persian mystic philosopher, and wrote the Deliverance from Error as a kind of intellectual autobiography that is at the same time an argument for Sufism. (A student gave me the book after sitting through my epistemology class, probably thinking (a) I’d like it, and (b) I could use the help.) Its similarities to Descartes’s Meditations are striking. Al-Ghazali is writing to a friend, recounting his spiritual journey, which began with “a thirst for grasping the real meaning of things.” He soon realizes he must understand the nature of knowledge. At first he feels sure of only self-evident truths and the reports of his senses; but he soon finds himself able to doubt even these, as he considers that he might be in a state like a dream. His soul tells him:
“Don’t you see that when you are asleep you believe certain things and imagine certain circumstances and believe they are fixed and lasting and entertain no doubts about that being their status? Then you wake up and know that all your imaginings and beliefs were groundless and unsubstantial. So while everything you believe through sensation or intellection in your waking state may be true in relation to that state, what assurance have you that you may not suddenly experience a state which would have the same relation to your waking state as the latter has to your dreaming, and your waking state would be dreaming in relation to that new and further state?”
If Descartes had put his doubt this carefully, centuries of undergraduate philosophy professors would have had greater success in making that doubt compelling! Actually: maybe too compelling. For how does one answer the doubt that one’s own powers of conception, the lining of conceivability, might be askew? Descartes’s skeptical scenario is only meant to cast doubt on sensations, for then his powers of “intellection” can save the day. Al-Ghazali lumps them together in his “same relation to a dream state” scenario, and there’s no way out of that, except for a lifeline thrown by an external source. This is exactly what al-Ghazali recounts: he is in skeptical despair for about two months before he is cured of this illness – “My soul regained its health and equilibrium and once again I accepted the self-evident data of reason and relied on them with safety and certainty. But that was not achieved by constructing a proof or putting together an argument. On the contrary, it was the effect of a light which God Most High cast into my breast. And that light is the key to most knowledge.”
This too is echoed by Descartes, in a way. His skepticism is cured or answered by the light of nature, which reveals to him certain truths which, it turns out, render the deceiving-demon scenario, and finally the dreaming scenario, inconceivable after all. But does Descartes take the same attitude as al-Ghazali – as that of a patient acted upon by a higher doctor? Or is Descartes’s remedy more of a self-cure? It seems to me something of both: our natural light, our reason, cures us of skepticism, but it is able to do so only because God invests us with an innate ability to discern the structure God imposes on the cosmos. It’s self-help with prescription medication.
While reading al-Ghazali I also read David Deutsch’s The Beginning of Infinity. Early on, Deutsch argues against the sort of doubt al-Ghazali expresses, labeling it as a “parochial” view of human reason. By this Deutsch means that we are able to offer the kinds of explanations we do, and enjoy the sort of technological success we have, precisely because our intellect grasps fundamental patterns and principles of nature. To give in to the doubt that “maybe, somehow, in a way we cannot possibly imagine, the world is other than we suppose” is to give in to utterly unwarranted, superstitious, magical thinking. Deutsch sees this sort of doubt as an outgrowth of what he calls “the principle of mediocrity,” or a principle which says that there is nothing especially significant about human beings. Posh, says Deutsch: our ability to reason and offer explanations is very significant, and we should not be afraid of it. We stand at the beginning of infinity … [cue Star Trek music]
Al-Ghazali and Deutsch sit at opposite ends of the Enlightenment, which was fundamentally a transition from passive to active voice. Autonomy, for al-Ghazali, manifested itself as a kind of illness – a lacking loneliness, or pointlessness. For Deutsch (and, of course, Kant and Hegel before him), autonomy was a mark of liberation, a graduation from self-incurred tutelage to the world of being a master. But the Enlightenment thinkers see reason at our core, and our liberation from external saviors means the freedom to exercise reason. Once we entertain a doubt as to whether reason constitutes our core (and hello, Nietzsche!), then that “liberation” begins to resemble the illness al-Ghazali found it to be, and we start to seek outside aid. The landscape of the intellect changes; but the need for some sort of salvation, either from within or without, does not.
David Chalmers recently addressed the Moral Sciences Club at Cambridge, and he jokingly announced at the beginning that everything he was about to say was not to leave the room. Of course, there are links to the talk everywhere now, and here is another one. His joke makes sense as a joke because of the general assumption in higher ed that each and every faculty member at a research university is in the business of making piecemeal contributions to an ongoing project of construction and discovery. That’s progress. Faculty members are routinely evaluated on the impact they make upon their profession, or the ways in which they advance their discipline, and that is measured by frequent publications that get cited frequently in other publications. Now if a professional academic like Chalmers comes forward and asks, “How come none of this is getting us anywhere?”, he invites the people who fund higher ed, or the administrators of those funds, to pull the plug on that discipline. That’s why he asked that his remarks not leave the room – knowing full well they would, and that it really wouldn’t matter, as the people with plug-pulling powers do not routinely take into consideration remarks made to the Moral Sciences Club.
Three blah-blah points before going on to say what I want to say. First, yes, there are many senses in which philosophy does indeed make piecemeal progress in the way that bug-collecting and star counting do … blah blah blah. Second, higher ed administrators are generally more sensible than the caricatures faculty make of them, and they make allowances for poets and sculptors and so on … blah blah blah. Third, even with what I’m going to say, I really have no problem with a group of philosophers who like seeing what they do along the same lines as bug collecting and star counting; there’s room in our garden for everything … blah blah blah.
Okay, on to business. What bothers me is that Chalmers, and almost all of the discussion I have read in response to his talk (see comments on Leiter’s blog here), accept this paradigm of progress, and then set to work on explaining why philosophy isn’t advancing as robustly as the marvelous advances in polymers, or microchips. The basic theme of Chalmers’s talk is this: if we could see that philosophers were all smoothly gravitating over time to the same answers to the big questions, then we would know that there has been progress in philosophy; but that isn’t happening; so how can we explain why it isn’t? Why aren’t philosophers as successful as cell phone engineers?
It seems to me a decent and rational response to this paradigm is, “Are you out of your focking mind?” The fact that there are irresolvably deep differences over the biggest philosophical questions is not something to hide and apologize for. On the contrary: no educated person would expect philosophers as a corporate bunch to settle these questions, as their unsettlement is itself the value of studying philosophy. Understanding how and why Aquinas and Hume could argue for all eternity and never agree is the beginning of a philosophical education. The step after that is for individuals to make some decisions on their own – about Aquinas and Hume, about the nature of the controversy, and about how that understanding will inform their lives. The clearest progress in philosophy is at the level of individuals, in the details of their philosophical biographies and in the evolution of their minds.
Chalmers might agree to all this – at least as possibly true – but then point out that the question he raises is still worth asking: why isn’t there greater convergence on the big philosophical questions over time? But now my answer would be: because individuals make different decisions in their responses to philosophical controversies.
Now I also must admit that becoming a philosopher – learning the material, developing insight, and making your own decisions – might require the “corporate progress model” at least as a heuristic. Philosophers hold one another accountable by raising objections to arguments and responding to them. If we all merely asked one another, “How is your own personal voyage of discovery working out?”, that would certainly be annoying, and the death of philosophy. We need to argue, and we can’t argue unless we take ourselves to be getting somewhere. How one can employ this heuristic while at the same time recognizing the truth of what I have said about decisions is itself an interesting philosophical question: “To what extent must a philosopher be forgetful?” or “Can philosophy take itself seriously?” are questions Nietzsche might have asked. But they also are questions too important and too serious to be raised in the company of those in charge of higher ed.
The invention or discovery of non-Euclidean geometry really messed up philosophers’ claims to apriori knowledge. For centuries, philosophers were sure that claims like, “The angles of a triangle are equal to two right angles” are paradigmatically clear examples of apriori truths. But these claims are false in any geometry other than Euclid’s, as has been known from roughly 1818 onward. Worse yet, physicists regard non-Euclidean geometry as (at the very least) the most useful model for physical space. So Euclidean geometry turns out to be, on this view, not only not necessarily true, but not even actually true. Real triangles, the kind obtaining among points in real space, have angles summing to more or less than two right angles, depending on where they are and what’s in the neighborhood.
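To make the “more or less than two right angles” concrete, here is the standard textbook result (Girard’s theorem) for the spherical case, with the hyperbolic counterpart noted in the comments; this worked example is my addition, not part of the original post:

```latex
% Girard's theorem: a geodesic triangle of area A on a sphere of
% radius R has interior angles summing to MORE than two right angles:
\alpha + \beta + \gamma \;=\; \pi + \frac{A}{R^{2}}
% Example: the triangle formed by the north pole and two points on the
% equator a quarter-circumference apart has three right angles, so its
% angle sum is 3\pi/2. Its area, by the formula, is (\pi/2)R^2 --
% exactly one eighth of the sphere's surface area 4\pi R^2.
% In hyperbolic geometry (constant curvature -1/R^2) the sign flips,
% \alpha + \beta + \gamma = \pi - A/R^{2}, a sum of LESS than \pi.
```

The deviation from two right angles is proportional to the triangle’s area, which is why small regions of a curved space look approximately Euclidean.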
Kant claimed that space is a form we impose upon our experience, and as he had no inkling of non-Euclidean geometry, he of course believed that truths about Euclidean space are apriori synthetic. So what is a good Kantian to do in the light of non-Euclidean geometry? One easy (but dead-end) option is (A) to rein in Kant’s claims and say that he wasn’t talking about the fancy experience physicists describe, but only ordinary, everyday human experience, where Euclidean geometry still holds. But that clearly does not capture Kant’s intent, and turns his epistemology into – what? – a chronicle of the structures of casual experience? An account of untutored beliefs about geometry?
A slightly better option is (B) to simply upgrade Kant to current geometrical knowledge, but here matters get tricky. From what I understand, there is disagreement among philosophers of physics about how to regard the nature of the geometry of space. Realists believe there is a fact about whether space is truly Euclidean or non-Euclidean. Others, following Poincaré, think the geometry of space is conventional: we can choose to regard space as Euclidean, and make certain changes in our assumptions about how objects change shapes in certain situations, or we can choose to regard space as non-Euclidean, and as non-Euclidean in this way or in that way, and then change other assumptions. So if we want to simply upgrade Kant, we have a variety of packages to choose from; and the very existence of that choice makes the move to upgrade Kant troubling, since the whole idea of the apriori synthetic is to capture what is necessary for the possibility of experience.
One point to think about is a claim Frege made early on: that space, and geometry, require some kind of intuition. In cheap words, space is essentially spacey. Geometers these days don’t really use or need diagrams, as their work is mainly done through equations and nonspatial models. Frege would say they have stopped doing real geometry. If we follow Frege, we could say that Euclidean geometry is still necessary for us when we are dealing with true space, the kind of structure we can represent to ourselves as space. When we try to represent to ourselves non-Euclidean geometry, we have to use three-dimensional Euclidean space in order to exhibit some curved two-dimensional surface which serves as a metaphor for what’s going on in a non-Euclidean space (see diagram). So we are stuck with Euclidean space if we ever want to represent space in a spacelike way to ourselves. Frege would say that this is significant: Kant was right to insist that space, true space, is Euclidean, though we have found all kinds of nonspatial (and strictly nongeometrical) ways to describe other possibilities. This is a dressed-up variant of the (A) strategy.
A second point to consider is whether there are still some features or elements binding together our models of both Euclidean and non-Euclidean spaces. I trust that contemporary geometers are still constrained in various ways as they assemble different kinds of space, and those ways are not mere consistency; in other words, there is still some spacey-ness underlying all different possible models of space; there is something in virtue of which these models count as spatial models. (I could be wrong about this.) If this is so, then those more fundamental constraints might be candidates for the synthetic apriori.
In the previous posts, I’ve been pursuing the idea that our ability to understand experience – interpret it and offer explanations and justifications – requires making a Kantian move: we should postulate some structure inherent to our minds that formats experience and makes our understanding of it possible. I have also argued that this Kantian move cannot be identified with anything found through empirical psychology. But then what does such a “postulation” mean? Does the fact of this structure entail anything supernatural or spooky? I hope not.
In Mind and World, McDowell tries to answer this question by shifting the goal posts of what counts as natural, so that we do not limit what’s natural to the current domain of the natural sciences. Nature is bigger than that, he says. The basic situation, as well as McDowell’s response to it, is very clearly summarized by Jason Bridges in a review of a book by Richard Gaskin that responds to McDowell. According to McDowell’s view,
We are, or ought to be, attracted to the idea that perceptual experience is a “tribunal” — an occasion on which our thoughts are made to answer to the world they are about. Viewing experience as a tribunal involves supposing that experiences serve for the subject as reasons for and against judgments and attitudes, and in so doing, shape the subject’s judgments and attitudes. But there is a problem in seeing how this supposition could be borne out. On the one hand, human perceptual experience, being an instance of the more general phenomenon of an animal’s sensory capacities putting it in touch with the surrounding environment, is clearly a natural occurrence, and natural occurrences, as we moderns know, are the explanatory province of the natural sciences. On the other hand, we are attracted, or ought to be attracted, to the idea that the “space of reasons” is sui generis — that we cannot construct normative (justificatory, reason-involving) facts out of non-normative conceptual materials. This would exclude in particular the conceptual materials of the natural sciences, organized as they are around the concept of a natural law rather than that of a normative relationship. And so the question arises: how can we view an experience both as the natural phenomenon it evidently is and as belonging to the space of reasons — as the ‘tribunal’ conception requires?
Various philosophical views about experience, such as the myth of the Given and Davidsonian coherentism, can be construed as responses to an awareness, however inchoate or partial, of this problem. These views fail to solve the problem and are hopeless in themselves. A better solution is to see our way to a relaxed conception of the natural. We can give due respect to the role of the natural sciences in making the natural world intelligible to us while stopping short of presuming that everything that happens or is so in the natural world can be fully explained and understood in natural-scientific discourse. There is then no problem in countenancing an experience as natural even if some of the characteristic claims we make about that experience — as, for example, when we cite that experience as the subject’s reason for a belief — cannot be captured in natural-scientific terms.
So the dialectic is this. It seems like the domain of nature is the domain of causes. But the space of reasons is its own sort of domain, where reasons rule. McDowell’s gambit is to “relax” his conception of the natural domain so that it includes the space of reasons. I find this unsatisfying; it seems like a genuine conflict is being circumvented through creative rezoning.
In an earlier draft of this post, I tried out the idea that our ability to engage with reasons is the result of some virtual machine that runs on our brains’ hardware. The idea was appealing because, it seemed, I could insulate “what’s on the inside of the virtual machine” (reasons, explanations, justifications) from the causality of the hardware on which the virtual machine is running. But then I realized that such a ploy could not possibly deliver the sort of Kantian structure I am after; the virtual machine of reasons would be another empirical artifact, susceptible to natural forces and discoverable through cognitive science. So far as I can see, that can’t generate what I’m after.
The structure Kant and McDowell are postulating is transcendental; it must “take hold” prior to any understanding we achieve through efforts in cognitive science. This means it’s hopeless to base it on brain science. But then again, consider that when neuroscientists do their work, they approach it with a theory, and that theory, like any theory, is underdetermined by any evidence they find, and is also a structure through which evidence is parsed, understood, and assessed (see discussion of Kuhn, in part 1). The neuroscientists are also approaching their work, of course, with whatever fixtures are generally required by human understanding. These structures govern our interpretation of evidence and experience in just the way any lesser theory governs our interpretation of data; it’s just that it is a deeper theory, which has no alternatives. This means it’s not best to call it a “theory.” It’s a “theory” we cannot talk or reason ourselves out of: a fixed paradigm, a non-negotiable constraint upon our experience, or what Henry Allison (in Kant’s Transcendental Idealism) calls an “epistemic condition.”
But doesn’t such a fixed paradigm have to be grounded in material facts about us? Or, failing that, spiritual facts about our souls? This question launches us into Kant’s “paralogisms,” or the seemingly powerful but ultimately fruitless arguments about our nature as cognitive beings. He argues that we simply cannot answer this question; we cannot know ourselves. (It is worth noting that the motto of the CPR begins “De nobis ipsis silemus” – “Of ourselves we are silent”.) Thinking of this fixed paradigm merely as a paradigm, without trying to explain whose paradigm it is or how it came to be put in place, is as far as human inquiry can go.
For that reason, it is going too far to call this fixed paradigm “natural,” or (for that matter) to call it “unnatural” or “supernatural.” As the fixed limit of our understanding, it cannot be mapped into any domain subject to itself.
Nevertheless, I think Kant is right to see this as some kind of idealism. Not Berkeley’s idealism, of course. The view is idealistic in that its most basic fixture is something we arrive at through reflection, and posit as an apriori theory. A naturalist makes sense of experience by positing a world of objects, forces, and laws; a Kantian makes sense of experience by positing a fixed theory. While the Kantian cannot make claims about “the world in itself,” apart from all theories (they are more modest than the naturalist in this regard), we can say that the world humans experience is conditional upon something “theory-like.” That makes it idealism – or as Kant called it, “transcendental idealism.”