Or, in other words: is there really any difference between the universe and its mirror image? I remember once reading one of Richard Feynman’s Lectures on Physics in which he took on this problem. True to form, Feynman found a funny, imaginative, and perfectly clear way of spelling it out:
Imagine that we were talking to a Martian, or someone very far away, by telephone. We are not allowed to send him any actual samples to inspect; for instance, if we could send light, we could send him right-hand circularly polarized light and say, “That is right-hand light—just watch the way it is going.” But we cannot give him anything, we can only talk to him. He is far away, or in some strange location, and he cannot see anything we can see. For instance, we cannot say, “Look at Ursa major; now see how those stars are arranged. What we mean by ‘right’ is …” We are only allowed to telephone him.
Now we want to tell him all about us. Of course, first we start defining numbers, and say, “Tick, tick, two, tick, tick, tick, three, …,” so that gradually he can understand a couple of words, and so on. After a while we may become very familiar with this fellow, and he says, “What do you guys look like?” We start to describe ourselves, and say, “Well, we are six feet tall.” He says, “Wait a minute, what is six feet?” Is it possible to tell him what six feet is? Certainly! We say, “You know about the diameter of hydrogen atoms—we are 17,000,000,000 hydrogen atoms high!” That is possible because physical laws are not invariant under change of scale, and therefore we can define an absolute length. And so we define the size of the body, and tell him what the general shape is—it has prongs with five bumps sticking out on the ends, and so on, and he follows us along, and we finish describing how we look on the outside, presumably without encountering any particular difficulties. He is even making a model of us as we go along. He says, “My, you are certainly very handsome fellows; now what is on the inside?” So we start to describe the various organs on the inside, and we come to the heart, and we carefully describe the shape of it, and say, “Now put the heart on the left side.” He says, “Duhhh—the left side?” Now our problem is to describe to him which side the heart goes on without his ever seeing anything that we see, and without our ever sending any sample to him of what we mean by “right”—no standard right-handed object. Can we do it? (http://www.feynmanlectures.caltech.edu/I_52.html)
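As an aside, Feynman’s figure checks out. Here’s a quick back-of-the-envelope sketch of the arithmetic (the ~1.06 × 10⁻¹⁰ m hydrogen-atom diameter, roughly twice the Bohr radius, is my assumption, not a value from the lecture):

```python
# Sanity check on Feynman's "17,000,000,000 hydrogen atoms high":
# does six feet really come to about 1.7e10 hydrogen-atom diameters?
# The diameter used here (~1.06e-10 m) is my assumption, not Feynman's.
FEET_TO_METERS = 0.3048
height = 6 * FEET_TO_METERS          # ~1.83 m
atom_diameter = 1.06e-10             # meters
atoms_high = height / atom_diameter
print(f"{atoms_high:.2e}")           # ~1.7e10, i.e. 17 billion
```

And, as Feynman says, this only works as an absolute standard because physics is not scale-invariant: the hydrogen atom has a definite size that both parties can compute.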
Feynman thought we would have to go to some pretty extreme lengths to explain “left” and “right” to the Martian:
In short, we can tell a Martian where to put the heart: we say, “Listen, build yourself a magnet, and put the coils in, and put the current on, and then take some cobalt and lower the temperature. Arrange the experiment so the electrons go from the foot to the head, then the direction in which the current goes through the coils is the direction that goes in on what we call the right and comes out on the left.” So it is possible to define right and left, now, by doing an experiment of this kind.
It turns out we could just tell the Martian how to build one of these cool tops:
It’s interesting that Kant thought (a) that there is no intrinsic mathematical difference between left-hand and right-hand, or clockwise vs. counterclockwise, and (b) that the fact that we can distinguish the two shows that space itself is not fully described by mathematical laws, and must possess some essentially intuitive component. Meaning: spatial objects aren’t just formulae, but have to be “seen” to be grasped. I don’t think anything coming out of the physics of the natural world would challenge Kant’s claim (a), since these physical distinctions are not purely mathematical. Still, Kant would find these results interesting, I’m sure. And Feynman would have liked those tops.
“Sit in your local coffee shop and your laptop can tell you a lot, especially if you wield your search terms adeptly. But if you want deeper, more local knowledge, you will still have to take the narrower path that leads between the lions and up the stone stairs. There – as in great libraries around the world – you will use all the new sources, all the time. [...] But these streams of data, rich as they are, will illuminate rather than eliminate the unique books and prints and manuscripts that only the library can put in front of you. For now, and for the foreseeable future, if you want to piece together the richest possible mosaic of documents and texts and images, you will have to do it in those crowded public rooms where sunlight gleams on varnished tables, as it has for more than a century, and knowledge is still embodied in millions of dusty, crumbling, smelly, irreplaceable manuscripts and books” (Anthony Grafton, “Codex in Crisis,” Worlds Made by Words).
I am a great believer in technology’s capacity to build our native skills, and so lately I have been augmenting my talents for world domination through playing Sid Meier’s Civilization IV. (For some reason, Sid Meier thinks it’s important that Sid Meier’s Civilization IV be known as “Sid Meier’s Civilization IV,” but I’m not typing that whole thing anymore, and shall henceforth refer to it as “SidCiv.”) SidCiv is an automated version of the war board games I watched my brother play when I was a kid. You have to build cities, and a wide range of classes of people (settlers, workers, soldiers of various types, religious leaders, scientists, etc.), institutions and buildings for city infrastructure, and great cultural monuments. You can win in four ways: (a) be the first to establish a space program, (b) win a diplomatic victory by establishing the United Nations and passing a resolution proclaiming your victory, (c) at the end of the year 2050, be the richest, most advanced, and strongest civilization, or (d) take over the entire world. You may choose to lead different empires (British, Greek, Russian, etc.); you may choose the geography of the globe, as well as the sea level; you may choose the difficulty of your computer-managed opponents. At the end, your world-governing abilities are ranked from Augustus Caesar at the top to Dan Quayle at the very bottom. (Poor Dan Quayle; so far, this is the only association my children have for him. More on this below.)
The game is a totally mind-absorbing challenge, forcing you to multi-task while building an empire along economic, military, and cultural fronts. While the game draws upon actual historical figures and buildings and technologies, SidCiv freely departs from our world’s actual arrangement. So you’re playing along and are suddenly informed that Euclid has been born in Tokyo and the Taj Mahal has been built in London. You might find Archimedes in one of your cities, and consuming him yields the innovation of Chemistry. If you’re Japan you have access to samurai; if you’re Russia you get cossacks. In conflicts there are cavalry pitted against catapults, and tanks against archers. Cities without aqueducts soon become filthy, vermin-infested plague holes, so you’d best take care of your populace, and eventually they’ll celebrate “We love the monarch!” day. Every achievement along the way comes with a pithy quote read by Leonard Nimoy. His impersonation of Sputnik is hilarious.
I’ve played multiple times, and thus have learned some tricks. For example, when I am invading other countries, I like to use marines. They show up later in SidCiv, as you need first to acquire combustion and industrialism and assembly line production, but boy are they worth it: your best friend or worst enemy, as the slogan goes. Then once I take over a city (selecting the benevolent “Install a new governor,” rather than the dismal invitation to “Burn, baby burn!”), I quickly establish some institution that will turn the population toward my favor and start to spread my culture to the surrounding countryside. So I build a theater, as they are fast and cheap and effective. Before long, my newly-conquered citizenry is setting aside days to celebrate me.
I play at low settings, and have won every time, twice with diplomatic victories, and the rest with time victories. So you might think I would rock as a world leader. Alas, you would be wrong. I have been awarded “Dan Quayle” status with such steady frequency that I’m worried Sid Meier will soon put my name below Dan’s in the ranking. I used to care, and maybe someday I’ll invest more thought into smarter ways to play. Until then, I am whiling away hours as Dan Quayle presiding over an army of theater-crazed marines.
Answer: by not working very well. I’ll explain.
My son spends a lot of time playing Minecraft. It’s a brilliant game that operates in two modes: creative mode, in which you can build all sorts of structures and even simple circuits by collecting raw materials and re-shaping them; and play mode, in which you and other creatures try to kill each other. (I call it “brilliant” because of the sizable ratio between its simplicity and the amount of stuff you can do with it. By this measure, writing and pocket knives are about the most brilliant inventions ever.)
But you can make Minecraft even cooler by downloading different “mods” which add new animals or events or features to the Minecraft land. The trick is that you have to figure out how to download a mod and plug it into your game. My son was immediately baffled, and so I tried to help him. Now I’m nothing like an expert, but I’m generally pretty good at solving this sort of problem. (I’m even better at electrical or plumbing problems.) But I soon was baffled as well and gave up. He kept at it, figured it out, and by now has incorporated several mods. He has learned a lot.
There’s nothing special computers are offering here. They are basically presenting an environment of raw materials which need to be cleverly exploited in order for us to get what we want. In ye olden days, this environment was presented by “the world” or “the garage” or “the broken bike”, which similarly provided promising potential for those willing to exercise some cleverness. The reason I’m so good (well, not a total failure) at electrical work is that when I was 10 or so my brother gave me a shoebox full of toggle switches, and within a couple weeks everything in my room was toggle-switched. I learned how some things work, but more importantly I learned that, for many problems, I could figure it out. That, I hope, is what my son has learned from the difficulty of incorporating mods into Minecraft.
“The lecturer pumps laboriously into sieves. The water may be wholesome, but it runs through. A mind must work to grow,” wrote Charles W. Eliot. But this sentiment has been used to support all sorts of classroom activities and projects which only provide fake problems to be solved by ad hoc teams. A large part of the education provided by higher ed (I suspect) has to do not with these ersatz engagements, but with the obstacles and problems thrown up by higher ed institutions (and by early-adulthood life generally). How do I satisfy the Gen Ed requirements? What are they, anyway? How can I complete this major? How do I convince the Financial Aid office that in fact the check did not arrive? Big institutions are better than broken bicycles at providing the appropriate sorts of challenges for adult life. And computers, of course, make the problems even more difficult, since one can no longer simply rely on genuine communication with an intelligent human, but must now figure out how to get a stupid system to accept a certain cluster of data.
So bring on the advantages, speed, and efficiencies of computers into education. They can only make our problems that much more difficult, and thereby make us smarter.
Al-Ghazali (1058-1111) was a Persian mystic and philosopher who wrote the Deliverance from Error as a kind of intellectual autobiography that is, at the same time, an argument for Sufism. (A student gave me the book after sitting through my epistemology class, probably thinking (a) I’d like it, and (b) I could use the help.) Its similarities to Descartes’s Meditations are striking. Al-Ghazali is writing to a friend, recounting his spiritual journey, which began with “a thirst for grasping the real meaning of things.” He soon realizes he must understand the nature of knowledge. At first he feels sure of only self-evident truths and the reports of his senses; but he soon finds himself able to doubt even these, as he considers that he might be in a state like a dream. His soul tells him:
“Don’t you see that when you are asleep you believe certain things and imagine certain circumstances and believe they are fixed and lasting and entertain no doubts about that being their status? Then you wake up and know that all your imaginings and beliefs were groundless and unsubstantial. So while everything you believe through sensation or intellection in your waking state may be true in relation to that state, what assurance have you that you may not suddenly experience a state which would have the same relation to your waking state as the latter has to your dreaming, and your waking state would be dreaming in relation to that new and further state?”
If Descartes had put his doubt this carefully, centuries of undergraduate philosophy professors would have had greater success in making that doubt compelling! Actually: maybe too compelling. For how does one answer the doubt that one’s own powers of conception, the lining of conceivability, might be askew? Descartes’s skeptical scenario is only meant to cast doubt on sensations, for then his powers of “intellection” can save the day. Al-Ghazali lumps them together in his “same relation to a dream state” scenario, and there’s no way out of that, except for a lifeline thrown by an external source. This is exactly what al-Ghazali recounts: he is in skeptical despair for about two months before he is cured of this illness – “My soul regained its health and equilibrium and once again I accepted the self-evident data of reason and relied on them with safety and certainty. But that was not achieved by constructing a proof or putting together an argument. On the contrary, it was the effect of a light which God Most High cast into my breast. And that light is the key to most knowledge.”
This too is echoed by Descartes, in a way. His skepticism is cured or answered by the light of nature, which reveals to him certain truths which, it turns out, render the deceiving-demon scenario, and finally the dreaming scenario, inconceivable after all. But does Descartes take the same attitude as al-Ghazali – as that of a patient acted upon by a higher doctor? Or is Descartes’s remedy more of a self-cure? It seems to me something of both: our natural light, our reason, cures us of skepticism, but it is able to do so only because God invests us with an innate ability to discern the structure God imposes on the cosmos. It’s self-help with prescription medication.
While reading al-Ghazali I also read David Deutsch’s The Beginning of Infinity. Early on, Deutsch argues against the sort of doubt al-Ghazali expresses, labeling it as a “parochial” view of human reason. By this Deutsch means that we are able to offer the kinds of explanations we do, and enjoy the sort of technological success we have, precisely because our intellect grasps fundamental patterns and principles of nature. To give in to the doubt that “maybe, somehow, in a way we cannot possibly imagine, the world is other than we suppose” is to give in to utterly unwarranted, superstitious, magical thinking. Deutsch sees this sort of doubt as an outgrowth of what he calls “the principle of mediocrity,” or a principle which says that there is nothing especially significant about human beings. Posh, says Deutsch: our ability to reason and offer explanations is very significant, and we should not be afraid of it. We stand at the beginning of infinity … [cue Star Trek music]
Al-Ghazali and Deutsch sit at opposite ends of the Enlightenment, which was fundamentally a transition from passive to active voice. Autonomy, for al-Ghazali, manifested itself as a kind of illness – a lack, a loneliness, a pointlessness. For Deutsch (and, of course, Kant and Hegel before him), autonomy was a mark of liberation, a graduation from self-incurred tutelage to the world of being a master. But the Enlightenment thinkers see reason at our core, and our liberation from external saviors means the freedom to exercise reason. Once we entertain a doubt as to whether reason constitutes our core (and hello, Nietzsche!), then that “liberation” begins to resemble the illness al-Ghazali found it to be, and we start to seek outside aid. The landscape of the intellect changes; but the need for some sort of salvation, either from within or without, does not.
David Chalmers recently addressed the Moral Sciences Club at Cambridge, and he jokingly announced at the beginning that everything he was about to say was not to leave the room. Of course, there are links to the talk everywhere now, and here is another one. His joke makes sense as a joke because of the general assumption in higher ed that each and every faculty member at a research university is in the business of making piecemeal contributions to an ongoing project of construction and discovery. That’s progress. Faculty members are routinely evaluated on the impact they make upon their profession, or the ways in which they advance their discipline, and that is measured by frequent publications that get cited frequently in other publications. Now if a professional academic like Chalmers comes forward and asks, “How come none of this is getting us anywhere?”, he invites the people who fund higher ed, or the administrators of those funds, to pull the plug on that discipline. That’s why he asked that his remarks not leave the room – knowing full well they would, and that it really wouldn’t matter, as the people with plug-pulling powers do not routinely take into consideration remarks made to the Moral Sciences Club.
Three blah-blah points before going on to say what I want to say. First, yes, there are many senses in which philosophy does indeed make piecemeal progress in the way that bug-collecting and star counting do … blah blah blah. Second, higher ed administrators are generally more sensible than the caricatures faculty make of them, and they make allowances for poets and sculptors and so on … blah blah blah. Third, even with what I’m going to say, I really have no problem with a group of philosophers who like seeing what they do along the same lines as bug collecting and star counting; there’s room in our garden for everything … blah blah blah.
Okay, on to business. What bothers me is that Chalmers, and almost all of the discussion I have read in response to his talk (see comments on Leiter’s blog here), accept this paradigm of progress, and then set to work on explaining why philosophy isn’t advancing as robustly as, say, polymers or microchips. The basic theme of Chalmers’s talk is this: if we could see that philosophers were all smoothly gravitating over time to the same answers to the big questions, then we would know that there has been progress in philosophy; but that isn’t happening; so how can we explain why it isn’t? Why aren’t philosophers as successful as cell phone engineers?
It seems to me a decent and rational response to this paradigm is, “Are you out of your focking mind?” The fact that there are irresolvably deep differences over the biggest philosophical questions is not something to hide and apologize for. On the contrary: no educated person would expect philosophers as a corporate bunch to settle these questions, as their unsettlement is itself the value of studying philosophy. Understanding how and why Aquinas and Hume could argue for all eternity and never agree is the beginning of a philosophical education. The step after that is for individuals to make some decisions on their own – about Aquinas and Hume, about the nature of the controversy, and about how that understanding will inform their lives. The clearest progress in philosophy is at the level of individuals, in the details of their philosophical biographies and in the evolution of their minds.
Chalmers might agree to all this – at least as possibly true – but then point out that the question he raises is still worth asking: why isn’t there greater convergence on the big philosophical questions over time? But now my answer would be: because individuals make different decisions in their responses to philosophical controversies.
Now I also must admit that becoming a philosopher – learning the material, developing insight, and making your own decisions – might require the “corporate progress model” at least as a heuristic. Philosophers hold one another accountable by raising objections to arguments and responding to them. If we all merely asked one another, “How is your own personal voyage of discovery working out?”, that would certainly be annoying, and the death of philosophy. We need to argue, and we can’t argue unless we take ourselves to be getting somewhere. How one can employ this heuristic while at the same time recognizing the truth of what I have said about decisions is itself an interesting philosophical question: “To what extent must a philosopher be forgetful?” or “Can philosophy take itself seriously?” are questions Nietzsche might have asked. But they also are questions too important and too serious to be raised in the company of those in charge of higher ed.
The invention (or discovery) of non-Euclidean geometry really messed up philosophers’ claims to apriori knowledge. For centuries, philosophers were sure that claims like “The angles of a triangle are equal to two right angles” were paradigmatically clear examples of apriori truths. But these claims are false in any geometry other than Euclid’s, as has been known from roughly 1818 onward. Worse yet, physicists regard non-Euclidean geometry as (at the very least) the most useful model for physical space. So Euclidean geometry turns out to be, on this view, not only not necessarily true, but not even actually true. Real triangles, the kind obtaining among points in real space, have angles summing to more or less than two right angles, depending on where they are and what’s in the neighborhood.
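A quick numerical illustration of that last claim (my own sketch, not drawn from any of the texts discussed here): on a sphere, the triangle whose vertices sit where three perpendicular axes pierce the surface has a right angle at every corner, so its angles sum to 270 degrees rather than 180.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def arc_direction(a, b):
    # Direction at point a of the great-circle arc running toward b:
    # the component of b orthogonal to a, normalized.
    t = [bi - dot(a, b) * ai for ai, bi in zip(a, b)]
    n = math.sqrt(dot(t, t))
    return [x / n for x in t]

def vertex_angle(p, q, r):
    # Angle at vertex p between the arcs p->q and p->r.
    return math.acos(dot(arc_direction(p, q), arc_direction(p, r)))

# The "octant" triangle: three mutually perpendicular points on the unit sphere.
A, B, C = (1, 0, 0), (0, 1, 0), (0, 0, 1)
total = math.degrees(vertex_angle(A, B, C)
                     + vertex_angle(B, A, C)
                     + vertex_angle(C, A, B))
print(total)  # 270.0 -- ninety degrees more than any flat triangle
```

The excess over 180 degrees shrinks as the triangle shrinks relative to the sphere, which is why everyday triangles look Euclidean: locally, curved space is approximately flat.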
Kant claimed that space is a form we impose upon our experience, and as he had no inkling of non-Euclidean geometry, he of course believed that truths about Euclidean space are apriori synthetic. So what is a good Kantian to do in the light of non-Euclidean geometry? One easy (but dead-end) option is (A) to rein in Kant’s claims and say that he wasn’t talking about the fancy experience physicists describe, but only ordinary, everyday human experience, where Euclidean geometry still holds. But that clearly does not capture Kant’s intent, and turns his epistemology into – what? – a chronicle of the structures of casual experience? An account of untutored beliefs about geometry?
A slightly better option is (B) to simply upgrade Kant to current geometrical knowledge, but here matters get tricky. From what I understand, there is disagreement among philosophers of physics about how to regard the nature of the geometry of space. Realists believe there is a fact about whether space is truly Euclidean or non-Euclidean. Others, following Poincaré, think the geometry of space is conventional: we can choose to regard space as Euclidean, and make certain changes in our assumptions about how objects change shapes in certain situations, or we can choose to regard space as non-Euclidean, and as non-Euclidean in this way or in that way, and then change other assumptions. So if we want to simply upgrade Kant, we have a variety of packages to choose from; and the very existence of that choice makes the move to upgrade Kant troubling, since the whole idea of the apriori synthetic is to capture what is necessary for the possibility of experience.
One point to think about is a claim Frege made early on: that space, and geometry, requires some kind of intuition. In cheap words, space is essentially spacey. Geometers these days don’t really use or need diagrams, as their work is mainly done through equations and nonspatial models. Frege would say they have stopped doing real geometry. If we follow Frege, we could say that Euclidean geometry is still necessary for us when we are dealing with true space, the kind of structure we can represent to ourselves as space. When we try to represent to ourselves non-Euclidean geometry, we have to use three-dimensional Euclidean space in order to exhibit some curved two-dimensional surface which serves as a metaphor for what’s going on in a non-Euclidean space (see diagram). So we are stuck with Euclidean space if we ever want to represent space in a spacelike way to ourselves. Frege would say that this is significant: Kant was right to insist that space, true space, is Euclidean, though we have found all kinds of nonspatial (and strictly nongeometrical) ways to describe other possibilities. This is a dressed-up variant of the (A) strategy.
A second point to consider is whether there are still some features or elements binding together our models of both Euclidean and non-Euclidean spaces. I trust that contemporary geometers are still constrained in various ways as they assemble different kinds of space, and those ways are not mere consistency; in other words, there is still some spacey-ness underlying all different possible models of space; there is something in virtue of which these models count as spatial models. (I could be wrong about this.) If this is so, then those more fundamental constraints might be candidates for the synthetic apriori.
In the previous posts, I’ve been pursuing the idea that our ability to understand experience – interpret it and offer explanations and justifications – requires making a Kantian move: we should postulate some structure inherent to our minds that formats experience and makes our understanding of it possible. I have also argued that this Kantian move cannot be identified with anything found through empirical psychology. But then what does such a “postulation” mean? Does the fact of this structure entail anything supernatural or spooky? I hope not.
In Mind and World, McDowell tries to answer this question by shifting the goal posts of what counts as natural, so that we do not limit what’s natural to the current domain of the natural sciences. Nature is bigger than that, he says. The basic situation, as well as McDowell’s response to it, is very clearly summarized by Jason Bridges in a review of a book by Richard Gaskin that responds to McDowell. According to McDowell’s view,
We are, or ought to be, attracted to the idea that perceptual experience is a “tribunal” — an occasion on which our thoughts are made to answer to the world they are about. Viewing experience as a tribunal involves supposing that experiences serve for the subject as reasons for and against judgments and attitudes, and in so doing, shape the subject’s judgments and attitudes. But there is a problem in seeing how this supposition could be borne out. On the one hand, human perceptual experience, being an instance of the more general phenomenon of an animal’s sensory capacities putting it in touch with the surrounding environment, is clearly a natural occurrence, and natural occurrences, as we moderns know, are the explanatory province of the natural sciences. On the other hand, we are attracted, or ought to be attracted, to the idea that the “space of reasons” is sui generis — that we cannot construct normative (justificatory, reason-involving) facts out of non-normative conceptual materials. This would exclude in particular the conceptual materials of the natural sciences, organized as they are around the concept of a natural law rather than that of a normative relationship. And so the question arises: how can we view an experience both as the natural phenomenon it evidently is and as belonging to the space of reasons — as the ‘tribunal’ conception requires?
Various philosophical views about experience, such as the myth of the Given and Davidsonian coherentism, can be construed as responses to an awareness, however inchoate or partial, of this problem. These views fail to solve the problem and are hopeless in themselves. A better solution is to see our way to a relaxed conception of the natural. We can give due respect to the role of the natural sciences in making the natural world intelligible to us while stopping short of presuming that everything that happens or is so in the natural world can be fully explained and understood in natural-scientific discourse. There is then no problem in countenancing an experience as natural even if some of the characteristic claims we make about that experience — as, for example, when we cite that experience as the subject’s reason for a belief — cannot be captured in natural-scientific terms.
So the dialectic is this. It seems like the domain of nature is the domain of causes. But the space of reasons is its own sort of domain, where reasons rule. McDowell’s gambit is to “relax” his conception of the natural domain so that it includes the space of reasons. I find this unsatisfying; it seems like a genuine conflict is being circumvented through creative rezoning.
In an earlier draft of this post, I tried out the idea that our ability to engage with reasons is the result of some virtual machine that runs on our brains’ hardware. The idea was appealing because, it seemed, I could insulate “what’s on the inside of the virtual machine” (reasons, explanations, justifications) from the causality of the hardware on which the virtual machine is running. But then I realized that such a ploy could not possibly deliver the sort of Kantian structure I am after; the virtual machine of reasons would be another empirical artifact, susceptible to natural forces and discoverable through cognitive science. So far as I can see, that can’t generate what I’m after.
The structure Kant and McDowell are postulating is transcendental; it must “take hold” prior to any understanding we achieve through efforts in cognitive science. This means it’s hopeless to base it on brain science. But then again, consider that when neuroscientists do their work, they approach it with a theory, and that theory, like any theory, is underdetermined by any evidence they find, and is also a structure through which evidence is parsed, understood, and assessed (see discussion of Kuhn, in part 1). The neuroscientists are also approaching their work, of course, with whatever fixtures are generally required by human understanding. These structures govern our interpretation of evidence and experience in just the way any lesser theory governs our interpretation of data; it’s just that it is a deeper theory, which has no alternatives. This means it’s not best to call it a “theory.” It’s a “theory” we cannot talk or reason ourselves out of: a fixed paradigm, a non-negotiable constraint upon our experience, or what Henry Allison (in Kant’s Transcendental Idealism) calls an “epistemic condition.”
But doesn’t such a fixed paradigm have to be grounded in material facts about us? Or, failing that, spiritual facts about our souls? This question launches us into Kant’s “paralogisms,” or the seemingly powerful but ultimately fruitless arguments about our nature as cognitive beings. He argues that we simply cannot answer this question; we cannot know ourselves. (It is worth noting that the motto of the CPR begins “De nobis ipsis silemus” – “Of ourselves we are silent”.) Thinking of this fixed paradigm merely as a paradigm, without trying to explain whose paradigm it is or how it came to be put in place, is as far as human inquiry can go.
For that reason, it is going too far to call this fixed paradigm “natural,” or (for that matter) to call it “unnatural” or “supernatural.” As the fixed limit of our understanding, it cannot be mapped into any domain subject to itself.
Nevertheless, I think Kant is right to see this as some kind of idealism. Not Berkeley’s idealism, of course. The view is idealistic in that its most basic fixture is something we arrive at through reflection, and posit as an a priori theory. A naturalist makes sense of experience by positing a world of objects, forces, and laws; a Kantian makes sense of experience by positing a fixed theory. While the Kantian cannot make claims about “the world in itself,” apart from all theories (they are more modest than the naturalist in this regard), we can say that the world humans experience is conditional upon something “theory-like.” That makes it idealism – or as Kant called it, “transcendental idealism.”
“When you have worked through it, by further reflection and some decision as to the immediate future it will turn into something like a path marked on a map, to be followed for a good while and possibly for the rest of your life. To put it another way, you will have made a Self, which is indeed a desirable possession. A Self is interesting to oneself and others, it acts as a sort of rudder in all the vicissitudes of life, and it thereby defines what used to be known as a career.”
Book review by Ollie Cussan of Pagden, The Enlightenment and why it still matters, in Prospect:
The Enlightenment’s great achievement, Pagden argues, was to repair the bonds of mankind. Its distinctive feature was not that it held history, nature, theology and political authority to the scrutiny of reason, as most of its critics and many of its champions claim, but instead that it recognised our common humanity—our ability to place ourselves in another’s situation and, ultimately, to sympathise with them. Adam Smith and David Hume taught us that man is neither a creation of God nor a selfish pursuer of his own interests; at the most fundamental level, man is the friend of man. This, Pagden argues, was the origin of cosmopolitanism: the central Enlightenment belief in a common humanity and an awareness of belonging to some world larger than your own community.
For Pagden, the significance of this turn in human thought cannot be exaggerated. Cosmopolitanism “was, and remains, possibly the only way to persuade human beings to live together in harmony with one another, or, to put it differently, to stop killing each other.” It is inextricably tied to the Enlightenment’s “universalising vision of the human world” that ultimately led to a conception of civilisation in which questions of justice can be applied and upheld at a global level. Pagden admonishes critics of the Enlightenment project such as Gray and MacIntyre for reducing it to a movement based on autonomous reason and objective science. Instead, the Enlightenment was about sympathy, the invention of civilisation, and the pursuit of a cosmopolitan world order.
The central claim in Kant’s philosophy is that our experience is somehow formatted by the nature of our understanding. Why think this is so?
In part 2, I made a general case for thinking that humans are special in that we can understand – we can explain and offer justifications. We misunderstand and get many things wrong of course, but the entire endeavor, our participation in the space of reasons, is something special and itself in need of some explanation. One general reason for thinking that our experience is shaped by our understanding is the fact of this participation: we experience reasons, and see the world as ordered in such a way as to be explicable and comprehensible. We are always acting upon or seeking after an explanation of things. It’s the human way of being. That is possible only if the content of our experience is the sort of thing that fits into explanations.
Wilfrid Sellars made a simple argument for thinking that our sense experience is mediated by concepts. He argues that, of course, we use our sensory experience to justify certain beliefs we have about the world. But the sensory experience itself, the patches and changes of color and sounds and smells and so on, cannot by themselves play any role in any justification. Sounds and smells and colors are of the wrong logical type (really, they are of no logical type at all). Sensory experiences must be turned into judgments, which can then play some role in an argument, explanation, or justification. A judgment in this case consists in applying concepts to the content of experience. Applying concepts means applying one’s mind or understanding. Therefore, there is no bare “given” that justifies our beliefs; there is only a “given” as mediated by concepts and understanding. (This constitutes Sellars’ rejection of “the Myth of the Given.”)
Now this simple argument is enough to suggest what we might call a “thin” Kantianism: that all sensory experiences involve some basic recognition, in the form of judgments, plus whatever other basic attributes are required to render sensory experiences ready to play roles in explanations or justifications. This gets us as far as Hume (though Hume did not take proper note of the role of the mind in this, so far as I can see). But Hume infamously argued that thinly-interpreted experience still falls short of being able to justify many of the foundational beliefs we have about the world. We experience bread entering our bodies, for example, and then we experience the result of being nourished. But there is nothing in that first experience, nothing in even the most careful scrutiny of bread, which suggests any causal power to make us become nourished. At most, we can witness in our experience only correlations, and never instances of causation. Similarly, we experience a hunk of bread at one time, and a very similar hunk of bread an instant later; but nothing in the content of experience compels us to conclude that the two hunks are in fact one and the same. So experience does not show the existence of substances, or objects. And as with the bread, so too with our own minds: the mere fact that there is first one experience, and later another experience, coupled with the memory of the first, does not force the conclusion that the two experiences belong to any single, enduring self.
In short, this thin Kantianism yields only discrete experiences, and never anything more than that. Nothing in the content of such thin experience justifies any conclusions about enduring things or selves or special causal connections among the objects of those experiences. At this point we have three options. First, we could try to live as if reality is nothing but a disordered grab bag of discrete experiences. Good luck with that. Second, we could follow Hume and conclude that we have learned something about the nature of philosophical justification: namely, that it is laughably inadequate. We will continue to believe in substances and selves in on-going interaction with one another, but we will also realize that there really is no philosophical justification for these beliefs. But even Hume himself was nervous about this. For he tried to construct some natural explanation for the fact that we end up with these non-justified beliefs. He constructed a psychology in which habit (or custom) brought us to these conclusions, despite custom’s lack of any philosophical justification. He later saw (in the appendix to the Treatise) that thin Kantianism was insufficient for allowing the possibility of even this project. We can put the point this way: if only thin Kantianism is true, then there simply is not enough in the content of experience to suggest the existence of a thing that has any sort of psychology whatsoever. If thin Kantianism is true, Hume’s psychological project does not even get started. (This problem with option #2 also afflicts option #1: we might try to live as if reality is a grab bag of discrete experiences, but now must also be completely mystified as to why we should ever think otherwise, even mistakenly.)
The third option is to make our Kantianism thicker. This third option means taking two steps. The first step is described by Galen Strawson in his discussion of Hume’s quandary over his account of the self:
One might say that what Hume [as he writes his Appendix] sees is that his philosophy allows (demands, constitutes) a transcendental argument in Kant’s sense, an argument of a sort strictly forbidden to empiricists. It allows an argument not just to the conclusion that there is something more to the mind than a series of experiences – for that is something he never doubted – but to the conclusion that the nature of this something more is correctly and knowably characterizable in a certain metaphysically specific (albeit extremely general) way: either as a persisting single something or as a non-single multiple thing that knowably involves real connection in a way that is not knowable given empiricist principles. (The Evident Connexion, p. 134)
So what Hume sees is that he has to posit “something more” of a self – some thingliness or capacity which at the very least can establish connections among discrete experiences so as to get Humean psychology up and running. That is the first step: posit something that invests experience with connections. But if we rest content with taking only this first step, we run into two problems. First, all we have done is to salvage Hume’s skepticism about the extent of philosophical justification. That is, we have done what is minimally required in order to become good Humean skeptics about ever obtaining philosophical justification for believing in selves, substances, and causality. Maybe that is a good place to be; Hume himself seemed content with it, overall. But many have found Hume’s skepticism intolerable, even without recognizing the nature of the problem raised in Hume’s appendix. The second problem is more serious. If Kant’s arguments toward the end of the Transcendental Analytic (the Refutation of Idealism) are sound, then we will not be able to account for even an illusory sense of self unless we also establish that there are causal connections among the objects in our experience. The “something more” Strawson says Hume sees he needs is still not yet enough, according to Kant. Hume also needs causality among the objects of his experience in order to get enough materials to construct even a seeming sense of self.
So the second step (if anyone is still with me here!) in this third option is to posit a “something more” which not only establishes basic, non-causal connections among experiences, but also enduring object-hood and causal connections among the objects of experiences. This means, in short, that when we judge experiences, we conceptualize experience as consisting in substances and causes. We do not experience merely sensations, nor merely sensations that have superficial relations to one another; we experience objects in on-going causal relations to one another. And we have this experience only because of the judgments we make, which is to say because of the concepts we apply.
For the sake of keeping things simple, I have just followed Kant here in thinking that human experience as we know it requires conceptualizing the world in terms of substances and causes. But it may be that Kant was insufficiently thoughtful about this. (A sentence I do not write lightly.) Is it possible to have human understanding without parsing experience in this particular way? Is it not a human possibility to experience and understand the world as consisting in stable, fluid processes, with only occasional causal connections, or only mere propensities and probabilities? Might a thoroughly quantum world be understandable to humans? Right now I am not as confident as Kant in ruling out these possibilities. I agree with him that there must be “something more” in our judgments of experience to yield a world comprehensible to humans, but I am as yet agnostic about precisely what this “something more” must consist in. For now I am just calling the whatever-it-is “MATH” – since whatever it is we impose upon experience, it had better explain, at the very least, why deeply important forces and features of the world are susceptible to mathematical description, and even require math for their expression.
In part 1, I gave a quick description of Kant’s epistemological project: to uncover what might be called the human “operating system,” or the fixed interpretive framework humans employ in encountering and understanding experience. I also made a couple of brief arguments for thinking that this project is not an exercise in psychological or historical or evolutionary science, since those sciences can only come up with contingent frameworks, and not frameworks necessary for the possibility of human understanding. In this post I’d like to suggest that this Kantian project opens up interesting possibilities for philosophy – particularly in hermeneutics (very broadly construed) and morality.
A striking feature of human encounters with the world is that we strive for understanding: we traffic in explanations, arguments, and justifications. We provide them, and we expect them from others; we engage in critical dialogue about them and discover lapses in logic or judgment. We can call this broad endeavor “participating in the space of reasons,” using a term put forward by Wilfrid Sellars and elaborated to a much greater degree by John McDowell. A first item to note is that reasons are not causes. When I argue with you, I present you with reasons for thinking I am right. I do not merely try to cause you to think I am right. If I were to do that, I might more simply poke you with a stick until I cause you finally to relent and say you agree. Reasons provoke us in a way very different from the way causes provoke us. Reasons engage our capacity to reason. We think through them, assess them, and adopt them or reject them on the basis of our beliefs and logic. I can be wrong in my reasoning, or my justification, while I cannot possibly be wrong in my responses to causes. My reasoning might be skewed or distorted in all kinds of ways having to do with my psychology and circumstance. But that still does not turn reasons into causes; indeed, if we try to reduce reasons to causes, we lose any coherent way of making genuine sense of making mistakes in reasoning.
Now in a thoroughly naturalistic framework, reasons disappear. Or, at best, reasons turn into disguised causes. The reasons I have for (say) adopting Kantianism might be understood as covert causes – I have been effectively brainwashed by several philosophy books, or philosophy teachers, and provoked to utter certain strings of sentences rather than others, not really for any good reason, but because of factors in my environment or psychological temperament. What I claim to be “reasons” for my view are really idle wheels in explaining why I have the view I have; they are at most symptoms of secret underlying causes. The true explanation for what I say and do is discerned by examining what causes me to say and do those things. But taking such a strictly causal approach is hardly credible. For starters, it just doesn’t meet the “sniff” test: for it sure seems like we take reasons seriously, and act on them. (Indeed, very much so; we can’t help but do so.) But even apart from that, the “causalist” approach to reasons defeats itself. For anyone advancing such an account will argue for it by presenting evidence and reasons for thinking it’s true. If they really believed in their conclusions, they would not be so conscientious! (Or I suppose they might be, if they believed that by seeming conscientious, they in fact would be using the sharpest poking sticks to cause others to agree. But wait a minute; that still would be acting on reasons. We would have to say that the causalists were caused to seem to appear conscientious, and caused to seemingly “believe” they had “reasons.” Is this really how such causalists would understand their own arguments? Or do they in fact believe they have reasons for thinking their conclusions are true?)
The Kantian project of opening up a space for reasons, in the context of understanding the human operating system, preserves the possibility of genuine reasoning. When we participate in the space of reasons, we are operating against the backdrop of human understanding, which is a backdrop distinct from that offered by any causal explanation. It then wouldn’t make sense to collapse reasons into causes, since causes simply cannot do the work of reasons. Indeed, since causal explanations are explanations, there is reason to believe that the backdrop of understanding is prior to any understanding we have of causality. Our understanding is a broader framework in which causes, and our understanding of causes, become possible. In fact, we need that broader framework in order to construct and frame causal explanations of anything.
This participation in the space of reasons is plausibly what separates our kind of understanding from the minds of nonhuman animals. Don’t get me wrong – I love animals, and I think they experience pain and pleasure, and they do very clever things. But none of them understand anything. I cannot in any way apologize to my dog, or compensate her for not getting a walk, or demonstrate to her that it’s too muddy to go outside. I can’t argue with her or reason with her, and that’s not merely because I don’t speak Doglish. She hasn’t any participation in the space of reasons. I can care for her and sympathize with her and treat her decently, but that’s as far as our relationship can go.
This presents a further reason to be interested in the space of reasons – a moral reason. When I engage in reason with you, I accord you a certain kind of respect I cannot show to nonhuman animals. I would claim this respect constitutes dignity. The surest way to strip dignity from a human being is to put them in a circumstance in which their capacity to reason is removed (think of Alzheimer’s) or entirely disregarded (think of the basest slavery). If we ourselves have reason, we cannot help but listen to it and attend to it closely in ourselves. That intrinsic respect for reason itself extends to the reason I find in others – since, as human beings, we share the same space of reasons. We can understand one another. Normally, when you offer reasons, I naturally consider them as reasons, and assess them and try to understand them. If I were somehow to deprive you of reason, or ignore your capacity to reason, I would be committing “a sin” against the very reason I find within myself, the very thing that intrinsically calls for my respect. I do not think this general respect for reason generates the entirety of our moral obligation, though I think it captures something especially important in our dealings with one another. (But I’ll leave matters there for now, as I’m still thinking this part through!)
As the title of this post suggests, I’m intending to write several posts reflecting on Kant’s philosophy. I’m doing this because I have a distant goal of writing a book arguing that Kant was essentially right.
In the Critique of Pure Reason, Kant had essentially two goals: first, to provide a broad explanation of our knowledge of the world, and, second, to explain why we aren’t able to establish anything for sure about deep metaphysical topics, like God and the soul and human freedom. He accomplished both of these goals by making a single postulation: that all of our experience has a certain format due to the nature of human understanding. So our abilities and our shortcomings have everything to do with what our particular brand of understanding requires.
Two analogies help me to grasp Kant’s postulation. The two analogies get at the same idea, but I offer them both just to get us into the right space of ideas. The first analogy compares human understanding to a computer program. A program is fundamentally some specific way of coping with various domains of data. In a spreadsheet program, for example, we users make declarations about what kinds of data can be put into which cell in the spreadsheet – numbers, names, explanatory notes, and so on. Then, with the appropriate kinds of data, the program can perform all sorts of functions over these data, and get done what needs doing. Now imagine one of these programs becoming conscious – or, less bizarrely, imagine a human user confining all of her attention to just the boundaries of that program. The conscious being might well wonder how it is that the world works out to be so user-friendly, from the perspective of the program; in other words, why it is that the program is able to grasp hold of the world, represent it, and make useful characterizations of it. Who guaranteed that the world would be spreadsheet-expressible? The answer of course is that the programmers and the users have conspired to shield the program from all of the data that don’t fit its parameters. They have taken care not to put names in number slots, or to perform calculations over phone numbers. The programmers and users have in effect filtered the world so as to make the work of the program possible. The real world “in itself” in fact is not fit for a spreadsheet; it has only been interpreted in such a way as to appear “intelligible” to the spreadsheet program.
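The filtering at work in this analogy can be made concrete with a toy sketch (my own illustration, not anything from Kant; all the names and the schema are hypothetical). The point is that the program only ever encounters data that already fit its declared schema, so from inside the program the world appears perfectly “spreadsheet-expressible”:

```python
# Toy sketch of the spreadsheet analogy: the "program" only ever sees
# rows that already fit its declared schema. Everything else is filtered
# out beforehand, so from inside, every datum looks intelligible.

# Hypothetical schema, purely for illustration.
schema = {"name": str, "age": int, "balance": float}

def filter_for_program(raw_rows, schema):
    """Admit only rows whose fields and types all match the schema."""
    admitted = []
    for row in raw_rows:
        if set(row) == set(schema) and all(
            isinstance(row[key], typ) for key, typ in schema.items()
        ):
            admitted.append(row)
    return admitted

# The "world in itself" contains much that the schema cannot express.
raw_world = [
    {"name": "Ada", "age": 36, "balance": 12.5},        # fits the schema
    {"name": "Zane", "age": "unknown", "balance": 0.0}, # wrong type: filtered out
    {"shape": "prongs with five bumps"},                # wrong fields: filtered out
]

# The program's "experience" is only what survives the filter; it never
# learns that the raw world contained anything else.
visible_world = filter_for_program(raw_world, schema)
print(len(visible_world))  # 1
```

The program, so to speak, has no way of asking what was filtered out; its every possible observation already conforms to the schema, which is just the predicament the analogy attributes to human understanding.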
The second analogy is to Kuhnian paradigms. In his Structure of Scientific Revolutions, Kuhn claimed that science progresses by big leaps from one paradigm to another. A paradigm, as everyone knows now, is a way of looking at the world; more particularly, it is a way to parse data and interpret experience, in light of a governing theory. The world as seen by Aristotle, Kuhn claimed, is not the same world as seen by Galileo: one sees substances striving toward natural states or resting places, and the other sees masses moving in parabolic motions. In Kuhn’s terms, their experiences are theory-laden with the particular theories they advocate. In the time of a revolution, opposing scientists talk past one another, as big chunks of their disagreements are over the right way of looking at things. Eventually, Kuhn claimed, the old guys die, the young ones get their jobs, and that’s the revolution. Now suppose that humans, in addition to all of the historical paradigms dished up by scientists and others, also share a basic, unshakeable paradigm that is intrinsic to their brand of understanding. That would explain why we can’t help but interpret experience in certain basic ways, and also why some thoughts that reach beyond the paradigm’s parameters are beyond any possible human understanding. That’s Kant’s idea: to articulate the single theory with which all of our experience is laden.
Those are the two analogies. Human understanding, according to Kant, is not totally plastic, not infinitely malleable; it has a specific format or structure to it, like a Kuhnian theory or a computer program. That structure characterizes what human understanding is – it determines the range of explanations humans can offer or comprehend. Moreover – and this is the tricky part – this structure is not something we know about, explore, or chart through psychology or history or evolution. If we could do that – if we could provide a naturalistic understanding of the nature of human understanding – then we would be in the following position: we would understand how the world really works, and on that basis we would understand what leads human understanding to have the specific structure that it has. But, of course, there is for us no “understanding of how the world really works” that stands apart from human understanding. Aristotle or Galileo (or Kuhn himself) cannot simply “pop out to check” how the world really goes, and then establish a paradigm-free understanding of paradigms. A computer program cannot adopt the perspective we adopted when we saw that the world itself is not spreadsheet-expressible. We are similarly confined within the kind of understanding we have.
Moreover, naturalistic explanations themselves (like those from psychology or evolution) presuppose paradigms that are thoroughly contingent. They have been constructed in real historical time, and they have competitors which, with a little work, can also provide coherent and compelling accounts of the ways human understanding works. But any such accounts constructed by modern psychology won’t be fixed parameters for human understanding so long as it is possible for humans to out-think those psychological parameters, and understand how things would be or seem if those parameters were otherwise. We can ask: how would the world seem to us if Piaget were right or wrong, or Skinner, or Freud? The fact that we can consider these questions is evidence enough to show that a natural psychological theory does not reach after fixed parameters of human understanding.
If we want to discover those fixed parameters, we have to go transcendental. That is, we have to reverse engineer the parameters of human understanding, working from those features which are necessary for any possible human understanding, and what is evidently beyond any human understanding, and postulate a structure that has those consequences as consequences. That’s Kant’s project.
Last week my family watched Rise of the Guardians. The idea is that there are guardians on Earth who preserve important ideals: Santa Claus (wonder), the Easter Bunny (hope), the Sandman (dreams), and the Tooth Fairy (memory, stored in teeth). Then there’s Jack Frost, and nobody knows what he’s good for, including himself. (Spoiler: turns out he’s the guardian of fun). The whole world is being darkened by the Boogeyman, Pitch Black, who was responsible for the surfeit of fear during the Dark Ages, and is now sapping the world once again of wonder, hope, dreams, memory, and fun. Pitch and Jack have similar backgrounds: created by the Man in the Moon, they were then abandoned by him, and left with no feeling of purpose. Jack finds his purpose, allies with the guardians, and defeats the Boogeyman.
It’s a fun movie really. Obvious complaints can be made. (For one, it follows all American animated movies in making fun of foreign accents, and making all evil characters British.) But I must say that, in my neo-Kantian enthusiasm, I found myself behind the project. Yes: we do need to safeguard wonder, hope, dreams, memory, and fun, and we must have the courage to defeat fear and pessimism, and create a future in which human virtues are preserved and elevated. We can and must work to make our future the United Federation of Planets, and not Brazil. I have this sacred hope. So I was rooting for the guardians all the way.
But at the end of the film I felt my eyes widen in shock and heard myself muttering, “No! No, no, no! They are getting it wrong!” You see, the guardians, aligned with the last remaining hopeful children on the planet, summon the power to banish the Boogeyman into a dark place from which he cannot escape until there is a sequel. But that’s wrong! Aeschylus, in the Oresteia, was working with the same plotline, and he saw his way clear to the right ending: the Furies must be transformed into the Eumenides, so that the fury of bloody revenge is transmuted into civic loyalty, and humanity attains a higher synthesis. The guardians could have and should have done the same – it would have taken only one of Jack Frost’s magical snowballs to conquer fear and transform Pitch Black into Stout Heart – that is, courage, something the guardians need to perform their function. The world then would have advanced into a newfound synthesis that left it stronger for having discovered and overcome its fear. And they still could have had a sequel, in which the guardians fight against a greedy Hollywood studio that wants to take all human virtues and sell them as merchandise.
What a deep disappointment!
I was rooting around today in an old zip drive and found an initial attempt at what I presented several years ago upon being promoted to professor. I ended up delivering something weirder (see here), but I was happy to come across these thoughts, and the fresh recollection of Zane Pautz. So, for what it’s worth ….
Living Under the Boundless Sky (written in Fall 2009)
When he kindly invited me to present this inaugural lecture, the Provost asked me to describe the path which led me to become an academic scholar, and to illustrate and explain the core of my academic interests. So let’s start in the beginning and see where it leads us. How did I ever come to be a professor of philosophy?
The question makes me think immediately of Zane Pautz. Dr. Pautz was a philosophy professor at Milton College in Wisconsin, a small and charming college which lasted from 1844 to 1982. One day Dr. Pautz was invited to visit my high school Humanities class and present a lecture on Philosophy. It must have been 1982, the very year that Milton College finally closed its doors. At the time I did not know that there were still any living philosophers; I thought they had gone out with Zeus and togas. But I still remember that Pautz lectured about the five main areas of philosophy, according to Aristotle — Logic, Epistemology, Metaphysics, Ethics, and Aesthetics — and I was decidedly hooked. At every question he asked, I jumped, thinking “Yes! I’ve always wondered that!” I decided, either then or shortly thereafter, to study philosophy.