Denying the existence of the material world never goes down well. No matter how clever and compelling the arguments, most of us want to insist that matter exists – and as our insistence becomes more vehement, we start pounding tables, as if that will impress our interlocutors.
I went for a long bike ride yesterday. At the start I was just rolling along, letting my mind wander, and taking in the sights:
• kids selling lemonade,
• a well-kept garden,
• a Rat Patrol-style jeep with a gatling gun perched on top and three guys wrapping it in plastic,
• an interesting older red pickup for sale, …
Wait….. whaaaaa? So I had to turn the bike around to investigate.
Sure enough, the jeep was painted drab army green, with faux motor-pool numbers stenciled on the side. It was on a transport trailer and looked to be in near-new condition, so it was probably being sent off to a customer somewhere. On the passenger side, just outside the vehicle, was a rifle holster, with a nasty black armament in it. And hard at work were three guys wrapping the jeep in plastic (I suppose to protect it against kicked-up stones on the road).
I learned recently that bolting a machine gun to the top of a moving vehicle was the idea of George S. Patton, he of ivory-handled revolver fame. He first used the weapon in the Pancho Villa Expedition, strapping a gun atop a 1915 Dodge, racing up behind the enemy, and blasting away.
“So…,” I said to the guys, “what’s going on up top there?” I gestured toward the gatling gun. It had six long round barrels, two pistol-grip handles, and a big red button.
They kept wrapping. Their leader eventually explained, “It’s for crowd control.”
“It’ll do that,” I agreed. “But is it real? I mean, it isn’t, right? It has a big red button on it, and nothing real has a big red button on it.” I was rapidly reaching the end of my knowledge of weaponry.
“It’s real. I added the big red button myself; it didn’t come with that. It delivers (x hundred? y thousand?) rounds per minute, something, something, something.” He added, “It shoots CO2 pellets.” So it was in fact a very badass BB gun. It wouldn’t kill, but it would definitely disperse a crowd. The other two guys kept wrapping and did not acknowledge my presence.
Now I’ve lived in the rural west long enough to know not to ask questions to which you don’t want to know the answers. So I said, “Well, it sure looks cool. You guys have a good day,” and pedaled off. The encounter gave me plenty of material for thinking over my ride. My guess is that this guy, working out of his home, equips vehicles with weaponry, under contract with – well, with whom? Probably not with municipal police units, since the jeep was done up to look federal, and those agencies like to keep it clear who is doing the shooting. Possibly with paramilitary groups, of which there seem to be increasing numbers. Or possibly a low-GDP foreign government? Mexico? Puzzling, the things one sees from time to time. I’ve got to admit, though, that thing was cool.
In 1671, in some letters exchanged with the French mathematician Pierre de Carcavy, Leibniz mentioned his plans to create a calculating machine. Apparently, he had been inspired by a pedometer, probably thinking that if machines could count, they could then calculate. Within a couple of years, he hired a craftsman to build a wooden prototype of his machine, and he packed it along on a trip to London in 1673.
Burton Dreben (1927-1999) was a Harvard professor whose influence upon academic philosophers has been great, despite a paucity of publications. Indeed, his influence has been so strong that some people refer to his students as being “Drebenized”, or molded in the form of the master. His main areas of interest were logic and the thought of Wittgenstein, Quine, Frege, and Carnap.
I heard Dreben lecture once – it was on Frege’s notes on Wittgenstein’s Tractatus – and found him to be funny, smart, and captivating. He lectured simply, with only the texts before him, and he shared his unscripted thoughts with force and clarity. He easily defended himself against acute criticisms raised by my professors, whom I held in a kind of terrified reverence. An anecdote shared by another philosopher pretty much captures my recollection of Dreben’s style of repartee:
[Michael Dummett] had just delivered a lecture on Wittgenstein on logical necessity. Dreben arose excitedly to disagree with the interpretation. “But Burt,” Dummett said, “you think all this stuff is nonsense.” To which Dreben replied, “No, no, no, no, no! . . . Well, yes.”
I can easily see the allure of being Drebenized. What fun it must have been to learn from such a clever and funny man!
There was a larger discussion of Dreben’s thought and influence some years ago on Leiter’s blog *here*, and I am certainly in no position to add to it. But I would like to reflect for my own purposes on the meaning and truth in one of Dreben’s more notorious declamations: “Philosophy is garbage. But the history of garbage is scholarship.”
Philosophy is ridiculously hubristic. It is an attempt to get at the deepest meanings of things, to grasp that which ultimately and finally is, to comprehend not just what happens to be but what must be, and to draw from these grand truths a vision of how human life should proceed. Anyone trying to do this has to begin by presuming that there is some final account of things, and also that the human mind is capable of coming to know it. Both presumptions are unwarranted; and the implausibility of the second presumption undercuts any justification for believing the first one. Who are humans to presume to know such things? While we are so very clever at manipulating objects and forging tools and constructing strategies, there is little reason to think our brains have evolved for the purpose of understanding Ultimate Truth. Our brains have evolved for the simple purpose of getting by well enough to reproduce. That salutary end can be achieved with minds that are good only for small and local things. Even the notion that there is some Ultimate Truth could be completely misguided. There is no guarantee that the universe must obey what we convince ourselves to be logically necessary.
Even if I am wrong about this – if it turns out there is an Ultimate Truth, and humans in principle can come to know it – then it must be admitted at the very least that it is really, REALLY hard to get to that truth. Given our propensity to mess up in comparatively lower-level cognitive tasks (consider the reliability of operating systems, and the multitudinous failures of bureaucratic institutions), it should be no surprise that so far no one has really come up with a thoroughly compelling philosophy. David Hume provides a just observation:
It is easy for a profound philosopher to commit a mistake in his subtile reasonings; and one mistake is the necessary parent of another, while he pushes on his consequences, and is not deterred from embracing any conclusion, by its unusual appearance, or its contradiction to popular opinion. (Enquiry, sec. 1)
Any confidence that, with careful enough thought, we can attain a vision of the True must be weighed against our track record of making the most elementary conceptual mistakes at the outset of any theorizing. We can place on top of that the ingenuity of other philosophers in coming up with compelling objections and devastating counterexamples to claims that might very well have been true. Even if we came across the truth, it would be a miracle if that genuine insight survived our very clever criticality. In the end, if in fact we have buried within us what philosophy would require, then that capacity is so tenuous and frail that the smart money is on humanity’s persistent failure in coming to know anything of metaphysical significance.
(I know I’m not presenting much of an argument here. It’s really only an expression of what Mickey’s father, in Hannah and Her Sisters, says in fewer words: “How the hell do I know why there were Nazis? I don’t know how the can opener works!”)
But – for all that – I must confess that it is fun and instructive to read attempts by other philosophers to get at big truths. All right: it’s not fun and instructive for everyone. It’s a genre of literature (fiction? nonfiction?) that has its following. And these followers are improved in several ways by their enthusiasm. The literature of philosophy provides ample material for training critical reading and interpretation. Reading Carnap, and reading Quine, and tracing exactly how they talked past one another (as Dreben did) requires extraordinary care in reading, in forming apt diagnoses, in testing interpretations against one another, and in expressing with precision what is going on.
Moreover, as we try to place great historical philosophers in their times and cultures, we can learn in a general way how efforts at philosophy are shaped by circumstance. No one writes in a vacuum, of course, though Descartes and Spinoza tried. As we come to understand how each philosopher is rooted in some historical period, we come to understand how the philosophy that is generated is an existential reflection on that period. We see, that is, how humans have wrapped their minds around the universe in specific times and places. This in turn gives us more to think about as we craft our own responses to our own times and places. It is really the same insight one gains through travel: seeing how strange other places are helps us to see how strange our own place is. There is some self-knowledge in this, a kind of philosophical humbling, which I believe contributes to a deeper sympathy toward the thoughts of those with whom you disagree.
So, yes, philosophy is garbage. But the history of this garbage is something worth pursuing with scholastic intensity.
Generally, in any conflict between long-held, seemingly obvious beliefs and new research challenging those beliefs, defenders of the old beliefs will find themselves charged with sitting in armchairs. It never is a rocking chair, park bench, hammock, or divan. It is an armchair, the sort of chair one finds in venerable, wood-paneled clubs where stodgy old men opine about the world’s events more from preconceived opinions than from any well-grounded knowledge. An armchair represents both laziness and privilege, a luxurious class of opinion-mongers who simply will not bother themselves with actual empirical research – the original La-Z-Boys, as they might be called.
Émile Bréhier, The Philosophy of Plotinus, translated by Joseph Thomas (UChicago, 1958)
The history of philosophy does not reveal to us ideas existing in themselves, but only the men who think. Its method, like every historical method, is nominalistic. Ideas do not, strictly speaking, exist for it. It is only concrete and active thoughts that exist. The problems which philosophers pose and the solutions they offer are the reactions of original thought operating under given historical circumstances and in a given environment. It is permissible, no doubt, to consider ideas or the representations of reality which result from these reactions in isolation. But thus isolated, they are like effects without causes. We may indeed classify systems under general titles. But classifying them is not giving their history (182).
A true philosophical reform, such as that of a Socrates or of a Descartes, always takes for its point of departure a confrontation of the needs of human nature with the representation the mind forms of reality. It is the sense of a lack of correspondence between these needs and the representation which, in exceptionally endowed minds, awakens the philosophical vocation. Thus, little by little, philosophy reveals man to himself. It is the reality of his own needs, of his own inclinations, which forms the basis of living philosophical thought. A philosophy which does not give the impression of being indispensable to the period in which it appears is merely a vain and futile curiosity (pp. 183-4).
Paul Kléber Monod, Solomon’s Secret Arts: The occult in the age of enlightenment (Yale UP 2013).
In 1650, scientific thinking could not be separated from fascination with alchemy, astrology, witchcraft, spell casting, and prophecy – in short, “the occult”. By 1815, the separation was pretty definite, even if attempts to confound the two persist to this day. Monod’s book, focusing on England and Scotland, covers the transition over these years in its many levels and dimensions, illustrating it with story after story of various people engaged in one way or another with the occult.
In the early days, many expected scientific discovery to combine with magic and alchemy and thus rediscover a natural wisdom once possessed by Adam, Moses, and Solomon. No one worried that science and alchemy might not mesh; if anything, the worry was that the darker enticements of magic would lead people away from Christian faith. Hobbes’s thorough disdain for the occult was unusual. Nearly everyone else recoiled from Hobbes’s resolute materialism, and remained fully confident of the influence of spirits and invisible powers upon the visible world. Newton and Boyle steered clear of mentioning the occult in their published works, but they privately pursued secret knowledge along with everybody else. “Magic and science, empiricism and the supernatural: within alchemy, these were not in opposition, but constantly played off each other, combining and separating through a language both allusive and elusive, never fully merging but never wholly apart” (51).
In the practical sphere, alchemical remedies and astrological almanacs were booming businesses. This of course led to a proliferation of quacks and charlatans; and this invited the attention of caustic satirists like Jonathan Swift. Between the great scientists’ reluctance to publish openly about the occult, and the broad lampoons of magical thinking, alchemy faded from the intellectual scene over the first half of the 18th century, with a few exceptions. “The Newtonian magi” continued to bring together natural and supernatural knowledge. They insisted on natural explanations where available, but “the mythology of the Egyptians, the cosmologies of the Greeks and the healing powers of pagan priests provided fragmentary evidence of God’s plan for the universe” (159). William Stukeley evidenced great interest in Druids, and offered impressive speculations about their ancient origins.
Eventually, by the last half of the 18th century, people had become comfortable enough with devils and ghosts to enjoy the first gothic novels and the first haunted houses. The occult became a mildly scary and fun subject, and less learned authors capitalized on its revival. One stage production, Omai, was rooted in the true story of a Tahitian man brought to London by Captain Cook. This rather fantastic version of the story includes Tahitian sorcerers and ghosts, and also features a segment entitled “Apotheosis of Captain Cook”, a special effect extravaganza in which Britannia herself elevates Captain Cook to heaven. He holds a sextant that resembles a Masonic compass.
In the end, Monod’s book brings on the same realization every great history book tries to bring about: that while some things have changed, other things have not. People are now, have always been, and will always be suckers for magical thinking. They may be intellectually serious about it, or they may be trying to make a quick buck. Perhaps they are trying to restore some mythic unity to all human knowledge, or perhaps they are just lazy and superstitious in their thinking. But if we take ourselves to know better today – if we think that science has prevailed in a battle against magical thinking – then, if we are honest, we must also recognize that science and the occult grew up together, and were for a while as inseparable as the twins of Gemini.
Last month (April 19, 2014), 3QD’s Robin Varghese linked to an article by philosopher Lisa Guenther on the effects of solitary confinement on the mind. (The original article was published in the online magazine Aeon.) Guenther’s essay is fascinating, as it provides a vivid account of how our perception of the world depends heavily on the social relations we build everyday with other people. When those social relations are stripped from us, our experience of the world goes wonky. For this reason, Guenther’s article is also disturbing, since it reveals the widespread practice of solitary confinement to be nothing less than mental torture.
Read more here.
A recent post on the internet has outed Neil deGrasse Tyson (or “NdGT,” as he’s been dubbed by the blogosphere) as a philistine in matters of philosophy. True enough: as charismatic as he is, and as beneficial as his public service has been in bringing the wonders of modern science to a big audience, he does appear to be one of those scientists who imperiously dismiss philosophy as a pointless endeavor without appearing to have any clear idea of what philosophy actually is.
(For background, the relevant discussion comes up between minutes 20 and 24 in the Nerdist interview between NdGT and Chris Hardwick. Now, in defense of the Nerdist, the interview is meant only as light entertainment, and it just happened to wander into a dead-end topic. Arguably, they aren’t talking as much about real philosophy as they are talking about pointless verbal activity. But it is also true that the distinction seems lost on all involved – and hence the fitting charge of philistinism.)
I heartily applaud NdGT’s general efforts at popularizing science. My family and I have watched the entire Cosmos series, and while I think the older series had the distinct advantage of Carl Sagan’s masterful prose, this newer series has its own kind of charm (and much better effects). I confess that early on I bristled at the show’s dumbed-down and misleading accounts of the history of science. (The Renaissance Mathematicus cheerfully dishes up the necessary criticisms and correctives on Giordano Bruno and on Robert Hooke.) But after some reflection I realized that the producers put these segments in simplistic cartoon form for good reason: namely, to advertise up front that they were providing only a cartoon version of history. And if the series’ objective is to get kids interested in science, then maybe it’s okay to sacrifice truth for the sake of a good story. So far as that goes, the scientific accounts they tell are also oversimplified, and that’s okay too. First get the kids interested, and let the details get sorted out later. As somebody once said, teaching is strategic lying. If you tell a full and accurate story up front, you’ll only have an audience that didn’t need to be reached in the first place.
So: good on you, NdGT (and producers of Cosmos), and I hope many kids feel wonder for nature as a result of your efforts. But one also wonders whether these laudable ends might be achieved without ignorantly dismissing other ways of understanding the fascinating and wonderful elements of human experience.
Anthony Pagden, The Enlightenment and why it still matters (Random House, 2013)
The overall purpose of the book is to describe the Enlightenment as an intellectual phenomenon, a matter of ideas being thought and books being written, published, and read. There is little attention paid to what we might call the material conditions of history – economics, climate, geography, and social dynamics. So the scope is limited. Nevertheless, Pagden tells a well-informed and entertaining story of a grand sweep of ideas. His book is just the sort of thing that could well have been written by some of the people he writes about. It’s a great introduction to the ideal landscape of the period, and an illustration of the fact that the intellectual debates in our day are nothing new.
As with any great intellectual movement, the Enlightenment is hard to define. Ernst Cassirer called it “a process, the ‘pulsation of the inner intellectual life,’ that consisted ‘less in the certain individual doctrines than in the form and manner of intellectual activity in general'” (quoted by Pagden, 10). It is hard to say anything true about it beyond calling it three centuries of smart Europeans excited by possible conflicts among religion, science, and politics. Maybe we can say that all of them were interested in establishing a new conception of humanity, though they could not agree on precisely what that conception was. For, as Pagden and others show, the thinkers themselves disagreed sharply over matters one might have assumed they agreed upon; and then turned around to agree about other things. Perhaps, if Cassirer was right, the Enlightenment was a variety of sport, and its players found themselves on different teams, and often changed teams as the ball moved to different corners of the field. It would be crazy to define its nature, and crazier yet to deny its existence.
Pagden begins, in “All Coherence Gone,” by recounting the widespread rejection of a single catholic and apostolic church, and the ancillary rejection of a shared philosophical vision of the relations among nature, humanity, and God. Individuals discovered an inner need to work things out for themselves, perhaps politically (as with Hobbes) or epistemically (as with Descartes). Even those like Leibniz who sought a reunification of Christendom went about it in their own way, with their own systems. This led to a moral problem, recounted in “Bringing Pity Back In,” which was to find some motivation for such atomized individuals to have concern for one another. For Hobbes the motivation was strategic and greedy. But of course that goes only so far. Later thinkers believed that we find sympathy within the natural psychology of human beings, and that explains why we sometimes care more for others than can be explained by our narrow interests in self. “The shift from ‘selfishness’ to ‘sentiment,’ from the calculation of interests to the awareness that all humans were bound together by bonds of mutual recognition, became the basis on which a new conception of the social and political order of the entire world would eventually be based” (95).
The third chapter, “The Fatherless World,” recounts the problem of what value to place in religion. Some radicals found no value at all. Others recognized that religion at the very least provides some incentive toward moral behavior when self-interest and sympathy fail. In any case, all agreed that religious intolerance was a clear evil, and that a more generic form of theism would be sufficient to meet the apparent human need to believe in magic beings and provide the sort of crowd control a society requires. (I am beginning to believe that the advent of European deism is a political strategy of both crowd control and crown control.)
These first three chapters cover the basic territory that has to be covered; one might regard them all as preparatory. In the next three chapters, Pagden turns to the areas he knows best, and they are fascinating. They are “The Science of Man,” “Discovering Man in Nature,” and “The Defense of Civilization.” They all involve the challenge presented to European thinkers by the peoples of the Americas and the Pacific. What are we (Europeans) to make of their different values, customs, and practices? Do they simply present to us our primitive origins? Should those origins be regarded with loathing or admiration? What has civilization done for us – and to us? Pagden tells the stories of “Aotourou” and “Omai,” two Tahitian men brought to Europe on different occasions and paraded around town for all to survey and wonder. Their sad stories lend credence to the critics’ charge that the Enlightenment was “specifically a European form of tyranny” (20): both men’s lives were destroyed in the process, and their home communities fared no better.
The final three chapters, “The Great Society of Mankind,” “The Vast Commonwealth of Nations,” and “Enlightenment and its Enemies,” trace connections among the noblest ideal of the Enlightenment – true cosmopolitanism, or the free and equal world citizenship of all human beings – and the decidedly mixed consequences of this noble ideal. On the one hand, any dream we have today of stable and peaceful relations among nations, with citizens playing genuine roles in the self-determination of governments, can be traced to books, treatises, and arguments of the Enlightenment. At the end of the book, Pagden speculates about how Europe’s history would have gone without it. The basic answer is that, had there not been “all coherence gone,” Europe would have met the same overall decline as the glorious Islamic world of the middle ages. A static religious hegemony would have stifled free inquiry, and external barbarians would have charged in and carved us up. Instead – good news! – we were able to do that to other people. And that, of course, is the other hand.
But, really, it need not have been that way. What humans have done under the banner of Enlightenment ideals has certainly not been concordant with those ideals. It is clear that Pagden’s overall assessment – “why it still matters” – is positive:
[The Enlightenment] was about creating a field of values, political, social, and moral, based upon a detached and scrupulous understanding – as far as the human mind is capable – of what it means to be human. And today most educated people, at least in the West, broadly accept the conclusions to which it led. Most generally believe that it is possible to improve, through knowledge and science, the world in which we live. Because they believe this, they also believe there exists a ‘human nature’ …. They hold, that is, that although cultures are important and differences must be respected, this can be so only when cultures conform to some minimal ethical standards that every rational being could be brought to understand. They believe that although most rights come to us courtesy of the states to which we belong, there are others to which we are entitled by virtue of our humanity. (407)
I agree with him that these conclusions express the ideals of the Enlightenment; and it can be no coincidence that we can find no end of volumes from Enlightenment thinkers recommending these conclusions to us. But it is far trickier to establish that we have these values because of the books Pagden discusses. It could be that both phenomena – the great Enlightenment books, and our modern opinions – are expressions of some other deeper thing, like perhaps an economic revolution or some social transformation or lower mortality rates or just the bracing self-interrogation that follows prolonged exposure to other sorts of people. In short, what’s not clear to me is that “what matters” about the Enlightenment is the causal result of thoughtful books.
As much as I like engagement with the world of ideas, I am not always convinced that ideas play decisive causal roles in political and cultural change. They do play some role; ideas cannot be tossed aside as epiphenomenal. But sorting out why ideas matter, and how they come to matter, requires narrow and careful examination on a case by case basis. And that’s not the kind of story Pagden sets out to tell – except in the middle chapters and his discussion of how poor Aotourou and Omai were received and conceived by their European liberators/captors. Even there, not many details were included, and I will be on the lookout for more comprehensive discussions.
We expect that causal laws will be the same across all experience. Hume famously claims that this expectation is grounded neither in pure reason nor in experience. Not pure reason: for one can posit a cause and deny the effect without being contradictory. And not in experience: for all experience can ever show is what we have observed in the past, and that information does not by itself tell us how to generalize upon it. We could generalize that causal laws will remain uniform; or we could generalize that the universe will go completely wonky from this date forward. Neither inference follows validly from what we have observed, and so they are in this sense equally nonstarters. Past performance is no guarantee of future results, as the saying goes.
Hume tries to find a way to explain why it is that, despite all that, we end up expecting causal laws to be constant. Strange as it sounds, the explanation he advances is itself causal. We become used to the causal patterns of the world, or conditioned by them through repeated associations, and so we come to subjectively expect causal patterns to continue. (This isn’t as paradoxical as it sounds. The salient fact about us, that we make causal generalizations, is also itself a generalization, and we expect to continue to generalize in the future as we have in the past. We are conditioned to expect continued conditioning.) We might well call Hume’s explanation the “Pavlovian” account of causality. It is meant precisely not to show that causal claims are grounded in any respectable, defensible process. It is only meant to explain the psychology behind our causal expectations.
Lord Kames, countryman and kinsman of David Hume, did not think this psychological account was good enough, and he raised a counterexample to the claim that constant connections breed causal associations:
In a garrison, the soldiers constantly turn out at a certain beat of the drum. The gates of the town are opened and shut regularly, as the clock points at a certain hour. These facts are observed by a child, grow up with him, and turn habitual during a long life. In this instance, there is a constant connection betwixt objects, which is attended with a similar connection in the imagination: yet the person above supposed, if not a changeling, never imagined, the beat of the drum to be the cause of the motion of the soldiers; nor the pointing of the clock to a certain hour, to be the cause of the opening or shutting of the gates. He perceives the cause of these operations to be very different; and is not led into any mistake by the above circumstances, however closely connected. (Kames 1751)
The child ends up smarter than his experience would suggest. How is he able to sort out the correlations from the causations? In reply to Kames, Hume could claim that the child is able to make the distinction because – once or twice – he has perhaps witnessed the drums beating without the troops mustering, or the gates opening or shutting at odd hours. And what if he hasn’t? Still, he might be able to see the events as only correlated because he has explored the barracks, the drum, the clock, and the gates, and he has found no mechanical links among them. This matters, because he has become otherwise accustomed to expect there to be spatially proximate, mechanical links between causes and effects, at least in events of this kind (“this kind” being correlations among bodies’ behaviors that are not alleged to be explicable through magnetism or gravity or (for us today) quantum spookiness). Indeed, in the Treatise, Hume insists that when we take ourselves to find a causal connection between events, we observe that the events “are contiguous in time and place, and that the object we call cause precedes the other we call effect” (1.3.14). The boy, perhaps, has found the correlated events to be spatially isolated – no links bridging them – and let’s throw in for good measure that perhaps he has also observed that the temporal relations are not as constant as one would otherwise expect among events that are really causally related.
But Kames, I expect, would have further complaints. Don’t we occasionally experience what sure seem like failures in mechanical explanation? We set up a perfect Rube Goldberg contraption, push the first domino, and then what we believed must surely ensue does not. Indeed, don’t we encounter such causal disappointments just as frequently as we encounter correlated events that we are not supposed to think of as causal? The common course of life certainly suggests so. But if this is so, how on Hume’s account could we ever come to reliably sort out one kind from another? Why aren’t we far more confused than we are?
The upshot of this line of objection is that we end up knowing more about the world than we would if our knowledge were just a result of passive observation. Somehow, out of our experience, including our language and culture and education, we are able to form inner models of the world. In those models there are representations of what kinds of events are causally linked and which are not. Models can be mistaken, of course, and we can get causal explanations very wrong. But these models are not made automatically upon successive viewings of the passing show. Experience does not carve a model into our mind in the way a stream of water carves a canyon into rock. A model is an act of creative invention on our part, and it contains much more information than experience itself provides.
(Both Kant and Popper recognized this, by the way. But while Kant held that some components of the model are fixed, imparted to the model by the structure of the human mind, Popper regarded everything as negotiable.)
I wonder, though, why Hume was so attracted to such a simplistic view of our understanding. It may be that he could not see a way to attribute anything more complicated to the mind without bringing on the worry that he was making the mind supernatural. Nature as he knew it could produce an organism that is rudely shaped by experience in the way he describes. But how can nature produce a model-creating mechanism? Today we don’t worry about that question – not as much as we should, I think – but perhaps in Hume’s day the ability to create complex inner models that went beyond the elements of sensory experience had to be seen as something supernatural. Before you know it, there would be talk of souls, and Hume did not want to see talk drifting in that direction. Better an overly simple mechanism that nature can produce than a fancy one nature can’t, if what you’re trying to do is build a broadly nature-bound epistemology. Then you can hope that custom, habit, and culture will fill in any missing structure.
Or maybe I’m wrong to think that individual minds generate models, and Hume is right to look to larger cultural entities and traditions as the generators of models. When Hume claims that custom or habit is what leads us to expect causal regularities, he might be saying that our expectations – our models – are results of training and education and not results of individuals’ abilities. A Humean Adam, with no one around to teach him, would have no expectations for the future. It takes a society for there to be individuals with some kind of shared model of the world that goes beyond each individual’s own experience. That’s an interesting idea.
Jonathan Israel, A Revolution of the Mind (Princeton UP, 2010).
This book is based on lectures Israel gave at Oxford in 2008 in honor of Isaiah Berlin. The overall aim is to show how modern democracy emerged from the tension between Moderate Enlightenment and Radical Enlightenment.
The chief maxim of Radical Enlightenment is “that all men have the same basic needs, rights, and status irrespective of what they believe or what religious, economic, or ethnic group they belong to, and that consequently all ought to be treated alike, whether black or white, male or female, religious or nonreligious, and that all deserve to have their personal interests and aspirations equally respected by law and government” (viii). The four major founders of Radical Enlightenment were Descartes, Hobbes, Bayle, and especially Spinoza. The Moderate Enlightenment (featuring thinkers like Hume, Smith, and Voltaire) denies such thorough egalitarianism, conceding that a great many of us need to be ruled by others, though its thinkers do believe effective checks must be placed on these rulers (especially those who pretend to rule over religious doctrines).
Israel offers a provocative metaphysical difference between the Radicals and Moderates:
Beyond a certain level there were and could be only two Enlightenments – moderate (two-substance) Enlightenment, on the one hand, postulating a balance between reason and tradition and broadly supporting the status quo, and, on the other, Radical (one-substance) Enlightenment conflating body and mind into one, reducing God and nature to the same thing, excluding all miracles and spirits separate from bodies, and invoking reason as the sole guide in human life, jettisoning tradition. (19)
The fundamental question is whether an ideal society can be based upon purely secular, monistic reason. Or must there also be a second substance representing authority and tradition, whether through religion or the state – for the purpose of crowd control, at least? How much can reason do?
To my mind, Israel’s distinction between Radical monists and Moderate dualists parallels a distinction among historians regarding the role of ideas in explaining historical change. Though Israel is not Hegel, he clearly thinks philosophy is a significant contributor to social change – it lends “form and a sharp edge to a powerful emotional upsurge of deeply felt poetic and dramatic aversion to oppression” (88). Other historians think the head plays a much smaller role, and they turn instead toward less rational forces, such as those provided by economics, social structures, and historical accidents. Again: how much can reason do? According to Israel, it provides the central plot; according to others, it is more or less epiphenomenal. At issue in both distinctions is the relevance of ideas.
The term has come to a close and I fall into despair over my failings as a teacher. (My wife tells me this is routine.) My despair is not anything so noble as feeling that I have fallen short of an expectation that I would turn each young mind into a firestorm of intellect. It is the darker conviction that I have wasted everyone’s time, humiliated myself, and presented a charade of learning. The students are glad to be rid of me, and I them, and we are each fully justified in feeling so. There is a handful of exceptions, but in these cases I feel as if we just fell into sympathy with one another, and we are confusing that sociability with genuine learning. (If a student is reading this, don’t worry, I liked you.)
It seems especially bad this term because I entered into it with such high hopes of success. I had devised a new approach, requiring loads of preparation on my part; but, alas! It turned out no better than before. I will probably, again, get relatively high scores in student evaluations, but this is like a glass of vinegar capping off a wretched meal. I know what happened. I was there. Positive evaluations will do nothing but alter my memory of the experience.
So, before the spell wears off, I will record what meager observations have come to mind. Some of these I already practice to some degree, but need to do better.
1. Each class must focus on a live question that has no easy answers. Learning how to read and understand texts is important, but not as important as experiencing the tension of a genuine philosophical problem. Oversimplify and exaggerate, if that is what it takes. If these problems aren’t emerging from a text, drop it.
2. That being said, when we are studying a difficult text, devote more time to working through it line by line so that students learn how to read such things. So far in their lives, they probably have never done it.
3. I should assume my students are two school years lower than they are (college freshmen are high school juniors, college seniors are college sophomores, etc.). It is not that students are dumb, or getting dumber; it’s just that as I advance in years, I lose touch with what it was like to be them, and end up presuming too much. (In another ten years, I’ll up it to three years.)
4. One good, well-conceived example is better than three developed on the fly.
5. Be sure to allow time for amusing tangents and asides, but these should be deployed like rich and tasty treats – overdoing it makes everyone sick.
6. Speak to individuals, and never to the class as a whole. Indeed, it turns out there is no such thing as “the class as a whole”. It is a fiction developed by boring lecturers. There are only ever individuals.
7. (This one I learned a long time ago.) Never ask, “What does everyone think of that?” or other such open-ended questions. Only ask questions that might have wrong answers. (Not that anyone should pounce on those answers as wrong, of course.)
8. In big classes: insert random elements (again, sparingly). This could mean occasionally sitting down among the students while lecturing; it could mean random cartoons or art works inserted into PowerPoints; it could mean leaving the class, if using a microphone, but continuing to lecture. All of this should be done without any explanation – though, on the other hand, if it magically aligns with a relevant point, so much the better. These random elements are meant to inject some unpredictability into something otherwise utterly tedious for both teacher and student. They also subtly challenge the absurd forum of the classroom.
I’m just ending my second foray into academic administration. The first one was serving as department head over a department including philosophy, communications studies, and all of our foreign language programs. It was a terrific exercise in mental and emotional flexibility – at one point I was adjudicating a dispute between a faculty member and a staff assistant while also trying to plan the curricular offerings in French while also teaching early modern philosophy while also …. Luckily, my colleagues were very supportive and forgiving of my mistakes. Still, at the end of my service, I posed myself the question, “What if the dean gave you the choice of (a) staying on for one more year or (b) sticking your hand in a garbage disposal?” and I found myself trying to estimate just how much damage a garbage disposal would do to a hand and how long recovery would take (less than a year? would I get good drugs?). Happily, the choice was never presented to me.
Now I’m finishing up foray #2, having served as an associate dean. This assignment was loads easier. No personnel issues. Mainly, my job has been to go to meetings, answer emails, serve on committees, go to meetings, put people on committees, answer emails, go to meetings…. A lot of my work has focused on academic issues like the structure of general education, the overall shape of the college’s curricula, procedures for fairly and meaningfully evaluating faculty, and so on. This is all interesting stuff (to me), and I’ve learned a lot, and I think we made some real contributions. But now, as I transition back to teaching & scholarship, I’m realizing that one’s mind can be wholly dedicated in different ways. In administration, the whole mind is dedicated to organization, procedural justice, political strategy – I think of this as a broad multilateral engagement. In teaching and scholarship, the mind is wholly dedicated to bringing order and significance to a range of questions that go far deeper – I think of it as deep multilateral engagement.
The deep engagement is a LOT harder and more exhausting than the broad engagement. When it goes well, it is also more fulfilling; and when it doesn’t, it occasions utter despair. I’m guessing this is because more of one’s self is being put on the line – in the classroom, or on the page in one’s writings. Failure reflects, somehow, on the depth and structure of one’s own soul (to dramatize just a bit). If I assemble and present something I take to be important, and it brings only yawns or silence, then (unless I know I was only faking it) I can only conclude that either I or my audience has failed in taking proper measure. Neither conclusion is a happy one. On the other hand, if what I present in a class is greeted with enthusiasm, then at least everyone involved is failing in a similar direction, and that’s not half-bad (indeed, as good as it gets, in my experience). Companionship softens the self-loathing of incompetents.
(Hmm; I didn’t know this tour was going to stop at that spot.) Anyway, I wanted to make a brief listing of some observations made during this second foray. In no particular order:
1. When administrators take any action, they are almost always in a very tight spot. Generally, seasoned administrators try to change as little as possible, under the reasonable suspicion that any change to a system brings all manner of unintended consequences. (Greener administrators, alas, have yet to learn this, and in their ambition can cause great problems.) This means that when there is a change, one should always look for the deeper and more compelling story – the one that makes you say “Ah! That makes sense” – and not just follow convenient rumors.
2. The further up the ladder you go, the less connection there is to anything of academic interest. Maybe this is just what you’d expect. But it is startling sometimes to listen to high-level discussions by people who seem only dimly aware that there are classes being taught, and that items on CVs might refer to intrinsically interesting things. Our university president, who is a decent man, seems only dimly aware of the academic side of campus, as he spends almost all of his days dealing with legislators, donors, and lawyers.
3. Rarely, one finds an academic administrator living an active life of the mind while also administrating – these creatures are valuable beyond any telling, and should be treasured.
4. Vice-presidents very often see the university centered around them, and expend great energy trying to get everyone to adopt their concerns. I guess it’s their job, but it leads to a lot of rear-guard, defensive maneuvering by deans and associate deans to try to maintain resources that will otherwise get sucked up into the building of little kingdoms. In sum: beware the ambitions of vice-presidents.
5. It is also startling to see the consequences of over-specialization in our disciplines, especially in the humanities. This is what makes general education such a difficult and thankless task. I don’t regard myself as well-educated, but out of guilt I have been working to become well-educated for several decades now (a work still very much in progress). But now I encounter junior colleagues who not only do not have this guilt, but sometimes do not seem to be aware of missing anything – “I’m not supposed to know anything about that, am I?”. But I’ll leave it at that lest I give over to excessive old man grumping.
6. I believe it is good for academics to take a turn in administration. It helps them to see how institutions function, and to befriend the people in the offices; it helps them to gain a broader picture of how universities operate, and where they fail; it helps them as individuals work more efficiently, given firmer pressures on schedules. And I think it is good for those turns to be limited. Granted, from deans on up, it is good to have people with more extensive experience. But there are plenty of posts, like the ones I’ve had, that can be entered into and then left again, and from which much can be learned. It’s been a good turn for me; and I’m happy it’s over.