Now that every click we make is watched, archived, and meta-data-fied, it is time to start thinking seriously about a personal ethics of internet consumption. This goes beyond mere paranoia and worry over what others might think of what you’re taking interest in. Each click is in fact a tiny vote, proclaiming to content providers that you support this sort of thing, and hope to see more of it in the future. And – as always! – we should vote responsibly.
Read more here.
A potted history:
I believe Peter Sloterdijk is right that the Enlightenment has been followed by philosophical cynicism, or an impressive array of natural knowledge unaccompanied by any faith in providence. The U.S., which became the dominant intellectual and cultural force in the course of the 20th century, was well-suited to put this cynicism to work: for America was built upon a pragmatic, “can do” attitude, and seemed ready to let expediency drive ideology. (There are probably interesting connections here to Protestantism and Holland of the 17th century.) And so there arose on American shores the fulfillment of the German idea of a research university, with its faculty as a specialized workforce and its students as Model-Ts rumbling down an assembly line on which three credits of this and three credits of that are bolted on to each chassis.
Each academic discipline became a guild or union, where membership is tightly controlled and guild members insist on their indispensability to the general curriculum. New disciplines created their own means of controlling membership and making cases for their newfound indispensability.
As unions generally lost power and new models of management were developed in the last third of the 20th century, the university also experienced a shift in authority from the faculty to the administration. In the names of efficiency and accountability, administrators deployed numerous measures for evaluating faculty “productivity”; and the nature of these measures encouraged faculty to entrench themselves more firmly in their respective guilds.
In the case of philosophy, this meant (1) more attention devoted to narrow problem-solving activity rather than efforts to deepen philosophical wonder; (2) increasingly narrow specialization and less general knowledge of the discipline itself and its history; (3) less engagement with anyone outside the professional guild; and (4) development of various cants and shibboleths to patrol membership in the guild.
What to do? (Provided, that is, that one is inclined to see these results as problems!)
Most academic philosophy departments see themselves primarily as housing a specialized academic discipline, and contributing only incidentally here or there to a university’s general education curriculum. The priority needs to be reversed. Frankly, there is little or no need for specialized academic philosophy; if it disappeared overnight, the only ones who would notice would be the practitioners themselves. But on the other hand, despite the occasional iconoclastic polemic saying otherwise, there is a widespread recognition that philosophy provides a valuable contribution to the mind of an educated person, even if the person is not working toward a degree in the field. Philosophy professors need to see their primary job as enriching the mental lives, values, and discourses of non-philosophers. For almost everyone, we should be a side dish rather than the main course. That is where our societal value lies.
Now it can be argued that in order to do this well, philosophers also need opportunities to continue to learn and grow: they too need the chance to “geek out” with fellow philosophers through publications and conferences. And, where there is both talent and motivation, some philosophers will manage to advance our very old and rich discipline. But genuine advances in philosophy will not happen with the frequency of advances in younger and more technological disciplines, like computer science and chemistry. Genuine advances in philosophy are as few and far between as are the geniuses of the 17th, 18th, and 19th centuries. For most of us most of the time, our primary job is to enlighten the masses.
If philosophy reconceived itself along these lines, graduate training in philosophy would look very different. Right now, the usual aim is to equip each student for intensely critical interaction with a vanishingly narrow band of specialists. (Typically, these PhDs are then hired to teach very broad undergraduate classes – an assignment for which, of course, they are wholly unprepared.) But if my proposal were adopted, these candidates would be trained to engage meaningfully, fruitfully, and philosophically with a wide range of people lacking expertise in philosophy. They would be required to write not dissertations, but books that could meaningfully inform the lives of their fellow citizens. This would be the norm rather than the now-celebrated exception. Philosophy would move out of the tower and back into the agora.
I can hear the complaint: “But there are many really smart people who are now attracted to philosophy’s narrow and difficult questions, and wouldn’t go into the discipline at all if they instead had to ‘dumb down’ their efforts for bigger audiences.” I grant the objection, and have three responses:
- First, it seems to me that these smart people might be able to find as much enjoyment working through equally difficult abstract problems in other fields – fields in which solving the problems would have more impact on more people. Smart problem-solvers are in demand all over the place.
- Second, there would still be room in the discipline for some really smart, narrow specialists, even if most of the room were given over to the broader task I’m recommending. Right now, of course, all of the room is reserved for narrow specialists – and that just doesn’t seem sensible, especially given the nature of the great majority of teaching jobs that exist.
- And third, I bet that for every person who is drawn into philosophy because of an inordinate enthusiasm for tight and narrow problems, there are ten really smart people who turn away from the discipline because there is no current opportunity for tackling broad and deep questions, and bringing them to the attention of wider audiences.
It would take some courage for philosophy as a discipline to make this move and “demean itself” by talking to broader audiences. It might seem like some sort of admission of defeat. But in reality, I think this move would be greeted very enthusiastically by a lot of educated people who have become increasingly disappointed in academic philosophers’ refusal to connect with people other than themselves. Moreover, it might encourage other disciplines in the humanities and social sciences to follow our lead, and recall their original purpose: to enlighten, deepen, enrich, and complicate the minds of human beings from all walks of life.
Denying the existence of the material world never goes down well. No matter how clever and compelling the arguments, most of us want to insist that matter exists – and as our insistence becomes more vehement, we start pounding tables, as if that will impress our interlocutors.
I went for a long bike ride yesterday. At the start I was just rolling along, letting my mind wander, and taking in the sights:
• kids selling lemonade,
• a well-kept garden,
• a Rat Patrol-style jeep with a Gatling gun perched on top and three guys wrapping it in plastic,
• an interesting older red pickup for sale, …
Wait….. whaaaaa? So I had to turn the bike around to investigate.
Sure enough, the jeep was painted drab army green, with faux motor pool numbers stenciled on the side. It was on a transport trailer, and looked to be in near-new condition, so it was probably being sent off to a customer somewhere. On the passenger side, just outside the vehicle, was a rifle holster, with a nasty black armament in the holster. And hard at work were three guys wrapping the jeep in plastic (I suppose to protect it against kicked up stones on the road).
I learned recently that the idea of bolting a machine gun to the top of a moving vehicle came from George S. Patton, he of pearl-handled revolver fame. He first used the weapon in the Pancho Villa Expedition, strapping a gun atop a 1915 Dodge, racing up behind the enemy, and blasting away.
“So…,” I said to the guys, “what’s going on up top there?” I gestured toward the Gatling gun. It had six long round barrels, two pistol-grip handles, and a big red button.
They kept wrapping. Their leader eventually explained, “It’s for crowd control.”
“It’ll do that,” I agreed. “But is it real? I mean, it isn’t, right? It has a big red button on it, and nothing real has a big red button on it.” I was rapidly reaching the end of my knowledge of weaponry.
“It’s real. I added the big red button myself; it didn’t come with that. It delivers (x hundred? y thousand?) rounds per minute, something, something, something.” He added, “It shoots CO2 pellets.” So it was in fact a very badass BB gun. It wouldn’t kill, but it would definitely disperse a crowd. The other two guys kept wrapping and did not acknowledge my presence.
Now I’ve lived in the rural west long enough to know not to ask questions to which you don’t want to know the answers. So I said, “Well, it sure looks cool. You guys have a good day,” and pedaled off. The encounter gave me plenty of material for thinking over my ride. My guess is that this guy, out of his home, equips vehicles with weaponry, under contract with – well, with whom? Probably not with municipal police units, since the jeep was done up to look federal, and those agencies like to keep it clear who is doing the shooting. Possibly with paramilitary groups, of which there seem to be increasing numbers. Or possibly a low-GDP foreign government? Mexico? Puzzling, the things one sees from time to time. I’ve got to admit, though, that thing was cool.
In 1671, in some letters exchanged with the French mathematician Pierre de Carcavy, Leibniz mentioned his plans to create a calculating machine. Apparently, he had been inspired by a pedometer, probably thinking that if machines could count, they could then calculate. Within a couple of years, he hired a craftsman to build a wooden prototype of his machine, and he packed it along on a trip to London in 1673.
Burton Dreben (1927-1999) was a Harvard professor whose influence upon academic philosophers has been great, despite a paucity of publications. Indeed, his influence has been so strong that some people refer to his students as being “Drebenized”, or molded in the form of the master. His main area of interest was logic, and the thought of Wittgenstein, Quine, Frege, and Carnap.
I heard Dreben lecture once – it was on Frege’s notes on Wittgenstein’s Tractatus – and found him to be funny, smart, and captivating. He lectured simply, with only the texts before him, and he shared his unscripted thoughts with force and clarity. He easily defended himself against acute criticisms raised by my professors, whom I held in a kind of terrified reverence. An anecdote shared by another philosopher pretty much captures my recollection of Dreben’s style of repartee:
[Michael Dummett] had just delivered a lecture on Wittgenstein on logical necessity. Dreben arose excitedly to disagree with the interpretation. “But Burt,” Dummett said, “you think all this stuff is nonsense.” To which Dreben replied, “No, no, no, no, no! . . . Well, yes.”
I can easily see the allure of being Drebenized. What fun it must have been to learn from such a clever and funny man!
There was a larger discussion of Dreben’s thought and influence some years ago on Leiter’s blog *here*, and I am certainly in no position to add to it. But I would like to reflect for my own purposes on the meaning and truth in one of Dreben’s more notorious declamations: “Philosophy is garbage. But the history of garbage is scholarship.”
Philosophy is ridiculously hubristic. It is an attempt to get at the deepest meanings of things, to grasp that which ultimately and finally is, to comprehend not just what happens to be but what must be, and to draw from these grand truths a vision of how human life should proceed. Anyone trying to do this has to begin by presuming that there is some final account of things, and also that the human mind is capable of coming to know it. Both presumptions are unwarranted; and the implausibility of the second presumption undercuts any justification for believing the first one. Who are humans to presume to know such things? While we are so very clever at manipulating objects and forging tools and constructing strategies, there is little reason to think our brains have evolved for the purpose of understanding Ultimate Truth. Our brains have evolved for the simple purpose of getting by well enough to reproduce. That salutary end can be achieved with minds that are good only for small and local things. Even the notion that there is some Ultimate Truth could be completely misguided. There is no guarantee that the universe must obey what we convince ourselves to be logically necessary.
Even if I am wrong about this – if it turns out there is an Ultimate Truth, and humans in principle can come to know it – then it must be admitted at the very least that it is really, REALLY hard to get to that truth. Given our propensity to mess up in comparatively lower-level cognitive tasks (consider the reliability of operating systems, and the multitudinous failures of bureaucratic institutions), it should be no surprise that so far no one has really come up with a thoroughly compelling philosophy. David Hume provides a just observation:
It is easy for a profound philosopher to commit a mistake in his subtile reasonings; and one mistake is the necessary parent of another, while he pushes on his consequences, and is not deterred from embracing any conclusion, by its unusual appearance, or its contradiction to popular opinion. (Enquiry, sec. 1)
Any confidence that, with careful enough thought, we can attain a vision of the True must be weighed against our track record of making the most elementary conceptual mistakes at the outset of any theorizing. We can place on top of that the ingenuity of other philosophers in coming up with compelling objections and devastating counterexamples to claims that might very well have been true. Even if we came across the truth, it would be a miracle if that genuine insight survived our very clever criticality. In the end, if in fact we have buried within us what philosophy would require, then that capacity is so tenuous and frail that the smart money is on humanity’s persistent failure in coming to know anything of metaphysical significance.
(I know I’m not presenting much of an argument here. It’s really only an expression of what Mickey’s father, in Hannah and her Sisters, says in fewer words: “How the hell do I know why there were Nazis? I don’t know how the can opener works!”)
But – for all that – I must confess that it is fun and instructive to read attempts by other philosophers to get at big truths. All right: it’s not fun and instructive for everyone. It’s a genre of literature (fiction? nonfiction?) that has its following. And these followers are improved in several ways by their enthusiasm. The literature of philosophy provides ample material for training critical reading and interpretation. Reading Carnap, and reading Quine, and tracing exactly how they talked past one another (as Dreben did) requires extraordinary care in reading, in forming apt diagnoses, in testing interpretations against one another, and in expressing with precision what is going on.
Moreover, as we try to place great historical philosophers in their times and cultures, we can learn in a general way how efforts at philosophy are shaped by circumstance. No one writes in a vacuum, of course, though Descartes and Spinoza tried. As we come to understand how each philosopher is rooted in some historical period, we come to understand how the philosophy that is generated is an existential reflection on that period. We see, that is, how humans have wrapped their minds around the universe in specific times and places. This in turn gives us more to think about as we craft our own responses to our own times and places. It is really the same insight one gains through travel: seeing how strange other places are helps us to see how strange our own place is. There is some self-knowledge in this, a kind of philosophical humbling, which I believe contributes to a deeper sympathy toward the thoughts of those with whom you disagree.
So, yes, philosophy is garbage. But the history of this garbage is something worth pursuing with scholastic intensity.
Generally, in any conflict between long-held, seemingly obvious beliefs and new research challenging those beliefs, defenders of the old beliefs will find themselves charged with sitting in armchairs. It never is a rocking chair, park bench, hammock, or divan. It is an armchair, the sort of chair one finds in venerable, wood-paneled clubs where stodgy old men opine about the world’s events more from preconceived opinions than from any well-grounded knowledge. An armchair represents both laziness and privilege, a luxurious class of opinion-mongers who simply will not bother themselves with actual empirical research – the original La-Z-Boys, as they might be called.
Émile Bréhier, The Philosophy of Plotinus, translated by Joseph Thomas (UChicago, 1958)
The history of philosophy does not reveal to us ideas existing in themselves, but only the men who think. Its method, like every historical method, is nominalistic. Ideas do not, strictly speaking, exist for it. It is only concrete and active thoughts that exist. The problems which philosophers pose and the solutions they offer are the reactions of original thought operating under given historical circumstances and in a given environment. It is permissible, no doubt, to consider ideas or the representations of reality which result from these reactions in isolation. But thus isolated, they are like effects without causes. We may indeed classify systems under general titles. But classifying them is not giving their history (182).
A true philosophical reform, such as that of a Socrates or of a Descartes, always takes for its point of departure a confrontation of the needs of human nature with the representation the mind forms of reality. It is the sense of a lack of correspondence between these needs and the representation which, in exceptionally endowed minds, awakens the philosophical vocation. Thus, little by little, philosophy reveals man to himself. It is the reality of his own needs, of his own inclinations, which forms the basis of living philosophical thought. A philosophy which does not give the impression of being indispensable to the period in which it appears is merely a vain and futile curiosity (pp. 183-4).
Paul Kléber Monod, Solomon’s Secret Arts: The occult in the age of enlightenment (Yale UP, 2013)
In 1650, scientific thinking could not be separated from fascination for alchemy, astrology, witchcraft, spell casting, and prophecy – for short, “the occult”. By 1815, the separation was pretty definite, even if attempts to confound the two persist to this day. Monod’s book, focusing on England and Scotland, covers the transition over these years in its many levels and dimensions, illustrating the transition with story after story of various people engaged in one way or another with the occult.
In the early days, many expected scientific discovery to combine with magic and alchemy and thus rediscover a natural wisdom once possessed by Adam, Moses, and Solomon. No one worried that science and alchemy might not mesh; if anything, the worry was that the darker enticements of magic would lead people away from Christian faith. Hobbes’s thorough disdain for the occult was unusual. Nearly everyone else recoiled from Hobbes’s resolute materialism, and remained fully confident of the influence of spirits and invisible powers upon the visible world. Newton and Boyle steered clear of mentioning the occult in their published works, but they privately pursued secret knowledge along with everybody else. “Magic and science, empiricism and the supernatural: within alchemy, these were not in opposition, but constantly played off each other, combining and separating through a language both allusive and elusive, never fully merging but never wholly apart” (51).
In the practical sphere, alchemical remedies and astrological almanacs were booming businesses. This of course led to a proliferation of quacks and charlatans; and this invited the attention of caustic satirists like Jonathan Swift. Between the great scientists’ reluctance to publish openly about the occult, and the broad lampoons of magical thinking, alchemy faded from the intellectual scene over the first half of the 18th century, with a few exceptions. “The Newtonian magi” continued to bring together natural and supernatural knowledge. They insisted on natural explanations where available, but “the mythology of the Egyptians, the cosmologies of the Greeks and the healing powers of pagan priests provided fragmentary evidence of God’s plan for the universe” (159). William Stukeley evidenced great interest in Druids, and offered impressive speculations about their ancient origins.
Eventually, by the last half of the 18th century, people had become comfortable enough with devils and ghosts to enjoy the first gothic novels and the first haunted houses. The occult became a mildly scary and fun subject, and less learned authors capitalized on its revival. One stage production, Omai, was rooted in the true story of a Tahitian man brought to London by Captain Cook. This rather fantastic version of the story includes Tahitian sorcerers and ghosts, and also features a segment entitled “Apotheosis of Captain Cook”, a special-effects extravaganza in which Britannia herself elevates Captain Cook to heaven. He holds a sextant that resembles a Masonic compass.
In the end, Monod’s book brings on the same realization every great history book tries to bring about: that while some things have changed, other things have not. People are now, have always been, and will always be suckers for magical thinking. They may be intellectually serious about it, or they may be trying to make a quick buck. Perhaps they are trying to restore some mythic unity to all human knowledge, or perhaps they are just lazy and superstitious in their thinking. But if we take ourselves to know better today – if we think that science has prevailed in a battle against magical thinking – then, if we are honest, we must also recognize that science and the occult grew up together, and were for a while as inseparable as the twins of Gemini.
Last month (April 19, 2014), 3QD’s Robin Varghese linked to an article by philosopher Lisa Guenther on the effects of solitary confinement on the mind. (The original article was published in the online magazine Aeon.) Guenther’s essay is fascinating, as it provides a vivid account of how our perception of the world depends heavily on the social relations we build every day with other people. When those social relations are stripped from us, our experience of the world goes wonky. For this reason, Guenther’s article is also disturbing, since it reveals the widespread practice of solitary confinement to be nothing less than mental torture.
Read more here.
A recent post on the internet has outed Neil deGrasse Tyson (or “NdGT,” as he’s been dubbed by the blogosphere) as a philistine in matters of philosophy. True enough: as charismatic as he is, and as beneficial as his public service has been in bringing the wonders of modern science to a big audience, he does appear to be one of those scientists who imperiously dismiss philosophy as a pointless endeavor without appearing to have any clear idea of what philosophy actually is.
(For background, the relevant discussion comes up between minutes 20 and 24 in the Nerdist interview between NdGT and Chris Hardwick. Now, in defense of the Nerdist, the interview is meant only as light entertainment, and it just happened to wander into a dead-end topic. Arguably, they aren’t talking as much about real philosophy as they are talking about pointless verbal activity. But it is also true that the distinction seems lost on all involved – and hence the fitting charge of philistinism.)
I heartily applaud NdGT’s general efforts at popularizing science. My family and I have watched the entire Cosmos series, and while I think the older series had the distinct advantage of Carl Sagan’s masterful prose, this newer series has its own kind of charm (and much better effects). I confess that early on I bristled at the show’s dumbed-down and misleading accounts of the history of science. (The Renaissance Mathematicus cheerfully dishes up the necessary criticisms and correctives on Giordano Bruno and on Robert Hooke.) But after some reflection I realized that the producers put these segments in simplistic cartoon form for good reason: namely, to advertise up front that they were providing only a cartoon version of history. And if the series’ objective is to get kids interested in science, then maybe it’s okay to sacrifice truth for the sake of a good story. So far as that goes, the scientific accounts they tell are also oversimplified, and that’s okay too. First get the kids interested, and let the details get sorted out later. As somebody once said, teaching is strategic lying. If you tell a full and accurate story up front, you’ll only have an audience that didn’t need to be reached in the first place.
So: good on you, NdGT (and producers of Cosmos), and I hope many kids feel wonder for nature as a result of your efforts. But one also wonders whether these laudable ends might be achieved without ignorantly dismissing other ways of understanding the fascinating and wonderful elements of human experience.
Anthony Pagden, The Enlightenment and why it still matters (Random House, 2013)
The overall purpose of the book is to describe the Enlightenment as an intellectual phenomenon, a matter of ideas being thought and books being written, published, and read. There is little attention paid to what we might call the material conditions of history – economics, climate, geography, and social dynamics. So the scope is limited. Nevertheless, Pagden tells a well-informed and entertaining story of a grand sweep of ideas. His book is just the sort of thing that could well have been written by some of the people he writes about. It’s a great introduction to the ideal landscape of the period, and an illustration of the fact that the intellectual debates in our day are nothing new.
As with any great intellectual movement, the Enlightenment is hard to define. Ernst Cassirer called it “a process, the ‘pulsation of the inner intellectual life,’ that consisted ‘less in certain individual doctrines than in the form and manner of intellectual activity in general'” (quoted by Pagden, 10). It is hard to say anything true about it beyond calling it three centuries of smart Europeans excited by possible conflicts among religion, science, and politics. Maybe we can say that all of them were interested in establishing a new conception of humanity, though they could not agree on precisely what that conception was. For, as Pagden and others show, the thinkers themselves disagreed sharply over matters one might have assumed they agreed upon; and then turned around to agree about other things. Perhaps, if Cassirer was right, the Enlightenment was a variety of sport, and its players found themselves on different teams, and often changed teams as the ball moved to different corners of the field. It would be crazy to define its nature, and crazier yet to deny its existence.
Pagden begins, in “All Coherence Gone,” by recounting the widespread rejection of a single catholic and apostolic church, and the ancillary rejection of a shared philosophical vision of the relations among nature, humanity, and God. Individuals discovered an inner need to work things out for themselves, perhaps politically (as with Hobbes) or epistemically (as with Descartes). Even those like Leibniz who sought a reunification of Christendom went about it in their own way, with their own systems. This led to a moral problem, recounted in “Bringing Pity Back In,” which was to find some motivation for such atomized individuals to have concern for one another. For Hobbes the motivation was strategic and greedy. But of course that goes only so far. Later thinkers believed that we find sympathy within the natural psychology of human beings, and that explains why we sometimes care more for others than can be explained by our narrow interests in self. “The shift from ‘selfishness’ to ‘sentiment,’ from the calculation of interests to the awareness that all humans were bound together by bonds of mutual recognition, became the basis on which a new conception of the social and political order of the entire world would eventually be based” (95).
The third chapter, “The Fatherless World,” recounts the problem of what value to place in religion. Some radicals found no value at all. Others recognized that religion at the very least provides some incentive toward moral behavior when self-interest and sympathy fail. In any case, all agreed that religious intolerance was a clear evil, and that a more generic form of theism would be sufficient to meet the apparent human need to believe in magic beings and provide the sort of crowd control a society requires. (I am beginning to believe that the advent of European deism is a political strategy of both crowd control and crown control.)
These first three chapters cover the basic territory that has to be covered; one might regard them all as preparatory. In the next three chapters, Pagden turns to the areas he knows best, and they are fascinating. They are “The Science of Man,” “Discovering Man in Nature,” and “The Defense of Civilization.” They all involve the challenge presented to European thinkers by the peoples of the Americas and the Pacific. What are we (Europeans) to make of their different values, customs, and practices? Do they simply present to us our primitive origins? Should those origins be regarded with loathing or admiration? What has civilization done for us – and to us? Pagden tells the stories of “Aotourou” and “Omai,” two Tahitian men brought to Europe on different occasions and paraded around town for all to survey and wonder. Their sad stories lend credence to the critics’ charge that the Enlightenment was “specifically a European form of tyranny” (20): both men’s lives were destroyed in the process, and their home communities fared no better.
The final three chapters, “The Great Society of Mankind,” “The Vast Commonwealth of Nations,” and “Enlightenment and its Enemies,” trace connections among the noblest ideal of the Enlightenment – true cosmopolitanism, or the free and equal world citizenship of all human beings – and the decidedly mixed consequences of this noble ideal. On the one hand, any dream we have today of stable and peaceful relations among nations, with citizens playing genuine roles in the self-determination of governments, can be traced to books, treatises, and arguments of the Enlightenment. At the end of the book, Pagden speculates about how Europe’s history would have gone without it. The basic answer is that, had there not been “all coherence gone,” Europe would have met the same overall decline as the glorious Islamic world of the middle ages. A static religious hegemony would have stifled free inquiry, and external barbarians would have charged in and carved us up. Instead – good news! – we were able to do that to other people. And that, of course, is the other hand.
But, really, it need not have been that way. What humans have done under the banner of Enlightenment ideals has certainly not been concordant with those ideals. It is clear that Pagden’s overall assessment – “why it still matters” – is positive:
[The Enlightenment] was about creating a field of values, political, social, and moral, based upon a detached and scrupulous understanding – as far as the human mind is capable – of what it means to be human. And today most educated people, at least in the West, broadly accept the conclusions to which it led. Most generally believe that it is possible to improve, through knowledge and science, the world in which we live. Because they believe this, they also believe there exists a ‘human nature’ …. They hold, that is, that although cultures are important and differences must be respected, this can be so only when cultures conform to some minimal ethical standards that every rational being could be brought to understand. They believe that although most rights come to us courtesy of the states to which we belong, there are others to which we are entitled by virtue of our humanity. (407)
I agree with him that these conclusions express the ideals of the Enlightenment; and it can be no coincidence that we can find no end of volumes from Enlightenment thinkers recommending these conclusions to us. But it is far trickier to establish that we have these values because of the books Pagden discusses. It could be that both phenomena – the great Enlightenment books, and our modern opinions – are expressions of some other deeper thing, like perhaps an economic revolution or some social transformation or lower mortality rates or just the bracing self-interrogation that follows prolonged exposure to other sorts of people. In short, what’s not clear to me is that “what matters” about the Enlightenment is the causal result of thoughtful books.
As much as I like engagement with the world of ideas, I am not always convinced that ideas play decisive causal roles in political and cultural change. They do play some role; ideas cannot be tossed aside as epiphenomenal. But sorting out why ideas matter, and how they come to matter, requires narrow and careful examination on a case-by-case basis. And that’s not the kind of story Pagden sets out to tell – except in the middle chapters and his discussion of how poor Aotourou and Omai were received and conceived by their European liberators/captors. Even there, not many details are included, and I will be on the lookout for more comprehensive discussions.
We expect that causal laws will be the same across all experience. Hume famously claims that this expectation is grounded neither in pure reason nor in experience. Not pure reason: for one can posit a cause and deny the effect without being contradictory. And not in experience: for all experience can ever show is what we have observed in the past, and that information does not by itself tell us how to generalize upon it. We could generalize that causal laws will remain uniform; or we could generalize that the universe will go completely wonky from this date forward. Neither inference follows validly from what we have observed, and so they are in this sense equally nonstarters. Past performance is no guarantee of future results, as the saying goes.
Hume tries to find a way to explain why it is that, despite all that, we end up expecting causal laws to be constant. Strange as it sounds, the explanation he advances is itself causal. We become used to the causal patterns of the world, or conditioned by them through repeated associations, and so we come to subjectively expect causal patterns to continue. (This isn’t as paradoxical as it sounds. The salient fact about us, that we make causal generalizations, is also itself a generalization, and we expect to continue to generalize in the future as we have in the past. We are conditioned to expect continued conditioning.) We might well call Hume’s explanation the “Pavlovian” account of causality. It is meant precisely not to show that causal claims are grounded in any respectable, defensible process. It is only meant to explain the psychology behind our causal expectations.
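The “Pavlovian” account can be caricatured in a few lines of code. The sketch below is purely illustrative (nothing like it appears in Hume, of course): a toy learner whose “expectation” that B follows A is nothing but a running tally of past pairings. All names here are hypothetical.

```python
class AssociativeLearner:
    """Toy caricature of Hume's account: expectation is just habit,
    a frequency formed by repeated conjunctions of events."""

    def __init__(self):
        self.pairings = 0  # times B followed A
        self.trials = 0    # times A occurred at all

    def observe(self, a_occurred, b_followed):
        """Record one episode of experience."""
        if a_occurred:
            self.trials += 1
            if b_followed:
                self.pairings += 1

    def expectation(self):
        """Subjective expectation that B follows A, grounded
        only in past frequency -- no reasoning, no mechanism."""
        return self.pairings / self.trials if self.trials else 0.0


# A hundred mornings: the drum beats, the soldiers muster.
learner = AssociativeLearner()
for _ in range(100):
    learner.observe(True, True)
print(learner.expectation())  # 1.0 -- maximal confidence from mere conjunction
```

Note that the learner assigns maximal expectation to any constant conjunction whatsoever; it has no resources for telling the drum apart from the clock. That is exactly the feature Kames’s garrison example (below) exploits.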
Lord Kames, countryman and kinsman of David Hume, did not think this psychological account was good enough, and he raised a counterexample to the claim that constant connections breed causal associations:
In a garrison, the soldiers constantly turn out at a certain beat of the drum. The gates of the town are opened and shut regularly, as the clock points at a certain hour. These facts are observed by a child, grow up with him, and turn habitual during a long life. In this instance, there is a constant connection betwixt objects, which is attended with a similar connection in the imagination: yet the person above supposed, if not a changeling, never imagined, the beat of the drum to be the cause of the motion of the soldiers; nor the pointing of the clock to a certain hour, to be the cause of the opening or shutting of the gates. He perceives the cause of these operations to be very different; and is not led into any mistake by the above circumstances, however closely connected. (Kames 1751)
The child ends up smarter than his experience would suggest. How is he able to sort out the correlations from the causations? In reply to Kames, Hume could claim that the child is able to make the distinction because – once or twice – he has perhaps witnessed the drums beating without the troops mustering, or the gates opening or shutting at odd hours. And what if he hasn’t? Still, he might be able to see the events as only correlated because he has explored the barracks, the drum, the clock, and the gates, and he has found no mechanical links among them. This matters, because he has become otherwise accustomed to expect there to be spatially proximate, mechanical links between causes and effects, at least in events of this kind (“this kind” being correlations among bodies’ behaviors that are not alleged to be explicable through magnetism or gravity or (for us today) quantum spookiness). Indeed, in the Treatise, Hume insists that when we take ourselves to find a causal connection between events, we observe that the events “are contiguous in time and place, and that the object we call cause precedes the other we call effect” (1.3.14). The boy, perhaps, has found the correlated events to be spatially isolated – no links bridging them – and let’s throw in for good measure that perhaps he has also observed that the temporal relations are not as constant as one would otherwise expect among events that are really causally related.
But Kames, I expect, would have further complaints. Don’t we occasionally experience what sure seem like failures in mechanical explanation? We set up a perfect Rube-Goldberg contraption, push the first domino, and then what we believed must surely ensue does not. Indeed, don’t we encounter such causal disappointments just as frequently as we encounter correlated events that we are not supposed to think of as causal? The common course of life certainly suggests so. But if this is so, how on Hume’s account could we ever come to reliably sort out one kind from another? Why aren’t we far more confused than we are?
The upshot of this line of objection is that we end up knowing more about the world than we would if our knowledge were just a result of passive observation. Somehow, out of our experience, including our language and culture and education, we are able to form inner models of the world. In those models there are representations of what kinds of events are causally linked and which are not. Models can be mistaken, of course, and we can get causal explanations very wrong. But these models are not made automatically upon successive viewings of the passing show. Experience does not carve a model into our mind in the way a stream of water carves a canyon into rock. A model is an act of creative invention on our part, and it contains much more information than experience itself provides.
(Both Kant and Popper recognized this, by the way. But while Kant held that some components of the model are fixed, imparted to the model by the structure of the human mind, Popper regarded everything as negotiable.)
I wonder, though, why Hume was so attracted to such a simplistic view of our understanding. It may be that he could not see a way to attribute anything more complicated to the mind without bringing on the worry that he was making the mind supernatural. Nature as he knew it could produce an organism that is rudely shaped by experience in the way he describes. But how can nature produce a model-creating mechanism? Today we don’t worry about that question – not as much as we should, I think – but perhaps in Hume’s day the ability to create complex inner models that went beyond the elements of sensory experience had to be seen as something supernatural. Before you know it, there would be talk of souls, and Hume did not want to see talk drifting in that direction. Better an overly simple mechanism that nature can produce than a fancy one nature can’t, if what you’re trying to do is build a broadly nature-bound epistemology. Then you can hope that custom, habit, and culture will fill in any missing structure.
Or maybe I’m wrong to think that individual minds generate models, and Hume is right to look to larger cultural entities and traditions as the generators of models. When Hume claims that custom or habit is what leads us to expect causal regularities, he might be saying that our expectations – our models – are results of training and education and not results of individuals’ abilities. A Humean Adam, with no one around to teach him, would have no expectations for the future. It takes a society for there to be individuals with some kind of shared model of the world that goes beyond each individual’s own experience. That’s an interesting idea.
Jonathan Israel, A Revolution of the Mind (Princeton UP, 2010).
This book is based on lectures Israel gave at Oxford in 2008 in honor of Isaiah Berlin. The overall aim is to show how modern democracy emerged from the tension between Moderate Enlightenment and Radical Enlightenment.
The chief maxim of Radical Enlightenment is “that all men have the same basic needs, rights, and status irrespective of what they believe or what religious, economic, or ethnic group they belong to, and that consequently all ought to be treated alike, whether black or white, male or female, religious or nonreligious, and that all deserve to have their personal interests and aspirations equally respected by law and government” (viii). The four major founders of Radical Enlightenment were Descartes, Hobbes, Bayle, and especially Spinoza. The Moderate Enlightenment (featuring thinkers like Hume, Smith, and Voltaire) denies such thorough egalitarianism, conceding that a great many of us need to be ruled by others, though its thinkers do believe effective checks must be placed on these rulers (especially those who pretend to rule over religious doctrines).
Israel offers a provocative metaphysical difference between the Radicals and Moderates:
Beyond a certain level there were and could be only two Enlightenments – moderate (two-substance) Enlightenment, on the one hand, postulating a balance between reason and tradition and broadly supporting the status quo, and, on the other, Radical (one-substance) Enlightenment conflating body and mind into one, reducing God and nature to the same thing, excluding all miracles and spirits separate from bodies, and invoking reason as the sole guide in human life, jettisoning tradition. (19)
The fundamental question is whether an ideal society can be based upon purely secular, monistic reason. Or must there also be a second substance representing authority and tradition, whether through religion or the state – for the purpose of crowd control, at least? How much can reason do?
To my mind, Israel’s distinction between Radical monists and Moderate dualists parallels a distinction among historians regarding the role of ideas in explaining historical change. Though Israel is not Hegel, he clearly thinks philosophy is a significant contributor to social change – it lends “form and a sharp edge to a powerful emotional upsurge of deeply felt poetic and dramatic aversion to oppression” (88). Other historians think the head plays a much smaller role, and they turn instead toward less rational forces, such as those provided by economics, social structures, and historical accidents. Again: how much can reason do? According to Israel, it provides the central plot; according to others, it is more or less epiphenomenal. At issue in both distinctions is the relevance of ideas.