A potted history:
I believe Peter Sloterdijk is right that the Enlightenment has been followed by philosophical cynicism, or an impressive array of natural knowledge unaccompanied by any faith in providence. The U.S., which became the dominant intellectual and cultural force in the course of the 20th century, was well-suited to put this cynicism to work: for America was built upon a pragmatic, “can do” attitude, and seemed ready to let expediency drive ideology. (There are probably interesting connections here to Protestantism and Holland of the 17th century.) And so there arose on American shores the fulfillment of the German idea of a research university, with its faculty as a specialized workforce and its students as Model-Ts rumbling down an assembly line on which three credits of this and three credits of that are bolted on to each chassis.
Each academic discipline became a guild or union, where membership is tightly controlled and guild members insist on their indispensability to the general curriculum. New disciplines created their own means of controlling membership and making cases for their newfound indispensability.
As unions generally lost power and new models of management were developed in the last third of the 20th century, the university also experienced a shift in authority from the faculty to the administration. In the names of efficiency and accountability, administrators deployed numerous measures for evaluating faculty “productivity”; and the nature of these measures encouraged faculty to entrench themselves more firmly in their respective guilds.
In the case of philosophy, this meant (1) more attention devoted to narrow problem-solving activity rather than efforts to deepen philosophical wonder; (2) increasingly narrow specialization and less general knowledge of the discipline itself and its history; (3) less engagement with anyone outside the professional guild; and (4) development of various cants and shibboleths to patrol membership in the guild.
What to do? (Provided, that is, that one is inclined to see these results as problems!)
Most academic philosophy departments see themselves primarily as housing a specialized academic discipline, and contributing only incidentally here or there to a university’s general education curriculum. The priority needs to be reversed. Frankly, there is little or no need for specialized academic philosophy; if it disappeared overnight, the only ones who would notice would be the practitioners themselves. But on the other hand, despite the occasional iconoclastic polemic saying otherwise, there is a widespread recognition that philosophy provides a valuable contribution to the mind of an educated person, even if the person is not working toward a degree in the field. Philosophy professors need to see their primary job as enriching the mental lives, values, and discourses of non-philosophers. For almost everyone, we should be a side dish rather than the main course. That is where our societal value lies.
Now it can be argued that in order to do this well, philosophers also need opportunities to continue to learn and grow: they too need the chance to “geek out” with fellow philosophers through publications and conferences. And, where there is both talent and motivation, some philosophers will manage to advance our very old and rich discipline. But genuine advances in philosophy will not happen with the frequency of advances in younger and more technological disciplines, like computer science and chemistry. Genuine advances in philosophy are as few and far between as are the geniuses of the 17th, 18th, and 19th centuries. For most of us most of the time, our primary job is to enlighten the masses.
If philosophy reconceived itself along these lines, graduate training in philosophy would look very different. Right now, the usual aim is to equip each student for intensely critical interaction with a vanishingly narrow band of specialists. (Typically, these PhDs are then hired to teach very broad undergraduate classes – an assignment for which, of course, they are wholly unprepared.) But if my proposal were adopted, these candidates would be trained to engage meaningfully, fruitfully, and philosophically with a wide range of people lacking expertise in philosophy. They would be required to write not dissertations, but books that could meaningfully inform the lives of their fellow citizens. This would be the norm rather than the now-celebrated exception. Philosophy would move out of the tower and back into the agora.
I can hear the complaint: “But there are many really smart people who are now attracted to philosophy’s narrow and difficult questions, and wouldn’t go into the discipline at all if they instead had to ‘dumb down’ their efforts for bigger audiences.” I grant the objection, and have three responses:
- First, it seems to me that these smart people might be able to find as much enjoyment working through equally difficult abstract problems in other fields – fields in which solving the problems would have more impact on more people. Smart problem-solvers are in demand all over the place.
- Second, there would still be room in the discipline for some really smart, narrow specialists, even if most of the room were given over to the broader task I’m recommending. Right now, of course, all of the room is reserved for narrow specialists – and that just doesn’t seem sensible, especially given the nature of the great majority of teaching jobs that exist.
- And third, I bet that for every person who is drawn into philosophy because of an inordinate enthusiasm for tight and narrow problems, there are ten really smart people who turn away from the discipline because there is no current opportunity for tackling broad and deep questions, and bringing them to the attention of wider audiences.
It would take some courage for philosophy as a discipline to make this move and “demean itself” by talking to broader audiences. It might seem like some sort of admission of defeat. But in reality, I think this move would be greeted very enthusiastically by a lot of educated people who have become increasingly disappointed in academic philosophers’ refusal to connect with people other than themselves. Moreover, it might encourage other disciplines in the humanities and social sciences to follow our lead, and recall their original purpose: to enlighten, deepen, enrich, and complicate the minds of human beings from all walks of life.
I went for a long bike ride yesterday. At the start I was just rolling along, letting my mind wander, and taking in the sights:
• kids selling lemonade,
• a well-kept garden,
• a Rat Patrol-style jeep with a Gatling gun perched on top and three guys wrapping it in plastic,
• an interesting older red pickup for sale, …
Wait… whaaaaa? So I had to turn the bike around to investigate.
Sure enough, the jeep was painted drab army green, with faux motor pool numbers stenciled on the side. It was on a transport trailer, and looked to be in near-new condition, so it was probably being sent off to a customer somewhere. On the passenger side, just outside the vehicle, was a rifle holster, with a nasty black armament in the holster. And hard at work were three guys wrapping the jeep in plastic (I suppose to protect it against kicked-up stones on the road).
I learned recently that bolting a machine gun to the top of a moving vehicle was the idea of George S. Patton, he of pearl-handled revolver fame. He first used the weapon in the Pancho Villa Expedition, strapping a gun atop a 1915 Dodge, racing up behind the enemy, and blasting away.
“So…,” I said to the guys, “what’s going on up top there?” I gestured toward the Gatling gun. It had six long round barrels, two pistol-grip handles, and a big red button.
They kept wrapping. Their leader eventually explained, “It’s for crowd control.”
“It’ll do that,” I agreed. “But is it real? I mean, it isn’t, right? It has a big red button on it, and nothing real has a big red button on it.” I was rapidly reaching the end of my knowledge of weaponry.
“It’s real. I added the big red button myself; it didn’t come with that. It delivers (x hundred? y thousand?) rounds per minute, something, something, something.” He added, “It shoots CO2 pellets.” So it was in fact a very badass BB gun. It wouldn’t kill, but it would definitely disperse a crowd. The other two guys kept wrapping and did not acknowledge my presence.
Now I’ve lived in the rural west long enough to know not to ask questions to which you don’t want to know the answers. So I said, “Well, it sure looks cool. You guys have a good day,” and pedaled off. The encounter gave me plenty of material for thinking over my ride. My guess is that this guy, out of his home, equips vehicles with weaponry, under contract with – well, with whom? Probably not with municipal police units, since the jeep was done up to look federal, and those agencies like to keep it clear who is doing the shooting. Possibly with paramilitary groups, of which there seem to be increasing numbers. Or possibly a low-GDP foreign government? Mexico? Puzzling, the things one sees from time to time. I’ve got to admit, though, that thing was cool.
Burton Dreben (1927-1999) was a Harvard professor whose influence upon academic philosophers has been great, despite a paucity of publications. Indeed, his influence has been so strong that some people refer to his students as being “Drebenized”, or molded in the form of the master. His main areas of interest were logic and the thought of Wittgenstein, Quine, Frege, and Carnap.
I heard Dreben lecture once – it was on Frege’s notes on Wittgenstein’s Tractatus – and found him to be funny, smart, and captivating. He lectured simply, with only the texts before him, and he shared his unscripted thoughts with force and clarity. He easily defended himself against acute criticisms raised by my professors, whom I held in a kind of terrified reverence. An anecdote shared by another philosopher pretty much captures my recollection of Dreben’s style of repartee:
[Michael Dummett] had just delivered a lecture on Wittgenstein on logical necessity. Dreben arose excitedly to disagree with the interpretation. “But Burt,” Dummett said, “you think all this stuff is nonsense.” To which Dreben replied, “No, no, no, no, no! . . . Well, yes.”
I can easily see the allure of being Drebenized. What fun it must have been to learn from such a clever and funny man!
There was a larger discussion of Dreben’s thought and influence on Leiter’s blog some years ago, and I am certainly in no position to add to it. But I would like to reflect for my own purposes on the meaning and truth in one of Dreben’s more notorious declamations: “Philosophy is garbage. But the history of garbage is scholarship.”
Philosophy is ridiculously hubristic. It is an attempt to get at the deepest meanings of things, to grasp that which ultimately and finally is, to comprehend not just what happens to be but what must be, and to draw from these grand truths a vision of how human life should proceed. Anyone trying to do this has to begin by presuming that there is some final account of things, and also that the human mind is capable of coming to know it. Both presumptions are unwarranted; and the implausibility of the second presumption undercuts any justification for believing the first one. Who are humans to presume to know such things? While we are so very clever at manipulating objects and forging tools and constructing strategies, there is little reason to think our brains have evolved for the purpose of understanding Ultimate Truth. Our brains have evolved for the simple purpose of getting by well enough to reproduce. That salutary end can be achieved with minds that are good only for small and local things. Even the notion that there is some Ultimate Truth could be completely misguided. There is no guarantee that the universe must obey what we convince ourselves to be logically necessary.
Even if I am wrong about this – if it turns out there is an Ultimate Truth, and humans in principle can come to know it – then it must be admitted at the very least that it is really, REALLY hard to get to that truth. Given our propensity to mess up in comparatively lower-level cognitive tasks (consider the reliability of operating systems, and the multitudinous failures of bureaucratic institutions), it should be no surprise that so far no one has really come up with a thoroughly compelling philosophy. David Hume provides a just observation:
It is easy for a profound philosopher to commit a mistake in his subtile reasonings; and one mistake is the necessary parent of another, while he pushes on his consequences, and is not deterred from embracing any conclusion, by its unusual appearance, or its contradiction to popular opinion. (Enquiry, sec. 1)
Any confidence that, with careful enough thought, we can attain a vision of the True must be weighed against our track record of making the most elementary conceptual mistakes at the outset of any theorizing. We can place on top of that the ingenuity of other philosophers in coming up with compelling objections and devastating counterexamples to claims that might very well have been true. Even if we came across the truth, it would be a miracle if that genuine insight survived our very clever criticality. In the end, if in fact we have buried within us what philosophy would require, then that capacity is so tenuous and frail that the smart money is on humanity’s persistent failure in coming to know anything of metaphysical significance.
(I know I’m not presenting much of an argument here. It’s really only an expression of what Mickey’s father, in Hannah and Her Sisters, says in fewer words: “How the hell do I know why there were Nazis? I don’t know how the can opener works!”)
But – for all that – I must confess that it is fun and instructive to read attempts by other philosophers to get at big truths. All right: it’s not fun and instructive for everyone. It’s a genre of literature (fiction? nonfiction?) that has its following. And these followers are improved in several ways by their enthusiasm. The literature of philosophy provides ample material for training critical reading and interpretation. Reading Carnap, and reading Quine, and tracing exactly how they talked past one another (as Dreben did) requires extraordinary care in reading, in forming apt diagnoses, in testing interpretations against one another, and in expressing with precision what is going on.
Moreover, as we try to place great historical philosophers in their times and cultures, we can learn in a general way how efforts at philosophy are shaped by circumstance. No one writes in a vacuum, of course, though Descartes and Spinoza tried. As we come to understand how each philosopher is rooted in some historical period, we come to understand how the philosophy that is generated is an existential reflection on that period. We see, that is, how humans have wrapped their minds around the universe in specific times and places. This in turn gives us more to think about as we craft our own responses to our own times and places. It is really the same insight one gains through travel: seeing how strange other places are helps us to see how strange our own place is. There is some self-knowledge in this, a kind of philosophical humbling, which I believe contributes to a deeper sympathy toward the thoughts of those with whom we disagree.
So, yes, philosophy is garbage. But the history of this garbage is something worth pursuing with scholastic intensity.
Émile Bréhier, The Philosophy of Plotinus, translated by Joseph Thomas (University of Chicago Press, 1958)
The history of philosophy does not reveal to us ideas existing in themselves, but only the men who think. Its method, like every historical method, is nominalistic. Ideas do not, strictly speaking, exist for it. It is only concrete and active thoughts that exist. The problems which philosophers pose and the solutions they offer are the reactions of original thought operating under given historical circumstances and in a given environment. It is permissible, no doubt, to consider ideas or the representations of reality which result from these reactions in isolation. But thus isolated, they are like effects without causes. We may indeed classify systems under general titles. But classifying them is not giving their history (p. 182).
A true philosophical reform, such as that of a Socrates or of a Descartes, always takes for its point of departure a confrontation of the needs of human nature with the representation the mind forms of reality. It is the sense of a lack of correspondence between these needs and the representation which, in exceptionally endowed minds, awakens the philosophical vocation. Thus, little by little, philosophy reveals man to himself. It is the reality of his own needs, of his own inclinations, which forms the basis of living philosophical thought. A philosophy which does not give the impression of being indispensable to the period in which it appears is merely a vain and futile curiosity (pp. 183-4).
A recent post on the internet has outed Neil deGrasse Tyson (or “NdGT,” as he’s been dubbed by the blogosphere) as a philistine in matters of philosophy. True enough: as charismatic as he is, and as beneficial as his public service has been in bringing the wonders of modern science to a big audience, he does appear to be one of those scientists who imperiously dismiss philosophy as a pointless endeavor without appearing to have any clear idea of what philosophy actually is.
(For background, the relevant discussion comes up between minutes 20 and 24 in the Nerdist interview between NdGT and Chris Hardwick. Now, in defense of the Nerdist, the interview is meant only as light entertainment, and it just happened to wander into a dead-end topic. Arguably, they aren’t talking as much about real philosophy as they are talking about pointless verbal activity. But it is also true that the distinction seems lost on all involved – and hence the fitting charge of philistinism.)
I heartily applaud NdGT’s general efforts at popularizing science. My family and I have watched the entire Cosmos series, and while I think the older series had the distinct advantage of Carl Sagan’s masterful prose, this newer series has its own kind of charm (and much better effects). I confess that early on I bristled at the show’s dumbed-down and misleading accounts of the history of science. (The Renaissance Mathematicus cheerfully dishes up the necessary criticisms and correctives on Giordano Bruno and on Robert Hooke.) But after some reflection I realized that the producers put these segments in simplistic cartoon form for good reason: namely, to advertise up front that they were providing only a cartoon version of history. And if the series’ objective is to get kids interested in science, then maybe it’s okay to sacrifice truth for the sake of a good story. So far as that goes, the scientific accounts they give are also oversimplified, and that’s okay too. First get the kids interested, and let the details get sorted out later. As somebody once said, teaching is strategic lying. If you tell a full and accurate story up front, you’ll only have an audience that didn’t need to be reached in the first place.
So: good on you, NdGT (and producers of Cosmos), and I hope many kids feel wonder for nature as a result of your efforts. But one also wonders whether these laudable ends might be achieved without ignorantly dismissing other ways of understanding the fascinating and wonderful elements of human experience.
We expect that causal laws will be the same across all experience. Hume famously claims that this expectation is grounded neither in pure reason nor in experience. Not pure reason: for one can posit a cause and deny the effect without contradiction. And not in experience: for all experience can ever show is what we have observed in the past, and that information does not by itself tell us how to generalize upon it. We could generalize that causal laws will remain uniform; or we could generalize that the universe will go completely wonky from this date forward. Neither inference follows validly from what we have observed, and so they are in this sense equally nonstarters. Past performance is no guarantee of future results, as the saying goes.
Hume tries to find a way to explain why it is that, despite all that, we end up expecting causal laws to be constant. Strange as it sounds, the explanation he advances is itself causal. We become used to the causal patterns of the world, or conditioned by them through repeated associations, and so we come to subjectively expect causal patterns to continue. (This isn’t as paradoxical as it sounds. The salient fact about us, that we make causal generalizations, is also itself a generalization, and we expect to continue to generalize in the future as we have in the past. We are conditioned to expect continued conditioning.) We might well call Hume’s explanation the “Pavlovian” account of causality. It is meant precisely not to show that causal claims are grounded in any respectable, defensible process. It is only meant to explain the psychology behind our causal expectations.
Lord Kames, countryman and kinsman of David Hume, did not think this psychological account was good enough, and he raised a counterexample to the claim that constant connections breed causal associations:
In a garrison, the soldiers constantly turn out at a certain beat of the drum. The gates of the town are opened and shut regularly, as the clock points at a certain hour. These facts are observed by a child, grow up with him, and turn habitual during a long life. In this instance, there is a constant connection betwixt objects, which is attended with a similar connection in the imagination: yet the person above supposed, if not a changeling, never imagined, the beat of the drum to be the cause of the motion of the soldiers; nor the pointing of the clock to a certain hour, to be the cause of the opening or shutting of the gates. He perceives the cause of these operations to be very different; and is not led into any mistake by the above circumstances, however closely connected. (Kames 1751)
The child ends up smarter than his experience would suggest. How is he able to sort out the correlations from the causations? In reply to Kames, Hume could claim that the child is able to make the distinction because – once or twice – he has perhaps witnessed the drums beating without the troops mustering, or the gates opening or shutting at odd hours. And what if he hasn’t? Still, he might be able to see the events as only correlated because he has explored the barracks, the drum, the clock, and the gates, and he has found no mechanical links among them. This matters, because he has become otherwise accustomed to expect there to be spatially proximate, mechanical links between causes and effects, at least in events of this kind (“this kind” being correlations among bodies’ behaviors that are not alleged to be explicable through magnetism or gravity or (for us today) quantum spookiness). Indeed, in the Treatise, Hume insists that when we take ourselves to find a causal connection between events, we observe that the events “are contiguous in time and place, and that the object we call cause precedes the other we call effect” (1.3.14). The boy, perhaps, has found the correlated events to be spatially isolated – no links bridging them – and let’s throw in for good measure that perhaps he has also observed that the temporal relations are not as constant as one would otherwise expect among events that are really causally related.
But Kames, I expect, would have further complaints. Don’t we occasionally experience what sure seem like failures in mechanical explanation? We set up a perfect Rube Goldberg contraption, push the first domino, and then what we believed must surely ensue does not. Indeed, don’t we encounter such causal disappointments just as frequently as we encounter correlated events that we are not supposed to think of as causal? The common course of life certainly suggests so. But if this is so, how on Hume’s account could we ever come to reliably sort out one kind from another? Why aren’t we far more confused than we are?
The upshot of this line of objection is that we end up knowing more about the world than we would if our knowledge were just a result of passive observation. Somehow, out of our experience, including our language and culture and education, we are able to form inner models of the world. In those models there are representations of what kinds of events are causally linked and which are not. Models can be mistaken, of course, and we can get causal explanations very wrong. But these models are not made automatically upon successive viewings of the passing show. Experience does not carve a model into our mind in the way a stream of water carves a canyon into rock. A model is an act of creative invention on our part, and it contains much more information than experience itself provides.
(Both Kant and Popper recognized this, by the way. But while Kant held that some components of the model are fixed, imparted to the model by the structure of the human mind, Popper regarded everything as negotiable.)
I wonder, though, why Hume was so attracted to such a simplistic view of our understanding. It may be that he could not see a way to attribute anything more complicated to the mind without bringing on the worry that he was making the mind supernatural. Nature as he knew it could produce an organism that is rudely shaped by experience in the way he describes. But how can nature produce a model-creating mechanism? Today we don’t worry about that question – not as much as we should, I think – but perhaps in Hume’s day the ability to create complex inner models that went beyond the elements of sensory experience had to be seen as something supernatural. Before you know it, there would be talk of souls, and Hume did not want to see talk drifting in that direction. Better an overly simple mechanism that nature can produce than a fancy one nature can’t, if what you’re trying to do is build a broadly nature-bound epistemology. Then you can hope that custom, habit, and culture will fill in any missing structure.
Or maybe I’m wrong to think that individual minds generate models, and Hume is right to look to larger cultural entities and traditions as the generators of models. When Hume claims that custom or habit is what leads us to expect causal regularities, he might be saying that our expectations – our models – are results of training and education and not results of individuals’ abilities. A Humean Adam, with no one around to teach him, would have no expectations for the future. It takes a society for there to be individuals with some kind of shared model of the world that goes beyond each individual’s own experience. That’s an interesting idea.
The term has come to a close and I fall into despair over my failings as a teacher. (My wife tells me this is routine.) My despair is not anything so noble as feeling that I have fallen short of an expectation that I would turn each young mind into a firestorm of intellect. It is the darker conviction that I have wasted everyone’s time, humiliated myself, and presented a charade of learning. The students are glad to be rid of me, and I them, and we are each fully justified in feeling so. There is a handful of exceptions, but in these cases I feel as if we just fell into sympathy with one another, and we are confusing that sociability with genuine learning. (If a student is reading this, don’t worry, I liked you.)
It seems especially bad this term because I entered into it with such high hopes of success. I had devised a new approach, requiring loads of preparation on my part; but, alas! It turned out no better than before. I will probably, again, get relatively high scores in student evaluations, but this is like a glass of vinegar capping off a wretched meal. I know what happened. I was there. Positive evaluations will do nothing but alter my memory of the experience.
So, before the spell wears off, I will record what meager observations have come to mind. Some of these I already practice in some degree, but need to do better.
1. Each class must focus on a live question that has no easy answers. Learning how to read and understand texts is important, but not as important as experiencing the tension of a genuine philosophical problem. Oversimplify and exaggerate, if that is what it takes. If these problems aren’t emerging from a text, drop it.
2. That being said, when we are studying a difficult text, devote more time to working through it line by line so that students learn how to read such things. So far in their lives, they probably have never done it.
3. I should assume my students are two school years lower than they are (college freshmen are high school juniors, college seniors are college sophomores, etc.). It is not that students are dumb, or getting dumber; it’s just that as I advance in years, I lose touch with what it was like to be them, and end up presuming too much. (In another ten years, I’ll up it to three years.)
4. One good, well-conceived example is better than three developed on the fly.
5. Be sure to allow time for amusing tangents and asides, but these should be deployed like rich and tasty treats – overdoing it makes everyone sick.
6. Speak to individuals, and never to the class as a whole. Indeed, it turns out there is no such thing as “the class as a whole”. It is a fiction developed by boring lecturers. There are only ever individuals.
7. (This one I learned a long time ago.) Never ask, “What does everyone think of that?” or other such open-ended questions. Only ask questions that might have wrong answers. (Not that anyone should pounce on those answers as wrong, of course.)
8. In big classes: insert random elements (again, sparingly). This could mean occasionally sitting down among the students while lecturing; it could mean random cartoons or art works inserted into PowerPoints; it could mean leaving the class, if using a microphone, but continuing to lecture. All of this should be done without any explanation – though, on the other hand, if it magically aligns with a relevant point, so much the better. These random elements are meant to inject some unpredictability into something otherwise utterly tedious for both teacher and student. They also subtly challenge the absurd forum of the classroom.
I’m just ending my second foray into academic administration. The first one was serving as department head over a department including philosophy, communications studies, and all of our foreign language programs. It was a terrific exercise in mental and emotional flexibility – at one point I was adjudicating a dispute between a faculty member and a staff assistant while also trying to plan the curricular offerings in French while also teaching early modern philosophy while also …. Luckily, my colleagues were very supportive and forgiving of my mistakes. Still, at the end of my service, I posed myself the question, “What if the dean gave you the choice of (a) staying on for one more year or (b) sticking your hand in a garbage disposal?” and I found myself trying to estimate just how much damage a garbage disposal would do to a hand and how long recovery would take (less than a year? would I get good drugs?). Happily, the choice was never presented to me.
Now I’m finishing up foray #2, having served as an associate dean. This assignment was loads easier. No personnel issues. Mainly, my job has been to go to meetings, answer emails, serve on committees, go to meetings, put people on committees, answer emails, go to meetings…. A lot of my work has focused on academic issues like the structure of general education, the overall shape of the college’s curricula, procedures for fairly and meaningfully evaluating faculty, and so on. This is all interesting stuff (to me), and I’ve learned a lot, and I think we made some real contributions. But now, as I transition back to teaching & scholarship, I’m realizing that one’s mind can be wholly dedicated in different ways. In administration, the whole mind is dedicated to organization, procedural justice, political strategy – I think of this as a broad multilateral engagement. In teaching and scholarship, the mind is wholly dedicated to bringing order and significance to a range of questions that go far deeper – I think of it as deep multilateral engagement.
The deep engagement is a LOT harder and more exhausting than the broad engagement. When it goes well, it is also more fulfilling; and when it doesn’t, it occasions utter despair. I’m guessing this is because more of one’s self is being put on the line – in the classroom, or on the page in one’s writings. Failure reflects, somehow, on the depth and structure of one’s own soul (to dramatize just a bit). If I assemble and present something I take to be important, and it brings only yawns or silence, then (unless I know I was only faking it) I can only conclude that either I or my audience has failed in taking proper measure. Neither conclusion is a happy one. On the other hand, if what I present in a class is greeted with enthusiasm, then at least everyone involved is failing in a similar direction, and that’s not half-bad (indeed, as good as it gets, in my experience). Companionship softens the self-loathing of incompetents.
(Hmm; I didn’t know this tour was going to stop at that spot.) Anyway, I wanted to make a brief listing of some observations made during this second foray. In no particular order:
1. When administrators take any action, they are almost always in a very tight spot. Generally, seasoned administrators try to change as little as possible, under the reasonable suspicion that any change to a system brings all manner of unintended consequences. (Greener administrators, alas, have yet to learn this, and in their ambition can cause great problems.) This means that when there is a change, one should always look for the deeper and more compelling story – the one that makes you say “Ah! That makes sense” – and not just follow convenient rumors.
2. The further up the ladder you go, the less connection there is to anything of academic interest. Maybe this is just what you’d expect. But it is startling sometimes to listen to high-level discussions by people who seem only dimly aware that there are classes being taught, and that items on CVs might refer to intrinsically interesting things. Our university president, who is a decent man, seems only dimly aware of the academic side of campus, as he spends almost all of his days dealing with legislators, donors, and lawyers.
3. Rarely, one finds an academic administrator living an active life of the mind while also administrating – these creatures are valuable beyond any telling, and should be treasured.
4. Vice-presidents very often see the university centered around them, and expend great energy trying to get everyone to adopt their concerns. I guess it’s their job, but it leads to a lot of rear-guard, defensive maneuvering by deans and associate deans to try to maintain resources that will otherwise get sucked up into the building of little kingdoms. In sum: beware the ambitions of vice-presidents.
5. It is also startling to see the consequences of over-specialization in our disciplines, especially in the humanities. This is what makes general education such a difficult and thankless task. I don’t regard myself as well-educated, but out of guilt I have been working to become well-educated for several decades now (a work still very much in progress). But now I encounter junior colleagues who not only do not have this guilt, but sometimes do not seem to be aware of missing anything – “I’m not supposed to know anything about that, am I?” But I’ll leave it at that lest I give over to excessive old man grumping.
6. I believe it is good for academics to take a turn in administration. It helps them to see how institutions function, and to befriend the people in the offices; it helps them to gain a broader picture of how universities operate, and where they fail; it helps them as individuals work more efficiently, given firmer pressures on schedules. And I think it is good for those turns to be limited. Granted, from deans on up, it is good to have people with more extensive experience. But there are plenty of posts, like the ones I’ve had, that can be entered into and then left again, and from which much can be learned. It’s been a good turn for me; and I’m happy it’s over.
Reading Schelling, or even only about Schelling, helps us understand Hegel’s frustration when he called the philosophy of the Absolute “the night in which all cows are black.” The Absolute covers everything under the sky, and rather than illuminating anything, it portrays everything as the same and allows no difference. You can say what you like about it without worrying that someone will point out any exception to what you have said. It is an easy way to sound profound without actually having to say anything.
But let’s try to think with Schelling for a bit. That there is for us a possibility of even trying to refer to “the Absolute” is interesting. What do we try to refer to? It; all of it, and everything. The Absolute is the domain we want to hear about when we ask why there is something rather than nothing. For it is hard for us to think there is no final foundation for explanations; or if we manage to think that, then we feel we are thinking a nonfinality that is itself absolute. But who are we when we think the Absolute? Do we stand apart from it, or within it? Or do we contain it within our thinking?
Schelling says it is all these things at once. He thinks Spinoza does not grasp the full truth when he claims our understanding is an object fully within an Absolute order. He thinks Fichte is similarly wrong to insist that everything is contained within an Absolute subject. Instead, they both are right: the understanding, the absolute truth, the inner well of subjectivity, and the objective world order are all the same, and the great animus of the cosmos is to wrap our heads around that identity.
Our noblest human endeavors – science, art, and religion – are how we set about doing it. In science we try to capture nature as an object. But in that nature we find beautiful symmetries and harmonies that present to us the order of a divine soul. In art we strive to present the glorious sweep of our feelings, but in doing so we must resort to symbols, and the laws and relations governing symbols are not of our own invention, but exhibit an objective structure like what we find in nature. Just as a trip outside returns us inside, the trip inside sends us out again. And in religion, which Schelling either by circumstance or by his own predilection had to place on top of it all, we recapture the unity of the Absolute by finding the personal within the impersonal. At first through myths, and then through Christianity, we find that the cold and terrifying world is in fact someone we know and have known all along.
Perhaps more basic than the unity of the Absolute in Schelling’s thought is the stubborn opposition – and yet ultimate unity – of opposites. It is hard for me not to think of the German Idealists as Kantian versions of the pre-Socratics, and if Fichte is Parmenides (“All is One”), then Schelling is Heraclitus. Heraclitus may have held that what we know as reality is always in a state of flux between opposing forces. Everything is always on its way to something else; and yet “the road up and the road down are one and the same”, meaning that there is some deeper unity in this tension between opposites. In Schelling’s thought, the oppositions of objectivity and subjectivity, and dogmatism and criticism, animate his thinking and provide the power behind science, art, and religion. We strive for unity amid opposition – and that, in a single slogan, is what we do whenever we try to understand and represent. We try to capture in a snapshot what is essentially on the move. Yet this does not occasion despair in Schelling; it instead fuels the romantic endeavor to grasp what transcends our grasp.
An idle self-scolding:
1. Everyone – not all the time, but every once in a while – acknowledge the fact that human beings occasionally produce something noble, beautiful, or virtuous without oppressing anyone. Writing like Eeyore is not doing us any favors.
2. English faculty – it’s okay to get excited once in a while about silly things (Archie comics, zombie movies, Happy Meal toys), but it’s gotten out of hand; try to spend more time with grown-up material.
3. Philosophers – if you are working on problems that only interest people with PhDs in philosophy, there’s a decent chance you are spinning your wheels. Get real.
4. Historians – just because it hasn’t been studied before doesn’t automatically mean it’s worth studying. You’ve got to make the case that it’s worth someone’s time.
5. Classicists – You’re doing a great job; keep up the good work!
6. Everyone again – bitching about not getting enough respect is a losing strategy for getting respect. Write about something important in a way that lots of people can understand. If you get it right, you’ll get respect. (That’s right, by the way: we can get things right, and we can get them wrong – if, that is, we are actually saying anything. It’s not all just interpretation. What? You think I’m wrong about that? Good. Point made.)
Recently I sat in on a debate about gay marriage. It failed to achieve the noble end of giving everyone in the audience a more complete picture of the arguments and concerns surrounding the issue, but it did at any rate give me the chance to organize my own thoughts.
It seems to me that, as an institution, marriage has three goods attached to it. First, it recognizes the value we place on a loving, meaningful bond between people who declare their lifelong commitment to one another. Second, it recognizes the value we place on creating stable home environments in which children can be raised to become good people and good citizens. I’ll call these two goods the social goods of marriage, and I think rational people must agree that these goods can be met by gay couples as well as by straight couples. Clearly, gay couples long to have their loving, meaningful commitment to one another recognized through the institution of marriage. And while some people argue that the homes of gay couples are not the best place in which children can be raised, the evidence is very far from clear. Moreover, even if there were child-rearing disadvantages to gay marriage, those disadvantages could well result from the general resentment or resistance toward gay marriage that’s found throughout our society. And if this is so, then such evidence should only spur our efforts to ease that resistance, just as we are spurred to ease the resistance to mixed-race couples or single-parent households.
So the first two goods of marriage provide no reason against gay marriage; indeed, there is every reason to think that these goods can be promoted by allowing gay marriage. But the third good of marriage is different. Many people in our society view marriage as providing a religious good. In their view, God has instituted a holy bond between members of the opposite sex: man and woman complete one another in a profoundly important way that members of the same sex cannot. The fundamental, biological difference between the sexes exists because of a purpose God has for human beings, which is to wed, complete one another, and produce more human beings. Marriage is the societal recognition of this religious good, in addition to the two social goods, and from this perspective, making marriage available to same-sex couples in fact repudiates any societal recognition of this particular religious good.
I’m going to argue that religious people need to give way on this good, but first I want to stress that this religious good is in fact very important to a great many of our fellow citizens, and demanding that they give way on it is asking them to give up on something they value quite highly. This needs to be more widely appreciated, I think. It is not at all like asking an atheist to give way on prayers at school assemblies, or saying “so help me God” at the end of an oath. Having to kowtow to such religious observances is fairly small potatoes; they are very minor concessions in comparison to demanding that religious folks give way on their view of a divinely instituted bond between human beings. In fact, I can’t think of any other institution that has such a great and equal mix of social and religious significance. It is uniquely a big deal.
Okay; so why do I think that, despite this cost, the religious folks need to give way on this issue? I think they need to for two reasons. First, the cost of denying the social goods of marriage to gay couples is even greater than the cost to religious folks of giving way. This is becoming more and more obvious to a lot of people. As gay couples out themselves, and as there is greater public understanding of their love and commitment to one another, and their familiar humanity, it feels increasingly wrong to refuse to recognize their bonds through marriage. I’m taking this as a social fact. Second, if gay marriage is generally made available and legal, religious folks can still find a lesser way of saving some of the religious good they see in marriage. This lesser way is that religious institutions can always make a distinction between a secular marriage and one that is recognized in the church and through their ceremonies. (In fact, some religions do this already when they refuse to recognize the marriages, baptisms, and rituals performed in other religions or denominations.) This is not an easy fix; there are many complex ramifications that would need to be sorted out, just as there are between religious hospitals and insurance programs and the Affordable Care Act. But though tricky, this is the only way to go. There can no longer be any denial of the social goods of marriage to same-sex couples, and allowing religious institutions to preserve their own sense of marriage is about the best we can do in preserving the good they find in marriage.
When I started as an Assistant Professor, people started calling my kind of academic reading and writing “RESEARCH”, and they encouraged me to do the same. At least, this is how I remember it. It seemed to me strange and awkward, because in my mind “RESEARCH” required surveys at the very minimum, and perhaps also processing numbers and running statistics and boiling liquids and writing on clipboards. But I did (and do) none of these things. I read and read and read; and then I walk around in a daze for a while; and then I write; and read some more; and walk around some more; and write; and so on, until somehow someone publishes something. Repeat. That’s been my method; twenty years ago it seemed bizarre to call it “RESEARCH”.
Time erodes most opinions, and now I can only barely remember how bizarre it seemed to me so long ago. In fact, “RESEARCH” has become for me so watered down as to mean “those activities academics can document in such ways as to duly impress, or fail to unimpress, university administrators.” But recently I came across someone who insisted on making some distinction between “RESEARCH” and “SCHOLARSHIP” on some university form or other, and it got me thinking about whether there is indeed a distinction between the two.
I think there is. In my mind, and apart from my own professional cynicism, “RESEARCH” means finding out new stuff (mainly facts). Scientists do this as they run experiments and find hitherto unknown correlations or causal connections; social scientists, same; archival historians do this as they search through records and piece together what must have happened. But it seems to me that for the most part philosophers, and some historians, and scholars of literature – “humanists” – are not in this kind of business. I have seen plenty of cases where someone claims that their area of research is (say) virtue ethics, or late Renaissance literature, or how some people get relegated to the margins in canonical works. But for the most part this does not require discovering new stuff. It mainly requires looking at the stuff we have in new and (hopefully) interesting and revealing ways.
The ideal for such humanists is not research, but “SCHOLARSHIP”. To be a humanist scholar, one needs to read a great deal, think deeply and humanely about it, and pick up on interesting patterns or glaring exceptions to patterns commonly thought to exist. It is rare to find such scholars (a rarity I’ll try to explain next). Lately I have been finding lectures on YouTube by scholars like Robert Brandom, Peter Brown, and Anthony Grafton, and they are scholarly exemplars. They possess a synoptic knowledge across broad domains, and they have the intelligence to sort the significant from the insignificant and issue meaningful opinions about important matters, with fairness and grace.
Usually I listen to these lectures as I am doing my exercises. It’s maybe this association that leads me to identify a third kind of activity, in addition to research and scholarship, which I am calling “PUSH-UPS”. Of course, we do push-ups or sit-ups not because they are in themselves valuable (duh!), but because they improve our strength in some way. This presents the most charitable description I can think of for what many scholars are engaged in prior to embarking on either research or scholarship. I’m sure that many scientists are not really discovering new facts; they engage in the outward form of research activity, but what they discover is either spurious or too trivial to be dignified by the designation “new stuff”. Also, many humanists perform the outward activities of scholarship, but what they discover and write up is hardly interesting, compelling, or general. But what can be said favorably of such academics is that they are young. Hopefully, with time and confidence, they will become scholars or researchers. Right now, they are in some kind of disciplinary training, like push-ups, that will yield valuable results down the road.
That’s optimistic, of course. There is every likelihood that academics will get caught up in the illusion that their push-ups are in fact ends in themselves and they may forget or never come to know that there is genuine scholarship and research. Universities as institutions demand quick return on investment in their faculty, and that certainly promotes a lot of push-ups, and what we do repeatedly becomes habitual. If it weren’t for tenure we’d have practically no researchers or scholars, as no one would ever have the chance of advancing from training to the real thing. As it is, with tenure, some push-up skills mature into genuinely valuable work. The price we pay is that sometimes it doesn’t happen. But I can’t see any more effective way to promote scholarship and research.
I confess that I write this as someone who has done his fair share of push-ups. I can’t say I have become anything more than a rudimentary scholar. But I’m glad to see now the merit in a distinction I was encouraged to forget some time ago.
Wow, that is a mouthful, and the talk does feature some elaborate (if not baroque) constructions with complex concepts. But the underlying ideas are clear and powerful. Robert Brandom argues (his talk starts at about 9:44) that there may seem to be irreconcilable differences between a genealogical approach to philosophy (which identifies all of the social, historical, and psychological causes leading up to an idea) and a more purely rationalistic approach (which considers the reasons behind the idea). Basically, it seems like a thorough-going naturalism will make it impossible to take ideas seriously. But this only “seems” so. In fact, trying to sharply demarcate naturalistic causes from reasons in the formation of ideas rests upon a naive view of concepts and meanings. This is the point that Quine was making to Carnap, and Hegel was making to Kant. We should understand ideas as complicated structures that reflect both causes and reasons. In Hegel’s philosophy, this view means reading history in the way that judges of common law read the past as precedent – looking for a rational trajectory behind the mess of daily details, and making use of it for the future.
I found the lecture to be brilliant, and worthy of attention. Plus, Brandom looks like Dumbledore, which is cool. I wonder if grad students at Pitt call him “Brandomdore,” or maybe “Brandalf.”
Arthur Schopenhauer did not have much use for Fichte. He thought Fichte’s mistakes arose from the fact that Fichte did away with Kant’s realm of things in themselves, leaving human consciousness free to just spin in any direction without any friction from anything external to it. And, to make matters worse, he did so without any compelling reason, and in prose that left his readers baffled. In Schopenhauer’s words:
[Fichte] declared everything to be a priori, naturally without any evidence for such a monstrous assertion; instead of evidence, he gave sophisms and even crazy sham demonstrations whose absurdity was concealed under the mask of profundity and of the incomprehensibility ostensibly arising therefrom. (Parerga and Paralipomena, 1.13)
I think Schopenhauer discovered the way not to read Fichte – namely, as a philosopher arguing cogently for a conclusion based on reason and evidence. But then how should we read Fichte? My sense is that Fichte is best read as providing a metaphysics for a certain mood. I would like to try to articulate that mood, and then see how well his speculative and moral philosophy generates it.
Though we live in a cynical age, try for a moment to adopt the view of someone who regards utopia as possible, and perhaps even necessary. Forget for now that we live in a broken and fractured age, with conflict, injustice, inequalities, and absurdity. For a moment, try believing that all these fractures can be healed or repaired. How? Believe that through the dedicated use of our reason we can establish a political community, replete with a scientific understanding of nature, that can bring us into an understanding with one another and into living in harmony with nature’s boundaries and requirements. Through intelligence and justice, we can make whole what is now fractured. Having this belief, and having the temperament to act on it, is what I shall call the mood of Enlightened Optimism, and this is precisely the mood of Fichte.
Now what kind of metaphysics would generate such Enlightened Optimism? If what we see as now fractured can be brought into wholeness, then there must be a wholeness that is possible. Moreover, this wholeness is not merely accidental; it is not the case that it just so happens that there turns out to be a way to make everything whole. Rather, this possibility of wholeness is guaranteed. The wholeness is thus a pre-condition for our fractured world; the possibility of wholeness places constraints upon the kinds of fractures our world can have. Our world can be fractured only in such ways as wholeness is still a possibility. So this means that our disunity proceeds from a prior unity – not prior in time, but prior in possibility. Unity is ontologically more fundamental than disunity.
But then, if unity is fundamental, why should there be any disunity at all? Perhaps it is because the unity attains its own unity only through recognizing itself through something other than itself. That is to say, the unity marks off its own limits, and brings itself identity, through something other than itself. Thus the fracturing is the way for the unity to come to itself. In this fracturing the unity is in its own funhouse, catching true glimpses of itself alongside distortions, curved images, and broken images.
What this means is that our own consciousness is the striving to bring unity out of what is fractured. It is imperative for us to do so, for what we are arises out of the negotiation between unity and disunity that must eventually resolve itself in unity. Moreover, to retain our mood of Enlightened Optimism, we must not see ourselves as pawns in some cosmic game that is beyond our control. Instead, we must see the postulation of the original unity, to which we are returning, as our own postulation. That is, we freely, absolutely freely, posit unity. In our efforts to repair the fractures, and restore unity, we are fulfilling our absolute freedom.
Okay. You may now return to Earth. Schopenhauer was right to find much to complain about in Fichte’s philosophy. But insofar as such “crazy sham demonstrations” encourage us to believe that our freedom consists in restoring unity to a fractured world, it must be admitted that it might be just crazy enough to work.