Hegelian vs. Kuhnian idealism

[from an essay in progress on idealism]

What we have seen so far is that there is no observation of the world, and no understanding of it, without a theory. We have also met several idealists who believe, in varying ways and for different reasons, that theory is not just important, but really, really important: the most important knowledge we can have of our experience has more to do with theory than with anything else. Hegel, the greatest idealist of all, believed that human progress consists in further development of theory.

But, as thrilling as his vision is, it is hard for us to go the full distance with Hegel. It is true that our greatest triumphs of knowledge, in physics and engineering, in biology and medicine, in economics and public policy, are all made possible by advances in theory, and through the patient application of reason to experience. But the fact is that the foundation of Hegel’s vision – that in the end all our knowledge grows from a grand logic – seems to have been of no use to us whatsoever. The only scholars interested in fathoming the depths of Hegel’s most fundamental philosophy are historians who are only trying to make some sense of it. It could be, of course, that Hegel’s mind was simply more penetrating than any other mind since, and his logic should have made more of a difference; or it could be that Hegel got some things powerfully right, but made the mistake of trying to base them on a logic that was in fact only a philosopher’s dream.

What would Hegel’s philosophy look like without his grand logic? In the area of science, we would see old theories being replaced abruptly by new, radically different theories that offer deeper insight into the forces and laws of nature. In the area of history, we would see conflicts and tensions being worked out through dialogue and politics and – when these failed – war. But there would be no overall theoretical structure that governed these advances, or told them which way to go, so to speak. There would be only those of us on the ground, working things out as best we can, armed only with our insights and prejudices, our biases and our hopes, our blind spots and our misgivings. There would be no guarantee that we were heading in some special direction: there would be only our desire to fix what is broken, and to make a better situation for ourselves. Hegel’s view, minus the grand logic, begins to look startlingly plausible.

The resulting vision might be drawn from what Thomas Kuhn offered in his 1962 work, The Structure of Scientific Revolutions. The work is famous for introducing the twin terms “paradigm” and “paradigm shift,” which can now be found in just about everything humans set their sights on. A paradigm is a framework, a theory, or a shared vision of a complex system. A paradigm shift – or a revolution – occurs when one paradigm is replaced by another. Earlier historians of scientific revolutions viewed these paradigm shifts as rational transitions in which one theory offering better predictive power replaces an older one that just could not compete. Kuhn’s contribution – itself a paradigm shift among historians – was to see these shifts as not wholly rational. At times a better theory is available, but it is not adopted because the old theory is so well entrenched in existing practices and institutions. Other times a new theory is adopted even though, from the perspective of the old paradigm, it really does not offer any advantages. Anyone who really pays attention to history cannot see scientific revolutions as purely rational transitions. Human politics, economics, and psychology play very active roles.

Kuhn’s celebrated example is the Copernican revolution. The textbook account, still prevalent in many quarters today, is that western Europeans limped along for centuries with a Ptolemaic model of the solar system, with the earth at the center and the planets orbiting it in curlicue fashion (due to “epicycles,” or little circular orbits around a point which itself orbits the earth). Along came Copernicus in 1543, daring to suggest that the earth and the planets orbit the sun. Suddenly, the traditional story went, all of our observations of the planets made sense, and planetary positions in the night sky could be predicted without recourse to all those silly ad hoc epicycles. But this textbook account is wrong. It is true, of course, that the Ptolemaic model reigned for centuries, but for good reason: it offered accurate predictions of where we should see the planets each night, and so it was indispensable for navigators trying to cross oceans without sight of land. Ptolemy’s system, as presented in Johannes de Sacrobosco’s 13th-century treatise On the Sphere, was taught to would-be navigators well into the 17th century. Copernicus’s new model was not accepted or taught for generations – mainly because it was worse at offering predictions. But it was not rejected outright; mathematicians and astronomers kept tinkering with it until Kepler came along, replaced the circles with ellipses, and made it work.

If it was so bad initially, why on earth was Copernicus’s view not simply rejected? Basically, Kuhn’s explanation was that experts at the time decided that the overall “package” Copernicus offered – the new model, attended by new sorts of problems – was more interesting and more promising than the old theory, which over the centuries had basically gone about as far as it could go. The new theory offered interesting opportunities. It also was in line with broader revolutions sweeping Europe, which sought to dethrone Aristotelian authority in the churches and universities and support a radically new and independent view of the universe. The Copernican revolution should be seen as something more like a political revolution: a change that happens out of a variety of influences, tipping points, and new values. It took place, in the words used above, at the hands of people “on the ground, working things out as best we can, armed only with our insights and prejudices, our biases and our hopes, our blind spots and our misgivings.” Or, in Kuhn’s words:

Individual scientists embrace a new paradigm for all sorts of reasons and usually for several at once. Some of these reasons – for example, the sun worship that helped make Kepler a Copernican – lie outside the apparent sphere of science entirely. Others must depend upon idiosyncrasies of autobiography and personality. Even the nationality or the prior reputation of the innovator and his teachers can sometimes play a significant role. (Structure, pp. 152-3)

Historians have argued ever since about exactly what a paradigm is, or precisely when a shift can be said to occur – and with good reason, for revolutions are complicated, confusing things. But no one tries to see such revolutions in Hegelian terms, or in the terms of a grand logic that becomes more crisply focused over time. It is not that conceptual revolutions are wholly irrational. Individuals in the middle of revolutions are sorting things out as best they can, through their own reason, weighing a range of competing beliefs and values. But whereas Hegel would see the outcome as determined by reason’s own structure, we see more chance at play. There is no global plan. There is only improvement on what was before, in the eyes (or in the paradigm) of those who have come out the other end of the revolution. Again, Kuhn:

Can we not account for both science’s existence and its success in terms of evolution from the community’s state of knowledge at any given time? Does it really help to imagine that there is some one, full, objective, true account of nature and that the proper measure of scientific achievement is the extent to which it brings us closer to that ultimate goal? If we can learn to substitute evolution-from-what-we-do-know for evolution-toward-what-we-wish-to-know, a number of vexing problems may vanish in the process. (Structure, p. 171)

What Kuhn offered was a relativized idealism. The theories or paradigms are as important as ever: they tell us what things are, how things change, and what our experience is. But the paradigms are not “pregnant” with their successors. Old paradigms are discarded and new ones adopted as the result of many factors, some rational, and others less so. The fact that we discern progress over time, particularly in the case of science, tells us more about the current paradigm we inhabit than about the objective rationality of human discovery over time. Progress, that is to say, is relative to those making the judgment, and what they count as “progress.” Indeed: without some sort of Kantian or Hegelian ideal of pure reason, what else could it possibly be?

But Kuhn’s idealism is relativized in a second way as well. Kant, with his twelve categories of the understanding, and Hegel, with his grand logic, believed that the structures we place upon our experience are fixed (Kant), or at least that the way in which those structures evolve is fixed (Hegel). For Kuhn, the paradigms we invent are constructed from available materials, and the revolutionary thinker comes upon them in a flash of insight, much in the same way an artist suddenly sees a new way of combining given elements. The new paradigm is incommensurable with the old in the sense that the two paradigms are irreducibly different ways of seeing the phenomena. Usually, in a period of revolution, there are several thinkers devising very different paradigms, and many of them go nowhere or attract only a few followers; one of them, for various reasons, gains greater currency, and “wins.” The winner, though, does not have a deep logical connection with the old paradigm and does not emerge out of it in any meaningful way. Its new concepts are regarded as important and fundamental only from the perspective of the new paradigm – indeed, they make sense only in the new paradigm.

The transition from Hegelian to Kuhnian idealism might be seen as follows. In Hegel’s version, there is a single form of human thinking that allows for a single story to be told. Hegel knew very well that history has an impact on thinking, and different cultures offer different stories about the world. But he also believed they have a common core, described through a single logic, which serves to orient all our conceptual efforts (as well as our political ones) into a single direction. Kuhn, on the other hand, allows for incommensurably different forms of thinking and different stories, offered up in their varieties like just so many Darwinian species, competing for dominance in an ideosphere. At the end of a revolution, one view dominates – but that victory has more to do with contingencies of history than with that view being a better expression of some fundamental logic. For Hegel the basic script has been written; for Kuhn, we make up the story as we go along.


Krug’s pen

Wilhelm Traugott Krug (1770-1842) was the philosopher who succeeded Kant in the chair for logic and metaphysics at the University of Königsberg. Just before taking on that role, he had thrown down a challenge for Schelling’s idealist philosophy: could Schelling, or any idealist, pretend to offer any sort of explanation why, from the Absolute, any particular thing – such as the pen Krug was using – should exist? How do we get from a stock of pure concepts to individual things we hold in our individual hands? In the end, Schelling admitted that it can’t be done. The mind can discover all the logical possibilities, but none of the actualities. For this there must be actual intuition, Vorstellung, or the representation of factual beings from outside the intellect. The mind yields negative philosophy, or the philosophy of possibilities; for positive philosophy, we must bump up against the world.

Schelling used this insight to point out the critical shortcoming of Hegel’s philosophy. Hegel’s logic, he claimed, yields only negative philosophy; but since Hegel knew that he somehow had to account for the reality of finite particulars, he fudged a bit. With one eye on the changeless Parmenidean world of logical Being, and the other eye on the pen in his hand, he came up with the concept of Becoming, which arises magically out of the dialectic between Being and its evil twin, Non-being. “Bad faith!” charged Schelling: Hegel was twisting logic to meet his own philosophical demands.

Stephen Houlgate has argued that Schelling himself was not being fair in this accusation. For Hegel, unlike both Krug and Schelling, did not sharply separate the land of pure logic from the land of pens and writing desks. When we experience particular things, we are already deep in the world of logic and concepts. We identify them, distinguish them, and make sense of them through a logic that permeates all being and thought. In Houlgate’s words, “Things are not given to us as existing by sensation (or by Vorstellung) as such, but have to be understood to exist by the very same thought and understanding that determines what they are” (1999: 119). Hegel never finds himself trapped in a prison of abstractions, looking for an escape. The world, as it were, is trapped there in the prison with him.

Hegel’s logic, then, is not meant as a rationalist philosopher’s replacement for Genesis. His claim is not that in the beginning there was the Idea, which thought Being, etc., and in the end out popped Krug’s pen. We might even reverse the order: in the beginning was Krug’s pen, and we came to think about what it was, and what it wasn’t, and before long we found in our world Being and Non-being, and Becoming, and for further details please consult the Logic. Hegel infuses the world with logic, in just the way our physicists imbue the world with invisible forces and conservation laws. Our task is to see the logical structure of our experience, and fathom its depths, until we see for ourselves that Anaxagoras was right, and all is indeed mind.

But one might further wonder whether there was more going on in this debate than accounting for the existence of pens. After Krug taught in Königsberg for a few years, he moved on to the University of Leipzig. In 1813, he took time off from teaching and served as a cavalry captain in the “War of Liberation” against Napoleon’s army as it retreated from Russia. The overwhelming allied forces chased Napoleon out of Leipzig to western lands and finally back to France.

There aren’t many philosophy professors ready to ride out into real battle, but this event is less surprising in the case of Krug. In a long list of works, he championed freedom of religion and speech and advocated many liberal causes, including the emancipation of the Jews. The mere thought of freedom was not enough for him; he sought action, change, and active resistance. Scholars are divided as to just how liberal Hegel was in his thinking, but it is undeniable that he saw advantages in constitutional monarchy and liked to see the World Spirit taking possession of singular, powerful individuals. While he could rationally accommodate any of the changes Krug fought for, his temperament was to smooth changes into continuous, inevitable transformations rather than to see them as sudden and contingent ruptures. Hegel saw the pen as part of a world that was meant to be; Krug saw it as a tool he could use to make a mere possibility actual.



Stephen Houlgate, “Schelling’s Critique of Hegel’s ‘Science of Logic’,” Review of Metaphysics 53 (1999): 99–128.


Peter Adamson, and the gap problem

It’s wonderful to have Peter Adamson’s perspective on this perpetual problem in teaching the history of philosophy: whom do I cover, and whom do I leave out? Adamson, of course, is bravely executing “The History of Philosophy Without Any Gaps” podcast. He knows it’s impossible, but he’s doing what he can to give some basic treatment of philosophy from all times and places. I’ve heard a few of the podcasts, but have recently gone to the very beginning and am listening in order while I’m having to haul my body from one place to another. He’s endearingly nerdy and silly, and also absolutely genuine and responsible. He’s doing a good thing.

The basic tension is that teachers feel obligated to cover “the greats” – the people whose names students really must recognize, and will most likely encounter in other classes or books or conversations. But at the same time, these “greats” in the western tradition tend to be all men, precisely because women were for such a long time barred, or at least strongly discouraged, from participating. Whether we mean to or not, we perpetuate the discrimination by not including women philosophers, since undergraduate women often come away with the sense that this is a game for boys. And even setting that important issue aside, there are loads of wonderful, intriguing philosophers from history who did not make the “A-list” for reasons having nothing to do with the intrinsic merits of their writing. Accidents of history, and all that.

But every school term is limited. Peter confronts the tension head-on:

Are you really going to drop Aquinas from your medieval philosophy course to make room for Eriugena, or skip over Hume to accommodate Mary Wollstonecraft when teaching modern philosophy?

And his answer swiftly follows:

But what I’ve come to think is that we should give up on trying to cover “all the important things.” For this is impossible by a very large margin. You might tell yourself you have covered the important medieval philosophers if you’ve done Anselm, Abelard, Avicenna, Aquinas, Scotus, and Ockham. That’s an impressive line-up, no doubt. It’s a lot more medieval philosophy than most undergraduate students will ever read, and even gets in a thinker from the Islamic world. But do these big names really have a greater claim on our attention than Eriugena, Hildegard of Bingen, John Buridan, Meister Eckhart, and Fakhr al-Din al-Razi?

My answer would be no. The fact that such authors are not, or not yet, “canonical” has little to do with historical and philosophical merit and much to do with the historiographical priorities and limited perspectives of previous generations. These generations wrote our textbooks, designed the syllabi for courses we took as students, and decided what to edit, study, and translate—and in so doing, shaped our sense of what is too “important” to leave out. In reality, there are simply too many important thinkers in every period to be fit into any undergraduate historical course, in both the historical and philosophical sense of “important.” And that’s without even getting into “minor” figures like, say, Saadia Gaon, Yahya ibn ‘Adi, Alcuin of York, John of Salisbury, Hadewijch, Radulphus Brito, or Henry of Ghent, all of whom would be well worth teaching to undergraduate students. So when we’re exposing students to any period in the history of philosophy, we should not tell ourselves that we only have time to visit the highlights. In fact we should admit that we don’t even have time to do that.

Peter goes on to recommend that we not think of covering the “major” figures as our primary responsibility. We might think instead in terms of giving a taste of the kinds of problems of the time period, the styles of argument, the big concerns, and the seemingly endless variety of voices. Students who leave class with an informed sense of the complicated landscape of early modern philosophy – metaphysical, social, epistemological, political, religious – will be much better served than those who leave with the sense that there was Descartes, Spinoza, Leibniz, Locke, Berkeley, and Hume – and little else going on.

There is plenty of room for experiment and variation in these matters. One way to dispel the “boys only” sense is to include secondary articles by contemporary female historians of philosophy. One can devote a day to two or three lesser-known figures; or, turning that upside down, one can spend a day providing a thumbnail sketch of a “great,” and then spend two or three days going into greater detail of a lesser-known figure. One can assign an “orthodox narrative” (like Copleston’s) as homework reading, and then use class time to complicate and challenge that narrative.

In the end, I agree with Peter that we can stop thinking of our work in teaching the history of philosophy as something like screwing this or that part onto a chassis as it rolls down the line, thinking that we must make sure that these parts are included for the final product to be functional. A better view is that we are equipping students for much more variable tasks, and a more open-ended future.



Philosophical zombies and 1984

(from today’s “Zombie Zymposium”)

I’d like to discuss two things. First, I’ll discuss the quasi-technical use of “zombies” in recent discussions of the philosophy of consciousness. I’ll call these entities “philosophical zombies” since, as we’ll see, they are not much like the zombies more commonly featured in movies and TV shows. Second, I’d like to speculate about the cultural significance of philosophical zombies – specifically, what discussions of them reveal about our culture.

A philosophical zombie is an allegedly conceivable entity that is meant to show that physicalism is false. Physicalism is the view that human consciousness can be explained through neuroscience, or through the study of the physical properties and events of the brain. If physicalism is true, then physical accounts of the brain/body should be able to explain why we have particular sensations and experiences; brain science should tell us exactly why the brain undergoing some particular event will result in us tasting pineapple or smelling rotten eggs. If it is a real explanation, then we should be able to see why anyone with a brain in that particular state would have that particular experience.

Enter the philosophical zombie. Such a zombie is defined as a creature that is just like us in terms of physical properties. It is also just like us in terms of behavior, including speech behavior. But it differs from us in one crucial respect: it has no conscious experience. The lights are on, but nobody is home. There is nothing it is like to be it. It is like an incredibly complex robot, or a wind-up toy, with no first-person perspective or feelings or thoughts whatsoever.

So: is it possible to conceive such a being? Note that I am not asking if there are any zombies, or if we can make one. I’m only asking if we can conceive of such a being without encountering any contradiction. We can’t conceive a four-sided triangle, or a married bachelor, or a nephew whose parents have no siblings. We can conceive a mile-high unicycle, or a bear with six legs, or a snowball not melting on a very hot day in July. So is a philosophical zombie like a four-sided triangle or a six-legged bear? Is it something we can conceive?

If we can, the argument goes, then physicalism is false. For then we can conceive a brain doing all the stuff brains typically do without there being any conscious experience. And if we can conceive that, then an account of what the brain is doing when we have a particular conscious episode does not explain why we should be having that particular episode rather than another, or rather than none at all. The zombie is a conceivable case in which the brain is doing its thing, but no conscious event is happening. If that’s conceivable, then physicalism hasn’t provided a real explanation. At most, it’s pointed out a mysterious correlation between conceptually distinct events.

But what then is true, if physicalism isn’t? Here philosophers have resisted the temptation to believe in souls, or immaterial things that have experiences. They have instead suggested that conscious experience is a hidden dimension of the physical, or that nature includes both physical and nonphysical properties, or that some physical events can somehow give rise to conscious states. In short, they have tended to reach for pixie dust.

Now the most promising response on the part of the physicalist is to insist that no, zombies are not conceivable. It seems like they are, but they’re not. Daniel Dennett makes this case by ramping up our concept of a zombie into a zimbo: a zimbo is a zombie that can adjust its behavior on the basis of monitoring its own behavioral states. It’s still not conscious, mind you. But it can see what it is doing, hear what it is saying, gain feedback from the environment, positive or negative, and adjust its behavior accordingly. If you are having trouble imagining such a being without attributing consciousness to it, then Dennett says you are discovering that philosophical zombies are in fact inconceivable. Indeed, he writes, there is a sense in which we are all zombies – namely, in the sense that there is nothing to us over and above our brain behavior and bodily behavior that gives us the experience we have. Physicalism is the view that we are zombies.

Now why on earth are philosophers devoting so much attention to zombies? To a large extent, surely, it is to determine whether physicalism is true. It’s philosophical curiosity. But there’s always more than that going on.

Philosophical zombies started to receive lots of attention in the mid-1990s. At this point in time, computers were becoming more widespread, more powerful, and more interesting; discussions about artificial intelligence were becoming less science-fictiony and more science-facty. So there was a general awareness that a physicalistic account of consciousness might be genuinely possible. Then again, that had been true since at least the 1950s, and maybe since the 1650s.

But I think other factors were responsible for the sudden emergence (or re-emergence) of philosophical zombies. The mid-1990s came in the wake of the Cold War’s end and a kind of triumph of the Reagan/Thatcher/Bush-the-elder political regimes. In the eyes of liberal college professors (at least), these regimes promoted and rewarded a kind of opportunism and corporate greed that had been largely suppressed or regulated over the previous eight decades. The “Yuppie” (Young Urban Professional) became for many a genuine societal ideal, though this ideal was met by liberals with large measures of scorn, disdain, and satire. In this vein, one of the most powerful TV commercials ever produced accused corporate-driven consumerist society of being the totalitarian state envisioned by George Orwell in 1984.



Now the irony should be lost on no one that this commercial was for Apple, which was itself a totalitarian regime, and it was taking aim at another totalitarian regime, IBM, which at that time served as the whipping boy for MBA-style lack of creativity. (Then Microsoft stepped into that particular role.) But the commercial cleverly played upon a general yearning among liberal consumers to see themselves as more than faceless cogs in dreadful machines: to see themselves, that is, as something other than mere robotic servants, or zombies.

I believe that the “philosophical zombie phenomenon” in the 1990s and 2000s gained its momentum from a liberal yearning among philosophers to see themselves as creative agents (led by David Chalmers, at that time a young, long-haired, brilliant upstart from Down Under), and a yearning to see human consciousness generally as itself special and irreducible to material forces. It was a kind of rebellion against a staid philosophical tradition, but also against a broader society that was celebrating conformity and materialistic consumerism. The confidence that zombies are metaphysically possible was fueled by the recognition that corporate zombies were actually all over the place, and by the fear of becoming one.

Seen in this light, the reactionary response from physicalists – that “We are all zombies!” – can be heard as the voice of the disillusioned, the Microsoft confidence that Excel spreadsheets are, in the end, more bankable than iPaint. Nothing human offers lasting resistance to the scientistic effort to reductively explain. The pixie dust, we are brutally informed, is just dust.

In all this, I’m half-joking, but only half. It’s certainly not true that nonphysicalists in philosophy of mind are all political liberals, and physicalists are political conservatives. But in all of us there are propensities to think magically alongside enlightened demands that we not dream, and that we face facts as they are. These inner drives fuel debates about human consciousness just as they fuel political disputes.

A thought experiment is never just a thought experiment.



Defining the divine

Here is a big question:

Is anything divine?

It’s easiest simply to assume (for now) that there is a natural world, and that this world is pretty much what it appears to be (with corrections supplied through scientific inquiry, of course). The question then is whether that assumption will be sufficient for our knowledge and experience, or whether there is anything in our lives urging us to think of something in the world, or out of it, or maybe the world itself, as divine.

What is divinity? I would like to explore this definition: divinity is the quality of intrinsic meaningfulness. A divine thing is not meaningful because of our own ends and expectations. It is not meaningful in virtue of any other thing. It is meaningful only in virtue of itself. When we encounter it, there can be no denying its significance. That’s what divinity would be, anyway.

Obviously, this is not the common way of understanding the term “divinity” – but I think it is a useful way, since it immediately cuts away many things we may be taught to think of as divine that are in fact not so special. Take God, for example. If God is supposed to be some powerful being, with vast plans and occasional responses to prayer, and the ability to dispense the biggest rewards and punishments, then God is no more than an extraordinary mundane being, like a cosmic tyrant or king. God in this case is no more divine than a Nero of time and space. No being is divine just by having extraordinary mundane powers.

But suppose there is a mystic who recognizes a certain experience – perhaps “the experience of God’s love” – as divine. When having that experience, the mystic cannot deny its significance. The experience electrifies his whole being. Obviously, the meaningfulness of the experience has nothing to do with what it will yield for the mystic, nor does it advance the mystic’s own aims and ambitions. The experience itself is pure meaningfulness, and the mystic rightly identifies the experience as divine – according to the above definition, at any rate.

Those of us who are not the mystic are not compelled to see the experience as divine. We might see the mystic’s happy face or other physical symptoms, but none of these are especially meaningful to us. We might skeptically regard the mystic’s experience as only apparently meaningful to the mystic. And if we regard it so, and carry through our thought with consistency, then when we are offered the chance to be the mystic ourselves, and enjoy the experience of God’s love for ourselves, we ought similarly to conclude that the experience is only apparently meaningful to us.

No one can deny the existence of apparently meaningful things. The question we are asking, though, is whether anything really is genuinely meaningful, and intrinsically meaningful at that.

To remain within the secure confines of skepticism, admitting only the existence of apparently divine things, is to be a secularist. A secularist sees only relational meaningfulness. A thing or event is meaningful to a person, or within a context. Nothing is point-blank meaningful. And this means that nothing is divine (according to the proposed definition), and nothing is sacred. It must be noted, though, that this does not mean that a secularist values nothing. A secularist values many things. But the secularist values things because of those things’ relations to people or projects. It is true that for the secularist there is no final, fixed source of value or meaningfulness. But this does not make everything value-less. It only makes all valuable things of relative value.



Haig Khatchadourian (1925-2016)

I learned yesterday through Facebook that one of my teachers, Haig Khatchadourian, has passed away. He was a warm and generous man, and a philosopher with such broad knowledge and penetrating intellect as to both intimidate and inspire those of us lucky enough to be in his classroom. I remember the blue exam books he would hand back, completely filled and smudged with his red ink, taking any weak point we managed to make and building it into an interesting insight. He made us feel like we were part of an extremely important and demanding project: that of making critical, well-informed sense of the world. All my friends strove eagerly to win his praise, because we felt getting it really meant something.

I remember asking him once about his core philosophical interests, and he explained to me that, early on, he had planned his life as a series of decades: ten years to work on epistemology, ten years on metaphysics, ten years on politics, etc. My impression is that he threw himself completely into each decade, getting to the heart of the matter and planning his courses so that students could be carried along with him.

As a senior, I was allowed to take a graduate seminar he led on Kant’s Critique. I worked harder in that class than in any other. There were four of us, and to this day I have a photograph he took when we met under a tree one day. (I post it below my copies of the CPR.) The fact that he wanted to take a photo, and that he gave us all copies to remember the experience, meant a lot to me, and still does. He really cared about the human side of his students, in addition to his efforts to sharpen our meager intellectual capacities.

There are few who have lived with his focus and dedication. I’m very, very grateful for having been his student.



Idealism and contingency

(Reading Terry Pinkard’s marvelous German Philosophy 1760-1860: The Legacy of Idealism)

It may be that the tenability of idealism comes down to the question of history. A resolute idealist discovers that the most fundamental framework of existence is expressed as dynamic relations among concepts: the I, the not-I, the striving of the I to take in the not-I as an object of thought, and the thorough ordering of the not-I – “Nature” – in further concepts, eventually expressed in the terms of the most conceptual branch of physical science. The idealists promise that, in the final analysis, our science of nature will merge meaningfully into this fundamental metaphysics. We will somehow get from I/not-I to the general theory of relativity.

But this still leaves the problem of history, for Nature is not merely a set of relations among concepts. Nature has been, is, and will be a most particular sequence of events. Another way to put this point is that there are many, many possible worlds which differ radically from one another but which all obey the same laws of nature. One of these is ours; how can this be explained? Why have things been one way rather than another? In short: why this history?

There are some basic replies one might try. You might try, in Zeno-like fashion, simply to deny the reality of our particular history, and call it some kind of illusion. Or you might partition off some deeper recess of the I which, in its hidden structure, explains why we should have this particular history. Or you might mythologize history, and turn its seemingly inexplicable particularity into something uniquely meaningful: our history is getting something done, and this something can get done in just one way, which is the way of our history. Moreover, this something that needs to get done may be linked up with the I/not-I dynamic, so that in the end the I wills the world. None of these ploys is especially interesting.

There may be a more subtle way of responding to the problem, which Pinkard sketches in his account of Schelling’s idealism:

Surely the past, as Schelling himself notes, has a reality that is independent of our representation of it. This objection to idealism, however, like generalized skepticism, assumes the “reflective” stance that puts subjects on one side of a divide and objects on the other [“the mind-as-the-mirror-of-nature” view]. Once one has shifted one’s picture and come to “see” or “intuit” the matter differently, those worries cannot arise. In understanding our experience as of a world, we experience it as more than what is manifest in that experience; or, as Schelling puts it, for us to be “intelligences,” we must perform a “synthesis” (a drawing of normative lines), which requires us to take up our experience both as being of an objective “universe at large” and as the way we “view the universe precisely from this determinate point.” We understand ourselves, that is, as particular points of view on an objective world that can be only partially manifested to us in our experience of it. Seen in that way, idealism is, as he puts it, only a “higher” realism. (p. 186; my bolds)

I understand this as follows. The very question – “Why is history this way rather than another?” – presupposes that we are divided from it. On one side is us, armed with our philosophical understanding; and on the other is the totally other “it,” with its own stubborn character. But Schelling asks us to shift out of this paradigm: take away the dividing line. The particular world we confront in experience is not distinct from us, but is a resulting mash-up of our own intelligence with an entity we ourselves posit – an object of our experience. The “stubborn character” which we thought was outside us is in fact inside us, in the sense that we have projected it into our own experience.

Now I would like to continue to press the objection: but why then have we projected one sort of seemingly objective world rather than another? (Or, in other words, just whose show is this, anyway?) I suppose that the Pinkard/Schelling reply would be something like, “There you go again, trying to divide yourself from the objects of your experience. Stop it!” But I can’t decide whether this is really a reply or just an attempt to get me to stop raising a question they can’t answer. Does the brute contingency of history evaporate as soon as I accept responsibility for it? Doesn’t that just bring the contingency into myself?
