The impact of Boris Hessen

Reading: Gerardo Ienna and Giulia Rispoli, “Boris Hessen at the Crossroads of Science and Ideology from International Circulation to the Soviet Context”, Society and Politics, 2019, 13:37-63.

[These are just some preliminary notes on a very complex story I am only beginning to understand. I was introduced to the topic through discussion of a Facebook post by Martin Lenz.]

If Boris Hessen is known among historians today, it is primarily for playing a foundational role in launching “externalist” views in the history of science – that is, views that pay close attention to the social, political, and economic forces at work in the development of scientific theories. In a 1931 lecture presented at a conference in London, Hessen argued that Newton’s physics was inextricably bound up with a burgeoning early modern capitalism (“The Social and Economic Roots of Newton’s Principia”). It was a Marxist exposition of Newtonianism, and it forcefully challenged the received opinion that Newton and his cohort were simply a bunch of politically neutral boys interested in the truth for truth’s sake. Hessen’s work led directly to Robert K. Merton’s dissertation and subsequent work expounding “the Merton thesis”, which specifically claimed that early modern science in England had a lot to do with Protestantism, and more generally claimed that, even in intellectual history, it’s not only ideas that matter.

But there is much more to Hessen than this. A short history: Hessen was born in modern-day Ukraine in 1893. He studied physics at St. Petersburg and Edinburgh, where he developed an interest in the history of science. In 1914 he returned to Russia, and a few years later joined the Red Army to fight in the revolution. He continued his studies in physics and history in Moscow and in 1928 moved to Berlin to collaborate with Richard von Mises. Von Mises directed Hessen’s attention to Ernst Mach and the Vienna Circle, which was to prove fateful. He returned to Moscow in 1930 and became engaged in philosophical controversies over whether a good Communist could also support Einstein and be a Machian idealist. He lost these arguments – in the sense that he was accused by the Communist Party of conspiracy in 1934, and was secretly tried, convicted, and executed in 1936. He was officially rehabilitated in 1956, which probably would have pleased him had he not been dead already for 20 years.

It may seem surprising that anyone could be convicted and executed for being an idealist, but the dialectical space of the Soviet Union was a treacherous place. Earlier in the century, Lenin had argued that attempts to ground scientific knowledge in an individual’s fluctuating experience lead to the conclusion that scientific theories are necessarily open to revision as experience demands, which meant that Marxism in particular was open to revision. Hessen and his colleagues were arguing that Machian idealism (which is basically a ramped-up version of Berkeley’s idealism) was in fact a kind of lawbound materialism, inasmuch as “matter” could be reduced to measurements and experience, and bound by lawful regularities. But in the estimation of Stalin’s courts, these arguments were insufficient – or, one speculates, the simple fact that Lenin’s word was not sufficient for these uppity philosophers was reason enough to convict them of something.

The effect of Hessen’s 1931 lecture on anglophone historians and philosophers of science was complex. On the one hand, there emerged several varieties of externalist approaches to the history of science, emphasizing economics, religion, culture, psychology, and politics in varying degrees. Some (e.g., John Desmond Bernal) held to a strictly Marxist line, putting economic considerations in front of everything else, while others (e.g., George Norman Clark and Robert K. Merton) assembled multi-causal explanations of scientific development. On the other hand, in opposition to Hessen, other historians and philosophers (e.g., Alexandre Koyré) leaned toward internalist explanations, maintaining that it was clear-eyed empiricism and logic that pushed science forward, and that social factors could be safely ignored. Inasmuch as such internalist accounts were rooted in conceiving individuals as behaviorally free from social determination, they served to promote the ideology of liberal capitalism. It is not surprising that, for the most part, internalist approaches to the history and philosophy of science dominated anglophone academia for the better part of the 20th century. The principal exception was the sociology of scientific knowledge (SSK) program, founded in Edinburgh by David Edge and advanced in the following years by Barry Barnes, David Bloor, Steven Shapin, and Simon Schaffer.

I *think* it’s safe to say that the principal holdout nowadays for thoroughly internalist historical approaches is a sect of historians of philosophy, trained in philosophy departments with very little exposure to history. But even here, there is a steadily advancing wave of more externalist or “contextual” approaches, though these approaches still typically steer clear of economics, politics, and culture. They are contextualist only in the sense that they pay attention to lesser-read texts published in the period they study. So their subjects are still free, disembodied minds, though these minds have read more broadly than previously imagined.


Is there such a thing as the history of philosophy?

(Reading Christia Mercer. “The Contextualist Revolution in Early Modern Philosophy.” Journal of the History of Philosophy 57, no. 3 (2019): 529-548.)

Christia Mercer has revisited the methodological battles that have been waged among scholars of the history of philosophy. She uses as her starting point a 2015 exchange between Michael Della Rocca and Dan Garber. Garber charges Della Rocca with being engaged in “rational reconstruction” of Spinoza’s Ethics. What this means is that Della Rocca is not concerned so much with Spinoza’s historical context as with the integrity of Spinoza’s thought. With such an approach, Della Rocca is prone to creating new arguments on Spinoza’s behalf, and considering objections Spinoza never conceived, in an attempt to push Spinoza’s philosophical system to its greatest philosophical potential. Garber, by contrast, is more interested in situating historical philosophers in their social and political contexts, without caring so much about whether the resulting interpretation of their views makes for “legitimate” philosophies, as judged by contemporary standards.

Mercer’s main claim is that these two apparently different approaches really have more in common than one might initially think, and that since the 1980s there has been a decisive trend among historians of philosophy to pay closer attention to both texts and contexts. Until the 1980s, the prevalent methodology could be seen as “extreme appropriationism”, where so-called historians of philosophy in fact did not care at all about historical questions, and instead raided the works of dead philosophers for new ideas whose value rested in their applicability to the philosophical questions currently en vogue. But steadily over the following decades, according to Mercer, philosophers began to care about issues of translation, and so about historical contexts, and so about the relevance of other thinkers then important but now forgotten. Historians of philosophy as a group traveled in the direction of obeying a “Getting Things Right Constraint” (GTRC), which means paying attention both to historical context and to philosophical intelligibility, with different individual philosophers perhaps placing more weight on one dimension than the other. In short: historians of philosophy have gotten much better at their craft, and as a whole are providing accounts and interpretations that are both historically informed and philosophically fruitful.

In short, a methodological revolution has come upon us like a thief in the night:

As the philosophical advantages of a non-appropriationist approach became increasingly evident and as innovative early modernists exposed the richness of the period’s philosophy, contextualism and its commitment to the GTRC gained a momentum that could not be stopped. Early modernists are now committed contextualists in that they aim to explicate as clearly as possible the authentic views of a wide range of historical texts, although they differ in the skills used and projects selected to attain that goal.

Mercer adds a further interest that historians of philosophy would do well to consider, which is to explore the ways in which historical philosophers, in their particular contexts, may have light to shed upon social and political problems of our own day. Some things, alas, never change; and understanding how Spinoza or Wollstonecraft responded to problems of their own day may give us further material to consider as we grapple with our own, especially issues of diversity and inclusion.

I am always heartened to see someone offer a friendly, ecumenical approach, and so am cheered to read Mercer’s insights into recent history of history of philosophy. I think she is right to see that scholarship has gotten much better as a whole over recent decades, and that there is room within the GTRC for a variety of approaches, questions, and methods. But I would like to add to her insights some further issues about academic disciplines that her account does not address.

I think the bigger question that lies below Mercer’s discussion of methodological disagreements is the question of whether philosophy, and history of philosophy in particular, is to be counted among the humanities. It is a question about the sort of scholarly activity philosophy is: is it in the same general category that literature and history fall within, or is it something else? Historians and scholars of historical literature do work that often overlaps. An historian studying early 17th century London and a literary scholar studying Shakespeare will read each other’s works with great delight and profit, and can expect to have interesting disagreements. Some historians of early modern philosophy will be able to join in this discussion, especially those who are studying Francis Bacon in contextual fashion. But many others will twiddle their thumbs on the sidelines until a properly philosophical topic comes up for discussion, like the adequacy of empirical induction as a basis for science. The first group places philosophy within the humanities, and is interested in reading literature and learning history in order to deepen their understanding of the philosophers of the period. The second group cannot find much of interest in all this talk of guild formation and Atlantic trade routes. Their concern is over something the historian and literature scholar are ignoring: namely, whether Bacon (or whomever) managed to come up with anything of genuine philosophical interest, and not “merely” of historical or literary interest.

The “humanities” as a group of disciplines was a 19th century invention, and it has never been exactly clear where philosophy fits. Practically, of course, philosophy departments have been shoe-horned into colleges of humanities, mainly because there has been nowhere else to put them. Several subdisciplines of philosophy – like metaphysics, epistemology, logic, and philosophies of action, mind, and science – really have nothing to do with other humanistic disciplines. Really nothing: the separation is entire and complete. History of philosophy, political philosophy, and ethics are mixed cases, depending on the sorts of interests of the individual scholars. A philosopher interested in the ways in which gender has been portrayed in films will have much to discuss with humanists, as will a philosopher interested in the politics of race. But a philosopher interested in the legitimacy of Rawls’ theory of justice or rule-based utilitarianism can expect to have little to say to other humanists, and little to learn from them. (For the most part; again, individual types of interest vary considerably.)

When it comes to the history of philosophy, this disciplinary aporia gets played out in disputes over methodology. Questions over “the right way to do history of philosophy” are in fact questions about the sort of discipline philosophy is. Normally questions over methodology can get settled at least somewhat by trying to see which methodology yields the best results. But what is at issue here is what counts as best results. Do we want a richer understanding of the world in which Spinoza crafted his philosophy, and why his context led him to raise some questions while ignoring others? Or do we want to explore the conceptual space in which Spinoza carved out a distinctive niche? The answer here depends on the philosopher, and what gets them excited, or at least which group of peers they are trying to engage.

In this way, history of philosophy (and perhaps also ethics and political philosophy) comes to resemble “multidisciplinary disciplines” like religious studies, international studies, or gender studies. There isn’t a single disciplinary model, no shared methodology, which brings unity to these areas of study. That is not to say they are not valuable, of course, but they are not, properly speaking, disciplines. They are instead “areas of study” admitting of different kinds of questions and different methods. A scholar in religious studies may be more of a historian, or more of a sociologist, in terms of method and approach. The same is true of historians of philosophy, as they may be more historical or more philosophical. Just as it would be futile to try to establish a single method for religious studies, it would be futile to do so for history of philosophy.

(That being said, I will state my own preference. I think philosophers ought to be humanists, mainly because that suits my own inclinations. It also is a good idea, I think, for academic programs to try to integrate with others, where possible; and, frankly, no one else has much interest in the non-humanistic endeavors of philosophers. But this latter point is merely one of strategy in the politics of academia.)

In all, I suppose this leads me to a question I would like Mercer and others to reflect upon – namely, whether there really is a discipline of history of philosophy, which has its own distinctive kind of methodology. I suspect the answer is no, which means we should stop looking for the right way to do it. Let’s just do it, and see what we learn.


Hobbes and coins

Thomas Hobbes saw humans as purely mechanical devices. External objects press against us in one way or another, setting off a chain reaction of interior pulleys, wheels, and ratchets that engage one another and result in some version of “Cuckoo!” escaping our lips. In some way that he saw no need to explain, the motions of our inner works are paired up with the contents of our experiences: our ideas, premonitions, appetites, urges, and fears. And so when one idle thought casually links up with another, there is at the same time some mechanical action causally linking up with another.

Hobbes offered an example of what seems like a “free” association in fact being causally determined by a host of associations and traces of memory:

For in a discourse of our present civil war, what could seem more impertinent than to ask (as one did) what was the value of a Roman penny? Yet the coherence to me was manifest enough. For the thought of the war introduced the thought of the delivering up the king to his enemies; the thought of that brought in the thought of the delivering up of Christ; and that again the thought of the 30 pence which was the price of treason; and thence easily followed that malicious question; and all this in a moment of time, for thought is quick.

Set aside the political point of the example, with the gratuitous comparison of Charles I with Our Lord and Savior. We might instead wonder who it was who raised such an “impertinent” question. Of course, it may have been just some fellow with whom Hobbes was conversing one day. Or, if you run a search on Early English Books Online, looking for any tract concerning ancient coins published during the civil war but before Hobbes’s publication of Leviathan (1651), you will find exactly one: The Scripture Calendar, Used by the Prophets and Apostles, and by our Lord Jesus Christ, paralleld with the new Stile, and Measures, Weights, Coyns, Customes, and Language, of Gods ancient people, and of Primitive Christians (London: printed by M. B. for the Company of Stationers, 1649), written by the clergyman Henry Jessey (1603-1663). For the most part, Jessey’s work is an estimation of just where notable Biblical events happened to fall on the calendar. But he also translated ancient measures and weights to more contemporary ones, and briefly calculated the contemporary values of several ancient coins, according to their weight in silver. In his estimation, the Roman penny, or drachma, was perhaps worth about seven pence. So that’s settled.
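
Just as a sanity check on that figure – and emphatically not a reconstruction of Jessey’s own workings – here is a quick back-of-the-envelope calculation, using a rough modern estimate of the silver in a Roman denarius (about 3.5 grams) and the standard 17th-century English mint price of silver (5s 2d, or 62 pence, per troy ounce). Both inputs are my assumptions, not anything drawn from The Scripture Calendar:

```python
# Back-of-the-envelope check on the "about seven pence" figure.
# Both inputs are rough assumptions of mine, not Jessey's own numbers.

denarius_silver_grams = 3.5    # approximate silver content of a later Roman denarius
pence_per_troy_ounce = 62      # 5s 2d per troy ounce, the standard 17th-century mint price
grams_per_troy_ounce = 31.103

value_in_pence = denarius_silver_grams * pence_per_troy_ounce / grams_per_troy_ounce
print(f"Roman penny: about {value_in_pence:.1f} pence")   # comes out to roughly 7 pence
```

Which is to say that, however Jessey actually arrived at it, his estimate is in the right neighborhood.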

Jessey was an apt target for Hobbes’s example, as he was a somewhat radical Protestant and supported the revolution. Jessey would have been the sort of guy Hobbes would seek to skewer. More specifically, Jessey was a “Particular Baptist”, that is, a Baptist holding that Christ died only for a particular elect rather than for all of humanity. A fellow member of this sect was a guy named “Praisegod Barebone” (c. 1598-1679), which you would think is one of the greatest names ever, at least until you learn that in fact his given name was “Unless-Jesus-Christ-Had-Died-For-Thee-Thou-Hadst-Been-Damned Barebone”. Good ole UJCHDFTTHBD Barebone was a leather worker who also offered the occasional brimstoney sermon, and was imprisoned for some time in the Tower because of the religious ruckus he stirred up. Praisegod was later rewarded by Cromwell with a position in the Nominated Assembly (the rebels’ form of parliament), which was ridiculed by its critics as “Barebone’s Parliament”.


Praisegod Barebone (Wikipedia)

Praisegod had a son with the considerably less flashy name “Nicholas Barbon” (1640-1698). Nicholas had a greater measure of worldly sense, and was one of several entrepreneurs who hit upon the idea of selling fire insurance in the aftermath of the Great Fire of London in 1666. He seems to have been an expert manipulator, gathering his creditors into an ornate dining room and affecting such aggressive bonhomie as to guilt them into backing off on their demands. He is known today as an early “scientific” economist, as he thought in theoretical terms about the value of the tarnished lucre he was accumulating. He realized that it really did not matter what rare metals coins are made of, so long as everyone agrees to treat them as valuable. He argued for this view against John Locke, who viewed money as itself a commodity whose value depended on relative scarcity. (For more, see this account.) In the end, there may have been some truth in both views: money is what people think it is, but people at that time did think of money as valuable precisely because of the precious metals involved. Hence it was important to keep the pound as sound as a pound, which is why no less a figure than Isaac Newton was made Warden, and later Master, of the Mint.

So, in all, it would seem that Hobbes was wrong, and it was not “impertinent” at all to speculate in the midst of civil war upon the value of the Roman penny. Anyone trying to manage a government, let alone a revolution, would do well to pay attention to such things.


What we know when we know particulars

Some reflections on the early sections of Hegel’s Phenomenology of Spirit:

If we try to think about what is most obvious in our experience, and what the most basic elements of knowledge are, we turn to sense perception. For it seems like the more our minds and our concepts are mixed up with what we are trying to know, the more likely it is that there will be some “ideological pollution” through psychological or social forces. We would like to have something pure and basic that is what it is, no two ways about it. So we look to a green button, a patch of red on a coffee mug, the smell of mint. No matter how we have been raised, or what other confusions lurk within our minds, those sense experiences are simply and humbly given; we cannot change how they appear by changing our minds about them.

But Hegel asks us to think more carefully and try to grasp what it is we come to know when we turn to our sense perceptions. Let us take that patch of red as an example. In order to make a solid knowledge claim of the form, “I know that X”, we shall have to fill in the value for X. For starters, we might let X be “there is a red patch”. But there are many shades of red, and many shapes of patches. If we are seeking knowledge of a sense particular, and not knowledge of redness or shapedness, we shall have to be more specific. We might try “coral red” or “fire-engine red”, and we might try “trapezoidal” or “blobby and nose-shaped”. But these are also general qualities, and not sense particulars. If we want to make our knowledge claim focus on a given particular, and not general qualities, we shall have to somehow manage to refer to the this in our experience, the particular thing we are experiencing, and not its general features. If we allow ourselves to do so, what we can say we know is that this is here, or this is now – understanding “this”, “here”, and “now” to have a special emphasis and to somewhat mysteriously latch onto the elements of our experience. But we should not deceive ourselves; even with our special emphasis, “this”, “here”, and “now” are not particular items, but are general terms that can be applied in infinitely many other cases to infinitely many other objects. For there are many thisses, many heres, and many nows.

We have failed to come up with a specific object in our knowledge of sense particulars. Though we tried to find a value for X that was itself a particular, our best efforts resulted in knowledge not of a particular, but of terms or concepts that range over a broad array of cases. What we know in our sense perception is not anything particular, but only that “This is here now”, a claim which is always true of every sense perception. Hegel thinks the lesson to be learned from this failure is that our knowledge of sense experience is not of particulars, but of universals. We thought initially to turn to our senses in order to find a concrete thing that was unpolluted by concepts, and instead what we found is that our sense experience – at least, insofar as it can be articulated in language – is only of universals.

We might seize upon the qualification about language, and place fault there. We might insist we really are experiencing and knowing a particular, but due to a shortcoming of language we cannot find the right words to express it. But we should consider seriously whether we want to make this move. If we start allowing for knowledge that cannot be captured by language, we allow for knowledge that cannot be articulated and transmitted to others. We close off opportunities for testing, for experiment, and for disagreement or confirmation. We close off the public dimension of knowledge, and we should begin to wonder whether essentially private knowledge can do any of the work we normally expect knowledge to do. If it cannot be articulated, communicated, and assessed, then is it really knowledge?

Hegel puts the point this way:

We also express the sensuous as a universal, but here is what we say: This, i.e., the universal this, or we say: it is, i.e., being as such. We thereby of course do not represent to ourselves the universal This or being as such, but we express the universal; or, in this sense-certainty we do not at all say what we mean. However, as we see, language is the more truthful. In language, we immediately refute what we mean to say, and since the universal is the truth of sensuous-certainty, and language only expresses this truth, it is, in that way, not possible at all that we could say what we mean about sensuous being. (section 97; Pinkard translation)

In this case language is our teacher. We thought we meant one thing, but language shows us that we cannot possibly say it. What can be said, and what can be articulated as knowledge, is not what we mean when we inwardly point to our particular experience and call it “this”; the only value we can have for X is a universal. What language teaches us here, according to Hegel, is that so far as knowledge is concerned, what we learn through sense experience is not knowledge of particulars, but knowledge of universals.

Of course, there are further surprising consequences that this recognition leads to in Hegel’s philosophy, but we might pause to note that this result is obviously true. Consider the wide range of published items of knowledge: scientific papers, books, articles, etc. Not one of them makes use of any sense particulars, at least not any of the kind we were looking for at the beginning of this discussion. They make use of correlations, causal connections, generalizations, and, in short, universals. Someone might introduce a paper by cleverly noting, “At 12:01 a.m., I saw the black needle swerve to indicate 1.025”, but that would be of only passing interest, and would not itself play a crucial role in the articulation of what the author has learned. (Furthermore, as we have seen, such a claim would fail in conveying knowledge of any sense particulars anyway.) An article might include detailed tables and graphs of what has been observed, but the data would be meaningful only insofar as they were representative of some deeper and more universal phenomenon. Our knowledge is of generalities, not of particulars, and the more significant our knowledge is, the more this is true.


Unvatting the Brains: Putnam, Bostrom, and thinking the unthinkable


The worry is familiar. All of my experience comes to me via my central nervous system, which is a biochemical electrical network. It can be hacked. The data coming from my eyes and ears and so on are converted to electrical signals, and so if those electrical signals were stimulated artificially (i.e., not from eyes and ears), I would not be any the wiser. All my experience, including all experience I have ever had up to and including this moment, could be experience not of the world, as it seems, but experience of some neural stimulation machine. I could be a brain in a vat.

It’s an easy doubt to walk away from, but harder to disprove logically, because any witness called to the stand could (again) be the result of the neural stimulation machine. But it is this very fact – the impossibility of disproving the Vat hypothesis – that has caused many philosophers to wonder if some mistake has been made along the way. G. E. Moore famously asserted that his real, definitive knowledge that he has hands, that his shoes are tied, and so on, is more than enough to demonstrate that any such far-out skeptical hypothesis is false. (Samuel Johnson provided the same “argument” when he refuted Berkeley’s idealism by kicking a stone.) In a contest between the certainty of our immediate world, and the vague possibility that we might be thoroughly deceived, certainty wins out.

But other philosophers have dug more deeply into the argument and have come to the conclusion that the radically skeptical scenario is, in fact, self-refuting. The most famous of these philosophers was Hilary Putnam, in a paper entitled “Brains in a Vat”. I think Putnam’s argument is correct. But I have found it difficult to convey to others, so I am going to try to sneak up on it from behind. In an effort to make the central idea clear, I am going to turn to Nick Bostrom’s argument for thinking we may be living in a simulated universe.


Neo, Sims-style

The argument can be made quite sophisticated, but the central idea is this. The universe is really big; in fact, it is mind-bogglingly huge. And it is super-duper old. In all that space and all that time, there must have been intelligent civilizations. And some of these must have developed supercomputers. And – assuming that some sort of supercomputer can house a consciousness, or even house several or many consciousnesses – some civilizations must have become interested in artificially creating consciousnesses believing themselves to be living in environments that seem real, but in fact are artificially created. (Think of “The Sims”, but suitably ramped up.) Maybe some of these advanced civilizations, or even just one of them, really went crazy and artificially created trillions and trillions of such conscious entities. All of this is possible, and maybe even probable, depending on the assumptions that are made and how the numbers are run. Upshot: there could be vastly more “simulated lives” than real ones. And so, by the numbers, it is more likely that you are experiencing a simulated universe than that you are experiencing a real one. Call this “the simulation conclusion”.
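
The “by the numbers” step is easy to make concrete. Here is a minimal sketch in Python, where every input is an illustrative assumption of mine – Bostrom argues over what the plausible ranges are, but these particular values are not his:

```python
# Toy version of the counting step in the simulation argument.
# Every number below is an illustrative assumption, not an estimate from Bostrom.

simulated_minds_per_civ = 1_000_000_000_000  # "trillions" of simulated lives per advanced civilization
real_minds_per_civ = 100_000_000_000         # non-simulated minds per civilization (assumed)
civilizations = 1_000                        # advanced civilizations that ever run such simulations (assumed)

simulated = civilizations * simulated_minds_per_civ
real = civilizations * real_minds_per_civ

# If you have no way of telling which kind of mind you are, treat yourself as a random
# draw from the whole pool; the chance that you are simulated is just the proportion.
p_simulated = simulated / (simulated + real)
print(f"Chance of being simulated: {p_simulated:.3f}")   # about 0.909 with these inputs
```

Notice that every quantity fed into the calculation – how big and old the universe is, how many civilizations there are, what their computers can do – is something we could only have learned from experience of the (supposedly real) universe. That is exactly where the trouble in the next two paragraphs begins.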

Grant all this for the moment. Suppose all the numbers check out, and the conclusion is indeed more probably true than its denial. If the conclusion is true, then all our experience is of a simulated universe (probably). But this means we really don’t know anything about the real universe. We don’t know, in particular, how big it is, or how old it is, or how many advanced civilizations there may be, or whether computers are possible (let alone artificial intelligence). In fact, we do not know any of the premises that led us to the conclusion. So if the conclusion of the argument is true, then the argument should be dismissed.

But the disaster of the conclusion is even worse than this. If the simulation conclusion is true, then none of our words or ideas refer to anything in the real universe. For all our words and thoughts have been gained as a result of electrical transactions within the supercomputer that houses us. “The tree stands alone on the quad” has nothing to do with trees or quads, for such things have never entered into our experience, and may not even exist in the Real World. Instead, “The tree stands alone on the quad” has its meaning, and is either true or false, depending on certain states of the supercomputer. The sentence is really about the supercomputer, though it may seem to us to be about something else. That, in fact, is just another way of expressing the radically skeptical scenario: our thoughts aren’t about what they seem to be about, but are instead about something else.

But if this is the case, then our simulation conclusion – “We are probably living in a simulated universe” – is itself not about us, or living, or universes, or simulations. None of these words really mean what we think they mean. And so the sentence is either false or meaningless (or true, we might say, but true about electrical states of a computer, and not true about us or the universe) – in any case, it isn’t true in the way we supposed it was. Thus the simulation conclusion refutes itself.

Hilary Putnam makes the very same argument about brains in vats – namely, that if we are brains in vats, then none of our words mean what we might think we mean, including the words “brains” and “vats”, so the supposition is self-refuting. I think it’s easier to see the main idea in Bostrom’s argument because it lays out so many suspicious premises – assumptions about the size of the universe and relative frequency of advanced civilizations and so on – that we can easily begin asking how we can be sure of those premises (especially if it turns out we’re living in a simulation). Putnam’s premises, by contrast, aren’t as prominent, or as jarring to our sensibilities. We’re only talking about a brain in a vat, after all – how far off can our assumptions be?


Minds as predictive engines

(Reading Andy Clark, Surfing Uncertainty (Oxford UP 2015))

I’m no longer sure I know what an “ordinary” theory of mind would look like, but I’m guessing that it would resemble an organized camp of explorers. The explorers, or our senses, venture out into the world and report what they see, hear, and encounter. Back at headquarters, all the information is assembled and projected into a map or model of the territory. The people in charge study the map and decide upon courses of action: maybe one of the teams has turned up something interesting, and should explore further; or maybe headquarters should relocate, or shore up its defenses; or maybe the camp needs to issue a report on its findings to other camps in the area, and so on. The basic model is that information is received into the camp, processed, and decisions are made. This would be a rational way to explore a foreign territory, so it might come naturally to us as a model for how our minds work, trapped as they are in the unfamiliar territory of the world.

But recent promising work in neuroscience suggests this is not at all how the mind works. This news reached me in a fascinating article in WIRED about Karl Friston, who is at the center of a range of new ways to think about how we function, and how consciousness might arise out of biological survival mechanisms. At the heart of Friston’s theory is the free-energy principle, which is sort of a mechanical strategy to keep living things from falling victim to the second law of thermodynamics. According to the free energy principle, a living system tries to keep free energy to a minimum, which means it basically tries not to waste any energy. Ideally, in the simplest possible world, a living organism would sit in one spot, absorb nutrients, and poop as little as possible (rather like a roommate I once had). But our world also allows for the development of more complex organisms that still adhere to the free energy principle, but are able to move around and find the simpler organisms and eat them.

To get these more complex bugs, you need to outfit them with some sensory tools, some movement skills, and a little internal engine for generating predictions about what their senses should be telling them. Then install the following algorithm: 1. Generate a prediction about what the senses should be reporting. 2. Check the prediction against what the senses are in fact reporting. 3. If there is a difference – a surprise! – then do something to make the surprise go away. So our complex bug might begin with the expectation that everything is going swell, and nutrients are being absorbed. But if, surprisingly, this turns out not to be so, then the bug has to do something to minimize the difference: move forward and bite, or move left or right and bite, and so on, until the bug’s expectations are being met – and then, invariably, another surprise comes along, and the process repeats. Basically, a living thing does what it has to do until its senses tell it that the predictions it is generating are true, and then rests in that state for as long as possible.
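
A minimal toy version of that three-step loop, just to fix ideas – the one-dimensional nutrient world, the fixed expectation, and the stay-or-step-sideways repertoire are all my own illustrative assumptions, not Friston’s actual formalism:

```python
import random

# Toy "bug" implementing the predict / compare / act loop described above.
# The nutrient world, the fixed prediction, and the move-left/stay/move-right
# repertoire are invented for illustration only.

def sense(position, world):
    """Step 2: what the senses report at the bug's current position."""
    return world.get(position, 0.0)

def act(position, prediction, world):
    """Step 3: move to whichever neighboring spot best matches the prediction."""
    candidates = [position - 1, position, position + 1]
    return min(candidates, key=lambda p: abs(prediction - world.get(p, 0.0)))

world = {i: random.random() for i in range(20)}  # nutrient level at each spot
position = 10
prediction = 0.8  # Step 1: the standing expectation, "nutrients are plentiful here"

for step in range(10):
    observation = sense(position, world)
    surprise = abs(prediction - observation)   # mismatch between expectation and report
    print(f"step {step}: position={position}, observed={observation:.2f}, surprise={surprise:.2f}")
    if surprise > 0.1:
        position = act(position, prediction, world)
    # A fancier bug would also revise the prediction itself when acting repeatedly
    # fails to make the surprise go away -- which is the move described below in
    # the lunch-with-Alice example.
```

The design choice worth noticing is that the bug never “reasons about” the world; it only acts until the mismatch between expectation and report shrinks. The scaled-up human version described in the following paragraphs adds the second knob: when acting fails, revise the prediction.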

(No doubt this is why we sleep: shutting down our senses is a straightforward way to keep our surprises to a minimum. If we could sleep our whole lives, we probably would. But, alas, there’s other business that we need to do, like eating and mating, and they require wakefulness. Hence alarm clocks.)

The fascinating attraction of Friston’s thinking is that something like this simple algorithm can be scaled upward into an account of human perception and behavior. On this view, we are always generating predictions about our experience – what we are sensing, who we are, what we are doing, and what is happening. These are our ongoing narrative theories, telling us what our circumstances are and what we are doing. These ongoing theories prompt us to do the things that confirm to ourselves that the world is just as we think it is. Occasionally, the world dishes up a surprise we can’t ignore, and so then we have to change our dominant prediction to something more fitting (and therefore, in the end, less surprising). Then we act to make that prediction align with our senses until the next surprise comes along.

Imagine having lunch with Alice. Your brain is generating the prediction “I am having lunch with Alice”, and everything your senses tell you suggests you are right: there’s Alice, there’s some food on your fork, tablecloths, sugar packets. You continue to behave in ways that confirm to you that you know what you are doing: you talk, eat, smile, nod. Then your eyes report that somebody at a neighboring table is holding their phone at eye level in your direction. Well, they are probably just looking up some information, which is consistent with you having lunch with Alice. So no problem. But now there’s another person holding up their phone in a similar way. Hmm. You are still having lunch with Alice, but uncertainty is beginning to build. Now a third person has come up to your table – and no, he’s not the waiter, and he is holding up his phone and pointing it at you in the way people do who are taking pictures or a video. This is now something other than having lunch with Alice. In a desperate attempt to keep the old prediction afloat, you look around – behind you, down at your shirt, trying to find something that everyone might be looking at while you are still having lunch with Alice. But nothing seems out of order. You ask what the person is doing, or you give them a rude glance, because you want to go back to having lunch with Alice. But that is no longer possible: whether they go away or not, you are now going to have to generate some new prediction about what is going on, because the old one will no longer serve. You are not just having lunch with Alice. Maybe you’re the target of a joke? Maybe Alice has suddenly become famous? Or you’re famous now for some reason? You’re on Candid Camera? As fast as you can generate predictions, you are checking the evidence to find some prediction that reduces the gap between what is going on and what you think is going on. You work to reduce the surprise.

(Are you annoyed that I’m not going to tell you what was really happening at the lunch? If so, it helps to demonstrate how needy we are of some sort of prediction that squares with the available evidence. When nothing seems to quite fit, it nags at us and we are annoyed. Sorry.)

Another example: just now I caught myself stroking my beard. I think this was to confirm to myself my prediction that I am engaged in thinking; it is the sort of behavior I associate with thinking. So that checks out. Then I scratched my head, probably to prove to myself that I am right about there not being a bug up there. (Thank goodness.) I re-read what I just wrote, furrowing my brow, hence convincing myself that I was faithfully articulating an idea and reflecting carefully upon it (which is what I predicted about my own behavior). But if, just now, my house’s smoke alarm goes off – mreeeeerp!!!!! – all of my predictions go out the window, since a blaring smoke alarm does not at all support my prediction that I am thinking. This upsetting change would force me to generate quickly a new dominant prediction of my own behavior: what I am doing now, I predict, is something about that noise. My body follows suit to make that new prediction come out as true, and I start moving.

The “ordinary” model of mind has us making decisions and plans on the basis of some rational consideration of the evidence being presented. The Friston model has us making a prediction about what we are doing, and sending out for evidence to confirm our predictions. If the evidence doesn’t fit our predictions, we revise the prediction, and send out for evidence again. We’re always looking and listening with a prediction in mind about what we will be seeing and hearing, and we stick to those predictions for as long as we possibly can, and revise when we must. It’s probably a stupid way to organize an exploration team, or to learn the truth about the world: it is geared toward confirmation bias and all manner of post hoc rationalizing. But it is possibly a very economical design for a living organism whose principal goal is not to waste any energy.

In this model, we don’t have to end up with predictions that are true; we only have to end up with predictions that don’t clash against the evidence. If I can cobble together a worldview that is pretty continuous with what I do and what I encounter, I have satisfied the free energy principle.

Just after I first read the WIRED article about Friston, I was walking on a college campus on a cold rainy day and came across some people with signs urging everyone to accept Jesus Christ as Lord and Savior. They had no takers; everyone gave them a wide berth. I asked myself why they were doing this – standing out in the rain ineffectually promoting what I think is a set of metaphysically bizarre beliefs. The answer came to me quickly: they are standing out in the rain not to persuade others, but in order to confirm their prediction about themselves that they are Christian. What makes religion weird is that it makes such confident predictions about stuff no one could ever possibly see or test in experiments – that God exists, that God loves us, that original sin is real and really bad and can be cured through deicide, etc. So if for whatever reason you predict that you have these beliefs, you have to find something else in the environment to convince yourself that your prediction is right. Standing out in the rain will do the trick – for why would you do such a stupid thing if it weren’t for the fact that you believe in those religious truths? Standing out in the rain, wearing a cross, attending long church services, engaging in prayer behavior – these are just about the only empirical realities you can use to convince yourself that you really do believe in the religious stuff. You can’t get any confirming signals for the beliefs themselves, but you can manufacture your own signals proving to you that you believe them.

And this holds not just for religion, of course. I am a scholar, and so I need to convince myself of this on a daily basis. So I fill my walls with books. I write blogposts. I wear a vaguely European style of clothing, and speak in long and complete sentences about esoteric things. I do all this in order to convince myself that the prediction – “I am a scholar” – is true. You are a fan of the Sports. So you had better get a bumper sticker, a sweater with an emblem, and a cable subscription, for without these things you will lose confidence in the claim that you are a Sports fan. Do you want to believe the world is flat? Start evangelizing, and be sure to do so in contexts where you’ll receive a lot of pushback and ridicule, and pretty soon you will believe – for no one would submit themselves to such humiliating degradation if they didn’t really believe it. Case closed, and congratulations.

I’m still at the stage of making sense of Friston’s view, so there’s a lot more reading to do, but Andy Clark’s works have been helping me to gain a clearer picture of the view, and especially of the ways it has been borne out by neuroscience research. There is Clark’s 2015 Surfing Uncertainty, but also his lengthy and illuminating article in Behavioral and Brain Sciences (“Whatever next? Predictive brains, situated agents, and the future of cognitive science”, 36, 2013, pp. 181–253). From what I see, the theory has just the right features to form an explanatory bridge from the more mechanistic or algorithmic biological world to the world of seemingly intelligent human behavior. It’s a piece that fits the hole in the puzzle of how nature could engineer up something like us.


RNZ interview

The Sunday Show of Radio New Zealand interviewed me about my Delphic maxims piece. It was a delight to speak with Jim Mora, the host. You can listen to the interview here, if you like. We vacationed in New Zealand nearly a decade ago, and had a wonderful time. I regularly think back upon our chance encounter with the genius of Geraldine.
