Unvatting the Brains: Putnam, Bostrom, and thinking the unthinkable


The worry is familiar. All of my experience comes to me via my central nervous system, which is a biochemical electrical network. It can be hacked. The data coming from my eyes and ears and so on are converted to electrical signals, and so if those electrical signals were generated artificially (i.e., not by eyes and ears), I would not be any the wiser. All my experience, including all experience I have ever had up to and including this moment, could be experience not of the world, as it seems, but experience of some neural stimulation machine. I could be a brain in a vat.

It’s an easy doubt to walk away from, but harder to disprove logically, because any witness called to the stand could (again) be the result of the neural stimulation machine. But it is this very fact – the logical irrefutability of the Vat hypothesis – that has caused many philosophers to wonder if some mistake has been made along the way. G. E. Moore famously asserted that his real, definitive knowledge that he has hands, that his shoes are tied, and so on, is more than enough to demonstrate that any such far-out skeptical hypothesis is false. (Samuel Johnson provided the same “argument” when he refuted Berkeley’s idealism by kicking a stone.) In a contest between the certainty of our immediate world, and the vague possibility that we might be thoroughly deceived, certainty wins out.

But other philosophers have dug more deeply into the argument and have come to the conclusion that the radically skeptical scenario is, in fact, self-refuting. The most famous of these philosophers was Hilary Putnam, in a paper entitled “Brains in a Vat”. I think Putnam’s argument is correct. But I have found it difficult to convey to others, so I am going to try to sneak up on it from behind. In an effort to make the central idea clear, I am going to turn to Nick Bostrom’s argument for thinking we may be living in a simulated universe.


Neo, Sims-style

The argument can be made quite sophisticated, but the central idea is this. The universe is really big; in fact, it is mind-bogglingly huge. And it is super-duper old. In all that space and all that time, there must have been intelligent civilizations. And some of these must have developed supercomputers. And – assuming that some sort of supercomputer can house a consciousness, or even house several or many consciousnesses – some civilizations must have become interested in artificially creating consciousnesses believing themselves to be living in environments that seem real, but in fact are artificially created. (Think of “The Sims”, but suitably ramped up.) Maybe some of these advanced civilizations, or even just one of them, really went crazy and artificially created trillions and trillions of such conscious entities. All of this is possible, and maybe even probable, depending on the assumptions that are made and how the numbers are run. Upshot: there could be vastly more “simulated lives” than real ones. And so, by the numbers, it is more likely that you are experiencing a simulated universe than that you are experiencing a real one. Call this “the simulation conclusion”.
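The bookkeeping behind the simulation conclusion can be made concrete with a toy calculation. All the numbers here are invented for illustration (the real argument leaves them as parameters); the point is only that if even one civilization runs enough simulations, simulated observers swamp real ones.

```python
# Toy bookkeeping for the simulation argument. All numbers are
# illustrative assumptions, not estimates from the literature.

real_lives = 10**11             # rough count of observers who ever lived "for real"
sims_per_civilization = 10**15  # simulated lives run by one enthusiastic civilization
simulating_civilizations = 1    # it only takes one

simulated_lives = sims_per_civilization * simulating_civilizations

# If you could equally well be any observer, the chance you are simulated
# is just the simulated fraction of all observers:
p_simulated = simulated_lives / (simulated_lives + real_lives)
print(f"{p_simulated:.6f}")  # overwhelmingly close to 1
```

Tweak the assumptions however you like; as long as simulated lives outnumber real ones by orders of magnitude, the fraction stays pinned near 1 – which is all the argument needs.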

Grant all this for the moment. Suppose all the numbers check out, and the conclusion is indeed more probably true than its denial. If the conclusion is true, then all our experience is of a simulated universe (probably). But this means we really don’t know anything about the real universe. We don’t know, in particular, how big it is, or how old it is, or how many advanced civilizations there may be, or whether computers are possible (let alone artificial intelligence). In fact, we do not know any of the premises that led us to the conclusion. So if the conclusion of the argument is true, then the argument should be dismissed.

But the disaster of the conclusion is even worse than this. If the simulation conclusion is true, then none of our words or ideas refer to anything in the real universe. For all our words and thoughts have been gained as a result of electrical transactions within the supercomputer that houses us. “The tree stands alone on the quad” has nothing to do with trees or quads, for such things have never entered into our experience, and may not even exist in the Real World. Instead, “The tree stands alone on the quad” has its meaning, and is either true or false, depending on certain states of the supercomputer. The sentence is really about the supercomputer, though it may seem to us to be about something else. That, in fact, is just another way of expressing the radically skeptical scenario: our thoughts aren’t about what they seem to be about, but are instead about something else.

But if this is the case, then our simulation conclusion – “We are probably living in a simulated universe” – is itself not about us, or living, or universes, or simulations. None of these words really mean what we think they mean. And so the sentence is either false or meaningless (or true, we might say, but true about electrical states of a computer, and not true about us or the universe) – in any case, it isn’t true in the way we supposed it was. Thus the simulation conclusion refutes itself.

Hilary Putnam makes the very same argument about brains in vats – namely, that if we are brains in vats, then none of our words mean what we might think we mean, including the words “brains” and “vats”, so the supposition is self-refuting. I think it’s easier to see the main idea in Bostrom’s argument because it lays out so many suspicious premises – assumptions about the size of the universe and relative frequency of advanced civilizations and so on – that we can easily begin asking how we can be sure of those premises (especially if it turns out we’re living in a simulation). Putnam’s premises, by contrast, aren’t as prominent, or as jarring to our sensibilities. We’re only talking about a brain in a vat, after all – how far off can our assumptions be?

Posted in Metaphysical musings, Uncategorized | Leave a comment

Minds as predictive engines

(Reading Andy Clark, Surfing Uncertainty (Oxford UP 2015))

I’m no longer sure I know what an “ordinary” theory of mind would look like, but I’m guessing that it would resemble an organized camp of explorers. The explorers, or our senses, venture out into the world and report what they see, hear, and encounter. Back at headquarters, all the information is assembled and projected into a map or model of the territory. The people in charge study the map and decide upon courses of action: maybe one of the teams has turned up something interesting, and should explore further; or maybe headquarters should relocate, or shore up its defenses; or maybe the camp needs to issue a report on its findings to other camps in the area, and so on. The basic model is that information is received into the camp, processed, and decisions are made. This would be a rational way to explore a foreign territory, so it might come naturally to us as a model for how our minds work, trapped as they are in the unfamiliar territory of the world.

But recent promising work in neuroscience suggests this is not at all how the mind works. This news reached me in a fascinating article in WIRED about Karl Friston, who is at the center of a range of new ways to think about how we function, and how consciousness might arise out of biological survival mechanisms. At the heart of Friston’s theory is the free-energy principle, which is sort of a mechanical strategy to keep living things from falling victim to the second law of thermodynamics. According to the free-energy principle, a living system tries to keep free energy to a minimum, which means it basically tries not to waste any energy. Ideally, in the simplest possible world, a living organism would sit in one spot, absorb nutrients, and poop as little as possible (rather like a roommate I once had). But our world also allows for the development of more complex organisms that still adhere to the free-energy principle, but are able to move around and find the simpler organisms and eat them.

To get these more complex bugs, you need to outfit them with some sensory tools, some movement skills, and a little internal engine for generating predictions about what their senses should be telling them. Then install the following algorithm: 1. Generate a prediction about what the senses should be reporting. 2. Check the prediction against what the senses are in fact reporting. 3. If there is a difference – a surprise! – then do something to make the surprise go away. So our complex bug might begin with the expectation that everything is going swell, and nutrients are being absorbed. But if, surprisingly, this turns out not to be so, then the bug has to do something to minimize the difference: move forward and bite, or move left or right and bite, and so on, until the bug’s expectations are being met – and then, invariably, another surprise comes along, and the process repeats. Basically, a living thing does what it has to do until its senses tell it that the predictions it is generating are true, and then rests in that state for as long as possible.
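The three-step loop above can be sketched in a few lines of code. Everything here – the one-dimensional world, the bug that always predicts “food” – is my own invented toy, not anything from Friston’s formal treatment; it just shows the predict–check–act shape of the algorithm.

```python
import random

# A toy surprise-minimizing bug. Its world is a ring of cells, each
# either "food" or "empty". The bug's standing prediction is that it
# is sitting on food; any mismatch is a surprise it acts to remove.

def sense(world, position):
    """What the senses report: the contents of the current cell."""
    return world[position]

def run_bug(world, start=0, steps=20):
    position = start
    prediction = "food"   # 1. predict: nutrients are being absorbed
    eaten = 0
    for _ in range(steps):
        observation = sense(world, position)
        if observation == prediction:   # 2. check prediction against senses
            eaten += 1
            world[position] = "empty"   # eat the food where we sit
        else:                           # 3. surprise! move to make it go away
            position = (position + random.choice([-1, 1])) % len(world)
    return eaten

world = ["food", "empty", "food", "food", "empty", "food"]
print(run_bug(world))
```

Note that the bug never reasons about the world; it only wanders until its senses stop contradicting its one prediction – which is exactly the economy the free energy principle is after.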

(No doubt this is why we sleep: shutting down our senses is a straightforward way to keep our surprises to a minimum. If we could sleep our whole lives, we probably would. But, alas, there’s other business that we need to do, like eating and mating, and they require wakefulness. Hence alarm clocks.)

The fascinating attraction of Friston’s thinking is that something like this simple algorithm can be scaled upward into an account of human perception and behavior. On this view, we are always generating predictions about our experience – what we are sensing, who we are, what we are doing, and what is happening. These are our ongoing narrative theories, telling us what our circumstances are and what we are doing. These ongoing theories prompt us to do the things that confirm to ourselves that the world is just as we think it is. Occasionally, the world dishes up a surprise we can’t ignore, and so then we have to change our dominant prediction to something more fitting (and therefore, in the end, less surprising). Then we act to make that prediction align with our senses until the next surprise comes along.

Imagine having lunch with Alice. Your brain is generating the prediction “I am having lunch with Alice”, and everything your senses tell you suggests you are right: there’s Alice, there’s some food on your fork, tablecloths, sugar packets. You continue to behave in ways that confirm to you that you know what you are doing: you talk, eat, smile, nod. Then your eyes report that somebody at a neighboring table is holding their phone at eye level in your direction. Well, they are probably just looking up some information, which is consistent with you having lunch with Alice. So no problem. But now there’s another person holding up their phone in a similar way. Hmm. You are still having lunch with Alice, but uncertainty is beginning to build. Now a third person has come up to your table – and no, he’s not the waiter, and he is holding up his phone and pointing it at you in the way people do who are taking pictures or a video. This is now something other than having lunch with Alice. In a desperate attempt to keep the old prediction afloat, you look around – behind you, down at your shirt, trying to find something that everyone might be looking at while you are still having lunch with Alice. But nothing seems out of order. You ask what the person is doing, or you give them a rude glance, because you want to go back to having lunch with Alice. But that is no longer possible: whether they go away or not, you are now going to have to generate some new prediction about what is going on, because the old one will no longer serve. You are not just having lunch with Alice. Maybe you’re the target of a joke? Maybe Alice has suddenly become famous? Or you’re famous now for some reason? You’re on Candid Camera? As fast as you can generate predictions, you are checking the evidence to find some prediction that reduces the gap between what is going on and what you think is going on. You work to reduce the surprise.

(Are you annoyed that I’m not going to tell you what was really happening at the lunch? If so, that only demonstrates how badly we need some prediction that squares with the available evidence. When nothing seems to quite fit, it nags at us and we are annoyed. Sorry.)

Another example: just now I caught myself stroking my beard. I think this was to confirm to myself my prediction that I am engaged in thinking; it is the sort of behavior I associate with thinking. So that checks out. Then I scratched my head, probably to prove to myself that I am right about there not being a bug up there. (Thank goodness.) I re-read what I just wrote, furrowing my brow, hence convincing myself that I was faithfully articulating an idea and reflecting carefully upon it (which is what I predicted about my own behavior). But if, just now, my house’s smoke alarm goes off – mreeeeerp!!!!! – all of my predictions go out the window, since a blaring smoke alarm does not at all support my prediction that I am thinking. This upsetting change would force me to quickly generate a new dominant prediction of my own behavior: what I am doing now, I predict, is something about that noise. My body follows suit to make that new prediction come out as true, and I start moving.

The “ordinary” model of mind has us making decisions and plans on the basis of some rational consideration of the evidence being presented. The Friston model has us making a prediction about what we are doing, and sending out for evidence to confirm our predictions. If the evidence doesn’t fit our predictions, we revise the prediction, and send out for evidence again. We’re always looking and listening with a prediction in mind about what we will be seeing and hearing, and we stick to those predictions for as long as we possibly can, and revise when we must. It’s probably a stupid way to organize an exploration team, or to learn the truth about the world: it is geared toward confirmation bias and all manner of post hoc rationalizing. But it is possibly a very economical design for a living organism whose principal goal is not to waste any energy.

In this model, we don’t have to end up with predictions that are true; we only have to end up with predictions that don’t clash against the evidence. If I can cobble together a worldview that is pretty continuous with what I do and what I encounter, I have satisfied the free energy principle.

Just after I first read the WIRED article about Friston, I was walking on a college campus on a cold rainy day and came across some people with signs urging everyone to accept Jesus Christ as Lord and Savior. They had no takers; everyone gave them a wide berth. I asked myself why they were doing this – standing out in the rain ineffectually promoting what I think is a set of metaphysically bizarre beliefs. The answer came to me quickly: they are standing out in the rain not to persuade others, but in order to confirm their prediction about themselves that they are Christian. What makes religion weird is that it makes such confident predictions about stuff no one could ever possibly see or test in experiments – that God exists, that God loves us, that original sin is real and really bad and can be cured through deicide, etc. So if for whatever reason you predict that you have these beliefs, you have to find something else in the environment to convince yourself that your prediction is right. Standing out in the rain will do the trick – for why would you do such a stupid thing if it weren’t for the fact that you believe in those religious truths? Standing out in the rain, wearing a cross, attending long church services, engaging in prayer behavior – these are just about the only empirical realities you can use to convince yourself that you really do believe in the religious stuff. You can’t get any confirming signals for the beliefs themselves, but you can manufacture your own signals proving to you that you believe them.

And this holds not just for religion, of course. I am a scholar, and so I need to convince myself of this on a daily basis. So I fill my walls with books. I write blogposts. I wear a vaguely European style of clothing, and speak in long and complete sentences about esoteric things. I do all this in order to convince myself that the prediction – “I am a scholar” – is true. You are a fan of the Sports. So you had better get a bumper sticker, a sweater with an emblem, and a cable subscription, for without these things you will lose confidence in the claim that you are a Sports fan. Do you want to believe the world is flat? Start evangelizing, and be sure to do so in contexts where you’ll receive a lot of pushback and ridicule, and pretty soon you will believe – for no one would submit themselves to such humiliating degradation if they didn’t really believe it. Case closed, and congratulations.

I’m still at the stage of making sense of Friston’s view, so there’s a lot more reading to do, but Andy Clark’s works have been helping me to gain a clearer picture of the view, and especially of the ways it has been borne out by neuroscience research. There is Clark’s 2015 Surfing Uncertainty, but also his lengthy and illuminating article in Behavioral and Brain Sciences (“Whatever next? Predictive brains, situated agents, and the future of cognitive science”, 36, 2013, pp. 181–253). From what I see, the theory has just the right features to form an explanatory bridge from the more mechanistic or algorithmic biological world to the world of seemingly intelligent human behavior. It’s a piece that fits the hole in the puzzle of how nature could engineer up something like us.

Posted in Books, Meanings of life / death / social & moral stuff, Metaphysical musings | Leave a comment

RNZ interview

The Sunday Show of Radio New Zealand interviewed me about my Delphic maxims piece. It was a delight to speak with Jim Mora, the host. You can listen to the interview here, if you like. We vacationed in New Zealand nearly a decade ago, and had a wonderful time. I regularly think back upon our chance encounter with the genius of Geraldine.

Posted in Uncategorized | Leave a comment

A poisoned peace

“I realize that if through science I can seize phenomena and enumerate them, I cannot, for all that, apprehend the world. Were I to trace its entire relief with my finger, I should not know any more. And you give me the choice between a description that is sure but that teaches me nothing and hypotheses that claim to teach me but that are not sure. A stranger to myself and to the world, armed solely with a thought that negates itself as soon as it asserts, what is this condition in which I can have peace only by refusing to know and to live, in which the appetite for conquest bumps into walls that defy its assaults? To will is to stir up paradoxes. Everything is ordered in such a way as to bring into being that poisoned peace produced by thoughtlessness, lack of heart, or fatal renunciations.”  (Camus, Myth of Sisyphus)

Posted in Uncategorized | Leave a comment

On the Other Delphic Maxims

Now up at Aeon. The conclusion:

The fact that the great majority of maxims on the list can still serve us today is itself worth further reflection. There is no denying that our lives have changed a lot in the past 25 centuries. But the need to organise one’s priorities, to cultivate friendships and social bonds, to care for families, and to measure out one’s emotions – these are philosophical requirements at the foundation of a human life, and they haven’t changed. By reflecting on these maxims, and thinking through how they might change our lives, we form a kinship with those who turned to the ancient sages for guidance – and share in the human effort to live wisely.

Posted in Historical episodes, Meanings of life / death / social & moral stuff, Stacks of Books | 1 Comment

Say, whatever happened to Casearius?

Readers of Spinoza’s letters will recall the name “Casearius”. Johannes Casearius lived in the same house in Rijnsburg as Spinoza, and Spinoza taught him Cartesian philosophy, an effort which led in part to Spinoza’s book, The Principles of Cartesian Philosophy. Spinoza regarded Casearius as troublesome, and was wary of sharing his own views with him. Casearius went on to gain a degree in theology from Leiden, but couldn’t find work, and so signed on with the VOC. He ended up in Cochin (Kochi) in Southwest India in 1669, and there he met Hendrik Adriaan van Reede tot Drakenstein, a great naturalist, who was to author the multivolume botanical work, Hortus Malabaricus. Casearius was recruited to put the manuscript into proper Latin – some of which, I am guessing, he learned from Spinoza. According to Harold Cook (Matters of Exchange), both Van Reede and Casearius were broad-minded in religious matters, as was a third member of Van Reede’s team: Matthew of Saint Joseph, a friar of the Discalced Carmelites, who was extremely well-traveled and knowledgeable of local people and customs. Casearius eventually succumbed to some tropical disease, and died in 1677 (the same year as Spinoza) while en route to Batavia (Central Jakarta).

I should add that a lot of Van Reede’s botanical knowledge of Malabar came from three local experts: Apu Botto, Ranga Botto, and Vinaique Pandito [pandito = “scholar”]. These fellows weren’t just casual recognizers of flora, but experts trained in the classical literature of plants in their own culture (a great example of how Enlightenment knowledge rides upon the shoulders of unsung peoples).

Here are the makings of an interesting historical novel!

Posted in Historical episodes | Leave a comment

Gerry’s soldiers

We have been in the process of sorting through the detritus of my parents-in-law: lots of junk, no longer meaningful to anyone, but occasionally the striking this or that suggestive of a parent’s love, a freakish endeavor, or long afternoons of timeless play. This last mood was suggested by my father-in-law’s tub of tin soldiers.

There are nine intact pieces, missing no limbs or helmets, though little of the original paint shows through:

good soldiers

Most surprising among them is this lonesome cowboy, who must have been bewildered as he wandered in from the prairies into the fearsome trenches:


And I can only imagine this Texan’s horror as he came across the body parts strewn across the fields:

body parts

But medical attention was available, for those who could still benefit from it:


Sadly, for me, the bicyclist’s broken wheel rendered him pretty much useless:


I’m sure Gerry had a lot of fun setting these guys up into various scenarios, and though I feel some regret that more of the pieces aren’t intact, I’d like to believe that they were played with thoroughly, which would mean they each did their duty.

Posted in This & that in the life of CH | Leave a comment