The worry is familiar. All of my experience comes to me via my central nervous system, which is a biochemical electrical network. It can be hacked. The data coming from my eyes and ears and so on are converted to electrical signals, and so if those electrical signals were stimulated artificially (i.e., not from eyes and ears), I would not be any the wiser. All my experience, including all experience I have ever had up to and including this moment, could be experience not of the world, as it seems, but experience of some neural stimulation machine. I could be a brain in a vat.
It’s an easy doubt to walk away from, but harder to disprove logically, because any witness called to the stand could (again) be the result of the neural stimulation machine. But it is this very fact – the undisprovability of the Vat hypothesis – that has caused many philosophers to wonder if some mistake has been made along the way. G. E. Moore famously asserted that his real, definitive knowledge that he has hands, that his shoes are tied, and so on, is more than enough to demonstrate that any such far-out skeptical hypothesis is false. (Samuel Johnson provided the same “argument” when he refuted Berkeley’s idealism by kicking a stone.) In a contest between the certainty of our immediate world and the vague possibility that we might be thoroughly deceived, certainty wins out.
But other philosophers have dug more deeply into the argument and have come to the conclusion that the radically skeptical scenario is, in fact, self-refuting. The most famous of these philosophers was Hilary Putnam, in a paper entitled “Brains in a Vat”. I think Putnam’s argument is correct. But I have found it difficult to convey to others, so I am going to try to sneak up on it from behind. In an effort to make the central idea clear, I am going to turn to Nick Bostrom’s argument for thinking we may be living in a simulated universe.
The argument can be made quite sophisticated, but the central idea is this. The universe is really big; in fact, it is mind-bogglingly huge. And it is super-duper old. In all that space and all that time, there must have been intelligent civilizations. And some of these must have developed supercomputers. And – assuming that some sort of supercomputer can house a consciousness, or even house several or many consciousnesses – some civilizations must have become interested in artificially creating consciousnesses believing themselves to be living in environments that seem real, but in fact are artificially created. (Think of “The Sims”, but suitably ramped up.) Maybe some of these advanced civilizations, or even just one of them, really went crazy and artificially created trillions and trillions of such conscious entities. All of this is possible, and maybe even probable, depending on the assumptions that are made and how the numbers are run. Upshot: there could be vastly more “simulated lives” than real ones. And so, by the numbers, it is more likely that you are experiencing a simulated universe than that you are experiencing a real one. Call this “the simulation conclusion”.
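The “run the numbers” step can be made concrete with a toy calculation. The figures below are pure placeholder assumptions of my own, not Bostrom’s; the point is only to show how, once simulated lives are allowed to outnumber real ones, the simulated fraction does the work:

```python
# Toy sketch of the simulation argument's arithmetic.
# Every number here is an illustrative assumption, not a figure
# from Bostrom's paper.

real_civilizations = 1_000_000        # assumed civilizations that ever arise
fraction_running_sims = 0.001         # assumed fraction that run simulations
sims_per_simulating_civ = 10**15      # assumed simulated lives per such civ
real_lives_per_civ = 10**11           # assumed real lives per civilization

simulated_lives = (real_civilizations * fraction_running_sims
                   * sims_per_simulating_civ)
real_lives = real_civilizations * real_lives_per_civ

# If you are equally likely to be any one of the lives ever lived,
# the probability that yours is simulated is just the simulated fraction.
p_simulated = simulated_lives / (simulated_lives + real_lives)
print(f"P(simulated) = {p_simulated:.3f}")
```

With these made-up inputs the simulated lives outnumber the real ones ten to one, so the printed probability is about 0.909 – and tweaking any assumption swings the result, which is exactly the dependence on “how the numbers are run” noted above.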
Grant all this for the moment. Suppose all the numbers check out, and the conclusion is indeed more probably true than its denial. If the conclusion is true, then all our experience is of a simulated universe (probably). But this means we really don’t know anything about the real universe. We don’t know, in particular, how big it is, or how old it is, or how many advanced civilizations there may be, or whether computers are possible (let alone artificial intelligence). In fact, we do not know any of the premises that led us to the conclusion. So if the conclusion of the argument is true, then the argument should be dismissed.
But the disaster of the conclusion is even worse than this. If the simulation conclusion is true, then none of our words or ideas refer to anything in the real universe. For all our words and thoughts have been gained as a result of electrical transactions within the supercomputer that houses us. “The tree stands alone on the quad” has nothing to do with trees or quads, for such things have never entered into our experience, and may not even exist in the Real World. Instead, “The tree stands alone on the quad” has its meaning, and is either true or false, depending on certain states of the supercomputer. The sentence is really about the supercomputer, though it may seem to us to be about something else. That, in fact, is just another way of expressing the radically skeptical scenario: our thoughts aren’t about what they seem to be about, but are instead about something else.
But if this is the case, then our simulation conclusion – “We are probably living in a simulated universe” – is itself not about us, or living, or universes, or simulations. None of these words really mean what we think they mean. And so the sentence is either false or meaningless (or true, we might say, but true about electrical states of a computer, and not true about us or the universe) – in any case, it isn’t true in the way we supposed it was. Thus the simulation conclusion refutes itself.
Hilary Putnam makes the very same argument about brains in vats – namely, that if we are brains in vats, then none of our words mean what we think they mean, including the words “brains” and “vats”, so the supposition is self-refuting. I think it’s easier to see the main idea in Bostrom’s argument because it lays out so many suspicious premises – assumptions about the size of the universe and the relative frequency of advanced civilizations and so on – that we can easily begin asking how we can be sure of those premises (especially if it turns out we’re living in a simulation). Putnam’s premises, by contrast, aren’t as prominent, or as jarring to our sensibilities. We’re only talking about a brain in a vat, after all – how far off can our assumptions be?