(Reading Andy Clark, Surfing Uncertainty (Oxford UP 2015))
I’m no longer sure I know what an “ordinary” theory of mind would look like, but I’m guessing that it would resemble an organized camp of explorers. The explorers, or our senses, venture out into the world and report what they see, hear, and encounter. Back at headquarters, all the information is assembled and projected into a map or model of the territory. The people in charge study the map and decide upon courses of action: maybe one of the teams has turned up something interesting, and should explore further; or maybe headquarters should relocate, or shore up its defenses; or maybe the camp needs to issue a report on its findings to other camps in the area, and so on. The basic model is that information is received into the camp, processed, and decisions are made. This would be a rational way to explore a foreign territory, so it might come naturally to us as a model for how our minds work, trapped as they are in the unfamiliar territory of the world.
But recent promising work in neuroscience suggests this is not at all how the mind works. This news reached me in a fascinating article in WIRED about Karl Friston, who is at the center of a range of new ways to think about how we function, and how consciousness might arise out of biological survival mechanisms. At the heart of Friston’s theory is the free energy principle, which is sort of a mechanical strategy to keep living things from falling victim to the second law of thermodynamics. According to the free energy principle, a living system tries to keep free energy to a minimum, which means it basically tries not to waste any energy. Ideally, in the simplest possible world, a living organism would sit in one spot, absorb nutrients, and poop as little as possible (rather like a roommate I once had). But our world also allows for the development of more complex organisms that still adhere to the free energy principle, but are able to move around and find the simpler organisms and eat them.
To get these more complex bugs, you need to outfit them with some sensory tools, some movement skills, and a little internal engine for generating predictions about what their senses should be telling them. Then install the following algorithm:

1. Generate a prediction about what the senses should be reporting.
2. Check the prediction against what the senses are in fact reporting.
3. If there is a difference – a surprise! – then do something to make the surprise go away.

So our complex bug might begin with the expectation that everything is going swell, and nutrients are being absorbed. But if, surprisingly, this turns out not to be so, then the bug has to do something to minimize the difference: move forward and bite, or move left or right and bite, and so on, until the bug’s expectations are being met – and then, invariably, another surprise comes along, and the process repeats. Basically, a living thing does what it has to do until its senses tell it that the predictions it is generating are true, and then rests in that state for as long as possible.
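The three-step loop can be sketched, very loosely, as a toy simulation. (The names, the one-dimensional world, and the random movement strategy here are my own illustration, not Friston’s formalism, which is cast in terms of variational free energy rather than a simple boolean mismatch.)

```python
# A toy "complex bug" running the predict -> compare -> act loop.
import random

def sense(world, position):
    """Report whether there is a nutrient at the bug's position."""
    return world[position]

def run_bug(world, position, steps=50):
    """Minimize 'surprise': act until the senses match the prediction."""
    prediction = True  # the bug expects to be absorbing nutrients
    for _ in range(steps):
        observation = sense(world, position)
        if observation == prediction:
            # No surprise: absorb the nutrient and rest in place.
            world[position] = False
            prediction = True  # ...and expect nutrients again
        else:
            # Surprise! Do something to make it go away: move and bite.
            position = (position + random.choice([-1, 1])) % len(world)
    return position

world = [random.random() < 0.3 for _ in range(20)]  # sparse nutrients
final = run_bug(world, position=0)
```

Note that the bug never builds a map of its territory; it just keeps acting until prediction and sensation agree, which is the whole point of the contrast with the explorers’ camp.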
(No doubt this is why we sleep: shutting down our senses is a straightforward way to keep our surprises to a minimum. If we could sleep our whole lives, we probably would. But, alas, there’s other business that we need to do, like eating and mating, and they require wakefulness. Hence alarm clocks.)
The fascinating attraction of Friston’s thinking is that something like this simple algorithm can be scaled upward into an account of human perception and behavior. On this view, we are always generating predictions about our experience – what we are sensing, who we are, what we are doing, and what is happening. These are our ongoing narrative theories, telling us what our circumstances are and what we are doing. These ongoing theories prompt us to do the things that confirm to ourselves that the world is just as we think it is. Occasionally, the world dishes up a surprise we can’t ignore, and so then we have to change our dominant prediction to something more fitting (and therefore, in the end, less surprising). Then we act to make that prediction align with our senses until the next surprise comes along.
Imagine having lunch with Alice. Your brain is generating the prediction “I am having lunch with Alice”, and everything your senses tell you suggests you are right: there’s Alice, there’s some food on your fork, tablecloths, sugar packets. You continue to behave in ways that confirm to you that you know what you are doing: you talk, eat, smile, nod. Then your eyes report that somebody at a neighboring table is holding their phone at eye level in your direction. Well, they are probably just looking up some information, which is consistent with you having lunch with Alice. So no problem. But now there’s another person holding up their phone in a similar way. Hmm. You are still having lunch with Alice, but uncertainty is beginning to build. Now a third person has come up to your table – and no, he’s not the waiter, and he is holding up his phone and pointing it at you in the way people do who are taking pictures or a video. This is now something other than having lunch with Alice. In a desperate attempt to keep the old prediction afloat, you look around – behind you, down at your shirt, trying to find something that everyone might be looking at while you are still having lunch with Alice. But nothing seems out of order. You ask what the person is doing, or you give them a rude glance, because you want to go back to having lunch with Alice. But that is no longer possible: whether they go away or not, you are now going to have to generate some new prediction about what is going on, because the old one will no longer serve. You are not just having lunch with Alice. Maybe you’re the target of a joke? Maybe Alice has suddenly become famous? Or you’re famous now for some reason? You’re on Candid Camera? As fast as you can generate predictions, you are checking the evidence to find some prediction that reduces the gap between what is going on and what you think is going on. You work to reduce the surprise.
(Are you annoyed that I’m not going to tell you what was really happening at the lunch? If so, that only demonstrates how badly we need some sort of prediction that squares with the available evidence. When nothing quite fits, it nags at us, and we are annoyed. Sorry.)
Another example: just now I caught myself stroking my beard. I think this was to confirm to myself my prediction that I am engaged in thinking; it is the sort of behavior I associate with thinking. So that checks out. Then I scratched my head, probably to prove to myself that I am right about there not being a bug up there. (Thank goodness.) I re-read what I just wrote, furrowing my brow, thereby convincing myself that I was faithfully articulating an idea and reflecting carefully upon it (which is what I predicted about my own behavior). But if, just now, my house’s smoke alarm goes off – mreeeeerp!!!!! – all of my predictions go out the window, since a blaring smoke alarm does not at all support my prediction that I am thinking. This upsetting change would force me to quickly generate a new dominant prediction of my own behavior: what I am doing now, I predict, is something about that noise. My body follows suit to make that new prediction come out as true, and I start moving.
The “ordinary” model of mind has us making decisions and plans on the basis of some rational consideration of the evidence being presented. The Friston model has us making a prediction about what we are doing, and sending out for evidence to confirm our predictions. If the evidence doesn’t fit our predictions, we revise the prediction, and send out for evidence again. We’re always looking and listening with a prediction in mind about what we will be seeing and hearing, and we stick to those predictions for as long as we possibly can, and revise when we must. It’s probably a stupid way to organize an exploration team, or to learn the truth about the world: it is geared toward confirmation bias and all manner of post hoc rationalizing. But it is possibly a very economical design for a living organism whose principal goal is not to waste any energy.
In this model, we don’t have to end up with predictions that are true; we only have to end up with predictions that don’t clash against the evidence. If I can cobble together a worldview that is pretty continuous with what I do and what I encounter, I have satisfied the free energy principle.
Just after I first read the WIRED article about Friston, I was walking on a college campus on a cold rainy day and came across some people with signs urging everyone to accept Jesus Christ as Lord and Savior. They had no takers; everyone gave them a wide berth. I asked myself why they were doing this – standing out in the rain ineffectually promoting what I think is a set of metaphysically bizarre beliefs. The answer came to me quickly: they are standing out in the rain not to persuade others, but in order to confirm their prediction about themselves that they are Christian. What makes religion weird is that it makes such confident predictions about stuff no one could ever possibly see or test in experiments – that God exists, that God loves us, that original sin is real and really bad and can be cured through deicide, etc. So if for whatever reason you predict that you have these beliefs, you have to find something else in the environment to convince yourself that your prediction is right. Standing out in the rain will do the trick – for why would you do such a stupid thing if it weren’t for the fact that you believe in those religious truths? Standing out in the rain, wearing a cross, attending long church services, engaging in prayer behavior – these are just about the only empirical realities you can use to convince yourself that you really do believe in the religious stuff. You can’t get any confirming signals for the beliefs themselves, but you can manufacture your own signals proving to you that you believe them.
And this holds not just for religion, of course. I am a scholar, and so I need to convince myself of this on a daily basis. So I fill my walls with books. I write blogposts. I wear a vaguely European style of clothing, and speak in long and complete sentences about esoteric things. I do all this in order to convince myself that the prediction – “I am a scholar” – is true. You are a fan of the Sports. So you had better get a bumper sticker, a sweater with an emblem, and a cable subscription, for without these things you will lose confidence in the claim that you are a Sports fan. Do you want to believe the world is flat? Start evangelizing, and be sure to do so in contexts where you’ll receive a lot of pushback and ridicule, and pretty soon you will believe – for no one would submit themselves to such humiliating degradation if they didn’t really believe it. Case closed, and congratulations.
I’m still at the stage of making sense of Friston’s view, so there’s a lot more reading to do, but Andy Clark’s works have been helping me to gain a clearer picture of the view, and especially of the ways it has been borne out by neuroscience research. There is Clark’s 2015 Surfing Uncertainty, but also his lengthy and illuminating article in Behavioral and Brain Sciences (“Whatever next? Predictive brains, situated agents, and the future of cognitive science”, 36, 2013, pp. 181–253). From what I see, the theory has just the right features to form an explanatory bridge from the more mechanistic or algorithmic biological world to the world of seemingly intelligent human behavior. It’s a piece that fits the hole in the puzzle of how nature could engineer up something like us.