Justin E. H. Smith’s recent book, The Internet Is Not What You Think It Is: A History, A Philosophy, A Warning (Princeton UP, 2022), has received plenty of notice here on 3 Quarks Daily, and for good reason. Smith’s books and essays always remind us that, no matter how bizarre and ironic some recent damn thing is, we are always part of a long anthropological history of bizarre irony, and indeed the harder you look the more bizarre and ironic it all gets. At least, I think that is one of the main vibes of 3QD: seeing where we are in some map of the strange natural/cultural universe.
There are plenty of books complaining about the evils of the internet, and to be sure Smith offers four complaints: it is addictive, it warps human lives through algorithms, it is ruled by corporate interests, and it serves as a universal surveillance device. There is enough evidence, both objective and anecdotal, that too much time on the internet turns one’s brain into a Twitter slushie that leaves one in no condition to meditate upon difficult problems, but instead only to scroll and click and scroll and click at digital gewgaws, feeling empty and alone, a terrible feeling one then tries to escape with more scrolling and clicking.
[Reading Jeff Hawkins, A Thousand Brains, Basic Books, 2021]
René Descartes had an uneasy relationship with academia. He was very well educated, but he never held any academic positions and spent much of his life arguing with professors and theologians. He saw himself as a scientist, like Galileo, disclosing the secrets of the universe, and it just so happened that in his view those secrets were not best expressed in Aristotelian or scholastic terms. He seriously considered providing a detailed, point-by-point commentary on, and criticism of, an academic treatise of the day (Francisco Suárez’s Metaphysical Disputations), but then apparently decided his time would be better spent just forging ahead and making his own path, trampling over carefully articulated distinctions drawn by pointy-headed academics.
I get something of the same vibe from Jeff Hawkins’ book, A Thousand Brains. Hawkins was on his way to becoming an academic brain scientist, but at the time the academics were busy asking narrower questions he didn’t find as interesting, so he went his own way, made millions of dollars, and started his own research institute so he could study what he wanted. Unlike Descartes, Hawkins and his team are not, I think, fundamentally changing the paradigm of neuroscience, and their work is being published in respected journals; but like Descartes, he is marching ahead at a pace that academics might consider brazen. Not my field, so I don’t know. I can say that his knowledge of philosophy is shallow, but that doesn’t seem to hamper his research at all.
I will also add that while I am fascinated by the first half of his book, the second half is not so great. Hawkins offers his own view of humanity’s future, and he just doesn’t have much that’s interesting to say. So my advice is to skip that and just read through chapter 9.
Okay, on to the book. The crucial players in Hawkins’ story are the neocortex, cortical columns, and frames of reference. The neocortex is a thin sheet of networked neurons, about the size of a dinner napkin and about 3 mm thick. It wraps around the rest of your brain in crinkled fashion. It’s the part of the brain responsible for all of our fancy, distinctly human sorts of thought. If you zoom in on it, you will find about 150,000 cortical columns that span the thickness of the napkin. Those cortical columns have similar structures. So the neocortex seems to be made of 150,000 similar circuits, like just so many Raspberry Pis, though each one involves something like 100,000 neurons. They end up performing different functions because, basically, they are plugged into different parts of our bodies and different sets of other columns.
Cortical columns combine together to provide what Hawkins calls frames of reference. A frame of reference can be thought of as an algorithm that shapes and structures the information it receives, or works to sort out or combine information, or shapes and structures some activity or behavior on our part. Hawkins’ favorite example is our knowledge of a coffee cup. We know how to recognize one by sight; we know how it should feel in our hands, and what each finger should sense as we grip the handle or trace the edge of the rim; we know how much it should weigh in our hands, and so on. Our knowledge of coffee cups—and of the world at large—is built from these frames of reference, or our projections and expectations placed upon our experience. Again, the cortical columns all have pretty much the same structure; so the difference in the information processing between the smell of coffee and the feel of the coffee cup has more to do with cortical columns being plugged into your nose or your hands, rather than with the inner processing that takes place.
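To make the point vivid (this is my own toy sketch, not Numenta's actual algorithms, and the numbers are invented), here is the idea of identical circuits acquiring different jobs purely through their wiring:

```python
# Toy illustration: every "cortical column" runs the exact same circuit;
# columns differ in function only because they are plugged into different inputs.

def cortical_column(signal):
    """One generic circuit: summarize whatever input it happens to receive."""
    return sum(signal) / len(signal)

# The same circuit, wired to different 'senses' (made-up readings):
inputs = {
    "fingertip": [0.9, 0.8, 0.95],  # pressure while gripping a cup's handle
    "nose":      [0.1, 0.2, 0.15],  # faint coffee-odor intensities
}

# Each column reports its own patch of the world, despite identical internals.
model = {sense: cortical_column(signal) for sense, signal in inputs.items()}
print(model)
```

The difference between the "touch" column and the "smell" column lies entirely in the dictionary of inputs, which is the (very rough) moral of Hawkins' picture.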
(Just an expression of wonder here that 150,000 cortical columns is enough to do the trick—not just for coffee cups, but all of what we think we know and experience! Can you exhaustively reduce your entire Weltanschauung into 150,000 components? It seems like there should be more, though I’ve been making a list and it ends at 12, so maybe so.)
All these processors plugged into one another provide us with a set of maps or models of what is in the world, where we are in relation to it, what we are doing, and what further actions are available to us. It is highly decentralized: there is no “headquarters” where “it all comes together” (in Daniel Dennett’s endearing terms), but instead there are tens of thousands of processors, all biting off bits of the problem and chewing their way through it and sending their results for other processors to use. In the end, we do and say stuff, and the eventual narration of what we do and say becomes the basis for beliefs about who “we” are and what “we” think “we” are up to.
Another crucial part of Hawkins’ account is the significance of motion. The neocortex does not simply store a bunch of pictures and sounds like a library. It tracks changes and movements and patterns in motion. Our model of the coffee cup is not just a bunch of slides, but clips portraying how it should change as we move in relation to it, or as we move it around through space. Motion and change are what we model in the world, and it is also how we think about ideas and concepts:
“If all knowledge is stored this way, then what we commonly call thinking is actually moving through a space, through a reference frame. Your current thought, the thing that is in your head at any moment, is determined by the current location in the reference frame. As the location changes, the items stored at each location are recalled one at a time. Our thoughts are continually changing, but they are not random. What we think next depends on which direction we mentally move through a reference frame, in the same way that what we see next in a town depends on which direction we move from our current location.”
This is an exciting idea. We might know of memory palaces and the spatial hacks people use to remember long strings of information, but this appears to be evidence for the claim that our thinking is always movement through a landscape of ideas. Thinking is not a disembodied experience, but is something more like a virtual reality in which our projected motions take us from one concept to another, from one idea to a similar one, from one metaphor to its neighbors, and so on.
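The quoted idea can be caricatured in a few lines of code. This is my own simplification, with invented contents: a reference frame as a map from locations to stored items, and a "train of thought" as a path through it.

```python
# Toy sketch of thinking as movement through a reference frame:
# what you think next depends on which direction you mentally move.

reference_frame = {
    (0, 0): "coffee cup",
    (0, 1): "cup handle",
    (1, 1): "warm mug in winter",
    (1, 2): "Kierkegaard's overcoat",  # association, not geography
}

def think(start, moves):
    """Recall whatever is stored at each location we pass through."""
    x, y = start
    thoughts = [reference_frame.get((x, y))]
    for dx, dy in moves:
        x, y = x + dx, y + dy
        thoughts.append(reference_frame.get((x, y)))
    return thoughts

# One train of thought is just one path; a different path yields different thoughts.
print(think((0, 0), [(0, 1), (1, 0), (0, 1)]))
```

The same frame traversed in a different order produces a different sequence of recollections, which is the point of Hawkins' town analogy.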
A commendable feature of Hawkins’ book is that he provides many clear analogies and illustrations of how his claims are borne out in both ordinary experiences and experimental ones. I have long been a fan of Dennett’s “pandemonium” model of the mind, and Hawkins’ book seems to be spelling out the details of how the congress of demons is actually embodied in the neocortex. The trick in any explanation of consciousness or intelligence is to show how a bunch of simple systems that aren’t so conscious or intelligent can sum up into a system that is conscious or intelligent (at least on our good days). There will always be philosophers who cry that in principle no such explanation can be achieved, but these are the folks who will be increasingly ignored as the results come in. (True in Descartes’s physics, true in Hawkins’ neuroscience.)
I do think that at some point our understanding of ourselves will have to shift from individuals to communities. That is to say, many of my cortical columns are plugged not just into each other and parts of my body, but into other people. Not directly of course (ew). But through language, culture, social dynamics, etc: all that squishy stuff gets reference framed in my brain, and exerts strong pulls over who and what I say I am and what the world is supposed to be. Neither Hawkins nor Dennett goes far in this direction, but that’s okay, since we need both approaches, from individuals outward and from societies inward. But in the end, I think what we call “consciousness” and “intelligence” is going to be explained much more by communities and traditions than it is by cortical columns.
Today is July 4th, a day when Americans reflect on the value of freedom and the costs and sacrifices required for it. So it is an appropriate day to reflect on America’s deepest political aspirations.
Nah. Let’s talk about our brains. The neocortex is where all our fancy thinking takes place. The neocortex wraps around the core of our brain, and if you could carefully unwrap it and lay it flat it would be about the size of a dinner napkin, and about 3 millimeters thick. The neocortex consists of 150,000 cortical columns, which we might think of as separate processing units involving hundreds of thousands of neurons. According to research at Jeff Hawkins’ company Numenta (and as explained in his fascinating recent book, A Thousand Brains), these cortical columns are capable of modeling patches of our experience (he calls them “reference frames”) and setting our expectations of what we should experience next, at any given moment. [Note: the remarks that follow are inspired by Hawkins’ book, but shouldn’t be taken as a faithful representation of it.]
The neocortex’s complexity is considerable, but not infinite, and what’s most surprising is that such a relatively small network—you can fold it up and put it in your head—is capable of understanding calculus and making up jaunty little tunes and comparing Kierkegaard to Heidegger, not to mention doing all three in a single afternoon.
…But clearly we do end up with causal knowledge, as Hume himself never doubted, and we manage to navigate our ways through a steady world of enduring objects. We somehow end up with knowledge of an objective world. And we don’t remember that arriving at such knowledge was all that difficult. We just sort of grew into it, and now it seems so natural that it’s really hard to imagine not having it, and it’s even difficult not to find such knowledge perfectly obvious. But in fact it is anything but obvious …
The most important thing is this: what you experience, what you think, what you believe has no deep connection to what is real. Kant had this single truth exactly right: everything we think we know about the world is mostly a reflection of ourselves—psychologically, culturally, socially. As Leszek Kołakowski wrote, “In all the universe man cannot find a well so deep that, leaning over it, he does not discover at the bottom his own face.”
The explanation for this is straightforward. We only ever encounter the model of the world our minds have made, and each model bears the imprint of its maker, in ways so thoroughgoing, pervasive, and nuanced that we seldom recognize ourselves in it. Solipsism in this sense is inescapably true. We experience our own minds, for the most part.
But it’s also true that our models are disrupted by experience: we make mistakes, we are surprised, we get things wrong and we collide and break. So it would be wrong to say there isn’t a reality independent of us. But we cannot know it as it is in itself—that’s Kant’s point. All we can do is try to model it, with our sloppy cognitive engines, and over time we have become pretty good at it, if only within the narrow realm of our endeavors.
How do we come to know this fact, that we cannot know reality as it is in itself? Certainly not by looking at reality as it is in itself and comparing it to the cartoon that is in our heads. No, we know it from the inside. We make wrong predictions about the world, and sometimes come to see our predictions as wishful thinking. We observe what other people say and believe, and we see how closely it is tied to their own psychology. We study other societies, all of which plant themselves at the center of what’s important. And from all of these observations we formulate the general thesis that people paint themselves into their worlds, or more accurately: they paint the world with themselves, rather in the way John Malkovich sees everyone as John Malkovich when he enters into his own mind as a stranger in the film Being John Malkovich.
We know this to be true in dreams. In dreams every element is coming from within us—where else could it be coming from? But it is a short step from dreams to waking experience. In waking experience what we see are the judgments we arrive at, and those judgments are formed from sensations, yes, but also the same internal apparatus that gives shape to our dreams. Our minds are predictive engines, but the predictions we make gain their characters from our dream engines. Malkovich, in entering his own head, has supplied himself as input to the apparatus that makes predictions about his experience, and unsurprisingly he sees himself everywhere. Most of us who aren’t crippled by extreme narcissism don’t have this experience, thank god, but we still inject ourselves into our predictions, and thereby into our experience.
Some people think that knowledge is something in the head. I have a belief, and it has appropriate connections to other ideas and beliefs, all in my head. These connections ensure that my belief has good grounding or justification: I have reasons for my belief. And if this particular belief maps onto some structure in the world in the right way, in such a way that we might say my belief is “true”, then I have knowledge. But whatever is “out there” in the world is not my knowledge; it is only a fact or state of affairs that renders my belief true. It’s what is in my head that is my knowledge.
I’m not sure how many people find this way of looking at things congenial. It is a way of looking at knowledge that has been commonplace among Anglo-American philosophers through the 20th century, at least. But as I write it down, I realize how strange it is. In ordinary daily life, knowledge is not regarded as a private pile of chestnuts kept within a cranium. Knowledge is almost always in action, playing an explanatory role in what we do or say or write. On some occasions it is only when we do or say something that we ourselves realize we had some knowledge we didn’t know we had. On other occasions other people see what we are up to, and they say to themselves, “He doesn’t know what he’s doing.”
On ordinary occasions, knowledge is a set of capacities we have, capacities which can be witnessed by others as present. You watch me change a tire, and you see that I know what I’m doing. Sam takes a calculus test and gets a high score. Nelly recounts the history of the Beatles. Juniper fits a cast on a dog’s broken leg. All these people have knowledge, and it is out in the open for anyone to see. Of course there is something going on in their heads—heads are not just ornamental—but the knowledge consists in an agent’s performance, or their capacity to perform, as judged by some kind of audience or measure that indicates their success. It’s not merely “in the head” any more than running is merely “in the legs”.
But even this generous schmearing of knowledge across agents, judges, and environments is too restrictive. Some of our knowledge we possess in virtue of belonging to a group that has that knowledge, loosely speaking. For example, I know humans can colonize Mars. If you press me on this, asking how we would secure a source of water and get plants to grow and protect our living entourage from radiation and hurricane-force winds, my story would crumble pretty quickly. I don’t know those things. My knowledge is based on some loose facts: that we have sent probes to Mars, and humans to the Moon, and I haven’t heard anyone say we could not possibly colonize Mars. That’s just about all I have to offer, and it hardly qualifies me as an expert. But I also believe that if I spent a few weeks doing research or consulting with experts, I could put together a much more detailed account of how we could colonize Mars, perhaps with charts and figures and artist renderings. That is to say, I don’t have the knowledge on me right now, but I could get it for you if you give me a few weeks, because I live in a society where that knowledge is available.
We could play with words and say, for example, that I don’t KNOW we can colonize Mars; I only sort of “know” we can colonize Mars. But I think in most conversations I could make the claim that we can colonize Mars and get virtually no resistance from others, whereas if I assert that we can train monkeys to do brain surgery, I very likely will receive some polite resistance on the matter, even though my knowledge of training monkeys is not appreciably less than my knowledge of establishing a base on Mars. The difference between the two cases is explained by the fact that our society, as a whole, knows how to colonize Mars, and we don’t know how to train monkeys to perform brain surgery.
Somehow we gain a rough sense of the knowledge that is within our society’s reach, and what is not, though none of us are experts on most matters. Actually, this is no surprise: this is what it is to live in a society rich in knowledge, information, and access. We read and watch and listen and learn, and thus come to a fallible estimation of our collective capacities. One needn’t travel far down the road, geographically or historically, to come across other societies far poorer in their collective knowledge.
We might see if we can organize our knowledge into a series of gradations, extending from what I really know, right here, right now, to knowledge I have as a result of the society I am in. In between these extremes would be layers of things I know something about, or just a little about, or remember knowing at one time, or things about which I know some parts but am confused about others. It would be a complicated, multi-dimensional series, to be sure. But I want to try to push aside as many complications as possible, and focus on the things we know mostly because we are in a society that knows them. Let’s give this murky area of known things its own distinctive name: educated true opinions.
For example, my knowledge that we can colonize Mars is an educated true opinion. Your knowledge that vaccines reduce the risk of contracting certain diseases is an educated true opinion. Our knowledge that humans are causing global warming is an educated true opinion, and our knowledge that antisemitic conspiracy theories have no factual basis in reality is an educated true opinion. It’s a cumbersome name, but each element is important. I insist on the word “educated” because these opinions are grounded in a roughly accurate sense of what our society collectively knows, even if we as individuals are not able to supply extensive reasons immediately upon request. I insist on the word “true” because I do not wish to say that any of these true opinions are merely opinions that might be true or false: no, they are true. I insist on the word “opinion” because this knowledge concerns matters about which we are not experts.
As the examples I just offered might suggest, many of us end up arguing over educated true opinions. That is, many of us argue over which claims are educated true opinions, and which aren’t. For some of us this arguing is only an occasional pastime. For others it becomes a rage-filled, life-consuming passion. Why it ends up being so important to some people, and not to others, is an interesting question.
One might think any such argument could be settled quite easily by asking some experts to tell us what’s true and what isn’t. But whether we can trust the experts, or even identify them reliably, turns out to be a matter over which there is further disagreement. On many issues we don’t even agree on what it would take to establish a claim as an educated true opinion. Sometimes the arguments become so acrimonious that it can be asked whether we are living in the same society.
Liberalism has been so successful in promoting a wide range of different ideas that its own name has gotten pretty murky. Many people think it means supporting a welfare state, championing the voices of people usually pushed to the side, and generally showing sympathy for anyone or anything that can’t defend itself. Other people think it means being a stupid hippie crybaby. Still others lump liberalism together with belonging to a specific political party, and others argue it’s just another word for capitalism. But the classic meaning is that a liberal tries to establish a social order that gives people the freedom to live however they think best without getting in each other’s way. Fundamentally, it is the defense of pluralism, or the broad toleration of different visions of what’s good. It’s this sense of liberalism that I think we shouldn’t give up on just yet.
A recent blogpost by philosopher Liam Kofi Bright explains why he isn’t a liberal. (And a similarly forceful critique is offered by Christopher Horner here on 3QD.) Bright argues, first, that humans just can’t maintain a sharp distinction between what’s private and what’s public: our own visions of the good life inevitably will pollute our politics (and so pluralism is unstable). Second, and relatedly, he argues the very idea is incoherent, and a governing institution necessarily shutters some visions of a human life as no longer open for business. He also argues that liberalism historically has been the vision advanced by white plutocrats, and it carries their worldview in its DNA, particularly under the banners of private property and rapacious capitalism.
It is entirely possible that we cannot handle the ever rising tide of knowledge. Yes, I am going to presume that it is knowledge — that we are not barking up the wrong axis mundi, that we are not ten days away from the next Einstein who overturns everything, that this time next year we will not look back on today as back when we were mere children. You might ask how I can possibly make this presumption, and you are right to ask. Nevertheless…
We know a helluva lot. It’s really extraordinary if you stop to think about it. Why should the descendants of some savannah primates be able to figure out all this stuff about quarks, penicillin, double-entry bookkeeping, stock derivatives, the rise and fall of psychoanalysis, Bluetooth (well, right, work in progress), and microchip readers? Any ancient alien bookies would have placed the odds heavily against us. But here we are, trying to drink from a veritable firehose of veritas, swelling our heads most impressively.
Lots of things don’t exist. Bigfoot, a planet between Uranus and Neptune, yummy gravel, plays written by Immanuel Kant, the pile of hiking shoes stacked on your head — so many things, all of them not existing. Maybe there are more things that don’t exist than we have names for. After all, there are more real objects than we have names for. No one has named every individual squid, nor every rock on Mars, nor every dream you’ve ever had. The list of existing things consists mostly of nameless objects, it seems.
So there also must be a lot of nameless things that don’t exist. The collection of two marbles in my coffee mug — call it “Duo”. Duo doesn’t exist. Nor the collection of three marbles (“Trio”), nor the collection of four marbles, etc. Beyond Duo and Trio, there is an infinity of collections of marbles in my coffee mug that don’t exist, and the greatest portion of them, by a long shot, are nameless. Think of all the integers that don’t exist between 15 and 16. None of them have names. The world is full of them, or it would be, if they existed.
My guess is that there are more nameless things that don’t exist than there are nameless things that do exist. I have read that there is a finite number of particles that exist in the universe, and that’s probably going to limit the number of nameless existing things, somehow. But think of all the particles that don’t exist! There are far more of them, right?
We primates of the Homo sapiens variety are very clever when it comes to making maps and plotting courses over dodgy terrain, so it comes as no surprise that we are prone to think of possible actions over time as akin to different paths across a landscape. A choice that comes to me in time can be seen easily as the choice between one path or another, even when geography really has nothing to do with it. My decision to emit one string of words rather than another, or to slip into one attitude or another, or to roll my eyes or stare stolidly ahead, can all be described as taking the path on the right instead of the path on the left. And because we primates of the Homo sapiens variety are notably bad at forecasting the consequences of our decisions, the decision to choose one path and lose access to the other, forever, can be momentous and frightening. It’s often better to stay in bed.
Indeed, because every decision cuts the future in half, the space of possibilities is carved rapidly into strange and unexpected shapes, causing us to gaze at one another imploringly and ask, “How ever did such a state of things come to pass?” And the answer, you see, is that we and our compatriots made one decision, and then another, and then another, and before long we found ourselves in this fresh hot mess. And we truly need not ascribe “evil” intentions to anyone in the decision chain, as much as we would like to, since our own futuromyopia supplies all the explanation that is needed. We stumble along in the forever blurry present, bitching as we go, like an ill-tempered Mr. Magoo.
(Hegelian World Spirit as Mr. Magoo, the philosopher writes in his notebook.)
A man rides an empty suit. The suit tells others what to think of the man, though it would not fit him. The man does not control the suit, but merely takes a ride upon it, come what may.
In his twenties, Franz Kafka composed a long story, “Description of a Struggle”, which remains one of his most enigmatic works. It follows a dream-like logic from a party, to a stroll through Prague, to an encounter with “a monstrously fat man” being borne in a litter by four naked men, to a supplicant once known by the fat man who prayed by bashing his own head against the stone floor of a church, to a final scene on a mountaintop, where a stabbing takes place, though it does not seem to be very consequential. The end.
Max Brod thought it was a work of genius, though John Updike thought it was adolescent posturing. (¿Por qué no los dos?) Like all of Kafka’s works, it shows up on your doorstep like a locked desk that you are sure contains something you need, but the key is locked inside it; and when you finally bash the desk open, you find your own corpse with a toe tag reading “GUILTY OF BREAKING THE DESK”. Maybe some of the strange imagery Kafka himself could neither explain nor control, maybe some of it spoke of his own secrets, maybe all of it is an existential parable.
One thing is for sure: the story shatters in every way. We might expect a story with a beginning, middle, and end: nope. We might expect some clarity about just whose story it is: nope. We might expect facts to stay fixed, or people to inhabit their own bodies: nope. We might expect some thread of consistency, conversations that make even minimal sense, words of wisdom that do not culminate in irrelevant banalities. Nope, nope, nope. That the work is offered as a story, and even as a description, is an exaggeration. It’s something, all right, and we may try to read it as a story, but the damned thing will not cooperate. It keeps falling apart the more we try to hold it together, like a human life, come to think of it.
Over years of teaching philosophy, I have observed that people fall into two groups with regard to the Biggest Question. The Biggest Question is one that is so big it is hard to fit into words, but here goes: When everything that can be explained has been explained, when we know the truths of physics and brains and psychology and social interactions and so on and so forth, will there still be anything worth wondering about? I am assuming the “wonder” here is a philosophical wonder, not the sort of wonder over whatever happened to my old pocket knife or whatever. It’s the sort of wonder that has a “why-is-there-something-rather-than-nothing” flavor to it. It’s the sort of wonder that doesn’t go away no matter how much is explained.
Some people think that on that sunny day when everything that can be explained has been explained, well then, that will be that. We will understand why things have happened, and how we came to exist, and what we should do if we want to be healthy and happy, and why works of art move us as they do. It’s not that such people are in any way shallow or unimaginative or tone deaf. They are open to the most wonderful experiences of life, along with the most heart-wrenching and most tragic. It’s just that they think these experiences can be explained and understood in all their glory through that explanation. If there is anything “left over” — some stubborn bit of incredulous wonder we just can’t shake — then that too will be explained through some feature of human psychology, like the way those patterns still seem to swirl in a static optical illusion even when you know the trickery behind it. The feeling that there is a Mystery can itself be explained as an illusory sort of feeling, an accidental by-product of the cognitive engine we happen to think with.
“Social media have gutted institutions: journalism, education, and increasingly the halls of government too. When Marjorie Taylor Greene displays some dumb-as-hell anti-communist Scooby-Doo meme before congress, blown up on poster-board and held by some hapless staffer, and declares “This meme is very real”, she is channeling words far, far wiser than the mind that produced them. We’re all just sharing memes now, and those of us who hope to succeed out there in “reality”, in congress and classrooms and so on, momentarily removed from our screens and feeds, must learn how to keep the memes going even then. “Real-world” events, in other words, are staged by the victors in our society principally with an eye to the potential virality of their online uptake. And when virality is the desired outcome, clicks effected in support or in disgust are all the same.”
[…] Somehow, through our language, culture, and shared projects of both construction and destruction, we manage to invent a spirit-world of fictions and concepts that paper over whatever-it-is-that-really-is-there, and we think and act in that spirit-world. It is nearly impossible — or maybe it is necessarily impossible — to tear off the layers of interpretation and take a sneak peek at the In Itself. Instead, we form new spirit-worlds through which we can reference the previous ones, and through a kind of “semantic ascent” we find ourselves able to name everything many times over, connecting every alleged thing to every other alleged thing. When the layers upon layers of these spirit-worlds become sufficiently entangled, we come to believe we can speak intelligibly about all things, and we lose sight of the basic fact that it is all a bunch of very sophisticated nonsense we have ourselves summoned into intelligibility. Reification is the birth of (nearly) everything. […]