It might seem daunting to read philosophy. Giants of thinking with names like Hegel, Plato, Marx, Nietzsche and Kierkegaard loom over us with imperious glares, asking if we are sure we are worthy. We might worry that we won’t understand anything they are telling us; even if we do think we understand, we still might worry that we’ll get it wrong somehow.
So, if we’re going to read philosophy, we need to begin by knocking those giants down to size. Every one of them tripped and burped and doodled. Some of them were real jerks. Here’s Arthur Schopenhauer on his fellow German philosopher Georg Wilhelm Friedrich Hegel, for instance: ‘a flat-headed, insipid, nauseating, illiterate charlatan, who reached the pinnacle of audacity in scribbling together and dishing up the craziest mystifying nonsense.’ I’m not sure whether this paints Schopenhauer or Hegel as the bigger jerk.
The point is that each giant of philosophy was a human being trying to figure out life by doing just what you do: reading, thinking, observing, writing. Don’t let their big words intimidate you; we can insist that they make sense to us – or, at least, intrigue us – or else be left behind in the discount book bin. They must prove their worth to us.
I recently listened to a discussion on the topic of longtermism, or the moral view that we need to factor in the welfare of future generations far more seriously than we do, including generations far, far into the future. No one should deny that the people of the future deserve some of our consideration, but most people soften that consideration with fluffy pillows of uncertainty. We take ourselves to have a rough idea of what the next generation will face, but after that everything gets cloudy fast, and most of us aren’t sure what exactly we should do for those possible people in the clouds, so we start dropping them from our moral calculations.
But if you insist on considering them, and treating them as real (but real elsewhen), their numbers and their interests get big fast. How many people might exist in the whole future of the universe? Millions of billions, maybe, if we go full-on Star Trek. If they each deserve only one millionth of our concern, that still ends up being a whopping amount of concern. Look at things that way, and really just about all of our moral thinking should be focused on the future generations of the universe. The Iroquois who asserted that we should “have always in view not only the present but also the coming generations” were severely understating the magnitude of the task before us.
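The longtermist arithmetic above can be made concrete with a toy back-of-the-envelope calculation. All the numbers here are illustrative assumptions, not estimates: the essay's "millions of billions" is read as 10^15, and "one millionth of our concern" as a weight of 10^-6.

```python
# Toy version of the longtermist arithmetic sketched above.
# Every figure is an illustrative assumption, not a real estimate.

future_people = 1e15          # "millions of billions" of possible future people
concern_per_person = 1e-6     # one millionth of the concern owed a contemporary

# Total concern owed to the future, measured in "full present persons" worth:
total_future_concern = future_people * concern_per_person
print(total_future_concern)   # 1e9 -- a billion people's worth of full concern
```

Even at a millionth of the weight per person, the sheer number of possible future people adds up to a billion present-day people's worth of moral concern, which is the "whopping amount" the paragraph describes.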
There is a set of interesting discussions posted on Scott Aaronson’s blog among Aaronson, Steven Pinker, and others on whether recent text generators like GPT-3 indicate that artificial intelligence is upon us. The discussion is informed, sensible, and well-mannered: these guys all respect each other’s views, though they disagree, so it’s a model of genuine discourse. The basic disagreement, as one would expect, is between those more impressed by GPT-3 and those less impressed.
Aaronson’s dialogue continues with Pinker on this page, and the two seem to get stuck on the question of whether we could have not just AI, but genius AI, such as an AI that duplicates Einstein’s intelligence. They arrive at this point because after Pinker points out that we really don’t have a clear definition of what constitutes “intelligence”, Aaronson counters that if one devised a program that could give you Einstein-level insights, that would surely count as an intelligent program:
“Namely, it could say whatever Albert Einstein would say in a given situation, while thinking a thousand times faster. Feed the AI all the information about physics that the historical Einstein had in 1904, for example, and it would discover special relativity in a few hours, followed by general relativity a few days later. Give the AI a year, and it would think … well, whatever thoughts Einstein would’ve thought, if he’d had a millennium in peak mental condition to think them.”
Pinker replies to this idea with skepticism toward treating intelligence or super-intelligence as a kind of magical substance that inhabits special brains: “I still think that discussions of AI and its role in human affairs—including AI safety—will be muddled as long as the writers treat intelligence as an undefined superpower rather than a mechanism with a makeup that determines what it can and can’t do….[I]f intelligence is a collection of mechanisms rather than a quantity that Einstein was blessed with a lot of, it’s not clear that just speeding him up would capture what anyone would call superintelligence.” So Pinker would rather see a detailed account of how an AI is recognizing problems and thinking about them, rather than just wowing us with stellar performance.
Pinker has an interesting point, I think: it is easy to believe we will have programs that can maintain conversations and solve hard problems, but harder to believe that those programs will be doing it the way humans do it. The programs will require vastly more information than the human brain runs on, and so won’t fully answer the question of how human beings think.
It’s a good and interesting discussion, as I said, but there’s a further element I think they are missing, and that’s the social dimension of intelligence (and of “genius”, even more so). I am not at all sure Einstein would have been a genius if he had been born in 1779 or 1979 instead of 1879. As it happened, he was in the right place at the right time to make an important contribution in a certain problem space. We shouldn’t assume that he would have the same level of success if dropped into other problem spaces. Same goes for the others on the typical lists of geniuses: Plato, Da Vinci, Shakespeare, Newton, etc. A lot of lucky circumstances need to come together before an individual set of abilities can plug itself in and solve a problem in an impressive way. Drop a genius into another time and they may cease to be a genius entirely.
(I am reminded of Bill and Ted’s Excellent Adventure, in which Beethoven, brought to the present day, quickly masters disco. I don’t quite remember what the other historical figures master—Genghis Khan skateboards, and Joan of Arc jazzercises?!—but the trajectory of this idea would have Newton quickly mastering the internet, Lincoln bringing peace to the Middle East, etc. It’s a vivid expression of the common belief that genius is a superpower.)
A similar point might be made for sub-genius level intelligence, with which most of us are familiar. What matters is not what a smart device or a person can do, or what puzzles it can solve, but the ways in which it can be incorporated into the rest of society. So long as we stick with Turing-test-style contests to see if we have a genuine AI or not, we will always be in an argument like the one among Aaronson and his friends: enthusiasts and skeptics, progressives and cynics, arguing over whether “genuine intelligence” is a necessary condition for passing the test. By contrast, if we find one day that we already have incorporated an entity into our conversations and in our lives, and that in these roles we cannot help but regard the entity as a separate, intelligent person, then the debate is practically over. It’s intelligent, because we can’t help but treat it as intelligent.
So, for example, right now I don’t think anyone regards Siri or Alexa as intelligent. They’re still awkward to deal with. But if someday they are as easy to converse with as a genuine personal assistant, so that we have to think about our interactions with them in the same way we have to think about our interactions with human others, then we will have genuine AIs: not merely because of their programming (though that is not irrelevant, of course), but because they fit into our society in the right sort of way to be regarded as intelligent beings.
We have done this with children over the last 60 years or so, and with animals over the last 20 years. It’s only been in the 20th century that we started inviting children into the adult world, where their feelings and ideas and abilities were taken as equal to an adult’s, or at least on many occasions we pretended they were. A kid in 1887 was not awarded nearly the same degree of intellectual authority and respect as a kid in 1987, let alone 2017. Same with our pets: though we do not treat them like fully adult humans—well, it’s not exactly common practice, anyway—it’s still a lot better to be a dog in 2020 than a dog in 1920. For some reason, we decided to include children and pets in the charmed circle of Beings Whose Intelligence Matters. Who and what they are as persons depends partly on their own abilities and capacities, of course, but also very much on the way we treat them. Talk of “rights” reflects our attitudes on these matters. To establish a being’s rights is to decide, as a matter of policy, how they are to be regarded, and that decision is informed in part by how intelligent we regard them as being.
I do think that, in general computer scientists and psychologists and philosophers are reluctant to get into the messy business of the social dimensions of all this stuff, because they prefer to work in more clearly defined domains: the confines of the skull, for instance, or the structure of an algorithm, or interactions among a small number of participants. Once we open the lid on the roles of culture, tradition, and even economics and politics on these questions, then the worms start wiggling out at an unmanageable rate. So it’s an understandable oversimplification. But that doesn’t mean that the social dimension can be ignored.
Justin E. H. Smith’s recent book, The Internet Is Not What You Think It Is: A History, A Philosophy, A Warning (Princeton UP 2022) has received plenty of notice here on 3 Quarks Daily, and for good reason. Smith’s books and essays always remind us that, no matter how bizarre and ironic some recent damn thing is, we are always part of a long anthropological history of bizarre irony, and indeed the harder you look the more bizarre and ironic it all gets. At least, I think that is one of the main vibes of 3QD: seeing where we are in some map of the strange natural/cultural universe.
There are plenty of books complaining about the evils of the internet, and to be sure Smith offers four complaints: it is addictive, it warps human lives through algorithms, it is ruled by corporate interests, and it serves as a universal surveillance device. There is enough evidence, both objective and anecdotal, that too much time on the internet turns one’s brain into a Twitter slushie that leaves one in no condition to meditate upon difficult problems, but instead only to scroll and click and scroll and click at digital gewgaws, feeling empty and alone, a terrible feeling one then tries to escape with more scrolling and clicking.
[Reading Jeff Hawkins, A Thousand Brains, Basic Books, 2021]
René Descartes had an uneasy relationship with academics. He was very well educated, but he never held any academic positions and spent much of his life arguing with professors and theologians. He saw himself as a scientist, like Galileo, disclosing the secrets of the universe, and it just so happened that in his view those secrets were not best expressed in Aristotelian or scholastic terms. He seriously considered providing a detailed, point-by-point commentary and criticism on an academic treatise of the day (Francisco Suárez’s Metaphysical Disputations), but then apparently decided his time would be better spent just forging ahead and making his own path, trampling over carefully articulated distinctions drawn by pointy-headed academics.
I get something of the same vibe from Jeff Hawkins’ book, A Thousand Brains. Hawkins was on his way to becoming an academic brain scientist, but at the time the academics were busy asking narrower questions he didn’t find as interesting, so he went his own way, made millions of dollars, and started his own research institute so he could study what he wanted. Unlike Descartes, I think his team is not fundamentally changing the paradigm for neuroscience, and their work is being published in respected journals; but like Descartes, he is marching ahead at a pace that academics might consider brazen. Not my field, so I don’t know. I can say that his knowledge of philosophy is shallow, but that doesn’t seem to hamper his research at all.
I will also add that while I am fascinated by the first half of his book, the second half is not so great. Hawkins offers his own view of humanity’s future, and he just doesn’t have much that’s interesting to say. So my advice is to skip that and just read through chapter 9.
Okay, on to the book. The crucial players in Hawkins’ story are the neocortex, cortical columns, and frames of reference. The neocortex is a thin sheet of networked neurons, about the size of a dinner napkin and about 3 mm thick. It wraps around the rest of your brain in crinkled fashion. It’s the part of the brain responsible for all of our fancy, distinctly human sorts of thought. If you zoom in on it, you will find about 150,000 cortical columns that span the thickness of the napkin. Those cortical columns have similar structures. So the neocortex seems to be made of 150,000 similar circuits, like just so many Raspberry Pis, though each one involves something like 100,000 neurons. They end up performing different functions because, basically, they are plugged into different parts of our bodies and different sets of other columns.
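The two figures Hawkins gives can be multiplied out as a quick sanity check. The column and neuron counts are from the text; the comparison figure of roughly 16 billion neocortical neurons is a commonly cited published estimate, not something from Hawkins' book.

```python
# Sanity-checking the numbers quoted above from A Thousand Brains.
# The ~16 billion comparison is a rough published estimate, added here.

columns = 150_000             # cortical columns in the neocortex
neurons_per_column = 100_000  # "something like 100,000 neurons" each

total_neurons = columns * neurons_per_column
print(f"{total_neurons:,}")   # 15,000,000,000
```

Fifteen billion neurons is in the right ballpark for the roughly 16 billion usually attributed to the human neocortex, so the two figures hang together.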
Cortical columns combine together to provide what Hawkins calls frames of reference. A frame of reference can be thought of as an algorithm that shapes and structures the information it receives, or works to sort out or combine information, or shapes and structures some activity or behavior on our part. Hawkins’ favorite example is our knowledge of a coffee cup. We know how to recognize one by sight; we know how it should feel in our hands, and what each finger should sense as we grip the handle or trace the edge of the rim; we know how much it should weigh in our hands, and so on. Our knowledge of coffee cups—and of the world at large—is built from these frames of reference, or our projections and expectations placed upon our experience. Again, the cortical columns all have pretty much the same structure; so the difference in the information processing between the smell of coffee and the feel of the coffee cup has more to do with cortical columns being plugged into your nose or your hands, rather than to do with the inner processing that takes place.
(Just an expression of wonder here that 150,000 cortical columns is enough to do the trick—not just for coffee cups, but all of what we think we know and experience! Can you exhaustively reduce your entire Weltanschauung into 150,000 components? It seems like there should be more, though I’ve been making a list and it ends at 12, so maybe so.)
All these processors plugged into one another provide us with a set of maps or models of what is in the world, where we are in relation to it, what we are doing, and what further actions are available to us. It is highly decentralized: there is no “headquarters” where “it all comes together” (in Daniel Dennett’s endearing terms), but instead there are tens of thousands of processors, all biting off bits of the problem and chewing their way through it and sending their results for other processors to use. In the end, we do and say stuff, and the eventual narration of what we do and say becomes the basis for beliefs about who “we” are and what “we” think “we” are up to.
Another crucial part of Hawkins’ account is the significance of motion. The neocortex does not simply store a bunch of pictures and sounds like a library. It tracks changes and movements and patterns in motion. Our model of the coffee cup is not just a bunch of slides, but clips portraying how it should change as we move in relation to it, or as we move it around through space. Motion and change are what we model in the world, and it is also how we think about ideas and concepts:
“If all knowledge is stored this way, then what we commonly call thinking is actually moving through a space, through a reference frame. Your current thought, the thing that is in your head at any moment, is determined by the current location in the reference frame. As the location changes, the items stored at each location are recalled one at a time. Our thoughts are continually changing, but they are not random. What we think next depends on which direction we mentally move through a reference frame, in the same way that what we see next in a town depends on which direction we move from our current location.”
This is an exciting idea. We might know of memory palaces and the spatial hacks people use to remember long strings of information, but this appears to be evidence for the claim that our thinking is always movement through a landscape of ideas. Thinking is not a disembodied experience, but is something more like a virtual reality in which our projected motions take us from one concept to another, from one idea to a similar one, from one metaphor to its neighbors, and so on.
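The "thinking as movement" idea can be sketched in a deliberately toy model. This is not Hawkins' actual model, just the shape of the claim in the quoted passage: concepts sit at locations in a frame, and a train of thought is a path through neighboring locations, so different directions of mental movement from the same starting thought yield different trains of thought. The coordinates and stored concepts below are invented for illustration.

```python
# A toy illustration of "thinking as moving through a reference frame".
# Not Hawkins' model -- just the shape of the idea. Concepts sit at
# locations in a 2-D frame; a train of thought is a walk through them.

frame = {
    (0, 0): "coffee cup",
    (0, 1): "handle",
    (1, 0): "coffee",
    (1, 1): "morning routine",
}

def think(start, moves):
    """Recall whatever is stored at each location along a path of moves."""
    x, y = start
    thoughts = [frame[(x, y)]]
    for dx, dy in moves:
        x, y = x + dx, y + dy
        thoughts.append(frame.get((x, y), "(nothing comes to mind)"))
    return thoughts

# Two directions of "mental movement" from the same starting thought
# produce two different trains of thought, as in the quoted passage.
print(think((0, 0), [(0, 1), (1, 0)]))  # ['coffee cup', 'handle', 'morning routine']
print(think((0, 0), [(1, 0), (0, 1)]))  # ['coffee cup', 'coffee', 'morning routine']
```

What one thinks next depends on which direction one moves through the frame, which is exactly the analogy of walking different routes through a town.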
A commendable feature of Hawkins’ book is that he provides many clear analogies and illustrations of how his claims are borne out in both ordinary experiences and experimental ones. I have long been a fan of Dennett’s “pandemonium” model of the mind, and Hawkins’ book seems to be spelling out the details of how the congress of demons is actually embodied in the neocortex. The trick in any explanation of consciousness or intelligence is to show how a bunch of simple systems that aren’t so conscious or intelligent can sum up into a system that is conscious or intelligent (at least on our good days). There will always be philosophers who cry that in principle no such explanation can be achieved, but these are the folks who will be increasingly ignored as the results come in. (True in Descartes’s physics, true in Hawkins’ neuroscience.)
I do think that at some point our understanding of ourselves will have to shift from individuals to communities. That is to say, many of my cortical columns are plugged into not just each other and parts of my body, but other people. Not directly of course (ew). But through language, culture, social dynamics, etc: all that squishy stuff gets reference-framed in my brain, and exerts strong pulls over who and what I say I am and what the world is supposed to be. Neither Hawkins nor Dennett goes far in this direction, but that’s okay, since we need both approaches, from individuals outward and from societies inward. But in the end, I think what we call “consciousness” and “intelligence” is going to be explained much more by communities and traditions than by cortical columns.
Today is July 4th, a day when Americans reflect on the value of freedom and the costs and sacrifices required for it. So it is an appropriate day to reflect on America’s deepest political aspirations.
Nah. Let’s talk about our brains. The neocortex is where all our fancy thinking takes place. The neocortex wraps around the core of our brain, and if you could carefully unwrap it and lay it flat it would be about the size of a dinner napkin, and about 3 millimeters thick. The neocortex consists of 150,000 cortical columns, which we might think of as separate processing units involving hundreds of thousands of neurons. According to research at Jeff Hawkins’ company Numenta (and as explained in his fascinating recent book, A Thousand Brains), these cortical columns are capable of modeling patches of our experience (he calls them “reference frames”) and setting our expectations of what we should experience next, at any given moment. [Note: the remarks that follow are inspired by Hawkins’ book, but shouldn’t be taken as a faithful representation of it.]
The neocortex’s complexity is considerable, but not infinite, and what’s most surprising is that such a relatively small network—you can fold it up and put it in your head—is capable of understanding calculus and making up jaunty little tunes and comparing Kierkegaard to Heidegger, not to mention doing all three in a single afternoon.
…But clearly we do end up with causal knowledge, as Hume himself never doubted, and we manage to navigate our ways through a steady world of enduring objects. We somehow end up with knowledge of an objective world. And we don’t remember that arriving at such knowledge was all that difficult. We just sort of grew into it, and now it seems so natural that it’s really hard to imagine not having it, and it’s even difficult not to find such knowledge perfectly obvious. But in fact it is anything but obvious …
The most important thing is this: what you experience, what you think, what you believe has no deep connection to what is real. Kant had this single truth exactly right: everything we think we know about the world is mostly a reflection of ourselves—psychologically, culturally, socially. As Leszek Kołakowski wrote, “In all the universe man cannot find a well so deep that, leaning over it, he does not discover at the bottom his own face.”
The explanation for this is straightforward. We only ever encounter the model of the world our minds have made, and each model bears the imprint of its maker, in ways so thoroughgoing, pervasive, and nuanced that we seldom see ourselves in it. Solipsism in this sense is inescapably true. We experience our own minds, for the most part.
But it’s also true that our models are disrupted by experience: we make mistakes, we are surprised, we get things wrong and we collide and break. So it would be wrong to say there isn’t a reality independent of us. But we cannot know it as it is in itself—that’s Kant’s point. All we can do is try to model it, with our sloppy cognitive engines, and over time we have become pretty good at it, if only within the narrow realm of our endeavors.
How do we come to know this fact, that we cannot know reality as it is in itself? Certainly not by looking at reality as it is in itself and comparing it to the cartoon that is in our heads. No, we know it from the inside. We make wrong predictions about the world, and sometimes come to see our predictions as wishful thinking. We observe what other people say and believe, and we see how closely it is tied to their own psychology. We study other societies, all of which plant themselves at the center of what’s important. And from all of these observations we formulate the general thesis that people paint themselves into their worlds, or more accurately: they paint the world with themselves, rather in the way John Malkovich sees everyone as John Malkovich when he enters into his own mind as a stranger in the film Being John Malkovich.
We know this to be true in dreams. In dreams every element is coming from within us—where else could it be coming from? But it is a short step from dreams to waking experience. In waking experience what we see are the judgments we arrive at, and those judgments are formed from sensations, yes, but also the same internal apparatus that gives shape to our dreams. Our minds are predictive engines, but the predictions we make gain their characters from our dream engines. Malkovich, in entering his own head, has supplied himself as input to the apparatus that makes predictions about his experience, and unsurprisingly he sees himself everywhere. Most of us who aren’t crippled by extreme narcissism don’t have this experience, thank god, but we still inject ourselves into our predictions, and thereby into our experience.
Some people think that knowledge is something in the head. I have a belief, and it has appropriate connections to other ideas and beliefs, all in my head. These connections ensure that my belief has good grounding or justification: I have reasons for my belief. And if this particular belief maps onto some structure in the world in the right way, in such a way that we might say my belief is “true”, then I have knowledge. But whatever is “out there” in the world is not my knowledge; it is only a fact or state of affairs that renders my belief true. It’s what is in my head that is my knowledge.
I’m not sure how many people find this way of looking at things congenial. It is a way of looking at knowledge that has been commonplace among Anglo-American philosophers through the 20th century, at least. But as I write it down, I realize how strange it is. In ordinary daily life, knowledge is not regarded as a private pile of chestnuts kept within a cranium. Knowledge is almost always in action, playing an explanatory role in what we do or say or write. On some occasions it is only when we do or say something that we ourselves realize we had some knowledge we didn’t know we had. On other occasions other people see what we are up to, and they say to themselves, “He doesn’t know what he’s doing.”
On ordinary occasions, knowledge is a set of capacities we have, capacities which can be witnessed by others as present. You watch me change a tire, and you see that I know what I’m doing. Sam takes a calculus test and gets a high score. Nelly recounts the history of the Beatles. Juniper fits a cast on a dog’s broken leg. All these people have knowledge, and it is out in the open for anyone to see. Of course there is something going on in their heads—heads are not just ornamental—but the knowledge consists in an agent’s performance, or their capacity to perform, as judged by some kind of audience or measure that indicates their success. It’s not merely “in the head” any more than running is merely “in the legs”.
But even this generous schmearing of knowledge across agents, judges, and environments is too restrictive. Some of our knowledge we possess in virtue of belonging to a group that has that knowledge, loosely speaking. For example, I know humans can colonize Mars. If you press me on this, asking how we would secure a source of water and get plants to grow and protect our living entourage from radiation and hurricane-force winds, my story would crumble pretty quickly. I don’t know those things. My knowledge is based on some loose facts: that we have sent probes to Mars, and humans to the Moon, and I haven’t heard anyone say we could not possibly colonize Mars. That’s just about all I have to offer, and it hardly qualifies me as an expert. But I also believe that if I spent a few weeks doing research or consulting with experts, I could put together a much more detailed account of how we could colonize Mars, perhaps with charts and figures and artist renderings. That is to say, I don’t have the knowledge on me right now, but I could get it for you if you give me a few weeks, because I live in a society where that knowledge is available.
We could play with words and say, for example, that I don’t KNOW we can colonize Mars; I only sort of “know” we can colonize Mars. But I think in most conversations I could make the claim that we can colonize Mars and get virtually no resistance from others, whereas if I assert that we can train monkeys to do brain surgery, I very likely will receive some polite resistance on the matter, even though my knowledge of training monkeys is not appreciably less than my knowledge of establishing a base on Mars. The difference between the two cases is explained by the fact that our society, as a whole, knows how to colonize Mars, and we don’t know how to train monkeys to perform brain surgery.
Somehow we gain a rough sense of the knowledge that is within our society’s reach, and what is not, though none of us are experts on most matters. Actually, this is no surprise: this is what it is to live in a society rich in knowledge, information, and access. We read and watch and listen and learn, and thus come to a fallible estimation of our collective capacities. One needn’t travel far down the road, geographically or historically, to come across other societies far poorer in their collective knowledge.
We might see if we can organize our knowledge into a series of gradations, extending from what I really know, right here, right now, to knowledge I have as a result of the society I am in. In between these extremes would be layers of things I know something about, or just a little about, or remember knowing at one time, or things about which I know some parts but am confused about others. It would be a complicated, multi-dimensional series, to be sure. But I want to try to push aside as many complications as possible, and focus on the things we know mostly because we are in a society that knows them. Let’s give this murky area of known things its own distinctive name: educated true opinions.
For example, my knowledge that we can colonize Mars is an educated true opinion. Your knowledge that vaccines reduce the risk of contracting certain diseases is an educated true opinion. Our knowledge that humans are causing global warming is an educated true opinion, and our knowledge that antisemitic conspiracy theories have no factual basis in reality is an educated true opinion. It’s a cumbersome name, but each element is important. I insist on the word “educated” because these opinions are grounded in a roughly accurate sense of what our society collectively knows, even if we as individuals are not able to supply extensive reasons immediately upon request. I insist on the word “true” because I do not wish to say that any of these true opinions are merely opinions that might be true or false: no, they are true. I insist on the word “opinion” because this knowledge concerns matters about which we are not experts.
As the examples I just offered might suggest, many of us end up arguing over educated true opinions. That is, many of us argue over which claims are educated true opinions, and which aren’t. For some of us this arguing is only an occasional pastime. For others it becomes a rage-filled, life-consuming passion. Why it ends up being so important to some people, and not to others, is an interesting question.
One might think any such argument could be settled quite easily by asking some experts to tell us what’s true and what isn’t. But whether we can trust the experts, or even identify them reliably, turns out to be a matter over which there is further disagreement. On many issues we don’t even agree on what it would take to establish a claim as an educated true opinion. Sometimes the arguments become so acrimonious that it can be asked whether we are living in the same society.
Liberalism has been so successful in promoting a wide range of different ideas that its own name has gotten pretty murky. Many people think it means supporting a welfare state, championing the voices of people usually pushed to the side, and generally showing sympathy for anyone or anything that can’t defend itself. Other people think it means being a stupid hippie crybaby. Still others lump liberalism together with belonging to a specific political party, and others argue it’s just another word for capitalism. But the classic meaning is that a liberal tries to establish a social order that gives people the freedom to live however they think best without getting in each other’s way. Fundamentally, it is the defense of pluralism, or the broad toleration of different visions of what’s good. It’s this sense of liberalism that I think we shouldn’t give up on just yet.
A recent blogpost by philosopher Liam Kofi Bright explains why he isn’t a liberal. (And a similarly forceful critique is offered by Christopher Horner here on 3QD.) Bright argues that humans just can’t maintain a sharp distinction between what’s private and what’s public: our own visions of the good life inevitably will pollute our politics (and so pluralism is unstable). Second, and relatedly, he argues the very idea is incoherent, and a governing institution necessarily shutters some visions of a human life as no longer open for business. He also argues that liberalism historically has been the vision advanced by white plutocrats, and it carries their worldview in its DNA, particularly under the banners of private property and rapacious capitalism.
It is entirely possible that we cannot handle the ever-rising tide of knowledge. Yes, I am going to presume that it is knowledge — that we are not barking up the wrong axis mundi, that we are not ten days away from the next Einstein who overturns everything, that this time next year we will not look back on today as back when we were mere children. You might ask how I can possibly make this presumption, and you are right to ask. Nevertheless…
We know a helluva lot. It’s really extraordinary if you stop to think about it. Why should the descendants of some savannah primates be able to figure out all this stuff about quarks, penicillin, double-entry bookkeeping, stock derivatives, the rise and fall of psychoanalysis, Bluetooth (well, right, work in progress), and microchip readers? Any ancient alien bookies would have placed the odds heavily against us. But here we are, trying to drink from a veritable firehose of veritas, swelling our heads most impressively.
Lots of things don’t exist. Bigfoot, a planet between Uranus and Neptune, yummy gravel, plays written by Immanuel Kant, the pile of hiking shoes stacked on your head — so many things, all of them not existing. Maybe there are more things that don’t exist than we have names for. After all, there are more real objects than we have names for. No one has named every individual squid, nor every rock on Mars, nor every dream you’ve ever had. The list of existing things consists mostly of nameless objects, it seems.
So there also must be a lot of nameless things that don’t exist. The collection of two marbles in my coffee mug — call it “Duo”. Duo doesn’t exist. Nor the collection of three marbles (“Trio”), nor the collection of four marbles, etc. Beyond Duo and Trio, there is an infinity of collections of marbles in my coffee mug that don’t exist, and the greatest portion of them, by a long shot, are nameless. Think of all the integers that don’t exist between 15 and 16. None of them have names. The world is full of them, or it would be, if they existed.
My guess is that there are more nameless things that don’t exist than there are nameless things that do exist. I have read that there is a finite number of particles that exist in the universe, and that’s probably going to limit the number of nameless existing things, somehow. But think of all the particles that don’t exist! There are far more of them, right?
We primates of the Homo sapiens variety are very clever when it comes to making maps and plotting courses over dodgy terrain, so it comes as no surprise that we are prone to think of possible actions over time as akin to different paths across a landscape. A choice that comes to me in time can easily be seen as a choice between one path and another, even when geography really has nothing to do with it. My decision to emit one string of words rather than another, or to slip into one attitude or another, or to roll my eyes or stare stolidly ahead, can all be described as taking the path on the right instead of the path on the left. And because we primates of the Homo sapiens variety are notably bad at forecasting the consequences of our decisions, the decision to choose one path and lose access to the other, forever, can be momentous and frightening. It’s often better to stay in bed.
Indeed, because every decision cuts the future in half, the space of possibilities is carved rapidly into strange and unexpected shapes, causing us to gaze at one another imploringly and ask, “How ever did such a state of things come to pass?” And the answer, you see, is that we and our compatriots made one decision, and then another, and then another, and before long we found ourselves in this fresh hot mess. And we truly need not ascribe “evil” intentions to anyone in the decision chain, as much as we would like to, since our own futuromyopia supplies all the explanation that is needed. We stumble along in the forever blurry present, bitching as we go, like an ill-tempered Mr. Magoo.
(Hegelian World Spirit as Mr. Magoo, the philosopher writes in his notebook.)
A man rides an empty suit. The suit tells others what to think of the man, though it would not fit him. The man does not control the suit, but merely takes a ride upon it, come what may.
In his twenties, Franz Kafka composed a long story, “Description of a Struggle”, which remains one of his most enigmatic works. It follows a dream-like logic from a party, to a stroll through Prague, to an encounter with “a monstrously fat man” being borne in a litter by four naked men, to a supplicant once known by the fat man who prayed by bashing his own head against the stone floor of a church, to a final scene on a mountaintop, where a stabbing takes place, though it does not seem to be very consequential. The end.
Max Brod thought it was a work of genius, though John Updike thought it was adolescent posturing. (¿Por qué no los dos?) Like all of Kafka’s works, it shows up on your doorstep like a locked desk that you are sure contains something you need, but the key is locked inside it; and when you finally bash the desk open, you find your own corpse with a toe tag reading “GUILTY OF BREAKING THE DESK”. Maybe Kafka himself could neither explain nor control some of the strange imagery; maybe some of it spoke of his own secrets; maybe all of it is an existential parable.
One thing is for sure: the story shatters in every way. We might expect a story with a beginning, middle, and end: nope. We might expect some clarity about just whose story it is: nope. We might expect facts to stay fixed, or people to inhabit their own bodies: nope. We might expect some thread of consistency, conversations that make even minimal sense, words of wisdom that do not culminate in irrelevant banalities. Nope, nope, nope. That the work is offered as a story, and even as a description, is an exaggeration. It’s something, all right, and we may try to read it as a story, but the damned thing will not cooperate. It keeps falling apart the more we try to hold it together, like a human life, come to think of it.