“Monotonizing existence, so that it won’t be monotonous. Making daily life anodyne, so that the littlest thing will amuse.” —Bernardo Soares (Fernando Pessoa), The Book of Disquiet, translated by Richard Zenith, section 171
Senhor Soares goes on to explain that in his job as assistant bookkeeper in the city of Lisbon, when he finds himself “between two ledger entries,” he has visions of escaping, visiting the grand promenades of impossible parks, meeting resplendent kings, and traveling over non-existent landscapes. He doesn’t mind his monotonous job, so long as he has the occasional moment to indulge in his daydreams. And the value for him in these daydreams is that they are not real. If they were real, they would not belong to him. They would belong to others as public resources, and not reside in his own private realm. And what is more, if they were real, then what would he have left to dream? Far better, he thinks, “to have Vasques my boss than the kings of my dreams.” It’s more than that he doesn’t mind his monotonous job. On the contrary: the more monotonous his existence, the better his dreams.
I have taught “Epistemology” for many years, but it has always been for me a difficult course to plan. I want to cover traditional philosophical questions about skepticism, justification, induction, and belief in the external world. But then I also want to cover topics arising from the social conditions of knowledge: how cultural ideologies and prejudices color what we perceive and what we think we know, and “the crooked timber of humanity” and all that. And then I also want to explore human psychology and our natural inclinations toward fallacious thinking, as well as how conspiracy theories arise, and the fresh challenges the internet brings to epistemology.
So finally, inevitably, I wrote my own textbook, and since textbooks are usually outrageously overpriced, I wanted to make mine an open resource ( = free!). I was lucky enough to gain tremendous, enthusiastic support from my university’s Open Educational Resource staff. And so here it is, for anyone interested!
At least on the surface, there seems to be something incongruous in regarding some artifact (like a watch) as clearly implying some kind of intelligent, crafty mind, but then regarding that intelligent, crafty mind as not implying any sort of further creator, but coming about through natural causes. A watchmaker is if anything more impressive and more in need of some explanation than the watch. So by what principle do we say that the watch requires design and artifice, but the watchmaker does not?
It’s a tricky question, because it is difficult to sort out “order that requires design” from “order that does not require design”. If we point out that all of an entity’s parts work together, that its behavior is regular or uniform, and that it is clearly not a simple heap or aggregate of parts, it is not clear whether we are talking about a watch or a giraffe. Natural objects and artificial objects all exhibit order; but which kind of order implies design, and which does not?
To sharpen the question a bit further, imagine the following case. Suppose we land on Mars and we find remarkable entities composed of stone. These entities can move themselves over the landscape, and they have parts. We observe that they are able to seek out specific kinds of rocks and assemble them into copies of themselves. Sometimes they break or wear down, but enough of them are able to make copies of themselves before this happens that there seems to be a steady population of them on the planet.
Imagine further that, in response to this discovery, the scientists on earth form two camps. One camp proclaims, “We have found life on another planet! The surprise is that living beings can be made of rock.” The second camp proclaims, “These robotic artifacts are evidence for intelligent robot engineers, who either once lived on Mars, or at least were able to send some clever rock robots to Mars.” One might initially say these are just two equivalent descriptions of the entities, but that isn’t so, because the two theories are very different in what they imply about the past: one theory says the robots arose naturally, without any designers, and the other theory says there must have been designers. So they are saying different things.
How can we tell which theory is right? Suppose we examine the entities more closely, and learn that they are powered by some chemical process involving the rocks and the Martian atmosphere, and that each one has the parts it has because their “parent” made them that way. They don’t “grow” or change on their own, except through erosion and minor collisions. They don’t heal, and, being rocks, they don’t have DNA. They don’t communicate in any way we can see, and each one of them, when put into a similar situation, “robotically” will do a similar thing. So far, no clear reason to regard them decisively as living beings or decisively as artificial robots.
In this imagined case, there is nothing inherent in the rock entities that tells us whether they are natural or designed. But there should be some fact of the matter, because these rock things must have come from somewhere, after all.
This leads us to another possible way of answering the question. Could these rock entities have evolved through natural selection? Answering this question requires that we try to construct an explanation for them that appeals to some sort of random generating process, some sort of environment that selects for some entities over others, and enough time for these factors to result in the rock entities in question. The explanation we construct will have to be consistent with Mars’ history, so far as we know it, and should not rely upon too many improbable, lucky accidents (each one counts against the explanation).
If it turns out that we can construct such an explanation, then it is possible that these rock entities have come about naturally, and if we have no real evidence for Martian or other-worldly engineers, then that’s the more likely explanation. If it turns out there is no way to construct such an explanation, then we will judge the design hypothesis as more probable (leaving some room for the possibility that future theorists will know more than we do, and come up with an evolutionary explanation). So there is a way to answer the question, and it comes down to what kinds of explanations we are able or not able to construct.
This offers a surprising lesson about the role of Darwinian evolution by natural selection in the distinctions we make between what is natural and what is designed. Indeed, the story suggests that “natural” just means “can be explained through evolution”–and old Charlie Darwin may not have realized that, in his work, he was in fact providing a criterion for nature itself! It also helps us to understand why “the design argument” held such powerful sway over intellectuals before Darwin: there was no clear criterion for when we should regard an ordered thing as natural, and so there was really nothing to keep anyone from seeing any ordered thing as designed.
One of the cleanest and most compelling arguments against physicalism in the philosophy of mind is the “Knowledge Argument”. (Here is a quick summary. The response I am going to offer doesn’t show up there, though it fits in as a variant of the “No Learning” objection. It’s also the reply Daniel Dennett gives in Consciousness Explained.) According to this argument, thorough knowledge of the physical facts of a human being will not reveal any of the subjective states of that human being–what they feel, think, and sense. But this means there are facts about a human being that cannot be known by knowing all of the relevant physical facts. Hence, physicalism is false.
The argument is typically presented in a story about Mary. Suppose Mary is a super-smart brain scientist who is unable to see colors. She learns everything there is to know about the brain, including what the brain does when it sees colors. So Mary has all the neurophysiological facts. But then suppose she is given the ability to see colors. She sees a rose and exclaims, “Oh! So that is what red looks like!” She didn’t know what red looks like, even given her knowledge of the brain. Hence there was something Mary didn’t know about human conscious experience. Hence there’s more to it than physical facts can tell.
It’s a cute and somewhat compelling example. But it’s misleading, and the misleading part is the claim, in the story, that Mary knows everything there is to know about the brain. Set aside the problem that that is quite a lot. The real problem is that it is not enough.
Suppose Mary is a super-smart leg-scientist. She knows everything there is to know about legs, including what legs do when they walk. One day someone asks her for the best route for walking to Las Vegas. She doesn’t know, as she has spent all her time studying leg physiology. Hence, there must be more to walking to Las Vegas than just walking.
Well, yes, we should say, of course there is. One should consult a map of some kind, and it would be helpful to recommend sturdy walking shoes and so on. Just studying walking won’t tell you which direction to take. Similarly for studying the brain. The brain evolves and learns in a natural environment. Elements in the natural environment evolve as well, and sometimes in response to organisms’ abilities to process color information. The red stuff in the world tends to be stuff that commands attention, as red stuff is usually poisonous, or pretending to be poisonous, or yummy, or blood, or meat, or something else you should pay attention to. Part of what red is has to do with what things in the environment get perceived as red, and why. Color perception would not have evolved at all if it had not been useful for processing information.
So we should change the Mary case so that Mary knows even more. She not only knows everything there is to know about the brain, but also everything there is to know about colored objects in the environment, and what role they have played in evolution. So now Mary knows that seeing red evolved so as to alert organisms to threats and opportunities in the environment, and that seeing red things usually results in a charged experience–scary, appetizing, sexy, etc. Red is attention-commanding. As such, it had better stand out brightly against things that can usually be ignored in crisis situations – green and brown things, for example. Mary may as well learn all the ways that red and other color words have been culturally embedded as well, in poems and stories and religious ideas, so as to understand the extensive role red plays in human experience.
Now once Mary knows all that, is it as obvious that she wouldn’t know what red looks like? I’ll admit, it may seem like she still wouldn’t know exactly what it looks like. But I don’t think it is obvious that she wouldn’t know. She might well gain her color vision, see a red thing, and say, “Ah, that’s pretty much what I expected!”
It was announced last week that scientists have integrated neurons from human brains into infant rat brains, resulting in new insights about how our brain cells grow and connect, and some hope of a deeper understanding of neural disorders. Full story here. And while no scientist would admit they are working toward the development of some Rat Man who will escape the lab and wreak havoc on some faraway island or in the subways, it’s impossible not to wonder.
It might seem daunting to read philosophy. Giants of thinking with names like Hegel, Plato, Marx, Nietzsche and Kierkegaard loom over us with imperious glares, asking if we are sure we are worthy. We might worry that we won’t understand anything they are telling us; even if we do think we understand, we still might worry that we’ll get it wrong somehow.
So, if we’re going to read philosophy, we need to begin by knocking those giants down to size. Every one of them tripped and burped and doodled. Some of them were real jerks. Here’s Arthur Schopenhauer on his fellow German philosopher Georg Wilhelm Friedrich Hegel, for instance: ‘a flat-headed, insipid, nauseating, illiterate charlatan, who reached the pinnacle of audacity in scribbling together and dishing up the craziest mystifying nonsense.’ I’m not sure whether this paints Schopenhauer or Hegel as the bigger jerk.
The point is that each giant of philosophy was a human being trying to figure out life by doing just what you do: reading, thinking, observing, writing. Don’t let their big words intimidate you; we can insist that they make sense to us – or, at least, intrigue us – or be left behind in the discount book bin. They must prove their worth to us.
I recently listened to a discussion on the topic of longtermism, or the moral view that we need to factor in the welfare of future generations far more seriously than we do, including generations far, far into the future. No one should deny that the people of the future deserve some of our consideration, but most people soften that consideration with fluffy pillows of uncertainty. We take ourselves to have a rough idea of what the next generation will face, but after that everything gets cloudy fast, and most of us aren’t sure what exactly we should do for those possible people in the clouds, so we start dropping them from our moral calculations.
But if you insist on considering them, and treating them as real (but real elsewhen), their numbers and their interests get big fast. How many people might exist in the whole future of the universe? Millions of billions, maybe, if we go full-on Star Trek. If they each deserve only one millionth of our concern, that still ends up being a whopping amount of concern. Look at things that way, and really just about all of our moral thinking should be focused on the future generations of the universe. The Iroquois, who asserted that we should “have always in view not only the present but also the coming generations,” were severely understating the magnitude of the task before us.
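The back-of-envelope arithmetic here can be made explicit. The sketch below is a toy expected-value calculation; every number in it is hypothetical, chosen only to mirror the figures in the paragraph:

```python
# Toy longtermist arithmetic; all numbers are hypothetical.
future_people = 10**15          # "millions of billions" of possible future people
weight_per_person = 1e-6        # suppose each gets one millionth of full concern
present_people = 8 * 10**9      # roughly today's population, at full weight 1.0

future_concern = future_people * weight_per_person
present_concern = present_people * 1.0

# Even at a millionth of a weight each, the future's claim on us comes out
# on the order of a billion fully weighted contemporaries.
print(f"{future_concern:,.0f}")   # 1,000,000,000
```

Discount each future person to almost nothing, and the sheer head count still pulls the total back up to something enormous; that is the whole trick of the longtermist argument.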
There is a set of interesting discussions posted on Scott Aaronson’s blog among Aaronson, Steven Pinker, and others on whether recent text generators like GPT-3 indicate that artificial intelligence is upon us. The discussion is informed, sensible, and well-mannered: these guys all respect each other’s views, though they disagree, so it’s a model of genuine discourse. The basic disagreement seems to be between the more impressed by GPT-3 and the less impressed, as one would expect.
Aaronson’s dialogue continues with Pinker on this page, and the two seem to get stuck on the question of whether we could have not just AI, but genius AI, such as an AI that duplicates Einstein’s intelligence. They arrive at this point because after Pinker points out that we really don’t have a clear definition of what constitutes “intelligence”, Aaronson counters that if one devised a program that could give you Einstein-level insights, that would surely count as an intelligent program:
“Namely, it could say whatever Albert Einstein would say in a given situation, while thinking a thousand times faster. Feed the AI all the information about physics that the historical Einstein had in 1904, for example, and it would discover special relativity in a few hours, followed by general relativity a few days later. Give the AI a year, and it would think … well, whatever thoughts Einstein would’ve thought, if he’d had a millennium in peak mental condition to think them.”
Pinker replies to this idea with skepticism toward treating intelligence or super-intelligence as a kind of magical substance that inhabits special brains: “I still think that discussions of AI and its role in human affairs—including AI safety—will be muddled as long as the writers treat intelligence as an undefined superpower rather than a mechanism with a makeup that determines what it can and can’t do….[I]f intelligence is a collection of mechanisms rather than a quantity that Einstein was blessed with a lot of, it’s not clear that just speeding him up would capture what anyone would call superintelligence.” So Pinker would rather see a detailed account of how an AI is recognizing problems and thinking about them, rather than just wowing us with stellar performance.
Pinker has an interesting point, I think: it is easy to believe we will have programs that can maintain conversations and solve hard problems, but harder to believe that those programs will be doing it the way humans do it. Such a program will require vastly more information than the human brain runs on, and so won’t fully answer the question of how human beings think.
It’s a good and interesting discussion, as I said, but there’s a further element I think they are missing, and that’s the social dimension of intelligence (and of “genius”, even more so). I am not at all sure Einstein would have been a genius if he had been born in 1779 or 1979 instead of 1879. As it happened, he was in the right place at the right time to make an important contribution in a certain problem space. We shouldn’t assume that he would have the same level of success if dropped into other problem spaces. Same goes for the others on the typical lists of geniuses: Plato, Da Vinci, Shakespeare, Newton, etc. A lot of lucky circumstances need to come together before an individual set of abilities can plug itself in and solve a problem in an impressive way. Drop a genius into another time and they may cease to be a genius entirely.
(I am reminded of Bill and Ted’s Excellent Adventure, in which Beethoven, brought to the present day, quickly masters disco. I don’t quite remember what the other historical figures master—Genghis Khan skateboards, and Joan of Arc jazzercises?!–but the trajectory of this idea would have Newton quickly mastering the internet, Lincoln bringing peace to the Middle East, etc. It’s a vivid expression of the common belief that genius is a superpower.)
A similar point might be made for sub-genius level intelligence, with which most of us are familiar. What matters is not what a smart device or a person can do, or what puzzles it can solve, but the ways in which it can be incorporated into the rest of society. So long as we stick with Turing-test-style contests to see if we have a genuine AI or not, we will always be in an argument like the one among Aaronson and his friends: enthusiasts and skeptics, progressives and cynics, arguing over whether “genuine intelligence” is a necessary condition for passing the test. By contrast, if we find one day that we already have incorporated an entity into our conversations and in our lives, and that in these roles we cannot help but regard the entity as a separate, intelligent person, then the debate is practically over. It’s intelligent, because we can’t help but treat it as intelligent.
So, for example, right now I don’t think anyone regards Siri or Alexa as intelligent. They’re still awkward to deal with. But if someday they are as easy to converse with as a genuine personal assistant, so that we have to think about our interactions with them in the same way we have to think about our interactions with human others, then we will have genuine AIs: not merely because of their programming (though that is not irrelevant, of course), but because they fit into our society in the right sort of way to be regarded as intelligent beings.
We have done this with children over the last 60 years or so, and with animals over the last 20 years. It’s only been in the 20th century that we started inviting children into the adult world, where their feelings and ideas and abilities were taken as equal to an adult’s, or at least, on many occasions, we pretended they were. A kid in 1887 was not awarded nearly the same degree of intellectual authority and respect as a kid in 1987, let alone 2017. Same with our pets: though we do not treat them like fully adult humans—well, it’s not exactly common practice, anyway—it’s still a lot better to be a dog in 2020 than a dog in 1920. For some reason, we decided to admit children and pets into the charmed circle of Beings Whose Intelligence Matters. Who and what they are as persons depends partly on their own abilities and capacities, of course, but also very much on the way we treat them. Talk of “rights” reflects our attitudes on these matters. To establish a being’s rights is to decide, as a matter of policy, how they are to be regarded, and that decision is informed in part by how intelligent we regard them as being.
I do think that, in general, computer scientists and psychologists and philosophers are reluctant to get into the messy business of the social dimensions of all this stuff, because they prefer to work in more clearly defined domains: the confines of the skull, for instance, or the structure of an algorithm, or interactions among a small number of participants. Once we open the lid on the roles of culture, tradition, and even economics and politics on these questions, then the worms start wiggling out at an unmanageable rate. So it’s an understandable oversimplification. But that doesn’t mean that the social dimension can be ignored.
Justin E. H. Smith’s recent book, The Internet Is Not What You Think It Is: A History, A Philosophy, A Warning (Princeton UP 2022) has received plenty of notice here on 3 Quarks Daily, and for good reason. Smith’s books and essays always remind us that, no matter how bizarre and ironic some recent damn thing is, we are always part of a long anthropological history of bizarre irony, and indeed the harder you look the more bizarre and ironic it all gets. At least, I think that is one of the main vibes of 3QD: seeing where we are in some map of the strange natural/cultural universe.
There are plenty of books complaining about the evils of the internet, and to be sure Smith offers four complaints: it is addictive, it warps human lives through algorithms, it is ruled by corporate interests, and it serves as a universal surveillance device. There is enough evidence, both objective and anecdotal, that too much time on the internet turns one’s brain into a Twitter slushie that leaves one in no condition to meditate upon difficult problems, but instead only to scroll and click and scroll and click at digital gewgaws, feeling empty and alone, a terrible feeling one then tries to escape with more scrolling and clicking.
[Reading Jeff Hawkins, A Thousand Brains, Basic Books, 2021]
René Descartes had an uneasy relationship to academics. He was very well educated, but he never held any academic positions and spent much of his life arguing with professors and theologians. He saw himself as a scientist, like Galileo, disclosing the secrets of the universe, and it just so happened that in his view those secrets were not best expressed in Aristotelian or scholastic terms. He seriously considered providing a detailed, point-by-point commentary and criticism on an academic treatise of the day (Francisco Suárez’s Metaphysical Disputations), but then apparently decided his time would be better spent just forging ahead and making his own path, trampling over carefully-articulated distinctions drawn by pointy-headed academics.
I get something of the same vibe from Jeff Hawkins’ book, A Thousand Brains. Hawkins was on his way to becoming an academic brain scientist, but at the time the academics were busy asking narrower questions he didn’t find as interesting, so he went his own way, made millions of dollars, and started his own research institute so he could study what he wanted. Unlike Descartes, I think his team is not fundamentally changing the paradigm for neuroscience, and their work is being published in respected journals; but like Descartes, he is marching ahead at a pace that academics might consider brazen. Not my field, so I don’t know. I can say that his knowledge of philosophy is shallow, but that doesn’t seem to hamper his research at all.
I will also add that while I am fascinated by the first half of his book, the second half is not so great. Hawkins offers his own view of humanity’s future, and he just doesn’t have much that’s interesting to say. So my advice is to skip that and just read through chapter 9.
Okay, on to the book. The crucial players in Hawkins’ story are the neocortex, cortical columns, and frames of reference. The neocortex is a thin sheet of networked neurons, about the size of a dinner napkin and about 3 mm thick. It wraps around the rest of your brain in crinkled fashion. It’s the part of the brain responsible for all of our fancy, distinctly human sorts of thought. If you zoom in on it, you will find about 150,000 cortical columns that span the thickness of the napkin. Those cortical columns have similar structures. So the neocortex seems to be made of 150,000 similar circuits, like just so many Raspberry Pis, though each one involves something like 100,000 neurons. They end up performing different functions because, basically, they are plugged into different parts of our bodies and different sets of other columns.
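Hawkins’ two figures invite a quick sanity check. Multiplying them out (the comparison figure of roughly 16 billion neocortical neurons is a commonly cited estimate, not something from the book):

```python
# Back-of-envelope check on the figures Hawkins reports.
columns = 150_000              # cortical columns in the neocortex
neurons_per_column = 100_000   # "something like 100,000 neurons" each

total_neurons = columns * neurons_per_column
print(f"{total_neurons:,}")    # 15,000,000,000
```

That comes to about 15 billion, which sits comfortably near the roughly 16 billion neurons usually estimated for the human neocortex, so the two numbers at least hang together.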
Cortical columns combine together to provide what Hawkins calls frames of reference. A frame of reference can be thought of as an algorithm that shapes and structures the information it receives, or works to sort out or combine information, or shapes and structures some activity or behavior on our part. Hawkins’ favorite example is our knowledge of a coffee cup. We know how to recognize one by sight; we know how it should feel in our hands, and what each finger should sense as we grip the handle or trace the edge of the rim; we know how much it should weigh in our hands, and so on. Our knowledge of coffee cups—and of the world at large—is built from these frames of reference, or our projections and expectations placed upon our experience. Again, the cortical columns all have pretty much the same structure; so the difference in the information processing between the smell of coffee and the feel of the coffee cup has more to do with the columns being plugged into your nose or your hands than with the inner processing that takes place.
(Just an expression of wonder here that 150,000 cortical columns is enough to do the trick—not just for coffee cups, but all of what we think we know and experience! Can you exhaustively reduce your entire Weltanschauung into 150,000 components? It seems like there should be more, though I’ve been making a list and it ends at 12, so maybe so.)
All these processors plugged into one another provide us with a set of maps or models of what is in the world, where we are in relation to it, what we are doing, and what further actions are available to us. It is highly decentralized: there is no “headquarters” where “it all comes together” (in Daniel Dennett’s endearing terms), but instead there are tens of thousands of processors, all biting off bits of the problem and chewing their way through it and sending their results for other processors to use. In the end, we do and say stuff, and the eventual narration of what we do and say becomes the basis for beliefs about who “we” are and what “we” think “we” are up to.
Another crucial part of Hawkins’ account is the significance of motion. The neocortex does not simply store a bunch of pictures and sounds like a library. It tracks changes and movements and patterns in motion. Our model of the coffee cup is not just a bunch of slides, but clips portraying how it should change as we move in relation to it, or as we move it around through space. Motion and change are what we model in the world, and it is also how we think about ideas and concepts:
“If all knowledge is stored this way, then what we commonly call thinking is actually moving through a space, through a reference frame. Your current thought, the thing that is in your head at any moment, is determined by the current location in the reference frame. As the location changes, the items stored at each location are recalled one at a time. Our thoughts are continually changing, but they are not random. What we think next depends on which direction we mentally move through a reference frame, in the same way that what we see next in a town depends on which direction we move from our current location.”
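The quoted picture can be caricatured in code. The toy below is purely illustrative—it is not Numenta’s model or anything from the book—treating a reference frame as a mapping from locations to stored items, with “thinking” as movement that recalls whatever sits at each new location:

```python
# Illustrative toy, not Hawkins' actual model: a reference frame as a
# mapping from 2-D locations to stored items. "Thinking" is movement:
# each step recalls whatever is stored at the new location.

frame = {
    (0, 0): "coffee cup",
    (1, 0): "handle",
    (2, 0): "rim",
    (0, 1): "mug of tea",   # a neighboring, similar concept
}

def think(start, moves):
    """Walk through the frame, recalling the item (if any) at each location."""
    loc = start
    recalled = []
    for dx, dy in moves:
        loc = (loc[0] + dx, loc[1] + dy)
        if loc in frame:
            recalled.append(frame[loc])
    return recalled

# Moving rightward recalls the cup's parts in order; a different direction
# of mental movement recalls a different train of thought.
print(think((0, 0), [(1, 0), (1, 0)]))   # ['handle', 'rim']
print(think((1, 1), [(-1, 0)]))          # ['mug of tea']
```

The point of the caricature is just the dependence on direction: what gets “thought” next is fixed not by the stored items alone but by where you are and which way you move, exactly as in the town analogy.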
This is an exciting idea. We might know of memory palaces and the spatial hacks people use to remember long strings of information, but this appears to be evidence for the claim that our thinking is always movement through a landscape of ideas. Thinking is not a disembodied experience, but is something more like a virtual reality in which our projected motions take us from one concept to another, from one idea to a similar one, from one metaphor to its neighbors, and so on.
A commendable feature of Hawkins’ book is that he provides many clear analogies and illustrations of how his claims are borne out in both ordinary experiences and experimental ones. I have long been a fan of Dennett’s “pandemonium” model of the mind, and Hawkins’ book seems to be spelling out the details of how the congress of demons is actually embodied in the neocortex. The trick in any explanation of consciousness or intelligence is to show how a bunch of simple systems that aren’t so conscious or intelligent can sum up into a system that is conscious or intelligent (at least on our good days). There will always be philosophers who cry that in principle no such explanation can be achieved, but these are the folks who will be increasingly ignored as the results come in. (True in Descartes’s physics, true in Hawkins’ neuroscience.)
I do think that at some point our understanding of ourselves will have to shift from individuals to communities. That is to say, many of my cortical columns are plugged into not just each other and parts of my body, but other people. Not directly of course (ew). But through language, culture, social dynamics, etc.: all that squishy stuff gets reference-framed in my brain, and exerts strong pulls over who and what I say I am and what the world is supposed to be. Neither Hawkins nor Dennett goes far in this direction, but that’s okay, since we need both approaches, from individuals outward and from societies inward. But in the end, I think what we call “consciousness” and “intelligence” is going to be explained much more by communities and traditions than it is by cortical columns.
Today is July 4th, a day when Americans reflect on the value of freedom and the costs and sacrifices required for it. So it is an appropriate day to reflect on America’s deepest political aspirations.
Nah. Let’s talk about our brains. The neocortex is where all our fancy thinking takes place. The neocortex wraps around the core of our brain, and if you could carefully unwrap it and lay it flat it would be about the size of a dinner napkin, and about 3 millimeters thick. The neocortex consists of 150,000 cortical columns, which we might think of as separate processing units involving hundreds of thousands of neurons. According to research at Jeff Hawkins’ company Numenta (and as explained in his fascinating recent book, A Thousand Brains), these cortical columns are capable of modeling patches of our experience (he calls them “reference frames”) and setting our expectations of what we should experience next, at any given moment. [Note: the remarks that follow are inspired by Hawkins’ book, but shouldn’t be taken as a faithful representation of it.]
The neocortex’s complexity is considerable, but not infinite, and what’s most surprising is that such a relatively small network—you can fold it up and put it in your head—is capable of understanding calculus and making up jaunty little tunes and comparing Kierkegaard to Heidegger, not to mention doing all three in a single afternoon.
…But clearly we do end up with causal knowledge, as Hume himself never doubted, and we manage to navigate our ways through a steady world of enduring objects. We somehow end up with knowledge of an objective world. And we don’t remember that arriving at such knowledge was all that difficult. We just sort of grew into it, and now it seems so natural that it’s really hard to imagine not having it, and it’s even difficult not to find such knowledge perfectly obvious. But in fact it is anything but obvious …
The most important thing is this: what you experience, what you think, and what you believe have no deep connection to what is real. Kant had this single truth exactly right: everything we think we know about the world is mostly a reflection of ourselves—psychologically, culturally, socially. As Leszek Kołakowski wrote, “In all the universe man cannot find a well so deep that, leaning over it, he does not discover at the bottom his own face.”
The explanation for this is straightforward. We only ever encounter the model of the world our minds have made, and each model bears the imprint of its maker, in ways so pervasive and nuanced that we seldom see ourselves in it. Solipsism in this sense is inescapably true. We experience our own minds, for the most part.
But it’s also true that our models are disrupted by experience: we make mistakes, we are surprised, we get things wrong and we collide and break. So it would be wrong to say there isn’t a reality independent of us. But we cannot know it as it is in itself—that’s Kant’s point. All we can do is try to model it, with our sloppy cognitive engines, and over time we have become pretty good at it, if only within the narrow realm of our endeavors.
How do we come to know this fact, that we cannot know reality as it is in itself? Certainly not by looking at reality as it is in itself and comparing it to the cartoon that is in our heads. No, we know it from the inside. We make wrong predictions about the world, and sometimes come to see our predictions as wishful thinking. We observe what other people say and believe, and we see how closely it is tied to their own psychology. We study other societies, all of which plant themselves at the center of what’s important. And from all of these observations we formulate the general thesis that people paint themselves into their worlds, or more accurately: they paint the world with themselves, rather in the way John Malkovich sees everyone as John Malkovich when he enters into his own mind as a stranger in the film Being John Malkovich.
We know this to be true in dreams. In dreams every element is coming from within us—where else could it be coming from? But it is a short step from dreams to waking experience. In waking experience what we see are the judgments we arrive at, and those judgments are formed from sensations, yes, but also from the same internal apparatus that gives shape to our dreams. Our minds are predictive engines, but the predictions we make gain their character from our dream engines. Malkovich, in entering his own head, has supplied himself as input to the apparatus that makes predictions about his experience, and unsurprisingly he sees himself everywhere. Most of us who aren’t crippled by extreme narcissism don’t have this experience, thank god, but we still inject ourselves into our predictions, and thereby into our experience.
Some people think that knowledge is something in the head. I have a belief, and it has appropriate connections to other ideas and beliefs, all in my head. These connections ensure that my belief has good grounding or justification: I have reasons for my belief. And if this particular belief maps onto some structure in the world in the right way, in such a way that we might say my belief is “true”, then I have knowledge. But whatever is “out there” in the world is not my knowledge; it is only a fact or state of affairs that renders my belief true. It’s what is in my head that is my knowledge.
I’m not sure how many people find this way of looking at things congenial. It is a way of looking at knowledge that has been commonplace among Anglo-American philosophers through the 20th century, at least. But as I write it down, I realize how strange it is. In ordinary daily life, knowledge is not regarded as a private pile of chestnuts kept within a cranium. Knowledge is almost always in action, playing an explanatory role in what we do or say or write. On some occasions it is only when we do or say something that we ourselves realize we had some knowledge we didn’t know we had. On other occasions other people see what we are up to, and they say to themselves, “He doesn’t know what he’s doing.”
On ordinary occasions, knowledge is a set of capacities we have, capacities which can be witnessed by others as present. You watch me change a tire, and you see that I know what I’m doing. Sam takes a calculus test and gets a high score. Nelly recounts the history of the Beatles. Juniper fits a cast on a dog’s broken leg. All these people have knowledge, and it is out in the open for anyone to see. Of course there is something going on in their heads—heads are not just ornamental—but the knowledge consists in an agent’s performance, or their capacity to perform, as judged by some kind of audience or measure that indicates their success. It’s not merely “in the head” any more than running is merely “in the legs”.
But even this generous schmearing of knowledge across agents, judges, and environments is too restrictive. Some of our knowledge we possess in virtue of belonging to a group that has that knowledge, loosely speaking. For example, I know humans can colonize Mars. If you press me on this, asking how we would secure a source of water and get plants to grow and protect our living entourage from radiation and hurricane-force winds, my story would crumble pretty quickly. I don’t know those things. My knowledge is based on some loose facts: that we have sent probes to Mars, and humans to the Moon, and I haven’t heard anyone say we could not possibly colonize Mars. That’s just about all I have to offer, and it hardly qualifies me as an expert. But I also believe that if I spent a few weeks doing research or consulting with experts, I could put together a much more detailed account of how we could colonize Mars, perhaps with charts and figures and artist renderings. That is to say, I don’t have the knowledge on me right now, but I could get it for you if you give me a few weeks, because I live in a society where that knowledge is available.
We could play with words and say, for example, that I don’t KNOW we can colonize Mars; I only sort of “know” we can colonize Mars. But I think in most conversations I could make the claim that we can colonize Mars and get virtually no resistance from others, whereas if I assert that we can train monkeys to do brain surgery, I very likely will receive some polite resistance on the matter, even though my knowledge of training monkeys is not appreciably less than my knowledge of establishing a base on Mars. The difference between the two cases is explained by the fact that our society, as a whole, knows how to colonize Mars, and we don’t know how to train monkeys to perform brain surgery.
Somehow we gain a rough sense of the knowledge that is within our society’s reach, and what is not, though none of us are experts on most matters. Actually, this is no surprise: this is what it is to live in a society rich in knowledge, information, and access. We read and watch and listen and learn, and thus come to a fallible estimation of our collective capacities. One needn’t travel far down the road, geographically or historically, to come across other societies far poorer in their collective knowledge.
We might see if we can organize our knowledge into a series of gradations, extending from what I really know, right here, right now, to knowledge I have as a result of the society I am in. In between these extremes would be layers of things I know something about, or just a little about, or remember knowing at one time, or things about which I know some parts but am confused about others. It would be a complicated, multi-dimensional series, to be sure. But I want to try to push aside as many complications as possible, and focus on the things we know mostly because we are in a society that knows them. Let’s give this murky area of known things its own distinctive name: educated true opinions.
For example, my knowledge that we can colonize Mars is an educated true opinion. Your knowledge that vaccines reduce the risk of contracting certain diseases is an educated true opinion. Our knowledge that humans are causing global warming is an educated true opinion, and our knowledge that antisemitic conspiracy theories have no factual basis in reality is an educated true opinion. It’s a cumbersome name, but each element is important. I insist on the word “educated” because these opinions are grounded in a roughly accurate sense of what our society collectively knows, even if we as individuals are not able to supply extensive reasons immediately upon request. I insist on the word “true” because I do not wish to say that any of these true opinions are merely opinions that might be true or false: no, they are true. I insist on the word “opinion” because this knowledge concerns matters about which we are not experts.
As the examples I just offered might suggest, many of us end up arguing over educated true opinions. That is, many of us argue over which claims are educated true opinions, and which aren’t. For some of us this arguing is only an occasional pastime. For others it becomes a rage-filled, life-consuming passion. Why it ends up being so important to some people, and not to others, is an interesting question.
One might think any such argument could be settled quite easily by asking some experts to tell us what’s true and what isn’t. But whether we can trust the experts, or even identify them reliably, turns out to be a matter over which there is further disagreement. On many issues we don’t even agree on what it would take to establish a claim as an educated true opinion. Sometimes the arguments become so acrimonious that it can be asked whether we are living in the same society.
Liberalism has been so successful in promoting a wide range of different ideas that its own name has gotten pretty murky. Many people think it means supporting a welfare state, championing the voices of people usually pushed to the side, and generally showing sympathy for anyone or anything that can’t defend itself. Other people think it means being a stupid hippie crybaby. Still others lump liberalism together with belonging to a specific political party, and others argue it’s just another word for capitalism. But the classic meaning is that a liberal tries to establish a social order that gives people the freedom to live however they think best without getting in each other’s way. Fundamentally, it is the defense of pluralism, or the broad toleration of different visions of what’s good. It’s this sense of liberalism that I think we shouldn’t give up on just yet.
A recent blog post by philosopher Liam Kofi Bright explains why he isn’t a liberal. (And a similarly forceful critique is offered by Christopher Horner here on 3QD.) Bright argues, first, that humans just can’t maintain a sharp distinction between what’s private and what’s public: our own visions of the good life inevitably will pollute our politics (and so pluralism is unstable). Second, and relatedly, he argues the very idea is incoherent, since a governing institution necessarily shutters some visions of a human life as no longer open for business. Third, he argues that liberalism historically has been the vision advanced by white plutocrats, and it carries their worldview in its DNA, particularly under the banners of private property and rapacious capitalism.