Some AI art

When I put together the “Two and a half minutes” piece for 3QD (below), I experimented with DALL-E to compose some art to accompany it. I ended up going with something less literal, but here’s what the AI did with the prompt “A painting of a muddy landscape with humans climbing out of brown pods”:

Looks like AI can pass the “think up some depressing imagery” test.

Posted in Uncategorized | 3 Comments

3QD: Two and a half minutes

There is nothing new in this thought. But it’s worth revisiting now and again.

There’s an unbounded muddy terrain as dark and timeless as night. Drifting slowly over the landscape is a disk of light from an unknown source, like a spotlight. There’s no predictable pattern to its motion, and no place is illuminated for more than two and a half minutes. By then the light has moved on, never to return again.

When the light shines upon a circle of the land, its muddy features are revealed, tangled roots and rocks and mud. Look closer and you will see dull brown pods that stir into motion as soon as the light touches them. The pods break open and human beings climb out.

Read more

Posted in 3QD essays | Leave a comment

3QD: Sea monster

Vasco da Gama was the first person we can name who successfully commanded a voyage around Africa's southern tip, the Cape of Good Hope. It is a treacherous passage, where warm currents from the southern part of the Indian Ocean clash against the icy currents of the south Atlantic, leading to dangerous waves that have swallowed many ships. (Indeed, at the time it was known as "the Cape of Storms".) Da Gama gave the cape a wide berth, sailing far from the sight of land, before turning northward and poking his way along the eastern coast of Africa, where many hijinks ensued.

This was in 1497, and Europeans were keen to find some route to Indian spices that didn’t involve crossing lands controlled by some sultan or other. Da Gama showed everyone the way, and the Dutch and the English rushed through and established colonies along the coasts of the Indian Ocean. Da Gama’s fellow Portuguese established colonies as well, of course, but not with equal success. Part of the reason was that Portuguese sailors as a whole were not very interested in following da Gama’s Cape Route because they knew damned well there was a monster down there that ate ships like snacks.

Read more here

Posted in 3QD essays | Leave a comment

3QD: Give me monotony!

“Monotonizing existence, so that it won’t be monotonous. Making daily life anodyne, so that the littlest thing will amuse.” —Bernardo Soares (Fernando Pessoa), The Book of Disquiet, translated by Richard Zenith, section 171

Senhor Soares goes on to explain that in his job as assistant bookkeeper in the city of Lisbon, when he finds himself “between two ledger entries,” he has visions of escaping, visiting the grand promenades of impossible parks, meeting resplendent kings, and traveling over non-existent landscapes. He doesn’t mind his monotonous job, so long as he has the occasional moment to indulge in his daydreams. And the value for him in these daydreams is that they are not real. If they were real, they would not belong to him. They would belong to others as public resources, and not reside in his own private realm. And what is more, if they were real, then what would he have left to dream? Far better, he thinks, “to have Vasques my boss than the kings of my dreams.” It’s more than that he doesn’t mind his monotonous job. On the contrary: the more monotonous his existence, the better his dreams.

Read more here

Posted in 3QD essays | 2 Comments

Knowledge for Humans

I have taught “Epistemology” for many years, but it has always been for me a difficult course to plan. I want to cover traditional philosophical questions about skepticism, justification, induction, and belief in the external world. But then I also want to cover topics arising from the social conditions of knowledge: how cultural ideologies and prejudices color what we perceive and what we think we know, and “the crooked timber of humanity” and all that. And then I also want to explore human psychology and our natural inclinations toward fallacious thinking, as well as how conspiracy theories arise, and the fresh challenges the internet brings to epistemology.

So finally, inevitably, I wrote my own textbook, and since textbooks are usually outrageously overpriced, I wanted to make mine an open resource ( = free!). I was lucky enough to gain tremendous, enthusiastic support from my university’s Open Educational Resource staff. And so here it is, for anyone interested!

Link to the book

Posted in Uncategorized | 5 Comments

The argument from design, and the surprising significance of evolutionary explanation

At least on the surface, there seems to be something incongruous in regarding some artifact (like a watch) as clearly implying some kind of intelligent, crafty mind, but then regarding that intelligent, crafty mind as not implying any sort of further creator, but coming about through natural causes. A watchmaker is, if anything, more impressive and more in need of some explanation than the watch. So by what principle do we say that the watch requires design and artifice, but the watchmaker does not?

Rock Beast by Dusty101 on DeviantArt

It’s a tricky question, because it is difficult to sort out “order that requires design” from “order that does not require design”. If we point out that all of an entity’s parts work together, that its behavior is regular or uniform, and that it is clearly not a simple heap or aggregate of parts, it is not clear whether we are talking about a watch or a giraffe. Natural objects and artificial objects all exhibit order; but which kind of order implies design, and which does not?

To sharpen the question a bit further, imagine the following case. Suppose we land on Mars and we find remarkable entities composed of stone. These entities can move themselves over the landscape, and they have parts. We observe that they are able to seek out specific kinds of rocks and assemble them into copies of themselves. Sometimes they break or wear down, but enough of them are able to make copies of themselves before this happens that there seems to be a steady population of them on the planet.

Imagine further that, in response to this discovery, the scientists on earth form two camps. One camp proclaims, “We have found life on another planet! The surprise is that living beings can be made of rock.” The second camp proclaims, “These robotic artifacts are evidence for intelligent robot engineers, who either once lived on Mars, or at least were able to send some clever rock robots to Mars.” One might initially say these are just two equivalent descriptions of the entities, but that isn’t so, because the two theories are very different in what they imply about the past: one theory says the robots arose naturally, without any designers, and the other theory says there must have been designers. So they are saying different things.

How can we tell which theory is right? Suppose we examine the entities more closely, and learn that they are powered by some chemical process involving the rocks and the Martian atmosphere, and that each one has the parts it has because their “parent” made them that way. They don’t “grow” or change on their own, except through erosion and minor collisions. They don’t heal, and, being rocks, they don’t have DNA. They don’t communicate in any way we can see, and each one of them, when put into a similar situation, “robotically” will do a similar thing. So far, no clear reason to regard them decisively as living beings or decisively as artificial robots.

In this imagined case, there is nothing inherent in the rock entities that tells us whether they are natural or designed. But there should be some fact of the matter, because these rock things must have come from somewhere, after all. 

This leads us to another possible way of answering the question. Could these rock entities have evolved through natural selection? Answering this question requires that we try to construct an explanation for them that appeals to some sort of random generating process, some sort of environment that selects for some entities over others, and enough time for these factors to result in the rock entities in question. The explanation we construct will have to be consistent with Mars’ history, so far as we know it, and should not rely upon too many improbable, lucky accidents (each one counts against the explanation). 
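The recipe described here (a random generating process, a selecting environment, and enough time) can be seen in miniature in a standard toy simulation, in the spirit of Dawkins' "weasel" program. This is my illustration, not part of the essay, and the target string is of course an artificial stand-in:

```python
# A toy "random generation + selection + time" simulation, in the spirit of
# Dawkins' weasel program (an illustration, not part of the essay): order can
# accumulate without a designer when variants are copied with errors and an
# environment filters them.
import random

TARGET = "ROCKBOT"  # stands in for a viable rock-entity design
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def fitness(s):
    # How many positions match the "viable" form the environment favors.
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.1):
    # Copying with occasional random errors.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

random.seed(0)
best = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
generations = 0
while fitness(best) < len(TARGET):
    offspring = [best] + [mutate(best) for _ in range(49)]  # parent survives too
    best = max(offspring, key=fitness)  # the environment "selects"
    generations += 1

print(f"reached {TARGET!r} after {generations} generations")
```

The point is not that Mars ran anything like this program, but that if an explanation of this shape can be constructed at all, the no-designer hypothesis becomes a live option.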

If it turns out that we can construct such an explanation, then it is possible that these rock entities have come about naturally, and if we have no real evidence for Martian or other-worldly engineers, then that’s the more likely explanation. If it turns out there is no way to construct such an explanation, then we will judge the design hypothesis as more probable (leaving some room for the possibility that future theorists will know more than we do, and come up with an evolutionary explanation). So there is a way to answer the question, and it comes down to what kinds of explanations we are able or not able to construct.

This offers a surprising lesson about the role of Darwinian evolution by natural selection in the distinctions we make between what is natural and what is designed. Indeed, the story suggests that “natural” just means “can be explained through evolution”–and old Charlie Darwin may not have realized that, in his work, he was in fact providing a criterion for nature itself! It also helps us to understand why “the design argument” held such powerful sway over intellectuals before Darwin: there was no clear criterion for when we should regard an ordered thing as natural, and so there was really nothing to keep anyone from seeing any ordered thing as designed. 

Posted in Uncategorized | Leave a comment

The Thing About Mary

One of the cleanest and most compelling arguments against physicalism in the philosophy of mind is the “Knowledge Argument”. (Here is a quick summary. The response I am going to offer doesn’t show up there, though it fits in as a variant of the “No Learning” objection. It’s also the reply Daniel Dennett gives in Consciousness Explained.) According to this argument, thorough knowledge of the physical facts of a human being will not reveal any of the subjective states of that human being–what they feel, think, and sense. But this means there are facts about a human being that cannot be known by knowing all of the relevant physical facts. Hence, physicalism is false.

The argument is typically presented in a story about Mary. Suppose Mary is a super-smart brain scientist who is unable to see colors. She learns everything there is to know about the brain, including what the brain does when it sees colors. So Mary has all the neurophysiological facts. But then suppose she is given the ability to see colors. She sees a rose and exclaims, “Oh! So that is what red looks like!” She didn’t know what red looks like, even given her knowledge of the brain. Hence there was something Mary didn’t know about human conscious experience. Hence there’s more to it than physical facts can tell.

It’s a cute and somewhat compelling example. But it’s misleading, and the misleading part is the claim, within the story, that Mary knows everything there is to know about the brain. Set aside the problem that that is quite a lot. The problem, really, is that that is not enough.

Suppose Mary is a super-smart leg-scientist. She knows everything there is to know about legs, including what legs do when they walk. One day someone asks her for the best route for walking to Las Vegas. She doesn’t know, as she has spent all her time studying leg physiology. Hence, there must be more to walking to Las Vegas than just walking.

Well, yes, we should say, of course there is. One should consult a map of some kind, and it would be helpful to recommend sturdy walking shoes and so on. Just studying walking won’t tell you which direction to take. Similarly for studying the brain. The brain evolves and learns in a natural environment. Elements in the natural environment evolve as well, and sometimes in response to organisms’ abilities to process color information. The red stuff in the world tends to be stuff that commands attention, as red stuff is usually poisonous, or pretending to be poisonous, or yummy, or blood, or meat, or something else you should pay attention to. Part of what red is has to do with what things in the environment get perceived as red, and why. Color perception would not have evolved at all if it had not been useful for processing information.

So we should change the Mary case so that Mary knows even more. She not only knows everything there is to know about the brain, but also everything there is to know about colored objects in the environment, and what role they have played in evolution. So now Mary knows that seeing red evolved so as to alert organisms to threats and opportunities in the environment, and that seeing red things usually results in a charged experience–scary, appetizing, sexy, etc. Red is attention-commanding. As such, it had better stand out brightly against things that can usually be ignored in crisis situations – green and brown things, for example. Mary may as well learn all the ways that red and other color words have been culturally embedded as well, in poems and stories and religious ideas, so as to understand the extensive role red plays in human experience.

Now once Mary knows all that, is it as obvious that she wouldn’t know what red looks like? I’ll admit, it may seem like she still wouldn’t know exactly what it looks like. But I don’t think it is obvious that she wouldn’t know. She might well gain her color vision, see a red thing, and say, “Ah, that’s pretty much what I expected!”

Posted in Uncategorized | 2 Comments

3QD: Rat Man? Ewww!

It was announced last week that scientists have integrated neurons from human brains into infant rat brains, resulting in new insights about how our brain cells grow and connect, and some hope of a deeper understanding of neural disorders. Full story here. And while no scientist would admit they are working toward the development of some Rat Man who will escape the lab and wreak havoc on some faraway island or in the subways, it’s impossible not to wonder.

Read more here.

Posted in 3QD essays | Leave a comment

Psyche: How to Read Philosophy

It might seem daunting to read philosophy. Giants of thinking with names like Hegel, Plato, Marx, Nietzsche and Kierkegaard loom over us with imperious glares, asking if we are sure we are worthy. We might worry that we won’t understand anything they are telling us; even if we do think we understand, we still might worry that we’ll get it wrong somehow.

So, if we’re going to read philosophy, we need to begin by knocking those giants down to size. Every one of them tripped and burped and doodled. Some of them were real jerks. Here’s Arthur Schopenhauer on his fellow German philosopher Georg Wilhelm Friedrich Hegel, for instance: ‘a flat-headed, insipid, nauseating, illiterate charlatan, who reached the pinnacle of audacity in scribbling together and dishing up the craziest mystifying nonsense.’ I’m not sure whether this paints Schopenhauer or Hegel as the bigger jerk.

The point is that each giant of philosophy was a human being trying to figure out life by doing just what you do: reading, thinking, observing, writing. Don’t let their big words intimidate you; we can insist that they make sense to us – or, at least, intrigue us – or are left behind in the discount book bin. They must prove their worth to us.

Read more here

Posted in Uncategorized | 2 Comments

3QD: Thinking Big About The Future

I recently listened to a discussion on the topic of longtermism, or the moral view that we need to factor in the welfare of future generations far more seriously than we do, including generations far, far into the future. No one should deny that the people of the future deserve some of our consideration, but most people soften that consideration with fluffy pillows of uncertainty. We take ourselves to have a rough idea of what the next generation will face, but after that everything gets cloudy fast, and most of us aren’t sure what exactly we should do for those possible people in the clouds, so we start dropping them from our moral calculations.

But if you insist on considering them, and treating them as real (but real elsewhen), their numbers and their interests get big fast. How many people might exist in the whole future of the universe? Millions of billions, maybe, if we go full-on Star Trek. If they each deserve only one millionth of our concern, that still ends up being a whopping amount of concern. Look at things that way, and really just about all of our moral thinking should be focused on the future generations of the universe. The Iroquois who asserted that we should “have always in view not only the present but also the coming generations” were severely understating the magnitude of the task before us.
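The arithmetic in that last step can be made explicit. The numbers below are illustrative assumptions of mine; the essay gives only "millions of billions" and "one millionth":

```python
# Back-of-the-envelope version of the longtermist arithmetic above.
# All figures are illustrative assumptions, not estimates from the essay.

future_people = 1e15    # "millions of billions" of possible future people
discount = 1e-6         # each owed one millionth of the concern owed a contemporary
present_people = 8e9    # roughly everyone alive today, each at full weight

future_claim = future_people * discount  # future people, in present-person equivalents
print(f"future claim: {future_claim:.0e} present-person equivalents")
print(f"ratio to everyone alive today: {future_claim / present_people:.3f}")
```

Even at a one-millionth discount, the future's total claim lands on the same order as everyone currently alive, and raising the population estimate by a few orders of magnitude makes it swamp the present entirely.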

Read more here

Posted in 3QD essays | Leave a comment

Artificial Einstein

There is a set of interesting discussions posted on Scott Aaronson’s blog among Aaronson, Steven Pinker, and others on whether recent text generators like GPT-3 indicate that artificial intelligence is upon us. The discussion is informed, sensible, and well-mannered: these guys all respect each other’s views, though they disagree, so it’s a model of genuine discourse. The basic disagreement seems to be between the more impressed by GPT-3 and the less impressed, as one would expect.

Aaronson’s dialogue continues with Pinker on this page, and the two seem to get stuck on the question of whether we could have not just AI, but genius AI, such as an AI that duplicates Einstein’s intelligence. They arrive at this point because after Pinker points out that we really don’t have a clear definition of what constitutes “intelligence”, Aaronson counters that if one devised a program that could give you Einstein-level insights, that would surely count as an intelligent program:

“Namely, it could say whatever Albert Einstein would say in a given situation, while thinking a thousand times faster.  Feed the AI all the information about physics that the historical Einstein had in 1904, for example, and it would discover special relativity in a few hours, followed by general relativity a few days later.  Give the AI a year, and it would think … well, whatever thoughts Einstein would’ve thought, if he’d had a millennium in peak mental condition to think them.”

Pinker replies to this idea with skepticism toward treating intelligence or super-intelligence as a kind of magical substance that inhabits special brains: “I still think that discussions of AI and its role in human affairs—including AI safety—will be muddled as long as the writers treat intelligence as an undefined superpower rather than a mechanism with a makeup that determines what it can and can’t do….[I]f intelligence is a collection of mechanisms rather than a quantity that Einstein was blessed with a lot of, it’s not clear that just speeding him up would capture what anyone would call superintelligence.” So Pinker would rather see a detailed account of how an AI is recognizing problems and thinking about them, rather than just wowing us with stellar performance.

Pinker has an interesting point, I think: it is easy to believe we will have programs that can maintain conversations and solve hard problems, but harder to believe that those programs will be doing it the way humans do it. The programming will require much more massive information than the human brain runs on, and so won’t fully answer the question of how human beings think.

It’s a good and interesting discussion, as I said, but there’s a further element I think they are missing, and that’s the social dimension of intelligence (and of “genius”, even more so). I am not at all sure Einstein would have been a genius if he had been born in 1779 or 1979 instead of 1879. As it happened, he was in the right place at the right time to make an important contribution in a certain problem space. We shouldn’t assume that he would have the same level of success if dropped into other problem spaces. Same goes for the others on the typical lists of geniuses: Plato, Da Vinci, Shakespeare, Newton, etc. A lot of lucky circumstances need to come together before an individual set of abilities can plug itself in and solve a problem in an impressive way. Drop a genius into another time and they may cease to be a genius entirely.

(I am reminded of Bill and Ted’s Excellent Adventure, in which Beethoven, brought to the present day, quickly masters disco. I don’t quite remember what the other historical figures master—Genghis Khan skateboards, and Joan of Arc jazzercises?!–but the trajectory of this idea would have Newton quickly mastering the internet, Lincoln bringing peace to the Middle East, etc. It’s a vivid expression of the common belief that genius is a superpower.)

A similar point might be made for sub-genius level intelligence, with which most of us are familiar. What matters is not what a smart device or a person can do, or what puzzles it can solve, but the ways in which it can be incorporated into the rest of society. So long as we stick with Turing-test-style contests to see if we have a genuine AI or not, we will always be in an argument like the one among Aaronson and his friends: enthusiasts and skeptics, progressives and cynics, arguing over whether “genuine intelligence” is a necessary condition for passing the test. By contrast, if we find one day that we already have incorporated an entity into our conversations and in our lives, and that in these roles we cannot help but regard the entity as a separate, intelligent person, then the debate is practically over. It’s intelligent, because we can’t help but treat it as intelligent.

So, for example, right now I don’t think anyone regards Siri or Alexa as intelligent. They’re still awkward to deal with. But if someday they are as easy to converse with as a genuine personal assistant, so that we have to think about our interactions with them in the same way we have to think about our interactions with human others, then we will have genuine AIs: not merely because of their programming (though that is not irrelevant, of course), but because they fit into our society in the right sort of way to be regarded as intelligent beings. 

We have done this with children over the last 60 years or so, and with animals over the last 20 years. It’s only been in the 20th century that we started inviting children into the adult world, where their feelings and ideas and abilities were taken as equal to an adult’s, or at least on many occasions we pretended as if they were. A kid in 1887 was not awarded nearly the same degree of intellectual authority and respect as a kid in 1987, let alone 2017. Same with our pets: though we do not treat them like fully adult humans—well, it’s not exactly common practice, anyway—it’s still a lot better to be a dog in 2020 than a dog in 1920. For some reason, we decided to include children and pets into the charmed circle of Beings Whose Intelligence Matters. Who and what they are as persons depends partly on their own abilities and capacities, of course, but also very much on the way we treat them. Talk of “rights” reflects our attitudes on these matters. To establish a being’s rights is to decide, as a matter of policy, how they are to be regarded, and that decision is informed in part by how intelligent we regard them as being.

I do think that, in general, computer scientists and psychologists and philosophers are reluctant to get into the messy business of the social dimensions of all this stuff, because they prefer to work in more clearly defined domains: the confines of the skull, for instance, or the structure of an algorithm, or interactions among a small number of participants. Once we open the lid on the roles of culture, tradition, and even economics and politics on these questions, then the worms start wiggling out at an unmanageable rate. So it’s an understandable oversimplification. But that doesn’t mean that the social dimension can be ignored.

Posted in Machines / gadgets / technology / games | Leave a comment

3QD: Is The Internet What I Think It Is?

Justin E. H. Smith’s recent book, The Internet Is Not What You Think It Is: A History, A Philosophy, A Warning (Princeton UP 2022) has received plenty of notice here on 3 Quarks Daily, and for good reason. Smith’s books and essays always remind us that, no matter how bizarre and ironic some recent damn thing is, we are always part of a long anthropological history of bizarre irony, and indeed the harder you look the more bizarre and ironic it all gets. At least, I think that is one of the main vibes of 3QD: seeing where we are in some map of the strange natural/cultural universe.

There are plenty of books complaining about the evils of the internet, and to be sure Smith offers four complaints: it is addictive, it warps human lives through algorithms, it is ruled by corporate interests, and it serves as a universal surveillance device. There is enough evidence, both objective and anecdotal, that too much time on the internet turns one’s brain into a Twitter slushie that leaves one in no condition to meditate upon difficult problems, but instead only to scroll and click and scroll and click at digital gewgaws, feeling empty and alone, a terrible feeling one then tries to escape with more scrolling and clicking.

Read more here

Posted in 3QD essays | Leave a comment

A Thousand Brains

[Reading Jeff Hawkins, A Thousand Brains, Basic Books, 2021]

René Descartes had an uneasy relationship with academics. He was very well educated, but he never held any academic positions and spent much of his life arguing with professors and theologians. He saw himself as a scientist, like Galileo, disclosing the secrets of the universe, and it just so happened that in his view those secrets were not best expressed in Aristotelian or scholastic terms. He seriously considered providing a detailed, point-by-point commentary and criticism on an academic treatise of the day (Francisco Suárez’s Metaphysical Disputations), but then apparently decided his time would be better spent just forging ahead and making his own path, trampling over carefully-articulated distinctions drawn by pointy-headed academics.

I get something of the same vibe from Jeff Hawkins’ book, A Thousand Brains. Hawkins was on his way to becoming an academic brain scientist, but at the time the academics were busy asking narrower questions he didn’t find as interesting, so he went his own way, made millions of dollars, and started his own research institute so he could study what he wanted. Unlike Descartes, I think his team is not fundamentally changing the paradigm for neuroscience, and their work is being published in respected journals; but like Descartes, he is marching ahead at a pace that academics might consider brazen. Not my field, so I don’t know. I can say that his knowledge of philosophy is shallow, but that doesn’t seem to hamper his research at all.

I will also add that while I am fascinated by the first half of his book, the second half is not so great. Hawkins offers his own view of humanity’s future, and he just doesn’t have much that’s interesting to say. So my advice is to skip that and just read through chapter 9.

Okay, on to the book. The crucial players in Hawkins’ story are the neocortex, cortical columns, and frames of reference. The neocortex is a thin sheet of networked neurons, about the size of a dinner napkin and about 3 mm thick. It wraps around the rest of your brain in crinkled fashion. It’s the part of the brain responsible for all of our fancy, distinctly human sorts of thought. If you zoom in on it, you will find about 150,000 cortical columns that span the thickness of the napkin. Those cortical columns have similar structures. So the neocortex seems to be made of 150,000 similar circuits, like just so many Raspberry Pis, though each one involves something like 100,000 neurons. They end up performing different functions because, basically, they are plugged into different parts of our bodies and different sets of other columns. 
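Multiplying out the figures quoted above is a quick sanity check on the account's scale (the numbers below are only the ones given in the paragraph, rounded as stated):

```python
# Multiplying out the scale figures quoted above from Hawkins' account.
columns = 150_000             # cortical columns in the neocortex
neurons_per_column = 100_000  # "something like 100,000 neurons" each

total_neurons = columns * neurons_per_column
print(f"{total_neurons:,} neocortical neurons")  # 15,000,000,000
```

That is about 15 billion, which sits comfortably next to the commonly cited estimate of roughly 16 billion neurons in the human neocortex.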

Cortical columns, as drawn by Ramón y Cajal

Cortical columns combine together to provide what Hawkins calls frames of reference. A frame of reference can be thought of as an algorithm that shapes and structures the information it receives, or works to sort out or combine information, or shapes and structures some activity or behavior on our part. Hawkins’ favorite example is our knowledge of a coffee cup. We know how to recognize one by sight; we know how it should feel in our hands, and what each finger should sense as we grip the handle or trace the edge of the rim; we know how much it should weigh in our hands, and so on. Our knowledge of coffee cups—and of the world at large—is built from these frames of reference, or our projections and expectations placed upon our experience. Again, the cortical columns all have pretty much the same structure; so the difference in the information processing between the smell of coffee and the feel of the coffee cup has more to do with the cortical columns being plugged into your nose or your hands than with the inner processing that takes place.

(Just an expression of wonder here that 150,000 cortical columns is enough to do the trick—not just for coffee cups, but all of what we think we know and experience! Can you exhaustively reduce your entire Weltanschauung into 150,000 components? It seems like there should be more, though I’ve been making a list and it ends at 12, so maybe so.)

All these processors plugged into one another provide us with a set of maps or models of what is in the world, where we are in relation to it, what we are doing, and what further actions are available to us. It is highly decentralized: there is no “headquarters” where “it all comes together” (in Daniel Dennett’s endearing terms), but instead there are tens of thousands of processors, all biting off bits of the problem and chewing their way through it and sending their results for other processors to use. In the end, we do and say stuff, and the eventual narration of what we do and say becomes the basis for beliefs about who “we” are and what “we” think “we” are up to. 

Another crucial part of Hawkins’ account is the significance of motion. The neocortex does not simply store a bunch of pictures and sounds like a library. It tracks changes and movements and patterns in motion. Our model of the coffee cup is not just a bunch of slides, but clips portraying how it should change as we move in relation to it, or as we move it around through space. Motion and change are what we model in the world, and it is also how we think about ideas and concepts: 

“If all knowledge is stored this way, then what we commonly call thinking is actually moving through a space, through a reference frame. Your current thought, the thing that is in your head at any moment, is determined by the current location in the reference frame. As the location changes, the items stored at each location are recalled one at a time. Our thoughts are continually changing, but they are not random. What we think next depends on which direction we mentally move through a reference frame, in the same way that what we see next in a town depends on which direction we move from our current location.”

(Hawkins, Jeff. A Thousand Brains (p. 80). Basic Books. Kindle Edition.)

This is an exciting idea. We might know of memory palaces and the spatial hacks people use to remember long strings of information, but this appears to be evidence for the claim that our thinking is always movement through a landscape of ideas. Thinking is not a disembodied experience, but is something more like a virtual reality in which our projected motions take us from one concept to another, from one idea to a similar one, from one metaphor to its neighbors, and so on. 
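A crude way to picture the quoted passage (a toy sketch of mine, emphatically not Hawkins' model) is to store items at locations and let "thinking" be movement, so that what comes to mind next depends on the direction taken:

```python
# A toy illustration (mine, not Hawkins') of "thinking as moving through a
# reference frame": items are stored at locations, and what you recall next
# depends on which direction you move from where you are.

frame = {
    (0, 0): "coffee cup",
    (1, 0): "handle",
    (2, 0): "rim",
    (0, 1): "kitchen",
    (0, 2): "morning routine",
}

def think(start, moves):
    """Walk the frame from `start`, recalling whatever is stored at each stop."""
    x, y = start
    recalled = []
    for dx, dy in moves:
        x, y = x + dx, y + dy
        recalled.append(frame.get((x, y), "(nothing stored here)"))
    return recalled

# Moving "east" yields the cup's parts; moving "north" drifts to its contexts.
print(think((0, 0), [(1, 0), (1, 0)]))   # ['handle', 'rim']
print(think((0, 0), [(0, 1), (0, 1)]))   # ['kitchen', 'morning routine']
```

The same starting thought leads to different trains of thought depending purely on the direction of mental travel, which is the analogy with moving through a town.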

A commendable feature of Hawkins' book is that he provides many clear analogies and illustrations of how his claims are borne out in both ordinary experiences and experimental ones. I have long been a fan of Dennett's "pandemonium" model of the mind, and Hawkins' book seems to be spelling out the details of how the congress of demons is actually embodied in the neocortex. The trick in any explanation of consciousness or intelligence is to show how a bunch of simple systems that aren't so conscious or intelligent can sum up into a system that is conscious or intelligent (at least on our good days). There will always be philosophers who cry that in principle no such explanation can be achieved, but these are the folks who will be increasingly ignored as the results come in. (True in Descartes's physics, true in Hawkins' neuroscience.)

I do think that at some point our understanding of ourselves will have to shift from individuals to communities. That is to say, many of my cortical columns are plugged into not just each other and parts of my body, but into other people. Not directly, of course (ew), but through language, culture, social dynamics, and so on: all that squishy stuff gets reference-framed in my brain, and exerts strong pulls over who and what I say I am and what the world is supposed to be. Neither Hawkins nor Dennett goes far in this direction, but that's okay, since we need both approaches, from individuals outward and from societies inward. But in the end, I think what we call "consciousness" and "intelligence" are going to be explained much more by communities and traditions than by cortical columns.

Posted in Books | 2 Comments

3QD: Swarms in the brain

Today is July 4th, a day when Americans reflect on the value of freedom and the costs and sacrifices required for it. So it is an appropriate day to reflect on America’s deepest political aspirations.

Nah. Let’s talk about our brains. The neocortex is where all our fancy thinking takes place. The neocortex wraps around the core of our brain, and if you could carefully unwrap it and lay it flat, it would be about the size of a dinner napkin, and about 3 millimeters thick. The neocortex consists of roughly 150,000 cortical columns, which we might think of as separate processing units involving hundreds of thousands of neurons. According to research at Jeff Hawkins’ company Numenta (and as explained in his fascinating recent book, A Thousand Brains), these cortical columns are capable of modeling patches of our experience (he calls them “reference frames”) and setting our expectations of what we should experience next at any given moment. [Note: the remarks that follow are inspired by Hawkins’ book, but shouldn’t be taken as a faithful representation of it.]

The neocortex’s complexity is considerable, but not infinite, and what’s most surprising is that such a relatively small network—you can fold it up and put it in your head—is capable of understanding calculus and making up jaunty little tunes and comparing Kierkegaard to Heidegger, not to mention doing all three in a single afternoon. 

More here

Posted in 3QD essays | Leave a comment

3QD: CAPTCHAs, Kant, and Culture

…But clearly we do end up with causal knowledge, as Hume himself never doubted, and we manage to navigate our ways through a steady world of enduring objects. We somehow end up with knowledge of an objective world. And we don’t remember that arriving at such knowledge was all that difficult. We just sort of grew into it, and now it seems so natural that it’s really hard to imagine not having it, and it’s even difficult not to find such knowledge perfectly obvious. But in fact it is anything but obvious …

Read more here

Posted in 3QD essays | Leave a comment