There is a set of interesting discussions posted on Scott Aaronson’s blog among Aaronson, Steven Pinker, and others on whether recent text generators like GPT-3 indicate that artificial intelligence is upon us. The discussion is informed, sensible, and well-mannered: these guys all respect each other’s views, though they disagree, so it’s a model of genuine discourse. The basic disagreement, as one would expect, is between those more impressed by GPT-3 and those less impressed.
Aaronson’s dialogue with Pinker continues on this page, and the two seem to get stuck on the question of whether we could have not just AI but genius AI, such as an AI that duplicates Einstein’s intelligence. They arrive at this point when Pinker points out that we really don’t have a clear definition of what constitutes “intelligence”, and Aaronson counters that a program that could give you Einstein-level insights would surely count as an intelligent program:
“Namely, it could say whatever Albert Einstein would say in a given situation, while thinking a thousand times faster. Feed the AI all the information about physics that the historical Einstein had in 1904, for example, and it would discover special relativity in a few hours, followed by general relativity a few days later. Give the AI a year, and it would think … well, whatever thoughts Einstein would’ve thought, if he’d had a millennium in peak mental condition to think them.”
Pinker replies to this idea with skepticism toward treating intelligence or super-intelligence as a kind of magical substance that inhabits special brains: “I still think that discussions of AI and its role in human affairs—including AI safety—will be muddled as long as the writers treat intelligence as an undefined superpower rather than a mechanism with a makeup that determines what it can and can’t do….[I]f intelligence is a collection of mechanisms rather than a quantity that Einstein was blessed with a lot of, it’s not clear that just speeding him up would capture what anyone would call superintelligence.” So Pinker would rather see a detailed account of how an AI recognizes problems and thinks about them than simply be wowed by its stellar performance.
Pinker has an interesting point, I think: it is easy to believe we will have programs that can maintain conversations and solve hard problems, but harder to believe that those programs will be doing it the way humans do it. Such programs will run on far more massive stores of information than the human brain does, and so won’t fully answer the question of how human beings think.
It’s a good and interesting discussion, as I said, but there’s a further element I think they are missing, and that’s the social dimension of intelligence (and of “genius”, even more so). I am not at all sure Einstein would have been a genius if he had been born in 1779 or 1979 instead of 1879. As it happened, he was in the right place at the right time to make an important contribution in a certain problem space. We shouldn’t assume that he would have the same level of success if dropped into other problem spaces. Same goes for the others on the typical lists of geniuses: Plato, Da Vinci, Shakespeare, Newton, etc. A lot of lucky circumstances need to come together before an individual set of abilities can plug itself in and solve a problem in an impressive way. Drop a genius into another time and they may cease to be a genius entirely.
(I am reminded of Bill and Ted’s Excellent Adventure, in which Beethoven, brought to the present day, quickly masters disco. I don’t quite remember what the other historical figures master. Genghis Khan skateboards, and Joan of Arc jazzercises?! In any case, the trajectory of this idea would have Newton quickly mastering the internet, Lincoln bringing peace to the Middle East, etc. It’s a vivid expression of the common belief that genius is a superpower.)
A similar point might be made for sub-genius-level intelligence, with which most of us are familiar. What matters is not what a smart device or a person can do, or what puzzles it can solve, but the ways in which it can be incorporated into the rest of society. So long as we stick with Turing-test-style contests to see whether we have a genuine AI or not, we will always be in an argument like the one among Aaronson and his friends: enthusiasts and skeptics, progressives and cynics, arguing over whether “genuine intelligence” is a necessary condition for passing the test. By contrast, if we find one day that we have already incorporated an entity into our conversations and our lives, and that in these roles we cannot help but regard the entity as a separate, intelligent person, then the debate is practically over. It’s intelligent, because we can’t help but treat it as intelligent.
So, for example, right now I don’t think anyone regards Siri or Alexa as intelligent. They’re still awkward to deal with. But if someday they are as easy to converse with as a genuine personal assistant, so that we have to think about our interactions with them in the same way we have to think about our interactions with human others, then we will have genuine AIs: not merely because of their programming (though that is not irrelevant, of course), but because they fit into our society in the right sort of way to be regarded as intelligent beings.
We have done this with children over the last 60 years or so, and with animals over the last 20. It was only in the 20th century that we started inviting children into the adult world, where their feelings and ideas and abilities were taken as equal to an adult’s, or at least on many occasions we pretended they were. A kid in 1887 was not accorded nearly the same degree of intellectual authority and respect as a kid in 1987, let alone 2017. Same with our pets: though we do not treat them like fully adult humans (well, it’s not exactly common practice, anyway), it’s still a lot better to be a dog in 2020 than a dog in 1920. For some reason, we decided to admit children and pets into the charmed circle of Beings Whose Intelligence Matters. Who and what they are as persons depends partly on their own abilities and capacities, of course, but also very much on the way we treat them. Talk of “rights” reflects our attitudes on these matters. To establish a being’s rights is to decide, as a matter of policy, how they are to be regarded, and that decision is informed in part by how intelligent we take them to be.
I do think that, in general, computer scientists, psychologists, and philosophers are reluctant to get into the messy business of the social dimensions of all this stuff, because they prefer to work in more clearly defined domains: the confines of the skull, for instance, or the structure of an algorithm, or interactions among a small number of participants. Once we open the lid on the roles of culture, tradition, and even economics and politics in these questions, the worms start wiggling out at an unmanageable rate. So it’s an understandable oversimplification. But that doesn’t mean the social dimension can be ignored.