Roman Yampolskiy, AI: Unexplainable, unpredictable, uncontrollable (CRC Press 2024)
It’s coming, and probably sooner than you think. OpenAI’s Leopold Aschenbrenner noted recently on X, “One year since GPT-4 release. Hope you all enjoyed some time to relax; it’ll have been the slowest 12 months of AI progress for quite some time to come.” You might see that as exciting or worrisome, depending on whether you have thought at all about the problems entailed by creating a superhuman intelligence.
Roman Yampolskiy has been thinking about it, and he argues that the risks of AI outweigh the benefits. His argument is straightforward: “unconstrained intelligence cannot be controlled and constrained intelligence cannot innovate”. We want AI unconstrained, because only then can it outmatch our own efforts. We want it to innovate and to be unpredictable and inexplicable (otherwise, why bother?). We will put it in charge of vital systems that are too complicated for us to manage equally well ourselves (think of markets, investments, and logistics). Once it’s up and running, there will be no way to unplug it without causing various crashes and disasters.
It will be an alien form of intelligence, not like us, and not like anything we can imagine (and we are bad at imagining intelligences not like us, even in sci-fi; perhaps especially in sci-fi). Consider this: if scientists were instead breeding a new biological species with massive brains (call them “Brainzillas”), with the hope of turning everything over to them in the next generations, how relaxed would we feel? But since we’re not talking Brainzillas, but something less tangible, we’re mostly sitting back and wondering how it’s gonna go. The two scenarios are basically the same, except that we could take out Brainzillas with rocket launchers.
In this book Yampolskiy examines the idea from all angles, making use of all the literature he can lay hands on, from publications and pre-publications to interviews and social media comment threads. His thinking is strategic and incisive, even if the book itself seems at points a bit thrown together from the stuff he has on his desk. Then again, time is not on anyone’s side in the development of AI. In some interview I watched, Yampolskiy hoped the government would get involved, as that would slow everything down to a crawl and give wiser minds some time to think things through.
I am not in any position to out-guess Yampolskiy on this. I follow Niels Bohr’s wisdom that prediction is very difficult, especially about the future. Yampolskiy’s arguments are rooted in basic concepts of intelligence and control, and they have an a priori flavor to them, and yet human events often pay no respect to what we think must be. Still, that’s not much to rest on; it amounts to taking the fact that we don’t fully know what we are doing and concluding that we can’t know for sure it will be a disaster. A more cautious plan of development would be better.