Getting distracted by ideas

Here comes Brainzilla

Roman Yampolskiy, AI: Unexplainable, unpredictable, uncontrollable (CRC Press 2024)

It’s coming, and probably sooner than you think. OpenAI’s Leopold Aschenbrenner noted recently on X, “One year since GPT-4 release. Hope you all enjoyed some time to relax; it’ll have been the slowest 12 months of AI progress for quite some time to come.” You might see that as exciting or worrisome, depending on whether you have thought at all about the problems entailed by creating a superhuman intelligence.

Roman Yampolskiy has been thinking about it, and he argues that the risks of AI outweigh the benefits. His argument is straightforward: “unconstrained intelligence cannot be controlled and constrained intelligence cannot innovate”. We want unconstrained AI because only that will surpass what we can do ourselves. We want it to innovate and be unpredictable and inexplicable (otherwise, why bother?). We will put it in charge of vital systems that are generally too complicated for us to manage with equal success (think of markets, investments, and logistics). Once it’s up and running, there will be no way to unplug it without causing various crashes and disasters.

It will be an alien form of intelligence, not like us, and not like anything we can imagine (and we are bad at imagining intelligences not like us, even in sci-fi; especially in sci-fi). Consider this: if scientists were instead breeding a new biological species with massive brains (call them “Brainzillas”), with the hope of turning everything over to them in the next generations, how relaxed would we feel? But since we’re not talking Brainzillas, but something less tangible, we’re mostly sitting back and wondering how it’s gonna go. The two scenarios are basically the same, except we could take out Brainzillas with rocket launchers.

In this book Yampolskiy examines the idea from all angles, making use of all the literature he can lay hands on, from publications and pre-publications to interviews and social media comment threads. His thinking is strategic and incisive, even if the book itself seems at points a bit thrown together from the stuff he has on his desk. Then again, time is not on anyone’s side in the development of AI. In some interview I watched, Yampolskiy hoped the government would get involved, as that would slow everything down to a crawl and give wiser minds some time to think things through. 

I am not in any position to out-guess Yampolskiy on this. I follow Niels Bohr’s wisdom that prediction is very difficult, especially about the future. Yampolskiy’s arguments are rooted in basic concepts of intelligence and control, and they have an a priori flavor to them, and yet human events often pay no respect to what we think must be. Still, that’s not much to rest on; it’s taking the fact that we don’t fully know what we are doing and concluding that we can’t know for sure that it will be a disaster. A more cautious plan of development would be better.

5 responses to “Here comes Brainzilla”

  1. Mike

    It seems humanity right now and maybe since we created nukes (even earlier?) is fairly content playing Russian roulette with technology (unleashing AI being the most recent example). Then on the other technology-induced-inevitable-doom hand, humanity has a slow suicide plan in place to “handle” climate change (sin of omission style). 

    Good news is all other sorts of random things might kill me sooner and keep me from witnessing one of these civilization scale catastrophes.

    I’ve heard I should have hope but it still seems hope is for people who aren’t thinking clearly. Maybe it’s the thinking clearly that I should try to correct. Or maybe I can find a nice little spot for my hope where it can keep me going but not adversely impact my predictive capacities. Dunno.


  2. Huenemann

    I heard someone say that with the approach of AI they weren’t worried about global warming, since either AI would be benevolent and solve it, or AI would be malevolent and then pose a bigger threat. I find such observations darkly amusing, but there’s a possibility that usually gets ignored: the something in between option, where nothing gets solved, and human life isn’t completely ruined, and we just sort of muddle along in general misery. Like, you know, usual.


    1. Mike

      Unless something major changes I don’t see it solving any of our hardest problems. I really think that’s wishful thinking. More likely it’s going to start taking out entry and mid level positions in a number of fields and create a bunch of crap with second system syndrome. I’m not exactly sure why this seems to be its main intellectual malady at the moment, maybe because that’s the sort of thing engineers publish (and so what it’s trained on).


      1. Mike

        Well, I started googling for “AI solved”, so maybe I’m overly pessimistic: https://www.independent.co.uk/tech/nuclear-fusion-ai-clean-energy-b2505138.html. Though I still think it’s about people using these tools correctly more than letting it solve something on its own.


      2. Huenemann

        Hadn’t known of “second system syndrome”! I love discovering a name for something that I hadn’t realized it would be handy to have a name for!

