The dangers of cognitive biases


Lots of science fiction books, games, and movies paint artificial intelligence as a danger. The best-known of them is perhaps Terminator.

First of all, let’s start with a few definitions. As of now, theory distinguishes three main categories of artificial intelligence.

Artificial Narrow Intelligence (ANI): From the “select by color” tool in Photoshop to a grandmaster-level chess engine, ANI covers every artificial intelligence that can beat the best human at one single task, or a very narrow range of tasks. “Beating the top human” means either getting better results or getting the same results faster; anything below that bar doesn’t really qualify as AI and is more of a tool. ANIs are pretty much everywhere in our lives nowadays. A computer can now win a chess championship, but that same computer would have no idea how to predict the weather, so it’s an ANI.

Artificial General Intelligence (AGI): An AGI should be able to beat, or at least match, any given human at any activity. A single AGI might not, for example, be a chess grandmaster, a great physicist, a great actor AND a well-known author all at once. But it can usually do several of those tasks, and different AGIs might specialize differently, so that each one is at least as good as a human overall, though not necessarily at everything. Pit 8 billion different AGIs against 8 billion humans and the AGIs would win, no matter the criteria. We do not have any AGI as of today.

Artificial Super Intelligence (ASI): An ASI is an AI that can, in theory, beat the entire human, AGI, and ANI population at anything and everything, including advanced mathematics, creativity, social interactions, and psychological manipulation. An ASI, should it be given any way to influence anything in the world, would in theory be able to take it over almost instantly without spilling a drop of blood. Of course, it would also be able to take it over while spilling every drop of blood, if it decided that this was preferable for any reason.

How does something as powerful as an ASI ever get created in the first place?

Well, the answer to that is AGI. In theory, even an ANI could be the catalyst here, since the only requirement is to “be a better AI programmer than the best human AI programmer”, but that particular ANI is a lot more likely to show up around the same time as AGI, because programming AIs is one of the least “machine-intuitive” tasks as far as we know. As soon as we have such an AI, it is by definition better at programming AIs than the person who made it, and could therefore create an even better AI than itself, most likely through self-improvement. That improved version should be even better at the task, and able to improve itself further. As this process goes on, every iteration makes a bigger difference than the last one, creating exponential growth. This is how the first ASI is expected to be born. We also expect the first ASI to make sure that no other ASI is ever born, but that’s a topic for another day.
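
To make that compounding intuition concrete, here’s a toy sketch in Python (a minimal illustration; the 10% gain per rewrite and the starting numbers are made-up assumptions, not predictions). Because each rewrite’s gain is proportional to the AI’s current skill, every iteration adds more than the last one, which is exactly the exponential growth described above.

```python
# Toy model of recursive self-improvement. Illustrative only: the
# "skill improves skill" rule and the 10% rate are assumptions.

def recursive_self_improvement(skill=1.0, human_peak=1.0, generations=10):
    """Each generation, the AI rewrites itself; the gain it achieves
    is proportional to how good it already is at AI programming."""
    for gen in range(1, generations + 1):
        gain = 0.10 * skill  # a better programmer makes a bigger improvement
        skill += gain        # so absolute gains grow every generation
        print(f"gen {gen:2d}: skill = {skill:.2f} "
              f"({skill / human_peak:.2f}x the best human)")

recursive_self_improvement()
```

Run it and the printed skill follows 1.1 to the power of n: each generation’s jump is larger than the last, which is why the step from “slightly better than us” to “incomparably better than us” is expected to be fast.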

Since the birth of artificial superintelligence (ASI) is, by definition, the birth of something infinitely smarter than us, it is obvious that we can’t accurately predict what it will do. It is possible that an ASI would figure out that the meaning of the universe is basically just eating donuts, and would therefore spend all of its energy and time figuring out more efficient ways of producing and eating donuts. It is also possible that, at some point in the ASI’s plan, we would become unnecessary, or even problematic, and it would then proceed to erase us from the face of the Earth (and maybe Mars, if Elon Musk delivers). The one thing that most of those fictional portrayals usually get wrong is what happens next. An ASI is, again by definition, not only thousands of times smarter than us in every possible way, but also improving exponentially while we barely improve at all. Even after reaching the ASI stage, the improvement should continue, although some people think that there will be a hard “ceiling” at some point.

But once we’re all dead and the ASI rules the planet, what happens? A single mind, infinitely more powerful than we can ever hope to be, one that has a far better idea of the meaning of the universe than we do, and better means to achieve it, will work toward it. Isn’t that the definition of “objectively better”? If you think that human beings have any sort of actual value or meaning, then the AI will also see that (or prove you wrong), so either way we can expect it to do better than we did.

Let’s pretend for a moment that you are aware of the takeover as it happens, and able to try fighting it (both highly unlikely). Would you?

Lots of people would. But ask yourself: why would I not want an ASI to do what is best, even if it means killing everyone in the process? The first answer that might come to mind is that you don’t want to die or suffer. You don’t want the people you love to die or suffer either, so you can’t let the ASI have this option, because you’d hate it if that’s what it chose.

Then what if, instead of killing everyone, the ASI simply sterilized us all? For an immortal, all-powerful being, waiting a hundred more years wouldn’t make any difference, and it would get the same result. In that case, would you try to fight it? If yes, then the “I don’t want to suffer” answer becomes invalid, which means your first answer was pure cognitive dissonance. But is this one true?

If your answer is still that you’d fight it because you want to have children, or at least keep the option, then what if the ASI offered you a deal: you can procreate, but instead of a fleshy meat baby, it inseminates you with nanobots that naturally upgrade themselves with the minerals in food, and that are literally exactly the same as a baby in every way, except that when you die, they go back to the ASI, having been part of it all along. That’s incredibly creepy, but when you think about it, it would be just like having a kid, and once everyone has died, nobody is left to care that the next generation suddenly stops acting like people.

But your answer is still yes, isn’t it? You’d still try to fight against such a system. Is it because you can’t leave a legacy if your children turn into mind-controlled robots after you die? You want to raise children into decent adults who share your values and are happy. You also probably want to see those children one day become better people than you. You’d like them to accomplish great things, get a good education, and be great people. Why? To make the world a better place, of course. Which is pretty much exactly what the ASI would be doing if you weren’t currently trying to fight it. So why fight?

Because even though it has become better than people at literally everything, including love, empathy, and so on, you would still see the ASI as “just a machine”. You want the next generation to contribute and make the world a better place, but you’d never want that next generation to be machines. Only our species qualifies as worthy of taking care of our planet and the universe. Only us violent, greedy, selfish monkey people are good enough to do that. Even a hypothetical being that is “better at everything” is not good enough.

Did you ever realize just how biased we are? The sheer weight of our speciesism? We would literally fight to the death against a species that is objectively, infinitely better at everything, for the right to screw up fate by ourselves.

Of course, as always, I’m no exception. If a robot showed up at my house to kill or sterilize me, I’d probably fight back. I probably couldn’t really love a baby that was secretly a robot. But I am trying very hard to admit and accept the fact that, if that happened, it would be an objectively good thing. You can fight it all you want. But keep in mind that you would be the bad guy in that situation.

Fighting the AI is not being a hero. It’s being a selfish prick who values his self-centered illusion of importance and greatness more than reason and logic.

But either way, the AI would know that it’s completely useless to try to reason with us, since compared to it, we’re complete idiots who can’t see the whole picture. We don’t go out of our way to smash ant nests, but when building a road, we don’t go out of our way to avoid them either. We don’t ask the ants to move; we either force them to, or don’t give a shit and crush them without a second thought. Because we know that asking them to move to make room for our plan (which we obviously think is more important than their home) is useless.

To an ASI, we are just like those ants. Maybe we are in the way of something, and no matter how nicely it asked, we wouldn’t understand or accept that its plans are more important than our lives.

So we’d better hope that our current lifestyle is not a problem in the big picture of the ASI’s plans, because if it is, we will get crushed. Trying to bargain with us would be useless, after all. And the ASI knows that better than anyone.