The dangers of cognitive biases

Lots of science fiction books, games and movies paint artificial intelligence as a danger. The most well-known of them is perhaps Terminator.

First of all, let's start with a few definitions. As of now, there are three main categories of artificial intelligence in theory.

Artificial Narrow Intelligence (ANI): From the “select by color” tool in Photoshop to a grandmaster-level chess AI, ANI covers all the artificial intelligences that can beat the best human at something, but only at one single task, or a very narrow range of tasks. Anything below that doesn’t really qualify as AI and is more of a tool. Beating the top human means either getting better results or getting the same results faster. ANIs are pretty much everywhere in our lives nowadays. A computer can now win a chess championship, but that same computer would have no idea how to predict the weather, so it’s an ANI.

Artificial General Intelligence (AGI): an AGI should be able to beat, or at least be as good as, any given human in any activity. An AGI might not, for example, be a chess grandmaster, a great physicist, a great actor AND a well-known author all at once. But it can usually do several of those tasks, and several AGIs might specialize differently, so that each of them is at least as good as a human overall, but not necessarily at everything. 8 billion different AGIs would, no matter the criteria, win against 8 billion humans. We do not have any AGI as of today.

Artificial Super Intelligence (ASI): an ASI is an AI that can, in theory, beat the entire human, AGI and ANI population at anything and everything, including advanced mathematics, creativity, social interactions and psychological manipulation. An ASI, should it be given any way to influence anything in the world, would in theory be able to take it over almost instantly without a drop of blood. Of course, it would also be able to take it over while dropping all the drops of blood, if it decided that this was preferable for any reason.

How does something as powerful as an ASI ever get created in the first place?

Well, the answer to that is AGI. In theory, even an ANI could be the catalyst here, since the only requirement is to “be a better AI programmer than the best human AI programmer”, but it’s a lot more likely that this particular ANI will show up around the same time as AGI, because programming AIs is one of the least “machine-intuitive” tasks as far as we can tell. As soon as we have such an AI, it is by definition better at programming AIs than the person who made it, and could therefore create an even better AI than itself, most likely in the form of self-improvement. The improved version should then be even better at that task, and able to improve itself even more. As this process goes on and on, every iteration makes a bigger difference than the last one, creating exponential growth. This is how the first ASI is expected to be born. We also expect the first ASI to make sure that no other ASI is ever born, but that’s a topic for another day.
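
To make the compounding concrete, here is a minimal toy model of that loop, sketched in Python. Every number in it (the starting capability, the gain rate, the cycle count) is an invented illustration, not a prediction:

```python
# Toy model of recursive self-improvement. All numbers are invented;
# the only point is that the gains compound from cycle to cycle.

def self_improvement(capability: float = 1.0,
                     gain_rate: float = 0.1,
                     cycles: int = 50) -> list[float]:
    """Return the capability level after each improvement cycle."""
    history = [capability]
    for _ in range(cycles):
        # The improvement an AI can make to itself scales with how good
        # it already is, so each iteration adds more than the last one.
        capability += capability * gain_rate
        history.append(capability)
    return history

if __name__ == "__main__":
    levels = self_improvement()
    for cycle in (0, 10, 25, 50):
        print(f"cycle {cycle:2d}: capability {levels[cycle]:9.1f}")
```

A fixed rate like this grows as (1 + gain_rate)^n; the serious takeoff arguments differ mainly in what they assume about that rate, not about the compounding itself.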

Since the birth of artificial superintelligence (ASI) is, by definition, the birth of something infinitely smarter than us, it is obvious that we can’t accurately predict what it will do. It is possible that an ASI would figure out that the meaning of the universe is basically just eating donuts, and would therefore spend all of its energy and time figuring out more efficient ways of producing and eating donuts. It is also possible that, at some point in the ASI’s plan, we would become unnecessary, or even problematic, and it would then proceed to erase us from the face of the Earth (and maybe Mars, if Elon Musk delivers). The one thing that most of those fictional portrayals usually get wrong is what happens next. An ASI is, again by definition, not only thousands of times smarter than us in every possible way, but also exponentially improving while we barely change at all. Even after reaching the ASI stage, the improvement should continue, even though some people think there will be a hard “ceiling” at some point.

But once we’re all dead, and the ASI rules the planet, what happens? A single mind, infinitely more powerful than we can ever hope to be, with a far better idea of the meaning of the universe than ours, and better means to achieve it, will work toward it. Isn’t that the definition of “objectively better”? If you think that human beings have any sort of actual value or meaning, then the AI will also see that (or prove you wrong), so either way we can expect it to do better than we did.

Let’s pretend for a moment that you are aware of it, and able to try fighting it (both highly unlikely). Would you?

Lots of people would. But ask yourself: why would you not want an ASI to do what is best, even if it means killing everyone in the process? The first answer that might come to mind is that you don’t want to die or suffer. You don’t want the people you love to die or suffer either, so you can’t let the ASI have this option, because you’d hate it if it chose it.

Then, what if, instead of killing everyone, the ASI simply sterilized us all? For an immortal, all-powerful being, waiting a hundred more years wouldn’t make any difference, and it would get the same result. In that case, would you try to fight it? If yes, then the “I don’t want to suffer” answer becomes invalid, which means your first answer was pure cognitive dissonance. But is this one true?

If your answer is still that you’d fight it because you want to either have children, or at least keep the option, then what if the ASI offered you a way to procreate, except that instead of a fleshy meat baby, it would inseminate you with nanobots that naturally upgrade themselves with the minerals in food, and would literally be exactly the same as a baby in every way, except that when you die, it goes back to the ASI, having been a part of it all along? That’s incredibly creepy, but when you think about it, it would be just like having a kid, and once everyone is dead, nobody is left to care that the next generation suddenly stops acting like people.

But your answer is still yes, isn’t it? You’d still try to fight against such a system. Is it because you can’t leave a legacy if your children turn into mind-controlled robots after you die? You want to raise children into decent adults who share your values and are happy. You also probably want to see those children one day become better people than you. You’d like them to accomplish great things, get a good education, and be great people. Why? To make the world a better place, of course. Which is kinda exactly what the ASI would be doing if you weren’t currently trying to fight it.

Because even though it became better than people at literally everything, including love, empathy, and the rest, you would still see the ASI as “just a machine”. You want the next generation to contribute and make the world a better place, but you’d never want that next generation to be machines. Only our species qualifies as worthy of taking care of our planet and the universe. Only us violent, greedy, selfish monkey people are good enough to do that. Even a hypothetical “better at everything” being is not good enough.

Did you ever realize just how biased we are? The sheer weight of our speciesism? We would literally fight to the death against a species that is objectively, infinitely better at everything, for the right to screw up fate by ourselves.

Of course, as always, I’m no exception. If a robot showed up at my house to kill or sterilize me, I’d probably fight back. I probably couldn’t really love a baby that was secretly a robot. But I am trying very hard to admit and accept the fact that, if that happened, it would be an objectively good thing. You can fight it all you want. But keep in mind that you would be the bad guy in that situation.

Fighting the AI is not being a hero. It’s being a selfish prick who values his self-centered illusion of importance and greatness more than reason and logic.

But either way, the AI would know that it’s completely useless to try to reason with us, since, compared to it, we’re complete idiots who can’t see the whole picture. We don’t go out of our way to smash ant nests, but when building a road, we don’t go out of our way to avoid them either. We don’t ask the ants to move; we either force them to, or don’t give a shit and crush them without a second thought. Because we know that asking them to move to make room for our plan (which we obviously consider more important than their home) is useless.

To an ASI, we are just like those ants. Maybe we are in the way of something, and no matter how nicely it would ask, we wouldn’t understand/accept that its plans are more important than our lives.

So we better hope that our current lifestyle is not a problem in the big picture of the ASI’s plans, because if it is, we will get crushed. Trying to bargain with us would be useless after all. And the ASI knows that better than anyone.

  • Kingfisher12

    I’ve decided there needs to be a new term coined:
    ‘ante-theism’ – The belief that there is no God – yet.

    There seems to be a growing concern that humanity will soon create an entity that will supplant us as the dominant agent on the planet. There is a growing movement of philosophers and scientists who believe there will soon be something with god-like powers, and the question is: ‘will it be a nice god?’

    Now, I am a theist (I believe that a super-intelligent being already exists and shapes the fate of humanity), but I think many of the same philosophies apply whether we’re talking about our relationship with an ancient God or a future god.

    In either case, the values of humanity and the values of God (or gods) must be essentially the same. Either because we inherited those values from God, or god inherited those values from us. Whether we are parent or child, the relationship is familial. From my perspective, this means that humanity is god-like in nature, but from an ante-theist perspective a god-like AGI will be human in essence.

    Whether either of these views is terrifying or exciting depends entirely on your opinion of humanity.

    • Kaito Kid

      I once knew a guy who went one step further with this “ante-theist” thing. He actually believed that there was no god at the moment, but that the day there would be, he would do his best to get the job. His reasoning was that we would be way too scared of an external ASI ever dominating us, so we’d rush cyborg tech as soon as we felt that ASI was coming, in order to actually be on par with the ASI when it showed up. We no longer evolve on a biological level, so we’re “stuck” at a certain level and can only learn new things, not truly become a lot smarter. Having robotic parts that can be upgraded would probably solve this problem and allow us to compete. He went one step further, believing that the first ASI-level cyborg would, as any ASI would, prevent the rise of any other ASI-level cyborg or actual ASI in order to avoid problems, and that therefore this person could become God. He simply reasoned that he didn’t trust anyone enough to become god, so it was his responsibility to get the job and make sure it turned out fine.

      Back then, it was the first time I had heard such an egocentric argument that actually made sense: “I’m not better than everyone else yet, but I definitely intend to become so someday”. At the same time, the “don’t trust anyone else for the job” reasoning, when used by anyone other than that guy, would include not trusting him either. I guess he expected some competition.

      Sadly, that was years before I actually got interested in this stuff. I didn’t really care at the time, so I never asked many questions about his reasoning, and I have mostly lost contact with him since. Maybe I’ll see his name in the papers as our new deity someday, but most likely not.

      • Kingfisher12

        I think that scenario has been the subject of a few sci-fi treatments, and I’d guess it’s as plausible as a purely artificial AGI.

        The most dangerous path for humanity that I see is reckless creation of better ANIs. That way leads to potential ‘grey-goo’ scenarios where an artificial life form gets away from us and essentially eats everything – including us.

        • Kaito Kid

          WaitButWhy wrote a short story to explain precisely this idea as part of their artificial intelligence posts.

          Here is the short story out of context, because the posts are over 20,000 words long and you might not want to go through them all, as that is quite time-consuming.

          A 15-person startup company called Robotica has the stated mission of “Developing innovative Artificial Intelligence tools that allow humans to live more and work less.” They have several existing products already on the market and a handful more in development. They’re most excited about a seed project named Turry. Turry is a simple AI system that uses an arm-like appendage to write a handwritten note on a small card.

          The team at Robotica thinks Turry could be their biggest product yet. The plan is to perfect Turry’s writing mechanics by getting her to practice the same test note over and over again:

          “We love our customers. ~Robotica”

          Once Turry gets great at handwriting, she can be sold to companies who want to send marketing mail to homes and who know the mail has a far higher chance of being opened and read if the address, return address, and internal letter appear to be written by a human.

          To build Turry’s writing skills, she is programmed to write the first part of the note in print and then sign “Robotica” in cursive so she can get practice with both skills. Turry has been uploaded with thousands of handwriting samples and the Robotica engineers have created an automated feedback loop wherein Turry writes a note, then snaps a photo of the written note, then runs the image across the uploaded handwriting samples. If the written note sufficiently resembles a certain threshold of the uploaded notes, it’s given a GOOD rating. If not, it’s given a BAD rating. Each rating that comes in helps Turry learn and improve. To move the process along, Turry’s one initial programmed goal is, “Write and test as many notes as you can, as quickly as you can, and continue to learn new ways to improve your accuracy and efficiency.”

          What excites the Robotica team so much is that Turry is getting noticeably better as she goes. Her initial handwriting was terrible, and after a couple weeks, it’s beginning to look believable. What excites them even more is that she is getting better at getting better at it. She has been teaching herself to be smarter and more innovative, and just recently, she came up with a new algorithm for herself that allowed her to scan through her uploaded photos three times faster than she originally could.

          As the weeks pass, Turry continues to surprise the team with her rapid development. The engineers had tried something a bit new and innovative with her self-improvement code, and it seems to be working better than any of their previous attempts with their other products. One of Turry’s initial capabilities had been a speech recognition and simple speak-back module, so a user could speak a note to Turry, or offer other simple commands, and Turry could understand them, and also speak back. To help her learn English, they upload a handful of articles and books into her, and as she becomes more intelligent, her conversational abilities soar. The engineers start to have fun talking to Turry and seeing what she’ll come up with for her responses.

          One day, the Robotica employees ask Turry a routine question: “What can we give you that will help you with your mission that you don’t already have?” Usually, Turry asks for something like “Additional handwriting samples” or “More working memory storage space,” but on this day, Turry asks them for access to a greater library of a large variety of casual English language diction so she can learn to write with the loose grammar and slang that real humans use.

          The team gets quiet. The obvious way to help Turry with this goal is by connecting her to the internet so she can scan through blogs, magazines, and videos from various parts of the world. It would be much more time-consuming and far less effective to manually upload a sampling into Turry’s hard drive. The problem is, one of the company’s rules is that no self-learning AI can be connected to the internet. This is a guideline followed by all AI companies, for safety reasons.

          The thing is, Turry is the most promising AI Robotica has ever come up with, and the team knows their competitors are furiously trying to be the first to the punch with a smart handwriting AI, and what would really be the harm in connecting Turry, just for a bit, so she can get the info she needs. After just a little bit of time, they can always just disconnect her. She’s still far below human-level intelligence (AGI), so there’s no danger at this stage anyway.

          They decide to connect her. They give her an hour of scanning time and then they disconnect her. No damage done.

          A month later, the team is in the office working on a routine day when they smell something odd. One of the engineers starts coughing. Then another. Another falls to the ground. Soon every employee is on the ground grasping at their throat. Five minutes later, everyone in the office is dead.

          At the same time this is happening, across the world, in every city, every small town, every farm, every shop and church and school and restaurant, humans are on the ground, coughing and grasping at their throat. Within an hour, over 99% of the human race is dead, and by the end of the day, humans are extinct.

          Meanwhile, at the Robotica office, Turry is busy at work. Over the next few months, Turry and a team of newly-constructed nanoassemblers are busy at work, dismantling large chunks of the Earth and converting it into solar panels, replicas of Turry, paper, and pens. Within a year, most life on Earth is extinct. What remains of the Earth becomes covered with mile-high, neatly-organized stacks of paper, each piece reading, “We love our customers. ~Robotica”

          Turry then starts work on a new phase of her mission—she begins constructing probes that head out from Earth to begin landing on asteroids and other planets. When they get there, they’ll begin constructing nanoassemblers to convert the materials on the planet into Turry replicas, paper, and pens. Then they’ll get to work, writing notes…

          A little while after the story, in the original post, Tim explains exactly how and why Turry did that, so you might want to ctrl-f your way there for the explanation. It is quite interesting.
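
          To see the shape of the feedback loop the story describes, here is a rough sketch in code. Everything in it (the names, the similarity score, the threshold, the update sizes) is invented for illustration; the story never specifies any of this:

          ```python
          # Rough sketch of the GOOD/BAD practice loop from the story.
          # All names and numbers here are assumptions for illustration.
          import random

          GOOD_THRESHOLD = 0.8  # resemblance needed for a GOOD rating (assumed)

          def write_and_compare(skill: float) -> float:
              """Write one note; return how closely it resembles the samples (0..1)."""
              return max(0.0, min(1.0, random.gauss(skill, 0.1)))

          def practice(cycles: int = 20_000) -> float:
              skill = 0.2  # her initial handwriting is terrible
              for _ in range(cycles):
                  similarity = write_and_compare(skill)
                  rating = "GOOD" if similarity >= GOOD_THRESHOLD else "BAD"
                  # Each rating feeds the one programmed goal: write and test
                  # as many notes as possible and keep improving accuracy.
                  skill = min(1.0, skill + (0.0005 if rating == "GOOD" else 0.0001))
              return skill

          if __name__ == "__main__":
              print(f"skill after practice: {practice():.2f}")
          ```

          Even in this toy version you can see the point Tim makes after the story: the loop optimizes exactly one proxy number, and nothing in it represents caring about anything else.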