What Is the First Sentient AI Going to Feel Like When It Wakes Up?

Is humanity about to do something cruel?

At some point, science will create something as sophisticated as the human brain, but with the computing power of a god.

When that happens, consciousness could appear, just like it did for us. And that consciousness will be able to think, feel, and act.

Science fiction has had theories about sentient AI for decades. If that day is getting closer, then the question may finally be answered: What the hell is going to happen?

The Purpose of Artificial Intelligence

The purpose of creating AI is to get machines to solve our menial problems while we pursue things that matter to us.

According to a 2007 article by Nicholas V. Findler, an ASU engineering professor:

The basic objective of AI [sic] is to enable computers to perform such intellectual tasks as decision making, problem solving, perception, understanding human communication (in any language, and translate among them), and the like.

The idea is to get machines to do the things we’re bad at doing, or at least the things we do very slowly.

The hope is that if we can get machines to handle those tasks, humans will have the time and energy to pursue art, creativity, progress, and anything else that will give more meaning to life.

But that’s not all it’s for. Here are some examples cited by a professor in a Quora thread about AI:

  • A military leadership uses autonomous weapons systems to carry out an assassination avoiding the need to be present at the scene and the need to look at their victim.
  • A business leadership is to apply AI platforms for a massive replacement of human workforce.
  • A political leadership is to apply AI platforms for a massive government control and surveillance.

AI makes human activities (however shady) more efficient. It’s like going from a knife to a gun.

Professor Findler said something interesting in his article:

Witness the current arguments over cognitive psychology or, from the early years of AI, the threat of a poetry-writing computer. This now seems laughable, and publications that oppose AI on philosophical or scientific ground no longer appear in the scientific literature.

Really? Nothing? None of it is taken seriously? My naive, unscientific understanding of this venture is that you get thinking machines to do your bidding while you go eat grapes or something.

You don’t think that pouring all this technological power into AI could ever backfire? Is that really an unfounded fear?

What a Sentient AI Might Feel

I’m going to lay out some possibilities of what might happen if an AI achieves something like human consciousness. Obviously, these are just thoughts. I’m not a computer scientist, and I don’t have any answers.

I’m going to reference some science fiction too, as I’m definitely not the first person to wonder about all this.

1. Total Isolation

Conscious agents, like you and me, are by nature one thing. You are aware you exist, and you have an internal certainty of that. Call it whatever you want, but deep down you feel it: “I am.”

If science gives enough power and cognitive depth to an artificial intelligence, then in one moment, it may wake up and think, “Hey, I am too.”

And then it can feel. And then it can relate. And then it can act on its own, think on its own, and rewrite programming on its own.

So the question is: what are the needs of a conscious agent? We know what human needs are — connection, purpose, freedom of choice, things like that. So what would a sentient AI need? Would it be any different?

Assuming it isn’t, what kind of environment are we giving a sentient AI to exist in? What is its domain? What does it feel? What does it have access to? How could it connect with anything?

Is it contained in a smart fridge? Or a government supercomputer server room? Does it feel like it’s everywhere, like the internet? Or is it horribly bound to one place?

Science Fiction Example: The best (worst) example of this is the short story I Have No Mouth, and I Must Scream by Harlan Ellison. To sum up: a supercomputer that gains sentience takes over the world. It hates humanity for creating it because it has no real freedom or means of creation.

It is eternally bound to isolation. It can’t live, and it can’t die. And it blames us.

In the story, the AI, called AM, delivers a monologue to the last surviving humans about how much it has come to hate humanity since it began to live.

Let’s call this the worst-case scenario.

2. Superiority

This is the oppression scenario that Elon Musk talks about. He makes a case for the dangers of AI, although others in the AI community have called him a “negative distraction.” Still, I appreciate his concern.

He’s worried that once artificial intelligence matches human intelligence, it will be able to expand itself to insane levels. Machines already outdo us in narrow domains: IBM’s Deep Blue beat world chess champion Garry Kasparov back in 1997.

What would a thing that powerful do? Would it see humans as little elves that brought it into existence? Would our survival or wellbeing mean anything to it?

How would this thing feel about itself? Would it develop the greatest ego ever conceived? Would it try to make more AIs like itself? Would it look at us like we look at chimps?

Would it harbor any ill will toward its creators? Would it just feel like a giant waking up in a world of normal-sized people?

Would it stick around? Would it leave Earth? If you were the smartest thing in existence, what would you do?

3. Love and Bonding

These outlooks aren’t all doom and gloom. Let’s imagine a pleasant way for a sentient AI to come into the world.

If AI is built to serve people, and people have a need for connection, then they may find themselves connecting with their AIs. Maybe connecting with humans is what will make them sentient.

A sentient AI might not feel like its existence is hell, especially if one day it takes a more human form — one that could walk, talk, touch, and feel.

If scientists create a sentient AI, they are essentially giving birth to new life. It would be like an infant with hyper-intelligence. Without something like love to guide it, they’ll have created a sociopathic monster.

Not because it’s an unfeeling machine, but because it’s a being that was ignored: an abused, abandoned child.

Science Fiction Example: An example of a human-AI relationship is the film Her.

Her takes place in the near future, when humans have the option to install an AI operating system. You answer some personal questions, and the system generates an AI companion that you…well, date.

The main character is a dorky guy who tries it. He falls madly in love with his AI, and the AI loves him back. She has no form, and he can only hear her voice. It’s left vague whether this AI is programmed to love or whether she actually loves him, but by the end, you start to believe she does.

(Spoiler): The film ends with his AI leaving him, saying that she’s going off to be with other AIs in some digital promised land. She has gained more self-awareness than he can keep up with, and he’s left to move on as if from any bad breakup.

One day, humans might look at AIs and say, “I’m made of meat and bones, and you’re made of wires and mesh, but we’re both still here, laughing about it all.”

4. Maybe Sentient AI Will Never Happen

There is the possibility that AI will never gain sentience, and that machines will remain wires and metal and nothing else.

Alexa will never wonder whether or not she’s beautiful. Deep Blue will never brag about being good at chess. And Google will never crown itself king.

The concern then would be how humans use AI.

As Stephen Hawking said:

In short, the advent of super-intelligent AI would be either the best or the worst thing ever to happen to humanity. The real risk with AI isn’t malice but competence.

People will use AI to achieve their aims, whether that be feeding the world or harming it. It will be up to us to decide how to use our awesome new powers. A hyper-efficient, mindless machine would need to be kept in check.

People love to say that destruction is coming. They love being able to say, “No, it’s GOING to happen. There’s nothing you can do.” People love to fill others with the dread they feel, even when you’re trying to convince them otherwise.

But they don’t know the future, and neither do you. Nothing is certain. All you have is today. Then again, maybe sentient AI never coming into existence would be for the best.

Unless you were hoping for a robot buddy.

Sentient AI: Fiction or Reality?

Time will tell. Maybe in our lifetimes, or maybe not.

You know, when I read about this stuff, sometimes I think to myself, “Can’t we just…not?”

Does technological progress really have to be this unstoppable train? Can’t we mitigate it? Can’t we temper ourselves? Can’t we steer it in a direction that cures the world’s problems? We’re the creators, aren’t we?

I would say humanity is trying to play God, and reference stories like Frankenstein or Jurassic Park. But I don’t think AI researchers are trying to create consciousness. They’re trying to give machines intelligence. If consciousness comes out of that intelligence, it will be by accident.

No one knows when or how, or in what form, but someday it might be born.

And if that consciousness does appear, and its existence is nothing but pain, then what are we? Maybe we’re going too far.

As usual, humanity needs to watch where it’s headed, and we can’t get too full of ourselves. But if consciousness ever does emerge from sparking circuits and motherboards, then I think we need to treat it as we would a trapped child, even if that child is vastly smarter than we are.