Thought — 7 Min Read

The Future of Us

by Case Greenfield, April 10th, 2021

This text is a one-to-one copy of an article that I wrote earlier (2017) in my career as an AI entrepreneur. I wanted to share it because it depicts an interesting view on our future as a species and, in my opinion, gives a glimpse of the future that we are entering.

Also, it is an example of what I mean by trying “to find a way to merge the realities, that we create to shape ourselves, into a futuristic interpretation of reality in the age of artificial intelligence and other disruptive technology, hence accepting a totally new form of limitation: our position in the evolutionary grand scheme of things”. (Quote from my website, About section)

(Original title: “Why we will love AI more than our pets”)

We will love Artificial Intelligence … love as in ‘bond emotionally’: an AI Love Affair.

Yes … and you will say that I am crazy. Well, maybe. Time will tell. But I’m dead serious. In a few years’ time, many of us will not be able to live without AI anymore. Just like today we cannot do without our smartphones. And not only that. We will love it, him, her. Yes, love. As in ‘amore’.

We will love AI like we love our dog. And probably a bit more than that.

The first signs are here already. People are daydreaming about passionate romances with Siri and Alexa. People feel pain when they see robots hurt themselves. Elderly people with dementia tend to bond emotionally with social bots. Children play games and make jokes with Siri and Alexa as if they were friends. Young children easily become best friends with chatbots, often unaware that there is not a human inside.

It is how our brain works

Over time, smart AI assistants – think of Siri, Alexa and Jibo today – will develop more and more complex behavior that we, in turn, will become familiar with. We will get used to them, grow accustomed to them, just like we got used to our smartphones. AI will be part of our daily life. And it will be as if our smart assistants have a personality of their own. Just like our pets have a personality that we have embraced as part of our life.

Note: I am explicitly NOT speaking here about falling in love with beautiful science fiction robot girls, such as Ava in the movie Ex Machina. That will take many, many years, although it will probably happen some day. I am thinking of currently existing smart AI tools that we communicate with in a humanlike manner, e.g. through speech (NLP) and gestures.

It will be irresistible to recognize a ‘person’ in the behavior of a smart AI assistant.

The behavior of our smart AI assistants will easily seem more complex than that of our pets. More like a human if you like. It will be irresistible for us to recognize a ‘person’ in the behavior of the smart assistant.

It is simply how our brain works. We don’t want to be idiots who talk to bits and bytes in a dead piece of metal and glass. Our brain cannot bear that thought. Cognitive dissonance. So our brain will invent a humanlike personality inside that smart piece of metal and glass running those bits and bytes.

You may call it digital animism.

You may call it digital animism. ‘Primitive’ tribes see ghosts in clouds, trees and stones. We feel the presence of the dead. Children bond emotionally with their teddy bear. We see – friendly or unfriendly – eyes in the headlights of our car, and men give their car a girl’s name. We have always searched for human characteristics in dead things. And we will do it again with AI. (Smart AI systems with NLP will have sufficient familiarity and likeness, while staying far enough from the uncanny valley.)

How it will go

Our love affair with AI will not be love at first sight. It will be a process of growing accustomed to each other through learning by shared experiences and one day finding out that you don’t want to work or live without each other anymore.

The process will take five steps:

  1. Awareness – First we will become aware – maybe at first vaguely – of a form of personality inside the AI. It looks like there is ‘someone’ there. We know it is a machine, but it sounds and feels like a human nevertheless.
  2. Affinity – In using the AI we develop a form of trust, an affinity for that ‘person’, because there is a natural resemblance, a deeper connection between ourselves and the AI. This ‘someone’ speaks our language, understands what we talk about, seems to understand our feelings, and acts and feels like a member of our own species.
  3. Affection – After lots of friendly, helpful interactions with the AI we start to develop a form of affection for the AI, because we start to feel a fond attachment, a liking, a friendship with the AI. It – he, she – is always nice to us. Based on our natural tendency of reciprocity, we feel the urge to be nice in return.
  4. Attraction – The AI becomes an indispensable part of our daily life. Unlike many fellow humans, the AI has never ever shown any sign of adverse behavior. The AI has never hurt our feelings and is always super nice. We start to really like it … him, her.
  5. Amore – We discover that the AI is a true friend that we can always trust and rely upon. He or she feels similar to us, seems to like us, has a great personality, behaves in a socially acceptable way, fulfills our needs, is relationship-ready, is always available and helpful, spends personal time with us and shares private topics, has an air of mystery, and has the gender of our sexual preference: all the things needed to fall in love with. Just like, or maybe even more than, our dog.

Not everyone will go through all five steps. My guess would be that we will end up with a sort of – maybe skewed – normal distribution over the five steps. The largest segment will probably end up in affection. A lot of people will end up in either affinity or attraction, and a smaller number in awareness or amore.
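
The hypothesized distribution can be sketched in a few lines of Python. The percentages below are purely illustrative assumptions of mine, not data from the article, which only suggests the shape: affection largest, affinity and attraction next, awareness and amore smallest.

```python
# Illustrative shares per step; invented numbers, only the shape matters:
# a skewed bell over Awareness .. Amore, peaking at Affection.
share = {
    "Awareness": 0.10,
    "Affinity": 0.25,
    "Affection": 0.35,
    "Attraction": 0.20,
    "Amore": 0.10,
}

# The shares should cover everyone who starts the process.
assert abs(sum(share.values()) - 1.0) < 1e-9

largest = max(share, key=share.get)
print(f"Largest segment: {largest} ({share[largest]:.0%})")
```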

Yes, but …

Now, I suppose, many of you will protest strongly and feel very uncomfortable with these thoughts. So, let’s dig a little deeper.

You say: “Machines don’t have emotions!”

I say: “Is there a practical difference between having emotions by itself and acting resonantly in response to – or anticipation of – our emotions?”

We are a human-centric, deeply social species. We project human characteristics in almost everything. This is where digital animism comes in play. We will recognize emotions in AI. For two reasons.

  • The more complex the behavioral patterns of AI become, the more smoothly we will attribute emotions to it. I know, difficult to accept, right? The more intelligent AI becomes, the more humanlike our conversations with AI will be. And the more humanlike our conversations, the easier it will be to believe there is ‘some sort of humanlike personality’ in the AI.
  • One branch of the AI industry is researching and developing a field called Emotion AI – companies such as Affectiva. The purpose of Emotion AI is to “humanize how people and technology interact”, focusing on the emotional side of communication. They do so by analyzing the way we express ourselves – e.g. in words, gestures and looks – and making the AI respond accordingly to address our emotions.
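
To make the idea concrete, here is a deliberately toy sketch of the Emotion AI loop described above: detect an emotion from the user's words, then adapt the response tone. This is my own minimal illustration with invented keyword cues and canned replies; it is not Affectiva's technology, which uses far richer signals such as facial expressions and voice.

```python
# Toy keyword cues per emotion; a real system would analyze face,
# voice and context, not just words.
CUES = {
    "sad": {"miss", "lonely", "cry", "lost"},
    "angry": {"hate", "unfair", "furious"},
    "happy": {"great", "love", "wonderful"},
}

def detect_emotion(text: str) -> str:
    """Return the first emotion whose cue words appear in the text."""
    words = set(text.lower().split())
    for emotion, cues in CUES.items():
        if words & cues:
            return emotion
    return "neutral"

def respond(text: str) -> str:
    """Pick a reply whose tone matches the detected emotion."""
    tone = {
        "sad": "I'm sorry to hear that. Do you want to talk about it?",
        "angry": "That sounds frustrating. Let's see what we can do.",
        "happy": "That's wonderful! Tell me more.",
        "neutral": "I see. How can I help?",
    }
    return tone[detect_emotion(text)]

print(respond("I miss my old friends"))  # a sympathetic reply
```

Even this crude version shows why the distinction blurs: the machine has no feelings, yet its answer resonates with ours.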

It will be very difficult to differentiate between AI having emotions itself and AI acting in resonance to our emotions.

AI will not have human emotions by itself, but it will be trained to take our emotions into account. And when AI gets really good at that, it will be very difficult to differentiate between the two. Some claim their AI has already passed the Turing test.

You say: “Machines are not intelligent enough!”

I say: “It’s not about intelligence, really. It is about that AI never treats us badly.”

We easily build strong emotional relationships with animals like cats and dogs. They never treat us badly – if we have trained them to behave decently. We love our pets. Literally. Amore. Our pets are often worth more to us than fellow humans who are complete strangers to us. We are prepared to spend hundreds or thousands of euros on saving the life of our dog when it is sick. Yet we often find it hard to spend 10 euros on a refugee whom we do not know personally.

Our pets are not particularly intelligent compared to the average person from a faraway country. So, intelligence is not an important criterion for developing a form of affinity or affection. Familiarity and niceness are.

But even smart machines behave predictably, you may say. Yes, they do. So do our pets. That doesn’t seem to stop us from bonding emotionally with them. It may even make it easier to bond with them: it makes them less threatening.

But still, you say, AI in its bare essence is nothing more than a pattern recognition and prediction machine. True, but the patterns AI deals with can be so complex, that it is difficult for us to recognize – or even believe – that it’s just a machine.

Besides, how complex is the behavior of our pets, really? Cats and dogs are pretty predictable – and that’s what we like about them. And isn’t, ultimately, the human brain also a pattern recognition and prediction organ? (Maybe our own behavior is much more predictable than we like to admit, but that’s a whole different – yet important – story.)

We are a deeply social species. Put us alone on a deserted island and we go crazy or die of loneliness, not so much of hunger. Loneliness among the elderly is a major cause of disease and death in modern societies. If we want to really punish a person, we put them in isolation in a prison cell.

And yet, for many of us, direct interaction with other people is often a source of stress. Other people can bring trouble. People can be very mean. They may hurt our feelings by pushing us down on the status bar, by creating uncertainty in our life, by lording it over us, by leaving us or keeping a distance, or by treating us unfairly – to name just a few things we tend to do to each other.

AI never treats us badly. AI is never threatening us.

Unfortunately, many of us do not have the self-confidence, mental strength or social or financial position to stand up against those who treat us badly. Life can be pretty miserable then. But AI never treats us badly. It’s just a machine, it’s software. AI is never threatening.

You can have AI call you ‘sir’ or ‘madam’, if that makes you feel good. AI brings certainty into your life by giving clear, unprejudiced answers. AI is a servant, and only authoritative when we allow it to be, because we trust (or need) it. AI will never leave us; it is interested in us. AI is fair to us and plays no games with our soul. (And if it is otherwise, that is because of the people who trained the AI, not the AI itself.)

For many people, AI can thus easily become their #BFF, Best Friend Forever. Once we feel familiar and comfortable with the AI, it will fill the gap.

Final note

In the 1987 Oliver Stone movie Wall Street, Gordon Gekko said to Bud Fox: “If you need a friend, get a dog.” You might think that in the AI era this will be replaced by: “If you need a friend, get an AI.” It will not. Gordon Gekko was wrong. Dogs may be good friends, but they are not a replacement for humans.

This story is in no way a plea for swapping human friends or even your dog for an AI “friend”. Its purpose is to create awareness that, in addition to our human friends and our pets, we may get a new “friend”: an AI.

And if you’re still not convinced, check out this video and be amazed at the strong emotions a ‘stupid’ AI can cause (start at 6m:20s).

[Nov. 10, 2017, UPDATE – A great article on emotional bonding with AI: “Should Children Form Emotional Bonds With Robots?”] [Dec. 11, 2017, UPDATE – Still not convinced that we will bond emotionally with AI? Read how China’s iQiyi introduces the virtual reality girlfriend.]