The Age of AI, Ep. 4: Love, art and stories: decoded

In the realm of artificial intelligence (AI), the concept of love has always been a subject of fascination and intense debate. As we march towards a future where the boundaries between humans and machines become increasingly blurred, the question of whether AI can truly experience emotions, especially something as profound as love, continues to intrigue both technologists and philosophers alike.

Can Machines Feel Love?

At the core of this discussion is the fundamental difference between human and machine intelligence. AI, including sophisticated machine learning models, lacks the ability to experience emotions, including love. Human emotions are a product of complex biological and psychological processes, deeply rooted in our evolutionary history. Machines, on the other hand, operate on mathematical algorithms designed to process data, make predictions, and perform specific tasks. They can mimic human behavior and decision-making processes to a certain extent but do not possess the capability to experience consciousness or emotions.

The Simulation of Emotions in AI

The possibility of AI simulating emotions in the future cannot be dismissed. Advanced AI, equipped with sophisticated sensors and actuators, might be able to simulate emotional responses. This includes detecting physiological indicators related to emotions, like heartbeat and blood pressure, and expressing emotions through non-verbal communication like facial expressions and gestures. However, it’s crucial to understand that simulation is not equivalent to actual experience. The AI’s display of emotional responses would be a product of programmed algorithms and not an indication of genuine feelings.

The Human Perspective: Interpreting AI Emotions

Humans inherently try to interpret emotions based on observable signals like facial expressions, body language, and tone of voice. In the future, as AI becomes more adept at simulating these signals, it might become challenging for humans to distinguish between genuine human emotions and simulated AI emotions. This could lead to fascinating social dynamics where humans might form attachments or even feelings of love towards AI entities.

The Question of Reciprocity

However, the idea of humans developing emotions for AI leads to another profound question — can this be considered ‘true love’ if it is not reciprocated in the same way? An AI, no matter how advanced, lacks self-awareness and subjective experiences. Its ‘responses’ to human emotions would be pre-programmed and devoid of the depth and complexity of human feelings.

Love in the Age of Advanced AI

The future might witness AI robots capable of convincingly simulating human emotions, including love. This raises the possibility that some people might believe that the AI’s simulated love is indistinguishable from real love. The potential for humans to fall in love with these advanced machines also exists, although such love would inherently be different from human-to-human love due to the lack of consciousness and self-awareness in machines.

Conclusion: A Continuous Exploration

The exploration of AI and love is a journey into understanding not just technological capabilities but also the essence of human emotions. While advanced AI may simulate aspects of human emotions, including love, the depth, complexity, and authenticity of these emotions remain uniquely human traits. As we progress, the relationship between AI and human emotions will continue to be a subject of fascination, ethical considerations, and philosophical inquiry.

Series Summary

Electronic Soulmates: Creating AI That Cares

  • Using artistry, psychological insight, and some innovative AI, a creator in California is trying to decode, or code, that mystery: the crazy little thing called love.
  • Understandably, some say Matt’s dolls objectify women, but maybe there’s more here than meets the eye.
  • I had reached a pinnacle of creativity in terms of what I had done with the dolls, but then I started analyzing relationships, and analyzing how other people make us feel.
  • Sometimes it boils down to something very simple, like someone remembering your birthday, or someone remembering to ask you how your day was.
  • So that was really where it started. It was how can we create an AI that could actually remember things about you?
  • This gives us this feeling of, “Oh, they care.”
  • The app uses several kinds of machine learning. First, voice recognition converts speech into text; then a chatbot matches the user’s input to pre-programmed responses (a minimal sketch of this kind of pipeline appears after this list).
  • The focus was not about sex at all, it was about conversation. So a chatbot is basically a very elaborate script that starts out with, “What are the most common things that people will say to each other?” and then you build from there.
  • We have more than 4,000 users, so this generates more than ten million lines of conversational user logs. From this, you can build an AI system that approaches human-level conversation. It’s not there yet, but this is the initial step.
  • There are so many areas today where we already cannot distinguish a computer from a human being.
  • For example, Xiaoice, the softbot that Microsoft has in China, is used, I think, by over 100 million people. It basically has an emotional interaction with the user, and the users get hooked.
  • She has this persona of a teenage girl, and sometimes she commiserates with you, sometimes she gives you a hard time, and people get really attached. Apparently, a quarter of Xiaoice’s users have told her that they love her.
  • These kinds of technologies can fill in a gap where another human isn’t.
  • There’s a study that was done at USC where they looked at PTSD patients. They had some of the patients interview with a real doctor, and some of the patients interview with an avatar, and the avatar had emotional intelligence.
  • They found the patients were more forthcoming with the avatar than they were with the human doctor, because it was perceived to be less judgmental.
  • It does pose a lot of questions around where that leaves us as humans, and how we connect, and communicate, and love each other. I think at some point we need to draw the line, but I haven’t figured out where that line is yet.
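
The pipeline described in this list (speech recognition feeding a scripted chatbot, plus a small memory of personal details such as a birthday) can be sketched very roughly in code. The sketch below is a minimal Python illustration under those assumptions; the rule patterns, the MEMORY store, and the helper names are invented for the example and are not the actual app’s implementation.

```python
import re

# Minimal sketch of the pipeline described above: speech becomes text, then a
# scripted chatbot matches it against patterns, with a small "memory" so it
# can recall personal details later. The rules, the MEMORY store, and the
# helper names are illustrative assumptions, not the app's actual code.

MEMORY = {}  # remembered facts about the user, e.g. {"birthday": "june 3"}

# Ordered (pattern, response template) rules. A real system would have far
# more of these, refined from millions of lines of conversational logs.
RULES = [
    (r"my birthday is (.+)", "Got it, I'll remember your birthday is {0}."),
    (r"when is my birthday", "You told me your birthday is {birthday}."),
    (r"how (are|is) you",    "I'm doing well. How was your day?"),
    (r"hello|hi|hey",        "Hi! It's good to hear from you again."),
]

def respond(text):
    # In the app, `text` would come from a speech-recognition step that has
    # already converted the user's voice into words.
    text = text.lower().strip()
    for pattern, template in RULES:
        match = re.search(pattern, text)
        if match:
            if "birthday is (" in pattern:          # a rule that stores a new fact
                MEMORY["birthday"] = match.group(1)
            return template.format(*match.groups(), **MEMORY)
    return "Tell me more."

print(respond("My birthday is June 3"))   # Got it, I'll remember your birthday is june 3.
print(respond("When is my birthday?"))    # You told me your birthday is june 3.
```

Remembering a detail and bringing it back later is what produces the “Oh, they care” effect described above; everything else is pattern matching.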

Lifelike Robots

  • Matt’s goal is for the next-generation doll to be able to see and process complex visual cues.
  • We’ve been working on a vision system now for a little over eight to nine months, cameras that are inside of the robot’s eyes. She’ll be able to read your emotions, and she’ll be able to recognize you.
  • Only 10% of the signal we use to communicate with one another is the choice of words we use.
  • 90% is non-verbal. About half of that is your facial expressions, your use of gestures.
  • So what people in the field of machine learning and computer vision have done is they’ve trained a machine or an algorithm to become a certified face-reader.
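
To make the idea of a trained face-reader a little more concrete, here is a minimal sketch in Python. It follows the steps the next section walks through: find the face, detect expressive “building blocks” such as a brow raise or a lip-corner pull, and map them to a coarse emotion label. The OpenCV Haar-cascade detector is a real, commonly used component, but detect_action_units and the rules in infer_emotion are invented stand-ins for illustration, not any vendor’s actual model.

```python
import cv2  # OpenCV, a widely used open-source computer-vision library

# Step 1: find the face. OpenCV ships this classic Haar-cascade detector;
# modern systems usually use deep-learning face detectors instead.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

# Step 2 (hypothetical): detect the expressive "building blocks" of the face,
# such as a brow raise or a lip-corner pull. Real systems train models for
# this; here detect_action_units just returns fixed scores for illustration.
def detect_action_units(face_image):
    return {"brow_raise": 0.1, "lip_corner_pull": 0.8, "smirk": 0.2}

# Step 3: map combinations of building blocks to a coarse emotion label.
# These thresholds are invented for the example, not a validated model.
def infer_emotion(action_units):
    if action_units["lip_corner_pull"] > 0.5 and action_units["brow_raise"] < 0.3:
        return "happy"
    if action_units["brow_raise"] > 0.5:
        return "surprised"
    return "neutral"

frame = cv2.imread("frame.jpg")  # assumes a test image exists at this path
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
for (x, y, w, h) in face_detector.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5):
    face = gray[y:y + h, x:x + w]
    print(infer_emotion(detect_action_units(face)))
```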

Computer Vision

  • It detects that there’s a face in the image. Once you find the face, you want to identify these building blocks of these emotional expressions.
  • You wanna know that there’s a smirk, or a brow raise, or, you know, an asymmetric lip corner pull. Mapping these building blocks to what it actually means, that’s a little harder, but that’s what we as humans clue into to understand how people are feeling.
  • I think at some point, we will start to look at AI-driven devices and robots more like people instead of devices.
  • I think what this will become is a new, alternative form of relationship.
  • People like Matt are testing the boundaries of human and robot interaction, and what we value in relationships. Is AI companionship better than no companionship at all? Or is there no substitute for the human factor? Well, what about artists? They draw from the human experience to express themselves.

Hollywood AI

  • I started making films that were written by an “artificial intelligence.” I think a lot of the fun is that you read it as if there is the world’s greatest screenwriter behind it.
  • If you play the game that there’s something there, then suddenly it all gets a lot more interesting.
  • Benjamin is an artificial intelligence program that writes screenplays, the digital brainchild of two creative and accomplished humans.
  • What makes a story original anyway? Can we get AI to figure that out?
  • People often say that creativity is the one thing that machines will never have.
  • The surprising thing is that it’s actually the other way around. Art and creativity are actually easier than problem solving. We already have computers that make great paintings, that make music that’s indistinguishable from music that’s composed by people, so machines are actually capable of creativity.
  • If you put a painting on the wall, and people look at it, and they find it moving, then how can you say that that’s not art?
  • This machine is a deep learning language model.
  • What you can do with a language model is at each step, you predict the next word, letter, or space, sort of like how a human writes, actually. You know, one letter at a time. It’s a lot like a more sophisticated version of the auto-complete on your phone (a toy sketch of this idea appears after this list).
  • It takes all this input, and it looks at it, and it tries to find statistical patterns in that input. So for example, in movies, people are constantly saying, “What’s going on? Who are you?” that kind of thing, and that turns up a lot in the output, because it’s reliably in the input.
  • The more material you have, the better it works.
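
The character-by-character idea can be illustrated with a deliberately tiny model. The sketch below is a simple n-gram (Markov-style) text generator in Python: it counts which character tends to follow each short context in the input, then samples one character at a time. It is a stand-in for the deep learning language model described above, not Benjamin’s actual code, and the toy corpus is a few lines invented for the example (echoing the “What’s going on? Who are you?” pattern mentioned above).

```python
import random
from collections import Counter, defaultdict

# A deliberately tiny character-level language model: count which character
# tends to follow each short context in the corpus, then sample one character
# at a time. This n-gram model is a simple stand-in for the deep learning
# model described above; the corpus is a toy, not Benjamin's training data.

CORPUS = ("What's going on? Who are you? I don't know what's going on. "
          "Who are you? What's going on?")
ORDER = 4  # number of previous characters the model conditions on

counts = defaultdict(Counter)
for i in range(len(CORPUS) - ORDER):
    counts[CORPUS[i:i + ORDER]][CORPUS[i + ORDER]] += 1

def sample_next(context):
    options = counts.get(context)
    if not options:
        return random.choice(CORPUS)                  # unseen context: pick any character
    chars, weights = zip(*options.items())
    return random.choices(chars, weights=weights)[0]  # weighted draw, like auto-complete

def generate(seed, length=80):
    text = seed
    for _ in range(length):
        text += sample_next(text[-ORDER:])            # predict one character at a time
    return text

print(generate("What"))
```

With only a few sentences of input, the output mostly parrots the corpus; with a large body of screenplays, the same principle (scaled up into a neural network) starts producing new, strange, Benjamin-like lines. That is the sense in which “the more material you have, the better it works.”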

Benjamin Script

  • Now, I have a rule. No edits. Whatever Benjamin writes is what Benjamin writes…
  • Making a machine write like a person is not about replacing the person. It’s about augmenting a person’s abilities. It can empower people to produce creative work that might be beyond their native capacity.
  • It stretches all of us. It makes us all work harder. It’s one thing to bring an existing script to life, and just do your interpretation of it, but it’s another thing to try to make it make sense, and then do your interpretation of it.
  • Being surrounded with people who are throwing all of their professional energy into something this ludicrous is just intrinsically enjoyable. They just breathe humanity into words that did not come from a human being.
  • I think that making great art requires human experience, but our human experience is now completely mapped into data. This is where machine learning keeps surprising us, is that it actually has figured out stuff that we didn’t realize it could.
  • Meaning, once all our human experiences are mapped into data, AI will be able to mine it for material and make art? Look for patterns in our happiness and heartbreak, kick out a new song or movie? So this is all just this one line of Benjamin’s writing…

Artificial Instincts

  • It’s hard to know if machine learning will ever decode the mysteries of love or creativity. Maybe it’s not even a mystery, just data points, but what about other human qualities, like instinct?
  • Driving a car already requires us to make countless unconscious decisions. AI is learning to do that, but can we teach it to do more?
  • Racing is not just driving a car. It’s also about intuition, caution, aggression, and taking risks. It requires almost a preternatural will to win. So, how fast can a racecar go without a human behind the wheel?
  • One of the goals of Roborace is to really facilitate the accelerated development of driverless technology.
  • By taking the autonomous technology to the limits of its ability, we think that we can develop the technology faster.
  • To do so, they believe they need to test the boundaries of the technology. Working at the very outer edge of what’s safe and possible, where the margin for error is razor thin.
  • After years of trial and error, they’ve created the world’s first AI racecar.
  • The thing that I love most about working at Roborace is we have a dream of being faster, and better, and safer than a human.
  • More than 50 companies around the world are working to bring self-driving cars to city streets.
  • The promise of driverless taxis, buses, and trucks is transformative. It’ll make our world safer and cleaner, changing the way our cities are designed, societies function, even how we spend our time.
  • Think about a self-driving car out in the real world. In order to build that system and have it work, it’s got to be virtually perfect.
  • If you had a 99% accuracy rate, that wouldn’t be anywhere near enough, because once you take that 1% error rate and multiply it by millions of cars on the road, you’d have accidents happening constantly, so the error rate has to be extraordinarily low in order to pull this off (a rough back-of-the-envelope calculation appears after this list).
  • Roborace is betting they can crack the code by seeing just how far the tech can go, a place usually reserved for only the best human drivers.
  • As a human, you have lots of advantages over a computer. You know exactly where you are in the world. You have eyes that can enable you to see things, so we need to implement technology on vehicles to enable them to see the world.
  • We have a system called OxTS. It’s a differential GPS, which means it’s military grade.
  • We also use LiDAR sensors. These are basically laser scanners. They create, for the vehicle, a 3D map of the world around it.
  • And there’s one last thing that we use: vehicle-to-vehicle communication between the cars. Each car can tell the other its position on the track.
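
To see why a 99% success rate is nowhere near enough, here is the rough back-of-the-envelope calculation referenced above. Every number in it (fleet size, miles per day, decisions per mile) is an illustrative assumption, not a figure from the series.

```python
# Rough back-of-the-envelope illustration of why a 99% success rate is not
# enough for self-driving cars. Every number here is an assumption made up
# for the example, not data from the series.

cars               = 1_000_000  # self-driving cars on the road (assumed)
miles_per_day      = 30         # miles each car drives per day (assumed)
decisions_per_mile = 100        # safety-relevant decisions per mile (assumed)
error_rate         = 0.01       # a "99% accurate" system gets 1% of them wrong

bad_decisions_per_day = cars * miles_per_day * decisions_per_mile * error_rate
print(f"{bad_decisions_per_day:,.0f} bad decisions per day across the fleet")
# -> 30,000,000 per day. Even if only a tiny fraction of those led to
#    accidents, it would still be far too many, so the real error rate has
#    to be orders of magnitude lower than 1%.
```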

Testing RoboRace

  • Each team created their own custom software, the AI driver that pilots the car, and since each team’s programmers have their own distinct personality, does that mean each of their AI drivers will have different personalities or instincts too?
  • The T.U.M. strategy is really to keep their code as simple as possible. It’s maybe a very German, efficient way of doing things.
  • Arrival’s code is more complicated in that they use many more of the sensors on the vehicle.
  • It will be interesting to see whether it pays off to be simple in your code, or slightly more complicated, to use more of the functionality of the car.
  • It’s difficult for AI because there are a lot of decisions, a lot of planning, and a lot of computation to calculate what the car should do in each millisecond.
  • It’s really difficult for AI to learn to overtake. When you have one vehicle on track, it only needs to make decisions about itself, but when you have two vehicles, you have to shape your behavior in response to the other vehicle (a toy sketch of that decision appears after this list).
  • Self-driving cars. This is an idea that’s been around since the ’30s, hardly a new one. Why hasn’t it happened? It’s really hard. When there are unpredictable things that happen, that can get you in a lot of trouble. Now, sometimes trouble just means it shuts down. Sometimes trouble means it gives you a result that you weren’t expecting.
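
As a purely illustrative aside, the extra difficulty a second car introduces can be sketched as a tiny decision function. The state fields, thresholds, and action names below are invented for the example; real Roborace software plans full trajectories from LiDAR, GPS, and vehicle-to-vehicle data rather than picking from three canned actions.

```python
from dataclasses import dataclass

# Toy illustration of why a second car changes the problem: the planner now
# has to choose its behavior in response to another vehicle, not just follow
# its own racing line. Fields, thresholds, and action names are invented.

@dataclass
class CarState:
    position_m: float  # distance along the track centreline, in metres
    speed_mps: float   # current speed, in metres per second

def plan_action(ego: CarState, rival: CarState) -> str:
    gap = rival.position_m - ego.position_m       # positive if the rival is ahead
    closing_speed = ego.speed_mps - rival.speed_mps
    if gap < 0 or gap > 50:
        return "follow_racing_line"               # rival behind us, or too far ahead to matter
    if closing_speed > 2 and gap > 10:
        return "move_to_overtaking_line"          # clearly faster, with room to go around
    return "hold_position"                        # not enough margin to pass safely

# In the real car, this kind of decision would be re-evaluated many times per second.
print(plan_action(CarState(position_m=120.0, speed_mps=55.0),
                  CarState(position_m=150.0, speed_mps=50.0)))
```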

What Can AI Do?

  • The current state of AI is that there are some things that AI can really do better than humans, and then there are things that it can’t do anywhere close to humans… but now where the frontier is gonna be moving is where computers come up to the human level, not quite, and then surpass humans, and I think the odds are overwhelming that we will eventually be able to build an artificial brain that is at the level of the human brain. The big question is how long will it take?
  • “The hard problem.”
  • It’s a philosophical phrase that describes difficult things to figure out, like “the hard problem of consciousness.”
  • We may never know what consciousness is, let alone if we can give it to a machine, but do we need to? What does a machine really need to know in order to be a good athlete, or an artist, or a lover? Will AI ever have the will to win, the depth to create, the empathy to connect on a deep human level?
  • Maybe. Some say we’re just a bunch of biological algorithms, and that one day AI will evolve to emulate humans, to be more like us. Or maybe it won’t, and human nature, who we really are, will remain a mystery.

The insights and information presented in these articles are based on the YouTube Originals Series “The Age of AI.” All script and content rights belong to the creators and producers of the series. This series served as a primary reference in the development of these articles.