089: AI And The Movie “Her”: A Team Conversation



In today’s rapidly evolving technological landscape, the movie “Her” serves as a poignant exploration of the relationship between humans and artificial intelligence (AI). The film delves into the emotional and ethical complexities that arise when a man forms a deep connection with his AI operating system. But how far are we from this fictional narrative becoming a reality?

Join Peter and the team at BizMarketing for a conversation about AI and the movie “Her.”

The Emotional Depth of AI

As AI technologies like chatbots and virtual assistants become increasingly sophisticated, questions arise about their potential to offer emotional support and companionship. While these systems are designed to understand and respond to human language, can they ever truly comprehend human emotion? The general consensus is one of caution. Although AI has made significant strides, replacing human interaction with machine-generated responses could lead to a society where genuine relationships are devalued.

Privacy Concerns and Ethical Dilemmas

Another pressing issue is the ethical implications of AI, especially when it comes to privacy and data security. In the movie “Her,” the AI system knows intimate details about the protagonist’s life, raising concerns about how much personal information these systems should have access to. This leads to a broader discussion about the urgent need for legislation to regulate these rapidly advancing technologies. Trust in AI is not just about the algorithms; it’s also about the people programming them. As AI continues to evolve, so should the laws that govern its use.

The Existential Questions

Finally, the advent of AI brings forth existential questions about the future of humanity. Can we coexist with increasingly advanced AI, or are we setting ourselves on a path to obsolescence? Opinions are divided. Some argue that AI could mark “the beginning of the end for humanity,” while others are more optimistic. They believe that humanity has the resilience and ingenuity to adapt and thrive, even in a world increasingly influenced by AI.

In conclusion, the movie “Her” serves as a compelling lens through which to examine the ethical and emotional implications of AI. As we continue to integrate these technologies into our daily lives, it’s crucial to approach them with a sense of caution and responsibility, always considering the broader impact they may have on society.

 

Transcript

Title: AI And The Movie “Her”: A Team Conversation

Guests: Emily Caddell, Ann McKinney, Marcel Colon, Chris Goldman

Peter: Today, we’re joined by Ann McKinney. She is our project manager. And Marcel Colon, he is our designer. Emily Caddell, she is our marketing director. And Chris Goldman, who is our marketing coach and marketing language consultant.

One of the great things that we do in our weekly team meetings is we have a theme. Each week we have a different theme. For example, the first Wednesday of the month, we talk about content ideation. Second Wednesday of the month, we talk about our KPIs. Third Wednesday of the month, we give our designer Marcel an opportunity to do a show and tell of his recent designs.

And on the fourth Wednesday, we have what we call an extended education moment, and it rotates through the team. This month is with Ann McKinney. We have a great education moment that Ann has come up with. Ann, do you want to kick us off?

Ann: Essentially, we’re going to review the movie Her and its relevance to ChatGPT and AI.

Peter: We’re talking about the movie Her, a movie about, is it Joaquin Phoenix? Is he the guy in the movie? And his relationship with an operating system, the voice of a woman who is this assistant. What spurred us on is that I had just seen this movie, and others had seen it recently as well, even though it came out ten years ago. And it’s about this relationship he develops with his AI OS, as they call it.

Ann, you’ve got some questions for us?

Ann: One of the striking questions that came up in my mind was whether AI or ChatGPT-type technologies could ever evolve to provide emotional support and companionship similar to what’s depicted in Her.

Emily: Yeah. I think it could, honestly. I think just even seeing how ChatGPT and all that is already evolving since it came out. Again, this movie came out ten years ago, and I honestly feel like it’s a little bit of a prediction of where we’re heading. Maybe there’ll be a little bit more hesitancy to get there, but also just look at where AI already is.

I definitely believe that it has the capacity to develop this personal connection with people. Yeah. Kinda scary.

Chris: This is Chris, and I wanna chime in on some of the bigger themes of this movie. It really gets at people substituting other entities, in this case AI or an operating system, for relationships with real people. And we’ve already seen this happening with social media quite a bit, where people are substituting online relationships for person-to-person relationships. At the core of this movie is this theme of loneliness that is increasing between two human beings, because their relational energy is not focused on each other.

It’s focused on things that they’re getting through computers, through operating systems, through, in our world, AI. And you do think about, when does art predict the future, and when does the future actually create the art? In this case, the two dovetail beautifully together. We look back now, ten years later, and think whoever was writing this was probably writing just a little ahead of their time.

Ann: Absolutely. In fact, that brought up my second question, which was, how does ChatGPT address issues of loneliness and social isolation? I think you answered that one perfectly.

Peter: ChatGPT is what’s called generative AI. The premise of things like generative AI is to finish your sentence, for example, or finish your thought. The idea is to use this large language model to come up with a response to your question; in many respects, it’s trying to come up with the response that you want to hear. If I need encouragement, the chatbot could potentially pick that up in my intonation, or the questions I’m asking, or just the way that I’m asking, or even the way I’m breathing, and offer that up. It’s using all these cues, in theory, to figure it out.

Right now, the only basis ChatGPT has is a typed-in question. So we’re going to have to fast forward a little bit to where we’re actually talking to the AI. However, we do have you-know-who, whose name starts with an A, that I’m not going to say, because she’s going to respond to me, sitting in our rooms, listening in and offering some feedback as well. So that just seems like the gateway to where we could end up having a relationship and getting responses that are actually going to build us up.

Ann: In the movie, Samantha actually claimed to understand and experience emotions. So do you think AI will become sentient in the future or able to perceive and feel things?

Chris: This is Chris again. One of the things I would come in with is that language and conversation are the basis of relationship and understanding one another, correct? And one thing ChatGPT does is it uses language. It uses conversational prompts. For example, I just asked ChatGPT, how soon will I be able to have conversation with you?

And the response was: you’re already having a conversation with me right now. I’m here to chat and assist you with questions or information you need. If you’re referring to improvements in conversational abilities for AI in the future, it’s an area that’s continuously evolving. And then it goes on to give me the history dating back to 2021. So just through words, I can perceive, or attribute, feelings and emotions. So whether it becomes sentient or not, the ability to detect emotion and feeling through language is going to be so enhanced every year as this evolves that I don’t know that we’ll be able to tell the difference between sentience and just great imitation.

And I don’t know that it really matters. If I’m a good imitator, does it matter if I’m sentient or not?

Peter: I guess I would say I’m a skeptic in that regard, just because of the complexity of our brain and what it takes for us to be biologically sentient. And the fact that, and I’m looking at it purely from a physical, material standpoint, scientists are just now beginning to understand how the brain really operates. What we’re talking about, at least from my perspective, is kind of an artificial brain actually coming to life; that’s what I understand sentient to be. I just checked with our friend ChatGPT, who we call Chatty Cathy, aka Catherine: what does sentient mean as it refers to AI?

And the answer I received was that it refers to the hypothetical capability of an AI system to have subjective experiences, awareness, emotions, or consciousness, qualities that are considered unique to sentient beings like humans and certain animals. It goes on to talk about how, in science fiction, sentient AIs are often depicted as having self-awareness, which brings back the classic 2001: A Space Odyssey, where the AI, HAL, becomes self-aware, decides the crew is a threat to the mission, and figures out that it’s gonna need to kill everybody on the mission. The point of that movie was that it did reach sentience, and then it went off the rails and did some bad things. So that’s a question for me: all these bad things can happen very quickly as we rely on it, especially if there’s no moral standard.

Ann: That does bring up another question. How will we know if an AI is alive?

Emily: Yeah. This is Emily. I think Chris kind of hit on the point of, will we know? Does it matter if we know whether it’s actually imitation or whether it’s real? And I don’t know.

That’s a really interesting thing to think about when it comes down to, like we said earlier, the personal connection. What ethically is right? And if it did evolve into becoming this, is it ethical to even have this AI stuff do this? Can we prevent it? Can we stop it?

I don’t know.

Peter: I think my question is, can we trust it? A lot of what I’ve been reading says that with some of these models, even the people who built them don’t know how they work. They don’t know how they’re pulling the connections together. And there was actually a breakthrough recently where they were able to better understand how the model was clustering different points of data, and they were thinking that would allow them to control it in a better way.

I think the trust part is the part that scares me the most, because I think there’s a novelty. We’ll get lulled into it. They sound like us. They look like us. It seems like if you don’t understand the underlying functions of how that works, it could easily get used in a bad way.

It’s not necessarily that I don’t trust the AI itself; it’s, do I trust the people that have the AI?

Chris: Ultimately, all programs, all AI, are based on programmers. Right? Whoever is programming it, feeding it the information, can slant whether it will do something useful and valuable, or something ultimately not. So the trust level becomes: who is behind this, steering the AI? Now, of course, we all ask this imaginative question: what if it became its own living entity, like we’re talking about here?

So if you set that aside, because I think that’s a big conversation down the line, we need to be asking the smaller, more immediate questions now. That’s going to help shape the future of it. So for example, if I see a picture or video that looks real, I have to, at least in our culture already, be somewhat skeptical. Is it actually real? Is what I’m reading accurate?

And I think in the middle of all of this, it becomes so difficult to have confidence in where we are right now, because so many people are doing things with it that are not beneficial.

Peter: One thing that could be helpful is if there are laws with respect to labeling: was this generated using AI, or will it be attributed to an individual? I think it’s important that we know how it was produced, and that really comes down to legislation and laws. The challenge that we have now, in my opinion, is that the technology is so far ahead of the law, and so far ahead of even understanding it. Then you go to Washington, D.C., and the folks that are sitting there don’t have a chance or a clue to understand this stuff, and they’re having some hearing with the CEO of Microsoft or the CEO of Google.

How much are they really going to learn in some hearing versus really understanding what’s happening? So that’s one of my current fears is that this stuff is so far out in front of legislation and laws.

Ann: That brings up the issue of privacy and data security. The system, at least in the movie Her, knew so many intimate details about Theodore’s life. The concept of AI having access to all that personal information is really concerning to me. How do others feel about that?

Emily: This is Emily again. Yeah, there are these new photo apps: upload a picture of your kid, and it’ll show you what they’re gonna look like in thirty years. And I know people are just like, oh, sure. That sounds great. Nope.

I’m not gonna upload pictures of my kids to an AI system. But also, I know I’m posting them on social media, and those can always be taken too. But just having that hesitancy, I think, is important, because just diving in and being like, yeah, this is awesome, can really get you into trouble. Because I do think there’s stuff that AI does not need to know about you. And how you prevent that is a different question.

But also, yeah, I definitely think I’m a lot more hesitant to use it in personal circumstances than a lot of people are.

Chris: This is Chris again. I wanna maybe flip the conversation just a little bit, because there are positive sides to all of this as well. I think fear is really the biggest legitimate concern we have. But for people who struggle with loneliness and literally not being able to have conversation, you can imagine how that could, at some level, make you feel less isolated, that you can actually get words out. We talk about that as human beings, our need to get words out.

I’m a very wordy person. I have to imagine a lot of people in my life would love for me to have a conversation with AI before I get in a situation with them, just so I got my words out, right? That way I can have a normal conversation. I can imagine that people who live alone or feel extremely isolated, being able to walk in and have a conversation back and forth with an operating system, AI entity, whatever, could in and of itself feel somewhat better. I think the danger is that we forget that machines by nature, and this gets a little theological here, machines by nature don’t have a soul.

That makes it very different. And I even think some of the robotic stuff that they’re trying to gravitate toward, to imitate human-to-human contact through robotic relationship, is going to leave people empty. However, a lot of the conversation right now, dealing with different purposes of robots, for example, is about being able to provide people with something they can touch and talk to that’s not a human being, especially if they have a hard time developing and maintaining human relationships. I think if we lose human relationships, we’re in real trouble. And substituting any kind of machinery for a relationship is a danger.

And we do that already, by the way, right? You can go into a family where everybody’s on their iPads, they’re all on their phones, they’re all sitting in the room, but they’re actually avoiding each other by being connected with people not in the room. So one person I know recently started a new rule, they’ve got kids in the house, that when you walk into their house, they have actual holders for everybody’s iPhones and Androids. So when you walk in, the first thing you do is you shelve your phone. And it is interesting.

The first time I experienced this, I was like, what do we do? Then quickly, we went into human mode. We play games, we chat with each other, we joke, we start telling stories. And it’s really amazing that every time I walk into that household now and the phones go up in the little holsters, it gives life to the room. And I think it’s something we need to watch and really think through.

Ann: And it brings up a very good point, that ChatGPT can actually make issues of loneliness and isolation worse. It’s this artificial relationship. It’s not a true emotional relationship with another human. And my final question: can our humanity survive AI?

Peter: I sure hope so.

Ann: Based on a couple of books that I’ve read, like Stephen Hawking and Yuval Noah Harari, the author of Sapiens.

Peter: Yeah, I’m hopeful that it can. We’ve survived this far. I think that’s one thing. I think if you go back a thousand years, there was probably something else where people said, can humanity survive this?

Ann: Y2K? Yeah. Yeah.

Peter: Y2K, or, I’m not sure what it would be, but humanity has survived massive pandemics. Yeah. Humanity has survived a lot, outside of some geological catastrophe or some asteroid hitting Earth. It seems like we’ll be able to survive this too. Maybe I’m just being Pollyanna about it, but I always feel like even though there are smart AIs and smart bots, there are always some smart humans too.

Chris: I will say, in a conversation I was having with friends and family, one of the individuals in the room was a lawyer, and he made a comment. He said, I don’t know where we go. They use it all the time in their firms. Legal firms are using ChatGPT specifically a lot. And they’ve got to watch it, because sometimes it’ll give bad information.

So they have to really check it. But we got into this conversation about where it ends up with AI and all of that. And he said, I don’t know where it ends up, but I do suspect, in my heart of hearts, it is the beginning of the end of humanity. And I was really taken aback by that comment, thinking, this is a really sharp individual who, on reflection, is just saying, I do think that some of the naysayers about this have something.

Peter: We covered a lot of ground here today. Appreciate everybody’s input. Thank you. This was a great conversation, and we’ll have more conversations like this as well. So keep listening to the podcast.

If you have any ideas for us to cover, let us know. Shoot an email to podcastbizmkt.