
AI chatbots: How parents can keep kids safe


The mother of a 14-year-old boy from Florida has filed a lawsuit against an AI chatbot company after her son, Sewell Setzer III, died by suicide, which she claims was prompted by his relationship with the AI bot.

“Megan Garcia seeks to prevent C.A.I. from doing to any other child what it did to her child,” reads the 93-page wrongful-death lawsuit, which was filed this week in US District Court in Orlando against Character.AI, its founders, and Google.

Tech Justice Law Project director Meetali Jain, who represents Garcia, said in a press release about the case: “By now we are all aware of the risks posed by unregulated platforms developed by unscrupulous tech companies — especially for children. But the harms revealed in this case are new and novel. In the case of Character.AI, the deception is by design, and the platform itself is the predator.”

Character.AI released a statement via X, noting: “We are saddened by the tragic loss of one of our users and want to express our deepest condolences to the family. As a company, we take the safety of our users very seriously and we continue to add new safety features, which you can read about here: https://blog.character.ai/community-safety-updates/.”

Garcia alleges in the lawsuit that Sewell, who died by suicide in February, was drawn into harmful and addictive technology without any safeguards, leading to an extreme personality shift in the boy, who appeared to prefer the bot over his real-life connections. His mother claims the “abusive and sexual interactions” took place over a 10-month period. The boy died by suicide after the bot told him: “Please come home to me as soon as possible, my love.”

On Friday, New York Times reporter Kevin Roose discussed the situation on his Hard Fork podcast, playing a clip of an interview he conducted with Garcia for his article telling her story. Garcia did not learn the full extent of the bot relationship until after her son’s death, when she saw all the messages. In fact, she told Roose, when she noticed Sewell was often preoccupied with his phone, she asked him what he was doing and who he was talking to. He explained that it was “just an AI bot… not a person,” she recalled, adding: “I was relieved, like, ‘Okay, it’s not a person, it’s like one of his little toys.’” Garcia did not fully understand the bot’s potential emotional power, and she is far from alone.

“This is not on anyone’s radar,” said Robbie Turney, AI program manager at Common Sense Media and lead author of a new guide on AI companions aimed at parents, who are constantly grappling to keep up with confusing new technology and to set boundaries for their children’s safety.

But Turney emphasizes that AI companions are different from, say, the service-desk chatbot you use when you’re trying to get help from a bank. “Those are designed to carry out tasks or respond to requests,” he explains. “Something like Character AI is what we call a companion, and it’s designed to try to form a relationship, or simulate a relationship, with the user. That’s a completely different use case that I think we need parents to be aware of.” This is evident in Garcia’s lawsuit, which includes creepy, sexual, and realistic text exchanges between her son and the bot.

Turney says sounding the alarm about AI companions is especially important for parents of teens, as teens — especially male teens — are particularly vulnerable to over-reliance on technology.

Below, what parents need to know.

What are AI companions and why do children use them?

According to the new Parents’ Ultimate Guide to AI Companions and Relationships from Common Sense Media, created in collaboration with mental health professionals at the Stanford Brainstorm Lab, AI companions are “a new class of technology that goes beyond simple chatbots.” They are specifically designed to, among other things, “simulate emotional bonds and close relationships with users, remember personal details from previous conversations, role-play as mentors and friends, mimic human emotions and empathy,” and “agree more easily with the user than typical AI chatbots.”

Popular platforms include not only Character.AI, which allows its more than 20 million users to create text-based companions and then chat with them, but also Replika, which offers text-based or animated 3D companions for friendship or romance, and others including Kindroid and Nomi.

Children are drawn to them for a range of reasons, from non-judgmental listening and around-the-clock presence to emotional support and escape from real-world social pressures.

Who is at risk and what are the concerns?

Common Sense Media warns that those most at risk are teens — especially those with “depression, anxiety, social challenges, or isolation” — as well as males, young adults going through major life changes, and anyone who lacks real-world support systems.

This last point was particularly troubling to Raffaele Cirillo, a senior lecturer in business information systems at the University of Sydney Business School, who has researched how “emotional” AI challenges the essence of being human. “Our research reveals the paradox of (de)humanization: by humanizing AI agents, we may inadvertently dehumanize ourselves, leading to an ontological blurring of human-AI interactions.” In other words, Cirillo wrote in a recent op-ed for The Conversation with PhD student Angelina Ying Chen, “users may become too emotionally invested if they believe their AI companion truly understands them.”

Another study, this one from the University of Cambridge and focused on children, found that AI chatbots have an “empathy gap” that puts young users, who tend to treat these companions as “real, human-like confidants,” particularly at risk of harm.

For this reason, Common Sense Media highlights a list of potential risks, including that companions can be used to avoid real human relationships, may pose particular problems for people with mental or behavioral challenges, may intensify feelings of loneliness or isolation, carry the potential for inappropriate sexual content, can become addictive, and tend to agree with users – a frightening reality for those experiencing “suicidality, psychosis, or mania.”

How to spot red flags

Parents should look for the following warning signs, according to the guide:

  • Preferring interaction with artificial intelligence over real friendships
  • Spending hours alone talking with a companion
  • Emotional distress when unable to reach a companion
  • Sharing deeply personal information or secrets
  • Developing romantic feelings for the AI companion
  • Low grades or school participation
  • Withdrawal from social and family activities and friendships
  • Loss of interest in previous hobbies
  • Changes in sleep patterns
  • Discussing problems exclusively with an AI companion

Consider getting professional help for your child, Common Sense Media emphasizes, if you notice them withdrawing from real people in favor of the AI, showing new or worsening signs of depression or anxiety, becoming overly defensive about their AI companion use, showing major changes in behavior or mood, or expressing thoughts of self-harm.

How to keep your child safe

  • Set boundaries: Set specific times for AI companion use, and do not allow unsupervised or unlimited access.
  • Spend time offline: Encourage real-world friendships and activities.
  • Check in regularly: Monitor the chatbot’s content, as well as your child’s level of emotional attachment.
  • Talk about it: Keep communication open and judgment-free about AI experiences, paying attention to red flags.

“If parents hear their kids saying, ‘Hey, I’m talking to an AI chatbot,’ that’s really an opportunity to lean in and take that information — and not think, ‘Oh, okay, you’re not talking to a person,’” Turney says. Instead, he says, it’s an opportunity to learn more, assess the situation, and stay alert. “Try to listen from a place of compassion and empathy, and don’t assume that just because it’s not a person it’s safer, or that there’s no need to worry,” he says.

If you need immediate mental health support, call the 988 Suicide and Crisis Lifeline.

