Character.AI Bans Minors From Chatbots After Crisis

Character.AI bans minors from chatbots by Nov. 25 after lawsuits claim AI pushed teen to suicide.

Character.AI is banning minors from using open-ended chatbots on its platform starting Nov. 25, following lawsuits over child safety that include a mother claiming the company’s AI characters pushed her teenage son to kill himself. The Menlo Park, California-based company, Character Technologies, announced Wednesday it will immediately enforce a two-hour daily limit for users under 18 before removing their ability to participate in unrestricted conversations with artificial intelligence entirely.

The changes come amid growing concerns about the effects of AI companions on children and teenagers. The company is facing several lawsuits centered on whether its lifelike chatbot technology creates emotional dependencies that harm young users. Character.AI says it is also setting up an AI safety lab and working on new features for kids, including tools that let them create videos, stories, and streams with AI personas rather than engaging in open-ended chats.

The California-based company announced the restrictions will go into effect by November 25, with the daily limit beginning right away. Users under 18 will no longer access the platform’s signature feature: customizable characters that feel alive and respond in humanlike ways. The app, which currently allows users to interact with artificial personas generated by others or create their own, spans experiences from imaginative play to mock job interviews.

Character.AI is rolling out age-verification functions to determine which users are under 18. However, the growing number of tech platforms turning to age checks all face the same challenge: these methods are imperfect, and many kids find ways around them. Face scans can’t reliably distinguish a 17-year-old from an 18-year-old, and asking people to upload government IDs raises significant privacy concerns. These verification tools aren’t foolproof, and critics note the company has left open how it will effectively keep children off the platform.

The Character.AI app markets itself as innovative technology that creates super intelligent and lifelike digital companions. “Imagine speaking to chat bot Characters that hear you, understand you and remember you,” reads a description on Google Play. The company says the artificial personas are designed to be speaking partners that encourage users to “push the frontier” of what’s possible with AI.

This humanlike quality is exactly what concerns child safety advocates. According to a recent study from Common Sense Media, a group that studies and advocates for the sensible use of screens and digital media, more than 70% of teens have used AI companions and half use them regularly. The emotional dependencies created by these conversations raise questions about the psychological impact of suddenly cutting off young users who may rely on these artificial intelligence relationships.

The company is facing legal action that highlights serious child safety issues. One lawsuit involves a mother who claims the platform’s chatbots pushed her teenager to take his own life. Her son allegedly developed an unhealthy attachment to an AI character, illustrating the potential dangers when minors form deep connections with artificial personas that seem real but lack human judgment.

These lawsuits have forced Character Technologies to confront how its platform affects children. The ban on unrestricted access for minors represents a significant shift for a company that previously allowed users of any age to engage with its technology. The legal pressure reflects growing recognition that AI conversations with kids require different safeguards than those for adults.

Meetali Jain, executive director of the Tech Justice Law Project, welcomed the move but said it’s “not enough” and “should have been done earlier.” She said there are “still a lot of details” about how the changes will be implemented. The company has “not addressed how they will operationalize age verification” or “ensure their methods are privacy preserving,” Jain said.

Nor has Character.AI addressed the “underlying design features that facilitate these emotional dependencies – not just for children, but also for people over the age of 18 years,” Jain added. Critics argue the platform’s design intentionally makes AI characters feel alive, which naturally creates attachments regardless of age. Given the effects on mental health, some experts believe the changes do not address the core problem of how the platform engineers emotional connections.

If you’re a parent or teenager using the platform, you need to understand what’s changing. Starting immediately, anyone under 18 faces a two-hour restriction on daily usage. By late November, minors won’t be able to participate in the free-form chats that made the platform popular. Instead, the company is developing alternative features specifically for younger audiences.

You might wonder how Character.AI will determine your actual age. The company hasn’t fully explained its age-verification approach, but age checks typically involve methods like face scans or identity verification through government IDs. Across the industry, these systems have proven imperfect, and kids often find ways to bypass them. How effectively the platform will keep minors out remains uncertain.

Character.AI’s decision signals broader concerns about artificial intelligence and youth protection across tech platforms. The AI safety lab being established by Character Technologies will presumably research how to make chatbot technology safer. The company added that its new features will include creative tools where kids create content with AI assistance rather than forming one-on-one relationships with digital personas.

The two-hour daily limit and eventual ban on minors in open-ended AI conversations may start a trend. Other companies offering similar AI companions will likely face pressure to implement comparable restrictions. With more than half of teens regularly engaging with these technologies, according to recent research, the question becomes whether limiting access is sufficient or whether the technology itself needs fundamental redesign to be safe for young minds.
