Last year, Megan Garcia filed a lawsuit against the artificial intelligence company Character.ai, alleging that her son’s AI girlfriend contributed to his death by suicide.
Now she’s pushing for legislation to safeguard children from so-called AI “companionship” chatbots, which can act as friends, lovers and therapists.
Character.ai’s users, many of them children, chat to bots based on both fictional characters and real people.
The platform says it receives 20,000 queries per second – about a fifth of the volume that Google search handles.
Garcia met with California state senator Steve Padilla on Tuesday, telling a press conference that the bots are “inherently dangerous” and can encourage inappropriate sexual conversations or self-harm.
They can also be highly addictive.
To address the problem, Padilla has proposed a bill that would require chatbot platforms to introduce a number of safety measures, including protocols for responding to suicidal ideation.
It joins a similar California bill that aims to ban AI companions for under-16s, and another proposed in New York.
Even if Padilla’s bill is passed, the legislation may still not go far enough.
Further listening: My AI girlfriend: a cure for loneliness