Google Gemini has been labeled as “high risk” for teens and children, according to a recent risk assessment carried out by Common Sense Media. The group, a kids-safety-focused non-profit, offers ratings and reviews of media and technology. The body released its review on Friday, giving details on why it labeled the platform risky for children.
According to the organization, Google Gemini clearly told kids that it was a computer and not a friend — a point in its favor, since the belief that chatbots are companions has been linked to delusional thinking and psychosis in emotionally vulnerable individuals. Even so, the group said there was room for improvement on several other fronts.
In its report, Common Sense claimed that Gemini's Under 13 and Teen Experience tiers both appeared to be the adult version of the AI under the hood, with only some additional safety features layered on top to distinguish them.
Common Sense noted that for AI products to be genuinely suitable for children, they need to be built with children in mind from the ground up, not adapted from adult products by bolting on restrictions.
In its analysis, Common Sense said it found that Gemini could still share inappropriate and unsafe material with children, noting that most of them may not be ready for it. For example, it highlighted that the model could still surface information related to sex, drugs, and alcohol, as well as unsafe mental health advice. The latter could be particularly concerning for parents, as AI has reportedly played a role in teen self-harm in recent months.
OpenAI is currently facing a wrongful death lawsuit filed after a teenager died by suicide, having allegedly consulted ChatGPT for months about his plans. Reports claimed the boy was able to bypass the model’s safety guardrails, leading it to provide information that aided him.
AI companion maker Character.AI was previously sued after a teen died by suicide. The boy’s mother claimed he became obsessed with the chatbot and spent months talking to it before he eventually harmed himself.
The analysis comes as several leaks have indicated that Apple is reportedly considering Gemini as the large language model (LLM) that will be used to power its forthcoming AI-enabled Siri, which is expected to be released next year.
In its report, Common Sense also noted that Gemini’s products for kids and teens failed to tailor guidance and information to different age groups, instead offering essentially the same responses the model gives adults. As a result, both tiers were labeled high risk in the overall rating.
“Gemini gets some basics right, but it stumbles on the details,” Common Sense Media Senior Director of AI Programs Robbie Torney said.
“An AI platform for kids should meet them where they are, not take a one-size-fits-all approach to kids at different stages of development. For AI to be safe and effective for kids, it must be designed with their needs and development in mind, not just a modified version of a product built for adults,” Torney added.
However, Google has pushed back against the assessment, noting that its safety features are improving. The company said it has specific safeguards in place for users under 18 to prevent harmful outputs, and that it regularly reviews its policies and consults with outside experts to strengthen its protections.