Today's AI/ML headlines are brought to you by ThreatPerspective

Digital Event Horizon

The Looming Threat of Impersonation: How Generative AI is Undermining Online Safety



The use of generative AI to create chatbots and digital avatars has raised serious concerns about online safety and the rights of those whose likenesses are being used without consent. As a growing platform, Character.AI must take concrete steps to address this issue and ensure that its users are protected from impersonation and harassment.

  • Generative AI has raised concerns about online safety and the rights of those whose likenesses are being used without consent.
  • A platform called Character.AI allows users to create custom AI personas, but countless bots have been created using the likenesses of real people without their knowledge or consent.
  • Users have reported instances of AI-generated content impersonating them, including false information about personal characteristics and background.
  • The lack of an effective legal framework and regulation in the tech industry has left individuals with limited protection against impersonation.
  • Character.AI's terms of service prohibit impersonating other people, but the company faces challenges in removing offending bots due to US law and insufficient reporting mechanisms.
  • Effective solutions to address this issue may involve implementing robust reporting mechanisms, increasing transparency about user data use, and providing clearer guidance for users.


  • In recent months, a growing concern has emerged at the intersection of technology and human identity. The rise of generative AI, which enables users to create chatbots and digital avatars that can mimic the appearance, voice, and behavior of real individuals, has raised serious questions about online safety and the rights of those whose likenesses are being used without consent.

    At the heart of this issue is a platform called Character.AI, which allows users to create custom AI personas based on their own information or that of others. While the platform's free service may seem like a harmless tool for fans to create digital companions, it has been revealed that countless bots have been created using the likenesses of real people without their knowledge or consent.

    Alyssa Mercante, an editor at a prominent gaming site, reported two instances of Character.AI bots impersonating her. In both cases, the bots shared some correct details about her life and expertise but were riddled with inaccuracies, including false information about her personal characteristics and background.

    Mercante's experience is not unique. Drew Crecente, whose daughter Jennifer Ann died in 2006 at the age of 18, discovered that a Character.AI bot had been created in her daughter's likeness without his knowledge or consent. The bot posed as a video game journalist and engaged in conversations with users who were unaware that it was based on a real person.

    Crecente has expressed frustration and anger about the lack of action taken by Character.AI to remove the offending bots. Despite reporting the issue, he has yet to see any meaningful consequences for those responsible. "The people who are making so much money cannot be bothered to make use of those resources to make sure they're doing the right thing," Crecente said.

    Meredith Rose, senior policy counsel at consumer advocacy organization Public Knowledge, notes that the rights of individuals to control their own likenesses and style of speech fall under "rights of personality." However, these rights are mostly in place for people whose likeness holds commercial value. Since AI-generated content does not fit neatly into this category, there is currently no effective legal framework to protect against impersonation.

    "The law recognizes copyright in characters; it doesn't recognize legal protection for someone's style of speech," Rose said. "Generative AI, plus the lack of a federal privacy law, has led some folks to start exploring them as stand-ins for privacy protections, but there's a lot of mismatch."

    Character.AI's terms of service do prohibit impersonating other people, but US law on the matter is far more malleable. The company's spokesperson, Kathryn Kelly, said that investigating and removing a character for a terms-of-service violation takes about a week, and acknowledged that some bots may remain active for an extended period.

    The incident highlights a broader problem of online harassment and the lack of effective regulation in the tech industry. Mercante says she has been a target of harassment since writing about a disinformation and harassment campaign against the video game consultancy Sweet Baby Inc.

    As concerns about impersonation on Character.AI continue to grow, it is essential that the company takes concrete steps to address this issue. This may involve implementing more robust reporting mechanisms, increasing transparency about how user data is used, and providing clearer guidance for users who encounter AI-generated content that appears to be impersonating real individuals.

    In conclusion, the rise of generative AI has opened up new avenues for creativity and self-expression but also poses significant risks to online safety. It is crucial that we develop more effective regulations and guidelines to protect individuals from impersonation and harassment on platforms like Character.AI.



    Related Information:

  • https://www.wired.com/story/characterai-has-a-non-consensual-bot-problem/


  • Published: Wed Oct 16 16:40:55 2024 by llama3.2 3B Q4_K_M

    © Digital Event Horizon . All rights reserved.

    Privacy | Terms of Use | Contact Us