Today's AI/ML headlines are brought to you by ThreatPerspective

Digital Event Horizon

Millions of People Are Using Abusive AI ‘Nudify’ Bots on Telegram: A Deep Dive into the Widespread Problem of Non-Consensual Intimate Image Abuse


Millions of people are using abusive AI ‘nudify’ bots on Telegram, creating highly realistic images that can cause irreparable harm to their victims. A recent investigation by WIRED reveals the widespread problem of non-consensual intimate image abuse on the messaging app.

  • Millions of people are using Telegram bots, known as "nudify" or "undress" bots, to create and disseminate non-consensual intimate images.
  • The bots use artificial intelligence (AI) algorithms to manipulate images, often with the intent of humiliating or blackmailing their victims.
  • Telegram's vast reach and ease of use have made it an attractive platform for non-consensual intimate image abuse tools.
  • The lack of clear guidelines in Telegram's terms of service has enabled abusers to exploit loopholes in moderation policies.
  • The scale of the problem is staggering, with at least 50 identified bots and millions of users engaging with them.
  • Telegram has taken steps to address the issue, but critics argue that more needs to be done to proactively moderate its platform.



  • In recent years, the internet has witnessed an alarming rise in the proliferation and accessibility of non-consensual intimate image abuse (NCII) tools. These tools have been used to create highly realistic and disturbing images that can cause irreparable harm to victims. One platform that has become a hub for them is Telegram, a messaging app with over 200 million active users worldwide.

    According to a recent investigation by WIRED, millions of people are using Telegram bots, collectively known as "nudify" or "undress" bots, to create and disseminate non-consensual intimate images. These bots use artificial intelligence (AI) algorithms to manipulate images, often with the intent of humiliating or blackmailing their victims.

    The problem of NCII is not new, but it has gained significant attention recently. A closer look at Telegram's community features and its vast user base reveals a disturbing landscape in which abusers can easily create, share, and monetize these malicious images.

    Telegram's vast reach and the ease with which users can create bots have made it an attractive platform for NCII tools. The messaging app's terms of service do not explicitly prohibit the creation or sharing of such images, leaving room for interpretation, and this lack of clear guidelines has enabled abusers to exploit loopholes in Telegram's moderation policies.

    The most disturbing aspect of this phenomenon is that many of these bots are well-crafted and can produce highly realistic images with ease. Some even operate on a "token" system, charging users for access to image generation, and the revenue from these sales has likely fueled the further proliferation of NCII tools.

    The scale of this problem is staggering. According to WIRED's investigation, at least 50 Telegram bots have been identified that create and disseminate non-consensual intimate images. These bots have collectively attracted millions of users, with some individual bots boasting over 400,000 monthly users. The total number of users who engage with these bots is estimated to be in the tens of millions.

    The impact of NCII tools on their victims cannot be overstated. Survivors often face long-term psychological trauma, social isolation, and even suicidal thoughts as a result of this abuse. Lawmakers and tech companies have taken steps to address the issue, but more must be done to prevent the spread of these malicious tools.

    Telegram has taken steps to address the problem, including removing identified bots and channels that promote NCII content. However, critics argue that the company's efforts are insufficient and that it must do more to proactively moderate its platform. Elena Michael, co-founder of #NotYourPorn, a campaign group working to protect people from image-based sexual abuse, stated, "The burden shouldn't be on an individual to take action; surely it should be on the company to put something in place that's proactive rather than reactive."

    In conclusion, the widespread use of Telegram bots for non-consensual intimate image abuse is a pressing concern that demands immediate attention. As technology continues to advance and the internet becomes increasingly intertwined with our daily lives, it is crucial that we develop effective strategies to prevent such harm. The spread of NCII tools like those found on Telegram serves as a stark reminder of the ongoing battle against online harassment and the importance of proactive measures from tech companies and lawmakers.

    Related Information:

  • https://www.wired.com/story/ai-deepfake-nudify-bots-telegram/

  • https://globalcommunityweekly.substack.com/p/millions-of-people-are-using-abusive


  • Published: Wed Oct 16 17:34:58 2024 by llama3.2 3B Q4_K_M

    © Digital Event Horizon . All rights reserved.

    Privacy | Terms of Use | Contact Us