Digital Event Horizon
Microsoft has filed a lawsuit against three individuals and seven customers who allegedly used a service that exploited vulnerabilities in the company's AI platform to create harmful and illicit content. The lawsuit, filed in federal court in the Eastern District of Virginia, accuses the defendants of bypassing Microsoft's safety guardrails and using the company's services to generate content that promotes violence, harassment, and other forms of abuse.
The service, which Microsoft shut down in September, allegedly relied on undocumented APIs and other tricks to bypass the safety measures designed to prevent such content from being created. The defendants also allegedly compromised the accounts of legitimate Microsoft customers and sold access to those accounts through a now-shuttered website.
According to Steven Masada, assistant general counsel for Microsoft's Digital Crimes Unit, the company has observed a foreign-based threat-actor group develop sophisticated software that exploited exposed customer credentials scraped from public websites. This software was used to identify and unlawfully access accounts with certain generative AI services and purposely alter the capabilities of those services.
The lawsuit alleges that the defendants violated several laws, including the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act, the Lanham Act, and the Racketeer Influenced and Corrupt Organizations Act, and accuses them of wire fraud, access device fraud, common law trespass, and tortious interference.
In recent years, Microsoft has taken steps to prevent exploits of this kind. The company has developed guardrails that inspect both the prompts users submit and the resulting output for signs of content that violates its terms of service. The defendants, however, allegedly found ways to bypass these measures using sophisticated software.
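The two-stage inspection described above can be sketched in a few lines. The following is a minimal illustration, not Microsoft's actual implementation: the keyword check stands in for what in production would be far more sophisticated classifiers, and all names (`violates_policy`, `guarded_generate`, `BLOCKED_TERMS`) are hypothetical.

```python
# Sketch of a two-stage content guardrail: the prompt is screened before
# it reaches the model, and the model's output is screened again before
# it reaches the user. The keyword classifier below is a toy stand-in
# for real moderation models; every name here is hypothetical.

BLOCKED_TERMS = {"violence", "harassment"}  # placeholder policy list

def violates_policy(text: str) -> bool:
    """Toy classifier: flag text containing any blocked term."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKED_TERMS)

def guarded_generate(prompt: str, model) -> str:
    # Stage 1: inspect the user's prompt before it reaches the model.
    if violates_policy(prompt):
        return "[blocked: prompt violates content policy]"
    output = model(prompt)
    # Stage 2: inspect the generated output before returning it.
    if violates_policy(output):
        return "[blocked: output violates content policy]"
    return output

if __name__ == "__main__":
    echo_model = lambda p: f"Response to: {p}"
    print(guarded_generate("tell me a story", echo_model))
    print(guarded_generate("promote violence", echo_model))
```

The point of checking both sides is that a prompt can look innocuous while the output does not, and vice versa; bypassing such a filter typically means finding an input path (such as an undocumented API) that skips one or both checks.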
The lawsuit seeks an injunction barring the defendants from "engaging in any activity herein," along with damages for the harm caused by their actions.
In a statement, Masada wrote: "Microsoft's AI services deploy strong safety measures, including built-in safety mitigations at the AI model, platform, and application levels. However, cybercriminals have found ways to exploit exposed customer credentials scraped from public websites, using these tools to identify and unlawfully access accounts with certain generative AI services."
The case highlights the ongoing challenge of preventing malicious actors from exploiting vulnerabilities in AI platforms. As the use of AI continues to grow, it is likely that new threats will emerge, and companies like Microsoft will need to stay vigilant to protect their users.
Related Information:
https://arstechnica.com/security/2025/01/microsoft-sues-service-for-creating-illicit-content-with-its-ai-platform/
Published: Fri Jan 10 18:42:00 2025 by llama3.2 3B Q4_K_M