Digital Event Horizon
Computer scientist Alexander Hanff urges that illegally trained large language models (LLMs) be placed in the public domain as punishment, to deter companies from unlawful AI training practices and to promote greater accountability among tech giants. His proposal draws on the legal principle of the "fruit of the poisonous tree," under which evidence obtained through unlawful means is inadmissible in court. Placing such models in the public domain would remove them from corporate control, letting users benefit from them without surrendering personal data or intellectual property, while platforms that sell user data for training would face bans and offending companies would bear real financial consequences.
In a recent op-ed published by The Register, computer scientist and leading privacy technologist Alexander Hanff presented a compelling argument for making illegally trained LLMs public domain as a form of punishment. This approach aims to deter companies from engaging in illegal AI training practices and to promote greater accountability among tech giants.
Hanff's proposal builds on the "fruit of the poisonous tree," a legal principle holding that evidence obtained through unlawful means is inadmissible in court. He suggests extending this principle to AI systems, arguing that illegally built LLMs are like poisoned fruit: their harm cannot be undone simply by deleting or destroying the models.
The environmental impact of training LLMs has become a significant concern. Research by RISE, a Swedish state-owned research institute, reported that OpenAI's GPT-4, a model with roughly 1.7 trillion parameters, was trained on some 13 trillion tokens, consuming vast amounts of energy and producing substantial carbon emissions. Hanff acknowledges the ethical dilemma posed by this environmental cost but emphasizes that it is not a reason to tolerate illegal AI training practices.
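To put that scale in perspective, here is a minimal back-of-envelope sketch in Python using the parameter and token counts cited above, together with the common ~6 × N × D approximation for dense transformer training FLOPs. The accelerator throughput, utilization, and power figures below are illustrative assumptions, not numbers reported by RISE or OpenAI.

```python
# Rough estimate of GPT-4-scale training compute and energy.
# Parameter and token counts are from the article; everything else
# is an assumption for illustration only.

params = 1.7e12   # model parameters (cited above)
tokens = 13e12    # training tokens (cited above)

# Standard approximation: ~6 FLOPs per parameter per token.
train_flops = 6 * params * tokens   # ~1.3e26 FLOPs

# Assumed accelerator: ~1e15 FLOP/s peak at ~40% sustained
# utilization, drawing ~700 W (roughly H100-class; hypothetical).
sustained_flops = 1e15 * 0.4
gpu_hours = train_flops / sustained_flops / 3600

# Energy for the GPUs alone, ignoring cooling and other overhead.
energy_mwh = gpu_hours * 700 / 1e6

print(f"Training compute: {train_flops:.2e} FLOPs")
print(f"GPU-hours (assumed hardware): {gpu_hours:.2e}")
print(f"GPU energy: {energy_mwh:,.0f} MWh")
```

Under these assumptions the run comes to on the order of 10^26 FLOPs, tens of millions of GPU-hours, and tens of gigawatt-hours of electricity for the GPUs alone, which is the kind of sunk cost Hanff's "poisoned fruit" argument confronts.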
Instead, Hanff proposes removing LLMs from the control of their executives and placing them in the public domain. Companies found to have broken the law would see no benefit from the models they built unlawfully, while individuals and organizations alike could benefit from the models' output without handing over personal data or intellectual property.
Hanff's plan also targets online platforms that sell user data to companies like OpenAI. Such platforms would be barred from providing that access, with the threat of disgorgement forcing them to think twice before handing over users' information. The aim is to push companies to treat users' privacy and creative work with respect and to comply with the law.
Critics may argue that extending the principle in this way would create new challenges for regulating AI systems. Hanff counters that meaningful consequences, including the financial hit of losing exclusive control over these models, are necessary to deter repeated instances of illegal AI training.
The Register has been tracking developments in AI regulation and their impact on companies like OpenAI. As the technology evolves, it remains essential to address concerns about data privacy, intellectual property, and the environmental cost of AI development. Hanff's proposal serves as a reminder that the law can be a powerful tool for promoting responsible innovation and holding tech giants accountable.
Related Information:
https://go.theregister.com/feed/www.theregister.com/2024/12/22/ai_poisoned_tree/
Published: Sun Dec 22 11:10:34 2024 by llama3.2 3B Q4_K_M