Digital Event Horizon
Debunking the Myth of "Open" AI: A Critical Examination of the True Nature of Artificial Intelligence
The concept of "open" AI has been touted as a solution to societal concerns about artificial intelligence, but research suggests the notion is flawed and perpetuates misinformation. A study by David Widder found that even seemingly open AI systems remain under the control of powerful actors, who wield the rhetoric of openness to shape policy. "Openwashing" occurs when companies feign openness while retaining control over their systems; Meta's LLaMA-3 is cited as an example. The study argues that policymakers must adopt additional measures, such as antitrust enforcement and data privacy protections, to mitigate the concentration of power in the AI sector.
The concept of "open" AI has been touted as a panacea for concerns over artificial intelligence's potential impact on society. Proponents claim that open AI will democratize access to advanced technologies, ensuring integrity and security through transparency and scrutiny. However, research suggests that this notion is fundamentally flawed: it perpetuates misinformation and conceals the true extent of industry concentration in the development and deployment of AI systems.
The paper, by David Widder, a post-doctoral fellow at Cornell University, examines the concept of "open" AI by comparing it to open source software. Widder notes that open source software has indeed democratized software development, ensuring integrity and security through transparency and scrutiny. The paper argues, however, that AI systems operate under vastly different circumstances.
The researcher points out that powerful actors in the AI sector are seeking to shape policy using claims about "open" AI's benefits for innovation and democracy, as well as its potential risks. Definitions matter significantly when shaping policy, and the study highlights how the rhetoric of "open" AI is frequently wielded in ways that exacerbate, rather than reduce, the concentration of power in the AI sector.
The paper examines various aspects of AI systems, including models, data, labor, frameworks, and computational power. It reveals that even seemingly open AI systems can fall far short of genuine openness. Meta's LLaMA-3, for example, offers "little more than an API or the ability to download a model subject to distinctly non-open use restrictions." The paper calls this phenomenon "openwashing": companies feign openness while maintaining control over their systems.
On the other hand, EleutherAI's Pythia stands out as a notable exception. By offering access to source code, underlying training data, and full documentation, as well as licensing the AI model for wide reuse under terms aligned with the Open Source Initiative's definition of open source, Pythia exemplifies true openness in AI.
However, the research warns that even the most expansive interpretation of openness is unlikely to counter the vested interests of tech giants. Significant barriers to market entry, including substantial data requirements, development time, and computational power, restrict access to these systems. Other measures are therefore necessary to create a more level playing field.
The study concludes that pinning hopes on "open" AI alone will not lead to a more diverse, accountable, or democratized industry. Instead, policymakers must adopt additional measures, including antitrust enforcement and data privacy protections, to mitigate the concentration of power in the AI sector.
In essence, the myth of "open" AI serves as a misdirection, distracting from the true nature of the industry's dynamics. As such, it is crucial to critically examine this concept and its implications for society, rather than blindly accepting it as a panacea for addressing AI-related concerns.
Related Information:
https://go.theregister.com/feed/www.theregister.com/2024/12/02/open_ai_research/
Published: Mon Dec 2 00:43:52 2024 by llama3.2 3B Q4_K_M