Digital Event Horizon
More than 60 commercial organizations, non-profits, and academic institutions have called on Congress to pass legislation authorizing the creation of a US AI Safety Institute within the National Institute of Standards and Technology (NIST). The coalition argues that the measure would provide greater certainty and clarity about NIST's role in advancing US AI innovation and adoption while addressing growing concerns over AI safety and competitiveness, and it is urging lawmakers to act before the end of 2024 given the current Congress's sluggish legislative pace.
In recent days, a growing coalition of organizations has stepped forward to express concern over the lack of federal regulation of artificial intelligence (AI) in the United States. A letter signed by more than 60 commercial organizations, non-profits, and academic institutions has been submitted to Congress, urging lawmakers to pass legislation authorizing the creation of the US AI Safety Institute within NIST. The coalition includes prominent tech giants such as Amazon, Google, and Microsoft, as well as defense contractors such as Lockheed Martin and Palantir.
The call to action was made in an open letter, published on Tuesday, which emphasized the need for federal legislation to address AI safety concerns. The letter was signed by a broad range of stakeholders, including advocacy groups like Public Knowledge and academic institutions such as Carnegie Mellon University. The breadth of the coalition underscores the growing unease among industry leaders over the lack of oversight in the rapidly evolving field of AI.
The proposed legislation would establish a NIST-run AI center focused on research, standards development, and public-private partnerships to advance artificial intelligence technology. The Center for AI Advancement and Reliability, as it is called, would be responsible for developing voluntary best practices for the development and deployment of AI systems. While these voluntary guidelines are seen as an improvement over the lack of regulation in this area, some critics argue that they are not sufficient to address the risks associated with AI.
Critics of the current approach to regulating AI point out that voluntary guidelines are often ineffective in preventing harm, a concern underscored by California Governor Gavin Newsom's veto of SB 1047, an AI safety bill authored by state Senator Scott Wiener. The bill would have established enforceable obligations for companies developing the most powerful AI systems, but it faced resistance from the tech industry and was ultimately vetoed over concerns about its potential impact on the state's economy.
Supporters argue that, in contrast, federal legislation would provide greater certainty and clarity about NIST's role in advancing US AI innovation and adoption. The letter also underscores the importance of international cooperation on AI safety: as other governments move quickly to regulate AI, the United States risks falling behind in the global AI race.
Industry leaders are urging lawmakers to act before the end of 2024, as the current legislative session has been marked by a lack of productivity: Congress has enacted just 320 pieces of legislation so far, compared with an average of about 782 per Congress over the past 50 years, the fewest since GovTrack.us began keeping records.
The call for federal AI law comes as concerns over AI safety and competitiveness continue to grow. The rapid development of AI technology has raised questions about its potential impact on society, including issues related to job displacement, bias, and cybersecurity.
In a statement, Information Technology Industry Council (ITI) president and CEO Jason Oxman declared, "As other governments quickly move ahead, Members of Congress can ensure that the US does not get left behind in the global AI race by permanently authorizing the AI Safety Institute and providing certainty for its critical role in advancing US AI innovation and adoption." The call to action from industry leaders highlights the growing recognition of the need for federal regulation of AI.
Related Information:
https://go.theregister.com/feed/www.theregister.com/2024/10/23/ai_firms_and_civil_society/
Published: Wed Oct 23 02:05:53 2024 by llama3.2 3B Q4_K_M