Digital Event Horizon
US businesses are being encouraged to adopt the US National Institute of Standards and Technology (NIST) AI Risk Management Framework in the absence of federal action on regulating artificial intelligence. Experts warn that Congress is unlikely to pass legislation governing AI, leaving states such as California, New York, and Connecticut to take the lead on AI safety regulation.
The recent elections in the United States have resulted in a Republican majority in both the House of Representatives and the Senate. Despite this shift in power, experts warn that federal legislation governing the use of artificial intelligence (AI) in business is unlikely to pass Congress anytime soon. In the absence of such legislation, US businesses are being encouraged to adopt the US National Institute of Standards and Technology (NIST) AI Risk Management Framework, a voluntary framework designed to help organizations manage the risks associated with AI.
According to Chandler Morse, a former chief of staff to Republican Senator Jeff Flake, the NIST framework is now the most advanced component in US AI policy, despite being voluntary. Morse attributes this development to the fact that Congress has been unable to pass legislation governing AI use due to close margins and partisan disagreements. Instead, states such as California, New York, Connecticut, and Colorado are taking the lead on regulating AI safety.
The lack of federal action on AI regulation has left US businesses looking elsewhere for guidance. In this context, the NIST framework has emerged as a key resource for organizations seeking to implement AI safely and responsibly. The framework provides a set of guidelines and best practices for managing AI risks, including those related to data privacy, security, and bias.
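To make the framework's structure concrete, here is a minimal sketch of how an organization might track AI risks against the AI RMF's four core functions (Govern, Map, Measure, Manage, which come from the framework itself). The `RiskEntry` class, its fields, and the sample entries are hypothetical illustrations, not part of NIST's guidance.

```python
from dataclasses import dataclass

# The four core functions defined by the NIST AI Risk Management Framework.
CORE_FUNCTIONS = ("Govern", "Map", "Measure", "Manage")

@dataclass
class RiskEntry:
    """One entry in a hypothetical AI risk register (illustrative only)."""
    risk: str        # a risk the organization has identified
    function: str    # the AI RMF core function that addresses it
    mitigation: str  # the planned control or practice

    def __post_init__(self):
        # Reject entries that do not map to a core function.
        if self.function not in CORE_FUNCTIONS:
            raise ValueError(f"unknown AI RMF function: {self.function}")

# Hypothetical sample register covering privacy, bias, and governance risks.
register = [
    RiskEntry("Training data may contain personal data", "Map",
              "Inventory data sources and document provenance"),
    RiskEntry("Model outputs may exhibit demographic bias", "Measure",
              "Run bias metrics on held-out evaluation sets"),
    RiskEntry("No clear owner for AI incidents", "Govern",
              "Assign accountable roles and escalation paths"),
]

# Group identified risks by core function for reporting.
by_function = {}
for entry in register:
    by_function.setdefault(entry.function, []).append(entry.risk)
```

A register like this is one simple way a business could show which parts of the framework its controls address, and where gaps remain (here, no entry yet maps to the Manage function).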
In an interview with The Register, Morse emphasized the importance of engaging with stakeholders and sharing opinions on AI policy. He noted that the development of effective AI regulation is unlikely to occur without broad engagement and collaboration. "Where this all lands is going to direct where AI goes," Morse said, highlighting the need for businesses to take an active role in shaping AI policy.
Joel Meyer, public sector president at Domino Data Lab and a former deputy assistant secretary for strategic initiatives at the Department of Homeland Security, warned that any executive actions on AI taken by President-elect Donald Trump could be subject to review and potentially overturned by future administrations. His perspective highlights the complexities of US federal policy and underscores the need for businesses to develop their own AI policies and strategies rather than relying solely on federal regulation.
In this context, the NIST framework offers a vital resource for US businesses seeking to navigate the complexities of AI regulation. By adopting the framework's guidelines and best practices, organizations can help ensure that their AI systems are developed and deployed safely and responsibly. As the landscape of AI regulation continues to evolve in the United States, it is clear that businesses will play a crucial role in shaping the future of AI policy.
Related Information:
https://go.theregister.com/feed/www.theregister.com/2024/12/13/nist_framework_for_ai_presents/
Published: Fri Dec 13 12:37:02 2024 by llama3.2 3B Q4_K_M