U.S. Representatives Deborah Ross (NC-02) and Don Beyer (VA-08) introduced the bicameral Secure Artificial Intelligence Act, legislation to enhance the monitoring and management of security and safety incidents and risks related to artificial intelligence (AI). As AI continues to rapidly evolve, the potential for security and safety incidents that could impact organizations and the public also increases. By establishing an AI Security Center and updating cybersecurity reporting systems to include AI, the legislation would support AI research while mitigating threats posed by this emerging technology.
In May, Senators Mark Warner (D-VA) and Thom Tillis (R-NC) introduced the Secure Artificial Intelligence Act in the Senate.
“AI has the potential to enhance the lives of our people – from driving discoveries in new scientific domains to propelling cutting-edge medical research and more,” said Congresswoman Ross. “However, as we cross this new frontier, it is essential that we implement smart policies to address security and safety risks while continuing to foster innovation and competitiveness. As a representative of the world-renowned Research Triangle, I’m proud to introduce this legislation, which will enhance safety by establishing a more robust tracking and processing system to protect organizations and individuals from AI-related risks. I’m grateful for the collaboration of Congressman Beyer and my colleagues Senators Warner and Tillis. I will continue working to see this commonsense legislation become law.”
“AI systems offer incredible opportunities for innovation, but they are vulnerable to unique attacks such as deliberate attempts to corrupt or leak data and cause these systems to malfunction,” said Rep. Don Beyer. “Our Secure AI Act builds upon cybersecurity best practices to establish incident reporting systems that help AI developers quickly respond to and fix such vulnerabilities, making AI systems more resilient and spurring innovation and deployment.”
Specifically, the bipartisan Secure Artificial Intelligence Act would:
- Require the National Institute of Standards and Technology (NIST) to update the National Vulnerability Database and require the Cybersecurity and Infrastructure Security Agency (CISA) to update the Common Vulnerabilities and Exposures Program or develop a new process to track voluntary reports of AI security vulnerabilities;
- Establish a public database to track voluntary reports of AI security and safety incidents;
- Create a multi-stakeholder process that encourages the development and adoption of best practices that address supply chain risks associated with training and maintaining AI models; and
- Establish an Artificial Intelligence Security Center at the National Security Agency to provide an AI research testbed to the private sector and academic researchers, develop guidance to prevent or mitigate counter-AI techniques, and promote secure AI adoption.
Bill text is available here.
###