Today, the House Science, Space, and Technology Committee passed Congresswoman Ross’ bipartisan legislation, the AI Incident Reporting and Security Enhancement Act, which she introduced with Congressmen Jay Obernolte (CA-23) and Don Beyer (VA-08). The bill tackles core security challenges posed by advances in artificial intelligence (AI), including vulnerability management and the tracking of “incidents,” or harms.

The National Institute of Standards and Technology (NIST) operates the National Vulnerability Database (NVD), an authoritative dataset that organizations around the world rely on to identify and address cybersecurity vulnerabilities. The AI Incident Reporting and Security Enhancement Act would direct NIST to update the NVD’s definitions and processes so that the database keeps pace with the rapid development of AI. It would also direct NIST to engage with the private sector and assist in setting standards and guidance for technical vulnerability management processes for AI systems.

To address problems associated with AI usage, the AI Incident Reporting and Security Enhancement Act would also direct NIST to convene stakeholders to study the failures and vulnerabilities of AI systems across sectors, supporting the reporting and documentation of AI incidents. NIST would then be required to submit its findings to Congress.

“I’m proud to represent much of the Research Triangle Park – home to organizations and institutions breaking new ground every day in AI and cybersecurity,” said Congresswoman Ross. “AI has the profound potential to improve the lives of our people, but we must enact necessary guardrails to address risks associated with this evolving field. That’s why I am excited to see my bipartisan AI Incident Reporting and Security Enhancement Act pass out of the House Science, Space, and Technology Committee. This commonsense legislation is a concrete step that Congress can take to better understand and advance safe and secure AI systems.”

"Like existing cybersecurity industry practices, it is essential that AI companies have a centralized mechanism to report potential risks to the security and safety of their systems, ensuring proactive collaborative measures are taken to mitigate risks,” said Rep. Obernolte. “By fostering consensus on defining and addressing these risks, the AI Incident Reporting and Security Enhancement Act will help protect our critical infrastructure and our economy."

“While AI offers revolutionary opportunities for advancements in scientific research, health care and more, AI systems can be vulnerable to attacks that expose personal data, exacerbate cybersecurity risks, and beyond,” said Rep. Don Beyer. “Congress cannot afford to be behind the curve when it comes to addressing the risks of AI and ensuring the proper guardrails are in place. I’m glad to see our bipartisan AI Incident Reporting and Security Enhancement Act pass out of the House Science, Space, and Technology Committee and will continue working on this important effort with colleagues in both parties.”

Bill text is available here.

###