Congress Works to Tighten AI Risk Management

Believe it or not, border control, impeachment investigations, escalating overseas wars, and Donald Trump are not the only issues facing Congress. There’s also Artificial Intelligence (AI), arguably far more important in the long run than even Trumpism.

Among Congress’s attempts to deal with AI is HR 6936.  Introduced last month by Rep. Ted Lieu (D-CA), the bill would “require Federal agencies to use the Artificial Intelligence Risk Management Framework developed by the National Institute of Standards and Technology with respect to the use of artificial intelligence.”

In drafting the Act (short title: “Federal Artificial Intelligence Risk Management Act of 2024”), the bill’s sponsors noted that while AI can be useful in “unlocking scientific breakthroughs that improve healthcare outcomes, enabling personalized education, and providing safer transportation,” AI systems also pose potential risks.

Because of those risks, Congress directed the National Institute of Standards and Technology (NIST) to develop an AI Risk Management Framework that organizations could employ to ensure they use AI systems in a trustworthy manner.

HR 6936 would:

  • Require the Office of Management and Budget (OMB) to issue guidance requiring agencies to incorporate the AI Risk Management Framework into their AI risk management efforts consistent with guidelines;
  • Require OMB to establish a workforce initiative that gives federal agencies access to diverse expertise;
  • Require the Administrator of Federal Procurement Policy and the Federal Acquisition Regulatory Council to ensure that federal agencies procure AI systems that incorporate the Framework; and
  • Require NIST to develop test and evaluation capabilities for AI acquisitions.

The bill has been referred to the House Committee on Oversight and Accountability and to the Committee on Science, Space, and Technology.  Whether it ever leaves committee is anyone’s guess.