
US Backs ‘Trust But Verify’ Policy for Advanced AI Models

Key takeaways:

  • The National Telecommunications and Information Administration (NTIA), which advises the White House on technology matters, is recommending a cautiously open approach to regulating advanced AI models.
  • It advises the White House to keep models openly available, but also to watch them closely and restrict them once the risks outweigh the benefits.
  • The NTIA warns, however, that this requires a “significant” investment by the U.S. government, and that if the approach is not executed well, the money will be spent without “substantially mitigating risks.”

In what could be a pivotal moment for AI innovation, the Biden administration is supporting what is tantamount to a ‘trust but verify’ approach to regulating advanced AI models.

A report by the National Telecommunications and Information Administration (NTIA), which advises the White House on technology matters, recommends taking a cautiously open approach: Let the developers of these models release key components publicly if they wish, but keep monitoring the models and restrict them once the risks of being open outweigh the benefits.

One reason for this recommendation is practicality: U.S. restrictions on advanced AI models would not curb risk effectively if AI talent leaves for more permissive nations and develops the technology there, the NTIA report said. Another reason is competition: if the U.S. restricts American tech companies while hostile nations remain free to advance their AI, the world is no safer.

The NTIA also noted that, at present, there is not enough evidence that advanced AI models with open components such as weights – or parameters that determine the capabilities of a model – pose greater risk than closed models.

Thus, restricting advanced AI models would do more harm than good – at least, at this time.

“Prohibiting the release … would limit the crucial evidence-gathering necessary while also limiting the ability of researchers, regulators, civil society, and industry to learn more about the technology, as the balance of risks and benefits may change over time,” the NTIA said.

The NTIA proposes three steps the government can take to monitor AI models:

  • Collect evidence: Research the safety of AI models and their uses; support external research into their current and future capabilities and risk mitigations; and keep a set of risk indicators.
  • Evaluate evidence: Assess whether the risks of openness outweigh the benefits, using benchmarks and developing government expertise.
  • Act on the evidence: Restrict access to models; engage in risk mitigation measures; work with international partners to set norms; and invest in model research.

Major drawback

The NTIA does concede, however, that a “major” drawback to monitoring is the cost to the government and businesses.

“AI will impact many corners of government, so cross-sector monitoring capacity will likely require significant investment,” the report said. Also, “monitoring imposes obligations on companies, which could be costly, especially for smaller companies in the AI value chain, and burden the U.S. innovation system.”

The agency said it is crucial to strike the right balance between openness and monitoring. If this approach is not executed well, the government will spend heavily “without substantially mitigating risks.”
