Introduction
This bill, introduced by Senator Maria Cantwell, is a promising first step that improves government and public capacity for building AI safely. It would establish the AI Safety Institute, create testbeds for AI, initiate building an international coalition on AI standards, create public datasets for AI training, and promote federal innovation in AI. The Senate Commerce Committee has passed the bill and sent it to the full Senate for consideration; it has not yet been passed into law.
The AI Safety Institute would conduct scientific research on how to build AI responsibly but would not have the authority to translate that research into binding standards. It would therefore lack the ability to robustly ensure that AI developers are behaving responsibly. For example, because the model evaluations performed by the Institute would rely only on voluntarily provided data, AI developers could refuse to provide access until their models are already public. As a result, flaws in model safety would not be identified until those flaws were already actively posing a risk.
The bill’s efforts to coordinate with U.S. allies on standards are a useful step in building international consensus on AI issues. It can also be seen as a clear attempt to counter China’s push to lead in global governance of AI, hinting at the geopolitical struggle over AI.
Finally, the bill's focus on supporting innovation and promoting government adoption of AI is admirable but takes an anti-regulatory approach that may undermine the ability of Federal agencies to mitigate risks.
Amendments:
A number of notable amendments were introduced and accepted before the bill was passed out of committee. The amendments can be accessed here.
Senator Budd amended the bill to explicitly exclude China from the international coalition on AI until it complies with the U.S. interpretation of China's World Trade Organization commitments. He further amended it to exclude entities controlled by China, Russia, Iran, or North Korea from accessing any resources of the AI Safety Institute and to require that no data be shared with any hostile countries.
Senator Cruz amended the bill to prohibit federal agencies from regulating AI in a number of ways related to race and gender. The most controversial provision prohibits agencies from mandating that AI systems be designed in an equitable way to prevent disparate impacts based on a protected class (such as race or gender), which contradicts the Biden Executive Order on AI. It also prohibits Federal agencies from reviewing input data to determine whether an AI system is biased or produces misinformation. In response to the controversy over this particular amendment, a spokesperson for the committee's Democratic majority stated that “Rather than giving the senator [Cruz] a platform to amplify divisive rhetoric and delay committee progress, the Chair accepted the amendment — knowing there will be many opportunities to fix the legislation, and with the support of the other Republican Senators.” This statement indicates that future versions of the bill will likely continue to evolve.
Senator Cruz also amended the bill to require that consultants and other temporary employees cannot perform any “inherently governmental function” for federal agencies related to AI or other critical and emerging technologies. This would bar temporary employees from many roles, restricting agencies' ability to bring in private-sector talent to advise on AI.
Senator Schatz amended the bill to include energy storage and optimization as an additional focus of the testbed program, which previously pertained only to advanced materials and manufacturing. He also added the promotion of explainability and mechanistic interpretability (i.e., the ability to understand how AI systems work internally) as priorities of the Institute. Another addition placed developing cybersecurity for AI, and developing AI for modernizing the code and software of government agencies, on the list of Federal Grand Challenges that will inform agency innovation competitions. His final amendment mandates that multilateral research partnerships include coordination with other Federal open data efforts when possible.
Senators Young and Hickenlooper amended the bill to significantly expand it by creating a nonprofit to be known as the “Foundation for Standards and Metrology”. This nonprofit would support the mission of the AI Safety Institute in a broad variety of ways, notably including supporting the commercialization of federally funded research. The Foundation would be an independent 501(c)(3), and its board members would be appointed from a list created by the National Academies of Sciences, Engineering, and Medicine. The Foundation is directed to create a plan to become financially self-sustaining within five years of its creation, and its initial annual budget is set at a minimum of $500,000 and a maximum of $1,250,000.
Detailed Breakdown of the Bill’s Content:
Subtitle A—Artificial Intelligence Safety Institute and testbeds
Sec. 101. Artificial Intelligence Safety Institute.
The Under Secretary of Commerce for Standards and Technology will establish the AI Safety Institute as well as a consortium of relevant stakeholders to support the Institute. The Institute's mission will be carried out in collaboration with the National Institute of Standards and Technology (NIST). The mission of the Institute will be to:
- Assist companies and Federal agencies in developing voluntary best practices for assessing AI safety
- Provide assistance to Federal agencies in adopting and using AI in their operations
- Develop and promote “voluntary, consensus-based technical standards or industry standards”, advancement in AI, and a competitive AI industry
One area of focus will be supporting AI research, evaluation, testing and standards via the following:
- Conducting research into model safety, security, and interpretability
- Working with other agencies to develop testing environments, perform regular benchmarking and capability evaluations, and conduct red teaming
- Working with all stakeholders to develop and adopt voluntary AI standards. This will include standards regarding:
- Physical infrastructure for training, developing, and operating AI models
- Data for training and testing AI models
- AI models and software based on such models
- Expanding on the AI Risk Management Framework regarding generative AI
- Establishing secure development practices for AI models and developing and publishing cybersecurity tools and guidelines to protect AI development
- Developing metrics and methodologies for evaluating AI by testing existing tools and funding research to create such tools (and notably looking at the potential effect of foundation models when retrained or fine-tuned)
- Coordinating global standards setting for AI evaluation and testing
- Developing tools for identifying vulnerabilities in foundation models
- Developing tools for agencies to track harmful incidents caused by AI
Another key area of focus will be supporting AI implementation via the following:
- “Using publicly available and voluntarily provided information, conducting evaluations to assess the impacts of artificial intelligence systems, and developing guidelines and practices for safe development, deployment, and use of artificial intelligence technology”
- Coordinating with U.S. allies and partners on AI testing and vulnerability and incident data sharing
- Developing AI testing capabilities and infrastructure
- Establishing blue teaming capabilities and partnering with industry to mitigate risks and negative impacts
- Developing voluntary guidelines on detecting synthetic content, watermarking, preventing privacy rights violations by AI, and transparent documentation of AI datasets and models
Sec. 102. Program on artificial intelligence testbeds.
The Under Secretary of Commerce for Standards and Technology will use various public and private computing resources to develop evaluations and risk assessments for AI systems. In particular, these assessments will prioritize identifying potential security risks of deployed AI systems, with a focus on autonomous offensive cyber capabilities, cybersecurity vulnerabilities of AI, and “chemical, biological, radiological, nuclear, critical infrastructure, and energy-security threats or hazards”. Additionally, such tests should be evaluated for use on AI systems trained using biological sequence data and those intended for gene synthesis.
The Under Secretary will also provide developers of foundation models the opportunity to test such models. To support this, they will conduct research on how to improve and benchmark foundation models, identify key capabilities and unexpected behaviors of foundation models, evaluate scenarios in which these models could pose risks, support developers in evaluating foundation models, and coordinate public evaluations of foundation models and publicize reports of such testing.
Sec. 103. National Institute of Standards and Technology and Department of Energy testbed to identify, test, and synthesize new materials.
The Director of NIST and the Secretary of Energy will jointly establish a testbed for creating new materials to advance materials science and support advanced manufacturing via AI.
Sec. 104. National Science Foundation and Department of Energy collaboration to make scientific discoveries through the use of artificial intelligence.
The Director of the National Science Foundation and the Secretary of Energy shall collaborate to support progress in science via AI.
Sec. 105. Progress report.
The Director of the AI Safety Institute shall submit to Congress a progress report on the above goals.
Subtitle B—International cooperation
Sec. 111. International coalition on innovation, development, and harmonization of standards with respect to artificial intelligence.
The bill also directs the heads of several agencies (notably including the Secretary of Commerce and the Secretary of State) to form a coalition with “like-minded” foreign governments to cooperate on innovation in AI and to coordinate the development and adoption of AI standards. It specifies that the coalition may only include countries that have “sufficient” intellectual property protections, risk management approaches, research security measures, and export controls. This emphasis on export controls would likely limit the coalition to U.S. allies and strategic partners, thereby excluding China.
This would entail setting up government-to-government infrastructure to coordinate, agreements on information sharing between governments, and inviting participation from private-sector stakeholders as advisors.
Sec. 112. Requirement to support bilateral and multilateral artificial intelligence research collaborations.
The bill requires the Director of the National Science Foundation to support international collaborations on AI research and development, again requiring that partner countries have security measures and export controls.
Subtitle C—Identifying regulatory barriers to innovation
Sec. 121. Comptroller General of the United States identification of risks and obstacles relating to artificial intelligence and Federal agencies.
The bill requires the Comptroller General to submit a report to Congress identifying regulatory obstacles to AI innovation. The report would include identifying federal laws and regulations hindering AI development, challenges in how those laws are currently enforced, an evaluation of how AI adoption has taken place within government, and recommendations to Congress on how to increase AI innovation.
TITLE II—ARTIFICIAL INTELLIGENCE RESEARCH, DEVELOPMENT, CAPACITY BUILDING ACTIVITIES
Sec. 201. Public data for artificial intelligence systems.
The Director of the Office of Science and Technology Policy will create a list of priorities for Federal investment in creating 20 datasets of public Federal data for training AI. Once the priorities are identified, the task of assembling these datasets will be delegated to various agencies, which can provide grants or other incentives or work through public-private partnerships. These datasets will then be provided to the National Artificial Intelligence Research Resource pilot program.
Sec. 202. Federal grand challenges in artificial intelligence.
The Director of the Office of Science and Technology Policy will assemble a list of priorities for the Federal government in AI in order to expedite AI development and the application of AI to key technologies such as advanced manufacturing and computing. The bill also specifies that various federal agencies will establish prize competitions, challenge-based acquisitions, or other R&D investments based on this list of priorities.