Title 15 — Commerce and Trade (Release 119-73)

§ 278h–1. Standards for artificial intelligence

Title 15 › Chapter 7 — NATIONAL INSTITUTE OF STANDARDS AND TECHNOLOGY › § 278h–1

Last updated Apr 6, 2026 | Official source

Summary

The National Institute of Standards and Technology (NIST) must lead work to make artificial intelligence (AI) systems safe, fair, and trustworthy. NIST must create and promote frameworks, standards, guidelines, and methods for AI; help build a risk-mitigation approach for deploying AI systems; and support ways to test training data and AI applications for bias.

The NIST Director can fund research and best practices in many technical areas, such as privacy and security for data and devices; AI chips and hardware; ways to clean, label, and standardize data and use open licenses; safety and robustness against errors or attacks; auditing and benchmarks for accuracy and transparency; using AI to help other scientific fields; and clear model and system documentation. NIST can also produce curated, privacy-protected datasets, back research institutes, work on voluntary national and international standards, run testbeds to find vulnerabilities, and enter into contracts and partnerships to do this work.

Within 2 years after January 1, 2021, the Director must develop and periodically update a voluntary risk management framework for trustworthy AI, in collaboration with other public and private organizations. That framework must give standards, guidelines, and methods for building, checking, and lowering risks from AI; set common meanings for trustworthiness terms (such as explainability, transparency, safety, privacy, security, robustness, fairness, bias, ethics, validation, and interpretability); include case studies; align with international standards when suitable; use voluntary consensus practices; and not require any specific technology products.
Within 1 year after January 1, 2021, the Director must provide guidance to help voluntary data sharing between industry, research centers, and Federal agencies, and must issue best practices for training datasets, including metadata standards (origins, purpose, allowed uses, who is included or excluded, and other properties) and rules for privacy and security of human‑related datasets. Congress authorized funding to carry out this work: $64,000,000 for FY2021; $70,400,000 for FY2022; $77,440,000 for FY2023; $85,180,000 for FY2024; and $93,700,000 for FY2025.

Full Legal Text

Title 15, §278h–1

Commerce and Trade — Source: USLM XML via OLRC

(a) The Institute shall—
(1) advance collaborative frameworks, standards, guidelines, and associated methods and techniques for artificial intelligence;
(2) support the development of a risk-mitigation framework for deploying artificial intelligence systems;
(3) support the development of technical standards and guidelines that promote trustworthy artificial intelligence systems; and
(4) support the development of technical standards and guidelines by which to test for bias in artificial intelligence training data and applications.
(b) The Director of the National Institute of Standards and Technology may—
(1) support measurement research and development of best practices and voluntary standards for trustworthy artificial intelligence systems, which may include—
(A) privacy and security, including for datasets used to train or test artificial intelligence systems and software and hardware used in artificial intelligence systems;
(B) advanced computer chips and hardware designed for artificial intelligence systems;
(C) data management and techniques to increase the usability of data, including strategies to systematically clean, label, and standardize data into forms useful for training artificial intelligence systems and the use of common, open licenses;
(D) safety and robustness of artificial intelligence systems, including assurance, verification, validation, security, control, and the ability for artificial intelligence systems to withstand unexpected inputs and adversarial attacks;
(E) auditing mechanisms and benchmarks for accuracy, transparency, verifiability, and safety assurance for artificial intelligence systems;
(F) applications of machine learning and artificial intelligence systems to improve other scientific fields and engineering;
(G) model documentation, including performance metrics and constraints, measures of fairness, training and testing processes, and results;
(H) system documentation, including connections and dependences within and between systems, and complications that may arise from such connections; and
(I) all other areas deemed by the Director to be critical to the development and deployment of trustworthy artificial intelligence;
(2) produce curated, standardized, representative, high-value, secure, aggregate, and privacy protected data sets for artificial intelligence research, development, and use;
(3) support one or more institutes as described in section 9431(b) of this title for the purpose of advancing measurement science, voluntary consensus standards, and guidelines for trustworthy artificial intelligence systems;
(4) support and strategically engage in the development of voluntary consensus standards, including international standards, through open, transparent, and consensus-based processes; and
(5) enter into and perform such contracts, including cooperative research and development arrangements and grants and cooperative agreements or other transactions, as may be necessary in the conduct of the work of the National Institute of Standards and Technology and on such terms as the Director considers appropriate, in furtherance of the purposes of this division.[1]

[1] See References in Text note below.
(c) Not later than 2 years after January 1, 2021, the Director shall work to develop, and periodically update, in collaboration with other public and private sector organizations, including the National Science Foundation and the Department of Energy, a voluntary risk management framework for trustworthy artificial intelligence systems. The framework shall—
(1) identify and provide standards, guidelines, best practices, methodologies, procedures and processes for—
(A) developing trustworthy artificial intelligence systems;
(B) assessing the trustworthiness of artificial intelligence systems; and
(C) mitigating risks from artificial intelligence systems;
(2) establish common definitions and characterizations for aspects of trustworthiness, including explainability, transparency, safety, privacy, security, robustness, fairness, bias, ethics, validation, verification, interpretability, and other properties related to artificial intelligence systems that are common across all sectors;
(3) provide case studies of framework implementation;
(4) align with international standards, as appropriate;
(5) incorporate voluntary consensus standards and industry best practices; and
(6) not prescribe or otherwise require the use of specific information or communications technology products or services.
(d)(1) The Institute shall participate in the development of standards and specifications for artificial intelligence.
(2) The purpose of this participation shall be to ensure—
(A) that standards promote artificial intelligence systems that are trustworthy; and
(B) that standards relating to artificial intelligence reflect the state of technology and are fit-for-purpose and developed in transparent and consensus-based processes that are open to all stakeholders.
(e) Not later than 1 year after January 1, 2021, the Director shall, in collaboration with other public and private sector organizations, develop guidance to facilitate the creation of voluntary data sharing arrangements between industry, federally funded research centers, and Federal agencies for the purpose of advancing artificial intelligence research and technologies, including options for partnership models between government entities, industry, universities, and nonprofits that incentivize each party to share the data they collected.
(f) Not later than 1 year after January 1, 2021, the Director shall, in collaboration with other public and private sector organizations, develop best practices for datasets used to train artificial intelligence systems, including—
(1) standards for metadata that describe the properties of datasets, including—
(A) the origins of the data;
(B) the intent behind the creation of the data;
(C) authorized uses of the data;
(D) descriptive characteristics of the data, including what populations are included and excluded from the datasets; and
(E) any other properties as determined by the Director; and
(2) standards for privacy and security of datasets with human characteristics.
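The metadata properties enumerated in subsection (f)(1) lend themselves to a simple structured record. The Python sketch below is illustrative only: the statute names the properties but prescribes no schema, and every field and class name here (e.g., `DatasetMetadata`) is hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class DatasetMetadata:
    """Hypothetical record mirroring the properties in subsec. (f)(1)."""
    origins: str                       # (A) where the data came from
    intent: str                        # (B) why the dataset was created
    authorized_uses: list = field(default_factory=list)        # (C)
    populations_included: list = field(default_factory=list)   # (D)
    populations_excluded: list = field(default_factory=list)   # (D)
    other_properties: dict = field(default_factory=dict)       # (E) Director-determined

# Example record for a hypothetical training dataset
meta = DatasetMetadata(
    origins="public web crawl, 2019-2020",
    intent="benchmark image classification",
    authorized_uses=["research", "evaluation"],
    populations_included=["adults, worldwide"],
    populations_excluded=["minors"],
)
```

Any real metadata standard issued under this subsection would also need to satisfy the privacy and security requirements of paragraph (2) for datasets with human characteristics.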
(g) In coordination with other Federal agencies as appropriate, the private sector, and institutions of higher education (as such term is defined in section 1001 of title 20), the Director may establish testbeds, including in virtual environments, to support the development of robust and trustworthy artificial intelligence and machine learning systems, including testbeds that examine the vulnerabilities and conditions that may lead to failure in, malfunction of, or attacks on such systems.
(h) There are authorized to be appropriated to the National Institute of Standards and Technology to carry out this section—
(1) $64,000,000 for fiscal year 2021;
(2) $70,400,000 for fiscal year 2022;
(3) $77,440,000 for fiscal year 2023;
(4) $85,180,000 for fiscal year 2024; and
(5) $93,700,000 for fiscal year 2025.
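The authorized amounts in subsection (h) track a roughly 10 percent year-over-year escalation from the FY2021 base of $64,000,000 (the FY2024 and FY2025 figures appear to be rounded). This is an observation about the numbers, not anything stated in the statute; a quick check:

```python
# Statutory authorizations from subsec. (h), keyed by fiscal year
authorized = {
    2021: 64_000_000,
    2022: 70_400_000,
    2023: 77_440_000,
    2024: 85_180_000,
    2025: 93_700_000,
}

base = authorized[2021]
for i, (year, amount) in enumerate(authorized.items()):
    escalated = base * 1.1 ** i  # compound 10% growth from FY2021
    # each statutory figure matches the escalation to within ~0.01% (rounding)
    assert abs(escalated - amount) / amount < 0.001, (year, escalated, amount)
```

For example, FY2024 escalates to $85,184,000, which the statute states as $85,180,000.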

Notes & Related Subsidiaries

Editorial Notes

References in Text

This division, referred to in subsec. (b)(5), probably means div. E of Pub. L. 116–283, Jan. 1, 2021, 134 Stat. 4523, which is classified principally to chapter 119 of this title.

Amendments

2022—Subsecs. (g), (h). Pub. L. 117–167 added subsec. (g) and redesignated former subsec. (g) as (h).

Citations & Metadata

Citation

15 U.S.C. § 278h–1