The White House Office of Management and Budget (OMB) has issued a new memorandum directing federal agencies to improve the responsible acquisition of AI, addressing the rights and safety issues inherent in the “acquisition of generative AI and AI-enabled biometric systems.”
The directive sets forth specific requirements for the government’s acquisition of those systems and capabilities.
The memorandum follows OMB’s March policy directive, which laid out the requirements and responsibilities for the use of AI throughout the federal government. This past week, US executive departments and agencies released their plans to comply with that earlier directive.
The directive, Advancing the Responsible Acquisition of Artificial Intelligence in Government, establishes “acquisition-related practices that agencies must implement to ensure effective deployment of required risk management practices for rights-impacting and safety-impacting AI,” emphasizing that federal agencies “must ensure that relevant equities and risks are proactively considered when planning for an AI acquisition,” and “should prioritize, at a minimum, privacy, security, data ownership, and interoperability as identified” in the new directive.
This includes specific actions OMB said are “designed to address complex issues related to privacy, security, data ownership and rights, and interoperability that may arise in connection with the acquisition of an AI service or system,” as well as “additional practices [that] are required or recommended to ensure responsible acquisition of generative AI and AI-enabled biometric systems.”
The directive also includes new requirements and guidance on establishing cross-functional and interagency collaboration to reflect new AI responsibilities, managing AI risk and performance, and promoting a competitive AI market with innovative acquisition.
OMB made clear that its new directive “does not supersede other, more general federal policies that apply to AI but [also] are not limited in scope to AI, including policies relating to procurement, enterprise risk management, information resources management, competition, antitrust, data, privacy, accessibility, Federal statistical activities, or cybersecurity.”
OMB said the September 24 directive “is scoped to address considerations associated with agencies’ acquisition of an AI system or service, regardless of whether the acquired AI system or service is standalone or integrated into broader information technology products, offerings, or services.”
OMB said that “given the varied nature of AI, this memorandum includes requirements for subcomponents of AI systems or services, such as requirements specific to models and data.” The considerations addressed by the memorandum include “risks from the use of AI,” as defined in Section 6 of OMB Memorandum M-24-10, and the memorandum “does not address all considerations that may arise in connection with the acquisition of AI, such as those related to federal information and information systems in general.”
The directive requires federal agencies to ensure that AI-based biometrics protect the public’s rights, safety, and privacy by ensuring “that contractual requirements address risks inherent in the procurement of AI systems that identify individuals using biometric identifiers (e.g., faces, irises, fingerprints, or gait), including risks that such an AI system is trained on or otherwise makes use of biometric data that was not lawfully collected or is not sufficiently accurate to support reliable biometric identification.”
OMB explained in a footnote that “this could include any such AI trained or operated using biometric data that embeds unwanted bias or was collected substantially without appropriate informed consent, or for another purpose, or without validation of the included identities.”
To help address the risks of biometric identification and verification, OMB said all federal agencies should avoid biometric systems that rely on unreliable or unlawfully collected information and should ensure that contractual terms address the following:
- Verification that AI-based biometric systems are not trained on data collected in violation of applicable law or federal policy, and that such systems are sufficiently accurate to support reliable biometric identification and verification across different groups based on the results of testing and evaluation in operational contexts;
- Requirements for vendors to submit systems that use facial recognition for evaluation by the National Institute of Standards and Technology (NIST) as part of its Face Recognition Technology Evaluation (FRTE) and Face Analysis Technology Evaluation (FATE), where practicable;
- Requirements for supporting documentation or test results, as well as underlying test data where appropriate, sufficient to independently validate the operational performance of the AI system’s ability to match identities and the appropriateness of the relevant data used for training, through evaluations such as those offered by NIST or the testing facilities of the Department of Homeland Security’s Science and Technology Directorate. These independent assessments and benchmarks of systems should first include lab testing of algorithms, followed by operational testing, which should be continuously conducted using the operational version of the biometric system and be measured using standardized methodologies in as close to an operational context as possible; and
- Requirements that biometric systems include the following:
- A configurable minimum similarity threshold for candidate results;
- Enforcement of minimum quality criteria for input biometric data or samples used for biometric systems, including data used in reference galleries or training datasets for the system. These criteria should be based on standards set by independent bodies, such as the International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC)’s ISO/IEC 29794-5:2010;
- For a given probe image or data input in use cases using a one-to-many identification search, returning a list of candidate matches above the minimum similarity threshold alongside similarity scores whenever possible; and
- Maintaining detailed logs of use for auditing and compliance, including capturing input and output data in ways that incorporate appropriate protections for personally identifiable information (PII) and other data throughout the information life cycle, and limiting retention and restricting reuse of PII for other purposes; system configuration parameters; resulting candidate matches, scores, and ordering; and other information as appropriate.
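The threshold, candidate-list, and audit-logging requirements above can be illustrated with a minimal sketch of a one-to-many identification search. All names here are illustrative assumptions, not part of the OMB directive, and the cosine-similarity matcher stands in for whatever algorithm a vendor actually supplies:

```python
import logging
from dataclasses import dataclass


@dataclass
class Candidate:
    """A single candidate match with its similarity score exposed for review."""
    subject_id: str
    score: float


def identify(probe: list[float], gallery: dict[str, list[float]],
             min_similarity: float = 0.85) -> list[Candidate]:
    """One-to-many search: return every gallery entry whose similarity to the
    probe meets a *configurable* minimum threshold, ordered by score."""

    def cosine(a: list[float], b: list[float]) -> float:
        # Placeholder matcher; a real system would use a vendor algorithm.
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(x * x for x in b) ** 0.5
        return dot / (na * nb) if na and nb else 0.0

    candidates = [Candidate(sid, cosine(probe, template))
                  for sid, template in gallery.items()]
    results = sorted((c for c in candidates if c.score >= min_similarity),
                     key=lambda c: c.score, reverse=True)

    # Audit log of configuration and outcomes. The directive also requires
    # capturing input/output data with PII protections and retention limits;
    # raw biometric data is deliberately omitted from this log line.
    logging.getLogger("biometric.audit").info(
        "search threshold=%.2f gallery_size=%d returned=%d",
        min_similarity, len(gallery), len(results))
    return results
```

Returning the full above-threshold candidate list with scores, rather than a single silent “best match,” is what lets a human reviewer and an auditor see how close the alternatives were before an identification is acted on.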
The OMB directive further requires that federal agencies comply with civil rights laws to avoid unlawful bias, discrimination, and harmful outcomes.
OMB explained that “many AI systems rely on vast amounts of data,” and because they do, “these tools have the potential to produce outcomes that result in unlawful discrimination. Discrimination may come from different sources, including problems with data, model opacity and access, and with system design and use. Consistent with the risk management requirements of OMB Memorandum M-24-10, agencies should address risks that procured AI may generate unlawful bias, unlawful discrimination, or harmful outcomes, and require vendors to identify potential AI biases and mitigation strategies to address biases.”