In the absence of comprehensive regulation for the use of AI by the U.S. critical infrastructure sector, the Department of Homeland Security (DHS) has issued a “first of its kind” strategic plan to guide the safe and secure integration of AI across the nation’s critical infrastructure — a plan that has industry support.
DHS said its new framework “was developed by and for entities at each layer of the AI supply chain: cloud and compute providers, AI developers, and critical infrastructure owners and operators – as well as the civil society and public sector entities that protect and advocate for consumers.”
The Roles and Responsibilities Framework for Artificial Intelligence in Critical Infrastructure represents a significant step toward ensuring that AI technologies are integrated into the nation’s critical infrastructure in a manner that enhances security, resilience, and public trust. By clearly defining the roles and responsibilities of all stakeholders, the framework aims to foster a collaborative environment where AI can be leveraged safely and effectively to support essential services.
The framework addresses the increasing deployment of AI technologies within critical infrastructure sectors, including energy, water, transportation, and telecommunications, and aims to provide clear guidance on the roles and responsibilities of various stakeholders in ensuring that AI applications enhance the resilience and efficiency of these sectors without compromising safety and security.
DHS said in a statement that, “if adopted and implemented by the stakeholders involved in the development, use, and deployment of AI in U.S. critical infrastructure, this voluntary framework will enhance the harmonization of and help operationalize safety and security practices, improve the delivery of critical services, enhance trust and transparency among entities, protect civil rights and civil liberties, and advance AI safety and security research that will further enable critical infrastructure to deploy emerging technology responsibly.”
The framework is a collaborative effort involving industry leaders, civil society, and public sector entities that was developed under the guidance of DHS’s newly established Artificial Intelligence Safety and Security Board (AISSB).
Required to be stood up by President Biden’s October 2023 Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, the board was convened by DHS Secretary Alejandro Mayorkas for the first time in May. At that meeting, board members identified a number of issues impacting the safe use and deployment of AI, including the lack of common approaches for deploying AI, physical security flaws, and a reluctance to share information within industries.
The board advises the DHS Secretary, the critical infrastructure community, other private sector stakeholders, and the broader public on the safe, secure, and responsible development and deployment of AI technology in the nation’s critical infrastructure. The board is tasked with developing recommendations to help critical infrastructure stakeholders — such as transportation service providers, pipeline and power grid operators, and Internet service providers — responsibly leverage AI technologies. It will also develop recommendations to prevent and prepare for AI-related disruptions to critical services that impact national or economic security, public health, or safety.
The DHS framework complements other federal initiatives, such as the National Institute of Standards and Technology’s AI Risk Management Framework, by providing sector-specific guidance tailored to the unique challenges of critical infrastructure.
In announcing the framework, Mayorkas said in a statement that “AI offers a once-in-a-generation opportunity to improve the strength and resilience of U.S. critical infrastructure, and we must seize it while minimizing its potential harms.”
Mayorkas said “the framework … will go a long way to better ensure the safety and security of critical services that deliver clean water, consistent power, Internet access and more. The choices organizations and individuals involved in creating AI make today will determine the impact this technology will have in our critical infrastructure tomorrow.”
At a press briefing, Mayorkas added that “in forming the [AISSB], I sought members who are leaders in their fields and who collectively would represent each integral part of the ecosystem that defines AI’s deployment in critical infrastructure. We have … assembled such a board comprised of leaders of cloud and compute infrastructure providers, AI model developers, critical infrastructure owners and operators, civil society and the public sector. We believe the safety and security of our critical infrastructure is a shared responsibility.”
Mayorkas added that he “expect[s] the board members to implement the guidelines, to catalyze other organizations in their respective spheres and across the ecosystem, [and] to adopt and implement the guidelines as well.”
The DHS Secretary also said that DHS is seeking to harmonize AI standards internationally. Mayorkas said, “We have spoken about the fact that we not only want to ensure that these guidelines are adopted and implemented domestically but also across the Atlantic, internationally.”
Secretary of Commerce Gina Raimondo said the “Framework will complement the work we’re doing at the Department of Commerce to help ensure AI is responsibly deployed across our critical infrastructure to help protect our fellow Americans and secure the future of the American economy.”
Anthropic CEO and Co-Founder Dario Amodei said “the framework correctly identifies that AI systems may present both opportunities and challenges for critical infrastructure. Its developer-focused provisions highlight the importance of evaluating model capabilities, performing security testing, and building secure internal systems. These are key areas for continued analysis and discussion as our understanding of AI capabilities and their implications for critical infrastructure continues to evolve.”
The framework delineates responsibilities across different layers of the AI supply chain.
Cloud and computing infrastructure providers are tasked with securing the environments used to develop and deploy AI, including vetting hardware and software suppliers, implementing robust access management, and ensuring the physical security of data centers.
AI developers are encouraged to adopt a “Secure by Design” approach, evaluate potentially dangerous capabilities of AI models, and ensure alignment with human-centric values. They are also advised to implement strong privacy practices and support independent assessments for models that present heightened risks.
Under the framework, critical infrastructure owners and operators are responsible for maintaining strong cybersecurity practices that account for AI-related risks, protecting customer data, and providing transparency regarding their use of AI. They are also encouraged to monitor AI system performance and share results to improve understanding of model behavior in real-world scenarios.
Universities, research institutions, and consumer advocates involved in civil society are urged to engage in standards development, conduct research on AI evaluations specific to critical infrastructure, and inform the values and safeguards shaping AI system development.
Federal, state, local, tribal, and territorial governments are essential in supporting the responsible adoption of AI, advancing standards of practice for AI safety and security, and collaborating internationally to protect global citizens.
The framework categorizes AI-related risks into three primary areas: attacks using AI, attacks targeting AI systems, and failures in AI design and implementation.
To mitigate these risks, the framework offers several recommendations:
- Adopt a Secure by Design Approach: Integrate security measures throughout the AI development lifecycle to prevent vulnerabilities.
- Conduct Comprehensive Risk Assessments: Regularly evaluate AI systems for potential threats and vulnerabilities, considering both technical and operational aspects.
- Enhance Transparency and Accountability: Maintain clear documentation of AI system functionalities and decision-making processes to facilitate accountability and trust.
- Foster Collaboration Across Sectors: Encourage information sharing and joint efforts among stakeholders to address common challenges and develop unified standards.