As AI technology advances, the lines between convenience, security, and privacy grow increasingly blurred. This tension is illustrated in the recent findings of the Center for Security and Emerging Technology’s (CSET) report, Through the Chat Window and Into the Real World.
The report explores the rise of AI agents and the serious privacy and security challenges they pose, particularly where multi-factor authentication (MFA) and other security protocols intersect with their growing use. Among the concerns is the evolving role of MFA in protecting sensitive information, ensuring accountability, and preserving user trust.
At the heart of the privacy concerns surrounding AI agents lies their dependency on data. To function effectively, AI agents must access vast amounts of information about their users. They track habits, preferences, and even sensitive details such as financial records and medical histories. This requirement for data access raises the stakes for MFA, which combines two or more factors such as a password and a physical token.
MFA has long been regarded as a gold standard for securing access to digital services. However, the rise of AI agents complicates its implementation and introduces questions about how MFA protocols can be adapted to allow seamless agent interactions while ensuring robust safeguards against unauthorized access.
One of the report’s key insights is the tension between the automation promised by AI agents and the accountability required for secure operations. Traditionally, MFA protocols have relied on human interaction at critical junctures. AI agents challenge this paradigm. To operate autonomously, they need mechanisms to authenticate themselves securely without requiring constant user input.
CSET’s report suggests that evolving MFA systems could include biometric authentication or tokenized credentials embedded within the agent itself. For example, an AI agent could carry a digital certificate that serves as secure proof of identity, enabling it to access restricted systems or perform transactions. While this approach could streamline workflows, it also raises concerns about how these credentials are stored, protected, and potentially misused if compromised.
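To make the idea concrete, below is a minimal sketch, in Python, of how an embedded credential might work: the agent holds a signing key (standing in for the digital certificate the report describes) and signs each request, and the service verifies the signature against the agent's registered public key. The function names, request format, and overall flow are illustrative assumptions, not anything specified in the report.

```python
# Minimal sketch: an agent holds a signing key and attaches a signed
# assertion to each request; the service verifies it against the agent's
# registered public key. Names and flow are illustrative only.
import json
import time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Key pair provisioned to the agent (in practice it would live in a
# hardware-backed keystore, not in memory like this).
agent_key = Ed25519PrivateKey.generate()
agent_public_key = agent_key.public_key()  # registered with the service

def make_signed_request(action: str, params: dict) -> dict:
    """Agent side: sign the request payload so the service can verify its origin."""
    payload = json.dumps({"action": action, "params": params,
                          "issued_at": int(time.time())}, sort_keys=True).encode()
    return {"payload": payload, "signature": agent_key.sign(payload)}

def verify_request(request: dict) -> dict:
    """Service side: reject anything not signed by the registered agent key."""
    try:
        agent_public_key.verify(request["signature"], request["payload"])
    except InvalidSignature:
        raise PermissionError("request not signed by a registered agent")
    return json.loads(request["payload"])

print(verify_request(make_signed_request("read_calendar", {"range": "today"})))
```

The concern the report raises maps directly onto this sketch: whoever obtains `agent_key` can impersonate the agent, which is why storage and revocation of such credentials matter as much as the signing scheme itself.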
“Requiring agents to provide authentication credentials and proof of authorization for certain kinds of actions would help to ensure that agents are acting appropriately on behalf of their users,” the report says, adding that “this would verify that the agent is what it claims, is acting on the behalf of the person it says it is, and has been delegated the proper authority to take a specific action on behalf of that person. In addition, authentication would facilitate identification and visibility, discussed earlier, as many existing authentication schemes (e.g., logging into an online bank account) already involve some amount of monitoring and logging.”
“At a minimum,” the report says, “AI agents could use a person’s credentials to log into online services and be subjected to the same tracking and authorization restrictions as the human user. However, widespread agent adoption would also provide an opportunity to deploy more secure access control methods, like those based on Public Key Infrastructure, that humans often find challenging to use but could be leveraged more easily by an AI agent. Perhaps this type of authentication could be expanded to cover a wider range of activity by agents.”
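One way to read the report's point about delegated authority is as a scoped grant: the user signs a statement naming the agent, the permitted action, and an expiry, and the service checks that grant before honoring the agent's request. The following Python sketch assumes that shape; the field names, TTL, and helper functions are hypothetical.

```python
# Illustrative sketch of delegated authority: the user signs a scoped grant
# naming the agent, the permitted action, and an expiry; the service checks
# the grant before honoring the agent's request. All names are hypothetical.
import json
import time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

user_key = Ed25519PrivateKey.generate()   # held by the human user
user_public_key = user_key.public_key()   # known to the service

def issue_grant(agent_id: str, action: str, ttl_seconds: int = 3600) -> dict:
    """User side: delegate one specific action to one specific agent."""
    grant = json.dumps({"agent_id": agent_id, "action": action,
                        "expires_at": int(time.time()) + ttl_seconds},
                       sort_keys=True).encode()
    return {"grant": grant, "signature": user_key.sign(grant)}

def check_grant(signed_grant: dict, agent_id: str, action: str) -> None:
    """Service side: verify the user's signature, the scope, and the expiry."""
    try:
        user_public_key.verify(signed_grant["signature"], signed_grant["grant"])
    except InvalidSignature:
        raise PermissionError("grant was not signed by the account owner")
    grant = json.loads(signed_grant["grant"])
    if grant["agent_id"] != agent_id or grant["action"] != action:
        raise PermissionError("grant does not cover this agent or action")
    if time.time() > grant["expires_at"]:
        raise PermissionError("grant has expired")

signed = issue_grant("agent-42", "pay_invoice")
check_grant(signed, "agent-42", "pay_invoice")   # passes
```

This is exactly the kind of PKI-style mechanism the report notes humans find cumbersome but agents could handle routinely.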
AI agents introduce another layer of complexity to privacy by centralizing sensitive user data. For an agent to perform tasks effectively, it must often store information locally or in the cloud. This creates a single point of failure that could become a goldmine for hackers if breached. The report underscores that MFA systems, when paired with robust encryption and tokenization techniques, could mitigate this risk by ensuring that even compromised data remains unusable without the necessary credentials.
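A brief sketch of that idea, under assumed parameters: the agent's local store is encrypted with a key derived from a secret the user supplies at unlock time, so a stolen copy of the store is opaque without that credential. The KDF settings and data shown are placeholders, not a recommendation.

```python
# Minimal sketch of keeping an agent's stored data unusable without the
# user's credentials: a symmetric key is derived from a user-supplied secret
# and used to encrypt the local store. Parameters are illustrative only.
import base64
import os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.hashes import SHA256
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def derive_key(user_secret: bytes, salt: bytes) -> bytes:
    """Derive a Fernet key from a secret the user provides at unlock time."""
    kdf = PBKDF2HMAC(algorithm=SHA256(), length=32, salt=salt, iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive(user_secret))

salt = os.urandom(16)   # stored alongside the ciphertext
key = derive_key(b"correct horse battery staple", salt)

store = Fernet(key)
ciphertext = store.encrypt(b'{"medical_history": "...", "card": "..."}')

# Without the user's secret (and therefore the derived key), the blob an
# attacker exfiltrates is just opaque ciphertext.
print(Fernet(derive_key(b"correct horse battery staple", salt)).decrypt(ciphertext))
```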
However, MFA itself is not immune to vulnerabilities. Threats such as phishing attacks targeting the recovery mechanisms of MFA systems could allow malicious actors to bypass even the most secure setups. When AI agents are factored into this equation, the risk landscape expands. An attacker who compromises an agent’s credentials could gain access not just to a single service but to an entire ecosystem of interconnected systems.
As the CSET report points out, “the security and privacy of the agent’s operations and associated user data must be safeguarded from a range of potential threats. These threats include the misuse of sensitive information by the actors involved in developing and running the agent, as well as hijacking of the agent’s actions or exfiltration of data by malicious actors.”
Technical guardrails that could support security and privacy include secure coding practices, adversarial testing, access control, data minimization, and encryption.
Privacy concerns are further exacerbated by the opacity of many AI systems. Users often do not fully understand what data their agents collect, how it is used, or who ultimately has access to it. This lack of transparency undermines trust and creates a fertile ground for abuse. Multi-factor authentication, in this context, must serve a dual purpose: not only to secure systems, but also to reassure users that their data is handled responsibly.
“In some cases, trade-offs arise between different goals. Enabling visibility into, and control of, how an agent functions to anyone other than the user may infringe on the security and privacy of that user’s interactions with the agent, though the extent of this infringement depends on implementation,” the report says.
The CSET report advocates for the development of more transparent frameworks that allow users to control their agents’ access to sensitive information. For example, users could define specific conditions under which an agent is allowed to act, such as requiring additional authentication for high-risk actions like financial transactions or data sharing. These safeguards could be implemented through tiered authentication models that combine traditional MFA with real-time user monitoring and alerts.
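As a rough illustration of such a tiered model, the sketch below maps an agent's requested action to the authentication it must present: routine actions proceed on the agent's standing credential, while high-risk actions require fresh approval from the human user. The action names, threshold, and factor labels are invented for the example.

```python
# Hypothetical sketch of a tiered authentication policy: routine actions go
# through on the agent's standing credential, while high-risk actions demand
# a fresh factor from the human user. Thresholds and action names are invented.
from dataclasses import dataclass

HIGH_RISK_ACTIONS = {"transfer_funds", "share_records", "change_recovery_email"}
HIGH_VALUE_THRESHOLD = 500.00  # e.g. dollars; purely illustrative

@dataclass
class ActionRequest:
    action: str
    amount: float = 0.0

def required_factors(request: ActionRequest) -> list[str]:
    """Map an agent's requested action to the authentication it must present."""
    if request.action in HIGH_RISK_ACTIONS or request.amount >= HIGH_VALUE_THRESHOLD:
        # Step-up: the agent's own credential is not enough; the user must
        # approve in real time (push prompt, biometric, hardware token, ...).
        return ["agent_credential", "user_presence"]
    return ["agent_credential"]

print(required_factors(ActionRequest("read_calendar")))          # ['agent_credential']
print(required_factors(ActionRequest("transfer_funds", 900.0)))  # both factors
```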
“Agents should only take action based on instructions from valid users,” CSET’s report says. “In cases where an agent may accept external instructions, the agent should validate those inputs with the user before acting upon them. Just like many other software systems, permissions to execute certain actions or access certain capabilities should be dependent upon a user’s privileges. Agents will need to enforce those permissions and prevent privilege escalation.”
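The permission-enforcement point lends itself to a similarly simple sketch: the agent's allowed actions are a fixed subset of the user's own privileges, and instructions arriving from anyone other than the user are held until the user confirms them. The privilege sets and source labels below are hypothetical.

```python
# Illustrative sketch of permission enforcement: the agent's allowed actions
# are a delegated subset of the user's privileges, and externally supplied
# instructions are held for user confirmation before execution.
USER_PRIVILEGES = {"read_calendar", "send_email", "pay_invoice"}
AGENT_PERMISSIONS = {"read_calendar", "send_email"}  # delegated subset only

def authorize(action: str, source: str, user_confirmed: bool = False) -> bool:
    """Allow an action only if it is within the agent's delegated permissions
    and, for externally supplied instructions, explicitly confirmed by the user."""
    if action not in AGENT_PERMISSIONS:
        return False          # blocks privilege escalation attempts
    if action not in USER_PRIVILEGES:
        return False          # the agent can never exceed its user's privileges
    if source != "user" and not user_confirmed:
        return False          # external instructions need the user's sign-off
    return True

print(authorize("send_email", source="user"))              # True
print(authorize("pay_invoice", source="user"))             # False: not delegated
print(authorize("send_email", source="incoming_webpage"))  # False: needs confirmation
```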
As AI agents become more integrated into daily life, innovations in authentication will play a critical role in addressing privacy concerns. Emerging technologies such as decentralized identity systems and zero-knowledge proofs could offer pathways to more secure and private authentication processes. These approaches allow users to verify their identities without revealing sensitive information, potentially minimizing the data footprint required by AI agents.
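The core idea behind zero-knowledge authentication can be shown with a toy Schnorr-style proof: the prover demonstrates knowledge of a secret x behind the public value y = g^x mod p without ever transmitting x. The group parameters below are deliberately tiny and the round is simplified for illustration; real systems use standardized, cryptographically sized groups and non-interactive variants.

```python
# Toy sketch of a zero-knowledge proof of identity (Schnorr-style): prove
# knowledge of the secret x behind y = g^x mod p without revealing x.
# Parameters are tiny and purely illustrative.
import secrets

p, q, g = 23, 11, 4        # toy safe-prime group: g generates a subgroup of order q

x = secrets.randbelow(q)   # long-term secret (never leaves the prover)
y = pow(g, x, p)           # public value registered with the verifier

# One round: commit, challenge, respond, verify.
r = secrets.randbelow(q)   # prover's ephemeral randomness
t = pow(g, r, p)           # commitment sent to the verifier
c = secrets.randbelow(q)   # verifier's random challenge
s = (r + c * x) % q        # response; reveals nothing about x on its own

assert pow(g, s, p) == (t * pow(y, c, p)) % p   # verifier accepts
print("knowledge of the secret verified without disclosing it")
```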
Additionally, AI-driven threat detection systems could complement MFA by monitoring agent behavior for anomalies that might indicate a security breach. For instance, an agent making an unusual number of high-value transactions could trigger a secondary authentication requirement, ensuring that compromised credentials are not exploited without detection.
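A minimal sketch of that kind of behavioral check, under invented thresholds: if an agent performs an unusual burst of high-value transactions inside a look-back window, the next one is held until the user re-authenticates. The constants and function name are assumptions for illustration.

```python
# Hypothetical sketch of behavioral monitoring alongside MFA: an unusual burst
# of high-value transactions pauses the agent until the user re-authenticates.
from collections import deque
import time

HIGH_VALUE = 250.00          # per-transaction threshold (illustrative)
WINDOW_SECONDS = 3600        # look-back window
MAX_IN_WINDOW = 3            # more than this triggers step-up authentication

recent_high_value = deque()  # timestamps of recent high-value transactions

def needs_step_up(amount: float, now: float | None = None) -> bool:
    """Return True if this transaction should be held for re-authentication."""
    now = time.time() if now is None else now
    # Drop events that have aged out of the window.
    while recent_high_value and now - recent_high_value[0] > WINDOW_SECONDS:
        recent_high_value.popleft()
    if amount < HIGH_VALUE:
        return False
    recent_high_value.append(now)
    return len(recent_high_value) > MAX_IN_WINDOW

start = time.time()
print([needs_step_up(300.0, start + i) for i in range(5)])
# [False, False, False, True, True] -> the fourth burst transaction is challenged
```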
As the CSET report illustrates, AI agents offer the potential to enhance productivity and convenience, but they also demand a reimagining of how foundational systems like MFA operate. Striking the right balance between automation and accountability will require ongoing collaboration between technologists, policymakers, and end-users. By embedding privacy and security into their core design, developers can ensure that these tools enhance, rather than erode, the trust that is placed in them.