Agentic AI: The Double-Edged Sword Security Leaders Must Learn to Defend Against and Wield
Versatile and powerful technology that it is, artificial intelligence—and specifically systems known as agentic AI—can make a security team’s job easier and more difficult all at once.
Cyber risks are emerging and evolving quickly, and agentic AI introduces entirely new risk considerations that organizations and their security leaders would be wise to understand and take steps to control. Ironically, AI also may be the very tool that can help them do so.
With agentic AI, an intelligent “agent” executes a specific prescribed task or set of tasks with autonomy, learning capabilities, and adaptability. It has the agency to make decisions and act on its own, based on situational context, with little or no human supervision. An organization could task an AI agent with processing customer invoices, for example, or with managing a certain aspect of an employee benefits program. These agents also may work with other AI agents to coordinate execution of complex tasks. And in some cases, an AI agent can even be deployed to help an organization control for agentic AI risk. More on that in a moment.
Why should security and compliance leaders be concerned about agentic AI risk in the first place? Perhaps most importantly, because it is neither deterministic nor entirely predictable, agentic AI can't be managed with traditional risk frameworks and threat models. Thus, organizations must adapt their models to meet the internal and external threats presented by a system that learns, adapts, and acts on its own. For organizations with unique and stringent security and compliance requirements, such as aerospace and defense companies, there’s added urgency to do so.
Agentic AI risk comes in two forms:
- Internal agentic risk involves the AI tools you employ internally—attack surfaces that introduce the risk of IP theft, privacy breaches, data leakage, and the like. Think of an employee using an AI note-taking app that uses your confidential meeting data to train its models.
- External agentic risk involves bad actors using an adaptive AI agent to continually evolve their attacks against an organization, learning from each attempt.
Because the risks associated with agentic AI are bound to escalate as its use expands, now is the time for organizations and their facility and security leaders to take action. Here are seven best practices that can help your organization stay a step ahead.
1. Expand the scope of your visitor management strategy and capabilities to control risk, not just manage it. New technologies like AI demand new ways of approaching risk. In a traditional risk-management stance, you would build an understanding of different categories of risk so that you can create mitigation plans for each, then react to individual issues as they arise. Agentic AI demands a risk-control approach, in which systems and processes monitor for a much broader range of anomalous behavior and continuously evaluate and update security measures in cycles, so they’re better equipped to detect and address the less predictable actions of autonomous agents. Companies like StandardAero are taking this approach, recognizing that it’s not just about who you physically let into your buildings, but also about which new technologies expose your organization to risk.
2. Evaluate AI tools before you adopt them. New AI tools introduce new data-leakage risks that do not exist in traditional software as a service (SaaS). This warrants establishing a framework whereby you rigorously evaluate any AI capabilities you’re considering investing in—stand-alone apps as well as software platforms with embedded AI tools—to determine how they store data, whether and to what extent your information will be used to train external models, and the security certifications to which they conform. You should also confirm whether you can control the tool’s data retention policies.
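To make an evaluation framework like that repeatable, some teams encode the criteria as a checklist they can score consistently across vendors. The sketch below is one illustrative way to do that; the fields and the pass/fail gate are hypothetical examples, not a prescribed standard.

```python
from dataclasses import dataclass, field

@dataclass
class AIToolAssessment:
    """Hypothetical checklist for vetting an AI tool before adoption."""
    name: str
    data_stored_in_approved_region: bool  # where and how the vendor stores your data
    trains_on_customer_data: bool         # will your information train external models?
    retention_controllable: bool          # can you set or shorten data retention?
    certifications: list = field(default_factory=list)  # e.g. ["SOC 2", "ISO 27001"]

    def passes_baseline(self, required_certs=("SOC 2",)) -> bool:
        """Illustrative go/no-go gate; real criteria will vary by organization."""
        return (
            self.data_stored_in_approved_region
            and not self.trains_on_customer_data
            and self.retention_controllable
            and all(cert in self.certifications for cert in required_certs)
        )

# A fictional note-taking app that trains on customer data fails the gate.
note_taker = AIToolAssessment(
    name="ExampleNoteTakerAI",
    data_stored_in_approved_region=True,
    trains_on_customer_data=True,
    retention_controllable=False,
    certifications=["SOC 2"],
)
print(note_taker.name, "approved:", note_taker.passes_baseline())
```

Even a lightweight artifact like this keeps the evaluation consistent from one vendor to the next and gives you a record of why a tool was approved or rejected.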
3. Establish AI governance structures. Clear governance of your AI capabilities is non-negotiable in controlling agentic AI risk. That includes designating specific AI safety officers within your organization, assigning them responsibilities, and creating cross-functional review boards to oversee AI operations. Also develop clear incident response procedures for AI-driven systems, so you’re ready to act quickly and decisively in the event of an AI-related incident. Regularly revisit and revise your governance program to reflect new AI capabilities you’ve added and the evolving threat landscape.
4. Develop and socialize clear AI usage policies and protocols for all employees. Who among your employees is allowed to interact with which AI systems? Which AI tools have permission to access specific data? It’s vitally important to document your answers to questions like these within a clear set of AI usage policies and protocols. What’s more, organizations with multiple locations should implement standardized visitor policies across all sites to reinforce these standards, as cybersecurity company Everfox has done to help manage agentic risk across its operations.
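One way to keep the answers to those questions enforceable rather than aspirational is to capture them as policy-as-code that your access tooling can check automatically. The sketch below is a minimal illustration of the idea; the roles, tool names, and data classifications are invented for the example, not a recommended taxonomy.

```python
# Hypothetical AI usage policy: which roles may use which AI tools,
# and the most sensitive data class each tool is cleared to handle.
USAGE_POLICY = {
    "security_analyst": {"threat_intel_copilot": "confidential", "chat_assistant": "internal"},
    "hr_specialist": {"benefits_agent": "confidential"},
    "contractor": {"chat_assistant": "public"},
}

# Data classifications ordered from least to most sensitive.
SENSITIVITY = ["public", "internal", "confidential", "restricted"]

def is_permitted(role: str, tool: str, data_class: str) -> bool:
    """Return True if this role may send data of this class to this tool."""
    allowed = USAGE_POLICY.get(role, {})
    if tool not in allowed:
        return False
    return SENSITIVITY.index(data_class) <= SENSITIVITY.index(allowed[tool])

# A contractor pasting confidential data into the chat assistant is blocked;
# a security analyst using the threat-intel copilot on confidential data is not.
print(is_permitted("contractor", "chat_assistant", "confidential"))              # False
print(is_permitted("security_analyst", "threat_intel_copilot", "confidential"))  # True
```

However simple, an artifact like this gives auditors, reviewers, and the tools themselves a single source of truth for who may do what.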
5. Use AI to control AI risk. Remember the old “Spy vs. Spy” comic strip? Now, with the external risks that agentic AI presents, think “Agent vs. Agent,” where AI agents can work on your behalf to thwart agentic AI-driven attacks. Enhanced pre-screening that evaluates visitors against threat intelligence databases before they arrive on-site, and advanced behavior-pattern analysis that identifies unusual visitor behaviors, are among the areas where AI agents can help. AI can also automate compliance checks, helping organizations streamline regulatory adherence while enhancing the visitor experience through personalized interactions, all without compromising security.
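As a rough illustration of the pre-screening idea, the sketch below triages a scheduled visitor against a threat intelligence watchlist before arrival. The data structures and matching logic are simplified placeholders; a production system would draw on a live threat intelligence feed and far richer identity matching.

```python
from dataclasses import dataclass

@dataclass
class Visitor:
    """A scheduled visitor as captured at pre-registration."""
    name: str
    email: str
    host: str
    visit_time: str  # e.g. "2025-06-03 14:00"

# Placeholder watchlists; in practice these would come from threat intelligence feeds.
WATCHLIST_EMAILS = {"known.bad@example.com"}
WATCHLIST_DOMAINS = {"malicious-domain.example"}

def prescreen(visitor: Visitor) -> str:
    """Return a simple triage decision for a scheduled visitor."""
    domain = visitor.email.split("@")[-1].lower()
    if visitor.email.lower() in WATCHLIST_EMAILS or domain in WATCHLIST_DOMAINS:
        return "escalate_to_security"  # hold for human review before confirming the visit
    return "auto_approve"

print(prescreen(Visitor("Jane Doe", "jane@partner.example", "Front Desk", "2025-06-03 14:00")))
```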
Meanwhile, security leaders can put a different type of AI, generative AI, to work to help them research and understand the new kinds of risk that AI presents. Open up your preferred tool (like ChatGPT or Google Gemini), put it in deep research mode, query the tool with the right questions, let it gather information, then critically evaluate the results and probe deeper.
6. Keep a close eye on your AI systems. Continuously monitoring all your AI tools and software is a must. Because they act autonomously, agentic AI systems (including those deployed for security) can't simply be implemented and forgotten. Close monitoring can identify unusual patterns in AI system behavior, track data flows in and out of AI tools, and alert you to anomalies that could signal an emerging threat.
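To give a concrete flavor of what that monitoring can look like, the sketch below flags an AI tool whose outbound data volume drifts well above its recent baseline. The thresholds and data source are illustrative assumptions; a real deployment would feed this from actual egress logs or a SIEM.

```python
import statistics

def flag_anomalous_egress(daily_mb: list[float], today_mb: float, z_threshold: float = 3.0) -> bool:
    """Flag today's outbound data volume to an AI tool if it sits far above the recent baseline.

    daily_mb: recent history of megabytes sent to the tool per day.
    today_mb: today's observed volume.
    """
    if len(daily_mb) < 7:  # not enough history to judge
        return False
    mean = statistics.mean(daily_mb)
    stdev = statistics.pstdev(daily_mb) or 1.0  # avoid divide-by-zero on a flat history
    return (today_mb - mean) / stdev > z_threshold

# A note-taking integration that normally sends ~40 MB/day suddenly sends 400 MB.
history = [38.0, 41.5, 39.2, 40.1, 42.3, 37.8, 40.9]
print(flag_anomalous_egress(history, 400.0))  # True: worth an analyst's attention
```

A spike like this doesn’t prove anything malicious on its own, but it’s exactly the kind of signal that should be routed to a human for review.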
Supplement monitoring with regular audits of AI decision processes to ensure systems remain transparent and under control. In these audits, you’ll want to be sensitive to data privacy concerns around how visitor information is stored, processed, and protected, and alert to issues like algorithmic bias, so AI systems aren’t unfairly targeting or flagging certain groups. Transparency requirements are also critical, so security teams have clear visibility into how AI systems make decisions. And as a backstop, be sure to establish override procedures so humans can intervene whenever necessary and maintain ultimate authority in security operations.
7. Put people at the center of your AI initiatives. AI’s ultimate purpose is to enhance human capabilities rather than replace them. It’s a message you should reinforce to people throughout your organization, and specifically in a security context. To do so, put human-AI collaboration frameworks in place: train security personnel to prompt AI systems effectively, and establish domains where human beings have the final say over AI recommendations and formal supervision over AI agents. Build information-interrogation skills within your security teams so they can critically evaluate AI outputs, maintain oversight, and take appropriate action when issues arise.
Put all these practices together and it’s easy to imagine a world where multiple AI agents are working for and supervised by a single person who has the final say over their agents’ work. This partnership represents the future of AI in security operations. In a world in which it’s no longer enough to rely on traditional threat models, the organizations that are most effective at leveraging the speed and processing power of AI in tandem with the contextual understanding and ethical judgment that only humans can provide will be the ones that stay a step ahead of agentic AI risk—and the competition.
About the Author
Richard Hills
Richard Hills is vice president of advanced technologies at Sign In Solutions and heads up innovation projects and AI across the business—in particular, how AI can be applied to real problems in visitor management.