Corporations everywhere are under pressure to embrace artificial intelligence as quickly as they can, and the latest incarnation of that trend is the rush to deploy AI agents: autonomous software programs intelligent enough to execute tasks with little to no human oversight.
That might sound nifty to operations teams looking to boost efficiency or to finance teams hoping to cut costs. For GRC teams, however, AI agents herald a new era of security, privacy, and business continuity risk — and you need to develop a capacity to assess those risks sooner rather than later.
Let’s consider what that will entail.
A Quick Review of AI Agents
Most internal audit and GRC teams probably already have a rough sense of what AI agents are; some of you might even be experimenting with agents of your own to automate tasks such as gathering or validating evidence for SOX controls testing. Still, it’s wise to understand the larger world of AI agents: what they do, where they come from, and the types of risks they bring.
Quite simply, AI agents are autonomous software programs that can complete tasks without human supervision. Give them a goal — “test this IT system for unknown vulnerabilities” or “book flights for the entire R&D team for the company retreat,” for example. The agent will plan its course of action and then execute that plan, delivering the final result back to you.
This isn’t “human in the loop,” the metaphor we often use for human oversight of generative AI. This is human above the loop, doing something else while the agent completes the task on its own.
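To make that concrete, here is a minimal sketch of the plan-then-execute loop. Everything in it is hypothetical: the planner stands in for a language model and the executor stands in for real tools and systems. The shape of the loop is the point; notice that nothing in it pauses to ask a human for approval.

```python
# A minimal, self-contained sketch of the plan-then-execute agent pattern.
# The planner and executor are hypothetical stand-ins: a real agent would
# delegate planning to a language model and execution to live tools and systems.

def plan_steps(goal: str) -> list[str]:
    # Hypothetical planner: a real agent would ask a language model
    # to decompose the goal into concrete steps.
    return [f"research {goal}", f"carry out {goal}", f"report on {goal}"]

def execute_step(step: str) -> str:
    # Hypothetical executor: a real agent would call tools, APIs, or IT systems.
    return f"done: {step}"

def run_agent(goal: str) -> str:
    """Pursue a goal autonomously: plan, execute each step, return the result.
    There is no approval prompt anywhere in this loop; the human stays
    'above the loop,' not in it."""
    return "; ".join(execute_step(step) for step in plan_steps(goal))

if __name__ == "__main__":
    print(run_agent("book flights for the R&D team"))
```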
The challenge for corporations is that agents can come from virtually anywhere:
- Agents that business units develop themselves in coordination with the IT team, for purposes you’ve already defined and according to standards you’ve already determined.
- Agents that employees develop on their own, not necessarily following whatever policies or standards you have for security and testing. (That is, “shadow agents,” just like any other shadow IT risk you don’t want to see.)
- Agents that other parties have developed and sent into your corporate enterprise. For example, a customer might have an agent contact your sales system and order new parts. More darkly, hackers might have an agent find vulnerabilities in your corporate firewall, search for personal data, and exfiltrate that information to parts unknown.
Some agents will have perfectly legitimate reasons to interact with your IT systems; others will not. Even agents with a legitimate business purpose, however, can pose security, privacy, or business continuity risks if they were poorly coded or simply start behaving in unexpected ways.
That’s the new risk environment that AI agents pose for your organization. So how can your risk management capabilities adapt to it?
Risk Management Capabilities You’ll Need
AI agents may be new, but the first step is the same as always: build consensus among the people in your organization. That means crafting policies and standards to govern how agents interact with your data and systems and what actions agents are allowed to take.
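One way to keep such standards from being purely aspirational is to express them as code that your systems can enforce. The sketch below is purely illustrative; the action names, data categories, and the is_action_permitted helper are all invented for this example, not drawn from any particular product or standard.

```python
# Illustrative "policy as code" for agent actions. The action names and data
# categories are invented examples, not recommended values.

AGENT_POLICY = {
    "allowed_actions": {"read_catalog", "check_inventory", "create_order"},
    "forbidden_data": {"pii", "payroll", "source_code"},
}

def is_action_permitted(action: str, data_tags: set[str]) -> bool:
    """Permit an agent action only if it is on the allowlist and touches
    no forbidden category of data."""
    return (action in AGENT_POLICY["allowed_actions"]
            and not data_tags & AGENT_POLICY["forbidden_data"])

# An agent may create an order, but not one that touches personal data.
print(is_action_permitted("create_order", {"catalog"}))  # True
print(is_action_permitted("create_order", {"pii"}))      # False
```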
Crafting those policies might not be easy. For example, the EU AI Act and other laws governing AI require that AI decisions be explainable and auditable; if some agents interacting with your systems arrive from outside your enterprise, however, you might not know whether those agents’ decisions are explainable.
So does the company turn a blind eye to the potential risk of those agents? Does it block all external agents, which might crimp your transaction volume?
Internal audit teams don’t make those decisions; senior management and the business unit leaders do. But internal audit can clarify the risks involved and the mitigation steps that could be implemented, and then test the organization’s controls and processes to be sure that “agent risk” is kept at acceptable levels.
Next, be sure your organization has the right tools to monitor agent behavior. For example, you would likely want tools that can detect AI agents and determine whether those agents are authorized or not, so that you can then activate other controls as necessary to block unauthorized agents.
This is an important point about agents and how they change your cybersecurity landscape. Until now, it was fairly easy to enforce a strict “block all the bots” policy because AI agents didn’t exist. Now they do. Soon enough, legions of agents will be roaming around cyberspace, and organizations will need the ability to distinguish between authorized and unauthorized agent behavior.
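What might that distinction look like in practice? Below is a heavily simplified sketch of an authorization gate, assuming a hypothetical registry of approved agent identity tokens and a made-up X-Agent-Token header. In a real deployment, detection and blocking would come from a bot-management or API-gateway product rather than hand-rolled code.

```python
# Hypothetical authorization gate for inbound agent traffic. The registry,
# header name, and tokens are all invented for illustration.

AUTHORIZED_AGENTS = {
    "token-cust-ordering-001": "customer ordering agent",
    "token-internal-sox-007": "internal SOX evidence-gathering agent",
}

def gate_agent_request(headers: dict[str, str]) -> str:
    """Allow requests from registered agents; block and flag everything else."""
    token = headers.get("X-Agent-Token")  # hypothetical identity header
    if token in AUTHORIZED_AGENTS:
        return f"allow: {AUTHORIZED_AGENTS[token]}"
    return "block: unregistered agent, flagged for security review"

print(gate_agent_request({"X-Agent-Token": "token-cust-ordering-001"}))
print(gate_agent_request({"User-Agent": "unknown-crawler/1.0"}))
```

The point is the shape of the control: known agents pass, unknown agents are blocked and logged for review, and the registry itself becomes an auditable artifact.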
And as always, you’ll want to develop risk management frameworks that can help guide your AI agent policies and controls. The NIST AI Risk Management Framework is one place to start; the ISO 42001 standard for AI management systems is another.
Both frameworks, however, are designed to address AI risks broadly, whether you’re using predictive, generative, or agentic AI systems. Others are trying to develop more agent-specific frameworks, although those tend to be commercial ventures, so I won’t endorse any particular one here.
Regardless of which framework you use, the main goal is to find a framework that can help you build a capacity to monitor AI agents and enforce your risk management guardrails on their actions.
Start Planning Now
One survey after another lately has shown that the top concern of the board is cybersecurity risk. AI agents are simply the latest incarnation of that threat. So as usual these days, internal audit teams have a fantastic opportunity to help drive the conversation as your organization ponders how to allow AI agents without exposing your business to unnecessary risk.
That’s going to require careful analysis of how business units want to embrace agents, either using agents themselves or allowing other agents to interact with your business. It will require conversations with the cybersecurity team, to understand whether it has the right tools to monitor (and, where necessary, intercept) agents as they go about their automated business.
For GRC and audit teams themselves, it will also mean finding the right frameworks to help you identify agent-related risks and the best blend of controls to keep those risks where senior management wants them.
So as much as AI agents are a result of artificial intelligence, you’ll still need lots of human experience and judgment to manage them.
About Matt Kelly
Matt Kelly is an independent compliance consultant specializing in corporate compliance, governance, and risk management. He shares insights on business issues on his blog, Radical Compliance, and is a frequent speaker on compliance, governance, and risk topics.
Kelly was recognized as a "Rising Star of Corporate Governance" by the Millstein Center in 2008 and named to Ethisphere's "Most Influential in Business Ethics" list in 2011 and 2013. He served as editor of Compliance Week from 2006 to 2015.
Based in Boston, Mass., Kelly can be contacted at mkelly@RadicalCompliance.com.
#SOXPro
#AI
#Audit
#Technology
#Software
#Vendors
#RiskManagement