By Matt Kelly
Artificial intelligence is transforming all parts of the corporate enterprise, including, unfortunately, the frauds launched against it. That means your anti-fraud controls must transform too, or else you’ll never keep pace with the threat that AI-enhanced fraud poses.
That’s not likely to be an easy task. Fraudsters are hatching ingenious ways to use AI in furtherance of their schemes, with the potential for severe financial or data losses to your company. Moreover, most laws and regulations imposing a duty of care to prevent fraud are “technology agnostic,” in the sense that it doesn’t matter what specific tricks fraudsters use against you; you still have a duty to devise effective anti-fraud controls. The need to anticipate AI-enhanced fraud is already here.
That’s going to challenge internal audit teams on risk assessment, control design, testing, and the overall control environment in ways we’ve never seen before.
The Many Types of AI-Enhanced Fraud
Fraudsters are using artificial intelligence both to devise new types of fraud and to make “traditional” frauds even more difficult to detect. For example…
- Deepfakes. Deepfakes are digital images of bogus passports, driver’s licenses, birth certificates, and similar identity documents; or fabricated audio or video clips of people both real and conjured out of thin air. Generative AI has let fraudsters run wild with deepfakes, churning out convincing fakes both quickly and at scale.
- Phishing and business email compromises. Neither of these frauds is new; AI simply makes them easier to execute. Fraudsters can now use generative AI to fire off one convincing business email scam after another, each tailored to the target executive. Coupled with deepfake audio or video, the chance of being duped into wiring money or clicking on a poisonous URL can soar.
- Invoice and vendor frauds. Again, these frauds are not new; AI just puts them on steroids. Fraudsters can make convincing websites, flood the internet with fake reviews of vendors, and fire off flawless but bogus invoices, all at scale, until some victim out there falls for it.
- Social engineering attacks. Fraudsters can use AI to research a target in depth, and then send messages designed for that specific person. Example: “ChatGPT, make me sound like an Oxford University grad from the early 2000s, asking about job prospects in the chemistry field. And draft material for a LinkedIn profile to reflect that work experience.”
Those are only some of the AI-driven threats coming at you. So how should internal audit and control teams respond?
Internal Control Challenges
At the abstract level, we can say that fraudsters are using AI to become better at impersonation and deception; the appropriate response, therefore, is more skepticism. The question is how to weave that skepticism into your controls, your business processes, and your employees’ day-to-day judgment.
For example, fraudsters might pose as potential business partners, using AI to supply you with false documentation that makes them appear to be upstanding people. In that case, you’ll need robust due diligence procedures to cross-reference that material against external databases. Do you have those due diligence capabilities? Can you access independent data that might disprove the AI-generated falsehoods fraudsters provide you?
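As an illustration, here’s a minimal sketch of what such a cross-check might look like in code. The registry lookup, field names, and company details below are hypothetical assumptions rather than a reference to any real data provider; the point is simply that independently sourced data, not the documents the partner hands you, settles the question.

```python
# Minimal sketch of a due diligence cross-check: compare what a prospective
# partner told you against an independent source. fetch_registry_record() is a
# hypothetical stand-in for whatever corporate registry or data vendor you use.
from datetime import date

def fetch_registry_record(registration_number: str) -> dict:
    """Hypothetical lookup against an independent company registry."""
    # In practice this would call an external registry or data vendor API.
    return {
        "legal_name": "Acme Industrial Supplies Ltd",
        "incorporated": date(2003, 6, 12),
        "status": "active",
    }

def cross_check(supplied: dict) -> list[str]:
    """Return discrepancies between partner-supplied documents and the registry."""
    record = fetch_registry_record(supplied["registration_number"])
    issues = []
    if supplied["legal_name"].strip().lower() != record["legal_name"].lower():
        issues.append("Legal name does not match registry")
    if supplied["incorporated"] != record["incorporated"]:
        issues.append("Incorporation date does not match registry")
    if record["status"] != "active":
        issues.append("Company is not active in the registry")
    return issues

supplied_docs = {
    "registration_number": "UK-0123456",
    "legal_name": "Acme Industrial Supplies Limited",  # close, but not identical
    "incorporated": date(2019, 1, 5),                   # conflicts with the registry
}
print(cross_check(supplied_docs))  # escalate any discrepancies for manual review
```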
Or imagine criminals impersonating senior managers in a business email compromise scam. Have you implemented requirements for multi-factor authentication to confirm the executive’s identity? Do you require more confirming factors for transactions that are unusually large or high-risk in some other way?
Let’s call such steps “challenge controls” — controls meant to pause a specific transaction while you, the company, try to confirm the identity of the person at the other end of the transaction.
The task for audit teams is to consider where and how to add more challenge controls to your business processes, and what form those controls should take.
For example, requiring more documentation from a would-be business partner can help, but AI now makes generating false documentation easier. So to avoid falling for a business email compromise, will you require a verbal request from that senior executive asking the accounts team to wire $10 million overseas? Will you require that executive to be physically in the office, to avoid the threat of voice-cloning AI? Will you train employees to insist on that, no matter how angry the executive sounds on the phone?
The answer, of course, depends on the level of risk to the company — which reminds us of the fundamental challenge. Internal audit teams will need to get much better at risk assessment and control design, and that means working more closely with operating units to understand what those risks are.
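To make the idea concrete, here’s a minimal sketch of a risk-based challenge control for outgoing wires. The dollar thresholds, risk tiers, and verification steps are illustrative assumptions, not prescriptions; your own tiers should come out of your risk assessment, not this example.

```python
# Minimal sketch of a risk-based "challenge control" for outgoing wires.
# Thresholds, risk tiers, and verification steps are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class WireRequest:
    requested_by: str           # name of the executive requesting the wire
    amount_usd: float
    beneficiary_country: str
    verifications: set = field(default_factory=set)  # e.g. {"mfa", "verbal_callback"}

# Verification steps required at each (assumed) risk tier.
REQUIRED_CHECKS = {
    "low":    {"mfa"},
    "medium": {"mfa", "verbal_callback"},
    "high":   {"mfa", "verbal_callback", "in_person_signoff"},
}

HIGH_RISK_COUNTRIES = {"XX", "YY"}  # placeholder jurisdictions

def risk_tier(req: WireRequest) -> str:
    """Classify the request; the cutoffs here are assumptions, not guidance."""
    if req.amount_usd >= 1_000_000 or req.beneficiary_country in HIGH_RISK_COUNTRIES:
        return "high"
    if req.amount_usd >= 100_000:
        return "medium"
    return "low"

def challenge(req: WireRequest) -> tuple[bool, set]:
    """Return (release?, missing verifications). The wire stays paused until all pass."""
    missing = REQUIRED_CHECKS[risk_tier(req)] - req.verifications
    return (not missing, missing)

# Example: a $10M overseas wire with only email-based MFA approval stays on hold.
req = WireRequest("CFO", 10_000_000, "XX", verifications={"mfa"})
ok, missing = challenge(req)
print(ok, missing)  # e.g. False {'verbal_callback', 'in_person_signoff'}
```

The design choice worth noting is that the transaction stays paused by default until every required factor for its tier has been satisfied, which is the “challenge” in challenge control.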
Systemic Anti-Fraud Concerns
Internal audit teams need to see the larger picture, too. It’s not enough to add new anti-fraud controls to your business processes; you need to appreciate the control environment too. Consider the following.
How often will you need to re-assess your fraud risks? AI is evolving rapidly. For example, “agentic AI” is just starting to arrive: AI bots that can act with far more independence to accomplish the tasks they’re given. That will push fraud risks in still more new directions, and waiting for your next regularly scheduled assessment might not be enough.
How will you talk with boards and operations leaders about AI-enhanced fraud? Strengthening your anti-fraud controls often means asking others in the First and Second lines of defense to adjust their business processes. So you’ll need to place these AI-driven fraud risks in business context, with compelling examples and data to show the threat and to win the rest of the enterprise’s support.
How can you use AI yourself for better anti-fraud efforts? Yes, AI can be your friend even while fraudsters use it against you. For example, AI allows for more sophisticated data analytics, to identify suspicious transactions with higher accuracy. It can also help to identify the fakes made by other AI systems. Your job as an audit leader is to figure out how to harness the power of AI to your benefit, and to keep your team on the cutting edge of technology.
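As a hedged illustration of that last point, here’s a minimal sketch of unsupervised anomaly detection over transaction data using scikit-learn’s IsolationForest. The features, the toy data, and the contamination rate are assumptions for demonstration only; a production model needs real data, tuning, and human review of everything it flags.

```python
# Minimal sketch of AI-assisted transaction screening with an unsupervised
# anomaly detector. Feature choices and the contamination rate are
# illustrative assumptions; real models need tuning and human review.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Toy feature matrix: [amount_usd, hour_of_day, days_since_vendor_onboarded]
normal = np.column_stack([
    rng.normal(5_000, 1_500, 500),   # typical invoice amounts
    rng.integers(8, 18, 500),        # business hours
    rng.integers(90, 2_000, 500),    # established vendors
])
suspicious = np.array([
    [250_000, 2, 3],                 # huge payment, 2 a.m., brand-new vendor
    [98_000, 23, 1],
])
X = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.01, random_state=0).fit(X)
flags = model.predict(X)             # -1 = anomalous, 1 = normal
print(X[flags == -1])                # transactions routed to a human reviewer
```

The model doesn’t replace judgment; it routes the oddest sliver of transactions to a human reviewer, which is exactly the kind of skepticism-at-scale discussed above.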
Conclusion
Companies cannot ignore the risk of AI-enhanced fraud now that it’s here. Not only would that violate your legal and compliance obligations as a business; it’s just bad business, because fraudsters will use AI to rob you blind.
The good news is that tools do exist to help you combat this new threat, but using those tools to their maximum effect will require creative thinking about risk assessment and internal control design, and diligent work to cultivate cross-enterprise support for the new anti-fraud measures you’ll need.
In other words, as impressive as AI is — the human element still matters more.
About Matt Kelly
Matt Kelly is an independent compliance consultant specializing in corporate compliance, governance, and risk management. He shares insights on business issues on his blog, Radical Compliance, and is a frequent speaker on compliance, governance, and risk topics.
Kelly was recognized as a "Rising Star of Corporate Governance" by the Millstein Center in 2008 and named to Ethisphere's "Most Influential in Business Ethics" list in 2011 and 2013. He served as editor of Compliance Week from 2006 to 2015.
Based in Boston, Mass., Kelly can be contacted at mkelly@RadicalCompliance.com.
#Blog
#SOXPro
#Audit
#ProcessImprovement
#Technology,Software,Vendors
#Compliance
#ControlsManagement
#InternalAudit
#RiskManagement