The U4 Blog

AI in corporate anti-corruption risk management: Emerging uses and challenges

Companies are increasingly deploying AI to strengthen their anti-bribery and corruption risk management systems. The practical benefits are real, but so are the risks, and development agencies partnering with these firms need to understand both.
19 March 2026
Photo: Andres Siimon/Unsplash

AI in public, private, and development sectors

Much of the public conversation about AI and anti-corruption has focused on governments: how regulators and public institutions might use technology to detect misconduct and close enforcement gaps. Less attention has been paid to corporate anti-corruption programmes and what their growing use of AI means for development partnerships.

Development agencies frequently partner with private sector companies to deliver development outcomes. The approach these companies take to managing bribery and corruption risks has direct implications for the integrity of those projects. Development agencies need to ask searching questions: what are the potential consequences of AI being incorporated into these programmes? And how might this affect the ways in which agencies do due diligence?

How companies are using AI in anti-corruption

Companies are beginning to deploy AI and machine learning tools to enhance their anti-bribery and corruption programmes, for example in monitoring, due diligence, training, and investigations.

In transaction monitoring, AI systems analyse payment data, invoices, and financial records to flag irregular patterns associated with bribery and corruption. Machine learning models can identify unusual spending patterns or correlations between contract awards and past bribery cases.
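As a purely illustrative sketch of the kind of pattern detection described above (the models commercial tools actually use are proprietary and far more sophisticated; the vendor name, amounts, and threshold below are all invented), a minimal version might flag payments that deviate sharply from a vendor’s historical norm:

```python
from statistics import mean, stdev

def flag_outlier_payments(payments, threshold=3.0):
    """Flag payments more than `threshold` standard deviations above a
    vendor's historical mean -- a toy stand-in for the statistical
    anomaly detection that real monitoring systems perform."""
    by_vendor = {}
    for vendor, amount in payments:
        by_vendor.setdefault(vendor, []).append(amount)

    flagged = []
    for vendor, amounts in by_vendor.items():
        if len(amounts) < 3:
            continue  # too little history to judge what is "unusual"
        mu, sigma = mean(amounts), stdev(amounts)
        for amount in amounts:
            if sigma > 0 and (amount - mu) / sigma > threshold:
                flagged.append((vendor, amount))
    return flagged

# Hypothetical payment ledger: (vendor, amount)
ledger = [("AcmeConsult", 1000), ("AcmeConsult", 1100),
          ("AcmeConsult", 950), ("AcmeConsult", 1050),
          ("AcmeConsult", 25000)]  # one unusually large payment
print(flag_outlier_payments(ledger, threshold=1.5))
# → [('AcmeConsult', 25000)]
```

Production systems learn far richer signals than a single z-score, but the underlying idea is the same: establish a baseline of normal behaviour and surface deviations for human review.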

Third-party due diligence and supply chain risk management have always been challenging, partly because there is limited visibility across intermediaries and vendors. AI screening tools are helping. They draw on sanctions lists, adverse media mentions, and beneficial ownership records, while natural language processing can identify links between entities that traditional database searches miss, such as shared addresses, family relationships, or overlapping directorships.
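The entity-linking step these screening tools perform can be sketched in miniature. The toy below (all company names, addresses, and directors are invented, and real tools use fuzzy matching and much richer data) simply indexes entities by shared attributes and reports any pair that overlaps:

```python
from collections import defaultdict
from itertools import combinations

def find_links(entities):
    """Report pairs of entities sharing an address or a director -- a toy
    version of the entity-resolution step in third-party screening."""
    by_attribute = defaultdict(set)
    for name, record in entities.items():
        by_attribute[("address", record["address"])].add(name)
        for director in record["directors"]:
            by_attribute[("director", director)].add(name)

    links = set()
    for (kind, _value), names in by_attribute.items():
        for a, b in combinations(sorted(names), 2):
            links.add((a, b, kind))
    return sorted(links)

# Hypothetical third-party records
entities = {
    "Alpha Supplies Ltd": {"address": "12 Harbour Rd", "directors": ["J. Doe"]},
    "Beta Logistics":     {"address": "12 Harbour Rd", "directors": ["K. Lee"]},
    "Gamma Trading":      {"address": "4 Hill St",     "directors": ["J. Doe"]},
}
print(find_links(entities))
# → [('Alpha Supplies Ltd', 'Beta Logistics', 'address'),
#    ('Alpha Supplies Ltd', 'Gamma Trading', 'director')]
```

Exact-match indexing like this misses near-duplicates ("12 Harbour Road" vs "12 Harbour Rd"), which is precisely where the natural language processing mentioned above adds value.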

In internal investigations, AI assists with fact-finding and scoping, including analysing financial data for unusual patterns, surfacing relevant communications based on concepts rather than keywords, and mapping relationships between individuals and entities. An example is law firm DLA Piper’s Aiscension Bribery service, which reportedly enables investigations to be conducted ten times faster than manual review processes.

Companies are also experimenting with generative AI in training, using it to develop scenario-based learning adapted to specific country contexts and job roles, and to support employees in navigating anti-corruption policies in real time. An example is LRN Corporation’s Catalyst Engage.AI, an ‘AI-enhanced’ service that delivers short, conversational learning modules on anti-bribery, anti-corruption, and other compliance risk areas.

AI-powered chatbots trained on company policies and jurisdiction-specific regulations can respond to employee questions around the clock, even including advice on practical or hypothetical scenarios. For example, a user could ask the chatbot whether it is permissible to host a government official for a business meal in a particular country.
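Such chatbots typically pair a language model with retrieval over the company’s own policy documents, so that answers are grounded in actual policy rather than the model’s general knowledge. A toy illustration of that retrieval step (the policy text and question are invented, and real systems use semantic embeddings rather than word overlap):

```python
def retrieve_policy(question, policies):
    """Return the policy snippet sharing the most words with the
    question -- a crude stand-in for the retrieval step that grounds
    a compliance chatbot in company policy."""
    q_words = set(question.lower().split())
    def overlap(text):
        return len(q_words & set(text.lower().split()))
    return max(policies, key=overlap)

policies = [
    "Business meals with government officials require prior written "
    "approval from the compliance team.",
    "Gifts above a nominal value must be recorded in the gift register.",
]
question = "Can I host a government official for a business meal?"
print(retrieve_policy(question, policies))
```

The retrieved snippet would then be passed to the language model as context, which is what lets the chatbot cite the relevant policy when it answers.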

Where AI falls short

Despite these promising applications, the use of AI in anti-corruption risk management raises concerns that development partners should be aware of.

First, the reliability of any AI system depends on the quality and representativeness of the data on which it was trained. In the anti-corruption context, historical data used to train transaction monitoring or due diligence tools may reflect uneven enforcement practices and structural biases. The result is that these systems can disproportionately flag transactions involving particular countries, sectors, or demographics, while under-detecting risks elsewhere. This can distort risk assessments and investigative priorities.

There is also a risk that compliance teams place excessive reliance on AI-generated outputs, accepting recommendations without the critical scrutiny those outputs require. This underscores the value of human oversight as a tenet of responsible AI use. Compliance teams must understand the uses and limitations of these tools and take responsibility for interpreting outputs and determining appropriate follow-up.

Lastly, the use of generative AI introduces additional concerns. Large language models can produce hallucinations, that is, outputs that appear plausible but are factually incorrect, creating risks when such tools are used in training materials, knowledge translation, or employee support functions without independent verification.

Due diligence questions for development agencies

When private sector partners highlight their use of AI in anti-corruption programmes, development agencies should look beyond the presence of the technology itself and consider how these tools are embedded within broader compliance systems. They must ask the following questions:

  1. Is AI being used to address a clearly defined risk or gap in the company’s anti-corruption system, or primarily for signalling purposes? This is a critical question. AI that fills a genuine gap (detecting transaction patterns that manual review cannot feasibly catch, or surfacing third-party risks across a complex supply chain) is meaningfully different from AI adopted because it signals sophistication to external partners. Development agencies should ask how the tool connects to the company’s broader anti-corruption programme, and what problem it was designed to solve.
  2. Is there a governance framework for responsible AI use within the company’s broader risk management system? This means having clarity on who oversees the technology, how it is tested and validated, whether decisions can be audited and appealed, and who is accountable when AI-supported decisions cause harm. Without that, the presence of AI tools tells you very little about whether a company’s anti-corruption programme is actually working.
  3. Has the company selected its AI vendors carefully? For companies deploying third-party AI solutions in their anti-bribery and anti-corruption programmes, vendor selection is an important consideration. Development partners should ask whether the company has a policy requiring vendors to comply with applicable AI legislation and responsible AI principles, and whether the company’s own AI policies extend to its vendors. Equally important is whether these requirements are reflected in contractual agreements.

Using AI in the right ways, for the right reasons

As more companies consider leveraging AI in their anti-corruption programmes, development agencies will increasingly encounter partners who present these tools as evidence of compliance effectiveness. What matters is not whether a company is using AI, but whether it is being used to address a clearly defined risk, whether it is governed responsibly, and whether the company understands not only the technology’s potential but also its limitations.

    About the author

    Oludolapo Makinde

    Oludolapo Makinde is a PhD candidate in law at the University of British Columbia.

    Disclaimer


    All views in this text are the author(s)’, and may differ from the U4 partner agencies’ policies.

    This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International licence (CC BY-NC-ND 4.0)