
Exploring artificial intelligence for anti-corruption

Uncovering corruption and fraud with artificial intelligence

There are relatively few examples of how artificial intelligence (AI) and machine learning (ML) have been deployed in anti-corruption work. Such technologies are more often used by investigators, banks, and financial institutions to uncover financial crime, fraud, or suspicious transactions. Anti-corruption organisations have recently been offered similar tools. A collaboration between Exiger and Transparency International (TI) in the UK aims to improve TI’s capacity to analyse public records and identify corruption risk. In Ukraine, the local TI chapter has developed its own AI tool to reveal fraudulent bids in public procurement. They named the tool Dozorro, as it was deployed to monitor the open source government procurement system Prozorro.
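The post does not describe how Dozorro works internally, but a minimal sketch of the general idea, flagging unusual tenders in open procurement data for human review, might look as follows. All feature names, values, and thresholds here are hypothetical, not the actual Dozorro model.

```python
# Minimal, illustrative sketch of unsupervised risk flagging for procurement bids.
# Hypothetical features; not the actual Dozorro implementation.
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical tender-level features extracted from an open procurement feed
bids = pd.DataFrame({
    "n_bidders":         [5, 1, 4, 1, 6],                  # single-bidder tenders are a classic red flag
    "price_vs_estimate":  [0.97, 1.00, 0.92, 0.999, 0.95],  # winning price / expected value
    "days_to_deadline":   [30, 2, 25, 3, 28],               # very short submission windows
})

# Fit an isolation forest and flag the most unusual tenders for manual review
model = IsolationForest(contamination=0.2, random_state=0)
bids["flagged"] = model.fit_predict(bids) == -1

print(bids[bids["flagged"]])
```

Such flags only prioritise cases for auditors or journalists to examine; they are not evidence of fraud on their own.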

The Brazilian Office of the Comptroller General has developed a machine learning application to estimate the risk of corrupt behaviour among its civil servants. Variables from criminal records, education registries, political affiliation, business relations, and more are included in the analysis. The team behind the project later developed a similar tool to predict the likelihood of corrupt behaviour among businesses, but has run into challenges integrating information from different public databases. Brazilian law also does not permit sanctions based on the predictions from these tools. AI tools can indeed be effective in uncovering and even predicting corruption, but challenges remain in taking offenders to court and securing convictions and sentences.
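As a rough illustration of the kind of supervised risk scoring described above, a classifier can be trained on past sanction decisions and used to prioritise audits. This is only a sketch under assumed features and data, not the Comptroller General's actual model.

```python
# Illustrative sketch of a supervised corruption-risk classifier.
# Feature names and data are hypothetical; not the CGU's model or feature set.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical per-civil-servant features joined from several registries
df = pd.DataFrame({
    "prior_sanctions":    [0, 2, 0, 1, 0, 3, 0, 1],
    "years_of_service":   [12, 3, 7, 20, 5, 2, 15, 9],
    "owns_supplier_firm": [0, 1, 0, 1, 0, 1, 0, 0],   # business ties to government suppliers
    "sanctioned_later":   [0, 1, 0, 1, 0, 1, 0, 0],   # label: later found to have acted corruptly
})

X, y = df.drop(columns="sanctioned_later"), df["sanctioned_later"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# The output is a risk score used to prioritise audits, not evidence of guilt
risk = model.predict_proba(X_test)[:, 1]
print("AUC:", roc_auc_score(y_test, risk))
```

As the paragraph notes, a score like this cannot by itself support sanctions; it can only direct investigative attention.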

Using AI to change faulty systems to promote integrity

A different strategy for applying AI to anti-corruption is to redesign systems that have previously been prone to bribery or corruption. Using AI tools to increase integrity, simplify procedures, or reduce points of interaction may over time undercut opportunities for bribery. The IBM research group in Kenya claims to have done just that. Since 2014, they have been working with the government of Kenya to improve the country’s position on the World Bank’s Ease of Doing Business index. They identified complex regulations, inefficiency, and bureaucrats feeling helpless towards the system as the primary drivers of bribes, which were usually paid simply to speed up decision processes. The IBM team worked not only on the technical side of the problem, but addressed the challenge from multiple angles to improve the process of establishing a business. Kenya has since climbed from 136th to 61st out of 189 countries on the index.

Digitisation is a prerequisite for AI solutions

A prerequisite for deploying AI, whether to track and uncover corruption or to renew government service systems, is accessible, digitised data. Several countries still depend on paper-based systems, and private corporations offer their services to digitise registries and services. Some projects are based on extracts from telecom data, while others rely on the analysis of satellite imagery. Mobile money and the digitisation of cash-based aid not only simplify transactions, but also make them more secure and easier to monitor. The data produced can be used for analytical purposes.

Privacy concerns within ‘information capitalism’

Call records and individual transaction data are sensitive; biometric identification data even more so. In some countries, trust in private companies to keep such information safe and protected from misuse may exceed trust in government. However, it remains a source of concern that private enterprises control vast amounts of critical data harvested from developing countries. Information is power, and the term ‘information capitalism’ has been used to describe this dynamic.

Education needed to promote local ownership of data and projects

Promoting local ownership of data and projects is a significant motivation for supporting education and research in developing countries. Again, the big players in the field, such as Google, IBM, Microsoft, and Facebook, are promoting projects and developing solutions in developing countries. While some companies clearly state their intention to develop financially viable projects, others engage in development projects through their social responsibility programmes.

Persisting concerns over biased outcomes of algorithms

When and if AI is implemented in governance and decision making to support or replace existing systems, there are reasons to worry about biased outcomes. Unwanted side effects of such decision-making systems may stem from bias in the data used to train the AI or in the design of an algorithm. Opaque algorithms, and thereby opaque decision-making systems, represent a challenge known as the black box problem. The right to explanation requires transparently designed algorithms, or methodologies that make it possible to test or contest decisions. Several institutions have developed ethical guidelines for the design, application, and promotion of trust in AI, including the European Commission, whose guidelines emphasise that trustworthy AI should be lawful, ethical, and robust. Challenges occur when technology develops faster than legislation, so that it can operate in unregulated, global contexts.
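To make the concern concrete, one sketch of what ‘explainability’ can mean in practice: with a simple, transparent model, the contribution of each input to an individual decision can be read off and contested. The features and data below are hypothetical, and this illustrates the idea of an explainable decision rather than any specific legal requirement.

```python
# Illustrative sketch of an inspectable decision-support model.
# Hypothetical features and labels; not a real auditing system.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

X = pd.DataFrame({
    "n_bidders":         [1, 5, 1, 6, 2, 4],
    "price_vs_estimate": [1.00, 0.95, 0.99, 0.90, 0.98, 0.93],
})
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = flagged by human auditors in the past

# A linear model keeps decisions inspectable: each feature's weight is explicit
model = LogisticRegression().fit(X, y)
for name, coef in zip(X.columns, model.coef_[0]):
    print(f"{name}: weight {coef:+.2f}")

# For one flagged case, the per-feature contributions explain the score,
# giving the affected party something concrete to test or contest
case = X.iloc[[0]]
contributions = model.coef_[0] * case.values[0]
print(dict(zip(X.columns, contributions.round(2))))
```

More complex, opaque models typically need dedicated explanation methods on top; the trade-off between accuracy and inspectability is part of the black box problem the paragraph describes.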