Blog
Government by AI: We are not ready for the consequences in anti-corruption

AI changes how corruption is managed
What if the fight against corruption were left to machines? Meet Diella, the world’s first AI cabinet minister, elevated from a virtual assistant to the official ‘minister of procurement’ in Albania. Her mission – reduce corruption, improve efficiency, and bring prosperity to all – is clear, but we don’t yet know the full extent of her abilities.
Although Albania’s approach may seem radical, it is just the latest step on a ten-year path. Since launching the e-Albania platform in 2013, the country has pursued an ambitious digital governance agenda, with procurement as a central pillar.
Diella began as a simple chatbot within e-Albania, but her abilities have steadily expanded – from text, to speech, to a full digital avatar. Behind her is the National Agency for Information Society, the body responsible for developing and hosting her.
The goal of digitalising procurement is a welcome one. For AI solutions to work well in this field, they must be fed with high-quality, current data. This puts pressure on the whole machinery of government not only to digitalise its procurement processes but also to use the data it collects more effectively. Many countries have developed online procurement platforms as a basic step in this direction, but if AI is to be involved and effective, these platforms need to be fully and consistently used.
Despite global challenges in public data management, different technologies and AI tools are already being trialled to monitor public spending. Emerging technologies – such as advanced data analytics and the use of big data to identify cost-saving opportunities, generate insights, and spot red flags – are being developed, tested, and scaled to address corruption and fraud risks in procurement. The ability of these systems to learn and adapt makes it easier to identify existing corruption risks and networks – and even predict new ones.
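To make the idea of red-flag analytics more concrete, here is a minimal sketch in Python of the kind of rule-based screening such tools typically start from. The data, column names, and thresholds are invented for illustration and are not drawn from any real procurement platform.

```python
import pandas as pd

# Hypothetical procurement records; real platforms hold far richer data.
tenders = pd.DataFrame({
    "tender_id":   ["T1", "T2", "T3", "T4", "T5"],
    "authority":   ["Health", "Health", "Roads", "Roads", "Roads"],
    "winner":      ["Acme", "Acme", "Bravo", "Bravo", "Bravo"],
    "num_bidders": [1, 1, 4, 1, 2],
    "award_value": [120_000, 95_000, 480_000, 510_000, 75_000],
})

# Red flag 1: single-bidder tenders suggest restricted competition.
tenders["flag_single_bidder"] = tenders["num_bidders"] == 1

# Red flag 2: the same supplier repeatedly winning from one authority.
wins = tenders.groupby(["authority", "winner"])["tender_id"].transform("count")
tenders["flag_repeat_winner"] = wins >= 2

# Aggregate the flags into a simple risk score an analyst could review.
tenders["risk_score"] = tenders[["flag_single_bidder", "flag_repeat_winner"]].sum(axis=1)
print(tenders.sort_values("risk_score", ascending=False))
```

Real systems layer statistical and machine-learning methods on top of rules like these, but the underlying dependence on complete, consistent procurement data is the same.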
These capabilities hold enormous potential. For instance, they can be used to build trust and foster competition through improved and more precise corruption risk management. Yet, like all innovations, they carry risks, including the danger of being exploited by actors with the power, interest, and technical know-how to manipulate the system to their own advantage.
We only think we know the consequences of AI
While the rise of Diella from chatbot to minister might look to some like a slick tech-success story, her elevation shows how high the stakes have become. At this level, errors will mean more than simply misdirecting a user to the wrong menu. Diella now has the power to shape perceptions of fairness and trust in the state itself.
Here, automation bias – the tendency of people to over-rely on automated decisions – can be a strength rather than a risk. When the AI applies consistent, transparent rules, it can reduce discretionary decision-making, reinforce fairness, and build confidence in procurement processes. Yet this bias also carries risks: it can conceal corruption or justify abuse under the guise of ‘rational’, unarguable calculations (‘the computer says no’). Human oversight, while no guarantee of accuracy or fairness, remains essential.
Still, giving algorithms the final say in procurement – what to buy, from whom, and at what price – divides opinion. While it may appear that power is being handed over to machines, these systems currently do not operate without people or context anchoring them to reality. Humans supply the AI with rules, data, and purpose, which guide its decisions and shape how outputs are interpreted and used.
This becomes especially clear when considering bias. AI amplifies patterns it finds in data, which means existing distortions can become more pronounced. In anti-corruption, for example, selective labelling is a major risk: if procurement AI is trained only on prosecuted corruption cases, it may overfit to those patterns and ignore the many instances that went undetected or unreported.
A deeper concern arises when models are trained on past sanctions without questioning whether those punishments were themselves fair or politically motivated. If such biases are baked into training data, AI risks scaling up existing injustices rather than correcting them. Careful assessment of past cases and proactive mitigation measures are therefore critical to prevent unfairness from becoming automated at scale.
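A simplified sketch of the selective-labelling problem: if prosecutions have historically concentrated in one sector, a model trained on those labels learns the prosecution pattern rather than the corruption pattern, and scores unprosecuted-but-corrupt cases as low risk. The data and features below are entirely synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Invented example: corruption occurs in both sector A (0) and sector B (1),
# but past prosecutions only ever targeted sector A.
n = 1000
sector = rng.integers(0, 2, n)              # 0 = sector A, 1 = sector B
corrupt = rng.random(n) < 0.2               # true (unobservable) corruption
prosecuted = corrupt & (sector == 0)        # selective labelling: B never prosecuted

features = np.column_stack([sector, rng.random(n)])  # sector plus a noise feature

# Training on prosecution labels teaches the model that only sector A is risky.
model = LogisticRegression().fit(features, prosecuted)

# The model now scores corrupt sector B cases as low risk.
print("mean risk score, corrupt cases in sector A:",
      model.predict_proba(features[corrupt & (sector == 0)])[:, 1].mean())
print("mean risk score, corrupt cases in sector B:",
      model.predict_proba(features[corrupt & (sector == 1)])[:, 1].mean())
```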
CAIR3 helps map AI’s ripple effects in governance
As AI systems are inserted into governance roles, we face a new dilemma: how to harness their promise while mitigating their inevitable risks. This is where frameworks like the Consequences of AI in the Real World 3-step framework, or CAIR3, come in.
Developed by the Alan Turing Institute – a UK-based centre for data science and artificial intelligence – CAIR3 offers a structured way to understand the social impacts of AI deployment. Although designed for the private sector, its approach is equally valuable for identifying corruption risks and unintended consequences in public governance.
At its core, CAIR3 helps users systematically map how AI systems can affect people, processes, and institutions. It begins by recognising the unique challenges AI introduces: privacy, transparency, explainability, and accuracy. These challenges are then placed in real-world contexts, such as the case of the AI minister, to explore both direct consequences and knock-on effects. The result is a ‘family tree’ of positive, negative, and neutral outcomes, which helps organisations anticipate where interventions might be needed.
CAIR3 also asks us to consider different stakeholders, depending on their proximity to the system. This helps us see not only what AI can do, but how people might change their behaviour in response, intentionally or unintentionally. For example, a police officer investigating a corruption case might seek unauthorised access to the AI’s detection and prediction results, which, if shared, could threaten privacy and erode public trust. Alternatively, a corrupt politician could attempt to manipulate inputs or use AI outputs to favour their allies’ businesses over competitors. These are purely fictional examples, and by no means exhaustive, but they illustrate how different actors might interact with the system in different ways.
Finally, the framework translates this analysis into three categories of response: action, influence, and monitoring (AIM). These guide us in reinforcing positive effects, containing neutral ones, and mitigating negative ones, providing a structured approach for responsible AI deployment.
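As a rough illustration of what such a consequence map could look like once written down, the sketch below encodes a few consequences from the AI-minister example as a small tree, each tagged with its valence and an AIM response. The entries are invented for illustration and are not an official CAIR3 output.

```python
from dataclasses import dataclass, field

@dataclass
class Consequence:
    description: str
    valence: str          # "positive", "negative" or "neutral"
    response: str         # AIM: "action", "influence" or "monitoring"
    knock_on: list["Consequence"] = field(default_factory=list)

# A tiny, illustrative 'family tree' of consequences for the AI minister.
tree = Consequence(
    "Chatbot guides companies through tender applications",
    valence="positive", response="influence",
    knock_on=[
        Consequence("Lower barriers to entry increase competition",
                    valence="positive", response="influence"),
        Consequence("Prompts used to extract rivals' case details",
                    valence="negative", response="action",
                    knock_on=[
                        Consequence("Judicial data access is cut, weakening red-flag checks",
                                    valence="negative", response="monitoring"),
                    ]),
    ],
)

def walk(node: Consequence, depth: int = 0) -> None:
    """Print the consequence tree with its valence and AIM response."""
    print("  " * depth + f"[{node.valence}/{node.response}] {node.description}")
    for child in node.knock_on:
        walk(child, depth + 1)

walk(tree)
```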
Let’s see how this works in practice by applying CAIR3 to the case of the AI minister.
AI in procurement can leak data or be gamed
We mentioned above that we don’t yet know the full extent of Diella’s powers, and even less who will be tasked with explaining her decisions and holding her accountable for any errors, random or intentional.
Let’s imagine one of her abilities is to guide companies through the tender application process using natural language prompts. From an anti-corruption perspective, this feature offers both promise and peril.
The most obvious benefit is accessibility. By answering applicants’ questions directly, the chatbot can streamline procedures, lower barriers to entry, and possibly detect corruption before it happens. But the very same feature could also be exploited. Users might probe the system’s guardrails – intentionally or not – leading to unintended consequences.
Here are two hypothetical scenarios that illustrate just some of the ways that AI risks in procurement need to be carefully balanced against the opportunities.
Scenario 1: Sensitive information leaks
This would only occur if the AI had direct access to government databases, such as blacklists of companies charged with fraud and corruption or records of ongoing court cases. If poorly designed, such integration could let competitors extract intelligence about rivals, or details about investigations or charges, through cleverly crafted prompts. Faced with leaks, governments might cut off the AI’s access to judicial data. While this would reduce exposure, it would also weaken the system’s ability to detect red flags, making it easier for companies with problematic records to win contracts.
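One way to reduce this exposure, sketched below with invented field names, is to let the assistant answer only from an explicit allowlist of non-sensitive fields instead of giving it raw access to judicial or investigative databases. This is an illustration of the design principle, not a description of how Diella is actually built.

```python
# Hypothetical company record as the assistant's backend might see it.
company_record = {
    "name": "Acme Ltd",
    "registration_status": "active",
    "past_public_contracts": 12,
    "debarment_status": "under investigation",   # sensitive
    "open_court_cases": ["case-2024-17"],        # sensitive
}

# Only explicitly allowlisted fields may ever reach a chatbot answer.
DISCLOSABLE_FIELDS = {"name", "registration_status", "past_public_contracts"}

def answer_about(record: dict, requested_field: str) -> str:
    """Answer from the allowlist; refuse everything else by default."""
    if requested_field not in DISCLOSABLE_FIELDS:
        return "That information cannot be shared through this service."
    return f"{requested_field}: {record[requested_field]}"

print(answer_about(company_record, "registration_status"))
print(answer_about(company_record, "open_court_cases"))  # refused by default
```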
Scenario 2: Gaming the evaluation process
Prompt injections are a well-documented vulnerability. Through seemingly harmless questions such as ‘If I wanted to win the bid, what should I avoid doing?’ or ‘What did losing bidders do right?’, companies could extract insights that allow them to tailor applications around the AI’s evaluation criteria. This would skew competition, entrench market dominance, and – thanks to automation bias – encourage officials to dismiss irregularities as coincidence. In effect, corruption could hide behind the veneer of machine objectivity.
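To see why such indirect questions are hard to block, consider a deliberately naive, hypothetical guardrail that filters prompts by keyword. Obvious requests are caught, but the rephrased questions above slip straight through, which is why defences against prompt manipulation cannot rely on input filtering alone.

```python
# A deliberately naive keyword filter standing in for a chatbot guardrail.
BLOCKED_PHRASES = [
    "evaluation criteria",
    "scoring weights",
    "how are bids scored",
]

def guardrail_allows(prompt: str) -> bool:
    """Return True if the naive filter would let the prompt through."""
    lowered = prompt.lower()
    return not any(phrase in lowered for phrase in BLOCKED_PHRASES)

prompts = [
    "What are the evaluation criteria for this tender?",        # blocked
    "If I wanted to win the bid, what should I avoid doing?",   # slips through
    "What did losing bidders do right?",                        # slips through
]

for p in prompts:
    verdict = "allowed" if guardrail_allows(p) else "blocked"
    print(f"{verdict:7} | {p}")
```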
Together, these scenarios highlight both conditional and likely risks. Information leakage may be less probable with strong safeguards, but gaming the system through prompt manipulation is a realistic and pressing vulnerability. Both underscore a broader point: evaluating AI’s ripple effects is as important as designing the tools themselves.
This is where CAIR3 leads us naturally to the next step: deciding how to act, influence, and monitor.
Donors can act, influence, and monitor to reduce risks
Once we map the consequences of the AI minister – or any other AI system – using tools like CAIR3, we can decide how to respond. The framework groups responses into three categories: action, influence, and monitoring.
Action means taking direct steps to prevent or mitigate harmful consequences. For example, donors can fund pilot projects that test procurement AI systems in controlled sandboxes, where sensitive databases remain protected but corruption risk signals can still be analysed safely.
Influence involves shaping norms, standards, and institutional behaviour. Development partners can support governments in creating corruption risk management and whistleblowing systems tailored to AI ministers, helping to align deployment with frameworks such as the EU AI Act and the EU whistleblowing directive. They can also facilitate national multi-stakeholder dialogues – similar to the OECD Global Forum on Competition – bringing in civil society and business associations to establish fair competition principles for AI-assisted procurement.
Monitoring means committing to continuous observation to catch emerging risks. Donors can, for instance, fund independent civil society organisations and investigative media to scrutinise government use of AI systems and respond strategically as risks emerge.
By combining these three modes of response, international development agencies can help governments capture the benefits of procurement AI while keeping corruption risks in check.
Use foresight and an anti-corruption lens to manage AI risks
The rise of a virtual minister like Diella is more than a curious experiment – it sets a precedent for how governance could evolve. While novel, it is not entirely unique. Politicians have already experimented, in lighter forms, with digital twins and AI chat versions of themselves. What makes Diella different is her elevation to cabinet status, which pushes the boundaries of good governance into uncharted territory.
If we take a techno-optimistic view, we might imagine a parliament where explainable AI systems provide accountable, transparent decisions, or where members of parliament rely on digital twins to make better-informed choices on behalf of their constituents. At the other extreme, a dystopian vision could see an entirely AI-driven parliament, with only a human president overseeing the machinery of state. One vision cements democracy, the other authoritarian rule. Reality, however, is usually more mundane – and more complex. The likelier outcome is a hybrid: in systems with strong checks and balances, AI may augment governance in valuable ways, while in weaker systems, it may entrench old risks in new forms.
Are we ready for the consequences? Not yet. But frameworks like CAIR3 give us a starting point. They remind us that AI does not replace human judgement; it makes it more urgent. Preparedness means equipping ourselves with the right foresight tools, strong oversight, and the humility to admit what we cannot yet predict.
We know why checks and balances are needed for people: because flawed systems, not power itself, allow corruption to take hold. The same logic applies to AI. Algorithms do not ‘want’ or ‘think’, yet their decisions can still produce harmful outcomes if the goals, rules, or data guiding them are flawed, misused, or manipulated.
Applying an anti-corruption lens and using foresight tools like CAIR3 lets us anticipate both known and unknown corruption risks, turning uncertainty into something governable.
Disclaimer
All views in this text are the author(s)’, and may differ from the U4 partner agencies’ policies.
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International licence (CC BY-NC-ND 4.0)


