How authorities should tackle AI challenges

Artificial intelligence differs from other technological advances in finance, such as the initial adoption of computers and automated trading systems. As Russell and Norvig (2021) note, AI is a rational maximising agent, one that not only analyses and recommends but also makes decisions.

Even though many would wish to, the financial authorities cannot slow the private sector’s move towards autonomous AI systems, nor should they persist in demanding features, such as explainable models, that reflect pre-AI concerns. Banks that deploy AI gain an immediate competitive advantage, forcing their competitors to follow and leaving supervisors little choice but to go along.

The financial authorities are in an AI ‘arms race’ with the private sector – one they currently seem set to lose. This increases the risk of an ineffective supervisory structure and costly financial crises. Our research (Danielsson et al. 2023, Danielsson and Uthemann 2025) examines how AI affects systemic risk and the effectiveness of financial regulation. We build on that work here, focusing both on the operational realities facing the financial authorities and on how they can best respond.

The financial authorities face a difficult dilemma. The same AI that helps them execute their mission also undermines their control. At the micro level, AI excels at finding regulatory arbitrage and at designing pricing algorithms that master price discrimination or even outright abuse. Technically, this AI might comply with the authorities’ rules, but in ways they might not find palatable. Existing approaches to preventing abuse may be inadequate to meet the challenges posed by AI.

AI helps those intent on exploiting and damaging the financial system, whether criminals, terrorists or hostile nation states. Those attackers need only one successful breach, whereas the defenders must guard the entire system. We call this the ‘defender’s dilemma’, and it can only get worse with time.

AI creates risks that current monitoring frameworks miss. When AI takes over critical functions, most importantly liquidity management, it is likely that existing systemic risk dashboards, based on past performance and practices, will not detect new forms of emergent risk.

Ultimately, AI gives rise to wrong-way risk: the risk it creates is highest precisely at the times when exposure to that risk factor is greatest.

Financial crises occur when banks shift from maximising profits to survival, the one-in-a-thousand-day problem discussed in Danielsson (2024). Speed is always of the essence, as the first bank to act decisively in a crisis is the most likely to survive, while the laggards face massive losses and even bankruptcy.

AI accelerates the speed of crises through its unmatched capacity to monitor the system, evaluate strategic alternatives and execute complex decisions at a speed no human can match. When a shock occurs, the AI engines rapidly parse vast streams of market, macroeconomic, political and competitor data, updating forecasts and adjusting positions.

This speed advantage means that by the time supervisors register an abnormal market move, significant shifts in liquidity or asset pricing may already have taken place. There is a strong competitive advantage in acting quickly: the first to react to a shock minimises losses, while the last faces heavy losses and even bankruptcy.

Strategic complementarities arise when AI engines monitor and react to one another’s visible market footprints. When one system moves in response to stress, others may interpret it as confirming evidence and adjust accordingly. The result is self-reinforcing action across institutions, creating a rapid convergence of behaviour even without direct coordination. This is not illegal, and there is nothing the authorities can do to prevent this behaviour.

Similarity in AI design and operation reinforces this tendency towards synchronisation. Many, if not most, private-sector institutions procure systems from the same small set of vendors, train them on overlapping datasets and optimise for comparable objectives.

The combined effect is to compress the timeline of crises. Events that once unfolded over days or weeks can now play out in minutes or hours, leaving almost no window for policy intervention.

While such systems smooth out minor fluctuations in calm markets, their tendency towards rapid, coordinated action under stress increases the probability and severity of extreme market moves. This dynamic lowers observed day-to-day volatility but produces a fatter-tailed distribution of outcomes.
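To see how lower day-to-day volatility and fatter tails can go together, the stylised simulation below uses purely illustrative assumptions (not a calibrated model from the research cited here): AI systems are assumed to partially absorb shocks below a stress threshold but to amplify, through synchronised de-risking, the rare shocks above it.

```python
# Stylised illustration: AI dampens small shocks but amplifies rare large ones.
# Parameters (threshold, damp, amplify) are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)
n_days = 1_000_000
shocks = rng.normal(0.0, 1.0, n_days)          # exogenous daily shocks

baseline = shocks                               # returns without AI intermediation

threshold, damp, amplify = 3.0, 0.7, 3.0
calm = np.abs(shocks) <= threshold              # ordinary days: AI absorbs part of the shock
ai_returns = np.where(calm, damp * shocks, amplify * shocks)  # stress days: synchronised de-risking

def excess_kurtosis(x):
    """Excess kurtosis: roughly 0 for a normal distribution, large for fat tails."""
    z = (x - x.mean()) / x.std()
    return (z ** 4).mean() - 3.0

# Expect lower volatility but far higher kurtosis for the AI case.
print(f"volatility: baseline {baseline.std():.2f}, with AI {ai_returns.std():.2f}")
print(f"excess kurtosis: baseline {excess_kurtosis(baseline):.1f}, with AI {excess_kurtosis(ai_returns):.1f}")
```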

The fundamental motivation for regulating the financial system is to align the interests of the private sector with society. In technical language, this is a principal–agent problem, where the principal (the supervisor) seeks to make the agent (the financial institution) act in the interest of society.

That relationship changes with AI, as the one-sided principal–agent problem becomes two-sided: principal–agent–AI. The supervisors seek to control the behaviour of banks, while both must control their AI. Unfortunately, the way we now incentivise – the carrots and sticks inherent in the supervisory structure – does not work with AI.

In effect, the supervisors’ traditional levers of incentives, penalties and reputational pressure lose traction when key decisions are delegated to autonomous systems.

Banks cannot effectively explain how AI works, nor how it makes decisions. Meanwhile, the supervisors cannot effectively regulate algorithms to which penalties, reputational damage and bonus clawbacks mean nothing.

Ultimately, this implies that the current slow and deliberate human-centred control system is ill-suited to overseeing a far more agile AI system.

AI brings new challenges in accountability. It is already very difficult to hold individual bankers accountable. That becomes even harder as AI use proliferates, creating new avenues for those intent on exploiting the system for private gain.

The authorities cannot outpace the private sector, but they can narrow the gap. To begin with, they need to develop their own AI capabilities directly within the operational functions of the authorities. Financial stability, monetary policy and supervision should take the lead on AI in their organisations, and not leave it to auxiliary divisions such as IT, data or innovation.

One challenge is the implementation of AI engines. The authorities are justifiably reluctant to use commercial systems, especially from vendors in foreign jurisdictions, as there is a significant chance of data leakage and confidentiality violations. The alternative is to develop their own internal engines, either directly or by using open-source engines.

While seemingly attractive, we suspect that most authorities will find it difficult, if not impossible, to allocate the necessary financial and human capital resources to set up their own engines to match the capabilities of private-sector systems.

A practical middle ground might be to engage vendors in the local jurisdiction to set up high-quality AI engines for the authorities’ purposes. Keeping this within the local jurisdiction is important, as it allows the authority to exercise the necessary control.

The financial system is global, but the authorities operate within narrow and jealously guarded silos. Here, AI can help. Authorities across multiple jurisdictions could set up a single AI engine for a common purpose and to meet global challenges. However, restrictions on data sharing preclude doing that today.

The authorities can leverage a technique called federated learning, which allows them to train a shared model across jurisdictions without sharing the underlying data: training takes place locally inside each authority, on data it controls, and only model weights are shared.

Since the resulting neural networks are heavily over-parameterised and their weights reflect optimisation across multiple jurisdictions, there is practically no way to reverse the weights back into individual data points. This protects confidentiality while enabling collective intelligence.
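For intuition, the sketch below shows the basic federated-averaging pattern with a toy logistic-regression model and synthetic data; the authorities, datasets and model here are illustrative assumptions, and a real deployment would rely on a dedicated federated-learning framework, secure aggregation and far larger models.

```python
# Minimal federated-averaging sketch: each authority trains locally on data it
# never shares; only model weights are pooled into a shared model.
import numpy as np

rng = np.random.default_rng(1)

def make_local_data(n=500, d=5):
    """Synthetic confidential data held by one authority (never leaves it)."""
    X = rng.normal(size=(n, d))
    true_w = np.linspace(1.0, -1.0, d)
    y = (X @ true_w + rng.normal(scale=0.5, size=n) > 0).astype(float)
    return X, y

def local_update(w, X, y, lr=0.1, epochs=20):
    """Local logistic-regression training; only the resulting weights are shared."""
    w = w.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))     # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)       # gradient step on the local data
    return w

authorities = [make_local_data() for _ in range(3)]   # three jurisdictions
global_w = np.zeros(5)

for _ in range(10):                                    # federated rounds
    local_weights = [local_update(global_w, X, y) for X, y in authorities]
    global_w = np.mean(local_weights, axis=0)          # average weights only

print("shared model weights:", np.round(global_w, 2))
```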

Furthermore, AI creates new possibilities for real-time supervision. It is technically straightforward to set up direct AI-to-AI communication links (via application programming interfaces, or APIs) that allow the authority’s AI to communicate directly with private-sector AI, so that it can test responses and benchmark regulations.
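As a purely hypothetical sketch of what such an interface might look like, the snippet below has the authority post a stress scenario to banks’ AI endpoints and collect their planned responses for benchmarking; the endpoint URLs, JSON fields and scenario format are invented for illustration and do not correspond to any existing standard.

```python
# Hypothetical authority-to-bank AI interface: POST a scenario, collect responses.
# Endpoints and field names are illustrative assumptions, not a real standard.
import json
from urllib import request

SCENARIO = {
    "scenario_id": "liquidity-shock-01",
    "shock": {"asset_class": "sovereign_bonds", "price_move_pct": -5.0},
    "horizon_hours": 24,
}

def query_bank_ai(endpoint: str, scenario: dict) -> dict:
    """Send the scenario to a bank's AI endpoint and return its planned response."""
    req = request.Request(
        endpoint,
        data=json.dumps(scenario).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req, timeout=30) as resp:
        return json.load(resp)

# Hypothetical registered endpoints for supervised institutions.
bank_endpoints = [
    "https://bank-a.example/supervisory-ai/v1/scenario",
    "https://bank-b.example/supervisory-ai/v1/scenario",
]

responses = {url: query_bank_ai(url, SCENARIO) for url in bank_endpoints}
# The authority's own AI could then compare planned liquidity draws across banks
# to detect synchronised de-risking before it shows up in markets.
```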

These ideas build on early experiments in interactive stress testing, such as the Bank of England’s 2024 ‘system-wide exploratory scenario’, which incorporated interactive elements that allowed participants to adjust strategies in response to emerging conditions during the simulation.

Fast crises require fast responses, and current crisis-intervention facilities are likely to be too slow. This suggests that the authorities should set up automated facilities to pre-empt a crisis, perhaps releasing liquidity at the very moment private-sector AI systems are deciding whether to run in response to an external shock.
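As a deliberately minimal, hypothetical sketch of what such a pre-committed automated facility could look like, the snippet below releases liquidity once real-time stress indicators breach pre-agreed thresholds; the indicators, thresholds and cap are invented for illustration, and any real facility would require far richer triggers and governance.

```python
# Hypothetical pre-committed liquidity facility: release funds automatically
# when stress indicators breach pre-agreed thresholds. All numbers are invented.
from dataclasses import dataclass

@dataclass
class StressIndicators:
    repo_spread_bp: float      # money-market spread, in basis points
    outflow_rate_pct: float    # funding outflow over the last hour, in percent

def liquidity_to_release(ind: StressIndicators, cap: float = 50e9) -> float:
    """Return the amount of the pre-committed facility to release immediately."""
    triggered = ind.repo_spread_bp > 100 or ind.outflow_rate_pct > 2.0
    if not triggered:
        return 0.0
    # Scale the release with the severity of the worst breach, up to the cap.
    severity = max(ind.repo_spread_bp / 100, ind.outflow_rate_pct / 2.0)
    return min(cap, cap * (severity - 1.0))

print(liquidity_to_release(StressIndicators(repo_spread_bp=180, outflow_rate_pct=1.2)))
```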

Finally, the authorities should keep track of AI use in their monitoring frameworks. Directly identifying AI adoption at a divisional level in the private sector (such as risk management, credit and treasury functions) is a fruitful avenue. This would include the type of AI engines used, how they are trained and where they are procured from.

This same framework should also monitor vendor concentration, since dependence on a small set of providers increases the risk of synchronised behaviour during stress.
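One simple way to summarise such dependence is a standard concentration measure. The sketch below computes a Herfindahl-Hirschman index over the AI vendors reported by supervised institutions; the institutions and vendor names are made up for illustration.

```python
# Vendor concentration via the Herfindahl-Hirschman index (HHI).
# The survey data below is made up for illustration.
from collections import Counter

# Which AI vendor each institution's risk function reports using.
vendor_by_institution = {
    "Bank A": "Vendor X", "Bank B": "Vendor X", "Bank C": "Vendor Y",
    "Bank D": "Vendor X", "Bank E": "Vendor Z", "Bank F": "Vendor Y",
}

counts = Counter(vendor_by_institution.values())
total = sum(counts.values())
shares = {vendor: n / total for vendor, n in counts.items()}

# HHI ranges from 1/n (evenly spread across n vendors) to 1 (a single vendor).
hhi = sum(s ** 2 for s in shares.values())

print("vendor shares:", {v: round(s, 2) for v, s in shares.items()})
print(f"HHI: {hhi:.2f}")
```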

The financial authorities find it difficult to meet the challenges arising from AI. If they engage proactively with AI, the authorities can markedly improve the supervisory process and stabilise the financial system. If they do not, the likely outcome is more misbehaviour, fraud, instability and financial crises.

Unfortunately, much of the authorities’ public discussion of the impact of AI on the financial system does not seem to engage with the most important threats arising from AI. Instead, much of their AI policy analysis appears to remain grounded in traditional, pre-AI methodological approaches.

Ultimately, the effectiveness and stability of the financial system depend on whether the authorities can master the same AI technologies that are already revolutionising the way private-sector firms operate.

References

Bank of England (2024), System-Wide Exploratory Scenario: Final Report.

Danielsson, J (2022), The Illusion of Control, Yale University Press.

Danielsson, J (2022), “The illusion of control”, VoxEU.org, 7 November.

Danielsson, J (2024), “The one-in-a-thousand-day problem”, VoxEU.org, 24 December.

Danielsson, J, R Macrae and A Uthemann (2023), “Artificial Intelligence and Systemic Risk”, Journal of Banking and Finance 140: 106290.

Danielsson, J and A Uthemann (2024), “AI financial crises”, VoxEU.org, 26 July.

Danielsson, J and A Uthemann (2025), “Artificial intelligence and financial crises”, Journal of Financial Stability 80: 101453.

Russell, S and P Norvig (2021), Artificial Intelligence: A Modern Approach, Pearson.

This article was originally published on VoxEU.org.