
Financial institutions are at an inflection point in AI adoption. Early deployments focused on automating discrete tasks, broadly categorized as TEAG-CC (translate, extract, analyze, generate, compare and code). The next phase is more consequential: AI is now being applied to automate entire workflows. Improving how risk decisions are governed, documented and applied consistently across the organization will be key.
Risk management is fundamental to financial institutions, but in practice decision-making is distributed across functions and jurisdictions.
Approvals are sometimes made under time pressure and, even with robust frameworks, outcomes can depend on local conditions and expert judgment. This is not a flaw, but a feature of how markets operate.
Still, this variability creates challenges. Similar risks may be assessed differently across teams, and decision rationales are not always captured in a structured or reusable way. Over time, this reduces the ability to apply risk appetite consistently or to learn from past decisions.
Three Structural Frictions
Three ongoing issues illustrate the gap between how risk policies are designed and how they are applied, and they raise deeper questions: Are risk limits applied consistently across jurisdictions? Can front desks reliably manage the risks they have been approved to take? And how do we separate bad luck from bad judgment?
Most trading or banking business lines and risk departments already combine quantitative metrics with qualitative, human reasoning. Credit approvals incorporate underwriting analysis and exposure scenarios. Market risk decisions reflect liquidity conditions, pricing dynamics and forward-looking judgment. This generates a rich but fragmented dataset spanning structured inputs and unstructured reasoning, including committee minutes, chat logs, research and approval conditions.
Much of this information is used in isolation, limiting its reuse as a learning dataset.
Agentic AI introduces the possibility of capturing and structuring decision-level intelligence.
A well-designed AI layer can ingest approval data across trading and financing activities, combining quantitative attributes with qualitative reasoning. Over time, this creates a structured record of how risk has been assessed under different conditions, with the appropriate jurisdictional data governance.
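As an illustration only, a structured decision record of this kind might pair quantitative attributes with the qualitative rationale behind an approval. The field names and schema below are assumptions for the sketch, not an institutional or regulatory standard:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskDecisionRecord:
    """Hypothetical schema capturing one approval as a reusable data point."""
    decision_id: str
    decision_date: date
    jurisdiction: str            # drives jurisdictional data-governance handling
    exposure_usd: float          # quantitative attribute
    limit_usd: float
    rationale: str               # qualitative reasoning from minutes or approvals
    conditions: list[str] = field(default_factory=list)  # approval conditions

    def within_limit(self) -> bool:
        """Simple structured check that can be applied uniformly across records."""
        return self.exposure_usd <= self.limit_usd

# Example record, queryable later alongside similar past decisions
rec = RiskDecisionRecord(
    decision_id="CR-2024-0042",
    decision_date=date(2024, 3, 1),
    jurisdiction="UK",
    exposure_usd=25_000_000,
    limit_usd=30_000_000,
    rationale="Strong collateral coverage; liquidity stress scenario passed.",
    conditions=["Quarterly covenant review"],
)
```

Once decisions accumulate in a form like this, similar exposures can be compared across teams and time, which is the "learning dataset" the fragmented status quo prevents.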
The objective is not fully automated approval, but rather more consistent and auditable judgment.
In this model, AI operates within existing governance frameworks. It does not change risk appetite or replace human decision-makers. Instead, it enhances the transparency and consistency with which judgment is applied and recorded.
The use of AI in risk management introduces several constraints.
First, data quality and completeness remain critical. Fragmented or inconsistent inputs limit reliability.
Second, large language models (LLMs) can hallucinate and generate inaccurate outputs. Although models are improving, these limitations need to be managed through a model risk management framework, specific validation requirements and human oversight.
Third, institutions must ensure that AI outputs are explainable, auditable and aligned with existing control frameworks.
For these reasons, AI should be viewed as a decision-support tool, rather than a decision-maker.
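The "decision-support, not decision-maker" principle can be made concrete in routing logic: every AI recommendation terminates with a human, and weak recommendations are escalated rather than acted upon. The names and threshold below are hypothetical, a sketch of the pattern rather than a production control:

```python
from dataclasses import dataclass

@dataclass
class AiRecommendation:
    proposed_action: str   # e.g. "approve" or "decline"
    confidence: float      # model-reported confidence in [0, 1]
    rationale: str         # explanation retained for auditability

def route_for_review(rec: AiRecommendation, confidence_floor: float = 0.8) -> str:
    """Route every recommendation to a human reviewer.

    Low-confidence output is escalated to a senior reviewer; no branch
    allows the AI output to become a decision on its own.
    """
    if rec.confidence < confidence_floor:
        return "escalate_to_senior_reviewer"
    return "queue_for_human_approval"
```

The design point is that both branches end with a person: the model's confidence only changes *which* human sees the recommendation, never whether one does.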
Effective adoption of AI in risk management requires a robust control framework. This includes model risk management, clear agent registration and inventory, and strong observability of how systems operate and evolve.
Identity and access controls are crucial to ensure appropriate use, alongside monitoring frameworks to detect anomalies or unintended behavior.
These guardrails should ensure that AI operates within established governance structures, reinforcing rather than weakening existing risk controls.
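A minimal sketch of how registration, access control and observability fit together follows. The class and field names are assumptions for illustration; real deployments would integrate with the institution's existing identity and logging infrastructure:

```python
import logging
from dataclasses import dataclass, field

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent_registry")

@dataclass
class RegisteredAgent:
    agent_id: str
    owner: str                  # accountable human owner of the agent
    model_version: str          # supports model risk management and rollback
    allowed_scopes: set[str] = field(default_factory=set)  # access control

class AgentRegistry:
    """Minimal inventory: every agent is registered, every action is logged."""

    def __init__(self) -> None:
        self._agents: dict[str, RegisteredAgent] = {}

    def register(self, agent: RegisteredAgent) -> None:
        self._agents[agent.agent_id] = agent

    def authorize(self, agent_id: str, scope: str) -> bool:
        """Deny unregistered agents and out-of-scope actions; log every check."""
        agent = self._agents.get(agent_id)
        allowed = agent is not None and scope in agent.allowed_scopes
        log.info("agent=%s scope=%s allowed=%s", agent_id, scope, allowed)
        return allowed
```

Because every authorization attempt is logged, the same record that enforces access also feeds the anomaly-monitoring layer, so the guardrail and the observability requirement come from one mechanism.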
Regulators are likely to require periodic reporting and updates on consequential trade approvals and credit decisions made by experienced risk managers with the support of agentic AI architectures.