
Agentic AI and the Future of Risk Decision-Making

Article  •  April 16, 2026
[Image: robotic arm pointing to text reading "Risk Management"]

Key Takeaways

  • Financial institutions face challenges with fragmented and inconsistent risk decision-making, where the rationale behind decisions is often lost over time.
  • Agentic AI can help by structuring both quantitative and qualitative data to create a unified, auditable record of risk assessments.
  • The goal of implementing Agentic AI is not to replace human judgment but to augment it, leading to more consistent and transparent risk management.
  • The effectiveness of AI in this context is constrained by data quality, the potential for inaccurate outputs, and the necessity for explainable systems that operate within robust control frameworks.
Financial institutions are not short of data or frameworks, but risk decision-making can be fragmented, inconsistent and hard to scale. How do risk managers ensure that decisions are applied consistently and lessons are not lost over time? For this article, Citi Institute collaborated with colleagues in Citi's Risk teams to hear about their everyday experiences and explore how agentic AI could help bridge the gap between policy and practice.


The New AI Landscape

Financial institutions are at an inflection point in AI adoption. Early deployments focused on automating discrete tasks, broadly categorized as TEAG-CC (translate, extract, analyze, generate, compare and code). The next phase is more consequential, with AI being applied to automate entire workflows. Improving how risk decisions are governed, documented and applied consistently across the organization will be key.

Risk management is fundamental to financial institutions, but in practice decision-making is distributed across functions and jurisdictions.

Approvals are sometimes made under time pressure and, even with robust frameworks, outcomes can depend on local conditions and expert judgment. This is not a flaw, but a feature of how markets operate.

This inconsistency creates challenges. Similar risks may be assessed differently across teams, and decision rationales are not always captured in a structured or reusable way. Over time, this reduces the ability to apply risk appetite consistently or learn from past decisions.


Three Structural Frictions

Three ongoing issues show the gap between how risk policies are designed and how they are applied:

  1. Variation in applied risk appetite: Risk appetite may be defined centrally but interpreted differently across geographies, desks and market conditions. This reflects legitimate differences in context but also introduces inconsistency.
  2. Loss of contextual reasoning over time: Rationales behind approvals often fade as teams rotate and market conditions change. Without a structured record, institutions lose visibility into why decisions were made.
  3. Difficulty separating decision quality from outcome: Losses do not always imply poor decisions. When a decision generates losses, it is often unclear whether the original risk assessment was flawed or whether unforeseeable market conditions overwhelmed a sound assessment.


These raise deeper questions: Are risk limits applied consistently across jurisdictions? Can front desks consistently manage risks they have been approved to take? How do we separate bad luck from bad judgment?

The Data Reality: Abundant but Underutilized

Most trading or banking business lines and risk departments already combine quantitative metrics with qualitative, human reasoning. Credit approvals incorporate underwriting analysis and exposure scenarios. Market risk decisions reflect liquidity conditions, pricing dynamics and forward-looking judgment. This generates a rich but fragmented dataset spanning structured inputs and unstructured reasoning, including committee minutes, chat logs, research and approval conditions.

Much of this information is used in isolation, limiting its reuse as a learning dataset.

Agentic AI: Augmenting, Not Replacing

Agentic AI introduces the possibility of capturing and structuring decision-level intelligence.

A well-designed AI layer can ingest approval data across trading and financing activities, combining quantitative attributes with qualitative reasoning. Over time, this creates a structured record of how risk has been assessed under different conditions, with the appropriate jurisdictional data governance.
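To make the idea of a structured decision record concrete, the sketch below shows one possible shape for such a record, pairing quantitative attributes with the qualitative rationale captured at decision time. All field names and values are illustrative assumptions, not Citi's actual data model.

```python
# Illustrative sketch only: a minimal schema for a structured risk-decision
# record. Every field name here is an assumption for illustration.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskDecisionRecord:
    decision_id: str
    decision_date: date
    jurisdiction: str          # where the approval was made
    product: str               # the trading or financing activity
    notional_usd: float        # quantitative exposure attribute
    risk_rating: str           # internal rating at time of approval
    outcome: str               # e.g. "approved", "declined", "approved_with_conditions"
    rationale: str             # free-text reasoning captured at decision time
    conditions: list[str] = field(default_factory=list)  # any approval conditions

# Example record combining the quantitative and qualitative sides of a decision.
record = RiskDecisionRecord(
    decision_id="RD-0001",
    decision_date=date(2026, 4, 16),
    jurisdiction="UK",
    product="secured financing",
    notional_usd=25_000_000,
    risk_rating="BB",
    outcome="approved_with_conditions",
    rationale="Liquidity adequate; collateral haircut raised given market volatility.",
    conditions=["weekly collateral revaluation"],
)
```

Capturing the rationale and conditions alongside the numbers is what turns isolated approvals into a dataset that can later be searched, compared and audited.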

The objective is not fully automated approval, but rather more consistent and auditable judgment. 

In this model, AI operates within existing governance frameworks. It does not change risk appetite or replace human decision-makers. Instead, it enhances transparency and consistency by:

  • Making deviations from established practice visible
  • Providing context from comparable historical decisions
  • Creating clearer audit trails for how decisions were reached
  • Supporting oversight by risk, compliance and regulators
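The first two points above can be sketched in a few lines: given a structured history, a system can retrieve comparable past decisions and flag a new one that contradicts a strong historical consensus. The matching keys, the consensus threshold and the data shape are assumptions for illustration, not a production methodology.

```python
# Illustrative sketch only: flagging a decision that deviates from comparable
# historical practice. Matching keys and threshold are assumptions.
from collections import Counter

def comparable(history, jurisdiction, product):
    """Return past decisions for the same jurisdiction and product."""
    return [d for d in history
            if d["jurisdiction"] == jurisdiction and d["product"] == product]

def flag_deviation(history, new_decision, threshold=0.8):
    """Flag the new decision if it contradicts a strong historical consensus."""
    peers = comparable(history, new_decision["jurisdiction"], new_decision["product"])
    if not peers:
        return False, peers  # no precedent: nothing to deviate from
    counts = Counter(d["outcome"] for d in peers)
    majority_outcome, majority_n = counts.most_common(1)[0]
    consensus = majority_n / len(peers)
    deviates = consensus >= threshold and new_decision["outcome"] != majority_outcome
    return deviates, peers

history = [
    {"jurisdiction": "UK", "product": "repo", "outcome": "approved"},
    {"jurisdiction": "UK", "product": "repo", "outcome": "approved"},
    {"jurisdiction": "UK", "product": "repo", "outcome": "approved"},
    {"jurisdiction": "US", "product": "repo", "outcome": "declined"},
]
deviates, peers = flag_deviation(
    history, {"jurisdiction": "UK", "product": "repo", "outcome": "declined"}
)
# deviates is True: three comparable UK repo approvals form a consensus,
# and the flagged decision would surface those peers as context for a human.
```

Note that the flag is decision support, not a verdict: the deviation may be entirely justified, and the comparable decisions simply give the approver and reviewers the relevant history.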

Risks and Constraints

The use of AI in risk management introduces several constraints.

First, data quality and completeness remain critical. Fragmented or inconsistent inputs limit reliability.

Second, large language models (LLMs) can hallucinate and generate inaccurate outputs. Although models are improving, these limitations need to be managed through a model risk management framework, specific validation requirements and human oversight.

Third, institutions must ensure that AI outputs are explainable, auditable and aligned with existing control frameworks.

For these reasons, AI should be viewed as a decision-support tool, rather than a decision-maker.

Responsible AI: Controls and Guardrails

Effective adoption of AI in risk management requires a robust control framework. This includes model risk management, clear agent registration and inventory, and strong observability of how systems operate and evolve.

Identity and access controls are crucial to ensure appropriate use, alongside monitoring frameworks to detect anomalies or unintended behavior.

These guardrails should ensure that AI operates within established governance structures, reinforcing rather than weakening existing risk controls.
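A minimal sketch of how registration, inventory and observability might fit together is shown below: an agent registry that records ownership and purpose, restricts each agent to permitted actions, and keeps an append-only log of what it did. Names and structure are assumptions for illustration, not a description of any production system.

```python
# Illustrative sketch only: an agent registry with identity checks and an
# append-only action log. All names here are assumptions.
from datetime import datetime, timezone

class AgentRegistry:
    """Track which AI agents exist, who owns them, and what they may do."""

    def __init__(self):
        self._agents = {}   # agent_id -> metadata (the inventory)
        self._log = []      # append-only record of agent actions (observability)

    def register(self, agent_id, owner, purpose, permitted_actions):
        self._agents[agent_id] = {
            "owner": owner,
            "purpose": purpose,
            "permitted_actions": set(permitted_actions),
        }

    def record_action(self, agent_id, action):
        """Log an action; raise if the agent is unknown or unauthorized."""
        meta = self._agents.get(agent_id)
        if meta is None:
            raise PermissionError(f"unregistered agent: {agent_id}")
        if action not in meta["permitted_actions"]:
            raise PermissionError(f"{agent_id} not permitted to {action}")
        self._log.append((datetime.now(timezone.utc), agent_id, action))

registry = AgentRegistry()
registry.register("credit-summarizer-01", owner="Risk Tech",
                  purpose="summarize credit approval packs",
                  permitted_actions=["summarize"])
registry.record_action("credit-summarizer-01", "summarize")  # permitted, so logged
# registry.record_action("credit-summarizer-01", "approve") would raise PermissionError
```

The key design choice is that the registry denies by default: an agent can only act within the scope it was registered for, and every action leaves a timestamped trail that monitoring can inspect for anomalies.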

Regulators would likely require periodic reporting and updates on consequential trade approvals and credit decisions made by experienced risk managers with the support of agentic AI systems.

