
AI in Finance: How JPMorgan, Goldman Sachs Are Deploying LLMs for Risk Management

Wall Street's largest banks are moving beyond experimental AI projects to full-scale LLM deployments for risk management, fraud detection, and regulatory compliance. JPMorgan's IndexGPT and Goldman Sachs' internal GS AI platform are processing millions of transactions daily with far fewer false positives than the rule-based systems they augment.

Laura Kim · Jan 15, 2026 · 10 min read

TL;DR

Major Wall Street banks have moved from AI experimentation to full-scale production deployments. JPMorgan's IndexGPT platform now processes risk assessments for $2 trillion in daily transactions. Goldman Sachs' internal AI system has reduced false positive rates in fraud detection by 60%. Across the industry, LLM-based systems are becoming essential infrastructure for regulatory compliance, credit risk analysis, and real-time market surveillance.

What Happened

JPMorgan Chase, the largest U.S. bank by assets, revealed that its IndexGPT platform — initially developed for investment research — has expanded into a comprehensive risk management system processing over $2 trillion in daily transactions. The system uses a custom fine-tuned LLM trained on decades of market data, regulatory filings, and internal risk assessments to identify potential risks in real time, flagging suspicious patterns that traditional rule-based systems miss.

Goldman Sachs has deployed what it calls "GS AI," an internal platform built on a customized version of an open-source model, fine-tuned on 30 years of proprietary trading, risk, and compliance data. The platform serves 15,000 employees across trading, risk management, and compliance divisions. Its fraud detection module alone has reduced false positive rates by 60% while catching 25% more genuine fraud cases — a dramatic improvement that saves the bank an estimated $200 million annually in operational costs.

Morgan Stanley, Bank of America, and Citigroup have made similar moves. The industry-wide shift was catalyzed by updated guidance from the OCC (Office of the Comptroller of the Currency) that for the first time explicitly acknowledged AI/ML models as acceptable components of bank risk management frameworks, provided they meet certain transparency and validation requirements.

Why It Matters

The financial sector's embrace of LLMs for risk management addresses several critical pain points. Traditional rule-based systems generate enormous numbers of false positives — sometimes 95% or more of flagged transactions turn out to be legitimate. This wastes investigator time and creates alert fatigue. LLM-based systems dramatically improve signal-to-noise ratios by understanding context and nuance in ways that rule-based systems cannot.
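The pattern described above — a legacy rule engine raising alerts, with a context-aware model deciding which alerts merit investigation — can be sketched in a few lines. Everything here is illustrative: `contextual_risk_score` is a hypothetical stand-in for a call to a fine-tuned LLM, and the scoring heuristic and threshold are invented for the example, not drawn from any bank's system.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    country: str
    memo: str
    rule_flags: list[str]  # alerts raised by the legacy rule engine

def contextual_risk_score(txn: Transaction) -> float:
    """Hypothetical stand-in for an LLM call: a real system would send the
    transaction's full context (history, counterparty, memo text) to a
    fine-tuned model and parse a risk score from its response."""
    score = 0.1 * len(txn.rule_flags)
    if "wire" in txn.memo.lower() and txn.amount > 50_000:
        score += 0.5
    return min(score, 1.0)

def triage(txn: Transaction, threshold: float = 0.6) -> str:
    """Suppress rule-engine alerts the contextual model judges benign,
    passing only high-scoring cases to human investigators."""
    if not txn.rule_flags:
        return "clear"
    return "investigate" if contextual_risk_score(txn) >= threshold else "auto-close"
```

The key design point is that the rules still run first — the model only re-scores what they flag, which is how a contextual second pass can cut false positives without widening what gets examined.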

For regulatory compliance, LLMs can analyze thousands of pages of regulatory text, interpret how new rules apply to specific business activities, and automatically update compliance procedures — work that previously required armies of compliance officers and external counsel. Bank of America estimates that its AI compliance system saved 2 million person-hours in 2025.
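One common way to connect new regulatory text to affected internal procedures is retrieval by embedding similarity: embed the rule, embed each procedure description, and surface the closest matches for review. The sketch below substitutes a toy bag-of-words vector for a real embedding model, so only the shape of the technique is shown; the function names and the `min_sim` cutoff are invented for illustration.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; a production system would use a
    domain-tuned text-embedding model instead of word counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def affected_procedures(rule_text: str, procedures: dict[str, str],
                        min_sim: float = 0.2) -> list[str]:
    """Return internal procedures whose descriptions are most similar
    to a newly published regulatory rule."""
    rule_vec = embed(rule_text)
    return [name for name, desc in procedures.items()
            if cosine(rule_vec, embed(desc)) >= min_sim]
```

In practice the retrieved procedures would then be handed to an LLM to draft the actual compliance updates, with the retrieval step keeping the model's context focused on the relevant documents.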

Technical Details

Technical approaches vary but share common patterns:

  • Domain-Specific Fine-Tuning — All major banks are training custom models on proprietary data. JPMorgan's model was fine-tuned on 500 billion tokens of financial data including SEC filings, earnings transcripts, market data feeds, and internal risk reports.
  • Real-Time Streaming Integration — These systems integrate with real-time transaction feeds, processing millions of events per second through lightweight inference endpoints while routing complex cases to more powerful models for deep analysis.
  • Explainability Requirements — Regulatory requirements mandate that AI risk decisions be explainable. Banks use attention visualization, feature attribution, and natural language explanation generation to satisfy audit requirements.
  • Human-in-the-Loop Governance — All systems operate under strict governance frameworks where AI recommendations above certain thresholds require human approval. The median automation rate is 70% for routine decisions, dropping to 20% for high-value or unusual transactions.
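The second and fourth patterns above — cheap scoring on every event, deeper analysis only for escalated cases, and a human gate on the final decision — compose into a simple tiered pipeline. This is a minimal sketch under assumed thresholds; `fast_score` and `deep_score` are deterministic stand-ins for the lightweight and heavyweight model endpoints, and none of the numbers come from the banks' actual systems.

```python
def fast_score(event: dict) -> float:
    """Stage 1: lightweight inference run on every event.
    Heuristic stand-in for a small, low-latency model."""
    return 0.9 if event.get("amount", 0) > 100_000 else 0.05

def deep_score(event: dict) -> float:
    """Stage 2: larger model, invoked only for escalated events."""
    return 0.8 if "sanctioned" in event.get("counterparty", "") else 0.3

def process(event: dict, escalate_at: float = 0.5,
            review_at: float = 0.5) -> str:
    """Route an event through the two-stage pipeline with a human gate."""
    if fast_score(event) < escalate_at:
        return "auto_clear"            # routine path: no deep model call
    if deep_score(event) >= review_at:
        return "human_review"          # risky and unusual: human decides
    return "auto_clear"                # escalated but judged benign
```

Because most traffic exits at stage 1, the expensive model only sees the small fraction of events the cheap one cannot clear — which is what makes millions of events per second tractable.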

What's Next

The next frontier is cross-institutional AI for systemic risk monitoring. The Federal Reserve is piloting a program where anonymized AI insights from multiple banks are aggregated to detect emerging systemic risks — essentially using AI to create an early warning system for the financial system as a whole. Additionally, central banks worldwide are exploring how AI can improve monetary policy analysis and financial stability assessments.

