These risks are drawn from published evidence and regulatory guidance specific to financial services. Each is pre-scored on a 5×5 likelihood × impact matrix in the Risk Register tool and referenced in the generated policy.
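The scoring above can be sketched as a matrix lookup rather than a pure likelihood × impact product, since the ratings shown below (Likelihood 3 · Impact 5 is Critical while Likelihood 4 · Impact 4 is High) imply that impact dominates the banding. The cell values here are illustrative assumptions, not the Risk Register tool's actual configuration, beyond being consistent with the entries in this listing:

```python
# Severity band per cell of a 5x5 likelihood x impact matrix.
# Rows: likelihood 1-5; columns: impact 1-5. Cell values are assumed,
# chosen only to agree with the pre-scored risks shown in this listing.
SEVERITY = [
    # impact:  1          2          3          4           5
    ["Low",    "Low",     "Low",     "Medium",  "Medium"],    # likelihood 1
    ["Low",    "Low",     "Medium",  "Medium",  "High"],      # likelihood 2
    ["Low",    "Medium",  "Medium",  "High",    "Critical"],  # likelihood 3
    ["Medium", "Medium",  "High",    "High",    "Critical"],  # likelihood 4
    ["Medium", "High",    "High",    "Critical","Critical"],  # likelihood 5
]

def severity(likelihood: int, impact: int) -> str:
    """Look up the severity band for a 1-5 likelihood and 1-5 impact rating."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must be integers from 1 to 5")
    return SEVERITY[likelihood - 1][impact - 1]

# Example: the first risk below, Likelihood 4 x Impact 5
print(severity(4, 5))  # Critical
```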
Critical · Likelihood 4 · Impact 5
Algorithmic Discrimination Producing Unlawful Disparate Impact in Credit and Insurance
AI credit scoring, mortgage underwriting, and insurance pricing models trained on historical financial data encode and amplify past discriminatory practices, producing systematically less favourable outcomes for racial and ethnic minority applicants, women, older consumers, and residents of historically redlined geographies, in violation of fair lending and equal opportunity law even when protected characteristics are not explicit model inputs.
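One common first-pass screen for the disparate impact described above is the adverse impact ratio (the "four-fifths rule" used in US fair-lending and employment analysis): the approval rate for a protected group divided by the approval rate for the reference group, with values below 0.8 flagged for investigation. This is a monitoring heuristic, not the legal test itself, and the counts below are made up for illustration:

```python
def adverse_impact_ratio(approved_protected: int, total_protected: int,
                         approved_reference: int, total_reference: int) -> float:
    """Ratio of the protected group's approval rate to the reference group's."""
    rate_protected = approved_protected / total_protected
    rate_reference = approved_reference / total_reference
    return rate_protected / rate_reference

# Hypothetical monitoring window: 500 applications per group
air = adverse_impact_ratio(300, 500, 450, 500)  # 0.60 / 0.90
print(round(air, 3))  # 0.667 -> below the 0.8 four-fifths screen, flag for review
```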
Critical · Likelihood 3 · Impact 5
AI Model Failure in Credit, Risk, or Trading Systems Causing Material Financial Loss
A material AI model used in credit risk, market risk, trading, or fraud detection produces significantly erroneous outputs due to model drift, data quality failure, out-of-distribution market conditions, or adversarial manipulation, resulting in large unexpected credit losses, trading positions outside risk appetite, material fraud losses, or incorrect regulatory capital calculations that are not detected until substantial harm has occurred.
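Model drift of the kind described above is often detected by comparing the model's current input or score distribution against the distribution at development time, for example with the population stability index (PSI). A minimal sketch, using assumed bin proportions and the conventional rules of thumb that PSI below 0.1 indicates stability and above 0.25 indicates significant drift:

```python
import math

def population_stability_index(expected: list[float], actual: list[float]) -> float:
    """PSI between two binned distributions (each a list of proportions summing to 1)."""
    psi = 0.0
    for e, a in zip(expected, actual):
        e = max(e, 1e-6)  # guard against log(0) for empty bins
        a = max(a, 1e-6)
        psi += (a - e) * math.log(a / e)
    return psi

# Hypothetical credit-score bins: development-time vs. current month
development = [0.10, 0.20, 0.40, 0.20, 0.10]
current     = [0.05, 0.15, 0.35, 0.25, 0.20]
psi = population_stability_index(development, current)
# PSI here falls in the 0.1-0.25 "investigate" band: the score
# distribution has shifted toward the riskier bins.
```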
Critical · Likelihood 3 · Impact 5
AI Algorithmic Trading Amplifying Market Volatility or Contributing to Flash Events
AI-driven trading algorithms interacting in a shared market microstructure produce emergent, unintended collective behaviour — including feedback loops, liquidity withdrawal cascades, or correlated position unwinding — that amplifies market volatility, triggers circuit breakers, or contributes to a flash crash causing widespread investor losses and attracting regulatory scrutiny over market integrity.
High · Likelihood 4 · Impact 4
Adverse Action Explanation Failure for AI-Driven Credit and Financial Decisions
An AI credit or insurance decisioning system produces a denial or adverse outcome but cannot generate the specific, principal reasons required by ECOA (Regulation B), GDPR Article 22, and equivalent UK and EU law — either because the model is insufficiently interpretable or because the vendor lacks explanation capability — exposing the institution to regulatory enforcement, class action litigation, and remediation costs.
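For an interpretable model such as a linear scorecard, the "specific, principal reasons" above can be derived by ranking the features that pushed an applicant's score furthest below a baseline. This is a simplified sketch with hypothetical feature names and weights, not a compliant Regulation B implementation; opaque models need purpose-built explanation tooling:

```python
def principal_reasons(weights: dict, applicant: dict, baseline: dict, top_k: int = 3) -> list:
    """Return the top_k features with the most negative score contribution
    relative to a baseline applicant, for a linear scorecard."""
    contributions = {
        name: weights[name] * (applicant[name] - baseline[name])
        for name in weights
    }
    negative = sorted(
        (item for item in contributions.items() if item[1] < 0),
        key=lambda item: item[1],  # most negative first
    )
    return [name for name, _ in negative[:top_k]]

# Hypothetical scorecard: negative weights penalise, positive reward
weights   = {"utilization": -2.0, "delinquencies": -5.0, "tenure_years": 1.5}
applicant = {"utilization": 0.9,  "delinquencies": 2,    "tenure_years": 1}
baseline  = {"utilization": 0.3,  "delinquencies": 0,    "tenure_years": 6}
print(principal_reasons(weights, applicant, baseline))
# -> ['delinquencies', 'tenure_years', 'utilization']
```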
High · Likelihood 4 · Impact 4
AI-Enabled Financial Fraud, Deepfake Authentication Bypass, and Social Engineering at Scale
Adversaries deploy AI-generated synthetic voice, video, or text to bypass identity verification systems, impersonate executives for business email compromise, conduct AI-powered social engineering at scale, or generate deepfake documentation to defeat KYC and AML controls, causing direct financial losses, fraud liability, and regulatory sanctions for inadequate financial crime prevention systems.
Critical · Likelihood 3 · Impact 5
AI Vendor Concentration Creating Systemic Financial Sector Vulnerability
Widespread adoption of a small number of shared AI platforms — for credit scoring, fraud detection, AML monitoring, or trading — across the financial sector creates systemic risk where a vendor outage, model error, or security compromise simultaneously impairs multiple institutions, with correlated AI model behaviour potentially amplifying sector-wide credit or market stress during adverse economic conditions.
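Vendor concentration of the kind described above can be quantified with a Herfindahl-Hirschman Index (HHI) over vendors' shares of sector deployments, borrowing the antitrust convention that an HHI above 2,500 (on 0-100 percentage shares) indicates high concentration. The shares below are hypothetical:

```python
def herfindahl_index(market_shares: list[float]) -> float:
    """HHI: sum of squared percentage shares (0-100 scale).
    By antitrust convention, > 2500 is treated as highly concentrated."""
    return sum(share ** 2 for share in market_shares)

# Hypothetical vendor shares of sector-wide AML-monitoring deployments
shares = [45, 30, 15, 10]
hhi = herfindahl_index(shares)
print(hhi)  # 3250.0 -> highly concentrated: a single vendor failure
            #  would impair nearly half the sector's deployments
```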