Drawn from published evidence and regulatory guidance specific to all sectors. Each is pre-scored on a 5×5 likelihood × impact matrix in the Risk Register tool and referenced in the generated policy.
CriticalLikelihood 5 · Impact 5
Algorithmic Bias and Discriminatory Outcomes Across Protected Characteristic Groups
AI systems trained on historical or unrepresentative data produce systematically different outputs across gender, racial, ethnic, age, disability, and socioeconomic groups — either by direct reliance on protected attributes or through proxy variables — resulting in disparate treatment, indirect discrimination, and legal exposure under equality and anti-discrimination law in the operating jurisdiction.
CriticalLikelihood 5 · Impact 4
Generative AI Hallucination and Confabulation in Consequential Decision Workflows
Large-language-model and multimodal-generative AI tools produce plausible-sounding but factually incorrect output — fabricated citations, invented statistics, non-existent sources, hallucinated product or legal information — that is acted upon by staff or customers without independent verification, resulting in financial loss, regulatory exposure, reputational harm, and third-party liability.
CriticalLikelihood 5 · Impact 5
Personal Data Leakage Through AI Training, Inference, or Telemetry
Personal data is exposed to AI systems without a valid lawful basis — either through staff pasting personal or confidential information into public AI chatbots, through vendor telemetry and logging arrangements not covered by a Data Processing Agreement, through model-memorisation enabling adversarial extraction of training data, or through cross-border inference endpoints lacking a valid transfer mechanism — triggering GDPR, national privacy-law, and contractual breach consequences.
HighLikelihood 4 · Impact 4
Automation Bias and Unsafe Over-Reliance on AI Outputs
Staff accept AI recommendations without applying independent judgment — particularly in high-volume workflows where AI is used for triage, prioritisation, or draft generation — resulting in systematic failures to catch AI errors that a vigilant human would have identified, and in erosion of the human-in-the-loop safeguards on which regulatory and contractual commitments depend.
HighLikelihood 4 · Impact 4
Third-Party AI Vendor Concentration and Supply-Chain Risk
Critical AI capabilities are concentrated in a small number of foundation-model providers, cloud-inference endpoints, and AI-service vendors — creating systemic exposure to correlated failure, vendor outage, unilateral pricing or terms changes, model-behaviour changes breaking downstream integrations, and vendor insolvency or sanctions-regime changes that can make a mission-critical AI service abruptly unavailable.
HighLikelihood 4 · Impact 4
Prompt Injection, Data Poisoning, and Adversarial Attacks on Production AI
AI systems face attack vectors unique to AI — direct and indirect prompt injection, adversarial inputs, model-extraction through API probing, training-data poisoning via compromised third-party datasets, and supply-chain attacks on model components and adapters — that can cause the AI system to leak confidential information, execute unauthorised actions through tool-use or agentic integrations, or produce attacker-controlled outputs.