These risk scenarios are drawn from published evidence and regulatory guidance specific to technology and SaaS. Each is pre-scored on a 5×5 likelihood × impact matrix in the Risk Register tool and referenced in the generated policy.
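The 5×5 scoring can be sketched as a small function. The banding cut-offs below are an assumption inferred from the entries in this list (impact-5 scenarios band as Critical even at moderate likelihood), not the Risk Register tool's documented rules.

```python
# Illustrative banding for a 5x5 likelihood x impact matrix.
# Cut-offs are assumptions for this sketch, not the tool's actual rules.

def risk_band(likelihood: int, impact: int) -> str:
    """Map 1-5 likelihood and 1-5 impact ratings to a severity band."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("ratings must be integers from 1 to 5")
    score = likelihood * impact
    if impact == 5 or score >= 20:   # maximum-impact scenarios escalate
        return "Critical"
    if score >= 10:
        return "High"
    if score >= 5:
        return "Medium"
    return "Low"
```

For example, Likelihood 3 × Impact 5 bands as Critical, while Likelihood 4 × Impact 4 bands as High, matching the entries below.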
Critical · Likelihood 3 · Impact 5
Provider Liability Cascade from GPAI Model Misclassification or Systemic-Risk Threshold Trigger
A foundation or general-purpose AI model operator underestimates its training compute, inadequately documents its capability profile, or fails to detect that it has crossed the EU AI Act systemic-risk threshold (currently 10^25 FLOPs), resulting in late notification to the AI Office, missed model-evaluation and adversarial-testing obligations, and retroactive regulatory exposure. Because GPAI obligations flow downstream to every deployer of the model, a misclassification cascades into downstream non-compliance across the entire customer base, generating coordinated enforcement risk and large-scale contractual breach exposure across enterprise contracts.
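Whether a training run approaches the 10^25 FLOP presumption can be sanity-checked with the widely used approximation that dense transformer training costs roughly 6 × parameters × training tokens. The sketch below is a back-of-envelope check only; the example model sizes are assumptions for illustration, and a real classification exercise would use measured compute.

```python
# Back-of-envelope check against the EU AI Act systemic-risk compute
# presumption, using the common ~6 * params * tokens approximation for
# dense transformer training. Example figures are illustrative only.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute for a dense transformer."""
    return 6.0 * n_params * n_tokens

def crosses_threshold(n_params: float, n_tokens: float) -> bool:
    return training_flops(n_params, n_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# A hypothetical 70B-parameter model on 15T tokens:
# 6 * 7e10 * 1.5e13 = 6.3e24 FLOPs -- under the threshold, but within 2x,
# which is exactly the regime where careful compute accounting matters.
```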
Critical · Likelihood 4 · Impact 5
Training-Data IP and Copyright Exposure from Undisclosed Pipeline Components
Training datasets incorporate copyrighted material, personal data, or content scraped in violation of platform terms — including through third-party dataset suppliers whose provenance claims are unreliable — without adequate licensing, honouring of rights-holder opt-outs, or the training-data transparency required by Articles 53(1)(c) and (d) of the EU AI Act. Discovery during litigation or regulatory inquiry exposes the organisation to copyright infringement claims, GDPR Article 5 violation findings, algorithmic-disgorgement remedies, and contractual indemnity triggers on enterprise agreements.
Critical · Likelihood 5 · Impact 5
Prompt Injection, Model-Extraction, and Training-Data Poisoning of Production AI Systems
Adversaries exploit AI-specific attack vectors — direct and indirect prompt injection, adversarial inputs, model-weight extraction through API probing, membership-inference attacks, and supply-chain poisoning of third-party model components and datasets — to cause the AI system to leak confidential system instructions or customer data, execute unauthorised actions through tool-use or agentic integrations, produce attacker-controlled outputs, or enable reconstruction of training data. Attacks compound in agentic AI systems with tool access, where a successful prompt injection can escalate to full system compromise.
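One layer of defence against indirect prompt injection is screening untrusted retrieved content for instruction-like phrases before it reaches a tool-using agent. The sketch below is a naive illustration with assumed pattern lists and function names; pattern matching alone is easily bypassed, so real deployments pair it with privilege separation, output mediation, and human approval for consequential actions.

```python
import re

# Naive illustrative screen for indirect prompt injection in retrieved
# documents. Patterns are assumptions for this sketch, not a vetted list;
# this is one weak layer, not a defence on its own.

INJECTION_PATTERNS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"you are now",
    r"system prompt",
    r"reveal (your|the) (instructions|prompt)",
]

def looks_injected(untrusted_text: str) -> bool:
    """Return True if the text contains instruction-like phrasing."""
    lowered = untrusted_text.lower()
    return any(re.search(p, lowered) for p in INJECTION_PATTERNS)
```

Flagged content would then be quarantined or passed to the model only with its privileges stripped (no tool calls), rather than silently dropped.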
Critical · Likelihood 4 · Impact 5
Cross-Border Data Transfer and Export-Control Breach from Model Weights or Inference Endpoints
Model weights, training data, or inference API access are made available across jurisdictions — through open-weight publication, partner integration, cloud-region expansion, or employee relocation — without completing export-control classification against US EAR advanced-computing controls, the AI Diffusion Framework, EU dual-use controls, or PRC data-outbound security assessment. The transfer crystallises a licensable export, a cross-border personal-data transfer lacking a valid mechanism, or a regulatory filing breach, exposing the organisation to criminal penalties (EAR), administrative fines (GDPR), or operational prohibition (CAC).
High · Likelihood 4 · Impact 4
Downstream Deployer Misuse or Off-Label Use of a Provided AI System
Enterprise customers deploy a provided AI system outside the intended use documented by the provider — for example, a summarisation model deployed for consequential decision-making, a general-purpose chatbot deployed as a clinical-triage system, or a vision model deployed for employment screening — without the technical controls, evaluation data, or compliance documentation needed for that downstream use. The provider is drawn into the resulting enforcement action under EU AI Act Article 25 (re-purposing doctrine) or equivalent national rules as a substantial-modification provider or co-responsible party.
Critical · Likelihood 5 · Impact 4
Model Hallucination or Confabulation in High-Stakes Deployer Workflows Causing Third-Party Harm
A generative-AI product produces plausible-sounding but factually incorrect output — fabricated citations, invented product specifications, hallucinated legal or clinical guidance, non-existent software APIs — that is acted upon by a deployer's end customer, causing financial loss, regulatory exposure, safety harm, or reputational injury to third parties. Product-liability exposure under the revised EU Product Liability Directive (transposition deadline December 2026) extends to software and AI, materially increasing class-action and joint-and-several liability risk for the AI provider.