These risks are drawn from published evidence and regulatory guidance specific to legal and professional services. Each is pre-scored on a 5×5 likelihood × impact matrix in the Risk Register tool and referenced in the generated policy.
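For readers who want to mirror this scoring in their own tooling, here is a minimal sketch of how entries on a 5×5 likelihood × impact matrix can be scored and banded. The Risk Register tool's actual banding rules are not published here, so the numeric cut-offs (and the Medium and Low bands) in this sketch are assumptions, chosen only so that the output reproduces the Critical and High bands shown in this list.

```python
# Illustrative sketch only. The threshold values and the Medium/Low bands are
# assumptions; they are calibrated to match the scores in this list
# (4x5=20 and 3x5=15 -> Critical, 3x4=12 and 4x3=12 -> High).
from dataclasses import dataclass


@dataclass
class Risk:
    title: str
    likelihood: int  # 1-5 on the matrix
    impact: int      # 1-5 on the matrix

    @property
    def score(self) -> int:
        # Standard likelihood x impact product for a 5x5 matrix
        return self.likelihood * self.impact

    @property
    def band(self) -> str:
        # Assumed cut-offs, consistent with the bands shown in this list
        if self.score >= 15:
            return "Critical"
        if self.score >= 10:
            return "High"
        if self.score >= 5:
            return "Medium"
        return "Low"


# Two entries taken from this list, used as a smoke test of the banding
risks = [
    Risk("AI hallucinated legal citations", likelihood=4, impact=5),
    Risk("AI contract review failure", likelihood=3, impact=4),
]
for r in risks:
    print(f"{r.band}: {r.title} (Likelihood {r.likelihood} · Impact {r.impact})")
```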
Critical · Likelihood 4 · Impact 5
AI Hallucinated Legal Citations Submitted to Courts Causing Sanctions and Reputational Harm
Legal professionals relying on AI legal research tools submit court documents containing fabricated case citations, non-existent statutes, or invented legal propositions that large language models generate in plausible, authoritative-sounding form, without adequate independent verification — as documented in Mata v. Avianca (SDNY 2023) and multiple subsequent incidents — resulting in court sanctions, bar disciplinary referrals, client harm, and severe reputational damage to the responsible attorneys and their firms.
Critical · Likelihood 4 · Impact 5
Client Confidentiality Breach Through AI Tool Data Processing
A legal professional submits privileged client communications, confidential instructions, transaction documents, or litigation strategy to a cloud-based AI tool whose terms of service permit use of submitted content for model training, whose data handling creates cross-client data exposure, or whose security practices are insufficient to protect against breach — resulting in inadvertent waiver of privilege, breach of confidentiality duty, regulatory sanction, and client loss.
Critical · Likelihood 3 · Impact 5
AI-Generated Legal Advice Without Adequate Professional Supervision Causing Client Harm
AI tools used in legal intake, client-facing chatbots, document drafting, or automated legal advice services produce substantively incorrect, incomplete, or jurisdiction-inappropriate legal guidance that clients act upon without the professional identifying the error — causing financial loss, missed limitation periods, invalid legal documents, or regulatory non-compliance that would not have occurred with appropriately supervised professional advice.
Critical · Likelihood 3 · Impact 5
Predictive Legal Analytics Bias Encoding Systemic Discrimination in Legal Outcomes
AI tools used to predict case outcomes, sentencing ranges, parole decisions, bail risk, litigation settlement value, or judicial behaviour are trained on historical legal data that encodes systemic racial, socioeconomic, and gender biases in the justice system — producing predictions that perpetuate those biases when relied upon by practitioners, insurers, and courts making decisions that affect individuals' fundamental rights and liberties.
High · Likelihood 3 · Impact 4
AI Contract Review Failure to Identify Material Legal Risks
AI contract review and due diligence tools used without adequate professional oversight miss material contractual risks, adverse terms, missing provisions, jurisdiction-specific enforceability issues, or regulatory non-compliance in commercial or financing documents — resulting in clients entering transactions with unidentified legal exposures, triggering professional negligence claims against the supervising lawyer or firm.
High · Likelihood 4 · Impact 3
AI Perpetuating Inequitable Access to Legal Services and Justice
AI legal tools that are accurate and reliable primarily for English-language, common-law, commercially sophisticated legal matters — reflecting their training data composition — perform significantly worse for non-English-language matters, civil-law jurisdictions, legally aided clients, immigration and asylum cases, and criminal defence contexts, widening the already substantial access-to-justice gap between well-resourced and under-resourced parties in legal proceedings.