The risks below are drawn from published evidence and regulatory guidance specific to HR and recruitment. Each is pre-scored on a 5×5 likelihood × impact matrix in the Risk Register tool and referenced in the generated policy.
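The 5×5 pre-scoring can be sketched as a simple banding function. The thresholds and the impact-5 escalation rule below are illustrative assumptions for this sketch, not the Risk Register tool's actual logic:

```python
# Minimal sketch of a 5x5 likelihood x impact scoring function.
# Band thresholds and the impact-5 escalation are assumptions, not the tool's rules.
def severity(likelihood: int, impact: int) -> str:
    """Map 1-5 likelihood and impact scores to a severity band."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must be between 1 and 5")
    score = likelihood * impact
    if impact == 5 and likelihood >= 3:
        # Assumed rule: plausible, maximum-impact risks escalate to Critical
        return "Critical"
    if score >= 12:
        return "High"
    if score >= 6:
        return "Medium"
    return "Low"

print(severity(4, 5))  # Critical
print(severity(4, 4))  # High
```

Under these assumed thresholds, the scores shown in the register (4 · 5 and 3 · 5 as Critical, 4 · 4 as High) band consistently.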
Critical · Likelihood 4 · Impact 5
AI Resume Screening Perpetuating Historical Hiring Discrimination at Scale
AI applicant screening systems trained on historical hiring decisions encode and replicate past discriminatory selection practices, systematically downranking candidates with minority ethnic names, employment gaps associated with caregiving responsibilities, or backgrounds from non-elite educational institutions. The result is disparate impact against multiple protected groups across thousands of applications before HR oversight identifies the discrimination pattern.
Critical · Likelihood 3 · Impact 5
AI Video Interview Analysis Using Prohibited or Pseudo-Scientific Assessment Proxies
AI video interview tools that purport to assess candidate suitability through facial expression analysis, micro-expression detection, voice tone evaluation, or eye movement patterns rely on pseudo-scientific methodology and produce racially and disability-correlated assessment disparities. Since 2 February 2025, they constitute prohibited emotion recognition in EU employment contexts under EU AI Act Article 5, exposing employers to regulatory enforcement and discrimination claims while providing no validated predictive value for job performance.
Critical · Likelihood 3 · Impact 5
Automated Workforce Reduction Selection Without Meaningful Human Oversight
AI systems used to identify employees for redundancy, performance management, or role elimination — including AI productivity scoring, attendance monitoring, and performance ranking — produce selection outputs that disproportionately affect protected characteristic groups and are implemented without adequate human review of individual circumstances, creating wrongful dismissal liability, discrimination claims, and EU AI Act Article 14 human oversight violations for high-risk employment AI.
High · Likelihood 4 · Impact 4
Employee Monitoring AI Causing Psychological Harm and Constituting Unlawful Surveillance
Pervasive AI employee surveillance systems — including keystroke logging, continuous screen monitoring, webcam surveillance during remote work, location tracking, productivity scoring, and sentiment analysis of communications — cause documented psychological harm including anxiety, stress, and burnout, reduce retention, and may constitute unlawful processing under GDPR where monitoring is disproportionate, insufficiently disclosed, or lacks a valid legal basis for each specific monitoring activity.
High · Likelihood 4 · Impact 4
AI Compensation Tools Encoding and Amplifying Gender and Racial Pay Gaps
AI salary benchmarking, pay band recommendation, and compensation equity analysis tools trained on market pay data that reflects historical gender, race, and disability pay discrimination produce algorithmic pay recommendations that perpetuate those gaps under the guise of market-rate objectivity, obscuring pay equity violations and creating legal exposure under equal pay legislation while providing employers false assurance that AI-determined compensation is non-discriminatory.
High · Likelihood 4 · Impact 4
AI Assessment Tool Failure to Accommodate Candidates and Employees with Disabilities
AI cognitive aptitude tests, personality assessments, gamified evaluations, and timed psychometric tools used in hiring and performance management fail to provide reasonable adjustments for candidates with dyslexia, ADHD, autism spectrum conditions, visual impairments, or motor disabilities that affect AI-assessed performance but do not affect job capability — systematically screening out qualified disabled candidates in violation of the ADA, the UK Equality Act 2010, and EU disability equality law.