These risks are drawn from published evidence and regulatory guidance specific to education and EdTech. Each is pre-scored on a 5×5 likelihood × impact matrix in the Risk Register tool and referenced in the generated policy.
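The pre-scoring above can be sketched as a small function. The severity bands below are illustrative assumptions chosen only to reproduce the ratings shown in this list (impact-5 risks at likelihood ≥ 3 come out Critical, impact-4 risks come out High); they are not the tool's documented thresholds.

```python
def severity(likelihood: int, impact: int) -> str:
    """Map a 1-5 likelihood and 1-5 impact to an assumed severity band."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must each be 1-5")
    # Band boundaries are assumptions inferred from the entries in this list.
    if impact == 5 and likelihood >= 3:
        return "Critical"
    if impact >= 4 and likelihood >= 3:
        return "High"
    if likelihood * impact >= 6:
        return "Medium"
    return "Low"

print(severity(4, 5))  # "Critical" (matches the proctoring-bias entry)
print(severity(5, 4))  # "High"     (matches the AI-content-generation entry)
```

Note that a plain likelihood × impact product would not reproduce these ratings (5 × 4 and 4 × 5 both give 20 but carry different badges here), which is why the sketch weights impact separately.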
Critical · Likelihood 4 · Impact 5
AI Exam Proctoring Producing Biased False Suspicion Flags Against Protected Groups
AI remote exam proctoring systems — which use facial recognition, gaze tracking, and behavioural anomaly detection to flag potential cheating — produce systematically higher false-positive suspicion rates for students with darker skin tones, students with disabilities affecting gaze and motor behaviour, students in non-Western home environments, and students using assistive technology, resulting in discriminatory academic integrity allegations, unwarranted grade penalties, and psychological harm to disadvantaged student groups.
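A disparity of this kind can be surfaced with a simple per-group audit: among students verified not to have cheated, compare the fraction flagged in each group. This is a minimal sketch with hypothetical data and function names, not any vendor's actual method.

```python
from collections import defaultdict

def false_positive_rates(records):
    """records: iterable of (group, flagged) pairs for students verified
    innocent. Returns {group: fraction of that group flagged as suspicious}."""
    flagged = defaultdict(int)
    total = defaultdict(int)
    for group, was_flagged in records:
        total[group] += 1
        flagged[group] += bool(was_flagged)
    return {g: flagged[g] / total[g] for g in total}

# Hypothetical audit sample: group labels and whether each innocent
# student was flagged by the proctoring system.
audit = [("A", True), ("A", False), ("A", False), ("A", False),
         ("B", True), ("B", True), ("B", False), ("B", False)]
rates = false_positive_rates(audit)
print(rates)  # group B flagged at twice the rate of group A
```

A large ratio between groups (here 0.5 vs 0.25) is the kind of signal that should trigger review before any integrity allegation proceeds.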
Critical · Likelihood 4 · Impact 5
AI Adaptive Learning Systems Widening Achievement Gaps Rather Than Closing Them
AI adaptive learning platforms that personalise content difficulty, pacing, and learning pathways based on student performance data systematically assign lower-level content and lower academic trajectory pathways to students from disadvantaged backgrounds, students with disabilities, and students from minority groups — reinforcing rather than challenging lower performance expectations, and perpetuating educational inequity through algorithmic tracking that replicates and amplifies the effects of historical educational underinvestment.
Critical · Likelihood 3 · Impact 5
Commercial Exploitation of Student Data Through EdTech AI Platforms
EdTech vendors with access to student personal data — including behavioural profiles, learning performance, assessment responses, and engagement patterns — use AI to build detailed student profiles that are sold or licensed to third parties, used for behavioural advertising targeting minors, or used to train AI models deployed commercially beyond the educational context, in violation of FERPA, COPPA, GDPR, and student data protection laws, causing harm to students whose educational data is exploited without their meaningful knowledge or consent.
High · Likelihood 3 · Impact 4
AI Emotion Recognition and Surveillance in Classrooms Violating Student Dignity and Privacy
AI systems deployed in physical or virtual classrooms that monitor student facial expressions, body language, gaze direction, or vocal patterns to infer attention, engagement, emotional state, or cognitive load constitute prohibited emotion recognition in educational settings under EU AI Act Article 5, violate student privacy and dignity, disproportionately affect disabled and neurodivergent students whose physical behaviour differs from normative AI training data, and create chilling effects on natural student behaviour and expression in educational environments.
Critical · Likelihood 3 · Impact 5
AI Admissions and Placement Decisions Perpetuating Educational Inequity
AI university admissions scoring, school placement, gifted programme selection, and vocational tracking systems trained on historical admission and outcome data reproduce and amplify existing socioeconomic, racial, and gender disparities in educational access — systematically under-scoring applicants from state schools, non-English-speaking families, first-generation students, and minority ethnic groups relative to equally capable applicants from more privileged educational backgrounds.
High · Likelihood 5 · Impact 4
AI Content Generation Enabling Undetected Academic Dishonesty at Scale
Widespread availability of generative AI tools enables students to submit AI-generated assignments, essays, and assessments as their own work at a scale and quality that current AI detection tools cannot reliably identify, undermining the validity of educational credentials, creating unfair advantages for students with greater AI access and proficiency, and requiring institutional assessment redesign at a pace and cost that many educational institutions are ill-equipped to manage.