These are faithful structural previews of what each live tool produces — real section headings, real table shapes, real drafting style. The specific content in your generated document will reflect the jurisdiction, industry, staff size, and risk appetite you provide.
These samples are illustrative only — they are not generated documents and not legal advice. Every generated document includes its own review notes and disclaimers.
A 10-section AI usage policy, with citations to the regulations that apply in your jurisdiction and industry.
AI Usage Policy
Draft — for review by in-house practitioners
Table of Contents
Staff may use approved generative AI tools to assist with drafting, summarisation, and research. Staff must not input customer personal data, commercial terms, or confidential product plans into any AI tool that has not been listed in Schedule A — Approved AI Tools. Where an AI output materially influences a customer-facing decision, a named human reviewer…
(Sample extract — full document runs across all ten sections.)
A pre-scored register of AI risks mapped to your sector, with likelihood, impact, mitigations, and owners.
Sheet 1 — Risk Register
| ID | Risk | Likelihood | Impact | Rating |
|---|---|---|---|---|
| R-01 | Unintended disclosure of personal data to third-party model APIs | High | High | Critical |
| R-02 | Material model error in customer-facing decisioning flow | Medium | High | High |
| R-03 | Bias in resume-screening output producing disparate impact | Medium | Medium | Medium |
| R-04 | Vendor AI changes alter model behaviour without advance notice | High | Medium | High |
| R-05 | Regulatory obligation (EU AI Act Art. 16) missed on high-risk system | Low | High | High |
(Sample extract — the live register includes mitigation, owner, review-date, and residual-risk columns across 12–15 rows.)
Plain-language staff guidelines with golden rules, data-handling tiers, and an escalation process.
Employee AI Guidelines
Golden rules
01. If the tool is not on your organisation's approved list, assume anything you type leaves the organisation. When in doubt, check with your manager first.
02. AI output can contain errors, outdated information, or fabricated citations. You remain responsible for accuracy — the AI is a drafting aid, not a source of truth.
03. If an AI tool produces output that looks harmful, discriminatory, or materially wrong, stop using it for that task and report it to the named escalation owner within 24 hours.
(Sample extract — the full document contains 8–10 golden rules, a four-tier data classification guide, a printable wallet card, and an incident-reporting flow.)
Pick the tool that fits your next compliance milestone. Each generation is a one-time payment — no subscription, no account.