Every AI automation we implement comes with built-in governance — documented, auditable, and aligned with your regulatory obligations.
UK AI regulation is evolving. The FCA, FRC, Charity Commission, and ICO all expect documented, controlled use of AI.
Boards and committees need assurance that AI adoption is responsible, transparent, and doesn't expose the organisation to undue risk.
Without proper documentation, AI-assisted processes create audit risk. Auditors need to see what AI did, who reviewed it, and how it was evidenced.
Built into every engagement — not an add-on.
All AI processing is performed via Anthropic Claude under a commercial Data Processing Agreement. Client data is never used to train AI models. Anonymised templates are used for development and testing, and international data transfers are covered by UK GDPR-compliant safeguards. We have reviewed the EU AI Act obligations coming into full effect from August 2026: the automations we build and deploy sit outside the high-risk categories, because they assist your finance team's judgement rather than replacing it.
We review your existing AI policy or create one from scratch, covering approved tools, data classification rules, review protocols, escalation procedures, and incident reporting.
One-page process note for every automation: what data goes in, the logic applied, what comes out, who reviews it, and how the audit trail is maintained.
Every AI output is reviewed by a qualified person before posting to the ledger. Parallel running during implementation. Formal sign-off protocols for every automated process.
Finance Committee and Risk Committee briefing notes, adapted to your governance structure. Trustees receive clear, jargon-free updates on what AI is doing and how it is controlled.
Quarterly effectiveness reviews, annual policy review, continuous regulatory monitoring, and incident reporting procedures. Governance evolves as regulations and your use of AI mature.
Every package includes governance as standard. This is a core differentiator — not an upsell.
Half-day diagnostic
Core automation package
Full month-end automation
Ongoing partnership
The regulatory landscape for AI in UK finance is evolving rapidly. Here are the key frameworks we monitor and align to.
Five cross-sectoral AI principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. A pro-innovation approach with sector-led regulation.
Consumer Duty requirements and SM&CR accountability for AI-driven decisions. Expectations on firms to demonstrate responsible AI use.
AI in audit guidance (June 2025). Expectations for documentation, human oversight, and quality management when AI supports financial reporting.
Trustee duties around prudent resource management and acting in the charity's best interests. Guidance on responsible use of new technology in the sector.
UK GDPR compliance, Data Protection Impact Assessments for high-risk processing, and guidance on AI and automated decision-making.
Professional accountability for AI-assisted work. Guidance on maintaining scepticism, recognising bias, and documenting AI involvement.
Technology provisions in the international ethics code for accountants. Principles for responsible use of technology in professional services.
75% of UK financial services firms are already using AI. Only 1 in 20 charities feel “extremely well prepared” to manage the risks.
Book a free 30-minute discovery call. We'll discuss your governance requirements and how our framework fits your organisation.