AI Use Policy

1. Purpose of this policy

This policy explains how we use artificial intelligence tools responsibly, transparently, and in line with our legal and ethical obligations. It is designed to:

  • Be open about how AI is used in our work

  • Protect personal, confidential, and sensitive data

  • Comply with UK GDPR in a proportionate, practical way

  • Acknowledge risks including bias and environmental impact

  • Keep humans accountable for all decisions and outputs

This policy applies to all AI tools used within the business, whether free or paid.

2. What we mean by AI

For the purposes of this policy, AI tools include:

  • Generative tools that produce text, images, audio, or video

  • Assistive tools used for summarising, drafting, analysing, or organising information

  • Automation features within software that support workflow.

AI is used as a support tool, not as a decision-maker.

3. How we use AI

AI is used to assist, not replace, professional judgement and expertise. We may use AI to:

  • Draft or summarise content for further human review

  • Support research, idea generation, or clarity checking

  • Improve accessibility, efficiency, or sustainability of our work

  • Assist with administrative or low-risk tasks.

We do not use AI to:

  • Make decisions about individuals

  • Replace professional advice or lived experience

  • Produce final outputs without human review

  • Bypass legal, ethical, or contractual responsibilities

4. Human oversight and accountability

A human is always responsible for:

  • Reviewing and approving AI-assisted outputs

  • Checking accuracy, tone, and context

  • Identifying and addressing bias or exclusion

  • Making final decisions.

AI outputs are treated as drafts or suggestions. Accountability always sits with the individual user, not the tool.

5. Bias, fairness, and inclusion

We recognise that AI systems:

  • Reflect the data they are trained on

  • Can reproduce or amplify bias and exclusion

  • May perform unevenly across different groups

To manage this risk, we:

  • Critically review AI-assisted content

  • Avoid using AI in high-risk or sensitive contexts

  • Apply an accessibility and inclusion lens to outputs

  • Revise or discard content where bias is identified

6. Data protection and UK GDPR compliance

Our use of AI aligns with our obligations under the UK General Data Protection Regulation.

Lawfulness, fairness, and transparency:

We are open about if and how AI is used. AI is not used in unexpected or misleading ways. This policy supports our transparency obligations.

Purpose limitation:

AI is only used for clear, legitimate purposes. Personal data is not reused or repurposed through AI. AI is not used for profiling or evaluating individuals.

Data minimisation:

We do not enter personal data into AI tools that are not UK GDPR compliant. We do not input special category data into AI systems.

Accuracy:

AI outputs are reviewed by a human before use. We do not rely on AI as a source of factual truth. Errors are corrected promptly.

Storage limitation:

AI tools are not used as data storage systems. We avoid retaining AI outputs that include personal data. Data retention follows our existing policies; please see our data protection policy here.

Security:

We take reasonable steps to use reputable tools. We avoid sharing confidential information with AI systems. AI use does not replace existing security measures.

7. Environmental impact

We recognise that AI systems have environmental costs, including energy use. Our approach is to:

  • Use AI intentionally, not automatically

  • Avoid unnecessary or excessive use

  • Balance efficiency gains against environmental impact

  • Use AI only where it meaningfully improves outcomes or accessibility.

8. Transparency and disclosure

Where appropriate, we are open about AI assistance. We may include statements such as:

  • “Created with the help of AI and reviewed by a human”

  • “AI-assisted drafting, human-edited and approved”

Transparency is applied proportionately, with particular care for public-facing or influential work.

9. What AI software we use

We use a small number of AI-enabled software tools to support our work. Where possible, these tools are used selectively and purposefully, rather than as default systems. We recognise that some tools we do not use specifically for their AI functions may include AI as part of the software package. We do not rely on a single platform, and we regularly review the tools we use as technology, risk, and best practice evolve. Tools currently in use:

10. AI as a reasonable adjustment

In some cases, AI may be used as a reasonable adjustment, for example to:

  • Support fatigue or energy management

  • Improve accessibility of communication

  • Reduce cognitive or administrative load

When used this way, the purpose is inclusion and equity, not advantage. Accountability and quality standards remain the same.

11. Client choice and AI use

We recognise that clients may have different comfort levels with the use of AI. Our approach is that:

  • Clients are welcome to ask how AI may be used in their work

  • AI will never be used in a way that breaches confidentiality or contractual terms

  • Clients can request that AI is not used on a specific piece of work or project. 

Where a client expresses a preference not to use AI, this will be respected wherever reasonably possible, and alternative approaches will be discussed transparently. We believe that AI is a support tool and not a requirement for our work; however, where AI would otherwise be used as a reasonable adjustment, any added cost of matching the technology's output may be passed on to the client to ensure accessibility.

12. How we choose the AI tools we use

We are selective about the AI tools we use and do not adopt technology simply because it is new or popular. When choosing AI tools, we consider:

  • Whether the tool is appropriate for a small, low-risk business context

  • How data is handled, stored, and processed at a high level

  • Whether the tool allows us to maintain human oversight and accountability

  • The clarity of the tool’s limitations and risks

  • Whether the benefits outweigh potential ethical, environmental, or accessibility concerns

We regularly review the tools we use and will stop using them if they no longer meet these principles.

13. How AI is used in our work

The way AI is used varies depending on the type of work.

Writing and content development

AI may be used to:

  • Support early drafting or structuring

  • Improve clarity or accessibility of language.

  • Sense-check tone or readability

AI is not used to:

  • Replace original thinking, lived experience, or professional judgement.

All written work is reviewed, edited, and approved by a human before delivery.

Research and insight work

AI may be used to:

  • Organise or summarise notes or transcripts

  • Support early pattern-spotting in large volumes of material

AI is not used to:

  • Interpret findings independently

  • Replace qualitative judgement or participant voice

  • Generate conclusions or recommendations without human oversight

Strategy and planning support

AI may be used to:

  • Support idea generation or scenario exploration

  • Summarise background information or options

  • Sense-check structure or logic

AI is not used to:

  • Make strategic decisions

  • Replace professional judgement or contextual understanding

Administrative and operational tasks

AI may be used to:

  • Draft internal notes or task lists

  • Support scheduling or workflow organisation

  • Reduce administrative burden

AI is not used to:

  • Make decisions affecting individuals

  • Process sensitive or personal data.

14. Review and updates

This policy is a living document and will be reviewed to reflect:

  • Changes in technology

  • Legal or regulatory developments

  • Learning from practice

Last reviewed: 31.03.2026 Next review: 30.11.2026

This policy was drafted using our free AI policy template, helping small and solo business owners to feel empowered in using AI through an ethical, access-first lens.