Make Sense Of It · Our Future AI Training

AI in 2026

What has changed, what it means for Our Future, and what to expect from the sessions ahead.

Pre-session briefing · March 2026
Part 1 of 4

Significant shifts

What has changed since early 2025 and why it matters for how organisations work.

Where the sector is

Most charities are still at the very beginning

Despite the significant shifts, most charities are experimenting with individual tools without a strategic view of where AI actually moves the needle. The organisations seeing results are those that are clear about which problems to aim it at.

New to AI → Experimenter → Established → Advanced → Leader

Being here - investing in understanding this before it becomes urgent - already puts Our Future ahead of most of the sector.

What's new

Five things that changed since 2025

Reasoning has improved significantly.
Models can now work through multi-step problems, analyse complex documents, and catch their own errors.
Agents are becoming real.
AI can take sequences of actions on its own: browsing, searching, drafting and sending outputs, not just answering a single question.
Working with your own data is much easier.
Point AI at documents, spreadsheets, or community feedback and get meaningful analysis back quickly.
Tools are embedded in your workflows.
AI is now built into Microsoft 365 and Google Workspace - several of you are already using Copilot this way.
Prototyping is now hours, not weeks.
Going from a rough idea to something testable in an afternoon is now realistic for a small team.
The pace

Capability is doubling roughly every seven months

Research from METR tracks how long AI can work independently on real tasks. GPT-4 could handle tasks taking a few seconds. Current models handle tasks taking an hour or more. That gap has opened up in under two years.

[Chart: task duration AI can handle independently, rising from seconds in 2021 to 8+ hours by 2026. Source: METR]
Signals from outside

Outside the charity sector, results are already dramatic

360,000 hrs
saved per year - JPMorgan redesigned a single process (contract review)
12 wks → 10 min
Novo Nordisk redesigned one regulatory reporting workflow
3 hrs → 15 min
Average AI-assisted task speedup across 1 million real conversations (Anthropic Economic Index)
1 in 4
Organisations that properly committed are already seeing measurable gains (Exponential View, 2026)
Usage research

How organisations are actually using AI

Anthropic's Economic Index analysed millions of real workplace conversations. The pattern that emerges is consistent.

12x
speedup on tasks that would take ~3 hours, done in ~15 minutes with AI
43 min
saved per person per day in the NHS Copilot trial across 30,000 workers

Most people use AI as a thinking partner and adviser rather than to complete tasks outright. The more judgement and context someone brings, the more value they get back. AI amplifies the expertise already in the room.

How it's being used

Tasks, not roles - and mostly augmentation

Anthropic's Economic Index, based on how people actually use AI across the economy, found that AI gets adopted selectively for different tasks, not uniformly across whole jobs.

57%
Augmentation: AI helping people do things better
43%
Automation: AI doing things directly

Both are useful. The right balance depends on the work.

The other side

Not everyone is convinced

~75%
of AI researchers believe AI will benefit people
<25%
of the general public agree

Your beneficiaries and communities are part of that public, not the expert group. Their scepticism is not unfounded.

There is also a significant gap within organisations: senior leaders report saving several hours a week; most frontline staff report saving far less, or nothing.

Part 2 of 4

What the tools can do now

The different kinds of AI tool available today, and a framework for thinking about where each type fits in your work.

The tool landscape

Five kinds of AI tool

Assistants
You talk to them, they respond. ChatGPT, Claude, Gemini. The starting point for most people.
Copilots
AI built into tools you already use. Copilot in Word and Outlook, Gemini in Google Docs. Lowest friction, often already available.
Agents
AI that takes actions on your behalf, not just generates text. Can browse, send, update records, work across multiple steps. Cowork sits here, as does Copilot when it's doing more than drafting.
Builders
Using AI to design and build: websites, apps, dashboards, branded assets. Tools like Cursor and Lovable let you create working prototypes without writing code yourself - increasingly accessible to non-technical teams.
Automation
Connecting systems and triggering actions, with AI making judgement calls along the way. Zapier with AI, or custom workflows.

Worth knowing: most AI tools are now multimodal - they can work with images, documents, audio and video, not just text. This opens up possibilities beyond what you might have tried so far.

Strategic adoption

What strategic adoption looks like

AI handles
AI completes the task. You verify the output.
Processing, extraction, reformatting
AI assists
You set the direction. AI drafts. You review and refine.
Analysis, drafting, planning
Fully yours
Work that needs your judgement, relationships, and expertise.
Strategy, voice, decision-making

Most work sits somewhere in the middle. We will use this framework in Session 1.

Part 3 of 4

What has not changed

Some risks have grown as the tools have become more capable. These are the things to keep front of mind.

Staying grounded

The things that remain as true as ever

AI still makes things up.
Models can generate confident-sounding content that is factually wrong. Always verify anything that matters.
Governance still matters.
Organisations need clear policies about what AI should and shouldn't do on their behalf, and accountability structures that are well understood.
Agentic risks are different.
When AI is taking actions, errors can compound across multiple steps and systems.
Accountability sits with you.
The ICO published guidance in January 2026 making clear organisations remain fully responsible for data protection compliance, even when AI acts on their behalf.
Data ethics still matter.
Information shared by community members was shared in a context of trust. Using it with AI raises questions beyond legal compliance: would the people who shared it feel comfortable? We explore this in Session 2.
Context is everything.
The quality of what you get out depends on the quality of what you put in.
Part 4 of 4

What other organisations are doing

Real examples from charities finding value in operations, impact measurement, and community reach.

Sector practice - operations (1 of 2)

Reducing the documentation burden

Citizens Advice - SORT

AI live-transcribes client calls and drafts case notes automatically. Across 2,000 advice sessions, it halved average write-up time while maintaining quality scores. A second tool, ConvoCoach, uses AI personas to simulate stressed or frustrated clients for adviser training.

Kingston Council

AI-powered case note assistants for social workers. By automating administrative notes, they returned several hours a week per worker for direct client contact. The AI handled the documentation burden; the humans got more time for the work that matters.

Sector practice - operations (2 of 2)

Insight from data and self-service knowledge

London Funders

Meeting transcription through Fathom saved the most time of all their AI experiments. They also used AI to analyse 130+ grantee survey comments - identifying themes across the full dataset, rather than only the handful of responses that happened to catch someone's eye.

RSPCA

A chatbot embedded in Google Chat helps frontline staff access over 4,500 internal documents. Early estimates: 160,000 staff hours saved per year through self-service answers, instead of waiting for a subject expert to become available.

Sector practice - impact and community (1 of 2)

Creating evidence and surfacing insight

Noise Solution CIC / Transceve

Young people record short reflection videos after music sessions. AI analyses these for wellbeing indicators and personal development markers - turning natural conversation into structured evidence where none existed before. The approach spun out into a separate organisation helping other charities do the same.

Masonic Charitable Foundation

Used AI to analyse monitoring reports across their grant portfolio. It surfaced patterns that would have taken weeks to identify manually - and found connections across programme areas that staff hadn't been able to see at that scale before.

Sector practice - impact and community (2 of 2)

Reaching communities and building trust

GRCC - Gloucestershire Rural Community Council

Built a smart dashboard collating community-level data from multiple sources. Their key finding: "even with cutting-edge tools, trust is the foundational currency." Community partners engaged more when they saw themselves as co-owners of the data, not just sources of it.

Limbic Access

An AI chatbot used by 45% of NHS Talking Therapies services for mental health self-referral. Referrals from non-binary people increased by 179%, from Asian patients by 39%, from Black patients by 40%. 40% of referrals now come outside working hours. The gain was reach, not just efficiency.

Before Session 1

What to bring

You have already done the most useful preparation by completing the survey. But it is worth a few minutes thinking about: