Having served as a Local Councillor from 2018, I gained firsthand experience in the intricacies of local decision-making and the operational challenges faced by authorities. Later, in Change Management, I worked to enhance business functions and processes. When AI emerged as a topic of serious public discourse in 2022, its potential to reshape public services became clear. With these experiences in mind, I followed yesterday’s government announcement with interest and would like to reflect on its immediate implications here.
Yesterday, Sir Keir Starmer set out a blueprint to ‘turbocharge’ AI in the UK. In unveiling details of the government’s AI Opportunities Action Plan, the Prime Minister promised that AI would transform lives, making public services more streamlined and more efficient.
But AI is not a panacea, nor is it an existential threat—at least not yet. It is a set of tools that can help public services streamline routine processes. Like the World Wide Web and smartphones before it, this gradual streamlining will, over time, lead to radical changes in how companies and the public sector conduct their daily operations.
When we talk about AI in 2025, we’re generally referring to Large Language Models (LLMs), like ChatGPT and Google Gemini. These systems process vast amounts of data and generate outputs that approximate or combine what they’ve “learned”—essentially creating a synthesis of all the information they’ve been trained on about a topic.
Since ChatGPT and other LLMs sparked public interest in AI, every major corporation has sought to reassure investors by unveiling their own AI strategy—even if some of these have proven to be half-baked. Behind this rush to roll out AI tools to consumers lies a more gradual, less glamorous process that is transforming business operations. In the longer term, this transformation will create new industries and reshape existing ones, much as having internet access in our pockets revolutionised business and communication.
As this transition occurs, it is critical to provide workers with the skills they need to take advantage of these opportunities, promoting innovation and contributing to fair and decent work.
Yesterday, the Prime Minister promised that AI could revolutionise public service. What does this look like in practice?
Governance
Local authorities handle tremendous amounts of complex data. LLMs could be deployed quickly to assist with transcribing meetings, writing minutes, and tracking actions. Local officials spend an inordinate amount of time writing briefings and reports—many of which are read by only a handful of people. LLMs could streamline these processes by creating templates, converting meeting transcripts into draft reports, or summarising lengthy papers into digestible outputs for overstretched public representatives.
Google NotebookLM allows users to upload their own source documents and have an LLM find patterns, compile briefings and draft Frequently Asked Questions with a click. Local officials could, for instance, upload every set of minutes from a particular committee to quickly create a searchable dataset.
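To make the "searchable dataset" idea concrete, here is a deliberately minimal sketch of indexing committee minutes for keyword search. The minutes, titles and helper functions are invented for illustration; a real deployment would use an LLM or proper search engine rather than this toy inverted index.

```python
# Toy sketch: a keyword-searchable index over committee minutes.
# All documents and names here are invented examples, not real council data.

from collections import defaultdict

minutes = {
    "2024-03 Planning Committee": "Application 24/0112 approved. Officer to draft conditions.",
    "2024-04 Planning Committee": "Tree preservation order discussed. Action: consult residents.",
    "2024-05 Finance Committee": "Budget variance noted. Action: report to full council.",
}

def tokenize(text):
    """Lowercase and strip simple punctuation from each word."""
    return [w.strip(".,:;()").lower() for w in text.split()]

def build_index(docs):
    """Map each token to the set of document titles containing it."""
    index = defaultdict(set)
    for title, text in docs.items():
        for word in tokenize(text):
            index[word].add(title)
    return index

def search(index, query):
    """Return titles of documents containing every query word."""
    words = tokenize(query)
    if not words:
        return set()
    results = index[words[0]].copy()
    for word in words[1:]:
        results &= index[word]
    return results

index = build_index(minutes)
print(sorted(search(index, "action")))
```

Even this crude approach lets an official ask "which meetings recorded an action?" across years of minutes in one query, which is the kind of drudgery the post argues LLM tools can absorb.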
Planning
Similarly, personalised chatbots could be quickly trained on historical planning data, local policies, and previous decisions to help consultants and the public interact with the planning process more intuitively. For instance, a chatbot could provide the first line of support for planning enquiries, referring complex issues up to an expert. These systems would need to maintain complete transparency, avoid hallucinations, and operate under human oversight. Initially, these bots should serve only as an additional interface, alongside existing portals, and must clearly cite the sources they draw on.
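The "first line of support, escalate when unsure" pattern can be sketched in a few lines. The policy snippets, the overlap scoring and the escalation threshold below are all invented for the example; a production system would use an LLM with retrieval over real policy documents, but the shape is the same: always cite a source, and hand off to a human when confidence is low.

```python
# Illustrative sketch of a first-line planning enquiry bot: match an
# enquiry against policy snippets, cite the source, and refer to a human
# officer when nothing matches well. Snippets and threshold are invented.

POLICY_SNIPPETS = {
    "Permitted development": "Single-storey rear extensions up to 4m may not need planning permission.",
    "Conservation areas": "Extra restrictions apply to demolition and alterations in conservation areas.",
}

def keyword_overlap(a, b):
    """Count words the two texts share, ignoring case and punctuation."""
    tokens_a = {w.strip(".,?!").lower() for w in a.split()}
    tokens_b = {w.strip(".,?!").lower() for w in b.split()}
    return len(tokens_a & tokens_b)

def answer_enquiry(enquiry):
    """Return (reply, cited_source); escalate if no snippet matches well."""
    best_title, best_score = None, 0
    for title, text in POLICY_SNIPPETS.items():
        score = keyword_overlap(enquiry, text)
        if score > best_score:
            best_title, best_score = title, score
    if best_score < 2:  # too little overlap: a human should handle this
        return ("Referred to a planning officer.", None)
    return (POLICY_SNIPPETS[best_title], best_title)
```

Note that the bot never answers without naming its source, and unfamiliar questions go straight to a person, which is exactly the transparency and oversight the paragraph above calls for.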
Casework
Government at every level, rightly, receives enquiries from the public. Social care, housing and freedom of information requests are three areas where LLMs could support human caseworkers. LLMs could be trained to sift through huge datasets and draft responses to Freedom of Information requests, again provided their sources are transparent and accessible. Similarly, housing and social care enquiries could potentially be automated. Tools like Magic Notes are already being trialled by Barnet, Kingston and Swindon councils to transcribe meetings, suggest actions and draft letters to GPs. But these systems need gradual implementation and human oversight if they are to inform life-changing decisions.
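One routine step a transcription tool automates is pulling action items out of a meeting record for a human to review. The sketch below is a hypothetical illustration of that step only; it is not how Magic Notes or any council system actually works, and the transcript is invented.

```python
# Toy sketch: extract flagged action items from a meeting transcript,
# leaving the final decision with a human reviewer. Invented example data.

import re

transcript = """\
Officer: The family have asked about respite care.
Manager: Action: arrange a carer's assessment by Friday.
Officer: Noted. Action: send the GP a summary letter.
"""

def extract_actions(text):
    """Return the text of each line flagged 'Action:', for human review."""
    return re.findall(r"Action:\s*(.+)", text)

print(extract_actions(transcript))
```

Keeping a person in the loop to approve each extracted action is the "gradual implementation and human oversight" the paragraph above argues for.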
Ethical considerations
Bias in AI systems is a well-documented risk, particularly in decision-making areas like social care or housing. These biases often stem from the data used to train models, which may reflect historical inequalities. Ensuring fairness requires robust oversight, transparent algorithms, and diverse datasets. Data security is another pressing concern; public trust hinges on safeguarding sensitive information from breaches or misuse. Additionally, while AI may streamline processes, it raises questions about its impact on employment. Automation will displace people, making it vital to invest in upskilling staff for new opportunities.
Disclaimer: The image for this blog was generated by an AI; an AI also proofread some of the content and helped me brainstorm initial ideas.