
OpenAI, the developer of ChatGPT, has signed a new agreement with the UK government aimed at increasing productivity in public services through the use of artificial intelligence (AI), officials have announced.
The partnership, formed between OpenAI and the Department for Science, Innovation and Technology, could allow the AI firm access to UK government data. It also paves the way for its technology to be integrated across several areas of the public sector, including education, defence, national security and the justice system.
Technology Secretary Peter Kyle described AI as “fundamental in driving change” across the UK and essential for “driving economic growth.” However, digital privacy advocates have criticised the agreement, saying it reflects what they call “this government’s credulous approach to big tech’s increasingly dodgy sales pitch.”
The document outlining the agreement states that the UK and OpenAI may collaborate on an “information sharing programme” and jointly develop “safeguards that protect the public and uphold democratic values.”
Additionally, the parties will explore investment in AI infrastructure—typically involving large-scale data centres, which are critical to running modern AI systems. OpenAI also plans to grow its presence in the UK by expanding its London office, which currently employs more than 100 people.
While the deal is a statement of intent rather than a binding contract, it sets a framework for collaboration between the tech firm and the UK government.
OpenAI CEO Sam Altman called the initiative one that could “deliver prosperity for all.” Dr Gordon Fletcher, associate dean for research and innovation at the University of Salford, echoed that optimism, suggesting the collaboration could free skilled civil servants to focus on complex, exceptional cases that AI may not be equipped to manage. But he cautioned that this would only be viable if done “transparently and ethically,” using a limited scope of public data.
Digital rights group Foxglove strongly criticised the agreement, describing it as “hopelessly vague.” Co-executive director Martha Dark raised concerns over the “treasure trove of public data” potentially available to OpenAI, warning that it would have “enormous commercial value” in training future versions of ChatGPT.
“Peter Kyle seems bizarrely determined to put the big tech fox in charge of the henhouse when it comes to UK sovereignty,” she said.
Kyle, who met with Altman for private dinners in March and April according to government transparency records, acknowledged in a podcast interview that the UK state struggles to match the scale of innovation seen in global tech companies.
The agreement arrives amid government efforts to stimulate an economy that is projected to have grown by only 0.1% to 0.2% between April and June.
Earlier this year, Prime Minister Keir Starmer introduced an “AI Opportunities Action Plan” to accelerate economic growth. The initiative gained support from several leading technology companies, although Tim Flagg, chief operating officer of UKAI—an association for British AI companies—criticised the plan for taking a “narrow view” by focusing heavily on large US firms.
The UK has already shown willingness to accept AI investment from overseas, having entered into similar arrangements with OpenAI competitors Google and Anthropic.
Officials say the new OpenAI agreement “could mean that world-changing AI tech is developed in the UK, driving discoveries that will deliver growth.” The government already uses OpenAI models in “Humphrey,” a suite of AI-powered tools intended to improve efficiency in the civil service.
Nevertheless, the Labour government’s strong embrace of AI has raised concerns from artists and musicians opposed to the unlicensed use of their work in training generative AI models.
Generative AI technologies, such as OpenAI’s ChatGPT, can produce text, visuals, music, and videos in response to user prompts. These capabilities are based on training data derived from books, images, music, and film footage—sparking debate over copyright violations and whether such data was obtained with permission.
In addition, generative AI has come under scrutiny for producing inaccurate or misleading outputs, including false information and poor advice.