The UK government has struck a partnership with Silicon Valley AI firm Anthropic to support the integration of artificial intelligence into public services.
The deal marks one of the first private partnerships for the UK’s Sovereign AI unit, formed following the release of the tech department’s AI Opportunities Action Plan in January.
The partnership will see Anthropic share insights into how AI can be used in the public sector and to support scientific breakthroughs.
“AI has the potential to transform how governments serve their citizens. We look forward to exploring how Anthropic’s AI assistant Claude could help UK government agencies enhance public services, with the goal of discovering new ways to make vital information and services more efficient and accessible to UK residents,” said Anthropic co-founder and CEO Dario Amodei.
Anthropic was founded in 2021 by former staff at ChatGPT owner OpenAI. In just four years, the company has raised more than $12bn in funding, the majority of which has come from Amazon with contributions from Google.
Last year Anthropic pushed for increased government use of its tools. In its home country, the company made its Claude AI models available in the AWS Marketplace for the US Intelligence Community (IC) and in AWS GovCloud.
The latest agreement was announced alongside a rebrand of the UK’s AI Safety Institute as the AI Security Institute.
According to Tech Secretary Peter Kyle, the name change represents the organisation’s shifting focus away from general questions around the safety and ethics of AI such as bias, towards issues concerning national security such as the technology’s potential use in creating chemical and biological weapons.
As part of the rebrand, the AI Security Institute will partner across government departments, including with the Ministry of Defence’s science and technology unit to assess security risks.
The Institute will also launch a criminal misuse team to work alongside the Home Office to research illegal uses of AI. An area of particular focus for the team is the use of AI in the creation of sexually explicit images of children, which the government aims to crack down on as part of the upcoming Crime and Policing Bill.
“The changes I’m announcing today represent the logical next step in how we approach responsible AI development – helping us to unleash AI and grow the economy as part of our Plan for Change,” said Kyle.
“The main job of any government is ensuring its citizens are safe and protected, and I’m confident the expertise our Institute will be able to bring to bear will ensure the UK is in a stronger position than ever to tackle the threat of those who would look to use this technology against us.”
The AI Security Institute was founded in 2023 as the AI Safety Institute, announced during former prime minister Rishi Sunak’s summit on AI safety in Bletchley Park. The Institute is chaired by Ian Hogarth.