Last year, as thousands of customers withdrew their deposits from Silicon Valley Bank in one of the biggest bank runs in US history, British venture capitalist Matt Clifford hit the phones.
He spent a frenzied weekend juggling calls with UK ministers, explaining that small British tech companies would be unable to cover their costs, and passing on messages from government to businesses to dispel panic. When HSBC stepped in to buy the failing bank, entrepreneurs thanked him.
Since then, the 39-year-old has gained increasing authority across government, advising successive Tory and Labour administrations on their tech policies.
But several media and tech executives told the Financial Times they felt the former McKinsey consultant — who set up a successful investment firm with offices around the world — had been given outsized influence over policy in the AI sector.
Sceptics argue that Clifford, who helped establish the UK’s AI Safety Institute and AI Safety Summit and has led the new government’s AI policy review, has allowed the government to become too narrowly focused on AI safety.
“He is not elected, and yet he has vast amounts of power to shape policy,” said one British tech executive.
Clifford — who studied history at Cambridge and computational statistics at MIT and co-founded venture capital firm Entrepreneur First at the age of 25 — is part of a clutch of influential government advisers who want the UK to take a leading role in AI safety.
But critics argue this has been detrimental to other parts of the British AI industry.
One chief executive said their biggest concern was that the narrow pool of advisers, including Clifford and AI Safety Institute chair Ian Hogarth, had not challenged the government enough on its focus on what they described as “absurdly distant” threats, such as AI producing chemical weapons.
Several tech experts and media executives have argued that this has allowed more pedestrian dangers in AI, including bias and copyright infringements, to be overlooked.
Damian Collins, the former tech minister, said Clifford was “clearly a hugely capable person but the balance of interests being represented and how they’re being represented is a concern”, referring in particular to the fierce debate around copyright protections for publishers and media producers.
Clifford has been an advocate for relaxing copyright restrictions to allow AI companies to mine data, text and images to train their algorithms.
Meanwhile, Hermann Hauser, co-founder of chipmaker Arm and venture capital firm Amadeus Capital, said that policy discussion in the UK “is so dominated by danger and security that the government is missing the main plot”: AI’s huge potential to advance economic productivity.
One tech executive stressed that the question of who has the ear of the government on formulating AI policy was critical. “The thing that’s at stake here is an irreversible one-time challenge: educating politicians as to what this technology can do.”
One former government official pointed out that in other policy areas — such as clean power and national infrastructure — ministers take advice from a wide range of voices to provide strategic guidance.
They said it was “extremely odd for a country of 70mn that there are so few people that are being called upon to advise on AI policy”, adding that “it’s very puzzling that [Clifford] was able to go from running the Safety Summit to basically crafting the government’s AI strategy”.
Still, allies in the tech community argue Clifford has found the right balance of supporting government but also speaking truth to power. They point out that many of his critics have their own vested interests, including those in the media sector.
Clifford’s involvement with government goes back to 2022, when he was chosen to chair Britain’s new Advanced Research and Invention Agency. He was then appointed by former prime minister Rishi Sunak in 2023 to build the UK’s task force on advanced AI.
He was called upon again later that year to lead preparatory work for the inaugural AI Safety Summit, where he was wheeled out in front of investors and politicians to explain the dangers of the nascent technology.
“Most of the tech community is very technical, they come across as very nerdy, politicians find that hard to understand,” said Dom Hallas, executive director of the Startup Coalition, who has worked with Clifford.
“Matt is warm and personable. What he understands is how to work out what is happening, find a solution and then give politicians the credit,” he added.
When Sunak launched the UK’s AI Safety Institute at the summit in November last year, Clifford was made a member of the advisory board.
After Labour’s election victory in July, new tech secretary Peter Kyle asked Clifford to compile an AI Opportunities Action Plan with recommendations on how to support the sector in the UK. His report is due to be published next week.
Following the summit last year, Clifford said he was excited about “the people who are going to build the technology that means that AI is safe, seen to be safe and therefore widely adopted”.
This has led some to argue that government policy is now disproportionately benefiting companies that operate in the AI safety market, including Pattern Labs and Advai.
Clifford is a small investor in Faculty Science, a company that has received more than £1mn in contracts from the government’s AI Safety Institute to test for things such as “jailbreaking” — prompts designed to coax AI chatbots into bypassing their guardrails.
He points out he owns just 0.02 per cent of Faculty, which represents less than 0.1 per cent of his assets, and had no hand in the awarding of that contract.
Clifford’s wide-ranging investment portfolio also includes medtech company Accurx, which has large NHS contracts.
Asked why he had taken on successive unpaid advisory roles, Clifford told the FT he was motivated by patriotism: “I think the UK can be the best place in the world for tech . . . and I believe we really need the private sector to engage in government.”
Clifford added that the ideas on AI safety he had espoused were “mainstream”, and that far from being an “AI doomer”, he had spent his career helping people build AI companies. People close to Clifford say his Action Plan, due next week, will demonstrate his more bullish view of the AI market’s potential, including an expansion of Britain’s compute capacity.
A government spokesperson said that when industry experts were brought in to support the government’s work, “they will often come with outside interests, which is why we have robust processes in place to manage them appropriately”.
“The AI Opportunities Action Plan will identify ways to accelerate the use of AI across the economy and Matt has engaged widely across AI start-ups, industry leaders, academia and civil society,” they said. They added that he had “no role in deciding government policy or awarding contracts and has been appointed to put forward ideas”.