New Delhi: David Lammy, Deputy Prime Minister of the UK, and former Prime Minister Rishi Sunak – on opposite sides of the political aisle – shared a stage late Thursday to talk about something they clearly see as bigger than politics — artificial intelligence (AI) and the future it will shape.

The symbolism mattered. Lammy has maintained policy continuity on AI, building on foundations laid under Sunak, including the first global AI summit and the UK’s 2023 AI White Paper.
Lammy described this as “the most extraordinary year of a relationship” between India and the UK — anchored by a new trade deal and an announcement that nine British universities are opening campuses in India. On talent, he was forthcoming: “Last month, the Chancellor of the Exchequer said she wants to attract top talent in the world for AI, and India is at the top of that list.”
Simplified visa rules, he suggested, are central to that ambition. The UK has already begun easing work visa pathways to attract top AI talent.
The New Delhi Frontier AI Impact commitments emerging from this summit centre on two themes aimed at making AI work effectively for diverse populations. First, gathering real-world AI usage insights, with privacy safeguards and anonymisation, to inform policymaking on jobs, skills and workforce development. Second, multilingual and contextual evaluations of AI systems across a range of languages and cultural contexts, with a particular focus on countries in the Global South.
Lammy was clear about India’s role in shaping that framing. “Very grateful to India for centering the Global South. Obviously, India has an outsized contribution to make, recognise them. The face of the average worker in the Global South is in a very rural community, and the huge opportunities there are in agriculture,” he said. “AI is making that difference.”
Lindy Cameron CB OBE, the British High Commissioner, reinforced that direction in a conversation with HT. She pointed to the India-UK Joint Centre for AI and the India-UK Connectivity and Innovation Centre as vehicles to build more inclusive AI ecosystems. At the summit, the UK also announced a partnership aimed at using AI to help doctors diagnose faster, teachers personalise learning, councils deliver services in minutes, and businesses create the next generation of jobs.
“You can take cutting edge tech and solve global challenges, but it has to be done securely, safely and responsibly, for massive global impact,” she said.
The Delhi summit is the fourth global AI summit since the first edition at Bletchley Park in 2023, which focused on catastrophic AI risks. The Seoul summit in 2024 broadened that to safety, innovation and inclusivity. France’s AI Action Summit in 2025 leaned into implementation and economic opportunity. India’s 2026 edition pushed the debate toward the “global AI divide” and democratisation of AI resources.
The fifth global AI summit is scheduled to be held in Geneva, Switzerland, next year.
Medicine and AI collaboration
Beyond geopolitics, the discussion turned to scientific convergence. Sunak highlighted the UK’s long-standing life sciences advantage — home to the Watson and Crick model that defined DNA’s three-dimensional double helix structure — and how that now intersects with AI. “With our strength in artificial intelligence, because these two things have come together, and we’re now doing drug discovery in a completely different way,” said Sunak, adding that as many as 95% of experiments do not succeed.
That convergence is no longer theoretical. Sir Demis Hassabis, CEO and co-founder of Google DeepMind, also co-founded the artificial intelligence firm Isomorphic Labs. The London-based AI drug discovery company announced this month a proprietary drug design engine that advances beyond AlphaFold 3 to model complex and previously “undruggable” targets, with improved accuracy, identification of novel binding pockets using amino acid sequences, and 2.3x higher performance than AlphaFold 3 in antibody–antigen structure prediction.
Sunak framed this opportunity in broader terms. “Much as we talk about economic growth with AI, actually being able to solve some of those really profound challenges we face, and improve our health and wellness, is the most exciting opportunity. When I look at India, with similar strength in healthcare and AI, it feels a pretty obvious way for us to collaborate.”
The signal is clear. AI is no longer just about chatbots and productivity tools; it is about compressing timelines in drug discovery, reducing failure rates, and translating computational power into public health outcomes. For two countries with deep research ecosystems and large patient populations, collaboration here is not aspirational — it is strategic.
The sovereignty question
The conversation also confronted the structural reality of AI geopolitics. Lammy acknowledged the dominant narrative of the US and China as twin poles in models, chips and compute infrastructure. Both India and the UK, he suggested, are navigating that landscape with an emphasis on sovereign capability.
The UK established a Sovereign AI Unit last year, aligned with its AI Opportunities Action Plan and backed by £500 million in funding. The objective is to develop local data, compute capacity and talent pipelines while working with frontier firms including Anthropic, Nvidia and OpenAI to ensure national capability.
Sunak reframed sovereignty through historical analogy.
“There’s a very strong historical lesson that we can all learn, and that’s because every time there’s been one of these big technological revolutions, it isn’t necessarily the country that invents the technology that is also the country that benefits the most from it,” he said, citing the printing press invented in Mainz in 1440, and how other European nations ultimately drew greater advantage due to regulatory and institutional conditions.
The implication is sharp. In AI, invention alone does not determine outcome.
The UK’s regulatory approach reflects that philosophy. It follows a sectoral model, where individual regulators shape AI frameworks within their domains rather than relying on a single central authority. The five principles outlined in the 2023 White Paper — safety, security and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress — form the backbone.
While the UK initially avoided bespoke AI legislation, momentum is building toward a statutory framework in 2026 to address frontier model risks. The Data (Use and Access) Act 2025 introduced requirements for reports on copyright in AI training — requiring detailed analysis of AI training data, specifically addressing licensing, transparency in model training, technical measures for copyright protection such as metadata-based identifiers, and the impact of AI developed outside the UK.
The AI race is one that both India and the UK appear determined not just to participate in and win, but also to play by their own rules.