Cutting Through the Noise: Contic’s CEO on DeepSeek and the New AI Landscape

AI/Machine learning

News

Written by

Adam Lyth

Date

14 days ago

Read time

7 minutes

I’m going to cut right to the chase: DeepSeek is more than just “the next AI model.” It’s a seismic jolt in a world where we’ve grown used to only a handful of massive players like OpenAI, Meta, and Anthropic duking it out with budget-shattering GPU clusters and proprietary code. A few months ago, I might have said it was impossible for a scrappy new entrant—especially one operating under Chinese export restrictions and less-than-top-shelf hardware—to produce something that rivals the big guns. Now I’m eating those words.

How did they do it?

What’s fascinating is that DeepSeek isn’t simply another generic model. They’ve come impressively close to state-of-the-art results, and they’ve done it in a way that’s cheaper and more transparent than I would have believed possible. Everyone always assumed you’d need an endless supply of high-end chips and mountains of cash to train a frontier model. Yet DeepSeek, by their own account, pulled off something that at least edges near the best US labs for a fraction of the cost. You can find people arguing about the exact training numbers—whether it’s six million dollars or more—and the timeline of their R&D spend versus the cost of a single training run, but the bottom line is that they did it for an order of magnitude less than what typical Western labs spend. That alone would be enough to turn heads.

Then there’s the matter of how they did it. If you’ve been following all these “scaling laws” discussions, you know that a big piece of the puzzle is simply doubling down on more GPUs, more data, and bigger everything. But what many folks outside the labs don’t always realise is that algorithmic breakthroughs and engineering efficiencies can dramatically shift your cost curves. It’s not purely about brute force. DeepSeek’s team pushed the envelope on “mixture of experts,” letting different parts of the model focus on different tasks, so not every part of the network has to light up for every token. They also introduced all sorts of memory optimisations that smaller research groups have been dreaming about, but rarely get the chance to apply at serious scale.
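To make the "not every part of the network lights up" idea concrete, here's a toy top-k gating sketch in Python. To be clear, this is my own illustration of the general mixture-of-experts routing concept, not DeepSeek's implementation — the function name, expert count, and dimensions are all invented for the example.

```python
import math
import random

def topk_router(token, experts, k=2):
    """Toy mixture-of-experts gate: score every expert against the token,
    keep only the top-k, and softmax-normalise the winners' weights.
    The other experts never run for this token, which is the whole point:
    most of the network stays dark on any given step."""
    scores = [sum(t * w for t, w in zip(token, e)) for e in experts]  # one score per expert
    top = sorted(range(len(scores)), key=lambda i: scores[i])[-k:]    # indices of the best k
    m = max(scores[i] for i in top)
    gate = [math.exp(scores[i] - m) for i in top]                     # stable softmax over winners
    total = sum(gate)
    return top, [g / total for g in gate]

random.seed(0)
experts = [[random.gauss(0, 1) for _ in range(16)] for _ in range(8)]  # 8 experts, 16-dim gates
token = [random.gauss(0, 1) for _ in range(16)]                        # one token's gating vector
chosen, weights = topk_router(token, experts, k=2)
print(chosen, weights)  # only 2 of the 8 experts fire for this token
```

In a real model the "experts" are full feed-forward sub-networks and the gate is learned, but the cost intuition is the same: with 2-of-8 routing, each token pays for a quarter of the expert compute.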

What really got me, though, was how they’ve embraced a second stage of training—chain-of-thought reinforcement learning—to give their R1 model more robust reasoning skills. Granted, OpenAI also does something similar with their so-called “o1” variant, but they’re famously tight-lipped about the internal workings. They don’t release the real chain-of-thought or share exactly how they reward the model for correct step-by-step solutions. DeepSeek has laid all those cards on the table: you can see the entire reasoning chain, you can see how they trained it, and you can adapt those methods for your own problem domains. Suddenly, smaller outfits—research labs, universities, specialised startups—can replicate or extend these techniques without begging a tech giant for an API key. We’re going from the walled gardens of AI to something closer to open fields, and that alone is revolutionary.

Of course, none of this is happening in a geopolitical vacuum. The US has been ratcheting up chip export controls to China on the grounds that advanced AI capability is a national security concern, which it absolutely is. The logic is that if you choke off access to the best chips, you keep your lead in building these “genius in a datacenter” models. But DeepSeek’s success complicates that narrative. They’ve proven that you don’t necessarily need the top-of-the-line H100s to develop a model that stands toe-to-toe with what Anthropic or OpenAI were doing a year ago. Maybe they got some gear through creative channels, maybe they leveraged previously unbanned GPUs, maybe they used a mixture of hardware that was partially restricted only after they’d already acquired it. The exact pipeline is murky, but the practical message remains: it’s extremely difficult to lock down AI progress at a global scale.

Some analysts will say this whole situation makes export controls more important than ever, because it’s the only lever left to prevent China from spinning up a million-GPU cluster and leaping past the West. Others point out that even if you ban the absolute top chips, it’s not too hard to smuggle in a few tens of thousands of them—and once you can do that, you’re a short jump away from building powerful next-generation models anyway. We’re dealing with something akin to an AI “arms race,” where the real challenge is that any cost-savings trick discovered by one lab will be discovered (or copied) by the others in short order. Maybe DeepSeek got there first this time, but the fundamental trends in algorithmic efficiency keep marching forward for everyone.

I’ve always thought that truly open AI would happen eventually—it’s just that for a while, it felt like we were drifting in the other direction. Now I’m starting to believe we’re witnessing a genuine pivot. You can read DeepSeek’s code and watch it think. You can pick apart the chain-of-thought to see exactly how it solves multi-step reasoning problems. You can re-train or fine-tune your own local version. It’s like the entire field is shifting from “only the big guys can do it” to “everyone with a decent cluster has a shot.” And while I get that it may not be great news for certain valuations, from an innovation standpoint, I can’t help but be thrilled.

At this point, I’m not sure we’ll see a total meltdown of the big AI incumbents. They’ve still got monstrous budgets, an ongoing stream of incremental improvements, and the ability to scale new models into the stratosphere. But any illusions of permanent Western dominance in LLM technology should probably be laid to rest. We might be heading for a bipolar world in which both the US and China wield extremely potent models, or perhaps a unipolar world where the US does manage to maintain an edge due to tighter export restrictions and bigger data centers. That gets us into the realm of national security strategy, which I’ll only say is above my pay grade. What’s within my wheelhouse is the excitement of seeing new methods so publicly shared. It means a research lab in London can replicate advanced reinforcement learning steps next week without forking over a king’s ransom. It means open-source enthusiasts who’ve been perfecting smaller LLM clones now have a blueprint for bridging that last mile of performance. It even means new questions about censorship, bias, and trust become urgent for more than just a handful of American or European tech firms. For better or worse, everyone is suddenly in the game.

I personally welcome that. AI doesn’t feel like the kind of technology that should be locked away behind corporate or national walls. Yes, it’s powerful, and yes, it can be a double-edged sword—but so can the internet, and we all know what wonders and horrors that’s unleashed. Openness at least ensures that people outside big industry or big government have a seat at the table, which fosters a healthier ecosystem of scrutiny, creativity, and (one hopes) progress for the greater good.

What does this mean for the UK?

Now, let’s talk about the UK angle. Earlier this year, the government released its latest AI Opportunities Action Plan, championing a vision where Britain stays at the cutting edge while also safeguarding public interests. On paper, it’s a roadmap for building up compute infrastructure (the so-called “sovereign AI resource”), adopting AI in public services, and attracting world-class AI talent with new scholarship programs. The tricky bit is that big breakthroughs aren’t always about the biggest GPU cluster anymore—DeepSeek proved that. So where does that leave us?

If the government’s plan pans out, we’ll see local compute capacity multiply, new “AI Growth Zones” get established, and a national Data Library open up—for the sake of open research and early-stage innovation. That’s precisely the kind of forward momentum that might leverage DeepSeek’s efficiency breakthroughs. Instead of playing catch-up in an arms race for the biggest data centre, the UK can pivot toward smarter ways to train these models, doubling down on the part of our national DNA that gave the world Turing and Lovelace. DeepSeek’s approach sets the tone: you don’t need an ocean of hardware; you just need the right algorithms, memory hacks, and training strategies.

One reason I find this exhilarating, as the CEO of Contic, is that it puts cutting-edge AI within reach for a broader range of UK organisations—be they SMEs, universities, or government agencies looking for a cost-effective way to harness AI in the public interest. This is exactly the “open fields” scenario that the UK government claims it wants to enable with its Action Plan. The question is: will the strategy actually make it feasible for dozens of British startups and labs to do what only a handful of super-funded companies managed before? Or will we end up funnelling resources into huge top-down projects that still lock out smaller innovators?

A new era for AI innovation

For now, I’m left marvelling at DeepSeek’s “R1” chain-of-thought. It’s genuinely fun to see the model walk itself through a math puzzle line by line, then nail the answer. It’s even more fun to realise that a year ago, that kind of multi-step reasoning was borderline mystical, exclusively the domain of hush-hush research labs with budgets in the billions. We’re in a new chapter. The genie is out of the bottle, the code is out in the wild, and the computing hardware is—despite sanctions—available in enough quantity to keep the ball rolling. Whether you find that exhilarating or terrifying likely depends on your vantage point, but for me, as CEO of Contic, who thrives on building the next big thing, it’s a shot of pure adrenaline. It feels like we just took a giant leap forward, or sideways, or maybe in every direction at once. And I, for one, can’t wait to see what tomorrow brings—especially now that the UK has set its own plan in motion.
