TechTalk
Solving AI not sufficient to solve everything else
Listen to leading voices in Silicon Valley, and you might come to believe that “solving intelligence” is sufficient to “solve everything else”. But as seductive as this claim about AI’s potential may be, it rests on a number of assumptions that do not withstand scrutiny
Carl Benedikt Frey, 22 Mar 2026

In the mid-1960s, the mathematician and Bletchley Park cryptanalyst I.J. Good proposed a thought experiment that has since become the secular gospel of Silicon Valley. If we were to build an “ultraintelligent machine”, he argued, it could then design even better machines, sparking an intelligence explosion that would leave human cognition far behind. The first such machine, therefore, would be “the last invention that man need ever make”.

Today, that prophecy, once the stuff of science fiction, has become the core objective of the world’s most powerful institutions. Google DeepMind’s Demis Hassabis, for example, speaks of “solving intelligence” in order to “solve everything else”. It is a seductive story. But even if we assume, for the sake of argument, that future systems can learn, experiment and generate genuinely novel solutions far beyond today’s models, the last-invention thesis still rests on multiple questionable assumptions.

The first is that innovation resembles a frictionless sprint from idea to impact. It does not. Rather, the discovery process is more like a chain, only as strong as its weakest link.

These weak links define much of human progress. In 1986, the space shuttle Challenger broke apart 73 seconds after launch, not because of a failure in its world-class engines or software, but because a small rubber seal stiffened and failed in the cold launch-day temperatures ( as the Nobel laureate physicist Richard Feynman memorably demonstrated at the hearings into the disaster ). The “O-ring” has since become a metaphor for the kinds of critical bottlenecks that can sink even the most sophisticated systems.

Discovery works the same way. Artificial general intelligence ( AGI ), generally understood as a model that can perform any cognitive task, may dramatically accelerate early-stage medical research, but if it cannot navigate clinical trials, manufacture at scale, or secure regulatory approval, the “breakthrough” never becomes an invention that improves lives. When the early stages of discovery are automated, the human role does not vanish; it simply migrates toward the remaining bottlenecks, where judgment, tacit knowledge and practical know-how are what matter.

This complication points us to an even bigger one: AGI would not just have to outperform humans; it would have to outperform humans using AGI. For the last-invention story to hold, people would have to become unnecessary even as partners or supervisors to artificial intelligences ( AIs ).

But intelligence is not a single quantity: “more” does not simply replace “less”. Even a very capable AGI might differ in kind from a human: exceptional at speed and pattern-finding, but fragile when confronted with rare cases. Different strengths imply different blind spots, and when those blind spots do not overlap, combining human and machine judgment will continue to beat either one alone.

The game of Go offers a useful reminder. After Google DeepMind’s AlphaGo beat Lee Sedol 4-1 in 2016, its superiority to human players seemed settled. But in 2023, researchers showed that a human amateur, exploiting blind spots uncovered by adversarial testing to steer top engines into unusual positions outside their training data, could reliably defeat the best programs. Apparent supremacy can still hide systematic weaknesses, and that is often where human input adds the most value.

A third problem concerns knowledge itself. The last-invention thesis assumes that all relevant information can be codified, but this is usually not the case. Few inventions changed the world more than the Ford Model T, which transformed the automobile into a mass-market product. But Henry Ford’s achievement lay not just in a new design. More important was his approach to organizing production.

That is why delegations from Italy, Germany, the Soviet Union, and elsewhere travelled to study Ford’s factories first-hand. The crucial know-how could not be gleaned from any blueprint. It was embedded in routines, sequencing, tooling and day-to-day problem-solving by those on the shop floor. Similarly, Toyota’s lean-production system has proved difficult to replicate because it is embedded in human routines and culture, not in a schematic.

More intelligence does not automatically overcome the “knowledge problem” – the fact that what makes complex systems work is dispersed, local, often unspoken information. If knowledge were frictionlessly portable, industries would not cluster so intensely, as in Silicon Valley or the City of London.

AI enthusiasts might respond by saying, “Fine, put sensors, cameras, and microphones everywhere, and we’ll codify the missing knowledge.” But this strategy assumes that people being monitored will openly communicate and share the knowledge they generate, and it assumes away politics and the law. Recording “everything, everywhere” would collide with the European Union’s General Data Protection Regulation, which has become a blueprint for privacy regulation worldwide.

Moreover, the EU’s AI Act does not give a free pass to the surveillance-heavy deployments that would be necessary to harvest human know-how at scale. And even if it did, one cannot assume that all human know-how, let alone judgment, is so easily digitized.

Ultimately, AGI may well automate intelligence. But the process of invention depends on something more. Often, the hard part is not thinking up a solution but translating it into practice. You need local know-how, trusted routines, supply chains and institutional capacity to make something work reliably in the real world. More intelligence does not automatically produce those complements.

AGI will change discovery by making expertise cheaper and experimentation faster. But “humanity’s last invention” is a much stronger claim. For it to be true, we would need a world where practical know-how is fully transferable through digital channels and where responsibility can be automated along with cognition. That is not the world we live in.

As intelligence gets cheaper, the assets that command the highest value will change. The advantage will go to those who can deliver outcomes. Humans are not becoming redundant; they are becoming the world’s most decisive bottlenecks.

Carl Benedikt Frey is an associate professor of AI & Work at the Oxford Internet Institute and the director of the Future of Work Program at the Oxford Martin School.

Copyright: Project Syndicate