
On Progress and Intelligence

May 3, 2026

Material progress is stagnating. For the last several decades, our societal investments in the hard sciences have seen smaller and smaller returns, despite more money, more researchers, and more powerful tools: the number of new drugs invented per dollar* has halved every decade (Eroom’s Law: Moore’s Law, backwards); abundant, reliable clean energy at civilization-wide scale is not within reach; and each new semiconductor process node is vastly more expensive than the last. This slowdown of civilization-scale innovation, which Tyler Cowen calls the Great Stagnation, is the exigent problem of our time; many of our largest problems, like wage stagnation, climate change, and expensive healthcare, are (perhaps surprisingly) somewhat downstream of a society that has stopped innovating. The common explanation is that we’ve already picked most of the low-hanging fruit. This is pretty accurate (things *have* actually gotten harder!). Yet “low-hanging” is relative, and I think there is a deeper cause: we are in a regime where intelligence is both scarce and poorly allocated. When intelligence becomes abundant, the economics of progress change.
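To see how brutally that halving compounds, here’s the back-of-envelope arithmetic (a toy illustration using the per-decade figure above; the six-decade window is just an example):

```python
# Eroom's Law, compounded: if new drugs per R&D dollar halve every decade,
# six decades multiply out to a 2^6 = 64x decline in research productivity.
decades = 6
relative_output = 0.5 ** decades
print(f"{relative_output:.4f}")  # 0.0156 -> ~1.6% of the starting level
```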

In the past, good ideas were simply easier to find. Hennig Brand discovered a brand-new element, phosphorus, from his own urine. Alexander Fleming discovered antibiotics by tinkering with mold on a petri dish. The Wright Brothers achieved powered flight with wood, fabric, and wire. The modern frontier is quite different: innovation requires teams of geniuses with decades of expertise and the most powerful tools. EUV lithography is the product of millions of PhD man-hours spanning precision optics, light sources, and vacuum systems, with collaboration across industry and academia, and it still took a quarter-century to reach production.

Is this lack of low-hanging fruit really an irreducible explanation for slow progress? Let’s look at the production function of “progress.” This function takes intelligence as its most important input and uses it to explore and experiment over idea-spaces. Today, that input is both scarce and misallocated.
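As a rough sketch (my own toy illustration, not a formal model from the economics literature, with made-up numbers), that production function might look like this: intelligence gets split between generating candidate ideas and evaluating them, and output is capped by whichever capacity runs out first.

```python
# A toy production function for progress (an illustration, not a standard
# model): intelligence is split between searching idea-space and evaluating
# candidates, and validated ideas are capped by the scarcer of the two.

def validated_ideas(intelligence: float, search_share: float,
                    search_cost: float, eval_cost: float,
                    hit_rate: float) -> float:
    generated = intelligence * search_share / search_cost      # ideas proposed
    evaluable = intelligence * (1 - search_share) / eval_cost  # ideas testable
    return min(generated, evaluable) * hit_rate                # ideas that work

# Today, search (decades of training, context-building) is the expensive half:
print(validated_ideas(100, search_share=0.5, search_cost=10, eval_cost=1, hit_rate=0.1))  # 0.5
# If AI makes search ~100x cheaper, evaluation becomes the binding constraint:
print(validated_ideas(100, search_share=0.5, search_cost=0.1, eval_cost=1, hit_rate=0.1))  # 5.0
```

The point of the toy: making search cheap doesn’t just produce more ideas, it moves the bottleneck, which is the shift the rest of this essay traces.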

A) Scarcity. To contribute to frontier science, one must go through decades of education and become incredibly specialized; there’s a thirty-year gap between being a bright-eyed teen interested in biochemistry and moving the field forward, even by an inch. This specialization means that the people who can innovate are a much smaller fraction of the people who are smart. Those who lack the financial means to access that education, or who have more pressing matters of day-to-day survival, never get the chance.

B) Allocation. Today’s (roughly) efficient markets structurally allocate intelligence away from science. In a world of finite intelligence and an effectively infinite supply of problems, intelligence naturally flows to the problems producing the greatest economic value. The world of bits, not the world of atoms, provides explosive economic returns to intelligence, so the market rationally allocates most marginal units of intelligence to the digital world. Largely due to these incentives, the most popular major at Stanford over the past decade has become computer science (over 2x more popular than the runner-up!), not physics, chemistry, or astronomy.

Yet, we are about to live in a world of abundant intelligence.

Its cost will fall precipitously, upending its unit economics. The digital world cannot absorb every marginal unit; eventually, the marginal mind computing linear regressions for corn futures and link click-through rates is more valuable somewhere else. It will become economically attractive, once again, for intelligence to tackle the greatest scientific problems of our time.

Innovation is a loop between the search and evaluation of ideas, and today, the search half is the expensive one. AI will vastly lower its cost. Context-building over decades of papers and experimental results can be compressed into seconds of inference. Given a body of knowledge and a new result, it will soon take orders of magnitude less time to make the best guess about what to try next. This will become true for every field of science, from electrical engineering to medicine to chemistry to materials science to nanotechnology, at roughly the same time. We will see an explosion of incredible ideas.

The evaluation of these ideas then becomes the bottleneck, and evaluation throughput will determine where innovation happens fastest. How many RL steps can we take every day? How many experiments can we run? There are no unit tests for the real world; we must pay the cost of a real experiment to validate every hypothesis. Yet if we can make fewer, more accurate hypotheses, we will require less real-world validation. World simulators, from nanoscale to planetary scale, will become critical. AlphaFold 3 gives us a powerful model of biomolecular interactions. When we simulate entire virtual cells, can we be more confident about drugs before human trials? When we use AI-assisted physics engines to test the properties of new materials, can we accelerate their discovery? When we simulate the sub-atomic interactions of a new semiconductor process node, can we shorten the multi-decade wait to productionize it?
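The underlying arithmetic is simple (a back-of-envelope illustration of my own, with made-up hit rates): if a fraction p of your hypotheses survive real testing, you expect roughly 1/p costly real-world experiments per discovery, so any simulator that pre-screens hypotheses and raises p cuts the real-world bill proportionally.

```python
# Back-of-envelope (illustrative hit rates, not measured ones): expected
# real-world experiments per validated discovery is roughly 1/p, where p is
# the fraction of hypotheses that survive real testing.
for p in (0.01, 0.10, 0.50):
    print(f"hit rate {p:.0%}: ~{1 / p:.0f} real experiments per discovery")
# hit rate 1%: ~100 real experiments per discovery
# hit rate 10%: ~10 real experiments per discovery
# hit rate 50%: ~2 real experiments per discovery
```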

Human progress is a sum of stacked S-curves, and for a long time, we had plateaued. Not only had we nearly saturated our known S-curves, but the very discovery of new S-curves to climb had become harder as people over-specialized. Soon, the speed of both discovering and climbing these S-curves will accelerate: a world of abundant progress.

* per inflation-adjusted billion R&D dollars

** In-context learning, it turns out, is really awesome. No matter how much I think about it, I am somehow never any less amazed. The pretraining objective of being thrown into a random spot in recorded human history 1,000,000,000,000s of times and guessing what comes next creates a machine that can learn on the fly.
