Moore's Law for AI?
From Single-Core Limits to Multi-Agent Systems
Moore’s Law [1] fueled decades of computing progress by predicting that transistor density would double roughly every two years. For a long time, this translated into rapid gains in CPU clock speed, until thermal limits and the end of Dennard scaling brought that to a halt. At that point, the industry pivoted: instead of faster single cores, we got multicore processors.
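For intuition, here is a minimal sketch of the arithmetic behind that doubling schedule. The `projected_density` helper, the normalized starting density, and the time horizons are illustrative assumptions, not historical data.

```python
# Minimal sketch of Moore's Law arithmetic: density doubles roughly every
# two years, i.e. density(t) = density_0 * 2 ** (t / doubling_period).
# Starting value and horizons are illustrative, not measured data.

def projected_density(initial_density: float, years: float, doubling_period: float = 2.0) -> float:
    """Project transistor density after `years`, doubling every `doubling_period` years."""
    return initial_density * 2 ** (years / doubling_period)

if __name__ == "__main__":
    start = 1.0  # normalized density at year 0
    for year in (2, 4, 10, 20):
        print(f"year {year:2d}: {projected_density(start, year):7.1f}x the starting density")
```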
AI seems to be approaching a similar transition point. Model sizes have grown exponentially, yet the returns in capability are proving sub-linear: each additional order of magnitude of compute buys a smaller gain than the last. We are now in a resource-constrained environment where the brute-force scaling of the past few years is no longer economically or practically viable, and that pressure is forcing an architectural pivot. What comes next may not be another order-of-magnitude jump in parameter count but a change in how intelligence is organized. Just as CPUs went multicore, AI will find another way to keep scaling intelligence.
Podcast
If you prefer listening over reading, check out this podcast episode, where the topic is explored in more detail.