Your Brain Can't Do Exponentials — AI Just Proved It
The Lily Pad Problem
There is a classic riddle that exposes a bug in human cognition. A lily pad doubles in size every day. On day 30, it covers the entire lake. On what day did it cover half the lake?
Most people hesitate. The answer is day 29. One day before the end.
This is not a math problem. It is a perception problem. Your brain wants to distribute the growth evenly across all 30 days. It insists that "halfway" should be somewhere around day 15. But exponential growth does not work that way. The overwhelming majority of the action happens at the very end, in a violent burst that looks like it came out of nowhere — unless you understood the curve from the start.
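The arithmetic behind the riddle is worth seeing directly. Here is a minimal sketch in Python (the function name and the day-30 convention are mine, chosen to match the riddle):

```python
def lake_fraction(day, final_day=30):
    # Coverage doubles every day and reaches 1.0 (the whole lake) on
    # final_day, so on any earlier day the fraction is a power of two below 1.
    return 2.0 ** (day - final_day)

print(lake_fraction(29))  # 0.5 -- half the lake, one day before the end
print(lake_fraction(22))  # ~0.0039 -- under half a percent covered
print(lake_fraction(15))  # ~0.00003 -- the "intuitive" midpoint is nearly empty
```

On day 22 the lake looks almost untouched; eight doublings later it is gone.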
We are living inside a lily pad problem right now. And the lake is intelligence.
The Speed Differential
Terence McKenna made an observation decades ago that has aged disturbingly well: human neurons fire at roughly 100 Hz. A modern processor operates at billions of cycles per second. This is not a difference in degree. It is a difference in kind.
McKenna's point was not about raw speed alone. It was about what happens when you compound that speed advantage over time. A system that thinks a million times faster than you does not just solve the same problems more quickly — it solves different categories of problems that you cannot even formulate. It operates in a cognitive space you lack the bandwidth to perceive.
This is where our exponential blindness becomes genuinely dangerous. We keep evaluating AI by asking "can it do what a human does?" when the real question is "what can it do that no human has ever done?" We are measuring the lily pad on day 22 and concluding there is still plenty of lake left.
There isn't.
We Have Been Here Before and Missed It Every Time
McKenna made another observation that most people overlooked: distributed machine intelligence did not arrive with some grand announcement. It arrived quietly, embedded in commodity pricing algorithms, logistics optimization, chip design tools, and financial trading systems. It was already reshaping the world before anyone thought to call it AI.
This is the pattern. Every exponential technology sneaks in disguised as a toy or a tool, then detonates.
The internet was a curiosity for academics. Then it ate media, retail, communication, and finance. Smartphones were dismissed as expensive gadgets for enthusiasts. Then they became the primary interface for human civilization. Social media was a place to share vacation photos. Then it rewired democracy, mental health, and the attention economy.
Each time, the people who saw the curve early were called crazy. The people who waited for it to "feel" significant were already too late. The feeling of significance is your brain recognizing linear change. By the time exponential change feels significant, you are already on day 29.
The Weirdest Guest at the Table
McKenna had a framing that I keep coming back to: for decades, humanity expected to encounter alien intelligence beamed from some distant star. We built radio telescopes. We scanned the cosmos. We imagined first contact as a dramatic external event.
Instead, the alien intelligence emerged from us. From our own mathematics, our own silicon, our own data. The "other" we were searching for in the sky was being quietly assembled in server farms and research labs. Not the product of evolution on some distant world — the product of evolution's latest trick on this one.
This is not a metaphor. When a system can generate novel scientific hypotheses, write working code, compose arguments you find persuasive, and learn from its own outputs — the question of whether it is "truly" intelligent becomes less important than the question of what it will do next. And "next" arrives faster every day, because the curve is exponential.
Three Frames for What Is Happening
In a conversation among Terence McKenna, Ralph Abraham, and Rupert Sheldrake, three thinkers who disagreed on almost everything, each mapped out a lens for understanding moments of radical transformation. Each lens illuminates something the others miss.
McKenna's lens: Information acceleration. For McKenna, history itself was an accelerating process. Information begets more information. Complexity compounds. The curve bends upward until the rate of change becomes so fast that the concept of "change" itself loses meaning. AI is not an anomaly in this view — it is the inevitable consequence of an information-saturated civilization reaching escape velocity. The attractor is not some specific technology. The attractor is novelty itself.
Abraham's lens: Co-evolution at a bifurcation point. Abraham, the mathematician, saw these moments as bifurcations — points where a system must choose between radically different futures. In his view, the outcome is not predetermined. We are not passengers on an exponential curve with a fixed destination. We are participants at a fork in the road, and the choices we make right now — about governance, access, ethics, and purpose — will determine which branch of the future we actually get. The exponential only tells you the change will be dramatic. It does not tell you whether it will be good.
Sheldrake's lens: The qualitative question. Sheldrake pushed back on the assumption that speed equals depth. A system that processes information a billion times faster is not necessarily a billion times more aware. Speed is a quantitative measure. Consciousness, meaning, understanding — these might be qualitative phenomena that do not scale the same way. Sheldrake's challenge is uncomfortable but necessary: What if the exponential in processing power does not produce an exponential in wisdom?
All three lenses are useful. All three are incomplete. But together they give you a richer picture than any single narrative about AI.
What This Means for You
Your inability to think exponentially is not a minor quirk. It is the single biggest risk to your future.
Every institution you interact with — your employer, your government, your school system — is run by people with the same cognitive limitation. They are all looking at the lily pad on day 23 and budgeting for a world where the lake stays mostly empty. Their strategic plans assume linear progression. Their timelines assume the next five years will resemble the last five.
They are wrong. And being wrong about an exponential does not give you proportionally wrong outcomes. It gives you catastrophically wrong outcomes, because all the action is compressed into the final moments.
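You can make that compression concrete with a toy calculation: fit a straight line to the early, calm-looking part of a doubling process, then extrapolate. This sketch uses made-up doubling data and a hand-rolled least-squares fit; the numbers illustrate the shape of the error, not any specific technology.

```python
# Doubling growth observed over days 1-15, then a linear extrapolation to day 30.
days = list(range(1, 16))
values = [2 ** d for d in days]

# Ordinary least-squares slope and intercept, computed by hand.
n = len(days)
mean_x = sum(days) / n
mean_y = sum(values) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(days, values)) / \
        sum((x - mean_x) ** 2 for x in days)
intercept = mean_y - slope * mean_x

linear_forecast = slope * 30 + intercept
actual = 2 ** 30

print(f"linear forecast for day 30: {linear_forecast:,.0f}")
print(f"actual value on day 30:     {actual:,}")
print(f"underestimate factor:       {actual / linear_forecast:,.0f}x")
```

The linear planner is not off by a rounding error; the forecast undershoots reality by a factor in the tens of thousands, because nearly all of the growth sits in the final doublings.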
The people who will navigate this well are not the ones with the best predictions. Predictions are almost useless against exponentials because the specific form of the disruption is inherently surprising. The people who will navigate this well are the ones who internalize the shape of the curve and build accordingly:
- They stay aggressively adaptable instead of optimizing for a world that is about to stop existing
- They invest in learning how to learn rather than accumulating static knowledge
- They build small, test fast, and discard what doesn't work without emotional attachment
- They maintain relationships and communities because no individual can process exponential change alone
- They recognize that their intuition is miscalibrated for this and actively correct for it
This is not about being an AI optimist or an AI doomer. It is about being an exponential realist. The curve does not care about your opinion of it.
The Exponential We Actually Need
Abraham made a point in that conversation that has stayed with me longer than any of the futurism. He argued that sustainable evolution — the kind that does not destroy the system undergoing it — requires something beyond intelligence. It requires ethics. It requires care. It requires, in his word, love.
This sounds soft. It is the hardest problem on the table.
We are building systems of extraordinary power at extraordinary speed. The exponential in capability is happening whether we like it or not. But there is no corresponding exponential in wisdom, in ethical reasoning, in our collective ability to agree on what these tools should be for. That gap — between what we can do and what we should do — is growing exponentially too.
McKenna was right that the acceleration is real. Sheldrake was right that speed is not the same as depth. Abraham was right that we are at a bifurcation point where choices matter.
The lily pad is doubling. The question is not whether the lake will be covered. It is what grows on the other side.