Discussion about this post

Haris Marath

Your imagery of "building gods" is striking—ancient entities of calculation and foresight, forged not from myth but from data. As a clinician, though, I find myself looking less at the "instrument" (AI) and more at the "organism" wielding it: us.

You ask if seeing the future clearly helps us choose a different one. History and biology point to an unvarnished but necessary diagnosis.

1. The Deterministic Loop: Our behaviour isn't primarily a failure of "moral imagination." It emerges from a "biological source code" shaped by environments that rewarded survival and resource acquisition above all. When engines like TimesFM or Earth-2 sharpen our view of coming weather, markets, or conflict, they illuminate the path, but they don't automatically rewrite the drivers pushing us down it.

2. Insidious Catastrophes vs. Strategic Realignment: You mention the abolition of slavery and the birth of democracy (suffrage) as triumphs of care. While Walter Scheidel’s The Great Leveler documents how acute catastrophes—wars and collapses—reset power, I believe a more insidious biological mechanism is often at play in these "moral" shifts. For the enslaved or the disenfranchised, systemic suppression eventually threatens fundamental biological imperatives: survival and genetic propagation. The resulting resistance raises the system's "cost of maintenance"—suppression and instability—above the cost of reform. The concession of rights is rarely a moral epiphany; it is a strategic realignment by the powerful to preserve the system's long-term homeostasis.

3. The AI Trigger: Today, the barrier to challenging entrenched interests has risen exponentially. Without a genuine "Black Swan" event to disrupt the current alignment of incentives, AI is more likely to serve the survival strategies of existing elites than to force a collective course correction.

4. Ingenuity vs. Wisdom: We went from the Wright brothers to the Moon in just 66 years—and we did it without AI. Building AGI is now a tangible engineering problem. But if history is any indication—from the Haber-Bosch process to the atomic bomb—we have rarely been wise enough to use new technology just as a "cure." We almost always use it as a club as well.

I have spent a great deal of time exploring these "terminal loops" in my book, The Theory of Us: The Final Diagnosis of the Human Species, but the question of agency remains the most challenging part of the prognosis.

What do you see as the most plausible path out of this loop—or do the gods we're building simply amplify what is already hardcoded?

Tom Shakir

“Illusion of choice” 🔓
