
We're all just trying to figure this out
Fri Jan 31 2025
A Faustian pact?
On throwing ourselves into the AI abyss
I’ve been thinking a lot about AI recently. The DeepSeek R1 drama playing out, while knocking trillions off the value of AI stocks in the short term, only demonstrates to me that we’re nowhere near the limits of where this technology can go. A small Chinese team that bothered to look at efficiency gains showed that it, and therefore anyone, could compete with the raw power of the US chip clusters.
At this point, AGI seems all but inevitable only because for many world powers, the risk of not having super-intelligence in an increasingly fractured world is existential.
upside?
The promises are seductive - solutions to climate change, breakthrough medical treatments, unlimited clean energy. A future where scarcity becomes obsolete and human suffering is dramatically reduced. But I can’t shake this nagging feeling that we’re walking into some kind of Faustian bargain.
I was re-reading Dan Simmons’ Hyperion Cantos recently, a major part of which – especially in the later books – has humanity gradually handing over control to AIs called the TechnoCore. They promise to solve all our problems - faster-than-light travel, virtual immortality, the works. But somewhat unsurprisingly, beneath this technological utopia lies a deeper agenda where humans become increasingly dependent on - and ultimately subservient to - their artificial creations.
From The Matrix to WALL-E, this AI-overlord trope has been a mainstay of both mainstream and niche science fiction for decades.
They say that sci-fi predicts sci-fact. We’re rushing headlong into this future where AI helps us with everything - writing our emails, coding our software, diagnosing our illnesses. And while the benefits are clear, with each small convenience we accept, each task we delegate, we may be slowly giving away pieces of our agency.
We can’t put the cat back in the bag now, and there’s plenty to be excited about. But as we stand on the precipice of potentially revolutionary advances, these once-theoretical concerns feel increasingly urgent and personal.
what can we do?
The question is what boundaries we want to maintain: what aspects of human agency we consider non-negotiable, and how we can pursue technological progress without losing our essential humanity in the process.
And given the seeming inevitability of it all, how much control do we have? Whether we’re AI-optimists or AI-pessimists, the vast majority of us are, to an extent, just along for the ride.
Perhaps the key is to remain conscious of the bargain we’re making. To actively choose which parts of our lives we enhance with AI, and which parts we keep for ourselves. Unlike Faust, we still have time to negotiate the terms.