Faces of AI with Liv Boeree
Starring game theory expert Liv Boeree, Faces of AI integrates different perspectives on artificial intelligence into a synthesis view.
CREDITS
Brought to you by: Synthesis Media
Director: Stephanie Lepp
Script: Stephanie Lepp and Liv Boeree
Starring: Liv Boeree
Music: design school, by Rob Voigt
Special thanks: Igor Kurganov, Jonah Sachs, Daniel Rechtschaffen, Matt Pirkowski, Erik Brynjolfsson, Corey deVos, and Zoë Haney
POINTS OF SYNTHESIS
All perspectives have some partial truth, but some perspectives are more incentivized than others to ignore their blind spots — like those who have a financial stake in the AI arms race
Different perspectives have identified real failure modes we need to avoid: too fast, too slow, too centralized, and too anarchic
AI’s complexity is greater than any one individual mind (or perspective) can grasp
We need more transparency from the labs — and lawmakers who actually know what to do with that transparency
We need whistleblower protections — but we also need to address the competitive pressures that make whistleblowing necessary in the first place
We need critical AI decisions to be made by representatives of humanity’s collective interest — but only if “collective” means global, e.g. including China and the Global South
This isn’t about being pro- or anti-AI, but about collaboratively facing the complex reality of AI
A rite of passage prepares us for making a developmental leap — like from childhood to adulthood. For humanity, AI is our next one. Because if we can coordinate to build something this powerful, safely, together, we’ll have grown up in a meaningful way.
SCRIPT
ACCELERATIONIST: Artificial intelligence is the GREATEST FORCE MULTIPLIER ever known.
PESSIMIST: Um, I think you mean the most DESTRUCTIVE force ever known.
ACCELERATIONIST: Cancer, poverty, climate change — AI solves it all. Every day we delay is another day of preventable suffering. Time to LET IT RIP.
PESSIMIST: More like time to PULL THE PLUG. We’re racing to build minds vastly more capable than ours with zero guarantee they’ll care about us! Our greater intelligence has NOT worked out well for other species.
AI SKEPTIC: Come on! Artificial super-intelligence is DECADES (if not centuries) away! AI today is just sophisticated pattern-matching with a marketing budget. We’ve got plenty of time.
GEOPOLITICAL REALIST: Time is precisely what we DON’T have. While y’all debate hypotheticals, CHINA is building AI for social control. If WE slow down, AUTHORITARIAN AI wins.
CHINA: [in Mandarin with subtitles] “Authoritarian AI”? YOUR social media ALREADY hijacks your citizens’ minds. WE use technology for social harmony and education — YOU use it for addiction and profit.
AI BUILDER: Which is why WE need to not only get there FIRST, but also get it RIGHT. We care about the safety of our users! Trust us, we’re on it.
GEOPOLITICAL REALIST: I admire your optimism, but the genie is out of the bottle — the question isn’t WHETHER we build it, but WHO gets there first.
RISK REALIST: But we don’t even know what “there” is! We have no idea what awaits us with AI, and especially not super-intelligent AI. Plus, we did technically “get there first” with social media, and we’re more DIVIDED and confused than we’ve ever been.
PESSIMIST: Well, we’ll all be unified in DEATH if these AI labs rush on super-intelligence. [Looks at AI BUILDER] Do you even realize that you’re building a technology more dangerous than nukes, and you expect it to not incinerate civilization?
AI BUILDER: But we DID create nuclear safeguards, and nukes HAVEN’T incinerated civilization.
RISK REALIST: Yeah but we had so many nuclear close calls during the Cold War, frankly it’s a miracle we’re still here…
HUMANIST: The real miracle is the beautiful web of life we’re all a part of — and I worry that it may not be able to co-exist with artificial intelligence. We may be bringing about our own obsolescence.
ACCELERATIONIST: More like bringing about our transcendence! AI can liberate us from the biological constraints that have caused MILLENNIA of suffering!
HUMANIST: Biology has developed over MILLIONS of years of evolution, and you want to alter it after a few years of AI research? That’s not wisdom, my dear — that’s hubris.
ACCELERATIONIST: Tell that to the parents watching their children die of diseases we could have already cured. Your Luddite thinking is trapping us in Humanity 1.0 when we could be ascending the Kardashev scale and spreading intelligence across the universe!
AI SKEPTIC: “Ascending the Kardashev scale,” what?! All I see are a bunch of trumped-up auto-complete bots being treated like sci-fi oracles. Frankly, you sound a bit like one yourself.
ETHICS ADVOCATE: EXCUSE ME! While you’re all debating philosophical fantasies, AI is harming REAL PEOPLE RIGHT NOW! Gig workers are getting replaced by algorithms! Facial recognition is turning minority neighborhoods into digital police states! Data centers emit more carbon than some countries — while you [looks at ACCELERATIONIST] claim it’s a climate solution!
And don’t get me started on wealth inequality — the rich just keep getting richer, while YOU [looks at AI BUILDER] automate away all our livelihoods!
PESSIMIST: And STILL, none of this matters if we build a super-intelligence.
AI BUILDER: Look, every transformative technology creates challenges. The printing press ALSO destabilized society. We don’t abandon fire because it can burn us — we LEARN how to use it. AI is no different.
RISK REALIST: AI is WILDLY different — it isn’t just ONE civilizational risk, it’s an accelerant to ALL of them! Bio-terrorism… drone swarms… cyber-attacks… social unrest… AI is turbo-charging them ALL!
[DEVOLVES INTO CHAOS]
SYNTHESIS: My darling humans. How wonderful that you all care so much about what may well be THE most consequential technology ever built.
The issue is: you each think you’re exclusively right. But the reality is: you’re all partially right. Which means: you’re partially wrong. And some of you are highly incentivized to ignore your blind spots.
There are many wrong ways to build AI: too fast, too slow, too centralized, too anarchic. So you have all found different failure modes we need to avoid.
The truth is: none of you yet know how to navigate this landscape because AI’s complexity is greater than any one individual mind can grasp.
So, the task at hand is to identify the kernels of truth you’re each holding, and then integrate them into a more comprehensive view.
GEOPOLITICAL REALIST: And how exactly are we gonna do that?
RISK REALIST: Well, we’d need a better sense of what’s actually going on in the labs. So how about some transparency? Like maybe the labs can disclose their development process and their safety protocols LONG BEFORE they deploy?
AI BUILDER: But only if lawmakers actually know what to do with that!
ETHICS ADVOCATE: And how about some whistleblower protections — so employees can actually report risks without facing retaliation?
HUMANIST: And what if all these monumental decisions aren’t made in closed boardrooms, but by representatives of humanity’s collective interest?
SYNTHESIS: Yes, perhaps!
But ultimately, this isn’t about being pro- or anti-AI, but about collaboratively facing the complex REALITY of it. It’s about staying focused on your part, but within the context of the larger WHOLE.
In other words, it’s about doing the synthesis. I mean, of course I would say that ;)
But really, we do need all your viewpoints.
Artificial Intelligence is humanity’s most complex challenge, but it’s also our greatest opportunity. It’s our rite of passage. Because IF we can coordinate to unleash AI’s power responsibly — then we can coordinate to do anything.
CUTTING ROOM FLOOR
When it comes to AI, the defining polarity is: innovation and safety. Some perspectives are holding the mantle of innovation — of the unprecedented breakthroughs AI could unleash. Other perspectives are holding the mantle of safety — warning that what we build could spiral out of control. The idea isn’t to choose between innovation and safety, but to hold both in dynamic equilibrium.
We are not building AI. What’s building AI is: our misaligned incentives — where anyone who pauses for safety gets crushed by competitors, forcing everyone to race towards risks that nobody wants. Whatever the proper balance between innovation and safety may be, we’ll never strike it with our misaligned incentives at the helm.
AI labs can’t “solve alignment internally,” because alignment is upstream of the labs — in the incentive landscape in which AI is funded, developed, and used.
Our most feared AGI scenario is already here. We already can’t pull the plug on something that’s perpetually upgrading its capacity and whose “objective function” creates major collateral damage. We’re already in a situation where no one wants microplastics in our lungs and oceans, but no one can stop it. And AI turbocharges that until we drown in paperclips.
Look at what happened with OpenAI. They started as a NON-profit devoted to ensuring AI benefits humanity…and pivoted into a capped-profit devoted to winning the AI arms race. Companies like OpenAI think they’re working on the hardest problem, but doing anything other than racing for AI dominance would be much harder.
This isn’t about being pro-AI or anti-AI, but about creating the conditions conducive to safe innovation. For example, by creating a global citizens assembly for AI, or shifting from GDP to a measure that accounts for more of what we truly value.
At the deepest level, changing the incentive landscape is about zooming out from our desired part — GDP, biodiversity, or whatever — to serve the whole we’re all part of. It’s about shifting from a body’s organs competing against each other, to coordinating in service of the body’s overall health.
The most talented engineers in the world think they’re working on the hardest problem. But the hardest problem isn’t building AGI. It’s coordinating to build AGI that’s truly win-win.
REFERENCES
The Last Invention, Longview with Andy Mills (2025)
AI 2027, Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, Romeo Dean (2025)
Silicon Dreams and Carbon Nightmares: The Wide Boundary Impacts of AI with Daniel Schmachtenberger, The Great Simplification with Nate Hagens (2024)
Daniel Schmachtenberger & Liv Boeree on AI, Moloch & Capitalism, Win-Win with Liv Boeree (2023)
The AI Dilemma, Your Undivided Attention with Tristan Harris and Aza Raskin (2023)
Daniel Schmachtenberger: Artificial Intelligence and The Superorganism, The Great Simplification with Nate Hagens (2023)
The Dark Side of Conscious AI | Daniel Schmachtenberger, Theories of Everything with Curt Jaimungal (2023)
Bohm & Krishnamurti, J. Krishnamurti and David Bohm (1980)