Faces of AI: A live performance
Last week, I staged a live performance of Faces of Artificial Intelligence!
A Faces of AI video doesn't exist yet (forthcoming :), so this performance was the debut. It was at FutureWorks — Indeed.com's flagship HR leadership conference.
Backing up for a sec: how did this come about??
Crazy story: a Marketing Director from Indeed.com was at TED, and loved my talk. She invited me to give a similar talk at FutureWorks. But instead of Faces of Capitalism, she requested Faces of AI. Which makes sense — the HR industry is already experiencing AI disruption. So much so that... the very Marketing Director who’d invited me was then let go in a round of AI-driven layoffs! Oy. But she’d already made the handoff, and my presentation proceeded.
At TED, capitalism is somewhat theoretical. But at FutureWorks, AI is real.
And I could feel that in the audience. After our presentation, so many people came up to thank me — because we spoke precisely to the optimistic vs. pessimistic AI debate that was happening at the conference, and offered a more realistic and comprehensive view. People were especially grateful for the message of synthesis given that Charlie Kirk had been shot the day before ❤️🩹
So...what wisdom did we offer this AI-rattled industry? Watch the performance here, and check out the full transcript below:
And when will there be a video for Faces of AI? Hopefully by November. There will be more than 2 perspectives before we get to synthesis — likely 8 or 9! Including China. FUN. Stay tuned...
TRANSCRIPT: Faces of Artificial Intelligence
THESIS: Look, I've been in HR for 25 years, and AI is the most EMPOWERING force we've ever seen.
ANTI-THESIS: Honey, I’ve been around just as long, and AI is the most DESTRUCTIVE force we've ever seen.
THESIS: Think about it — each wave of innovation creates new jobs we couldn't have imagined. The internet gave us web developers and app designers…and AI will create WHOLE NEW INDUSTRIES.
ANTI-THESIS: But for every new job AI creates, it destroys MORE jobs — faster! We’re talking about MASS DISPLACEMENT of the very people we're supposed to support. Which means WE go from building careers…to managing layoffs.
THESIS: Okay, but think about what happens when AI helps us see not just people’s PAST credentials…but their FUTURE potential! A “skills-first” approach could enable a brilliant candidate from a community college to get the same shot as one from the Ivy Leagues.
ANTI-THESIS: Um, AI doesn’t ‘see’ potential — it predicts the future based on the PAST, so it’s literally programmed with our old blind spots. We’re conjuring a corporate version of Minority Report.
THESIS: Listen, we’ve been through tech revolutions before — email replaced endless phone tag, and job boards opened up candidate pools beyond the local paper. Were they disruptive? Yes! But they freed us from busywork so we could focus on what matters: UNLEASHING HUMAN POTENTIAL. AI can put the human back in human resources.
ANTI-THESIS: At the end of the day, you and I got into HR because we love people. Our desks are cluttered with scribbled notes from late-night calls with anxious candidates, and reminders to follow up with hiring managers -- because we CARE about the person behind every application. But now, we’re training AI to replace all of that, and maybe even replace us! And once AI takes the human OUT of human resources, there’s no going back.
SYNTHESIS: Ladies, sorry to interrupt—
THESIS: Can I help you?
ANTI-THESIS: We’re in the middle of a conversation here
SYNTHESIS: I couldn’t help but overhear – you’re each seeing DIFFERENT parts…of the same BIGGER picture.
When it comes to AI, the defining polarity is: INNOVATION and SAFETY.
One of you [look at where Thesis was facing] is holding the mantle of innovation — of using AI to unlock new capabilities. The other [look at where Anti-Thesis was facing] is holding the mantle of safety — all too real given recent AI-driven layoffs at companies in this room. The idea isn’t to choose BETWEEN innovation and safety, but to hold BOTH in dynamic equilibrium.
But here’s the twist: we’re not balancing innovation and safety in a vacuum. We’re balancing them in a BUSINESS ENVIRONMENT that too often rewards short-term profit over long-term value. For us, that becomes an arms race to deploy AI before our competitors – but sometimes without the proper safeguards.
This isn’t about being pro-AI or anti-AI, but about creating the conditions conducive to safe innovation. When we shifted from horse carriages to cars, we needed seatbelts. And if the cars become driverless? Then we need a whole new infrastructure.
Ultimately, AI’s most feared disruption is: JOBS. Which means the people in this room might have THE MOST IMPORTANT job: to re-imagine HUMAN work for an age of ARTIFICIAL intelligence.
The people in this room are uniquely equipped — because of everything we know and love about people — to imagine a world where: if the jobs don’t need the people, the people don’t need those jobs. What would the future of work – and of OUR work – look like if people’s livelihoods came not from plugging into the job market, but from contributing their unique genius?
THESIS: Now that’s exciting.
ANTI-THESIS: Okay. You’ve got my attention.
SYNTHESIS: We may not know what makes humans unique. But whatever it is – wisdom, soul, something else – let’s use AI to elicit MORE of it. Let’s not just adopt AI, but shape the CONTEXT in which we adopt it — holding innovation and safety in proper equilibrium.
So you ARE both partially right. As am I. Our view will always be incomplete, but it's by integrating our different perspectives that we can see a bigger picture.
[SCREEN - Coming Up: Faces of X creator Stephanie Lepp]
STEPHANIE: So you might think that your way of seeing the world is right, but the three Ariannas and I are here to suggest that perhaps you are all partially right. So instead of choosing one perspective and getting a partial view, how might we integrate different perspectives and gain a BIGGER view?
Well, THAT is the inspiration behind what you all just watched – a performance of my recent production: Faces of X. Faces of X is a series of short videos that integrate different perspectives on culture war issues – like gender, abortion, and race. You know, keeping it light ;)
The format is simple: each video first presents the strongest arguments on each side – the thesis and the antithesis – and then attempts to integrate them into a synthesis perspective. Thesis, antithesis, and synthesis are all played…by the same person.
So you all just watched a LIVE performance of "Faces of X." The great Arianna Ortiz just showed you "Faces of Artificial Intelligence," but the potential pipeline of the series is infinite. Imagine "Faces of Immigration," right? Imagine "Faces of Politics in the Workplace." Imagine "Faces of" whatever issue you and your team are wrestling with. And I do take requests, so come talk to me after ;)
But zooming out, what does it actually MEAN to integrate different perspectives? Well, one thing it doesn't mean is meeting in the middle. Given the complexity of the challenges we face, instead of meeting in the middle, I want to see us move from the horizontal plane to a new dimension, from common ground to higher ground. Let's think not just in terms of changing hearts and minds, but of EXPANDING hearts and minds.
One metaphor I love is: parallax vision. The view from our right eye is slightly different than the view from our left eye. Each eye gives us a view that's true, but partial. So it's by looking through BOTH EYES TOGETHER (along with other visual cues), that the world goes from flat to 3D. Integrating different perspectives quite literally gives us greater DEPTH.
So a question that I always get asked is: how? How do we integrate different perspectives -- especially in workplaces where different perspectives collide every day? Well, it's an art and a science, and something that has been explored at length by thinkers from Georg Hegel to Ken Wilber. But for us here today, I'm going to distill it into three questions that I ask myself. Here we go:
First question: Is there an either/or that can be flipped to a both/and? Well, on the issue of DEI -- yep, we're going there -- the goal is framed as increasing diversity, equity, and inclusion. But EACH of those values is part of a POLARITY – of seemingly opposing values that actually need each other to exist. Diversity co-exists with unity, hence our national motto of e pluribus unum – out of many, one. Equality co-exists with meritocracy, because fairness without striving leaves us stagnant, and striving without fairness leaves too many behind. And inclusion co-exists with exclusion – because healthy boundaries are necessary for identity and belonging. Echoing Arianna, the idea isn’t to choose between diversity and unity (for example), but to hold BOTH poles of a polarity in dynamic equilibrium. From either/or to both/and.
Second question: Is there an opportunity to shift from “yes or no” to “under what circumstances, if any”? On the issue of politics in the workplace, we can shift from “politics in the workplace: yes or no?” to “under what circumstances should politics be discussed in the workplace?” And if we leave out “all circumstances” and “no circumstances,” and integrate what’s left, we’ll likely end up with guidelines that generally encourage political discussion in contexts that are 1:1 and informal (like over lunch and via text), and discourage it in contexts that are large-group and formal (like all-hands meetings and over Slack). From “yes or no” to “under what circumstances.”
And finally, are there perverse incentives that are making the issue harder to resolve? Back to the issue of artificial intelligence, we are up against: a fiduciary duty to maximize short-term shareholder value, and a hyper-competitive arms race to beat other companies for AI dominance. So if we addressed these upstream incentives, then more of us could trust that AI is being deployed not because it maximizes profit or wins a race, but because it genuinely creates long-term multi-stakeholder value. Are there perverse incentives?
Now this kind of integration does get confused with both-sidesism, the idea that all sides are equally relevant and valuable. It also gets confused with relativism, the idea that there’s no absolute truth, and everything is contextual.
But the reality is: it's unlikely that one of us is entirely right. (Usually that's a sign of tribalism.) It's also unlikely that all of us are equally right. (That's both-sidesism.) What's most likely is that most of us are partially right, and some of us are more right than others. Which doesn't make for a great tagline, but that's what's up when we're contending with the complexity of reality ;)
So the next time members of YOUR team are locked in an argument about some hot political issue, assume that they EACH have some nugget of insight, and your job is to help them find each other’s, and incorporate it into their OWN perspectives. Challenge them to not just listen to each other and hang out on the horizontal plane, but to enter a new dimension.
Thank you.



