December 16, 2025

The Good Future of AI is Human Augmentation, Not Replacement: An Interview with Futurist Gerd Leonhard

Futurist Gerd Leonhard on why industrial leaders should prioritize exponential human augmentation.

This article is based on a recent Industry Thought Leader interview with Gerd Leonhard, Futurist and Founder of The Futures Agency. The full video interview is included below. Gerd Leonhard predicted streaming music before Spotify existed. In his 2005 book The Future of Music, he described "music like water": ubiquitous, on-demand, subscription-based. This vision, which seemed radical at the time, now defines how billions consume media.
Today, as CEO of The Futures Agency and one of the world's most respected futurists, Leonhard brings that same clarity to the questions every industrial leader is wrestling with: What should AI actually do for us? Where does automation make sense, and where does it destroy the very value we're trying to create?
His answer challenges the prevailing narrative. In a recent conversation for the Siemens Xcelerator Community, Leonhard made a compelling case that our obsession with automation is misplaced and that the real opportunity lies in exponentially augmenting humans.

The CAVA Framework: A Hierarchy for AI Applications

Leonhard offers a practical framework for thinking about AI's appropriate uses, one that flips the typical automation-first mentality on its head. He calls it CAVA.
Cognification comes first: using AI to work faster, find information more efficiently, stay organized, and simply perform better at cognitive tasks. Augmentation follows: language translation, document retrieval, analytics, and other tools that extend human capability. Next comes Virtualization: digital twins, simulation, and modeling that allow us to test and iterate in virtual environments.
These three categories, Leonhard argues, represent roughly 80% of AI's real value. The fourth category, Automation, is what most executives focus on first, yet it's actually the most difficult to implement well and often the least valuable.
"Automation is highly overrated because automation is actually quite difficult, as we can see with the self-driving car. It is possible, but it's difficult."
Consider the call center industry: 20 million people work in call centers globally. The automation narrative suggests we'll eliminate 95% of those jobs. Leonhard's read is more nuanced. Perhaps half will be automated, while the rest evolve into different, often more valuable roles. "These jobs are on fire," he acknowledges. "They're burning platforms clearly, but we're not gonna automate all of it."
The practical implication for industrial leaders: before investing in automation, exhaust the opportunities in cognification, augmentation, and virtualization. The returns are often higher, and the implementation challenges far more manageable.
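As a rough illustration, that triage logic might look like the following Python sketch. The Initiative records and backlog entries are hypothetical; only the CAVA ordering itself comes from Leonhard's framework.

```python
# A minimal sketch of CAVA-based triage for proposed AI initiatives.
# The example initiatives are invented; the ordering is Leonhard's.
from dataclasses import dataclass
from enum import IntEnum


class Cava(IntEnum):
    """Leonhard's hierarchy, from easiest to hardest to implement well."""
    COGNIFICATION = 1   # work faster, find information, stay organized
    AUGMENTATION = 2    # translation, retrieval, analytics
    VIRTUALIZATION = 3  # digital twins, simulation, modeling
    AUTOMATION = 4      # hardest to do well, often least valuable


@dataclass
class Initiative:
    name: str
    category: Cava


def triage(initiatives: list[Initiative]) -> list[Initiative]:
    """Order proposals so cognification, augmentation, and virtualization
    opportunities are exhausted before any automation project is considered."""
    return sorted(initiatives, key=lambda i: i.category)


backlog = [
    Initiative("Autonomous quality-gate removal", Cava.AUTOMATION),
    Initiative("Digital twin of packaging line", Cava.VIRTUALIZATION),
    Initiative("Maintenance-manual search assistant", Cava.COGNIFICATION),
]

for item in triage(backlog):
    flag = "  <- needs extra scrutiny" if item.category is Cava.AUTOMATION else ""
    print(f"{item.category.name:14} {item.name}{flag}")
```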

Androrithms: The Uniquely Human Capabilities

Leonhard has coined a term for the human capabilities that machines cannot replicate: androrithms, a play on algorithms that captures what remains distinctly human. These include imagination, intuition, common sense, storytelling, empathy, compassion, creativity, design thinking, and negotiation.
"Machines are very bad at imagination. Intuition, no common sense, no storytelling, no empathy, no compassion... They can properly simulate it sooner or later, but they can't actually be emotional. They can pretend to be, of course."
This distinction matters enormously for workforce development. As AI handles more routine cognitive tasks, the premium on these human capabilities increases. Organizations that invest in developing androrithms alongside their technology investments will find themselves with teams capable of doing what AI cannot.
Leonhard points to a troubling trend: roughly 20% of ChatGPT's 750 million users employ it for therapy and personal questions. "That is just utterly stupid," he says bluntly, "because this machine knows nothing about human values or empathy. It's a machine that's best when it's about logic." The lesson: understanding what AI is actually good at, and what it isn't, should guide every implementation decision.

Show Me the Incentive

Throughout our conversation, Leonhard returned repeatedly to the importance of incentive structures, quoting Charlie Munger: "Show me the incentive and I'll show you the outcome."
If the incentive is to eliminate as many humans as possible to reduce costs (Sam Altman's vision of a billion-dollar company with ten employees), then the outcome is predictable: everything becomes "good enough" but nothing is actually good. "There would be no soul in it," Leonhard observes. "You will not be valuable there because we create the value behind this. The humans do."
The alternative? Design incentives around broader objectives. Leonhard uses a framework of five Ps: People, Planet, Purpose, Peace, and Prosperity. Not as a replacement for profit, but as a balanced scorecard.
"You'll never have a decision that ticks all five boxes at 10 out of 10. That's impossible. So one day you tick the box that has prosperity profit, that's 10 and people is two. But the next day people are 10 and purpose is eight and profit is three."
The key insight: being explicit about which values you're prioritizing in any given decision creates accountability and forces strategic clarity. "If you only ever tick one or two boxes," Leonhard warns, "then that's probably not such a hot idea."
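A minimal sketch of what such a scorecard could look like in practice follows below. The tick threshold and the example scores are illustrative assumptions, not part of Leonhard's framework; the point is simply that making the trade-offs explicit also makes the warning sign computable.

```python
# A minimal sketch of Leonhard's five-P balanced scorecard.
# The 7/10 threshold and the example decisions are assumptions.
FIVE_PS = ("people", "planet", "purpose", "peace", "prosperity")
TICK_THRESHOLD = 7  # assumed: a dimension scoring 7+ out of 10 counts as ticked


def review(decision: str, scores: dict[str, int]) -> None:
    """Make a decision's trade-offs explicit: no decision scores 10/10
    on all five Ps, but ticking only one or two boxes is a warning sign."""
    ticked = [p for p in FIVE_PS if scores.get(p, 0) >= TICK_THRESHOLD]
    print(f"{decision}: ticks {len(ticked)}/5 boxes ({', '.join(ticked) or 'none'})")
    if len(ticked) <= 2:
        print("  Warning: probably not such a hot idea.")


review("Replace support team with chatbots",
       {"people": 2, "planet": 5, "purpose": 3, "peace": 5, "prosperity": 10})
review("Augment engineers with simulation tools",
       {"people": 9, "planet": 7, "purpose": 8, "peace": 6, "prosperity": 7})
```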

Trust Isn't Digital

In a world where AI can generate increasingly convincing content, trust becomes the critical differentiator. "In business, especially in tech, it's all about trust," Leonhard emphasizes. "You don't have trust, you evaporate."
But trust operates differently with machines than with humans. "Trust isn't digital," he notes. "You can't trust machines in the same way that you would trust humans." This has profound implications for customer relationships, partnerships, and brand value, all of which depend on human-to-human trust that AI can support but never replace.
Leonhard points to Microsoft's recent embrace of "humanist AI" as a promising direction: technology designed to enhance human capability rather than substitute for human presence. Companies like Salesforce have built entire strategies around this principle.

Possible Versus Desirable

For industrial technology leaders, Leonhard frames the core question simply: "What is possible and what do I want to do? Those are two different things."
Is it possible to upload a human brain to the internet? Possibly, within our lifetimes. Is it desirable? Leonhard doubts it. Is it possible to replace most human workers in manufacturing within a decade? Perhaps. Is it desirable? The answer requires thinking beyond technical capability to broader consequences.
"Every company that's now in the business of technology and industrial technology as well is facing the ethical question: How good will this be? Can we control it? And does it have too many side effects?"
He points to IBM's decision not to sell facial recognition technology to certain governments as an example of a company drawing ethical lines around technical capability. These aren't purely moral considerations. They're strategic ones that affect long-term trust, brand value, and social license to operate.

Keep the Human in the Loop

Leonhard draws a sharp distinction between AI as intelligent assistance (improving processes, augmenting human capability) and AI as autonomous agent. For the foreseeable future, he argues, the safest approach is clear: "Keep the human in the loop. That is the safest way to proceed for the time being."
The reason isn't just ethical caution. We don't yet have reliable ways to ensure AI systems remain aligned with human values as they become more capable. "Right now we can't trust the AI to be aligned," Leonhard notes. "There's a super alignment issue that we're facing as to what the AI will ultimately do."
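In code, the pattern reduces to a simple gate between proposal and execution. The function names and the console-prompt approval in this sketch are hypothetical stand-ins for whatever review mechanism an organization actually uses; the structure itself, where the AI proposes and a person disposes, is the point.

```python
# A minimal sketch of a human-in-the-loop gate: the AI proposes,
# a person approves, and only then does anything execute.
from typing import Callable


def human_in_the_loop(propose: Callable[[], str],
                      execute: Callable[[str], None]) -> None:
    """Insert an explicit human approval step between an AI system's
    proposed action and its execution."""
    action = propose()  # the AI drafts an action; it does not act on its own
    answer = input(f"AI proposes: {action!r}. Approve? [y/N] ")
    if answer.strip().lower() == "y":
        execute(action)
    else:
        print("Rejected; nothing executed.")


# Example with stand-ins for a real model and actuator:
human_in_the_loop(
    propose=lambda: "reduce line 3 conveyor speed by 10%",
    execute=lambda a: print(f"Executing: {a}"),
)
```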
He draws a sobering parallel to social media: "If we do the same in AI, we won't be here in 2050. Because the AI will be the ones running the regulation." The lesson from social media's unchecked growth, where frameworks for acceptable behavior were never established, should inform how we approach AI governance today.

The Good Future

Leonhard's overarching concept for navigating these choices is what he calls "the good future," a deceptively simple framework that cuts through political and ideological divisions.
"A good future is the opposite of a bad future, kept very basic. Here I have the right for self-expression. I can have a family, I can make a living, I can express myself, I can have healthcare, energy, basic stuff."
The question then becomes: does this technology, this decision, this strategy move us toward or away from that good future? Not just for shareholders, but for the broader ecosystem of stakeholders.
Leonhard is optimistic about the raw materials available to us. "Science and technology, not the problem. Money, not really the problem." He points to humanity's track record of collaboration in crisis (the Nuclear Non-Proliferation Treaty, the Montreal Protocol on ozone) as evidence we can solve collective action problems when the stakes are clear.
The challenge is acting before crisis forces our hand. "It's interesting that humans always take a long time to collaborate," he observes, "but when it's really getting ugly, then we come together and just solve it."

The Bottom Line

Leonhard's parting insight captures his philosophy in six words: "Embrace technology, but don't become technology."
The temptation to become technology is powerful. Rather than speaking directly to team members, we have ChatGPT craft emails and AI avatars deliver video messages. We can do all these things, but should we?
"When we embrace technology, it means we're doing it for a reason. And the reason is to be better humans, not to replace ourselves."
For industrial leaders navigating AI's rapid evolution, this framing offers both strategic clarity and practical guidance. The goal is to implement AI in ways that make humans more capable, businesses more valuable, and futures more worth inhabiting.
Too much of a good thing, after all, can become toxic. The best builders know that restraint is part of the design.

• • •
Gerd Leonhard is CEO of The Futures Agency and author of Technology vs. Humanity. His work on the intersection of humanity, technology, and transformation has reached millions through keynotes, publications, and his YouTube channel with nearly half a million subscribers. Learn more at futuristgerd.com.

