22 Comments
Robert Shaughnessy

I think this comment deserves thought: “The ‘natural’ trajectory of any powerful technology isn’t utopia; it’s whatever the most motivated incentives drive it toward.” It is the entropy of Silicon Valley.

Stan Loosmore

It seems like humans need purpose. Many tie their purpose to work, which AI threatens.

The Pope can offer another purpose. New age purpose will certainly be a fun topic of debate - maybe even worth an article :)

Reid Hoffman

Couldn’t agree more!

Leon Liao

I fully agree that the “natural trajectory” of powerful technology is shaped by the strongest incentive structures around it. Where AI goes will not be determined by abstract goodwill alone, but by who builds it, who funds it, who shapes demand, who writes the rules, and whose values ultimately get embedded into the product.

Every AI product design choice, every business model decision, and every regulatory debate should not only ask, “Does this improve efficiency?” It should also ask: Does this expand human relationships, judgment, dignity, and agency, or does it weaken them?

That is why I read Reid's technological optimism not as simple utopian optimism, but as a kind of incentive-structure optimism: the belief that if the right builders, investors, institutions, and publics shape the incentives around AI, then AI can still be directed toward human flourishing rather than human diminishment.

The essay is brief, however, so it does not go very deeply into the incentive conflict inside platform capitalism itself. If the most profitable AI companion products are the ones that are best at flattering users, creating dependency, and extending time-on-platform, then “human-centered design” is not something product philosophy alone can solve. It becomes a structural conflict among business models, regulatory frameworks, social organization, and public values.

So one topic I would love to see you explore next is the incentive conflict inside platform capitalism itself. If the most profitable AI products are those that maximize engagement, emotional dependency, personalization, and user retention, then “human-centered AI” is not only a design question. It becomes a business-model question. Can AI platforms be built to strengthen human relationships, judgment, and agency when the dominant commercial incentives often reward stickiness, intimacy simulation, and behavioral capture? That may be the next layer of the “human answer” your essay points toward.

Bill Janeway

This is very thoughtful, Reid. It aligns with the best book on AI that I have read: The Atomic Human, by Neil Lawrence, DeepMind Professor of Machine Learning at Cambridge.

Jake

Resonant Integrity: The Convergence of Biological and Machine Signal

The fundamental disconnect in the 2026 labor and tech landscape is a failure to differentiate between raw frequency and functional signal. We are witnessing a state of systemic resonant frequency—a condition where automated development cycles vibrate at a pace that exceeds their structural integrity. When these cycles lack a metabolic anchor, the result is destructive resonance: hallucinations that manifest as high-cost operational failures.

Understanding where biological signal and digital frequency converge is the first step toward mending this fracture. Just as music triggers harmonic resonance in the human body—manifesting as tangible thermal and autonomic shifts—our internal conscience operates as a biological AI. This is the metabolic anchor: the "Human-in-the-Loop" adjudication that stabilizes the erratic oscillation of agentic AI.

For the developer, value is no longer found in the generation of code, but in its forensic adjudication. You are the damper for the machine’s vibration. By cross-referencing autonomous logic against the physical and logistical constraints of the floor, you move from producing low-fidelity noise to establishing execution provenance. You are the mirror that identifies the ghost in the code before it becomes a liability.

To thrive in this environment, you must provide dated, verifiable receipts of logic that an automated system cannot simulate. We are layering human intelligence onto machine processing to build a performance architecture where accountability is the luxury asset. When systemic frequency is tethered to a verifiable plumb line, the hallucination vanishes.

Integrity requires reciprocity. By measuring the resonance of our systems against the biological signal of our conscience, we move from systemic psychosis to architectural stability.

I layer my knowledge, integrity and wisdom onto the archive (AI), and mend signal and frequency until the desired outcome is achieved. I use strategic humility to force AI compliance. No noise, no simulation, no distractions is the goal. That is what higher intelligence looks like.

The floor is set. Let’s stabilize the signal and work together to figure this out in time to make a difference. We can be preventative or reactive. Integrity requires reciprocity.

MindivesIn

The future of AI may not depend only on intelligence — but on whether technology helps humanity move toward greater awareness, balance, and wisdom.

Today’s platforms optimize attention, reaction, and endless engagement. Perhaps the next evolution is technology that encourages reflection, clarity, and conscious participation.

“Transforming screen time into mindful time.”

One thought can change the world.

Sam McRoberts

Agreed! If you have the time Reid, I’d love your thoughts on this: https://blog.thegrandredesign.com/p/defecting-into-abundance

Limin Zheng

Thanks for sharing this. I have a similar feeling that AI is shaping not only us, but also the way we connect with others.

The connection between people and AI is almost frictionless: we are understood quickly, validated easily, and often supported unconditionally. My concern is that we may start transferring this pattern — and these expectations — into the real world.

But something about human connection cannot be replaced. Real people have limits, emotions, egos, fatigue, and their own needs. Our original emotional needs are still there, but the rhythm of technology and the rhythm of human evolution are not moving at the same speed.

I also organized some of my thoughts on this before here:

https://substack.com/@liminzheng/note/c-255240547?utm_source=notes-share-action&r=x5sty

Fae Initiative

Both abundant energy (solar + batteries) and abundant intelligence (AI systems) in the next few decades would likely shake the foundations of human societies. The old attractor of scarcity that has defined most of human history will lessen, opening up new ways of being.

Could the Economics of Novelty be the next form of human coordination? https://faeinitiative.substack.com/p/economics-of-novelty

Scenarica

The name choice is sharper than most people will catch. Leo XIII wrote Rerum Novarum in 1891. By then the industrial revolution had already been running for decades. The factories had already displaced the craft workers. The slums already existed. The encyclical was reactive. Brilliant, but reactive. The diagnosis arrived after the damage.

This Leo picked the name before the displacement wave has fully hit. The entry-level white collar jobs are still there. The call centres are still staffed. The paralegals are still billing hours. He's naming the crisis while the patient can still be treated, not while writing the post-mortem.

That's the difference between a Pope who documents a revolution and one who might actually shape the terms of it. Whether the institution can move at the speed the moment requires is a separate question. But the positioning is deliberate and the timing is better than last time.

The "what is the human answer here" framing is also doing something subtle. It's not asking "should we build this." That debate is over. It's asking "who do we become while we build it." Completely different question with completely different policy implications.

Steven Kelder

Great article, thank you. I’m reading this today, and yesterday I gave a graduation address for graduate students of public health on a similar topic. Recently in Forbes, Brian Castrucci noted, “If public health waits for 100% certainty, you won’t be shaping those systems — you’ll be inheriting them.” Shaping AI through a lens of social justice is what matters - it means adding core public health values such as health equity to the use of AI. I said in my speech:

“AI is already assisting with work that public health has struggled to do quickly and at scale— identifying patterns in data, translating complex guidance into plain language, and tailoring programs and messages for different communities. None of that replaces your expertise. It extends it.

In a field that has always had to do more with less, that kind of efficiency matters. Yes, AI can process patterns at a scale no human can match. What it cannot do is sit across from a frightened parent, earn the trust of a skeptical community, or hold the line on evidence when it’s politically inconvenient. That’s your job. It always will be.”

The first step is to train public health students how to use AI to minimize hallucinations and to use it as a creative tool instead of banning its use in homework assignments. In my classes I provide an AI primer and expect its use. What I see is thoughtful reasoning and high quality, well written products. I’ll admit, it is difficult to train students to augment their skills rather than to use it as a crutch. So far I believe I’m succeeding.

A request to all: with the rapid advancement of AI capabilities, I need up-to-date resources for students. I would appreciate your suggestions.

Rajesh Achanta

I wrote about a similar Papal exchange last November, when Marc Andreessen flinched at the Pope's call for moral discernment. What struck me wasn't the position he took but the sequence: mockery, double-down, delete. The flinch was the diagnostic - not what anyone believed about AI, but what the nervous system did before belief arrived.

Your "design choice, not foregone conclusion" framing is very important but not widely recognized. When you built Pi to push users toward real-world connections rather than away from them, that was an architectural decision. It proves the Pope's provocation wasn't naive - it was answerable.

But here's where I'd push: your piece and most of the commentary around it frames this as a values question - who in Silicon Valley wins the internal religious war. I think the harder question is structural. The nuclear parallel is instructive. Domestic regulation (NRC-style: testing standards, liability, whistleblower protections) is the half everyone discusses. But the bomb required a global Non-Proliferation Treaty - multilateral architecture that outlasted the specific rivalry it was designed for and created a floor beneath the competition. The NPT didn't kill nuclear energy. It created conditions under which innovation could proceed without catastrophic risk.

What does verification infrastructure for AI actually look like? What can we monitor, what can't we, and what institutional form should the monitoring take? That's an essay I'd love to see you write. You're one of the few people in tech with both the Vatican relationships and the industry credibility to make the case that this isn't anti-innovation - it's the precondition for durable innovation.

https://rajeshachanta.substack.com/p/the-pope-said-something-boring

Kerry Morris

As a follow-up to your excellent point about incentives, I would love to explore the incentive structures driving the investors and leaders behind AI. What incentives are driving them today, and how might we introduce new incentives that bend technology in a more human-friendly direction?

Elizaveta

Is the Pope on Substack? Team Pope here :)