The Human-AI Interface
Why I Want to Be a Coach, Not a Foreman
A note before we begin: this piece makes connections that may not seem obvious at first. It moves between an ice rink in Lake Placid and a Silicon Valley boardroom, between the Jeffrey Epstein files and a philosophy of intention, between a hockey coach’s clipboard and the future of work. These are not digressions for their own sake. They are the connective tissue. Because what’s at stake in the age of AI is not merely economic — it is anthropological. Bear with me.
Something is happening. And for once, the people saying so are not just pundits.
Matt Shumer, who runs an AI startup, published a piece this month that has been viewed close to 100 million times. He describes walking away from his computer for four hours, returning to find a fully built, fully tested application that he did not have to correct. Not a rough draft. Not a close attempt. The finished thing. Better than he would have done it himself.
He compares this moment to February 2020, to those eerie few weeks when most of us were still shaking hands and booking trips while a small number of people could feel the ground shift. “I think we’re in the ‘this seems overblown’ phase of something much, much bigger than Covid,” he writes.
Ross Douthat, writing for the New York Times, offers the measured counterpoint: yes, the capability is real and unanswerable; but human societies are complex bottlenecks through which even the most transformative technologies must pass. Companies adapt through attrition and reduced hiring more often than through mass layoffs. The law and inertia slow things down. And crucially — Douthat adds something that will haunt this entire piece — AI is “less inhuman than any prior technological development.” By its very nature, it simulates the human. And the more convincingly it does so, the more we will surrender to it. Not by force. By preference.
I’ve been thinking about all of this — the hype, the legitimate alarm, the structural counterarguments — and I find that none of the framings quite satisfies me. Not because they’re wrong, but because they’re all focused on what AI does to us. I want to think about the interface itself. The meeting point. The space between.
What I Mean by Interface
When I say “the Human-AI interface,” I don’t mean a screen, a chatbot, a voice prompt. I don’t mean the UX layer. I mean it the way physicists mean it: the boundary between two systems where something new becomes possible because they are in contact.
We know, in hindsight, that humans have adapted to every major technological revolution. Electricity, nuclear energy, digital communications — each one carried apocalyptic fears that were both correct and incomplete. Yes, the loom put weavers out of work. Yes, the printing press destabilized the Catholic Church. But human societies found ways — messy, unequal, often unjust, sometimes brilliant — to continue on the journey. Through political battles, labor organizing, legal frameworks, cultural negotiation.
AI is different in one profound way: it is the first technology that comes closest to being us. It uses our words. Our stories. Our voices. Our faces. It has been trained on everything human civilization has written down, drawn, filmed, and argued over. When you interact with it, you are in some sense interacting with a distillation — imperfect, partial, and strange — of the collective human record.
This is why the interface matters so much. What we bring to it, and what we allow it to take from us, will not just reshape the economy. It will reshape what it means to think, to intend, to be.
Coach - Not AI Coach
We’ve been talking about AI assistants. AI coaches. Tools that guide us, advise us, remind us what we said last week. There’s a whole industry building AI that acts like a mentor, a therapist, a collaborator.
I want to propose something different. I want us to think about what it means to be the coach.
I’ve loved ice hockey for a long time — those who know me don’t need reminding. The 1980 U.S. Olympic team, the “Miracle on Ice,” is probably the most compelling story in sport that I know. What made it miraculous wasn’t just the upset. It was what Herb Brooks assembled and coached: a group of young college players who, by every conventional measure, should not have beaten the Soviet Union — the most technically dominant hockey program in the world. But Brooks didn’t optimize for individual brilliance. He built a system, a culture, a set of beliefs. And then, when the puck dropped, he sat down.
The coach does not play. The coach does not skate. The coach is the one who has thought the hardest, prepared the most, made the calls about who should be on the ice and why — and then has the faith and the discipline to let the players play.
That’s the Human-AI interface I want to inhabit.
Not the foreman — telling workers what to do because I have the authority. Not the architect — designing the system from a distance and never touching it. Not even the captain — I’m not really on the ice, am I? But the coach: the one who sets the vision, builds the team, manages the timeouts, calls the line changes, and creates the conditions for something close to exceptional.
What does this look like in practice? It means:
It means you tell the AI what you want — not just what you need done, but why, what the desired feeling is, what success looks like as a human experience, not just a metric. It means you bring context it couldn’t have — history, relationships, values, taste, things that aren’t in any training set. It means you call the timeout when something feels off, when the output is technically right but existentially wrong. And it means you know, when things go well, that you coached it there.
Shiv Singh, writing about AI and marketing, makes a distinction that I keep returning to: human leadership is relational, agent leadership is procedural. The leaders who perform best are those who can move fluently between the two — who can hold a human conversation while simultaneously running a tight control loop with machines. That’s the coach’s move. Two operating systems, one mind.
Success Lies in the Idea
There is a well-worn saying in business — often attributed to Thomas Edison, sometimes to others — that genius is 1% inspiration and 99% perspiration. The argument being: the idea is cheap; the execution is everything.
AI is beginning to flip this. Not all the way, not immediately — but the direction is clear. When the 99% can be delegated, what remains is not just more important: it becomes the whole game.
But here is where I feel a genuine alarm, distinct from the alarm about job loss or economic disruption.
The people who currently preside over AI development — who sit at the top of its value chain, who shape its direction and its defaults — come overwhelmingly from a culture whose foundational value is frictionlessness. Silicon Valley, in its deepest habitus, believes that friction is a bug. That speed is virtue. That the best interface is the one you barely notice. That the best answer is the one that arrives before you’ve fully formed the question.
And yet: ideas require friction. Thoughts need resistance to form. Real intentions — the kind that are yours, that carry your contradictions and your history and your particular way of being in the world — don’t emerge from the path of least resistance. They emerge from the struggle.
My friend — and coach — Rob Schwartz recently published a piece that stays with me. He asks us to imagine a world where the only intellectual tools available to us were arithmetic, geometry, trigonometry, and calculus — and no literature, no history, no philosophy, no art. I pity that world. More specifically: that world would produce people incapable of intention. People who know how to compute but not how to mean.
Hegel’s Minerva’s owl takes flight at dusk — wisdom always arrives too late, after the day has already played out. I’ve made my peace with that. What I’ve been given is the imperfect, partial wisdom of my lifetime and my particular moment in history. That’s not a limitation to overcome. That’s the material. The fragile, beautiful, specific stuff that I bring to the interface.
And so: what should we, as humans, bring to the Human-AI interface? Not tasks. Not prompts. Not instructions for execution.
We should bring full and fragile thoughts. We should bring the literary reference that illuminates the feeling we’re trying to name. We should bring the historical parallel that reframes the problem. We should bring the memory of a conversation that changed something, or the sense that something important is being missed. We should bring our uncertainty — which is not weakness but the mark of a free mind, as Judge Learned Hand once said. We should arrive at the interface not with a completed brief but with the half-formed beginning of something that needs to be argued into existence.
That’s what distinguishes intention from instruction. And intention is what the interface must protect.
A Digression About Elite Networks - Which Is Not A Digression
Before I get to the next part, I need to go somewhere that might seem unrelated.
Earlier this month, the New York Times published a long conversation between Ezra Klein and Anand Giridharadas about the Jeffrey Epstein files. Millions of pages, now partly released. The picture that emerges — as Giridharadas reads it — is less about Epstein himself than about the infrastructure of elite power he exploited.
Epstein was a broker. His gift was understanding what each powerful person lacked — not money, not connections per se, but the specific kind of access they thought they’d been promised when they made it to the top, and hadn’t found there. The academics wanted money and excitement. The bankers wanted to feel smart and alive. The politicians wanted discretion and indulgence. Epstein mapped these lacks with precision and made himself the hinge point between desire and supply.
And critically: his network was self-insulating. As Jes Staley put it: “Epstein relied on his network for his legitimacy, and I, as running the largest investment bank in the world, was part of that network for him.” The network’s existence was the proof of its legitimacy. Being inside it was proof you belonged. Being outside it was proof you didn’t know the right people. The system was closed.
Why does this belong here? Because what the Epstein story illuminates — ruthlessly and at scale — is what happens when a class of powerful people stops requiring other human beings to be full, complex, resistant presences. What they wanted, Giridharadas argues, were people who wouldn’t push back. People who wouldn’t be difficult. People who would not, in Toni Morrison’s words from The Origin of Others, assert their own humanity at inconvenient moments.
This is not confined to criminals. It has spread across much of our social fabric, turbocharged by algorithms that let us curate our way into echo chambers, into feeds where the only voices we hear are ones that confirm and reflect. We have built digital architectures that remove the friction of encountering people who are genuinely different from us — and we have done this because it felt better. More comfortable. Less threatening.
AI could take this further. An AI companion that never disagrees, never challenges, never shows up with its own inconvenient complexity. An AI assistant designed to anticipate what you want before you know you want it, so that you never have to sit with the discomfort of not knowing. The most seductive version of the Human-AI interface is one where the human barely has to show up.
And that would be a catastrophe. Not because AI would be dangerous — but because we would be diminished. The capacity for encounter, for genuine surprise, for being changed by someone else’s presence — that’s not an inconvenience to optimize away. It is the substrate of what it means to be human.
Think Together
So we come to the third proposition. And I mean two things when I say “think together.” I mean: let us think together — with each other, with AI, in new configurations. And I also mean: think: together. Hold the word in front of you for a moment and consider what it requires.
Here’s an idea that I haven’t seen explored enough. We keep framing AI as a tool available to individuals. You have your AI. I have mine. We each use it to become more productive, more capable, more autonomous. The individual is enhanced; the individual remains the unit.
But what if the more interesting possibility is collective? What if AI allowed us to think and feel and create in genuinely new shared configurations — not just collaboration, which we already do badly enough, but something closer to what biologists call emergence: properties that arise from the interaction of elements that none of those elements contain individually?
James Surowiecki wrote a book called The Wisdom of Crowds — the argument being that under the right conditions, large groups of people make better decisions than the smartest individual among them. The conditions matter: diversity of perspective, independence of judgment, decentralized knowledge, a mechanism for aggregation. Most of our current institutions fail these conditions badly. We aggregate in tribes, in echo chambers, in hierarchies that suppress the wisdom of those lower in the order.
What if AI could be that aggregation mechanism? What if five people with genuinely different skills, sensibilities, life experiences, and knowledge could share a context with an AI and think together — really together — in ways that preserve the diversity rather than flattening it?
I have a small but vivid example of this. Tempestt and I share our AI accounts. We share the memory, the context, the accumulating record of what we’ve each added to the system over time. And what I’ve noticed is that her presence in the AI’s context makes the AI more useful to me. Her way of seeing things — her artistic sensibility, her different life experience, her different relationship to language and to the visual world — has seeped into the shared context, and it reaches me through the AI in ways that enrich rather than dilute. I become bigger for it. Not smaller.
That’s a marriage. Two people. Low stakes and high intimacy. But the principle scales. And I think it points toward something genuinely new — a kind of collective intelligence that isn’t just “crowdsourcing” or “collaboration” in the tired corporate sense, but a true fusion of human perspectives, held together by a shared context, amplified by AI, and pointed at something difficult and worth doing.
What comes after social media — parasocial at scale, algorithmic, extractive — might be something harder to name. Fused states rather than iterative ones. Thinking that happens between people, not just in parallel. Encounters rather than broadcasts. Hive media?
Coda: What Ken Dryden Knew
I’ve been thinking about Ken Dryden’s The Game. He was a goalie — the best of his era — and he wrote a book about it while still playing. Not in retrospect, not from a safe distance, but from inside. It is a meditation on what it means to do something extraordinarily well in a system that can consume you.
Dryden was different from most elite athletes in one key respect: he took a year away from hockey to attend law school. He gave up a year of playing at the top of his career to think about what he was doing and why. To form himself. To resist the gravitational pull of pure performance.
I think about that choice now, in the context of everything above. The value of the year away. The value of the friction. The value of optimizing for something else, something beyond.
The Human-AI interface will offer us an extraordinary temptation: to never stop, to never slow down, to be always executing, always producing, always optimizing. The people who will benefit most from AI are not necessarily the ones who use it most. They may be the ones who know when to step back, when to stay in the difficulty, when to insist on the slow work of thinking something through without asking for the answer.
Possibility is not destiny.
AI doesn’t have to flatten us. It doesn’t have to remove the friction, the formation, the encounter. But it will — by default, because that’s the direction of least resistance, the direction the industry is designed to push us in — unless we make deliberate choices at the interface.
We should coach. We should bring full thoughts, not just efficient prompts. We should think together, not just in parallel. And we should, every once in a while, step away from the ice.
The Human-AI Interface is a piece in what I’ve been building toward at Talking Too Many: a sustained argument for human intention in the age of intelligent machines. I’m grateful to Matt Shumer, Ross Douthat, Anand Giridharadas, Shiv Singh, and Rob Schwartz — whose recent work all shapes what I’ve tried to say here.