AI Transformation — A Lens
The Positive Case: Why Shared Agency Is the Competitive Advantage, Not the Concession
As so often, I find myself standing atop fault lines, looking for converging principles. Two documents landed within days of each other this past week — one from Jamie Dimon, who runs the world’s largest bank, one from OpenAI, which is building what it describes, with remarkable candor, as a transition toward superintelligence — and both perform the same intellectual maneuver: naming the systemic risks of forces they are simultaneously accelerating, from a position of having already captured most of the upside.
Dimon’s letter is the annual shareholders’ address of JPMorganChase, which posted record revenue of $185.6 billion in 2025 and has done so for eight consecutive years. These letters are normally a genre of confident reassurance. This one is different. Dimon names — with unusual directness — that AI will eliminate jobs faster than society can retrain workers, that the American Dream is slipping out of reach for too many families, and that the organizational model capable of surviving this turbulence looks like small, empowered, Navy SEALs-style teams operating with genuine autonomy inside large institutional platforms — what he calls a “neural network” of people whose connections create value no competitor can easily replicate.
OpenAI’s document is something different in form but similar in register: a 13-page policy paper proposing a Public Wealth Fund, a Right to AI modeled on universal literacy campaigns, and worker voice mechanisms with real teeth. It is the rare corporate document that names its own firm as a potential source of harmful wealth concentration. And it defines the current moment with unusual precision: we are not approaching superintelligence — we are already in the transition toward it.
What makes the convergence of these two documents remarkable is the specific thing both are circling without landing on: that the power to navigate what’s coming lies not in better individual tools or better individual strategies, but in the quality of human communities — in the team that owns its mission, the labor group that bargains collectively, the guild that pools its knowledge. Both documents invoke this truth. Neither fully draws its implications.
There is something worth saying that neither document quite says — a positive case that is a little naive, but not entirely wrong, and maybe worth making out loud.
The Invisible Through-Line
Read carefully, both documents orbit a common problem without fully naming it. What they are circling is collective agency — the conditions under which groups of people, not just individuals and not just institutions, can exercise real power over the systems that shape their lives.
The creator economy makes this concrete. Only about 12% of creators make more than $50,000 a year — likely less than a full-time journalist earns. Only 1% to 2% make more than $100,000. Meanwhile, the market is projected to approach half a trillion dollars by 2027. Platforms have been extraordinarily effective at converting collective human creativity into individual economic precarity: isolating creators from each other, making them compete for attention in zero-sum feeds, structuring the creator-platform relationship as one of radical asymmetry.
This is not a design flaw. It is the intended architecture. Platforms maximize value extraction by atomizing their user base — turning what could be collective labor into individual competition, what could be organized negotiation into parasocial performance. Guy Debord’s society of the spectacle doesn’t just represent social life; it reorganizes it. What looks like community is the image of community.
And the legal system is beginning to catch up. A California jury in March 2026 found that Meta and Google were to blame for the depression and anxiety of a woman who compulsively used social media as a small child, awarding her $6 million in what many experts are calling Big Tech’s “big tobacco moment.” The arc is now visible. The question is how long it takes and how much damage accumulates on the way.
AI arrives into this landscape ready to amplify whichever model it is deployed within. The question is not whether AI is pro-collective or anti-collective by nature — it is neither. The question is whose governance framework, whose data architecture, and whose institutional interests shape how it is deployed.
The Three Models of Power That AI Will Encounter
Before asking what AI will do to collective human agency, it is worth being precise about what kinds of collective agency actually exist.
The first is the institutional collective — the corporation, the bank, the government agency. This is Dimon’s JPMorgan: 320,000 people organized around shared principles, proprietary data, institutional memory, and governance structures. AI deployed inside an institution that already has governance, data, and trust architecture functions as an amplifier of existing power. The institution gets faster, smarter, more efficient. Its competitive advantage over smaller actors widens.
The second is the organic collective — the labor group, the community, the creative circle. Dimon invokes this admiringly but deploys it instrumentally. His small teams are empowered within the institution’s framework, in service of the institution’s goals. A Navy SEALs team has real autonomy inside a defined mission, but it does not set the mission, does not own the outcomes, and does not persist independently of the institution that deploys it. The organic collective that actually exercises independent power — the labor union, the cooperative, the mutual aid network, the guild — is structurally different. And it is this kind that the current technological architecture has been most effectively dissolving.
The third is the parasocial pseudo-collective — the creator’s audience, the brand’s “community,” the platform’s engaged users. Followers do not constitute a collective in any politically meaningful sense. They cannot negotiate collectively. They have no shared ownership of the platform or its revenue. What looks like community is — in Debord’s terms — the image of community.
The Fault Lines: Where AI Meets the Three Collectives
Against institutional collectives, AI is mostly an amplifier. The soft institutional knowledge that I’ve argued makes organizations genuinely defensible — the accumulated, living, human-built context that makes a banker’s understanding of a neighborhood’s cash flow anxiety different from any competitor’s — becomes exponentially more powerful when an agentic AI framework can summon, synthesize, and act on it without a prompt. The competitive moat, in the AI age, is not the model. It is the context the model operates within.
Against organic collectives, AI cuts both ways. On one side, AI genuinely can reduce the overhead that fragments collective action — the coordination costs, the information asymmetries, the time required to synthesize complex situations into collective decisions. A labor organizer with good AI tools can do in hours what used to require weeks. On the other side, AI dramatically reduces the cost of surveilling, modeling, and preempting collective action. The same tools that help a union analyze a contract can help an employer model the probability of a strike. The asymmetry is not in the capability. It is in the data — and institutions that have been collecting behavioral data for decades have a structural advantage that organic collectives cannot easily replicate.
Against parasocial pseudo-collectives, AI is an accelerant. AI lowers the cost of content production, increases supply, depresses per-unit creator income, and pushes more creators toward the attention capture economy. The platforms, meanwhile, use AI to optimize feed algorithms with greater precision. What doesn’t change, unless deliberately restructured, is the underlying architecture: isolated individuals competing for platform-mediated attention, with no collective bargaining power, no shared ownership of the data they generate, and no institutional voice in the governance of the systems they depend on.
One platform is worth examining more carefully here, because it points toward what a different architecture might look like — and because I know it from the inside. Substack’s model is structurally different from the major social platforms. Its value proposition is direct subscription: the economic relationship runs between writer and reader, not between writer and algorithm. It works better the more people share genuinely, and it does not structurally reward spectacle over substance — or at least it is designed not to. The result is something closer to a guild of independent publishers than a feed of competing performers. That said, Substack is not without its tensions: algorithmic recommendation is creeping in, bestseller dynamics concentrate attention and income at the top, and the platform’s governance remains entirely in private hands. It is not a utopian model. But it suggests that platform architectures can be designed to create value through collective participation rather than individual competition — and that this is a design choice, not an inevitability.
The Organizational Case: Why Shared Agency Outperforms Extracted Compliance
Start at the level Dimon is most comfortable with: the firm.
A team that is genuinely empowered — that owns its mission, that has the authority to decide and the accountability to bear the consequences — produces something qualitatively different from a team that executes instructions well. It produces institutional knowledge that cannot be replicated by examining the outputs. The how lives in the relationship, in the accumulated judgment, in the shared context between people who work together over time.
Now introduce AI. The naive deployment model treats AI as a node replacement: automate the task, reduce the headcount, capture the efficiency saving. This is the model that produces the labor displacement Dimon hedges about and OpenAI warns against. It is also, structurally, the model that destroys the neural network it is deployed inside. When you automate the junior analyst’s first-draft work, you don’t just eliminate a task — you eliminate the apprenticeship pathway through which institutional knowledge transfers. The senior banker’s judgment was built on ten years of doing the work the AI now does. The next generation will not have done that work. The institutional knowledge doesn’t transfer automatically; it evaporates.
The alternative model treats AI as a context amplifier — as the technology that makes the connections in the neural network faster, richer, and more durable. The junior analyst doesn’t disappear; their work changes. Instead of producing first drafts, they interpret AI outputs, challenge AI assumptions, and develop the judgment to know when the model is wrong. The institutional knowledge accelerates, because the AI carries accumulated context forward, and the humans do the higher-order work of building and contesting it.
This is what I have been calling the Human-AI Interface: the moral and operational boundary where human intention meets AI execution. The distinction matters enormously here. The firm that positions humans as coaches — shaping, directing, contesting — rather than as foremen who issue prompts and accept outputs retains the institutional knowledge that the automating firm destroys. In a competitive environment where AI tools are increasingly commoditized, the differentiator is not the tool. It is the accumulated, living, human-built context that surrounds and gives the model meaning. And that context is only produced by organizations that treat the people inside them as genuine agents.
The Platform Case: Why Collective Value Creation Outperforms Extraction at Scale
Platforms built on individual creator competition produce content optimized for attention capture, not for quality, depth, or durability. The algorithm rewards what spreads, and what spreads is not the same as what is valuable. Over time, this produces a content environment that degrades — more volume, less signal, increasing user fatigue, and a slow erosion of the trust that makes the platform worth being on.
AI accelerates this self-destruction. If AI can produce adequate content in most categories at near-zero cost, then content itself is no longer the asset. The asset is the trust and collective intelligence that surrounds and validates content — the community context that makes it meaningful rather than merely present.
The tobacco arc is instructive again. The platforms have been doing something structurally similar to the cigarette companies: they have known their architectures were causing harm, and they have continued to optimize for engagement over wellbeing. The legal exposure is now real. The bellwether trials scheduled for June and August 2026 will tell us how far the liability cascade extends.
The platform that figures out how to share enough of the upside to sustain genuine community investment will retain the one asset AI cannot replicate: genuine human community, built on trust. The platform that continues to extract will preside over a pool of AI-generated content with no reason for anyone to prefer it.
The Governance Case: Transparency as Competitive Moat
The distinction worth drawing is between performative transparency — publishing reports, releasing policy papers — and structural transparency: actually sharing aggregate, anonymized data on platform economics, AI deployment outcomes, and the distribution of AI-generated productivity gains. The kind of transparency that makes invisible things visible, not as a gift, but as a condition of operating in a society that bears the costs of the technology’s disruptions.
The competitive case for structural transparency is counterintuitive but real. JPMorgan’s position as the world’s most trusted financial institution is built not on secrecy but on the credibility that comes from decades of surviving crises that destroyed competitors. The transparency that sustains that trust — reporting requirements, stress tests, regulatory examinations — is not a constraint on JPMorgan’s power. It is part of the infrastructure of that power.
The same logic applies to AI platforms. Organizations that understand this earliest will have a structural advantage — not because they are more virtuous, but because trust is a genuine competitive moat in an environment where governance risk is increasingly the dominant variable.
Toward Collective Intelligence in the AI Age
The history of human intelligence is not the history of brilliant individuals. It is the history of brilliant collectives — communities of practice that accumulated shared knowledge faster than any individual could, built the trust architectures needed to maintain it, and produced the institutions capable of transmitting it across generations. The scientific community. The legal tradition. The craft guild. The jazz ensemble. The neighborhood that collectively knows which landlords are trustworthy.
What has limited collective intelligence historically is cost — coordination cost, communication cost, the trust cost of building the relationships that make genuine collaboration possible.
AI changes this cost structure fundamentally. The coordination friction that has constrained collective intelligence — the overhead of bringing distributed knowledge into productive contact — is precisely what AI is most effective at reducing. An AI system that carries the accumulated context of a community of practice, that can surface relevant prior knowledge instantly, that can translate between the different vocabularies of different contributors, is not a replacement for collective human intelligence. It is an amplifier of it, in the same way that writing was an amplifier, printing a further amplifier, and the internet a further amplifier still. Each of those prior amplifications produced both concentrated power and distributed capacity. What determined which force predominated was not the technology but the governance architecture built around it.
The Honest Limits
I called this positive case a little naive at the start, and I should say clearly where it runs into real walls.
It works best where the time horizon is long enough for collective value creation to outperform extraction. In the short run, extraction often wins. It works best where the technology is genuinely commoditized — where the differentiator is the context and community built around the tool, not the tool itself. In the current moment, the most powerful AI systems are not quite commoditized, which is precisely why OpenAI’s Right to AI proposals matter. And it requires that the collectives capable of building it actually exist, or can be built — which is not a technology problem. It is a political economy problem.
The historical precedents are not fully comforting. The Progressive Era and the New Deal were not produced by the willingness of industrial companies to govern themselves well. They were produced by decades of organized pressure, by people who had very little and lost more, who built institutions capable of holding power accountable. The tobacco industry didn’t reform itself. It took fifty years, thousands of lawsuits, and a cultural shift that made smoking not just dangerous but embarrassing.
Something analogous will happen with AI. The California verdict is the first Surgeon General’s report. The arc is visible. The positive case is real. The path to it is not automatic. But the window for making the right choices is open — in organizations, in platforms, in governance frameworks — and the people best positioned to argue for those choices are the ones who understand both sides of the ledger.
The question you end up with is the oldest one in democratic societies. Not whether the technology will transform everything — it will. But who decides what it transforms, in whose interest, and with what accountability to the people who live inside the transformation.
That question doesn’t get answered in a policy paper or a shareholder letter. It gets answered, slowly and incompletely, by the communities that organize around it.


