What I Didn’t See at CES, or Why I’m Betting on Human Intention
I’m thinking this somewhere above the American West, on a plane taking me back to New York after a few days in Las Vegas at CES. The cabin is dim, the screens around me glow softly, and my thoughts are doing what they often do when movement replaces notification: they slow down, they wander, they connect.
What I didn’t see at CES 2026 is what our future should be like.
CES has become a theatre of inevitability. Screens everywhere. AI everywhere. Friction nowhere. I’m not disappointed with the technology on display (the ingenuity is real, the engineering often remarkable) but with the anthropology behind it. The implicit model of the human being it assumes, and quietly enforces.
CES excels at showing us what can be built. It is far less interested in asking what ought to be built, for whom, and at what cognitive, social, and cultural cost.
Intention Is Not a Prompting Problem
Much of today’s AI discourse frames intention as a prompting problem: How do I ask better questions? How do I phrase my request more clearly so the system gives me the right answer?
That framing is profoundly insufficient.
Intention is not a line of text typed into a box. It is a human practice. It takes time, solitude, boredom, and resistance. It requires the ability to pause, to hesitate, to hold competing possibilities in mind, to decide not to act, or at least not yet.
For years, our digital environments have been engineered to eliminate precisely those conditions. Latency has been treated as a bug. Friction as a flaw. Hesitation as lost engagement. The tap, the swipe, the scroll, the infinite feed: all designed to collapse the distance between impulse and execution.
Now AI threatens to compress that distance even further.
If the partnership between humans and AI is to work to the benefit of the former, we must reappropriate a realm that is fundamentally ours: human intention. Not as something to be optimized away, but as something to be protected, cultivated, and amplified.
AI should not replace intention.
It should extend it, as it did with this post, for instance.
Cognitive Technologies with Built-In Friction
The printed page slows us down.
Handwriting slows us down.
A walk without headphones slows us down.
Standing in the shower, doing nothing in particular, slows us down.
These are not nostalgic rituals. They are cognitive technologies with built-in friction. They create the temporal and mental space our brains need to connect distant dots, surface half-formed thoughts, and produce ideas that are not merely reactive.
This is where the work of Matthew Crawford resonates so strongly. In Shop Class as Soulcraft, Crawford reminds us that meaning emerges through engagement with resistance — with materials, with time, with limits. Remove resistance, and you remove not just effort, but judgment.
Similarly, Jenny Odell, in How to Do Nothing, reframes withdrawal from the attention economy not as escapism, but as a political and creative act: a way to reclaim attention as a shared, situated, intentional resource.
Long before them, Marshall McLuhan warned us that media are not neutral tools. They are extensions of our senses, and in extending one faculty, they inevitably reshape the balance of all the others. The medium is not just the message; it is the environment in which meaning becomes possible — or impossible.
What struck me at CES is how little space was left for that environment.
The Human-Machine Design Lens: Rethinking the Desk of Tomorrow
Perhaps the desk of tomorrow should look very different from the one we’ve normalized over the past twenty years.
Maybe screens become adjacent, not central.
Maybe they retreat from the foreground into the periphery.
Maybe AI becomes ambient rather than imperial.
On that desk, paper and pen return to the center — possibly AI-enabled themselves, quietly listening, remembering, responding when invited rather than demanding attention. The rumors of a device being designed by Jony Ive for OpenAI are interesting not because of who is involved, but because of what they suggest: a post-screen imagination of human–machine interaction.
Less spectacle.
More presence.
The Organizational Design Lens: The Learning Curve We Are About to Break
There is another absence that haunted me as I walked the aisles of CES, and it has less to do with hardware than with people.
We keep hearing that junior white-collar positions are the most exposed to AI disruption. That there is a looming skills gap they will not be able to bridge. This is usually framed as an employment issue.
It is, more fundamentally, a formation issue: a question of how people are trained into expertise.
For decades, expertise in knowledge professions was acquired by climbing a ladder: ten to twenty years of exposure to tasks of increasing complexity, partial responsibility, repetition, error, correction, explanation. People learned not only what the outcome should be, but how one gets there.
Expertise is not just outcomes; it’s exposure to process.
AI shortcuts that ladder. When systems jump straight to the answer, junior professionals lose access to reasoning, deliberation, and the tacit knowledge that lives between steps. They see results without paths, decisions without struggle.
And without paths, intention withers.
Designing AI for Differentiated Intention
This leads to a design question we are barely beginning to ask:
Should AI workflows be built with differentiated feedback loops for different users?
A junior analyst should not interact with AI in the same way as a senior expert. The former needs:
explanations, not just outputs;
options, not just recommendations;
prompts that encourage reflection, not compliance.
In some cases, AI should deliberately slow them down. It should show its work. It should invite disagreement. It should make room for learning rather than racing toward delivery.
This is not a call for weaker AI. It is a call for more pedagogical AI — systems designed not only to perform tasks, but to transmit ways of thinking, to scaffold judgment, to nurture intention.
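To make the idea a little more concrete, here is a minimal sketch of what a differentiated feedback loop might look like in practice. Everything in it is hypothetical and illustrative (the ResponsePolicy structure, the seniority levels, the prompt wording); it is one possible interpretation of the idea, not a reference implementation.

```python
# Hypothetical sketch: the same assistant, configured differently for
# junior and senior users. All names and prompt text are illustrative.
from dataclasses import dataclass


@dataclass
class ResponsePolicy:
    show_reasoning: bool            # explain the path, not just the outcome
    offer_alternatives: int         # present options, not a single recommendation
    ask_reflection_question: bool   # invite the user to disagree or reconsider
    minimum_delay_seconds: int      # deliberate friction before the answer appears


POLICIES = {
    "junior": ResponsePolicy(show_reasoning=True, offer_alternatives=3,
                             ask_reflection_question=True, minimum_delay_seconds=30),
    "senior": ResponsePolicy(show_reasoning=False, offer_alternatives=1,
                             ask_reflection_question=False, minimum_delay_seconds=0),
}


def build_system_prompt(role: str) -> str:
    """Turn a policy into instructions for the underlying model."""
    p = POLICIES[role]
    lines = ["You are an analytical assistant."]
    if p.show_reasoning:
        lines.append("Show your intermediate reasoning step by step before any conclusion.")
    if p.offer_alternatives > 1:
        lines.append(f"Present {p.offer_alternatives} distinct options with their trade-offs; do not pick one.")
    else:
        lines.append("Give a single, concise recommendation.")
    if p.ask_reflection_question:
        lines.append("End by asking the user one question that challenges your own framing.")
    return " ".join(lines)


if __name__ == "__main__":
    print("JUNIOR:", build_system_prompt("junior"))
    print("SENIOR:", build_system_prompt("senior"))
```

The detail worth noticing is that friction is a parameter, not an accident: the delay, the options, the reflection question are all things a team could choose to turn on for those still building judgment, and off for those who already have it.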
What CES Didn’t Show
CES is doing what it is meant to do: showcasing possibility. But possibility should not be destiny.
As we rush toward ever more capable systems, we urgently need other spaces — cultural, organizational, intellectual — that ask different questions:
What kinds of humans are we shaping through our tools?
What kinds of attention are we rewarding?
What kinds of intention are we quietly designing out?
The future should not be decided by intelligence alone. It will be decided by what we choose to do with it, and by what we deliberately choose not to automate.
I left CES convinced of one thing:
If we do not design explicitly for human intention, we will lose it by default.
And that would be a future rich in answers, and poor in meaning. Maybe there would be more meaning to be found on this desk…

I guess the real question is whether AI will appeal to, and be designed for, our best selves or our worst.