Far From Reassuring
AI companies need to focus on limits, not limitless possibilities, to get the public on board
by Dan Cohen

Sometimes companies struggle to define the business they’re in.
Take FedEx. It was based on a concept that famously earned its founder, Fred Smith, a C at Yale. That idea, to ship letters and boxes via airplanes rather than trucks, allowed the company born in 1971 as Federal Express to provide a unique service: guaranteed overnight delivery across the United States. Smith may have gotten a bad grade on his term paper, but he was a genius at the technology and logistics of his new business, and for most of its first decade, the company touted delivery through the air in its marketing, featuring lots of soaring planes. Or as the motto in its annual reports artlessly declared: “Federal Express Corporation is America’s airline for small packages.”
Customers, however, had a different view of the business Federal Express was in. In surveys in the late 1970s, FedEx discovered that what mattered most to the public was not its nifty aerial delivery, but its ability to provide reassurance. The documents and objects customers were entrusting to FedEx were usually of utmost importance — legal contracts, résumés, urgent parts for critical machines — and the senders most of all wanted a secure feeling when they handed off their precious cargo to the FedEx employee on the other side of the counter, not more pictures of the airplanes FedEx used, which festooned the walls of its stores.
Smith obsessively thought of the business from the supply side, and the early marketing followed his lead; his customers, on the other hand, naturally thought about it from the demand side. They didn’t care how many planes Fred had, or how cool his Memphis sorting center was. Lurking right there in plain sight was the crucial word before “overnight delivery”: guaranteed. That single word turned out to be more salient to customers than all of the airplanes and operational engineering. Soon after these surveys revealed the primary sentiments of FedEx customers, the company changed its advertising from airplanes to the legendary “When it absolutely, positively has to be there overnight.” It foregrounded the warm reassurance it provided, rather than its cold, efficient mechanics.
For years now, the big AI companies, like Fred Smith in the 1970s, have been so in awe of their technology that they have paid little attention to their users’ feelings. Reassured we are not. First, AI seemed like the greatest machine ever created for cheating, then an unreliable narrator and obsequious bestie, then the coming destroyer of jobs, then opaque agents that can possibly do…everything? Our feelings were sheepish (cheating), uncertain (hallucinations), fearful (might lose my job), very fearful (we’re all going to lose our jobs).
Surveys show the public simultaneously adopting AI and loathing it — a strange and disquieting combination for a promising new technology. The most recent Gallup poll found that Americans are impressed with what AI can do, but 75% think it will reduce the number of jobs, and only 1 in 10 — 10%! — think AI does more good than harm.
AI companies are belatedly realizing this enormous disconnect, and are trying to find an alignment with positive public feelings — or to shift those feelings in a positive direction. In the latest advertising from companies like Anthropic and OpenAI, they rehash the old Steve Jobs “bicycle for the mind” idea — AI as an aid and accelerant for human pursuits. Thus the rise of the prefix “co” before everything AI: Coworker, Copilot, etc. AI works with you. There is, to be sure, something real here; in this newsletter I’ve tried to show positive uses for AI in the last few years, where the human (or an institution, like a library) directs the technology in a truly useful way.
But the “co-” branding rings hollow when everyone knows the companies are racing toward autonomy, a next stage where the human coworker simply isn’t needed. To get the public on board, AI companies will have to emphasize limits, not limitless possibilities. It is perhaps unusual for a company to emphasize what its products can’t do, but a little humility and a sense of parameters are essential. More attention needs to be paid to how regular people — not the early adopters — will remain in control, rather than becoming coworkers with shrinking roles.
In short, we need more of a focus on human sentiments, not sensational technology. As Simon Willison, a prominent web developer who uses AI extensively, has recently written, “Almost anyone can prompt an LLM to generate a thousand-line patch and submit it for code review. That’s no longer valuable.” What counts instead is our human-to-human ability to explain and vouch for the code. “The human provides the accountability,” Willison emphasizes. “A computer can never be held accountable. That’s your job as the human in the loop.” Customers absolutely, positively want human beings in charge of, and answerable for, the end product. It’s not just worries about hallucinations, but a more abstract and powerful concern: we need someone to call, to praise or berate, to ask further questions of — to provide reassurance.