AI and The Death of the Reader
When Lost ended, it wasn’t just the story that unraveled. It was the trust between creator and audience. Today, a similar breach of trust is unfolding across social media and marketing, where AI-generated content often mimics human creativity but lacks the one thing audiences crave: authentic intent. Drawing on studies, cultural references, and the psychology of communication, this essay explores how Lost offers more than a cautionary tale; it reveals the key to using AI without losing your audience.
When Lost aired its final episode, people were angry.
For six seasons, the show piled one mystery onto another—abandoned bunkers, smoke monsters, cryptic numbers, and a trail of clues that promised answers. Viewers clung to the hope that it would all add up to something. But when the ending arrived, the resolution never did.
Years later, the truth became clear. Lost gave the impression of a meticulously planned story where every symbol, number, and mystery pointed to a deeper answer, but the story was actually changing in real time. The writers adjusted course based on audience reactions and extended the series to prolong its success, creating new mysteries before resolving the old. Over time, clues began to contradict one another, until they were too tangled to resolve. The resolution fans were expecting was never possible, because the puzzle was still being assembled even as they tried to solve it. What many described afterward wasn’t just disappointment with the ending—it was a profound sense of betrayal.
Modern audiences are no strangers to feeling betrayed. It plays out daily across digital platforms, where people lend their attention only to discover they’ve been lured by AI-generated content designed to capture clicks rather than communicate anything meaningful. Our repeated attempts to cut back on social media (only to be pulled back by a promising hook) are rooted in a human instinct: we trust that someone created something with authentic intent, a fragment of lived experience, a point of view worth hearing. We feel betrayed when we discover that no one is behind the message, and that it exists only to keep us watching, just as Lost’s mysteries did. But Lost isn’t just a cautionary tale. It also reveals what it takes to use AI effectively.
Most conversations about AI in business revolve around its capabilities: what it can do, how it affects productivity, and what it means for hiring. That is understandable, since AI is still a relatively new technology. Leaders and employees struggle to map out a stable path forward amid the relentless pace of technological development, which naturally breeds uncertainty and fear. In this environment, focusing on AI’s capabilities feels like chasing a moving target: anything AI cannot do today, it will likely be able to do tomorrow. Rather than fixating on AI’s capabilities and its impact on jobs, we may be better served by asking a different question entirely: how do we preserve the authentic intent audiences still expect?
The backlash to Lost’s ending serves as a powerful metaphor for the discomfort we feel when we realize content lacks authentic intent, especially in the age of AI. Lost’s fans weren’t outraged because they disliked the ending, but because it revealed that the show had been improvising all along. The same risk applies to any creative work generated with AI, whether it’s writing social media posts, designing images, or communicating with clients. The tool may generate a high-quality product, but the audience will still disengage if they sense that no clear human intent is guiding the message. A study from the Georgia Institute of Technology highlights this effect: when people receive condolence emails written by AI, they tend to feel uncomfortable and react negatively, even if the message is well written.
Over the past few years, researchers have begun to measure how people respond to AI-generated content, and the findings are revealing. In a recent survey of 2,000 participants, half were able to correctly identify copy written by AI. More tellingly, 52% said they would become less engaged if they suspected the content wasn’t human-made. Another study, led by researchers at Berkeley Haas and MIT Sloan, found that in blind tests, AI-generated content was often rated higher in quality. However, once participants believed the content might be AI-created, their preferences shifted toward work they knew came from a human. The mere suspicion of automation was enough to tip the balance.
People’s reactions to AI-generated content play out daily across social media. As audiences become more attuned to spotting AI-generated posts, creators are finding it harder to keep engagement high. The same challenge is echoed by marketing and sales professionals, many of whom report growing difficulty in generating interest through newsletters and cold outreach. But as AI tools continue to improve at a rapid pace, it becomes clear that the problem isn’t technological. The real issue lies in how people perceive and respond to communication when the human behind it feels absent.
In 1967, Roland Barthes published The Death of the Author, where he argued that once a creative work enters the world, the author’s personal intentions and background should no longer influence its meaning. The text stands on its own, and meaning is created by the reader, not dictated by the creator. Barthes famously wrote, “the birth of the reader must be at the cost of the death of the Author.” Yet in Barthes’ argument lies an assumption that the work in question was created by someone making choices. Every creative work, at its core, is the result of choosing one option over many possibilities. Authors choose words, and photographers choose the camera’s position and lens. Without choice, there is no authorship. Without authorship, there is no intent. And if no one means it, does it still have meaning? According to the studies cited earlier, most people would say no.
Intention is the origin of all communication, creative or not, regardless of whether the listener or reader fully understands it. In his 1987 book The Intentional Stance, philosopher Daniel Dennett introduced the idea that we interpret behavior as meaningful when we believe it stems from purpose. We assume actions are driven by goals, not just random events. That assumption allows us to make sense of other people, anticipate their behavior, and navigate complex social situations (ironically, Dennett’s work also laid the groundwork for how artificial intelligence systems model human reasoning). In daily life, our instinct to interpret intention is constant: driving in a city, for instance, is less about controlling a vehicle than about predicting what other drivers intend to do. The same applies to how we engage with stories, conversations, and even social media hooks. Our attention and motivation follow not just what is said, but the belief that someone meant to say it.
As we consider the role of AI in creative and strategic work, and accept that its output will soon be indistinguishable from that of humans, it’s time to move beyond the outdated frame of human versus machine. The more relevant distinction is intentional versus automatic. From the audience’s perspective, that is the difference between presence and absence, and it determines whether they stay engaged. AI can express intent, but it cannot originate it. And it’s that sense of genuine intention, of someone meaning what they say, that audiences are listening for.
Sources cited in this article:
https://cs.stanford.edu/~diyiy/docs/chi22_perception.pdf
https://www.bynder.com/en/press-media/ai-vs-human-made-content-study/
https://www.cambridge.org/core/journals/judgment-and-decision-making/article/human-favoritism-not-ai-aversion-peoples-perceptions-and-bias-toward-generative-ai-human-experts-and-humangai-collaboration-in-persuasive-content-generation/419C4BD9CE82673EAF1D8F6C350C4FA8
https://writing.upenn.edu/~taransky/Barthes.pdf
https://mitpress.mit.edu/9780262540537/the-intentional-stance/


Lost, created by Jeffrey Lieber, J.J. Abrams, and Damon Lindelof