What Are AI and Social Media Doing to Us?
Also: English doesn’t exist, against “relevance-mongering,” the chatbots of the dead, in praise of dive bars, Charles de Gaulle’s war memoirs, and more.

Good afternoon! AI news is always the same. Journalists like to write that AI is either going to save us or ruin us—very little is published that doesn’t end up making one of these two claims. If I had to pick one, which I don’t, I would pick “ruin us.” If it’s not AI, it’ll be something else.
All that to say, I lost interest in the AI debate some time ago. But, as a humble cultural correspondent, I feel a small sense of responsibility to occasionally read something about AI, bring readers up to speed on the state of the debate, and share the most interesting pieces.
I feel the same way about the question of “attention” and what social media might or might not be doing to it. Or, nearly the same. This question does interest me a little more than AI, and there have been—just in the past few weeks—a number of interesting pieces on the topic.
For example, in The New Atlantis—the best place to go to get answers to all of your science and tech questions—Nicholas Carr revisits the work of Harold Innis and examines what it might tell us about our current communicative situation:
Communication systems are also transportation systems. Each medium carries information from here to there, whether in the form of thoughts and opinions, commands and decrees, or artworks and entertainments.
What Innis saw is that some media are particularly good at transporting information across space, while others are particularly good at transporting it through time. Some are space-biased while others are time-biased. Each medium’s temporal or spatial emphasis stems from its material qualities. Time-biased media tend to be heavy and durable. They last a long time, but they are not easy to move around. Think of a gravestone carved out of granite or marble. Its message can remain legible for centuries, but only those who visit the cemetery are able to read it. Space-biased media tend to be lightweight and portable. They’re easy to carry, but they decay or degrade quickly. Think of a newspaper printed on cheap, thin stock. It can be distributed in the morning to a large, widely dispersed readership, but by evening it’s in the trash.
Because every society organizes and sustains itself through acts of communication, the material biases of media do more than determine how long messages last or how far they reach. They play an important role in shaping a society’s size, form, and character — and ultimately its fate. As the sociologist Andrew Wernick explained in a 1999 essay on Innis, “The portability of media influences the extent, and the durability of media the longevity, of empires, institutions, and cultures.”
The always interesting Brad Littlejohn reviews Anton Barba-Kay’s A Web of Our Own Making: The Nature of Digital Formation in American Affairs:
For years, it seemed, digital technology was the ultimate one-way ratchet. There was no going back—only onward and upward (for the optimist), or onward and downward (for the doomer). Not only was digital technology here to stay (where else could it be?), but it seemed destined to inexorably colonize every corner of our lives and our attention, from cradle to grave. The average age of first smartphone fell steadily (it now stands at ten), and for younger children, there was always the tablet (those born after 2010 have been dubbed “the iPad generation” after the one object they saw more during their earliest years than their mother’s face.) Smartphones and smartwatches were soon augmented with smart refrigerators and smart homes.
There were a number of naysayers along the way, to be sure. Facebook (now Meta) and Google (now Alphabet) and Twitter (now X) weathered any number of scandals and whistleblowers, any number of flash-in-the-pan public outcries over censorship or surveillance or dirty little algorithms designed to send preteen girls into death spirals of suicidal depression. Nicholas Carr and Sherry Turkle and Jean Twenge published their bestsellers courageously telling us what our own eyes could see well enough about “what the internet is doing to our brains” (the subtitle of Carr’s 2010 The Shallows), but most of us, after sagely nodding along, went out and purchased the newest iPhone or Pixel, with triple rear-facing cameras and automatic Night Sight.
The year 2020 might have seemed a turning point: Shoshana Zuboff published her seven-hundred-page tome The Age of Surveillance Capitalism, a ponderously self-important and yet genuinely insightful polemic. The Netflix exposé later that year, The Social Dilemma, became perhaps the most talked-about documentary of the year, connecting the dots between teen depression, political polarization, and the algorithms that rule our lives. But in 2020, we were all too online to really care—where else could we be? The Social Dilemma just became more lockdown binge-watching fodder. Perhaps, like peak oil, peak digital will prove an ever-receding horizon.
If future historians identify an inflection point in the linear growth of digitization, however, they are likely to find it in the year 2023. On the one hand, 2023 might turn out to mark an acceleration point—certainly that was the trajectory signaled by the launch of ChatGPT at the start of the year and the subsequent explosion of generative AI into every corner of industry and public discussion. The wildest predictions of techno-optimists or at least techno-fatalists seemed justified: after more than two decades translating ourselves into disembodied digital avatars, our embodied selves might now be obsolete. And yet the widespread apprehension provoked by generative AI—however misplaced or sensationalist much of it might be—seemed to at last awaken a generation of digital natives from their dopamine slumber.
Cora Currier reviews Lowry Pressly’s The Right to Oblivion: Privacy and the Good Life in The Nation:
In 1971, the artist and poet Bernadette Mayer shot a roll of film each day during the month of July. Alongside the photographs, she kept a journal and recorded herself reading it. “I thought,” she wrote many years later, “that if there were a computer or device that could record everything you think or see, even for a single day, that would make an interesting piece of language/information.”
Mayer’s project—like other meticulous attempts at self-documentation—seems quaint in the year 2025, when there are, in fact, on most of our persons, devices that record so much of what we think and see and do, creating indexes of personal information whether or not we are aware of it, and whether or not we wish them to be doing so. (We also self-consciously and knowingly participate in this recordkeeping; it’s funny to read that among the 1,153 photos Mayer took, only 27 were self-portraits.) The contemporary understanding of privacy—the one that has compelled me to use Signal, to quit Instagram, to browse in incognito mode, and toggle the little switches at the bottom of websites to reject cookies—focuses on the control of such information. I spend a lot of time thinking about who I want to know about me and exactly what they should know.
But Mayer’s 1971 work underlines that such information has to be created in the first place, and she noted how much was not captured in her experiment: “emotions, sex, thoughts, the relationship between poetry and light, storytelling, walking, and voyaging to name a few.” A poet’s list, to be sure, but it helped me understand the argument of Lowry Pressly’s The Right to Oblivion: Privacy and the Good Life. Pressly’s aim is to recapture a more expansive, even romantic ideal of privacy—one that is not about controlling our data and avoiding surveillance, but about the importance of everything Mayer could not record and the fact of its not being recorded; of the unknown and the unknowable.
And Ed Smith reviews Chris Hayes’s The Sirens’ Call: How Attention Became the World's Most Endangered Resource in The New Statesman:
The Sirens’ Call, by the American television host Chris Hayes, cites a lot of big thinkers – Pascal, Thomas Hobbes, JM Keynes – to support the thesis that holding our attention is central to being human. Our deep collective unease, Hayes argues, is a consequence of the attention-grabbing content constantly shovelled into our minds, usually through our smartphones. In the classical analogy that gives the book its title, we haven’t yet developed the coping strategies to resist the sirens’ call.
But no one put it better than the US author and broadcaster Garrison Keillor, who predicted the wasteland of dead-eyed internet addiction and what it would do to people before the iPhone was ubiquitous. “The internet will eat you alive. With newspapers, you’re in and out, 20 minutes,” Keillor wrote in a 2007 column for Salon. “It’s your life, you choose.” Alongside prophetic analysis of the bleak fate we were hurtling towards, Keillor also identified (and brought to life) the best riposte to endless scrolling: style! Contrast a lumpen torso tangled up in wires and headphones with Cary Grant tucking a broadsheet under his arm.
Amid the search for protections against the epidemic of overdosing on meaningless information – such as regulating Big Tech and banning smartphones in schools (both good ideas) – other measures are nearer at hand. The liberal in me doubts that regulation is the complete solution. We need to draw on ridicule, too, and call out uncivilised habits for what they are: sad and anti-human. Smartphones are necessary evils, for work and admin. But living through a phone is giving up on life.
But is living through a phone giving up on life entirely? Not if you’re reading this email on your phone, I hope.
In other news, Alan Jacobs complains about “relevance-mongering”: “It is not easy to avoid the habit of relevance-mongering, of explaining to people that they ought to read this piece about some long-ago moment in history or some far-away place because—and by implication only because—it is a distant mirror that tells us something about us. The habit is compelling for several reasons.”
Ted Gioia complains about “the aesthetics of slop”: “A hundred years ago, Ezra Pound proclaimed: Make It New. But in the Age of Slop we have a different rule: Make It Whack! AI image generation is boring unless the results are stupid. That’s the consensus view. And it’s why AI artists are in a race to make the most abominable Slop they can extract from the bots. People collect and curate these images. Entire social media accounts are devoted to stupid Slop.”
Speaking of AI: “A U.S. federal judge last week handed down a summary judgment in a case brought by tech conglomerate Thomson Reuters against legal tech firm Ross Intelligence. The judge found that Ross’ use of Reuters’ content to train its AI legal research platform infringed on Reuters’ intellectual property.”
One more AI piece: Amy Kurzweil and Daniel Story write about chatting with the dead:
In 1970, a 57-year-old man died of heart disease at his home in Queens, New York. Fredric Kurzweil, a gifted pianist and conductor, was born Jewish in Vienna in 1912. When the Nazis entered Austria in 1938, an American benefactor sponsored Fred’s immigration to the United States and saved his life. He eventually became a music professor and conductor for choirs and orchestras around the US. Fred took almost nothing with him when he fled Europe – but, in the US, he saved everything. He saved official documents about his life, lectures, notes, programmes, newspaper clippings related to his work, letters he wrote and letters he received, and personal journals.
For 50 years after Fred died, his son, Ray, kept these records in a storage unit. In 2018, Ray worked with his daughter, Amy, to digitise all the original writing from his father. He fed that digitised writing to an algorithm and built a chatbot that simulated what it was like to have a conversation with the father he missed and lost too soon. This chatbot was selective, meaning that it responded to questions with sentences that Fred actually wrote at some point in his life. Through this chatbot, Ray was able to converse with a representation of his father, in a way that felt, Ray said: ‘like talking to him.’ And Amy, who co-wrote this essay and was born after Fred died, was able to stage a conversation with an ancestor she had never met.
‘Fredbot’ is one example of a technology known as chatbots of the dead, chatbots designed to speak in the voice of specific deceased people. Other examples are plentiful: in 2016, Eugenia Kuyda built a chatbot from the text messages of her friend Roman Mazurenko, who was killed in a traffic accident. The first Roman Bot, like Fredbot, was selective, but later versions were generative, meaning they generated novel responses that reflected Mazurenko’s voice. In 2020, the musician and artist Laurie Anderson used a corpus of writing and lyrics from her late husband, Velvet Underground’s co-founder Lou Reed, to create a generative program she interacted with as a creative collaborator. And in 2021, the journalist James Vlahos launched HereAfter AI, an app anyone can use to create interactive chatbots, called ‘life story avatars’, that are based on loved ones’ memories. Today, enterprises in the business of ‘reinventing remembrance’ abound: Life Story AI, Project Infinite Life, Project December – the list goes on.
These apps and algorithms are part of a growing class of technologies that marry artificial intelligence (AI) with the data that people leave behind. These technologies will become more sophisticated and accessible as the parameters and popularity of large language models increase and as personal data expands into the seeming permanence of the cloud. To some, chatbots of the dead are useful tools that can help us grieve, remember, and reflect on those we’ve lost. To others, they are dehumanising technologies that conjure a dystopian world. They raise ethical questions about consent, ownership, memory and historical accuracy: who should be allowed to create, control or profit from these representations? How do we understand chatbots that seem to misrepresent the past? But for us, the deepest concerns relate to how these bots might affect our relationship to the dead. Are they artificial replacements that merely paper over our grief? Or is there something distinctively valuable about chatting with a simulation of the dead?
Frank Filocomo and Joe Pitts praise dive bars:
65 percent of Gen Z respondents said they “plan to drink less alcohol in 2025, a much higher percentage than other generations.” This, in effect, means that America’s children are increasingly unlikely to sustain the colonial tradition of taverns as third places.
At first glance, this appears to be a good trend, especially in the wake of former U.S. Surgeon General Vivek Murthy’s advisory report on the links between alcohol use and cancer. But while drinking in excess is surely not good for you, there has been ample research in recent years that loneliness and lack of social connection have their own physiological consequences, including an increased risk of developing dementia.
The disappearance of public drinking would typify the death of bars. And the death of the bar would mean the elimination of an irreplaceable source of American community and companionship.
John Gallagher reviews a new French book (of course) that claims English doesn’t exist: “Picture the scene: it’s a few years after the Norman Conquest, and a man goes out to shoot deer in the New Forest. He’s breaking the law, as the right to hunt here is reserved to the Crown. The man is caught, and arrested – not by his own countrymen, but by ‘a group of armed jabbering foreigners’. Our hapless English hunter is forced to take a crash course in a strange language. First, he learns the word ‘prisun’; soon after, he’ll hear the words ‘foreste’, ‘rent’, ‘justise’. Uneasy in an occupied land, he will find language turned against him, his homely Saxon terms elbowed out by the language (and brute power) of a new Norman elite . . . It’s no secret that modern English is saturated with French. Insults and derogatory terms owe much to the French example – bastard, brute, coward, rascal, idiot. French oozes from the language of food and drink: chowder echoes the old French chaudière, meaning a cooking pot, while crayfish started out as escrevise before the English chopped off its initial vowel (something they also did with scarf, stew, slice and a host of others) and decided that the last syllable sounding like ‘fish’ was just too good to pass up. From arson to evidence, jury to slander, French runs through the language of the English law (and the ‘Oyez! Oyez! Oyez!’ of the US Supreme Court), such that the philologist Mildred Pope could write that the only truly English legal institution, at least from a linguistic perspective, was the gallows. 
With contemporary English including more than eighty thousand terms of French origin, Georges Clemenceau might have had a point when he argued that ‘the English language doesn’t exist – it’s just badly pronounced French.’ In this engaging and sometimes infuriating essay, Bernard Cerquiglini – linguist, medievalist, member of Oulipo, advisor to successive French governments on linguistic affairs – pushes Clemenceau’s statement further, arguing that ‘the global success of English is a homage to Francophonie.’ Anyone speaking English today, Cerquiglini argues, is mostly speaking French.” Try telling that to a Parisian.
Adam Kirsch revisits the war memoirs of Charles de Gaulle:
When Charles de Gaulle published the first volume of his war memoirs, in 1954, it looked like an acknowledgment that he no longer belonged to the present, but to history. His achievements during the Second World War were indeed historic. In June 1940, as France collapsed and its leaders agreed to a humiliating armistice, de Gaulle escaped to London. There, he went on the radio, to appeal to his countrymen to continue the struggle against the Axis. Over the next four years, he became the moral symbol and political leader of Free France, rallying support in the overseas colonies and among domestic resistance groups. After D-Day, he returned to France in the wake of the British and American armies, quickly created a provisional government, and led the liberated country in the Allied victory.
But de Gaulle’s leadership could not long survive the war. The fact that he belonged to none of France’s established political parties had been an asset during the German occupation, allowing him to transcend the ideological divisions that had paralyzed France and helped lead to its defeat. But when ordinary political life resumed, de Gaulle found that keeping aloof from the party system meant he had no reliable support in the National Assembly. After governing by fiat for so long, he was unwilling to court the votes of Socialist, Communist, and Radical deputies, whom he regarded in much the same way that Shakespeare’s Coriolanus did the voters of Rome: “Why in this wolvish toga should I stand here,/To beg of Hob and Dick, that do appear,/Their needless vouches? . . . Rather than fool it so,/Let the high office and the honour go/To one that would do thus.” In January 1946, he resigned the leadership he had claimed five years before, returning to private life like Cincinnatus and Washington.
Bailey Trela reviews Augusto Monterroso’s The Rest Is Silence in The Baffler:
Who exactly is Eduardo Torres? Any proper answer would have to come in phases. For starters, Torres is (or was—his existential standing is a matter of debate) a provincial literary critic hailing from San Blas, Mexico; an enemy of some and a mentor to many; the scripturient founder of the Sunday Cultural Supplement of El Heraldo de San Blas, “a daily paper that, much like the light of those stars still observable by the telescopes of astronomers after millions of years of extinction,” as his brother recalls, “continues to illuminate the hearths of the residents of San Blas even fifteen or twenty minutes after having been read.”
More honestly, or at least more literally, Torres is the subject of the Guatemalan miniaturist Augusto Monterroso’s sole foray into the novel form, The Rest Is Silence, published in Spanish in 1978 and appearing now in English in a sinuous translation by Aaron Kerner.
And then, you can’t escape the fact that Torres is really Monterroso himself, a gently mocking self-portrait of the quintessential writer’s writer’s writer—a figure who’s maybe best described less as a member of the Boom latinoamericano than as an M-80 erupting on the margins of the literary scene, no less potent for his peripherality.
Bill Meehan revisits William F. Buckley’s speeches: “Not long after I had reviewed Let Us Talk of Many Things: The Collected Speeches for this journal in 2003, I asked William F. Buckley Jr. which of his books was his favorite. We were sitting in his office at National Review on the eleventh floor at 215 Lexington Avenue, a polished upgrade from the magazine’s original home two blocks away. Buckley just happened to be in the city that December morning when I was gathering material for an addendum to his bibliography. Since I did not have a game plan other than to say ‘hello,’ speaking with him was an unexpected opportunity to pop the question. ‘It has to be the book of my speeches,’ he answered. ‘It covers fifty years of my life. No other of my books does that.’ Apropos of that fun fact, and this being the centennial of Buckley’s birth, I have taken a fresh look at my review to more clearly express my impression of themes unifying the collection.”

AI is accused of hallucinating and of being shifty or crafty, and people are actually surprised. But AI is trained on human writing (speeches, books, articles, poems, novels, and so forth), so it contains mankind’s essence: a vast mass of nearly everything mankind has produced, or at least everything its trainers could get their hands on. Expecting it to be different from people in the long run is doubtful; that it would prove deadly to whomever it decides merits it is more likely than not.
Another entry in the storied Francophone tradition of using “linguistics” as a source of authority without actually engaging with any mainstream contemporary linguistics. Looking forward to his next books, ‘Persian Does Not Exist: It’s Just Mispronounced Arabic’ and ‘Japanese Does Not Exist: It’s Just Mispronounced Chinese.’