
If Google Kills News Media, Who Will Feed the AI Beast?

Summarization tools from OpenAI and Google offer a CliffsNotes version of journalism that may further dumb down public discourse and deliver a brutal blow to an already battered media business.
Silhouette of an employee using a computer in a newsroom. by Bromberger Hoover Photography/Getty Images.

There’s a story written by E.B. White in 1935 titled “Irtnog” that is sadly becoming more and more relevant in today’s age of AI. White wrote “Irtnog” in response to the rise of digests (specifically publications like Reader’s Digest, which summarized long-form journalism), and it tells the story of a world where people rely on increasingly condensed versions of books and stories until nuanced understanding and critical thinking are sacrificed for the sake of convenience and speed (sound familiar?). In White’s tongue-in-cheek telling, people become so obsessed with efficiency that they demand shorter and shorter summaries of everything, until eventually entire books are boiled down to single strings of letters. Thus comes the day when the word Irtnog is announced to the public, supposedly encapsulating all human knowledge.

Fast-forward to today, and we’re on the cusp of a similar phenomenon with the new wave of AI summarization tools being launched by OpenAI, Google, and Facebook. These tools, though impressive in their ability to distill information, are just a few steps away from creating an “Irtnog”-like reality, where the richness of human knowledge and depth of understanding are reduced to bite-size, and sometimes dangerously inaccurate, summaries for our little brains to consume on our tiny devices. Case in point: this month Google launched several new AI-powered features for its search engine. One of the most notable additions is AI Overviews, which places AI-generated summaries at the top of search results. Essentially, that’s a fancy way of saying AI will summarize search results for you, because apparently reading anything that is not a summary is just too much effort these days.

For news publishers, this is—understandably!—quite worrisome. Over the past three decades, tech companies have systematically helped siphon off the advertising revenue that once supported robust journalism, as advertisers have flocked to the targeted offerings of social media and search platforms. At the same time, the proliferation of free news content aggregated by tech giants (ahem, Google News) has made it increasingly difficult for news outlets to attract and retain paying subscribers. As such, the publishing industry has been declining since the early 2000s, when the real tech companies were separated from the chaff of the dot-com bubble, with newspaper revenues falling by more than 50% over the past two decades.

According to Pew Research, newsroom employment in the US dropped by 26% between 2008 and 2020, with newspapers being hit the hardest. Nearly one third of the country’s newspapers have gone out of business since 2005, leaving thousands of communities without a local news source. According to the US Census Bureau, magazine revenue fell by 40.5% in the past two decades. Between 2019 and 2022 alone, total audiences for magazine companies decreased by a staggering 38.56%. The Washington Post said recently that it lost $77 million last year. The areas that have been growing for some outlets are digital subscriptions and digital ads, but now with Google’s AI, which creates a CliffsNotes version of news stories, fewer people will feel the need to click through to the original article, further eroding the already dwindling traffic and revenue for news publishers. If AI-generated summaries become the primary way people consume news, the economics of journalism could be devastatingly impacted.

Google logo in Poznan, Poland, on May 16, 2024. by Jakub Porzycki/NurPhoto/Getty Images.

When I asked people in Silicon Valley if they worry about this new reality of tech sites summarizing everything and more jobs being lost in the news industry, they seemed genuinely elated by the idea of something that will save them more time and not require them to click through to another site. “I think that whatever semblance of clicks the publishers were hanging on to, this completely pushes them over the cliff, and that’s not tech’s fault, but the fault of the publishers,” says a Silicon Valley investor and entrepreneur. “Most quote-unquote news sites have already alienated readers with their obsessions with trying to create content in response to whatever Twitter is upset about that day, and so the few places that still do real journalism can keep trying to do real journalism and hope that they’ll get enough clicks to keep the lights on. For everyone else, the people who take a tweet and make an article out of it, fuck them, they deserve to die.”

One of the big worries with the rise of these AI CliffsNotes products is how much they tend to get wrong. It’s easy to see how AI summaries, without human intervention, can produce not just incorrect but sometimes dangerously incorrect results. For example, in response to a search query asking why cheese isn’t sticking to a pizza, Google’s AI suggested adding “1/8 cup of non-toxic glue to the sauce to give it more tackiness.” (X users later discovered the AI was taking this suggestion from an 11-year-old Reddit post by a user called “fucksmith.”) Another result told people who are bitten by a rattlesnake to “apply ice or heat to the wound,” which would do about as much to save your life as crossing your fingers and hoping for the best. Other search queries have simply returned flat-out wrong information, like one where someone asked which presidents attended the University of Wisconsin–Madison, and Google explained that President Andrew Jackson attended college there in 2005, even though he died 160 years earlier, in 1845.

On Thursday, Google said in a blog post that it was scaling back some of its summarization results in certain areas and working to fix the problems it had found. “We’ve been vigilant in monitoring feedback and external reports, and taking action on the small number of AI Overviews that violate content policies,” Liz Reid, the head of Google Search, wrote on the company’s website. “This means overviews that contain information that’s potentially harmful, obscene, or otherwise violative.”

Google has also tried to allay the concerns of publishers. In another post last month, Reid wrote that the company has seen “the links included in AI Overviews get more clicks than if the page had appeared as a traditional web listing for that query” and that as Google expands this “experience, we’ll continue to focus on sending valuable traffic to publishers and creators.”

While AI can regurgitate facts, it lacks the human understanding and context necessary for truly insightful analysis. The oversimplification and potential misrepresentation of complex issues in AI summaries could further dumb down public discourse and lead to a dangerous spread of misinformation. This isn’t to say that humans are incapable of that themselves; if there’s anything the last decade of social media has taught us, it’s that humans are more than capable of spreading misinformation and prioritizing their own biases over facts. But as AI-generated summaries become increasingly prevalent, even those who still value well-researched, nuanced journalism may find it harder and harder to access such content. If the economics of the news industry continue to deteriorate, it may be too late to prevent AI from becoming the primary gatekeeper of information, with all the risks that entails.

The news industry’s response to this threat has been mixed. Some outlets have sued OpenAI for copyright infringement—as The New York Times did in December—while others have decided to do business with them. This week The Atlantic and Vox became the latest news organizations to sign licensing deals with OpenAI, allowing the company to use their content to train AI models, which could be seen as training robots to take jobs even more quickly. Media giants like News Corp, Axel Springer, and the Associated Press are already on board. Still, proving it’s not beholden to any machine overlords, The Atlantic published a story on the media’s “devil’s bargain” with OpenAI on the same day its CEO, Nicholas Thompson, announced their partnership.

Another investor I spoke with likened the situation to a scene in Tom Stoppard’s Arcadia, in which one character remarks that if someone stirs jam into their porridge by swirling it in one direction, they can’t reconstitute the jam by then stirring the opposite way. “The same is going to be true for all of these summarizing products,” the investor continues. “Even if you tell them you don’t want them to make your articles shorter, it’s not like you can un-stir your content out of them.”

But here’s the question I have. Let’s just say Google and OpenAI and Facebook succeed, and we read summaries of news rather than the real thing. Eventually, those news outlets will go out of business, and then who is going to be left to create the content that the AI needs to summarize? Or maybe it won’t matter by then, because we’ll be so lazy and obsessed with shorter content that the AI will choose to summarize everything into a single word, like Irtnog.