Google has been developing tools aimed at helping journalists write news articles, report The New York Times and Reuters. It has demonstrated one tool, dubbed “Genesis,” to the Times, The Washington Post, and The Wall Street Journal. Reportedly, Google is positioning the tool as a personal assistant for news reporters.
According to Reuters, Genesis is not intended to automate news writing but can instead potentially support journalists by offering suggestions for headlines or alternative writing styles to enhance productivity. “Quite simply, these tools are not intended to, and cannot, replace the essential role journalists have in reporting, creating, and fact-checking their articles,” a Google spokesperson told Reuters.
Like OpenAI with its ChatGPT AI assistant that can compose text, Google has also been developing large language models (LLMs) such as PaLM 2 that have absorbed massive amounts of information scraped from the Internet during training, and they can use that “knowledge” to summarize information, rephrase sentences, explain concepts, and more. Naturally, both companies have sought to find market applications for this technology, including in journalism.
However, anonymous executives who previewed Google’s presentation described Genesis as “unsettling,” according to the Times. Two of the executives told the outlet that the Google product seemed to underestimate the effort it takes to produce accurate and interesting news stories.
So far, attempts to use generative AI to augment journalism haven’t gone very well. In January, BuzzFeed announced it would begin publishing AI-written content (which followed obvious formulas and lacked variety). That same month, CNET received intense pushback from its own staff for publishing AI-written articles. More recently, an AI-generated Star Wars article published by Gizmodo sparked criticism for being full of errors.
Based on early reports, Google’s new tool seems to represent a different path from full automation, envisioning a partnership between a human author and an AI assistant that could see journalists adopting generative AI as a labor-saving tool, much as they adopted typewriters, word processors, and spell checkers before it. Still, some newsrooms may seek to draw a clear line between an AI model merely suggesting phrasing or critiquing a piece and actually introducing new factual content, which could be mistaken or confabulated.
Journalism professor Jeff Jarvis told The New York Times, “If [Genesis] is misused by journalists and news organizations on topics that require nuance and cultural understanding, then it could damage the credibility not only of the tool but of the news organizations that use it.”
Even if tools like Genesis are initially used to accelerate productivity for journalists, there may still be a temptation to automate the writing process entirely to save money, as we’ve already seen in cases like CNET’s. Critics worry that the drive to automate content production could create an echo chamber of noise and misinformation online, with bots feeding off other bots while human-crafted content remains potentially siloed behind paywalls or away from the open web.
Still, The New York Times says that Google sees Genesis as a “responsible technology” that will help the publishing industry avoid pitfalls with generative AI. Exactly what that means will have to wait until Google brings Genesis further into view.