
Gibberish from the machine

I’m honored that Germany’s Stern asked me to write about AI and journalism for a 75th anniversary edition. Here’s a version prior to final editing and trimming for print and translation. And I learned a new word: Kauderwelsch (“the variety of Romansch spoken in the Swiss town of Chur (Kauder) in canton Graubünden”) means gibberish.

We have Gutenberg to blame. It is because of his invention, print, that society came to think of public discourse, creativity, and news as “content,” a commodity to fill the products we call publications or lately websites. Journalists believe that their value resides primarily in making content. To fill the internet’s insatiable maw, reporters at some online sites are given content quotas, and their news organizations no longer appoint editors-in-chief but instead “chief content officers.” For the record, Stern still has actual editors, many of them.

And now here comes a machine — generative artificial intelligence or large language models (LLMs), such as ChatGPT — that can create no end of content: text that sounds just like us because it has been trained on all our words. An LLM maps the trillions of relationships among billions of words, turning them and their connections into numbers a computer can calculate. LLMs have no understanding of the words, no conception of truth. They are programmed only to predict the next most likely word to occur in a sentence.
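To make that concrete, here is a minimal sketch of next-word prediction, using the small open GPT-2 model via the Hugging Face transformers library as a stand-in for far larger systems such as ChatGPT; the prompt is my own invented example. The model’s entire output is a probability for every possible next token:

    # A minimal sketch of next-token prediction, using the small open
    # GPT-2 model as a stand-in for far larger LLMs such as ChatGPT.
    # Requires: pip install torch transformers
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "The reporter filed her story just before the"
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        # One score (logit) for every word piece in the vocabulary.
        logits = model(**inputs).logits

    # Turn the scores at the final position into probabilities and show
    # the five word pieces the model considers most likely to come next.
    probs = torch.softmax(logits[0, -1], dim=-1)
    top = torch.topk(probs, 5)
    for p, idx in zip(top.values, top.indices):
        print(f"{tokenizer.decode(idx.item())!r:>12}  {p.item():.3f}")

There is no fact-checking step anywhere in that loop; generation is simply this prediction repeated, one token at a time.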

A New York lawyer named Steven Schwartz had to learn his lesson about ChatGPT’s factual fallibility the hard way. In a now-infamous case, Schwartz asked ChatGPT for precedents in a lawsuit involving an errant airline snack cart and his client’s allegedly injured knee. He needed to find cases relating to highly technical issues of international treaties and bankruptcy. ChatGPT dutifully delivered more than a half-dozen citations.

As soon as Schwartz’s firm filed the resulting legal brief in federal court, opposing counsel said they could not find the cases, and the judge, P. Kevin Castel, directed the lawyers to produce them. Schwartz returned to ChatGPT. The machine is programmed to tell us what we want to hear, so when Schwartz asked whether the cases were real, ChatGPT said they were. Schwartz then asked ChatGPT to show him the complete cases; it did, and he sent them to the court. The judge called them “gibberish” and ordered Schwartz and his colleagues into court to explain why they should not be sanctioned. I was there, along with many other journalists, to witness the humbling of the attorneys at the hands of technology and the media.

“The world now knows about the dangers of ChatGPT,” the lawyers’ lawyer told the judge. “The court has done its job warning the public of these risks.” Judge Castel interrupted: “I did not set out to do that.” The problem here was not with the technology but with the lawyers who used it, who failed to heed warnings about the dubious citations, who failed to use other tools — even Google — to verify them, and who failed to serve their clients. The lawyers’ lawyer said Schwartz “was playing with live ammo. He didn’t know because technology lied to him.”

But ChatGPT did not lie because, again, it has no conception of truth. Nor did it “hallucinate,” to use its creators’ description. It simply predicted strings of words that sounded right but were not. The judge fined the lawyers $5,000 each and acknowledged that they had suffered humiliation enough in news coverage of their predicament.

Herein lies a cautionary tale for news organizations that are rushing to have large language models write stories — because they want to seem cool and trendy, to save work, perhaps to eliminate jobs, and to manufacture ever more content. The news companies CNET and G/O Media have gotten into hot water for using AI to produce content that turned out to be less than factual. America’s largest newspaper chain, Gannett, just turned off artificial intelligence that was producing embarrassing sports stories calling a football game “a close encounter of the athletic kind.” I have heard online editors plead that they are in a war to produce more and more content to attract more likes and clicks so they may earn more digital advertising pennies. Their problem is that they think their mission is only to make content.

My advice to editors and publishers is to steer clear of large language models for writing the news, except in well-proven use cases, such as turning highly structured financial reports into basic news stories, which must be checked before release. I would give the same advice to Microsoft and Google about connecting LLMs with their search engines. Fact-free gibberish coming out of the machine could ruin the authority and credibility of both news and technology companies — and affect the reputation of artificial intelligence overall.
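For that structured use case, the proven approach is closer to template-filling than free-form generation: every number in the story comes straight from the data, so there is nothing for the machine to invent. Here is a minimal sketch, with field names and figures I made up for illustration:

    # A minimal sketch of turning a structured earnings report into a
    # basic story by filling a fixed template. All field names and the
    # sample figures below are invented for illustration.

    def earnings_story(report: dict) -> str:
        direction = "up" if report["eps"] >= report["eps_prior"] else "down"
        return (
            f"{report['company']} reported quarterly earnings of "
            f"${report['eps']:.2f} per share, {direction} from "
            f"${report['eps_prior']:.2f} a year earlier, on revenue of "
            f"${report['revenue_m']:,.0f} million."
        )

    print(earnings_story({
        "company": "Example Corp",
        "eps": 1.42,
        "eps_prior": 1.18,
        "revenue_m": 512,
    }))

Even so, a human should check the output before release; the template guards against invention, not against bad input data.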

There are good uses for AI. I benefit from it every day in, for example, Google Translate, Maps, Assistant, and autocomplete. As for large language models, they could be useful to augment — not replace — journalists’ work. I recently tested a new Google tool called NotebookLM, which can take a folder filled with a journalist’s research and summarize it, organize it, and allow the writer to ask questions of it. LLMs could also be used in, for example, language education, where what matters is fluency, not facts. My international students use these programs to smooth out their English for school and work. I even believe LLMs could be used to extend literacy, to help people who are intimidated by writing to communicate more effectively and tell their own stories.

Ah, but therein lies the rub for writers like me. We believe we are special, that we hold a skill — a talent for writing — that few others can boast. We are storytellers and wield the power to tell others’ tales, to decide what tales are told, who shall be heard in them, and how they will begin and neatly end. We think that gives us the ability to explain the world in what journalists like to call the first draft of history — the news.

Now writers and journalists see both the internet and AI as competition. The internet enables the silent mass of citizens who were not heard in media to at last have their say — and to create a lot of content. And by producing credible prose in seconds, AI devalues writing and robs writers of their special status.

This is one reason why I believe we see hostile coverage of technology in media these days. News organizations and their proprietors claim that Google, Facebook, et al. steal away audience, attention, and advertising money (as if God granted publishers those assets in perpetuity). Journalists are engaged in their latest moral panic — another in a long line of panics over movies, television, comic books, rock lyrics, and video games. They warn about the dangers of the internet, social media, our phones, and now AI, claiming that these technologies will make us stupid, addict us, take away our jobs, and destroy democracy under a deluge of disinformation.

They should calm down. A 2020 study found that in the US no age group “spent more than an average of a minute a day engaging with fake news, nor did it occupy more than 0.2% of their overall media consumption.” The issue for democracy isn’t so much disinformation but the willingness — the eagerness — of some citizens to believe lies that stoke their own fears and hatreds. Journalism should be reporting on the roots of bigotry and extremism rather than simplistically blaming technology.

In my book, The Gutenberg Parenthesis, I track society’s entry into the age of print as we now leave it for the digital age that follows. Print’s development as an institution of authority took time. Not until fifty years after Gutenberg’s Bible, around 1500, did the book take the shape we know today, with titles, title pages, and page numbers. It took another century, a few years either side of 1600, before the technology and its technologists — printers — faded into the background, making way for tremendous innovation with print: the birth of the modern novel with Cervantes, the essay with Montaigne, and the newspaper. A business model for print did not arrive until one century more, in 1710, with the advent of copyright. Come the 1800s, the technology of print — which had hardly changed since Gutenberg — evolved at last with the arrival of steam-powered presses and typesetting machines, leading to the birth of mass media. The twentieth century brought print’s first competitors, radio and television. And here we are today, just over a quarter century past the introduction of the commercial web browser. This is to say that we are likely at just the beginning of a long transition into the digital age. It is only 1480 in Gutenberg years.

In the beginning, rumor was trusted more than print because any anonymous printer could produce a book or pamphlet — just as anyone today can make a website or tweet. In 1470 — only fifteen years after Gutenberg’s Bible came off the press — Latin scholar Niccolò Perotti made what is said to be the first call for censorship of print. Offended by a bad translation of Pliny, he wrote to the Pope demanding that a censor be assigned to approve all text before it came off the press. As I thought about this, I realized Perotti was not seeking censorship. Instead, he was anticipating the establishment of the institutions of editing and publishing, which would assure quality and authority in print for centuries.

Like Perotti in his day, media and politicians today demand that something must be done about harmful content online. Governments — like editors and publishers — cannot cope with the scale of speech now, so they deputize platforms to police and censor all that is said online. It is an impossible task.

Journalists must be careful using AI to produce the news. At the same time, there is a danger in demonizing the technology. In the best case, the rise of AI might force journalists to examine their role in society, to ask how they improve public discourse. The internet provides them with many new ways to connect with communities, to build relationships of trust and authority with them, to listen to their needs, to discover and share voices too long not heard in the public sphere, to expand the work of journalism past publishing to the wider canvas of the internet.

Journalists think their content is what makes them valuable, and so publishers and their lawyers and lobbyists are threatening to sue AI companies, dreaming of huge payments for machines that read their content. That is no strategy for the future of journalism. Neither is Axel Springer’s plan to replace journalists in content factories with AI. That is not where the value of journalism lies. It lies with reporting on and serving communities. Like Niccolò Perotti, we should anticipate the creation of new services to help internet users cope with the abundance of content today, to verify the truth or falsity of what we see online, to assess authority, to discover more diverse voices, to nurture new talent, to recommend content that is worth our time and attention. Could such a service be the basis of a new journalism for the online, AI age?

