Artificial general bullshit

I began writing this as a report from a useful conference on AI that I just attended, where experts and representatives of concerned sectors of society had serious discussion about the risks, benefits, and governance of the technology.

But, of course, I first must deal with the ludicrous news playing out now at leading AI generator, OpenAI. So let me begin by saying that in my view, the company is pure bullshit. Sam Altman’s contention that they are building “artificial general intelligence” or “artificial superintelligence”: Bullshit. Board members’ cult of effective altruism and AI doomerism: Bullshit. The output of ChatGPT: Bullshit. It’s all hallucinations: Pure bullshit. I even fear that the discussion of AI safety in relation to OpenAI could be bullshit. 

This is not to say that AI and its capabilities as it is practiced there and elsewhere is not something to be taken seriously, even with wonder. And we should take seriously discussion of AI impact and safety, its speed of development and adoption, and its governance. 

These topics were on the agenda of the AI conference I attended at the San Francisco outpost of the World Economic Forum (Davos). Snipe if you will at this fraternity of the rich and powerful, but this is one thing the Forum does consistently well: convene multistakeholder conversations about important topics, because people accept their invitations. At this meeting, there were representatives of technology companies, governments, and the academy. I sat next to an honest-to-God philosopher who is leading a program in ethical AI. At last.

I knew I was in the right place when I heard AGI brought up and quickly dismissed. Artificial general intelligence is the purported goal of OpenAI and other boys in the AI fraternity: that they are so smart they can build a machine that is smarter than all of us, even them — a machine so powerful it could destroy humankind unless we listen to its creators. I call bullshit. 

In the public portion of the conference, panel moderator Ian Bremmer said he had no interest in discussing AGI. I smiled. Andrew Ng, cofounder of Google Brain and Coursera, said he finds claims of imminent AGI doom “vague and fluffy…. I can’t prove that AI won’t wipe us out any more than I could prove that radio waves won’t attract aliens that would wipe us out.” In the portion of the event held under Chatham House rule, another participant talked of trying to get Elon Musk to make good on his prediction that AGI will arrive by 2029. What exactly Musk means by that is no clearer than anything he says. Keep in mind that Musk has also said that by now cars would drive themselves and Twitter would be successful and he would soon (not soon enough) be on his way to Mars. One participant doubted not only the arrival of AGI but said large language models might prove to be a parlor trick.

With that BS out of the way, this turned out to be a practical meeting, intended to bring various perspectives together to begin to formulate frameworks for discussion of responsible use of AI. The first results will be published from the mountaintop in January.

I joined a breakout session that had its own breakouts (life is breakouts all the way down). The circle I sat in was charged with outlining benefits and risks of generative AI. Their first order of business was to question the assignment and insist on addressing AI as a whole. The group emphasized that neither benefits nor risks are universal, as each will fall unevenly on different populations: individuals, organizations (companies to universities), communities, sectors, and society. They did agree on a framework for that impact, asserting that for some, AI could:

  • raise the floor (allowing people to engage in new skills and tasks to which they might not have had access — e.g., coding computers or creating illustrations);
  • scale (that is, enabling people and organizations to take on certain tasks much more efficiently); and
  • raise the ceiling (performing tasks — such as analyzing protein folding — that heretofore were not attainable by humans alone). 

On the negative side, the group said AI would:

  • bring economic hardship; 
  • enable evil at scale (from exploding disinformation to inventing new diseases); and
  • for some, result in a loss of purpose or identity (see the programmer who laments in The New Yorker that “bodies of knowledge and skills that have traditionally taken lifetimes to master are being swallowed at a gulp. Coding has always felt to me like an endlessly deep and rich domain. Now I find myself wanting to write a eulogy for it”).

This is not to say that the effects of AI will fit neatly into such a grid, for what is wondrous for one can be dreadful for another. But this gives us a way to begin to define responsible deployment. While we were debating in our circle, other groups at the meeting tackled questions of technology and governance. 

There has been a slew of guidelines for responsible AI — most lately the White House issued its executive order, and tech companies, eager to play a game of regulatory catch-up, are writing their own. Here are Google’s, these are Microsoft’s, and Meta has its own pillars. OpenAI has had a charter built on its hubristic presumption that it is building AGI. Anthropic is crowdsourcing a “constitution” for AI, filled with vague generalities about AI characterized as “reliable,” “honest,” “truthful,” “good,” and “fair.” (I challenge either an algorithm or a court to define and enforce the terms.) Meanwhile, the EU, hoping to lead in regulation if not technology, is writing its AI Act.

Rather than principles or statutes chiseled permanently on tablets, I say we need ongoing discussion to react to rapid development and changing impact; to consider unintended consequences (of both the technology and regulation of it); and to make use of what I hope will be copious research. That is what WEF’s AI Governance Alliance says it will do. 

As I argue in The Gutenberg Parenthesis regarding the internet — and print — the full effect of a new technology can take generations to be realized. The timetable that matters is not so much invention and development but adaptation. As I will argue in my next book, The Web We Weave: Why We Must Reclaim the Internet from Moguls, Misanthropes, and Moral Panic (out from Basic Books next year), this debate must occur less in the context of technology than of humanity, which is why the humanities and social sciences must be in the circle.

At the meeting, there was much discussion about where we are in the timeline of AI’s gestation. Most agreed that there is no distinction between generative AI and AI. Generative AI looks different — momentous, even — to those of us not deeply engaged in the technology because now, suddenly, the program speaks — and, more importantly, can compute — our language. Code was a language; now language is code. Some said that AI is progressing from its beginning, with predictive capabilities, to its current generative abilities, and next will come autonomous agents — as with the GPT store Altman announced only a week before. Before allowing AI agents to go off on their own, we must trust them. 

That leads to the question of safety. One participant at WEF quoted Altman in a recent interview, saying that the company’s mission is to figure out how to make AGI, then figure out how to make it safe, and then figure out its benefits. This, the participant said, is the wrong order. What we need is not to make AI safe but to make safe AI. There was much talk about “shifting left” — not a political manifesto but instead a promise to move safety, transparency, and ethics to the start of the development process, rather than coming to them as afterthoughts. I, too, will salute that flag, but….

I have come to believe there is no sure way to guarantee safety with the use of this new technology — as became all too clear to princes and popes at the birth of print. “What is safe enough?” asked one participant. “You give me a model that can do anything, I can’t answer your question.” We talk of requiring AI companies to build in guardrails. But it is impossible for any designer, no matter how smart, to anticipate every nefarious use that every malign actor could invent, let alone every unintended consequence that could arise.

That doesn’t mean we should not try to build safety into the technology. Nor does it mean that we should not use the technology. It just means that we must be realistic in our expectations, not about the technology but about our fellow humans. Have we not learned by now that some people will always find new ways to do bad things? It is their behavior more than technology that laws regulate. As another participant said, a machine that is trained to imitate human linguistic behavior is fundamentally unsafe. See: print. 

So do we hold the toolmaker responsible for what users have it do? I know, this is the endless argument we have about whether guns (and cars and chemicals and nukes) kill people or the people who wield them do. Laws are about fixing responsibility, thus liability. This is the same discussion we are having about Section 230: whom do we blame for “harmful speech” — those who say it, those who carry it, those who believe it? Should we hold the makers of the AI models themselves responsible for everything anyone does with them, as is being discussed in Europe? That is unrealistic. Should we instead hold to account users — like the schmuck lawyer who used ChatGPT to write his brief — when they might not know that the technology or its makers are lying to them? That could be unfair. There was much discussion at this meeting about regulating not the technology itself but its applications.

The most contentious issue in the event was whether large language models should be open-sourced. Ng said he can’t believe that he is having to work so hard to convince governments not to outlaw open source — as is also being bandied about in the EU. A good number of people in the room — I include myself among them — believe AI models must be open to provide competition to the big companies like OpenAI, Microsoft, and Google, which now control the technology; access to the technology for researchers and countries that otherwise could not afford to use it; and a transparent means to audit compliance with regulations and safety. But others fear that bad actors will take open-source models, such as Meta’s LLaMA, and detour around guardrails. But see the prior discussion about the ultimate effectiveness of such guardrails. 

I hope that not only AI models but also data sets used for training will be open-sourced and held in public commons. (Note the work of MLCommons, which I learned about at the meeting.) In my remarks to another breakout group about information integrity, I said I worried about our larger knowledge ecosystem when books, newspapers, and art are locked up by copyright behind paywalls, leaving machines to learn only from the crap that is free. Garbage in; garbage multiplied. 

At the event’s opening reception high above San Francisco in Salesforce headquarters, I met an executive from Norway who told me that his nation wants to build large language models in the Norwegian language. That is made possible because — this being clever Norway — all its books and newspapers from the past are already digitized, so the models can learn from them. Are publishers objecting? I asked. He thought my question odd; why would they? Indeed, see this announcement from much-admired Norwegian news publisher Schibsted: “At the Nordic Media Days in Bergen in May, [Schibsted Chief Data & Technology Officer Sven Størmer Thaulow] invited all media companies in Norway to contribute content to the work of building a solid Norwegian language model as a local alternative to ChatGPT. The response was overwhelmingly positive.” I say we need a similar discussion in the anglophone world about our responsibility to the health of the information ecosystem — not to submit to the control and contribute to the wealth of AI giants but instead to create a commons of mutual benefit and control.

At the closing of the WEF meeting, during a report-out from the breakout group working on governance (where there are breakout groups, there must be report-outs; it’s the law), one professor proposed that public education about AI is critical and that media must play a role. I intervened (as we say in circles) and said that first journalists must be educated about AI because too much of their coverage amounts to moral panic (as in their prior panics about the telegraph, talkies, radio, TV, and video games). And too damned often, journalists quote the same voices — namely, the same boys who are making AI — instead of the scholars who study AI. The issue of The New Yorker I referenced above has yet another interview with former Google computer scientist Geoffrey Hinton, who has already been on 60 Minutes and everywhere.

Where are the authors of the Stochastic Parrots paper, former Google AI safety chiefs Timnit Gebru and Margaret Mitchell, along with linguists Emily Bender and Angelina McMillan-Major? Where are the women and scholars of color who have been warning of the present-tense costs and risks of AI, instead of the future-shock doomsaying of the AI boys? Where is Émile Torres, who studies the faux philosophies that guide AI’s proponents and doomsayers, which Torres and Gebru group under the acronym TESCREAL? (See the video below.)

The problem is that the press and policymakers alike are heeding the voices of the AI boys who are proponents of these philosophies instead of the scholars who hold them to account. The afore-fired-and-possibly-resurrected Sam Altman gets invited to Congress. When UK PM Rishi Sunak held his AI summit, whom did he invite on stage but Elon Musk, the worst of them. Whom did Sunak appoint to his AI task force but another adherent of these philosophies. 

To learn more about TESCREAL, watch this conversation with Torres that Jason Howell and I had on our podcast, AI Inside, so we can separate the bullshit from the necessary discussion. This is why we need more meetings like the one WEF held, with stakeholders besides AI’s present proponents so we might debate the issues, the risks — and the benefits — they could bring. 
