ChatGPT's artificial intelligence can produce artificial truth

ChatGPT is being touted as the superpowered AI of science fiction lore, with the potential to inflame academic dishonesty, render jobs obsolete, and perpetuate political bias. 

Unsurprisingly, governments are now taking heavy-handed measures to combat this perceived AI problem.

Italy’s recent ChatGPT ban has prompted several countries – including France, Ireland, Germany, and Canada – to consider similar policies blocking OpenAI’s popular artificial intelligence program. According to the Italian Data Protection Authority, ChatGPT does not have "any legal basis that justifies the massive collection and storage of personal data." The agency gave the company 20 days to respond with changes or face a hefty multimillion-dollar fine. Meanwhile, Elon Musk and other industry leaders are calling for an "AI pause."

It is too early to determine whether ChatGPT will actually live up to these claims. Since the long-term impact is still unclear, knee-jerk reactions like national bans yield little societal benefit. Our governments should focus instead on mitigating the chatbot’s immediate harms, such as misinformation and defamation.

Chatbots trained on large language models, such as OpenAI’s GPT-4, Google’s Bard, and Microsoft’s Bing Chat, fall under the larger umbrella of generative AI, which uses machine-learning systems to create videos, audio, pictures, text, and other forms of media. While U.S. regulators have grappled with questions related to algorithmic bias, often in the context of decision-making systems that assist in hiring and lending, generative AI poses an array of new questions and challenges.

For instance, DALL-E can generate realistic images and art from user prompts. As a machine-learning model, DALL-E produces new content by "learning" from large swaths of data, at times by appropriating works of art and images of real people. Italy’s ban targets this privacy concern, but any prohibition presupposes problems with an emerging, evolving technology before those problems have been fully defined. The effectiveness of the policy depends on the extent to which those assumptions prove correct.

National bans also neglect the technology’s positive applications, such as increasing efficiency and productivity by making tedious tasks easier. Health experts predict that generative AI can handle administrative work and improve the patient experience. If the ban is implemented successfully, Italy – and any country that follows suit – will succeed only in cutting ordinary users off from a popular program and discouraging domestic researchers from developing generative AI systems. Restrictions fall on law-abiding citizens, not on bad actors who use the technology for nefarious purposes such as deception and fraud.

While bans may not be the solution for nascent technology, sensible and targeted regulations can ameliorate present harms. With ChatGPT, there is a significant gap between public perception of the chatbot and its actual abilities and accuracy. Its "learning" resembles imitation and mimicry far more than genuine understanding. Although the program generates seemingly human-like responses, they can lack depth and, at their worst, amount to manufactured facts. Despite these flaws, many users do not check the veracity of ChatGPT’s responses and instead treat them as data-driven truth.

Consider a simple prompt I gave ChatGPT: "quotes from lawmakers about AI." It proceeded to list a series of convincing yet entirely fabricated citations; every link was broken or invalid. Similarly, SCOTUSblog, a legal-analysis website, asked ChatGPT 50 questions about the Supreme Court and found that the program was wrong or misleading in a majority of its responses. For the chatbot, success means predicting and producing a plausible response to the user’s request – not an accurate one.
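
Readers who want to run the same kind of spot-check can do so in a few lines. Below is a minimal sketch in Python; the URLs are hypothetical placeholders standing in for whatever citations a chatbot returns, not the ones from my session.

    import urllib.request
    import urllib.error

    # Hypothetical placeholders -- substitute the URLs the chatbot actually cited.
    cited_urls = [
        "https://example.com/senate-hearing-quote",
        "https://example.com/lawmaker-ai-statement",
    ]

    for url in cited_urls:
        try:
            # A genuine citation resolves to an HTTP status; fabricated ones rarely do.
            with urllib.request.urlopen(url, timeout=10) as resp:
                print(f"{url} -> HTTP {resp.status}")
        except (urllib.error.URLError, ValueError) as err:
            print(f"{url} -> broken or invalid ({err})")

A reachable link does not prove a quote is real, but a broken one is a quick tell that the response was manufactured.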

Incorrect information can have serious consequences when the stakes are high, as with defamation, which carries legal ramifications, though it remains unclear who would be held responsible for an AI’s speech. In Australia, a mayor who was falsely accused by ChatGPT of bribery is contemplating a lawsuit against OpenAI, in what would be the first defamation case against the chatbot.

Lawmakers in the United States and around the world should evaluate how emerging AI systems interface with the law. Sweeping prohibitions will do little to clarify these murky legal expectations, yet they will stifle the development of a program that millions find useful.
