Tech giants are ill-prepared to combat "hallucinations" generated by artificial intelligence platforms, industry experts warned in comments to Fox News Digital, but corporations themselves say they're taking steps to ensure accuracy within the platforms.
AI chatbots, such as ChatGPT and Google's Bard, can at times spew misinformation or nonsensical text, referred to as "hallucinations."
"The short answer is no, corporations and institutions are not ready for the changes coming or challenges ahead," said AI expert Stephen Wu, chair of the American Bar Association Artificial Intelligence and Robotics National Institute, and a shareholder with Silicon Valley Law Group.
Often, hallucinations are honest mistakes made by technology that, despite promises, still possesses flaws.
Companies should have been upfront with consumers about these flaws, one expert said.
"I think what the companies can do, and should have done from the outset … is to make clear to people that this is a problem," Irina Raicu, director of the Internet Ethics Program at the Markkula Center for Applied Ethics at Santa Clara University in California, told Fox News Digital.
"This shouldn’t have been something that users have to figure out on their own. They should be doing much more to educate the public about the implications of this."
Large language models, such as the one behind ChatGPT, take billions of dollars and years to train, Amazon CEO Andy Jassy told CNBC last week.
In building Amazon's own foundation model Titan, the company was "really concerned" with accuracy and producing high-quality responses, Bratin Saha, an AWS vice president, told CNBC in an interview.
Other major generative AI platforms such as OpenAI's ChatGPT and Google Bard, meanwhile, have been found to be spitting out erroneous answers to what seem to be simple questions of fact.
In one published example from Google Bard, the program claimed incorrectly that the James Webb Space Telescope "took the very first pictures of a planet outside the solar system."
It did not.
Google has taken steps to ensure accuracy in its platforms, such as adding an easy way for users to "Google it" after inserting a query into the Bard chatbot.
Microsoft's Bing Chat, which is based on the same large language model as ChatGPT, also links to sources where users can find more information about their queries, as well as allowing users to "like" or "dislike" answers given by the bot.
"We have developed a safety system including content filtering, operational monitoring and abuse detection to provide a safe search experience for our users," a Microsoft spokesperson told Fox News Digital.
"We have also taken additional measures in the chat experience by providing the system with text from the top search results and instructions to ground its responses in search results. Users are also provided with explicit notice that they are interacting with an AI system and advised to check the links to materials to learn more."
In another example, ChatGPT reported that late Sen. Al Gore Sr. was "a vocal supporter of Civil Rights legislation." In actuality, the senator vocally opposed and voted against the Civil Rights Act of 1964.
Despite steps taken by the tech giants to stop misinformation, experts were concerned about the ability to completely prevent it.
"I don’t know that it is [possible to be fixed]," Christopher Alexander, chief communications officer of Liberty Blockchain, based in Utah, told Fox News Digital. "At the end of the day, machine or not, it’s built by humans, and it will contain human frailty … It is not infallible, it is not omnipotent, it is not perfect."
Chris Winfield, the founder of tech newsletter "Understanding A.I.," told Fox News Digital, "Companies are investing in research to improve AI models, refining training data and creating user feedback loops."
"It's not perfect but this does help to enhance A.I. performance and reduce hallucinations."
These hallucinations could cause legal trouble for tech companies in the future, Alexander warned.
"The only way [they] are really going to look at this seriously is they are going to get sued for so much money it hurts enough to care," he said.
The ethical responsibility of tech companies when it comes to chatbot hallucinations is a "morally gray area," Ari Lightman, a professor at Carnegie Mellon University in Pittsburgh, told Fox News Digital.
Despite this, Lightman said creating a trail between a chatbot's sources and its output is important to ensure transparency and accuracy.
Wu said the world’s readiness for emerging AI technologies would have been more advanced if not for the colossal disruptions caused by the COVID-19 pandemic.
"AI response was organizing in 2019. It seemed like there was so much excitement and hype," he said.
"Then COVID came down and people weren’t paying attention. Organizations felt like they had bigger fish to fry, so they pressed the pause button on AI."
He added, "I think maybe part of this is human nature. We’re creatures of evolution. We’ve evolved [to] this point over millennia."
He also said, "The changes coming down the pike so fast now, what seems like each week — people are just getting caught flat-footed by what’s coming."