AI 'kill switch' will make humanity less safe, could spawn 'hostile' superintelligence: AI Foundation

AI Foundation CEO Rob Meadows and co-founder Lars Buttler say researchers should speed up AI development and warn against building a "kill switch."

Executives behind the American artificial intelligence (AI) company AI Foundation are cautioning against implementing kill switches in machine systems, arguing that such a move could increase the chances of a superintelligence that is hostile toward human civilization.

According to a new Yale CEO Summit survey, 42% of polled CEOs agreed that AI could potentially end humanity within five to ten years.

Citing the study, AI Foundation CMO and chair Lars Buttler said the debate around AI needs to be elevated, suggesting that people react emotionally to the new technology because they lack an understanding of what is happening behind the scenes.

However, both Buttler and CEO Rob Meadows raised several concerns about the advancement of AI and the possible creation of an artificial general intelligence (AGI) capable of reasoning and decision-making equal to or beyond that of a human.

"With AI, you will always have this accidental danger, these accidental problems, you know? AI is obedient. It does what you tell it to do, but you don't know exactly which path it will take to get there," Buttler said.

He recalled the infamous "paperclip problem" as a fundamental shortcoming of AI models: a thought experiment in which philosophers imagine an AI tasked with making paperclips that inadvertently causes an apocalypse by diverting ever more resources toward that single goal. Similarly, an AI installed in a motor vehicle and tasked with getting from A to B may run over a jaywalker because it was not programmed to prioritize anything other than reaching the required destination.

"If you forget to rule out something, you know, it's an unknown unknown, trouble will happen and that will be forever our relationship with AI," Buttler added.

Buttler said that once humanity achieves AGI, those priorities flip almost completely. An AGI, as opposed to today's AI, will be able to fully understand the world around it and will likely avoid the accidental mistakes of its predecessors. However, it may also refuse to do what it is told.

"Nobody knows how far away we are from AGI. It could be just a few years, as many people think. It could be much longer," he said. "But when we are in the world of AGI, then we almost have to share the planet. Then it's a completely different story. It's not a smart but dumb, you know, obedient kind of thing, but it's a very, very smart and potentially disobedient, you know, people say new life form."

Meadows noted that among all these predictions about the future of algorithms and models, there is a version of the future in which society never makes it to AGI. In that scenario, AI could slip into the hands of the wrong people, who could use it for purposes enormously harmful to humanity.

"There are existential risks, but, you know, there's also just risks to civilization and peace on the planet," he said.

To highlight the potential harm, the American tech entrepreneur revealed that AI Foundation has internal research showing that just a few minutes of a person's voice and video are enough to create a highly realistic deepfake.

"I could call up your mom and your mom would not know the difference, that she just got FaceTime'd by you. We can't put that out into the world. You know, what we can do with it is use it to put the antidote to that out in the world," Meadows added.

In 2018, that antidote became a reality in Reality Defender, a deepfake detection suite that analyzes images, video and audio against billions of indexed assets to identify manipulated media.

The company says the system draws on the latest AI detection models, including some not yet widely adopted, and combines comprehensive scanning techniques, detailed reports, and proprietary deepfake and generative-content fingerprinting technology to help organizations identify fraud and misinformation.
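
Reality Defender has not published its internals, so the following is only a minimal sketch of how a detection suite of this general shape is commonly structured: several specialized detectors each score a file for signs of synthesis, and the scores are aggregated into a verdict and a per-file report. Every detector, score and threshold below is invented for illustration.

```python
from statistics import mean

# Hypothetical ensemble sketch of a media-scanning pipeline; the stub
# detectors stand in for trained models and return placeholder scores
# (0.0 = likely authentic, 1.0 = likely synthetic).

def face_artifact_score(media: bytes) -> float:
    """Stub for a vision model scoring face blending/warping artifacts."""
    return 0.91

def voice_spoof_score(media: bytes) -> float:
    """Stub for an audio model scoring vocoder/TTS artifacts."""
    return 0.84

def fingerprint_score(media: bytes) -> float:
    """Stub for matching against known generative-model fingerprints."""
    return 0.77

DETECTORS = {
    "face_artifacts": face_artifact_score,
    "voice_spoofing": voice_spoof_score,
    "model_fingerprint": fingerprint_score,
}

def scan(media: bytes, threshold: float = 0.5) -> dict:
    """Run every detector and aggregate the scores into a report."""
    scores = {name: fn(media) for name, fn in DETECTORS.items()}
    overall = mean(scores.values())
    verdict = "likely manipulated" if overall > threshold else "likely authentic"
    return {"scores": scores, "overall": round(overall, 3), "verdict": verdict}

print(scan(b"suspect-video-bytes"))
```

A production system would run trained models rather than stubs and would weight detectors by media type, but the score-aggregate-report shape is a standard one for suites of this kind.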

The technology falls under the umbrella of protective AI, helping people avoid manipulation by criminal actors, big companies and even governments. According to Buttler, Reality Defender now protects many banks against AI-generated voice fraud. The company was also called to Washington, D.C., before the 2020 election for a roundtable in which it detailed how its tech could be used to help identify manipulated content intended for political interference.

Despite concerns across industries, Meadows said the AI Foundation believes the world should not slow down work on the technology, breaking from the position of prominent figures who signed a letter back in March urging a temporary pause on AI development.

"All of the most dangerous technologies and, you know, tools and other things that have been invented over the years did way more good than harm. And we need to [advance] in a thoughtful way and be careful not to be reckless. But we're strong on the side of if anything, we need to go faster and educate more people on what is possible here," he said.

Weighing in on AI safety and guardrails, Buttler strongly disagreed with putting kill switches on AI and AGI data centers, asserting that such a move is a "horrible idea" that would make humanity less safe.

"Once we are in the world of AGI, being hostile, threatening, you know, not entirely peaceful to AGI from the start might just create exactly the problems that we tried to avoid," Buttler said.

"When has it ever been a good idea for a less smart person or group to try to control a smarter one, you know, antagonize it? AGI might even then have a moral justification to take a much more hostile stand."

Instead, Buttler suggested that humanity should take a collaborative approach toward AGI, noting that doing so would significantly increase the chance of reciprocity on the part of the superintelligence, allowing everyone to benefit.

Meadows likened the relationship between humans and AGI to that of a dog and its owner. A dog doesn't fully understand what the owner is doing or saying, but it understands enough; if the owner says "walk," the dog gets excited. Similarly, an AGI may operate on a level that humans cannot fully decipher, at least until humans have implanted chips that let them communicate with the intelligence adequately.

"There's going to be a period of time where we're going to have to trust in something we don't fully understand and I think that's going to be hard," he said. "It could unlock all the mysteries of the universe and cure all diseases and longevity and peace and all of that could come out of it—just like it is to be a well taken care of dog. I don't have to worry about food and water and shelter. Life is good."
