It is said there is no violence in chess, unless your opponent is a robot.
Last week, during a match at the Moscow Open, a chess-playing robot grabbed and broke a seven-year-old player's finger. The incident drew wide attention from media outlets and the public.
One article reported: "The boy violated the rules, so the unsettled robot took one of his chess pieces, then grabbed his finger and broke it." That description of seemingly intentional behavior stirred sudden panic that the robot had become self-aware. Is that true?
The video shows that the robot first captured the boy's piece and then moved to place another piece on the same square. Rather than waiting for the machine to complete its move, the boy reached in to make a quick reply. The robot continued to set its piece down on that square, so the piece came down on the boy's finger, which was caught between the two pieces. This created the illusion that the robot, upset at the boy's rule violation, had angrily grabbed his finger.
Seen this way, the incident is not an example of AI awakening. Rather, it was an accident caused by immature technology and system errors, which ended up hurting a person. But why are people so concerned about cases of robots hurting humans? Is it because they are deeply afraid of a possible future in which AI awakens?
In fact, reports of human injuries and even deaths caused by immature robot technology and system errors are not rare, and every such case raises fresh concerns about robots awakening.
On January 25, 1979, a worker on the assembly line of a Ford plant in Michigan, U.S., was struck by an industrial robot arm and died at the factory. This is the first well-documented case of a human being killed by an industrial robot.
In July 2015, at an auto parts factory in Michigan, U.S., a robot unexpectedly started up while a maintenance worker was inspecting and adjusting it and pressed a hitch assembly onto her head. She later died of a skull fracture.
On May 7, 2016, a Tesla Model S driven by its owner, an American named Joshua Brown, struck a tractor-trailer while Autopilot was engaged. The investigation found that neither Brown nor the Autopilot system braked; the airbags did not deploy until the car had left the road and hit a tree.
Investigations show that many robot-related accidents occur under non-routine operating conditions such as programming, maintenance, testing, installation, or adjustment. When people work close to robots during these operations, unexpected injuries can happen.
The changing identities of the victims in these reports show that robots have spread from factory production lines into everyday life. Once robots enter daily life, even a low-probability system malfunction will, given the size of the global population, translate into more human casualties.
More worrisome still, a researcher at the University of California, Berkeley has built a robot that can decide for itself whether or not to harm a person intentionally. The robot cost only about $200.
Alexander Reben, the researcher behind the robot, said that "this is the first robot that can decide whether to break Asimov's Three Laws of Robotics." He also said that "the most worrisome thing about AI is that it may get out of control," and he hoped this "philosophical experiment" would trigger more debate about AI.
The anthropologist Hu Jiaqi has also voiced his worries about AI in his speeches. He has said that although AI can at present solve only problems of logical computation, its computing power already far exceeds mankind's; by comparison, human thinking is as slow as rock weathering. If, in the near future, AI acquires the abilities of logical thinking, abstract thinking, and self-learning, it could slip out of our control and exterminate mankind before we even realize the danger.
No technology develops smoothly; mistakes along the way are inevitable. As AI continues to develop, it will become increasingly capable of harming us. At the later stages of its development, if the scenario of "AI awakening" that Mr. Hu Jiaqi worries about becomes real, the blow to mankind will be devastating.
It is hoped that the technicians working in the AI field, and the users of their technology, will think carefully about one question: can mankind really keep AI under control? A technology meant to improve people's standard of living must not be allowed to become the ultimate weapon that could exterminate mankind.