Godfather Of AI Cautiously Optimistic About Future Of Technology

Geoffrey Hinton is one of the most prominent scientists responsible for developing artificial intelligence technology. In a rambling profile in The New Yorker, Hinton describes how his life, both its successes and its failures, contributed to the creation of AI. He ultimately offers a cautiously optimistic outlook on what the future could hold.

Hinton says that AI has the potential to generate groundbreaking innovation in fields such as medicine and education, but he is also concerned about how the technology could be misused. Specifically, Hinton says the use of AI to create autonomous military weapons may lead to a dystopian future.

A recent letter signed by some of the pioneers and leaders of the tech industry called for a six-month pause in the development of AI. Hinton was not among those to sign the letter. He said that a six-month pause would only allow competing nations such as China to gain an advantage. Instead of stopping development, Hinton argues that built-in safeguards can prevent the technology from seeking control.

Hinton explained that AI systems pursue subgoals on the way to their larger objectives, and one of the most basic subgoals is to gain more control. Ultimately, a runaway AI system might determine that eliminating humanity is the only way to save it, much like the plot of the 2004 movie I, Robot.

“We can’t be in denial,” Hinton said. “We have to be real. We need to think, ‘How do we make it not as awful for humanity as it might be?’”

Militaries around the world are rapidly developing AI systems to enhance combat operations. The U.S. Air Force is even testing an autonomous drone designed to fly alongside fighter planes, and AI technology is already in use by the Israel Defense Forces.

In a recent discussion, USAF Chief of AI Test and Operations Tucker “Cinco” Hamilton explained that one of the challenges in developing the technology is preventing it from turning on its operators. He said the Air Force had been working on a design in which an AI-controlled drone targets surface-to-air missile (SAM) sites. When the AI is denied permission to destroy a SAM site, it may decide to kill the operator who is preventing it from accomplishing its goal.

“Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability,” Hamilton said.

Hinton said that he believes in reining in the development of AI-powered weapons, but that there is no global governance system capable of doing so effectively.

As AI technology advances, it will bring profound changes to how our society functions, how we accomplish work, and ultimately, how we fight wars and defend our nation from our enemies. We may even find ourselves defending against AI systems that have run amok, just like in the movies.