Canadian Security Magazine

Two of Canada’s AI gurus warn of war by algorithm as they win tech ‘Nobel Prize’

By The Canadian Press   

MONTREAL — Two of Canada's artificial intelligence pioneers are warning about the consequences of building AI into robotic weapons and outsourcing lethal decisions to machines, calling for an international agreement on the deployment of such systems as Canada marches toward the binary battlefield.

Geoffrey Hinton and Yoshua Bengio, who along with computer scientist Yann LeCun won the Turing Award — often called the Nobel Prize of the technology industry — on Wednesday, say so-called weaponized AI and killer robots could spell danger for civilians.

“I think we need to worry about lethal autonomous weapons,” said Hinton, a professor emeritus at the University of Toronto and a senior researcher at Google Brain.

“Those are things that aren’t a worry about the distant future; those are things that are coming now. The technology is already capable of producing these things and they’re desperately in need of regulation.”

Hinton compared robotic weapons such as drones to land mines, which were banned in a 1997 international treaty. “They’re very stupid, but they’re lethal and they’re autonomous,” he said of the explosives.

Facial recognition technology and other forms of computer vision or surveillance could soon be deployed to identify individuals or locations for drone strikes, said Bengio, a professor at the Université de Montréal.

“You could basically select a particular list of people and have them killed.”

Bengio said that even if key players decline to sign an international convention — the United States, China and Russia declined to sign the Ottawa Treaty banning land mines — the awareness such an agreement stirs up can deter proliferation.

“The American companies gradually stopped producing land mines because of the moral stigma that became attached to doing this. So those treaties play not just a legal role, but also set social norms in ways that end up influencing behaviour,” he said.

Last year, Google opted not to renew a contract with the Pentagon for Project Maven — the U.S. military’s “pathfinder” AI program — after more than 3,000 employees signed a protest letter.

The technology at work in Project Maven, which uses machine learning to scan drone video for targets, has already been deployed in the Middle East and Africa, with the eventual aim of loading the software onto drones to locate people and objects on the fly.

The Canadian Armed Forces are now exploring how to use AI, with the air force conducting experiments, according to army Maj. Geoffrey Priems.

“My personal view is we need to look at this and approach it very methodically and get it right, as opposed to rushing and screwing it up,” said Priems, who is tasked with sketching out a concept by June 2020 for AI deployment.

“Nobody wants to cause a death through some friggin’ computer, unless we chose for that to happen, intentionally.”

Retaining human agency — and accountability — in decisions of life and death is one issue. Another is digital defects or built-in biases, said Graham Taylor, who heads a new AI ethics centre at the University of Guelph.

“Military organizations may be trying to frame these systems such that they reduce the civilian casualties,” he said. “On the other hand, there is always the capability of these systems making mistakes where a human is not directly in charge…and it becomes difficult to place responsibility on a particular individual.”

Privacy concerns also factor into the reams of data generated by surveillance imagery.

“It’s an area that’s under development right now, but there’s no widespread regulation regarding the use of specifically AI technologies in the military,” he said.

The Department of National Defence says international law should form the basis for emerging weapons protocols, and that “more discussion around this complex and multi-faceted issue is needed at the international level and within Canada.”

An expert group established under the United Nations’ Convention on Certain Conventional Weapons held its inaugural meeting in November 2017, but progress toward a treaty has been slow, Bengio said.

“The way the UN works is not very functional. It’s enough that a few countries oppose to slow things considerably,” he said.

Bengio was among 400 participants at a November 2017 forum that produced the Montreal Declaration on the Responsible Development of Artificial Intelligence. It lays out a principle that “the decision to kill must always be made by human beings, and responsibility for this decision must not be transferred to an AI.”

— Christopher Reynolds

News from © Canadian Press Enterprises Inc. 2019

