by Branka Marijan
This blog post was originally published by Project Ploughshares: https://ploughshares.ca/2019/05/more-clarity-on-canadas-views-on-military-applications-of-artificial-intelligence-needed/
Canada appears to be leading the way in the responsible use of artificial intelligence (AI), with a number of initiatives and guidelines intended to ensure that AI applications are based on sound reasoning and common values. However, largely absent from its discussions of strategy and risk are military applications of AI—in particular, the growing autonomy of weapons systems.
Canada was the first country to release a national AI strategy, aimed at developing thought leadership on the economic, ethical, legal, and policy implications of AI. As well, Canada, in partnership with France, has set up the International Panel on Artificial Intelligence, which aims to examine the implications of AI and ensure the responsible use of AI technologies. There has also been a Canada-United Kingdom Symposium on ethics in AI, along with numerous academic workshops and conferences on the use of AI in government service delivery.
But are the military dimensions of AI developments getting enough attention?
Autonomous weapons systems are mentioned only once in Canada’s defence policy: “The Canadian Armed Forces is committed to maintaining appropriate human involvement in the use of military capabilities that can exert lethal force.” Nowhere is it said how “appropriate human involvement” is determined—or even what the term means. Nor does the policy rule out cases in which no human involvement at all is deemed necessary.
A recent article in the Financial Post reveals that Canada is developing a voice assistant for its warships. Rear Admiral Casper Donovan, director general of future ship capability for the Royal Canadian Navy, is quoted as saying, “We haven’t explicitly told ourselves we have a red line, but there is clearly a red line, that we’re not pursuing any AI that is connected to employing weapon systems. … Our systems, especially our weapons systems, are under a chain of command and that chain of command is on people, and people are employing those systems.”
Donovan’s statement is somewhat reassuring. It would be even more so if he had explained where and what the red line was. Is it official policy? Do all commanders understand the red line in the same way? Or are decisions being made on an ad hoc basis? More clarity is needed as the Canadian military develops and acquires new systems.
While Canada has been quite active on AI regulation at the domestic and bilateral levels, the story is different on the international stage. At United Nations discussions on autonomous weapons, countries including Austria have openly called for a ban on weapons that lack meaningful human control over critical functions, such as the selection and engagement of targets. Canada, however, has been mostly quiet, suggesting only that more time is needed to examine different dimensions of the technology. Why is Canada not openly supporting the idea that meaningful human control is critical?
Meanwhile, technological advancements and investments in military AI continue. Seven key countries lead in military applications of AI: the United States, China, Russia, the United Kingdom, France, Israel, and South Korea. Each is developing and researching weapons systems with greater autonomy.
Recent reports revealed that the U.S. Department of Defense was seeking vendors and researchers to work on its Advanced Targeting and Lethality Automated System program for ground-combat vehicles. According to details revealed by Quartz, ATLAS would essentially give tanks the ability to independently identify and engage a target. Russia is developing similar projects. Unsurprisingly, both the United States and Russia have pushed back against calls for a ban or other regulation of autonomous systems.
While China has mentioned the need for a ban on offensive weapons, it seems to want the option of having autonomous systems for defence and continues to invest heavily in AI research. Because defensive systems often have offensive applications, greater clarity is needed about China’s position.
Many experts emphasize both the inherently dual-use nature of AI, which gives it military as well as civilian applications, and the need to address malicious uses of AI technologies. Now, with advanced militaries viewing these technologies as critical to national security, Canada must be one of the countries setting international norms on use.
Decisions are being made that involve the taking of human lives by autonomous machines. The Canadian government has shown domestic and bilateral leadership on ethical uses of AI. Now it’s time for Canada to don its colours, enter the global playing field and play its A game. The stakes are high and a win is critical. A loss could be, literally, fatal.
Branka Marijan is a Senior Researcher with Project Ploughshares
Project Ploughshares is a long-term partner of MCC. This May, Rebekah Sears, MCC Ottawa Policy Analyst, joined the Executive Committee of the Project Ploughshares Governing Committee; she has served on the Governing Committee since May 2018, representing MCC.
As part of this partnership, we have tried to regularly feature the excellent analysis of Ploughshares on our blog and in our advocacy work. Read some of our recent Ploughshares-related blogs, including on Canada’s arms deal with Saudi Arabia, a call for leadership on nuclear disarmament, and AI and the threat of fully autonomous weapons, aka killer robots, among other topics.
Photo: HMCS Toronto leads a sail past during Rendez-vous 2017 in Québec City. Canadian Forces Photo