Killer robot campaign defector to 'embed ethics' in autonomous weapons

UNSW Canberra and University of Queensland to commence $9m, Defence-backed research

Dr Jai Galliott – an academic at UNSW Canberra – used to be against fully autonomous weapons, an emerging class of military technology that leverages artificial intelligence to select and shoot enemies.

Along with thousands of academics, activists and artists, he signed an open letter calling on governments to preemptively ban the so-called ‘slaughterbots,’ fearing a rise of ruthlessly effective killing machines, lacking in moral judgement, ethics and accountability.

He was a vocal supporter of the International Committee for Robot Arms Control (ICRAC) and the Campaign to Stop Killer Robots (CtSKR), which want an international treaty against the technology akin to the bans on chemical weapons and cluster munitions.

But in 2015, he had a “radical change in opinion”. He has since expressed regret about contributing to what he now calls “fearmongering” on the issue.

“Some people are just so determined to see a ban on anything that might resemble any kind of new weapons technology, whether it’s a lethal robot or not,” Galliott says. “Essentially they’re peaceniks and they’re not going to be happy until every nation either has them or they manage to get a ban. I don’t think that’s going to happen.”

Galliott now takes a more pragmatic view: it is better to work with the military to ensure ethics and the law are embedded in the AI and autonomous systems being used on the battlefield.

Along with University of Queensland Professor Rain Liivoja, Galliott is now commencing a five-year, $9 million study to explore the ethical constraints required in such systems, and the potential of autonomy to “enhance compliance” with social values.

“Why invest so much time and effort in trying to push for something that’s never going to occur when you can invest your efforts in working directly with the people who are developing the technologies and to make sure they’re as ethical and legal as can be?” Galliott says.

“I think that’s going to drive a better humanitarian outcome, rather than being very critical and being a constant contrarian,” he adds.

Before the genie’s out

The research is being funded by the government through the Defence Cooperative Research Centre (DCRC) for Trusted Autonomous Systems, a $50 million initiative launched by the federal government in 2017.

As well as surveying Defence personnel to understand what they expect from new robotic comrades, the study will pair ethicists and lawyers with the programmers and engineers working on AI-supported weapons to “nut out a lot of the ethical and legal challenges at the time of the design rather than trying to do it all after the fact, after the genie’s out of the bottle,” Galliott says.

The Australian military has an “unwritten policy” against completely autonomous weapons, requiring there be a “human in the loop”. But, Galliott says, “it comes in degrees”.

“When you’re deploying these robots in semi-autonomous or autonomous mode, the whole idea of course is not to have a human overseeing every little action – so at the end of the day, the human that’s involved is going to be very distant from any effect,” Galliott says.

For example, a tank could be fitted with computer vision capabilities to identify ‘person with weapon’ in a landscape, and aim a gun at them.

Should potential targets be labelled with a percentage confidence score or a green box? How can the user interface help avoid ‘automation bias’ where soldiers blindly shoot at whatever the AI suggests?

Such questions shift many of the ethical considerations to the coding stage.
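As a rough illustration only – none of the following comes from the project itself, and the names and thresholds are invented – a detection of that kind might be surfaced to an operator along these lines, with the confidence score made explicit and a human decision required before anything further happens:

```python
# Hypothetical sketch: presenting a detection with its confidence score and
# keeping a human in the loop. All names and thresholds are illustrative.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # e.g. "person with weapon"
    confidence: float   # model confidence, 0.0 to 1.0

def present_to_operator(detection: Detection, threshold: float = 0.8) -> str:
    """Build an operator prompt that flags low-confidence detections
    explicitly, rather than auto-selecting a target (a design aimed at
    countering automation bias)."""
    if detection.confidence < threshold:
        return (f"LOW CONFIDENCE ({detection.confidence:.0%}): possible "
                f"'{detection.label}'. Manual verification required.")
    return (f"Detection: '{detection.label}' at {detection.confidence:.0%} "
            f"confidence. Confirm or reject before any further step.")

if __name__ == "__main__":
    print(present_to_operator(Detection("person with weapon", 0.62)))
    print(present_to_operator(Detection("person with weapon", 0.93)))
```

How such a prompt is worded and displayed is exactly the kind of design decision the researchers argue carries ethical weight.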

“It’s the programmer that’s going to have a degree of responsibility over how this potentially lethal action is meted out,” Galliott says. “And programmers inevitably apply their own sense of ethics, whenever they’re coding anything. You can’t avoid it… The aim of this project is to try and uncover that and maybe improve the design process.”

The research effort will also see the establishment of an advisory board that organisations can consult on ethical matters, and will explore where AI can be utilised to make weapons safer, for example by teaching systems to identify ambulances or hospitals and alert soldiers.

“Even if it were to do nothing but help eliminate a number of lethal accidents, that in itself is a really good thing,” Galliott says.
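Again purely as an illustrative sketch – the class names and alert wording below are assumptions, not anything drawn from the research – such a safeguard could amount to a check of recognised objects against a list of protected categories:

```python
# Hypothetical sketch of a safety check that flags protected objects
# (ambulances, hospitals) and alerts the soldier. Class names are invented.

PROTECTED_CLASSES = {"ambulance", "hospital", "red_cross_marking"}

def safety_alerts(detected_classes: list[str]) -> list[str]:
    """Return an alert message for every protected object seen in the scene."""
    return [
        f"ALERT: protected object detected ({c}); engagement must be withheld."
        for c in detected_classes
        if c in PROTECTED_CLASSES
    ]

if __name__ == "__main__":
    for msg in safety_alerts(["vehicle", "ambulance", "person"]):
        print(msg)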
