By BROOKS TIGNER, with KYLE ATTAR
BRUSSELS – Members of the European Parliament (MEPs) are demanding a ban on weapons that have no “meaningful human control”.
The resolution, passed overwhelmingly on 12 September with 566 MEPs in favour, is non-binding on the 28 member states, but it has the backing of Federica Mogherini, the EU’s security and defence policy chief. She has already begun an international dialogue aimed at building global consensus on the direction of autonomous warfare.
The resolution notes that lethal autonomous weapons (LAWs) are machines without the ability or capacity to make human decisions and that, as such, remote operators must take responsibility for life-or-death decisions. Much like drones, these weapons raise strong ethical and moral dilemmas regarding how targets are selected and how force is applied in a given situation.
Parliament is being quite cautious about the possible repercussions for private-sector research that aims to develop artificial intelligence (AI) for civilian use. These reservations point to the need for a narrow definition of militarised autonomous systems, rather than a blanket ban on anything that might bridge the gap between a civilian AI capability and a military one.
“I believe the best way ahead is to agree on some common principles regarding the military use of artificial intelligence: define the boundaries of its applications so that, within those limits, scientists are free to explore the immense positive potential of artificial intelligence,” Mogherini told the MEPs during their plenary session in Strasbourg.
By the parameters envisioned by the EP, LAWs encompass a slew of different technologies, some of which are already in military service, from smart missile systems and unmanned aerial vehicles (UAVs) to AI-controlled turrets and emplacements.
Regarding civilian applications of AI, many are wary of its use in any field because of its potential application in a military capacity, even if that is not its intended function. This is especially true of terrorist groups, who can easily adapt civilian AI to fit their own military agendas.
The main focus of the EP’s resolution is not so much to ban AI in combat as to reinforce the idea of human control over weapon systems when it comes to pulling the trigger. Technology analysts such as Ulrike Esther Franke, who studies AI and weaponry at the European Council on Foreign Relations, agree with the premise. “We need to view [computers equipped with AI] in the future more as consultants” than as those who control the trigger, said Franke.
The UPSHOT: A common phrase in the computer world is “garbage in, garbage out”, and its sentiment still applies to more complicated technologies like AI. If a weaponised AI-equipped device does not have a large enough library of data and images to draw from, the results could be catastrophic. AI’s ability to recognise specific characteristics and act on them has improved vastly, but the process is far from perfect.
Even without errors of logic, situations arise in which there is no right or easy decision to be made. A human behind a trigger is responsible for the weapon’s use, but when the weapon is a LAW, who bears the moral burden?