The deployment of robotic weapons and artificial intelligence will save more money for defence if shared with other public sectors

Euro-View: Researcher Jaana Kuula on robotic defence

Security and defence discussions are now filled with speculation about robotic weapons and artificial intelligence (AI). The problem is that security cannot be built on robotic weapons, autonomous systems or AI alone. Autonomous systems are more like operating platforms and firing systems than weapons per se. Electro-optical weapons, firearms, missiles and nuclear weapons, for example, are defined by their impact; robotised features simply increase that impact.

But how will autonomous weapons be used in operations?
Inevitably, most combatants would like to see them applied in frontline operations, where they can deliver a preventive first strike while avoiding the unnecessary use of human force. However, a full strategy requires more than frontline actions. Recent years have shown how provocation, intrusion, terrorism, cyber operations, fake news and other information operations blur the distinction between peace and war.

The correct strategy, with or without robots, is not easy to define. Fatal accidents, natural catastrophes and humanitarian disasters often escalate into violence, and such crises often require armed forces to secure the delivery of aid. In addition, many battles take place among civilians: electronic systems and cyber space, for example, sit largely in the civilian arena. Despite the immaterial and invisible nature of these spaces, weapons used in these theatres have the potential to cause serious harm.

It is therefore essential to consider how robotised, AI-based, electro-optical, acoustic, electronic and cyber weapons can be prevented from harming civilians. Indeed, if terrorists turn to such weapons, there must be ways to stop them before they cause massive destruction. And if such weapons are needed in humanitarian crises, then automated and robotised systems should be capable of analysing and respecting human rights.

For ethical reasons, some AI developers and human rights activists do not accept robotised weapons at all, and have asked the United Nations to prohibit their use. But since many forms of these systems already exist, their further development can hardly be avoided. Consequently, new kinds of defence systems will have to be developed and deployed.

For example, for Europe’s airspace the European Commission has proposed the creation of a ‘U-Space’ for drones at altitudes of up to 150 metres by 2019. Yet even a fully developed U-Space would not stop the use of unregistered and unidentified aircraft weighing less than 250 grams, or of certain categories of heavier ones. Nor would U-Space probably take into account or prohibit unidentified land and water traffic movements, where the number of unmanned vehicles is currently tiny but destined to grow.

To take a different example: many defensive digital and electronic locks and walls can be bypassed by other means, such as non-electronic physical force or chemical, biological, radiological or nuclear attacks. That is why proactive autonomous and remotely controlled detection, intelligence and surveillance systems are needed beyond regulated unmanned traffic. Autonomous robotic drones could detect radiation risks and chemical warfare agents, as well as threats from conventional or downgraded arms.

In conclusion, robotised AI-based weapons and systems require knowledgeable, educated staff. They are expensive and demand continuous maintenance and capability upgrades. They do not immediately raise national security to a new level, since earlier layers of defence remain in place and still require conventional weaponry as a backbone or backup. Overall, capability requirements in defence will not decrease; instead they will increase as new capabilities are added on top of existing ones.

Robotised capabilities will not remove the need for human staff. The latter will be engaged in the development, maintenance and operation of electronic, cyber and robotised systems. Collaboration between different authorities will be needed, as well as comprehensive national security strategies and international cooperation in the defence sector.

As for the cost impact of security-oriented robotics, budgetary resources will determine the cooperation models, especially in smaller countries. But instead of developing separate equipment for defence forces and for internal security, it would make sense to share capability development and resources across public sectors, and between larger and smaller countries. That way, a more powerful defence can be built together with less economic effort.

Jaana Kuula, Ph.D., is a researcher within the Faculty of Information Technology at Finland’s University of Jyväskylä, and manager of the EU-funded “Toxi-triage” project. She can be reached at tel: +358 40 8053272 and email: jaana.kuula@jyu.fi.
