
Will self-learning software lead to hell or paradise?

By MAYA WHITNEY, with BROOKS TIGNER

BRUSSELS – Which applications of artificial intelligence (AI) will prove the most critical, and how should they be controlled? The answers are far from clear, judging by the diverse opinions voiced on the subject at the Cybersec conference here on 27 February.

The biggest AI issue is deep learning. Used ethically, it will save time and human energy across many facets of everyday life. But if its deep learning is “poisoned”, AI could become a nearly unstoppable force for evil.
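To make the idea of “poisoning” concrete, the sketch below (a hypothetical illustration, not something presented at Cybersec) trains the same simple classifier twice with scikit-learn: once on clean data and once after an attacker has silently flipped a share of the training labels, a basic form of data poisoning. The dataset, model and poisoning rate are assumptions chosen only for illustration.

```python
# Hypothetical illustration of label-flipping data poisoning.
# Assumptions: scikit-learn, a synthetic dataset, and a small neural
# network standing in for a "deep learning" system.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The attacker "poisons" the training data by flipping 30% of its labels.
rng = np.random.default_rng(0)
flip = rng.choice(len(y_train), size=int(0.3 * len(y_train)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[flip] = 1 - y_poisoned[flip]

clean = MLPClassifier(max_iter=500, random_state=0).fit(X_train, y_train)
poisoned = MLPClassifier(max_iter=500, random_state=0).fit(X_train, y_poisoned)

print("accuracy after clean training:   ", clean.score(X_test, y_test))
print("accuracy after poisoned training:", poisoned.score(X_test, y_test))
```

In this toy setting the poisoned model typically scores noticeably worse on unseen data, even though nothing about its outward behaviour announces that its training was tampered with.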

For Paweł Lawecki, project leader at the Boston Consulting Group, there is an opportunity for AI to play a positive role for the military, for example, by protecting its information and detecting irregularities in critical algorithms. But Lawecki also sees a much darker future for AI. “Reverse engineering the deep learning component of ‘good’ AI to poison it and make it do something else – well, AI has the possibility to fight itself which, I think, is just around the corner,” he told the conference.

AI is commonly separated into two categories: “black box” and “white box”.

Black box technologies are systems whose algorithms cannot be readily analysed by humans. These “deep thinking” technologies save the most time and energy in processing information: Facebook, for example, uses black box algorithms to decide which information is displayed on its users’ screens. But this kind of AI cannot explain itself to a human and therefore cannot indicate whether it has been “poisoned”, since humans would not necessarily know which self-generated algorithms are legitimate and which are not.

White box technologies are the reverse: algorithms and systems that can be understood by, and explained to, humans. They are typically less complex than their black box counterparts, which usually makes them slower and less powerful at processing data.
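As a rough illustration of that distinction (again a hypothetical sketch, not an example given at the conference), the code below fits an interpretable “white box” decision tree and an opaque “black box” neural network to the same data using scikit-learn: the tree’s decision rules can be printed and read by a human, while the network offers only thousands of learned weights.

```python
# Hypothetical contrast between a "white box" model (inspectable rules)
# and a "black box" model (opaque learned weights); dataset and model
# choices are illustrative assumptions only.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)

# White box: a shallow decision tree whose rules a human can read directly.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["sepal length", "sepal width",
                                       "petal length", "petal width"]))

# Black box: a neural network; its weights do not explain its decisions.
net = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000,
                    random_state=0).fit(X, y)
n_params = sum(w.size for w in net.coefs_) + sum(b.size for b in net.intercepts_)
print("opaque parameters learned by the network:", n_params)
```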

The security concerns lie more with black box algorithms as AI’s deep learning capabilities grow in strength. Handled correctly, its positive applications are vast, such as in Europe’s health and financial sectors, where enormous amounts of data require rapid processing. AI can also underpin smart-city security; Israeli cities already use it to combat terrorist threats.

For example, AI can automatically sort through smart city-generated data to focus only on the most relevant streams of information or video feeds, explained Szymon Janota, business unit manager at Future Processing, a software company based in Gliwice, Poland.

But what happens when those feeds are tampered with and the deep learning gets poisoned? Many at Cybersec warned of the need to accelerate the pace of laws and regulations to keep up with AI’s possibilities.

Mady Delvaux-Stehres, an MEP from Luxembourg and rapporteur of a European Parliament resolution on civil law rules on robotics, expressed frustration with the speed of legislation moving through the EU’s bureaucracy: “I understand why, but I wish the progress would move faster so we have controls in place sooner rather than later.”

AI of all kinds falls under the EU’s General Data Protection Regulation (GDPR) and, in future, should also be regulated by the EU’s Cybersecurity Certification Framework, proposed in September 2017. The latter will define tailored schemes governing the scope of certification, categories of products and services, evaluation criteria and security requirements, among other aspects.

Whether that framework and the GDPR will be sufficient to protect personal privacy in Europe in the rapidly approaching era of ever-smarter AI is the crucial question – and one for which no one, at this point, has any iron-clad answers.

     THE UPSHOT: AI already plays many roles in daily life that seem benign but could have serious repercussions for privacy and perceived reality in the future. For example, its use by Facebook to filter which posts and news show up on a user’s news feed contributed to the spread of fake news during Brexit and the US elections.
     If self-learning algorithms continue along their exponential curve of learning, then should they – or the robotic systems they might inhabit – be accorded a legal personality and accompanying rights? The ethical questions raised by bloodless but self-thinking machines will soon be upon us. However, there are few, if any, indications that legislatures across the globe are ready to confront them, much less legislate for them.

     mayawhitney308@gmail.com
     bt@securityeurope.info
