By MAYA WHITNEY, with BROOKS TIGNER
BRUSSELS – Which applications of artificial intelligence (AI) will prove the most critical, and what should be done to control them? The answers are far from clear, judging by the diverse opinions aired at the Cybersec conference here on 27 February.
The biggest AI issue is deep learning. Used ethically, it will save time and human energy across many facets of everyday life. But if that deep learning is “poisoned”, AI could become a nearly unstoppable force for evil.
For Paweł Lawecki, project leader at the Boston Consulting Group, there is an opportunity for AI to play a positive role for the military, for example, by protecting its information and detecting irregularities in critical algorithms. But Lawecki also sees a much darker future for AI. “Reverse engineering the deep learning component of ‘good’ AI to poison it and make it do something else – well, AI has the possibility to fight itself which, I think, is just around the corner,” he told the conference.
AI is commonly separated into two categories: “black box” and “white box”.
Black box technologies are systems whose algorithms are not analysed by humans. These “deep thinking” technologies save the most time and energy in information processing; Facebook, for example, uses black box algorithms to decide what appears on its users’ screens. But this kind of AI cannot explain itself to a human, and so cannot indicate whether it has been “poisoned”, since humans would not necessarily know which of its self-generated algorithms are legitimate and which are not.
White box technologies are the reverse: algorithms and systems that can be understood and explained to humans. They are typically less complex than their black box counterparts, and therefore usually slower at processing information.
The security concerns lie mainly with black box algorithms as AI’s deep learning capabilities grow in strength. Handled correctly, its positive applications are vast, such as in Europe’s health and financial sectors, where enormous amounts of data require rapid processing. AI can also help create smart cities built around security; Israeli cities already use it to combat the threat of terrorism.
For example, AI can automatically sort through smart city-generated data to focus only on the most relevant information streams or video feeds, explained Szymon Janota, business unit manager at Future Processing, a software company based in Gliwice, Poland.
But what happens when those feeds are tampered with and the deep learning gets poisoned? Many at Cybersec warned of the need to accelerate the pace of laws and regulations to keep up with AI’s possibilities.
Mady Delvaux-Stehres, an MEP from Luxembourg and rapporteur of a European Parliament resolution on civil law rules on robotics, expressed frustration with the speed of legislation moving through the EU’s bureaucracy: “I understand why, but I wish the progress would move faster so we have controls in place sooner rather than later.”
AI of all kinds falls under the EU’s General Data Protection Regulation (GDPR) and, in future, should also be regulated by the EU’s Cybersecurity Certification Framework, proposed in September 2017. The latter will define tailored schemes governing the scope of certification, categories of products and services, evaluation criteria and security requirements, among other aspects.
Whether the framework and the GDPR will be sufficient to protect personal privacy in Europe in the rapidly approaching era of ever-smarter AI is the crucial question – and one for which, at this point, no one has any iron-clad answers.
If self-learning algorithms continue along their exponential curve of learning, should they – or the robotic systems they might inhabit – be accorded a legal personality and accompanying rights? The ethical questions raised by bloodless but self-thinking machines will soon be upon us. Yet there are few, if any, indications that legislatures across the globe are ready to confront them, much less legislate for them.