
EU law to make online privacy more transparent has its problems

By PATRICK STEPHENSON

BRUSSELS – The main objective of the EU’s General Data Protection Regulation, or GDPR, is to ensure that Europeans can control how worldwide data collectors – public and private, but excluding national security agencies – use the online personal information of EU citizens. Data-collection processes, and their self-learning algorithms, must be transparent and auditable, and companies must hire ‘data protection officers’ (DPOs) to act as agents of national data protection authorities (DPAs) and ensure that user privacy is respected.

While the GDPR may increase the online privacy of Europe’s citizens, the regulation itself is both vague and complex. Worse, its full implementation may spend years tied up in courts. That was the message from some data-collection experts at the conference, “Computers, Privacy and Data Protection 2017: The Age of Intelligent Machines”, held here on 25-29 January.

One of the event’s panels (“AI and GDPR: Concretely, what are the obligations and steps to take?”) wrestled with the implications of the GDPR’s wide-reaching mandate.

The GDPR’s article 25 mandates “data protection by design and by default”, which entails, among other things, that “only personal data which are necessary for each specific purpose of the processing are processed”. Agreed last April, the regulation goes into effect on 25 May 2018. Fines for violating its terms can reach €10 million or 2 percent of the offender’s worldwide annual turnover for the preceding financial year, “whichever is higher”.
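In concrete terms, that ceiling is simply the larger of two figures. A minimal sketch of the calculation (the helper function is illustrative, not anything the regulation prescribes; the figures come from the regulation itself, which also sets a higher tier of €20 million or 4 percent for the gravest violations):

```python
# Illustrative sketch of the GDPR's two-tier fine ceiling: the cap is the
# greater of a fixed amount and a percentage of worldwide annual turnover.
# The figures come from the regulation; this helper is purely illustrative.

def gdpr_fine_cap(turnover_eur: float, severe: bool = False) -> float:
    """Return the maximum fine in euros for a given worldwide annual turnover.

    Lower tier (e.g. article 25 violations): EUR 10 million or 2% of turnover.
    Upper tier (e.g. violations of basic processing principles): EUR 20 million or 4%.
    """
    fixed, pct = (20_000_000, 0.04) if severe else (10_000_000, 0.02)
    return max(fixed, pct * turnover_eur)

# A firm with EUR 2 billion in turnover faces a lower-tier cap of EUR 40 million,
# since 2% of turnover exceeds the EUR 10 million floor.
print(gdpr_fine_cap(2_000_000_000))  # 40000000.0
```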

Panellist Joe Cannataci, the UN’s special rapporteur on privacy, pointed to the wide gap between the regulation’s ambitions and data practitioner realities. “If you understand AI (artificial intelligence)…you’re really going to scratch your head when you read article 25.”

He said most citizens typically have little idea if, or how, data collectors and their algorithms are collecting and using their data. “They wouldn’t know algorithmic transparency if it bit them in the butt,” he said. “So I’d rather look at real remedies for real people when it comes to the law, and that’s why I’d like to see the law come up with something not only interesting, but useful.”

Julie Brill, cybersecurity lead and partner at international law firm Hogan Lovells, also stressed practical solutions, given the sophistication of the self-learning algorithms behind AI. “The computer scientists who designed some of these systems are unable to explain their [own] results,” she said. Noting that journalists asked Google DeepMind engineers how their Go-playing programme AlphaGo had managed to beat a professional Go player, she said the engineers were at a loss to come up with an answer: “When it gets to AI, it gets to be almost inexplicable.”

Brill recommended using other metrics to judge data processing’s transparency. “I find input-output information to be much more effective on a personal level for data subjects and consumers than the complicated computer programme,” she observed. Thus, data collectors could tell consumers and users what information they collect, and what conclusions an algorithm makes or is likely to make, rather than trying to explain how the algorithm made its decisions.
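In practice, Brill’s “input-output” approach amounts to disclosing what goes into a system and what comes out, while treating the model itself as a black box. A hypothetical sketch of such a disclosure (the field names and the stand-in scoring model are invented for illustration; neither Brill nor the GDPR specifies any such interface):

```python
# Hypothetical sketch of "input-output" transparency: tell the data subject
# what was collected and what conclusion the system is likely to draw,
# without attempting to explain the model's internals.

from typing import Any, Callable

def disclose(record: dict[str, Any],
             predict: Callable[[dict[str, Any]], str]) -> str:
    """Build a plain-language disclosure from the collected inputs and the
    model's output, treating the model itself as a black box."""
    collected = ", ".join(sorted(record))
    conclusion = predict(record)
    return (f"Data we collected about you: {collected}. "
            f"Conclusion our system is likely to draw: {conclusion}.")

# Stand-in for an opaque scoring model -- its internals are never disclosed.
def credit_model(record: dict[str, Any]) -> str:
    return "low credit risk" if record.get("income", 0) > 30_000 else "high credit risk"

print(disclose({"income": 45_000, "postcode": "1000", "age": 41}, credit_model))
```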

At heart, she noted, many algorithms cannot be fully “audited” because they evolve over time. “If you’re talking about a really complex algorithm, one of the problems is that the algorithm changes,” she said. “It moves. It grows. It morphs. Disclosing it at one particular time is not going to be accurate for where the algorithm is an hour from now, or a day from now, or a week from now.”
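Her point about drift is easy to demonstrate: any system that keeps learning makes an earlier snapshot stale. A toy illustration (the running-average update rule is generic, chosen only to make the drift visible):

```python
# Toy illustration of why a one-time "audit" of a learning system goes stale:
# each new batch of data shifts the model, so yesterday's disclosed parameters
# no longer describe today's behaviour.

def update(weight: float, observation: float, step: int) -> float:
    """Incorporate one new observation into a running average."""
    return weight + (observation - weight) / step

weight = 0.0
snapshot = None
for step, observation in enumerate([1.0, 4.0, 2.0, 8.0, 5.0], start=1):
    weight = update(weight, observation, step)
    if step == 2:
        snapshot = weight  # what an auditor would have seen at step 2
print(f"audited at step 2: {snapshot:.2f}, actual now: {weight:.2f}")
# audited at step 2: 2.50, actual now: 4.00
```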

Demosthenes Ikonomou, senior expert at the EU Agency for Network and Information Security (ENISA), agreed. “The danger is that there are a lot of things to be interpreted, and in certain cases like this one, the amount of interpretation is quite large,” he said, pointing to the huge scale of ‘data lakes’ that algorithms use to reach their conclusions. “We cannot imagine the scale of what we’re talking about,” he said.

EU policymakers are aware of the problem. Věra Jourová, EU Commissioner for Justice, Consumers and Gender Equality, told the conference that “my biggest nightmare is that this is almost mission impossible”. Recounting a dinner with data-collection entrepreneurs, Jourová said they worry about the enormous difficulty of hiring the 30,000-odd DPOs needed, who must combine regulatory and technical expertise. According to the entrepreneurs, those experts do not yet exist in the labour market.

On a note of optimism, however, Jourová said the GDPR’s implementation requires nothing more than cooperation between EU member states and private-sector actors. And its adoption was preferable to a dark alternative future.

“When I go more to the east of Europe, and ask people whether they would [agree to] get a chip here” — she indicated a spot behind her right ear — “that would ensure 100% their security, half the people at least would agree. When everybody has the chip imposed by the state, we will be a totalitarian regime again.”

In the ensuing panel discussion, Gerrit-Jan Zwenne, a law professor at Leiden University, had a more ironic take on the regulation’s prospects. “In one sense I’m optimistic, because the GDPR creates a lawyer’s paradise,” he said. “I’m not so optimistic whether the GDPR will deliver stronger protection of individuals. Most of the rules are very vague and open to interpretation, and a fine will be really difficult to impose. And if the Commission does impose a fine, a court will annul it. This is what I love about it.”

     THE UPSHOT: The desire to make online data collection more transparent, and to give online users more control over their data, is easy to understand. The GDPR, however, is often difficult to understand. It is a document of stunning scope, and of shocking if not absurd ambition, full of declared rights and high-minded principles with seemingly little thought invested in how those rights would be exercised, and those principles enforced. To wit: the first sentence of the recital’s point 4 reads: “The processing of personal data should be designed to serve mankind.”
     No doubt its crafters thought that vagueness and ambiguity were virtues. “It’s the correct approach to stay at a high level without entering into the details,” ENISA staffer Ikonomou told conference participants. But the devil lurks in the details, and private-sector actors seem to want more, not fewer, details about how – for example – they will hire thousands of DPOs when the Commissioner-in-charge herself suggests such skilled individuals might not be available, at least not yet.
     Google put on a brave face at the conference, talking about how it is already forming teams and crafting rules to comply. Little surprise: Google has the means to do so. SECURITY EUROPE wonders how European SMEs will get on, without such deep pockets to restrain the data-collecting impulses of deep minds.

     ps@securityeurope.info
