How best to regulate the murky depths of algorithmic decisions?

By PATRICK STEPHENSON

BRUSSELS – Citizens of most industrialised countries, including Europeans and Americans, know that their bank or financial service provider will use a numerical formula to rate their credit-worthiness. But in future, government algorithms could generate a number that rates not just your financial history but also your social history, online behaviour, legal transgressions and even political opinions that a particular government might find hostile. That number, in turn, could influence everything from loan approvals to your child’s application to a prestigious school.

The implications of such a ‘citizen score’ loomed large at the conference, “Computers, Privacy and Data Protection (CPDP) 2017: the Age of Intelligent Machines”, held here on 25-27 January. During its introductory debate — “Algorithms: Too Intelligent to be Intelligible?” — panellists explored how numerical calculations increasingly dominate political and economic decision-making, particularly by large governments and corporations.

The trend towards citizen scoring has already begun. A 28 November 2016 article in The Wall Street Journal reported that the Chinese Communist Party intends to roll out a ‘social credit’ system by 2020, rating each person’s social, financial and online behaviour and distilling it into a single numerical score. That score would determine a person’s access to a variety of services and activities, including low-interest loans, prestigious schools, overseas travel and government jobs. The Journal quoted a Party slogan stating that the system would “allow the trustworthy to roam everywhere under heaven while making it hard for the discredited to take a single step.”

Panellists agreed that, while algorithms have proved valuable in fields from hiring to politics, their complex decision-making demands greater transparency. But how much transparency is required, and who should impose it? Marc Rotenberg, president of the Washington DC-based Electronic Privacy Information Center, said that without transparency, algorithms could become instruments for enforcing dictatorship.

Referring to the planned Chinese programme, he said that “citizen scoring is the absence of data protection, transparency, and democratic institutions.” Democracies, he argued, must instead find ways to ‘audit’ algorithms so that the basis for their decisions can be understood. In the absence of such audits, the use of algorithms should be strictly regulated, if not prohibited outright. “A system that you cannot audit [should be] a system that you cannot use,” he said.
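
To make the idea concrete, one minimal form of such an audit is to probe a black-box scoring system with counterfactual inputs and record which attributes flip its decision. The short Python sketch below is purely illustrative: the scoring function, threshold and attribute names are hypothetical stand-ins for whatever opaque model an auditor might confront.

def audit_decision(score_fn, applicant, attributes, threshold=0.5):
    """Report which single-attribute changes flip the model's decision."""
    baseline = score_fn(applicant) >= threshold
    findings = {}
    for attr, alternative in attributes.items():
        probe = dict(applicant, **{attr: alternative})   # change one attribute at a time
        findings[attr] = (score_fn(probe) >= threshold) != baseline
    return baseline, findings

def opaque_score(person):
    # Hypothetical black-box model: the auditor sees only inputs and outputs.
    return 0.25 + 0.3 * (person["income"] > 40000) - 0.1 * (person["district"] == "D9")

applicant = {"income": 45000, "district": "D9"}
approved, findings = audit_decision(opaque_score, applicant,
                                    attributes={"district": "D1", "income": 30000})
print("approved:", approved)
print("decision hinges on:", [a for a, flipped in findings.items() if flipped])

Run on these invented numbers, the probe shows that the refusal hinges on the applicant's district rather than income, precisely the kind of basis-of-decision insight Rotenberg wants auditors to be able to extract.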

Asked how an algorithm could be audited, Rotenberg cited science-fiction author Isaac Asimov’s three laws of robotics: a robot must not harm a human being, must obey human commands unless they conflict with the first law, and must preserve its own existence so long as that does not conflict with the first two.

“I want two more rules,” posited Rotenberg. “I want the machine to explain the basis of its decisions…and [secondly] it should always reveal its identity to us.”

His concerns stem from the fact that many people online do not know that algorithms are tracing their activity and analysing their behaviour. So-called learning algorithms use this information to offer tailored products and services to the individual — or to anticipate political behaviour that would alarm repressive government authorities. “They know you, but you don’t know them,” he said.

Speaker Krishna Gummadi of Germany’s Max Planck Institute for Software Systems offered a more subtle perspective on algorithmic decision-making. A designer of algorithms himself, he agreed that algorithms must be transparent. However, he argued that their main use is to bring transparency to human decision-making by noting patterns or changes in otherwise unintelligible, human-produced data. Algorithms have found patterns of discrimination in NYPD stop-and-frisk activity and in Airbnb rentals, for example, that human analysis alone would not have discovered.
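
The kind of pattern-finding Gummadi describes need not be exotic. The Python sketch below runs a simple ‘outcome test’ on invented stop-and-search records (not real NYPD or Airbnb data): it compares how often stops of each group actually turn up contraband, the sort of aggregate disparity that case-by-case human review tends to miss.

from collections import defaultdict

# Invented stop records: each entry is one stop and whether it found contraband.
stops = (
    [{"group": "A", "hit": 1}] * 30 + [{"group": "A", "hit": 0}] * 70 +
    [{"group": "B", "hit": 1}] * 10 + [{"group": "B", "hit": 0}] * 90
)

totals = defaultdict(lambda: [0, 0])          # group -> [number of stops, hits]
for stop in stops:
    totals[stop["group"]][0] += 1
    totals[stop["group"]][1] += stop["hit"]

for group, (n, hits) in sorted(totals.items()):
    print(f"group {group}: {n} stops, hit rate {hits / n:.0%}")

# A persistently lower hit rate for one group suggests its members are being
# stopped on a weaker standard of suspicion, a disparity invisible case by case.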

Moreover, he said algorithmic decisions are often misunderstood because they reveal simultaneous truths that can seem contradictory. He cited the widespread use of the ‘COMPAS’ recidivism tool – a programme used to estimate the likelihood of released criminals offending again in the near future. A product of US company Northpointe Inc, the programme is used by US judges, parole officers and other law enforcement personnel to evaluate an offender’s chances for parole.

Northpointe’s analysis of COMPAS’s results shows that blacks and whites have similar probabilities of recidivism at every level of risk – results which, it says, demonstrate that the tool is fair. However, a separate analysis by ProPublica, a non-profit newsroom that conducts investigative journalism, points to a different conclusion. ProPublica argues that COMPAS’s results are considerably worse for blacks than for whites, in part because black defendants were more likely to be falsely identified as future risks.

In Gummadi’s view, both interpretations were correct because they measured different aspects of the same problem. “The good thing about algorithms is that you can detect decision-making you don’t want in an algorithm, and find ways to constrain it,” he said. By contrast, “you can’t constrain the biases in human beings.”
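
A toy calculation, using invented figures rather than the actual COMPAS data, shows how both readings can hold at once: a risk score can be equally precise, or ‘calibrated’, for two groups and still produce far more false positives for the group with the higher underlying reoffence rate.

# Invented confusion-matrix counts for two hypothetical groups of 100 offenders
# each, assessed by the same risk tool; these are not the real COMPAS figures.
def rates(flagged_reoffend, flagged_clean, cleared_reoffend, cleared_clean):
    """Base rate, precision of the 'high risk' label, and its false positive rate."""
    total = flagged_reoffend + flagged_clean + cleared_reoffend + cleared_clean
    base_rate = (flagged_reoffend + cleared_reoffend) / total
    precision = flagged_reoffend / (flagged_reoffend + flagged_clean)
    false_positive_rate = flagged_clean / (flagged_clean + cleared_clean)
    return base_rate, precision, false_positive_rate

group_a = rates(flagged_reoffend=40, flagged_clean=20, cleared_reoffend=10, cleared_clean=30)
group_b = rates(flagged_reoffend=20, flagged_clean=10, cleared_reoffend=5, cleared_clean=65)

for name, (base, prec, fpr) in [("A", group_a), ("B", group_b)]:
    print(f"group {name}: base rate {base:.0%}, "
          f"'high risk' precision {prec:.0%}, false positive rate {fpr:.0%}")

# Both groups see the same precision (the sense in which Northpointe calls the
# tool fair), yet far more of group A's non-reoffenders are wrongly flagged as
# high risk (the disparity ProPublica highlights). When base rates differ, the
# two fairness criteria cannot in general be satisfied at the same time.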

But the human desire for superficial clarity makes a profound understanding of algorithmic results problematic, at best. A complex truth is hard to understand, and even harder to regulate through government-imposed rules. “It’s not that algorithms are too intelligent,” Gummadi said. “It’s that we, humans, try to be too intelligible to be intelligent.”

     THE UPSHOT: The EU has only gingerly approached the idea of issuing EU-wide rules covering algorithmic decision-making through its General Data Protection Regulation (GDPR). And no wonder: the algorithms are incredibly complex, and any rules about them must involve a level of technical and mathematical sophistication far beyond that of EU officials high up the policy pyramid.
     This fundamental and yawning gap – between the technical ignorance of policymakers eager for regulation, and the expertise of algorithm writers and operators – was on full display at the CPDP conference. It is the biggest obstacle to meaningful reform that protects privacy while also allowing society to benefit from the tremendous problem-solving potential that self-learning algorithms hold.
     That disconnection causes officials to make well-meaning but inflexible pronouncements about fundamental rights, without understanding the intractable problems that such posturing can pose for operators. For a solution to be found, the knowledge gap between policymakers and operators must be bridged.

     ps@securityeurope.info
