By PATRICK STEPHENSON
BRUSSELS – The WannaCry ransomware attacks in May 2017 disabled hundreds of thousands of computers across more than 150 countries, even hitting hospitals in the United Kingdom. But the attack would have been far worse had a single 22-year-old cybersecurity researcher not crippled the unfolding attack by registering a domain name, an act that proved to be a ‘kill switch’ for the spreading malware.
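The kill-switch mechanism worked because the malware checked, before spreading, whether a particular hard-coded domain resolved; once the researcher registered that domain, the lookup began to succeed and the malware stood down. A minimal sketch of the idea in Python, using a hypothetical placeholder domain (not the actual WannaCry domain) and an injectable resolver for testing:

```python
import socket

# Hypothetical placeholder -- NOT the real WannaCry kill-switch domain.
KILL_SWITCH_DOMAIN = "example-kill-switch.invalid"

def kill_switch_active(domain, resolve=socket.gethostbyname):
    """Return True if the domain resolves, i.e. someone has registered it.

    WannaCry-style logic: an unregistered domain fails to resolve, so the
    malware continues to spread; once a researcher registers the domain,
    the lookup succeeds and the malware halts -- an accidental kill switch.
    """
    try:
        resolve(domain)
        return True   # domain registered -> kill switch tripped
    except OSError:   # socket.gaierror subclasses OSError
        return False  # lookup failed -> domain unregistered

def should_spread(domain=KILL_SWITCH_DOMAIN, resolve=socket.gethostbyname):
    """Invert the check, as the malware effectively did."""
    return not kill_switch_active(domain, resolve)
```

Registering the domain thus flipped `should_spread` to `False` worldwide at once, without needing to touch any infected machine.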
The researcher did not fully understand what he was doing at the time, but his expertise stopped the attack far more quickly than institutional forces such as the police ever could. But what about ‘hacktivists’ or ‘black-hat’ hackers, who compromise cyber-systems for ideological or personal ends? Should such hackers be allowed to trade their black hats for white ones, becoming cyber-security researchers who help defend against the very attacks they once launched themselves?
That was the major question at “Software Vulnerabilities Disclosure: the European Landscape,” a workshop held at the Centre for European Policy Studies in June. Speaking under the Chatham House Rule, speakers from major companies and transatlantic institutions reviewed how to incorporate former black-hat hackers into security teams, and how to encourage big companies and major institutions to accept hackers’ help after they’ve changed hats.
A European Commission official kicked off the debate by praising vulnerability disclosure. “At first look, vulnerability disclosure is all good…then it starts getting complicated. The knowledge of vulnerabilities introduces risk until the vulnerability is mitigated,” said the official. If a hacker or self-styled ‘security researcher’ found a software vulnerability and published it online, he added, the danger would actually increase as a window opened between attackers learning of the vulnerability and the company or institution fixing it.
The official admitted that Europe was behind in dealing with the issue. “Most of what we know is based on experiences in the US,” he said. “We have a lot of knowledge on the technical level, but we’re behind on bringing all this knowledge to the policy level. It’s a road that the US has been on for longer than we have.”
A spokesman for a prominent software vendor and cloud provider laid out the types of disclosure and their drawbacks and benefits. Full disclosure, where hackers publish a newly discovered vulnerability online, can be exploited immediately and increases user risk. In a no-disclosure scenario, governments and software vendors acquire and stockpile vulnerabilities for their own advantage, provoking reprisals and recrimination when the vulnerabilities eventually become known.
Between these two options lies a third: “coordinated vulnerability disclosure”, or CVD, in which ethical hackers become ‘security researchers’ by finding a vulnerability, disclosing its existence to the relevant vendors and law enforcement authorities, and working with the vendor to fix it.
The problem is that some companies have no CVD policy, meaning that if a security researcher contacts them about a flaw, the vendor’s reflex is to call the police and prosecute the person who found it rather than work with the researcher.
Participants learned about a case in the Netherlands where a hacker found an easily exploitable security flaw in the server of a well-known hospital. He did not contact the hospital directly but shared his information with a journalist, who contacted the hospital for a reaction. Learning that the hospital was preparing a press release that would have spoiled his scoop, the journalist published a story about the security flaw. Presuming the hacker was the aggressor, the hospital authorities called the police. In retaliation, the hacker entered the hospital’s server and downloaded the medical files of several Dutch celebrities. Although he claimed to be performing a public duty, a judge eventually sentenced him to a different sort of public duty: 120 hours of community service.
A happier CVD case involved vulnerabilities that two ethical hackers found in KPN modems. Finding that the modems could be misused for denial-of-service (DoS) attacks, the hackers contacted KPN and alerted the company. In response, KPN invited the hackers to its offices and asked them to demonstrate the flaw. They did so, and KPN fixed the vulnerabilities. The company rewarded the hackers with KPN electronics, allowed them to present their findings at a technology congress, and kept in touch, in effect making them true security researchers.
For one prominent vendor official, this example of successful CVD holds great promise, and he argued that software vendors could encourage it through corporate policy. One example is Microsoft’s ‘Bug Bounty’ programme. In its initial version, the programme offered a ‘mitigation bypass bounty’ of up to US$100,000 for “truly novel exploitation techniques” for breaking into Microsoft’s latest operating system. The company later added a second US$100,000 “Bounty for Defense” for defensive ideas accompanying a mitigation bypass submission.
As malware and ransomware like WannaCry and its successor Petya ravage our digital landscape, we need all the help we can get. The goal now is to get more corporations to adopt formal CVD policies, so they don’t end up calling the police on the former black-hats who are trying to help.