Manufacturers of computer systems should welcome researchers' efforts to find flaws.
In late December, Berlin-based computer-security researcher Karsten Nohl announced that his group had found a vulnerability in the algorithm used to prevent eavesdropping in the most widely used mobile telephone standard in the world.
News outlets around the globe quickly reported that the research would make it easy for anyone to listen in on mobile telephone calls. The industry group that promotes the standard, the GSM Association, just as quickly defended the system and played down the importance of Nohl's finding.
The episode has highlighted an ongoing tension in computer-security research. The need for such research has never been greater: malicious hacking attacks are rapidly getting bolder and more sophisticated, even as law-abiding citizens are being asked to do everything from voting electronically to having their medical information stored on computerized systems. The best way for researchers to improve the security of these systems is to attack them — to find their flaws so that they can be fixed. But this can lead researchers into a grey area in which their efforts can look a lot like criminal activity.
Some manufacturers, fearful that the revelation of a flaw could undermine their credibility in the marketplace, have reacted furiously to such research. In 2008, for example, two groups were the subject of legal action by organizations attempting to prevent the release of weaknesses the researchers had found in the smart cards used in mass transit systems (see Nature doi:10.1038/news.2008.1044; 2008).
Both those attempts were ultimately unsuccessful and the research was disseminated. Nonetheless, the threat of legal action haunts the field, not least because of uncertainty over exactly what work is legal. Researchers were particularly incensed about the 2008 cases because both the groups had followed the community's widely accepted 'responsible disclosure' protocol: researchers who uncover a flaw don't go public until the system's developer has had a chance to fix it.
They were right to be outraged: security research done in the spirit of responsible disclosure is something that computer-system manufacturers should encourage, not fight. When flaws are detected and fixed before outlaws can exploit them, everyone benefits.
That said, not every computer-security researcher has been so meticulous about the conduct of their work. Investigators say that they have seen work published or presented at conferences with which they are personally uncomfortable.
The computer-security community should engage in a wide-ranging discussion of the ethics of its work, especially as researchers move into ever greyer areas, such as examining or even controlling networks of computers that have been taken over by criminals. If nothing else, this discussion could help it to head off a worst-case scenario in which a research project that oversteps the bounds leads to an onerous crackdown that impedes genuinely useful research.
Computer-security research is a relatively young field and many of its leading members are far removed from the traditional image of academics. Much of their research is disseminated through less formal routes than peer-reviewed journals, such as blogs, and their conferences can seem like strange, anarchistic affairs to researchers in other fields.
But the public now relies on these people to defend it against everything from credit-card fraudsters to terrorists. They are genuine researchers. And they deserve a considered ethical framework within which to conduct their vital activity.
Security ethics. Nature 463, 136 (2010). https://doi.org/10.1038/463136a