Name a technology humans have developed that they haven't used. I can't think of any. So we can surely work on this. But we face a dilemma: once we develop this technology, it will be tempting to use it. The article skirts the question of who makes this decision. Maybe the United Nations, after some unlikely agreement among major powers. But what if the UN doesn't act and some billionaire decides to fund a project?
As computer scientists, we have started to face these questions as software in our hyper-connected world changes society in unpredictable ways. How do we balance privacy, security, usability, and fairness in communications and machine learning? What about net neutrality, self-driving cars, autonomous military robots? Job disruption from automation?
We have governments to deal with these challenges, but the world seems to have lost trust in its politicians, and governments disagree with one another. How does one set different rules across states and countries that apply to software services delivered over the Internet?
All too often companies set these policies, or at least the default policies until government steps in. Uber didn't ask permission to upend the paid-ride business, and only a few places pushed back. Google, Facebook, and others use machine learning with abandon until some governments try to rein them in. The Department of Defense and the NSA, in some sense industries within government, often set their policies without public debate.
What is our role as computer scientists? It's not wrong to create these technologies, but we should acknowledge the ethical questions that come with them and what we technically can and cannot do to address them. Keep people informed, so that the decision makers, whoever they may be, at least have the right knowledge to make their choices.