One of the most frustrating critiques I encounter when advocating for stronger privacy protections for consumers is: “But won’t that help criminals and pedophiles?” I consider this question to be an inflammatory operationalization of a broader question: “are privacy-enhancing technologies ethical?”
Ethics helps us determine what is right and wrong; it helps us navigate how to live both for ourselves and for others.¹ Considering ethics helps us reason through whether the impacts we are having on the world are, on the whole, helpful or hurtful for society. The question — are privacy-enhancing technologies ethical? — is not well-formed in an academic sense. It is too broad to tackle without first converting it into something more tightly scoped: whose privacy is the technology protecting, and from whom? But it is an interesting question because it forces us to consider the trade-offs between societal good and individual liberty. I teach a class on professional ethics to computer science undergraduates at Georgia Tech, and many of these students have fairly well-developed opinions about the latter but little experience contending with the former. Government surveillance is bad, surveillance capitalism is bad, so privacy must be ethical, right?
I would not be upset if that were the takeaway, but the point of the exercise is to scrutinize the process one uses to arrive at that conclusion.
My take is that to have a productive conversation about the ethics of privacy, we must consider context. In other words, we must consider the ethics of a marginal change in privacy.
Here are two opposing strawman arguments to consider. First, imagine a world where privacy-enhancing technologies are so strong that they have emboldened the widespread distribution of, e.g., child pornography, with no way for law enforcement agencies to catch suspected distributors and consumers. In such a world, efforts to develop even stronger privacy-enhancing technologies would not be “ethical.” Why? Because the marginal utility of privacy would be negative. From a utilitarian perspective, stronger privacy protections would cause more harm than good — they would only further embolden the subversion of the social norms and laws that allow us to function as a society. From a deontological perspective, further strengthening privacy protections would conflict with our duty to keep our children safe from predators. From a social contract theory perspective, stronger privacy protections would inhibit our ability to reap the benefits of social living — e.g., the peace of mind of knowing that there is a societal infrastructure to deter and punish sexual predators.
Now, let’s consider a contrasting strawman. Imagine a world where individual citizens have no privacy from the government: anything and everything one does on a networked computing device is subject to scrutiny and analysis. In such a world, people would fear posting anything on the Internet except thoughts guaranteed not to get them in trouble with law enforcement or intelligence agencies. And even beyond voluntary posts, a number of passive behavioral sensor streams would be collected about individuals — CCTV camera feeds, an individual’s location and web history, and so on. Taken to its extreme, we would have a world not unlike the ones depicted by George Orwell in *1984* or Franz Kafka in *The Trial* — one in which an abstract, powerful entity monitors its citizens’ every move in order to detect and correct deviance and abnormality. Democracy would fail — a prerequisite for democracy is an active and engaged population, and a population cannot meaningfully engage in politics if it fears being surveilled and punished for speaking truth to power. In such a world, the development of privacy-enhancing technologies would clearly be “ethical.” From a utilitarian perspective, it would do more good than harm. From a deontological perspective, it would restore individuals’ inherent value by allowing them to speak and congregate freely. From a social contract theory perspective, it would allow individuals to take advantage of the benefits of social living without giving up their individual agency and choice.
Which strawman is more like the current world? Law enforcement would like you to believe that we are closer to the former; it’s how they stoke fear, uncertainty and doubt. It’s one of the reasons why former FBI director James Comey framed advances in cryptography and encryption technologies as the “going dark” problem. In fact, we’re much closer to the latter strawman than to the former — we are in what Georgia Tech professor Peter Swire terms “the golden age of surveillance.” In today’s world, streams of personal data are collected, processed and sold in a manner over which everyday folks like you and me have no agency. In this world — and not some hypothetical future wild west in which encryption is an insurmountable barrier for law enforcement — working on privacy-enhancing technologies is not only ethical; it is critical to the survival of our democracy.
Thanks for reading! If you think you or your company could benefit from my expertise, I’d be remiss if I didn’t alert you to the fact that I am an independent consultant and accepting new clients. My expertise spans UX, human-centered cybersecurity and privacy, and data science.
If you read this and thought: “whoah, definitely want to be spammed by that guy”, there are three ways to do it:
- Subscribe to my mailing list (spam frequency: a couple of times a month)
- Follow me on Twitter (spam frequency: weekly)
- Subscribe to my YouTube channel (spam frequency: ¯\_(ツ)_/¯)
You also can do none of these things, and we will all be fine.
¹ Ethical is not the same thing as legal, of course. In an ideal world, the laws of a society would align with the ethics of its people. But, in reality, there is a principle-practice gap between ethics and law — what is legal is not always ethical (e.g., it is legal to track people on the web without consent), and what is ethical is not always legal (e.g., it was illegal for Edward Snowden to release documents outlining corporate-government surveillance agreements, but many believe his actions were ethical).