I am often frustrated by discussions of privacy, which usually treat it as an end in itself, or as beneficial only to people who have “something to hide.” But in discussions about, say, government surveillance programs, privacy isn’t about hiding things; it’s a check on government power. In pithy terms: you don’t get to decide whether you have something to hide. The people invading your privacy do, and their decision can have all sorts of negative consequences for you.
This also explains why invasions of privacy are harmful even if they are secret: secret surveillance still represents unchecked government power, making unaccountable secret decisions. Think of Kafka’s The Trial, not Orwell’s 1984.
See also Predictive policing for more on policing and privacy (in the form of Fourth Amendment searches).
Phillip Rogaway’s The Moral Character of Cryptographic Work is a good argument for defending privacy against mass surveillance.
James Q. Whitman, “The Two Western Cultures of Privacy: Dignity Versus Liberty”, 113 Yale Law Journal 1151 (2004). http://www.yalelawjournal.org/article/the-two-western-cultures-of-privacy-dignity-versus-liberty
There is a divide in conceptions of privacy between America and Europe, explored in a surprisingly lucid (for a law review) article. Whitman points out that in Europe, privacy is largely about dignity: the right to control your own public image and to be free from insult or disparagement. This means, for example, that nude models have privacy rights in photographs of them, and may refuse their publication, even if the photographer clearly holds the copyright in the photographs. Similarly, credit reporting agencies exist in Europe in very limited form compared to America, since financial matters are nobody else’s business unless you are bankrupt or in default. Americans, on the other hand, largely conceive of privacy as protection against government interference.
(I can see a connection here between American and European views on copyright, particularly with the European notion of “author’s rights”, which extend beyond mere property rights to an inherent right of authors to control their work. See my review of The Public Domain; see also Copyright and intellectual property.)
Daniel J. Solove, “A Taxonomy of Privacy”, 154 U Penn Law Review 477 (2006). https://ssrn.com/abstract=667622
Solove’s attempt to categorize what harms, specifically, arise from violations of privacy, ranging from surveillance to aggregation to disclosure to decisional interference. Some of the ideas lead to his next paper, below, on privacy as a check on power. Solove gives coherent arguments for why the typical legal treatment of privacy – once something is in public, it is no longer private, and there are no restrictions on its dissemination at all – is wrong, and why the harms of privacy violations are more complex.
Daniel J. Solove, “‘I’ve Got Nothing to Hide’ and Other Misunderstandings of Privacy”, 44 San Diego Law Review 745 (2007). https://ssrn.com/abstract=998565
Takes the privacy-as-liberty argument to perfection. Solove also wrote a book, Nothing to Hide, but I found it disappointingly oversimplified, with minimal discussion of opposing views or in-depth analysis of the issues.
Helen Nissenbaum, Privacy in Context, Stanford University Press (2009).
Proposes the theory of “contextual integrity”, that privacy depends on “context-relative informational norms.” To determine if some new technology or policy threatens privacy, we must determine the context it affects, the existing norms of information flow in that context, the values motivating those norms, and how the new policy would affect them. This is about information flow; Nissenbaum denies that individual pieces of information are public or private, insisting that norms instead govern how information flows between people. Some things we will share with our doctors but not with the person next to us on the airplane.
Contextual integrity does not provide a single test to determine whether some new thing is bad because it violates privacy, but it does point out that information is not public simply because it has been shared, and that whom information is shared with, and for what purpose, is as relevant to privacy as the nature of the information itself.
I suppose this fits well with Solove’s argument: much of the harm of privacy violations comes from how information can be used, not from the revealing of information on its own.
Danielle Keats Citron, “Sexual Privacy”, Yale Law Journal (2019, forthcoming). https://ssrn.com/abstract=3233805
A discussion of issues like sextortion, leaked nude photos, hidden cameras, “deep fake” videos, and other unwanted disclosures of sexual or intimate information. Makes the argument that because intimacy and sex are so core to our identities, personal control over the sharing of sexual information is essential, and privacy allows freedom in our personal lives to explore our identities without fear of shame or retribution. Suggests legislative fixes to ban common violations of sexual privacy.
Richard Posner, “Privacy, Surveillance, and Law”, 75 University of Chicago Law Review 245 (2008). http://chicagounbound.uchicago.edu/cgi/viewcontent.cgi?article=5655&context=uclrev
A contrary perspective, making an ultimately unconvincing argument that warrantless surveillance is necessary for effective counterterrorism; I think detecting terrorists from mass Internet taps and surveillance is an intractable classification problem, and that terrorism is an overblown threat.
[To read] M. Ryan Calo, “The Boundaries of Privacy Harm”, 86 Indiana Law Journal 1131 (2011). https://ssrn.com/abstract=1641487
Jack M. Balkin, “Information Fiduciaries and the First Amendment”, 49 UC Davis Law Review 1183 (2016). http://ssrn.com/abstract=2675270
Summarized in an article in The Atlantic. Argues that regulating the use and disclosure of private data by companies usually violates the First Amendment – you can’t prevent companies from saying true things about their customers. Suggests instead making companies “information fiduciaries”: just as your doctor, attorney, or accountant has professional obligations to act in your best interest and keep your information private, Facebook could have an obligation to act as a fiduciary with your data. Congress can regulate the speech of fiduciaries because their interaction with you is not part of public discourse, but an unequal relationship in which the fiduciary has knowledge or expertise that you do not.
Fiduciary obligations could attach either because a company represents itself as trustworthy or simply because of the business it is in. This would also preempt the third-party doctrine, because we do have a reasonable expectation of privacy in information held by an information fiduciary. To motivate businesses to voluntarily become information fiduciaries, the federal government could preempt state privacy laws for fiduciaries, so that becoming a fiduciary removes the need to comply with fifty different conflicting state rules.
Daniel J. Solove, “Privacy Self-Management and the Consent Dilemma”, 126 Harvard Law Review 1880 (2013). https://harvardlawreview.org/2013/05/introduction-privacy-self-management-and-the-consent-dilemma/
“Privacy self-management” refers to rules giving individuals control over their privacy by requiring them to consent to the collection and use of data. Solove contends that “Privacy self-management does not provide people with meaningful control over their data”, because (a) it is very difficult to make rational decisions about privacy, (b) there are so many entities collecting and using data that you could never have time to manage them all, (c) many privacy harms come from aggregation of data rather than individual data collection, and (d) privacy has social benefits as well as individual benefits. Paternalism is not the answer, because consent is important and people may legitimately make different decisions about their privacy; Solove proposes more careful regulations.
David C. Gray and Danielle Citron, “The Right to Quantitative Privacy”, 98 Minnesota Law Review 62 (2013). http://ssrn.com/abstract=2228919
Proposes a different test for Fourth Amendment violations: instead of asking “how much data did you collect about this specific person?”, ask “could this technology facilitate broad and indiscriminate surveillance if left unchecked?” If so, Fourth Amendment protections should apply, even if you only use the technology in a specific case for something very minor.
Kevin Bankston and Ashkan Soltani, “Tiny Constables and the Cost of Surveillance: Making Cents Out of United States v. Jones”, 124 Yale Law Journal Online 335 (2014). http://www.yalelawjournal.org/forum/tiny-constables-and-the-cost-of-surveillance-making-cents-out-of-united-states-v-jones
An interesting practical approach to the “reasonable expectation of privacy” test. New surveillance technologies should be compared to previous technologies by the cost required to acquire information about suspects, and “if the new tracking technique is an order of magnitude less expensive than the previous technique, the technique violates expectations of privacy and runs afoul of the Fourth Amendment.”
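To make the test concrete, here is a minimal sketch of the comparison; the technique names and dollar figures are my own illustrative assumptions, not numbers from the paper.

```python
def violates_expectation_of_privacy(old_cost_per_hour, new_cost_per_hour):
    """Bankston and Soltani's rule of thumb: if the new technique is at least
    an order of magnitude (10x) cheaper than the old one, treat it as
    violating reasonable expectations of privacy."""
    return old_cost_per_hour >= 10 * new_cost_per_hour

# Hypothetical, illustrative costs: a multi-officer covert tail versus
# attaching a GPS tracker and reading its logs.
covert_tail_cost = 250.0  # dollars per hour (assumed)
gps_tracker_cost = 5.0    # dollars per hour (assumed)

print(violates_expectation_of_privacy(covert_tail_cost, gps_tracker_cost))  # True
```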
Kate Crawford and Jason Schultz, “Big Data and Due Process: Toward a Framework to Redress Predictive Privacy Harms”, 55 Boston College Law Review 93 (2014). http://lawdigitalcommons.bc.edu/bclr/vol55/iss1/4/
Proposes “a right to procedural data due process” while adorably capitalizing “Big Data”. Points out the mismatch between current privacy law and predictive methods: sensitive information can be inferred rather than requested from the user, as in the famous Target story, where Target guessed a customer was pregnant based on her purchasing patterns. This connects with Solove’s conception of privacy: companies and governments can make decisions using inferred private data, so consumers and citizens should have a right to examine the data and models justifying those decisions and to appeal to have them corrected if necessary. For some decisions (credit checks, job offers, etc.) the consumer has an obvious opportunity to seek redress; for others (ad targeting) there is no obvious moment when a decision has been made about them, and an agency like the FTC would need to exercise oversight instead.
This right would be very interesting to see applied to typical Silicon Valley startups, which are seat-of-the-pants operations unlikely to want to slow down long enough for proper due process.
Danielle Keats Citron and Frank Pasquale, “The Scored Society: Due Process for Automated Predictions”, 89 Washington Law Review 1 (2014). https://ssrn.com/abstract=2376209
Gives examples of real harms from predictive scores, including a credit card company adjusting customer credit risk “because they used their cards to pay for marriage counseling, therapy, or tire-repair services”. Using credit scores as an example, explores the need for due process and regulatory oversight, including a right to inspect data held by companies about you, dispute inaccurate data, and review predictive algorithms. Argues that “scoring systems should be subject to licensing and audit requirements when they enter critical settings like employment, insurance, and health care”, and that the FTC should be empowered to review scoring algorithms. Companies should provide tools so individuals can see how their score would change under various conditions, something like the explainability requirements explored by Selbst and Barocas below.
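As a toy illustration of the kind of what-if tool Citron and Pasquale have in mind (the model, weights, and feature names below are invented for the example; real scoring systems are proprietary and far more complex):

```python
# A made-up linear scoring model, for illustration only.
WEIGHTS = {"on_time_payments": 2.0, "utilization_pct": -1.5, "counseling_charges": -10.0}
BASE_SCORE = 600

def score(features):
    return BASE_SCORE + sum(WEIGHTS[name] * value for name, value in features.items())

def what_if(features, name, new_value):
    """How would the score change if one input were different?"""
    changed = dict(features, **{name: new_value})
    return score(changed) - score(features)

applicant = {"on_time_payments": 36, "utilization_pct": 40, "counseling_charges": 1}
print(score(applicant))                             # current score
print(what_if(applicant, "counseling_charges", 0))  # +10.0: what that counseling charge cost them
```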
Andrew D. Selbst and Solon Barocas, “The Intuitive Appeal of Explainable Machines”, Fordham Law Review (2018). https://ssrn.com/abstract=3126971
Argues that calls for explainable decisions are not quite enough. Decision-making models can be inscrutable, meaning they are too complex to be easily understood, but even scrutable models can be non-intuitive: they can pick out relationships we cannot explain and which are not obviously connected to the outcome measure. We can require explanations of individual decisions, but inscrutable models are difficult to explain and non-intuitive explanations are difficult to understand; further, if the goal is to detect disparate outcomes or bias, we need to see the whole method, not just individual decisions. Advocates instead for documentation of model-building decisions, so that the construction of the model can be justified as well as its individual decisions, and so that the purposes for which the model is used are considered alongside the ways it makes decisions. (A perfectly scrutable and intuitive model can still be used for hidden nefarious purposes.)
Bryce Goodman and Seth Flaxman, “European Union regulations on algorithmic decision-making and a ‘right to explanation’”, ICML 2016. https://arxiv.org/abs/1606.08813
Summarizes the EU General Data Protection Regulation, scheduled to take effect in 2018, which adds a “right to explanation”: people profiled by data have a right to “meaningful information about the logic involved.” This doesn’t go so far as to create due process rights, but it does suggest challenges for users of machine learning techniques in business: how do you explain the output of a random forest to an arbitrary person, who may have no technical knowledge at all? Can you justify its decisions?
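To make the difficulty concrete, here is a sketch using scikit-learn and synthetic data (the feature names are invented): the explanation a random forest most readily offers is a list of global feature importances, which is a long way from “meaningful information about the logic involved” in any one decision.

```python
# How little a random forest volunteers about its own reasoning.
# Synthetic data and invented feature names, for illustration only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=4, random_state=0)
feature_names = ["payment_history", "utilization", "account_age", "inquiries"]

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# The easiest "explanation" to produce is global feature importances, but
# they describe the model as a whole, not why this particular applicant was
# approved or denied, and they presuppose a reader who knows what an
# "importance" is.
applicant = X[:1]
print("decision:", model.predict(applicant)[0])
for name, importance in zip(feature_names, model.feature_importances_):
    print(f"{name}: {importance:.2f}")
```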