I just watched Citizenfour, and that made me think again about mass surveillance. And it’s complicated.
I would like to leave aside US foreign policy (where I agree with Chomsky’s criticism), and whether “terrorist attacks” would have been an issue if the US government didn’t do all the bullshit it does across the world. Let’s assume there is always someone out there trying, for no rational reason, to blow up a bus or a train. In the US, Europe, or anywhere. And with the internet, it becomes easier for that person to find both the motivation and the means to do so.
From that point of view it seems entirely justified to look for those people day and night in an attempt to prevent them from killing innocent people. Whether the effort to prevent a thousand deaths by terrorists is comparable to the effort to prevent the deaths of millions due to car crashes, obesity-related diseases, malpractice, police brutality, poor living conditions and more, is beyond the scope of this discussion. And regardless of whether PRISM has helped so far in preventing attacks, it may do so in the future.
Privacy, on the other hand, is fundamental, and we must not be monitored by a “benign government”, regardless of the professed cause. I genuinely believe that none of the officials involved in PRISM envision or aim at any Orwellian dystopia, but that doesn’t mean their actions can’t lead to one. Creating the means to implement a surveillance state is just one step away from having one, regardless of the intentions the means were created with. In a not-so-impossible scenario, the thin oversight of PRISM could be wiped away and the “proper people” given access to the data. I live in a former communist state, so believe me, that’s real. And that’s not the only danger – self-censorship is another one, which can really skew the course of a society.
So can’t we have both privacy and security? Shall we sacrifice liberties in order to feel less threatened (and in the end get neither, as Franklin said)? Of course not. But I think the implementation details are, again, the key to the optimal solution. Can there be some sort of solution that doesn’t give the government all the data about all the citizens, and yet serves as a means to protect against the irrational person who plans to kill people?
The NSA used private companies as a source of the data (even though the companies deny that) – google searches, facebook messages, emails, text messages, etc. All of that was poured into a huge database and searched and analyzed. For good reasons, allegedly, but we don’t trust the government, do we? And yet, we trust the companies with our data, or we don’t care, and we hope that they will protect our privacy. They use the data to target ads at us, which we accept. But handing that data to an almighty government crosses the line. And even though the companies deny any large-scale data transfer, the veil of secrecy over PRISM hints otherwise. Receiving all the data related to a given search term is a rather large-scale data transfer.
My thoughts in this respect lead me to think of alternatives that would still be able to prevent detectable attacks, but would not hand the government the tools to become an all-powerful state. And maybe naively, I think that’s achievable to an extent. And, no, you can’t prevent a terrorist from using Tor, HTTPS, PGP, Bitcoin, a possible Silk Road successor, etc. But you can’t prevent a terrorist from meeting an illegal salesman in a parking garage either. And besides, that’s not what mass surveillance solves anyway – if it does solve anything, it’s the low-hanging fruit; the inept wrongdoers.
But what if there was a piece of software trained (using machine learning) to detect suspicious profiles? What if that software was open source and distributed to major tech companies (like the ones participating in PRISM – google, facebook, etc.)? That software would work as follows: it receives as input anonymized profiles of the company’s users, analyzes them (locally) and flags any suspicious ones. Then the anonymized suspicious ones are sent to the NSA. The link between the data and the real profile (names, IPs, locations) is encrypted with the public key of a randomly selected court (specialized or not), so that only the court can de-anonymize it. If the NSA considers a flagged profile a real danger, it can request the court to de-anonymize the data. How is that different from the search-term-based solution? It’s open, more sophisticated than a keyword match, and far less data is sent to the NSA.
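To make the data flow concrete, here is a minimal sketch of that pipeline. Everything in it is hypothetical: the function names, the profile fields, the naive keyword check standing in for the ML classifier, and the `encrypt_for_court` stub standing in for real public-key encryption with the court’s key (a real system would use something like RSA or ECIES, and an actual trained model).

```python
import hashlib
import json

def anonymize(profile):
    """Split a profile into content features and identity data.

    The identity part never leaves the company in the clear; an opaque
    hash serves as the record ID.
    """
    identity = {k: profile[k] for k in ("name", "ip", "location")}
    opaque_id = hashlib.sha256(
        json.dumps(identity, sort_keys=True).encode()
    ).hexdigest()
    features = {"posts": profile["posts"]}
    return opaque_id, features, identity

def is_suspicious(features):
    """Stand-in for the ML classifier: a naive keyword match."""
    return any("attack plan" in post for post in features["posts"])

def encrypt_for_court(identity, court_public_key):
    """Stub for public-key encryption under the court's key.

    NOT real crypto -- only illustrates that the sealed identity can be
    opened by the court alone, not by the NSA.
    """
    return {"sealed_with": court_public_key, "payload": identity}

def run_locally(profiles, court_public_key):
    """Runs inside the company; only flagged, anonymized records leave."""
    flagged = []
    for profile in profiles:
        opaque_id, features, identity = anonymize(profile)
        if is_suspicious(features):
            flagged.append({
                "id": opaque_id,
                "features": features,
                "sealed_identity": encrypt_for_court(identity, court_public_key),
            })
    return flagged  # this is all the NSA ever receives

profiles = [
    {"name": "A", "ip": "1.2.3.4", "location": "X", "posts": ["hello world"]},
    {"name": "B", "ip": "5.6.7.8", "location": "Y", "posts": ["my attack plan"]},
]
report = run_locally(profiles, court_public_key="COURT_PK")
```

The point of the sketch is the shape of the output: the NSA gets only the flagged record, with content features but no names, IPs or locations, and the sealed identity is useless to it without the court’s cooperation.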
That way the government doesn’t get huge amounts of data – only a tiny fraction, flagged as “suspicious”. Can the companies cheat that, if paid by the NSA – well, they can – they have the data. But preventing that is in their interest as well as that of the public, given that there is a legal way to help the government prevent crimes.
Should we be algorithmically flagged as “suspicious” based on something we wrote online, and isn’t that again an invasion of privacy? That question doesn’t make my middle-ground-finding attempt easier. It is, yes, but it doesn’t make an Orwellian super-state possible; it doesn’t give the government immense power that can be abused. And, provided there’s trust in the court, it shouldn’t lead to self-censorship (e.g. refraining from terrorist jokes, due to fear of being “flagged”).
Can the government make that software flag not only terrorists, but also consider everyone who is critical of the government an “enemy of the state”? It can, but if the software is open, and companies are not forced to use it unconditionally, then that won’t happen (or will it?).
The Internet has made the world wonderful, and at the same time more complicated. Offline analogies don’t work well (e.g. postman reading your letters, or constantly having an NSA agent near you), because of the scale and anonymity. I think, given that no government can abuse the information, and that no human reads your communication, we can have a reasonable middle ground, where privacy is preserved, and security is increased. But as I pointed out earlier, that may be naive. And in case it is naive, we should drop the alleged increase in security, and aim to achieve it in a different way.