Can we ensure privacy in the era of big data?

Personal Privacy Principle: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.

The ubiquity of digital devices and sensors, the spread of networks, and the benefits of sharing very personal information through social media have led some to argue that privacy as a social norm is changing and becoming an outmoded concept.

In today’s era of social media and online profiles, maintaining privacy is already a tricky problem. As companies collect ever-increasing quantities of data about us, and as AI programs get faster and more sophisticated at analyzing that data, our information can become both a commodity for business and a liability for us.

We have already seen many examples in daily life. Target, for instance, recognized that a teenager was pregnant before her family knew. But that is merely advanced marketing. What happens when governments or potential employers can gather seemingly innocent and useless information to uncover your most intimate secrets, like health issues?

Stefano Ermon, an assistant professor at Stanford, says: “I think that’s a big immediate issue. I think when the general public thinks about AI safety, maybe they think about killer robots or these kind of apocalyptic scenarios, but there are big concrete issues like privacy, fairness, and accountability.”

Toby Walsh, a guest professor at the Technical University of Berlin, also worries about privacy. “Yes, this is a great one, and actually I’m really surprised how little discussion we have around AI and privacy.”

He also says, “I thought there was going to be much more fallout from Snowden and some of the revelations that happened, and AI, of course, is enabling technology. If you’re collecting all of this data, the only way to make sense of it is to use AI, so I’ve been surprised that there hasn’t been more discussion and more concern in the public around these sorts of issues.”

Ermon explains, “Privacy is definitely a big one, and one of the most valuable things that these large corporations have is the data they are collecting from us, so we should think about that carefully.”

Privacy as a Social Right

Yoshua Bengio and Guruduth Banavar believe that personal privacy isn’t just something AI researchers should value, but that it should also be considered a social right.

Bengio, a professor at the University of Montreal, says, “We should be careful that the complexity of AI systems doesn’t become a tool for abusing minorities or individuals who don’t have access to understand how it works. I think this is a serious social rights issue. We have to be careful with that because we may end up barring machine learning from publicly used systems, if we’re not careful.” He adds, “the solution may not be as simple as saying ‘it has to be explainable,’ because it won’t be.”

The experts above may agree about how serious the problem of personal privacy is, but solutions are harder to come by. Do we need new corporate policies, or something else entirely?