The Cambridge Analytica scandal surrounding the 2016 U.S. presidential election made us take a step back and look at what is actually at stake here. Because we trusted Facebook, a large social media company that millions of people use every single day, with our personal data, an election was tampered with. That's a pretty big pill to swallow. How do we stay connected in a world that is increasingly reliant on data sharing and storage? How do we protect our personal information from becoming public in an age when virtually all of it is saved in seemingly abstract cloud-based systems? And is our information even worth protecting?
I’ve heard all the arguments: if you grant Facebook access to your information, you shouldn’t be upset or surprised when the company uses it for its own gain or for shady dealings. Or that Facebook should treat sensitive information with respect and discretion, even if it was given with consent. Or that it never was given with consent, so the company shouldn’t be using it at all. Or that the consent was obtained through insidious practices designed to hide the fact that our information was being collected in the first place.
It all seems to become arbitrary at some point. Big tech companies are using our information as data, and it seems there’s no way for us to stop them. It can even seem that we should stop worrying about it altogether, that this is simply the direction the world is heading. Another big pill to swallow.
And now a surge of artificial intelligence developments is making us increasingly aware of, and weirdly comfortable with, the idea that technology is storing our personal information: Alexa knows what kind of music we’ll probably want to listen to, and the maps app on our smartphones tells us how far we are from home as soon as we get into our cars. Devices are getting smarter because they have more data. AI learns by picking out patterns from large data sets and applying those learned patterns to new predictions and classifications.
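To make that last sentence concrete, here is a minimal sketch in Python using scikit-learn. The listening-history features and labels are invented for illustration; this is not any real service’s model, just the basic loop of learning patterns from a data set and applying them to a new prediction:

```python
# A minimal sketch of "learning patterns from data" with scikit-learn.
# The listening-history features below are hypothetical illustrations.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Each row: [avg. daily listening minutes, share of plays after 8pm, skips per hour]
# Label: 1 = likely to enjoy a suggested playlist, 0 = not.
X = [
    [120, 0.80, 2], [45, 0.10, 9], [200, 0.65, 1], [30, 0.05, 12],
    [90, 0.70, 3], [60, 0.20, 8], [150, 0.90, 2], [20, 0.15, 10],
]
y = [1, 0, 1, 0, 1, 0, 1, 0]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)      # pick out patterns from the data set

# Apply the learned patterns to a new, unseen user.
new_user = [[110, 0.75, 3]]
print(model.predict(new_user))   # e.g. [1]: predicted to enjoy the playlist
```

The more rows of behavioral data a model like this sees, the better its predictions tend to get, which is exactly why our data is so valuable to the companies collecting it.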
It’s a fine line between data collection and privacy invasion. So fine and blurry that companies with some of the brightest minds in the world are either unaware that they are crossing it or assured enough to believe that no one will notice when they do. There’s no denying the societal advantages that have come from smarter technology built on pooled data. GPS apps, for instance, are helping us get places faster and keeping traffic flowing more smoothly:
“Phones and other connected devices track our geolocation, speed and heading. When such information is aggregated and sent back to route-finding algorithms, a better picture of real-time traffic flows emerges,” wrote Risto Karjalainen, COO of blockchain-based data marketplace Streamr, in Entrepreneur. “Users share their data for free but receive an even better functioning service in return. Google, of course, makes massive profits from serving ads to those same users and knowing far more about them and their habits than they could otherwise dream of.”
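As a rough illustration of the aggregation Karjalainen describes, here is a hedged sketch in Python. The road-segment names, speed readings, and congestion threshold are all hypothetical, and this is not any real app’s pipeline; it simply averages individual speed reports per road segment into a real-time traffic picture:

```python
# A hypothetical sketch of aggregating crowd-sourced speed reports by road
# segment to estimate real-time traffic. Not any real service's pipeline.
from collections import defaultdict

# Each report: (road_segment_id, speed_kmh) sent from an individual device.
reports = [
    ("I-95_mile_12", 95.0), ("I-95_mile_12", 88.0), ("I-95_mile_12", 20.0),
    ("Main_St_block_4", 30.0), ("Main_St_block_4", 28.0),
]

def aggregate_speeds(reports):
    """Average the reported speeds for each road segment."""
    sums = defaultdict(lambda: [0.0, 0])  # segment -> [speed total, count]
    for segment, speed in reports:
        sums[segment][0] += speed
        sums[segment][1] += 1
    return {seg: total / count for seg, (total, count) in sums.items()}

traffic = aggregate_speeds(reports)
for segment, avg in traffic.items():
    status = "congested" if avg < 40 else "flowing"  # hypothetical threshold
    print(f"{segment}: {avg:.1f} km/h ({status})")
```

Each individual report reveals little on its own, but pooled together the reports describe both the traffic and, implicitly, the habits of the people generating them.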
But here comes the question that is inevitably raised: at what cost? Is it possible to keep developing AI without crossing that fine and blurry line? Karjalainen cites Frank Pasquale, who breaks the debate into two general camps. The Jeffersonians see decentralization “as a way to promote innovation and where people retain control over their own personal data and share it on their terms with the AI community.” The Hamiltonians, by contrast, favor centralization and are happy to support data collection by the large companies best positioned to develop better AI. The debate will only grow more heated as AI inevitably grows smarter. One thing seems hopelessly clear: our data is going to get collected, somehow, some way.