There is a long list of concerns when it comes to a major technology like artificial intelligence, and AI privacy concerns make up only a small proportion of them. Most people are rather more worried about whether the robots will rise up and kill us all. While that is the more attention-grabbing potential outcome, privacy concerns around AI technology are more imminent and definitely more realistic. In fact, AI is already creating multiple privacy issues today, not just at some point in the future.
There are two sides to the issue of AI privacy. On one side of the debate are those who believe that AI technologies mean the end of privacy as we know it.
The other side of the argument is more positive. The idea is that AI could actively protect our privacy. There are some signs of this today, but this argument is still largely speculative. Hopefully, by the end of this piece, you'll have a good grasp of the AI privacy risks as well as the potential AI privacy boons. It's early days yet, but everyone needs to be aware of what might be happening sooner than you think.
The AI privacy issues that worry people today are largely based on how AI can be used to go through the mountains of data generated by internet users and then build algorithms that can deduce things about you from very little information. What this means is that if those who have access to those algorithms know only one or two things about you, they can "guess" other things about you with a high degree of accuracy. This challenges the traditional notion of privacy invasion, because you aren't being directly observed. Your private life is laid bare even if you've never leaked anything private about yourself.
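To make that concrete, here's a toy sketch of attribute inference. The data, feature names, and model are all invented for illustration (no real profiling system works on just two signals), but the principle is the same: a hidden trait that correlates with observable behavior can be "guessed" with high accuracy.

```python
# Toy illustration of attribute inference: a model "guesses" a hidden
# trait from only two observable signals. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical observable signals: hours online per day, pages liked.
# In this synthetic population, both correlate with a hidden trait.
n = 1000
hidden = rng.integers(0, 2, n)               # the trait we try to infer
hours = hidden * 2 + rng.normal(4, 1, n)     # observable signal 1
likes = hidden * 30 + rng.normal(50, 10, n)  # observable signal 2
X = np.column_stack([hours, likes])

model = LogisticRegression().fit(X, hidden)
accuracy = model.score(X, hidden)
print(f"inferred hidden attribute with {accuracy:.0%} accuracy")
```

The uncomfortable part is that neither input signal is "private" on its own; it's the correlation, learned at scale, that exposes the hidden trait.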
The other major form of AI privacy invasion comes from deep learning applications. For example, deep learning neural networks can identify faces in seconds. So if you are moving in a public space, just a single clear frame caught by a camera somewhere can help piece together your movements.
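A simplified sketch of why one frame is enough: modern face recognition systems convert each face into a numeric embedding vector, after which identification is just a similarity search against a database. The embeddings below are random stand-ins for a real model's output, and the names and noise level are invented for illustration.

```python
# Sketch of face matching via embeddings: once a network maps each face
# to a vector, identifying someone from one frame is a similarity search.
# Random vectors stand in for a real face-recognition model's output.
import numpy as np

rng = np.random.default_rng(1)

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# A "database" of known identities and their face embeddings.
database = {name: rng.normal(size=128) for name in ["alice", "bob", "carol"]}

# One clear frame from a public camera: bob's face plus camera noise.
frame_embedding = database["bob"] + rng.normal(scale=0.1, size=128)

# Nearest-neighbour match against every known identity.
best_match = max(database, key=lambda n: cosine(database[n], frame_embedding))
print(best_match)
```

Scale the database from three people to millions and the single frame becomes a timestamped, geolocated entry in someone's movement history.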
Again, this challenges current privacy expectations. In most countries, you have no right to privacy in public. However, those legal frameworks were created in a world that didn't have these technologies. Simply walking out of the door has never meant such deep breaches of personal life were possible.
As we travel along this path where more and more of the world is connected and digital, things look grimmer for personal privacy. The march of IoT, or Internet of Things, devices is probably the main vector through which AI will share the world with us. Essentially, this is the trend where every appliance and machine becomes somewhat intelligent and network-connected. Smart cameras, self-driving cars, and home automation systems are prime examples.
Being watched wherever you go, having every action you take recorded, and then having that sum total of your digital being scrutinized by AI: that's a future that's almost upon us. Artificial intelligence is primed to be the ultimate tool for espionage, regardless of who is doing the spying.
On the other side of the AI privacy war, we might see some of that formidable intelligence come to our aid. We're already seeing AI approaches to network security, where systems adapt to attacks and malware that have never been seen before.
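That kind of adaptive defense often boils down to anomaly detection: learn what "normal" traffic looks like, then flag anything that deviates, even attack patterns never seen during training. Here's a minimal sketch using scikit-learn's IsolationForest; the two traffic features and their values are invented for illustration, and real systems use far richer signals.

```python
# Anomaly-based intrusion detection sketch: fit a model on normal traffic,
# then flag traffic that deviates, including never-before-seen attacks.
# Features (packet size, connection rate) are synthetic placeholders.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(2)

# Baseline traffic: moderate packet sizes, low connection rates.
normal = np.column_stack([
    rng.normal(500, 50, 500),  # average packet size (bytes)
    rng.normal(10, 2, 500),    # connections per second
])

detector = IsolationForest(random_state=0).fit(normal)

# A burst typical of a flood attack: tiny packets, huge connection rate.
suspicious = np.array([[60, 900]])
verdict = detector.predict(suspicious)  # -1 means anomaly
print("anomaly" if verdict[0] == -1 else "normal")
```

The key property is that the detector never needs a signature of the attack; anything sufficiently unlike the learned baseline gets flagged.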
In the future, you may have one or more AI applications that work to protect you against the algorithms of opposing AI apps that want to track, trace, and catalog you, warning you when you're doing something that's likely to be traceable.
Of course, there's another pretty straightforward way that AI can be used to protect our privacy: we can make the AI itself better. Right now, there are still plenty of humans in the loop, which means there are plenty of opportunities for humans to abuse that data.
Take the major digital assistants as an example. Siri, Cortana, Google Assistant, and Alexa have all been caught exposing private information to human beings. Often this is by design, because the machine learning solutions still need humans to tune them. So it makes sense that one day AI software will be advanced enough not to need a human in the loop to get better at its job, minimizing privacy breaches. In fact, our efforts as a society should probably go towards instilling privacy protections and ethics into the most fundamental code of our AI products.
Here's the thing. People can use any new information technology to damage someone's privacy. The invention of tiny cameras and listening devices is one example. It's technically trivial these days to hide cameras and microphones in someone's private space and record everything that they do.
The problem isn't that technology can do something in principle; it's that we allow or condone the use of technologies in particular ways. The state needs the permission of the judiciary to plant bugs in your home. We have checks and balances to mitigate technological privacy abuses. The biggest problem is that our laws are based on conceptions of privacy from what is essentially another world. Unfortunately, the wheels of legal reform turn very slowly. Modern legal privacy protection frameworks such as the GDPR are a good start, but AI techniques are advancing quickly.
How do you see AI privacy? Will it be a good or bad thing in the end? Let us know down below in the comments. Lastly, we’d like to ask you to share this article online. And don’t forget that you can follow TechNadu on Facebook and Twitter. Thanks!