Like it or not, most of us are exposed to or interact with AI tools almost daily. We may interact with an AI chatbot on a website, ask ChatGPT whether cats can eat pancakes, use Fathom to take notes during meetings, use Gemini to help us respond to emails, or even have AI analyze our X-rays. While AI can certainly be a very helpful tool, it is not without its risks. As AI becomes more deeply integrated into our daily lives, it is increasingly important for individuals to understand how to mitigate those risks and protect their privacy when using AI. In this article, we will discuss what these risks are and how to protect yourself.
AI-generated images
You log in to Facebook and see that many of your friends have posted AI-generated images of themselves as babies, or that they’ve used the Ghibli effect. Fun, right? As a privacy attorney, I have at times been accused of being a killjoy, and unfortunately, this will not help my case. When you upload your picture to an AI, you may not realize that it could be running facial recognition on your photo to create the Ghibli-effect image you requested. Facial recognition is a particularly dangerous tool as it can:
- Be used to identify you in other areas (e.g., the airport, your church, a political rally, or any other location);
- Be used to create other images, such as deepfakes that include your face (e.g., the AI can use the images to portray you in an inappropriate situation, such as AI-generated pornography).
While data such as email addresses and phone numbers can be changed fairly easily if they are breached or misused, you cannot change your face (at least not very easily). In addition, photos carry metadata, such as when and where the photo was taken. When you upload a photo to an AI, this metadata can be exposed, revealing where you are located, and it can be shared with advertisers, who can use it to send you targeted ads. All of this information can also be used to train the AI, meaning that others could potentially receive it as a response to a prompt.

So, how do you protect yourself when it comes to your photographs and images? Unfortunately, the only way to protect your privacy in this particular instance is to not upload photographs or images of yourself into an AI, as there is currently no way to specify that you do not want the AI to run facial recognition or expose your metadata.
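If you do decide to share a photo anyway, you can at least strip the embedded metadata (EXIF) before uploading it. Stripping metadata does not prevent facial recognition, but it does keep location and timestamp data out of the upload. Here is a minimal sketch using the Pillow imaging library; the function name `strip_metadata` is my own illustration, not a Pillow API:

```python
from PIL import Image

def strip_metadata(src_path: str, dst_path: str) -> None:
    """Re-save an image with pixel data only, discarding EXIF metadata
    such as GPS coordinates, timestamps, and camera details."""
    img = Image.open(src_path)
    clean = Image.new(img.mode, img.size)  # fresh image: no metadata attached
    clean.putdata(list(img.getdata()))     # copy the pixels only
    clean.save(dst_path)
```

Command-line tools can do the same job; for example, exiftool removes all metadata with `exiftool -all= photo.jpg`.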
Collection and use of data
Like it or not, AI relies on huge troves of data to operate. For example, AI uses data to write articles for you by learning sentence structure, which words usually follow others, writing styles, grammar, and more. AI can use data to help detect financial crimes by learning the standard patterns of bank account activity and flagging unusual transactions. AI can also be used to create malware by knowing and anticipating how existing antivirus software operates, so that the malware goes undetected.
But where does AI get all of this data? AI can get its data from many sources: books, scraping of the Internet (including social media and popular websites such as Wikipedia and Reddit), and, you guessed it, the data that you yourself input into the AI. The horrifying part is that once data is input into the AI, it becomes a part of the AI, as it is used for training. That data cannot be clawed back or returned to its owner; it is part of the AI now. And, unless the AI is self-hosted or privacy-friendly, it can be used for virtually any purpose. This means your data could be put to any number of nefarious ends, such as writing spam emails, providing false information about you, exposing trade secrets, and more.
So, how do you protect yourself against the collection and use of your data by AI? First, lock down your accounts with the strongest privacy settings available, making your social media profiles private and not viewable by the public. Second, be careful about what you put online, such as comments on blogs or social media posts. Third, be careful about what you put into AI prompts: do not include personal data or personal details, so that they cannot be used to train the AI. Lastly, if you need (or would like) to use AI, consider using a self-hosted version where none of the input data is sent back to the provider’s servers.
Cybersecurity risks
Criminals and hackers are always looking for new technologies to help them perpetrate their crimes. Bad actors can use AI to create false social media profiles and messages, generate realistic images and videos of individuals, generate pornographic photos, generate audio clips, help guess passwords, and target other AI assistants, all with the goal of obtaining data and money from unsuspecting individuals. AI is used to hack accounts, steal personal information, perpetrate fraud, commit identity theft, and facilitate human trafficking. It can also be used for surveillance and extortion. While many of us have been able to spot scams in the past, AI is making it more difficult for email providers to detect spam, for antivirus software to detect malware, and for individuals to spot the red flags that have historically been the hallmarks of a scammer.
While bad actors have embraced AI to scam individuals and organizations, we will need new tools to detect and prevent such use. For example, since AI can generate realistic audio of your loved ones, families should agree on a code word that others would not know. Let’s say you receive a call from your adult son saying that he is in jail and needs money for bail. You ask the caller to tell you the secret word that only you and your son know (e.g., watermelon). If the caller cannot say that word, then you will know that this is most likely an AI-generated call trying to scam you out of your money.
It is also important for individuals to be fully aware of these scams. For example, how likely is it that a long-lost high school friend is calling to ask you for money to help pay their tax bill? Isn’t it strange that a website you’re visiting suddenly has a pop-up asking you to install something on your computer? Being critical of such requests and not capitulating to “time-sensitive” demands can help you protect yourself, your privacy, and your assets.
Bias and discrimination
When I was younger, I listened to the whole “don’t believe what people say on the Internet” speech. There are certain corners of the Internet that are an absolute cesspool of disinformation, racism, sexism, and similar material, and AI trained by scraping the Internet will no doubt carry traces of it. For example, AI tools used for screening job applicants have shown a preference for white-associated and male-associated names. AI models have predicted twice as many false positives for recidivism for black offenders as for white offenders, and have significantly favored white patients over black patients when predicting who needed extra medical care. The use of AI in these scenarios is particularly dangerous because it can, and does, lead to biased and discriminatory outcomes in significant, real-world situations.
Protecting yourself in situations like these is very difficult, as you may not even know that AI is being used to screen job applications, predict criminal recidivism, or make healthcare decisions. However, you can still request information about how your personal data is used and whether AI is being applied in decision-making. In addition, in some states and countries, individuals have the right to opt out of such use or to obtain further information about it.
With AI affecting our daily lives, it is imperative that individuals educate themselves about these uses, determine how they are affected, and know how to protect themselves and their privacy. Remember to be cautious when using AI, do not upload photographs or personal data into AI tools, and be wary of potential scams and bad actors.