Navigating A.I.’s blurred reality: deepfakes, cheapfakes, and AI alterations ahead of November’s election
The lines between reality and illusion have become more blurred than ever, especially in recent political campaigns and coverage, where misinformation can cause global change.
This year marks the first election year in which artificial intelligence can truly affect the polls, manipulating voters into believing the unbelievable. What we could see this year is how AI can contribute to the spread of lies in a way that was previously impossible. The future is here, and it is terrifying.
While “cheapfakes” involve humans manipulating real events to push their own agendas, AI does not rely on reality. It can clone a person’s likeness, voice, mannerisms, and unique quirks to create entirely fabricated content. Chris Kovac, Co-Founder of the Kansas City AI Club and Founder of kovac.ai, a marketing AI consultancy, leads many conversations about AI and understands some of the potential of this technology.
“It’s very easy for people to create AI-powered content, to say whatever they want,” Kovac says. “They could be politically tied, but worse, it’s probably third parties from other countries. For example, how would we know if a particular country who is not as friendly is putting out a lot of content to try to steer voters?”
From simple phone calls to elaborate video productions, these AI-generated deepfakes can convince audiences of the unthinkable. Organizations, individuals, or even governments could use these tools to sway elections and mold political narratives. The potential for misuse is vast, and the consequences serious. Politicians can defend themselves against damaging deepfakes, but once a misleading video hits the public eye, its impact can last.
Kovac pointed to four AI tools that can answer questions and even expose other AI by identifying deepfakes and AI-generated images.
Perplexity.ai can quickly teach someone just about anything at a level they can understand. Whether the user has a PhD in the topic or is in the first grade, Perplexity can give thorough explanations, and it focuses on being a tool rather than a chatbot. Since it pulls information from all over the internet, AIs like this have caused search engine optimization (SEO) to shift toward artificial intelligence optimization (AIO), suggesting that asking AI may be becoming a more popular way to fact-check than just Googling.
If you want to see just how fast AI can take your likeness and make content with it, check out HeyGen. This AI can invent your digital twin to create social media content, training videos, marketing material, and more in just a couple of minutes.
“So, my understanding is that I could create an avatar with my speech, with the inflections of my hands in roughly three minutes,” Kovac says. “So the implication is, I could create a deep fake of you probably relatively easily. I could capture your voice, I could capture your video, and then I can actually put content out there.”
If you are looking for a specific task assistant, There’s an AI for That has collected over 13,000 of them, able to create anything from smut and memes to business logos and resume reviews. While the website itself is not an AI, it continually catalogs and links out to startup AIs.
Powered by Google, SynthID identifies AI-generated images, videos, text, and audio. Using deep learning models to recognize and predict what content is generated, it adds a digital watermark so that the content can be recognized as artificial without degrading it.
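To make the watermark idea concrete: SynthID’s actual technique is proprietary and built into Google’s generative models, but the general concept of marking content invisibly can be sketched with a deliberately simple toy. Everything below is an illustrative assumption, not SynthID’s method; it hides a tag in zero-width characters just to show how content can carry a machine-readable mark without looking any different.

```python
# Toy illustration of digital watermarking: hide an invisible tag in text
# using zero-width characters. This is NOT how SynthID works; it only
# demonstrates the concept of marking content without visibly changing it.

ZERO, ONE = "\u200b", "\u200c"  # zero-width space / zero-width non-joiner

def embed_watermark(text: str, tag: str) -> str:
    """Append the tag as invisible bits to the end of the text."""
    bits = "".join(f"{ord(ch):08b}" for ch in tag)
    return text + "".join(ZERO if b == "0" else ONE for b in bits)

def extract_watermark(text: str) -> str:
    """Recover the hidden tag from the invisible characters, if present."""
    bits = "".join("0" if c == ZERO else "1" for c in text if c in (ZERO, ONE))
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))

marked = embed_watermark("A perfectly normal sentence.", "AI")
print(extract_watermark(marked))  # prints "AI" even though the text looks unchanged
```

A real watermark like SynthID’s has to survive cropping, compression, and paraphrasing; this toy breaks the moment the hidden characters are stripped, which is exactly why production systems embed the signal in the generated content itself rather than tacking it on.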
“There will be the equivalent of about two weeks of AI innovation every day,” Kovac says. “If you think of the month of July, we’ll have as much AI innovation coming out of July as a year’s worth of social media innovation or internet innovation. It’s moving that fast, and that’s good and bad.”
These tools are working rapidly to fill a need, but at the speed of technological advancement, production is valued over reliability. At a glance, AI is becoming convincing, but the mistakes are in the details and the physics of the real world. Louis Byrd, Founder & Chief Visionary Officer of Zanago, also believes in people’s ability to use their own discernment and to be intentional about the content they are seeing. It may take some analysis to notice AI content, but he points out several giveaways to spot generated writing, audio, photos, and videos.
In writing, Byrd says that AIs such as ChatGPT or Gemini by Google will often produce complex sentence structures that come across as too perfect, not how the human mind thinks. Along with this, certain words that are not common to the language or the time period appear more frequently in AI-generated writing, and words are repeated much more often within one piece.
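The repetition tell Byrd describes can be approximated with a crude score. This is a hypothetical heuristic for illustration only, not a real detector; the stopword list and the metric are assumptions, and genuine AI-text classifiers are far more sophisticated.

```python
# Rough heuristic: what share of a text's content words is its single most
# repeated word? Unusually high values hint at the repetition Byrd describes.
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "it", "that"}

def repetition_score(text: str) -> float:
    """Fraction of content words taken up by the most repeated content word."""
    words = [w.strip(".,!?;:").lower() for w in text.split()]
    words = [w for w in words if w and w not in STOPWORDS]
    if not words:
        return 0.0
    _, top_count = Counter(words).most_common(1)[0]
    return top_count / len(words)

# 3 of the 5 content words are "delve", a famously AI-flavored verb -> 0.6
print(repetition_score("delve delve delve into the topic"))
```

A score like this would only ever be one weak signal among many; human judgment about whether the writing sounds like the purported author remains the stronger test.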
Beyond writing, robocalling and AI-generated audio have already targeted people online. Audio may be harder to distinguish, as AI can copy someone’s phrases and voice and even mimic taking a breath, but Byrd recommends focusing on the message first. Is this something that this person is likely to say? If not, listen for the small inflections, static, background noise, and distortion that could reveal the AI.
“If you’re not paying attention, you can be deceived very easily,” Byrd says. “But, if you’re paying attention to some of the details, that’s when you can start seeing that something is AI-generated.”
Perhaps the most prominent concern for people today, though, is the generation of video deepfakes. These combine the visual with the audio to depict a subject doing or saying anything the AI is prompted to create. But as the complexity increases, so does the margin of error.
Shadow placement, blurring around the edges of a person, and the tonal difference of a subject’s skin can all be clues that a production is AI-generated. The skin may also have what Byrd refers to as “AI-sheen.” This is when a face appears too perfect, overly smooth, and shiny to the point of almost glowing. Along with studying the subject, people should also pay attention to the background details.
Does the person go through an object rather than over it, or do details and colors change throughout the video? Oftentimes, because of the complexity of creating a video of a reality that AI does not experience, there are obvious, bizarre mistakes.
https://www.youtube.com/shorts/JAsaB-OWJCI
MIT’s Media Lab has identified several additional aspects of videos to pay attention to when determining what is man-made and what is not. Most deepfakes primarily transform the face, and the Media Lab says this is where flaws can be found:
- Shadows—Especially around the eyes and eyebrows, the shadows may not correspond with the lighting in the room. Pay attention to how they change and if they are absent.
- Glares—If the subject of the video has glasses, glares can be studied. How much glare should there be and does it move with the person? Getting the exact lighting right and adapting it to movement can be difficult for AI.
- Signs of Aging—Depending on the age of a person, the presence and consistency of features such as wrinkles and grey hair can expose generated content. Pay attention to whether the skin appears too smooth or too wrinkled and whether areas like the cheeks and forehead look to be the same age.
- Facial Hair—Both the presence and lack of facial hair can point to generated content, and hair details can often be difficult to perfect.
- Blinking—Since AIs do not have eyes, the content they produce can have an excess or lack of blinking. Rewatch a video and study the number of blinks. Maybe even blink along with the content to see if your eyes dry out.
- Lip Movement—Many deepfakes are based on lip-syncing, so check whether the lip movements look natural and whether they actually match the audio.
- Moles—Unique facial features like moles can look unnatural or be completely missing from a video. Analyzing this may require some familiarity with the subject, but the generated mole can look unnatural and lack the details present in reality.
Before you test your abilities, you can see just how much AI is capable of with the photo generator This Person Does Not Exist. After seeing faces that belong to no one, put some of these methods into practice with Detect Fakes—a website with over 400 AI-generated and man-made images for users to discern between.
Even with these tools and methods, AI is constantly training itself to evade detection. Much like diseases and immune systems in the organic world, AIs are caught in an evolutionary race to outsmart one another. An LLM is made to write academic papers; then an AI is created to catch the AI producing those papers. The two intelligences are locked in a battle of detecting or evading each other, rapidly driving each other’s improvement. The same is true for generated images, deepfakes, and robocalls, but false negatives or positives could destroy careers.
“You’re gonna have the good AI versus the bad AI, and they’re going to continue to try to outdo each other,” Kovac says. “Those tools are still going to be relatively pedestrian by the time November rolls around, but they’re not advanced enough. They’re not evolved enough to be able to be perfect.”
Despite these threats, AI isn’t all bad. If it did not also have the potential to be an incredibly useful tool, it would have been done away with already. By making information about all levels of candidates quickly accessible, it is already having a positive impact on politics.
Some candidates, especially at the local level, do not get as much coverage, and their policies may not be easy to find, but with AI, information can be quickly found and summarized for anyone trying to make an informed vote. Other AI programs like Vngle are being used to streamline fact-based reporting so voters can skip the misinformation while staying informed.
“Most people think of AI as artificial intelligence, but I have a little bit different philosophy,” Byrd says. “I think that it should be looked at as augmented intelligence, as tools to be able to help augment how we think about things as humans, how we see things, to help us in our day to day lives, so we can focus on the things that really matter.”
In the world of politics, the closer a voter relates to a candidate and the more they feel heard, the greater the chance they will vote for that candidate. Right now, there are representatives who mimic the values of their candidate, but with AI, voters could talk to a virtual version of a politician.
Imagine answering the phone and hearing the voice of your congressperson asking about your concerns, responding to your comments in real-time, and making note of your values. The AI could speak to everyone in bilingual households and recognize regional slang. This level of perceived personalization could revolutionize political engagement, making it responsive to individual needs.
“AI could have dialogue, virtually, with 200,000 people to survey my voters on what their interests are, what they want to have happen, and how I can help them,” Kovac says. “It can become almost an automated, two-way conversation in real-time. Then I could aggregate all that data so I would better understand the people in my district and outreach to them better because I know their hot button topics.”
A big setback for AI right now is that all of these conversations are stored: the AI is learning from every conversation, and privacy is not being protected. Maybe voters would not feel safe expressing their concerns only for them to be stored in some big tech company’s vault; maybe voters would feel heard; maybe, once the novelty of the technology faded, it would just seem like another spam call. The quick-footed evolution of AI and the lethargy of our legal system leave the least tech-savvy populations vulnerable.
It could get to the point where a person’s voice is indistinguishable from AI. While up-and-coming generations will learn to be wary, many today could easily become victims of identity theft, data mining, and leaked information. Scams targeting the elderly caused $3.4 billion in losses in 2023, and the most common were tech support scams that involved an “IT assistant” on the phone with their target. With the help of AI, hundreds of thousands of calls could go out to the most vulnerable populations, rapidly growing the scammer market.
“It’s still the wild, wild west, and a lot of people are flying the plane as they’re building it,” Byrd says. “We’re going to get there sooner than later, but, as a nation, we’re behind because of a lot of infighting and people trying to take power and ownership. It’s not allowing us to quickly move ahead to protect people. Your best protection is for you to protect yourself and be very mindful of the information that you share with AI tools.”
So, even with AIs checking one another, Kovac and Byrd recommend that, until consumers are protected by the law, they pay attention, trust their gut, and be warier of their internet content than ever before. So hunker down, be paranoid, and wait for the laws to catch up to the technology.