Current AI technologies have become impressively adept at generating realistic images and videos, sparking concerns about their potential misuse for political and election manipulation.
The concern is valid, but it’s more complex than it seems.
The real issue with fake AI-generated imagery isn’t just that it looks convincing. The bigger problem is that it casts doubt on the credibility of all images, handing an easy excuse to political figures willing to deceive supporters who are already inclined to believe falsehoods.
For example, consider Donald Trump’s recent social media post accusing Kamala Harris’s campaign of using AI to artificially inflate the size of a crowd at a Detroit airplane hangar last week.
“Has anyone noticed that Kamala CHEATED at the airport?” he wrote. “There was nobody at the plane, and she ‘A.I.’d’ it, and showed a massive ‘crowd’ of so-called followers, BUT THEY DIDN’T EXIST!”
In today’s world, how can we determine if an image is genuine? The average person can no longer confidently verify the authenticity of images—or increasingly, even videos—through individual investigation. AI technology has become that sophisticated, and it’s continually improving. (This is why the old advice of “do your own research” is less effective now.)
This uncertainty makes it harder to discern what’s real, leaving one critical fallback: trusting sources that have either captured the image or video themselves or have rigorously verified its authenticity.
For instance, we know the crowd waiting for Harris was real because there are photos from agencies like Getty, as well as images and reports from multiple other news organizations on the scene, all matching the social media photos that provoked Trump’s ire. Credible news organizations and photo agencies enforce stringent rules against manipulated images and video, but that safeguard only works if you trust the organization providing the content.
Trump’s strategy of portraying reputable news outlets as untrustworthy has led many of his supporters to view them with suspicion—something they were already inclined to do.
When trust erodes and credibility is questioned, the lie doesn’t need to be highly sophisticated. It doesn’t need to be backed by convincingly realistic fake AI. It doesn’t even need to be easily debunked. For the lie to be effective, it just needs a willing promoter and a receptive audience. In this context, AI is merely a convenient cover.