The Deceptive Use of AI and Its Impact on the Public
For Public Information Officers and Government Leaders
The deceptive use of artificial intelligence—especially deepfakes—poses a real risk to public trust, democratic institutions, and your daily work as a public information officer. Deepfakes use AI to manipulate video, audio, or images in a way that looks and sounds authentic. But they aren’t. And they’re showing up in misinformation campaigns, scams, and hoaxes targeting individuals and institutions.
In 2019, a video surfaced of Facebook founder Mark Zuckerberg apparently boasting about how the platform “owns” its users. It was created by artists Bill Posters and Daniel Howe in response to Facebook’s refusal to take down a manipulated video that appeared to show Nancy Pelosi intoxicated.
You don’t need a technical background to recognize the threat. But you do need to be prepared to respond.
What’s at Risk
Deepfakes can impersonate your agency’s leadership, mimic law enforcement communications, or create false public safety announcements. In some cases, they may appear as manipulated footage of actual events, giving the public the impression that something illegal, unethical, or violent happened—when it didn’t.
Public safety professionals are at particular risk from deepfakes. There is no shortage of people who dislike government officials and public safety officers, and the technology has improved rapidly. While many tools include safeguards against the most obvious abuses that could put a government employee at risk, such as generating pornographic content of real people, tools built outside the United States often lack those safeguards, which makes the misuse of this technology a potential risk for all public safety personnel.
This kind of content spreads fast, especially on social media. And it doesn’t always take long for news outlets or influencers to amplify it.
Here’s what happens when it spreads:
Trust erodes between your agency and your community.
You may lose control of the narrative before you even know the hoax exists.
Your time and staff will be diverted from real work to manage an artificial crisis.
What the Law Says
Several states are taking steps to penalize the malicious use of deepfake technology.
New Jersey criminalizes the creation or spread of deepfakes with intent to harm. Offenders can face prison time and civil lawsuits.
New Hampshire bars using AI for manipulation, discrimination, or surveillance by public agencies.
Other states, including California, Texas, Minnesota, and Washington, have laws targeting AI deception in elections, pornography, and impersonation.
These laws are a step forward, but enforcement is slow, and damage to your agency’s reputation can happen long before any case reaches a courtroom.
What You Should Watch For
The pace of AI development makes it impossible to be an expert in deepfake detection, but stay alert to warning signs:
Content that appears off—lip-sync that doesn’t match speech, strange blinking, unnatural expressions.
Public outcry or media questions about footage you’ve never seen before.
Anonymous accounts pushing shocking or outrageous videos featuring your agency or personnel.
Altered or out-of-context clips that look real but have no verifiable source.
If something feels wrong, treat it like a threat until you can confirm otherwise.
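One simple check your team can run without special expertise is to compare a suspicious still image against the media your agency has actually published. The following is a minimal sketch, assuming Python with the Pillow and ImageHash packages installed and a hypothetical folder named official_media containing images you have released; a close perceptual match suggests a recrop or edit of real footage, while no match at all means the source is unverified.

# Minimal sketch: compare a suspect image against your agency's published media.
# Assumes "pip install Pillow ImageHash"; the folder and file names are hypothetical.
from pathlib import Path

import imagehash
from PIL import Image

OFFICIAL_DIR = Path("official_media")   # hypothetical archive of released images
SUSPECT_FILE = Path("suspect.jpg")      # the image circulating online
MATCH_THRESHOLD = 10                    # Hamming distance; smaller means more similar

suspect_hash = imagehash.phash(Image.open(SUSPECT_FILE))

closest = None
for path in OFFICIAL_DIR.iterdir():
    try:
        distance = suspect_hash - imagehash.phash(Image.open(path))
    except OSError:
        continue  # skip files that are not images
    if closest is None or distance < closest[1]:
        closest = (path, distance)

if closest and closest[1] <= MATCH_THRESHOLD:
    print(f"Close match to {closest[0]} (distance {closest[1]}): possibly an edit of real footage.")
else:
    print("No close match in published media: treat the source as unverified.")

This is a screening aid, not proof; confirmation should still come from your forensics or cyber partners.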
How to Respond If a Deepfake Is Suspected
The worst thing you can do is ignore it or assume people will figure it out on their own. Here’s what to do if a deepfake targets your agency:
1. Acknowledge it early.
If the content has already spread, don’t wait. Make a brief, factual statement acknowledging the issue. “We are aware of a video circulating online that falsely represents our agency. We are investigating its origin and will provide updates.”
2. Investigate quickly.
Work with your legal, IT, or cybercrime teams to confirm the video’s authenticity. If you have a media forensics partner or access to federal cyber resources, use them.
3. Communicate internally.
Let your leadership and frontline staff know what’s happening, and equip them with talking points; the media or the public may approach them.
4. Own the narrative.
Use your official channels to state that the content is false and artificially generated. Stick to plain language. Use side-by-side comparisons or technical analysis only if you can confirm them through a trusted expert.
5. Contact platforms.
Request takedown of false content through the appropriate reporting channels. Most platforms now have abuse forms specific to AI-manipulated content.
6. Monitor and adjust.
Monitor any mentions, reposts, and questions. Repost your clarifying messages regularly. If you have a community prone to disinformation, consider holding a quick Q&A online or hosting a press availability to kill the rumor.
7. Document everything.
Keep a complete record of the incident for public records, FOIA, and legal action.
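Even a short script can make the documentation step consistent. The following is a minimal sketch using only Python’s standard library; the file names and log location are hypothetical placeholders. It appends a timestamped record with a SHA-256 fingerprint of each saved copy of the suspect media, so your record for public records requests or legal review shows exactly what was captured and when.

# Minimal incident-log sketch using only the Python standard library.
# The log location and media file names are hypothetical placeholders.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("deepfake_incident_log.jsonl")  # append-only log, one JSON record per line

def log_evidence(media_path: str, note: str) -> None:
    """Append a timestamped record with a SHA-256 hash of a saved media copy."""
    data = Path(media_path).read_bytes()
    entry = {
        "recorded_at": datetime.now(timezone.utc).isoformat(),
        "file": media_path,
        "sha256": hashlib.sha256(data).hexdigest(),
        "note": note,
    }
    with LOG_FILE.open("a", encoding="utf-8") as log:
        log.write(json.dumps(entry) + "\n")

# Example: record the copy of the video your team saved when the incident was first reported.
log_evidence("downloads/suspect_video.mp4", "Copy saved from a social media post reported by dispatch.")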
What You Can Do Now
Train your team to recognize deepfakes and misinformation.
Establish a workflow for rapid response to suspected manipulated media.
Build relationships with tech-savvy partners in law enforcement or academia who can support media analysis.
Talk to your legal counsel about how deepfake laws in your state apply to your agency.
Be transparent with the public. Tell them what AI deception looks like. Encourage them to follow verified accounts. Use “see something, say something” for suspicious media, too.
Final Note
This technology isn’t going away. Deepfakes will get better. Cheaper. Harder to spot. But your job isn’t to panic or to fix everything. It’s to prepare. Tell the truth. And when needed, call out lies fast.
If you’ve already dealt with one of these incidents, share your experience with peers. We’re all still learning how to handle this.