The state Legislature is fielding a first-pass measure that aims to stanch what is expected to be a rising tide of deepfakes in the run-up to November. It is an important stepping stone to more comprehensive safeguards against a burgeoning election-season threat.
House Bill 1766, which crossed over to the Senate Tuesday, seeks to restrict the spread of political misinformation during the 90 days before a primary or general election by moderating the flow of “synthetic media” messaging in advertisements.
Like similar laws in other states, Hawaii’s HB 1766 includes a subsection exempting deepfake ads that carry prominent disclosures stating that the media they contain has been manipulated. The exemption is commonly adopted as a relief valve for First Amendment concerns.
Traditional media has long stood as the mainstay of voter education, but the locus of learning has shifted. Social media, email threads, text exchanges and other, more immediate forms of information sharing now vie with news network talking heads. Adding to that fire hose of data — and muddying its waters — are deepfakes.
One notable example this week: Donald Trump supporters generated and distributed multiple images depicting the former president with groups of Black people — the implied conceit being that Trump is winning over Black voters.
Defined broadly, a deepfake is intentionally deceptive audio or visual media that has been altered, manipulated or created from whole cloth with the assistance of powerful artificial intelligence tools. Wielded by political operatives, AI-generated content can become an effective means of spreading misinformation — X, formerly Twitter, and Facebook are among the most popular channels — potentially knocking rival campaigns off course. Posts on X are consumed within seconds and can be amplified to viral levels in minutes.
As this new brand of electioneering inches closer to becoming de rigueur, federal agencies are weighing stopgap measures while action is underway at the state level. Legislation combating deepfakes is already on the books in at least six jurisdictions. However, public and private efforts to check deepfake proliferation are running up against pushback from conservative groups that cry foul over perceived censorship. The U.S. Supreme Court last week heard oral arguments in two cases involving Florida and Texas laws that bar companies like X and Facebook from moderating content.
This is all well and good for campaigns that follow the rules, but the democratized nature of AI puts powerful tools in the hands of many. In January, New Hampshire voters received a bogus robocall mimicking President Joe Biden’s voice, imploring them not to participate in the state’s presidential primary. The audio file was not built by a specialist or concocted by a political operative using sophisticated equipment — it was created by a down-on-his-luck New Orleans magician. Last month, longtime political consultant Steve Kramer admitted to commissioning the audio snippet for $150.
With relatively easy access and a shallow learning curve, AI tools allow a broad swath of Americans to generate convincing deepfakes.
It is clear stricter curbs on deepfakes are needed at higher levels of government, but AI breakthroughs occur weekly — sometimes daily. It is an arms race Congress cannot hope to win, one that throws into stark relief the need for strong guardrails. Without intervention from host platforms, government bodies have little recourse but to rely on preventive laws to halt the spread of deepfakes.
Hawaii’s proposed solution is laudable. It seeks to stifle attempts to deceive the voting public through enforceable prohibitions and, importantly, provides forward-looking guidelines for questionable content.
HB 1766 will be the bedrock on which follow-up legislation can be built to thwart the inexorable proliferation of deepfake — and other AI-based — technology.
But more needs to be done to bring the public into the fold. With regulation still in its infancy, awareness and education will be key in these early days of AI political advocacy.