
AI Could Still Wreck the Presidential Election

An article for The Atlantic about U.S. federal agencies' failure to regulate AI in time for the 2024 election.

This article was written with Bruce Schneier and originally published in The Atlantic on 2024-09-24.

For years now, AI has undermined the public’s ability to trust what it sees, hears, and reads. The Republican National Committee released a provocative ad offering an “AI-generated look into the country’s possible future if Joe Biden is re-elected,” showing apocalyptic, machine-made images of ruined cityscapes and chaos at the border. Fake robocalls purporting to be from Biden urged New Hampshire residents not to vote in the 2024 primary election. This summer, the Department of Justice cracked down on a Russian bot farm that was using AI to impersonate Americans on social media, and OpenAI disrupted an Iranian group using ChatGPT to generate fake social-media comments.

It’s not altogether clear what damage AI itself may cause, though the reasons for concern are obvious—the technology makes it easier for bad actors to construct highly persuasive and misleading content. With that risk in mind, there has been some movement toward constraining the use of AI, yet progress has been painfully slow in the area where it may count most: the 2024 election.

Two years ago, the Biden administration issued a blueprint for an AI Bill of Rights aiming to address “unsafe or ineffective systems,” “algorithmic discrimination,” and “abusive data practices,” among other things. Then, last year, Biden built on that document when he issued his executive order on AI. Also in 2023, Senate Majority Leader Chuck Schumer held an AI summit in Washington that included the centibillionaires Bill Gates, Mark Zuckerberg, and Elon Musk. Several weeks later, the United Kingdom hosted an international AI Safety Summit that led to the serious-sounding “Bletchley Declaration,” which urged international cooperation on AI regulation. The risks of AI fakery in elections have not sneaked up on anybody.

Yet none of this has resulted in changes that would regulate the use of AI in U.S. political campaigns. Even worse, the two federal agencies with a chance to do something about it have punted, very likely until after the election.

On July 25, the Federal Communications Commission issued a proposal that would require political advertisements on TV and radio to disclose if they used AI. (The FCC has no jurisdiction over streaming, social media, or web ads.) That seems like a step forward, but there are two big problems. First, the proposed rules, even if enacted, are unlikely to take effect before early voting starts in this year’s election. Second, the proposal immediately devolved into a partisan slugfest. A Republican FCC commissioner alleged that the Democratic National Committee was orchestrating the rule change because Democrats are falling behind the GOP in using AI in elections. Plus, he argued, this was the Federal Election Commission’s job to do.

Yet last month, the FEC announced that it won’t even try making new rules against using AI to impersonate candidates in campaign ads through deepfaked audio or video. The FEC also said that it lacks the statutory authority to make rules about misrepresentations using deepfaked audio or video. And it lamented that it lacks the technical expertise to do so, anyway. Then, last week, the FEC compromised, announcing that it intends to enforce its existing rules against fraudulent misrepresentation regardless of the technology used to carry it out. Advocates for stronger rules on AI in campaign ads, such as Public Citizen, did not find this nearly sufficient, characterizing it as a “wait-and-see approach” to handling “electoral chaos.”

Perhaps this is to be expected: The freedom of speech guaranteed by the First Amendment generally permits lying in political ads. But the American public has signaled that it would like some rules governing AI’s use in campaigns. In 2023, more than half of Americans polled responded that the federal government should outlaw all uses of AI-generated content in political ads. Going further, in 2024, about half of surveyed Americans said they thought that political candidates who intentionally manipulated audio, images, or video should be prevented from holding office or removed if they had won an election. Only 4 percent thought there should be no penalty at all.

The underlying problem is that Congress has not clearly given any agency the responsibility to keep political advertisements grounded in reality, whether in response to AI or old-fashioned forms of disinformation. The Federal Trade Commission has jurisdiction over truth in advertising, but political ads are largely exempt—again, part of our First Amendment tradition. The FEC’s remit is campaign finance, but the Supreme Court has progressively stripped its authority. Even where it could act, the commission is often stymied by political deadlock. The FCC has more evident responsibility for regulating political advertising, but only in certain media: broadcast, robocalls, text messages. Worse yet, the FCC’s rules are not exactly robust. It has actually loosened rules on political spam over time, leading to the barrage of messages many receive today. (That said, in February, the FCC did unanimously rule that robocalls using AI voice-cloning technology, like the fake Biden robocalls in New Hampshire, are already illegal under a 30-year-old law.)

It’s a fragmented system, with many important activities falling victim to gaps in statutory authority and a turf war between federal agencies. And as political campaigning has gone digital, it has entered an online space with even fewer disclosure requirements or other regulations. No one seems to agree where, or whether, AI is under any of these agencies’ jurisdictions. In the absence of broad regulation, some states have made their own decisions. In 2019, California was the first state in the nation to prohibit the use of deceptively manipulated media in elections, and has strengthened these protections with a raft of newly passed laws this fall. Nineteen states have now passed laws regulating the use of deepfakes in elections.

One problem that regulators have to contend with is the wide applicability of AI: The technology can simply be used for many different things, each one demanding its own intervention. People might accept a candidate digitally airbrushing their photo to look better, but not doing the same thing to make their opponent look worse. We’re used to getting personalized campaign messages and letters signed by the candidate; is it okay to get a robocall with a voice clone of the same politician speaking our name? And what should we make of the AI-generated campaign memes now shared by figures such as Musk and Donald Trump?

Despite the gridlock in Congress, these are issues with bipartisan interest. This makes it conceivable that something might be done, but probably not until after the 2024 election and only if legislators overcome major roadblocks. One bill under consideration, the AI Transparency in Elections Act, would instruct the FEC to require disclosure when political advertising uses media generated substantially by AI. Critics say, implausibly, that the disclosure is onerous and would increase the cost of political advertising. The Honest Ads Act would modernize campaign-finance law, extending FEC authority to definitively encompass digital advertising. However, it has languished for years because of reported opposition from the tech industry. The Protect Elections From Deceptive AI Act would ban materially deceptive AI-generated content from federal elections, as in California and other states. These are promising proposals, but libertarian and civil-liberties groups are already signaling challenges to all of these on First Amendment grounds. And, vexingly, at least one FEC commissioner has directly cited congressional consideration of some of these bills as a reason for his agency not to act on AI in the meantime.

One group that benefits from all this confusion: tech platforms. When few or no evident rules govern political expenditures online and uses of new technologies like AI, tech companies have maximum latitude to sell ads, services, and personal data to campaigns. This is reflected in their lobbying efforts, as well as the voluntary policy restraints they occasionally trumpet to convince the public they don’t need greater regulation.

Big Tech has demonstrated that it will uphold these voluntary pledges only if they benefit the industry. Facebook once, briefly, banned political advertising on its platform. No longer; now it even allows ads that baselessly deny the outcome of the 2020 presidential election. OpenAI’s policies have long prohibited political campaigns from using ChatGPT, but those restrictions are trivial to evade. Several companies have volunteered to add watermarks to AI-generated content, but they are easily circumvented. Watermarks might even make disinformation worse by giving the false impression that non-watermarked images are legitimate.

This important area of public policy should not be left to corporations, yet Congress seems resigned not to act before the election. Schumer hinted to NBC News in August that Congress may try to attach deepfake regulations to must-pass funding or defense bills this month to ensure that they become law before the election. More recently, he has pointed to the need for action “beyond the 2024 election.”

The three bills listed above are worthwhile, but they are just a start. The FEC and FCC should not be left to snipe at each other about what territory belongs to which agency. And the FEC needs more significant, structural reform to reduce partisan gridlock and enable it to get more done. We also need transparency into and governance of the algorithmic amplification of misinformation on social-media platforms. That requires limiting the pervasive influence of tech companies and their billionaire investors through stronger lobbying and campaign-finance protections.

Our regulation of electioneering never caught up to AOL, let alone social media and AI. And deceptive videos harm our democratic process, whether they are created by AI or actors on a soundstage. But the urgent concern over AI should be harnessed to advance legislative reform. Congress needs to do more than stick a few fingers in the dike to control the coming tide of election disinformation. It needs to act more boldly to reshape the landscape of regulation for political campaigning.

This post is licensed under CC BY-NC-ND 4.0 by the author.