AI, deepfakes and political campaigns: Massachusetts lawmakers take up new rules

BOSTON — Concerns about the potential use of computer-generated images and audio recordings that could confuse and alienate voters prompted Massachusetts lawmakers to consider a bill Tuesday that would require all AI-generated campaign materials to carry a warning label.

The bill, proposed by Sen. Barry Finegold, D-Andover, and Rep. Frank Moran, D-Lawrence, would ban the use of synthetic media in the 90 days before an election unless the material carries a disclosure that it was manipulated or generated by artificial intelligence.

The bill would also prohibit the intentional manipulation of materials with the intent to confuse or mislead voters by portraying a political candidate or party in a manner inconsistent with their stated platforms and policies.

The legislation also establishes consequences for violators: the ability to file a civil suit, fines up to $10,000 per incident and “any other relief the court deems appropriate.”

Christopher Gilrein, executive director of TechNet, an organization that calls itself the voice of the innovation economy, said technology companies support the bill.

Technology companies support the bill and want protection for AI makers

“But we want it to be similar to the bills that have been passed in the other states that have introduced similar legislation,” Gilrein said.

Gilrein, in his testimony before the Joint Committee on Election Laws, requested that liability be attributed only to the creator of the material, not to the tools and programs used, and not to the platforms that distribute the material.

“Artificial intelligence has the potential to solve our most pressing problems. We support clear disclosure when AI is used in communications,” Gilrein said. He also asked that the bill allow developers to examine flagged material to determine who created it and what tools were used.

Gilrein cited a recent deepfake in the New Hampshire primary: a robocall using a facsimile of President Joe Biden’s voice that falsely suggested voters who cast ballots in the January primary would be barred from voting in November. The call went out on a Sunday evening, just two days before Tuesday’s primary election.

Biden was not on the ballot in New Hampshire, but won the primary on write-in votes. Investigators traced the hoax call to a political consultant who claimed it was intended as a wake-up call about the spread of AI-generated campaign material and its potential for misuse.

Using deepfakes to influence voters’ choices could lead to a loss of confidence in the American electoral system, said Hamid Ekbia, a professor at Syracuse University who speaks for the Academic Alliance for AI Policy, a coalition of 40 major universities. The social and political implications, Ekbia said, include increased polarization and negative partisanship across the political spectrum.

13 states have already passed laws

According to Public Citizen, a national advocacy group, thirteen states have already passed laws regulating AI.

“The federal government has been slow to act,” said Craig Holman, the organization’s lobbyist. “It is up to the states to act.”

He described the 2024 election cycle as the first in which AI could have a major impact on campaigns and voter choice.

“It’s almost impossible to tell the difference between what’s real and what’s computer-generated,” Holman said.

He cited the 2023 mayoral campaign in Chicago, in which a widely circulated deepfake appeared to show a candidate endorsing police brutality. The candidate, Paul Vallas, ultimately finished second in a close race.

The bill provides protections for traditional and social media platforms; a news outlet would be allowed to use the doctored material as part of a news story, as long as the news outlet makes it clear that the material is false.

News outlets would also not be liable if they make a good-faith effort to determine the veracity of the material before using it in a story or broadcast. If they knowingly broadcast fake material, they must prominently warn the public that it does not reflect the actual speech or conduct of the candidate depicted.