
Maryland lawmakers are considering new legislation aimed at limiting the use of deceptive artificial intelligence–generated media in elections, as concerns grow over the potential for “deepfake” videos and audio to mislead voters.
The proposals — House Bill 145 and its companion, Senate Bill 141 — are titled Election Law – Election Misinformation, Election Disinformation, and Deepfakes. The bills would prohibit the knowing creation or distribution of materially deceptive synthetic media related to elections when intended to mislead voters about a candidate or election official.
HB 145 received a hearing before the House Government, Labor, and Elections Committee on February 4, 2026. SB 141 is advancing in the Senate on a parallel track.
What the Bills Would Do
Under the legislation, it would be unlawful to knowingly share or produce AI-generated audio, video, or images that falsely portray a candidate or election official in a way likely to influence voter decisions. The focus is on material deception — content that reasonably appears authentic and could mislead the public if left uncorrected.
The bills are designed to address scenarios in which fabricated recordings or videos could surface late in a campaign, when fact-checking or corrections may not reach voters in time.
Supporters say the legislation responds to rapid advances in generative AI technology, which has made the creation of realistic synthetic media faster, cheaper, and more accessible.
What the Bills Do Not Cover
The proposals include exemptions intended to safeguard constitutionally protected speech. According to legislative summaries and analysis, the bills do not apply to satire, parody, commentary, or legitimate news reporting. They also do not ban AI-generated content outright, focusing instead on deceptive use tied directly to elections.
The legislation requires intent — meaning accidental sharing or clearly labeled synthetic media would not fall under the prohibition.
Free Speech and Enforcement Questions
As with similar proposals in other states, the Maryland bills raise questions about how “materially deceptive” content would be defined and enforced in real time during campaigns. Civil liberties advocates have cautioned that election-related speech restrictions must be narrowly tailored to avoid chilling lawful political expression.
Election officials and lawmakers supporting the bills argue that the intent requirement and exemptions are designed to strike that balance.
Broader Context
Maryland is among a growing number of states examining how election laws should adapt to artificial intelligence technologies ahead of the 2026 and 2028 election cycles. Congress has also debated federal standards, though no comprehensive national framework has been adopted.
HB 145 and SB 141 remain under committee consideration, and no floor votes have been scheduled as of early February.
Why This Matters
Advances in artificial intelligence have made it easier to create realistic audio and video that can appear to show candidates or election officials saying or doing things that never happened. During fast-moving election cycles, especially close races, misleading content can spread widely before it can be verified or corrected.
Lawmakers considering HB 145 and SB 141 say the bills are intended to address that risk by focusing on intentional deception rather than general political speech. At the same time, election-related regulations involving speech raise constitutional considerations, particularly under the First Amendment, which protects political expression.
How Maryland defines “materially deceptive” content and enforces these rules could shape how voters receive information during future elections — and may influence similar efforts in other states as artificial intelligence becomes more common in political campaigns.