Deepfakes: Criminalizing Forged Videos

New legislation criminalizing deepfakes has been introduced in the U.S. Senate.


Video and audio recordings are widely relied on as “proof” that something happened or as a way to document how events unfolded. Investigators, law enforcement, prosecutors, defense attorneys, and juries routinely rely on digital recordings to evaluate an event and reveal the “truth.” Our smartphones and tablets allow digital recordings to serve as “first-hand witnesses,” so we don’t have to put our trust in the memories, perspectives, or eyes and ears of the people present. Social media is often where these digital witnesses are shared and viewed, and they can spur action on behalf of victims or against offenders.

Now new video editing apps make it easy for almost anyone to forge a video (swapping out people’s faces, for example), and the techniques used make it very difficult even for forensic experts to discern whether a recording is fake.

These face-swapped videos are called “deepfakes,” and they have alarmed social media platforms and legislators alike as a convincing method of spreading disinformation and eroding the trust we place in audiovisual evidence.

Last month, a bill, S. 3805, was introduced to criminalize the malicious creation and knowing distribution of deepfakes. The Malicious Deep Fake Prohibition Act would be the first federal law criminalizing deepfakes specifically. The bill targets not only individual creators but also social media and other platforms that distribute this kind of multimedia content. The bill’s penalties call for up to two years in prison for creating or distributing a deepfake, and up to ten years if the deepfake affects the conduct of a public official, an election, or foreign relations, or if it facilitates violence.

The New York Assembly has a bill working its way through the legislature to ban pornographic videos made using artificial intelligence, the machine-learning technology that deepfake apps employ to make convincing fakes. Under the New York bill, anyone making these fake videos would be committing fraud.

Those opposed to bills criminalizing deepfake techniques include movie and entertainment giants and civil rights groups concerned that the legislation would be too restrictive, infringe on First Amendment rights, and curtail creativity in storytelling and depictions.

The Electronic Frontier Foundation argues that existing state and federal laws already cover the criminal acts and civil harms arising from deepfakes, so new legislation isn’t needed. If deepfakes are used to pressure people to pay money to keep a video or audio recording from being released, that is criminal extortion. If they are used to harass, criminal harassment laws that prohibit behavior that torments or terrorizes another come into play. In California, a prosecutor used identity theft laws to bring charges against a man who superimposed his ex-wife’s image into pornographic images.

Federal law makes “cyberstalking” a crime and encompasses various forms of threats made by way of electronic communications. The EFF also argues that civil laws covering defamation, the right to control one’s own image, and copyright allow for adequate legal remedies against deepfakes.

Proponents of legislation say that comprehensive criminal statutes are needed to deter creators of deepfakes, which often invade privacy, assassinate an individual’s character, or use technology as a weapon to inflict non-physical, but very real, harm.

We’ll be watching the deepfake debate and will report back on legal developments.

Jeremy N. Goldman is a distinguished Orange County criminal defense attorney and trial lawyer. He has a track record of more than 22 years of successfully defending white collar crime and Internet crime cases, often getting charges dismissed or winning acquittals after jury trial. He is one of a small percentage of criminal attorneys certified as a specialist in criminal law by the State Bar of California Board of Legal Specialization.
