The Daily Sheeple

Here’s Why Pundits And Democrats Are Worried About So-Called Deepfake Videos

House lawmakers are working to assess the risks that so-called deepfake videos pose to elections and society.

Democratic Rep. Adam Schiff of California believes the videos represent a new use of artificial intelligence that hands bad actors powerful tools of deception. Schiff, who chairs the House Intelligence Committee, also suggested the technology could allow Russian agents to escalate their misinformation campaigns.

“What is a proportionate response should the Russians release a deepfake of Joe Biden to try to diminish his candidacy?” Schiff asked Thursday during a House Intelligence Committee hearing. The Democrat was addressing expert witnesses Danielle Citron, a professor of law at the University of Maryland, and Jack Clark, policy director at OpenAI.

Deepfakes are videos that look deceptively real but are in fact heavily manipulated fabrications. Lawmakers began sounding alarms about such content in January, with some experts warning that these videos will be the next phase in disinformation campaigns and could change people’s perceptions of reality.

Reporters criticized former New York City Mayor Rudy Giuliani in May for circulating a manipulated video of House Speaker Nancy Pelosi. The clip had been slowed down to make Pelosi look and sound drunk. YouTube removed the video, but Facebook left it up.

Things escalated further after an Israeli startup called Canny AI posted a deepfake video on Instagram on June 9 portraying Facebook CEO Mark Zuckerberg saying: “Imagine this for a second: One man, with total control of billions of people’s stolen data, all their secrets, their lives, their futures.” Reporters pressured the platform to remove that video as well.

Citron warned against banning the technology outright but urged lawmakers to consider amending Section 230 of the Communications Decency Act, which currently gives social media companies broad immunity for content that third parties post. She believes the law should be changed to require platforms to adopt reasonable content moderation practices.

Some of the experts at the hearing have firsthand experience building the kinds of generative technologies that make such fakes possible. Clark’s organization, for instance, unveiled a text-generation model in February called GPT-2 that can produce fake news stories from a prompt of just a sentence or two.

GPT-2 is fed a snippet of text and extends it by repeatedly predicting which words are likely to come next. Access to GPT-2 was provided to select media outlets, including Axios, whose reporters fed words and phrases into the text generator and produced an entirely fabricated news story. OpenAI ultimately decided not to release the full model out of concern that bad actors might misuse it.
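To make those “learned predictions of what words might come next” concrete, here is a minimal sketch of prompt-based generation. It assumes the smaller, publicly released GPT-2 model and the third-party Hugging Face transformers library, neither of which is described in the article; it illustrates the general technique, not the tooling Axios was given.

```python
# Minimal sketch of prompt-based text generation with the small, publicly
# released GPT-2 model via the Hugging Face "transformers" library
# (an assumption for illustration; not the setup described in the article).
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# A short prompt, like the sentence or two a reporter might supply.
prompt = "Lawmakers warned on Thursday that manipulated videos"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# The model repeatedly predicts a likely next token and appends it,
# extending the prompt into a longer passage of invented text.
output_ids = model.generate(
    input_ids,
    max_length=80,        # total length (prompt plus continuation) in tokens
    do_sample=True,       # sample from the predicted distribution, not just the top token
    top_k=50,             # restrict sampling to the 50 most likely next tokens
    pad_token_id=tokenizer.eos_token_id,
)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```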

Follow Chris White on Facebook and Twitter

Contributed by Chris White of The Daily Caller News Foundation.

Content created by The Daily Caller News Foundation is available without charge to any eligible news publisher that can provide a large audience. For licensing opportunities of our original content, please contact [email protected].
