Microsoft urges US Congress to regulate deepfakes
This call comes in response to the growing trend of cybercriminals exploiting AI technology for malicious purposes

Jul 30, 2024, 05:48 pm

What's the story

Microsoft has issued a plea to the US Congress, urging it to enact legislation against AI-generated deepfakes. The tech giant's Vice Chair and President, Brad Smith, emphasized the need for immediate action from policymakers. This call is aimed at safeguarding elections and protecting vulnerable groups such as seniors and children from fraudulent activities involving deepfakes.

Legislative push

Smith advocates for comprehensive deepfake fraud statute

Smith highlighted the urgency of this issue in a blog post, stating that "our laws will also need to evolve to combat deepfake fraud." He is advocating for a comprehensive deepfake fraud statute in the US. This proposed legislation would equip law enforcement with a legal framework to prosecute scams and frauds perpetrated using AI technology.

Legal amendments

Smith urges lawmakers to update laws on AI-generated content

Smith is also urging lawmakers to revise federal and state laws related to child sexual exploitation, abuse, and non-consensual intimate imagery. He wants these laws to encompass AI-generated content. This call comes in response to the growing trend of cybercriminals exploiting AI technology for malicious purposes.

Legislative action

Senate's recent bill targets sexually explicit deepfakes

The US Senate has recently passed a bill targeting sexually explicit deepfakes. The bill would allow victims of non-consensual sexually explicit AI deepfakes to sue their creators for damages. It was passed following incidents in which students fabricated explicit images of female classmates using AI, and internet trolls created graphic AI-generated images of celebrities.

Tech safeguards

Microsoft tightens safety controls on its AI products

In response to these issues, Microsoft has also tightened safety controls on its own AI products. The move followed the discovery of a loophole in its Designer AI image creator that allowed users to generate explicit images of celebrities. Smith stated that "the private sector has a responsibility to innovate and implement safeguards that prevent the misuse of AI."

Regulatory measures

FCC outlaws robocalls with AI-generated voices

The Federal Communications Commission (FCC) has already taken steps to combat AI misuse by outlawing robocalls with AI-generated voices. However, generative AI makes it easy to create fake audio, images, and video — a trend already evident in the lead-up to the 2024 presidential election. Elon Musk recently shared a deepfake video spoofing Vice President Kamala Harris. The clip went viral on social media platform X, garnering nearly 123 million views.