Is Hearing Believing in the Modern Era? How AI Calls for a Reshaping of the Legal Landscape

“Fake news” has unsurprisingly become a buzzword in today’s so-called post-truth era, stimulating discussions about ethical journalism and manufactured “truth.”  Leading journalists already warn readers not to believe everything they read and to watch for misleading or blatantly fabricated headlines and articles, yet a new wave of technology threatens to extend this problematic trend to live-streamed events and even daily face-to-face interactions.  Innovations like voice cloning and facial cloning are as detrimental in practice as they are exciting and potentially beneficial in theory.[1]  Although legislation moves at a slow pace, it is imperative that lawmakers take the time to address these new technologies directly, perhaps even with proactive rather than reactive policies.

Researchers like those behind Lyrebird.ai and the members of the Baidu AI team have been working to perfect voice cloning, or “Deep Voice.”[2]  For both groups, the process uses deep learning to analyze samples of a person’s voice and learn to imitate his or her unique inflections and even emotions.[3]  In its purest form, voice cloning can be used to “change the life of everyone that lost their voice to a disease by helping them recover this part of their identities,” as Lyrebird’s version of Obama claims.[4]  Its more general applications include the personalization of GPS units, audiobook narration,[5] and even filmmaking, as rct studio is attempting to do with the help of the Baidu AI team.[6]  Xinjie Ma, head of marketing for rct, compared the studio’s project to the park in the HBO show Westworld, an interactive, android-filled physical world that allows visitors to indulge their fantasies without fear of retribution.  Setting aside the obvious concerns that this comparison raises, it is important to note that AI-generated voices can be used for more nefarious purposes, just as Brundage et al. predict in their report, “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation.”  The authors rightly claim that, if unchecked by some “authentication” method, voice cloning through commercial services such as Lyrebird is “ripe for potential criminal applications, like automated spear phishing and disinformation.”[7]  This ostensibly benign innovation could be abused to undermine trust in public figures and leaders, and possibly even in family and friends.  In essence, voice cloning has radical implications for politics and interpersonal relationships;[8] our often untrustworthy memories, and even history itself, may soon be “photoshopped” perfectly.[9]

A similar technology applied to video has been dubbed “deepfakes,” and it gained infamy on the internet through disinformation campaigns during the 2016 presidential election and pornographic videos that grafted unsuspecting victims’ faces onto actors’ bodies.[10]  Lawmakers around the world have expressed concern about the democratization of software (through companies such as Baidu) that can create fake video and audio.  EU and Polish law in particular have had difficulty protecting voice as a personal right and the audio files used to recreate and mimic voices as personal data.[11]  In effect, the EU and Poland are attempting to use preexisting legislation to address litigation that pertains to new and often uncharted digital territory.  As Brundage et al. claim in their report, prevention is key to addressing the “landscape of threats” that comes with new technology, particularly AI, and that could adversely affect online political discourse and even everyday discussion.[12]  An ex ante rather than ex post view of technology law would use what is known about the link between AI and security as a prophylaxis, instead of searching for a cure after the problems that inevitably accompany the democratization of AI have already arisen.  Such an approach could prevent threats to privacy like those in the case of Scarlett Johansson, one of many victims in the ongoing battle against “deepfake” videos.[13]  For now, however, it is imperative that courts address AI-enabled abuses through existing doctrines such as “criminal impersonation” and defamation.[14]

[1] Jacob Gershman, “Imitation Game: The Legal Implications of Voice Cloning,” WSJ (blog), April 25, 2017, https://blogs.wsj.com/law/2017/04/25/imitation-game-the-legal-implications-of-voice-cloning/.

[2] Samantha Cole and Emanuel Maiberg, “‘Deep Voice’ Software Can Clone Anyone’s Voice With Just 3.7 Seconds of Audio,” Motherboard (blog), March 7, 2018, https://motherboard.vice.com/en_us/article/3k7mgn/baidu-deep-voice-software-can-clone-anyones-voice-with-just-37-seconds-of-audio.

[3] Sercan O. Arik et al., “Neural Voice Cloning with a Few Samples,” February 14, 2018, https://arxiv.org/abs/1802.06006v3.

[4] “Can You Believe Your Own Ears? With New ‘Fake News’ Tech, Not Necessarily,” NPR.org, accessed April 8, 2019, https://www.npr.org/2018/04/04/599126774/can-you-believe-your-own-ears-with-new-fake-news-tech-not-necessarily.

[5] “Voice Cloning | ISpeech,” accessed April 8, 2019, https://www.ispeech.org/voice-cloning.

[6] “The Team behind Baidu’s First Smart Speaker Is Now Using AI to Make Films,” TechCrunch (blog), accessed April 8, 2019, http://social.techcrunch.com/2019/04/07/rct-studio-profile/.

[7] Miles Brundage et al., “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation” (February 2018), accessed April 8, 2019, https://img1.wsimg.com/blobby/go/3d82daa4-97fe-4096-9c6b-376b92c619de/downloads/MaliciousUseofAI.pdf?ver=1553030594217.

[8] “Can You Believe Your Own Ears?”

[9] James Vincent, “Watch Jordan Peele Use AI to Make Barack Obama Deliver a PSA about Fake News,” The Verge, April 17, 2018, https://www.theverge.com/tldr/2018/4/17/17247334/ai-fake-news-video-barack-obama-jordan-peele-buzzfeed.

[10] Tom Simonite, “Will ‘Deepfakes’ Disrupt the Midterm Election?,” Wired, November 1, 2018, https://www.wired.com/story/will-deepfakes-disrupt-the-midterm-election/.

[11] “Voice Cloning as a Global New Technology and Its Challenges for EU and Polish Law,” KG Legal, July 7, 2017, https://www.kg-legal.eu/info/it-new-technologies-media-and-communication-technology-law/voice-cloning-as-a-global-new-technology-and-its-challenges-for-eu-and-polish-law/.

[12] Brundage et al., “The Malicious Use of Artificial Intelligence.”

[13] Drew Harwell, “Fake-Porn Videos Are Being Weaponized to Harass and Humiliate Women: ‘Everybody Is a Potential Target,’” Washington Post, accessed April 8, 2019, https://www.washingtonpost.com/technology/2018/12/30/fake-porn-videos-are-being-weaponized-harass-humiliate-women-everybody-is-potential-target/.

[14] Gershman, “Imitation Game.”
