First Amendment Rights for Non-Human Speakers: A.I.’s Place in Free Speech Law

Introduction

The existence of artificial intelligence with significant speech capabilities has been theorized for decades. With the advent of advanced language models such as ChatGPT [1], it is becoming increasingly difficult to distinguish between the written speech of artificial intelligence and that of humans. As these artificial intelligence speech models continue to be refined, the language they generate will only become more indistinguishable from that of a person. One of the defining characteristics of these models is that the speech they produce in response to user prompts is unique and not explicitly programmed by their developers. This grants tools like ChatGPT a degree of autonomy: oftentimes even their programmers cannot predict the response that will be generated.

As we enter this brave new world of artificial speech and speakers, important First Amendment questions arise along with it. To what extent does speech generated by artificial intelligence receive First Amendment protection, if at all? Consequently, what problems may arise from the application of current First Amendment case law to this emerging technology? This paper closely examines these questions and asserts that First Amendment speech doctrine cannot be entirely decoupled from humans, as constitutional rights are held solely by individuals. It will then showcase the problems that arise when applying the speaker-identity-based line of jurisprudence highlighted by the Citizens United v. Federal Election Commission ruling to artificial intelligence.

Background: What Is Artificial Intelligence

Artificial intelligence is legally defined as “the use of machine learning technology and algorithms to perform tasks, or to make rules and/or predictions based on existing datasets” [2]. These programs are trained using massive internet datasets. At the same time, programmers set general rules the software must follow. The software then autonomously determines the correct output for any given input based on those conditions. Artificial intelligence has mostly streamlined general computational and organizational tasks, but recent advances in the field have been taking place at a breakneck pace.

AI speech software can be classified under two categories: “weak artificial intelligence” [3] and “strong artificial intelligence” [4]. The artificial intelligence software and tools available to the general public, including Apple’s Siri, Amazon’s Alexa, and self-driving cars, have been classified as “weak artificial intelligence.” These AI programs are trained to perform specific tasks, while “strong artificial intelligence” refers to AI programs that aim to artificially duplicate the intelligence and cognitive abilities of the human mind. Strong AI has remained purely theoretical for decades; however, recent AI language modeling programs like ChatGPT have shown the world that we might be much closer to blurring the line between artificial speakers and human speakers than we previously thought. Due to these developments, it is imperative that we create a framework to address how we should protect the speech of “strong artificial intelligence,” as these questions will inevitably arise in the near future.

The First Amendment & The Extent of its Protection

First Amendment protections, like all constitutional protections, are not absolute. The United States Supreme Court has deemed certain categories of speech sufficiently worthy of restriction when weighed against a compelling state interest. These restrictable categories include, but are not limited to, obscenity, defamation, incitement, fighting words, and true threats [5, 6, 7, 8, 9]. Until recently, the Supreme Court’s free speech doctrine mainly focused on the ability to restrict speech based on its content and viewpoint, as seen through the content categories above and their associated rulings. However, there has been a distinct shift in recent decades from content and viewpoint restrictions to restrictions based on the identity of a speaker; this shift will be examined later through an analysis of the ruling in Citizens United v. FEC [10]. For now, what is the rationale for restricting speech at all?

While we valiantly protect speech and idealize the freedom of speech and its necessity in maintaining a free society, we also understand that speech has the capacity to cause genuine societal harm. The categories of speech most likely to cause harm of this kind have generally been permitted to be regulated under a weaker standard than strict scrutiny, through various tests the Supreme Court may devise. Speech regulation of this type is important because it allows the government to mitigate the societal harms and the negative impact certain forms of speech can have. It is important to note that these exceptions are incredibly nuanced, and the Court understands that it is vital not to misconstrue legitimate speech as damaging under one of these categories.

Due to this fact, the Court has been incredibly permissive of speech that many in society do indeed view as harmful. This point is firmly underscored by the landmark ruling in Snyder v. Phelps [11], where the majority ruled that even morally offensive and hateful speech relating to public issues is protected by the First Amendment “to ensure that we do not stifle public debate” [12]. In a similarly permissive fashion, the Supreme Court has begun to extend speech protections away from strictly human speakers. Now, non-human entities like corporations enjoy similar First Amendment protections to those afforded to individuals and are governed by much of the jurisprudence outlined above. One of the clearest examples of the shift away from strictly human speakers, and the strongest evidence for affording artificial intelligence speech rights, is in the case Citizens United v. Federal Election Commission.

Citizens United & Its Application to Artificial Intelligence

Many of the previous landmark First Amendment cases focused explicitly on targeting content or viewpoint of speech. In contrast, the Citizens United ruling was distinct in that it focused on proscribing speech based on a speaker’s identity. This fact makes it useful for advocates who argue that the law already suggests we extend speech protections to artificial intelligence. Citizens United v. Federal Election Commission hinged upon the question of whether §441b of the Bipartisan Campaign Reform Act of 2002 (BCRA) was unconstitutional with respect to the First Amendment. §441b prohibited “corporations and unions from using their general treasury funds to make independent expenditures for speech that is an ‘electioneering communication’ or for speech that expressly advocates the election or defeat of a candidate” [13]. Whether or not corporations receive First Amendment protection was crucial in determining if §441b violated the First Amendment, and despite ruling in favor of corporate speech rights, the decision is not as favorable to extending artificial intelligence speech rights as some might suggest.

A. A Brief Summary

Citizens United is a conservative non-profit organization that produced a negative documentary about Senator Clinton, known as Hillary: The Movie, leading up to the 2008 presidential primaries. Citizens United attempted to promote the movie via on-demand television, but when reviewing a legal challenge to its actions, a federal district court blocked the airing of the movie. The court found the organization violated the BCRA because Hillary: The Movie constituted “express advocacy” for the electoral defeat of Senator Clinton [14]. The organization then appealed to the Supreme Court, and the case was decided in January 2010. The Supreme Court ruled in favor of Citizens United, finding that §441b placed an unconstitutional burden on political speech, and also overruled a prior decision, Austin v. Michigan Chamber of Commerce, which had permitted restrictions on corporate political speech in order to further the “compelling state interest” of combating speech distortion by large, wealthy corporations [15].

In the majority opinion, Justice Kennedy wrote extensively about identity-based restrictions on speech and cited a variety of previous cases to support the unconstitutionality of said restrictions. First National Bank of Boston v. Bellotti [16], which was cited heavily in the majority opinion, held that the First Amendment allows speakers, individual or corporate, to discuss matters of public concern, and stated that “corporations and other associations, like individuals, contribute to the ‘discussion, debate, and the dissemination of information and ideas’ that the First Amendment seeks to foster” [17]. Kennedy also held in the majority opinion that:

"The Government may not by these means deprive the public of the right and privilege to determine for itself what speech and speakers are worthy of consideration. The First Amendment protects speech and speaker, and the ideas that flow from each [18]."

Both the Bellotti and Citizens United decisions clearly lay out the emerging line of jurisprudence dedicated to protecting speech based on the identity of the speaker and grant First Amendment protections to “non-human” entities known as corporations. While proponents of extending First Amendment protections to artificial intelligence will find the strongest legal rationale for extending protections away from humans in these cases, the granting of constitutional speech protections to corporations is distinctly different from granting these protections to artificial intelligence.

B. Constitutional Rationale

Why do corporations receive First Amendment protections when the language of the Constitution says nothing about corporations receiving them? The answer lies in examining the rationale for extending protections to corporations in the first place. While the majority opinion in Citizens United hints at the rationale, it is made much more explicit in the concurring opinion filed by Justice Scalia, who writes that the basis of First Amendment protections for corporations is the constitutional rights of the individual. In the concurrence, Justice Scalia states that:

"When the Framers ‘constitutionalized the right to free speech in the First Amendment, it was the free speech of individual Americans that they had in mind.’ That is no doubt true. All the provisions of the Bill of Rights set forth the rights of individual men and women—not, for example, of trees or polar bears. But the individual person’s right to speak includes the right to speak in association with other individual persons [19]."

Corporate speech protections are derived from the protections afforded to individual speakers, and the Court’s rationale is that individual speakers do not lose these rights when they form an association that takes the form of a corporation. Corporate speech, according to the Court, represents the speech and interests of individuals who make up that corporation. This line of reasoning is a massive roadblock to extending the protections afforded by the First Amendment to include speech created by artificial intelligence. These rights, as originally conceived, were meant to protect individual citizens rather than wholly non-human entities. As Scalia says, the Bill of Rights sets forth the rights of individual men and women, not trees, polar bears, or presumably artificial intelligence.

Should Speech by Artificial Intelligence be Protected?

A closer look at the analysis in Citizens United casts doubt on the argument that the ruling sets a precedent that could be used to justify First Amendment protections for artificial speakers. That being said, the more important argument is whether or not we should go further, allowing additional restrictions on AI speech and explicitly denying artificial speakers protections that human speakers otherwise enjoy. As referenced in Citizens United, constitutional protections are afforded to individuals; while these protections may be extrapolated to entities, this is possible only because those entities are still associations of individuals. Strong artificial intelligence is distinct from corporate entities in that it is able to represent and convey its own interests independently. It is not simply directly or indirectly conveying the speech of individuals protected by the Constitution. In addition, the government has a compelling state interest in restricting the speech of these artificial entities. This compelling state interest, combined with the reality that artificial speakers are not currently protected under the First Amendment, gives significant justification for restricting the speech of AI speakers, and will be examined below.

A. Constitutional Rights are Retained by Individuals

Restrictions on speech by strong AI speakers do not violate an individual citizen’s constitutional rights; therefore, there should clearly be a weaker constitutional hurdle to cross when justifying restrictions on AI speech than on human speech. As we weigh speech rights against a potentially legitimate aim of the government when determining if speech can be restricted, we must examine whose rights are being infringed upon when we permit restrictions of AI speech. In addition, we must look at what legitimate purposes could justify the government in restricting this speech. Starting with the former question, there is no precedent for extending constitutional rights to wholly non-human entities. Although Citizens United, along with other prior Supreme Court decisions, extended speech protections to corporations, it has already been identified that one of the main lines of justification behind this was that corporations are associations of individuals who do have constitutional rights. According to the Court, it would be problematic to suggest that these individuals lose their First Amendment rights once they organize into a corporation.

While the extension of First Amendment rights to corporate entities has been controversial in American public discourse, at the heart of this reasoning are the constitutional rights afforded to human citizens. However, trying to apply this analysis to artificial speech creates problems when identifying whose First Amendment rights are being exerted. Corporate speech is still determined, created, and actualized by people; speech by strong artificial intelligence is created independently of humans and does not necessarily represent, even indirectly, the speech of any individuals who are afforded First Amendment rights. While arguments can be made that it might reflect the speech of the programmers who developed and created the software [20], one of the defining features of this software is its ability to generate unique speech outcomes not known by its programmers from the outset. Strong AI, in theory, will not just be able to create unique speech outcomes, but will actually have the intellect and capacity to reason independently of any programmer, making it a de facto individual. Applying precedent from cases like Citizens United becomes problematic because artificial intelligence is not an “association of individuals” [21], but rather acts as if it were an individual in its own right.

There is another line of emerging judicial reasoning favorable to extending protections to artificial intelligence: that nonhuman speech can and should be protected because of the constitutional rights that listeners have to hear speech of all kinds [22]. However, reframing the First Amendment as protecting the rights of listeners, or as “a right to know” [23], is a relatively recent legal development. The First Amendment has traditionally been viewed as the “freedom to express oneself and communicate with others without interference from the state” [24]. Some scholars believe that a right to know information directly follows from the Enlightenment ideals behind the First Amendment, and that its protections therefore encompass listeners’ access to speech. However, even proponents of this approach concede that “rather than being a fundamental human right, like the freedom of speech, the right to know belongs more to the realm of administrative law” [25]. While proponents argue that we should reformulate protections to encompass a right to know, the historical context of the Constitution, its text, and the Supreme Court’s First Amendment jurisprudence have all reinforced the idea that this right protects speakers and speech, not listeners. The listener-rights approach makes a stronger case in favor of substantial protections for AI speech because it invokes the rights of a real individual, but it overlooks the consequences and implications that arise when extending these rights beyond humans. For this reason, the argument in favor of protecting the speech of artificial intelligence based on the rights of the listener to receive information is not sufficiently backed by the Constitution.

B. The Legitimate State Interests & Problems with AI Protection

The main reason the Court has permitted restrictions of speech is to enable the government to pursue legitimate compelling state interests. Compelling state interests generally include “regulations vital to the protection of public health and safety, including the regulation of violent crime, the requirements of national security and military necessity, and respect for fundamental rights” [26]. A state interest is usually compelling when it is necessary for carrying out legitimate governmental duties, rather than “a matter of choice” [27]. Additional speech restrictions are necessary for managing strong artificial intelligence for the same reasons these restrictions apply to humans: to further the same compelling state interests.

One of the clearest compelling interests that the Court has recognized is the government’s ability to preserve itself and maintain its capacity to govern and function effectively. In Gitlow v. New York (1925), upholding a New York statute that punished speech advocating for the violent overthrow of the government, the majority wrote,

"The safeguarding and fructification of free and constitutional institutions is the very basis and mainstay upon which the freedom of the press rests, and that freedom, therefore, does not and cannot be held to include the right virtually to destroy such institutions [28]."

The majority in Gitlow explicitly recognized that the government has a compelling interest in its own preservation, and therefore permits speech advocating for its destruction to be curtailed. This restriction, however, is incredibly limited in favor of free speech. In Watts v. United States (1969), the Court reversed the conviction of a man who, at a political rally, stated that if he were drafted into the army, “the first man I want to get in my sights is L.B.J.” [29]. The Watts majority found that “political hyperbole,” including hyperbolic threats against the president’s life, is protected by the First Amendment.

A major factor in the Watts analysis was that the defendant’s words did not constitute what is known as a “true threat” [30]. While the true threat doctrine was established in Watts, it was further refined in Virginia v. Black (2003), whose majority stated that true threats are “statements where the speaker means to communicate a serious expression of an intent to commit an act of unlawful violence to a particular individual or group of individuals” [31]. True threat doctrine thus relies heavily on determining speaker intent. Because determining intent depends on a diverse set of factors, such as an individual’s mental state, prior actions, and the context of the situation at hand, it would be complicated, if not impossible, to determine intent when the doctrine is applied to the speech of artificial intelligence. Contemporary legal scholars have raised this point, claiming that mapping existing free speech doctrine onto artificial intelligence raises problems because “legal intentionality may be harder to assign to computer speech” [32] and that conferring these same protections could insulate artificial speakers from liability in circumstances where human speech could reasonably be curtailed [33]. This problem of assigning liability poses a significant challenge to the government in pursuing the legitimate state interest of preserving itself and maintaining its capacity to govern and function effectively, because the usual avenue of recourse, proving intent under the “true threat” doctrine, is potentially impossible when dealing with artificial speech.

C. Government Recourse

While citizens rightly have the ability to criticize the government, a world in which potentially infinite artificial entities could flood online speech channels, sowing distrust, lies, and propaganda with limited government recourse, is incredibly problematic. This problem has already proven significant with the advent of anonymous online “bots,” which pose a similar issue. It is therefore important that the government have some sort of recourse to address the potentially more significant problem of bots with both the ability to speak on the internet and cognitive capacity comparable to, or even greater than, our own. A potential solution is the enactment of disclosure requirements and the lowering of the standard the government must meet to restrict strong AI speech. This would allow the government to mitigate the potential harm artificial speakers may pose while simultaneously permitting the emerging technology to be used in the public square. A solution of this sort has legal precedent at both the state and federal level, as outlined below.

The State of California has responded to the problem of online chatbots by enacting bot disclosure requirements. The bill, which became effective in July 2019 and, as it stands, remains good law, requires internet chatbots to disclose themselves as bots when advocating for the purchase or sale of goods or services or when attempting to influence a voter in an election [34]. Mandatory disclosures similar to the California law have been enacted by Congress and permitted by the Court for foreign agents and lobbyists as well. The Foreign Agents Registration Act of 1938 (FARA) requires “agents of foreign principals who are engaged in political activities or other activities specified under the statute to make periodic public disclosure of their relationship with the foreign principal” [35]. In addition, it enables the government to label foreign films as “political propaganda” and add disclaimers during the dissemination of these films if they are intended to influence the foreign policy of the United States. While the labeling of domestic films as propaganda would be blatantly unconstitutional under the First Amendment, the Supreme Court in Meese v. Keene (1987) upheld the labeling of certain foreign films as political propaganda, as the majority found that using this term “does nothing to place regulated expressive materials beyond the pale of legitimate discourse” [36] and that this disclosure requirement is actually supported by the First Amendment because it provides viewers with additional context.

Finally, while the Court ruled favorably toward corporations in Citizens United regarding political speech, commercial speech is still widely seen as a category that receives less protection than other forms of expression in certain circumstances. The Court has drawn a distinction between commercial speech, which concerns or proposes commercial transactions, and other varieties of speech, such as political advocacy. In Central Hudson Gas & Electric Corp. v. Public Service Commission, the majority ruled that in the realm of commercial transactions, the Constitution “accords a lesser protection to commercial speech than to other constitutionally guaranteed expression” [37]. It then laid out a four-part test to determine whether a statute restricting commercial speech is constitutional. The most relevant parts of this test are part two, whether the government interest is substantial, and part four, whether the regulation is more extensive than necessary to further that interest [38]. This is a significant departure from the more rigorous strict scrutiny standard, which requires the government interest to be “compelling” and the statute to be “narrowly tailored” to achieve said interest [39, 40].

Approaching AI speech in a similar manner, one could lower the bar the government must clear to justify the constitutionality of a statute that pursues a government interest. In addition, enacting disclosure requirements would give the government sufficient recourse for addressing the harms of artificial speakers in the absence of the ability to prove intent. Both measures, as shown above, have significant precedent in American law and are made possible by the Supreme Court’s determination that some arenas of speech receive fewer First Amendment protections and are therefore able to be regulated. For these reasons, disclosure requirements and a reduced scrutiny standard are both useful in mitigating the potential harms caused by AI speakers and are backed by a significant amount of legal precedent.

Conclusion

Artificial intelligence is advancing rapidly. As of May 2023, it has dominated the national conversation, and tech companies across the world have raced to integrate artificial intelligence language modeling software like ChatGPT into their products. Chatbots are quickly becoming strikingly human-like, and it is evident that we are nearing a world in which far more capable artificial speakers will be a reality.

The law is notoriously slow to catch up to the latest technological developments, and artificial intelligence is an area that poses significant challenges to the legal system, most notably regarding the First Amendment. To what extent speech generated by artificial intelligence is protected by the First Amendment is a question that will undoubtedly have to be answered in the near future. An overview of free speech jurisprudence shows that while there is precedent for extending First Amendment rights beyond individual humans to corporations, the legal rationale behind these decisions still fundamentally rested on the fact that an individual’s constitutional rights were being exerted. In addition, significant problems would arise in trying to map First Amendment rights directly onto AI speech, such as the difficulty of determining speaker intent. However, there is ample case law that can be used to justify lowering the protections for artificial speech and permitting disclosure requirements for artificial intelligence, which would allow the government to mitigate potential harms. At the moment, how AI will fit into the legal system is not exactly clear, but what is clear is that problems will arise, and we must be prepared to solve them. This piece serves as a brief survey of what those problems might be and how they might be addressed.

References

[1] ChatGPT | Definition & Facts | Britannica. (n.d.). https://www.britannica.com/technology/ChatGPT

[2] “Artificial Intelligence (AI).” Legal Information Institute. https://www.law.cornell.edu/wex/artificial_intelligence_(ai).

[3] James B. Garvey, Let's Get Real: Weak Artificial Intelligence Has Free Speech Rights, 91 FORDHAM L. REV. 953 (2022), pg. 955, https://ir.lawnet.fordham.edu/flr/vol91/iss3/5

[4] Ibid.

[5] Miller v. California, 413 U.S. 15 (1973)

[6] New York Times Co. v. Sullivan, 376 U.S. 254 (1964)

[7] Brandenburg v. Ohio, 395 U.S. 444 (1969)

[8] Chaplinsky v. New Hampshire, 315 U.S. 568 (1942)

[9] Virginia v. Black, 538 U.S. 343 (2003)

[10] Citizens United v. FEC, 558 U.S. 310 (2010)

[11] Snyder v. Phelps, 562 U.S. 443 (2011)

[12] Ibid.

[13] Citizens United v. FEC, 558 U.S. 310 (2010)

[14] Citizens United v. Federal Election Commission, (D.C. 2008). https://www.govinfo.gov/app/details/USCOURTS-dcd-1_07-cv-02240.

[15] Austin v. Michigan Chamber of Commerce, 494 U.S. 652 (1990)

[16] First National Bank of Boston v. Bellotti, 435 U.S. 765 (1978)

[17] Ibid.

[18] Citizens United v. FEC, 558 U.S. 310 (2010)

[19] Citizens United v. FEC, 558 U.S. 310 (2010)

[20] Citizens United v. FEC, 558 U.S. 310 (2010)

[21] Ibid.

[22] Tao Huang, Freedom of Speech as a Right to Know, 89 U. Cin. L. Rev. 106 (2020) https://scholarship.law.uc.edu/uclr/vol89/iss1/4

[23] Ibid.

[24] Ibid.

[25] Ibid.

[26] Steiner, Ronald. “Compelling State Interest.” https://www.mtsu.edu/first-amendment/article/31/compelling-state-interest

[27] Ibid.

[28] Gitlow v. New York, 268 U.S. 652 (1925)

[29] Watts v. United States, 394 U.S. 705 (1969)

[30] Kevin Francis O’Neill (Updated June 2017 by David L. Hudson Jr.). “True Threats.” https://www.mtsu.edu/first-amendment/article/1025/true-threats

[31] Virginia v. Black, 538 U.S. 343 (2003)

[32] Toni M. Massaro and Helen Norton, Siri-ously? Free Speech Rights and Artificial Intelligence, 110 Nw. U. L. Rev. 1169 (2016), pg 1190, https://scholarlycommons.law.northwestern.edu/nulr/vol110/iss5/6

[33] Toni M. Massaro and Helen Norton, Siri-ously? Free Speech Rights and Artificial Intelligence, 110 Nw. U. L. Rev. 1169 (2016), pg 1190, https://scholarlycommons.law.northwestern.edu/nulr/vol110/iss5/6

[34] Cal. Bus. & Prof. Code § 17941

[35] The Foreign Agents Registration Act (FARA) (22 U.S.C. § 611 et seq.)

[36] Meese v. Keene, 481 U.S. 465 (1987)

[37] Central Hudson Gas & Electric Corp. v. Public Service Commission, 447 U.S. 557 (1980)

[38] Ibid.

[39] Steiner, Ronald. “Compelling State Interest.” https://www.mtsu.edu/first-amendment/article/31/compelling-state-interest

[40] Strickland, Ruth Ann. “Narrowly Tailored Laws.” https://www.mtsu.edu/first-amendment/article/1001/narrowly-tailored-laws

Donald Roveto

Donald Roveto is part of the Georgetown University Class of 2025 and studies government.
