Changes and Challenges in Free Speech: The Impact of the Internet
Michael

In the 21st century, freedom of expression in democratic societies is making great strides toward “equal speech for all.” This is encouraging, as free speech is essential to a functioning democracy: it protects the right to express ideas and facilitates healthy public debate. However, the emergence of the Internet as a central venue for expression has created new challenges. Although society has tried to keep pace with the rapidly changing information age and has amended its criteria for dealing with harmful speech, it remains unprepared for this digital revolution. The balance between free speech and hate speech on the Internet still leaves much room for improvement.

Free speech, a right affirmed by almost every democratic society, has made gradual but significant progress through people’s increasing willingness to express their opinions. Although the integration of this concept into modern legal systems can be traced back to the Bill of Rights 1689 of the Parliament of England and the First Amendment to the United States Constitution in 1791, several social advancements have since emphasized and facilitated public expression (Russomanno 215). Most notably, the introduction of compulsory education in several jurisdictions, including Massachusetts (1852) and England (1870), driven by the Industrial Revolution’s growing demand for scientific and technological talent during the 18th and 19th centuries, greatly improved people’s literacy (Cowen and Kazamias 503).

During the past century, people’s willingness to express ideas expanded rapidly, especially after the Second World War, when many countries worldwide began or completed democratization and those that had already done so enfranchised most of their nationals, increasing people’s political rights (Russomanno 219). During this transformation, laws governing speech continued to evolve to reflect the gradually increasing needs of society, notably including the enactment of the Canadian Human Rights Act (1977) and the U.K.’s Public Order Act (1986), both of which address hate speech (Vilar-Lluch 4). However, with the introduction of the Internet, a catalyst for the amplification and diversification of public expression, freedom of expression advanced at a pace unprecedented in contemporary history. The characteristics of cyberspace intensified expression by revolutionizing communication. The speed and volume of information transmitted online allow an unprecedented number of people to comment on the same issue simultaneously and instantly, with a lighter burden of responsibility because they enjoy anonymity. The barrier to getting one’s voice heard is evidently lower than with traditional approaches, boosting and diversifying expression. At the same time, this extension of people’s ability to express themselves noticeably reduces the quality of speech in the absence of a conventional gatekeeper (Svantesson 44–60).

Although this expansion of expression is pleasing to democratic societies, since the Internet has accelerated the growth of public expression in a self-reinforcing cycle, it is also concerning: the Internet has brought about harmful ideas and expressions to a degree that society has never faced before. More and more people put on masks to express hatred against others online, believing they will face no consequences. Rather than being the symbol of progress in freedom of expression it appears to be, this level of diversity of expression can, on the contrary, limit freedom of expression.

Generally speaking, the targets of hate speech are minorities in society. According to the theory of the “tyranny of the majority,” the majority’s suppression of the minority destructively erodes the discursive power the minority enjoys, even if the minority can oppose the majority’s ideas in the short term. The majority will ultimately intimidate dissent from the minority, even in modern democracies, causing freedom of speech to vanish (Nyirkos 10). As a result, maintaining a balance between free speech and hate speech is essential to a well-functioning democracy, especially in a challenging era when communication technology evolves rapidly.

Societies have established procedures to identify and prohibit hate speech, including online speech. As John Stuart Mill proposed in his work On Liberty, traditional viewpoints on free speech often rest on a “harm principle,” which judges whether an action will harm others; if so, the law should interfere. This influential theory has been applied to areas of legislation beyond free speech and explains, for example, why the law should not set limits on thought. Challenges to this principle exist, including the theory of “listener’s autonomy,” which argues for respecting listeners’ right to decide for themselves whether the speech they hear benefits them, and thus against setting limits on hate speech (Howard 97). Nevertheless, it is generally recognized that speech can harm others by inciting hatred and violence toward specific groups among its listeners, which fits within the harm principle (Wellington 1106). The principle’s effective generalization across situations has historically provided strong grounds for legislatures and online platforms to prevent hate speech. However, lawsuits cannot be brought without limit under this principle, as doing so itself harms freedom of expression. In New York Times Co. v. Sullivan (1964), the court held that public figures must demonstrate “actual malice” for their defamation claims to be accepted, drawing a clear distinction between protecting vulnerable groups and censorship, two utterly different goals for limiting speech (Cohen 215).

Governments have tried to build prohibition frameworks on these sophisticated discussions of identifying hate speech. Although the definition of hate speech varies across jurisdictions (Howard 94), it is commonly defined as expression that targets a group or an individual on the basis of membership in a distinct, usually minority, group, such as an ethnicity, religion, or sexual orientation (Simpson 701). On this definition, laws prohibiting hate speech have been well established in Canada, the U.K., Australia, and many European nations. In a new legislative initiative, the Canadian Government recently introduced a controversial bill, C-63, proposing the Online Harms Act, tailored explicitly to Internet activities. The United States, which has jurisdiction over many well-known social media platforms, has not established a similar framework because constitutional challenges to such restrictions have obstructed any substantive legislation; these challenges center on whether restrictions grounded in the Fourteenth Amendment can overcome the First Amendment. That said, Internet users worldwide who resent hate speech have, through local legislation, influenced those platforms’ policies, making some of those hate speech laws de facto applicable to the social media platforms (Ammori 2262).

Procedures designed to deal with speech made through conventional communication methods are often outdated and not as effective as they should be at balancing the online environment. Notwithstanding the lack of legislative activism on this matter in some jurisdictions, even in regions that deem hate speech unlawful, the Internet, as a brand-new technology, has posed challenges to the procedures for both identifying and prohibiting it. These outdated laws, the result of slow responses to technological advancement, highlight the shortage of legislation tailored specifically to the Internet, a place different from the physical world. As Reidenberg states, “In the United States, courts have had great trouble figuring out how to apply traditional jurisdiction principles to Internet activities” (1954), which is also the norm worldwide. As a result, much of the responsibility for Internet governance is left to service providers. Companies are required to set platform policies in accordance with government legislation and social norms, so the lawyers those companies employ, rather than the legislature or the courts that the people mandate, have become the de facto interpreters of the boundaries of free speech (Ammori 2261). What makes it worse is that some of these companies are privately held, so their criteria swing and change with the will of their owners, who sometimes become protectors of hate speech. For example, the change in Twitter’s policy following Elon Musk’s acquisition caused a surge in hate speech, including “a nearly 500% increase in [the] use of the N-word” in the 12 hours after Musk acquired Twitter (Ray and Anyanwu, par. 2).

Even when companies have effective policies, identifying hate speech remains unreliable: destructive remarks that clearly fall under the harm principle sometimes cannot be filtered. The issue emerges when platforms struggle to monitor an enormous number of posts with limited personnel; they rely on A.I. for surveillance, whose effectiveness depends heavily on the algorithm. For example, Facebook, a provider that filters about 80% of hate speech on its platform using A.I., could not detect hate speech made in Assam because it did not have the algorithm, creating a loophole (Perrigo, par. 2). The ambiguous nature of remarks made in cyberspace, often short phrases rather than longer arguments, also hampers monitoring, causing certain messages to be wrongfully interpreted and deleted by machine auditors and raising concerns about the loss of free speech (par. 27). In addition, under the current approach of requiring platforms to self-regulate, publishers of harmful remarks, even when successfully detected, are not punished the way they would be for doing the same thing offline; their remarks are simply deleted, deepening the public’s disregard for the boundaries of speech.

To deal with these dilemmas, the United States favours a traditional approach, summarized as “Speech v. Speech”: exposing the problem, promoting open debate, and letting the public decide whether an opinion is reasonable and acceptable (Cohen-Almagor 434). This approach should not be encouraged, considering that persuasion from the rational side is likely to fall on deaf ears; it does not stop speechmakers from harming others (435). Instead, governments should legislate to specify their expectations for platforms’ service policies so that those policies are consistent and transparent enough for the public to follow. Governments should also enforce legal penalties on speechmakers whose behaviour warrants conviction, while raising the threshold for a speechmaker to be convicted (Gelber and McNamara 636). Freedom of expression can be promoted by tolerating most speech while protecting vulnerable groups through law, and the public can be educated not to engage in hate speech through the process of legislation, law enforcement, verdicts, and publicity (656). After all, “preventing [hateful posts] from being posted in the first place would be much, much more effective” (Perrigo, par. 28).

The fascinating thing about the Internet’s emergence in human life is that it connects the world closely through wires, making the Earth a village. This characteristic of the Internet has also brought societal challenges, including difficulty in prohibiting hate speech. International law already provides an abundant basis for worldwide collaboration against hate speech. Under the Convention on the Elimination of Racial Discrimination (CERD) and the International Covenant on Civil and Political Rights (ICCPR), countries that have signed and ratified both treaties are obligated to prohibit hate speech, and most have already done so. These treaties can therefore serve as a legal basis for preventing such speech from being broadcast on the Internet (Viljoen 3). Although they are not specially tailored to such an international platform, their internationally accepted nature provides a deliberate baseline standard for platform or government regulation worldwide to reference. However, since treaties such as CERD and the ICCPR have no forceful binding jurisdiction over their participating states (Viljoen 9), countries maintain their own distinct standards of hate speech regulation, prompting debates and lawsuits over the jurisdiction of regional law on the worldwide Internet. This significant difference in standards ultimately gives multinational corporations a flexible position: they can choose the court whose law favours them to have jurisdiction over them, so that speech regulation proceeds under the minimum standard, usually that of the United States.

The case LICRA v. Yahoo! (2000) exemplifies the attempt to shield online activities from sovereign law. Yahoo wanted its transmission of Nazi images, an activity illegal in France, to continue unimpeded. It therefore claimed that because it operated its service from the U.S. and its activities were protected by the U.S. Constitution, it did not need to comply with French law. When the French court ruled against Yahoo, the company sued in California and achieved its goal of avoiding French law enforcement. Although the U.S. Court of Appeals overturned the California court, proclaiming that France was entitled to hold Yahoo accountable, the case demonstrates the difficulties this globally accessible network brings to court judgment (Reidenberg 1952). It is worth noting that in this case the French court lacked sufficient authority on its own; its ruling over an American company needed to be confirmed by a U.S. court.

France is not the only country to encounter this dilemma. In the Internet world, the Earth is becoming more like a village than a collection of political entities, creating gray areas between those entities. Different regions worldwide still function with their own institutions, including courts up to supreme courts, which now operate like local courts: laws worldwide, including the American First Amendment, have become merely local ordinances, while a global ordinance remains absent (Ammori 2263, 2278). Even before the digital era, the fact that CERD and the ICCPR were not binding on all their participating states brought tragic consequences to society. Rwanda, a party to both treaties, stopped submitting state reports to the monitoring bodies before its 1994 genocide, and the treaty bodies failed to respond to this irregularity with the special measures specified in the treaties, even with clear signs that hate speech was a significant concern within Rwanda. Hate propaganda through the media played an important role in the genocide, in which between 500,000 and 800,000 people were killed (Viljoen 1–2). The global nature of the Internet thus poses legal and enforcement challenges to combating hate speech, challenges that must be met by promoting international cooperation, a concept that lacked implementation before the millennium but has become increasingly important and could sort out the currently backward conception of Internet sovereignty.

In conclusion, although society continues to adapt in response to social changes and advancements, as it has whenever public expression has amplified historically, the Internet’s emergence as the center of expression outpaces the reform of regulatory frameworks. This technological advancement of recent decades has proven revolutionary compared with other social advancements that influenced public expression, such as compulsory education, exposing the considerable obsolescence of current approaches to regulating hate speech. As hate speech in cyberspace keeps threatening the general public, especially members of vulnerable groups, who may experience a loss of speech rights or even violence, measures including legislation specially targeting online speech, improved government regulation of platform policies and speechmakers, and enhanced international cooperation within existing international law can improve the situation. Regulating hate speech does not aim to reduce freedom of expression but, on the contrary, improves the rights and freedom of every member of society. As Thomas Jefferson famously said, “Rightful liberty is unobstructed action according to our will within limits drawn around us by the equal rights of others”; freedom is not absolute.

Works Cited

Ammori, Marvin. “The ‘New’ ‘New York Times’: Free Speech Lawyering in the Age of Google and Twitter.” Harvard Law Review, vol. 127, no. 8, 2014, pp. 2259–95. JSTOR, http://www.jstor.org/stable/23742037.

Cohen, Joshua. “Freedom of Expression.” Philosophy & Public Affairs, vol. 22, no. 3, 1993, pp. 207–63. JSTOR, http://www.jstor.org/stable/2265305.

Cohen-Almagor, Raphael. “Countering Hate on the Internet.” Jahrbuch Für Recht Und Ethik / Annual Review of Law and Ethics, vol. 22, 2014, pp. 431–43. JSTOR, http://www.jstor.org/stable/43593801.

Cowen, Robert, and Andreas M. Kazamias. International Handbook of Comparative Education. Springer Science and Business Media, 2009.

Gelber, Katharine, and Luke McNamara. “The Effects of Civil Hate Speech Laws: Lessons from Australia.” Law & Society Review, vol. 49, no. 3, 2015, pp. 631–64. JSTOR, http://www.jstor.org/stable/43670529.

Howard, Jeffrey W. “Free Speech and Hate Speech.” Annual Review of Political Science, vol. 22, no. 1, May 2019, pp. 93–109, doi:10.1146/annurev-polisci-051517-012343.

Nyirkos, Tamás. The Tyranny of the Majority: History, Concepts, and Challenges. New York, Routledge, 2018, doi:10.4324/9781351211420.

Perrigo, Billy. “Facebook Says It’s Removing More Hate Speech Than Ever Before. But There’s a Catch.” TIME, 27 Nov. 2019, time.com/5739688/facebook-hate-speech-languages.

Ray, Rashawn, and Joy Anyanwu. “Why Is Elon Musk’s Twitter Takeover Increasing Hate Speech?” Brookings Institution, 23 Nov. 2022, https://www.brookings.edu/articles/why-is-elon-musks-twitter-takeover-increasing-hate-speech/.

Reidenberg, Joel R. “Technology and Internet Jurisdiction.” University of Pennsylvania Law Review, vol. 153, no. 6, 2005, pp. 1951–74. JSTOR, doi:10.2307/4150653.

Russomanno, Joseph. “Cause and Effect: The Free Speech Transformation as Scientific Revolution.” Communication Law and Policy, vol. 20, no. 3, July 2015, pp. 213–59, doi:10.1080/10811680.2015.1051916.

Simpson, Robert Mark. “Dignity, Harm, and Hate Speech.” Law and Philosophy, vol. 32, no. 6, 2013, pp. 701–28. JSTOR, doi:10.1007/s10982-012-9164-z.

Svantesson, Dan Jerker B. “The Characteristics Making Internet Communication Challenge Traditional Models of Regulation – What Every International Jurist Should Know About the Internet.” International Journal of Law and Information Technology, vol. 13, no. 1, Jan. 2005, pp. 39–69, doi:10.1093/ijlit/eai002.

Vilar-Lluch, Sara. “Understanding and Appraising ‘Hate Speech.’” Journal of Language Aggression and Conflict, vol. 11, no. 2, May 2023, pp. 279–306, doi:10.1075/jlac.00082.vil.

Viljoen, Frans. “Hate Speech in Rwanda as a Test Case for International Human Rights Law.” The Comparative and International Law Journal of Southern Africa, vol. 38, no. 1, 2005, pp. 1–14. JSTOR, http://www.jstor.org/stable/23252193.

Wellington, Harry H. “On Freedom of Expression.” The Yale Law Journal, vol. 88, no. 6, 1979, pp. 1105–42. JSTOR, doi:10.2307/795625.