“Extensive information only serves to further confuse the mind, favoring the insignificant at the expense of what is selective and effective.”
—Ortega y Gasset
Introduction
Information has become a luxury. The noise and misinformation to which we are exposed have reached an unsustainable level, fueled by the combination of several factors: internet access, the democratization of content creation, new habits emerging from social networks, the format in which we consume information (mobile devices), and the concentration of power in tech companies. The advent of generative artificial intelligence is merely the latest push in an already unstoppable trend.
As Ortega noted in the 1920s, an excess of information is not positive. We would add that if the information is erroneous or even deliberately distorted, we as individuals cannot make rational decisions or engage in debate. Without these abilities, we lose autonomy and, as a result, freedom, just as Kant would argue (Pérez-Verdugo & Barandiaran, 2023).
Many argue that this situation can only be addressed by encouraging critical thinking. In our opinion, this idea is questionable. First, because as individuals we are less capable than we believe. Numerous studies show our inability to distinguish real headlines from fake ones (Sanders, 2023). Let us also consider the numerous cognitive biases to which we are exposed. Take the two modes of thinking described by Kahneman (2012): do we think fast (“System 1”) or slowly and reflectively (“System 2”)? Do we really believe that in today’s reality we use “System 2”? Do we have the capacity to concentrate and reason? Do we have the time for it? Are digital environments designed for this? In our opinion, quite the opposite. The design of digital platforms—built to capture and retain users’ attention—together with information overload, fosters quick decision-making. Such decisions are more susceptible to cognitive biases and promote the creation of echo chambers, discouraging, among other things, the verification of information. Given the social dynamics of echo chambers, once inside one, it is very difficult to leave (Nguyen, 2020). Second, while we agree that education is the main means of combating the negative effects of the context described, its benefits will only be seen in the long term, given the difficulty of defining concepts like critical thinking and introducing them into educational systems. Although there is no agreed-upon age at which to introduce critical thinking, there is evidence suggesting that adolescence is the ideal time (Jensen & Nutt, 2015)—a demographic particularly vulnerable to the negative effects of digital platforms.
It seems clear that, in one respect, we live in a more desirable world: we have numerous alternatives for getting information, in contrast to what previous generations in many European countries experienced—and what many societies around the world still experience today—where pluralism does not exist and everything is filtered. However, as we will argue, the new “information highways,” the major digital platforms—so relevant today—cannot hide behind neutrality to avoid taking even minimal action.
We will now explore how the detachment from democracy and the “deification” of technology converge. This situation seems to have led us to be less demanding when it comes to our rights in the digital world, as well as regarding the obligations of the private spaces in which we act. We will analyze how the way we consume information has changed and the role misinformation plays in this new context, how it is being addressed in the West, and the importance of public-private collaboration.
The Rise of Technocracy and Conformism
Imagine a pendulum. We can find the idolatry of technology on one side and the idolatry of democracy on the other. Which side do you think we currently fall on?
The democratic experience is under scrutiny in the West. Are there reasons to believe we are at a moment when democracy is perceived particularly negatively? According to the data, it appears so. If we look at the conclusions of recent studies—from private organizations, public institutions, and educational entities—the answer seems clear. In countries such as Spain and the United States, the majority of respondents say they are not satisfied with how democracy works (Pew Research Center, 2022). This is concerning, especially in a year like 2024, when we are writing these lines and elections are taking place worldwide (Ewe, 2023). At the time of drafting this reflection, Joe Biden has dropped out of the race for reelection as President of the United States, an event surrounded by controversy and with a backstory that in some way explains the perception of democracy and the media in that country—something we could certainly extrapolate to the whole of the West (Williams, 2024). The same media outlets that claimed the accusations about Biden’s health were false later demanded he be replaced in the U.S. presidential race after the debate.
Meanwhile, technologists are currently idolized. This seems logical in a context like the one just described, especially if we consider that tech companies and the owners of platforms or social networks now provide us with a form of escape. Elon Musk, owner of X (among other companies), has millions of followers, as evidenced by how users interact with the content he posts on that platform. The same could be said of Sam Altman and the team leading OpenAI, creators of the ChatGPT chatbot. Their launches and public appearances are followed worldwide, just as those of other tech-industry figures, such as Steve Jobs at Apple, once were. This “deification” of technology coincides in time with the aforementioned detachment from politics and the public sphere, as well as, from our point of view, a retreat of citizens into themselves.
Paradoxically, the very technology that has so greatly fostered social life is now distancing us from it. In this sense, we contend that more frequent and larger-scale interactions—precisely what social networks provide—do not equate to greater connection, but rather to greater distancing. Modern man, via his smartphone, lives virtually connected, has instant access to information and goods, and is self-sufficient. In some ways, one could argue that technology is “decivilizing” us if, as Rousseau believed, living in society distances us from humanity’s constitutive or natural trait—selfishness—and allows us to develop morally, understanding that the well-being of civilized man is linked to the well-being of the community (Rosales, 1998). In contrast to this pursuit of a just society—understood, as Sandel (2011) would argue, as one that aims to cultivate solidarity and mutual responsibility—there are movements that claim this is an impossible task as long as it remains embedded in the State as we know it today. Here we return to the crisis of democracy and how citizens perceive it. Indeed, the case of Bitcoin is highly illustrative. Beyond its technological or even speculative component, the idea has arisen around it that it is both possible and even advisable to become independent of the State. Álvaro de María, author of the essay “La filosofía de Bitcoin” (“The Philosophy of Bitcoin”), puts it this way: “(Bitcoin) also changes the incentives of continuing to belong to the State, as it allows you to secede, to leave the system economically and analyze the opportunity cost of staying in it” (Cid, 2022). While it is clear that the State is not synonymous with justice—and that this form of governance has its negative aspects—we also cannot claim that the State automatically equates to injustice. From our point of view, these movements, which we might classify as libertarian, neglect a large segment of society, which is why we do not share their perspective.
We believe that this scenario has led us to a degree of conformity as citizens. Even though we have witnessed cases of abuse such as Cambridge Analytica (Cadwalladr & Graham-Harrison, 2018), suspect that the concentration of power benefits neither users nor competition (Federal Trade Commission of the United States, 2021), and recognize that the digital services we use lack transparency (who decides what content to show us, and on what basis?), we accept this. It seems like a decision based on the “lesser evil,” driven by the fear that government intervention might turn the cure into something worse than the disease.
The Origin of Bad Practices on Digital Platforms
As we will see below, the aforementioned bad practices are not a direct consequence of the internet itself but rather the outcome of its evolution. This evolution has resulted from new social habits that have emerged around how we use it, as well as from the incentives generated by the concentration of the market for digital services and infrastructure providers.
These days, we tend to consume encapsulated content, partly because of the hardware through which we consume it—namely, mobile devices. Anticipating by many years what would ultimately occur, Jeff Bezos stated in his 2007 letter to shareholders that we have become scanners of information, abandoning deep and calm reading. Meanwhile, social networks provide a user experience built around producing and consuming short messages, encouraging volume, prolonged platform usage, and daily activity. One of the latest “feed” innovations involves showing content from people you do not follow but whom the platform believes might interest you (see, for example, the “For you” timeline on X), reinforcing the already-mentioned echo chambers. All of this discourages reflection, which by nature requires time, as we have already discussed. The environment does not encourage grounding our conclusions in relevant arguments. In this regard, as Orwell wrote, “the very concept of objective truth is fading out of the world. Lies will pass into history.” Faced with a lack of alternatives, and given that the platforms’ main business model is based on advertising, they have no incentive to stop prioritizing content that generates high engagement—which often coincides with content that is more sensationalist and polarizing. Of course, the decisions made by large platforms are not illegal, but their morality is certainly questionable.
For those who might accuse us of paternalism, we acknowledge the role of citizens in all this. Users often reinforce their own ideas by following only like-minded individuals. This naturally extends to any realm, not just social networks. Therefore, we believe that too much intervention—whether by the platforms themselves or by regulators—would not only be futile but might also be counterproductive, pushing users toward other platforms. While this is not necessarily negative, it could lead to greater social polarization if these alternatives attract users through a particular ideology, as may be the case with Truth Social, the social network launched by former U.S. President Donald Trump after his controversial expulsion from Twitter and Facebook in January 2021 (Conger, Isaac & Frenkel, 2021).
Misinformation: A Social Problem
In addition to the new habits engendered by mobile devices and digital platforms—which, as we have argued, discourage reflection and debate—we face what is, in our view, the major problem of the 21st century: misinformation. In the context described, misinformation has a runway for acceleration unlike anything seen before in history, with corresponding impacts on democracy and equality.
Clearly, if social networks, search engines, or chatbots have come to play a central role in society, the misinformation that arises within them must be addressed. The ability to convey any message, instantly, to millions of people anywhere in the world was not possible until a few years ago. Thus, although misinformation is not a new problem, the current context has facilitated its industrialization. As mentioned at the outset, generative AI is yet another leap forward. OpenAI recently published a report stating that its services had been used with the aim of manipulating public opinion and influencing election outcomes (OpenAI, 2024). The report describes several orchestrated operations originating in countries such as Russia, Israel, Iran, and China, which used the well-known chatbot to produce content and distribute it on social networks and websites. Automating the creation or distribution of content is nothing new; nor is it inherently bad. The problem, as we have stated, lies in the intentions behind these operations. Manipulative behavior is defined in the European Union’s Code of Practice on Disinformation, which we will revisit shortly; the techniques used to spread misinformation include, for example, fake accounts, amplification through bots, impersonation, and malicious deepfakes. Such practices are prohibited under the use policies of the major digital platforms.
Given its importance in this discussion, it is worth delving into what we mean by misinformation. In reality, there is no universally agreed-upon definition, but we will rely on the UN’s statement that “while misinformation refers to the accidental spread of inaccurate information, disinformation is not only inaccurate but is deliberately designed to deceive and is disseminated in order to cause serious harm.” Since the practices we discuss span both phenomena, we use “misinformation” throughout in this broad sense.
As we can see, misinformation does not depend on the channel used. Traditional media, despite journalistic codes of ethics, can also be utilized for this purpose, as we noted when discussing Biden’s health (see above). It is thus worth pointing out that misinformation is not unique to the internet.
Though the example is anecdotal, recall the impact of Orson Welles’s radio broadcast in October 1938 and the headline in The New York Times the following day:
“Radio Listeners in Panic, Taking War Drama as Fact; Many Flee Homes to Escape ‘Gas Raid From Mars’—Phone Calls Swamp Police at Broadcast of Welles Fantasy
RADIO WAR DRAMA CREATES A PANIC
Geologists at Princeton Hunt ‘Meteor’ in Vain
They’re Bombing New Jersey! …”
(The New York Times, 1938).
Welles’s own remarks:
“Radio is new, and we are learning about the effect it has on people. We learned a terrible lesson.”
What Is Being Done About It?
Now that the problem has been identified and its significance established, how is it being addressed worldwide? As is often the case with technology, given its cross-cutting nature, we are dealing with a global problem tackled in fragmented ways. Each world region adopts its own stance, making unified proposals very difficult. However, in the face of complacent attitudes on this issue, let us remember that global agreements have been reached in areas as complex as security and trade.
Focusing on the West, Europe and the United States are responding in ways that could be expected and are, in some sense, inevitable. Indeed, given their different legal systems—continental or “civil law” in Europe, “common law” in the U.S.—the former is more inclined to regulate, while the latter tends to place the interpretation and development of the law in the hands of the courts. As one can already infer, various regulations have an indirect impact on misinformation, such as those related to data protection. Digital platforms are subject to these and other rules as market operators—for example, those involving illegal content such as copyright infringement or child sexual abuse material. The problem arises when content can be deemed harmful but is not illegal.
Accordingly, in Europe there is the Digital Services Act, which requires large platforms—literally, “Very large online platforms and search engines”—to “carry out an annual risk assessment and adopt corresponding risk mitigation measures arising from the design and use of their services.” Among the so-called systemic risks to be mitigated are misinformation and the manipulation of electoral processes. The measures to reduce the risks associated with misinformation include content moderation—an issue that has been jointly developed by the European Union, the platforms themselves, and other private actors in the 2022 Code of Practice on Disinformation. Naturally, all measures must weigh the impact on other rights, primarily freedom of expression. Interestingly, in Spain, we can find this fundamental right alongside the right to receive truthful information. Indeed, in Article 20.1(a) of the Spanish Constitution, the right “to freely express and disseminate thoughts, ideas and opinions through words, writing or any other means of reproduction” is recognized and protected; subparagraph (d) further recognizes and protects the right “to freely communicate or receive truthful information by any medium of dissemination.”
This leads us back to the Anglo-Saxon perspective, and especially that of the United States. As we have said, in the U.S. the law is typically developed by the courts; however, and contrary to what we might assume from Europe, individual states regulate as well, and the issue at hand is equally contentious there. Currently, there are several ongoing lawsuits between the association representing digital platforms (NetChoice) and states such as Florida and Texas, which passed laws limiting these platforms’ ability to moderate content (American Civil Liberties Union, 2024). While the platforms argue that under these laws online freedom of expression would be impaired, these states consider that Facebook or YouTube arbitrarily censor content. Much of the debate revolves around a specific law: Section 230 of the Communications Decency Act, enacted in 1996. The Act originally sought to protect children from harmful content, allowing internet service providers to remove it without legal consequences. Given the technological evolution since then and the courts’ successive readings of the law, the prevailing interpretation is that Section 230 offers platforms immunity from civil liability for third-party content, as well as flexibility to remove content under certain circumstances. As we can see, there is also conflict in the U.S. over these issues. While Republicans seek to limit platforms’ power to expel users—as exemplified by the Florida and Texas laws passed in response to Donald Trump’s expulsion—Democrats push for greater scrutiny of certain types of content they consider harmful. The U.S. Department of Justice itself has released a proposal to revise Section 230, arguing that it should be updated to better balance the protection of free speech with platform accountability (U.S. Department of Justice, 2020).
Conclusion
Misinformation and content moderation have become the main battleground between technophiles and technophobes. However, this debate is more about the type of society we want than about technology itself. The controversy has reached a point where it sparks discussions about the governance of society and the suitability of current political models, with the State at the helm.
Although technological change indeed underpins social change (Dator, 2019), we cannot be complacent. We can and must act to decide how we want to continue living in society and to enhance the benefits that progress brings, while also tackling the medium itself—technology—to mitigate its problems and second-order consequences (Hidalgo-Barquero, 2023).
The first challenge is accepting that there are no shortcuts. Considering the rights at stake if we take an inappropriate approach—and the effect on freedom and equality if we fail to acknowledge the existence and importance of the problem and act on it—we need to open up the debate. We cannot ignore how technology evolves; it would thus be short-sighted not to recognize that we face an ongoing debate requiring public-private collaboration. Nor can we neglect the basics: education. At the end of the day, we, the people, must demand that digital platforms meet minimum standards. The only lever we have is to use, or cease to use, the services they provide. We will only do this when, through critical thinking, we recognize the need to demand improvements; in their absence, we may turn to other platforms, which will undoubtedly arise in response to the growing awareness that alternatives are possible. In this scenario, platforms’ profitability will depend on abandoning current incentives and delivering an experience that inherently gives power to the user, moving away from the attention economy (Dixon, 2024).
Blockchain and other decentralized technologies are a very interesting step in this regard. Through social protocols like Farcaster (built on a blockchain) or Mastodon (federated via the ActivityPub standard), alternative apps are being built with functionalities quite similar to those of traditional social platforms. One of the main advantages—part of the default design of these protocols—is that users own the content they generate, as the sketch below illustrates. If we dislike a service or app, for instance because its functionalities create incentives with which we disagree, we can switch to another without losing our content or network. In any case, these alternatives, still small in terms of user numbers, will face the same difficulties and challenges, especially when it comes to content moderation.
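To make the idea of content portability concrete, here is a minimal sketch in Python. It is a didactic stand-in, not Farcaster's or ActivityPub's actual wire format: real protocols rely on cryptographic key pairs and signatures, whereas a bare content hash is used here for brevity. What it shows is the design principle that a post can be a self-contained, verifiable object whose identity does not depend on any single app or server.

```python
# Minimal sketch of content portability (hypothetical format, not the
# real wire format of Farcaster or ActivityPub). The idea: a post's
# identity is derived from its own content, so any compatible app can
# store, relay, and verify it. Real protocols use key pairs and
# signatures; a SHA-256 content hash stands in here for brevity.

import hashlib
import json

def make_post(author_id: str, text: str) -> dict:
    """Build a post whose id is a hash of its canonicalized content."""
    body = {"author": author_id, "text": text}
    canonical = json.dumps(body, sort_keys=True).encode()
    return {**body, "id": hashlib.sha256(canonical).hexdigest()}

def verify(post: dict) -> bool:
    """Any client can recompute the hash to check the post is intact."""
    body = {"author": post["author"], "text": post["text"]}
    canonical = json.dumps(body, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest() == post["id"]

post = make_post("alice@someprotocol", "A post I keep even if I switch apps")
print(post["id"][:16], verify(post))  # same id on whichever app imports it
```

Because the identifier travels with the content rather than living in one company's database, switching apps does not mean abandoning what one has written, which is precisely the property attributed to these protocols above.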
We cannot conclude without mentioning all the changes taking place on the social network X, formerly Twitter, since its acquisition by Elon Musk. Setting aside the debate opened by those close to the network’s inception on the risks of concentrating ownership of these platforms (Wilson, 2022), Musk’s belligerent style, the opacity that still shrouds X’s operations today (AFP, 2024), and the ongoing conflicts with administrations around the world (European Commission, 2024), we must recognize the changes introduced since his arrival. Among them, and particularly relevant to content moderation, is the feature known as “Community Notes,” which has expanded notably in recent months (Buterin, 2023). It allows users themselves to attach contextual notes to widely viewed posts on X that they deem misleading or confusing. Interestingly, Elon Musk’s own posts are frequently the subject of these notes. This is a “crowdsourcing” model—that is, one that is decentralized and sustained by collective intelligence. It is also transparent, one of the main requirements we have emphasized as a prerequisite for users to understand how a platform’s operation affects them and to make informed decisions: the code for the rating algorithm can be inspected by anyone on GitHub, the world’s largest code-hosting platform.
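The heart of that algorithm is a “bridging” criterion: a note is surfaced only when raters who usually disagree with one another nonetheless rate it helpful. The toy Python sketch below, built on invented ratings, illustrates only the principle; X’s published implementation works differently in substance (matrix factorization over millions of real ratings), and the simple pairwise score here is a didactic stand-in.

```python
# Toy illustration of "bridging-based" note rating (not X's actual
# Community Notes algorithm, which uses matrix factorization; see the
# open-source repository on GitHub). Principle shown: a note scores
# high only when raters who usually DISAGREE both find it helpful.

from itertools import combinations

# ratings[note][rater] = 1 (helpful) or 0 (not helpful) -- invented data
ratings = {
    "note_a": {"u1": 1, "u2": 1, "u3": 1},  # helpful across viewpoints
    "note_b": {"u1": 1, "u2": 0, "u3": 0},  # helpful to one "side" only
}

# Historical agreement between pairs of raters (0..1); a low value
# means the two raters come from different viewpoint clusters.
agreement = {
    ("u1", "u2"): 0.2,
    ("u1", "u3"): 0.3,
    ("u2", "u3"): 0.9,
}

def bridging_score(note: str) -> float:
    """Sum over pairs of 'helpful' votes, weighted by how much the two
    raters usually disagree: cross-viewpoint consensus counts for more."""
    helpful = sorted(r for r, v in ratings[note].items() if v == 1)
    return sum(1.0 - agreement[pair] for pair in combinations(helpful, 2))

for note in ratings:
    print(note, round(bridging_score(note), 2))
# note_a (1.6) outranks note_b (0.0): its support bridges disagreement
```

The design choice worth noting is that raw vote counts are never enough: a note cheered only by one cluster scores zero, which is what makes this kind of mechanism resistant to coordinated brigading.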
As we see, innovation in favor of the user is indeed possible. Let us champion it and educate citizens. Freedom and equality are at stake, as is our collective memory.
This essay was finalized in August 2024 for the course “Philosophy, Computing, and Digital Humanities,” which is part of the Master’s program in Theoretical and Practical Philosophy at the National Distance Education University (UNED).
References
AFP (2024). X vows to end harvesting of EU users' personal data to train its AI. France 24. https://www.france24.com/en/live-news/20240904-x-vows-to-end-harvesting-of-eu-users-personal-data-to-train-its-ai
Álvarez, J.F. (2018). Nuevas capacidades y nuevas desigualdades en la sociedad red. Laguna: Revista de Filosofía, 42, pp 9-28.
American Civil Liberties Union (2024). Supreme Court ruling underscores importance of free speech online. https://www.aclu.org/press-releases/supreme-court-ruling-underscores-importance-of-free-speech-online
Buterin, V. (2023). What do I think about Community Notes? Vitalik Buterin’s website. https://vitalik.eth.limo/general/2023/08/16/communitynotes.html
Cadwalladr, C., & Graham-Harrison, E. (2018, March 17). Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach. The Guardian. https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election
Cid, G. (2022). La filosofía de Bitcoin: un libro plantea la caída del Estado como lo conocemos. El Confidencial. https://www.elconfidencial.com/tecnologia/2022-01-19/filosofia-bitcoin-libro-caida-estado-alvaro-maria_3343557/
Conger, K., Isaac, M. & Frenkel, S. (2021). Twitter and Facebook lock Trump’s accounts after violence at Capitol. The New York Times. https://www.nytimes.com/2021/01/06/technology/capitol-twitter-facebook-trump.html
Constitución Española. Boletín Oficial del Estado, December 29, 1978, no. 311.
Dator, J. (2019). What Futures Studies Is, and Is Not.
Dixon, C. (2024). Read Write Own. Building the Next Era of the Internet. Random House.
European Commission. (n.d.). The Digital Services Act: Very large online platforms and search engines. Digital Strategy. Available at https://digital-strategy.ec.europa.eu/en/policies/dsa-vlops
European Commission. (n.d.). Code of practice on disinformation. Digital Strategy. Available at https://digital-strategy.ec.europa.eu/es/policies/code-practice-disinformation
European Commission (2024). Commission sends preliminary findings to X for breach of the Digital Services Act. European Commission. https://ec.europa.eu/commission/presscorner/detail/en/IP_24_3761
Ewe, K. (2023, August 23). The biggest elections to watch in 2024. Time. https://time.com/6550920/world-elections-2024/
Federal Trade Commission of the United States (2021). FTC Alleges Facebook Resorted to Illegal Buy-or-Bury Scheme to Crush Competition After String of Failed Attempts to Innovate. Available at https://www.ftc.gov/news-events/news/press-releases/2021/08/ftc-alleges-facebook-resorted-illegal-buy-or-bury-scheme-crush-competition-after-string-failed
Jensen, F. E., & Nutt, A. E. (2015). The teenage brain: A neuroscientist's survival guide to raising adolescents and young adults. Harper.
Kahneman, D. (2012). Pensar rápido, pensar despacio. Editorial Debate.
Nguyen, C. T. (2020). Echo Chambers and epistemic bubbles. Episteme, 17(2), 141–161. doi:10.1017/epi.2018.32
OpenAI (2024). Disrupting deceptive uses of AI by covert influence operations. Available at https://openai.com/index/disrupting-deceptive-uses-of-AI-by-covert-influence-operations/
Pew Research Center (2022). Satisfaction with democracy and political efficacy in advanced economies, 2022. https://www.pewresearch.org/global/2022/12/06/satisfaction-with-democracy-and-political-efficacy-in-advanced-economies-2022/
Rosales, J. M. (1998). Política cívica: La experiencia de la ciudadanía en la democracia liberal. Centro de Estudios Políticos y Constitucionales.
Sandel, M. J. (2011). Justicia. ¿Hacemos lo que debemos?. Penguin Random House Grupo Editorial.
Sanders, L. (2023). How well can Americans distinguish real news headlines from fake ones? YouGov. https://today.yougov.com/politics/articles/45855-americans-distinguish-real-fake-news-headline-poll
The New York Times. (1938). Radio listeners in panic, taking war drama as fact; Many flee homes to escape gas raid from Mars. https://www.nytimes.com/1938/10/31/archives/radio-listeners-in-panic-taking-war-drama-as-fact-many-flee-homes.html
United Nations. (n.d.). Contrarrestar la desinformación [Countering disinformation]. Available at https://www.un.org/es/countering-disinformation
U.S. Department of Justice. (2020). United States Department of Justice’s review of Section 230 of the Communications Decency Act of 1996. https://www.justice.gov/ag/file/1072971/dl?inline=
Williams, D. (2024). Biden's age and the problem with the misinformation cope. Conspicuous Cognition.
Wilson, F. (2022). Twitter is too important to be owned and controlled by a single person. The opposite should be happening. Twitter should be decentralized as a protocol that powers an ecosystem of communication products and services [Post]. X. Available at https://x.com/fredwilson/status/1514564762142752768
X. (n.d.). Community Notes Guide. Available at https://communitynotes.x.com/guide/es/about/introduction
X. (n.d.). Manipulated media. Available at https://help.x.com/es/rules-and-policies/manipulated-media
X. (n.d.). Using X: X timeline. Available at https://help.x.com/en/using-x/x-timeline