Algorithmic Personalization and the State of Democracy in a Globalized World

A long essay I wrote for my AI Ethics class.

In an increasingly globalized world, where a social media presence can seem more a necessity than an indulgence, personalization algorithms help users navigate a never-ending stream of media by showing them what they are likely to enjoy. One consequence of these algorithms plays out in the political realm, where a user’s feed comes to reflect their own views far more than differing political perspectives. These algorithms, and their lack of regulation, pose a significant risk to the necessary conditions for democracy while promoting extremism that can translate into dangerous and illegal acts in the real world. This paper defines personalization algorithms in social media, examines their rationale and ethics, and traces their consequences in the political sector. I will follow Cohen and Fung’s outline of an “idealized democratic public sphere” to show how algorithmic personalization promotes extremism spurred by political tunnel vision.

Algorithmic personalization is the process by which social media companies use machine learning algorithms to choose what a user will see on their feed. The choice is based on the user’s past interactions with other users, current online behavior, and likes and dislikes, and it is a critical part of every social media platform (Pariser 2012; Shi 2021). The debate over whether algorithmic personalization is closing or opening users’ minds is a curious and at times volatile one: some argue that social media exposes users to a wide range of posts from across the globe that match their interests, introducing them to new cultures and ideas despite personalization (Fletcher 2020). Others counter that hyper-personalization limits the diversity of a user’s feed, perpetually feeding their existing beliefs, biases, and stereotypes (An, Quercia and Crowcroft 2014; Nguyen 2020; Pariser 2012; Saez-Trumper, Castillo and Lalmas 2013).
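The core feedback loop described above can be illustrated with a deliberately toy sketch (this is my own simplification for illustration, not any platform’s actual ranking system): posts that overlap topically with what a user already engaged with are scored higher, so the feed drifts toward more of the same.

```python
# Toy illustration of engagement-based personalization (an assumption-laden
# simplification, not a real platform algorithm): posts similar to a user's
# past engagements are scored higher, so the feed narrows over time.

def rank_feed(posts, user_history):
    """Score each candidate post by topical overlap with the user's history."""
    seen_topics = {topic for post in user_history for topic in post["topics"]}

    def score(post):
        overlap = len(set(post["topics"]) & seen_topics)
        # Similarity is amplified by overall popularity, so aligned and
        # already-popular content dominates the feed.
        return overlap * post["engagement"]

    return sorted(posts, key=score, reverse=True)

history = [{"topics": ["politics-left"], "engagement": 10}]
posts = [
    {"id": "a", "topics": ["politics-left"], "engagement": 5},
    {"id": "b", "topics": ["politics-right"], "engagement": 9},
    {"id": "c", "topics": ["cooking"], "engagement": 7},
]
feed = rank_feed(posts, history)
print([p["id"] for p in feed])  # the ideologically aligned post "a" ranks first
```

Even in this crude form, the post matching the user’s history outranks a more popular post from the opposing viewpoint, which is the narrowing dynamic the filter-bubble literature worries about.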

This latter view supports the idea of filter bubbles, a term coined by author Eli Pariser. Filter bubbles go one step beyond algorithmic personalization: they result in an “invisible, algorithmic editing of the web” that removes any viewpoints other than a user’s pre-existing ones (Pariser 2012). This has the potential to reinforce confirmation bias online, such that one dismisses any media that does not follow one’s ideology (Modgil 2021). Over the past decade, researchers have reached no consensus on whether filter bubbles shape user ideology, or even whether they exist at all, leading many to treat this lack of confirmation as a pretext for dismissing algorithmic personalization as not worth studying (Barberá 2020). This is wholly incorrect: regardless of how one feels about the impact of filter bubbles, algorithmic personalization itself warrants concern for the role it plays in how information is shared and understood.

Before looking at algorithmic personalization’s effects on politics, one should understand the primary side effect of having an algorithm predetermine one’s content. Robert Putnam was one of the first prominent authors to lay out a simple framework for filtering algorithms’ effects on polarization. In Bowling Alone (2000), he worried that the internet would create a haven for white supremacists to “narrow their circle to like-minded intimates” (Putnam 2000, p. 178). Computational political scientist Pablo Barberá summarized Putnam’s claims about the harms of filtering algorithms into three main components: “(1) digital technologies facilitate the emergence of communities of like-minded individuals, where (2) they are increasingly isolated from challenging information, a process that is exacerbated by (3) filtering algorithms” (Barberá 2020).

Furthermore, algorithmic personalization “curates” the perfect feed for our biases, which we tend to believe is factually accurate. These algorithms exploit the human tendency to instinctively trust those who share our beliefs more than those who do not. After all, information is far more salient when it disagrees with our perspective and far less so when it agrees. The underlying assumption that a feed curated to a user’s biases can stand in for fact is the primary cause for concern with high-level algorithms, as they “tend to discourage the development of critical and pluralistic thinking due to the arbitrary selection of data to which we have access” (Rodilosso 2024). Researcher Ermelinda Rodilosso argues that although individuals naturally form communities that share their mindsets and values, the online space escalates this innate bias to a potentially dangerous scale. In the political context, this can lead to extremism that destabilizes one’s democratic system.

Additionally, these algorithms “can encroach on individual users’ autonomy” (Rodilosso 2024) by providing recommendations that nudge users in a particular direction, by attempting to “addict” them to certain types of content, or by limiting the range of options to which they are exposed (Milano et al. 2020). These algorithms make decisions for you. Even outside social media, such systems curate what to watch, buy, or read. This seemingly inescapable yet invisible force takes away the user’s autonomy to choose, to discover, and to learn new perspectives.

Cohen and Fung’s article, “Democratic Responsibility in the Digital Public Sphere”, states that democracy requires “a well-functioning democratic public sphere” that “ensure[s] equal, substantive communicative freedom”, which they outline as a set of five rights and opportunities permitting the free and equal reasoning that bolsters individual political autonomy. These are:

  1. Diversity: “good and equal chances to hear a wide range of views on issues of public concern”, giving “reasonable access to a range of competing views” which promotes healthier public debate and policy implementation

  2. Access: “good and equal access to instructive information on matters of public concern” such that each person who puts effort into accessing trustworthy, reliable sources can do so in a manner that does not prioritize some sources above others

  3. Communicative Power: “good and equal chances to associate and explore interests and ideas together with others” to reach a common consensus and work together to achieve a common goal through “open-ended discussion, exploration, and mutual understanding”

  4. Rights: expressive liberty that presumes against viewpoint discrimination, which occurs when there is a speech regulation due to the perspective of the speech itself, not because it violates any clauses of the First Amendment’s freedom of speech. This ensures free debate among differing political perspectives, restricts government overreach, and promotes citizens’ confronting injustices

  5. Expression: “good and equal chances to express views on issues of public concern to a public audience” such that each person who puts in a reasonable effort to share their views beyond their inner circles is given a fair opportunity to do so

These rights and opportunities are accompanied by three norms and dispositions (Cohen and Fung 2023, p. 94-95), which are necessary for the proper fulfillment of each of the above opportunities and for any fruitful, fair public discourse among participants in a well-functioning public sphere:

  1. Truth: participants have a universal assumed understanding of the importance of truth and “not deliberately misrepresenting their beliefs or showing reckless disregard for the truth or falsity of their assertions”, alongside “a willingness to correct errors in the assertion”

  2. Common Good: participants are concerned about the common good, on some reasonable understanding of it; participants need not share a single view of justice, but each participant has “their views on fundamental political questions [and] are guided by a reasonable conception of the common good”

  3. Civility: participants can justify their political views in a “civil” manner, where they do not engage in political argument to prove their side or further confirm their beliefs, but as a manner of explaining to those that do not share their viewpoints in a manner that “can be supported by core, democratic values and principles—say, values of liberty, equality, and the general welfare”, while listening to the perspectives of others respectfully

Although each of Cohen and Fung’s five criteria matters when discussing a healthy democracy, I will further define and examine three of them, along with the three norms and dispositions attached to each, that are fundamentally undermined by the current standards for algorithmic personalization in today’s political ecosystem. As the five criteria overlap, these three also integrate the remaining two. I will analyze these criteria specifically in the U.S. political context.

  1. Diversity

Firstly, I will examine Cohen and Fung’s first right and opportunity for a well-functioning public sphere, Diversity. They state that diversity means “each person [has] good and equal chances to hear a wide range of views on issues of public concern”, with access to “a range of competing views about public values”. Exposure to disagreeing viewpoints is key for people to reflect on their views and justify them against conflicting information, which can yield a high quality of political reflection and deliberation, a sentiment routinely supported by political philosophers and theorists (Barberá 2020). Political and philosophical diversity is necessary for a healthy democracy: it fosters a political sphere filled with intellectual debate rather than ignorant screaming matches where neither side has listened to a word of the opposing argument. Environments where only some political content reaches an audience foster ignorance, confirmation bias, and, in rare but more prominent cases, extremism that can lead to violence (Modgil 2021).

As previously stated, algorithmic personalization is rooted in a lack of diversity, designed to “hook” viewers with what the algorithm already knows a user likes. Social media companies depend on advertising revenue, and the longer a user stays on the platform, the higher the chance they click on an ad; it is therefore clearly in these companies’ interest to addict users with homogeneous content that has historically proven to be what they enjoy (Sindermann et al. 2023). Even if social media algorithms were to include differing political perspectives in a user’s feed, it could fail both social media companies and political civility. It would (1) discourage long-term user retention, as users are less engaged by media they do not relate to, and (2) according to a study that tracked users’ political beliefs before and after following bots that tweeted opinions opposed to their own, actually push a user’s beliefs further to the left or right than before (Bail et al. 2020). The latter point signals that a change must occur in how diverse viewpoints are integrated into algorithmic personalization, and I propose that these algorithms integrate diverse viewpoints, especially political ones, at the beginning of a user’s social media experience, when they first create an account. Bail’s study asked already active Twitter users to follow bots whose political beliefs strikingly disagreed with their own, after potentially years of never being approached with such views online. Integrating diversity into the feed while a user is first learning how a site works establishes from the start that social media is not meant to push only non-conflicting information.
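The onboarding proposal above could be sketched mechanically as follows. Everything here is a hypothetical illustration of the idea (the viewpoint labels, quota scheme, and function names are my own assumptions, not any platform’s API): a brand-new account’s first feed is filled with an equal share of posts from across the viewpoint spectrum, before engagement-based personalization takes over.

```python
# Hypothetical sketch of seeding a new account's first feed with equal
# shares of each viewpoint. Labels, quotas, and names are illustrative
# assumptions, not a description of any real platform.
import random

VIEWPOINTS = ["left", "center", "right"]

def seed_initial_feed(candidate_posts, feed_size=9, rng=None):
    """Fill a new user's first feed with an equal quota per viewpoint."""
    rng = rng or random.Random(0)          # fixed seed for reproducibility
    quota = feed_size // len(VIEWPOINTS)   # equal share per viewpoint
    feed = []
    for view in VIEWPOINTS:
        pool = [p for p in candidate_posts if p["viewpoint"] == view]
        feed.extend(rng.sample(pool, min(quota, len(pool))))
    rng.shuffle(feed)  # interleave so the feed isn't grouped by viewpoint
    return feed

posts = [{"id": f"{v}{i}", "viewpoint": v}
         for v in VIEWPOINTS for i in range(5)]
first_feed = seed_initial_feed(posts)
print(len(first_feed))  # 9 posts, three from each viewpoint
```

The design choice is that diversity is enforced structurally at the moment of account creation rather than retrofitted onto an already personalized feed, which is where Bail’s intervention ran into backlash effects.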

As for the three norms and dispositions associated with diversity: as diversity decreases, so does the importance of truth within one’s feed, since users are less likely to double-check sources that support their views but are inaccurate. When truthful statements do appear, they can serve as confirmation that any other media aligning with one’s view is also truthful. This erosion of truth goes hand in hand with a loss of the common good, as it is algorithmically more likely for users to see media that matches their viewpoints whether or not it promotes the common good. It also affects online civility: reinforcing one’s view by posting (or reposting a post from a political out-group) with harsh language or crude jokes (more affectionately known online as “dunking”) is routinely rewarded by the algorithm (Rathje et al. 2021). This lack of civility in engaging with other viewpoints can only worsen, because as algorithmic personalization widens the political chasm between users, dehumanizing and criticizing the other side becomes all the easier.

  2. Access (and Communicative Power)

Secondly, I will examine Cohen and Fung’s second right and opportunity for a well-functioning public sphere, Access. It entails that each person who puts in the effort to inform themselves can access “instructive information on matters of public concern” as equally as anyone else (Cohen and Fung 2023, p. 94). This is blatantly not the case under algorithmic personalization, where algorithms make reliable information increasingly difficult to access if it disagrees with a user’s political affiliation. If a user has a sudden urge to challenge their viewpoints, the algorithm will hide opposing sources entirely, as if they were never there, unless the user goes out of their way to search for them. And even when a user does search explicitly, the algorithm boosts the most popular, and often most inane, takes, another example of unequal access to who gets a voice on social media.

One does not need to understand how an algorithm works to understand how to boost one’s content, and in the political realm this produces harsher posts that are factually blurry and, at times, violent (Bucher 2016). Even users unaware of the mechanics know that these algorithms prioritize more extreme or outwardly incorrect profiles, because their lack of factuality is precisely what drives engagement, especially from opposing viewpoints, whether that engagement is positive or negative (Rathje et al. 2021). These posts and accounts then gain more communicative power than those that post accurate, nuanced information, which violates Cohen and Fung’s third criterion: a user no longer has “good and equal chances to associate and explore interests and ideas together with others” (Cohen and Fung 2023, p. 94).

So the accounts that reach the most people are the most extreme and misleading ones. When people try to access information to inform themselves, doing so becomes harder than finding unreliable, extreme information. This search for “good media” keeps getting harder because as “bad media” grows more popular, it shrinks the chance of “good media” receiving any engagement whatsoever. The problem is difficult to solve: once content creators learn that “gaming the algorithm” means making low-quality, “brain rot” posts that yield high-value results, asking them to put time and effort into nuanced work that earns no money or engagement becomes an impossible task (Hu et al. 2023). This undermines social media’s founding purpose of connecting users across the globe around shared interests, if some if not most of what one sees online is “brain rot”, rants, or misinformation.

As for our three norms and dispositions, algorithmic personalization violates the truth norm because the algorithm incentivizes lying and absurdity while discouraging truthful, well-thought-out posts. Since extreme and unreliable content is the most monetizable, it likewise discourages content creators from focusing on the common good. It can almost feel as though media that promotes the common good is at odds with media categorized as entertaining, once the latter has crossed the threshold into the uninformative and unreliable; such content may not be for the “common bad”, but it is most certainly not for the common good of those online. The same applies to civility: the more extreme content becomes, the less it needs to be civil or respectful of other viewpoints.

  3. Rights (and Expression)

Thirdly, I will examine Cohen and Fung’s fourth right and opportunity for a well-functioning public sphere, Rights. This is arguably their vaguest criterion, since it centers on protecting free speech and expressive liberty, which underlies every other right and opportunity. Cohen and Fung differentiate this criterion in their statement: “protecting speech from viewpoint regulation helps establish the conditions that enable equal citizens to form and express their views and monitor and hold accountable those who exercise power. And it gives participants additional reason for judging the results to be legitimate” (Cohen and Fung 2023, p. 93-94). I will focus primarily on the latter part of this definition: the importance of monitoring and holding accountable those who exercise power on social media sites. This works in conjunction with Cohen and Fung’s fifth criterion, Expression, which requires “fair opportunities to participate in public discussion by communicating views on matters of common concern” to all audiences (Cohen and Fung 2023, p. 94). Social media has seen no universally acknowledged wave of political censorship across platforms, yet both sides of the political spectrum argue that they cannot express their true views, whether or not those views are factual, informative, or insightful, because the algorithm’s “invisible hand” limits certain political content. If true, such limiting would undermine both the Rights and Expression criteria, as it would constitute blatant viewpoint discrimination and deny fair opportunities for all to participate in online discussions.

That being said, when discussing expressive liberty, equal expression, and free speech in the American context, one must note that the First Amendment’s existing restrictions should carry over to the online context. Speech such as incitement, true threats, defamation or libel, and obscenity is not protected, and reasonably so (United States Courts). Some speech is intended to harm, and allowing people to engage in such harmful speech puts one’s community in danger. This extends to expressive liberty and to who has the right to equal expression online. I argue that all users should have expressive liberty free of viewpoint discrimination, and equal opportunity to participate in online discussions, up to a certain point. Posts should be removed or flagged when they encourage violence or other illegal activities, promote false claims that can lead to violence, use explicit profanity, or otherwise would not be protected in a court of law. Meta’s community standards, for instance, focus primarily on hate speech, nudity and pornography, child exploitation, harassment, and spam. There remains constant debate over whether some controversial online speech falls outside First Amendment protection (this is especially true in instances of hate speech), in cases ranging from false accusations in a high school to a former President inciting an insurrection.

In terms of the three norms and dispositions, one could disagree with my claim that politically extreme posts should be regulated and argue that removing political content at all violates the truth norm. Those who see any political content removal as glaring censorship of expressive liberty can argue that omitting controversial speech removes some version of the truth, even if it is objectively not truthful. This view has driven a wide reversal of content flagging, post removal, and account banning, largely to the benefit of alt-right extremist accounts. It goes hand in hand with a growing sentiment in today’s media landscape that gives more weight to “post-truth” than to objective truth (Anderson 2017). Under this view, the common good is also undermined, as not every participant has the same free and equal voice to share their concerns when some accounts are haphazardly censored under the guise of promoting civility.

This sentiment is shared mostly among right-leaning individuals and the CEOs of social media companies, namely Mark Zuckerberg of Meta (which owns Instagram and Facebook) and Elon Musk of X (formerly known as Twitter). They cast themselves as protectors of free speech in today’s political environment by preventing viewpoint regulation of far-right-leaning accounts, including President Trump, whose account was banned after he incited violence during the January 6 Capitol insurrection, a clear example of post-truth. Trump’s X account was reinstated after Elon Musk, his biggest donor in the 2024 election, bought Twitter and claimed that Trump’s removal had been a mistake (Conger 2023).

Those who rely on post-truth, using appeals to emotion and personal belief more than objective fact, often equate it with objective truth. This mistaken assumption leads one to conclude that if “post-truth” deserves the same place in our public sphere as objective truth, then any inhibition of it violates Rights, the common good, and civility.

For a healthy democracy, we must stop blurring the line of what truth is. If social media platforms are truly to be the arbiters of free speech, they must recognize that total free speech comes at the cost of truth, a key norm for upholding a well-functioning democratic public sphere. Additionally, Cohen and Fung acknowledge that “a commitment to the common good requires citizens to resolve these differences on a basis that respects the equal importance of others” (Cohen and Fung 2023, p. 96), which is impossible when post-truth is widespread, as post-truth does not care about respecting and treating all groups as equals.

Upholding truth on these platforms need not mean removing every falsehood, but it does mean removing extremist or dangerous content, such as posts that deny a fair election, incite violence, promote violence against a group of people, or traffic in outright bigotry. This also maintains the common good and civility, as any content that demeans other groups would carry a punishable consequence, such as post removal or account banning.

As Cohen and Fung’s article makes clear, today’s social media platforms are not promoting a healthy, well-functioning democratic public sphere. Even granting the argument that filter bubbles do not affect one’s political compass, algorithmic personalization remains at work. Before solutions for curbing political extremism in social media algorithms can be devised, there must be a consensus among philosophers, technologists, and politicians that this is a problem. We must ask whether today’s social media landscape at all resembles what social media was intended to be, and whether we wish for it to change.
