Trapped in Algorithm Bubbles: Can We Escape It?

Article approved [Issue 352] 2021.12.06  


Nowadays, it is difficult to imagine a world without social media. Especially since the COVID-19 pandemic ushered in the "untact" era, social media platforms such as Instagram, YouTube, TikTok, and Facebook are more active than ever. 

What are Algorithms?

You may have experienced being overwhelmed by the flood of content when scrolling through YouTube, immersed to the point of losing track of time. Once you click on a video and finish watching it, it is hard to resist the previews of the next videos that pop up immediately after it ends. This mechanism is based on a system called the "algorithm." The term "algorithm" can be defined as a process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer. Algorithms on social media platforms filter a vast amount of information so that only relevant content is delivered to each user. Their main goal is to maximize engagement by finding out what people like and exposing it at the top of the feed.
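To make the idea concrete, the core feed-ranking step can be sketched in a few lines of Python. This is a minimal illustration, not any platform's actual system; the item names and interest scores below are invented, whereas real platforms estimate such scores from hundreds of behavioral signals.

```python
def rank_feed(items, predicted_interest):
    """Order items so the ones a user is most likely to engage with come first."""
    return sorted(items, key=lambda item: predicted_interest[item], reverse=True)

# Hypothetical per-user interest scores (0 = no interest, 1 = certain click)
predicted_interest = {"cooking clip": 0.9, "news recap": 0.4, "travel vlog": 0.7}

feed = rank_feed(list(predicted_interest), predicted_interest)
print(feed)  # the most engaging content is exposed at the top of the feed
```

Everything that makes a real recommender powerful (and controversial) is hidden inside how `predicted_interest` is computed; the ranking itself is this simple.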

Filtering Mechanism: Content-based Filtering and Collaborative Filtering

Let's say that you are taking a walk in the park alone on a cool autumn night. You are trying to listen to a playlist of calm, relaxing music, but loud, funky music suddenly interferes with your mood. This is what we would have to face without recommendation algorithms. Such systems fall into two categories: content-based filtering and collaborative filtering. Content-based filtering relies solely on information from the content itself, recommending other videos by analyzing the unique characteristics of each item. This approach has the advantage of being able to make suggestions even when the platform has no prior history for a specific user. However, since it focuses only on analyzing the content, it struggles to identify each user's personal taste in full detail. Collaborative filtering, by contrast, groups together people who normally behave alike and proposes content that is commonly watched within the group. Examples include showing mutual friends on Instagram or Facebook as suggestions for future connections, or displaying advertisements for products that are frequently purchased alongside products we have recently bought. One demerit of this approach, however, is the "cold start" problem: when a user or item is new, the system lacks the interaction data it needs to make any recommendation at all.
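The two filtering approaches described above can be sketched as toy Python functions. All of the data here (video tags, users, watch histories) is invented for illustration; production systems use far richer features and similarity measures.

```python
# --- Content-based filtering: compare items by their own attributes ---
videos = {
    "calm piano":  {"calm", "instrumental", "piano"},
    "rain sounds": {"calm", "ambient"},
    "funk mix":    {"loud", "funky", "dance"},
}

def content_based(liked, catalog):
    """Recommend the catalog item whose tags overlap most with a liked item."""
    liked_tags = catalog[liked]
    scores = {v: len(tags & liked_tags) for v, tags in catalog.items() if v != liked}
    return max(scores, key=scores.get)

# --- Collaborative filtering: compare users by their watch histories ---
history = {
    "user_a": {"calm piano", "rain sounds"},
    "user_b": {"calm piano", "rain sounds", "forest walk"},
    "user_c": {"funk mix"},
}

def collaborative(user, histories):
    """Recommend what the most similar user watched that this user has not."""
    mine = histories[user]
    similar = max((u for u in histories if u != user),
                  key=lambda u: len(histories[u] & mine))
    return histories[similar] - mine

print(content_based("calm piano", videos))  # works even for a brand-new user
print(collaborative("user_a", history))     # fails for a user with an empty history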

The Overgeneralization Drawn by Algorithms

Recommendation systems based on algorithms are indeed meaningful when considering the convenience and speed they provide to users. Suggesting videos relevant to one's interests suits users in today's society, where speed and handiness are most valued. Despite these merits, several problems have been rising to the surface along with the increase in the use of social media platforms. Professor Ko Hak-su, from the Graduate School of Law at Seoul National University, casts doubt on the credibility of recommendation systems: "In general, questions may arise as to whether recommendations are made useful to individual users while standing in line with the user's best interest. Algorithm users may be the end consumers, but depending on the type of service, they may also be self-employed business owners in places such as restaurants, or delivery workers, expanding beyond the traditional group of end consumers; there are a wide variety of possibilities." Because an algorithm judges whether a user will be interested in certain content relative to past behavior, without any objective standard, it struggles to weigh the varied interests that differ by situation and stance, resulting in hasty generalization. Moreover, questions are raised about how algorithms operate in concrete terms. The problems that arise within Facebook and Instagram originate from a secretive algorithm that only a few employees within the corporation understand. 
Franklin Foer, the author of World Without Mind, points out that "Facebook exploits hundreds of thousands of signals to determine what they will show the users." He also claims, "The Facebook source code, longer than 60 million lines, is like an indecipherable ancient script written with complex algorithms – so much so that even Facebook themselves may not fully understand it." Since most users use Facebook for many different purposes, including for profit, under a voluntary agreement to its terms, the problems that arise are rarely diagnosed and resolved properly. Professor Won Yong-jin, from the Department of Communication at Sogang University, states, "There are more speculations about algorithms than proven facts. One of the reasons is that the way they operate is not open to the public. As they are being criticized for their non-transparency, transparency should be prioritized." He added that it is wrong to simply assume that algorithmic systems are being manipulated and distorted, while at the same time noting that it is irrefutable that systems using algorithms bear greater responsibility for this.

The Indiscriminate Spread of Improper Content

Another emerging problem that algorithms can lead to is the indiscriminate spread of inappropriate content. Park Tae-yeon, a student at Hanyang University with experience of algorithm-based social media, said, "It seems that the amount of data provided by social media does not necessarily guarantee its degree of credibility. I have had the experience of being exposed to explicit content on social media against my will. The experience was very unpleasant. 
Once, my algorithm recommended many inappropriate videos in a short period of time; I believe this was because most of the channels I subscribe to are run by real-life couples, and the system automatically recommends videos from other related channels, although I am not interested in them." In October of this year, an 18-year-old high school student in Covington, Louisiana, was arrested for hitting a disabled 64-year-old teacher in her classroom, claiming that it was for a "TikTok challenge." The violent challenges trending on TikTok, backed up by algorithms, are not limited to the above; a challenge called "Devious Licks" spread rapidly among students and was problematic because it involved theft and vandalism. As such, the spread of inappropriate content, trending mostly with the assistance of algorithms, can result in unacceptable and undesirable outcomes, especially for teenagers, who are more vulnerable to negative stimulation. These instances warn us that the use of algorithms without proper regulation and due diligence can bring about unprecedented drawbacks with no foreseeable resolutions. 

Fast Spread of Fake News

Current issues stemming from social media and their use of algorithms are related to fake news that misleads viewers. A fake news article typically mimics reporting from a renowned press outlet such as CNN but is completely falsified, aiming to trick viewers for mere entertainment or, even worse, out of specific malicious intent in political, economic, or other interest-oriented fields. Photoshopped images attached to the articles act as key components in fooling even the most suspicious viewers, and the practice has now evolved to use deepfake technology, which utilizes artificial intelligence to forge videos with the appearances and voices of politicians and other well-known figures. Such fake news is promoted by algorithms without its authenticity being verified; it is recommended to and watched by many viewers online, reproduced until it becomes "viral," and attracts more victims across the Internet. Jimmy Wales, the founder of Wikipedia, said in an interview with Yonhap News that fake news and radicalism prevalent on the Internet are being promoted by algorithms that lead people to click on them even when they do not look them up intentionally. Without valuing the authenticity and public interest of the information, algorithms focus only on the business goal of leading as many people as possible to view what they feed the users.

Let's Plunge Further: Cyber-Escapism

What causes such a helpless dependency on personalized algorithms? One of the main reasons people resort to increasing screen time is that it has become a prominent form of escapism. The problem with dependency on algorithms is that hyper-individualized content is not an exploration of individuality but an escape into an accustomed familiarity. Im Hye-bin, a professor in the Department of Industrial Psychology at Kwangwoon University, stated that it becomes increasingly and egregiously difficult to ask people to invest in different kinds of perspectives, information, and even other interests and hobbies. She identified the biggest problem of algorithms as a refusal to communicate with other people, using our phones as a form of escapism from reality. David Foster Wallace's non-scientific approach to defining loneliness provides insight into this reliance on algorithms to pass the time away from dread-inducing "interactions" with other people. He shares his observations: "Lonely people tend, rather, to be lonely because they decline to bear the psychic costs of being around other humans. They are allergic to people. People affect them too strongly." Professor Im claims that instead of "facing the different hardships" of what they generally regard as the burdensome activity of "face-to-face communication," people seek refuge in their filter bubbles. She continues, "It's a lot easier to be in proximity with people you agree with and who don't question your values and beliefs." The notion of a "filter bubble" is largely predicated on the idea that by only exposing people to information they already agree with, and overemphasizing it, algorithms amplify biases and hamper people's development of diverse and healthy perspectives.

Welcome to the Filter Bubble

First coined by Eli Pariser in 2010, the "filter bubble" refers to the phenomenon in which hyper-individualized algorithms cause internet users to encounter only information and opinions that align with or reinforce their own beliefs. According to Pariser, users get less exposure to conflicting viewpoints and become intellectually isolated in their own informational bubble. More importantly, users are placed in a filter bubble without any awareness of being trapped within it. He warns that the "epistemic quality" of knowledge and the diversity of perspectives will greatly suffer, putting the future of civic discourse into question.
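The feedback loop Pariser describes can be caricatured in a few lines of Python. The update rule and its numbers are purely illustrative assumptions, not a model of any real platform: each round, clicks come mostly from agreeable content, and the algorithm amplifies whatever gets clicked.

```python
def simulate_bubble(preference=0.6, rounds=10):
    """Track the share of the feed that agrees with the user's existing view."""
    exposure = preference
    for _ in range(rounds):
        # hypothetical reinforcement: exposure grows in proportion to itself,
        # saturating as the feed fills up with agreeable content
        exposure = exposure + 0.5 * exposure * (1 - exposure)
    return exposure

print(round(simulate_bubble(), 2))  # exposure drifts toward 1.0: the bubble closes
```

Even starting from a mild 60/40 preference, a few rounds of reinforcement leave almost nothing in the feed that challenges the user, which is the "trapped without awareness" dynamic described above.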

Weakening Social Communities

Boo Suh-yun, a professor in the Department of Media and Psychiatry of Kyungbuk University, warned: "We all do this skimming and sharing and clicking, and it seems so innocent. But many of us are uninformed about or uninterested in the forces affecting what we see online and how content affects us in return, and that ignorance has severe consequences."

Can We Pop the Bubble?

Researchers of personalized algorithms emphasize that it is important not to antagonize the development of this technology. David Lankes, a professor at the University of South Carolina School of Library and Information Science, wrote, "There is simply no doubt that, on aggregate, automation and large-scale application of algorithms have had a certain positive effect. People can be more productive, know more about more topics than ever before, identify trends in massive piles of data and better understand the world around them. That being said, unless there is an increased effort to make true information literacy a part of basic education, there will be a class of people who can use algorithms and a class used by algorithms." Professor Boo warns that direct government regulation of people's individual AI may not redeem the undemocratic consequences of personalized algorithms. On October 28, the Ministry of Science and ICT held a seminar on how the South Korean government should approach the regulation of AI algorithms. The United States is also reserving some of its remaining legislative days of 2021 to introduce the Filter Bubble Transparency Act. Both discussions call for transparency of these systems in order to reduce manipulation enacted by algorithms using personal data. Neither conclusion is final yet, but it seems that platforms that propagate harmful information will face unprecedented consequences. Professor Ko notes that what is more worrisome is that "general opinions" asserting that "algorithms should be fair" do not provide any meaningful guidance to actual algorithm developers. Future regulations and laws should require developers to present measurable metrics backed by statistics or figures. 
Developers and corporations cannot work from abstract ethical notions or principles; they can only act on requirements that are measurable with data. Such indicators cannot be established by simply introducing a "loaded, short sentence into the law." Sufficient social deliberation and the accumulation of empirical data in the field must come first.

Urgent Calls for Media Literacy

Professor Lazer claims that "Tackling it [algorithm regulation] demands not only technical sophistication but an understanding of and interest in societal impacts." Professor Lankes warns that not only must the corporate world take an interest in these effects, but consumers must be informed, educated, and active in their orientation toward consuming even the subtlest content. This is what computer literacy looks like in the 21st century.


The "filter bubble" is our new habitat. We are its designers, and the algorithm is our trusty architect. It is where we reside: a personally curated world that influences our identity, shapes our values and priorities, and distorts our collective reality. As our societies become more and more enveloped by such "bubbles" – spheres of individualized realities – the line between the skewed and the real becomes less and less distinct.

Kim Sung-joo & Park Seung-hye

<Copyright © The Hanyang Journal. Unauthorized reproduction and redistribution prohibited.>



