
How Social Media Algorithms Promote Misinformation, and How We Can Educate Our Students

February 25, 2019

Recently, YouTube began removing conspiracy videos from its recommendations – videos its algorithm had promoted to the top of users' suggestions for years. YouTube is not alone in this: many other digital media companies also promote false and misleading information via algorithmic "feeds." Today, many people consume media exclusively through these feeds, which take many forms – social media timelines, push notifications on our smartphones, the news pages on sites like Google and Reddit, or some combination of all of these. We don't always seek out our information; sometimes it just "comes to us." And increasingly, information comes to us by way of machine-learning algorithms. In this context, it is important for academic librarians, especially those in liberal arts settings, to revisit our commitment to teaching students to critically examine sources in an information world increasingly curated by algorithms.

We know little about how these algorithms operate. The way they decide what flows to consumers is largely an industry secret; companies don't share this information publicly because it is extremely valuable intellectual property. We do know, for instance, that YouTube curated its recommended videos based on what a user had viewed in the past, with "watch time" as the primary metric. The amount of time a user spent watching videos on certain topics was compared with that of similar users, and videos were then "fed" to the user based on what those similar users had watched. This is why conspiracy videos, such as Modern Flat Earth Theory, became so prominent on YouTube: the more a video grabbed users' attention, the more minutes users logged watching it – and the higher it ranked on YouTube's recommended video list.
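To make the mechanism concrete, here is a minimal sketch of the kind of watch-time-driven collaborative filtering described above. Everything in it – the sample watch logs, the cosine similarity measure, and the function names – is an illustrative assumption on my part; YouTube's actual system is proprietary and far more complex.

```python
from collections import defaultdict
import math

# Hypothetical watch logs: user -> {video_id: minutes watched}.
# Invented data for illustration; real signals are proprietary.
watch_minutes = {
    "alice": {"moon_landing_doc": 40, "flat_earth_101": 90},
    "bob":   {"flat_earth_101": 120, "flat_earth_202": 60},
    "carol": {"moon_landing_doc": 35, "cooking_basics": 15},
}

def cosine_similarity(a, b):
    """How alike two users' watch-time profiles are."""
    dot = sum(a[v] * b[v] for v in set(a) & set(b))
    norm_a = math.sqrt(sum(x * x for x in a.values()))
    norm_b = math.sqrt(sum(x * x for x in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def recommend(user, k=3):
    """Rank unseen videos by how long similar users watched them.

    Note what is missing: nothing here asks whether a video is
    accurate. Minutes watched is the only ranking signal.
    """
    scores = defaultdict(float)
    for other, history in watch_minutes.items():
        if other == user:
            continue
        sim = cosine_similarity(watch_minutes[user], history)
        for video, minutes in history.items():
            if video not in watch_minutes[user]:
                scores[video] += sim * minutes
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("alice"))  # ['flat_earth_202', 'cooking_basics']
```

In this toy version, Alice has never watched "flat_earth_202," but because her watch history resembles Bob's, his hours of flat-earth viewing push it to the top of her list. Attention in, attention out – veracity never enters the calculation.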

We can safely deduce that YouTube's algorithm was not ranking videos by their veracity. And this problem spans the entire digital media landscape – Amazon and Hulu, for instance, still list conspiracy videos in their "documentary" sections.

In the past, media consumers could rely on a central authority – trusted newspapers, publishers, or television networks – to vet their information for them. Consumers have not, at least in recent history, been accustomed to critically evaluating the information that comes their way. Today there is no central authority in that same sense, and the vetting process has largely been automated. The media landscape has become so decentralized that users are not only experiencing "echo chambers" and "bubbles" on social media, but these bubbles are getting smaller and smaller thanks to machine learning. Algorithms are now so good at curating people's information feeds that the bubbles aren't shared by large groups of people anymore – each individual lives in their own filter bubble. And given the investments being made in artificial intelligence and machine learning, we can only expect the landscape to become more decentralized, not less.
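The narrowing effect is easy to simulate. Below is a toy model of a personalization feedback loop; the topic names, starting weights, and reinforcement rule are all invented for illustration and stand in for whatever engagement signals a real feed uses.

```python
import random

# Toy filter-bubble simulation: the feed serves stories in
# proportion to inferred interest, and each view reinforces
# that interest. All numbers here are arbitrary illustrations.
topics = ["politics", "science", "sports", "conspiracy"]
interest = {t: 0.1 for t in topics}  # weak, near-neutral starting profile

def pick_story():
    """Serve one story, weighted by the user's inferred interests."""
    return random.choices(topics, weights=[interest[t] for t in topics])[0]

for _ in range(1000):
    served = pick_story()
    interest[served] += 1.0  # engagement feeds back into the profile

total = sum(interest.values())
print({t: round(interest[t] / total, 2) for t in topics})
```

Run it a few times: whichever topic happens to get an early lead tends to take over the whole feed, and different runs end up dominated by different topics. That is the filter bubble in miniature – a rich-get-richer loop that gives each user their own, increasingly narrow, world.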

This is one of the reasons my colleagues at Carlow University's Grace Library have made a renewed commitment to digital and media literacy as part of our information literacy vision. We hope to realize this vision through several methods: first, by embedding information literacy and critical thinking skills into the core curriculum; second, by tying our information literacy instruction to the new ACRL Framework for Information Literacy for Higher Education; and third, by incorporating learning outcomes into our reference interactions (for example, encouraging students to seek out resources that represent diverse perspectives). By working toward these goals, we hope to reach more students and encourage them to ask critical questions about all of the information they consume.

It is more important than ever to stress the value of peer review, scholarly communication, and the research and knowledge creation process. But as part of a commitment to a liberal arts education, we must also train our students to ask critical questions about the media they consume outside of the research process, because our goal is to prepare them to become well-rounded adults and informed citizens.

These are nebulous and difficult questions to ask, and this problem is not something academic librarians can hope to tackle on their own. This issue is larger than a liberal arts university library. In my opinion, teaching students to deal with a decentralized media landscape is the information literacy challenge of our time. To equip students with the skills required to vet their information feeds, librarians must partner with faculty and build sustainable initiatives to reach as many students as possible. And educating students about the role of machine learning in their information feeds is a critical component of that overall goal.

[Image: book cover of The Filter Bubble by Eli Pariser]
One Comment

February 25, 2019 5:19 pm

Another book recommendation for those who are interested in algorithms and bias: Algorithms of Oppression, by Safiya Noble: https://www.worldcat.org/title/algorithms-of-oppression-how-search-engines-reinforce-racism/oclc/1020174401
