Social media algorithms exploit how we learn from our peers
Northwestern scientists propose interventions to limit the spread of misinformation and improve user experience
EVANSTON, Ill. --- Surveys of Twitter and Facebook users show people are exhausted by and unhappy with the overrepresentation of extreme political content or controversial topics in their feeds.
In a review published today (Aug. 3) in the journal Trends in Cognitive Sciences, social scientists from Northwestern University describe how the misalignment between the objectives of social media algorithms, which are designed to boost user engagement, and the functions of human psychology can lead to increased polarization and misinformation.
“There are reputational components that Twitter and Facebook must face when it comes to elections and the spread of misinformation,” said the article’s first author William Brady. “The social platforms also stand to benefit from better aligning their algorithms to improve user experience.”
Brady is a social psychologist in the Kellogg School of Management at Northwestern University.
“We wanted to put out a systematic review to help explain how human psychology and algorithms interact in ways that can have these consequences,” Brady said. “One of the things that this review brings to the table is a social learning perspective. This framework is fundamentally important if we want to understand how algorithms influence our social interactions.”
To cooperate and survive, humans are predisposed to learn from those they perceive as having prestige or influence within their group.
However, with the advent of diverse and complex modern communities — and especially on social media — these learning biases become less effective. For example, a person we are connected to online is not necessarily trustworthy, and people can easily feign prestige on social media.
When learning biases were first evolving, morally and emotionally charged information was important to prioritize, as this information was likely to be relevant to enforcing group norms and ensuring collective survival.
Algorithms, in contrast, usually select information that boosts user engagement in order to increase advertising revenue. This means algorithms amplify the very information humans are biased to learn from, oversaturating social media feeds with what the researchers call Prestigious, Ingroup, Moral and Emotional (PRIME) information, regardless of the content's accuracy or how representative it is of a group's opinions. As a result, extreme political content and controversial topics are more likely to be amplified, and users who are never exposed to outside opinions can come away with a false understanding of the majority opinion of different groups.
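To make that dynamic concrete, consider the following minimal sketch. It is not code from the paper; the engagement model, the 20% base rate and the PRIME boost are all illustrative assumptions. It only shows how ranking purely by predicted engagement overrepresents PRIME content relative to its share of the underlying pool:

    # Illustrative sketch only; all numbers and names are assumptions.
    from dataclasses import dataclass
    import random

    @dataclass
    class Post:
        is_prime: bool  # prestigious, ingroup, moral or emotional content

    def predicted_engagement(post: Post) -> float:
        # Hypothetical model: PRIME content draws more clicks on average.
        return random.random() + (0.5 if post.is_prime else 0.0)

    random.seed(0)
    pool = [Post(is_prime=random.random() < 0.2) for _ in range(10_000)]  # 20% PRIME
    feed = sorted(pool, key=predicted_engagement, reverse=True)[:100]     # engagement-ranked feed

    share = sum(p.is_prime for p in feed) / len(feed)
    print(f"PRIME share: 20% of the pool, {share:.0%} of the top-100 feed")

Even a modest engagement advantage for PRIME posts is enough for them to dominate the top of the feed, which is the oversaturation the researchers describe.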
“It’s not that the algorithm is designed to disrupt cooperation,” Brady said. “It’s just that its goals are different. And in practice, when you put those functions together, you end up with some of these potentially negative effects.”
To address this problem, the authors propose that social media users become more aware of how algorithms work and why certain content shows up in their feeds. Social media companies don't typically disclose the full details of how their algorithms select content, but the researchers suggest companies could start by offering explanations of why a user is being shown a particular post. For example, is it because the user's friends are engaging with the content, or because the content is generally popular?
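A simple form of that transparency could look like the sketch below. It is hypothetical — no platform exposes such an API, and the rules and thresholds are assumptions — but it shows how each recommended post could carry a human-readable reason:

    # Hypothetical sketch of per-post explanations; not a real platform API.
    from dataclasses import dataclass, field

    @dataclass
    class Post:
        engagers: set = field(default_factory=set)  # users who engaged with the post
        engagement_count: int = 0

    @dataclass
    class Viewer:
        friends: set = field(default_factory=set)

    def explain(post: Post, viewer: Viewer) -> str:
        # Assumed rules; a real ranker would draw on many more signals.
        friends_engaged = viewer.friends & post.engagers
        if friends_engaged:
            return f"Shown because {len(friends_engaged)} of your friends engaged with it."
        if post.engagement_count > 10_000:
            return "Shown because this post is broadly popular right now."
        return "Shown because it matches topics you often read."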
In addition, the researchers propose that social media companies take steps to change their algorithms so they are more effective at fostering community. Instead of solely favoring PRIME information, algorithms could cap how much PRIME information they amplify and prioritize presenting users with a diverse set of content. These changes could keep amplifying engaging information while preventing polarizing or politically extreme content from becoming overrepresented in feeds.
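One way to read that proposal in code — again a sketch under assumed labels, not the paper's algorithm — is a re-ranking pass that caps the share of PRIME posts in a feed and lets other engaging content fill the remaining slots:

    # Sketch of a capped re-ranker; the "is_prime" label is assumed, since the
    # paper does not specify how PRIME content would be detected in practice.
    from dataclasses import dataclass

    @dataclass
    class Post:
        is_prime: bool

    def rerank_with_prime_cap(ranked_posts, feed_size=50, prime_cap=0.3):
        """Rebuild a feed from posts already sorted by predicted engagement,
        allowing at most `prime_cap` of the slots to go to PRIME content."""
        max_prime = int(feed_size * prime_cap)
        feed, prime_count = [], 0
        for post in ranked_posts:
            if len(feed) == feed_size:
                break
            if post.is_prime and prime_count >= max_prime:
                continue  # skip surplus PRIME posts; diverse content takes the slot
            feed.append(post)
            prime_count += post.is_prime
        return feed

Because the pass preserves the engagement ordering within each group, the most engaging non-PRIME content fills the freed slots — roughly the trade-off the researchers argue could maintain engagement while reducing overrepresentation.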
The research team is also developing interventions to teach people how to be more conscious consumers of social media.
“As researchers, we understand the tension that companies face when it comes to making these changes and their bottom line. That’s why we think these changes could theoretically maintain engagement while also disallowing the current overrepresentation of PRIME information,” Brady said. “User experience might actually improve by doing some of this.”
The full text of the Trends in Cognitive Sciences article “Algorithm-Mediated Social Learning in Online Social Networks” can be accessed online at https://www.cell.com/trends/cognitive-sciences/fulltext/S1364-6613(23)00166-3