Reasserting Agency Online: A Manifesto
The students in my asynchronous online Summer 2020 course, Coded Content (ENGL 3380), collaborated to write this manifesto for their final project. Students reflected on their major takeaways from the course, from which I loosely defined the scope of their manifesto; then, with my facilitation and prodding, they brainstormed, generated prose, commented on each other’s contributions, and implemented changes in a shared Google Doc.
The spaces in which we read and write online (Facebook, Twitter, TikTok, WhatsApp, etc.) are often thought of simply as media of communication — otherwise neutral places where people can share ideas for others to see and respond to in conversation. In these spaces, however, there is a hidden rhetorical actor: algorithms[1] that govern the interactions on these platforms and influence the content we see and produce. Having algorithms “rule” the online world creates a whole series of problems, which we discuss below. As alumni of the Summer 2020 class of ENGL 3380, “Coded Content,” at Northeastern University, we recognize the need to take some of that power back.
A number of scholars in fields adjacent to English studies have written on algorithms’ mediating role. Algorithms recommend content and gatekeep what we experience on social media platforms — controlling, as Dustin Edwards describes, what gets circulated and shared — whether that process is visible to us or not. Estee Beck draws our attention to the often unseen algorithms that collect information about our online activities — from what sites we visit or apps we open, to when, where, and what we do, and with whom — information that is hoarded and categorized for future use by other algorithms. Features like reposts, hashtags, comments, and likes are convenient ways to interact with content, but they can also be leveraged by users, by algorithms (and their designers), and sometimes by bad actors looking to profit illicitly or to amplify voices that suppress others. While activities such as liking, commenting, or sharing may seem trivial in passing, their consequences are neither small nor local. Without an awareness of algorithms’ role in online spaces and of how they can be used tactically, we risk unintentionally promoting or silencing others. We recognize this as a call for algorithmic literacy: an understanding of the processes that govern our technology-driven lives and of their ramifications, both online and offline.
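To make that aggregation concrete, here is a deliberately simplified sketch in Python of how engagement signals might be combined into a single visibility score. The names, weights, and numbers are our invention, not any platform’s actual code; the point is that such a scorer cannot tell sincere engagement from manufactured engagement.

```python
# A toy illustration (not any platform's actual code) of how engagement
# signals might be aggregated into one visibility score.
from dataclasses import dataclass

@dataclass
class Post:
    likes: int
    reposts: int
    comments: int

def visibility_score(post: Post) -> float:
    """Weight reposts most heavily: they push content to new audiences."""
    return 1.0 * post.likes + 3.0 * post.reposts + 2.0 * post.comments

# Whether engagement is sincere, ironic, or bot-driven is invisible to
# the scorer: 200 manufactured reposts rank a post almost as high as
# broad organic engagement does.
organic = Post(likes=500, reposts=20, comments=50)
gamed = Post(likes=0, reposts=200, comments=0)
print(visibility_score(organic))  # 660.0
print(visibility_score(gamed))    # 600.0
```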
There are examples on almost every social media platform of subversive actors, from disruptive trolls to authoritarian leaders, who take advantage of platforms’ algorithms to proselytize, defraud, or harass. Appropriating a platform’s inherent logic lets such actors boost the visibility of their content, which lends their posts additional perceived credibility. For example, Ryan P. Shepherd has written on how members of the now-banned subreddit r/the_donald made their posts appear more frequently on Reddit’s homepage by actively manipulating its sorting algorithm, allowing them to reach a wider audience and push their ideologies into the mainstream. Even on platforms where identity[2] is more visible than on Reddit, it can be difficult to assess the intentions of users who share content that uses extremist language. The amplification of some voices at the cost of others often results when people with differing degrees of power, and differing opinions, share and interact with the same content. Content-sorting algorithms interpret users’ aggregated intentions and then promote already popular content to a broader audience. Aggregation, however, can preclude nuance; as Shepherd argues in his analysis of r/the_donald, “it is difficult to tell the difference between extremism and parodies of extremism online” (5). Worse, the algorithms that shape our experiences on many content-sharing platforms are often structured to favor polarizing rhetoric. This tendency becomes problematic when hateful and discriminatory speech is re-circulated, harming more people while encouraging its supporters to replicate that harm.
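The mechanics of this manipulation are easiest to see in a “hot” ranking function like the one Reddit once open-sourced. The Python below is adapted from that public formula (Reddit’s current sorting is more complex): because the vote term grows only logarithmically while the recency term grows steadily, a relatively small bloc of coordinated early upvotes can outrank far larger organic support that arrives later.

```python
# Adapted from the "hot" ranking in Reddit's once open-sourced codebase;
# today's sorting is more complex, but the logic is illustrative.
from datetime import datetime, timezone
from math import log10

def hot(ups: int, downs: int, date: datetime) -> float:
    """Score rises logarithmically with net votes, linearly with recency."""
    score = ups - downs
    order = log10(max(abs(score), 1))
    sign = 1 if score > 0 else -1 if score < 0 else 0
    seconds = date.timestamp() - 1134028003  # epoch offset from the original code
    return round(sign * order + seconds / 45000, 7)

now = datetime.now(timezone.utc)
# Each factor of ten in net votes adds one point -- the same boost as
# roughly 12.5 hours of recency -- so striking early beats voting big.
print(hot(120, 20, now))     # net +100    -> recency term + 2.0
print(hot(10_020, 20, now))  # net +10,000 -> recency term + 4.0
```

The design choice to log-scale votes but not time is precisely what made early, coordinated voting on new posts such an effective tactic.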
That said, it is entirely possible for us, as social media users, to use our knowledge of algorithms to interact positively with digital spaces in ways that account for their material effects. We might model our advocacy after BIPOC organizers, like those who created the #StandWithStandingRock hashtag during the Dakota Access Pipeline protests. As Jackson et al. describe in #HashtagActivism, these organizers subverted platforms’ reliance on gathering location data to undermine state surveillance: by asking people across social media platforms to change their locations to Standing Rock, they acted to protect the identities of activists who were at the protests (195). More recently, activists redirected a well-intentioned “Blackout” solidarity campaign whose flood of black squares under the #BlackLivesMatter Instagram hashtag was inadvertently interfering with the George Floyd protests by eclipsing the voices of Black organizers.
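A toy calculation makes the logic of the Standing Rock tactic concrete. The figures below are illustrative, not reported numbers: the point is that a location-based query loses nearly all of its precision once remote supporters flood it.

```python
# A toy model (our construction, not any real surveillance system) of the
# Standing Rock location flood: a query filtering accounts by self-reported
# location can no longer distinguish activists on the ground from remote
# supporters who set the same location. All numbers are made up.
activists_on_site = 200
remote_supporters = 1_000_000

matching_accounts = activists_on_site + remote_supporters
precision = activists_on_site / matching_accounts
print(f"{matching_accounts:,} accounts report Standing Rock; "
      f"only {precision:.4%} are actually there.")
# -> 1,000,200 accounts report Standing Rock; only 0.0200% are actually there.
```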
While these are larger-scale examples of reasserting digital agency, there are ways to do so on a smaller scale. We think the first step is individual awareness: being conscious of the biases present in our social media feeds and questioning why certain voices are amplified while others are suppressed. Algorithms can reproduce real-world oppressive systems within the microcosm of a social media platform, which can mean the suppression of already-marginalized voices. By thinking about what we see online from an algorithmic perspective, we can begin to see the ways algorithms may shift opinions, suppress or promote content, and disrupt the digital homeostasis, the neutrality, that these platforms purport to maintain. Social media content is often biased not solely because of people’s opinions but because of the algorithms that circulate those opinions on the web. Since all the content we receive on social media platforms carries bias shaped by who and what we connect with, it is important to be critical of popular information in our feeds — to screen not only for blatant misinformation or presumed glitches,[3] but also for the perspectives and biases it assumes of us. This is especially true when confronting easily resharable, authentic-looking plots and graphics that, as Data Feminism co-authors Catherine D’Ignazio and Lauren Klein remind us, can mislead (82–83).
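To see this feedback loop in miniature, consider the following sketch. It is entirely hypothetical (no platform works this simply), but it shows how a feed ranked purely by past engagement, given one early nudge, converges on a single viewpoint.

```python
# A hypothetical feedback loop: a feed that ranks posts by how often the
# user has engaged with each viewpoint. One early "like" is enough to
# tip the feed permanently toward that viewpoint.
from collections import Counter

pool = ["pro"] * 50 + ["anti"] * 50  # a perfectly balanced pool of posts

def build_feed(history: Counter, pool: list[str], size: int = 10) -> list[str]:
    """Show the posts whose viewpoint the user has engaged with most."""
    return sorted(pool, key=lambda view: history[view], reverse=True)[:size]

history = Counter({"pro": 1})  # a single early like
for round_number in range(3):
    feed = build_feed(history, pool)
    history.update(feed)  # users mostly engage with what they are shown
    print(f"round {round_number}: {Counter(feed)}")
# Every round prints Counter({'pro': 10}): the balanced pool never surfaces.
```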
As users sharing spaces with algorithms, we follow Rainie and Anderson’s charge to be algorithmically literate: to understand algorithms’ broader societal impacts and to demand more transparency and accountability from their creators. While we acknowledge the potential for “rhetorical exhaustion,”[4] we urge others to join us in evaluating sources, being mindful of internet surveillance, and fact-checking statements we see online. And we urge those who author and implement algorithmic processes to consult with the many populations their work may affect and to be accountable for its implications.
[1]: Algorithms are processes and sets of rules, usually designed by humans, to perform tasks and act on their behalf. They underlie computer programs often thought to be neutral, but they can inherit, and frequently perpetuate, bias. See Safiya Noble and Cathy O’Neil for key texts on algorithmic bias; see Taina Bucher for the “multiplicity” of ways the word “algorithm” is used.
[2]: See Cheney-Lippold for more on how algorithms construct identities for us.
[3]: See Benjamin; Reyman.
[4]: This term is Jonathan Bradshaw’s.
Works Cited
Beck, Estee N. “The Invisible Digital Identity: Assemblages in Digital Networks.” Computers and Composition, vol. 35, Mar. 2015, pp. 125–40.
Benjamin, Ruha. “Default Discrimination: Is the Glitch Systemic?” Race After Technology: Abolitionist Tools for the New Jim Code, Polity, 2019.
Bradshaw, Jonathan L. “Rhetorical Exhaustion & the Ethics of Amplification.” Computers and Composition, vol. 56, June 2020.
Bucher, Taina. “The Multiplicity of Algorithms.” If… Then: Algorithmic Power and Politics, Oxford University Press, 2018.
Cheney-Lippold, John. We Are Data: Algorithms and the Making of Our Digital Selves. NYU Press, 2017.
D’Ignazio, Catherine, and Lauren F. Klein. Data Feminism. The MIT Press, 2020.
Edwards, Dustin W. “Circulation Gatekeepers: Unbundling the Platform Politics of YouTube’s Content ID.” Computers and Composition, vol. 47, Mar. 2018, pp. 61–74.
Jackson, Sarah J., et al. #HashtagActivism: Networks of Race and Gender Justice. The MIT Press, 2020.
Noble, Safiya Umoja. Algorithms of Oppression: How Search Engines Reinforce Racism. NYU Press, 2018.
O’Neil, Cathy. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown, 2016.
Rainie, Lee, and Janna Anderson. “Theme 7: The Need Grows for Algorithmic Literacy, Transparency, and Oversight.” Pew Research Center, 2017.
Reyman, Jessica. “The Rhetorical Agency of Algorithms.” Theorizing Digital Rhetoric, edited by Aaron Hess and Amber Davisson, Routledge, 2017, pp. 112–25.
Shepherd, Ryan P. “Gaming Reddit’s Algorithm: r/the_donald, Amplification, and the Rhetoric of Sorting.” Computers and Composition, vol. 56, June 2020.