Digital Aggression: Research, Experience, and Policy
Hello everyone, I am Dr. Erika Sparby. First, thank you to Representative Warren and the University of Michigan IRWG for hosting this event and inviting me to speak. Especially as we move through this fraught election season, digital aggression is at the forefront of social justice issues that need to be addressed at multiple levels. In this brief talk, I’m going to give a quick overview of my research projects and experiences as well as talk about policy efforts that have begun at my home university.
I began studying aggressive discourses online with my dissertation, Memes and 4chan and Haters, Oh My! Rhetoric, Identity, and Online Aggression. Here, I
- examined the ways that memes perpetuate negative stereotypes against marginalized identities;
- uncovered how anonymous users on 4chan assimilate into violent groupthink and mob behaviors; and
- recognized the ways that haters disrupt fan communities on YouTube through mean comments.
I have also co-edited a collection with my friend and mentor Jessica Reyman called Digital Ethics: Rhetoric and Responsibility in Online Aggression. In this book, fourteen authors examine digital aggression from a range of perspectives and with a variety of goals, including values-based moderation practices, feminist research methodologies for hostile spaces, and ethical circulation practices. In the introduction, Jessica and I posit that productive digital communities must be proactive and act in ways that recognize the multidimensional nature of aggression:
- First, we find some responsibility with platform providers, technology developers, and media companies to design and offer more powerful tools for moderation and management of aggression, as well as harassment policies and terms of use that can be drawn on to support community members. There must be transparency in actions being taken, tools employed, and terms of use enforced. Some design decisions could benefit from including more diversity in the hiring of developers, managers, and entrepreneurs, as lack of diversity can lead to design decisions with unintended consequences for minority groups and vulnerable populations.
- Second, community leaders and content creators themselves must clearly establish and articulate the values and norms of their communities. Leaders can employ technical tools and moderation options made available to them through the platform as well as communicative practices that support values of inclusivity and productive discourse. Community leaders are uniquely positioned to establish these values through the generation of codes of ethics, community mission statements, and through direct responses to transgressions.
- Third, human moderators who participate in digital communities can consistently enforce the rules and values of those communities. They can accompany corrections of transgressions and enforcement of rules with reminders to users of the connections between rules and the values established by community leaders. Their actions and comments should aim to support a culture based on a shared value of inclusivity rather than more limited rule-following.
- Fourth, community members and participants who are not official moderators can also help to reinforce values, norms, and rules, and teach others. By distributing the activity of moderation among many participants, community leaders and moderators do not suffer the sole burden. Rather than remaining silent or “not feeding the trolls,” participants can respond to aggression with clear articulation of shared values.
This model acknowledges that an ecological response to digital aggression is required. Much of this work must come from platform designers and developers, and from social media managers and entrepreneurs, who must build tools that (1) protect vulnerable groups and (2) allow for collective and collaborative management of digital aggression by users and communities. Offering design and moderation options would allow users to collectively employ tools and tactics to establish (and reestablish, when needed) values and principles of productive digital discourse and collaboratively cultivate an inclusive community on multiple fronts.
In a forthcoming article called “Reading Mean Comments to Subvert Gendered Hate on YouTube: Toward a Spectrum of Digital Aggression Response,” I test this framework by examining how YouTube has failed its users, creating rules and policies that ostensibly protect them while doing very little to enforce those rules or provide tangible protection. I look to five young women YouTubers who, acting as both community leaders and moderators, have developed reading mean comments as a strategy for reinforcing their comments sections as positive spaces for community building, while also modeling good digital citizenship for their community members, who in turn help reinforce the norms of the space. While this tactic seems to work in the short term for addressing aggression, it takes a great deal of emotional energy to maintain long term, and I argue that YouTube needs to do more to support its content creators.
In another forthcoming piece titled “Toward an Ethic of Self-Care and Protection When Researching Digital Aggression,” I recognize the vulnerable position of digital aggression researchers and the impacts this kind of work can have on mental and physical health as well as digital safety. I share stories from my own research experiences and those of two colleagues to show how researching digital aggression can be incredibly mentally, emotionally, and spiritually draining. I provide the self-care strategies I wish I had had the forethought to consider when I decided to spend hours upon hours a day in these spaces, and I urge researchers to intentionally build a self-care plan into their research methodologies.
I also share stories from when the hostile communities we research discovered our publications. Users on 4chan found my article “Digital Social Media and Aggression: Memetic Rhetoric in 4chan’s Collective Identity” shortly after it was published in late 2017. I had intentionally chosen a publication venue behind a paywall, knowing it would be more difficult for aggressors to find. But a well-meaning colleague at another university assigned my article in his graduate digital rhetoric course and linked the PDF on his publicly accessible course website. Once 4chan got their hands on the abstract, they attempted to discredit it and insult me, despite clearly not having read the whole piece. At one point someone briefly mentioned doxing me, or making my personal information public as an intimidation tactic. It was a tense few days before the thread fell inactive and slipped from the main board. Drawing on these experiences, I provide takeaways for protecting our digital safety post-publication, including locking down digital identities, considering the publication venue, notifying employers, adding “do not share” disclaimers, alerting local authorities, and considering the citational practices of fellow researchers.
Finally, I have recently begun collaborating with my college dean and a faculty member in another department to strategize how we can get a university policy in place to protect us as teacher-scholars from digital aggression attacks, both as a result of our research and from students who take issue with the content we teach, as in Turning Point USA’s vendetta against professors it deems contributors to “liberal indoctrination.” We are looking to successful policies and resources elsewhere, such as those offered by the University of Illinois and the University of Minnesota. Both universities have pages on their main websites dedicated to helping faculty and staff navigate instances of aggression they may face, with specific strategies and phone numbers faculty can use to begin protecting themselves. The University of Minnesota also includes a statement clearly positioning itself to defend academic freedom. At Illinois State University, we want to craft a similar guide and also develop a university policy that makes it clear that ISU will protect faculty and staff who face aggressive attacks. Oftentimes our jobs are at stake when these attacks happen, so a policy to protect us is key. These efforts are in the very early stages, but I’m looking forward to seeing how today’s conversation can help us work out ways to move forward with a solid policy to protect our faculty and, I hope, to create a model for other universities beyond ours.
Thank you again for listening, and I look forward to a great discussion.
References
- Reyman, Jessica, & Sparby, Erika M. (2020). Introduction: Toward an ethic of responsibility in digital aggression. In Jessica Reyman & Erika M. Sparby (Eds.), Digital ethics: Rhetoric and responsibility in online aggression (pp. 1-15). New York: Routledge.
- Sparby, Erika M. (2017). Digital social media and aggression: Memetic rhetoric in 4chan’s collective identity. Computers and Composition, 45, 85-97.
- Sparby, Erika M. (2017). Memes and 4chan and haters, oh my! Rhetoric, identity, and online aggression. Dissertation.
- Sparby, Erika M. (Forthcoming). Reading mean comments to subvert gendered hate on YouTube: Toward a spectrum of digital aggression response. Article accepted for enculturation.
- Sparby, Erika M. (Forthcoming). Toward an ethic of self-care and protection when researching digital aggression. Chapter accepted for Crystal VanKooten & Victor del Hierro (Eds.), Methods and methodologies for research in digital writing and rhetoric. Fort Collins, CO: WAC Clearinghouse.
- University of Illinois. (2020). Trolling attacks on scholars—Executive officer action.
- University of Minnesota. (2020). Resources for responding to online harassment.