Over 40% of Internet users report experiencing abuse online, with marginalized groups experiencing the most frequent and most severe forms of abuse. I aim to detect and discourage online abusive behavior using statistical machine learning and causal inference strategies. By examining how established platforms govern themselves, I create human-centered technology that better serves the needs of online communities.
In this talk, I will use my research on Reddit to demonstrate three key phases of my work: understanding, building, and evaluating. First, I will talk about my research on understanding social norms across disparate communities, the first large-scale study of its kind. I found that racist and homophobic speech is considered unacceptable across most Reddit communities, while speech mocking religion and nationality is generally tolerated. Second, I will introduce Crossmod, a new open-source, AI-backed sociotechnical moderation system that I built. Crossmod is currently deployed in a Reddit community with over 14 million members. Third, I will describe how I evaluated the effectiveness of deplatforming using causal inference techniques. I found that banning spaces where hateful groups congregate helped reduce hate speech on Reddit.
I will conclude with future directions for my research. In addition to combating misbehavior, my vision is to promote “healthy” behavior online, create proactive solutions to prevent online misbehavior from occurring, and tackle new challenges in online governance.
Eshwar Chandrasekharan is a PhD candidate in Computer Science at Georgia Tech. His research builds a foundation for evaluating and improving approaches to online moderation and for developing new AI-backed sociotechnical systems. His work has appeared at high-impact venues such as CSCW, CHI, ACL, and Web Science. Chandrasekharan has worked with large-scale Internet platforms including Twitter, Reddit, and Facebook, and his research has informed their efforts to improve online governance. For example, he developed Crossmod, a new AI-backed moderation system that is currently deployed in an online community with over 14 million subscribers. His research led Reddit to ban many hate groups (e.g., neo-Nazis) from the platform. Steve Huffman, Reddit CEO and co-founder, also cited Chandrasekharan's work as evidence in his recent testimony before Congress. Chandrasekharan's research has received considerable press coverage, including in The New York Times, MIT Technology Review, The Verge, TechCrunch, and Motherboard.
Faculty Host: Hari Sundaram