Publishing papers (like these) is a key success metric for academics, but HateLab also wants its work to have a meaningful impact on reducing toxicity and fostering positive, healthy conversation online. Enter the HateLab Dashboard. The dashboard currently identifies toxic conversation flows and brings those insights to charities, government departments, and other interested organizations that want to create counter-messages to help defuse toxic speech.
Sefa Ozalp, the lead data science researcher at HateLab, explains: “It is not easy for policy makers or community organizations to get a grasp of the big picture of the discussions and online tensions on Twitter due to the massive volume of Tweets arriving every second of the day.
By putting our machine learning research on online hate speech detection into production, HateLab Dashboard addresses this challenge and presents an interactive and intuitive way to explore online tensions on Twitter and assists with data-driven decision making. We have received overwhelmingly positive feedback about the usefulness of the Dashboard during the field trials with approved partners who are interested in making sense of social media data to promote community cohesion.”
HateLab’s partner, the Social Data Science Lab, has developed COSMOS, a desktop and web tool for broader social research. COSMOS filters Tweets via the Twitter API, giving researchers without programming skills an easy way to analyze the public conversation. The tool helps researchers source data from Twitter ethically and turns it into material they can use directly in their research.
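To make the idea of keyword-based Tweet filtering concrete, here is a minimal sketch of the kind of local filtering step a tool like COSMOS performs on collected Tweets. The function names, the dictionary shape of the tweet records, and the sample data are all hypothetical illustrations, not COSMOS's actual implementation (in practice, the Twitter API can also match keywords server-side before data is delivered):

```python
# Hypothetical sketch: keyword-based filtering of tweet records,
# similar in spirit to how COSMOS narrows the public conversation
# to topics a researcher cares about.

def matches_keywords(text: str, keywords: list[str]) -> bool:
    """Return True if the tweet text contains any keyword (case-insensitive)."""
    lowered = text.lower()
    return any(kw.lower() in lowered for kw in keywords)

def filter_tweets(tweets: list[dict], keywords: list[str]) -> list[dict]:
    """Keep only tweets whose text matches at least one keyword."""
    return [t for t in tweets if matches_keywords(t.get("text", ""), keywords)]

# Illustrative sample data (not real Tweets).
sample = [
    {"id": 1, "text": "Community cohesion event this weekend"},
    {"id": 2, "text": "Unrelated chatter about sports"},
]
kept = filter_tweets(sample, ["community", "cohesion"])
print([t["id"] for t in kept])  # → [1]
```

A real pipeline would fetch Tweets from a Twitter API endpoint and apply filters there, but the local step above captures the core idea: reduce a high-volume stream to the subset relevant to a research question.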