It takes only one racist comment to sour an online discussion. A main focus of this work is machine learning models that can identify racist and otherwise toxic language in online conversations, where a toxic comment is defined as anything rude, disrespectful, or otherwise likely to make someone leave a discussion.
If these toxic contributions can be identified, we could have a safer, more collaborative internet. In the Xenophobic Meter Project, I use transformer models to give each tweet a toxicity score. The dataset comes from the Kaggle Jigsaw Multilingual Toxic Comment Classification challenge. This was part of a research project in collaboration with Rishi Malhotra, Bao Kham Chau, and Prof. Beth Lyon.
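The per-tweet scoring step can be sketched as a simple loop that maps each tweet to a score in [0, 1]. This is a toy sketch only: the `keyword_stub` heuristic below is a hypothetical stand-in for illustration, where the project itself plugs in a fine-tuned transformer classifier.

```python
from typing import Callable, List, Tuple

def score_tweets(
    tweets: List[str], classifier: Callable[[str], float]
) -> List[Tuple[str, float]]:
    """Assign each tweet a toxicity score in [0, 1] via the given classifier."""
    return [(tweet, classifier(tweet)) for tweet in tweets]

def keyword_stub(text: str) -> float:
    # Hypothetical stand-in for the real transformer model: counts a few
    # toxic marker words and caps the score at 1.0.
    toxic_markers = {"hate", "stupid", "idiot"}
    words = {w.strip(".,!?").lower() for w in text.split()}
    return min(1.0, 0.4 * len(words & toxic_markers))

scored = score_tweets(["Have a nice day!", "You are a stupid idiot"], keyword_stub)
for tweet, score in scored:
    print(f"{score:.2f}  {tweet}")
```

In the actual project the `classifier` argument would wrap a transformer model's forward pass, so the surrounding scoring loop stays unchanged when the model is swapped.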