BERT-for-classifying-toxicity

It takes only one racist comment to sour an online discussion. A main area of focus is machine-learning models that can identify racism in online conversations, where racism is defined as anything rude, disrespectful, or otherwise likely to make someone leave a discussion.

If these toxic contributions can be identified, we could have a safer, more collaborative internet. In the Xenophobic Meter Project, I use transformers to give each tweet a toxicity score. The dataset is taken from the Kaggle Jigsaw Multilingual Toxic Comment Classification challenge. This was part of a research project in collaboration with Rishi Malhotra, Bao Kham Chau, and Prof. Beth Lyon.
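The scoring step described above — turning a classifier's raw output for a tweet into a toxicity score — can be sketched as follows. This is a minimal illustration in plain Python, not code from this repository: it assumes the fine-tuned BERT model emits a single logit per tweet, which is squashed through a sigmoid into a score in [0, 1] and compared against a threshold (the function names and the 0.5 threshold are illustrative choices).

```python
import math

def toxicity_score(logit: float) -> float:
    """Map a classifier's raw logit to a toxicity score in [0, 1] via the sigmoid."""
    return 1.0 / (1.0 + math.exp(-logit))

def flag_toxic(logit: float, threshold: float = 0.5) -> bool:
    """Flag a tweet as toxic when its score reaches the (illustrative) threshold."""
    return toxicity_score(logit) >= threshold

# Example: a strongly positive logit yields a score near 1 and gets flagged.
score = toxicity_score(3.0)
flagged = flag_toxic(3.0)
```

In practice the logit would come from a sequence-classification head on top of BERT; the threshold can be tuned on validation data to trade precision against recall.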

*(Image: xenophobia)*
