Summer 2019
What It Takes to Control Societal Bias in Natural Language Processing
Monday, July 8th, 2019, 10:15 am–11:00 am
Speaker: Kai-Wei Chang (UCLA)
Natural language processing techniques play an important role in our daily lives. Although these methods are successful in a variety of applications, they risk exploiting and reinforcing societal biases (e.g., gender bias) present in the underlying data. In this talk, I will describe a collection of results that quantify and control implicit societal biases across a wide spectrum of language processing tasks, including word embeddings, coreference resolution, and visual semantic role labeling. These results enable greater control over NLP systems, helping make them socially responsible and accountable.
| Attachment | Size |
|---|---|
| What It Takes to Control Societal Bias in Natural Language Processing | 39.39 MB |