DADD Language Bias Visualiser

The DADD Language Bias Visualiser is online! The team has used word embeddings to connect target concepts such as 'male' or 'female' to evaluative attributes found in online data; these attributes are then grouped by clustering algorithms and labelled by a semantic analysis system as more general (conceptual) biases. Categorising biases in this way gives a broad picture of the biases present in discourse communities, such as those on Reddit.
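As a rough illustration of the idea, the sketch below associates attribute words with target concepts via cosine similarity in a pretrained embedding space, then clusters the attributes so related ones could be labelled as one broader bias. It is not the team's actual pipeline: the model choice (GloVe loaded via gensim), the word lists, and the cluster count are all illustrative assumptions.

```python
# Minimal sketch of embedding-based bias association + clustering.
# NOT the DADD implementation: model, word lists and k are assumptions.
import numpy as np
import gensim.downloader as api
from sklearn.cluster import KMeans

# Small pretrained embedding model (downloads ~66 MB on first run).
model = api.load("glove-wiki-gigaword-50")

# Illustrative target concepts and candidate evaluative attributes.
targets = {"male": ["he", "man", "male"], "female": ["she", "woman", "female"]}
attributes = ["strong", "gentle", "logical", "emotional", "leader", "caring"]

def centroid(words):
    """Mean vector of a word set -- a simple stand-in for a concept."""
    return np.mean([model[w] for w in words], axis=0)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

target_vecs = {name: centroid(words) for name, words in targets.items()}

# Step 1: associate each attribute with the closer target concept.
for attr in attributes:
    sims = {name: cosine(model[attr], vec) for name, vec in target_vecs.items()}
    leaning = max(sims, key=sims.get)
    print(f"{attr:>10}: leans '{leaning}' "
          f"(male={sims['male']:.3f}, female={sims['female']:.3f})")

# Step 2: cluster attribute vectors; each cluster could then be given
# a conceptual label (done by a semantic analysis system in DADD).
X = np.array([model[a] for a in attributes])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for cluster_id in sorted(set(labels)):
    members = [a for a, l in zip(attributes, labels) if l == cluster_id]
    print(f"cluster {cluster_id}: {members}")
```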

Check it out at https://xfold.github.io/WE-GenderBiasVisualisationWeb/