1. Cross-disciplinary understanding of discrimination
Attesting and addressing digital discrimination, and remedying its harms, is a problem that must be faced from a cross-disciplinary perspective, encompassing the technical, legal and social dimensions of the problem. In this stream, we study the relationship between these dimensions and how they can be combined to better understand discrimination.
- Natalia Criado, Jose M. Such. Digital Discrimination. Algorithmic Regulation. Oxford University Press (2019)
- Tom van Nuenen, Xavier Ferrer, Jose M. Such, Mark Coté. Transparency for whom? Assessing discriminatory AI. Computer, vol. 53, no. 11, pp. 36–44, 2020.
- Xavier Ferrer, Tom van Nuenen, Jose M. Such, Mark Coté, Natalia Criado. Bias and Discrimination in AI: a cross-disciplinary perspective. IEEE Technology and Society Magazine (2020) (in press).
2. Data-driven discovery of biases in ML-based NLP
Language carries implicit human biases, functioning both as a reflection and a perpetuation of the stereotypes that people carry with them. ML-based NLP methods such as word embeddings have been shown to learn such language biases with striking accuracy, and this capability has been successfully exploited as a tool to quantify and study human biases. Here we develop a data-driven approach to automatically discover, and help interpret, conceptual biases encoded in the language of online communities.
- Xavier Ferrer, Jose M. Such, Natalia Criado. Attesting Biases and Discrimination using Language Semantics. AAMAS Responsible Artificial Intelligence Agents. (2019)
- Xavier Ferrer, Tom van Nuenen, Jose M. Such, Natalia Criado. Discovering and Categorising Language Biases in Reddit. International AAAI Conference on Web and Social Media (ICWSM 2021) (in press). Github
- Xavier Ferrer, Tom van Nuenen, Jose M. Such, Natalia Criado. Discovering and Interpreting Conceptual Biases in Online Communities. Arxiv preprint (2020).
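As an illustrative sketch only (not the exact method from the papers above), the core idea of quantifying bias with word embeddings can be shown by comparing how close a target word sits to two sets of attribute words in vector space. The toy 3-dimensional vectors below are invented for the example; a real analysis would use embeddings trained on community text.

```python
import numpy as np

# Toy 3-d word vectors, invented for illustration only. A real study would
# train embeddings (e.g. word2vec) on the text of an online community.
emb = {
    "engineer": np.array([0.9, 0.1, 0.2]),
    "nurse":    np.array([0.1, 0.9, 0.3]),
    "he":       np.array([1.0, 0.0, 0.1]),
    "she":      np.array([0.0, 1.0, 0.1]),
}

def cos(a, b):
    # Cosine similarity between two vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def bias(word, attr_a, attr_b):
    """Mean similarity of `word` to attribute set A minus attribute set B.
    A positive value means the word leans towards A, negative towards B."""
    sim_a = np.mean([cos(emb[word], emb[a]) for a in attr_a])
    sim_b = np.mean([cos(emb[word], emb[b]) for b in attr_b])
    return sim_a - sim_b

print(bias("engineer", ["he"], ["she"]))  # positive: closer to "he"
print(bias("nurse", ["he"], ["she"]))     # negative: closer to "she"
```

With the toy vectors, "engineer" scores closer to the male attribute set and "nurse" closer to the female one, mirroring the kind of occupational stereotype that embedding-based bias measures surface at scale.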
3. Study of discrimination in online social media
Using the data-driven bias-discovery approach described above, we explore biases in social media and online communities, and present the results in an interactive, visually engaging website.
4. Automated assessment of discrimination based on norms
Biases and discrimination in models and datasets pose a significant challenge to the adoption of ML by companies and public-sector organisations, despite ML's potential to significantly reduce costs and support more efficient decisions. Here, we use norms as an abstraction to represent situations that may lead to digital discrimination, allowing non-technical users to benefit from ML safely. In particular, we formalise non-discrimination norms in the context of ML systems and propose an algorithm to check whether an ML system violates these norms.
- Natalia Criado, Xavier Ferrer, Jose M. Such. A Normative Approach to Attest Digital Discrimination. Advancing Towards the SDGs: Artificial Intelligence for a Fair, Just and Equitable World, Workshop of the 24th European Conference on Artificial Intelligence (ECAI 2020). Github
- Natalia Criado, Xavier Ferrer, Jose M. Such. Is my program sexist? Using Norms to Attest Digital Discrimination. IEEE Technology and Society Magazine (2020) (in press).
- Natalia Criado, Xavier Ferrer, Jose M. Such. Attesting Digital Discrimination Using Norms. International Journal of Interactive Multimedia and Artificial Intelligence IJIMAI (2021) (in press).
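To give a flavour of the idea, the hypothetical sketch below encodes one possible non-discrimination norm (a disparate-impact style condition, in the spirit of the "80% rule") and checks a model's decisions against it. The actual norm formalisation and checking algorithm in the papers above differ; all names and data here are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Norm:
    """A non-discrimination norm (hypothetical formalisation): the
    positive-decision rate of every protected group must be at least
    `threshold` times that of the best-treated group."""
    name: str
    protected_attr: str
    threshold: float  # e.g. 0.8 for the "80% rule"

def violates(norm, records, predict):
    """Return True if the model `predict` violates `norm` on `records`."""
    rates = {}
    for g in {r[norm.protected_attr] for r in records}:
        group = [r for r in records if r[norm.protected_attr] == g]
        rates[g] = sum(predict(r) for r in group) / len(group)
    best = max(rates.values())
    return any(rate < norm.threshold * best for rate in rates.values())

# Toy data and a deliberately biased decision rule.
people = [{"gender": "f", "score": s} for s in (1, 2, 3, 4)] + \
         [{"gender": "m", "score": s} for s in (3, 4, 5, 6)]
model = lambda r: 1 if r["score"] >= 4 else 0

norm = Norm("no gender discrimination", "gender", 0.8)
print(violates(norm, people, model))  # True: 25% vs 75% positive rate
```

Framing the check as a norm, rather than as a statistical test buried in code, is what lets non-technical users state the conditions an ML system must satisfy and have them attested automatically.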