Our new article “A Normative Approach to Attest Digital Discrimination” has been accepted at the “Advancing Towards the SDGs: Artificial Intelligence for a Fair, Just and Equitable World” workshop (AI4EQ) of the 24th European Conference on Artificial Intelligence (ECAI’20)!
In the paper, we formalise non-discrimination norms in the context of ML systems and propose an algorithm to check whether ML systems violate these norms. The code is publicly available here.
Abstract. Digital discrimination is a form of discrimination whereby users are automatically treated unfairly, unethically or just differently based on their personal data by a machine learning (ML) system. Examples of digital discrimination include low-income neighbourhoods being targeted with high-interest loans or low credit scores, and women being undervalued by 21% in online marketing. Recently, different techniques and tools have been proposed to detect biases that may lead to digital discrimination. These tools often require technical expertise both to run and to interpret their results. To allow non-technical users to benefit from ML, simpler notions and concepts to represent and reason about digital discrimination are needed. In this paper, we use norms as an abstraction to represent different situations that may lead to digital discrimination. In particular, we formalise non-discrimination norms in the context of ML systems and propose an algorithm to check whether ML systems violate these norms.
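To give a flavour of what checking a non-discrimination norm against an ML system's outputs can look like, here is a minimal, hypothetical sketch (not the algorithm from the paper). It operationalises one possible norm, "groups defined by a protected attribute must not be treated differently", as a demographic-parity check: the rates of positive predictions across groups should differ by at most a tolerance `epsilon`. The function name, the parity criterion, and the threshold are all illustrative assumptions.

```python
# Hedged sketch of a norm-violation check: NOT the paper's algorithm.
# Assumed norm: positive-prediction rates across groups defined by a
# protected attribute should differ by at most a tolerance epsilon
# (demographic parity). All names and thresholds are illustrative.

def violates_parity_norm(predictions, groups, epsilon=0.1):
    """Return True if the gap between per-group positive-prediction
    rates exceeds epsilon, i.e. the (assumed) norm is violated."""
    # Tally (total, positives) per group.
    tallies = {}
    for pred, group in zip(predictions, groups):
        total, positives = tallies.get(group, (0, 0))
        tallies[group] = (total + 1, positives + (1 if pred else 0))
    # Positive-prediction rate for each group.
    rates = [positives / total for total, positives in tallies.values()]
    return max(rates) - min(rates) > epsilon
```

For example, if group “a” receives a positive outcome 75% of the time and group “b” only 25% of the time, the gap (0.5) exceeds the tolerance and the check reports a violation. A real attestation procedure would of course need a carefully chosen fairness criterion and statistical safeguards, which is precisely the kind of reasoning the paper formalises with norms.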