About

Digital discrimination can be the result of algorithmic bias, i.e., the way a particular algorithm has been designed creates discriminatory outcomes, but it can also occur when unbiased algorithms are fed or trained with biased data. Research on so-called fair algorithms has tackled biased input data, demonstrated learned biases, and measured the relative influence of data attributes, which makes it possible to quantify and limit the extent of bias introduced by an algorithm or dataset. But how much bias is too much? That is, what is legal, ethical, and/or socially acceptable? And, even more importantly, how do we translate those legal, ethical, or social expectations into automated methods that attest digital discrimination in datasets?
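To make the idea of quantifying bias concrete, the sketch below computes two common group-fairness measures on a toy decision dataset: the demographic parity difference and the disparate impact ratio. This is purely illustrative and not part of DADD's methods; the toy data and the 0.8 threshold (the US EEOC "four-fifths rule", one example of a legal benchmark) are assumptions used for demonstration.

```python
# Minimal sketch of quantifying bias with two common group-fairness measures.
# Illustrative only; toy data and the 0.8 threshold are assumptions.
from typing import Sequence


def selection_rate(outcomes: Sequence[int], groups: Sequence[str], group: str) -> float:
    """Fraction of favourable outcomes (1 = favourable) received by `group`."""
    selected = [o for o, g in zip(outcomes, groups) if g == group]
    return sum(selected) / len(selected)


def demographic_parity_difference(outcomes, groups, protected, reference) -> float:
    """Absolute gap in favourable-outcome rates between two groups."""
    return abs(selection_rate(outcomes, groups, protected)
               - selection_rate(outcomes, groups, reference))


def disparate_impact_ratio(outcomes, groups, protected, reference) -> float:
    """Ratio of favourable-outcome rates; values below ~0.8 are often flagged."""
    return (selection_rate(outcomes, groups, protected)
            / selection_rate(outcomes, groups, reference))


if __name__ == "__main__":
    # Toy loan decisions: 1 = approved, 0 = rejected.
    outcomes = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]
    groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

    dpd = demographic_parity_difference(outcomes, groups, "B", "A")
    dir_ = disparate_impact_ratio(outcomes, groups, "B", "A")
    print(f"Demographic parity difference: {dpd:.2f}")
    print(f"Disparate impact ratio:        {dir_:.2f} (four-fifths rule threshold: 0.80)")
```

Even with measures like these in hand, the threshold that separates acceptable from unacceptable disparity is a legal, ethical, and social question rather than a purely technical one, which is precisely the gap DADD aims to address.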

In digital discrimination, users are treated unfairly, unethically, or simply differently on the basis of their personal data. Examples include low-income neighbourhoods being targeted with high-interest loans; women being undervalued by 21% in online marketing; and online ads suggestive of arrest records appearing more often alongside searches for black-sounding names than white-sounding names. Digital discrimination very often reproduces existing discrimination in the offline world, either by inheriting the biases of prior decision makers or by reflecting widespread prejudices in society. It may also have an even more perverse effect: by causing less favourable treatment of historically disadvantaged groups, it can exacerbate existing inequalities and suggest that those groups actually deserve such treatment. As more and more tasks are delegated to computers, mobile devices, and autonomous systems, digital discrimination is becoming an increasingly serious problem.

DADD (Discovering and Attesting Digital Discrimination) is a novel cross-disciplinary collaboration that addresses these open research questions through a continuously running co-creation process involving academic partners (Computer Science, Digital Humanities, Law and Ethics), non-academic partners (Google, AI Club), and the general public, including both technical and non-technical users. DADD will design ground-breaking methods to certify whether or not datasets and algorithms discriminate by automatically verifying computational non-discrimination norms. These norms will in turn be formalised along socio-economic, cultural, legal, and ethical dimensions, creating the new transdisciplinary field of digital discrimination certification.
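As a purely hypothetical illustration of what a machine-checkable non-discrimination norm might look like, the sketch below expresses a norm as a predicate over dataset statistics and checks a dataset against it. DADD's actual formalisation is not described here; the norm, the tolerance (epsilon), and the `attest` helper are all assumptions introduced for this example.

```python
# Hypothetical sketch: a non-discrimination norm as a machine-checkable predicate.
# Not DADD's formalisation; the norm, epsilon, and helper names are assumptions.
from dataclasses import dataclass
from typing import Callable, Sequence


@dataclass
class Norm:
    name: str
    check: Callable[[Sequence[int], Sequence[str]], bool]


def statistical_parity_norm(protected: str, reference: str, epsilon: float) -> Norm:
    """Norm: favourable-outcome rates of the two groups may differ by at most epsilon."""
    def check(outcomes, groups):
        rate = lambda g: (sum(o for o, gr in zip(outcomes, groups) if gr == g)
                          / sum(1 for gr in groups if gr == g))
        return abs(rate(protected) - rate(reference)) <= epsilon
    return Norm(f"statistical parity ({protected} vs {reference}, eps={epsilon})", check)


def attest(outcomes, groups, norms: Sequence[Norm]) -> None:
    """Report which norms the dataset satisfies or violates."""
    for norm in norms:
        verdict = "satisfied" if norm.check(outcomes, groups) else "VIOLATED"
        print(f"{norm.name}: {verdict}")


if __name__ == "__main__":
    outcomes = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]   # toy decisions, 1 = favourable
    groups = ["A"] * 5 + ["B"] * 5
    attest(outcomes, groups, [statistical_parity_norm("B", "A", epsilon=0.1)])
```

In practice, the hard part is not running such a check but deciding which norms apply and how tight the tolerances should be, which is why DADD grounds the formalisation in socio-economic, cultural, legal, and ethical analysis.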