The rise of Artificial Intelligence is embedding systemic racism in the predictive algorithms used by dozens of applications and technological systems. AlgoRace is an initiative that fights this racist use of AI.

DAR Conferences at the Canòdrom. License: all rights reserved.

Who or what is AI used for? Is it really indispensable? Who benefits from it? What environmental cost does it entail? Does it contribute to building more participatory and democratic societies, or does it prop up existing hierarchies? Who has access to it? Who participates in the debate? AlgoRace seeks to answer these questions and to analyze the applications of technological tools and algorithms in the Spanish state in order to expose and prevent their racial bias.

The initiative also aims to provide tools to generate "a broad debate on the impact of the use of AI on racialized populations". The project team is made up of six members, three of them racialized people and three white, who are specialists in AI, anti-racism, anti-colonialism and communication.

Through dialogue with racialized populations, AlgoRace is running several campaigns to raise awareness of how Artificial Intelligence, algorithms and big data sets "encourage the discrimination of racialized people, reproducing racial oppression". The final objective is to deliver recommendations to institutions on how to combat the racial bias of new technologies.

At the end of May 2022, together with the Algorights collective, they organized the #JornadasDAR: Democracy, Algorithms and Resistances conference at the Canòdrom Ateneu d'Innovació Democràtica in Barcelona. The conference reflected on how to make artificial intelligence more democratic, decolonial and respectful of human rights.

In Barcelona, the Municipal Strategy for the Ethical Promotion of Artificial Intelligence, developed within the framework of the Eurocities Digital Forum, is working on a common algorithm registration model that guarantees the appropriate use of data.