Abstract:
Face detection and recognition algorithms have been widely adopted in a variety of applications, such as social networks that automatically detect and recognize every person present in published images. However, with the growing adoption of Artificial Intelligence (AI) algorithms in general, questions about the existence of bias began to arise. In many situations, biases were found that affected historically oppressed minorities. For example, racial bias was identified in many facial recognition systems used by American police, which led some states to suspend the use of this technology, some companies, such as IBM, to discontinue its development, and some researchers to ask their colleagues to stop working on these systems because of their impact on people of different races and ethnicities. This problem motivates the study and evaluation of possible bias in an AI-based fraud detection system used by the public transportation system of Salvador (Brazil). Since Salvador is the Brazilian city with the highest percentage of black people, any error can affect a significant number of users, leading to a high number of false positives. In previous studies developed by the research group, statistical tests were performed to verify whether there is a correlation between the error rate and race and gender. The results indicated that this correlation exists, that is, the face detection error rate is higher for black or brown users and for women. From these results, a main question motivates the development of this project: is there, in fact, a causal relationship between race and the detection error rate? To evaluate this question, a causal model was developed to analyze the influence of skin color on the face detection system used in Salvador's public transportation.