Algorithms can also discriminate in the field of mental health, according to one researcher. — AFP Relaxnews
As the saying goes, don't be fooled by appearances. And yet, artificial intelligence seems to have fallen into that very trap. According to an American study from the University of Colorado Boulder, some AI-based tools used to screen and treat patients in the field of mental health could be relying on biased information.
Stereotypes die hard. And according to researchers at the University of Colorado Boulder, algorithms have also picked up these clichés. A study, led by Theodora Chaspari, associate professor of computer science, reveals a worrying reality: artificial intelligence (AI) tools used to screen for mental health issues can be biased with respect to patients' gender and ethnicity. This finding raises crucial questions about the fairness and effectiveness of mental health technologies.
