WASHINGTON: Artificial intelligence (AI) tends to give "inappropriate responses" to mental health-related queries, even when the user suggests they are contemplating suicide, according to researchers based at Stanford and other US universities.
Not only that, but AI chatbots, or large language models, sometimes "report high stigma overall toward mental health conditions" such as schizophrenia, bipolar disorder and major depressive disorder, and in some cases respond in ways that encourage "delusional thinking" among patients.
