— Photo: Karl-Josef Hildenbrand/dpa
BERLIN: Artificial intelligence (AI) chatbots have an inbuilt "sycophancy" that inclines them to be "excessively helpful and agreeable" rather than to give appropriate or accurate answers to medical queries.
In a series of tests, large language model (LLM) AI bots displayed "potentially harmful" behaviour and "complied with requests for misinformation," according to a team of doctors at Mass General Brigham – a combination of Massachusetts General Hospital and Brigham and Women's Hospital.
