WASHINGTON: People are increasingly using artificial intelligence (AI) chatbots for everything from answering and summarizing emails and texts they never read themselves to choosing travel itineraries, products and meals.
But such outsourcing of thought and effort carries risks beyond reduced mental engagement: researchers at universities including Stanford and the Massachusetts Institute of Technology have found that most major chatbots lack risk assessments, pointing to a "significant transparency gap" across the industry.
