SAN FRANCISCO (Reuters) - Artificial intelligence group MLCommons unveiled two new benchmarks that it said can help determine how quickly top-of-the-line hardware and software can run AI applications.
Since the launch of OpenAI's ChatGPT more than two years ago, chip companies have shifted their focus toward building hardware that can efficiently run the code allowing millions of people to use AI tools. Because the underlying models must answer far more queries to power AI applications such as chatbots and search engines, MLCommons developed two new versions of its MLPerf benchmarks to gauge that speed.
