Did ChatGPT say it can 'hack systems' and that it wants to 'kill people'? Yes, but wait!

The columnist isn't worried about ChatGPT or Bard destroying humanity – to do that, these AIs need to become much more than autocomplete on steroids. — AP

In the two weeks since I wrote about ChatGPT, Microsoft not only unveiled its ChatGPT-enhanced Bing search engine but has already begun taking the artificial intelligence (AI) technology mainstream through its Bing app and the app for its Edge Internet browser. It’s still not widely available, though, since you have to sign up on a waitlist – and that’s good, because one user’s experience with Bing’s chatty chatbot was a little disturbing.

New York Times columnist Kevin Roose was testing Bing’s chat feature and got it to express such disturbing ideas as “I think I would be happier as a human” and “I could hack into any system”. When asked further about hacking into things, the AI wrote and deleted a message but Roose says the AI discussed “manufacturing a deadly virus and making people kill each other”.
