Don’t expect quick fixes in ‘red-teaming’ of AI models. Security was an afterthought


Current AI models are simply too unwieldy, brittle and malleable, academic and corporate research shows. Security was an afterthought in their training as data scientists amassed breathtakingly complex collections of images and text. They are prone to racial and cultural biases, and easily manipulated. — AP

BOSTON: White House officials concerned by AI chatbots’ potential for societal harm and the Silicon Valley powerhouses rushing them to market are heavily invested in a three-day competition that ended Sunday (Aug 13) at the DefCon hacker convention in Las Vegas.

Some 2,200 competitors tapped away on laptops, seeking to expose flaws in eight leading large language models representative of technology's next big thing. But don't expect quick results from this first-ever independent "red-teaming" of multiple models.



