At a Shanghai conference, a researcher says attackers can ‘poison’ data sets through subtle tampering to critically harm artificial intelligence models. A team in China proposes a method to bolster defences against these attacks, which can cause serious damage or security breaches. — SCMP
A Google researcher has warned that attackers could disable AI systems by “poisoning” their data sets, and Chinese researchers are already working to come up with countermeasures to guard against this emerging threat.
At an AI conference in Shanghai on Friday, Google Brain research scientist Nicholas Carlini said that by manipulating just a tiny fraction of an AI system’s training data, attackers could critically compromise its functionality.
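Carlini’s findings concern large neural models, but the core idea, that a handful of corrupted training examples can disproportionately skew what a model learns, can be illustrated with a deliberately simple toy. The sketch below is purely illustrative and is not Carlini’s method: it trains a nearest-centroid classifier on one-dimensional data, then injects three mislabelled outliers (about 1.5 per cent of the training set) and shows the model’s accuracy on clean data collapsing.

```python
import random

def centroid(points):
    return sum(points) / len(points)

def train(data):
    # "Model" is just one per-class mean; predict by nearest mean.
    return {label: centroid([x for x, y in data if y == label])
            for label in (0, 1)}

def predict(means, x):
    return min(means, key=lambda label: abs(x - means[label]))

def accuracy(means, data):
    return sum(predict(means, x) == y for x, y in data) / len(data)

random.seed(0)
# Clean 1-D data: class 0 clustered near 0.0, class 1 near 10.0.
clean = [(random.gauss(0, 1), 0) for _ in range(100)] + \
        [(random.gauss(10, 1), 1) for _ in range(100)]

# Poison: just 3 extreme points (~1.5% of training data), mislabelled class 0.
# They drag the class-0 centroid far from the real class-0 cluster.
poison = [(1000.0, 0)] * 3

clean_model = train(clean)
poisoned_model = train(clean + poison)

print(accuracy(clean_model, clean))     # essentially perfect
print(accuracy(poisoned_model, clean))  # class-0 points now misclassified
```

Real poisoning attacks on deep networks are far subtler, often leaving the tampered examples visually indistinguishable from clean ones, but the mechanism is the same: a tiny, targeted fraction of the training set reshapes the learned decision boundary.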
