A Google researcher has warned that attackers could disable AI systems by “poisoning” their data sets, and Chinese researchers are already developing countermeasures against this emerging threat.
At an AI conference in Shanghai on Friday, Google Brain research scientist Nicholas Carlini said that by manipulating just a tiny fraction of an AI system’s training data, attackers could critically compromise its functionality.
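The attack Carlini describes can be illustrated with a toy experiment. The sketch below (a hypothetical illustration, not Carlini's actual setup) poisons 1% of a training set for a simple logistic-regression classifier by planting a "trigger" value in an otherwise-unused feature and flipping those labels. The model still scores well on clean inputs, yet any input carrying the trigger is steered toward the attacker's chosen class.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary task: 3 features; the true label depends only on the
# first two, and the third feature is normally zero.
n = 1000
X = rng.normal(size=(n, 3))
X[:, 2] = 0.0
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Poison just 1% of the training set: plant a trigger value in the
# unused third feature and force the label to class 0.
k = n // 100
idx = rng.choice(n, size=k, replace=False)
Xp, yp = X.copy(), y.copy()
Xp[idx, 2] = 5.0
yp[idx] = 0.0

def train(X, y, steps=2000, lr=0.1):
    """Plain gradient descent on the logistic loss."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = p - y
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

def predict(X, w, b):
    return ((X @ w + b) > 0).astype(float)

w, b = train(Xp, yp)

# Clean test accuracy remains high...
Xt = rng.normal(size=(200, 3))
Xt[:, 2] = 0.0
yt = (Xt[:, 0] + Xt[:, 1] > 0).astype(float)
clean_acc = (predict(Xt, w, b) == yt).mean()

# ...but adding the trigger pushes class-1 inputs toward class 0.
Xtrig = Xt.copy()
Xtrig[:, 2] = 5.0
trigger_flip_rate = (predict(Xtrig, w, b)[yt == 1] == 0).mean()
```

Because the clean accuracy barely moves, ordinary validation would not reveal that the model has learned the attacker's trigger, which is what makes poisoning attacks at such small fractions difficult to detect.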
