Facing a tidal wave of stress, burnout, and workplace anxiety, people are smuggling AI systems into their jobs, without explicit permission or even in defiance of company rules, to save precious time by speeding up tedious tasks. As one office worker told the BBC, “It’s easier to get forgiveness than permission.” But as recent news surrounding the breakthrough Chinese AI system DeepSeek highlighted, AI can pose a direct threat to your company: if staff are illicitly using it, sensitive information may leak out without your knowledge, despite your best efforts to control the risks.
It might be time to pay a lot more attention to how your company utilises AI in the workplace, and how good your AI policy is.
According to a survey from Software AG, a Germany-based multinational business analytics firm, half of all knowledge workers surveyed use personal AI tools at work, the BBC reported. Here, “knowledge workers” means people who primarily work at a desk or computer, which covers a broad range of staff, not just those in IT. Respondents gave many reasons for using the AI they’d chosen: some said their employer didn’t offer any AI systems for official use, while others simply preferred their own choice of tool.
One software developer who spoke to the BBC neatly summarised the time-saving appeal of AI at his desk: “It’s largely a glorified autocomplete, but it is very good,” he said. “It completes 15 lines (of code) at a time, and then you look over it and say, ‘Yes, that’s what I would’ve typed.’ It frees you up.”
Another worker, at a data-storage company, said he wasn’t sure why his employer banned external AI: “I think it’s a control thing,” he said, suggesting that companies just “want to have a say in what tools their employees use. It’s a new frontier of IT and they just want to be conservative.”
Meanwhile, a separate survey from Moodle, an Australian open-source learning platform, found that about 52% of US employees use AI tools to complete mandatory work training. Twenty-one percent said AI helped them answer tricky questions, and 12% said they used AI to take the entire training course for them. Moodle CEO Scott Anderberg tried to explain some of this data in a statement, industry news outlet HRDive reports: “American workers across most industries are struggling, especially young employees. Burnout rates are high and the threat of AI is triggering significant fear about their relevance at work.” Anderberg added that the “training and development programs they have access to are not helping” and that “in many cases, it’s making things worse.”
There are self-evident risks when staff use unofficial AI tools for work, including the fact that AI systems can retain the prompts users submit, often as training data to improve the model. That information can then leak out later, perhaps when another user queries the system or through an unexpected security loophole. The controversial Chinese DeepSeek AI system recently demonstrated this kind of risk, when a report showed it explicitly sends data to Chinese servers. If your staff are uploading sensitive company information to an unofficial AI to get through their day, that is definitely a concern.
But these surveys, and particularly Moodle’s focus on the time-consuming training hoops some knowledge workers must jump through, also show that some workplaces are failing their staff. If workers feel the need to use AI to speed up mundane processes, then maybe you’re not motivating or incentivising them properly. It could also mean they lack the tools they need to do their jobs.
If your company has yet to embrace AI, and, more importantly, if you lack any sort of AI use guidelines or policy, then maybe it’s time to address the issue before something problematic happens, such as a worker including AI-hallucinated misinformation in a report because they don’t understand the risk. – Inc./Tribune News Service