
Research recently showed that using AI may affect workers' long-term critical thinking abilities. Another alarm bell raises worries that overreliance on AI to help write code will also have a long-term impact on the workplace. — Pixabay
We know that AI can speed up regular work tasks, boost office efficiency and even help you understand your business better, right? And that it’s a totally benign, safe and useful tool that can only transform the workplace for the better, er, right?
Well, actually, nope. There are many concerns about applying AI at work, from staff who worry they’ll be replaced, to a troubling Microsoft report that says some knowledge workers are already relying on this innovative tech so much that their “problem-solving skills may decline as a result.”
Chalk up another mark in the “AI is harmful” column, following fresh reports that young coders may already rely too much on AI to actually do their jobs.
This comes via a report at Futurism.com, highlighting a post from founder and experienced developer Namanyay Goel in which the tech veteran said many young coders he’s spoken to lack the fundamental knowledge about why the code they’re writing works.
“Every junior dev I talk to has Copilot or Claude or GPT running 24/7. They’re shipping code faster than ever,” Goel admits, in news that’ll please some of the leading AI brands. But if you ask these developers why their code works “that way instead of another way? Crickets.” And if you ask about tricky “edge cases” in a particularly thorny piece of coding magic? “Blank stares,” Goel says.
According to this expert, who, per his website, has been learning to code the hard way since he was 13, AI is harming how people understand computer code: “The foundational knowledge that used to come from struggling through problems is just… missing,” he alleges.
You could dismiss this as sour grapes about an incredible breakthrough tool that is upending the entire paradigm of writing code for computers – so much so that Meta’s CEO Mark Zuckerberg recently promised he’d dump mid-level human coders and replace them with AI. But Goel anticipated this: “I’m not trying to be that guy who complains about ‘kids these days,’” he said, adding “I use AI tools daily. I’m literally building one. But we need to be honest about what we’re trading away for this convenience.”
Goel’s last point distils the conundrum of balancing the benefits and risks of AI into a single sentence. It also answers questions like “why should we care who writes the code or why it works, as long as it just works?” The reason: if some complex issue affects the way your computer systems work in the future, a coder who has relied too heavily on AI may not have the fundamental understanding needed to dig into the causes and pull the fix off.
There are bigger implications here, of course, beyond using AI to code (which is, admittedly, a contentious issue in the first place, since some critics argue AI really can’t “invent” good code). As AI gets more sophisticated, small companies may be tempted to use it to fill skill gaps in their workforce instead of hiring a human expert or paying a third-party contractor. Your team lacks a finance whiz? No worries: AI can do business analysis. Your marketing team doesn’t have a video expert? No worries: just use an AI tool.
In the short term, this may create issues if something goes wrong: say, your AI financial analysis recommends what turns out to be exactly the wrong move to boost revenues, or your swanky new promo video accidentally violates someone else’s intellectual property rights. In the longer term, this sort of problem could become much more serious: after five years of refining your company’s core computer system with the increasing help of AI, will any of your developers be able to understand how it actually works?
This might be a wakeup call for business owners and managers to consider the value of the subject matter experts already on staff, and to make sure that any young Gen Z recruits have a grasp of the fundamental theory behind their jobs, as well as being good at speedily finessing useful answers from an AI chatbot. – Inc./Tribune News Service