
If you absolutely needed a job, you know, to feed your starving family, would you use AI to cheat during the job interview?
Because you can, and I’ll get to that in a minute.
But first, I’m going to very simply explain the bubbling AI-for-Cheating argument, and then unabashedly take the “pro-cheating” side of it – for a reason you might not be expecting.
So think of your answer to that ethical question, because at the end of this column that answer might change.
Using tech to cheat the system
See, when I first started working with automation and AI some 15 years ago, I knew there would come a day when people would use AI to “cheat the system.”
Because, let’s all be honest here, that’s exactly what AI is for.
What I just said may sound controversial, but it’s actually true of any kind of technological evolution. You use tech because it gives you an edge. If it doesn’t, it’s terrible tech. This is true whether the tech is a workout-tracking app, stock market analysis software, or an LLM trained to give the user the best answers to job interview questions.
But that doesn’t stop everyone, including me, from recoiling at the lapse of “ethics” in AI. And of course, some of that concern is more than warranted.
But first we need to consider why those ethics are being tested in the first place.
Columbia suspends a student for “cheating”
I’m probably not the first to introduce you to the AI-for-Cheating guys. They’ve had a run lately.
A 21-year-old student at Columbia got suspended for developing an app that he used to cheat during job interviews. He says he landed at least one internship using the app, but didn’t accept the job offer.
Instead, the student raised US$5.3mil (RM22.31mil) from legit VCs by selling, with a straight face, cheating as a service (CaaS?). His startup “offers its users the chance to ‘cheat’ on things like exams, sales calls, and job interviews thanks to a hidden in-browser window that can’t be viewed by the interviewer or test giver.”
The natural initial reaction to this, including mine at first, is the clutching of ethical pearls. It should be noted that a reporter for The Verge tried the app in real time with the founder and it didn’t work well, so the reporter tore it to shreds.
So yeah, the app is broken and the founder has been bold-faced about “cheating all the time everywhere.” So, you know, let’s get him.
But I think that misses the point. Entirely.
The dad in me says, “That’s not right!” The human in me says, “That’s not fair!” The entrepreneur in me says, “Why is this even possible?”
Is exploiting a weakness ethical?
You don’t need to answer the job interview hypothetical I laid out above, because the ethics of it are moot. If someone would cheat to feed their starving family, why wouldn’t any cheater cheat to land a US$200K (RM841,900) job instead of a US$100K (RM420,950) job? Or an “A” grade instead of a “B”?
And which scenario do you think is more likely?
The problem isn’t that people can all of a sudden cheat on tests. The problem is the laziness in the measurement of merit that opens the door wide to predisposed cheaters.
In other words, there’s a demand for AI cheating tools because they work.
That’s not AI’s fault
When I was in college, there was a course taught by a professor who allowed the students to use their books during the exam. Open book. No big deal, right?
I did not get this professor for this class. But I got one of “his books.”
See, this professor got lazy, and eventually started using the same pool of questions for his exams. Not long after, the students figured this out and started copying the questions and answers straight into the textbooks. Those textbooks were then sold used and bought used, and a new generation of students came into his class on the first day with the answers to all his exams neatly written out in the textbooks they had just purchased.
Is coasting ethical?
We all use technology to gain advantages at the margins by exploiting weaknesses in the system. All of us. So let me bring the college test problem back to 2025.
What if, today, that same professor was using AI to give his lectures, using AI to generate new and unique questions for his exams, and using AI to grade the exams?
Is that ethical? Is it even teaching?
Before you answer, let me assure you that the vast majority of college kids have been using AI for help with tests and papers for years now. An MSN article discussing the same AI-for-Cheating guys also mentions a survey of 1,000 college students, which found that as far back as January 2023 – over two years ago – 90% of them were already using ChatGPT for help with homework assignments.
Do you think that number has gone down? Or up?
Oh, also from the MSN article, “Still, while professors may think they are good at detecting AI-generated writing, studies have found they’re actually not.”
New ethical game: Are you going to be the only student in the class not using AI on homework, tests, and papers?
Necessary evil.
Technology is a forcing function
I got a “B” in that professor’s class. That’s not important for any other reason than – remember – I had the magic cheat book, which didn’t help me at all because my professor didn’t allow open-book exams. I also learned a ton in that class and enjoyed it.
And yes, in 2025, my non-lazy professor would also be screwed, because there is no level of digital or even physical precaution that isn’t going to eventually be susceptible to some form of mass-adopted digital cheating.
I’m not here to solve that. I’m not here to eliminate cheating. I’m here to again point out that the more we lazily lean into technology without considering consequences, the more we realise all exams will eventually have to be face-to-face, one-on-one, oral, in a clean room, with no devices or eyeglasses, and with your hands clearly visible.
That same exact scenario happened in the movie Back to School, which came out in 1985. For you kids, let me assure you: there was no AI back then, there was just Rodney Dangerfield.
AI is here. So we can do the clean room thing, or we could just measure merit another way.
In college, this is difficult, and I empathise. In business, whether it’s interviews or sales calls or leadership meetings, we need to realise that AI becomes a cheating tool first and most quickly for those lazy processes where it can exploit weaknesses.
Eliminate those lazy processes and weaknesses and you eliminate the demand for cheating.
The other day, a friend I hadn’t talked to in a while asked me if I had started using AI in my writing, and now we’re not friends anymore. – Inc./Tribune News Service