Vibe coding is powerful, but anyone using it should remember that it can also be risky. — Photo by Rahul Mishra on Unsplash
Vibe coding is one of the hottest trends of 2025. While the meaning has shifted a little from OpenAI alum Andrej Karpathy’s original definition, you can think of it as building apps with LLM-powered tools without really looking at the code they generate or understanding what it does.
As an approach, vibe coding allows both non-technical founders and indie hackers to create functioning tools, and experienced coders to work a lot faster. Vibe coding is distinct from using AI as part of a typical developer workflow, which is also booming.
In March, Y Combinator CEO Garry Tan told CNBC that for a quarter of the startups in the accelerator’s current batch, 95% of their code was written by AI.
He added, “What that means for founders is that you don’t need a team of 50 or 100 engineers.” Statements like these give you an idea of the volume of AI-generated code that is out there – especially with newer apps and services.
Some tools, like Cursor and Replit, are AI-enabled coding apps, while others, like Lovable, are purely designed for vibe coding. Regardless of which tools you use, there are risks inherent in using AI-generated code without fully understanding the implications of what it does.
“AI is like that junior developer that you hire where they understand the concepts and their rule of thumb is to make something work,” explains Dahvid Schloss, CEO of cybersecurity firm Emulated Criminals (his official title is Emulated Mob Boss).
“A lot of people joke that security is the barrier to productivity,” he says. And in few places is that clearer than with AI-generated code. AI will often write a function that works as intended, but isn’t secure.
Thanks to the flood of AI-generated code, Schloss is starting to see a resurgence of “simplistic exploits.”
Consider attacks like SQL injection, where attackers sneak malicious database commands into an app’s input fields to get unauthorised access to information like passwords and credit card details, or directory traversal, where attackers manipulate the file paths in requests for public server resources to reach private ones. These were major attack vectors 20 or 30 years ago, but have long since been eliminated from commercial products.
They were eliminated because developers adopted standard defences: A practice called field sanitisation prevents SQL injection by ensuring that anything typed into an unexpected location, like a password field or URL slug, is treated as plain data rather than run as a command. Similarly, containerisation – where each application runs in its own separate system, independent from other applications – is a standard industry practice that prevents all kinds of abuse, because hackers simply don’t have easy access to data sitting in a different silo.
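Neither defence is exotic. The directory traversal attack mentioned above, for example, is typically blocked by a few lines of path checking. Here’s a minimal sketch in TypeScript, assuming a Node server handing files out of a fixed public folder (the folder path and function name are illustrative, not from any particular tool):

```typescript
import path from "node:path";
import { promises as fs } from "node:fs";

// Hypothetical web root; in a real app this would be your static assets folder.
const PUBLIC_DIR = path.resolve("/srv/app/public");

// Resolve the requested filename against the web root and refuse anything
// that escapes it, such as a request for "../../etc/passwd".
async function readPublicFile(requested: string): Promise<Buffer> {
  const resolved = path.resolve(PUBLIC_DIR, requested);
  if (!resolved.startsWith(PUBLIC_DIR + path.sep)) {
    throw new Error("path traversal blocked: " + requested);
  }
  return fs.readFile(resolved);
}
```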
Unfortunately, AI coding tools don’t always stick to standard industry practices. As Schloss explains it, if the AI decides it wants to call SQL, it’s just going to call SQL and take whatever value it’s given. Unless it’s told explicitly to sanitise an input field and check that someone hasn’t entered malicious code in the email signup form, it won’t reliably write the necessary code to do it. Hackers can once again run malicious commands through your front-end website. In other words, common-sense security measures aren’t always being included.
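To make that concrete, here’s a hedged sketch of the difference using TypeScript and the node-postgres driver; the table and column names are hypothetical. The unsafe version splices user input straight into the SQL string, which is the pattern that reopens injection; the safe version passes the input as a parameter, so the database treats it as data rather than as a command.

```typescript
import { Pool } from "pg";

const pool = new Pool(); // connection details come from environment variables

// UNSAFE: the kind of query an AI tool may emit. Input typed into the signup
// form becomes part of the SQL itself, so "x' OR '1'='1" changes the query.
async function findUserUnsafe(email: string) {
  return pool.query(`SELECT id, name FROM users WHERE email = '${email}'`);
}

// SAFE: a parameterised query. The driver sends the input separately from the
// SQL, so whatever the user typed is matched literally against the column.
async function findUserSafe(email: string) {
  return pool.query("SELECT id, name FROM users WHERE email = $1", [email]);
}
```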
AI coding tools also have a tendency to leave important information exposed as plain text. Schloss says he keeps seeing usernames and passwords stored as plain text in session cookies. An attacker still has to intercept the cookies, either on the user’s computer or on the server, but if they do, they walk away with working login details.
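The long-established alternative is to keep credentials out of the cookie altogether: hand the browser an opaque, random session ID and keep everything sensitive on the server. A minimal sketch, assuming an Express app (the route, the in-memory store, and the user ID are placeholders):

```typescript
import crypto from "node:crypto";
import express from "express";

const app = express();
// In-memory session store for illustration; production apps use Redis or a DB.
const sessions = new Map<string, { userId: string }>();

app.post("/login", (req, res) => {
  // ...verify the submitted credentials against salted password hashes here...
  const sessionId = crypto.randomBytes(32).toString("hex"); // unguessable token
  sessions.set(sessionId, { userId: "user-123" }); // hypothetical user

  // The cookie carries only the random ID, never a username or password.
  res.cookie("session", sessionId, {
    httpOnly: true, // invisible to page JavaScript
    secure: true, // only ever sent over HTTPS
    sameSite: "strict",
  });
  res.sendStatus(204);
});

app.listen(3000);
```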
Similarly, some vibe coders have reportedly accidentally exposed the secret API keys they use to access services like OpenAI through their public-facing websites. This allows anyone to take a key and rack up a big bill at the vibe coder’s expense.
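The standard pattern is to keep the key on a server and let the browser talk to your own endpoint instead. A sketch, assuming an Express backend in front of OpenAI’s chat completions API (the route name and model are placeholders):

```typescript
import express from "express";

const app = express();
app.use(express.json());

// The browser calls this route; the secret key lives only in a server-side
// environment variable and is never shipped in front-end code.
app.post("/api/chat", async (req, res) => {
  const upstream = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({ model: "gpt-4o-mini", messages: req.body.messages }),
  });
  res.status(upstream.status).json(await upstream.json());
});

app.listen(3000);
```

In practice you’d also add rate limiting or user authentication to a route like this, so strangers can’t run up your bill through the proxy instead.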
Vibe coding isn’t just creating small one-off security vulnerabilities that can be easily patched. According to Schloss, AI tools are making incredibly poor architecture choices and steering people towards insecure practices. He gives the example of an app using a publicly accessible Firebase database to store user information.
Why is it publicly accessible? Well, the easiest way around a complicated user authentication protocol is just to have the whole database accessible to anyone with the URL. This is a situation he says he’s seeing a lot across different fields, including medtech and fintech.
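The boring fix is to lock the database down and put an identity check between the browser and the data. One common shape, sketched here with the Firebase Admin SDK behind an Express route, assumes clients sign in with Firebase Auth and send their ID token with each request (the route and collection names are hypothetical):

```typescript
import express from "express";
import { initializeApp } from "firebase-admin/app";
import { getAuth } from "firebase-admin/auth";
import { getFirestore } from "firebase-admin/firestore";

initializeApp(); // uses server credentials; nothing here ships to the browser

const app = express();

app.get("/api/profile", async (req, res) => {
  const idToken = req.headers.authorization?.replace("Bearer ", "");
  if (!idToken) return res.status(401).send("not signed in");

  try {
    // Verify the token the client obtained by signing in with Firebase Auth.
    const { uid } = await getAuth().verifyIdToken(idToken);
    // A signed-in user can only ever read their own record.
    const snapshot = await getFirestore().collection("users").doc(uid).get();
    res.json(snapshot.data());
  } catch {
    res.status(403).send("invalid token");
  }
});

app.listen(3000);
```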
Namanyay Goel, the founder of AI coding startup Giga AI, echoes this sentiment. While it seems logical to try to make an app functional and then add security, it just doesn’t work in reality. “Security is not something you can very easily add later,” he says.
For security consultants like Schloss, this is creating a lot of opportunities. “From our perspective as red teamers it’s becoming a lot easier to break into newer companies,” he says. More mature companies have people with a deeper understanding of application security and more rigid deployment workflows. It’s the newest apps that are moving fast and breaking standard practices.
For startups, indie hackers, and anyone else considering vibe coding an app, all this should stand as a stark warning. While AI tools can be incredibly useful when used right, handing off all coding decisions to an AI is likely to lead to an insecure app.
If you don’t understand what steps have been taken to secure your user data, then it probably isn’t secure at all. – Inc./Tribune News Service
