Vibe coding security: the holes the AI quietly leaves in your app
By Peter Peart · 3 min read

Here's the uncomfortable thing about vibe coding security: the AI is not going to refuse to ship an insecure app. It will happily produce an app that looks finished, behaves correctly in your testing, and is wide open at the database level. There's no warning light. There's no friction. You will not notice until someone notices for you, and by then it's a story you don't want to tell your customers.
I'm not going to give you the specific rules and patterns here — those are in Chapter 7 of the Field Guide, and a half-explained security rule is worse than no rule. But I want to make absolutely sure you understand the four areas you have to think about, and what's actually at stake when you don't.
Why this matters more in vibe-coded apps than traditional ones
In a traditional app, there's normally an intermediary — a backend, written by hand — that sits between the user and the database. The intermediary checks who's asking and what they're allowed to see.
In a Lovable-style app, the browser often talks to the database directly. That's part of what makes the development so fast. It also means the database itself, not the interface, is the only thing standing between an attacker and your users' data.
If you came from a traditional development background, this is the bit that should make you sit up. If you've never written a traditional backend, the analogy is: imagine the lock on your front door wasn't actually attached to anything, and the only thing keeping people out was the welcome mat asking nicely.
You need real locks. They exist. You have to switch them on.
Area 1 — Row-level rules at the database
This is the big one. Every table containing anything sensitive needs rules describing who's allowed to read, write, update, and delete each row. Not roughly. Exactly.
The mistake I see most often isn't "no rules at all" — it's "rules switched on without thinking about the case where an attacker is logged in as a legitimate but unauthorised user." Those two scenarios produce very different rules, and only one of them protects you.
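To make the two-scenario point concrete without half-explaining a real rule, here's a deliberately toy Python model of the logic a row-level rule has to encode. Nothing here is Supabase or Postgres syntax; the names are illustrative. The point is that a correct rule must deny two different probes, not one:

```python
from typing import Optional

# Toy model only -- not a real row-level rule, just the logic the rule
# must encode: a row is readable solely by its owner.
# The two probes that matter: an anonymous visitor (requester is None),
# and a *different* legitimate logged-in user.
def can_read_row(row_owner: str, requester: Optional[str]) -> bool:
    # Deny anonymous access AND deny other authenticated users.
    return requester is not None and requester == row_owner
```

A rule that only asks "is somebody logged in?" passes the anonymous probe and fails the logged-in-as-someone-else probe — which is exactly the misconfiguration described above.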
The Field Guide chapter on this walks through what the rules need to look like, what the common misconfigurations are, and the red-team prompts I run against my own apps before launch.
Area 2 — Secrets and keys
API keys for paid services. Stripe keys. Email-sending keys. AI provider keys. Every one of these needs to live in the right place — environment variables on the server, never in code, never committed, never visible in the browser bundle.
The cost of getting this wrong ranges from "embarrassing" to "your bank account is empty by morning." Crypto-mining bots scan public repos for leaked API keys within minutes of them being pushed.
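The pattern itself is small enough to sketch. This is a generic illustration, not a prescription for any particular platform, and the key name is made up — the shape is what matters: the secret comes from the server's environment at runtime, and the app refuses to start without it rather than falling back to a value baked into the code.

```python
import os

# Illustrative sketch: read a secret from the server environment.
# "STRIPE_SECRET_KEY" below is an example name, not a required one.
def require_secret(name: str) -> str:
    value = os.environ.get(name)
    if not value:
        # Fail loudly at startup rather than limping along without a key
        # (or, worse, with one hard-coded in the source).
        raise RuntimeError(f"{name} is not set; refusing to start")
    return value

# e.g. stripe_key = require_secret("STRIPE_SECRET_KEY")
```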
Area 3 — Authentication and roles
Who is logged in, what role do they have, and what does that mean they're allowed to do? If your app has admin features, are those features actually protected — or just hidden behind a button non-admins can't see? "Hidden in the UI" and "blocked at the database" are not the same protection.
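The difference is easy to see in a toy sketch. The roles and the in-memory "database" below are stand-ins, not any real framework's API; the point is that the admin action checks the caller's role itself instead of trusting that only admins could reach the button:

```python
# Toy server-side enforcement: the destructive action verifies the
# caller's role directly. Hiding the button in the UI is not a check.
def delete_post(db: dict, caller_role: str, post_id: int) -> None:
    if caller_role != "admin":
        raise PermissionError("admin role required")
    db.pop(post_id, None)
```

"Hidden in the UI" means a non-admin who crafts the request by hand still gets through; a check like this one blocks them regardless of what the interface showed.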
If your app has a "delete my account" button, does it actually delete the user's data, or does it just log them out? GDPR cares about the answer.
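The contrast is worth spelling out in miniature. Again a toy model with made-up structures: logging out only clears the session, while deletion has to remove the user's data itself.

```python
# Toy contrast: "log out" vs "delete my account".
def log_out(sessions: dict, user: str) -> None:
    sessions.pop(user, None)       # session gone; data untouched

def delete_account(sessions: dict, user_data: dict, user: str) -> None:
    sessions.pop(user, None)
    user_data.pop(user, None)      # the part GDPR actually cares about
```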
Area 4 — What you do when something goes wrong
Security isn't just about preventing breaches. It's also about being able to tell when one has happened, having a plan for what to do next, and being able to talk honestly to your users about it. None of that is glamorous, and none of it can be invented in the moment.
What it costs to get wrong
I've taken over apps where every patient note in a small health-tech startup was readable by anyone with a browser dev console. I've seen Stripe keys committed to public repos. I've seen "admin" roles that any logged-in user could grant themselves.
In each case the fix was an afternoon. The risk had been live for months. The owner had no idea.
Don't be that owner.
The Field Guide is where the actual rules, patterns, and red-team prompts live. Chapter 7 is the hardening playbook. Appendix B is the pre-launch security checklist I run before every project goes live. If you're about to ship something with real users on it, these two are non-negotiable reading.
