Tashfia Akther

Vibe Coding Is Fun Until It Ships a Security Breach

Let’s define the term first, because it matters.

Vibe coding is when you:

  • prompt an AI
  • skim the output
  • feel that it “looks right”
  • paste it into production
  • move on

No threat modeling. No adversarial thinking. No real review.

Just vibes.

This works shockingly well… right up until it doesn’t. And when it fails, it fails silently.


Why vibe coding feels safe (and why it isn’t)

AI-generated code has three dangerous properties:

  1. It looks confident
  2. It compiles
  3. It usually works in the happy path

That’s enough to fool smart developers.

The problem is that security bugs rarely live in the happy path. They live in:

  • edge cases
  • malformed input
  • attacker-controlled parameters
  • assumptions you didn’t realize you made

Vibe coding optimizes for speed and surface correctness.

Security requires paranoia.

Those two goals are in tension.


The core security failure mode of vibe coding

AI does not think like an attacker unless you force it to.

By default, models generate:

  • permissive input handling
  • optimistic assumptions
  • minimal validation
  • convenience-first patterns

That’s fine for demos.
It’s lethal in production.

Example failure patterns AI loves:

  • trusting client-side validation
  • missing authorization checks
  • string interpolation in queries (sketched below)
  • unsafe defaults in configs
  • “temporary” secrets hardcoded into examples

None of these scream “broken” during testing.
All of them scream “breach” later.
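
A minimal sketch of the query-interpolation case, using Python’s built-in sqlite3; the table and column names are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, email TEXT)")

def find_user_vibe(email: str):
    # Vibe-coded: attacker-controlled input interpolated straight into SQL.
    # email = "' OR '1'='1" happily returns every row.
    return conn.execute(f"SELECT * FROM users WHERE email = '{email}'").fetchall()

def find_user_parameterized(email: str):
    # Parameterized: the driver treats email strictly as data, never as SQL.
    return conn.execute("SELECT * FROM users WHERE email = ?", (email,)).fetchall()
```

Both versions pass the happy-path test with a normal email address. Only one of them survives a hostile one.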


The illusion of “it’s just boilerplate”

One of the most common rationalizations is:

“This is just boilerplate auth / upload / webhook / API glue.”

That’s exactly where attackers live.

Boilerplate is where:

  • auth boundaries are crossed
  • untrusted input enters the system
  • secrets are handled
  • permissions are implied, not enforced

If you didn’t design the trust model yourself, you don’t own it.
You’re borrowing assumptions you didn’t inspect.


Vibe coding collapses your threat model to zero

Traditional secure development forces you to ask:

  • Who controls this input?
  • What happens if it’s malformed?
  • What happens if it’s hostile?
  • What happens if it’s replayed, delayed, or forged?

Vibe coding skips all of that.

You’re not malicious.
You’re just moving fast.

Attackers love fast.


Concrete security risks I keep seeing

These are not hypothetical. These ship every day.

1. Auth without authorization

AI will happily generate the “is the user logged in?” check,
then forget to verify what that user is actually allowed to do.

Result: horizontal or vertical privilege escalation.
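
A minimal, self-contained sketch of the gap; the invoice store and session type are illustrative stand-ins for whatever your framework provides:

```python
from dataclasses import dataclass

@dataclass
class Invoice:
    id: int
    owner_id: int
    total: float

@dataclass
class Session:
    user_id: int | None

INVOICES = {1: Invoice(id=1, owner_id=42, total=99.0)}

def get_invoice_vibe(session: Session, invoice_id: int) -> Invoice:
    # Authentication only: any logged-in user can read any invoice.
    if session.user_id is None:
        raise PermissionError("login required")
    return INVOICES[invoice_id]

def get_invoice_authorized(session: Session, invoice_id: int) -> Invoice:
    # Authentication, then authorization: enforce ownership of the object.
    if session.user_id is None:
        raise PermissionError("login required")
    invoice = INVOICES[invoice_id]
    if invoice.owner_id != session.user_id:
        raise PermissionError("not your invoice")
    return invoice
```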

2. Input validation theater

Basic type checks.
No length limits.
No encoding normalization.
No rejection of unexpected fields.

Result: injection, deserialization bugs, logic abuse.
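
For contrast, a minimal sketch of validation with teeth; the field names and length limit are illustrative:

```python
import unicodedata

ALLOWED_FIELDS = {"username", "display_name"}
MAX_LEN = 64

def validate_profile(payload: dict) -> dict:
    # Reject anything the schema does not explicitly expect.
    unexpected = set(payload) - ALLOWED_FIELDS
    if unexpected:
        raise ValueError(f"unexpected fields: {sorted(unexpected)}")

    cleaned = {}
    for field in sorted(ALLOWED_FIELDS):
        value = payload.get(field, "")
        if not isinstance(value, str):
            raise ValueError(f"{field} must be a string")
        # Normalize encoding before any length or uniqueness checks.
        value = unicodedata.normalize("NFC", value).strip()
        if not 1 <= len(value) <= MAX_LEN:
            raise ValueError(f"{field} must be 1-{MAX_LEN} characters")
        cleaned[field] = value
    return cleaned
```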

3. Secrets handled like config values

Environment variables logged.
Tokens passed through URLs.
Keys reused across environments.

Result: credential leakage with perfect audit trails… for the attacker.
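
A minimal sketch of two of the fixes implied here, using only the standard library; the endpoint URL is made up:

```python
import os
import urllib.request

API_TOKEN = os.environ["API_TOKEN"]  # from the environment, never hardcoded

def fetch_report() -> bytes:
    # Not: f"https://api.example.com/report?token={API_TOKEN}"
    # Query strings end up in access logs, proxies, and browser history.
    req = urllib.request.Request(
        "https://api.example.com/report",
        headers={"Authorization": f"Bearer {API_TOKEN}"},  # header, not URL
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()

def redact(secret: str) -> str:
    # If you must log anything about a secret, log its shape, not its value.
    return secret[:4] + "..." if len(secret) > 4 else "****"
```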

4. Trusting third-party payloads

Webhooks verified incorrectly.
Signatures checked but not timestamped.
Payloads parsed before verification.

Result: replay attacks and forged events.
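
A minimal sketch of verification done in the right order; the header format, signing scheme, and tolerance window are assumptions, so match them to your provider’s documentation:

```python
import hashlib
import hmac
import json
import time

TOLERANCE_SECONDS = 300

def verify_and_parse(raw_body: bytes, timestamp: str, signature: str, secret: bytes) -> dict:
    # Reject stale or future-dated deliveries to blunt replay attacks.
    if abs(time.time() - int(timestamp)) > TOLERANCE_SECONDS:
        raise ValueError("stale webhook")

    # Sign the raw bytes plus the timestamp, then compare in constant time.
    expected = hmac.new(secret, timestamp.encode() + b"." + raw_body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        raise ValueError("bad signature")

    # Parse only after verification succeeds.
    return json.loads(raw_body)
```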

5. “Temporary” debug code

Verbose errors.
Stack traces in responses.
Feature flags hardcoded to true.

Result: attackers learn your internals faster than your team.
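
A minimal sketch of the alternative, shown with Flask as an assumption (any framework has an equivalent hook): keep the trace in your logs, hand the client a generic message and an opaque id:

```python
import logging
import uuid

from flask import Flask, jsonify

app = Flask(__name__)
log = logging.getLogger("app")

@app.errorhandler(Exception)
def handle_unexpected_error(exc):
    error_id = uuid.uuid4().hex
    log.exception("unhandled error %s", error_id)  # full stack trace stays internal
    # The client gets enough to file a support ticket, nothing about internals.
    return jsonify({"error": "internal error", "id": error_id}), 500
```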


Why code review doesn’t save you (by itself)

Here’s the uncomfortable part.

Most human reviewers also vibe-review AI code.

They look for:

  • syntax errors
  • obvious logic bugs
  • style issues

They don’t simulate attackers either.

So the AI’s blind spots compound with the reviewer’s blind spots.
That’s how boring, preventable vulnerabilities ship.


The real danger: false confidence

Vibe coding doesn’t just introduce bugs.
It changes behavior.

Developers:

  • stop reading code deeply
  • stop questioning assumptions
  • stop threat modeling
  • trust outputs they didn’t earn

That’s a cultural regression, not just a technical one.

Security dies when curiosity dies.


How to vibe code without being reckless

You don’t need to stop using AI.
You need to stop outsourcing judgment.

Here’s the minimum discipline required.

Rule 1: Force adversarial prompting

After generating code, immediately prompt:
“List all security assumptions in this code.”
Then:
“How could an attacker abuse this?”

If the model struggles, that’s a red flag.

Rule 2: Separate generation from acceptance

Never generate and ship in one pass.
Sleep on it.
Re-read it.
Try to break it.

Speed without reflection is just deferred failure.

Rule 3: Own the trust boundaries

For every file, explicitly identify:

  • trusted inputs
  • untrusted inputs
  • authority checks
  • data sinks

If you can’t write that down, you don’t understand the code.
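
One lightweight way to do this is a trust-boundary header at the top of each module; every specific below is illustrative:

```python
# Trust boundaries: upload_service.py
#
# Untrusted inputs : multipart file body, filename, Content-Type header
# Trusted inputs   : session.user_id (set upstream by auth middleware), app config
# Authority checks : uploader must own the target project (enforced in save())
# Data sinks       : object storage bucket, audit log
```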

Rule 4: Default to denial

If the code doesn’t explicitly allow something, it should not work.
AI loves permissive defaults. You should hate them.
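
A minimal deny-by-default sketch; the roles and actions are illustrative:

```python
# Explicit allow list: a (role, action) pair not listed here is denied.
ALLOWED = {
    ("admin", "delete_user"),
    ("admin", "export_data"),
    ("member", "read_own_data"),
}

def is_allowed(role: str, action: str) -> bool:
    # Typos, brand-new actions nobody reviewed, and forgotten roles
    # all fall through to "no" instead of "yes".
    return (role, action) in ALLOWED
```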


Assumptions, weak spots, and how to test this argument

Assumptions I’m making:

  • You’re shipping real systems, not throwaway demos
  • You care about user data and business continuity
  • You use AI for more than autocomplete

Where this critique weakens:

  • Solo prototypes
  • Internal tools with no external exposure
  • Short-lived experiments

How to falsify me:
Take a vibe-coded feature.
Threat model it explicitly.
Try to abuse it.
If you can’t find a single weakness after honest effort, you either got lucky or you didn’t look hard enough.


The bottom line

Vibe coding is not evil.
Unexamined vibe coding is.

AI accelerates output.
It does not inherit responsibility.

Security isn’t about writing perfect code.
It’s about distrusting your own assumptions.

If AI makes you stop doing that, the problem isn’t the model.

It’s the vibe.

