May 7, 2026

Most AI-built apps shouldn't be on the internet

AI builders can ship v1 in a weekend. They can also bankrupt you, leak user data, and put your API keys on display by Sunday afternoon. What actually breaks — and what to do about it.

The fastest way to ship a broken SaaS is with AI.

I watch this break in real systems every week: friends, coworkers, side projects, sometimes companies with paying customers. The AI builder gets you to "it works on my screen" in 30 minutes. Three days later you're explaining to users why their data leaked, or to your bank why your card got charged $4,000 in OpenAI credits overnight.

That's not a reason to stop vibecoding. The tools are real, the productivity gains are real, the democratization is real. But most people shipping AI apps right now have no idea what they're doing. And the part of the system that would normally have caught the dumb stuff — a senior engineer in your head, a code review process, a security person — has been removed without anyone replacing it.

This post is what I'd say to a friend who just got their first Lovable or Claude Code app working and is about to send it to real users.

Why vibecoding is real

I'm not gatekeeping. The capability is real:

  • The blank page is gone. Scaffolding, auth boilerplate, database setup — what used to take a developer half a day takes Lovable 30 seconds.
  • Domain experts can ship. A nurse can build a tool for nurses without waiting for an engineering team. That's a profound shift.
  • Iteration is cheap. You can try three versions of an idea in a morning and pick the one that feels right.
  • Mistakes are cheaper to learn from. A bug used to mean an hour of debugging. Now it's a sentence: "this is broken, fix it."

If you're using these tools, you're already feeling all this. Skip ahead.

What actually breaks

This is where most "I built a SaaS in a weekend" posts stop. Here's what they don't tell you:

1. API keys in the frontend

This shows up constantly. The AI will happily put your OpenAI or Stripe key directly into client-side code — sometimes even into the deployed bundle.

It works perfectly in the demo.

Then someone opens DevTools, copies the key, and runs up your bill.

If your API key is in the frontend, it's not your API key anymore.

Fix: keep secrets in server-side env vars. Have the backend proxy the API call instead of letting the browser hit it directly.
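
The proxy pattern is simple enough to sketch. The browser posts the prompt to your server, and only the server ever touches the key, read from its own environment. A minimal Python sketch; `call_upstream` stands in for the real HTTPS call to the provider, and the function names here are illustrative, not any framework's actual API:

```python
import os

# In production this comes from your host's environment settings,
# never from code shipped to the browser. Demo value for the sketch:
os.environ.setdefault("OPENAI_API_KEY", "sk-demo-not-a-real-key")

def call_upstream(prompt: str, api_key: str) -> str:
    # Stand-in for the real HTTPS request to the model provider,
    # with the key in an Authorization header.
    assert api_key.startswith("sk-"), "server is missing its key"
    return f"echo: {prompt}"

def handle_chat(request_body: dict) -> dict:
    # The key is read server-side and used server-side.
    key = os.environ["OPENAI_API_KEY"]
    answer = call_upstream(request_body["prompt"], key)
    # Only the answer goes back to the client, never the key.
    return {"answer": answer}

print(handle_chat({"prompt": "hi"})["answer"])  # echo: hi
```

The frontend then calls your endpoint instead of the provider's API, and the key stays where DevTools can't reach it.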

2. Auth that looks fine but isn't

The login screen looks great. It even has nice transitions. What it doesn't have: rate limiting, proper session handling, brute force protection, or password hashing that any security person would call acceptable.

I've seen apps in production storing passwords in plain text. With real users. Real users whose credit card, email, and address were all one query away from each other.

Fix: don't let the AI roll its own auth. Use Clerk, Auth0, or Supabase Auth — five minutes of setup, decades of accumulated security work.
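
That advice stands: don't hand-roll auth. But it helps to know what acceptable password storage even looks like, if only to audit what the AI wrote. A hedged sketch using Python's standard-library PBKDF2 (the iteration count follows current OWASP guidance; this is for understanding, not a substitute for an auth provider):

```python
import hashlib, hmac, os

def hash_password(password: str, *, iterations: int = 600_000) -> str:
    # Random per-user salt + slow key derivation: the two things
    # plain-text (and plain SHA-256) storage lacks.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return f"pbkdf2_sha256${iterations}${salt.hex()}${digest.hex()}"

def verify_password(password: str, stored: str) -> bool:
    _, iters, salt_hex, digest_hex = stored.split("$")
    candidate = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), bytes.fromhex(salt_hex), int(iters)
    )
    # Constant-time comparison, so timing doesn't leak the digest.
    return hmac.compare_digest(candidate.hex(), digest_hex)

record = hash_password("hunter2")
print(verify_password("hunter2", record))  # True
print(verify_password("wrong", record))    # False
```

If the generated code does anything less than this, and especially if the password itself is ever stored, that's your cue to swap in a provider.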

3. One prompt away from DROP TABLE users

If your AI builder is connected to a real database with write access, "delete these rows" is one careless prompt from gone. There's no undo. There's no warning. The AI will not say "are you sure?"

Fix: never point the AI at production. Use a separate dev database. Confirm automated backups are running before you put anything real in there.
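
One cheap guardrail is to make the connection code itself refuse anything that looks like production unless explicitly allowed. A sketch; the env var names here are my invention, so adapt them to your setup:

```python
import os

def safe_database_url() -> str:
    # Default to a local dev database; the AI session never needs more.
    url = os.environ.get("DATABASE_URL", "sqlite:///dev.db")
    # Tripwire: anything that smells like production requires an
    # explicit opt-in that only the deployed environment sets.
    if "prod" in url and os.environ.get("ALLOW_PROD") != "1":
        raise RuntimeError(
            "Refusing to connect to what looks like production. "
            "Set ALLOW_PROD=1 only in the deploy environment."
        )
    return url

# Simulate the AI being pointed at production by mistake:
os.environ["DATABASE_URL"] = "postgres://app@prod-host/app"
try:
    safe_database_url()
except RuntimeError as err:
    print(err)  # the tripwire fires instead of the query
```

It's a blunt heuristic, but blunt is fine here: the goal is to make "delete these rows" fail loudly in the wrong environment.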

4. Costs that don't have a ceiling

Some AI builders deploy your app on infrastructure that bills by the request. If your app gets a hug-of-death from Reddit, or has a bug in a retry loop, you can wake up to a $4,000 bill. By default, there's no spending cap.

Fix: set a hard spending cap on every billable service before launch. OpenAI, hosting, database, all of them. The five minutes you spend now is the bill you don't get later.
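
Provider-side caps are the real fix, but you can add a backstop in your own code too: a small guard that tracks estimated spend and refuses calls past a hard limit. A sketch under my own naming; the per-call cost estimate is yours to supply, and this complements, not replaces, the billing settings:

```python
class BudgetGuard:
    """Refuse further paid API calls once an estimated cap is hit."""

    def __init__(self, cap_usd: float):
        self.cap_usd = cap_usd
        self.spent_usd = 0.0

    def charge(self, estimated_cost_usd: float) -> None:
        # Call before each billable request with your own estimate
        # (e.g. tokens * price-per-token). Fails loudly instead of
        # quietly letting a retry loop run all night.
        if self.spent_usd + estimated_cost_usd > self.cap_usd:
            raise RuntimeError(
                f"Spending cap of ${self.cap_usd:.2f} would be exceeded"
            )
        self.spent_usd += estimated_cost_usd

guard = BudgetGuard(cap_usd=50.0)
guard.charge(0.02)  # one model call's estimated cost
print(f"${guard.spent_usd:.2f} of ${guard.cap_usd:.2f} used")  # $0.02 of $50.00 used
```

The point is the failure mode: a crashed request at $50 is a support ticket; an uncapped retry loop is a $4,000 bill.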

5. The "it works for me" trap

AI-generated code looks confident. It compiles. The demo works in your browser, on your laptop, with your data. None of that means it works for 100 users in 5 timezones on 3 browsers.

The bugs that actually bite you are the ones you didn't see coming.

Fix: test on a different device, with a different account, on a different network — before you tell anyone the app is ready.

What to actually do

1. The AI is a fast intern. Not a senior engineer. It will write a thousand lines of confident-sounding code. Read the code. If you can't explain what it does, that's the part to dig into.

2. Anything that touches money, auth, or user data deserves slow mode. Don't accept the first version. Ask "what could go wrong here?" — the AI is shockingly good at answering that question, but only if you ask.

3. Don't let it design your database. Schema decisions are the ones you live with for years. The first version that "works" often turns into a nightmare at v3. Get a second pair of eyes before you build a lot on top of it.

4. Use a different model to review. Same model = same blind spots. Paste the code into a different one (Claude → ChatGPT, or vice versa) and ask "what's wrong with this?" The reviewer catches what the builder rationalized away.

5. Set a hard spending cap. Today. Before launch. Wherever you deploy, find the billing settings and set a maximum monthly spend.

6. Write down your non-negotiables. A 5-line file at the root of your project: "always use prepared statements. never log passwords. never put API keys in the frontend." Most AI builders read and follow a file like this; even when one doesn't, the file gives you a checklist to verify against.
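
As a starting point, here's the kind of file I mean, something like a CLAUDE.md or AGENTS.md at the project root. The exact rules are yours; these are common ones:

```markdown
# Non-negotiables

- Always use prepared statements / parameterized queries. Never build SQL by string concatenation.
- Never put API keys or other secrets in frontend code. Secrets live in server-side env vars.
- Never log passwords, tokens, or full credit card numbers.
- Never run destructive database operations (DROP, or DELETE without WHERE) without asking me first.
- Auth goes through our auth provider. Do not hand-roll sessions or password storage.
```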

7. When you don't know if something is safe, ask specifically. Not "is this secure?" — that gets vague reassurances. Ask: "if this code were on the public internet, what could a bad actor do with it?" The AI will tell you. It just won't volunteer it.

The take

Vibecoding gave us the most useful thing software has had in a decade: the gap between "I have an idea" and "someone can use it" collapsed from months to days. That's not going away.

The catch is that the friction wasn't all bad. Some of it was a small voice that said wait — should this really be on the internet? That voice used to come from a senior developer reviewing your PR. With AI builders, you have to put it there yourself.

If you do — fast and thoughtful — you get the upside without the horror story. If you skip the thoughtful part, the horror story arrives later, and it's usually expensive.

Vibecode. Just don't skip the second part.


If you want a more concrete walkthrough — workflows, repo files, escalation rules for the risky stuff — /coding goes deeper. Or use the repo file generator to drop a sane CLAUDE.md or AGENTS.md into your project in 30 seconds.

Get these in your inbox every Sunday — no daily spam, just the weekly note plus a few hand-picked links. Subscribe on the homepage.