
Amira Abdillah

How I Made My First $880 Vibe Coding (And the Painful Lessons That Followed)

Vibe Coding: The Reality Behind the Magic

Vibe coding sounds magical — like whispering an idea to an AI and watching it spit out a finished app. The reality? Messy, glitchy, frustrating… and sometimes surprisingly profitable.

The Projects

$800 Web App (Book Scanner Tool)

A family friend who exports used books needed a way to scan books with an external USB scanner, look up their details, group them into 'boxes', and export the contents to Excel so buyers could see what they were getting.

His old mobile app scanned with the phone camera, which was so slow he was losing $50 a day in productivity.

I vibe coded a custom web app for him for $800.

$80 Mobile App (Gym Tracker)

Two of my husband's friends pay $40 a year for the StrongLifts 5x5 gym app. And they have had it for 10 years!

I vibe coded a super simple version for them and charged them $40 each for lifetime access.

They still only have the Expo build while my Apple developer account is in progress. But I will publish it on the App Store once I get approved!

Part 1: The Harsh Reality Check

1. Don't Believe in the One-Prompt Myth

All the TikTok and YouTube videos I saw made it look effortless.

One short prompt along the lines of "Make me a pomodoro timer app" and out pops a perfect app.

Reality? Buggy, glitchy, and not what I wanted.

Those videos are smoke and mirrors. True one-prompt vibe coding is still a long way off.

2. Prototypes Are Easy. Production Hurts.

You can rapidly develop a prototype that looks functional.

But as soon as you start using it properly, it breaks. Buttons do not work, edge cases crash the app, and data is lost.

Prototypes are easy. Production is painful.

If you want usable apps, you need to embed workflows, iterations and carefully curated structure.

You will need to use your actual brain!

3. Garbage In, Garbage Out (Especially with AI)

LLMs are hyper-agreeable and obedient.

Feed them slop, contradictory specs, or impossible demands, and they will find a way to convince themselves, and you, that they have delivered.

And do not get me started on when you build more slop on top of it!

Ensure what you are asking it to do is correct.

Part 2: The Survival Toolkit

4. Keep the Code Clean (Or Pay Later)

You may not be writing code.

But you need to help the LLM by prompting it to keep the code clean and maintainable.

Otherwise you will end up with a 1,500-line file and every file-edit operation erroring out.
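One cheap way to enforce this is a tiny check you run after each AI edit that flags any file over a line budget, so you can tell the LLM to split it before things rot. This is just a sketch; the 300-line budget is my own arbitrary threshold, not an official rule:

```python
from pathlib import Path

# Line budget per file -- an arbitrary threshold I use, not an official rule.
MAX_LINES = 300

def oversized_files(root: str, pattern: str = "*.py", limit: int = MAX_LINES):
    """Return (path, line_count) for every file over the line budget."""
    flagged = []
    for path in Path(root).rglob(pattern):
        count = len(path.read_text(encoding="utf-8").splitlines())
        if count > limit:
            flagged.append((str(path), count))
    return flagged
```

Anything this flags goes straight back to the LLM with a "refactor this into smaller modules" prompt.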

5. Have AI Critique Itself

Here is a hack I stole from Reddit which changed everything!

If you ask an LLM to follow a set of guidelines, criteria, or a checklist, it will rarely follow all of them.

Add a follow-up step where the LLM systematically critiques its own output, line by line, against the checklist.

And it will spit out gold!
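The two-pass loop can be sketched in a few lines. Here `call_llm` is a stand-in for whatever chat API or platform you use, not a real library call, and the checklist items are just illustrative:

```python
# Sketch of the two-pass "draft, then critique yourself" loop.
# `call_llm` is a stand-in for whatever chat API you use -- NOT a real library call.

CHECKLIST = [
    "All buttons have handlers",
    "Empty states are handled",
    "No file exceeds 300 lines",
]

def draft_then_critique(call_llm, task: str) -> str:
    checklist_text = "\n".join(f"- {c}" for c in CHECKLIST)
    # Pass 1: draft against the checklist.
    draft = call_llm(f"{task}\n\nFollow this checklist:\n{checklist_text}")
    # Pass 2: make the model grade its own draft, line by line, and fix it.
    critique_prompt = (
        "Go through the checklist line by line. For each item, state whether "
        "your previous answer satisfies it, then output a corrected version.\n\n"
        f"Checklist:\n{checklist_text}\n\nPrevious answer:\n{draft}"
    )
    return call_llm(critique_prompt)
```

The point is that the critique is a separate turn: the model is far better at checking an existing answer against a list than at satisfying the list while generating.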

6. Specs Beat Vibes

Ambiguity kills projects.

And specs are the solution. There is no established pattern yet; Spec Kit and Kiro are two examples you can follow.

But I prefer these three docs:

  • A product specification document (features and product principles)
  • An implementation checklist
  • A guidance doc (coding standards, best practices, structure etc)
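To make that concrete, here is a trimmed-down sketch of what the three docs might look like for the book scanner project above. The file names and contents are just my own convention, not a standard:

```markdown
<!-- spec.md: product specification (features and product principles) -->
## Features
- Scan a barcode with the USB scanner, look up book details, assign to a box
- Export each box's contents to Excel

## Product principles
- Never lose scanned data
- Scanning must be faster than the old camera app

<!-- checklist.md: implementation checklist -->
- [ ] Barcode input via USB scanner
- [ ] Book lookup by ISBN
- [ ] Excel export per box

<!-- guidance.md: coding standards, best practices, structure -->
- Keep files under 300 lines
- One component per file
- Validate all user input before saving
```

I paste the relevant doc into the chat at the start of every session, so the LLM always works from the same source of truth.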

7. Manage Context Ruthlessly

It is so tempting to just continue the mega-chat so you do not need to spend time or effort describing what you want to the LLM again.

Don't.

All the platforms compress long conversations, bastardising your message history into garbage.

And the longer the prompt, the worse the LLMs perform.

Start new chats fast and often.
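If you are calling a model through an API rather than a chat UI, you can do the trimming yourself instead of letting the platform do it badly. A minimal sketch, assuming a simple role/content message list; the token count here is a crude words-times-1.3 estimate, not a real tokenizer:

```python
# Rough sketch of keeping a chat under a token budget before each new request.
# Token counting is a crude words * 1.3 estimate, NOT a real tokenizer.

def estimate_tokens(text: str) -> int:
    return int(len(text.split()) * 1.3)

def trim_history(messages: list[dict], budget: int) -> list[dict]:
    """Keep the system prompt plus the most recent messages that fit the budget."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    kept = []
    used = sum(estimate_tokens(m["content"]) for m in system)
    for m in reversed(rest):  # walk newest-first so recent context survives
        cost = estimate_tokens(m["content"])
        if used + cost > budget:
            break
        kept.append(m)
        used += cost
    return system + list(reversed(kept))
```

Dropping old turns yourself is predictable; platform-side compression is not. Either way, a fresh chat with a pasted spec beats a bloated history.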

Part 3: Beyond Just Building

8. Choose the Right Vibe Coding Platform

Not all vibe coding platforms are equal.

Some cram everything into one file, so they are fast and cheap but fragile.

Others give you strong guardrails and thought-out workflows but cost more and are slower.

Match platform to project:

  • Fast & Cheap: good for quick websites. Lovable is a great example.
  • Stable + Guardrailed: better for production-ready apps. I prefer Rocket and Kolega.

9. The Vibe Coding Wall

Even with all the best practices I described, there is a fundamental wall.

LLMs cannot handle large codebases or niche tasks which they have not been trained on extensively.

Do not waste your time trying to build anything big or sophisticated.

10. Don't Do It Alone — Find the Community

Vibe coding is so new that there are no real textbooks or established best practices.

Everyone is experimenting. Reddit, Discord, and Twitter/X are full of hard-learnt lessons.

Learn from others' failures instead of spending your own money to learn the same lessons.

Bonus Point: Snapshot and Rollback

You may have the best spec, the best prompt and the best everything.

But the cruel RNG gods at OpenAI and Anthropic will still mess up a feature sometimes.

Snapshot your code before each big change so that when a generation goes wrong, you can roll back and rerun it. Usually it works the second time.
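In practice a snapshot is just a git commit before each AI edit, and a rollback is a hard reset. These helper names are my own; they only build the plain git commands, which you would run with `subprocess.run(cmd)` inside your repo:

```python
# Sketch: snapshot before each AI edit so a bad generation is a one-command rollback.
# These helpers only BUILD the git commands; run them with subprocess.run(cmd) in a repo.

def snapshot_cmds(label: str) -> list[list[str]]:
    """Commit everything under a throwaway label before letting the AI edit."""
    return [
        ["git", "add", "-A"],
        ["git", "commit", "-m", f"snapshot: {label}", "--allow-empty"],
    ]

def rollback_cmd() -> list[str]:
    """Discard the AI's changes to tracked files, back to the last snapshot."""
    # Note: untracked files the AI created also need `git clean -fd`.
    return ["git", "reset", "--hard", "HEAD"]
```

Some vibe coding platforms have a built-in checkpoint/restore button that does the same thing; use whichever is closer to hand, just use something.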


Want to see the exact prompts I used? I'll share them in my next post — follow to get them.

Top comments (1)

Martial Terran

"All the platforms compress history, bastardising your message history into garbage." What about Google AIStudio? How many tokens before Google AIStudio compress message history? [I have noticed that after 100,000 tokens, the python scripts tend to have more typos and errors, so I manually delete obsolete (obsolete prior versions or prior debugging) prompts/responses.....]

Gemini (Chrome Tab) responds:
While Google AI Studio uses models with very large context windows, the specific point at which it compresses or manages message history is related to the model's token limit for a single conversation.

The key details regarding token limits and history management in Google AI Studio are:

Context Window Limit (Per Chat): The maximum conversation history is determined by the context window size of the specific Gemini model you are using. For example, the gemini-1.5-flash model has a massive context window of around 1 million tokens for input.

No Automatic Reset: The token count does not reset daily for the same chat. The token count for a conversation keeps accumulating until it hits the model's maximum limit.

Compression is a Strategy: Unlike some other platforms that may aggressively compress at a lower, fixed token count, Gemini models are designed to utilize the entire large context window. For developers using the Gemini API, there are methods and strategies available for explicitly compressing the history (e.g., summarizing old messages into a TLDR) when the context gets too long, but this is a choice made by the developer, not an automatic compression at a low threshold in the AI Studio chat interface.

Practical Performance: While the official limit is very high (e.g., 1 million tokens), some users of AI Studio have reported the interface becoming sluggish or the model forgetting context at much lower, though still very large, token counts (e.g., around 200,000 to 250,000 tokens). This suggests that while the model can hold a million tokens, practical performance may degrade before reaching that hard limit.

In short, Google AI Studio and its underlying models offer a large token limit (e.g., 1 million tokens for gemini-1.5-flash), but this limit is the conversation ceiling, not a trigger for early compression. You only hit a hard stop on continuing the chat when the total input tokens (including all previous history) approaches that limit.