Vibe Coding & AI (Follow-Up)

ai, development, process, lessons

A few weeks ago I wrote about vibe coding—building software guided more by aesthetic intuition than upfront planning, amplified by AI. Since then I've pushed this approach harder, shipped a few things, and hit enough friction points to know what works and what doesn't.

This is what changed.

What Stuck: The Scaffolding Workflow

The workflow I trust now is simple: use AI to generate structural scaffolding, then refine by hand.

I give it clear constraints: "Build a React component that does X, uses Y library, follows Z pattern." It generates the boilerplate—imports, props, basic logic, placeholder UI. I review it, fix the obvious mistakes, then iterate on the parts that matter: the interaction details, the edge cases, the performance considerations.

This works because the division of labor is clear. AI handles the tedious setup, the repetitive patterns, the parts where correctness is straightforward. I handle the decisions that require context: how this fits into the larger system, what the tradeoffs are, what future changes might break.

The key is specificity. Vague prompts ("make a music player") produce vague results. Detailed prompts with examples and constraints ("build a music player using Web Audio API, with play/pause/seek controls, support for playlists, handle loading states") produce usable starting points.
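To make that concrete, here's roughly the shape of starting point a prompt like that tends to produce. Everything specific here is illustrative: the component name, the track type, the handlers. It also leans on an HTMLAudioElement rather than a full Web Audio graph, which is the kind of simplification I'd expect to review and adjust by hand.

```tsx
// Prompt: "Build a music player using Web Audio API, with play/pause/seek
// controls, support for playlists, handle loading states."
// Names and shapes below are illustrative, not anything the model is guaranteed to emit.
import { useRef, useState } from "react";

interface Track {
  title: string;
  url: string;
}

export function MusicPlayer({ playlist }: { playlist: Track[] }) {
  const audioRef = useRef<HTMLAudioElement | null>(null);
  const [current, setCurrent] = useState(0);
  const [loading, setLoading] = useState(false);
  const [playing, setPlaying] = useState(false);

  const play = async () => {
    const el = audioRef.current;
    if (!el) return;
    await el.play(); // returns a promise; can reject if playback is blocked
    setPlaying(true);
  };

  const pause = () => {
    audioRef.current?.pause();
    setPlaying(false);
  };

  const seek = (seconds: number) => {
    if (audioRef.current) audioRef.current.currentTime = seconds;
  };

  return (
    <div>
      <audio
        ref={audioRef}
        src={playlist[current]?.url}
        onLoadStart={() => setLoading(true)}
        onCanPlay={() => setLoading(false)}
        onEnded={() => setCurrent((i) => Math.min(i + 1, playlist.length - 1))}
      />
      {loading ? <span>Loading…</span> : null}
      <button onClick={playing ? pause : play}>{playing ? "Pause" : "Play"}</button>
      <button onClick={() => seek(0)}>Restart</button>
    </div>
  );
}
```

The point isn't the player itself. It's that a prompt this specific gives the model enough constraints to produce something worth editing, rather than something I have to throw away.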

I've started keeping a library of prompt templates for common patterns. Not because AI needs the repetition, but because I need to remember what level of detail gets good results.

What I Abandoned: Letting AI Make Architectural Choices

Early on I tried letting AI suggest architectural patterns. "Should I use context or a state management library for this?" or "What's the best way to structure this data flow?"

The answers were always plausible but generic. They didn't account for the specific constraints of the project—existing patterns, team familiarity, future extensibility needs. Following that advice meant constantly refactoring when reality diverged from the assumptions.

Now I make those decisions first, then use AI within that framework. The architecture is mine. The implementation gets help.

This also applies to dependencies. AI will confidently suggest libraries I've never heard of, or outdated approaches, or solutions that technically work but introduce unnecessary complexity. I've learned to verify every suggested package, check when it was last updated, and ask: is this solving a real problem or just adding weight?
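The last-updated check is easy to script against the public npm registry, which exposes publish timestamps per package. A rough sketch; the package name and the one-year threshold are placeholders, not a recommendation:

```ts
// Rough check of how recently a package was last published.
// Uses the public npm registry JSON endpoint (Node 18+ for global fetch).
async function lastPublished(pkg: string): Promise<Date> {
  const res = await fetch(`https://registry.npmjs.org/${pkg}`);
  if (!res.ok) throw new Error(`registry lookup failed: ${res.status}`);
  const meta = await res.json();
  return new Date(meta.time.modified);
}

async function main() {
  const modified = await lastPublished("left-pad"); // placeholder package
  const monthsOld = (Date.now() - modified.getTime()) / (1000 * 60 * 60 * 24 * 30);
  console.log(monthsOld > 12 ? "stale; look closer" : "recently maintained");
}

main().catch(console.error);
```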

The Quality Filter: Does It Handle the Second-Order Case?

The best heuristic I've found for evaluating AI-generated code is this: does it handle the second-order case?

The first-order case is the happy path. User clicks button, thing happens, state updates. AI is excellent at this.

The second-order case is everything else: what if the network request fails, what if the user clicks twice rapidly, what if the data is malformed, what if the component unmounts mid-operation?

AI often misses these. The generated code works in demo conditions but breaks in production. So now I test for this explicitly: after getting a working prototype, I immediately try to break it. Rapid clicks, bad data, edge cases, race conditions.

If it handles those gracefully, I keep it. If not, I rewrite the fragile parts by hand. This has saved me from shipping subtle bugs that would've been annoying to debug later.
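For what it's worth, "rewriting the fragile parts by hand" usually means guarding the happy-path handler against exactly those cases: the double click, the unmount mid-request, the failed response. A sketch with hypothetical names; the endpoint, the hook name, and the status states are mine, not anything a model produced:

```tsx
import { useEffect, useRef, useState } from "react";

// Hypothetical save handler hardened against the second-order cases:
// rapid repeat clicks, unmount mid-request, and failed requests.
export function useSave(url: string) {
  const [status, setStatus] = useState<"idle" | "saving" | "error">("idle");
  const inFlight = useRef<AbortController | null>(null);

  useEffect(() => {
    // Abort any pending request if the component unmounts mid-operation.
    return () => inFlight.current?.abort();
  }, []);

  const save = async (payload: unknown) => {
    if (inFlight.current) return; // ignore rapid repeat clicks
    const controller = new AbortController();
    inFlight.current = controller;
    setStatus("saving");
    try {
      const res = await fetch(url, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(payload),
        signal: controller.signal,
      });
      if (!res.ok) throw new Error(`save failed: ${res.status}`);
      setStatus("idle");
    } catch {
      // An abort from unmounting isn't an error worth surfacing.
      if (!controller.signal.aborted) setStatus("error");
    } finally {
      inFlight.current = null;
    }
  };

  return { save, status };
}
```

The in-flight guard is what catches the double click; the abort in the effect cleanup is what catches the unmount. Neither shows up in demo conditions, which is exactly why they get missed.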

What I Stopped Doing: Iterating on Vibes Alone

In the first post I talked about iterating based on feel—does this interaction seem right, does this flow make sense? That still matters, but I've added a constraint: every iteration needs a concrete success criterion.

"Make this feel faster" becomes "reduce perceived latency below 200ms." "Make this UI clearer" becomes "user can complete the primary action without reading instructions." "Make this code cleaner" becomes "reduce cyclomatic complexity below 10."

Vibes guide the direction, but metrics tell you when you're done. Without that, you iterate forever, chasing a feeling that keeps shifting.

The Human Part That Still Matters

The thing AI doesn't replace—and I don't think will for a while—is the ability to hold the whole system in your head. To see how a change in one component will ripple through five others. To remember the design decision from two months ago that now conflicts with this new feature.

That's where the value is. Not in typing code faster, but in knowing what code to write and why. The judgment calls. The tradeoffs. The boring-but-important stuff like consistency, maintainability, and not surprising future maintainers.

I'm faster now because I'm not typing boilerplate. But I'm more careful too, because the surface correctness of AI-generated code can mask structural problems. The skill is knowing which parts to trust and which to verify.

Where This Settles

I think this style of working will plateau into something like: AI handles the mechanical parts, humans handle the strategic parts, and the boundary between them becomes second nature.

You won't think about whether to use AI for a particular task—you'll just use it for the things it's obviously good at (scaffolding, boilerplate, pattern-matching) and do the rest yourself (architecture, context-heavy decisions, quality verification).

The interesting part is what happens to the things in the middle. The decisions that require some context but not total system knowledge. The refactors that are tedious but not trivial. The optimizations that are clear in hindsight but not obvious upfront.

I don't know yet where those land. But I'm learning by doing, which feels appropriate for a workflow built on iteration.
