Chapter Nine

The Build Loop

Expect imperfection. Enjoy refinement.
The third version is where the magic happens.

The most common mistake when building with AI isn't a bad prompt. It's a bad expectation. You type one long description, hit enter, and expect a finished product. It doesn't work. It never works. The people who build amazing things with AI know a secret: the first version is always wrong. The third version is good. The fifth is great. And the process of getting from one to five? That's the actual skill.

Every artist iterates. Every architect revises. Every filmmaker shoots multiple takes. Building with AI is no different. The people who enjoy the loop — not resent it — are the ones who build the best things.

Specify, Generate, Verify

Every productive session with a coding agent follows the same rhythm. Specify what you want — be concrete, give examples, describe what "done" looks like. Let the agent generate it. Then verify: does it work? Does it look right? Does it do what you asked? If not, you specify again, more precisely. That's the loop. It's not a sign of failure. It's the process.

The power move is writing your acceptance criteria before you start building. "The button should turn green when clicked." "The list should persist when I refresh the page." "The signup form should reject emails without an @ symbol." These plain-English descriptions of "done" become your evaluation framework. When something's wrong, you paste your criteria back to the agent and say "this doesn't match." The agent can test against your standards.
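Criteria like these can be written down as runnable checks. Here's a minimal sketch in Python: `check_email` is a hypothetical stand-in for whatever code the agent generates, and each plain-English criterion is paired with a check that exercises it.

```python
# A sketch of acceptance criteria as an evaluation framework.
# `check_email` is a hypothetical stand-in for agent-generated code.

def check_email(email: str) -> bool:
    """Accept an email only if it has text on both sides of one @ symbol."""
    local, at, domain = email.partition("@")
    return bool(local and at and domain)

# Each criterion pairs the plain-English "done" statement with a check.
criteria = [
    ("rejects emails without an @ symbol", lambda: not check_email("no-at-here")),
    ("accepts a well-formed address", lambda: check_email("person@example.com")),
]

for description, check in criteria:
    status = "PASS" if check() else "FAIL"
    print(f"{status}: {description}")
```

When something's wrong, a FAIL line is exactly the kind of concrete evidence you can paste back to the agent.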

What to Do When You're Stuck

The agent generated something broken. Now what? Five strategies, in order of what to try first:

1. Describe the symptom, not your diagnosis

"The button doesn't respond when I click it" beats "I think the onClick handler is broken."

2. Share the error message verbatim

Copy-paste the exact error. It's the single most useful thing you can give the agent.

3. Ask the agent to explain what it did

"Walk me through what you just changed and why." Understanding often reveals the issue.

4. Ask for a different approach

"That approach isn't working. Can you try solving this a completely different way?"

5. Start fresh with lessons learned

Sometimes a new conversation is faster than debugging a tangled one. Bring the key decisions with you.

Tests as Guardrails

You can ask the agent to write automated tests — in plain English. "Write a test that verifies the signup form rejects emails without an @ symbol." The agent writes both the test and the code that passes it. This is test-driven development, directed by someone who's never written a test. And once a test catches a bug, that bug can't quietly return: the test fails the moment it reappears.
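The test-first flow above might look like this in Python. The test encodes the plain-English request; `validate_email` is a hypothetical name for the function the agent writes to make the test pass.

```python
# A sketch of the test-first flow. The tests encode the plain-English
# request; `validate_email` is a hypothetical agent-written function.
import re

def validate_email(email: str) -> bool:
    # One @ symbol with non-empty, whitespace-free text on both sides.
    return re.fullmatch(r"[^@\s]+@[^@\s]+", email) is not None

def test_signup_rejects_email_without_at_symbol():
    assert not validate_email("person.example.com")

def test_signup_accepts_valid_email():
    assert validate_email("person@example.com")
```

A test runner such as pytest will discover and run these automatically; as long as the tests run, the @-symbol bug can't slip back in unnoticed.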

Practice the skill. How would you describe these bugs to a coding agent?

Debug Detective

Scenario 1 of 5: The Button That Does Nothing

You asked the agent to build a sign-up form. The form looks great, but the "Submit" button doesn't do anything when you click it. How would you report this bug?

You can iterate. You can debug. But how do you know if what you're building is actually good? Not just functional — good. That takes something AI can't give you: taste. That's next.

New tool unlocked: Eval Framework