Talking to Machines

Chapter Eight

Orchestrating Complexity

Managing AI across projects bigger
than any context window.

Here is the moment every ambitious beginner hits: you ask AI to build something real — a full game, a website with a backend, a research paper with citations — and the whole thing falls apart. Not because the AI isn't smart enough. Because the project is too big for a single conversation.

You paste in a wall of requirements, the AI gives you a confident-sounding response, and twenty minutes later you realize it forgot half of what you asked for, contradicted itself on the architecture, and hallucinated a library that doesn't exist. You try again. Same result, different hallucinations.

This isn't an AI problem. It's an orchestration problem. And learning to solve it is the difference between someone who uses AI for one-off tasks and someone who uses it to build real things.

The One-Prompt Trap

When you first learn to use AI, you naturally try to do everything in one shot. One massive prompt. One conversation. One attempt. And for simple tasks — writing an email, brainstorming ideas, explaining a concept — that works fine.

But complexity breaks the one-prompt approach in three ways:

1. Context overflow

The AI can only hold so much information at once. A full project exceeds that limit, so the AI starts "forgetting" your earlier requirements.

2. Attention dilution

Even within the context window, asking the AI to juggle twelve different concerns at once means each one gets less focus. Quality drops across the board.

3. Error compounding

One small mistake in a complex output cascades. If the AI gets the database schema wrong, everything built on that schema is wrong too — and it's hard to spot deep in a wall of generated code.

The solution isn't a better prompt. It's a better process.
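The first failure mode, context overflow, is easy to see with rough numbers. A common rule of thumb is that one token is about four characters of English text. A quick sketch (the 4-characters-per-token ratio and the 128,000-token window are illustrative assumptions; real tokenizers and model limits vary):

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token of English text.
    (A heuristic, not a real tokenizer.)"""
    return len(text) // 4

def fits_in_context(documents: list[str], window_tokens: int = 128_000) -> bool:
    """Check whether a pile of documents fits a hypothetical context window."""
    total = sum(estimate_tokens(doc) for doc in documents)
    return total <= window_tokens

# A "full project" quickly blows past the budget:
codebase = "x" * 2_000_000        # ~2 MB of source text
print(estimate_tokens(codebase))  # ~500,000 tokens -- far over budget
print(fits_in_context([codebase]))
```

Run the numbers on your own project before pasting it in: if the material is several times the window, no prompt wording will save you.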

Decomposition: The Master Skill

Professional software engineers don't build entire applications in one sitting. They break them into pieces — modules, components, functions — each with a clear purpose, clear inputs, and clear outputs. Then they assemble them.

Working with AI on complex projects requires the same discipline. You need to decompose your project into tasks that are small enough for a single, focused AI conversation to handle well.

Good decomposition follows three rules:

Single responsibility. Each task does one thing. "Design the database AND build the API AND write the frontend" is three tasks, not one.

Clear interfaces. Every task has a defined input (what it needs to start) and a defined output (what it produces). The output of one task becomes the input of the next.

Minimal dependencies. Tasks that can be done in parallel should be. Only create dependencies when one task genuinely needs another's output.

Decomposition is a human skill, not an AI skill.

AI can help you brainstorm a task breakdown, but deciding how to split the work, what depends on what, and what to prioritize — that's your job. It's project management, and it's one of the most valuable skills you can develop.

Project Orchestrator: An Example

Consider the brief "a 2D puzzle platformer with original art and music." Decomposed, it becomes seven tasks, each small enough for one focused conversation:

Define game concept. Nail down the core mechanic, theme, and target audience. No dependencies.

Design 5 core levels. Create level layouts that teach mechanics progressively. Depends on: Define game concept.

Generate art assets. Create character sprites, backgrounds, and UI elements. Depends on: Define game concept.

Build game engine scaffolding. Set up the project structure, physics, and gravity system. Depends on: Design 5 core levels.

Implement levels. Build all 5 levels with puzzles, triggers, and progression. Depends on: Build game engine scaffolding, Design 5 core levels.

Compose music & SFX. Create ambient music and sound effects that match the watercolor theme. Depends on: Define game concept.

Playtest & polish. Test all levels, fix bugs, tune difficulty, add juice. Depends on: Implement levels, Generate art assets, Compose music & SFX.

Treat this like a kanban board: move each task from to-do to in-progress to done, and start a task only once its dependencies are complete.
Key insight: Complex projects are orchestration problems, not prompt problems. Break the work into pieces, manage dependencies, and let each conversation focus on one thing.
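A task graph like the game project's can be captured in a few lines of code, and the standard library will hand you a working order for free. A sketch (task names follow the example above; Python's graphlib implements the topological sort):

```python
from graphlib import TopologicalSorter

# Each task maps to the set of tasks whose output it needs (its dependencies).
tasks = {
    "Define game concept": set(),
    "Design 5 core levels": {"Define game concept"},
    "Generate art assets": {"Define game concept"},
    "Build game engine scaffolding": {"Design 5 core levels"},
    "Implement levels": {"Build game engine scaffolding", "Design 5 core levels"},
    "Compose music & SFX": {"Define game concept"},
    "Playtest & polish": {"Implement levels", "Generate art assets",
                          "Compose music & SFX"},
}

# A valid working order: every task appears after all of its dependencies.
order = list(TopologicalSorter(tasks).static_order())
for step, task in enumerate(order, 1):
    print(f"{step}. {task}")
```

The ordering isn't unique: tasks with no path between them (art assets and level design, say) can be run in parallel, which is exactly what the minimal-dependencies rule buys you.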

The Handoff Pattern

Once you've broken a project into tasks, you need a way to connect them. In AI workflows, this connection is the handoff — the moment where the output of one conversation becomes the input to the next.

The Handoff Pattern

Conversation 1 produces an artifact — a document, a schema, a plan.
That artifact is pasted into Conversation 2 as context.
Conversation 2 builds on it and produces the next artifact.
Repeat until the project is complete.

Think of it like a relay race. Each runner (conversation) covers their leg and passes the baton (artifact) to the next. The baton is the critical piece — if it's dropped or garbled, the next runner stumbles.

This means your handoff artifacts need to be crisp. A database schema is a better handoff than a vague description of the data model. A numbered list of API endpoints is better than "I need a backend that handles user stuff." The sharper the artifact, the better the next conversation performs.

The quality of your handoff artifact determines the quality of everything built downstream. Garbage in, garbage out — across conversations.
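The handoff itself can be as simple as string assembly: the artifact travels verbatim into the next prompt. A minimal sketch (the prompt wording and the `schema` artifact are invented for illustration; there is no special API here, just passing text forward):

```python
def build_handoff_prompt(artifact: str, next_task: str) -> str:
    """Pack the previous conversation's artifact into the next prompt.

    The artifact is included word for word -- a crisp schema beats
    a vague summary of "the data model".
    """
    return (
        "Here is the database schema produced in an earlier step:\n\n"
        f"{artifact}\n\n"
        f"Using exactly this schema, {next_task}"
    )

# The baton from Conversation 1:
schema = (
    "CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT UNIQUE);\n"
    "CREATE TABLE sessions (token TEXT PRIMARY KEY,"
    " user_id INTEGER REFERENCES users(id));"
)

# The opening message of Conversation 2:
prompt = build_handoff_prompt(
    schema, "write the API endpoints for user signup and login."
)
print(prompt)
```

Note the phrase "using exactly this schema": anchoring the new conversation to the artifact, rather than letting the AI reinvent it, is what keeps the relay baton from being garbled.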

The Art of Context Packing

Every time you start a new AI conversation, you're making a decision that most people don't even notice: what context do you include?

You have a limited budget — the context window. And you have a pile of potentially relevant information: code files, documentation, previous conversations, error logs, design specs. Not all of it will fit. And even if it did, dumping everything in would dilute the AI's attention.

Context packing is like packing a suitcase for a trip. You can't bring your entire closet. You need to think about where you're going (the task), what you'll need (relevant information), and what you can leave behind (everything else).

Too little context

The AI doesn't have what it needs and fills the gaps with hallucinations. It sounds confident but invents details.

Too much context

The AI gets overwhelmed. Important details get lost in the noise. Output becomes generic and unfocused.

The right context

The AI has exactly what it needs for this specific task. Output is precise, relevant, and actionable.

Think like an editor, not a hoarder.

Before every AI interaction, ask yourself: what does the AI need to know to do this specific task? Include that. Leave out everything else. This is context engineering applied to project management.

Context Packing: An Example

Suppose your task is: Fix the authentication bug. Users are getting logged out after 5 minutes; the session token refresh seems broken. Your budget is 8,000 tokens, and these documents are available:

Project README (docs, 800 tokens) — overview of the project structure, goals, and setup instructions.

Full codebase (code, 12,000 tokens) — the entire source code of the application; too large to fit at all.

Style guide summary (docs, 200 tokens) — coding conventions, naming patterns, and formatting rules.

API documentation (docs, 3,000 tokens) — endpoint specifications, request/response formats, authentication.

Previous conversation (context, 2,500 tokens) — your last chat with the AI about this project, including decisions made.

Error logs (data, 1,200 tokens) — recent error stack traces and warnings from the application.

Test results (data, 600 tokens) — which tests pass, which fail, and the failure messages.

User requirements (docs, 400 tokens) — what the user originally asked for and the acceptance criteria.

Database schema (code, 500 tokens) — table definitions, relationships, indexes, and constraints.

Competitor analysis (context, 1,800 tokens) — research on how competitors handle similar features.

Which would you pack? Too little and the AI hallucinates; too much and nothing fits.
Key insight: Context is not "more is better." It's about choosing the right information for the task. The best AI users think like editors: what does the AI need to know to do this specific job?
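Packing a context budget reduces to a small selection problem: rank documents by relevance to the task, then pack the most relevant first, skipping anything that won't fit. A sketch (token counts follow the authentication-bug example; the relevance scores are hypothetical judgments you would make yourself):

```python
BUDGET = 8_000  # tokens

# (name, tokens, relevance to "fix the authentication bug"), relevance 0-10.
documents = [
    ("Error logs", 1_200, 10),
    ("Previous conversation", 2_500, 9),
    ("API documentation", 3_000, 8),
    ("Test results", 600, 7),
    ("Database schema", 500, 6),
    ("User requirements", 400, 5),
    ("Full codebase", 12_000, 4),  # relevant, but it can never fit
    ("Style guide summary", 200, 3),
    ("Project README", 800, 2),
    ("Competitor analysis", 1_800, 1),
]

def pack_context(docs, budget):
    """Greedy pack: most relevant first, skipping anything that won't fit."""
    chosen, used = [], 0
    for name, tokens, _relevance in sorted(docs, key=lambda d: d[2], reverse=True):
        if used + tokens <= budget:
            chosen.append(name)
            used += tokens
    return chosen, used

chosen, used = pack_context(documents, BUDGET)
print(chosen)
print(f"{used}/{BUDGET} tokens")
```

The interesting part is what gets left out: the README and the competitor analysis lose their seats to error logs and test results, because relevance to this task, not general importance, decides who boards.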

Keeping Track of What Worked

Here's a scenario that happens to everyone: you have a great AI conversation that produces exactly the output you need. A week later, you need something similar. You can't remember what you said. You try to recreate it. The results are worse.

Complex AI projects generate a lot of artifacts — prompts that worked, outputs that were good, decisions that were made, approaches that failed. Without a system for tracking this, you'll waste hours rediscovering things you already figured out.

Professional developers use version control (like Git) to track every change in their codebase. You should apply the same mindset to your AI work:

1. Save your prompts

When a prompt produces great output, save it somewhere. A simple notes app, a markdown file, a prompt library. The exact wording matters.

2. Save the outputs

Keep the AI-generated artifacts you build on. Database schemas, API designs, outlines. These are your handoff documents and you'll need them again.

3. Document your decisions

When you and the AI explore two approaches and pick one, write down why. Future-you will thank present-you when the project pivots.

4. Track what failed

A prompt that produced garbage is valuable information. Knowing what doesn't work is half the battle. Keep a "failure log" so you don't repeat mistakes.

Your prompt library is like a musician's practice notebook. The more you collect, the faster you get, and the less you start from zero.
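A prompt library needs no special tooling; a list of records you can search is enough to start. A minimal sketch (the field names and helper functions are my own invention, not a standard format):

```python
from datetime import date

prompt_library: list[dict] = []

def log_prompt(task: str, prompt: str, worked: bool, notes: str = "") -> None:
    """Record a prompt and its outcome -- failures are worth keeping too."""
    prompt_library.append({
        "date": date.today().isoformat(),
        "task": task,
        "prompt": prompt,
        "worked": worked,
        "notes": notes,
    })

def find_prompts(keyword: str, worked_only: bool = False) -> list[dict]:
    """Search saved prompts by keyword in the task description."""
    return [
        entry for entry in prompt_library
        if keyword.lower() in entry["task"].lower()
        and (entry["worked"] or not worked_only)
    ]

log_prompt("schema design",
           "Design a schema for users and sessions, with constraints...",
           worked=True)
log_prompt("schema design", "Make me a database",
           worked=False, notes="too vague -- got generic output")

print(len(find_prompts("schema", worked_only=True)))  # 1
```

Whether the storage is a Python list, a markdown file, or a notes app matters far less than the habit: exact wording in, outcome recorded, searchable later.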

Key Concepts

Decomposition

Break work into chunks with clear inputs, outputs, and minimal dependencies.

The Handoff Pattern

Output from conversation 1 becomes input to conversation 2.

Version Control for AI Work

Track what prompts worked, compare output versions, maintain a decision trail.

The best AI users aren't the best prompters. They're the best project managers.

You've learned to break work into pieces, manage context like a scarce resource, and build a system for tracking what works. In the next chapter, we turn to the hardest skill of all: knowing when AI is wrong — and what to do about it.