Chapter Five

Give It Tools

The leap from "answer questions" to "take actions."

Up until now, every AI interaction you've had follows the same pattern. You type something. The AI types something back. A really sophisticated autocomplete. But what if the AI could do things? Not just tell you about the weather, but check the forecast. Not just write code, but run it. Not just suggest what to Google, but search the web itself and bring back what it found.

The Trust Question

When you give AI tools, you hand it a toolbox and say: "Here are things you can do. You decide when to use them." It can search the web, run code, read files, call APIs. The AI didn't get smarter — it got connected. And that raises a question that has nothing to do with technology and everything to do with you: how much should the AI do on its own before checking with you?
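Mechanically, "giving the AI tools" usually means the model emits a structured request ("call this tool with these arguments") and your code decides whether to run it. Here is a minimal sketch of that loop, assuming a toy tool registry and a hand-written request; the function names and request shape are illustrative, not any particular vendor's API.

```python
# Sketch: the model never runs anything itself. It emits a structured
# tool request, and this dispatcher executes it (or refuses).
# All names here are illustrative assumptions.

def get_weather(city: str) -> str:
    # Stand-in for a real forecast API call.
    return f"Forecast for {city}: sunny, 21°C"

def run_code(source: str) -> str:
    # Stand-in for a sandboxed interpreter.
    return "ran: " + source

TOOLS = {"get_weather": get_weather, "run_code": run_code}

def handle_tool_request(request: dict) -> str:
    """Execute a model-issued tool request, if the tool exists."""
    tool = TOOLS.get(request["name"])
    if tool is None:
        return f"error: unknown tool {request['name']!r}"
    return tool(**request["arguments"])

# What a model might emit instead of a plain-text answer:
request = {"name": "get_weather", "arguments": {"city": "Lisbon"}}
print(handle_tool_request(request))  # Forecast for Lisbon: sunny, 21°C
```

The key design point: the registry is an allowlist. The model can only ask for tools you chose to expose, which is where the trust question begins.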

The answer depends on three things. Stakes — what's the worst that could happen? Reversibility — can you undo it? And trust — how well do you know this tool? A calculator you've used a hundred times deserves more freedom than a brand-new plugin you've never tested.

Key insight

It's not all-or-nothing. The best AI workflows give freedom on low-stakes, reversible tasks while keeping humans in the loop for high-stakes, irreversible ones. Your job is to design the boundary.
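That boundary can be written down as a policy. The sketch below encodes the three factors from above, stakes, reversibility, and trust, as a single approval check; the fields, thresholds, and example tools are assumptions for illustration, not a prescribed design.

```python
# Sketch of the autonomy boundary: auto-approve only when stakes are
# low, the action is reversible, and the tool has earned trust.
# Fields and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ToolAction:
    name: str
    stakes: str        # "low" or "high"
    reversible: bool
    times_used: int    # crude proxy for trust

def needs_human_approval(action: ToolAction, trust_threshold: int = 10) -> bool:
    """Return True if a human should confirm before the tool runs."""
    if action.stakes == "high":
        return True                    # worst case is bad: always ask
    if not action.reversible:
        return True                    # can't be undone: always ask
    return action.times_used < trust_threshold  # new tool: ask until proven

# The calculator used a hundred times runs freely; the new plugin doesn't.
calculator = ToolAction("calculator", stakes="low", reversible=True, times_used=100)
new_plugin = ToolAction("file_deleter", stakes="high", reversible=False, times_used=0)
print(needs_human_approval(calculator))  # False
print(needs_human_approval(new_plugin))  # True
```

Note that the check is conservative by construction: any one red flag is enough to pull a human back into the loop.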

Where would you draw the line? Set your own autonomy levels.

Trust Thermometer

How much autonomy would you give? Scenario 1 of 8, low stakes: your AI agent wants to summarize an article you're reading.

Tools turn AI from an advisor into an assistant — from someone who tells you what to do, to someone who actually does it.

You've given AI hands. Next, we wire those tools into something that can pursue a goal on its own — planning, executing, recovering from mistakes. Time to build agents.
