I've Started Using Dumber Models on Purpose

Why the most capable AI isn't always the right tool, and how friction in your workflow can lead to better architecture decisions.

Tags: dev, ai, claude, workflow, linkedin

Here’s something that felt wrong at first: I’ve started reaching for less capable models when I’m writing code.

Not because they’re cheaper. Because they make me think.

The Problem with Too-Capable Tools

Opus 4.5 will take a half-baked prompt and ship working code. You describe a vague idea, and 30 seconds later you’ve got something that compiles. Magic, right?

Except… did you actually think about what you were building?

The risk with ultra-capable models isn’t wrong code - it’s skipping the part where you understand the problem. You get a solution before you’ve defined what you’re solving.

I found myself in meetings defending decisions I hadn’t consciously made. “Why did you structure it this way?” Uh, because the model did, and it worked?

That’s a problem.

Friction Is the Feature

Sonnet makes you think first. When a model requires precision in your prompts, you’re forced to actually articulate what you want. That articulation is the architecture work.

My architecture decisions are sharper when the model requires precision. The prompt becomes the design doc. If I can’t explain it clearly enough for a mid-tier model to execute, maybe I don’t understand it well enough yet.

This isn’t about the model being bad. It’s about the model being appropriately demanding.

The New Workflow

Here’s what works for me:

Exploration and research: Use the biggest brain available. Opus 4.5 for understanding complex codebases, exploring possibilities, asking “what if” questions. Let it synthesize.

First-draft implementation: Dial it back. Sonnet forces me to write actual specs. If the prompt has to be precise, the thinking has already happened.

Code review and debugging: Back to powerful models. They catch things I miss, suggest better patterns, and explain why something is wrong - not just that it is.

Refactoring: Sonnet again. If I can’t describe the refactor clearly, I’m not ready to do it.

The pattern: powerful for exploration and review, constrained for creation.
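The routing above can be sketched as a tiny lookup. This is a hypothetical helper, not anything I actually ship; the tier names are shorthand for "big model" and "mid-tier model," not exact API identifiers.

```python
# Minimal sketch of the task-to-model routing described above.
# Tier names are illustrative shorthand, not real API model strings.
MODEL_FOR_TASK = {
    "exploration": "opus",    # biggest brain: research, "what if" questions
    "first_draft": "sonnet",  # constrained: forces you to write a real spec
    "review": "opus",         # powerful: catches misses, explains the why
    "refactor": "sonnet",     # constrained: describe the refactor clearly
}

def pick_model(task: str) -> str:
    """Route a task to a model tier, defaulting to the constrained one."""
    return MODEL_FOR_TASK.get(task, "sonnet")

print(pick_model("exploration"))  # opus
print(pick_model("refactor"))     # sonnet
```

The default matters: when I'm not sure which bucket a task falls into, I reach for the constrained tier first and upgrade only if I genuinely need the extra capability.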

Write the Design Doc Before the Prompt

This is the real insight: if your prompt could substitute for a design doc, you’ve done the thinking. If your prompt is “make it work,” you haven’t.

Ultra-capable models let you skip writing that design doc. Which is exactly why you shouldn’t always use them.

I’ve started treating prompts like I treat commit messages - they should explain the why, not just the what. If the prompt is just “add user authentication,” I haven’t done my job. What kind of auth? Where does it live? What’s the session strategy?

Forcing yourself to answer those questions in the prompt forces you to answer them in your head first.
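One way to make that discipline mechanical is a crude prompt lint (entirely hypothetical; the required topics and keywords just mirror the auth example above):

```python
# Hypothetical "prompt lint": before sending, check that the prompt answers
# the design questions. Topics and keywords mirror the auth example above.
REQUIRED_TOPICS = {
    "auth kind": ["oauth", "password", "magic link", "sso"],
    "where it lives": ["middleware", "gateway", "service"],
    "session strategy": ["jwt", "cookie", "server-side session"],
}

def missing_topics(prompt: str) -> list[str]:
    """Return the design questions the prompt leaves unanswered."""
    text = prompt.lower()
    return [
        topic
        for topic, keywords in REQUIRED_TOPICS.items()
        if not any(keyword in text for keyword in keywords)
    ]

print(missing_topics("add user authentication"))
# all three topics come back: the prompt hasn't done the thinking

print(missing_topics(
    "Add OAuth login handled in the gateway, with JWT sessions"
))  # [] (the spec questions are answered)
```

Keyword matching is obviously a toy, but the point survives: if a dumb checker can tell your prompt is underspecified, so can you.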

The Counterintuitive Truth

The tool that makes you think less isn’t always the better tool.

When I reach for the most powerful model, I’m implicitly saying “I don’t need to think about this.” Sometimes that’s true - you’re doing rote work, or you genuinely need the extra capability.

But for design decisions? For architecture? For anything you’ll need to explain to another human?

The friction is a feature. The struggle to articulate is the work.


I’m curious - has anyone else found themselves deliberately using less capable tools for certain tasks? Not for cost or speed, but because the constraint improves the output?


Join the conversation: Comment on LinkedIn