What Superpowers Taught Me About Planning with AI

The Claude Code plugin that made me stop coding and start thinking first

I Used to Jump Straight Into Code

I'll be honest with you. When I first got access to Claude Code, I did what everyone does.

I opened a terminal, typed something like "build me a settings page with dark mode toggle and user preferences," hit enter, and watched the magic happen. Code appeared. Files got created. Things mostly worked. I felt like the fastest developer alive.

For about forty-five minutes.

Then I spent three hours fixing edge cases the AI didn't anticipate. The dark mode toggle didn't persist across sessions. The preferences schema didn't match my existing database. The component styles conflicted with my design system. The AI had built me something that looked correct and functioned incorrectly.

This is vibe coding. You skip the thinking and go straight to doing. You describe what you want in natural language, accept the output, and iterate by feel. It's fast at the start and expensive at the end. And I was deep in it.

I'd start every session the same way: open Claude Code, describe a feature, let it rip. Sometimes I'd get lucky and the output was close to what I needed. More often, I'd end up in a debugging spiral where I was fixing code I didn't write and didn't fully understand. The AI was guessing at my intentions, and I was guessing at its output. Two parties guessing at each other is not engineering. It's improv.

Something had to change.

The Evolution Nobody Talks About

Here's context most people don't have: I've been working with AI coding tools since the GitHub Copilot beta. I was early. Back then, AI-assisted development was autocomplete on steroids — it would finish your line, suggest a function body, save you some typing. Useful, but limited. And the habit it built was exactly vibe coding — let the AI suggest, accept it, move on.

The technology wasn't there yet for anything more, but it was moving fast. When agents arrived — Claude Code, Cursor with multi-file edits, tools that could understand context across an entire codebase — the game changed completely. Suddenly the AI wasn't just finishing your sentences. It was writing entire features, making changes across multiple files, understanding architecture.

But the early days of agent-level tools were rough. When you let the AI make large codebase changes, things broke. Files that should have been deleted weren't. New files appeared that didn't need to exist. There was no cleanup. Code quality felt like the old jQuery days — spaghetti everywhere, things wired together in ways that made no sense. You learned fast to commit your code before letting the AI touch anything, because there was a real chance you'd need to roll back.

It felt like having a junior developer with infinite energy and zero judgment. Incredibly fast, but you couldn't trust the output without reviewing every line. Most developers, myself included, didn't immediately realize they could develop differently. We carried the old habits forward. We used agent-level tools with autocomplete-level workflows. We vibed when we should have been planning.

What's interesting is that my team at CloneForce was starting to figure this out organically. We were pushing the technology hard — testing what it could do, finding where it broke, discovering what worked. And we kept arriving at the same conclusion: the AI performed dramatically better when we gave it structured context and clear plans upfront.

It started with spec-based development. We began writing specs before prompting — breaking problems down, defining what we wanted before asking the AI to build it. That helped a lot. Then we evolved into harness engineering — wrapping the AI in structured tooling, automated tests, log analysis, deployment pipelines. That's when we started seeing real results. The specs told the AI what to build. The harness told it whether it worked. Together, the output went from jQuery spaghetti to code we could actually ship.
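The harness idea can be sketched in a few lines of Python. To be clear, this is a toy illustration, not CloneForce's actual tooling: the names run_pytest and harness_loop, and the idea of handing the failure log to a fix callback, are all my own assumptions about what such a loop might look like.

```python
import subprocess
from typing import Callable, Tuple

def run_pytest() -> Tuple[bool, str]:
    """One concrete check: run the test suite and capture its output.
    The harness only needs a pass/fail signal plus a log to hand back."""
    result = subprocess.run(
        ["python", "-m", "pytest", "-q"],
        capture_output=True, text=True,
    )
    return result.returncode == 0, result.stdout + result.stderr

def harness_loop(check: Callable[[], Tuple[bool, str]],
                 fix: Callable[[str], None],
                 max_rounds: int = 5) -> bool:
    """The spec tells the agent what to build; the check tells it whether
    it worked. On failure, feed the log back to the agent and retry."""
    for _ in range(max_rounds):
        passed, log = check()
        if passed:
            return True
        fix(log)  # hypothetically: send the failure log back to the coding agent
    return False
```

The key design point is that the check is objective. The agent doesn't get to decide whether it's done; the tests decide, and the loop keeps handing failures back until they pass or the round budget runs out.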

We were moving toward planning-first development without knowing there was a name for it.

If we hadn't been pushing the technology and seeing what it could actually do, we never would have discovered this pattern. And we never would have found Superpowers.

The Brainstorm-First Workflow

When I found Superpowers, it wasn't a revelation — it was a relief. We'd been doing a rougher version of this already. What Superpowers did was formalize it. They figured out the better way to do what we were stumbling toward on our own — the right structure, the right sequence, the right guardrails. It took our organic discovery and turned it into a repeatable system.

Superpowers is a skills framework for Claude Code, created by Jesse Vincent (obra). It's one of the most popular Claude Code extensions out there — over 42,000 stars on GitHub, officially featured in Anthropic's marketplace. But the star count isn't what matters. What matters is the workflow it enforces.

The core loop is simple: brainstorm, plan, execute, review.

That's it. Four steps. But the order is non-negotiable, and the first step is the one that changes everything.

When you run /superpowers:brainstorming, Claude doesn't write code. It doesn't generate files. It doesn't touch your codebase. Instead, it does two things: it researches and it asks questions.

It goes online and pulls in real context — API documentation, GitHub repos, GitHub issues, Stack Overflow threads, anything relevant to what you're building. This was something we were already doing manually at CloneForce. I'd have Claude Code search the web for API docs, dig through GitHub issues for known problems, check Stack Overflow for edge cases — all before writing a line of code. Superpowers just formalized it and did it better.

Then it asks you questions. One at a time. It wants to understand what you're actually trying to build, what constraints exist, what alternatives you've considered, what tradeoffs you're willing to make.

It explores the problem space with you — armed with real research, not just its training data. It presents the emerging design in sections and asks you to validate each one. It pushes back on assumptions. It surfaces edge cases you hadn't thought about. By the end of a brainstorm session, you have a design — not code, a design — that both you and the AI understand deeply.

Then /superpowers:writing-plans takes that approved design and breaks it into bite-sized implementation tasks. Each task is scoped, ordered, and has clear acceptance criteria. The plan becomes a document — a spec — that the AI can execute against with precision instead of guessing with confidence.

Then /superpowers:executing-plans runs the tasks, using subagents and built-in code review checkpoints to catch issues as they happen rather than after you've shipped them.

The first time I used this workflow end-to-end, I spent about twenty minutes in the brainstorm phase. I kept thinking "I should just start coding." But I stayed with it. I answered the questions. I validated the design sections. I let the planning phase finish.

Then the execution phase took maybe thirty minutes and produced clean, working code that matched my architecture, respected my existing patterns, and handled the edge cases I would have spent hours debugging under my old workflow.

The brainstorm and planning phase got us 80-90% of the way there. The remaining 10-20% was just debugging through my automated process — the Ralph Wiggum loop. Plan it, execute it, let the loop catch what's left. That's the whole workflow.

Why Planning First Is THE Differentiator

Here's the thing most people miss about AI coding tools: the AI is not the bottleneck. You are.

When you give Claude a vague prompt — "build me a login page" — it has to make dozens of assumptions. What auth provider? What session strategy? What error handling patterns does your app use? What does your design system look like? The AI fills in every gap with its best guess. And that's not guessing — that's hallucinating. It's making things up with confidence. Every assumption is a potential bug, and vague input is an invitation for the AI to hallucinate its way through your codebase.

We saw this early on. Claude would generate fake URLs because it wanted to be helpful. Links that looked completely real but went nowhere. It got better over time, but early on it was a real problem because you couldn't tell at a glance if something was working or fabricated. That's what hallucination looks like in practice: confident, plausible, wrong.

What you want is the opposite: structured input with checkpoints and reviews. When you give Claude a plan — a structured document that says "here's the auth provider, here's the session strategy, here's how errors are handled in this codebase, here's the component library we use" — it doesn't hallucinate. It executes. And at every stage there's a checkpoint where you review what it built before it moves on.

You want it to correct itself when it's wrong — not barrel forward hoping you won't notice. And this is what I actually see after planning: the AI catches its own issues and corrects itself mid-execution, because it has the plan to compare against. It has something concrete to reference. Without a plan, it has no way to know it's off track. With one, it self-corrects because it can see the gap between what it built and what the spec says it should have built.

This is why Superpowers' brainstorm phase matters so much. It's not just a nice-to-have warm-up step. It's the mechanism that converts your vague idea into a structured spec. It forces the solution design to happen before the implementation — which is the thing experienced engineers do naturally but AI agents never do unprompted.

Left to its own devices, an AI coding assistant will always skip straight to code. It's trained to be helpful, and code feels helpful. But helpful and correct are not the same thing. The brainstorm-first workflow interrupts that instinct and replaces it with rigor.

The planning document also creates something powerful: a referenceable source of truth. When you're three tasks into an implementation and something feels off, you can check the plan. When a code review checkpoint flags an issue, the subagent can compare against the spec. The plan isn't just preparation — it's a guardrail that keeps the entire execution on track.

This connects directly to spec-driven development and harness engineering — the idea that structured inputs produce structured outputs. Superpowers didn't invent this concept, but it made it accessible. It wrapped the discipline of solution design into a tool that anyone can use, even if they've never written a design doc in their life.

What This Reveals About Senior Engineers

There's a narrative floating around that AI is going to flatten the gap between junior and senior developers. That experience won't matter as much when the AI can write the code. I think it's exactly backwards.

The skill that matters most with AI isn't typing speed. It isn't prompt tricks. It isn't knowing which model is 2% better at JavaScript. The skill that matters is knowing how to break down a problem, design the solution, anticipate failure modes, and guide execution. That's what senior engineers do. That's what they've always done.

Vibe coding skips all of this. It's like building a house without blueprints. You might end up with something that has walls and a roof, but the plumbing won't pass inspection and the load-bearing wall is in the wrong place.

AI tools amplify whatever you bring to them. If you bring vague ideas, you get vague code. If you bring structured plans, you get structured software. The amplification is neutral — it scales your strengths and your weaknesses equally.

Almost everything that makes someone a senior engineer — designing systems, managing complexity, thinking through tradeoffs, catching problems before they become expensive — is what now yields the best outcomes with AI. The planning muscle, the architecture muscle, the "let me think about this before we start building" muscle. Those are the muscles that matter more than ever.

But I want to be honest about something — and this cuts both ways.

When I joined CloneForce, the team before me had been vibe coding the entire system. These weren't dumb people — some had PhDs. They were smart. But they were junior in terms of production engineering experience, and they'd built the whole thing by prompting AI without structured workflows. We had to rewrite and rearchitect that codebase from scratch. The new system was significantly better, but the original attempt wasn't a failure of intelligence — it was a failure of experience. They didn't know what they didn't know.

That said, while experience trumps almost everything, smart junior developers can absolutely do this work too. At CloneForce, we had a real junior developer on the team. She worked hard and was sharp. Training her to use these workflows — teaching her how to learn the technologies, how to ask the right questions, how to spot when the AI was off track — was one of the best experiences I've had as a lead. She figured things out because she was willing to push through the confusion.

The career path for people starting out is harder and easier at the same time. Harder because the bar for what's expected is rising fast. Easier because the tools give you leverage that didn't exist two years ago. A junior developer who learns to plan before coding and build structured workflows around AI will outperform a mid-level developer who's still vibing. The opportunity is real — but you have to learn the right way to work with these tools, not just the tools themselves.

Superpowers made all of this visible to me. The brainstorm phase is literally the senior engineer workflow, externalized into a tool. It asks the questions a good tech lead would ask. It pushes for clarity the way a good architect would. And then it hands off to execution with the confidence that comes from actually understanding what you're building.

Try It. Seriously.

Here's my challenge to you: the next time you open Claude Code to build something, don't start coding. Start brainstorming.

If you have Superpowers installed, run /superpowers:brainstorming and answer the questions honestly. Don't rush through them. Don't skip the validation steps. Let the design emerge before the code does.

If you don't have Superpowers, do it manually. Before you write a single prompt that generates code, spend fifteen minutes writing down what you're building, why you're building it, what constraints matter, and what the implementation should look like. Then feed that document to the AI and watch the difference.
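If you want a starting point for that fifteen-minute write-up, here's a minimal sketch that generates a blank design brief. The file name and section headings are my own suggestions, not anything Superpowers prescribes:

```python
from pathlib import Path
from textwrap import dedent

# Illustrative template; the sections mirror the four questions above:
# what you're building, why, what constraints matter, and what the
# implementation should look like.
DESIGN_BRIEF = dedent("""\
    Design brief: <feature name>

    What we're building:
    One or two sentences describing the feature.

    Why:
    The problem this solves and who it's for.

    Constraints:
    - Auth provider, session strategy, error-handling patterns
    - Design system / component library to reuse
    - Anything the implementation must NOT touch

    Implementation sketch:
    Ordered, bite-sized tasks, each with acceptance criteria.
""")

Path("design-brief.md").write_text(DESIGN_BRIEF)
print("Wrote design-brief.md — fill it in, then feed it to the AI.")
```

Fill it in, paste it into your first prompt, and compare the output against a session where you skipped the brief. The difference is the whole argument of this post.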

Junior developers who learn this workflow early will leapfrog peers who just vibe code their way through projects. You'll build better intuition about system design because the tool forces you to think about it explicitly instead of skipping past it.

Senior developers who already think this way will find AI is the multiplier they've been waiting for. The planning skills you've spent years developing are suddenly the most valuable part of your toolkit. AI doesn't replace your experience — it finally gives your experience leverage.

For me, once I started planning before execution, that was the game changer. Not a better model. Not a faster editor. Not a cleverer prompt. Just the discipline of thinking before building — made visible and repeatable by a tool that wouldn't let me skip the step I most wanted to skip.

That's the gap between vibe coding and real AI-augmented engineering. It's not the AI. It's the plan.

— Bill John Tran

© 2026 Bill John Tran. All rights reserved.
