AI Development Workflow — 5-Part Series
  1. AI-Assisted Development: A Loop, Not a Chat
  2. /plan-issue: Collaborative Planning with AI
  3. /work-issue: Autonomous Implementation
  4. /qa-run: AI-Driven QA That Closes the Loop
  5. Specialist Agents: Looking at Every Page with Different Eyes

/plan-issue: Collaborative Planning with AI

This is Part 2 of my series on AI-assisted development. Part 1 covers the overall workflow loop.

I kept running into the same problem: the AI would implement something, and it wasn’t what I wanted. Not because the AI was bad at coding, but because I was bad at specifying. Vague instructions produce vague implementations.

So I started planning separately from implementing. /plan-issue is a Claude Code slash command that takes a rough Linear issue and turns it into a detailed spec through conversation. The AI explores the codebase; I make decisions. No code gets written. The output is a Linear issue with enough detail that /work-issue can implement it later without needing me in the room.

How I structured the planning phase

The command is a markdown prompt in .claude/commands/plan-issue.md. I iterated on this a lot before landing on something that worked. The key insight was that planning needs a rigid structure — otherwise I’d skip steps and end up with vague specs anyway. Here’s what the protocol looks like now and why each piece matters.
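For orientation, here's a stripped-down sketch of what a command file like this can contain. This is illustrative structure, not my actual file — the real prompt is longer and more prescriptive:

```markdown
# /plan-issue

You are planning a Linear issue, not implementing it. Do not write code.

1. Fetch the issue from Linear (use --id if given, else the oldest Backlog issue).
2. Move it to In Progress.
3. Read comments, related issues, docs/, and the relevant source files.
4. Present a structured analysis: what exists, scope, approach, open questions, risks.
5. If the issue touches 3+ areas or exceeds ~2 hours of work, propose a subtask split.
6. After approval, write the refined spec using the standard template.
7. Update Linear: description, subtasks, status changes.
8. Ask whether docs/ needs updating.

Wait for explicit approval at steps 5 and 6. Never create or modify issues without it.
```

The numbered steps in the prompt map directly onto the protocol below.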

Step 1: Fetch the issue

/plan-issue                    # Auto-pick oldest Backlog issue
/plan-issue --id=PROJ-42       # Plan a specific issue

The AI connects to Linear via MCP, grabs the issue, and shows me what we’re working with — title, priority, existing description (often sparse or empty).

Step 2: Claim it

The issue moves to In Progress immediately. This prevents another run (or another developer) from picking the same issue. Simple but important when you’re batching work.
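The claim is effectively a compare-and-set: proceed only if you're the one who moved the issue out of Backlog. A minimal Go sketch of the idea — the in-memory store and names here are hypothetical; the real command does this through the Linear MCP server:

```go
package main

import (
	"fmt"
	"sync"
)

// issueStore stands in for Linear. claim moves an issue from "Backlog"
// to "In Progress" atomically and reports whether this caller won.
type issueStore struct {
	mu     sync.Mutex
	status map[string]string
}

func (s *issueStore) claim(id string) bool {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.status[id] != "Backlog" {
		return false // already claimed by another run or developer
	}
	s.status[id] = "In Progress"
	return true
}

func main() {
	store := &issueStore{status: map[string]string{"PROJ-42": "Backlog"}}
	fmt.Println(store.claim("PROJ-42")) // first run wins: true
	fmt.Println(store.claim("PROJ-42")) // second run backs off: false
}
```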

Step 3: Gather context

The AI reads:

  • Linear data — comments, parent issues, sub-issues, related issues
  • Architecture docs from docs/ — platform overview, tech stack, data model (I mentioned this in Part 1 — the docs/ folder gives the AI persistent project context)
  • Relevant code — existing handlers, components, DB queries in the area the issue touches

It puts together what exists, what needs to change, and what could go wrong. This is stuff I’d otherwise do by hand — grepping through files, reading handlers, checking the schema.

Step 4: Present analysis

The AI presents its findings in a structured format:

## Analysis: Add export functionality for reports

### What exists today
- Reports page at /reports shows filterable list of records
- Reports table has status, date range, and category fields
- PDF generation library already in use (for invoices)

### Scope assessment
- **Size:** Medium
- **Areas affected:** Backend (export endpoint), Frontend (export UI), Database (export job tracking)
- **Key files:** internal/api/reports.go, web/src/pages/Reports.tsx, db/queries/exports.sql

### Proposed approach
- Add exports table to track async export jobs
- New API endpoint triggers export generation in background
- Support CSV and PDF formats, download via signed URL

### Open questions
1. Should exports include all visible columns or let users pick?
2. What's the max row limit — 10k? 50k? No limit?
3. Should we support scheduled/recurring exports?

### Risks / Dependencies
- Large exports need background processing; can't block the request
- PDF generation for large datasets may need pagination

This is where I do my actual work. I answer the open questions, push back on scope, redirect the approach. The AI is doing research; I’m making product decisions.
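To make the "can't block the request" risk concrete, the proposed approach boils down to a standard async-job pattern: record a pending job, return immediately, process out of band, then publish a download URL. A minimal Go sketch under those assumptions — all names are hypothetical, not the spec's actual handlers:

```go
package main

import "fmt"

// exportJob tracks one async export, as the proposed exports table would.
type exportJob struct {
	ID     int
	Format string // "csv" or "pdf"
	Status string // "pending" -> "done"
	URL    string // signed download URL once complete
}

// startExport is what the export endpoint would do: record the job
// and return immediately instead of blocking the request.
func startExport(jobs map[int]*exportJob, id int, format string) *exportJob {
	job := &exportJob{ID: id, Format: format, Status: "pending"}
	jobs[id] = job
	return job
}

// runJob is the background worker's side: generate the file and
// publish a signed URL. Actual file generation is elided here.
func runJob(job *exportJob) {
	job.URL = fmt.Sprintf("/downloads/%d.%s?sig=abc123", job.ID, job.Format)
	job.Status = "done"
}

func main() {
	jobs := map[int]*exportJob{}
	job := startExport(jobs, 1, "csv")
	fmt.Println(job.Status) // "pending" — the HTTP request has already returned
	runJob(job)
	fmt.Println(job.Status, job.URL)
}
```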

Step 5: Split if needed

The AI assesses whether the issue should be broken into subtasks. The heuristic is practical:

  • Touches 3+ distinct areas? Split.
  • Would take more than ~2 hours of autonomous work? Split.
  • Has independent pieces that could be parallelized? Split.

If splitting is warranted, it proposes a breakdown:

## Suggested subtask breakdown

1. **Add exports table and API endpoint** — Migration + sqlc queries + handler
2. **Build export generation worker** — Background job, CSV/PDF rendering, file storage
3. **Export UI** — Export button, format picker, download status indicator

Each subtask will be independently implementable by /work-issue.

I approve, reject, or modify the breakdown. Nothing gets created until I say so.

Step 6: Write the refined spec

The AI writes a complete description using a consistent template:

## Summary
Add export functionality for the reports page (CSV and PDF).

## Acceptance criteria
- [ ] Users can trigger an export from the reports page
- [ ] Format picker supports CSV and PDF
- [ ] Large exports run in background with progress indicator
- [ ] Completed exports available for download via signed URL
- [ ] Export history visible in user's account section

## Technical approach
- **Backend:** New exports table, background worker in cmd/server/
- **Frontend:** ExportButton component on Reports, status polling
- **Database:** migration 000042_add_exports.up.sql

## Files likely affected
- `internal/api/exports.go` — New handler
- `db/queries/exports.sql` — New queries
- `web/src/pages/Reports.tsx` — Add export button
- `web/src/components/ExportStatus.tsx` — New component

## Testing strategy
- Backend: unit tests for export worker, handler tests for export API
- Frontend: Vitest tests for ExportButton and status indicator

## Out of scope
- Scheduled/recurring exports
- Custom column selection
- Export sharing between users

I review this final spec. If it looks good, I approve.

Step 7: Update Linear

The AI updates Linear — writes the refined description, creates subtasks if splitting was approved, moves things to Todo. Each subtask gets its own complete spec.

The parent issue stays In Progress if it was split (it auto-completes when all subtasks finish). Individual subtasks go to Todo, which is the /work-issue pickup queue.

Step 8: Update docs (optional)

If the planned work would change the architecture, data model, or API surface, the AI asks if I want to update the relevant files in docs/. I usually say yes — it’s a small overhead that keeps future runs accurate. The AI drafts the doc update, I review it, and it gets applied alongside moving the issue to Todo.

The lesson: planning is where the real work happens

I almost didn’t build this phase. It seemed like unnecessary overhead — why not just let the AI start coding? Here’s what changed my mind:

It catches scope issues before code exists. The worst time to discover that an issue is too big is after you've written half of it. Planning forces the "this needs splitting" conversation up front, when it's free.

It creates a paper trail. Every planned issue has acceptance criteria, technical approach, file paths, and testing strategy. When /work-issue picks it up weeks later, the context is right there. When I review the commit, I can check it against the spec.

It stops the AI from improvising. Without a spec, the AI will make product decisions for you. It’ll add features you didn’t ask for, pick approaches you wouldn’t choose, skip edge cases you care about. Planning is where I write down what I want so the AI doesn’t have to guess later.

It’s faster than it sounds. A planning session takes 5-10 minutes. I spend my time on the interesting parts — making decisions, answering questions, deciding what’s in and what’s out.

Mistakes I made and what I’d do differently

Start from Backlog, not from your head. Keep your ideas in Linear’s Backlog. Let /plan-issue pull from there. This prevents the “I’ll just quickly add this” temptation that leads to unplanned scope.

Always wait for the open questions. Don’t rush past Step 4. The AI’s questions often reveal things I hadn’t considered. “Should exports include all columns or let users pick?” is a product decision that changes the implementation significantly.

Split aggressively. Small, focused issues lead to better autonomous implementation. If an issue touches backend, frontend, and database, that’s three subtasks. Each one fits in a single context window.

The “Out of scope” section matters more than you’d think. Listing what’s NOT included keeps the AI (and me) focused. Without it, /work-issue will sometimes add “helpful” extras that weren’t asked for.