When to use this
- You’ve scoped a feature or fix and can describe the steps in a plan
- The implementation involves writing code, running tests, and iterating on lint/build failures
- You want multi-model review (not just the model that wrote the code) before calling it done
- You’d rather not wait at your terminal while the agent codes — let it run in the background on cloud resources
The workflow
`implement.fabro`
How the handoff works
The key insight is that planning happens interactively and implementation happens autonomously. The workflow's `implement` node reads a plan file that you created in your REPL session — it doesn't generate its own plan.
Step 1: Plan in Claude Code
Use Claude Code's plan mode to collaborate on an implementation plan. Go back and forth until you're happy with the steps, file changes, and test strategy.

Step 2: Hand off to Fabro
Once the plan is ready, run the `/fabro-implement` slash command. This reads your Claude Code plan file and launches a Fabro workflow run:
The `--detach` flag returns immediately so you get your terminal back. Fabro executes the workflow on cloud resources in the background.
The Claude Code command
To set up the `/fabro-implement` slash command, create this file:
`~/.claude/commands/fabro-implement.md`
The command resolves `CLAUDE_CODE_PLAN_FILE_PATH` to the actual plan file, passes it as the `--goal-file` to Fabro, and the workflow's `implement` node reads that file to know what to build.
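As a rough sketch, the command file body might boil down to a single launch instruction. The `fabro run` subcommand name and the wording here are assumptions; only `--goal-file` and `--detach` come from this workflow:

```markdown
Read the implementation plan at $CLAUDE_CODE_PLAN_FILE_PATH, then launch
the background run:

    fabro run implement.fabro --goal-file "$CLAUDE_CODE_PLAN_FILE_PATH" --detach
```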
How it works
Preflight checks
Before touching any code, the workflow validates the environment:

- Toolchain — ensures Rust is installed (installs via `rustup` if not). Exits immediately if this fails.
- Preflight Compile — `cargo check --workspace` confirms the codebase compiles. Exits if it doesn't — no point implementing against a broken baseline.
- Preflight Lint — `cargo clippy --workspace -- -D warnings` checks for lint warnings. If lints fail, the `fix_lints` agent fixes them (up to 3 attempts) before proceeding.

The `implement` node starts from a clean, compiling, lint-free codebase.
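The shape of the gate ordering can be sketched as follows. This is illustrative only, with assumed function names, not Fabro's actual API: the toolchain and compile checks are hard exits, while lint failures get a bounded number of fix attempts.

```rust
// Sketch of the preflight ordering (names are assumptions, not Fabro's
// API). Hard gates return an error immediately; lint failures give the
// fix-lints step up to three attempts before giving up.
fn preflight(
    toolchain_ok: impl Fn() -> bool,
    compiles: impl Fn() -> bool,
    lints_clean: impl Fn() -> bool,
    mut fix_lints: impl FnMut(),
) -> Result<(), &'static str> {
    if !toolchain_ok() {
        return Err("Rust toolchain missing"); // exit immediately
    }
    if !compiles() {
        return Err("baseline does not compile"); // exit immediately
    }
    let mut attempts = 0;
    while !lints_clean() {
        if attempts == 3 {
            return Err("lints still failing after 3 fix attempts");
        }
        fix_lints(); // fix-lints step gets another attempt
        attempts += 1;
    }
    Ok(()) // clean baseline: safe to hand off to implementation
}
```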
Implementation
The `implement` node is the core agent stage. Its prompt is simple:
The agent reads the plan file (passed via `--goal-file`), then writes code, creates tests, and iterates until all plan steps are complete. The "red/green TDD" instruction means: write a failing test first, then make it pass.
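As a concrete, hypothetical illustration of red/green in Rust (`slugify` is an invented example, not part of this workflow): the test is written first and fails, then the function is implemented until it passes.

```rust
// Hypothetical red/green example: the test below is written first and
// fails to compile or pass (red), then `slugify` is implemented until
// the assertion holds (green).
pub fn slugify(title: &str) -> String {
    title.trim().to_lowercase().replace(' ', "-")
}

#[cfg(test)]
mod tests {
    use super::slugify;

    #[test]
    fn slugifies_titles() {
        assert_eq!(slugify(" Hello World "), "hello-world");
    }
}
```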
Multi-model simplification
After implementation, the code passes through three independent simplification stages. Each uses the same `@prompts/simplify.md` prompt but brings different strengths. Running the same review prompt across Claude, Gemini, and GPT catches different classes of issues — verbose code one model wrote that another would simplify, edge cases one model spots that others miss, naming improvements that vary by model preference.
The models override the graph-level default via per-node model attributes:
Verification and fixup
The `verify` node runs clippy and the full test suite:
- `goal_gate=true` — the workflow cannot succeed unless verification passes
- `retry_target="fixup"` — if verification fails at the exit node, route back to the `fixup` agent instead of failing immediately
The `fixup` agent reads the build output and fixes lint warnings and test failures. It loops back to `verify`, with `max_visits=3` as a circuit breaker.
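The control flow amounts to a bounded retry loop. Sketched below with assumed names (this is the shape of the verify/fixup cycle, not Fabro's engine): verification is the goal gate, the fixup step consumes its failure output, and the visit limit is the circuit breaker.

```rust
// Sketch of the verify/fixup cycle with assumed names: `verify` is the
// goal gate, `fixup` receives its failure output, and `max_visits`
// caps how many fixup passes run before the workflow fails.
fn run_with_fixup(
    mut verify: impl FnMut() -> Result<(), String>,
    mut fixup: impl FnMut(&str),
    max_visits: u32,
) -> bool {
    let mut visits = 0;
    loop {
        match verify() {
            // Goal gate passes: the workflow can succeed.
            Ok(()) => return true,
            Err(output) => {
                if visits == max_visits {
                    return false; // circuit breaker: fixup budget exhausted
                }
                fixup(&output); // route back to the fixup agent
                visits += 1;
            }
        }
    }
}
```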
Format
A final `cargo fmt --all` ensures consistent formatting. This is also a goal gate — unformatted code means the workflow fails.
Adapting for other languages
This workflow is Rust-specific, but the pattern generalizes. Replace the shell commands for your language:

| Stage | Rust | TypeScript | Python |
|---|---|---|---|
| Toolchain | `rustup` | `bun --version` | `python3 --version` |
| Compile | `cargo check` | `bun run typecheck` | `mypy .` |
| Lint | `cargo clippy` | `bun run lint` | `ruff check .` |
| Test | `cargo nextest run` | `bun test` | `pytest` |
| Format | `cargo fmt` | `bun run format` | `ruff format .` |