Fabro CLI

Run AI workflows from the command line with fabro run start, validate DOT workflow definitions with fabro validate, and use dry runs to test workflow logic before making real LLM calls.
fabro run start spec-dod-multimodel.fabro
fabro validate my-workflow.fabro
fabro run start --dry-run my-workflow.fabro
The CLI streams LLM responses in real time and supports interactive tool approval: each tool call pauses for you to approve or reject via an arrow-key prompt, giving fine-grained control over what the agent does.
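The approval flow above amounts to gating each tool call behind a yes/no decision before it executes. A minimal sketch of that pattern in Python (the names ToolCall and run_with_approval are illustrative, not fabro's API; the real CLI replaces the approve callback with an arrow-key prompt):

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple


@dataclass
class ToolCall:
    """A pending tool invocation requested by the agent (hypothetical shape)."""
    name: str
    args: dict = field(default_factory=dict)


def run_with_approval(
    calls: List[ToolCall],
    approve: Callable[[ToolCall], bool],
) -> List[Tuple[str, str]]:
    """Run each tool call only if the approver says yes; record the outcome."""
    results = []
    for call in calls:
        if approve(call):  # in the real CLI, this is the interactive prompt
            results.append((call.name, "executed"))
        else:
            results.append((call.name, "rejected"))
    return results


# Example policy: auto-reject anything whose arguments mention "rm".
decisions = run_with_approval(
    [
        ToolCall("shell", {"cmd": "rm -rf build"}),
        ToolCall("read_file", {"path": "a.txt"}),
    ],
    approve=lambda c: "rm" not in str(c.args),
)
# decisions == [("shell", "rejected"), ("read_file", "executed")]
```

The key property is that nothing runs until the approver returns; rejecting a call skips it without aborting the rest of the workflow.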

Docker sandboxing

Agent tool execution can now run inside Docker containers, so workflows can safely run shell commands, edit files, and install dependencies without affecting your host machine.
fabro run start --docker my-workflow.fabro
The container is shared across all stages in a run, so tools have access to the same filesystem throughout the workflow.

Web dashboard

A pipeline kanban board shows all active runs with their status, progress, and any pending human-in-the-loop prompts. Watch workflows execute in real time from the browser.

More

  • Color-coded terminal output with ANSI styling makes it easier to follow agent activity
  • Use --output-format json for NDJSON event streaming into CI/CD pipelines or custom dashboards
  • Ships with Claude Opus 4.6 (1M context window) as the default Anthropic model and Gemini 3.1 Pro Preview as the default Gemini model
  • Anthropic models use adaptive thinking by default for deeper reasoning on complex tasks
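The --output-format json mode mentioned above emits one JSON event per line (NDJSON), which makes it easy to consume programmatically. A minimal consumer sketch; the event field names here (type, stage, status) are invented for illustration and are not fabro's documented schema:

```python
import json


def summarize_events(ndjson_text: str) -> dict:
    """Tally NDJSON events by their 'type' field (hypothetical schema)."""
    counts: dict = {}
    for line in ndjson_text.splitlines():
        if not line.strip():
            continue  # tolerate blank lines in the stream
        event = json.loads(line)
        kind = event.get("type", "unknown")
        counts[kind] = counts.get(kind, 0) + 1
    return counts


# Example stream (invented events, not real fabro output):
stream = "\n".join([
    '{"type": "stage_start", "stage": "draft"}',
    '{"type": "token", "text": "Hello"}',
    '{"type": "stage_complete", "stage": "draft", "status": "ok"}',
])
summary = summarize_events(stream)
# summary == {"stage_start": 1, "token": 1, "stage_complete": 1}
```

In a CI pipeline you would pipe the CLI's JSON output into a consumer like this, or into a tool such as jq, to trigger follow-on steps when particular events appear.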