No single model is best at everything. Fabro lets you assign the right model to each workflow step — cheap, fast models for boilerplate, frontier models for hard reasoning, and a different provider for cross-critique so the reviewer brings fresh eyes. When a provider goes down, Fabro can fail over automatically.
Ensemble workflow: fan out to Opus and Gemini Pro, merge, then synthesize
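The captioned ensemble workflow might be sketched as follows. This is an illustrative reconstruction, not a shipped example — the node names, class names, and edges are assumptions based on the stylesheet syntax shown later on this page:

```dot
digraph Ensemble {
    graph [
        model_stylesheet="
            .draft-opus   { model: claude-opus-4-6; }
            .draft-gemini { model: gemini-3.1-pro-preview; }
        "
    ]

    // Fan out: draft the same task on two different providers
    task       [label="Describe Task"]
    draft_a    [label="Draft A", class="draft-opus"]
    draft_b    [label="Draft B", class="draft-gemini"]
    synthesize [label="Merge & Synthesize"]

    task -> draft_a
    task -> draft_b
    draft_a -> synthesize
    draft_b -> synthesize
}
```

Because the two drafts come from different providers, the synthesis step can weigh genuinely independent answers rather than two samples from the same model.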

Model catalog

| Model | Provider | Aliases | Context | Cost (in / out per Mtok) | Speed |
| --- | --- | --- | --- | --- | --- |
| claude-opus-4-6 | anthropic | opus, claude-opus | 1M | 15.00 / 75.00 | 25 tok/s |
| claude-sonnet-4-5 | anthropic | sonnet, claude-sonnet | 200K | 3.00 / 15.00 | 50 tok/s |
| claude-haiku-4-5 | anthropic | haiku, claude-haiku | 200K | 0.80 / 4.00 | 100 tok/s |
| gpt-5.2 | openai | gpt5 | 1M | 1.80 / 14.00 | 65 tok/s |
| gpt-5-mini | openai | gpt5-mini | 1M | 0.20 / 2.00 | 70 tok/s |
| gpt-5.2-codex | openai | | 1M | 1.80 / 14.00 | 100 tok/s |
| gpt-5.3-codex | openai | codex | 1M | 1.80 / 14.00 | 100 tok/s |
| gpt-5.3-codex-spark | openai | codex-spark | 128K | n/a | 1000 tok/s |
| gpt-5.4 | openai | gpt54 | 1M | 2.50 / 15.00 | 70 tok/s |
| gpt-5.4-pro | openai | gpt54-pro | 1M | 30.00 / 180.00 | 20 tok/s |
| gemini-3.1-pro-preview | gemini | gemini-pro | 1M | 2.00 / 12.00 | 85 tok/s |
| gemini-3.1-pro-preview-customtools | gemini | gemini-customtools | 1M | 2.00 / 12.00 | 85 tok/s |
| gemini-3-flash-preview | gemini | gemini-flash | 1M | 0.50 / 3.00 | 150 tok/s |
| gemini-3.1-flash-lite-preview | gemini | gemini-flash-lite | 1M | 0.20 / 1.50 | 200 tok/s |
| kimi-k2.5 | kimi | kimi | 262K | 0.60 / 3.00 | 50 tok/s |
| glm-4.7 | zai | glm, glm4 | 203K | 0.60 / 2.20 | 100 tok/s |
| minimax-m2.5 | minimax | minimax | 197K | 0.30 / 1.20 | 45 tok/s |
| mercury-2 | inception | mercury | 131K | 0.20 / 0.80 | 1000 tok/s |
Each provider requires its own API key, set via an environment variable (e.g. ANTHROPIC_API_KEY, OPENAI_API_KEY, GEMINI_API_KEY). See the Quick Start for setup.
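For example, exporting the keys in your shell before launching a run might look like this (the values below are placeholders, not real keys):

```shell
# One key per provider you intend to use; unused providers can be skipped.
export ANTHROPIC_API_KEY="sk-ant-your-key-here"
export OPENAI_API_KEY="sk-your-key-here"
export GEMINI_API_KEY="your-key-here"
```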

Default models

When no model is specified, the fabro exec command uses a default model based on the provider:
| Provider | Default model |
| --- | --- |
| anthropic | claude-opus-4-6 |
| openai | gpt-5.2-codex |
| gemini | gemini-3.1-pro-preview |
| kimi | kimi-k2.5 |
| zai | glm-4.7 |
| minimax | minimax-m2.5 |
| inception | mercury |

Using models in workflows

Assign models to workflow nodes using model stylesheets, which use a CSS-like syntax:
example.fabro
digraph Example {
    graph [
        model_stylesheet="
            *        { model: claude-haiku-4-5; }
            .coding  { model: claude-sonnet-4-5; reasoning_effort: high; }
            #review  { model: gemini-3.1-pro-preview; }
        "
    ]

    spec      [label="Write Spec"]
    implement [label="Implement", class="coding"]
    review    [label="Review"]
}
This routes the spec node to Haiku (the default), implementation to Sonnet, and review to Gemini Pro.
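If stylesheets also accept the catalog aliases from the table above — an assumption on this page's part, since the examples here only use full model names — the same routing could be written more tersely:

```dot
digraph Example {
    graph [
        model_stylesheet="
            *        { model: haiku; }
            .coding  { model: sonnet; reasoning_effort: high; }
            #review  { model: gemini-pro; }
        "
    ]

    spec      [label="Write Spec"]
    implement [label="Implement", class="coding"]
    review    [label="Review"]
}
```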

Overriding the default model

Model stylesheets set per-node models inside the workflow graph, but you can also override the default model for an entire run. This is useful for quick experimentation or when you want to swap models without editing the DOT file.

CLI flags

Pass --model and optionally --provider to fabro run:
fabro run files-internal/demo/01-hello.fabro --model claude-opus-4-6
fabro run files-internal/demo/04-pipeline.fabro --model gemini-3.1-pro-preview
These flags set the default model for all nodes that don’t have an explicit model assigned via a stylesheet. The provider is automatically inferred from the model catalog — you only need --provider for models not in the catalog or to force a specific provider.
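For example, forcing a specific provider for a model outside the catalog might look like this (the model name is a placeholder, not a real catalog entry):

```
fabro run implement.fabro --model my-internal-model --provider openai
```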

Run config TOML

For repeatable runs, set the model in a run config file:
run.toml
version = 1
goal = "Implement the feature"
graph = "implement.fabro"

[llm]
model = "claude-sonnet-4-5"

[llm.fallbacks]
anthropic = ["gemini", "openai"]
gemini = ["anthropic", "openai"]
Then launch with:
fabro run run.toml
The [llm.fallbacks] table is optional. It maps each provider to an ordered list of fallback providers to try when the primary is unavailable.
The precedence order is: node-level stylesheet > run config TOML > CLI flags > server defaults. More specific settings always win.
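As a concrete illustration of that precedence (file and node names here are assumptions): suppose the run config sets a Sonnet default while the graph's stylesheet pins a review node to Gemini Pro.

```toml
# run.toml — sets the run-level default model
[llm]
model = "claude-sonnet-4-5"
```

With a stylesheet rule like `#review { model: gemini-3.1-pro-preview; }` in the graph, the review node runs on Gemini Pro, every other node uses Sonnet, and a `--model` flag on the command line would be overridden by both.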

CLI commands

List models

View all available models, or filter by provider:
fabro model list
fabro model list --provider anthropic
fabro model list --query codex

Test models

Verify that your API keys are working by sending a test prompt to each configured provider:
fabro model test
fabro model test --model claude-sonnet-4-5
fabro model test --provider openai
This is useful for confirming connectivity after setup or when adding a new provider key.