Faster, stricter LLM diagnostics
fabro doctor now probes configured LLM providers concurrently instead of waiting on them one by one. Provider probe failures also count as diagnostic errors, so a broken key or unreachable provider no longer looks like a successful configuration check.
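The fan-out pattern described above can be sketched in plain Rust. This is a minimal illustration, not fabro's implementation: the `probe` stub, provider names, and failure mode are all assumptions standing in for real network probes.

```rust
use std::thread;

// Hypothetical stand-in for a real network probe of one provider;
// the provider names and the failure mode are illustrative only.
fn probe(provider: &'static str) -> Result<String, String> {
    if provider == "unreachable" {
        Err(format!("{provider}: connection refused"))
    } else {
        Ok(format!("{provider}: ok"))
    }
}

// Probe every configured provider on its own thread, then collect
// successes and failures; any failure counts as a diagnostic error.
fn probe_all(providers: &[&'static str]) -> (Vec<String>, Vec<String>) {
    let handles: Vec<_> = providers
        .iter()
        .copied()
        .map(|p| thread::spawn(move || probe(p)))
        .collect();
    let (mut oks, mut errs) = (Vec::new(), Vec::new());
    for handle in handles {
        match handle.join().expect("probe thread panicked") {
            Ok(msg) => oks.push(msg),
            Err(msg) => errs.push(msg),
        }
    }
    (oks, errs)
}

fn main() {
    let (oks, errs) = probe_all(&["openai", "anthropic", "unreachable"]);
    println!("{} ok, {} failed", oks.len(), errs.len());
}
```

Because the probes run concurrently, total diagnostic time is bounded by the slowest provider rather than the sum of all of them.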
Doctor output now preserves the underlying LLM error chain when a provider probe fails, which makes network failures, API errors, and provider-specific terminal errors easier to distinguish.
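In Rust, preserving an error chain typically means keeping the cause reachable through `std::error::Error::source` and walking it when rendering. The sketch below is an assumption about the general technique, not fabro's actual error types; `ProbeError` and `render_chain` are hypothetical names.

```rust
use std::error::Error;
use std::fmt;

// Hypothetical probe error that wraps its underlying cause, so the
// chain (network failure, API error, ...) stays visible, not flattened.
#[derive(Debug)]
struct ProbeError {
    provider: String,
    source: Box<dyn Error>,
}

impl fmt::Display for ProbeError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "LLM probe failed for {}", self.provider)
    }
}

impl Error for ProbeError {
    fn source(&self) -> Option<&(dyn Error + 'static)> {
        Some(self.source.as_ref())
    }
}

// Walk Error::source() and join each layer into one diagnostic line,
// the way a doctor report can keep the whole chain readable.
fn render_chain(err: &dyn Error) -> String {
    let mut out = err.to_string();
    let mut cause = err.source();
    while let Some(e) = cause {
        out.push_str(": ");
        out.push_str(&e.to_string());
        cause = e.source();
    }
    out
}

fn main() {
    let err = ProbeError {
        provider: "openai".to_string(),
        source: "connection refused".into(),
    };
    println!("{}", render_chain(&err));
}
```

With this shape, a network-level cause and an API-level cause produce visibly different chains even when the top-level message is the same.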
GPT-5.5 and Claude Opus 4.7 defaults
The model catalog now includes gpt-5.5 and gpt-5.5-pro, with gpt-5.5 set as the OpenAI default. Anthropic's default model is now claude-opus-4-7.
Built-in Fabro workflows were also refreshed to use the newer defaults in their own model settings. This keeps generated plans, simplify stages, and project workflows aligned with the catalog users see through fabro model list.
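Fabro's actual workflow configuration format isn't reproduced here; as a purely illustrative sketch, a per-workflow model setting aligned with the new defaults might look something like this, where the table and key names are assumptions and only the model identifiers come from the release notes:

```toml
# Hypothetical workflow settings; key names are assumptions,
# only the model identifiers come from the release notes.
[workflow.implement]
openai_model = "gpt-5.5"
anthropic_model = "claude-opus-4-7"
```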
More
CLI
- fabro doctor now reports LLM provider probe failures as errors
- fabro doctor probes configured LLM providers concurrently
Workflows
- Built-in implementation workflows now use newer Claude and OpenAI defaults
- Built-in verify gates refresh generated docs before checking them
- Built-in Rust workflow checks now run clippy across all targets with the pinned nightly toolchain
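The clippy check in the last item presumably reduces to an invocation along these lines; the `+nightly` selector and `--all-targets` flag are standard cargo usage, while the specific pinned nightly date is not stated in the notes and would normally live in a `rust-toolchain.toml`:

```shell
# Run clippy over every target kind (libs, bins, tests, examples, benches)
# on the nightly toolchain; a real setup pins nightly to a specific date.
cargo +nightly clippy --all-targets
```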
Fixes
- Fixed OpenAI responses-stream terminal errors being hidden from agent sessions
- Fixed doctor output dropping source-chain details for failed LLM probes
- Fixed an LLM preflight probe regression