Configurable LLM providers and models
Fabro can now merge LLM provider and model catalog entries from settings. Teams can add OpenAI-compatible gateways, route through provider proxies, attach typed extra headers, map a Fabro model ID to a provider-specific `api_id`, and declare model controls and per-speed pricing without waiting for a new built-in catalog entry.
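As a minimal sketch, a settings fragment along these lines could define a custom gateway provider and map a Fabro model ID onto it. The table names `[llm.providers.<id>]`, `extra_headers`, `[llm.models.<id>]`, and `api_id` come from the settings reference below; the URL, header, and model values are illustrative assumptions, not real defaults.

```toml
# Hedged sketch of a merged catalog entry; values are placeholders.

[llm.providers.proxy]
# OpenAI-compatible gateway endpoint (placeholder URL).
base_url = "https://gateway.example.com/v1"
# Typed extra headers attached to requests routed through this provider.
extra_headers = { "x-team" = "platform" }

[llm.models."fast-coder"]
provider = "proxy"
# Provider wire-model name that this Fabro model ID maps to.
api_id = "vendor/fast-coder-v2"
```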
Migration note
Provider values exposed by configuration, model routing, and the API are provider ID strings. Built-in names like `anthropic`, `openai`, and `gemini` still work, and custom IDs such as `proxy` now work wherever the selected catalog defines them.
Clients that generated closed provider enums from older API specs should regenerate against the current OpenAPI schema and treat model provider fields as strings.
LLM catalog
- Added settings-reference coverage for `[llm.providers.<id>]`, provider `extra_headers`, `[llm.models.<id>]`, model limits, features, controls, base costs, and per-speed cost overrides
- Documented OpenAI-compatible provider configuration and gateway header examples
- Documented `api_id` for provider wire-model names
- Documented run-level model controls for reasoning effort and speed
Guardrails
- Added a workspace policy test preventing direct production `Catalog::builtin()` usage outside the catalog owner and tests
- Clarified that `fabro_model::Provider` is a built-in compatibility enum, while open-ended provider identity is string-backed `ProviderId`
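The enum-versus-string split above can be sketched roughly as follows. The type names mirror the changelog (`Provider`, `ProviderId`), but the fields and methods here are assumptions for illustration, not the real `fabro_model` API.

```rust
// Hypothetical sketch of a closed compatibility enum next to an
// open-ended, string-backed provider identity.

/// Closed enum covering only the built-in providers.
#[derive(Debug, PartialEq)]
enum Provider {
    Anthropic,
    OpenAi,
    Gemini,
}

/// Open-ended provider identity: any catalog-defined ID is valid.
#[derive(Debug, Clone, PartialEq)]
struct ProviderId(String);

impl ProviderId {
    fn new(id: &str) -> Self {
        ProviderId(id.to_string())
    }

    /// Map back to the built-in enum when the ID is a known name;
    /// custom IDs like "proxy" simply return None.
    fn as_builtin(&self) -> Option<Provider> {
        match self.0.as_str() {
            "anthropic" => Some(Provider::Anthropic),
            "openai" => Some(Provider::OpenAi),
            "gemini" => Some(Provider::Gemini),
            _ => None,
        }
    }
}

fn main() {
    // A built-in name round-trips to the compatibility enum.
    assert_eq!(
        ProviderId::new("openai").as_builtin(),
        Some(Provider::OpenAi)
    );
    // A custom gateway ID is a valid ProviderId, just not a built-in.
    assert_eq!(ProviderId::new("proxy").as_builtin(), None);
}
```

The design point is that string-backed identity lets new catalog-defined providers flow through routing and the API without code changes, while the enum remains only for compatibility with callers that still match on built-ins.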