| Provider | Runs on | Use case | Status |
|---|---|---|---|
| `local` | Host machine | Development, trusted workflows | Available |
| `docker` | Docker container | Reproducible environments, untrusted code | Available |
| `daytona` | Cloud VM | CI/CD, team-shared runs, SSH debugging | Available |
| `ssh` | Any SSH host | Existing remote machines, dev servers, NAS boxes | Available |
| `exe` | Cloud VM (exe.dev) | Fast ephemeral VMs, lightweight cloud sandboxing | Experimental |
| `sprites` | Cloud VM | Managed cloud sandboxes | In Development |
## Choosing a provider
Set the sandbox provider via CLI flag, run config TOML, or server defaults. If none is specified, the provider defaults to `local`.
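For example, a run config might select the Docker provider. This is a minimal sketch; the `[sandbox]` table and `provider` key names are assumed here, not confirmed by this page:

```toml
# run.toml — select the Docker sandbox provider (key names assumed)
[sandbox]
provider = "docker"
```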
## Local
The local sandbox runs all tool operations directly on the host machine. It’s the default and the simplest option — no setup required beyond the Fabro binary itself.

### How it works

- **Working directory** — Set to the current directory (or `directory` from the run config). Fabro creates it if it doesn’t exist.
- **Commands** — Executed via `/bin/bash -c` in the working directory.
- **File operations** — Read and write directly to the host filesystem. Relative paths resolve against the working directory.
- **Cleanup** — No-op. The local sandbox doesn’t create or destroy anything on cleanup.
### Environment variable filtering
The local sandbox filters sensitive environment variables before passing them to commands. Variables ending in `_API_KEY`, `_SECRET`, `_TOKEN`, `_PASSWORD`, or `_CREDENTIAL` are stripped. A safelist of common variables (`PATH`, `HOME`, `USER`, `SHELL`, `LANG`, `TERM`, `TMPDIR`, `GOPATH`, `CARGO_HOME`, `NVM_DIR`) is always passed through.
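The filtering rule can be sketched in a few lines. This is an illustrative re-implementation, not Fabro's actual code:

```python
# Sketch of the local sandbox's env-filtering rule (illustrative only).
SAFELIST = {"PATH", "HOME", "USER", "SHELL", "LANG", "TERM",
            "TMPDIR", "GOPATH", "CARGO_HOME", "NVM_DIR"}
SENSITIVE_SUFFIXES = ("_API_KEY", "_SECRET", "_TOKEN", "_PASSWORD", "_CREDENTIAL")

def filter_env(env: dict[str, str]) -> dict[str, str]:
    """Drop variables with sensitive suffixes; always keep safelisted names."""
    return {
        name: value
        for name, value in env.items()
        if name in SAFELIST or not name.endswith(SENSITIVE_SUFFIXES)
    }
```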
The local sandbox offers no isolation. Agents can read and modify any file on the host. Use `docker`, `daytona`, or `exe` when running untrusted workflows or when you need a reproducible environment.

## Docker
The Docker sandbox runs all tool operations inside a Docker container. The host working directory is bind-mounted into the container, so file changes are visible on both sides.

### Prerequisites

- Docker Engine running on the host
- The configured image available locally (or `auto_pull` enabled)
### How it works
- **Container lifecycle** — On `initialize()`, Fabro pulls the image (if needed), creates a container with `sleep infinity`, and starts it. On `cleanup()`, Fabro stops and removes the container.
- **Working directory** — The host working directory is bind-mounted at `/workspace` inside the container. All relative paths resolve against this mount point.
- **Commands** — Executed via `docker exec` with `/bin/bash -c` inside the container. Timeout and cancellation are supported.
- **File writes** — Use the Docker API’s tar upload to avoid shell escaping issues with special characters.
- **Platform detection** — The container’s `uname -r` is cached at startup.
### Configuration
The Docker sandbox is configured through the `DockerSandboxConfig`:
| Setting | Default | Description |
|---|---|---|
| `image` | `fabro-agent:latest` | Docker image to use |
| `container_mount_point` | `/workspace` | Mount point inside the container |
| `network_mode` | `bridge` | Docker network mode |
| `extra_mounts` | `[]` | Additional `host:container` bind mounts |
| `memory_limit` | unlimited | Memory limit in bytes |
| `cpu_quota` | unlimited | CPU quota (microseconds per 100 ms period) |
| `auto_pull` | `true` | Pull the image if not found locally |
| `env_vars` | `[]` | Additional `KEY=VALUE` environment variables |
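In run.toml, these settings might look like the following sketch. The `[sandbox.docker]` table name is assumed by analogy with `[sandbox.daytona]`, which appears later on this page; the values are illustrative:

```toml
# run.toml — Docker sandbox settings (table name assumed; values illustrative)
[sandbox.docker]
image = "fabro-agent:latest"
network_mode = "none"         # cut off network access entirely
memory_limit = 2147483648     # 2 GiB, in bytes
auto_pull = true
env_vars = ["CI=true"]
```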
### Preserving the container
By default, the container is destroyed when the run finishes. Pass `--preserve-sandbox` to keep it alive for debugging, then attach with `docker exec -it <id> bash`.
## Daytona
The Daytona sandbox runs all tool operations inside a cloud-hosted VM managed by Daytona. It provides full machine-level isolation, automatic git cloning, and SSH access for debugging.

### Prerequisites

- A `DAYTONA_API_KEY` environment variable
- A GitHub App configured via `fabro install` (for private repository cloning)
### How it works
- **Sandbox lifecycle** — On `initialize()`, Fabro creates a Daytona sandbox (from an image or a snapshot), clones the current git repository into it, and waits until it’s ready. On `cleanup()`, the sandbox is deleted.
- **Working directory** — Fixed at `/home/daytona/workspace`. The current repository is cloned there automatically.
- **Git clone** — Fabro detects the local `origin` remote URL and current branch, converts SSH URLs to HTTPS, and clones into the sandbox. For private repositories, Fabro uses a GitHub App Installation Access Token scoped to `contents: read` on the specific repository. Public repositories are cloned without credentials. If no git repo is detected, the working directory is created empty.
- **Commands** — Executed via the Daytona process API. Commands are base64-encoded and piped through `sh` to support pipes, environment variables, and other shell features.
- **Ephemeral** — Sandboxes are created with `ephemeral: true` and a unique timestamped name (e.g. `fabro-20260305-142301-a3f2`).
### Snapshots
Snapshots let you pre-build an environment image so each run starts with dependencies already installed. If the named snapshot doesn’t exist and a `dockerfile` is provided, Fabro creates it automatically and polls until it’s ready (up to 10 minutes).
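A snapshot definition in run.toml might look like this sketch. The `[sandbox.daytona.snapshot]` table name and the sample values are assumptions; only the field names are documented here:

```toml
# run.toml — Daytona snapshot (table name assumed; values illustrative)
[sandbox.daytona.snapshot]
name = "fabro-node20"
cpu = 2
memory = 4    # GB
disk = 20     # GB
dockerfile = """
FROM node:20
RUN npm install -g pnpm
"""
```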
| Field | Description |
|---|---|
| `name` | Snapshot identifier. Reused across runs if it already exists. |
| `cpu` | CPU cores for the snapshot VM. |
| `memory` | Memory in GB. |
| `disk` | Disk in GB. |
| `dockerfile` | Dockerfile content for building the snapshot. Required when creating a new snapshot. |
If the named snapshot already exists in the `Active` state, Fabro uses it directly. If it’s in the `Building` or `Pending` state, Fabro polls with exponential backoff until it’s ready.
### Labels
Attach key-value labels to sandboxes for filtering and identification in the Daytona dashboard.
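In run.toml this might look like the following sketch; the `[sandbox.daytona.labels]` table name is an assumption:

```toml
# run.toml — labels shown in the Daytona dashboard (table name assumed)
[sandbox.daytona.labels]
team = "platform"
purpose = "nightly-ci"
```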
### SSH access
Connect to a running Daytona sandbox via SSH for live debugging.

### Preserving the sandbox
Like Docker, Daytona sandboxes are destroyed on cleanup by default. Use `--preserve-sandbox` to keep them alive.
### Auto-stop
The `auto_stop_interval` setting (in minutes) tells Daytona to stop the sandbox after a period of inactivity. This saves costs for long-running sandboxes that may sit idle.
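For example (the `[sandbox.daytona]` table appears elsewhere on this page; placing `auto_stop_interval` directly under it is an assumption):

```toml
# run.toml — stop the sandbox after 30 idle minutes
[sandbox.daytona]
auto_stop_interval = 30
```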
## SSH
The SSH sandbox runs all tool operations on an existing remote machine over SSH. Unlike the cloud providers (Daytona, Exe), there is no VM lifecycle management — the host must already be running and reachable. This makes it ideal for persistent dev servers, NAS boxes, or any machine you SSH into regularly.

### Prerequisites

- SSH access to the target machine (key-based auth recommended)
- The remote working directory must exist, or the agent must be able to create it
- A GitHub App configured via `fabro install` (for private repository cloning)
### How it works
- **No lifecycle management** — Fabro does not create or destroy the remote host. It connects, runs operations, and disconnects. `cleanup()` is a no-op.
- **Working directory** — Set explicitly in the run config. Relative paths for all file operations resolve against this directory.
- **Git clone** — Fabro detects the local `origin` remote URL and current branch, converts SSH URLs to HTTPS, and clones into the working directory. For private repositories, Fabro uses a GitHub App Installation Access Token. Public repositories are cloned without credentials. If no git repo is detected, the working directory is used as-is.
- **Commands** — Executed via SSH. Commands are base64-encoded and piped through `sh` to support pipes, environment variables, and other shell features.
- **File I/O** — Reads use `cat` over SSH. Writes upload content via SCP. Parent directories are created automatically.
### Configuration
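An SSH sandbox configuration might look like this sketch. The `[sandbox.ssh]` table name is assumed by analogy with `[sandbox.daytona]`; the values are illustrative:

```toml
# run.toml — SSH sandbox (table name assumed; fields from the table below)
[sandbox.ssh]
destination = "dev@devbox"
working_directory = "/home/dev/fabro-work"
```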
| Field | Required | Description |
|---|---|---|
| `destination` | Yes | SSH destination — `user@host`, a hostname, or an SSH alias from `~/.ssh/config`. |
| `working_directory` | Yes | Absolute path to the working directory on the remote host. |
| `config_file` | No | Path to a custom SSH config file. Useful when using a non-default key or jump host. |
| `preview_url_base` | No | Base URL for port previews (e.g. `"http://myserver"`). When set, preview URLs are returned as `{preview_url_base}:{port}` instead of falling back to localhost. |
### SSH config file
Use `config_file` to point Fabro at a non-default SSH config — useful when the remote host requires a specific key, `ProxyJump`, or port.
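For example, run.toml could point at a dedicated SSH config file (a sketch; the `[sandbox.ssh]` table name is assumed), with the SSH options themselves living in that file:

```toml
# run.toml
[sandbox.ssh]
destination = "devbox"
working_directory = "/home/dev/fabro-work"
config_file = "~/.ssh/fabro_config"
```

```
# ~/.ssh/fabro_config (illustrative host entry)
Host devbox
    HostName 10.0.0.12
    User dev
    IdentityFile ~/.ssh/fabro_ed25519
    ProxyJump bastion.example.com
```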
### Port previews
When a workflow stage starts a local server on a port, Fabro calls `get_preview_url(port)` to produce a clickable URL. For SSH sandboxes, this requires knowing the host’s reachable address — Fabro can’t infer it automatically.
Set `preview_url_base` to enable preview URLs. With `preview_url_base = "http://devbox"`, a server on port 3000 gets the preview URL `http://devbox:3000`.
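In run.toml this is a single line (the `[sandbox.ssh]` table name is assumed):

```toml
# run.toml — preview URLs for an SSH sandbox (table name assumed)
[sandbox.ssh]
preview_url_base = "http://devbox"
```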
## Exe
The Exe sandbox runs all tool operations inside a cloud VM managed by exe.dev. It provides full machine-level isolation with fast VM startup, driven entirely over SSH.

### Prerequisites

- SSH keys configured for exe.dev (added via the exe.dev dashboard)
- A GitHub App configured via `fabro install` (for private repository cloning)
### How it works
- **VM lifecycle** — On `initialize()`, Fabro connects to the exe.dev management plane via SSH and runs `new --json` to create a VM. The response contains the VM name and an SSH destination (e.g. `my-vm.exe.xyz`). On `cleanup()`, Fabro runs `rm <vm_name>` to destroy the VM. Cancelling a run cleanly tears down the VM.
- **Working directory** — Fixed at `/home/exedev`. Relative paths resolve against this directory.
- **Git clone** — Fabro detects the local `origin` remote URL and current branch, converts SSH URLs to HTTPS, and clones into the VM. For private repositories, Fabro uses a GitHub App Installation Access Token. Public repositories are cloned without credentials. If no git repo is detected, the working directory is created empty.
- **Commands** — Executed via SSH on the data plane (`<vm_name>.exe.xyz`). Commands are base64-encoded and piped through `sh` to handle pipes, environment variables, and other shell features. Environment variables and working directory overrides are supported.
- **File I/O** — Reads use `cat` over SSH. Writes use SCP upload. Parent directories are created automatically.
- **Git checkpointing** — Between stages, Fabro commits agent changes and pushes them to a remote branch, the same mechanism used by Daytona sandboxes.
- **Ephemeral** — Each run gets a fresh VM that is destroyed on cleanup.
### Configuration
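The only documented field is `image`. In run.toml this might look like the following sketch; the `[sandbox.exe]` table name and the image value are assumptions:

```toml
# run.toml — Exe sandbox (table name assumed; image value is a placeholder)
[sandbox.exe]
image = "ubuntu:24.04"   # optional; omit to use the exe.dev default
```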
| Field | Description |
|---|---|
| `image` | Custom container image for the VM. Optional — uses the exe.dev default when omitted. |
### SSH access

Connect to a running exe.dev sandbox via SSH for live debugging.

## Sandboxing

Sandboxes isolate agent execution from the host machine. When an agent runs a shell command, edits a file, or searches code, it does so inside a sandbox — preventing unintended side effects and providing a reproducible environment for each run.

### Filesystem
Each provider offers a different level of filesystem isolation:

| Provider | Isolation | What agents can access |
|---|---|---|
| `local` | None | The entire host filesystem. Agents can read and modify any file. |
| `docker` | Container-level | Only the bind-mounted working directory (`/workspace` by default) and whatever is in the container image. Host files outside the mount are inaccessible. |
| `daytona` | Full machine | A cloud VM with the repository cloned into `/home/daytona/workspace`. The host filesystem is completely inaccessible. |
| `ssh` | Remote host | The remote machine’s filesystem, rooted at the configured working directory. No isolation from other users or processes on that host. |
| `exe` | Full machine | A cloud VM with the repository cloned into `/home/exedev`. The host filesystem is completely inaccessible. |
With `local`, Fabro filters sensitive environment variables (those ending in `_API_KEY`, `_SECRET`, `_TOKEN`, `_PASSWORD`, or `_CREDENTIAL`) but does not restrict file access. Use `docker`, `daytona`, or `exe` when running untrusted workflows.
### Network
Each provider handles outbound network access differently:

| Provider | Default | Controls |
|---|---|---|
| `local` | Full access | No network isolation. Agents have the same network access as the host. |
| `docker` | Bridge network | Set via the `network_mode` config option. Supports all Docker network modes (`bridge`, `none`, `host`, etc.). |
| `daytona` | Full access | Configurable via the `network` setting with three modes: `"allow_all"`, `"block"`, or CIDR-based allow lists. |
| `ssh` | Remote host’s access | No network controls. Agents have the same outbound access as the remote host. |
| `exe` | Full access | No network isolation controls. VMs have full outbound access. |
Daytona’s network policy is configured in the run config’s `[sandbox.daytona]` section.
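For example (a sketch; only the `network` key and its three modes are documented here, and the CIDR allow-list syntax shown is an assumption):

```toml
# run.toml — restrict Daytona sandbox networking
[sandbox.daytona]
network = "block"           # or "allow_all"
# network = "10.0.0.0/8"    # CIDR-based allow list (syntax assumed)
```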
A run-level `network` overrides the server default. If neither specifies `network`, Daytona’s own default (full access) applies.
### Safety guardrails
Regardless of which provider you use, Fabro applies a read-before-write guardrail. Agents must read a file (via `read_file` or `grep`) before they can modify it with `write_file` or `delete_file`. Writing to new files that don’t yet exist is always allowed. This prevents agents from blindly overwriting files they haven’t inspected.
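The rule can be sketched as follows. This is an illustrative re-implementation with invented names, not Fabro's actual code:

```python
# Illustrative sketch of a read-before-write guardrail (names invented;
# not Fabro's actual implementation).
class ReadBeforeWriteGuard:
    def __init__(self) -> None:
        self._read_paths: set[str] = set()

    def record_read(self, path: str) -> None:
        # Called after a successful read_file or grep over `path`.
        self._read_paths.add(path)

    def check_write(self, path: str, exists: bool) -> None:
        # Creating a brand-new file is always allowed; modifying an
        # existing file requires a prior read.
        if exists and path not in self._read_paths:
            raise PermissionError(f"read {path} before modifying it")
```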