When an agent runs a shell command, edits a file, or searches code, it does so inside a sandbox. The sandbox is the execution environment for all tool operations — it controls where commands run, which files are visible, and how much isolation exists between the agent and the host. Fabro supports six sandbox providers. Each one implements the same interface (file I/O, command execution, grep, glob), so workflows run identically regardless of which provider you choose. The difference is in where and how the tools execute.
| Provider | Runs on | Use case | Status |
| --- | --- | --- | --- |
| local | Host machine | Development, trusted workflows | Available |
| docker | Docker container | Reproducible environments, untrusted code | Available |
| daytona | Cloud VM | CI/CD, team-shared runs, SSH debugging | Available |
| ssh | Any SSH host | Existing remote machines, dev servers, NAS boxes | Available |
| exe | Cloud VM (exe.dev) | Fast ephemeral VMs, lightweight cloud sandboxing | Experimental |
| sprites | Cloud VM | Managed cloud sandboxes | In Development |

Choosing a provider

Set the sandbox provider via CLI flag, run config TOML, or server defaults:
# CLI flag
fabro run workflow.fabro --sandbox local
fabro run workflow.fabro --sandbox docker
fabro run workflow.fabro --sandbox daytona
fabro run workflow.fabro --sandbox ssh
fabro run workflow.fabro --sandbox exe
run.toml
# Run config TOML
[sandbox]
provider = "daytona"
The precedence order is: CLI flag > run config TOML > server defaults > built-in default (local).
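That precedence amounts to a first-match lookup, sketched below (function and argument names are illustrative):

```python
def resolve_provider(cli_flag=None, run_config=None, server_default=None):
    """Return the first explicitly set provider; fall back to 'local'.
    Mirrors the documented precedence:
    CLI flag > run config TOML > server defaults > built-in default."""
    for source in (cli_flag, run_config, server_default):
        if source is not None:
            return source
    return "local"
```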

Local

The local sandbox runs all tool operations directly on the host machine. It’s the default and the simplest option — no setup required beyond the Fabro binary itself.

How it works

  • Working directory — Set to the current directory (or directory from the run config). Fabro creates it if it doesn’t exist.
  • Commands — Executed via /bin/bash -c in the working directory.
  • File operations — Read and write directly to the host filesystem. Relative paths resolve against the working directory.
  • Cleanup — No-op. Local sandbox doesn’t create or destroy anything on cleanup.

Environment variable filtering

The local sandbox filters sensitive environment variables before passing them to commands. Variables ending in _API_KEY, _SECRET, _TOKEN, _PASSWORD, or _CREDENTIAL are stripped. A safelist of common variables (PATH, HOME, USER, SHELL, LANG, TERM, TMPDIR, GOPATH, CARGO_HOME, NVM_DIR) is always passed through.
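The filtering rule can be sketched in a few lines (the suffixes and safelist values come from the description above; the function name is illustrative):

```python
SENSITIVE_SUFFIXES = ("_API_KEY", "_SECRET", "_TOKEN", "_PASSWORD", "_CREDENTIAL")
SAFELIST = {"PATH", "HOME", "USER", "SHELL", "LANG", "TERM",
            "TMPDIR", "GOPATH", "CARGO_HOME", "NVM_DIR"}

def filter_env(env: dict[str, str]) -> dict[str, str]:
    """Drop variables with sensitive suffixes; safelisted names always pass."""
    return {
        k: v for k, v in env.items()
        if k in SAFELIST or not k.endswith(SENSITIVE_SUFFIXES)
    }
```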
The local sandbox offers no isolation. Agents can read and modify any file on the host. Use docker, daytona, or exe when running untrusted workflows or when you need a reproducible environment.

Docker

The Docker sandbox runs all tool operations inside a Docker container. The host working directory is bind-mounted into the container, so file changes are visible on both sides.

Prerequisites

  • Docker Engine running on the host
  • The configured image available locally (or auto_pull enabled)

How it works

  • Container lifecycle — On initialize(), Fabro pulls the image (if needed), creates a container with sleep infinity, and starts it. On cleanup(), Fabro stops and removes the container.
  • Working directory — The host working directory is bind-mounted at /workspace inside the container. All relative paths resolve against this mount point.
  • Commands — Executed via docker exec with /bin/bash -c inside the container. Timeout and cancellation are supported.
  • File writes — Use the Docker API’s tar upload to avoid shell escaping issues with special characters.
  • Platform detection — The container’s uname -r is cached at startup.
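The tar-upload approach to file writes can be sketched as follows: pack the file into an in-memory tar archive and hand the bytes to the Docker API (for example docker-py's put_archive), so the content never passes through a shell. A minimal sketch with an illustrative helper name:

```python
import io
import tarfile

def make_tar(filename: str, content: bytes) -> bytes:
    """Pack a single file into an in-memory tar archive. Uploading the
    archive via the Docker API writes the file into the container without
    any shell quoting or escaping of special characters."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        info = tarfile.TarInfo(name=filename)
        info.size = len(content)
        tar.addfile(info, io.BytesIO(content))
    return buf.getvalue()
```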

Configuration

The Docker sandbox is configured through the DockerSandboxConfig:
| Setting | Default | Description |
| --- | --- | --- |
| image | fabro-agent:latest | Docker image to use |
| container_mount_point | /workspace | Mount point inside the container |
| network_mode | bridge | Docker network mode |
| extra_mounts | [] | Additional host:container bind mounts |
| memory_limit | unlimited | Memory limit in bytes |
| cpu_quota | unlimited | CPU quota (microseconds per 100ms period) |
| auto_pull | true | Pull the image if not found locally |
| env_vars | [] | Additional KEY=VALUE environment variables |

Preserving the container

By default, the container is destroyed when the run finishes. To keep it alive for debugging:
fabro run workflow.fabro --sandbox docker --preserve-sandbox
Or in the run config:
run.toml
[sandbox]
provider = "docker"
preserve = true
When preserved, Fabro prints the container ID so you can reconnect with docker exec -it <id> bash.

Daytona

The Daytona sandbox runs all tool operations inside a cloud-hosted VM managed by Daytona. It provides full machine-level isolation, automatic git cloning, and SSH access for debugging.

Prerequisites

  • A DAYTONA_API_KEY environment variable
  • A GitHub App configured via fabro install (for private repository cloning)

How it works

  • Sandbox lifecycle — On initialize(), Fabro creates a Daytona sandbox (from an image or a snapshot), clones the current git repository into it, and waits until it’s ready. On cleanup(), the sandbox is deleted.
  • Working directory — Fixed at /home/daytona/workspace. The current repository is cloned there automatically.
  • Git clone — Fabro detects the local origin remote URL and current branch, converts SSH URLs to HTTPS, and clones into the sandbox. For private repositories, Fabro uses a GitHub App Installation Access Token scoped to contents: read on the specific repository. Public repositories are cloned without credentials. If no git repo is detected, the working directory is created empty.
  • Commands — Executed via the Daytona process API. Commands are base64-encoded and piped through sh to support pipes, environment variables, and other shell features.
  • Ephemeral — Sandboxes are created with ephemeral: true and a unique timestamped name (e.g. fabro-20260305-142301-a3f2).
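The SSH-to-HTTPS remote conversion mentioned above can be sketched as a simplified illustration (real remotes have more variants, such as ssh:// URLs and custom ports, than this regex handles):

```python
import re

def ssh_to_https(remote_url: str) -> str:
    """Convert a git SSH remote (git@host:org/repo.git) to HTTPS form,
    so the sandbox can clone with token-based credentials."""
    m = re.match(r"git@([^:]+):(.+)", remote_url)
    if m:
        return f"https://{m.group(1)}/{m.group(2)}"
    return remote_url  # already HTTPS, or an unrecognized form
```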

Snapshots

Snapshots let you pre-build an environment image so each run starts with dependencies already installed. If the named snapshot doesn’t exist and a dockerfile is provided, Fabro creates it automatically and polls until it’s ready (up to 10 minutes).
run.toml
[sandbox]
provider = "daytona"

[sandbox.daytona]
auto_stop_interval = 60

[sandbox.daytona.snapshot]
name = "rust-dev"
cpu = 4
memory = 8
disk = 20
dockerfile = "FROM rust:1.85-slim-bookworm\nRUN apt-get update && apt-get install -y git ripgrep"
| Field | Description |
| --- | --- |
| name | Snapshot identifier. Reused across runs if it already exists. |
| cpu | CPU cores for the snapshot VM. |
| memory | Memory in GB. |
| disk | Disk in GB. |
| dockerfile | Dockerfile content for building the snapshot. Required when creating a new snapshot. |
If the snapshot already exists and is in Active state, Fabro uses it directly. If it’s in Building or Pending state, Fabro polls with exponential backoff until it’s ready.
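The polling loop can be sketched as follows (state names follow the description above; the function name and timings are illustrative):

```python
import time

def wait_until_active(get_state, timeout=600.0, base_delay=1.0, max_delay=30.0):
    """Poll get_state() until it reports 'Active', sleeping with
    exponential backoff between attempts. Returns False on timeout."""
    deadline = time.monotonic() + timeout
    delay = base_delay
    while time.monotonic() < deadline:
        if get_state() == "Active":
            return True
        time.sleep(delay)
        delay = min(delay * 2, max_delay)  # back off, capped at max_delay
    return False
```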

Labels

Attach key-value labels to sandboxes for filtering and identification in the Daytona dashboard:
run.toml
[sandbox.daytona.labels]
project = "fabro"
env = "ci"
team = "platform"
When using server defaults, labels are merged — run config labels override default labels on key collisions.
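The merge rule amounts to a dictionary merge in which run-config keys win (a sketch; the function name is illustrative):

```python
def merge_labels(server_defaults: dict, run_config: dict) -> dict:
    """Combine label sets; run-config labels override server defaults
    on key collisions, per the rule described above."""
    return {**server_defaults, **run_config}
```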

SSH access

Connect to a running Daytona sandbox via SSH for live debugging:
fabro run workflow.fabro --sandbox daytona --ssh
This creates temporary SSH credentials (valid for 60 minutes) and prints the connection command.

Preserving the sandbox

Like Docker, Daytona sandboxes are destroyed on cleanup by default. Use --preserve-sandbox to keep them alive:
fabro run workflow.fabro --sandbox daytona --preserve-sandbox
Fabro prints the sandbox name so you can find it in the Daytona dashboard.

Auto-stop

The auto_stop_interval setting (in minutes) tells Daytona to stop the sandbox after a period of inactivity. This saves costs for long-running sandboxes that may sit idle:
run.toml
[sandbox.daytona]
auto_stop_interval = 30

SSH

The SSH sandbox runs all tool operations on an existing remote machine over SSH. Unlike cloud providers (Daytona, Exe), there is no VM lifecycle management — the host must already be running and reachable. This makes it ideal for persistent dev servers, NAS boxes, or any machine you SSH into regularly.

Prerequisites

  • SSH access to the target machine (key-based auth recommended)
  • The remote working directory must exist, or the agent must be able to create it
  • A GitHub App configured via fabro install (for private repository cloning)

How it works

  • No lifecycle management — Fabro does not create or destroy the remote host. It connects, runs operations, and disconnects. cleanup() is a no-op.
  • Working directory — Set explicitly in the run config. Relative paths for all file operations resolve against this directory.
  • Git clone — Fabro detects the local origin remote URL and current branch, converts SSH URLs to HTTPS, and clones into the working directory. For private repositories, Fabro uses a GitHub App Installation Access Token. Public repositories are cloned without credentials. If no git repo is detected, the working directory is used as-is.
  • Commands — Executed via SSH. Commands are base64-encoded and piped through sh to support pipes, environment variables, and other shell features.
  • File I/O — Reads use cat over SSH. Writes upload content via SCP. Parent directories are created automatically.
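The base64 wrapping used here (and by the Daytona and Exe providers) can be sketched as follows (the helper name is illustrative):

```python
import base64

def wrap_command(command: str) -> str:
    """Encode a shell command as base64 and decode it on the remote side.
    The outer command contains none of the original's quotes or
    metacharacters, so pipes, $VARs, and quoting survive the SSH hop."""
    encoded = base64.b64encode(command.encode()).decode()
    return f"echo {encoded} | base64 -d | sh"
```

The wrapper string is what actually gets sent over SSH; the remote sh decodes and executes the original command verbatim.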

Configuration

run.toml
[sandbox]
provider = "ssh"

[sandbox.ssh]
destination = "user@myserver"
working_directory = "/home/user/workspace"
| Field | Required | Description |
| --- | --- | --- |
| destination | Yes | SSH destination — user@host, a hostname, or an SSH alias from ~/.ssh/config. |
| working_directory | Yes | Absolute path to the working directory on the remote host. |
| config_file | No | Path to a custom SSH config file. Useful when using a non-default key or jump host. |
| preview_url_base | No | Base URL for port previews (e.g. "http://myserver"). When set, preview URLs are returned as {preview_url_base}:{port} instead of falling back to localhost. |

SSH config file

Use config_file to point Fabro at a non-default SSH config — useful when the remote host requires a specific key, ProxyJump, or port:
run.toml
[sandbox.ssh]
destination = "devbox"
working_directory = "/home/user/projects/myapp"
config_file = "/home/user/.ssh/fabro_config"
~/.ssh/fabro_config
Host devbox
  HostName 192.168.1.42
  User alice
  IdentityFile ~/.ssh/id_ed25519_devbox
  Port 2222

Port previews

When a workflow stage starts a local server on a port, Fabro calls get_preview_url(port) to produce a clickable URL. For SSH sandboxes, this requires knowing the host’s reachable address — Fabro can’t infer it automatically. Set preview_url_base to enable preview URLs:
run.toml
[sandbox.ssh]
destination = "alice@devbox"
working_directory = "/home/alice/projects/myapp"
preview_url_base = "http://devbox"
With this config, port 3000 on the remote host yields http://devbox:3000.

Exe

The exe.dev sandbox provider is under development and requires building Fabro with the exedev feature flag. The API and configuration may change.
The Exe sandbox runs all tool operations inside a cloud VM managed by exe.dev. It provides full machine-level isolation with fast VM startup; all communication with the VM happens over SSH.

Prerequisites

  • SSH keys configured for exe.dev (added via the exe.dev dashboard)
  • A GitHub App configured via fabro install (for private repository cloning)

How it works

  • VM lifecycle — On initialize(), Fabro connects to the exe.dev management plane via SSH and runs new --json to create a VM. The response contains the VM name and an SSH destination (e.g. my-vm.exe.xyz). On cleanup(), Fabro runs rm <vm_name> to destroy the VM. Cancelling a run cleanly tears down the VM.
  • Working directory — Fixed at /home/exedev. Relative paths resolve against this directory.
  • Git clone — Fabro detects the local origin remote URL and current branch, converts SSH URLs to HTTPS, and clones into the VM. For private repositories, Fabro uses a GitHub App Installation Access Token. Public repositories are cloned without credentials. If no git repo is detected, the working directory is created empty.
  • Commands — Executed via SSH on the data plane (<vmname>.exe.xyz). Commands are base64-encoded and piped through sh to handle pipes, environment variables, and other shell features. Environment variables and working directory overrides are supported.
  • File I/O — Reads use cat over SSH. Writes use SCP upload. Parent directories are created automatically.
  • Git checkpointing — Between stages, Fabro commits agent changes and pushes them to a remote branch, the same mechanism used by Daytona sandboxes.
  • Ephemeral — Each run gets a fresh VM that is destroyed on cleanup.

Configuration

run.toml
[sandbox]
provider = "exe"

[sandbox.exe]
image = "my-custom-image:latest"
| Field | Description |
| --- | --- |
| image | Custom container image for the VM. Optional — uses the exe.dev default when omitted. |

SSH access

Connect to a running exe.dev sandbox via SSH for live debugging:
fabro run workflow.fabro --sandbox exe --ssh
This prints the SSH connection command so you can connect to the VM while the workflow runs.

Sandboxing

Sandboxes isolate agent execution from the host machine. When an agent runs a shell command, edits a file, or searches code, it does so inside a sandbox — preventing unintended side effects and providing a reproducible environment for each run.

Filesystem

Each provider offers a different level of filesystem isolation:
| Provider | Isolation | What agents can access |
| --- | --- | --- |
| local | None | The entire host filesystem. Agents can read and modify any file. |
| docker | Container-level | Only the bind-mounted working directory (/workspace by default) and whatever is in the container image. Host files outside the mount are inaccessible. |
| daytona | Full machine | A cloud VM with the repository cloned into /home/daytona/workspace. The host filesystem is completely inaccessible. |
| ssh | Remote host | The remote machine’s filesystem, rooted at the configured working directory. No isolation from other users or processes on that host. |
| exe | Full machine | A cloud VM with the repository cloned into /home/exedev. The host filesystem is completely inaccessible. |
For local, Fabro filters sensitive environment variables (those ending in _API_KEY, _SECRET, _TOKEN, _PASSWORD, or _CREDENTIAL) but does not restrict file access. Use docker, daytona, or exe when running untrusted workflows.

Network

Each provider handles outbound network access differently:
| Provider | Default | Controls |
| --- | --- | --- |
| local | Full access | No network isolation. Agents have the same network access as the host. |
| docker | Bridge network | Set via the network_mode config option. Supports all Docker network modes (bridge, none, host, etc.). |
| daytona | Full access | Configurable via the network setting with three modes: "allow_all", "block", or CIDR-based allow lists. |
| ssh | Remote host’s access | No network controls. Agents have the same outbound access as the remote host. |
| exe | Full access | No network isolation controls. VMs have full outbound access. |
For Daytona, network access is configured in the [sandbox.daytona] section:
run.toml
# Block all egress
[sandbox.daytona]
network = "block"

# Allow only specific CIDRs
[sandbox.daytona]
network = { allow_list = ["208.80.154.232/32", "10.0.0.0/8"] }
When using server defaults, the run config network overrides the server default. If neither specifies network, Daytona’s own default (full access) applies.

Safety guardrails

Regardless of which provider you use, Fabro applies a read-before-write guardrail. Agents must read a file (via read_file or grep) before they can modify it with write_file or delete_file. Writing to new files that don’t yet exist is always allowed. This prevents agents from blindly overwriting files they haven’t inspected.
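The guardrail can be sketched as a small bookkeeping class (an illustrative sketch, not Fabro's implementation):

```python
class ReadBeforeWriteGuard:
    """Track which paths the agent has read; refuse to modify an existing
    file it has never inspected. New files may always be created."""

    def __init__(self):
        self._read_paths: set[str] = set()

    def record_read(self, path: str) -> None:
        """Called after a successful read_file or grep hit on path."""
        self._read_paths.add(path)

    def check_write(self, path: str, file_exists: bool) -> None:
        """Raise if the agent tries to modify a file it hasn't read."""
        if file_exists and path not in self._read_paths:
            raise PermissionError(f"read {path} before modifying it")
```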