Docs · Quickstart

Five minutes from download to first request.

Pick your install method, configure a first upstream, make a first request, see it in the live-traffic console.

1. Install

Pick one. Every method ships the same ~18 MB statically compiled binary.

Homebrew (macOS + Linux)

brew install RelayOne/tap/relaygate

Debian / Ubuntu

curl -fsSL https://pkg.relaygate.ai/gpg | sudo gpg --dearmor -o /usr/share/keyrings/relaygate.gpg
echo "deb [signed-by=/usr/share/keyrings/relaygate.gpg] https://pkg.relaygate.ai/apt stable main" \
  | sudo tee /etc/apt/sources.list.d/relaygate.list
sudo apt update && sudo apt install relaygate

RHEL / Fedora

sudo rpm --import https://pkg.relaygate.ai/gpg
sudo tee /etc/yum.repos.d/relaygate.repo <<'EOF'
[relaygate]
name=relaygate
baseurl=https://pkg.relaygate.ai/rpm
enabled=1
gpgcheck=1
gpgkey=https://pkg.relaygate.ai/gpg
EOF
sudo dnf install relaygate

tar.gz (any Linux / macOS)

# substitute the archive name for your OS and architecture
curl -fL https://pkg.relaygate.ai/tar/relaygate-linux-amd64.tar.gz | tar xz
sudo mv relaygate /usr/local/bin/

Docker

docker run --rm -p 8080:8080 \
  -v "$(pwd)/config.yaml":/etc/relaygate/config.yaml:ro \
  ghcr.io/relayone/relaygate:v1

Helm (Kubernetes)

helm repo add relayone https://charts.relaygate.ai
helm install rg relayone/relaygate --version 1.4.0 \
  --set config.existingSecret=relaygate-config

2. Configure a first upstream

Write a minimal config. RelayGate supports ten backend drivers; start with the one you already have a key for.

# /etc/relaygate/config.yaml
listen: "0.0.0.0:8080"

upstreams:
  - id: openai
    driver: openai
    base_url: https://api.openai.com/v1
    api_key: "${OPENAI_API_KEY}"

routes:
  - match: 'request.model.startsWith("gpt-")'
    upstream: openai

workers:
  - id: metering
    stage: post-response
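The route above is a prefix match: any request whose model name begins with gpt- resolves to the openai upstream, and an unmatched model resolves to nothing. A minimal Python sketch of that resolution logic (illustrative only — RelayGate evaluates the match expression internally; `resolve_route` and the route list here are hypothetical, not RelayGate APIs):

```python
# Illustrative sketch of how a route table like the one above resolves.
# ROUTES and resolve_route() are hypothetical, not part of RelayGate.
ROUTES = [
    {"model_prefix": "gpt-", "upstream": "openai"},
]

def resolve_route(model: str):
    """Return the upstream id for the first matching route, or None."""
    for route in ROUTES:
        if model.startswith(route["model_prefix"]):
            return route["upstream"]
    return None
```

With this sketch, resolve_route("gpt-4o-mini") returns "openai", while a model no route matches returns None, which the gateway would reject.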

3. Start the gateway

OPENAI_API_KEY=sk-... relaygate serve --config /etc/relaygate/config.yaml

Expected output:

relaygate v1.4.0 (commit 2a7f3c8)
  listen        0.0.0.0:8080
  upstreams     1
  workers       1
  policies      0
  status        ready  ·  2026-04-23T14:22:11Z

4. Make a first request

curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{ "model": "gpt-4o-mini", "messages": [{"role":"user","content":"hello"}] }'

5. See it in the live-traffic console

Open http://localhost:8080/console. You will see the request, the resolved route, the worker that ran, and the signed receipt. Every request that passes through the gateway shows up here.

6. Add a ContextWorker

Install the PII scrub worker and add it to the pre-request stage.

relaygate worker install @builtins/pii-scrub

# add to config.yaml
workers:
  - id: pii-scrub
    stage: pre-request
  - id: metering
    stage: post-response
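Workers run in stages around each request: pre-request workers see the inbound payload before it reaches the upstream, and post-response workers see the result afterwards. A toy Python sketch of that two-stage flow (the worker functions and `run_pipeline` are illustrative stand-ins, not RelayGate's actual worker API):

```python
# Illustrative two-stage worker pipeline mirroring the config above.
# pii_scrub, metering, and run_pipeline are hypothetical sketches.
from typing import Callable

def pii_scrub(payload: dict) -> dict:
    """pre-request: redact anything that looks like an email (toy rule)."""
    for msg in payload.get("messages", []):
        if "@" in msg["content"]:
            msg["content"] = "[redacted]"
    return payload

def metering(payload: dict) -> dict:
    """post-response: count output characters (stand-in for token metering)."""
    payload["chars_out"] = len(payload.get("content", ""))
    return payload

PRE_REQUEST = [pii_scrub]
POST_RESPONSE = [metering]

def run_pipeline(stage: list, payload: dict) -> dict:
    """Apply each worker in a stage to the payload, in order."""
    for worker in stage:
        payload = worker(payload)
    return payload
```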

7. Check receipts

relaygate receipt tail
{"id":"rcpt_01HY...","route":"openai","overhead_us":238,"redactions":1,"tokens_in":42,"tokens_out":17}

That is the loop. Add more workers, a CEL policy, and more upstreams. Read Pipelines next.

Backend driver reference

All ten drivers are ready to use out of the box:

| id | Name | Kind | Inbound format | Notes |
| --- | --- | --- | --- | --- |