Pick your install method, configure a first upstream, make a first request, and watch it appear in the live-traffic console.
Pick one. Every method ships the same ~18 MB statically compiled binary.
Homebrew:

```shell
brew install RelayOne/tap/relaygate
```
Debian/Ubuntu (APT):

```shell
curl -fsSL https://pkg.relaygate.ai/gpg | sudo gpg --dearmor -o /usr/share/keyrings/relaygate.gpg
echo "deb [signed-by=/usr/share/keyrings/relaygate.gpg] https://pkg.relaygate.ai/apt stable main" \
  | sudo tee /etc/apt/sources.list.d/relaygate.list
sudo apt update && sudo apt install relaygate
```
RHEL/Fedora (DNF):

```shell
sudo rpm --import https://pkg.relaygate.ai/gpg
sudo tee /etc/yum.repos.d/relaygate.repo <<'EOF'
[relaygate]
name=relaygate
baseurl=https://pkg.relaygate.ai/rpm
gpgcheck=1
gpgkey=https://pkg.relaygate.ai/gpg
EOF
sudo dnf install relaygate
```
Tarball:

```shell
curl -fL https://pkg.relaygate.ai/tar/relaygate-linux-amd64.tar.gz | tar xz
sudo mv relaygate /usr/local/bin/
```
Docker:

```shell
docker run --rm -p 8080:8080 \
  -v "$(pwd)/config.yaml":/etc/relaygate/config.yaml:ro \
  ghcr.io/relayone/relaygate:v1
```
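If you prefer Compose, an equivalent service definition would look like the following sketch (image tag and mount path taken from the `docker run` line above; everything else is a standard Compose assumption, not RelayGate-specific):

```yaml
services:
  relaygate:
    image: ghcr.io/relayone/relaygate:v1
    ports:
      - "8080:8080"
    volumes:
      - ./config.yaml:/etc/relaygate/config.yaml:ro
```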
Kubernetes (Helm):

```shell
helm repo add relayone https://charts.relaygate.ai
helm install rg relayone/relaygate --version 1.4.0 \
  --set config.existingSecret=relaygate-config
```
Write a minimal config. RelayGate supports ten backend drivers; start with the one you already have a key for.
```yaml
# /etc/relaygate/config.yaml
listen: "0.0.0.0:8080"

upstreams:
  - id: openai
    driver: openai
    base_url: https://api.openai.com/v1
    api_key: "${OPENAI_API_KEY}"

routes:
  - match: 'request.model startsWith "gpt-"'
    upstream: openai

workers:
  - id: metering
    stage: post-response
```
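The `"${OPENAI_API_KEY}"` value is expanded from the environment at startup. As a rough illustration of that kind of `${VAR}` interpolation (a hypothetical sketch, not RelayGate's actual loader):

```python
import os
import re

def expand_env(text: str) -> str:
    # Replace each ${VAR} placeholder with its value from the environment.
    # Missing variables expand to the empty string in this sketch.
    return re.sub(r"\$\{(\w+)\}", lambda m: os.environ.get(m.group(1), ""), text)

os.environ["OPENAI_API_KEY"] = "sk-test"
print(expand_env('api_key: "${OPENAI_API_KEY}"'))  # → api_key: "sk-test"
```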
Start the gateway:

```shell
OPENAI_API_KEY=sk-... relaygate serve --config /etc/relaygate/config.yaml
```
Expected output:
```text
relaygate v1.4.0 (commit 2a7f3c8)
listen 0.0.0.0:8080
upstreams 1
workers 1
policies 0
status ready · 2026-04-23T14:22:11Z
```
Make a first request:

```shell
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{ "model": "gpt-4o-mini", "messages": [{"role":"user","content":"hello"}] }'
```
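The same request from Python using only the standard library; the address and OpenAI-compatible path come from the curl call above, and the gateway must already be running for the call itself to succeed:

```python
import json
import urllib.request

GATEWAY = "http://localhost:8080/v1/chat/completions"  # local RelayGate from above

def chat(payload: dict) -> dict:
    """POST a chat-completion payload through the gateway, return parsed JSON."""
    req = urllib.request.Request(
        GATEWAY,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "hello"}],
}
# chat(payload) returns the upstream response once the gateway is listening.
```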
Open http://localhost:8080/console. You will see the request, the resolved route, the worker that ran, and the signed receipt. Every request that passes through the gateway shows up here.
Install the PII scrub worker and add it to the pre-request stage.
```shell
relaygate worker install @builtins/pii-scrub
```

```yaml
# add to config.yaml
workers:
  - id: pii-scrub
    stage: pre-request
  - id: metering
    stage: post-response
```
Tail the receipt stream:

```shell
relaygate receipt tail
```

```json
{"id":"rcpt_01HY...","route":"openai","overhead_us":238,"redactions":1,"tokens_in":42,"tokens_out":17}
```
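Receipts arrive as one JSON object per line, so they are easy to post-process. A quick sketch that pulls the numbers out of the example line above (field names as shown; nothing else assumed):

```python
import json

line = ('{"id":"rcpt_01HY...","route":"openai","overhead_us":238,'
        '"redactions":1,"tokens_in":42,"tokens_out":17}')
rcpt = json.loads(line)

overhead_ms = rcpt["overhead_us"] / 1000            # gateway overhead in ms
total_tokens = rcpt["tokens_in"] + rcpt["tokens_out"]
print(overhead_ms, total_tokens)  # → 0.238 59
```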
That is the loop. Add more workers, add CEL policies, add more upstreams. Read Pipelines next.
All ten drivers are ready to use out of the box:
| id | Name | Kind | Inbound format | Notes |
|---|---|---|---|---|