OpenClaw provider switch

Faster OpenClaw in APAC. One config change.

Keep your OpenClaw workflow exactly the same. Point `models.providers` to Brightnode's OpenAI-compatible Singapore endpoint and your Telegram/WhatsApp/Slack agent replies faster with lower token costs.

  • TTFT P50: 94ms
  • E2E P50: 199ms
  • Error rate: 0%
  • OpenAI compatible: Yes

Why switch your OpenClaw provider to Brightnode

  • Your OpenClaw agent responds faster in APAC: Brightnode routes from Singapore with a benchmarked P50 TTFT under 100ms.
  • Transparent per-token pricing across proprietary and Brightnode-hosted models, shown directly in the catalog.
  • No workflow changes: Brightnode is OpenAI-compatible, so you drop it into `models.providers` and restart OpenClaw.

Copy this config. Restart. Done.

OpenClaw supports custom providers through `models.providers`. Add Brightnode, keep `api: "openai-completions"`, and set your default model as `brightnode/<model-id>`.

Example openclaw.json

{
  "env": {
    "BRIGHTNODE_API_KEY": "your-brightnode-api-key"
  },
  "agents": {
    "defaults": {
      "model": {
        "primary": "brightnode/meta-llama/Llama-3.3-70B-Instruct"
      },
      "models": {
        "brightnode/meta-llama/Llama-3.3-70B-Instruct": {}
      }
    }
  },
  "models": {
    "mode": "merge",
    "providers": {
      "brightnode": {
        "baseUrl": "https://api.brightnode.cloud/v1",
        "apiKey": "${BRIGHTNODE_API_KEY}",
        "api": "openai-completions",
        "models": [
          {
            "id": "meta-llama/Llama-3.3-70B-Instruct",
            "name": "Llama 3.3 70B Instruct"
          }
        ]
      }
    }
  }
}
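Before restarting, it can be worth validating the edited file; a minimal sketch using Python's standard library (it assumes `openclaw.json` is plain standard JSON in the current directory; adjust the path if yours lives elsewhere):

```python
import json

def validate_config(path: str) -> bool:
    """Check that the config parses as JSON; catches trailing commas
    and unquoted keys before the gateway ever sees them."""
    try:
        with open(path) as f:
            json.load(f)
        return True
    except (OSError, ValueError) as err:
        print(f"{path}: {err}")
        return False

if __name__ == "__main__":
    validate_config("openclaw.json")
```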

Quick sanity check:

curl https://api.brightnode.cloud/v1/chat/completions \
  -H "Authorization: Bearer $BRIGHTNODE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model":"meta-llama/Llama-3.3-70B-Instruct","messages":[{"role":"user","content":"ping"}]}'
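The same sanity check can be scripted; a minimal sketch with Python's standard library, using the endpoint and model id from the curl command above (`build_chat_request` is an illustrative helper, not part of any SDK):

```python
import json
import urllib.request

BASE_URL = "https://api.brightnode.cloud/v1"  # same endpoint as the curl check

def build_chat_request(model: str, prompt: str, api_key: str) -> urllib.request.Request:
    """Compose the OpenAI-style chat completions request without sending it."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# To actually send it (needs network access and a real key):
#   with urllib.request.urlopen(build_chat_request(...)) as resp:
#       print(json.load(resp)["choices"][0]["message"]["content"])
```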

Restart OpenClaw after saving config:

openclaw gateway run

Most common OpenClaw model picks on Brightnode

These are the models OpenClaw users most often run through Brightnode.

| Model                  | Input / 1M | Output / 1M | Context | Latency (Singapore) |
| ---------------------- | ---------- | ----------- | ------- | ------------------- |
| Claude Sonnet 4.5      | $3.00      | $15.00      | 200,000 | 40ms                |
| Claude Sonnet 4        | $3.00      | $15.00      | 200,000 | 83ms                |
| DeepSeek V3.2          | $0.74      | $2.22       | 131,072 | 40ms                |
| Kimi K2.5              | $0.72      | $3.60       | 131,072 | 40ms                |
| Llama 3.3 70B Instruct | $0.22      | $0.50       | 131,072 | 27ms                |
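The per-token prices above map directly onto per-reply cost; a quick sketch using the Llama 3.3 70B rates from the table ($0.22 in / $0.50 out per 1M tokens; the token counts are illustrative):

```python
def reply_cost_usd(input_tokens: int, output_tokens: int,
                   input_per_m: float, output_per_m: float) -> float:
    """Cost of a single request at per-million-token rates."""
    return (input_tokens * input_per_m + output_tokens * output_per_m) / 1_000_000

# A typical chat turn: ~500 prompt tokens in, ~200 tokens out on Llama 3.3 70B.
print(f"${reply_cost_usd(500, 200, 0.22, 0.50):.6f} per reply")  # $0.000210 per reply
```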

Many more models are available on the Brightnode models page.

  • TTFT P50: 94ms (8.9x faster than global routers)
  • End-to-end latency P50: 199ms (3.7x faster at 200-token output)
  • Error rate: 0% (global routers: 3.3%)
  • Concurrency: stable at 178ms under 5x parallel load

Ready to speed up your OpenClaw agent?

Get your API key and switch providers in 5 minutes.

Keep your current OpenClaw setup, paste the provider block, and ship faster replies to your users in APAC.