Get started

Quickstart

Five minutes from sign-in to your first routed request.

1. Connect a provider key

Open Providers in the dashboard and click Connect provider. Paste your provider API key. useLLM probes the key against the provider once on save; if the provider returns a 401 or 403, you hear about it up front instead of discovering it later in production.
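The probe amounts to one cheap, read-only call with the key attached. A hypothetical sketch of the idea against the OpenAI API (useLLM's actual probe endpoint and logic are not documented here; `/v1/models` is assumed for illustration):

```python
import urllib.error
import urllib.request


def build_probe_request(key: str) -> urllib.request.Request:
    # A read-only endpoint is enough to see whether a key authenticates.
    return urllib.request.Request(
        "https://api.openai.com/v1/models",
        headers={"Authorization": f"Bearer {key}"},
    )


def provider_key_is_valid(key: str) -> bool:
    try:
        with urllib.request.urlopen(build_probe_request(key), timeout=10):
            return True
    except urllib.error.HTTPError as e:
        # 401/403 means the provider rejected the key outright.
        if e.code in (401, 403):
            return False
        raise
```

The point of probing on save rather than on first request is that a bad key fails in the dashboard, where you can fix it, instead of in your application.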

Keys are AES-256-GCM-encrypted at rest. Only the gateway service can decrypt them to route requests.

2. Generate a useLLM gateway key

Go to API keys → Create key. Name it something memorable like production. The full ul_live_* secret is shown once, in the reveal dialog; copy it into your secret manager before closing.
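Once the key is in your secret manager, load it from the environment rather than embedding it in code. A minimal sketch, assuming the variable is named UL_API_KEY (your name may differ):

```python
import os

# Read the gateway key from the environment; "UL_API_KEY" is an assumed
# name, not one useLLM mandates. The placeholder fallback is for
# illustration only -- drop it in real code so a missing key fails loudly.
api_key = os.environ.get("UL_API_KEY", "ul_live_XXXXXXXXXXXXXXXXXXXXXXXX")

# Gateway keys carry the ul_live_ prefix, which makes a cheap sanity
# check before any request is sent.
if not api_key.startswith("ul_live_"):
    raise RuntimeError("UL_API_KEY does not look like a useLLM gateway key")
```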

3. Make your first request

Any OpenAI-compatible SDK works. Point it at the useLLM gateway and pick a model that the provider you connected supports.

from openai import OpenAI

client = OpenAI(
    api_key="ul_live_XXXXXXXXXXXXXXXXXXXXXXXX",  # useLLM gateway key, not a provider key
    base_url="https://api.usellm.io/v1",  # route requests through the gateway
)

res = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(res.choices[0].message.content)

4. (Optional) Define a route alias

Hard-coded model names age badly. Open Routes → New route, name it smart, set the primary to gpt-4o, and add a fallback to claude-sonnet-4-5. Your app calls model: "smart" and the gateway picks the right provider, so you can swap models without redeploying.
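On the wire the alias is just a model name; nothing else about the request changes. A minimal sketch with only the standard library, assuming the gateway's OpenAI-compatible /chat/completions path:

```python
import json
import urllib.request

# Build the chat-completions request by hand; "smart" is the route alias,
# so no provider model ID appears anywhere in the payload.
payload = {
    "model": "smart",
    "messages": [{"role": "user", "content": "Hello!"}],
}
req = urllib.request.Request(
    "https://api.usellm.io/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": "Bearer ul_live_XXXXXXXXXXXXXXXXXXXXXXXX",
        "Content-Type": "application/json",
    },
)
# urllib.request.urlopen(req) would send it; omitted here so the sketch
# stays runnable without a live key.
```

Because only the gateway resolves "smart", retargeting the alias in the dashboard changes which provider serves this request with no client change at all.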

5. Verify it landed in the dashboard

  • Dashboard → your request appears in Recent requests within a second; the spend chart and KPI strip refresh on reload.
  • Usage → daily breakdown table picks it up, model bar chart updates.
  • The pill in the top-right of the app shows your routed-request count vs the plan quota — green under 80%, orange near the limit, red when over.

That's the whole loop. Continue with Authentication for the differences between gateway and provider keys, or Routing & aliases for fallback chains and retry policies.