By default, Valiqor uses its own OpenAI API key for LLM-based evaluations, security audits, and failure analysis. With BYOK (Bring Your Own Key), you can provide your own OpenAI API key so that LLM judge calls are billed to your OpenAI account instead.
## Why use BYOK?

| Benefit | Description |
|---|---|
| No shared quota limits | Avoid hitting Valiqor’s per-org LLM token quotas |
| Faster processing | Your key won’t be rate-limited by other users’ traffic |
| Cost control | Pay OpenAI directly and track costs in your OpenAI dashboard |
| Data handling | LLM judge calls go through your own OpenAI account |
## Setting your OpenAI key
You can set your key at four levels, in priority order:
### 1. Per-method (highest priority)

```python
result = client.failure_analysis.run(
    dataset=my_data,
    openai_api_key="sk-...",  # Used for this call only
)

result = client.eval.evaluate(
    dataset=my_data,
    metrics=["hallucination"],
    openai_api_key="sk-...",
)

result = client.security.audit(
    dataset=my_data,
    openai_api_key="sk-...",
)
```
### 2. Per-client (constructor)

```python
from valiqor import ValiqorClient

client = ValiqorClient(
    api_key="vq_...",
    project_name="my-app",
    openai_api_key="sk-...",  # Used for all calls from this client
)
```

The key is passed to each sub-client (`.eval`, `.security`, `.failure_analysis`).
### 3. Environment variable

```bash
export VALIQOR_OPENAI_API_KEY="sk-..."
```
### 4. `.valiqorrc` file (lowest priority)

```json
{
  "api_key": "vq_...",
  "project_name": "my-app",
  "openai_api_key": "sk-..."
}
```
## Resolution order

method param → constructor param → `VALIQOR_OPENAI_API_KEY` env var → `.valiqorrc` → backend server key (fallback)
If you don’t provide a key at any level, Valiqor’s backend falls back to its own server-side OpenAI key.
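The precedence above amounts to a first-match lookup. Here is a minimal sketch of that logic, assuming a JSON `.valiqorrc` in the working directory; `resolve_openai_key` is illustrative, not a function the SDK actually exposes:

```python
import json
import os


def resolve_openai_key(method_key=None, client_key=None, rc_path=".valiqorrc"):
    """Illustrative resolution order: method param -> constructor param ->
    VALIQOR_OPENAI_API_KEY env var -> .valiqorrc file -> None.
    Returning None means the backend falls back to its server-side key."""
    if method_key is not None:
        return method_key
    if client_key is not None:
        return client_key
    env_key = os.environ.get("VALIQOR_OPENAI_API_KEY")
    if env_key:
        return env_key
    if os.path.exists(rc_path):
        with open(rc_path) as f:
            return json.load(f).get("openai_api_key")
    return None
```

Note that a per-method key always wins, even when a constructor key, env var, and config file are all present.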
## How it works

When you provide an OpenAI key, the SDK includes it in the JSON request body:

```
Your code → SDK → POST /v2/failure-analysis/analyze
                  Body: { "dataset": [...], "openai_api_key": "sk-..." }
                  → Backend uses YOUR key for LLM judge calls
```
The key is:

- Request-scoped — not persisted by the backend
- Sent in the request body — not as a header
- Used only for LLM judge calls — your Valiqor API key (`vq_...`) is still used for authentication
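To make the request shape concrete, here is a hypothetical sketch of how such a payload could be assembled (the endpoint path comes from the diagram above; `build_analyze_request` is not the SDK's actual internal code):

```python
import json


def build_analyze_request(dataset, openai_api_key=None):
    """Hypothetical payload builder: the OpenAI key travels in the JSON
    body, while the Valiqor key (vq_...) would go in an auth header,
    which is not shown here."""
    body = {"dataset": dataset}
    if openai_api_key is not None:
        # Request-scoped: included per call, never persisted server-side
        body["openai_api_key"] = openai_api_key
    return "POST", "/v2/failure-analysis/analyze", json.dumps(body)


method, path, payload = build_analyze_request(
    [{"input": "q", "output": "a"}], openai_api_key="sk-..."
)
```

Omitting `openai_api_key` simply leaves the field out of the body, which is what triggers the server-side key fallback.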
## Supported methods

BYOK is supported in all LLM-based operations:

| Module | Methods |
|---|---|
| Failure Analysis | `run()`, `run_async()` |
| Evaluation | `evaluate()`, `evaluate_trace()`, `evaluate_async()` |
| Security | `audit()`, `audit_trace()`, `red_team()`, `audit_async()`, `red_team_async()` |
Heuristic metrics (`contains`, `levenshtein`, `equals`, `regex_match`) don’t use LLM judges and therefore don’t use your OpenAI key.
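One way to picture the split, using only the metric names listed on this page (the helper name is illustrative, not an SDK function):

```python
# Heuristic metrics are deterministic string checks; every other metric
# on this page is scored by an LLM judge and so uses the BYOK key.
HEURISTIC_METRICS = {"contains", "levenshtein", "equals", "regex_match"}


def uses_openai_key(metric: str) -> bool:
    """Illustrative: True if evaluating this metric would call an LLM judge."""
    return metric not in HEURISTIC_METRICS
```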
## Example: CI/CD with BYOK

```python
import os
import sys

from valiqor import ValiqorClient

client = ValiqorClient(
    api_key=os.environ["VALIQOR_API_KEY"],
    project_name="my-app",
    openai_api_key=os.environ.get("OPENAI_API_KEY"),  # Optional BYOK
)

result = client.failure_analysis.run(dataset=test_cases)
if result.summary.should_gate_ci:
    print("❌ Critical failures — blocking deployment")
    sys.exit(1)
```
## Verifying BYOK is active
When you provide an OpenAI key, the backend uses it for all LLM judge calls in that request. You can verify by checking your OpenAI usage dashboard for corresponding API calls.
If your key is invalid or has insufficient credits, the backend returns an error and the SDK raises an `APIError`.