Now that you’ve seen a failure and learned to fix it, let’s understand how the system works under the hood.
Architecture
┌──────────────┐       HTTPS      ┌──────────────────┐        ┌──────────────┐
│              │ ───────────────► │                  │ ─────► │  LLM Judges  │
│  Your Code   │                  │ Valiqor Backend  │        │ (GPT-4, etc) │
│    + SDK     │ ◄─────────────── │                  │ ◄───── │              │
│              │   JSON results   │ FastAPI + Async  │        └──────────────┘
└──────────────┘                  │     Workers      │
                                  └──────────────────┘
                                           │
                                  ┌────────┴────────┐
                                  │   PostgreSQL    │
                                  │  (Results DB)   │
                                  └─────────────────┘
1. Your code calls the SDK (e.g., client.failure_analysis.run(...))
2. The SDK sends an HTTPS request to the Valiqor backend
3. The backend dispatches the work to LLM judges (GPT-4o by default)
4. The judges classify each item against the failure taxonomy
5. Results are returned to the SDK as structured Python objects
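To make the round trip concrete, the sketch below runs a one-item analysis and inspects the returned object. The import path (from valiqor import ValiqorClient) and the result attributes (failures, category, reasoning) are illustrative assumptions, not the documented schema:

    from valiqor import ValiqorClient  # assumed import path

    client = ValiqorClient(api_key="vq_...")

    result = client.failure_analysis.run(
        dataset=[{"input": "What is 2+2?", "output": "5"}]
    )

    # Results come back as structured Python objects, not raw JSON.
    # The attributes below are hypothetical, for illustration only.
    for failure in result.failures:
        print(failure.category)   # which taxonomy bucket it hit
        print(failure.reasoning)  # the judge's explanation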
Two analysis modes
Dataset mode (ad-hoc)
Pass your existing AI inputs and outputs directly. No tracing or instrumentation required.

    result = client.failure_analysis.run(
        dataset=[
            {
                "input": "What is 2+2?",
                "output": "5",
                "context": ["Basic arithmetic: 2+2=4"],
            }
        ]
    )

Best for: Quick checks, debugging, CI testing, evaluating prompt changes.

Trace mode (continuous)
Instrument your LLM calls with auto-tracing, then run analysis on captured traces.

    import valiqor.auto  # Auto-instruments OpenAI, Anthropic, etc.

    # ... your normal LLM calls happen and are traced ...

    result = client.failure_analysis.run(trace_id="tr_abc123")

Best for: Production monitoring, capturing full execution context, multi-step chains.
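To make the CI use case concrete, dataset mode drops naturally into a test suite. A minimal sketch, assuming a pytest-style test and a hypothetical result attribute failure_count (see the Failure Analysis page for the real result schema):

    from valiqor import ValiqorClient  # assumed import path

    def test_prompt_change_introduces_no_failures():
        # Config comes from environment variables here
        # (see the Configuration section below)
        client = ValiqorClient()
        result = client.failure_analysis.run(
            dataset=[
                {"input": "What is 2+2?", "output": "4"},
                {"input": "Capital of France?", "output": "Paris"},
            ]
        )
        # failure_count is an assumed attribute, for illustration only
        assert result.failure_count == 0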
Configuration
The SDK resolves configuration from multiple sources, in priority order:
Programmatic (highest priority)
Values passed directly to the constructor:

    client = ValiqorClient(
        api_key="vq_...",
        project_name="my-app",
        base_url="https://custom.valiqor.com",
    )
Environment variables
    export VALIQOR_API_KEY="vq_..."
    export VALIQOR_PROJECT_NAME="my-app"
    export VALIQOR_BACKEND_URL="https://api.valiqor.com"
    export VALIQOR_OPENAI_API_KEY="sk_..."  # For LLM judges
.valiqorrc file
A JSON config file in your project root or home directory:

    {
      "api_key": "vq_...",
      "project_name": "my-app",
      "backend_url": "https://api.valiqor.com"
    }
Defaults (lowest priority)
The SDK falls back to sensible defaults: backend_url is https://api.valiqor.com and the request timeout is 300 seconds.
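To make the precedence rule concrete, here is what the resolution order amounts to in plain Python. This is an illustrative sketch, not the SDK's actual implementation:

    import json
    import os
    from pathlib import Path

    # Illustrative only: the first source that yields a value wins,
    # mirroring the documented order
    # (programmatic > env var > .valiqorrc > default).
    def resolve_backend_url(programmatic: str | None = None) -> str:
        if programmatic is not None:
            return programmatic
        env = os.environ.get("VALIQOR_BACKEND_URL")
        if env is not None:
            return env
        for rc in (Path(".valiqorrc"), Path.home() / ".valiqorrc"):
            if rc.exists():
                value = json.loads(rc.read_text()).get("backend_url")
                if value is not None:
                    return value
        return "https://api.valiqor.com"  # documented default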
Async behaviour
For large datasets or complex analyses, the backend may process requests asynchronously:
SDK sends request → Backend returns 202 Accepted
→ SDK auto-polls for completion (transparent to you)
→ Result returned when ready
This is fully transparent — your code looks synchronous:
    # This may take 30+ seconds for large datasets,
    # but the SDK handles polling automatically
    result = client.failure_analysis.run(dataset=large_dataset)
If you want explicit async control, use run_async():
    handle = client.failure_analysis.run_async(dataset=large_dataset)

    # Check status
    print(handle.status())      # "running", "completed", etc.
    print(handle.is_running())  # True/False

    # Wait with progress callback
    result = handle.wait(
        on_progress=lambda status: print(f"Status: {status}")
    )
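A natural use of the explicit handle is launching several analyses at once and collecting results as they finish. A sketch that composes only the handle methods shown above; the datasets are placeholders you would supply:

    # checkout_ds and search_ds are placeholder datasets you supply.
    handles = {
        "checkout": client.failure_analysis.run_async(dataset=checkout_ds),
        "search": client.failure_analysis.run_async(dataset=search_ds),
    }

    # wait() blocks per handle, but the backend keeps processing the
    # others in the meantime, so total wall time is roughly the
    # slowest job rather than the sum of all jobs.
    for name, handle in handles.items():
        result = handle.wait(
            on_progress=lambda status, name=name: print(f"{name}: {status}")
        )
        print(f"{name}: final status = {handle.status()}")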
Authentication
Valiqor uses API key authentication:
Every request includes your API key in the X-API-Key header (handled by the SDK)
API keys are scoped to your organization
Each organization has its own quotas and project isolation
Keys can be created and revoked from the Dashboard
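If you ever need to call the API without the SDK, the same header applies. A minimal sketch using requests; the endpoint path below is a made-up example for illustration, not a documented route:

    import os

    import requests

    # The SDK normally sets this header for you; shown here only to
    # make the auth scheme concrete. The /v1/failure-analysis path
    # is a hypothetical example, not a documented endpoint.
    response = requests.post(
        "https://api.valiqor.com/v1/failure-analysis",  # assumed path
        headers={"X-API-Key": os.environ["VALIQOR_API_KEY"]},
        json={"dataset": [{"input": "What is 2+2?", "output": "5"}]},
    )
    response.raise_for_status()
    print(response.json())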
What to learn next
Failure Analysis: custom buckets, subcategories, batch analysis, and advanced options.
Evaluations: metric-based evaluations for hallucination, relevance, coherence, and more.
Security Audits: red-team your AI for prompt injection, data leakage, and jailbreaks.
Tracing: auto-instrument OpenAI, Anthropic, and LangChain calls for production monitoring.