Now that you’ve seen a failure and learned to fix it, let’s understand how the system works under the hood.

Architecture

┌──────────────┐        HTTPS        ┌──────────────────┐       ┌─────────────┐
│              │  ───────────────►   │                  │  ───► │ LLM Judges  │
│  Your Code   │                     │  Valiqor Backend │       │ (GPT-4, etc)│
│  + SDK       │  ◄───────────────   │                  │  ◄─── │             │
│              │    JSON results     │  FastAPI + Async │       └─────────────┘
└──────────────┘                     │  Workers         │
                                     └────────┬─────────┘
                                              │
                                     ┌────────┴────────┐
                                     │   PostgreSQL    │
                                     │   (Results DB)  │
                                     └─────────────────┘
  1. Your code calls the SDK (e.g., client.failure_analysis.run(...))
  2. The SDK sends an HTTPS request to the Valiqor backend
  3. The backend dispatches the work to LLM judges (GPT-4o by default)
  4. The judges classify each item against the failure taxonomy
  5. Results are returned to the SDK as structured Python objects
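The five steps above can be walked through with a stubbed backend so the data flow is concrete. The payload shape, `StubBackend`, and `run_analysis` are illustrative assumptions; the real SDK and Valiqor backend handle all of this for you.

```python
import json

class StubBackend:
    """Stands in for the Valiqor backend + LLM judges (steps 3-4)."""

    def handle(self, payload: str) -> str:
        items = json.loads(payload)["dataset"]
        # A real judge would classify each item against the failure taxonomy;
        # here we just echo a placeholder verdict per item.
        verdicts = [{"input": item["input"], "failure": "placeholder"}
                    for item in items]
        return json.dumps({"results": verdicts})

def run_analysis(dataset: list, backend: StubBackend) -> list:
    payload = json.dumps({"dataset": dataset})  # step 2: serialize the request
    response = backend.handle(payload)          # steps 3-4: dispatch to judges
    return json.loads(response)["results"]      # step 5: structured results

results = run_analysis([{"input": "What is 2+2?", "output": "5"}], StubBackend())
```

In the real SDK, step 2 is an HTTPS call and steps 3–4 run server-side; only the structured results come back to your process.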

Two analysis modes

Pass your existing AI inputs and outputs directly. No tracing or instrumentation required.
result = client.failure_analysis.run(
    dataset=[
        {
            "input": "What is 2+2?",
            "output": "5",
            "context": ["Basic arithmetic: 2+2=4"],
        }
    ]
)
Best for: Quick checks, debugging, CI testing, evaluating prompt changes.

Configuration

The SDK resolves configuration from multiple sources, in priority order:
1. Programmatic (highest priority)

Values passed directly to the constructor:
client = ValiqorClient(
    api_key="vq_...",
    project_name="my-app",
    base_url="https://custom.valiqor.com",
)
2. Environment variables

export VALIQOR_API_KEY="vq_..."
export VALIQOR_PROJECT_NAME="my-app"
export VALIQOR_BACKEND_URL="https://api.valiqor.com"
export VALIQOR_OPENAI_API_KEY="sk_..."  # For LLM judges
3. .valiqorrc file

A JSON config file in your project root or home directory:
{
  "api_key": "vq_...",
  "project_name": "my-app",
  "backend_url": "https://api.valiqor.com"
}
4. Defaults (lowest priority)

The SDK uses sensible defaults: backend_url defaults to https://api.valiqor.com, timeout defaults to 300 seconds.
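The four-level resolution order can be sketched as follows. The function names and the `DEFAULTS` dict are illustrative, not the SDK's internals; only the precedence itself comes from the docs above.

```python
import json
import os
from pathlib import Path

DEFAULTS = {"backend_url": "https://api.valiqor.com", "timeout": 300}

def load_rc() -> dict:
    """Read .valiqorrc from the project root, then the home directory."""
    for directory in (Path.cwd(), Path.home()):
        rc = directory / ".valiqorrc"
        if rc.exists():
            return json.loads(rc.read_text())
    return {}

def resolve(option: str, programmatic: dict):
    """Return the value from the first source that defines the option."""
    if option in programmatic:
        return programmatic[option]              # 1. constructor argument
    env_value = os.environ.get(f"VALIQOR_{option.upper()}")
    if env_value is not None:
        return env_value                         # 2. environment variable
    rc = load_rc()
    if option in rc:
        return rc[option]                        # 3. .valiqorrc file
    return DEFAULTS.get(option)                  # 4. built-in default
```

A constructor argument always wins, and the built-in defaults only apply when nothing else defines the option.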

Async behaviour

For large datasets or complex analyses, the backend may process requests asynchronously:
SDK sends request → Backend returns 202 Accepted
                   → SDK auto-polls for completion (transparent to you)
                   → Result returned when ready
This is fully transparent — your code looks synchronous:
# This may take 30+ seconds for large datasets,
# but the SDK handles polling automatically
result = client.failure_analysis.run(dataset=large_dataset)
If you want explicit async control, use run_async():
handle = client.failure_analysis.run_async(dataset=large_dataset)

# Check status
print(handle.status())       # "running", "completed", etc.
print(handle.is_running())   # True/False

# Wait with progress callback
result = handle.wait(
    on_progress=lambda status: print(f"Status: {status}")
)
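Under the hood, the transparent polling that run() performs looks roughly like the loop below. The `check_status` callable and the interval/timeout values are illustrative assumptions; the status strings mirror the handle API above.

```python
import time

def poll_until_done(check_status, interval: float = 2.0,
                    timeout: float = 300.0) -> str:
    """Poll until the job leaves the 'running' state or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = check_status()
        if status != "running":
            return status          # e.g. "completed"
        time.sleep(interval)       # back off between polls
    raise TimeoutError("analysis did not finish within the timeout")
```

With run(), this loop happens inside the SDK; run_async() hands you the handle so you can drive it yourself.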

Authentication

Valiqor uses API key authentication:
  • Every request includes your API key in the X-API-Key header (handled by the SDK)
  • API keys are scoped to your organization
  • Each organization has its own quotas and project isolation
  • Keys can be created and revoked from the Dashboard
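As a minimal sketch of the header the SDK attaches for you: the `auth_headers` helper below is hypothetical, and the `vq_` prefix check is inferred from the key examples in these docs, not a documented guarantee.

```python
def auth_headers(api_key: str) -> dict:
    """Build the auth header the SDK adds to every request."""
    if not api_key.startswith("vq_"):
        # The docs' examples all show keys of the form "vq_..." (assumption).
        raise ValueError("expected a Valiqor API key (examples use a 'vq_' prefix)")
    return {"X-API-Key": api_key}
```

You should never need to build this header yourself; it is shown only to make the bullet points above concrete.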

What to learn next