Architecture
- Your code calls the SDK (e.g., client.failure_analysis.run(...))
- The SDK sends an HTTPS request to the Valiqor backend
- The backend dispatches the work to LLM judges (GPT-4o by default)
- The judges classify each item against the failure taxonomy
- Results are returned to the SDK as structured Python objects
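The flow above can be sketched end to end. Everything in this sketch is illustrative: the payload shape, the judge response format, and the result field names are assumptions for the sake of the example, not the documented wire format.

```python
import json
from dataclasses import dataclass

# Hypothetical request payload the SDK might send over HTTPS.
# Field names here are assumptions, not the real wire format.
payload = {
    "items": [{"input": "What is 2+2?", "output": "5"}],
    "judge_model": "gpt-4o",  # the default LLM judge per the docs
}
request_body = json.dumps(payload)

# Structured result object, mirroring "results are returned to the SDK
# as structured Python objects" (field names are again assumptions).
@dataclass
class FailureResult:
    item_index: int
    bucket: str

# Simulated judge response: each item classified against the failure taxonomy.
response = json.loads('[{"item_index": 0, "bucket": "factual_error"}]')
results = [FailureResult(**r) for r in response]
print(results[0].bucket)  # → factual_error
```

The real SDK hides the request/response cycle behind the `run(...)` call; this only illustrates the round trip.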
Two analysis modes
- Dataset mode (ad-hoc)
- Trace mode (continuous)
Dataset mode: pass your existing AI inputs and outputs directly. No tracing or instrumentation required.

Best for: quick checks, debugging, CI testing, evaluating prompt changes.
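A minimal sketch of preparing data for dataset mode. The record keys (`input`, `output`) and the `items` parameter name are assumptions; the point is that you hand the SDK input/output pairs you already have, with no instrumentation.

```python
# Input/output pairs you already have: no tracing or instrumentation needed.
# The record keys ("input", "output") are assumed for illustration.
items = [
    {"input": "Summarize this ticket.", "output": "Customer wants a refund."},
    {"input": "Translate 'hello' to French.", "output": "Bonjour"},
]

# Hypothetical call; the actual signature may differ:
# report = client.failure_analysis.run(items=items)
print(len(items))  # → 2
```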
Configuration
The SDK resolves configuration from multiple sources, in priority order:

Async behaviour
For large datasets or complex analyses, the backend may process requests asynchronously:

run_async():
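The asynchronous path might look like the following sketch. The `run_async` call, the job handle, and its polling interface are all assumptions based on the section above, so a stand-in `Job` class is used here in place of the real SDK object.

```python
import time

# Stand-in for the job handle an async call might return; the real SDK's
# interface is not documented here, so this class is purely illustrative.
class Job:
    def __init__(self):
        self._polls = 0

    def status(self):
        # Pretend the backend finishes after two polls.
        self._polls += 1
        return "completed" if self._polls >= 2 else "running"

# job = client.failure_analysis.run_async(items=items)  # hypothetical call
job = Job()

# Generic poll-until-done loop for asynchronously processed requests.
while job.status() != "completed":
    time.sleep(0)  # in real code, back off between polls

print("done")  # → done
```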
Authentication
Valiqor uses API key authentication:
- Every request includes your API key in the X-API-Key header (handled by the SDK)
- API keys are scoped to your organization
- Each organization has its own quotas and project isolation
- Keys can be created and revoked from the Dashboard
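Although the SDK attaches the header for you, here is a sketch of what authentication looks like at the HTTP level. The environment variable name `VALIQOR_API_KEY` is an assumption; only the `X-API-Key` header itself comes from the docs above.

```python
import os

# Read the key from the environment; the variable name is an assumption,
# so check the Dashboard / SDK docs for the real one.
api_key = os.environ.get("VALIQOR_API_KEY", "sk-example")

# Every request carries the key in the X-API-Key header (per the docs);
# the SDK adds this for you automatically.
headers = {"X-API-Key": api_key}
print("X-API-Key" in headers)  # → True
```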
What to learn next
Failure Analysis
Custom buckets, subcategories, batch analysis, and advanced options.
Evaluations
Run metric-based evaluations: hallucination, relevance, coherence, and more.
Security Audits
Red-team your AI for prompt injection, data leakage, and jailbreaks.
Tracing
Auto-instrument OpenAI, Anthropic, and LangChain calls for production monitoring.