Tracing captures every LLM call, tool invocation, and retrieval step in your AI application as structured traces and spans, which you can then analyze with Failure Analysis, Evaluations, or Security Audits.
The fastest way to start tracing: one import, no code changes.
```python
import valiqor.auto  # That's it!
# All OpenAI, Anthropic, LangChain calls are now traced automatically

import openai

client = openai.OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
)
# ^ This call is automatically captured as a trace
```
On import, `valiqor.auto`:

- Loads config from env vars / `.valiqorrc`
- Calls `enable_autolog()` for all supported providers
- Starts capturing LLM calls as traces
Set `VALIQOR_QUIET=true` to suppress the startup message in production.
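For example, in a deployment script (a minimal sketch: the `VALIQOR_QUIET` variable comes from this page, while the entrypoint name `app.py` is illustrative):

```shell
# Suppress the valiqor startup message before the process starts;
# the app does `import valiqor.auto` at the top of app.py
export VALIQOR_QUIET=true
python app.py
```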
```python
from valiqor.trace import enable_autolog

# Enable for all supported providers
enable_autolog()

# Or enable for specific providers only
enable_autolog(providers=["openai", "anthropic"])
```
Use `trace_workflow` as a context manager or decorator. It creates a top-level trace that groups all nested operations:
```python
from valiqor.trace.autolog import trace_workflow, trace_function

# As a context manager
with trace_workflow("my-rag-pipeline"):
    # All LLM calls, tool uses, and nested functions within
    # this block are captured under one trace
    docs = retrieve_documents(query)
    answer = generate_answer(query, docs)

# As a decorator
@trace_workflow("chat-handler")
def handle_chat(message):
    response = openai.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": message}],
    )
    return response.choices[0].message.content
```
```python
from valiqor.trace.autolog import start_conversation, end_conversation

# Start a conversation-level trace
start_conversation(conversation_id="session_abc123")

# Each user message creates spans under this conversation trace
response1 = llm.chat("Hello!")
response2 = llm.chat("Tell me more.")

# End the conversation
end_conversation()
```
Read back traces from the backend using the trace query client:
```python
# List recent traces
traces = client.trace_query.list_traces(project_name="my-app", limit=10)

# Get a specific trace
trace = client.trace_query.get_trace(trace_id="tr_abc123")

# Get trace summary
summary = client.trace_query.get_summary(trace_id="tr_abc123")

# Get messages from a trace
messages = client.trace_query.get_messages(trace_id="tr_abc123")

# Get all spans
spans = client.trace_query.get_spans(trace_id="tr_abc123")

# Get full trace with all data
full = client.trace_query.get_full_trace(trace_id="tr_abc123")

# Get eval steps (for evaluation integration)
steps = client.trace_query.get_eval_steps(trace_id="tr_abc123")
```
```python
# Failure Analysis on a trace
fa_result = client.failure_analysis.run(trace_id="tr_abc123")

# Evaluation on a trace (pass the trace dict, not just the ID)
trace_data = client.trace_query.get_full_trace(trace_id="tr_abc123")
eval_result = client.eval.evaluate_trace(
    trace=trace_data,
    metrics=["hallucination", "answer_relevance"],
)

# Security audit on a trace
sec_result = client.security.audit_trace(trace=trace_data)
```
Traces & Spans →
Deep dive into trace structure, span kinds, and metadata.