Tracing captures every LLM call, tool invocation, and retrieval step in your AI application as structured traces and spans — which you can then analyze with Failure Analysis, Evaluations, or Security Audits.
## Zero-config auto-instrumentation
The fastest way to start tracing — one import, no code changes:
```python
import valiqor.auto  # That's it!
# All OpenAI, Anthropic, LangChain calls are now traced automatically

import openai

client = openai.OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
)
# ^ This call is automatically captured as a trace
```
On import, `valiqor.auto`:

- Loads config from env vars / `.valiqorrc`
- Calls `enable_autolog()` for all supported providers
- Starts capturing LLM calls to traces
Set `VALIQOR_QUIET=true` to suppress the startup message in production.
## Configuration via environment
```bash
export VALIQOR_API_KEY="vq_..."
export VALIQOR_PROJECT_NAME="my-app"
export VALIQOR_INTELLIGENCE=true  # Upload traces to cloud (default: true)
export VALIQOR_DISABLE=true       # Disable tracing entirely
```
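The same settings can also live in a `.valiqorrc` file at the project root. The exact file format and key names below are assumptions, shown here only to illustrate how the keys mirror the environment variables above:

```ini
; hypothetical .valiqorrc (format and keys are assumptions,
; mirroring the VALIQOR_* environment variables)
api_key = vq_...
project_name = my-app
intelligence = true
```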
## Selective auto-instrumentation
Enable tracing for specific providers only:
```python
from valiqor.trace import enable_autolog

# Enable for all supported providers
enable_autolog()

# Or enable for specific providers only
enable_autolog(providers=["openai", "anthropic"])
```
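Some environments may not have every provider SDK installed. A defensive sketch; the `installed_providers` helper is hypothetical, not part of valiqor (which may already skip absent SDKs on its own):

```python
import importlib.util

def installed_providers(candidates):
    # Keep only providers whose package is importable, so that
    # enable_autolog() is never asked to patch an SDK that isn't there.
    return [name for name in candidates if importlib.util.find_spec(name) is not None]

# enable_autolog(providers=installed_providers(["openai", "anthropic"]))
```

This assumes the provider name matches the package name, which holds for `openai` and `anthropic`.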
### Per-provider functions
```python
from valiqor.trace.autolog import (
    enable_openai,
    enable_anthropic,
    enable_langchain,
    enable_ollama,
    enable_agno,
)

enable_openai()     # OpenAI sync, async, streaming, tool calls, embeddings
enable_anthropic()  # Anthropic sync, async, streaming, tool use
enable_langchain()  # LangChain chains, agents, LCEL, RAG, LangGraph
enable_ollama()     # Ollama chat, generate, embeddings
enable_agno()       # Agno agents, tools, teams
```
## Trace workflows and functions
For structured tracing beyond auto-instrumentation, use `trace_workflow` and `trace_function`:
### `trace_workflow` — creates a new trace
Use as a context manager or decorator. Creates a top-level trace that groups all nested operations:
```python
import openai

from valiqor.trace.autolog import trace_workflow, trace_function

# As a context manager
with trace_workflow("my-rag-pipeline"):
    # All LLM calls, tool uses, and nested functions within
    # this block are captured under one trace
    docs = retrieve_documents(query)
    answer = generate_answer(query, docs)

# As a decorator
@trace_workflow("chat-handler")
def handle_chat(message):
    response = openai.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": message}],
    )
    return response.choices[0].message.content
```
### `trace_function` — creates a span
Creates a span under the currently active trace (does not create a new trace):
```python
@trace_function("retrieve-docs")
def retrieve_documents(query):
    # This becomes a span within the active trace
    results = vector_db.search(query, top_k=5)
    return results

@trace_function("generate-answer")
def generate_answer(query, docs):
    response = openai.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": f"Context: {docs}"},
            {"role": "user", "content": query},
        ],
    )
    return response.choices[0].message.content
```
### Combined example
```python
@trace_workflow("rag-pipeline")
def run_rag(query: str) -> str:
    docs = retrieve(query)
    answer = generate(query, docs)
    return answer

@trace_function("retrieval")
def retrieve(query: str):
    return vector_db.search(query, top_k=5)

@trace_function("generation")
def generate(query: str, docs: list):
    return llm.complete(query, context=docs)

# Produces:
# Trace: rag-pipeline
# ├── Span: retrieval
# └── Span: generation
#     └── Span: LLM Call (auto-captured by enable_openai)
```
## Conversation tracking
For multi-turn chat applications:
```python
from valiqor.trace.autolog import start_conversation, end_conversation

# Start a conversation-level trace
start_conversation(conversation_id="session_abc123")

# Each user message creates spans under this conversation trace
response1 = llm.chat("Hello!")
response2 = llm.chat("Tell me more.")

# End the conversation
end_conversation()
```
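If a turn raises midway, `end_conversation()` never runs and the conversation trace is left open. A small wrapper can guarantee cleanup; this is a sketch, not part of valiqor's API, and the start/end callables are injected so the helper stays generic:

```python
from contextlib import contextmanager

@contextmanager
def conversation(conversation_id, start, end):
    # start/end are meant to be start_conversation / end_conversation;
    # the finally block closes the trace even if a turn raises.
    start(conversation_id=conversation_id)
    try:
        yield
    finally:
        end()

# Usage:
# with conversation("session_abc123", start_conversation, end_conversation):
#     llm.chat("Hello!")
```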
## Exporters
Control where traces are sent:
```python
from valiqor.trace.exporters import ConsoleExporter, FileExporter, CloudExporter

# Print traces to console (useful for debugging)
console = ConsoleExporter(verbose=True)

# Write traces as JSON files to disk
file_exp = FileExporter(output_dir="valiqor_output/traces")

# Upload traces to Valiqor backend
cloud = CloudExporter(api_key="vq_...", backend_url="https://api.valiqor.com")
```
By default, `valiqor.auto` configures both `FileExporter` and `CloudExporter` (if an API key is set).
## Span kinds
Every span is classified with a `ValiqorSpanKind`:

| Kind | Value | Description |
|---|---|---|
| `WORKFLOW_NODE` | `"workflow_node"` | Named workflow step (e.g., LangGraph node) |
| `LLM_CALL` | `"llm_call"` | LLM API invocation |
| `RETRIEVER` | `"retriever"` | Retrieval / search operation |
| `TOOL` | `"tool"` | Tool / function execution |
| `EVALUATOR` | `"evaluator"` | Evaluation / judging step |
| `EMBEDDING` | `"embedding"` | Embedding computation |
| `SYSTEM` | `"system"` | Internal / framework span |
| `UNKNOWN` | `"unknown"` | Fallback |
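These kinds make traces easy to slice programmatically. For example, tallying spans per kind; this assumes `get_spans()` returns dicts with a `"kind"` key, which is an assumption about the span schema:

```python
from collections import Counter

def spans_by_kind(spans):
    # Count spans per ValiqorSpanKind value; spans without a
    # "kind" key fall back to "unknown".
    return Counter(span.get("kind", "unknown") for span in spans)

# spans_by_kind(client.trace_query.get_spans(trace_id="tr_abc123"))
```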
## RAG stage types
For RAG applications, spans can be annotated with a `ValiqorStage`:

| Stage | Description |
|---|---|
| `RETRIEVAL` | Document / knowledge retrieval |
| `EVALUATION` | Document grading, relevance checks |
| `SYNTHESIS` | Answer generation, response synthesis |
| `ROUTING` | Query routing, decision making |
| `ORCHESTRATION` | Workflow coordination, graph execution |
| `LLM_CALL` | Direct LLM invocations |
| `TOOL_EXECUTION` | Tool / function executions |
| `EMBEDDING` | Embedding generation |
| `RERANKING` | Result reranking |
| `PREPROCESSING` | Query preprocessing, transformation |
| `POSTPROCESSING` | Output formatting, filtering |
## CLI workflow
For codebases where you prefer CLI-based instrumentation:
```bash
# 1. Scan codebase and generate suggested trace points
valiqor trace init --path .

# 2. Apply @trace_function decorators to suggested functions
valiqor trace apply --path . --dry-run  # Preview changes
valiqor trace apply --path .            # Apply changes

# 3. Run your app (traces are captured automatically)
python app.py

# 4. Upload captured traces
valiqor upload --path valiqor_output/traces

# 5. Remove auto-applied decorators if needed
valiqor trace remove --path .

# 6. Re-scan after code changes
valiqor trace refresh --path .
```
## Querying traces
Read back traces from the backend using the trace query client:
```python
# "client" is an initialized Valiqor client

# List recent traces
traces = client.trace_query.list_traces(project_name="my-app", limit=10)

# Get a specific trace
trace = client.trace_query.get_trace(trace_id="tr_abc123")

# Get trace summary
summary = client.trace_query.get_summary(trace_id="tr_abc123")

# Get messages from a trace
messages = client.trace_query.get_messages(trace_id="tr_abc123")

# Get all spans
spans = client.trace_query.get_spans(trace_id="tr_abc123")

# Get full trace with all data
full = client.trace_query.get_full_trace(trace_id="tr_abc123")

# Get eval steps (for evaluation integration)
steps = client.trace_query.get_eval_steps(trace_id="tr_abc123")
```
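Query results feed naturally into downstream tooling. For instance, extracting the final user/assistant exchange from `get_messages()`; this assumes OpenAI-style `{"role", "content"}` dicts, which is an assumption about the return schema:

```python
def last_turn(messages):
    # Walk backwards to find the most recent user and assistant contents;
    # returns None for either side if it is absent.
    user = next((m["content"] for m in reversed(messages) if m["role"] == "user"), None)
    assistant = next((m["content"] for m in reversed(messages) if m["role"] == "assistant"), None)
    return user, assistant

# question, answer = last_turn(client.trace_query.get_messages(trace_id="tr_abc123"))
```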
## From trace to analysis
Once you have traces, run any analysis on them:
```python
# Failure Analysis on a trace
fa_result = client.failure_analysis.run(trace_id="tr_abc123")

# Evaluation on a trace (pass the trace dict, not just the ID)
trace_data = client.trace_query.get_full_trace(trace_id="tr_abc123")
eval_result = client.eval.evaluate_trace(
    trace=trace_data,
    metrics=["hallucination", "answer_relevance"],
)

# Security audit on a trace
sec_result = client.security.audit_trace(trace=trace_data)
```