Valiqor automatically traces all OpenAI chat.completions.create calls —
both sync and async. Every call is recorded as a span with model name,
token usage, cost, and full message content.
Install
```bash
pip install "valiqor[openai]"
```
This installs valiqor plus openai>=1.0.0. (The quotes keep shells like zsh from expanding the brackets.)
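To sanity-check the install, you can read the installed versions with the standard library (nothing Valiqor-specific; importlib.metadata is stdlib as of Python 3.8):

```python
from importlib.metadata import version

# Both distribution names match what pip installed above
print(version("valiqor"))
print(version("openai"))  # should report >= 1.0.0
```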
Zero-Config (Recommended)
Add a single import at the top of your app — all OpenAI calls are
automatically traced:
```python
import valiqor.auto  # ← Add this line

import openai

client = openai.OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Explain quantum computing"}],
)
print(response.choices[0].message.content)
```
That’s it. Every chat.completions.create call is now traced with full
metadata.
Selective Instrumentation
If you only want OpenAI tracing (not other providers), use the
provider-specific function:
```python
from valiqor.trace import openai_autolog

openai_autolog()

# Or using the namespace-style API:
from valiqor.trace import OpenAI

OpenAI.autolog()
```
Both are equivalent — they enable tracing only for OpenAI.
Async Support
Async OpenAI calls are automatically traced too:
```python
import asyncio

import valiqor.auto
import openai

client = openai.AsyncOpenAI()

async def main():
    response = await client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Explain quantum computing"}],
    )
    print(response.choices[0].message.content)

asyncio.run(main())
```
Tool Calls
Tool calls are captured automatically. Each tool call in the response is recorded with the function name, call ID, and parsed arguments:
```python
import valiqor.auto
import openai

client = openai.OpenAI()

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)
# Tool calls are automatically captured in the trace span
```
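For reference, the recorded values come from fields the standard OpenAI SDK already exposes on the response; you can inspect them yourself with plain SDK code (nothing Valiqor-specific):

```python
import json

# Each tool call carries an ID, a function name, and JSON-encoded arguments
for tool_call in response.choices[0].message.tool_calls or []:
    print(tool_call.id)             # call ID, e.g. "call_abc123"
    print(tool_call.function.name)  # "get_weather"
    args = json.loads(tool_call.function.arguments)  # arguments arrive as a JSON string
    print(args["city"])             # "Paris"
```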
What Gets Captured
Each traced OpenAI call records:
| Field | Description |
| --- | --- |
| model | Model name (e.g. gpt-4o, gpt-4o-mini) |
| prompt_tokens | Input token count |
| completion_tokens | Output token count |
| total_tokens | Combined token count |
| cost | Estimated cost in USD |
| system_fingerprint | OpenAI system fingerprint |
| messages | User and assistant messages |
| tool_calls | Function name, ID, and arguments (if any) |
| duration_ms | Call latency |
| status | Success or error |
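Most of these fields map directly onto attributes of the raw SDK response; cost, duration_ms, and status are computed by Valiqor itself. As a sketch, here is where the raw values live on a standard OpenAI response object:

```python
# Token counts and fingerprint come straight off the ChatCompletion object
print(response.model)                    # resolved model name, e.g. "gpt-4o-2024-08-06"
print(response.usage.prompt_tokens)      # input token count
print(response.usage.completion_tokens)  # output token count
print(response.usage.total_tokens)       # combined token count
print(response.system_fingerprint)       # may be None for some models
```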
With Workflows
Combine with trace_workflow to group multiple OpenAI calls into a
single trace:
```python
import valiqor.auto
from valiqor.trace import trace_workflow
import openai

client = openai.OpenAI()

with trace_workflow("research-assistant"):
    # Step 1: Generate outline
    outline = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Create an outline about AI safety"}],
    )

    # Step 2: Write each section
    content = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "user",
                "content": f"Write a detailed section based on: {outline.choices[0].message.content}",
            }
        ],
    )
```
Both calls appear as child spans under the research-assistant trace.
Disabling
To disable OpenAI tracing:
```python
from valiqor.trace import disable_autolog

disable_autolog("openai")  # Disable OpenAI only
disable_autolog()          # Disable all providers
```
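This page doesn't document a re-enable call, but since openai_autolog() turns tracing on, re-running it is a plausible way to toggle tracing at runtime (an assumption, not confirmed behavior):

```python
from valiqor.trace import disable_autolog, openai_autolog

disable_autolog("openai")  # pause tracing around calls you don't want recorded
# ... untraced OpenAI calls here ...
openai_autolog()           # assumed to re-enable OpenAI tracing afterwards
```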
Limitations
- Streaming is not currently instrumented; streamed responses are not captured in traces.
- Embeddings (client.embeddings.create) are not traced; only chat.completions.create is instrumented.
Next Steps
- Tracing Guide: learn about traces, spans, workflows, and exporters.
- Failure Analysis: run failure analysis on your traced OpenAI calls.