valiqor automatically traces `chat.completions.create` calls, both sync and async. Every call is recorded as a span with model name, token usage, cost, and full message content.
## Install
Requires `valiqor` plus `openai>=1.0.0`.
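Assuming the package is published under the name `valiqor` (not confirmed in these docs), a typical install would be:

```shell
pip install valiqor "openai>=1.0.0"
```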
## Zero-Config (Recommended)
Add a single import at the top of your app, and all OpenAI calls are automatically traced. Every `chat.completions.create` call is then recorded with full metadata.
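The exact import path is not shown in these docs; assuming valiqor follows the common auto-instrumentation-on-import pattern, a sketch might look like:

```python
# Hypothetical import path: the real module name may differ.
# Assumes valiqor patches the OpenAI client as a side effect of import.
import valiqor.auto  # noqa: F401

from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
)
# The call above is recorded as a span with model, tokens, cost, and messages.
```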
## Selective Instrumentation
If you only want OpenAI tracing (not other providers), use the provider-specific instrumentation function.
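The provider-specific function is not named in this chunk; a sketch under that assumption:

```python
# Hypothetical function name: the docs mention a provider-specific
# entry point, but its exact name is not shown here.
import valiqor

valiqor.instrument_openai()  # assumed name; traces only OpenAI calls
```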
## Async Support

Async OpenAI calls are automatically traced too.
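For example, a call through the standard `AsyncOpenAI` client (assuming instrumentation is already enabled) is traced the same way as the sync client:

```python
import asyncio

from openai import AsyncOpenAI

client = AsyncOpenAI()

async def main() -> None:
    # Recorded as a span just like the synchronous client.
    response = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Hello!"}],
    )
    print(response.choices[0].message.content)

asyncio.run(main())
```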
## Tool / Function Calls

Tool calls are captured automatically. Each tool call in the response is recorded with the function name, call ID, and parsed arguments.
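For instance, a standard tool-calling request (assuming instrumentation is enabled; the tool definition below is illustrative) produces spans that include each returned tool call:

```python
import json

from openai import OpenAI

client = OpenAI()

# Illustrative tool definition using the standard OpenAI tools schema.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What is the weather in Paris?"}],
    tools=tools,
)

# Each entry here is what the trace records: name, call ID, parsed arguments.
for call in response.choices[0].message.tool_calls or []:
    print(call.id, call.function.name, json.loads(call.function.arguments))
```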
## What Gets Captured

Each traced OpenAI call records:

| Field | Description |
|---|---|
| `model` | Model name (e.g. `gpt-4o`, `gpt-4o-mini`) |
| `prompt_tokens` | Input token count |
| `completion_tokens` | Output token count |
| `total_tokens` | Combined token count |
| `cost` | Estimated cost in USD |
| `system_fingerprint` | OpenAI system fingerprint |
| `messages` | User and assistant messages |
| `tool_calls` | Function name, ID, and arguments (if any) |
| `duration_ms` | Call latency |
| `status` | Success or error |
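To make the table concrete, here is an illustrative record with those fields. The field names follow the table above, but the concrete span object/format valiqor emits is not shown in these docs:

```python
# Illustrative shape only; real spans may nest or name fields differently.
example_span = {
    "model": "gpt-4o-mini",
    "prompt_tokens": 12,
    "completion_tokens": 34,
    "total_tokens": 46,
    "cost": 0.000023,
    "system_fingerprint": "fp_example",
    "messages": [
        {"role": "user", "content": "Hello!"},
        {"role": "assistant", "content": "Hi there!"},
    ],
    "tool_calls": [],
    "duration_ms": 812,
    "status": "success",
}

# total_tokens is the sum of prompt and completion tokens.
assert example_span["total_tokens"] == (
    example_span["prompt_tokens"] + example_span["completion_tokens"]
)
```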
## With Workflows
Combine with `trace_workflow` to group multiple OpenAI calls into a single trace (e.g. a `research-assistant` trace).
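A sketch assuming `trace_workflow` is usable as a context manager; the exact import path and signature are not shown in these docs:

```python
from openai import OpenAI
from valiqor import trace_workflow  # assumed import path

client = OpenAI()

# Both calls below are grouped under one "research-assistant" trace.
with trace_workflow("research-assistant"):
    outline = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Outline a report on solar power."}],
    )
    draft = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": outline.choices[0].message.content}],
    )
```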
## Disabling
To disable OpenAI tracing:
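The disabling call itself is not shown in this chunk; the name below is purely an assumption, mirroring the instrumentation entry point:

```python
# Hypothetical API: assumed counterpart to the instrumentation function.
import valiqor

valiqor.uninstrument_openai()  # name not confirmed by these docs
```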
## Limitations

- Streaming is not currently instrumented; streamed responses are not captured in traces
- Embeddings (`client.embeddings.create`) are not traced; only `chat.completions.create` is instrumented