Valiqor provides the deepest integration with LangChain, automatically tracing chat models, chains, tools, retrievers, and LangGraph nodes. RAG pipelines get dedicated retrieval spans with document scores and relevance data.

Install

pip install valiqor[langchain]
This installs valiqor plus langchain>=0.1.0 and langchain-core>=0.1.0.
Add a single import at the top of your app — all LangChain components are automatically traced:
import valiqor.auto  # ← Add this line

from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage

llm = ChatOpenAI(model="gpt-4o")
response = llm.invoke([HumanMessage(content="Explain quantum computing")])
print(response.content)
Every LLM call, chain invocation, tool execution, and retrieval is traced.

Selective Instrumentation

If you only want LangChain tracing:
from valiqor.trace import langchain_autolog

langchain_autolog()

# Or using the namespace-style API:
from valiqor.trace import LangChain
LangChain.autolog()
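Conceptually, autolog-style instrumentation works by wrapping methods such as `invoke` with a tracing decorator at patch time. The sketch below illustrates the idea in plain Python with a stand-in model class; it is not Valiqor's actual implementation:

```python
import functools
import time

def traced(method):
    """Wrap a method so each call is recorded as a span."""
    @functools.wraps(method)
    def wrapper(self, *args, **kwargs):
        start = time.perf_counter()
        result = method(self, *args, **kwargs)
        self._spans.append({
            "name": f"{type(self).__name__}.{method.__name__}",
            "duration_ms": (time.perf_counter() - start) * 1000,
        })
        return result
    return wrapper

class FakeChatModel:
    """Stand-in for a BaseChatModel subclass."""
    def __init__(self):
        self._spans = []

    def invoke(self, messages):
        return f"echo: {messages[-1]}"

# Patch the class method once, as an autolog() call would do
FakeChatModel.invoke = traced(FakeChatModel.invoke)

llm = FakeChatModel()
print(llm.invoke(["Hello"]))      # echo: Hello  (the call itself is unchanged)
print(llm._spans[0]["name"])      # FakeChatModel.invoke
```

Because the patch happens at the base-class level, every subclass instance is traced without any change to calling code.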

What Gets Instrumented

LangChain instrumentation covers 5 subsystems:

Chat Models

invoke(), ainvoke(), stream(), astream() on any BaseChatModel subclass — captures model name, vendor, tokens, cost, and messages.

Chains

invoke() and ainvoke() on Runnable chains — captures the full LCEL pipeline execution.

Tools

invoke(), ainvoke(), run(), arun() on BaseTool — captures tool name, arguments, and results.

Retrievers

get_relevant_documents() and aget_relevant_documents() on BaseRetriever — captures documents, scores, and retrieval metadata.

LangGraph

LangGraph is also instrumented automatically:
  • Graph execution: invoke() and ainvoke() on compiled graphs
  • Node execution: Individual node functions are wrapped and traced
  • State tracking: Graph state transitions are captured
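The node-wrapping idea can be pictured in plain Python: each node function is replaced with a wrapper that records input and output state before passing through. This is an illustrative toy, not Valiqor's or LangGraph's actual code:

```python
def trace_node(name, fn, log):
    """Wrap one graph node so its input and output state are recorded."""
    def wrapped(state):
        log.append({"node": name, "input": dict(state)})
        new_state = fn(state)
        log[-1]["output"] = dict(new_state)
        return new_state
    return wrapped

# A toy two-node "graph": each node transforms a state dict
def fetch(state):
    return {**state, "docs": ["doc-1", "doc-2"]}

def answer(state):
    return {**state, "answer": f"based on {len(state['docs'])} docs"}

log = []
nodes = {name: trace_node(name, fn, log)
         for name, fn in [("fetch", fetch), ("answer", answer)]}

state = {"question": "capital of France?"}
for name in ["fetch", "answer"]:        # fixed execution order for the sketch
    state = nodes[name](state)

print(state["answer"])                    # based on 2 docs
print([entry["node"] for entry in log])   # ['fetch', 'answer']
```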

RAG Pipeline Tracing

When using retrievers, Valiqor captures rich RAG-specific data automatically:
import valiqor.auto
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough

# Build RAG chain
embeddings = OpenAIEmbeddings()
vectorstore = FAISS.from_texts(
    ["Paris is the capital of France", "Berlin is the capital of Germany"],
    embeddings
)
retriever = vectorstore.as_retriever()
llm = ChatOpenAI(model="gpt-4o")

prompt = ChatPromptTemplate.from_template(
    "Answer based on context: {context}\n\nQuestion: {question}"
)

chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | prompt
    | llm
)

# RAG retrieval is automatically traced with document scores
response = chain.invoke("What is the capital of France?")
Each retrieval span captures:
Field            Description
Documents        Retrieved documents with content snippets
Scores           Relevance scores per document
Metadata         Document metadata (source, page, etc.)
Embedding model  Model used for embeddings (when available)
Latency          Retrieval time in milliseconds
Top-k            Number of documents retrieved
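For illustration, a retrieval span assembled from these fields might look like the dict below. The field names follow the table above; the exact wire format is an assumption:

```python
def build_retrieval_span(query, docs, latency_ms, embedding_model=None):
    """Assemble a retrieval span from retrieved documents.

    `docs` is a list of (content, score, metadata) tuples, as a
    retriever with relevance scores might return them.
    """
    return {
        "query": query,
        "documents": [content[:100] for content, _, _ in docs],  # content snippets
        "scores": [score for _, score, _ in docs],
        "metadata": [meta for _, _, meta in docs],
        "embedding_model": embedding_model,
        "latency_ms": latency_ms,
        "top_k": len(docs),
    }

span = build_retrieval_span(
    "What is the capital of France?",
    [("Paris is the capital of France", 0.92, {"source": "geo.txt"}),
     ("Berlin is the capital of Germany", 0.31, {"source": "geo.txt"})],
    latency_ms=12.5,
    embedding_model="text-embedding-3-small",
)
print(span["top_k"])     # 2
print(span["scores"])    # [0.92, 0.31]
```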

Async Support

All LangChain async methods are traced automatically:
import valiqor.auto
import asyncio

from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage

llm = ChatOpenAI(model="gpt-4o")

async def main():
    response = await llm.ainvoke([HumanMessage(content="Hello!")])
    print(response.content)

asyncio.run(main())

Streaming

LangChain streaming is supported — stream() and astream() calls are traced:
import valiqor.auto
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage

llm = ChatOpenAI(model="gpt-4o")

for chunk in llm.stream([HumanMessage(content="Tell me a story")]):
    print(chunk.content, end="")
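Tracing a stream means the span can only be finalized once the iterator is exhausted, since the full response and chunk count are not known until then. Conceptually, the instrumentation wraps the generator like this (an illustrative sketch, not Valiqor's internals):

```python
import time

def traced_stream(chunks, spans):
    """Yield chunks through unchanged, then record one span for the stream."""
    start = time.perf_counter()
    collected = []
    for chunk in chunks:
        collected.append(chunk)
        yield chunk                       # caller sees chunks as they arrive
    spans.append({
        "output": "".join(collected),     # full response, known only at the end
        "chunks": len(collected),
        "duration_ms": (time.perf_counter() - start) * 1000,
    })

spans = []
story = iter(["Once ", "upon ", "a time."])
for chunk in traced_stream(story, spans):
    print(chunk, end="")
print()
print(spans[0]["output"])    # Once upon a time.
print(spans[0]["chunks"])    # 3
```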

Custom Retriever Detection

If you have custom retrieval tools that Valiqor doesn’t detect automatically, you can register them:
from valiqor.trace import configure_retriever_detection, autolog

configure_retriever_detection(
    name_patterns=["my_search_tool", "doc_finder"],
    class_patterns=["MyCustomRetriever"],
    module_patterns=["my_app.search"]
)

autolog(["langchain"])
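Conceptually, detection just checks a callable's name, class, and module against the configured patterns. The sketch below assumes glob-style matching; the actual matching rules Valiqor uses are an assumption here:

```python
from fnmatch import fnmatch

def looks_like_retriever(obj, name_patterns=(), class_patterns=(),
                         module_patterns=()):
    """Return True if obj matches any configured retriever pattern."""
    name = getattr(obj, "name", "") or getattr(obj, "__name__", "")
    cls = type(obj).__name__
    module = type(obj).__module__
    return (
        any(fnmatch(name, p) for p in name_patterns)
        or any(fnmatch(cls, p) for p in class_patterns)
        or any(fnmatch(module, p) for p in module_patterns)
    )

class MyCustomRetriever:
    name = "doc_finder"

r = MyCustomRetriever()
print(looks_like_retriever(r, name_patterns=["doc_finder"]))       # True
print(looks_like_retriever(r, class_patterns=["MyCustom*"]))       # True
print(looks_like_retriever(r, module_patterns=["my_app.search"]))  # False
```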

What Gets Captured

Chat Model Spans

Field              Description
model              Model name
vendor             Provider (auto-detected from class name)
prompt_tokens      Input tokens
completion_tokens  Output tokens
total_tokens       Combined tokens
cost               Estimated cost in USD
messages           Full message history
tool_calls         Tool call arguments and results
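The cost field is derived from the token counts and per-token pricing. A sketch with hypothetical per-million-token rates (real prices vary by model and change over time):

```python
# Hypothetical USD prices per 1M tokens; real rates vary by model and date.
PRICING = {
    "gpt-4o": {"input": 2.50, "output": 10.00},
}

def estimate_cost(model, prompt_tokens, completion_tokens):
    """Estimate call cost in USD from token counts and a pricing table."""
    rates = PRICING[model]
    return (prompt_tokens * rates["input"]
            + completion_tokens * rates["output"]) / 1_000_000

cost = estimate_cost("gpt-4o", prompt_tokens=1_000, completion_tokens=500)
print(f"${cost:.4f}")   # $0.0075
```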

Tool Spans

Field        Description
tool_name    Name of the tool
arguments    Input arguments
result       Tool output
duration_ms  Execution time

With Workflows

Group LangChain operations into a named trace:
import valiqor.auto
from valiqor.trace import trace_workflow
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o")

with trace_workflow("customer-support-agent"):
    # All LangChain calls within this block are grouped
    response = llm.invoke("How can I help you?")
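Conceptually, a workflow context manager sets the active trace name so that any span created inside the block attaches to it. A contextvar-based sketch (not Valiqor's implementation):

```python
import contextvars
from contextlib import contextmanager

_current_workflow = contextvars.ContextVar("workflow", default=None)

@contextmanager
def trace_workflow_sketch(name):
    """Set the active workflow name for the duration of the block."""
    token = _current_workflow.set(name)
    try:
        yield
    finally:
        _current_workflow.reset(token)

def record_span(op):
    """Every span picks up the workflow active when it is created."""
    return {"op": op, "workflow": _current_workflow.get()}

with trace_workflow_sketch("customer-support-agent"):
    span = record_span("llm.invoke")

print(span["workflow"])                          # customer-support-agent
print(record_span("llm.invoke")["workflow"])     # None
```

A contextvar (rather than a global) keeps grouping correct under async and threaded code, where multiple workflows may be active at once.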

Disabling

from valiqor.trace import disable_autolog

disable_autolog("langchain")    # Disable LangChain only
disable_autolog()               # Disable all providers
