This package provides OpenInference Core utilities for LLM Traces, including tracing helpers, decorators, and context attribute propagation.
## Installation

```shell
npm install @arizeai/openinference-core
```
## Tracing Helpers

This package provides convenient helpers to instrument your functions, agents, and LLM operations with OpenInference spans.
### withSpan

Wraps any function (sync or async) with OpenTelemetry tracing:

```typescript
import { withSpan } from "@arizeai/openinference-core";
import { OpenInferenceSpanKind } from "@arizeai/openinference-semantic-conventions";

const processUserQuery = async (query: string) => {
  const response = await fetch(`/api/process?q=${query}`);
  return response.json();
};

const tracedProcess = withSpan(processUserQuery, {
  name: "user-query-processor",
  kind: OpenInferenceSpanKind.CHAIN,
});
```
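To make the wrapping behavior concrete, here is a minimal plain-TypeScript sketch of what a `withSpan`-style wrapper does. This is an illustration only, not the library's implementation; `withSpanSketch`, `SpanRecord`, and `recordedSpans` are invented names, and the real helper creates OpenTelemetry spans rather than in-memory records.

```typescript
// Toy span record so the wrapper's lifecycle is observable without a tracer.
type SpanRecord = { name: string; kind: string; ended: boolean };

const recordedSpans: SpanRecord[] = [];

// Sketch of a withSpan-style wrapper: start a "span", run the function,
// end the "span" when the result (sync value or promise) settles.
function withSpanSketch<A extends unknown[], R>(
  fn: (...args: A) => R,
  options: { name: string; kind: string },
): (...args: A) => R {
  return (...args: A): R => {
    const span: SpanRecord = { name: options.name, kind: options.kind, ended: false };
    recordedSpans.push(span);
    try {
      const result = fn(...args);
      if (result instanceof Promise) {
        // Async function: end the span when the promise settles.
        return result.finally(() => {
          span.ended = true;
        }) as R;
      }
      span.ended = true;
      return result;
    } catch (err) {
      span.ended = true;
      throw err;
    }
  };
}

const double = (n: number) => n * 2;
const tracedDouble = withSpanSketch(double, { name: "double", kind: "CHAIN" });
const doubled = tracedDouble(21); // returns 42 and records one ended span
```

The wrapped function's return value passes through unchanged; only the span lifecycle is layered around the call.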
### traceChain

Convenience wrapper for tracing workflow sequences:

```typescript
import { traceChain } from "@arizeai/openinference-core";

const ragPipeline = async (question: string) => {
  const documents = await retrieveDocuments(question);
  const context = documents.map((d) => d.content).join("\n");
  const answer = await generateAnswer(question, context);
  return answer;
};

const tracedRag = traceChain(ragPipeline, { name: "rag-pipeline" });
```
### traceAgent

Convenience wrapper for tracing autonomous agents:

```typescript
import { traceAgent } from "@arizeai/openinference-core";

const simpleAgent = async (question: string) => {
  const documents = await retrieveDocuments(question);
  const analysis = await analyzeContext(question, documents);
  return await executePlan(analysis);
};

const tracedAgent = traceAgent(simpleAgent, { name: "qa-agent" });
```
### traceTool

Convenience wrapper for tracing external tools:

```typescript
import { traceTool } from "@arizeai/openinference-core";

const weatherTool = async (city: string) => {
  const response = await fetch(`https://api.weather.com/v1/${city}`);
  return response.json();
};

const tracedWeatherTool = traceTool(weatherTool, { name: "weather-api" });
```
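A plausible reading of these convenience wrappers is that each one pins the OpenInference span kind and delegates to a generic `withSpan`-style helper. The sketch below illustrates that pattern; `makeKindWrapper` and `withSpanLike` are invented names, and this is an assumption about the design, not the library's actual source.

```typescript
// Options a wrapper passes along; kind is preset by the convenience wrapper.
type SpanOptions = { name: string; kind?: string };

// Stand-in for withSpan that just records the options it was given.
const recordedOptions: SpanOptions[] = [];

function withSpanLike<A extends unknown[], R>(
  fn: (...args: A) => R,
  options: SpanOptions,
): (...args: A) => R {
  recordedOptions.push(options);
  return fn; // a real helper would wrap fn in a span lifecycle
}

// traceChain, traceAgent, and traceTool could each be built like this:
function makeKindWrapper(kind: string) {
  return <A extends unknown[], R>(fn: (...args: A) => R, options: SpanOptions) =>
    withSpanLike(fn, { ...options, kind });
}

const traceToolSketch = makeKindWrapper("TOOL");
const shout = traceToolSketch((city: string) => city.toUpperCase(), {
  name: "weather-api",
});
const result = shout("oslo"); // "OSLO"; options recorded with kind "TOOL"
```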
## Decorators

### @observe

Decorator for automatically tracing class methods:

```typescript
import { observe } from "@arizeai/openinference-core";

class ChatService {
  @observe({ kind: "chain" })
  async processMessage(message: string) {
    return `Processed: ${message}`;
  }

  @observe({ name: "llm-call", kind: "llm" })
  async callLLM(prompt: string) {
    return await llmClient.generate(prompt);
  }
}
```
## Customizing Spans

The package offers utilities to track important application metadata using context attribute propagation:

| Function | Description |
| --- | --- |
| `setSession` | Specify a session ID to track and group multi-turn conversations |
| `setUser` | Specify a user ID to track different conversations with a given user |
| `setMetadata` | Add custom metadata for operational needs |
| `setTag` | Add tags to filter spans on specific keywords |
| `setPromptTemplate` | Track the prompt template used, with its version and variables |
| `setAttributes` | Add multiple custom attributes at once |

All `@arizeai/openinference` auto-instrumentation packages will pull attributes off of context and add them to spans.
### Example: setSession

```typescript
import { context } from "@opentelemetry/api";
import { setSession } from "@arizeai/openinference-core";

context.with(setSession(context.active(), { sessionId: "session-id" }), () => {
  // Calls within this block will generate spans with the attribute:
  // "session.id" = "session-id"
});
```
### Chaining Setters

Each setter function returns a new active context, so they can be chained together:

```typescript
import { context } from "@opentelemetry/api";
import { setAttributes, setSession } from "@arizeai/openinference-core";

context.with(
  setAttributes(setSession(context.active(), { sessionId: "session-id" }), {
    myAttribute: "test",
  }),
  () => {
    // Calls within this block will generate spans with the attributes:
    // "myAttribute" = "test"
    // "session.id" = "session-id"
  },
);
```
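Chaining works because OpenTelemetry contexts are immutable: a setter never mutates the context it receives, it returns a copy carrying one more attribute. Here is a plain-TypeScript sketch of that behavior; `Ctx` and `setAttr` are illustrative names, not the real Context API.

```typescript
// An immutable "context": a read-only map of attribute keys to values.
type Ctx = ReadonlyMap<string, string>;

const emptyContext: Ctx = new Map();

// Each "setter" copies the context and adds one attribute, leaving the
// original untouched, mirroring OpenTelemetry's immutable Context.
function setAttr(ctx: Ctx, key: string, value: string): Ctx {
  const next = new Map(ctx);
  next.set(key, value);
  return next;
}

// Chained exactly like setAttributes(setSession(...)) in the example above.
const withSession = setAttr(emptyContext, "session.id", "session-id");
const withBoth = setAttr(withSession, "myAttribute", "test");
// emptyContext and withSession are unchanged; withBoth carries both entries.
```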
## Manual Span Context Propagation

If you are creating spans manually and want to propagate context attributes, use the `getAttributesFromContext` utility:

```typescript
import { getAttributesFromContext } from "@arizeai/openinference-core";
import { context, trace } from "@opentelemetry/api";

const contextAttributes = getAttributesFromContext(context.active());
const tracer = trace.getTracer("example");
const span = tracer.startSpan("example span");
span.setAttributes(contextAttributes);
span.end();
```
## Attribute Helpers

Generate properly formatted attributes for common LLM operations.

### getLLMAttributes

Generate attributes for LLM operations:

```typescript
import { getLLMAttributes } from "@arizeai/openinference-core";
import { trace } from "@opentelemetry/api";

const tracer = trace.getTracer("llm-service");

tracer.startActiveSpan("llm-inference", (span) => {
  const attributes = getLLMAttributes({
    provider: "openai",
    modelName: "gpt-4",
    inputMessages: [{ role: "user", content: "What is AI?" }],
    outputMessages: [{ role: "assistant", content: "AI is..." }],
    tokenCount: { prompt: 10, completion: 50, total: 60 },
  });
  span.setAttributes(attributes);
  span.end();
});
```
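Span attributes are flat, dot-delimited key/value pairs, so nested inputs like message lists get flattened with indexed keys. The toy flattener below illustrates that shape; the authoritative key names live in `@arizeai/openinference-semantic-conventions`, so treat the literal keys here as assumptions and `flattenMessages` as an invented helper.

```typescript
// A chat message as passed to getLLMAttributes.
type Message = { role: string; content: string };

// Toy flattener showing the dotted, indexed attribute layout OpenInference
// uses for message lists (e.g. "llm.input_messages.0.message.role").
function flattenMessages(
  prefix: string,
  messages: Message[],
): Record<string, string> {
  const attrs: Record<string, string> = {};
  messages.forEach((m, i) => {
    attrs[`${prefix}.${i}.message.role`] = m.role;
    attrs[`${prefix}.${i}.message.content`] = m.content;
  });
  return attrs;
}

const attrs = flattenMessages("llm.input_messages", [
  { role: "user", content: "What is AI?" },
]);
// attrs["llm.input_messages.0.message.role"] === "user"
```

Flat keys like these are what `span.setAttributes(attributes)` ultimately receives, since OpenTelemetry attributes cannot hold nested objects.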
### getEmbeddingAttributes

Generate attributes for embedding operations:

```typescript
import { getEmbeddingAttributes } from "@arizeai/openinference-core";
import { trace } from "@opentelemetry/api";

const tracer = trace.getTracer("embedding-service");

tracer.startActiveSpan("generate-embeddings", (span) => {
  const attributes = getEmbeddingAttributes({
    modelName: "text-embedding-ada-002",
    embeddings: [
      { text: "The quick brown fox", vector: [0.1, 0.2, 0.3] },
      { text: "jumps over the lazy dog", vector: [0.4, 0.5, 0.6] },
    ],
  });
  span.setAttributes(attributes);
  span.end();
});
```
### getRetrieverAttributes

Generate attributes for document retrieval:

```typescript
import { getRetrieverAttributes } from "@arizeai/openinference-core";
import { trace } from "@opentelemetry/api";

const tracer = trace.getTracer("retriever-service");

async function retrieveDocuments(query: string) {
  return tracer.startActiveSpan("retrieve-documents", async (span) => {
    const documents = await vectorStore.similaritySearch(query, 5);
    const attributes = getRetrieverAttributes({
      documents: documents.map((doc) => ({
        content: doc.pageContent,
        id: doc.metadata.id,
        score: doc.score,
        metadata: doc.metadata,
      })),
    });
    span.setAttributes(attributes);
    span.end();
    return documents;
  });
}
```
### getToolAttributes

Generate attributes for tool definitions:

```typescript
import { getToolAttributes } from "@arizeai/openinference-core";
import { trace } from "@opentelemetry/api";

const tracer = trace.getTracer("tool-service");

tracer.startActiveSpan("define-tool", (span) => {
  const attributes = getToolAttributes({
    name: "search_web",
    description: "Search the web for information",
    parameters: {
      query: { type: "string", description: "The search query" },
      maxResults: { type: "number", description: "Maximum results to return" },
    },
  });
  span.setAttributes(attributes);
  span.end();
});
```
## Trace Config

Trace Config controls settings such as data privacy and payload sizes. You may want to keep sensitive information from being logged for security reasons, or to limit the size of base64-encoded images.

These values can also be controlled via environment variables. See the configuration spec for more information.

```typescript
import { OpenAIInstrumentation } from "@arizeai/openinference-instrumentation-openai";

const traceConfig = { hideInputs: true };
const instrumentation = new OpenAIInstrumentation({ traceConfig });
```
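For example, the same inputs-hiding behavior can plausibly be enabled process-wide through environment variables instead of code. The variable names below come from my reading of the OpenInference configuration spec; double-check them against the spec for your version.

```shell
# Hide input payloads and cap base64-encoded image size for all
# instrumentations in this process (names per the configuration spec).
export OPENINFERENCE_HIDE_INPUTS=true
export OPENINFERENCE_BASE64_IMAGE_MAX_LENGTH=32000
```

When both an environment variable and an in-code `traceConfig` are set, consult the spec for which one takes precedence.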
## Reference Documentation

OpenInference JS Docs: full API documentation and examples