@arizeai/openinference-vercel provides a set of utilities to help you ingest Vercel AI SDK spans into OpenTelemetry-compatible platforms, and works in conjunction with Vercel's AI SDK OpenTelemetry support. @arizeai/openinference-vercel works with typical Node projects as well as Next.js projects. This page describes usage within a Node project; for detailed usage instructions in Next.js, follow Vercel's guide on instrumenting Next.js.

To process your Vercel AI SDK spans, set up a typical OpenTelemetry instrumentation boilerplate file and add an OpenInferenceSimpleSpanProcessor or OpenInferenceBatchSpanProcessor to your OpenTelemetry configuration.
Note: The OpenInference span processors do not handle exporting spans on their own, so you will need to pass them an exporter as a parameter.
Here are two example instrumentation configurations:
- Manual instrumentation config for a Node v23+ application.
- A Next.js register function utilizing @vercel/otel.
Manual Instrumentation
```ts
// instrumentation.ts
// Node environment instrumentation

// Boilerplate imports
import { diag, DiagConsoleLogger, DiagLogLevel } from "@opentelemetry/api";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-proto";
import { resourceFromAttributes } from "@opentelemetry/resources";
import { NodeTracerProvider } from "@opentelemetry/sdk-trace-node";
import { ATTR_SERVICE_NAME } from "@opentelemetry/semantic-conventions";

// OpenInference Vercel imports
import { SEMRESATTRS_PROJECT_NAME } from "@arizeai/openinference-semantic-conventions";
import { OpenInferenceSimpleSpanProcessor } from "@arizeai/openinference-vercel";

diag.setLogger(new DiagConsoleLogger(), DiagLogLevel.ERROR);

// e.g. http://localhost:6006
// e.g. https://app.phoenix.arize.com/s/<your-space>
const COLLECTOR_ENDPOINT = process.env.PHOENIX_COLLECTOR_ENDPOINT;

// The project name that may appear in your collector's interface
const SERVICE_NAME = "phoenix-vercel-ai-sdk-app";

export const provider = new NodeTracerProvider({
  resource: resourceFromAttributes({
    [ATTR_SERVICE_NAME]: SERVICE_NAME,
    [SEMRESATTRS_PROJECT_NAME]: SERVICE_NAME,
  }),
  spanProcessors: [
    // In production-like environments it is recommended to use
    // OpenInferenceBatchSpanProcessor instead
    new OpenInferenceSimpleSpanProcessor({
      exporter: new OTLPTraceExporter({
        url: `${COLLECTOR_ENDPOINT}/v1/traces`,
        // (optional) if connecting to a collector with Authentication enabled
        headers: { Authorization: `Bearer ${process.env.PHOENIX_API_KEY}` },
      }),
    }),
  ],
});

provider.register();

console.log("Provider registered");

// Run this file before the rest of program execution
// e.g. node --import ./instrumentation.ts index.ts
// or at the top of your application's entrypoint
// e.g. import "instrumentation.ts";
```
When instrumenting a Next.js application, traced spans will not be "root spans" when the OpenInference span filter is configured. This is because Next.js parents spans underneath HTTP request spans, which do not meet the requirements to be OpenInference spans.
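If you want to export only OpenInference spans and drop framework-level spans, the span processors accept a `spanFilter` option; the package exports an `isOpenInferenceSpan` helper for this purpose. A minimal sketch (the collector URL is a placeholder for your own endpoint):

```typescript
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-proto";
import {
  OpenInferenceSimpleSpanProcessor,
  isOpenInferenceSpan,
} from "@arizeai/openinference-vercel";

// Only spans recognized as OpenInference spans (e.g. LLM calls from the
// AI SDK) are exported; other framework spans are dropped.
const spanProcessor = new OpenInferenceSimpleSpanProcessor({
  exporter: new OTLPTraceExporter({
    url: "http://localhost:6006/v1/traces", // placeholder collector endpoint
  }),
  spanFilter: isOpenInferenceSpan,
});
```

Pass this processor in the `spanProcessors` array of your tracer provider as shown in the configuration above.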
Now enable telemetry in your AI SDK calls by setting the isEnabled option of the experimental_telemetry parameter to true.
```ts
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

const result = await generateText({
  model: openai("gpt-4o"),
  prompt: "Write a short story about a cat.",
  experimental_telemetry: { isEnabled: true },
});
```
Ensure your installed version of @opentelemetry/api matches the version installed by ai, otherwise the AI SDK will not emit traces to the TracerProvider that you configure. If you install ai before the other packages, your package manager's dependency resolution should install the correct version.
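One way to confirm this (assuming npm; pnpm and yarn have equivalent commands) is to list every resolved copy of @opentelemetry/api in your dependency tree:

```shell
# A single deduped entry means ai and your instrumentation
# share one copy of @opentelemetry/api; multiple distinct
# versions indicate the mismatch described above.
npm ls @opentelemetry/api
```

If multiple versions appear, reinstalling with ai first, or pinning the version your package manager supports (for example via npm's overrides field), should resolve the duplication.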