Phoenix integrates with the leading AI frameworks, LLM providers, and tools to provide seamless observability, evaluation, and debugging for your AI applications. Whether you’re building with Python, TypeScript, or Java, Phoenix has you covered.
Don’t see an integration you need? We’d love to hear from you!

Integration Types

Phoenix offers several types of integrations to support your AI development workflow:

Developer Tools

Integrate Phoenix with AI coding assistants like Claude Code and Cursor for debugging and analysis.

Tracing Integrations

Automatically capture traces from your AI applications built with popular frameworks and LLM providers.

Eval Model Integrations

Use any LLM provider to power Phoenix’s evaluation capabilities for scoring and classifying your traces.

Eval Library Integrations

Integrate with external evaluation libraries like Ragas, Cleanlab, and MLflow to visualize results in Phoenix.

Span Processors

Convert traces from other instrumentation libraries to the OpenInference format.

Developer Tools

Integrate Phoenix with AI coding assistants to debug and analyze your LLM applications directly from your development environment.

Coding Agents

Install Phoenix debugging skills and CLI for Claude Code, Cursor, and other AI coding assistants.

Phoenix MCP Server

Connect AI assistants directly to your Phoenix instance via the Model Context Protocol.

Tracing Integrations

Phoenix captures detailed traces from your AI applications, giving you visibility into every step of your LLM pipeline.
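To make the idea concrete, here is a hedged sketch of the kind of data a single LLM span in a trace carries. The attribute keys (`openinference.span.kind`, `llm.model_name`, `input.value`, `output.value`) follow the OpenInference semantic conventions; the plain dict and the `make_llm_span` helper are simplifications for illustration, not the real OpenTelemetry span object:

```python
# Illustrative sketch only: attribute keys follow OpenInference semantic
# conventions, but a real span is an OpenTelemetry object, not a dict.

def make_llm_span(model: str, prompt: str, completion: str) -> dict:
    """Build a simplified record resembling an OpenInference LLM span."""
    return {
        "name": "ChatCompletion",
        "attributes": {
            "openinference.span.kind": "LLM",  # this span records an LLM call
            "llm.model_name": model,           # which model was invoked
            "input.value": prompt,             # what went into the model
            "output.value": completion,        # what the model returned
        },
    }

span = make_llm_span("gpt-4o-mini", "What is Phoenix?", "An observability tool.")
print(span["attributes"]["openinference.span.kind"])  # → LLM
```

Auto-instrumentation libraries populate these attributes for you; Phoenix then renders each span as one step in the pipeline.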

Frameworks

OpenAI Agents SDK

LangChain

LlamaIndex

LangGraph

CrewAI

AutoGen

DSPy

Haystack

Instructor

Guardrails AI

Pydantic AI

smolagents

Google ADK

Agno

BeeAI

Portkey

Graphite

NVIDIA

MCP

Agent Spec

LLM Providers

Phoenix provides native tracing support for all major LLM providers:

OpenAI

Anthropic

Amazon Bedrock

Google

Groq

MistralAI

VertexAI

LiteLLM

OpenRouter

Platforms

Integrate Phoenix with AI development platforms and infrastructure:

Dify

Flowise

LangFlow

Prompt Flow

Envoy AI Gateway

LiteLLM Proxy


Eval Model Integrations

Phoenix’s evaluation library (phoenix-evals) can use any LLM provider to power evaluations. These models score, classify, and analyze your traces.

OpenAI

Anthropic

Amazon Bedrock

Google Gemini

VertexAI

MistralAI

LiteLLM
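Whichever provider powers the judge model, LLM-as-a-judge classification typically constrains the model's raw text output to a fixed set of allowed labels, often called "rails", so that scores stay machine-readable. A minimal stdlib sketch of that snapping step (the `snap_to_rails` function name and its exact behavior are illustrative, not the phoenix-evals API):

```python
def snap_to_rails(raw_output: str, rails: list[str]) -> str:
    """Map a judge model's raw text onto one of the allowed labels.

    Illustrative only: mimics the 'rails' idea used by LLM-as-a-judge
    evaluators, where output outside the label set is flagged rather
    than silently accepted.
    """
    cleaned = raw_output.strip().lower()
    for rail in rails:
        if rail.lower() == cleaned:
            return rail
    return "NOT_PARSABLE"  # the judge answered outside the allowed labels

print(snap_to_rails("  Relevant\n", ["relevant", "irrelevant"]))  # → relevant
print(snap_to_rails("maybe?", ["relevant", "irrelevant"]))        # → NOT_PARSABLE
```

Constraining outputs this way is what lets Phoenix aggregate judge verdicts into counts and filters instead of free-form prose.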


Eval Library Integrations

Run evaluations with external libraries and visualize the results alongside your traces in Phoenix:

Ragas

Cleanlab

Pydantic Evals

MLflow
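Phoenix attaches externally computed scores to traces by span id, so results from any of the libraries above need to be reshaped into a span-indexed table before upload. A stdlib sketch of that reshaping (the `to_span_evals` helper and the sample rows are hypothetical; the score/label/explanation columns mirror the shape Phoenix's span-evaluation upload expects):

```python
def to_span_evals(results: list[dict]) -> dict[str, dict]:
    """Index external eval results by span id (hypothetical helper).

    Each input row holds a span id plus a score/label/explanation;
    the output maps span_id -> evaluation columns.
    """
    table = {}
    for row in results:
        table[row["span_id"]] = {
            "score": row["score"],
            "label": row["label"],
            "explanation": row.get("explanation", ""),
        }
    return table

# Sample rows resembling what a faithfulness evaluator might emit.
external_results = [
    {"span_id": "a1b2", "score": 0.92, "label": "faithful"},
    {"span_id": "c3d4", "score": 0.31, "label": "unfaithful",
     "explanation": "answer contradicts retrieved context"},
]
evals = to_span_evals(external_results)
print(evals["a1b2"]["label"])  # → faithful
```

Once the table is keyed by span id, each score lands on the exact span that produced the evaluated output.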


Span Processors

Normalize and convert data from other instrumentation libraries by adding span processors that unify traces to the OpenInference format:

OpenLIT

OpenLLMetry (Traceloop)
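Conceptually, a span processor in this role rewrites another convention's attribute keys into OpenInference ones before the spans reach Phoenix. A stdlib sketch of the core translation (the key pairs shown are a small illustrative subset: `gen_ai.*` keys come from the OpenTelemetry GenAI semantic conventions used by libraries like OpenLIT and OpenLLMetry, and `llm.model_name`/`input.value`/`output.value` from OpenInference; a real processor covers many more attributes and also restructures message lists):

```python
# Illustrative, non-exhaustive key-translation table.
GENAI_TO_OPENINFERENCE = {
    "gen_ai.request.model": "llm.model_name",
    "gen_ai.prompt": "input.value",
    "gen_ai.completion": "output.value",
}

def convert_attributes(attrs: dict) -> dict:
    """Rename known gen_ai.* keys to OpenInference keys; pass others through."""
    return {GENAI_TO_OPENINFERENCE.get(k, k): v for k, v in attrs.items()}

converted = convert_attributes({
    "gen_ai.request.model": "gpt-4o",
    "gen_ai.prompt": "hello",
    "custom.tag": "keep-me",  # unknown keys survive unchanged
})
print(converted["llm.model_name"])  # → gpt-4o
```

In practice this logic lives inside an OpenTelemetry span processor in the export pipeline, so applications instrumented with other libraries show up in Phoenix without re-instrumentation.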