To see the currently supported LLM providers, use the show_provider_availability function.
from phoenix.evals.llm import show_provider_availability

show_provider_availability()

# 📦 AVAILABLE PROVIDERS (sorted by client priority)
# --------------------------------------------------------------------
# Provider  | Status      | Client    | Dependencies
# --------------------------------------------------------------------
# azure     | ✓ Available | openai    | openai
# openai    | ✓ Available | openai    | openai
# openai    | ✓ Available | langchain | langchain, langchain_openai
# openai    | ✓ Available | litellm   | litellm
# anthropic | ✓ Available | langchain | langchain, langchain_anthropic
# anthropic | ✓ Available | litellm   | litellm
The provider column lists the supported providers, and the status column reads “Available” when the required dependencies are installed in the active Python environment. Note that multiple client SDKs can be used to make requests to a given provider; the desired client SDK can be specified when constructing the LLM wrapper.
from phoenix.evals.llm import LLM

LLM(provider="openai", model="gpt-5")  # uses the the first available provider SDK
LLM(provider="openai", model="gpt-5", client="litellm")  # uses LiteLLM to make requests

Client Configuration

The LLM wrappers can be configured the same way you’d configure the underlying client SDK. For example, when using the OpenAI Python Client:
from phoenix.evals.llm import LLM

LLM(provider="openai", model="gpt-5", client="openai", api_key="my-openai-api-key")
Similarly, for the Azure OpenAI Python client:
from phoenix.evals.llm import LLM

llm = LLM(
    provider="azure",
    model="gpt-5o",
    api_key="your-api-key",
    api_version="api-version",
    base_url="base-url",
)

Unified Interface

The LLM wrapper provides a unified interface to common LLM operations: generating text and structured outputs. For more information, refer to the API Documentation.
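As an illustration, the sketch below calls the wrapper’s generate_text and generate_object methods. The prompt wording and the JSON-schema dict are assumptions made for this example; consult the API Documentation for the authoritative signatures.
from phoenix.evals.llm import LLM

llm = LLM(provider="openai", model="gpt-5")

# Plain text generation
answer = llm.generate_text(prompt="Summarize this sentence in five words.")

# Structured output: the schema here is a JSON-schema-style dict (an assumption
# for illustration); the model's response is parsed into a matching object
label = llm.generate_object(
    prompt="Classify the sentiment of: 'I love this product!'",
    schema={
        "type": "object",
        "properties": {
            "sentiment": {"type": "string", "enum": ["positive", "negative"]},
        },
        "required": ["sentiment"],
    },
)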