@arizeai/phoenix-evals provides evaluator building blocks for TypeScript workflows: LLM-based evaluators, code-based evaluators, prompt-templating helpers, and integration points for Phoenix experiments.
## Install
@arizeai/phoenix-evals depends on model adapters from the AI SDK ecosystem. Install the package plus at least one provider adapter for the models you plan to use.
## Common Setups
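Assuming npm and the OpenAI adapter as your provider (any AI SDK provider package works), a typical install looks like:

```shell
# Install the evals package plus one AI SDK provider adapter
npm install @arizeai/phoenix-evals @ai-sdk/openai
```

Swap `@ai-sdk/openai` for the adapter that matches your model provider.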
## Runtime Expectations

- Node.js 18+
- an AI SDK provider package such as `@ai-sdk/openai`
- credentials required by your chosen provider
## Minimal Example
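A code-based evaluator can be as small as a function that maps an example to a score and label. The types below are local stand-ins for illustration; the package's shared types live in `src/types/` and its `createEvaluator` helper in `src/helpers/`, so treat this as a sketch of the shape, not the library's exact API.

```typescript
// Hypothetical shapes for illustration; the real types ship with the package.
interface EvaluationResult {
  score: number;
  label: string;
  explanation?: string;
}

type Evaluator<T> = (input: T) => EvaluationResult;

// A code-based evaluator: exact match between model output and expected answer.
const exactMatch: Evaluator<{ output: string; expected: string }> = ({
  output,
  expected,
}) => {
  const matched = output.trim() === expected.trim();
  return { score: matched ? 1 : 0, label: matched ? "match" : "no_match" };
};

const result = exactMatch({ output: "Paris", expected: "Paris" });
console.log(result.label); // "match"
```

An LLM-backed evaluator has the same output shape but delegates the judgment to a model call through your AI SDK provider.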
## Docs And Source In node_modules
After install, a coding agent can inspect the installed package directly:
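For example (file names assumed; adjust to whatever the published package actually ships):

```shell
# List the installed package contents and read its bundled docs
ls node_modules/@arizeai/phoenix-evals
cat node_modules/@arizeai/phoenix-evals/README.md
```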
## Where To Start

- **Create evaluator** for custom and code-based evaluator flows
- **LLM evaluators** and **Classification** for model-backed evaluation
- **Templates** and **Phoenix integration** for prompt helpers and experiment wiring
## Source Layout

- `src/index.ts` re-exports the package surface you usually import from `@arizeai/phoenix-evals`
- `src/llm/` contains classification helpers and built-in LLM evaluator factories
- `src/helpers/` contains `createEvaluator` and evaluation-result helpers
- `src/template/` contains `formatTemplate` and `getTemplateVariables`
- `src/types/` contains shared evaluator and prompt types
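To make the template helpers concrete, here is a self-contained sketch of the idea behind `formatTemplate` and `getTemplateVariables`, assuming mustache-style `{{variable}}` placeholders. This is a local illustration mirroring the names in `src/template/`, not the library's implementation.

```typescript
// Extract placeholder names like {{question}} from a template string.
function getTemplateVariables(template: string): string[] {
  const matches = template.match(/\{\{\s*([\w.]+)\s*\}\}/g) ?? [];
  return matches.map((m) => m.replace(/[{}\s]/g, ""));
}

// Substitute variables into the template; missing values become "".
function formatTemplate(
  template: string,
  variables: Record<string, string>
): string {
  return template.replace(
    /\{\{\s*([\w.]+)\s*\}\}/g,
    (_match: string, name: string) => variables[name] ?? ""
  );
}

const template = "Is the answer {{answer}} relevant to {{question}}?";
console.log(getTemplateVariables(template)); // ["answer", "question"]
console.log(formatTemplate(template, { answer: "Paris", question: "France's capital" }));
// "Is the answer Paris relevant to France's capital?"
```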
## Source Map

- `src/index.ts`
- `src/llm/`
- `src/helpers/`
- `src/template/`
- `src/core/`
- `src/types/`

