How-to: Experiments
How to run experiments
How to upload a Dataset
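A minimal sketch of the upload step, assuming a pandas DataFrame and a Phoenix client reachable through the usual environment variables; the dataset name and column names are illustrative:

```python
import pandas as pd
import phoenix as px

# Toy examples; the column names and dataset name are illustrative.
df = pd.DataFrame(
    {
        "question": ["What is Phoenix?", "What is an experiment?"],
        "answer": [
            "An open-source platform for LLM tracing and evaluation.",
            "A repeatable run of a task over every example in a dataset.",
        ],
    }
)

client = px.Client()  # assumes PHOENIX_COLLECTOR_ENDPOINT (and API key) are set
dataset = client.upload_dataset(
    dataset_name="experiments-demo",
    dataframe=df,
    input_keys=["question"],  # columns passed to the task as input
    output_keys=["answer"],   # columns treated as expected output
)
```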
How to run a custom task
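The task is any function that runs once per dataset example; Phoenix binds task parameters by name (e.g. `input`, `expected`, `metadata`), so a task that declares `input` receives the example's input dict. A sketch using the OpenAI SDK; the model choice is an assumption:

```python
from openai import OpenAI

openai_client = OpenAI()  # assumes OPENAI_API_KEY is set

def task(input) -> str:
    # `input` is the example's input dict, e.g. {"question": "..."}.
    response = openai_client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": input["question"]}],
    )
    return response.choices[0].message.content
```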
How to configure evaluators
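Evaluators can be plain functions: Phoenix inspects the signature and passes the matching fields, with `output` bound to the task's return value and `expected` to the example's expected output. A hypothetical containment check:

```python
def matches_expected(output, expected) -> bool:
    # True when the expected answer appears in the task output.
    return expected["answer"].lower() in str(output).lower()
```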
How to run the experiment
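With a dataset, a task, and evaluators in hand, `run_experiment` ties them together; the experiment name below is illustrative:

```python
from phoenix.experiments import run_experiment

experiment = run_experiment(
    dataset,
    task,
    evaluators=[matches_expected],
    experiment_name="baseline",
)
```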
How to use repetitions in experiments
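Repetitions run each example several times so you can gauge output variance. Assuming a Phoenix version that supports the `repetitions` parameter, the call looks like this sketch:

```python
# Each example is executed three times; evaluator scores are recorded per run.
experiment = run_experiment(
    dataset,
    task,
    evaluators=[matches_expected],
    repetitions=3,
)
```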
How to run an experiment over a dataset split
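Newer Phoenix releases support named dataset splits natively (see the Splits page); a portable fallback, shown here as an assumption-laden sketch, is to materialize the split as its own dataset and run against that:

```python
# Hypothetical 20% holdout split materialized as a separate dataset.
holdout_df = df.sample(frac=0.2, random_state=42)
holdout = client.upload_dataset(
    dataset_name="experiments-demo-holdout",
    dataframe=holdout_df,
    input_keys=["question"],
    output_keys=["answer"],
)
experiment = run_experiment(holdout, task, evaluators=[matches_expected])
```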
How to use evaluators
LLM Evaluators
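LLM evaluators judge outputs with a model rather than code. Phoenix ships pre-built ones; the class and module paths below match recent releases, so treat them as version-dependent:

```python
from phoenix.evals.models import OpenAIModel
from phoenix.experiments import run_experiment
from phoenix.experiments.evaluators import HelpfulnessEvaluator

helpfulness = HelpfulnessEvaluator(model=OpenAIModel())
experiment = run_experiment(dataset, task, evaluators=[helpfulness])
```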
Code Evaluators
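Code evaluators are deterministic functions with no LLM in the loop; they can return a boolean pass/fail or a float score. Two hypothetical examples:

```python
import json

def is_valid_json(output) -> bool:
    # Pass/fail: does the task output parse as JSON?
    try:
        json.loads(str(output))
        return True
    except json.JSONDecodeError:
        return False

def brevity(output) -> float:
    # Graded score in [0, 1]: shorter outputs score higher.
    return max(0.0, 1.0 - len(str(output).split()) / 200)
```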
Custom Evaluators
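For full control over how an evaluator is named and categorized in the UI, recent releases expose a `create_evaluator` decorator; the evaluator below is a hypothetical example:

```python
from phoenix.experiments.evaluators import create_evaluator

@create_evaluator(name="contains-citation", kind="CODE")
def contains_citation(output) -> bool:
    # The decorator's `name` is what shows up in the Phoenix UI.
    return "[" in str(output) and "]" in str(output)
```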