How-to: Experiments
How to run experiments
  How to upload a Dataset
  How to run a custom task
  How to configure evaluators
  How to run the experiment
  How to use repetitions in experiments
  How to run an experiment over a dataset split
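At a high level, the steps above amount to: upload a dataset of examples, define a task that produces an output for each example, run the task over every example, and record the outputs for evaluation. A minimal, framework-agnostic sketch of that loop (the dataset shape and the `run_experiment` / `task` names here are illustrative assumptions, not the Phoenix API):

```python
# Illustrative sketch only: shows the shape of an experiment run,
# not Phoenix's actual API.
from typing import Callable

def run_experiment(dataset: list[dict], task: Callable[[dict], str]) -> list[dict]:
    """Apply `task` to each example's input and pair each output
    with the example's expected answer for later evaluation."""
    runs = []
    for example in dataset:
        output = task(example["input"])
        runs.append({
            "input": example["input"],
            "output": output,
            "expected": example.get("expected"),
        })
    return runs

# Hypothetical dataset of question/answer examples.
dataset = [
    {"input": {"question": "2 + 2?"}, "expected": "4"},
    {"input": {"question": "Capital of France?"}, "expected": "Paris"},
]

def task(inp: dict) -> str:
    # A real task would call an LLM or application code;
    # here we return canned answers so the sketch is self-contained.
    answers = {"2 + 2?": "4", "Capital of France?": "Paris"}
    return answers[inp["question"]]

runs = run_experiment(dataset, task)
```

Each run pairs an input with its output and expected answer, which is exactly the record an evaluator scores in the next section.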
How to use evaluators
  LLM Evaluators
  Code Evaluators
  Custom Evaluators
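The difference between the evaluator kinds above: an LLM evaluator prompts a judge model to grade an output, while a code evaluator is a deterministic function that scores it directly. A minimal sketch of a custom code evaluator (the signature and the 0.0–1.0 score convention are assumptions for illustration, not the Phoenix evaluator interface):

```python
# Illustrative custom code evaluator: deterministic exact-match scoring.
def exact_match(output: str, expected: str) -> float:
    """Return 1.0 if the output matches the expected answer
    (ignoring case and surrounding whitespace), else 0.0."""
    return 1.0 if output.strip().lower() == expected.strip().lower() else 0.0

# Scoring a single experiment run:
score = exact_match("Paris", "paris")
```

A deterministic evaluator like this is cheap and reproducible; an LLM evaluator would replace the comparison with a call to a judge model, trading cost for the ability to grade open-ended outputs.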