- Datasets let you curate and organize examples to test your application systematically.
- Experiments let you compare different model versions or configurations on the same dataset to see which performs best.
Datasets
1. Set environment variables to connect to your Phoenix instance:
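A minimal Python sketch, assuming a Phoenix Cloud instance; the endpoint and API key values are placeholders, and a self-hosted deployment may use a different endpoint:

```python
import os

# Point the Phoenix SDK at your instance (placeholder values).
os.environ["PHOENIX_COLLECTOR_ENDPOINT"] = "https://app.phoenix.arize.com"
os.environ["PHOENIX_API_KEY"] = "your-phoenix-api-key"
```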
2. You can create a dataset either in the UI or via code. For this quickstart, you can download this sample.csv as a starter to walk you through how to use datasets.
In the UI, you can either create an empty dataset and then populate it, or upload from a CSV. Once you've downloaded the CSV file above, you can follow the video below to upload your first dataset. That's it! You've now successfully created your first dataset.
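If you'd rather create the dataset via code, here is a minimal Python sketch. The dataset name and the question/answer column names are assumptions; adjust them to match the columns in the CSV.

```python
import pandas as pd
import phoenix as px

client = px.Client()

# Load the sample CSV and upload it as a Phoenix dataset.
df = pd.read_csv("sample.csv")
dataset = client.upload_dataset(
    dataset_name="my-first-dataset",   # assumed name
    dataframe=df,
    input_keys=["question"],           # assumed column name
    output_keys=["answer"],            # assumed column name
)
```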
Experiments
Once you have a dataset, you're able to run experiments. Experiments are made of tasks and, optionally, evals. While running evals is optional, they provide valuable metrics that help you compare your experiments quickly, such as comparing models, catching regressions, and understanding which version performs best.
1. The first step is to pull down your dataset into your code. If you made your dataset in the UI, you can follow the code snippet below. To get a specific version, pass `version_id="your-version-id"`; you can find version IDs in the Versions tab of your dataset. If you created your dataset programmatically, you should already have it available as an instance assigned to your `dataset` variable.
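A sketch of pulling the dataset in Python, assuming the dataset name used earlier; check the client docs for your Phoenix version if the method differs:

```python
import phoenix as px

client = px.Client()

# Fetch the dataset created in the UI (latest version by default).
dataset = client.get_dataset(name="my-first-dataset")

# Or pin a specific version:
# dataset = client.get_dataset(name="my-first-dataset", version_id="your-version-id")
```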
2. Create a Task to evaluate. Your task can be any function with any signature and does not have to use an LLM. However, for our experiment we want to run our list of input questions through a new prompt.
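For example, a task that sends each input question to an LLM with a new prompt. This is a sketch: the model name, the prompt text, and the "question" input key are assumptions.

```python
from openai import OpenAI

openai_client = OpenAI()

NEW_PROMPT = "You are a helpful assistant. Answer the question concisely."

def task(input):
    # `input` is the example's input dict; "question" is an assumed key name.
    response = openai_client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": NEW_PROMPT},
            {"role": "user", "content": input["question"]},
        ],
    )
    return response.choices[0].message.content
```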
3. The next step is to create your Evaluator. If you have already defined your Q&A Correctness eval from the last quickstart, you won't need to redefine it. If not, you can follow along with the code snippets below.
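A sketch of a Q&A Correctness evaluator written as a plain function (not necessarily identical to Phoenix's built-in Q&A Correctness template). Phoenix binds evaluator arguments by parameter name; the "question" and "answer" keys and the judge model are assumptions.

```python
from openai import OpenAI

openai_client = OpenAI()

def qa_correctness(input, output, expected) -> float:
    # LLM-as-judge sketch: grade the task output against the expected answer.
    prompt = (
        "You are grading an answer to a question.\n"
        f"Question: {input['question']}\n"          # assumed input key
        f"Expected answer: {expected['answer']}\n"  # assumed output key
        f"Submitted answer: {output}\n"
        "Reply with exactly one word: correct or incorrect."
    )
    response = openai_client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    label = response.choices[0].message.content.strip().lower()
    return 1.0 if label.startswith("correct") else 0.0
```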
You can run multiple evaluators at once. Let's also define a custom Completeness eval.
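A sketch of a Completeness evaluator in the same style, checking whether the answer addresses every part of the question; the judge model and prompt wording are again assumptions.

```python
from openai import OpenAI

openai_client = OpenAI()

def completeness(input, output) -> float:
    # LLM-as-judge sketch: does the answer cover every part of the question?
    prompt = (
        "Does the following answer fully address every part of the question?\n"
        f"Question: {input['question']}\n"
        f"Answer: {output}\n"
        "Reply with exactly one word: complete or incomplete."
    )
    response = openai_client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    label = response.choices[0].message.content.strip().lower()
    return 1.0 if label.startswith("complete") else 0.0
```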
4. Now that we have defined our Task and our Evaluators, we're ready to run our experiment. After running multiple experiments, you can compare the experiment output and evals side by side!
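A sketch of running the experiment with the dataset, task, and evaluators defined above, assuming `phoenix.experiments.run_experiment` is available in your Phoenix version; the experiment name is a placeholder.

```python
from phoenix.experiments import run_experiment

experiment = run_experiment(
    dataset,
    task,
    evaluators=[qa_correctness, completeness],
    experiment_name="new-prompt-experiment",  # placeholder name
)
```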
Optional: if you want to run even more evaluators after this experiment, you can do so following the code below.
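A sketch using `evaluate_experiment` to attach an additional evaluator to an experiment that has already run; the conciseness check is a hypothetical code-based evaluator, and the function assumes `evaluate_experiment` is exposed by `phoenix.experiments` in your version.

```python
from phoenix.experiments import evaluate_experiment

def conciseness(output) -> float:
    # Hypothetical code-based evaluator: reward answers of 100 words or fewer.
    return 1.0 if len(str(output).split()) <= 100 else 0.0

experiment = evaluate_experiment(experiment, evaluators=[conciseness])
```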

