Strands Agents is an open-source AI agent SDK that uses model-driven orchestration to build production-ready, multi-agent systems in a few lines of code. It supports many LLM providers (Amazon Bedrock, OpenAI, Anthropic, and more) and offers multi-agent patterns, custom tool creation, and native AWS integrations.
Install
pip install arize-phoenix-otel openinference-instrumentation-strands-agents strands-agents openai
Setup
Set your model provider API key as an environment variable. This example uses OpenAI:
export OPENAI_API_KEY=[your_key_here]
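If you are working in a notebook rather than a shell, the same variable can be set from Python before creating the agent (the placeholder value here is illustrative):

```python
import os

# Sets the key for the current process only; replace the placeholder
# with your real key, or export it in your shell as shown above.
os.environ.setdefault("OPENAI_API_KEY", "your_key_here")
```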
Strands Agents provides its own OpenTelemetry-based telemetry. The openinference-instrumentation-strands-agents package adds a span processor that transforms Strands’ native spans into OpenInference format for Phoenix.
from phoenix.otel import HTTPSpanExporter, SimpleSpanProcessor, register
from openinference.instrumentation.strands_agents import StrandsAgentsToOpenInferenceProcessor
# Strands reads the global tracer provider, so start with register() to make
# Phoenix's provider the process-wide default.
tracer_provider = register(
project_name="strands-agents",
)
# This processor rewrites Strands' native spans into OpenInference spans, so it
# must run before the Phoenix exporter that sends spans to your collector.
tracer_provider.add_span_processor(StrandsAgentsToOpenInferenceProcessor())
tracer_provider.add_span_processor(
SimpleSpanProcessor(HTTPSpanExporter()),
)
Processor ordering matters: span processors run in the order they are added to the provider. register() makes Phoenix's tracer provider the global default that Strands reads, so add StrandsAgentsToOpenInferenceProcessor() first and the exporter-backed SimpleSpanProcessor after it. That way the spans reaching the exporter have already been rewritten, and Phoenix receives OpenInference spans rather than Strands' native ones.
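The ordering guarantee can be sketched with a toy stand-in for the provider's processor chain. The Pipeline class below is illustrative only, not the OpenTelemetry API; it just shows that processors fire in the order they were registered:

```python
class Pipeline:
    """Minimal stand-in for a tracer provider's span-processor chain."""

    def __init__(self):
        self._processors = []

    def add_span_processor(self, processor):
        self._processors.append(processor)

    def on_end(self, span):
        # Processors run in the order they were added.
        for processor in self._processors:
            span = processor(span)
        return span


pipeline = Pipeline()
# Transform first, so the exporter only ever sees OpenInference spans.
pipeline.add_span_processor(lambda span: {**span, "format": "openinference"})
pipeline.add_span_processor(lambda span: {**span, "exported": True})

print(pipeline.on_end({"name": "invoke_agent", "format": "strands"}))
# → {'name': 'invoke_agent', 'format': 'openinference', 'exported': True}
```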
Run Strands Agents
from strands import Agent
from strands.models.openai import OpenAIModel
model = OpenAIModel(model_id="gpt-4o-mini")
agent = Agent(
model=model,
system_prompt="You are a helpful assistant.",
)
result = agent("Explain the theory of relativity in simple terms.")
Observe
Now that you have tracing set up, all invocations of Strands agents — including LLM calls, tool executions, and event loop cycles — will be streamed to your running Phoenix instance for observability and evaluation.
Resources