The OpenInference Specification defines a set of environment variables you can configure to suit your observability needs. In addition, the OpenInference auto-instrumentors accept a trace config, which allows you to set these values in code without having to set environment variables, if you prefer. The possible settings are:
| Environment Variable Name | Effect | Type | Default |
| --- | --- | --- | --- |
| OPENINFERENCE_HIDE_INPUTS | Hides input value, all input messages & embedding input text | bool | False |
| OPENINFERENCE_HIDE_OUTPUTS | Hides output value & all output messages | bool | False |
| OPENINFERENCE_HIDE_INPUT_MESSAGES | Hides all input messages & embedding input text | bool | False |
| OPENINFERENCE_HIDE_OUTPUT_MESSAGES | Hides all output messages | bool | False |
| OPENINFERENCE_HIDE_INPUT_IMAGES | Hides images from input messages | bool | False |
| OPENINFERENCE_HIDE_INPUT_TEXT | Hides text from input messages & input embeddings | bool | False |
| OPENINFERENCE_HIDE_OUTPUT_TEXT | Hides text from output messages | bool | False |
| OPENINFERENCE_HIDE_EMBEDDING_VECTORS | Hides returned embedding vectors | bool | False |
| OPENINFERENCE_HIDE_LLM_INVOCATION_PARAMETERS | Hides LLM invocation parameters | bool | False |
| OPENINFERENCE_HIDE_LLM_PROMPTS | Hides LLM prompts span attributes | bool | False |
| OPENINFERENCE_BASE64_IMAGE_MAX_LENGTH | Limits characters of a base64 encoding of an image | int | 32,000 |
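For example, to redact all inputs and cap base64-encoded images via environment variables, you could export the variables before starting your application (the values shown are illustrative):

```shell
# Redact input values, input messages, and embedding input text
export OPENINFERENCE_HIDE_INPUTS=true
# Truncate base64-encoded images to at most 32,000 characters
export OPENINFERENCE_BASE64_IMAGE_MAX_LENGTH=32000
```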
To set up this configuration you can either:
  • Set environment variables as specified above
  • Define the configuration in code as shown below
  • Do nothing and fall back to the default values
  • Use a combination of the three; the order of precedence is:
    • Values set in the TraceConfig in code
    • Environment variables
    • Default values
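The precedence rules above can be sketched with a small helper (resolve_setting is purely illustrative and not part of the OpenInference library):

```python
import os


def resolve_setting(code_value, env_var, default):
    """Illustrative only: a value set in code wins, then the
    environment variable, then the built-in default."""
    if code_value is not None:
        return code_value
    env = os.environ.get(env_var)
    if env is not None:
        return env.strip().lower() == "true"
    return default


os.environ["OPENINFERENCE_HIDE_INPUTS"] = "true"

# The environment variable applies when no value is set in code.
print(resolve_setting(None, "OPENINFERENCE_HIDE_INPUTS", False))  # True
# A value passed in code overrides the environment.
print(resolve_setting(False, "OPENINFERENCE_HIDE_INPUTS", True))  # False
```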
Below is an example of how to set these values in code using our OpenAI Python instrumentor; the same config is accepted by our JavaScript instrumentors and is respected by all of our auto-instrumentors.
from openinference.instrumentation import TraceConfig

# Every field is optional; any field left unset falls back to the
# corresponding environment variable, then to the default value.
config = TraceConfig(
    hide_inputs=...,
    hide_outputs=...,
    hide_input_messages=...,
    hide_output_messages=...,
    hide_input_images=...,
    hide_input_text=...,
    hide_output_text=...,
    hide_embedding_vectors=...,
    hide_llm_invocation_parameters=...,
    hide_llm_prompts=...,
    base64_image_max_length=...,
)

from openinference.instrumentation.openai import OpenAIInstrumentor

# tracer_provider is your OpenTelemetry TracerProvider, configured elsewhere.
OpenAIInstrumentor().instrument(
    tracer_provider=tracer_provider,
    config=config,
)
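For the JavaScript instrumentors, a comparable setup might look like the sketch below. The package name, the OpenAIInstrumentation export, and the camelCased traceConfig option names are assumptions mirroring the Python API; verify them against the OpenInference JS package documentation before use.

```typescript
// Assumed import path and exports — confirm against the published package.
import { OpenAIInstrumentation } from "@arizeai/openinference-instrumentation-openai";

const instrumentation = new OpenAIInstrumentation({
  traceConfig: {
    hideInputs: true, // counterpart of hide_inputs / OPENINFERENCE_HIDE_INPUTS
    base64ImageMaxLength: 32_000, // counterpart of base64_image_max_length
  },
});
```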