Automatically create spans for LLM calls without code changes.
- LLM SDKs: OpenAI
- Frameworks: LangChain
- Install: `npm install @arizeai/openinference-instrumentation-{name}`
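For example, to trace the OpenAI SDK listed above, install its instrumentation package together with the Phoenix OTel registration helper (both package names appear in the snippets below):

```shell
# OpenAI instrumentation + the register() helper used in the examples
npm install @arizeai/openinference-instrumentation-openai @arizeai/phoenix-otel
```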
CommonJS (automatic):

```typescript
const { register } = require("@arizeai/phoenix-otel");

// Register before requiring the instrumented SDK so the require hook can patch it.
register({ projectName: "my-app" });

const OpenAI = require("openai");
const client = new OpenAI(); // calls through this client are traced automatically
```

ESM (manual required):
```typescript
import { register, registerInstrumentations } from "@arizeai/phoenix-otel";
import { OpenAIInstrumentation } from "@arizeai/openinference-instrumentation-openai";
import OpenAI from "openai";

register({ projectName: "my-app" });

const instrumentation = new OpenAIInstrumentation();
instrumentation.manuallyInstrument(OpenAI); // patch the already-imported module
registerInstrumentations({ instrumentations: [instrumentation] });
```

Why: ESM imports are hoisted and evaluated before `register()` runs, so the SDK is already loaded unpatched and must be instrumented manually.
What auto-instrumentation does NOT capture:

```typescript
async function myWorkflow(query: string): Promise<string> {
  const preprocessed = await preprocess(query); // Not traced
  const response = await client.chat.completions.create(...); // Traced (auto)
  const postprocessed = await postprocess(response); // Not traced
  return postprocessed;
}
```

Solution: Add manual instrumentation for custom logic:
```typescript
import { traceChain } from "@arizeai/openinference-core";

const myWorkflow = traceChain(
  async (query: string): Promise<string> => {
    const preprocessed = await preprocess(query);
    const response = await client.chat.completions.create(...);
    const postprocessed = await postprocess(response);
    return postprocessed;
  },
  { name: "my-workflow" }
);
```

Full example:

```typescript
import { register } from "@arizeai/phoenix-otel";
import { traceChain } from "@arizeai/openinference-core";
import OpenAI from "openai";

register({ projectName: "my-app" });
const client = new OpenAI();

const workflow = traceChain(
  async (query: string) => {
    const preprocessed = await preprocess(query);
    const response = await client.chat.completions.create(...); // Auto-instrumented
    return postprocess(response);
  },
  { name: "my-workflow" }
);
```
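Conceptually, `traceChain` is a higher-order function: it returns a wrapper that opens a span, runs your async function, records any error, and always ends the span. A minimal sketch of that pattern, with a plain record standing in for a real OpenTelemetry span (`traceChainSketch`, `FakeSpan`, and `exported` are illustrative names, not part of the library):

```typescript
// Illustrative stand-in for the wrap-in-a-span pattern behind traceChain.
// A FakeSpan is a plain record, not a real OpenTelemetry span.
type FakeSpan = { name: string; startTime: number; endTime?: number; error?: unknown };

const exported: FakeSpan[] = []; // stand-in for a span exporter

function traceChainSketch<A extends unknown[], R>(
  fn: (...args: A) => Promise<R>,
  options: { name: string }
): (...args: A) => Promise<R> {
  return async (...args: A): Promise<R> => {
    const span: FakeSpan = { name: options.name, startTime: Date.now() };
    exported.push(span);
    try {
      return await fn(...args); // run the wrapped chain body inside the "span"
    } catch (err) {
      span.error = err; // errors are recorded on the span, then rethrown
      throw err;
    } finally {
      span.endTime = Date.now(); // the span always ends, success or failure
    }
  };
}

// Usage: same call shape as traceChain(fn, { name }).
const shout = traceChainSketch(async (s: string) => s.toUpperCase(), { name: "shout" });
```

The `finally` block is the important part: the span is closed even when the wrapped function throws, which is what keeps traces complete when a workflow step fails.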