
fix: PromptOptimizer inherits LLM provider/model from executor's judge when not explicitly set#1866

Open
amethystani wants to merge 1 commit into evidentlyai:main from amethystani:fix/prompt-optimizer-inherits-executor-provider

Conversation

@amethystani

Fixes #1856.

When PromptOptimizer runs its internal optimization step (e.g. FeedbackStrategy calling SimplePromptOptimizer to generate an improved prompt), it calls context.llm_wrapper. That wrapper was always built from OptimizerConfig.provider and OptimizerConfig.model, whose hardcoded defaults were "openai" / "gpt-4o-mini". As a result, any user whose judge used a non-OpenAI provider (e.g. Vertex AI) hit:

OpenAIError: The api_key client option must be set either by passing api_key to the client
or by setting the OPENAI_API_KEY environment variable

even though they never configured OpenAI anywhere.

Root cause: OptimizerConfig.provider and model defaulted to hardcoded OpenAI strings, so the optimizer always used OpenAI for its own completions regardless of what the user's executor was doing.

Fix:

  • OptimizerConfig.provider and model now default to None instead of "openai" / "gpt-4o-mini".
  • OptimizerContext gains resolve_provider_model() which checks the executor's judge for provider/model when the config fields are None, then falls back to "openai" / "gpt-4o-mini" for backward compatibility.
  • llm_wrapper uses resolve_provider_model() rather than reading config directly.
  • BlankLLMJudge._build_judge() raises a clear OptimizationConfigurationError when provider/model are not set (it cannot inherit from an executor since it is the executor creating the sub-judge; users must pass provider= and model= to PromptOptimizer() explicitly).
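The resolution order described above can be sketched as follows. This is a minimal standalone sketch, not the actual evidently code: the `OptimizerConfig` and `Judge` stand-ins here only model the provider/model fields, and the exact attribute names on the real judge are assumptions.

```python
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class OptimizerConfig:
    # After this PR: provider/model default to None instead of
    # the hardcoded "openai" / "gpt-4o-mini" strings.
    provider: Optional[str] = None
    model: Optional[str] = None


@dataclass
class Judge:
    # Stand-in for the executor's judge (attribute names assumed).
    provider: Optional[str] = None
    model: Optional[str] = None


def resolve_provider_model(
    config: OptimizerConfig, judge: Optional[Judge]
) -> Tuple[str, str]:
    # 1. An explicitly configured provider/model always wins.
    if config.provider is not None and config.model is not None:
        return config.provider, config.model
    # 2. Otherwise inherit from the executor's judge, if one is available.
    if judge is not None and judge.provider and judge.model:
        return judge.provider, judge.model
    # 3. Fall back to the old defaults for backward compatibility.
    return "openai", "gpt-4o-mini"
```

The key design point is that the fallback to "openai" / "gpt-4o-mini" now happens only at resolution time, so existing users who never set anything keep the old behavior, while users with a non-OpenAI judge no longer trigger it.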

Before (broken):

```python
optimizer = PromptOptimizer("run", strategy="feedback")
await optimizer.arun(executor=judge_with_vertex_ai, ...)
# → OpenAIError even though no OpenAI was configured
```

After (fixed):

```python
optimizer = PromptOptimizer("run", strategy="feedback")
await optimizer.arun(executor=judge_with_vertex_ai, ...)
# → uses vertex_ai/gemini-2.5-flash for both evaluation and optimization
```

Users who want a different provider for the optimizer than their judge can still pass provider= and model= explicitly to PromptOptimizer().
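Concretely, an explicit override might look like this. This is an illustrative fragment, not a runnable test: `judge_with_vertex_ai` is a placeholder for the user's configured judge, and the keyword names follow the PR description.

```python
# Pin the optimizer's own completions to OpenAI explicitly,
# independent of the judge's provider (Vertex AI here).
optimizer = PromptOptimizer(
    "run",
    strategy="feedback",
    provider="openai",
    model="gpt-4o-mini",
)
await optimizer.arun(executor=judge_with_vertex_ai, ...)
```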

Three new unit tests cover the three paths (explicit config, inherited from judge, default fallback).

…en not set

When PromptOptimizer.strategy runs its own LLM completion (e.g. the
FeedbackStrategy generating an improved prompt), it calls
context.llm_wrapper, which was always built from OptimizerConfig.provider
and OptimizerConfig.model.  Those defaulted to openai/gpt-4o-mini, so
any user whose judge used a different provider (e.g. Vertex AI) got an
OpenAIError requiring OPENAI_API_KEY even though they never asked for
OpenAI.

Fix:
- OptimizerConfig.provider/model now default to None instead of hardcoded
  openai/gpt-4o-mini strings.
- OptimizerContext gains resolve_provider_model() which checks the
  executor's judge for provider/model when the config fields are None,
  then falls back to openai/gpt-4o-mini for backward compatibility.
- llm_wrapper uses resolve_provider_model() rather than reading config
  directly.
- BlankLLMJudge._build_judge() raises a descriptive error when
  provider/model are not set (it cannot inherit from an executor since it
  IS the executor creating the sub-judge).

Fixes evidentlyai#1856
@amethystani
Author

@elenasamuylova — tagging you as the maintainer of PromptOptimizer. The fix makes OptimizerConfig.provider/model optional so the optimization step automatically inherits the provider from the executor's judge (e.g. Vertex AI) when not explicitly set, rather than always falling back to OpenAI. Three unit tests cover all three paths. Let me know if the API direction looks right.


Successfully merging this pull request may close these issues.

PromptOptimizer throws OpenAIError when using Vertex AI judge
