fix: PromptOptimizer inherits LLM provider/model from executor's judge when not explicitly set #1866
Open
amethystani wants to merge 1 commit into evidentlyai:main
Conversation
…en not set

When PromptOptimizer.strategy runs its own LLM completion (e.g. the FeedbackStrategy generating an improved prompt), it calls context.llm_wrapper, which was always built from OptimizerConfig.provider and OptimizerConfig.model. Those defaulted to openai/gpt-4o-mini, so any user whose judge used a different provider (e.g. Vertex AI) got an OpenAIError requiring OPENAI_API_KEY even though they never asked for OpenAI.

Fix:

- OptimizerConfig.provider/model now default to None instead of the hardcoded openai/gpt-4o-mini strings.
- OptimizerContext gains resolve_provider_model(), which checks the executor's judge for provider/model when the config fields are None, then falls back to openai/gpt-4o-mini for backward compatibility (sketched below).
- llm_wrapper uses resolve_provider_model() rather than reading config directly.
- BlankLLMJudge._build_judge() raises a descriptive error when provider/model are not set (it cannot inherit from an executor, since it IS the executor creating the sub-judge).

Fixes evidentlyai#1856
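A minimal sketch of that resolution logic, assuming simplified class shapes (the class and method names come from this commit message; the dataclass layout, the `executor.judge` attribute lookup, and the helper constants are assumptions, not the actual evidently source):

```python
from dataclasses import dataclass
from typing import Optional, Tuple

# Old hardcoded defaults, now demoted to a last-resort fallback.
DEFAULT_PROVIDER = "openai"
DEFAULT_MODEL = "gpt-4o-mini"

@dataclass
class OptimizerConfig:
    # None means "not explicitly set; try to inherit from the judge".
    provider: Optional[str] = None
    model: Optional[str] = None

class OptimizerContext:
    def __init__(self, config: OptimizerConfig, executor=None):
        self.config = config
        self.executor = executor  # e.g. the user's LLM judge executor

    def resolve_provider_model(self) -> Tuple[str, str]:
        # 1. Explicitly configured values always win.
        provider, model = self.config.provider, self.config.model
        # 2. Otherwise inherit from the executor's judge, if there is one.
        judge = getattr(self.executor, "judge", None)
        if provider is None and judge is not None:
            provider = getattr(judge, "provider", None)
        if model is None and judge is not None:
            model = getattr(judge, "model", None)
        # 3. Fall back to the old defaults for backward compatibility.
        return provider or DEFAULT_PROVIDER, model or DEFAULT_MODEL
```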
Author: @elenasamuylova — tagging you as the maintainer of
Fixes #1856.
When `PromptOptimizer` runs its internal optimization step (e.g. `FeedbackStrategy` calling `SimplePromptOptimizer` to generate an improved prompt), it calls `context.llm_wrapper`. That wrapper was always built from `OptimizerConfig.provider` and `OptimizerConfig.model`, which had hardcoded defaults of `"openai"`/`"gpt-4o-mini"`. So any user whose judge used a non-OpenAI provider (e.g. Vertex AI) hit an `OpenAIError` demanding `OPENAI_API_KEY`, even though they never configured OpenAI anywhere.
Root cause: `OptimizerConfig.provider` and `model` defaulted to hardcoded OpenAI strings, so the optimizer always used OpenAI for its own completions regardless of what the user's executor was doing.

Fix:

- `OptimizerConfig.provider` and `model` now default to `None` instead of `"openai"`/`"gpt-4o-mini"`.
- `OptimizerContext` gains `resolve_provider_model()`, which checks the executor's judge for provider/model when the config fields are `None`, then falls back to `"openai"`/`"gpt-4o-mini"` for backward compatibility.
- `llm_wrapper` uses `resolve_provider_model()` rather than reading config directly.
- `BlankLLMJudge._build_judge()` raises a clear `OptimizationConfigurationError` when provider/model are not set (it cannot inherit from an executor since it is the executor creating the sub-judge; users must pass `provider=` and `model=` to `PromptOptimizer()` explicitly).

Before (broken):
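A simplified sketch of the old behavior (the config shape follows the PR description; the class name `OldOptimizerConfig` is only for this illustration):

```python
from dataclasses import dataclass

@dataclass
class OldOptimizerConfig:
    provider: str = "openai"      # hardcoded default
    model: str = "gpt-4o-mini"    # hardcoded default

# context.llm_wrapper read these fields directly, so even with a
# Vertex AI judge the optimizer's own completion targeted OpenAI and
# raised an OpenAIError asking for OPENAI_API_KEY.
print((OldOptimizerConfig().provider, OldOptimizerConfig().model))
# -> ('openai', 'gpt-4o-mini'), regardless of the judge
```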
After (fixed):
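And the fixed path, reusing the `OptimizerConfig`/`OptimizerContext` sketch from the commit message above (the judge stub's shape and the Gemini model name are illustrative assumptions):

```python
from types import SimpleNamespace

# An executor whose sub-judge runs on Vertex AI (shape assumed).
vertex_judge = SimpleNamespace(
    judge=SimpleNamespace(provider="vertex_ai", model="gemini-1.5-flash"))

ctx = OptimizerContext(OptimizerConfig(), executor=vertex_judge)
print(ctx.resolve_provider_model())
# -> ('vertex_ai', 'gemini-1.5-flash'): the optimizer now follows the judge
```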
Users who want a different provider for the optimizer than for their judge can still pass `provider=` and `model=` explicitly to `PromptOptimizer()`.

Three new unit tests cover the three resolution paths: explicit config, inherited from judge, and default fallback (sketched below).
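The three tests might look roughly like this (test names and the judge stub are assumptions; they exercise the `resolve_provider_model()` sketch above):

```python
from types import SimpleNamespace

judge = SimpleNamespace(
    judge=SimpleNamespace(provider="vertex_ai", model="gemini-1.5-flash"))

def test_explicit_config_wins():
    cfg = OptimizerConfig(provider="anthropic", model="claude-3-5-haiku")
    ctx = OptimizerContext(cfg, executor=judge)
    assert ctx.resolve_provider_model() == ("anthropic", "claude-3-5-haiku")

def test_inherited_from_judge():
    ctx = OptimizerContext(OptimizerConfig(), executor=judge)
    assert ctx.resolve_provider_model() == ("vertex_ai", "gemini-1.5-flash")

def test_default_fallback():
    ctx = OptimizerContext(OptimizerConfig(), executor=None)
    assert ctx.resolve_provider_model() == ("openai", "gpt-4o-mini")
```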