Commit 3970edf

welteki authored and alexellis committed
Update email filter example to use OpenAI Responses API
Switch from Chat Completions to the Responses API, update the model to gpt-5.4-nano, replace the messages array with the instructions and input parameters, and update max_tokens to max_output_tokens.

Signed-off-by: Han Verstraete (OpenFaaS Ltd) <han@openfaas.com>
1 parent 07f3b1e commit 3970edf

File tree

1 file changed: +7 -9 lines changed


_posts/2025-04-11-filter-emails-with-openai.md

Lines changed: 7 additions & 9 deletions
````diff
@@ -481,21 +481,19 @@ If the email is incomplete or ambiguous, base your judgment on available content
 Email:
 ```
 
-The classification function shown below constructs the full prompt dynamically based on the email content and sends it to the OpenAI Chat API using the `gpt-3.5-turbo` model:
+The classification function shown below constructs the full prompt dynamically based on the email content and sends it to the OpenAI Responses API using the `gpt-5.4-nano` model:
 
 ```python
 def classify_email_content(prompt, content):
     full_prompt = f"{prompt}\n\nFrom: {content['from']}\nSubject: {content['subject']}\nBody:\n{content['body']}"
-    response = openAIClient.chat.completions.create(
-        model="gpt-3.5-turbo",
-        messages=[
-            {"role": "system", "content": "You are an assistant that classifies emails. Always respond with JSON."},
-            {"role": "user", "content": full_prompt}
-        ],
+    response = openAIClient.responses.create(
+        model="gpt-5.4-nano",
+        instructions="You are an assistant that classifies emails. Always respond with JSON.",
+        input=full_prompt,
         temperature=0.2, # Low randomness for consistent output
-        max_tokens=300
+        max_output_tokens=300
     )
-    return response.choices[0].message.content
+    return response.output_text
 ```
 
 When implementing your own version of the function feel free to experiment with different available models. The prompt we use in this example is very minimal and you might want to give it more context and examples for a more reliable and consistent output. OpenAI also has a great article on [how to optimize the correctness and accuracy of an LLM for specific tasks](https://platform.openai.com/docs/guides/optimizing-llm-accuracy).
````
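After this change the function returns the model's raw text via `response.output_text`, and the prompt instructs the model to always respond with JSON, so the caller still needs a parsing step before acting on the classification. A minimal sketch of that step (the `category` field, its values, and the fallback behaviour are illustrative assumptions, not part of the post):

```python
import json

def parse_classification(output_text):
    # output_text is the plain string returned by the Responses API;
    # the system instructions ask the model to reply with JSON only.
    try:
        return json.loads(output_text)
    except json.JSONDecodeError:
        # Hypothetical fallback: treat unparseable replies as "unknown"
        # rather than crashing the email filter.
        return {"category": "unknown"}

# Example with a plausible (hypothetical) model reply:
result = parse_classification('{"category": "spam", "confidence": 0.93}')
print(result["category"])
```

In practice you may also want to retry the request or lower the temperature further if the model repeatedly returns malformed JSON.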
