Confirm this is a feature request for the Python library and not the underlying OpenAI API.
This is a feature request for the Python library
Describe the feature or improvement you're requesting
Summary:
Currently, the OpenAI Agents SDK runs guardrails and agent logic in parallel, prioritizing low latency. While this is a smart design for speed, it can cause token waste if the guardrail raises an error after the agent has already started running.
Problem:
The documentation suggests that when a guardrail trips, the expensive agent model is stopped from running, saving time and money.
However, in practice, because the agent starts in parallel with the guardrail, the agent's first LLM call has often already been sent (and billed) by the time the guardrail tripwire raises its exception, so those tokens are wasted. A minimal reproduction is sketched below.
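For illustration, a minimal sketch of the behaviour described above using the public guardrail API (`input_guardrail`, `GuardrailFunctionOutput`, `InputGuardrailTripwireTriggered`); exactly when the tripwire fires relative to the billed call can vary from run to run:

```python
import asyncio

from agents import (
    Agent,
    GuardrailFunctionOutput,
    InputGuardrailTripwireTriggered,
    Runner,
    input_guardrail,
)


@input_guardrail
async def block_math_homework(ctx, agent, user_input) -> GuardrailFunctionOutput:
    # Cheap check that decides whether the request should be blocked.
    is_blocked = "math homework" in str(user_input).lower()
    return GuardrailFunctionOutput(output_info=None, tripwire_triggered=is_blocked)


expensive_agent = Agent(
    name="Support agent",
    instructions="Help the customer with their request.",
    model="gpt-4o",
    input_guardrails=[block_math_homework],
)


async def main() -> None:
    try:
        await Runner.run(expensive_agent, "Please do my math homework.")
    except InputGuardrailTripwireTriggered:
        # The tripwire fires, but because the guardrail ran in parallel with the
        # agent, the expensive model call may already have been issued and billed.
        print("Guardrail tripped -- but tokens may already have been spent.")


asyncio.run(main())
```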
Suggestion:
Please add a config option like this:
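Something along these lines, as a sketch only: the `run_input_guardrails_first` flag is hypothetical and the exact name/placement is of course up to the maintainers; `Agent`, `RunConfig`, and `Runner.run_sync` are the existing SDK APIs.

```python
from agents import Agent, RunConfig, Runner

agent = Agent(
    name="Support agent",
    instructions="Help the customer with their request.",
    input_guardrails=[block_math_homework],  # guardrail from the example above
)

# Hypothetical flag (not in the SDK today): when True, the runner awaits all
# input guardrails first and only issues the agent's first LLM call if none
# of them trip, trading a little extra latency for zero wasted tokens.
result = Runner.run_sync(
    agent,
    "Please do my math homework.",
    run_config=RunConfig(run_input_guardrails_first=True),
)
```

This would let users opt into sequential guardrail execution per run when cost matters more than latency, while keeping the current parallel behaviour as the default.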