feat(openai-agents): initial instrumentation; collect agent traces #2966

Open
wants to merge 28 commits into base: main

Conversation

Contributor

@adharshctr adharshctr commented May 29, 2025

  • I have added tests that cover my changes.
  • If adding a new instrumentation or changing an existing one, I've added screenshots from some observability platform showing the change.
  • PR name follows conventional commits format: feat(instrumentation): ... or fix(instrumentation): ....
  • (If applicable) I have updated the documentation accordingly.

[Two screenshots from the observability platform were attached here.]


Important

Introduces OpenTelemetry instrumentation for OpenAI Agents, enabling tracing of agent workflows with new instrumentor, tests, and SDK updates.

  • Instrumentation:
    • Adds OpenAIAgentsInstrumentor in instrumentation.py to trace OpenAI agent workflows.
    • Implements _wrap_agent_run() to wrap Runner._get_new_response for tracing.
    • Adds functions in patch.py to extract and set span attributes for agent details, model settings, run config, prompts, responses, and token usage.
  • SDK Integration:
    • Updates instruments.py to include OPENAI_AGENTS.
    • Modifies tracing.py to initialize OpenAIAgentsInstrumentor in init_instrumentations() and init_openai_agents_instrumentor().
  • Configuration:
    • Adds .flake8 and .python-version for code style and Python version management.
    • Adds pyproject.toml and poetry.toml for project configuration and dependencies.
  • Documentation:
    • Creates README.md with installation and usage instructions for the new instrumentation.
  • Testing:
    • Adds test_openai_agents.py with unit tests for the instrumentor using pytest and unittest.mock.
  • Sample Application:
    • Adds openai_agents_using_litellm.py as a sample app demonstrating the use of the new instrumentation.
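The wrapping approach described in the summary can be sketched roughly as follows. The `Runner._get_new_response` and `_wrap_agent_run` names come from the PR description; the `Runner` class and the span-recording list here are plain-Python stand-ins for the real framework and the OpenTelemetry tracer, so only the monkey-patching pattern itself is shown:

```python
import functools

# Stand-in for the spans the real instrumentation would create via an
# OpenTelemetry tracer; here we record attribute dicts in a list.
RECORDED_SPANS = []

class Runner:
    """Hypothetical stand-in for the OpenAI Agents Runner class."""
    def _get_new_response(self, agent_name, prompt):
        return f"response for {prompt}"

def _wrap_agent_run(wrapped):
    """Wrap Runner._get_new_response, capturing agent details as span attributes."""
    @functools.wraps(wrapped)
    def wrapper(self, agent_name, prompt):
        span = {"name": "openai_agents.agent_run",
                "attributes": {"gen_ai.agent.name": agent_name,
                               "gen_ai.prompt": prompt}}
        result = wrapped(self, agent_name, prompt)
        span["attributes"]["gen_ai.completion"] = result
        RECORDED_SPANS.append(span)
        return result
    return wrapper

# Monkey-patch the internal method, as the instrumentor would do (the
# real code reportedly uses a wrapper registered in instrumentation.py).
Runner._get_new_response = _wrap_agent_run(Runner._get_new_response)
```

Wrapping an internal method like this is fragile across framework versions, which is what the review below questions.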

This description was created by Ellipsis for 83787d9.

@adharshctr adharshctr marked this pull request as ready for review May 29, 2025 09:44
Contributor

@ellipsis-dev ellipsis-dev bot left a comment


Caution

Changes requested ❌

Reviewed everything up to 83787d9 in 2 minutes and 1 second.
  • Reviewed 612 lines of code in 14 files
  • Skipped 1 file when reviewing.
  • Skipped posting 1 draft comment. View it below.
1. packages/opentelemetry-instrumentation-openai_agents/pyproject.toml:14
  • Draft comment:
    Typo: The repository URL contains "openllmetry". It appears you meant "opentelemetry". Please correct the typo in the repository URL.
  • Reason this comment was not posted:
    Decided after close inspection that this draft comment was likely wrong and/or not actionable (usefulness confidence = 10% vs. threshold = 50%). While "openllmetry" might look like a typo of "opentelemetry", it could actually be an intentional name for the repository, combining "LLM" and "telemetry" as a branding choice. The repository URL is part of the project metadata, and without being able to verify the actual repository structure or organization naming, we can't be certain this is a mistake. Given that we can't verify the correct repository name, we should err on the side of not making assumptions about organization/repository naming. Delete the comment, as we don't have strong evidence that the repository URL is incorrect.

Workflow ID: wflow_R5yp44NTeSZlDZ1d


@adharshctr adharshctr force-pushed the openai_agent_tracing branch from 7ce549f to dd58935 Compare May 29, 2025 10:32
@adharshctr adharshctr requested a review from gyliu513 May 30, 2025 08:42
Member

@nirga nirga left a comment


Thanks @adharshctr! I wonder why you decided to wrap an internal method of the framework and not one of their main APIs?

Member

@nirga nirga left a comment


Thanks @adharshctr - there are two major issues here we need to resolve. I suggest you look at other instrumentations (like the Anthropic one) to get a sense of what's needed here.

  1. You're not following semantic conventions - especially around LLM prompts and completions.
  2. You should test the spans exist and contain the right set of attributes - again, see other instrumentations for examples.

@adharshctr adharshctr requested a review from nirga June 3, 2025 05:12
Member

@nirga nirga left a comment


@adharshctr can you make sure you follow semantic conventions as much as possible?
https://opentelemetry.io/docs/specs/semconv/gen-ai/gen-ai-agent-spans/
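The linked conventions define `gen_ai.*` attribute names for agent spans. As a sketch of what a conforming span might carry, with keys taken from that spec and values made up for illustration:

```python
# Attribute keys from the OpenTelemetry gen-ai agent span conventions;
# the values shown are illustrative, not from the PR.
AGENT_SPAN_ATTRIBUTES = {
    "gen_ai.operation.name": "invoke_agent",
    "gen_ai.system": "openai",
    "gen_ai.agent.name": "weather-agent",
    "gen_ai.request.model": "gpt-4o",
    "gen_ai.usage.input_tokens": 120,
    "gen_ai.usage.output_tokens": 45,
}
```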

Member


@adharshctr can you write proper recorded tests like we have in other instrumentations?

Contributor Author


Done

@adharshctr adharshctr requested a review from nirga June 9, 2025 06:08
Labels: None yet
Projects: None yet
3 participants