feat(openai-agents): initial instrumentation; collect agent traces #2966
base: main
Conversation
Caution
Changes requested ❌
Reviewed everything up to 83787d9 in 2 minutes and 1 second.
- Reviewed 612 lines of code in 14 files.
- Skipped 1 file when reviewing.
- Skipped posting 1 draft comment; it is shown below.
1. packages/opentelemetry-instrumentation-openai_agents/pyproject.toml:14
- Draft comment: Typo: the repository URL contains "openllmetry"; it appears you meant "opentelemetry". Please correct the typo in the repository URL.
- Reason this comment was not posted: after close inspection, the draft was judged likely wrong and/or not actionable (usefulness confidence 10% vs. threshold 50%). While "openllmetry" might look like a typo of "opentelemetry", it could be an intentional name combining "LLM" and "telemetry". Without checking the actual GitHub organization, the repository URL cannot be confirmed as incorrect, so the comment was deleted rather than risk a wrong assumption about the project's naming.
Force-pushed from 7ce549f to dd58935.
Thanks @adharshctr! I wonder why you decided to wrap an internal method of the framework and not one of their main APIs?
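To illustrate the distinction the reviewer is drawing, here is a minimal, stdlib-only sketch (all names are hypothetical stand-ins, not the real openai-agents API) of why patching a public entry point is more robust than patching an internal helper that may be renamed in any release:

```python
import functools

RECORDED_SPANS = []

class Runner:
    """Stand-in for the framework's Runner."""

    def _get_new_response(self, prompt):
        # Internal detail: not part of the documented surface,
        # so it can disappear or change signature without notice.
        return f"response to: {prompt}"

    def run(self, prompt):
        # Public, documented entry point: a far more stable patch target.
        return self._get_new_response(prompt)

def traced(span_name):
    """Wrap a method so each call records a span-like dict."""
    def wrap(fn):
        @functools.wraps(fn)
        def wrapper(self, *args, **kwargs):
            span = {"name": span_name, "attributes": {}}
            try:
                return fn(self, *args, **kwargs)
            finally:
                RECORDED_SPANS.append(span)
        return wrapper
    return wrap

# Patch the public surface; the internals stay untouched.
Runner.run = traced("openai_agents.agent.run")(Runner.run)

result = Runner().run("hello")
```

The same wrapping pattern (e.g. via `wrapt.wrap_function_wrapper`) works on either method; the question is only which target survives framework refactors.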
Thanks @adharshctr - there are 2 major issues here we need to resolve. I suggest you look at other instrumentations (like the Anthropic one) to get a sense of what is needed here.
- You're not following semantic conventions - especially around LLM prompts and completions.
- You should test that the spans exist and contain the right set of attributes - again, see other instrumentations for examples.
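As a rough illustration of both points, the sketch below sets prompt and completion attributes in the indexed `gen_ai.*` style used by other instrumentations in this repo, on a plain dict standing in for a span. The exact key names should be taken from the project's shared `SpanAttributes` constants; treat the literals here as assumptions:

```python
def set_llm_attributes(span, request, response):
    """Flatten an LLM request/response onto span attributes
    following the gen_ai.* naming scheme (keys assumed, not verified)."""
    span["gen_ai.system"] = "openai"
    span["gen_ai.request.model"] = request["model"]
    # Prompts and completions are indexed so multi-message
    # conversations map onto flat attribute keys.
    for i, msg in enumerate(request["messages"]):
        span[f"gen_ai.prompt.{i}.role"] = msg["role"]
        span[f"gen_ai.prompt.{i}.content"] = msg["content"]
    span["gen_ai.completion.0.role"] = "assistant"
    span["gen_ai.completion.0.content"] = response["content"]
    span["gen_ai.usage.input_tokens"] = response["usage"]["input_tokens"]
    span["gen_ai.usage.output_tokens"] = response["usage"]["output_tokens"]
    return span

span = set_llm_attributes(
    {},
    {"model": "gpt-4o", "messages": [{"role": "user", "content": "hi"}]},
    {"content": "hello!", "usage": {"input_tokens": 3, "output_tokens": 2}},
)
```

A test for the second point would then run the instrumented code against an in-memory span exporter and assert that exactly these keys are present with the expected values.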
@adharshctr can you make sure you follow semantic conventions as much as possible?
https://opentelemetry.io/docs/specs/semconv/gen-ai/gen-ai-agent-spans/
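A minimal sketch of what the linked agent-span conventions prescribe: span name `invoke_agent {agent name}`, plus `gen_ai.operation.name` and `gen_ai.agent.*` attributes. The values below are illustrative only:

```python
def agent_span(agent_name, agent_id, model):
    """Build a dict-shaped stand-in for an agent invocation span
    following the OpenTelemetry gen-ai agent span conventions."""
    return {
        # Convention: span name is "invoke_agent {gen_ai.agent.name}".
        "name": f"invoke_agent {agent_name}",
        "attributes": {
            "gen_ai.operation.name": "invoke_agent",
            "gen_ai.agent.name": agent_name,
            "gen_ai.agent.id": agent_id,
            "gen_ai.request.model": model,
        },
    }

span = agent_span("travel-planner", "agent-123", "gpt-4o")
```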
@adharshctr can you write proper recorded tests like we have in other instrumentations?
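The recorded-test idea can be sketched without any test framework: capture the model response to a local cassette file on the first run, then replay it so later runs are deterministic and offline. Other instrumentations in this repo use VCR-style cassettes with pytest; everything below (function names, payload shape) is illustrative:

```python
import json
import os
import tempfile

def call_model_live(prompt):
    # Placeholder for a real network call to the model provider.
    return {"content": f"echo: {prompt}",
            "usage": {"input_tokens": 1, "output_tokens": 1}}

def call_model_recorded(prompt, cassette_path):
    """Replay from the cassette if it exists; otherwise record once."""
    if os.path.exists(cassette_path):
        with open(cassette_path) as f:
            return json.load(f)          # replay: no network needed
    response = call_model_live(prompt)   # record on first run
    with open(cassette_path, "w") as f:
        json.dump(response, f)
    return response

cassette = os.path.join(tempfile.mkdtemp(), "cassette.json")
first = call_model_recorded("hi", cassette)
second = call_model_recorded("hi", cassette)  # served from the cassette
```

In a real test the recorded interaction is committed alongside the test, and assertions are made against the spans produced while replaying it.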
Done
feat(instrumentation): ... or fix(instrumentation): ...
Important
Introduces OpenTelemetry instrumentation for OpenAI Agents, enabling tracing of agent workflows with new instrumentor, tests, and SDK updates.
- Adds `OpenAIAgentsInstrumentor` in `instrumentation.py` to trace OpenAI agent workflows.
- Adds `_wrap_agent_run()` to wrap `Runner._get_new_response` for tracing.
- Adds `patch.py` to extract and set span attributes for agent details, model settings, run config, prompts, responses, and token usage.
- Updates `instruments.py` to include `OPENAI_AGENTS`.
- Updates `tracing.py` to initialize `OpenAIAgentsInstrumentor` in `init_instrumentations()` and `init_openai_agents_instrumentor()`.
- Adds `.flake8` and `.python-version` for code style and Python version management.
- Adds `pyproject.toml` and `poetry.toml` for project configuration and dependencies.
- Adds `README.md` with installation and usage instructions for the new instrumentation.
- Adds `test_openai_agents.py` with unit tests for the instrumentor using `pytest` and `unittest.mock`.
- Adds `openai_agents_using_litellm.py` as a sample app demonstrating the use of the new instrumentation.

This description was created for 83787d9. It will automatically update as commits are pushed.
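The attribute extraction described for `patch.py` might look roughly like this. The field names on the agent, model-settings, and usage objects are assumptions for illustration, not the real openai-agents types:

```python
def extract_span_attributes(agent, model_settings, usage):
    """Flatten hypothetical agent/model/usage data onto span attributes.
    Field names are assumed; a real implementation reads them from the
    framework's run objects."""
    attrs = {
        "gen_ai.agent.name": agent["name"],
        "gen_ai.request.model": model_settings["model"],
    }
    # Optional settings are only recorded when present, so spans
    # don't carry null-valued attributes.
    temperature = model_settings.get("temperature")
    if temperature is not None:
        attrs["gen_ai.request.temperature"] = temperature
    if usage:
        attrs["gen_ai.usage.input_tokens"] = usage["input_tokens"]
        attrs["gen_ai.usage.output_tokens"] = usage["output_tokens"]
    return attrs

attrs = extract_span_attributes(
    {"name": "assistant"},
    {"model": "gpt-4o", "temperature": 0.2},
    {"input_tokens": 12, "output_tokens": 5},
)
```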