feat: add record_exception in the datadog api #4567
base: master
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

@@ Coverage Diff @@
##           master   #4567   +/- ##
====================================
  Coverage   97.76%   97.76%
====================================
  Files        1404     1404
  Lines       86173    86220   +47
  Branches     4354     4356    +2
====================================
+ Hits        84247    84295   +48
+ Misses       1926     1925    -1

View full report in Codecov by Sentry.
Datadog Report. Branch report: ❌ 4 Failed (0 Known Flaky), 20386 Passed, 1366 Skipped, 3m 41.29s total time. ❌ Failed Tests (4)
Benchmarks. Benchmark execution time: 2025-04-10 15:15:32. Comparing candidate commit a58f267 in PR branch. Found 0 performance improvements and 1 performance regression. Performance is the same for 30 metrics; 2 unstable metrics. Scenario: profiler - gvl benchmark samples.
Approving with a couple of minor doc updates requested
# type.
#
# @return [void]
def record_exception(exception, attributes: {})
You are calling this from the OpenTelemetry SDK, where attributes defaults to nil, so either there or here you must handle the case of attributes being nil and change it to an empty hash.
I'm thinking this would have been caught by type checking if the types were defined for both methods.
Nice catch, thank you.
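A minimal sketch of the nil-safe defaulting the reviewer is asking for (this is not the dd-trace-rb implementation; the body is simplified to illustrate only the normalization):

```ruby
# Sketch: normalize a possibly-nil attributes argument before reading keys
# from it, so a caller that passes `attributes: nil` (as the OTel bridge
# does) behaves the same as one that omits the keyword entirely.
def record_exception(exception, attributes: {})
  attributes ||= {} # nil from the caller becomes an empty hash

  {
    'exception.type' => attributes['exception.type'] || exception.class.to_s,
    'exception.message' => attributes['exception.message'] || exception.message
  }
end
```

With `attributes ||= {}`, `attributes['exception.type']` can never raise NoMethodError on nil, whichever way the method is invoked.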
'exception.type' => exc.type,
'exception.message' => exc.message,
'exception.stacktrace' => exc.backtrace,
}
What was the response to my comment about the type of keys here being strings and not symbols?
I applied it in my first review commit, but it made the linter job fail: https://github.com/DataDog/dd-trace-rb/actions/runs/14382559325/job/40329857628
Therefore I reverted the change.
OK, span_event.rbs defines the attribute types:

type attributes = Hash[String, attributeValue]
type attributeValue = String | Integer | Float | bool | Array[String] | Array[Integer] | Array[Float] | Array[bool]

The record_exception method should be added to the type definition for SpanOperation, referencing these types for attributes.
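A sketch of what that RBS addition might look like (the module nesting, file path, and the exact way the aliases from span_event.rbs are referenced are assumptions, not the actual signature file):

```rbs
# Hypothetical addition to the SpanOperation signature, reusing the
# attribute type aliases declared in span_event.rbs.
module Datadog
  module Tracing
    class SpanOperation
      def record_exception: (Exception exception, ?attributes: SpanEvent::attributes?) -> void
    end
  end
end
```

Declaring the keyword as `SpanEvent::attributes?` (nilable) would make steep/rbs flag any caller-side nil-vs-hash mismatch of the kind discussed above.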
# take precedence over the type, message and stacktrace inferred from the exception object
type = attributes&.[]('exception.type') || exception.class.to_s
message = attributes&.[]('exception.message') || exception.message
stacktrace = attributes&.[]('exception.stacktrace') || exception.full_message(highlight: false, order: :top)
This defaulting to full_message seems to me to have vanished?
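For context, a standalone sketch of the defaulting shown in the diff above (the helper name is hypothetical, not the library code): an explicit 'exception.stacktrace' attribute wins; otherwise Ruby's Exception#full_message renders the message plus backtrace.

```ruby
# Sketch of the stacktrace precedence: caller-supplied attribute first,
# Exception#full_message (message + backtrace, no ANSI colour) as fallback.
def exception_stacktrace(exception, attributes)
  attributes&.[]('exception.stacktrace') ||
    exception.full_message(highlight: false, order: :top)
end
```

The safe-navigation `attributes&.[](...)` keeps the fallback working even when attributes is nil.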
What does this PR do?
This PR adds a record_exception call in the tracer API. It mimics the behavior of the OTEL one for which you can find the documentation here: https://opentelemetry.io/docs/specs/otel/trace/exceptions/
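Per the linked OTel spec, recording an exception means attaching a span event named 'exception' whose attributes carry the exception type, message and stacktrace. A self-contained sketch of that shape (SpanEventSketch and build_exception_event are illustrative names, not the dd-trace-rb API):

```ruby
# Sketch of the span event an OTel-style record_exception produces.
SpanEventSketch = Struct.new(:name, :attributes)

def build_exception_event(exception, attributes = {})
  attributes ||= {}
  SpanEventSketch.new(
    'exception',
    {
      'exception.type' => exception.class.to_s,
      'exception.message' => exception.message,
      'exception.stacktrace' => exception.full_message(highlight: false, order: :top)
    }.merge(attributes) # caller-supplied attributes take precedence
  )
end
```

The `.merge(attributes)` at the end mirrors the precedence rule discussed in the review: explicitly passed attributes override the values inferred from the exception object.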
Motivation:
Some customers using Sentry complained that they were not able to record an exception. It turns out that they can if they use the OTel API from our tracer. However, since we want to compete with Sentry, requiring the OTel API is not a good solution; it is simpler to expose this capability directly from our tracer. In addition, the feature is easy to implement.
Change log entry
Yes. Record an exception through span events with the record_exception call.
How to test the change?