Memory ballooning with breadcrumbs #1276
Comments
Small example to reproduce the problem (it is still difficult to reproduce, because it depends on the speed of the machine, the network, and the Sentry server):
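The snippet originally attached here is not preserved in this copy of the issue. The sketch below is only a stand-in for the kind of setup described: a client using the queued aiohttp transport, pointed at a placeholder DSN, flooded with exceptions that carry large messages. The DSN, the loop handling and the numbers are illustrative assumptions, not the original code.

```python
# Hedged reproduction sketch, not the original example attached to this comment.
# Assumes raven and raven-aiohttp are installed; the DSN is a placeholder and
# should point at a slow or overloaded Sentry instance for the effect to show.
import asyncio

from raven import Client
from raven_aiohttp import QueuedAioHttpTransport


async def flood():
    client = Client(
        dsn="https://public:secret@sentry.example.invalid/1",  # placeholder DSN
        transport=QueuedAioHttpTransport,
    )
    # Capture many exceptions carrying large messages. Once the transport queue
    # fills up and requests to Sentry time out, the resulting warnings are
    # themselves recorded as breadcrumbs, payload included.
    for i in range(10000):
        try:
            raise RuntimeError("boom %d: %s" % (i, "x" * 100000))
        except RuntimeError:
            client.captureException()
        await asyncio.sleep(0)  # let the transport workers run


asyncio.get_event_loop().run_until_complete(flood())
```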
I wonder if someone already had the chance to look at this? :)
Is this the right place to mention/discuss bugs? Because if I have to mention it somewhere else, I would like to know.
Possibly related to #929, but more specific. We think there is an easy fix, but we would like some feedback on whether we're on the right track.
The symptoms
Two weeks ago we noticed problems with a project that uses raven-python to send exceptions to Sentry. We use aiohttp in combination with the raven-aiohttp package and the QueuedAioHttpTransport transport.
The issue occurred when we had problems with a component, which caused lots of exceptions to be logged (and sent to Sentry). This, sadly, caused Sentry to slow down.
There are two issues here that we should solve ourselves: a component that produces lots of exceptions when it fails, and not enough capacity for Sentry. Regardless, we believe that raven-python made the problem worse, and we think that can be fixed.
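For reference, a client in this kind of setup is wired up roughly as follows; this is a sketch with a placeholder DSN, not our actual configuration:

```python
# Sketch of the raven / raven-aiohttp wiring described above (placeholder DSN).
from raven import Client
from raven_aiohttp import QueuedAioHttpTransport

client = Client(
    dsn="https://public:secret@sentry.example.invalid/1",
    transport=QueuedAioHttpTransport,  # queued, non-blocking aiohttp transport
)
```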
Investigation
After investigation it turned out that timeouts to Sentry and a full transport queue (messages waiting to be sent to Sentry) caused extra exceptions to be logged. These exceptions are excluded from being sent to Sentry, so they cause no problem there.
However, they do end up in breadcrumbs. Breadcrumbs are already limited in several ways, most importantly in that the in-memory buffer only keeps a bounded number of entries.
So far so good.
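To make the mechanism concrete: log records are picked up by raven's logging hook, and breadcrumbs can also be recorded explicitly via captureBreadcrumb. The sketch below uses a placeholder DSN, and the final line peeks at raven's internal breadcrumb buffer, so treat those attribute names as illustrative; the point is that only the count of entries is capped:

```python
# Sketch: each breadcrumb keeps its message/data, but the buffer only retains
# a bounded number of recent entries.
from raven import Client

client = Client("https://public:secret@sentry.example.invalid/1")  # placeholder DSN

# Record far more breadcrumbs than the buffer will keep.
for i in range(1000):
    client.captureBreadcrumb(
        message="timeout while sending to Sentry (attempt %d)" % i,
        category="raven",
        level="warning",
    )

# Internal API, shown only to illustrate the cap on the number of entries.
print(len(client.context.breadcrumbs.get_buffer()))  # bounded, not 1000
```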
The problem
Breadcrumbs are not limited in their data, though (there is a TODO in the code about this: raven-python/raven/breadcrumbs.py, line 57 at f6d79c3).
We believe that this can cause memory ballooning (with an upper bound, because of the limits described above).
The process can recover from this, as long as it can eventually deliver the messages to sentry.
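A back-of-envelope model of that upper bound, using a plain Python deque to stand in for the breadcrumb buffer; the buffer size and payload size are illustrative assumptions, not raven's actual values:

```python
# Back-of-envelope model of the ballooning: the number of entries is capped,
# but each entry can drag along an arbitrarily large message/data payload.
import sys
from collections import deque

BUFFER_LIMIT = 100           # assumed cap on the number of breadcrumbs
PAYLOAD_BYTES = 1024 * 1024  # assumed size of one logged message/data blob

buffer = deque(maxlen=BUFFER_LIMIT)
for i in range(300):                    # many more events than the cap
    buffer.append("x" * PAYLOAD_BYTES)  # each payload is stored verbatim

retained = sum(sys.getsizeof(entry) for entry in buffer)
print("%d entries, ~%d MB retained" % (len(buffer), retained // (1024 * 1024)))
# -> 100 entries, roughly BUFFER_LIMIT * PAYLOAD_BYTES of memory per buffer
```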
A workaround
We worked around this problem by excluding certain loggers from the breadcrumbs (this is better than turning off breadcrumbs entirely):
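The snippet we used is not included above, but raven exposes an ignore_logger helper for this, so the workaround looks roughly like the sketch below (the logger names are examples, not recommendations):

```python
# Workaround sketch: keep breadcrumbs enabled, but stop specific noisy loggers
# from producing them, instead of disabling breadcrumbs altogether.
from raven import breadcrumbs

# Example logger names; use whichever loggers flood the breadcrumb buffer.
for logger_name in ("sentry.errors", "raven_aiohttp", "our.noisy.component"):
    breadcrumbs.ignore_logger(logger_name)
```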
Solution
We think the solution would be to:
client.captureException()
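The list above is only partially preserved. As a rough illustration of the direction we mean, capping the size of the values stored with each breadcrumb before they enter the buffer, here is a hypothetical helper; none of it is existing raven API:

```python
# Hypothetical sketch of the kind of fix we mean: cap the size of the values
# stored in a breadcrumb's data before it goes into the buffer. None of these
# names exist in raven today; the limit is an arbitrary illustrative choice.
MAX_BREADCRUMB_VALUE = 1024  # characters kept per data value


def truncate_breadcrumb_data(data, limit=MAX_BREADCRUMB_VALUE):
    """Return a copy of a breadcrumb's data dict with oversized values cut off."""
    if not data:
        return data
    truncated = {}
    for key, value in data.items():
        if isinstance(value, str) and len(value) > limit:
            value = value[:limit] + "..."
        truncated[key] = value
    return truncated
```

Applied in the breadcrumb recording path (around the TODO mentioned above), something like this would keep the worst case at roughly the buffer size times the per-value limit.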
Any thoughts?