Memory Leak Issue with HMAC Calculation in bc-fips-2.1.0 #2059
Hi,

Would you mind running the following from a shell within your container:

Also, would you mind launching the JVM that is running 2.1.0 with the following option:

This will disable the use of native code, assuming your container is running on a machine with an Intel CPU. Let me know how you go.

Thanks,
MW
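The option itself did not survive in the text above. Assuming it refers to bc-fips's native-code switch mentioned later in this thread (`cpu_variant`), the flag would look something like the following; treat the exact property name as an assumption and check the BC-FIPS user guide for your version:

```shell
# Assumed flag: force the pure-Java code paths in bc-fips 2.1.x,
# bypassing the Intel native implementations.
java -Dorg.bouncycastle.native.cpu_variant=java -jar app.jar
```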
Hi,

I’ve applied the flags as you recommended, and I can confirm that memory usage remains stable after the update. It appears that adding the:

Could you please elaborate on the differences between bc-fips-2.0 and 2.1 that require this additional flag in the deployment? Understanding this change would help us better assess future upgrades and configurations.

Thank you in advance for your support.
Ok, so what is happening is that it isn't leaking memory; it is just delaying cleaning it up. You can adjust this delay (in seconds) with the following:

On JVMs with access to many cores (e.g. 16+) it is legal for the JVM to notify the library that an object is going to be garbage collected while the last thread ever to access that object is still completing its last method call. Note that being notified that an object is going to be garbage collected and the object actually being collected are two different things. We originally reacted to the notifications immediately and used them to trigger cleanup of any native memory allocations, but on large multi-core JVM deployments the native memory allocation risked being freed while another thread was still finishing its access to the object. This led to the use of invalid pointers, ungraceful terminations of the JVM, etc. At this stage there does not appear to be much we can do beyond waiting a fixed amount of time and then cleaning up the native allocation.

Difference between 2.0 and 2.1: 2.1 has native implementations (Intel only) of some transformations, and the provider will delegate to those if they are available. The cleanup delay is only relevant if the cpu_variant is not set to java.
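The deferred-cleanup behaviour described above can be sketched as follows. This is purely an illustration of the pattern (free a native allocation only after a fixed grace period, rather than at the moment the GC notification arrives); all class and method names here are hypothetical and are not the bc-fips internals:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Illustration only: instead of freeing a native allocation as soon as the
// JVM signals that its owning object is unreachable, the free is scheduled
// after a fixed grace period, so any thread still inside a method call on
// that object has time to finish. During the grace period the native memory
// is still allocated, which is why usage appears to grow before dropping.
public class DeferredCleanup {
    private static final ScheduledExecutorService SCHEDULER =
            Executors.newSingleThreadScheduledExecutor(r -> {
                Thread t = new Thread(r, "native-cleanup");
                t.setDaemon(true);
                return t;
            });

    /** Schedule release of a (simulated) native handle after delaySeconds. */
    static void scheduleFree(long handle, long delaySeconds, Runnable free) {
        SCHEDULER.schedule(free, delaySeconds, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        CountDownLatch freed = new CountDownLatch(1);
        // Pretend 0x1234 is a native pointer; free it after a 1-second delay.
        scheduleFree(0x1234L, 1, freed::countDown);
        // Memory appears "retained" during the grace period...
        System.out.println("freed immediately? " + (freed.getCount() == 0));
        // ...and is released once the delay elapses.
        freed.await(5, TimeUnit.SECONDS);
        System.out.println("freed after delay? " + (freed.getCount() == 0));
    }
}
```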
Hi, Setting Question: What are the benefits of using native objects (via .so files) compared to setting cpu_variant=java in the context of our Java Spring Boot application deployed in Kubernetes pods? Could you provide references to official documentation or resources that detail these configuration options and their implications? Thank you for your assistance. Best regards, |
Hi,

Enterprise-grade support is available at: https://www.keyfactor.com/open-source/bouncy-castle-support/

I would caution that we are not responsible for either Spring Boot or Kubernetes pods, so a lot more information would be required.
Hi,
I am encountering a significant memory leak when calculating HMAC with bc-fips-2.1.0 for my data using the following code:
java:
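The original code block is not shown above. As a stand-in, here is a minimal sketch of an HMAC-SHA256 calculation of the kind described, using the standard JCA `Mac` API. With bc-fips on the classpath, the provider would typically be installed and selected explicitly (shown as a commented assumption), but the default provider is used here so the example is self-contained:

```java
import java.nio.charset.StandardCharsets;
import java.util.HexFormat;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Stand-in for the missing snippet: compute an HMAC-SHA256 over some data.
// With bc-fips installed, the provider would typically be selected explicitly:
//   Security.addProvider(new BouncyCastleFipsProvider());
//   Mac mac = Mac.getInstance("HmacSHA256", "BCFIPS");
public class HmacExample {
    public static String hmacSha256Hex(byte[] key, byte[] data) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        return HexFormat.of().formatHex(mac.doFinal(data));
    }

    public static void main(String[] args) throws Exception {
        byte[] key = "secret-key".getBytes(StandardCharsets.UTF_8);
        byte[] data = "payload".getBytes(StandardCharsets.UTF_8);
        System.out.println(hmacSha256Hex(key, data));
    }
}
```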
The issue appears when I use the bc-fips-2.1.0 version, where memory usage increases steadily during execution. However, when I switch to bc-fips-2.0.0, memory usage remains stable, and the issue no longer occurs.
Would you be able to assist me in identifying the root cause of this memory leak in bc-fips-2.1.0? It seems like there may have been some changes or regressions in this version that are affecting memory management.
I appreciate any insights or recommendations you might have.
Thank you for your assistance.
I ran this code 500,000 times and got:
Test with BC-FIPS 2.1.x:
Start memory of my container: 250 Mi
Finish memory of my container: 1300 Mi
Test with BC-FIPS 2.0.x:
Start memory of my container: 291 Mi
Finish memory of my container: 296 Mi
Best regards,
Dima