[3.10.8] Caching leads to worse performance than not caching #12560
Comments
Hey @JeffreyMJordan 👋 Would you mind checking out the values reported by the memory management section here? https://www.apollographql.com/docs/react/caching/memory-management That […]
Hi @jerelmiller, thanks for the reply. I'm taking these numbers from our internal replication (not the codesandbox). The internal replica is very similar to what I provided you in the sandbox, as I'm just repeating the same query with the same args a certain number of times. Values in […]
That said, it does seem likely we're hitting a limit somewhere. I noticed there's a large drop-off in performance between 4,900 queries and 5,000 queries that doesn't occur when not caching. Below is the full contents of […]:

{
"limits": {
"parser": 1000,
"canonicalStringify": 1000,
"print": 2000,
"documentTransform.cache": 2000,
"queryManager.getDocumentInfo": 2000,
"PersistedQueryLink.persistedQueryHashes": 2000,
"fragmentRegistry.transform": 2000,
"fragmentRegistry.lookup": 1000,
"fragmentRegistry.findFragmentSpreads": 4000,
"cache.fragmentQueryDocuments": 1000,
"removeTypenameFromVariables.getVariableDefinitions": 2000,
"inMemoryCache.maybeBroadcastWatch": 5000,
"inMemoryCache.executeSelectionSet": 50000,
"inMemoryCache.executeSubSelectedArray": 10000
},
"sizes": {
"print": 5,
"parser": 9,
"canonicalStringify": 4,
"links": [],
"queryManager": {
"getDocumentInfo": 8,
"documentTransforms": []
},
"cache": {
"fragmentQueryDocuments": 0
},
"addTypenameDocumentTransform": [
{
"cache": 8
}
],
"inMemoryCache": {
"executeSelectionSet": 7,
"executeSubSelectedArray": 0,
"maybeBroadcastWatch": 0
},
"fragmentRegistry": {}
}
}

I'm wondering if […]. I also can't help but notice that the limit of […]
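All of the "limits" in the dump above are sizes of memoization caches. As a rough mental model (this is an illustrative sketch, not Apollo's actual implementation), a bounded memoization cache with least-recently-used eviction behaves like this:

```javascript
// Minimal sketch of a bounded LRU memoization cache, like the ones
// behind the "limits" values above. Results are kept up to `max`
// entries; the least-recently-used entry is evicted first.
class BoundedMemo {
  constructor(max, compute) {
    this.max = max;
    this.compute = compute;
    this.map = new Map(); // Map preserves insertion order -> cheap LRU
    this.misses = 0;
  }
  get(key) {
    if (this.map.has(key)) {
      const value = this.map.get(key);
      this.map.delete(key); // refresh recency
      this.map.set(key, value);
      return value;
    }
    this.misses++;
    const value = this.compute(key);
    this.map.set(key, value);
    if (this.map.size > this.max) {
      // Evict the oldest entry (first key in insertion order).
      this.map.delete(this.map.keys().next().value);
    }
    return value;
  }
}

// With a limit of 2, cycling through 3 keys evicts each result before
// it can be reused, so every lookup recomputes -- the "limit too low"
// case described in this thread.
const memo = new BoundedMemo(2, (k) => k.toUpperCase());
for (let round = 0; round < 3; round++) {
  for (const k of ["a", "b", "c"]) memo.get(k);
}
console.log(memo.misses); // 9: never a hit, despite only 3 distinct keys
```

Raising the limit to 3 makes every lookup after the first round a hit, which is why a limit just below the working-set size can cause a sharp performance cliff like the 4,900-vs-5,000 drop-off reported above.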
My best guess is that the production examples are hitting the […]. My current understanding is that […]
It's safe to increase it if you are hitting it with a single displayed page. All of these limits are memoization cache limits, so they are a tradeoff between memory pressure and not repeating work whose result could be cached. If the limits are too high, you keep too many computed values in memory in case you might need them again at some point in the future; if the limits are too low and not everything on screen can be memoized at once, the oldest memoized values will be evicted from the cache while they are still needed.
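For reference, in Apollo Client 3.9+ these limits can be adjusted through the `cacheSizes` object described in the memory-management docs linked earlier. A hedged sketch (verify the exact key names against your installed version; they appear to match the keys in the `limits` dump above):

```javascript
// Assumed API per Apollo Client's memory-management docs (3.9+).
import { cacheSizes } from "@apollo/client/utilities";

// Must be set before the client/cache is instantiated.
cacheSizes["inMemoryCache.executeSelectionSet"] = 100_000;
cacheSizes["inMemoryCache.maybeBroadcastWatch"] = 10_000;
```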
Hi, just following up on this. I increased the cache limits by quite a bit and I'm not seeing any improvement. This issue seems very similar to this one from 2022. I did patch in this merged PR and I'm still not seeing much improvement. The times we've run into issues in production were all times the client sent a large volume of queries, all of which are sped up by bypassing InMemoryCache.
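(For context, "bypassing InMemoryCache" presumably refers to the standard `no-cache` fetch policy; an illustrative fragment, where `client` and `MY_QUERY` are placeholders for an existing client and query document:)

```javascript
// "no-cache" skips both reading from and writing to InMemoryCache,
// so no cache watches are registered and no broadcasts are triggered.
const result = await client.query({
  query: MY_QUERY, // placeholder DocumentNode
  fetchPolicy: "no-cache",
});
```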
So, this is independent of a specific version number?
Issue Description
Hi Apollo, I've noticed some instances where caching leads to worse application performance than not caching. When our application sends a large volume of batched queries, disabling caching frequently improves performance.
I've been able to reproduce the issue locally, and it seems to be related to the `watches` field and the updates that take place when parts of the cache are invalidated. It looks like a lot of CPU time is spent iterating through the `watches` field and updating relevant parts of the cache. I'm curious whether this process could be optimized; I've noticed that when sending many instances of the same query with the same arguments, each instance is added to the `watches` field.

I understand that there's inherently more compute needed when caching vs. not caching, but it's unintuitive that caching would decrease performance.
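The cost described above can be sketched with a toy simulation (an illustrative model, not Apollo's code): if every cache write walks the full set of watches and re-reads each watched query, then N duplicate watchers of the same query still cost N re-reads per write.

```javascript
// Toy model of a watched cache: each write broadcasts to every
// registered watch, even when many watches are the same query with
// the same variables.
class TinyCache {
  constructor() {
    this.data = {};
    this.watches = new Set();
    this.reads = 0; // count re-reads performed during broadcasts
  }
  watch(query, callback) {
    const entry = { query, callback };
    this.watches.add(entry);
    return () => this.watches.delete(entry); // unsubscribe
  }
  write(key, value) {
    this.data[key] = value;
    // Broadcast: re-evaluate every registered watch.
    for (const { query, callback } of this.watches) {
      this.reads++;
      callback(query(this.data));
    }
  }
}

const cache = new TinyCache();
// 1000 instances of the *same* query with the same args, as described.
for (let i = 0; i < 1000; i++) {
  cache.watch((data) => data.user, () => {});
}
cache.write("user", { name: "Ada" });
console.log(cache.reads); // 1000 re-reads for a single write
```

Apollo does memoize broadcast work (see `inMemoryCache.maybeBroadcastWatch` in the limits above), but if each watch instance is a distinct entry, the broadcast loop still scales with the number of registered watches rather than the number of distinct queries.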
Intended Outcome
Caching improves performance when an application sends a large volume of batched queries.
Actual Outcome
Caching degrades performance when an application sends a large volume of batched queries.
Link to Reproduction
https://codesandbox.io/p/devbox/quirky-johnson-lt2hfy?workspaceId=ws_9rF45qqZxvP5ESx5Mhtmg2
Reproduction Steps
I would have uploaded a performance profile, but file upload seems to be broken.

1. […] how long `scripting` takes to complete
2. […] the `fetchPolicy`, then reload
3. `runMicrotasks` takes about 345 ms without caching and about 22.8 seconds with caching

`@apollo/client` version: 3.10.8