Multi tenant client to serve serverless usecases. #34
Hi @srikar-jilugu, I don't know if there is any plan for this. How do you route requests among multiple Valkey instances? Is there a spec available?
Hi @rueian, we want the client not to be bound to a single cache endpoint (as it currently is); ideally it should maintain independent pooling for each cache, similar to how the AWS DynamoDB SDK operates. This would allow users to specify the target cache dynamically at the command level. For example:

or
It seems like you are referring to a custom routing method rather than a widely accepted spec. I think the best option for you is still using multiple clients. Note that there is no difference between multiple clients and independent pooling for each cache.
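The "multiple clients" approach above can be wrapped in a small registry that lazily creates one client per endpoint, which gives the same independent-pooling behaviour. This is a generic sketch: `Client` and the `dial` factory are placeholders for a real client constructor (e.g. where you would call the actual library's `NewClient`).

```go
package main

import (
	"fmt"
	"sync"
)

// Client is a stand-in for a real cache client; each instance
// would own its own connection pool for one endpoint.
type Client interface {
	Addr() string
}

type fakeClient struct{ addr string }

func (f *fakeClient) Addr() string { return f.addr }

// Registry lazily creates and caches one client per endpoint.
// Since each client keeps its own pool, this is effectively
// "independent pooling for each cache".
type Registry struct {
	mu      sync.Mutex
	clients map[string]Client
	dial    func(addr string) (Client, error)
}

func NewRegistry(dial func(string) (Client, error)) *Registry {
	return &Registry{clients: map[string]Client{}, dial: dial}
}

// Get returns the client for addr, creating it on first use.
func (r *Registry) Get(addr string) (Client, error) {
	r.mu.Lock()
	defer r.mu.Unlock()
	if c, ok := r.clients[addr]; ok {
		return c, nil
	}
	c, err := r.dial(addr)
	if err != nil {
		return nil, err
	}
	r.clients[addr] = c
	return c, nil
}

func main() {
	reg := NewRegistry(func(addr string) (Client, error) { return &fakeClient{addr: addr}, nil })
	a1, _ := reg.Get("cache-a.example.com:6379")
	a2, _ := reg.Get("cache-a.example.com:6379")
	fmt.Println(a1 == a2) // true: the same client (and pool) is reused per endpoint
}
```

The mutex keeps lazy creation safe when commands for different tenants arrive concurrently.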
Try this option (lines 221 to 224 in aff64a7):
We have a use case where we need to connect to multiple serverless Redis/Valkey caches (e.g., AWS ElastiCache Serverless) dynamically. However, current Valkey/Redis clients are designed to interact with a single cluster and require periodic topology updates, which makes them incompatible with multi-tenant scenarios.

With the introduction of serverless offerings, a stateless client that can dynamically route commands among multiple endpoints, without needing to maintain cluster state, could better serve these use cases. Most existing clients assume a single cluster and require ongoing topology updates, which aren't needed for serverless caches (ref).
Why Not Use Multiple Clients?
Feature Request
A lightweight, stateless valkey client that:
Is there any ongoing work or plans for something like this? Would love to discuss possible approaches!
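To illustrate why such a client can be so lightweight: a stateless client only needs to serialise commands in the RESP wire format and send them over a plain TCP (or TLS) connection per endpoint, with no slot map or topology refresh. Below is a minimal sketch of the RESP encoding step; the endpoint handling around it is left out and would follow whatever pooling strategy the client chooses.

```go
package main

import (
	"fmt"
	"strings"
)

// encodeRESP serialises a command as a RESP array of bulk strings,
// the wire format Valkey/Redis servers accept for commands.
func encodeRESP(args ...string) string {
	var b strings.Builder
	fmt.Fprintf(&b, "*%d\r\n", len(args)) // array header: number of arguments
	for _, a := range args {
		fmt.Fprintf(&b, "$%d\r\n%s\r\n", len(a), a) // each argument as a bulk string
	}
	return b.String()
}

func main() {
	fmt.Printf("%q\n", encodeRESP("GET", "key"))
	// "*2\r\n$3\r\nGET\r\n$3\r\nkey\r\n"
}
```

Since serverless endpoints handle routing server-side, a client built on this needs no cluster-state machinery at all, only per-endpoint connections.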