Description
I am using Cloud Run, but I suppose the issue is similar for any scalable hosting platform.
I am developing a dart_frog server. I don't need a database, but I do need global state for a specific service/region.
Cloud Run spawns instances to scale, and those instances do not share memory with each other. This means I have to cap the maximum number of instances at 1. And since the maximum number of concurrent requests per instance is 1000, I assume I can't have more than 1000 WebSocket clients on a single service.
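To make the constraint concrete, here is a minimal sketch (my own, not from the dart_frog docs; the route and variable names are hypothetical) of how per-instance state is typically held in dart_frog via its `provider` middleware. The map lives in the instance's own memory, so it only behaves like "global" state while max instances is 1:

```dart
import 'package:dart_frog/dart_frog.dart';

// Hypothetical per-instance state: lives in this instance's memory only.
// Each Cloud Run instance gets its own copy, so values diverge as soon
// as the service scales past a single instance.
final Map<String, int> _counters = {};

// Pure helper so the counting logic is easy to reason about on its own.
int increment(Map<String, int> counters, String key) {
  final next = (counters[key] ?? 0) + 1;
  counters[key] = next;
  return next;
}

// routes/_middleware.dart — injects the shared map into every request.
Handler middleware(Handler handler) {
  return handler.use(provider<Map<String, int>>((context) => _counters));
}

// routes/count.dart — reads and mutates the per-instance state.
Response onRequest(RequestContext context) {
  final counters = context.read<Map<String, int>>();
  return Response(body: 'hits on this instance: ${increment(counters, 'hits')}');
}
```

Two instances serving this route would each report their own independent hit count, which is exactly the problem described above.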
In short, dart_frog only scales if it's stateless or if the state is externalized (using Redis or a regular database, for example). I understand that dart_frog is meant to be minimalistic and that this should be implemented as middleware, but I didn't see this question addressed anywhere in the docs. I only found this comment, which addresses ORM support.
I'm considering alternatives like Serverpod to solve this, but their documentation suggests that serverless solutions like Cloud Run "cannot have a state" and that I'd have to use the more expensive GCE. However, it looks like Cloud Run can in fact use Redis, as this doc shows. There's even a GCP service for it called Memorystore. But I'm not sure whether I can use this with dart_frog.
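For what it's worth, here is a sketch of what externalizing that state from a dart_frog handler might look like, assuming the third-party `redis` package from pub.dev and a Memorystore instance reachable at a private IP (the host, port, and key names here are all hypothetical, and Cloud Run would need a Serverless VPC Access connector to reach Memorystore):

```dart
import 'package:dart_frog/dart_frog.dart';
import 'package:redis/redis.dart';

// Hypothetical Memorystore endpoint on the project's VPC network.
const _redisHost = '10.0.0.3';
const _redisPort = 6379;

// One lazily-created connection per instance, reused across requests.
Command? _redis;

Future<Command> _command() async {
  return _redis ??= await RedisConnection().connect(_redisHost, _redisPort);
}

// routes/count.dart — the counter now lives in Redis, so every Cloud Run
// instance observes the same value no matter how far the service scales.
Future<Response> onRequest(RequestContext context) async {
  final cmd = await _command();
  // INCR is atomic in Redis, so concurrent requests from different
  // instances cannot lose updates.
  final hits = await cmd.send_object(['INCR', 'hits']);
  return Response(body: 'hits across all instances: $hits');
}
```

With state held in Redis like this, the max-instances cap (and the 1000-concurrent-request ceiling that comes with it) would no longer apply, which is the pattern I'd love to see the docs confirm or correct.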
Server/web dev isn't my expertise, so I'd really appreciate any guidance on implementing memory caching/sharing. More importantly, I think the dart_frog docs could really benefit from addressing this question to help developers with similar requirements.
Requirements
- Addressing memory caching/sharing in the documentation
Additional Context
No response