Create a routing extension interface #3625
Comments
I think this sounds great, as it's been a consistent ask over the years; here's some of the prior art:
I'm in favor, and looking forward to the discussion. ☕
Brainstorming here: I'd prefer if this were part of a loadBalancer setting.
Linking Envoy Gateway's loadBalancer setting here: https://gateway.envoyproxy.io/docs/api/extension_types/#loadbalancer
For example, if you add a loadBalancer stanza at the backendRef level, that implies you're choosing the LB algorithm for selecting between endpoints in that backend pool, while if you add it at the Route rule level, you're choosing a backend pool out of multiple options.
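To make that distinction concrete, here is a rough sketch of what it could look like. This is purely hypothetical: HTTPRoute has no loadBalancer field today, and the field names and values below are placeholders.

```yaml
# Hypothetical sketch only: HTTPRoute has no loadBalancer field today.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: example-route
spec:
  parentRefs:
  - name: example-gateway
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    # Rule-level stanza: how to pick a backend pool among the backendRefs below.
    loadBalancer:            # placeholder field
      type: WeightedPools    # placeholder value
    backendRefs:
    - name: pool-a
      port: 8080
      # backendRef-level stanza: how to pick an endpoint within this pool.
      loadBalancer:          # placeholder field
        type: LeastRequest   # placeholder value
```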
cc @mikemorris - I think this ends up overlapping a lot with the discussions of a broader "BackendPolicy" with things like retry budget, load balancing, etc.
It would be helpful to get input from non-Envoy Gateway implementations.
Yes indeed. /cc @mlavacca @nik-netlox @sjberman @guicassolato, please check this out and let us know if you have any thoughts.
To @howardjohn's point, it seems like there is potential overlap here with BackendLBPolicy? I know the community is trying to move away from too many policies, but attaching a loadBalancer config to a Service is much easier than having to add it to every Route. The configuration itself would have to be fairly generic, since proxies support different methods and such. For reference, the NGINX upstream module contains the various LB algorithms and related directives that are supported by NGINX Open Source (NGINX Plus supports even more).
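As a loose illustration of the policy-attachment shape being discussed: BackendLBPolicy today only covers session persistence, so the loadBalancer block below is a hypothetical extension and the field names are approximate.

```yaml
# Hypothetical sketch: BackendLBPolicy currently only defines session
# persistence; the loadBalancer block is illustrative, not part of the API.
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: BackendLBPolicy
metadata:
  name: example-lb-policy
spec:
  targetRefs:
  - group: ""
    kind: Service
    name: example-service
  loadBalancer:              # hypothetical extension
    algorithm: LeastRequest  # placeholder value
```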
To be honest, I would rather see this sort of extension done inside a custom backend type than added directly to the Route objects. That way, opting in to this feature means opting in to that backend type. Could that backend type be included in Gateway API? Maybe, if we all agree. But that's much easier and safer than adding more fields that we're not sure about into a GA type (HTTPRoute).
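For illustration, opting in via a custom backend type could look something like this; the group and kind below are placeholders rather than real APIs, and this simply mirrors how backendRefs can already point at non-Service kinds.

```yaml
# Hypothetical: "RoutingPool" and its group are placeholders for a custom
# backend type that carries the load-balancing/routing configuration.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: example-route
spec:
  rules:
  - backendRefs:
    - group: extensions.example.com   # placeholder group
      kind: RoutingPool               # placeholder custom backend kind
      name: my-pool
```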
This proposal makes a lot of sense to me, thanks for bringing this up here. In Kong, multiple load balancing algorithms are available through the upstream API. I agree that this feature should ideally be an extension of some sort (maybe a policy); embedding it directly into the Route API seems less desirable.
I think that using a Policy here may add too much complexity; I would prefer to see us try something else first.
Maybe a good first step would be to collect use cases and prior art. That should help guide the correct location for the API. |
What would you like to be added:
An extension interface that allows the deployment of a custom extension/load balancing solution. Standardizing a general interface would allow best practices to be baked in, and would help build an ecosystem of tooling supporting this interface.
Why this is needed:
This is applicable in scenarios where a workload may benefit from a bespoke routing algorithm as opposed to a simple round robin. We do something like this in https://github.com/kubernetes-sigs/gateway-api-inference-extension, and would love to help collaborate on a more general solution that can work for all gateway users.
The way we do this is via ext-proc and the original dst cluster feature with the original dst host header. We rename the header here, and populate that header here. This currently uses many Envoy-specific features, and is all in our experimental API. As we mature this product we are open to finding a solution that works for all, so we would love to find a way to integrate this into Gateway proper, or at the very least ensure we are aligned with y'all on what the final solution should be. Thanks!
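For readers unfamiliar with the pattern, here is a trimmed sketch of the Envoy mechanics being described, assuming an ext_proc service that writes the chosen endpoint into a request header. The header value and cluster names are placeholders and may not match what the inference extension actually uses.

```yaml
# Sketch of the ext-proc + original destination cluster pattern (assumptions:
# header and cluster names are placeholders, config is heavily trimmed).
http_filters:
- name: envoy.filters.http.ext_proc
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.ext_proc.v3.ExternalProcessor
    grpc_service:
      envoy_grpc:
        cluster_name: endpoint-picker   # placeholder: the external picker service
    processing_mode:
      request_header_mode: SEND         # picker sees request headers and can mutate them
clusters:
- name: original-destination-cluster
  type: ORIGINAL_DST                    # route to the address supplied per request
  lb_policy: CLUSTER_PROVIDED
  original_dst_lb_config:
    use_http_header: true
    http_header_name: x-custom-destination-endpoint   # placeholder renamed header
```

The picker decides on a specific endpoint and sets it in the header; the ORIGINAL_DST cluster then forwards the request to that address instead of applying a standard load balancing algorithm.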