
refactor the kubernetes provider to be provider-agnostic #3213


Closed
shawnh2 opened this issue Apr 17, 2024 · 7 comments · Fixed by #5767

shawnh2 commented Apr 17, 2024

Description:

Right now, all the resource-processing methods in the k8s provider are deeply coupled to the k8s client; we should decouple these methods from it. For example:

func (r *gatewayAPIReconciler) processGateways(ctx context.Context, managedGC *gwapiv1.GatewayClass, resourceMap *resourceMappings, resourceTree *gatewayapi.Resources) error {

So I propose a refactor. The basic idea is to separate how we process resources from how we retrieve them.

By defining an Operations interface and a ResourceProcessor struct, each provider can maintain its own implementation of Operations, while all providers share the same ResourceProcessor.

type Operations interface {
    ListGateways()
    ListHTTPRoutes()
    FindReferenceGrant()
    // ...
}

type ResourceProcessor struct {
    Operations
}

func (r *ResourceProcessor) ProcessGateways() {
    // use r.ListGateways() to retrieve all Gateway resources
}
func (r *ResourceProcessor) ProcessHTTPRoutes() {}
func (r *ResourceProcessor) findReferenceGrant() {}
// ...

With this we can:

  • let the k8s provider focus only on how to retrieve resources, and likewise for the file provider
  • let the k8s and file providers share the same resource-processing logic, so we don't have to maintain the same code in two places, one per provider type


@shawnh2 shawnh2 added kind/decision A record of a decision made by the community. area/provider kind/refactor labels Apr 17, 2024
@shawnh2 shawnh2 changed the title refactor the kubernetes provider to be provider-insensitive refactor the kubernetes provider to be provider-agnostic Apr 23, 2024
Xunzhuo (Member) commented May 14, 2024

Makes sense to me; it helps with multi-provider support.

@shawnh2 shawnh2 added kind/enhancement New feature or request kind/feature new feature and removed kind/decision A record of a decision made by the community. labels May 16, 2024
shawnh2 (Contributor, Author) commented May 16, 2024

Will first try to refactor the existing k8s provider, and then implement the upcoming file provider.

@shawnh2 shawnh2 self-assigned this May 16, 2024

This issue has been automatically marked as stale because it has not had activity in the last 30 days.

sky92zwq commented:

Is it possible, with this refactoring, for part of the gateway resources (Gateway API related) to come from k8s and another part from files?

@github-actions github-actions bot removed the stale label Aug 27, 2024
arkodg (Contributor) commented Aug 27, 2024

Hey @sky92zwq, we'll need to update the Custom provider API and add an Aggregate-ish resource provider to handle the case you highlighted.


shawnh2 (Contributor, Author) commented Apr 19, 2025

Found a simple yet effective way to let other providers (like the file provider) reuse the k8s reconcile logic.

Create an offline k8s controller structure, like:

package kubernetes

import (
	"context"
	"fmt"

	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/util/sets"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/client/fake"
	"sigs.k8s.io/controller-runtime/pkg/reconcile"
	gwapiv1 "sigs.k8s.io/gateway-api/apis/v1"

	egv1a1 "github.com/envoyproxy/gateway/api/v1alpha1"
	"github.com/envoyproxy/gateway/internal/envoygateway"
	"github.com/envoyproxy/gateway/internal/envoygateway/config"
	"github.com/envoyproxy/gateway/internal/message"
)

// OfflineGatewayAPIReconciler wraps gatewayAPIReconciler so that providers other
// than the k8s provider can reuse its reconcile logic without an apiserver.
type OfflineGatewayAPIReconciler struct {
	gatewayAPIReconciler

	// offline Client
	Client client.Client
}

func NewOfflineGatewayAPIController(
	ctx context.Context, cfg *config.Server, su Updater, resources *message.ProviderResources,
) (*OfflineGatewayAPIReconciler, error) {
	// Do not allow the k8s provider to use this controller.
	if cfg.EnvoyGateway.Provider.Type == egv1a1.ProviderTypeKubernetes {
		return nil, fmt.Errorf("offline controller is not for the kubernetes provider")
	}

	// Gather additional resources to watch from registered extensions
	var (
		extGVKs               []schema.GroupVersionKind
		extServerPoliciesGVKs []schema.GroupVersionKind
	)

	if cfg.EnvoyGateway.ExtensionManager != nil {
		for _, rsrc := range cfg.EnvoyGateway.ExtensionManager.Resources {
			gvk := schema.GroupVersionKind(rsrc)
			extGVKs = append(extGVKs, gvk)
		}
		for _, rsrc := range cfg.EnvoyGateway.ExtensionManager.PolicyResources {
			gvk := schema.GroupVersionKind(rsrc)
			extServerPoliciesGVKs = append(extServerPoliciesGVKs, gvk)
		}
	}

	// Using fake client to store resources.
	cli := fake.NewClientBuilder().
		WithScheme(envoygateway.GetScheme()).
		WithIndex(&gwapiv1.Gateway{}, classGatewayIndex, func(rawObj client.Object) []string {
			gateway := rawObj.(*gwapiv1.Gateway)
			return []string{string(gateway.Spec.GatewayClassName)}
		}).
		WithIndex(&gwapiv1.HTTPRoute{}, gatewayHTTPRouteIndex, gatewayHTTPRouteIndexFunc).
		// register the other indexes the same way ...
		Build()

	r := gatewayAPIReconciler{
		client:            cli,
		log:               cfg.Logger,
		classController:   gwapiv1.GatewayController(cfg.EnvoyGateway.Gateway.ControllerName),
		namespace:         cfg.Namespace,
		statusUpdater:     su, // todo
		resources:         resources,
		extGVKs:           extGVKs,
		store:             newProviderStore(),
		envoyGateway:      cfg.EnvoyGateway,
		mergeGateways:     sets.New[string](),
		extServerPolicies: extServerPoliciesGVKs,
	}

	if byNamespaceSelectorEnabled(cfg.EnvoyGateway) {
		r.namespaceLabel = cfg.EnvoyGateway.Provider.Kubernetes.Watch.NamespaceSelector // todo
	}

	r.log.Info("created offline gatewayapi controller")
	r.subscribeAndUpdateStatus(ctx, cfg.EnvoyGateway.EnvoyGatewaySpec.ExtensionManager != nil)

	return &OfflineGatewayAPIReconciler{
		gatewayAPIReconciler: r,
		Client:               cli,
	}, nil
}

// Reconcile runs a single reconcile pass manually; there is no watch loop in
// offline mode.
func (r *OfflineGatewayAPIReconciler) Reconcile(ctx context.Context) error {
	_, err := r.gatewayAPIReconciler.Reconcile(ctx, reconcile.Request{})
	return err
}

  • Use the controller-runtime fake client (call it the offline client) to store resources in memory, without any connection to the apiserver. The file provider then only needs to CRUD resources through the offline client after loading them from file.
  • The file provider can design its own statusUpdater and hand it over to the offline controller, so status processing remains consistent across providers. This way, we can discard feat: log resource status for standalone mode #5760.
  • The Reconcile method needs to be called manually, but that is not a problem for the file provider: it can call Reconcile after loading and storing the resources from file.
  • We only need to maintain the reconcile logic in the k8s provider; other providers can share it.
