Are we going to support the OpenMMLab package in Flower examples? #4521

Open
cam-ai opened this issue Nov 18, 2024 · 7 comments

Comments

@cam-ai

cam-ai commented Nov 18, 2024

Describe the type of feature and its functionality.

Are we going to support OpenMMLab in the future? It is hard to get a simple Net(nn.Module) out of OpenMMLab.

Describe step by step what files and adjustments you are planning to include.

We need some examples of OpenMMLab usage.

Is there something else you want to add?

OpenMMLab packages such as mmdet or mmdet3d.

@adam-narozniak
Contributor

Just to clarify: you're requesting an example with open-mmlab, e.g. the mmdetection library?

@cam-ai
Author

cam-ai commented Nov 27, 2024

Yes. You create a FL client, and this client calls mmdetection to build the model and prepare the related data loader; then you can train the model with Flower.

@cam-ai
Author

cam-ai commented Nov 28, 2024

mmdetection depends on mmengine, but how could you integrate the FL client with mmengine.runner.run()?

@jafermarq
Contributor

Hi @cam-ai, it should be possible. I'm not familiar with that library, but typically you can think of the training stage of each client as a mini-centralised training (only with the data the client owns). This means it is possible to use any training library or training-loop design and call it from the client's fit(). There is a Flower Baseline (FedVSSL) that uses it; please take a look. I note it wasn't the easiest package to integrate, but maybe @yan-gao-GY can give you some tips, since he wrote that baseline.
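
Roughly, the pattern looks like the sketch below. This is a simplified sketch rather than the FedVSSL code; build_mmdet_model() and run_mmengine_training() are hypothetical placeholders for however you construct and train your mmdetection model:

```python
from collections import OrderedDict

import flwr as fl
import torch

# Hypothetical helpers: however you build and train your mmdetection model.
from my_project import build_mmdet_model, run_mmengine_training


class MMDetClient(fl.client.NumPyClient):
    def __init__(self, train_cfg, num_examples):
        self.train_cfg = train_cfg
        self.model = build_mmdet_model(train_cfg)  # returns a torch.nn.Module
        self.num_examples = num_examples

    def get_parameters(self, config):
        # Ship the model weights to the server as a list of NumPy arrays.
        return [v.cpu().numpy() for v in self.model.state_dict().values()]

    def set_parameters(self, parameters):
        # Load the aggregated weights received from the server into the model.
        keys = self.model.state_dict().keys()
        state_dict = OrderedDict(
            (k, torch.tensor(v)) for k, v in zip(keys, parameters)
        )
        self.model.load_state_dict(state_dict, strict=True)

    def fit(self, parameters, config):
        self.set_parameters(parameters)
        # "Mini-centralised training": one round of local mmdetection/mmengine
        # training on this client's own data only.
        run_mmengine_training(self.model, self.train_cfg)
        return self.get_parameters(config={}), self.num_examples, {}
```

The server only ever sees the NumPy arrays returned by fit(), so it doesn't matter how the training loop inside fit() is implemented.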

@cam-ai
Author

cam-ai commented Dec 2, 2024

It is close, but I still cannot find how the server controls the client learning rate. At the very least there should be a learning-rate callback to adjust the learning rate across epochs, because everything is encapsulated in mmengine.runner.run().

@WilliamLindskog
Contributor

Hi @cam-ai,

Just checking in here. Did you find a way to make it work?

@WilliamLindskog
Contributor

Hi @cam-ai, thanks again for raising this and sharing your thoughts.

As @jafermarq mentioned, it’s definitely possible to integrate libraries like OpenMMLab by wrapping their training logic inside the fit() method of a custom Flower client.

Regarding your question about the server controlling the client learning rate: in Flower, server-driven hyperparameter updates (such as changing the learning rate across rounds) are typically passed to each client via the config dictionary in fit().

On the client side, inside fit(), you would need to read config["learning_rate"] and adjust your optimizer manually before calling runner.run(). Since mmengine.runner encapsulates everything, you might need to override or wrap part of its config dynamically before the runner starts. A minimal sketch of both sides is below.
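
This sketch assumes a strategy with on_fit_config_fn on the server, plus the hypothetical MMDetClient / run_mmengine_training pieces from the sketch earlier in this thread, and that train_cfg is an mmengine Config object; the exact config key for the learning rate depends on your mmdetection/mmengine config layout:

```python
import flwr as fl

# Hypothetical pieces from the sketch earlier in this thread.
from my_project import MMDetClient, run_mmengine_training


def fit_config(server_round: int) -> dict:
    """Hyperparameters the server sends to every client's fit() each round."""
    # A simple per-round learning-rate decay as an example schedule.
    return {"learning_rate": 0.01 * (0.9 ** server_round)}


# Server side: the strategy forwards this dict to each client's fit().
strategy = fl.server.strategy.FedAvg(on_fit_config_fn=fit_config)


# Client side: read the value inside fit() and patch the training config
# before the mmengine runner is built.
class MMDetClientWithLR(MMDetClient):
    def fit(self, parameters, config):
        self.set_parameters(parameters)
        # The exact key depends on your config layout; optim_wrapper.optimizer.lr
        # is common in mmdet 3.x-style configs.
        self.train_cfg.optim_wrapper.optimizer.lr = float(config["learning_rate"])
        run_mmengine_training(self.model, self.train_cfg)
        return self.get_parameters(config={}), self.num_examples, {}
```

The key point is that the learning rate must be written into the config before the runner is constructed, since mmengine builds its optimizer and parameter schedulers from that config.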

Could you please share if you made this work?
Thanks again!
