Comments (10)

chaoyanghe commented on July 25, 2024

> FedML supports multiple parameter servers for communication efficiency via hierarchical FL and decentralized FL.
> In hierarchical FL, group parameter servers split the total client set into multiple client subsets.
> In decentralized FL, each client acts as a parameter server.
>
> Please refer to the following links for details.
> https://github.com/FedML-AI/FedML/tree/master/fedml_experiments/standalone/hierarchical_fl
> https://github.com/FedML-AI/FedML/tree/master/fedml_experiments/standalone/decentralized

@prosopher Thanks. But I guess he was discussing the distributed computing setting, not the standalone version.

wizard1203 commented on July 25, 2024

@chaoyanghe Thanks for your detailed explanation. Maybe I can try to implement it myself, and when I finish it I would like to push it to your master branch.

chaoyanghe commented on July 25, 2024

BytePS is for data center-based distributed training, while FedML (e.g., FedAvg) is for edge-based distributed training. The particular assumptions of FL include:

  1. heterogeneous data distribution across devices (non-I.I.D.)
  2. resource-constrained edge devices (memory, computation, and communication)
  3. label deficiency (it is harder to label data points because of privacy)
  4. security and privacy concerns

So what do you mean by "adding more servers in FedAvg"?
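
To make assumption 1 above concrete, here is a toy sketch of the label-skew (Dirichlet) partitioning commonly used in FL research to simulate non-I.I.D. data across devices. The function and parameter names are illustrative only, not FedML's API.

```python
# Toy sketch (not FedML's API): label-skew partitioning with a Dirichlet prior,
# a common way to simulate non-I.I.D. data across edge devices.
import numpy as np

def dirichlet_partition(labels, num_clients=10, alpha=0.5, seed=0):
    """Split sample indices across clients; smaller alpha -> more skewed clients."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        # Draw per-client proportions for this class and split its samples accordingly.
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
        for client_id, part in enumerate(np.split(idx, cuts)):
            client_indices[client_id].extend(part.tolist())
    return client_indices

# Example: 1000 samples with 10 classes, split over 10 simulated edge devices.
toy_labels = np.random.randint(0, 10, size=1000)
parts = dirichlet_partition(toy_labels, num_clients=10, alpha=0.5)
print([len(p) for p in parts])  # uneven sizes and skewed labels per client
```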

wizard1203 commented on July 25, 2024

> BytePS is for data center-based distributed training, while FedML (e.g., FedAvg) is for edge-based distributed training. The particular assumptions of FL include:
>
>   1. heterogeneous data distribution across devices (non-I.I.D.)
>   2. resource-constrained edge devices (memory, computation, and communication)
>   3. label deficiency (it is harder to label data points because of privacy)
>   4. security and privacy concerns
>
> So what do you mean by "adding more servers in FedAvg"?

I mean adding more parameter servers to improve communication efficiency. Maybe this is suitable only in a cluster environment, not in a true federated learning environment with resource-constrained edge devices. However, it can still accelerate training when doing research.

prosopher commented on July 25, 2024

FedML supports multiple parameter servers for communication efficiency via hierarchical FL and decentralized FL.
In hierarchical FL, group parameter servers split the total client set into multiple client subsets.
In decentralized FL, each client acts as a parameter server.

Please refer to the following links for details.
https://github.com/FedML-AI/FedML/tree/master/fedml_experiments/standalone/hierarchical_fl
https://github.com/FedML-AI/FedML/tree/master/fedml_experiments/standalone/decentralized
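
To illustrate the hierarchical setup above, here is a minimal sketch of two-level FedAvg aggregation, assuming PyTorch-style state dicts: each group parameter server averages its own client subset, and the global server then averages the group models. The helper names are illustrative, not FedML's actual interface.

```python
# Illustrative sketch only (not FedML's API): two-level FedAvg aggregation.
import copy
import torch

def average_state_dicts(state_dicts, weights=None):
    """Weighted average of PyTorch state dicts (FedAvg-style aggregation)."""
    if weights is None:
        weights = [1.0 / len(state_dicts)] * len(state_dicts)
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = sum(w * sd[key].float() for w, sd in zip(weights, state_dicts))
    return avg

def hierarchical_round(client_state_dicts, groups):
    """`groups` maps a group server id to the indices of its client subset."""
    group_models = []
    for group_id, members in groups.items():
        # Each group parameter server aggregates only its own client subset.
        group_models.append(average_state_dicts([client_state_dicts[i] for i in members]))
    # The global server then aggregates the group-level models.
    return average_state_dicts(group_models)

# Example: 4 toy clients split into 2 groups of 2.
clients = [torch.nn.Linear(4, 2).state_dict() for _ in range(4)]
global_model = hierarchical_round(clients, {0: [0, 1], 1: [2, 3]})
```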

chaoyanghe commented on July 25, 2024

@wizard1203 Thanks for your suggestion. As for acceleration, FedML is the only research-oriented FL framework that supports cross-machine, multi-GPU distributed training. To further accelerate, we can definitely use many techniques from traditional distributed training (which is very mature, with much less research attention). I will elaborate on a few here:

  1. AllReduce-based GPU-GPU communication using InfiniBand. However, this is not a real FL setting. As you said, it is only useful for evaluating algorithms or models that are not sensitive to the training speed.
  2. Hybrid parallelism (model + data parallelism) + pipelining
  3. Gradient bucketing in backpropagation (as introduced in the PyTorch DDP paper, VLDB 2020); see the sketch after this list
  4. low-bit precision (half precision); see the sketch after this list
  5. pruning
    ....
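
As a hedged illustration of items 3 and 4 in plain PyTorch (not FedML-specific code): DistributedDataParallel packs gradients into buckets and all-reduces them during the backward pass, and automatic mixed precision provides the half-precision speedup. The snippet assumes it is launched with torchrun so that one process per GPU exists.

```python
# Sketch: bucketed gradient all-reduce (item 3) + half precision (item 4) in vanilla PyTorch.
# Assumes launch via torchrun (which sets LOCAL_RANK) and at least one CUDA device.
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

dist.init_process_group("nccl")                      # NCCL backend for GPU all-reduce
local_rank = int(os.environ.get("LOCAL_RANK", "0"))
torch.cuda.set_device(local_rank)

model = nn.Linear(1024, 10).cuda()
# Gradients are packed into ~25 MB buckets and all-reduced bucket by bucket,
# overlapping communication with the backward pass (PyTorch DDP, VLDB 2020).
ddp_model = DDP(model, device_ids=[local_rank], bucket_cap_mb=25)

optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)
scaler = torch.cuda.amp.GradScaler()                 # loss scaling for float16 training

# Synthetic batches stand in for a real DataLoader.
batches = [(torch.randn(8, 1024), torch.randint(0, 10, (8,))) for _ in range(10)]
for inputs, targets in batches:
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():                  # forward pass in float16 where safe
        loss = nn.functional.cross_entropy(ddp_model(inputs.cuda()), targets.cuda())
    scaler.scale(loss).backward()                    # backward triggers bucketed all-reduce
    scaler.step(optimizer)
    scaler.update()
```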

As @prosopher pointed out, you can design any topology you like. Our topology configuration is very flexible. In the distributed computing setting, you can refer to the following algorithms with different topologies:
https://github.com/FedML-AI/FedML/tree/master/fedml_experiments/distributed
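
On the flexible-topology point, here is a toy sketch of the decentralized idea (each client acting as its own parameter server): after local training, every client averages its model only with its ring neighbours. This is a generic gossip-averaging illustration, not FedML's topology configuration API.

```python
# Toy sketch (not FedML's topology API): one gossip-averaging step on a ring,
# where each client mixes its model only with its two neighbours.
import copy
import torch

def ring_mixing_step(client_state_dicts):
    n = len(client_state_dicts)
    mixed = []
    for i in range(n):
        left = client_state_dicts[(i - 1) % n]
        right = client_state_dicts[(i + 1) % n]
        own = client_state_dicts[i]
        new_sd = copy.deepcopy(own)
        for key in new_sd:
            # Uniform mixing weights over {left neighbour, self, right neighbour}.
            new_sd[key] = (left[key].float() + own[key].float() + right[key].float()) / 3.0
        mixed.append(new_sd)
    return mixed

# Example: 5 toy clients arranged on a ring.
clients = [torch.nn.Linear(4, 2).state_dict() for _ in range(5)]
clients = ring_mixing_step(clients)
```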

In addition, I have to point out that "adding more parameter servers to improve the communication efficiency" is a bit confusing conceptually. We cannot say that using more computation resources improves communication efficiency. Normally, the relationship between computation and communication is a trade-off. Using more parallel computation does not change the communication itself, and it also does not mean we can speed up the training, since the communication across machines may dominate the training time. But I agree with your idea of using traditional distributed computing techniques to accelerate FL research. Thanks.

wizard1203 commented on July 25, 2024

> FedML supports multiple parameter servers for communication efficiency via hierarchical FL and decentralized FL.
> In hierarchical FL, group parameter servers split the total client set into multiple client subsets.
> In decentralized FL, each client acts as a parameter server.
>
> Please refer to the following links for details.
> https://github.com/FedML-AI/FedML/tree/master/fedml_experiments/standalone/hierarchical_fl
> https://github.com/FedML-AI/FedML/tree/master/fedml_experiments/standalone/decentralized

@prosopher Thanks for this, I will read them carefully.

chaoyanghe commented on July 25, 2024

> @chaoyanghe Thanks for your detailed explanation. Maybe I can try to implement it myself, and when I finish it I would like to push it to your master branch.

Thanks. Looking forward to your contribution.

chaoyanghe commented on July 25, 2024

@wizard1203 Do you mean modifying based on this code?
https://github.com/FedML-AI/FedML/tree/master/fedml_experiments/distributed/fedavg

wizard1203 commented on July 25, 2024

> @wizard1203 Do you mean modifying based on this code?
> https://github.com/FedML-AI/FedML/tree/master/fedml_experiments/distributed/fedavg

@chaoyanghe No, maybe it needs to be based on the code in fedml_core. In any case, I may try to do it some days later. In fact, I have some other algorithms that I want to implement more urgently than this.
