Forked from: https://github.com/ebagdasa/backdoor_federated_learning
Updates made in this fork:
- The code now runs on Python 3.7 and PyTorch 1.7.1+cu110.
- The saved_models folder contains BASE, a model trained on the image-classification task for 10,000 epochs. The paper assumes such a pretrained model as the starting point on which the backdoor attacks are conducted.
- utils/img_class_continue.yaml was added so that model poisoning can be run starting from saved_models/BASE.
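Roughly, such a "continue" config needs to point training at the saved checkpoint and turn the poisoning schedule on. The fragment below is only an illustration of that idea; every key name in it is an assumption, so consult utils/img_class_continue.yaml itself for the real schema:

```yaml
# Illustrative sketch only -- these key names are assumptions, not the repo's schema.
type: image                           # image-classification task
resumed_model: true                   # continue training from a saved checkpoint
resumed_model_name: saved_models/BASE # the 10,000-epoch base model
is_poison: true                       # enable the backdoor (model-poisoning) attack
```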
- Clone the repository
git clone https://github.com/monopolize-all/backdoor_federated_learning.git
cd backdoor_federated_learning
- Create a new venv, activate it, and install the required libraries
python3.7 -m venv venv
source venv/bin/activate
pip install -r requirements.txt -f https://download.pytorch.org/whl/torch_stable.html
- Start the visdom server on port 8097
visdom -port 8097
- Run poisoning on the pretrained model (use a new terminal window with the venv activated, since visdom must be kept running)
python training.py --params utils/img_class_continue.yaml
- Open http://localhost:8097 in a web browser to watch live accuracy results as the model trains.
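The live plot works because the training loop streams metric values to the running visdom server. A minimal sketch of that pattern is below; the `log_accuracy` helper is hypothetical, not a function from this repo, and the repo's own training.py handles its visdom logging internally.

```python
import numpy as np

# Hypothetical helper (not part of this repo): push one (epoch, accuracy)
# point to a visdom line plot so it appears live at http://localhost:8097.
def log_accuracy(vis, epoch, accuracy, win="test_accuracy"):
    vis.line(
        X=np.array([epoch]),
        Y=np.array([accuracy]),
        win=win,
        update="append" if epoch > 0 else None,  # first point creates the window
        opts={"title": "Test accuracy", "xlabel": "epoch", "ylabel": "accuracy"},
    )

if __name__ == "__main__":
    import visdom
    vis = visdom.Visdom(port=8097)  # assumes the visdom server started above
    for epoch, acc in enumerate([0.31, 0.52, 0.64]):
        log_accuracy(vis, epoch, acc)
```

Each call appends to the same named window (`win`), which is what makes the curve grow in place in the browser instead of spawning a new plot per epoch.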
Credits go to [email protected], who originally provided the code. This fork only makes it compatible with newer PyTorch versions and adds a pretrained model for backdoor attacks on image classification.
This code includes experiments for the paper "How to Backdoor Federated Learning" (https://arxiv.org/abs/1807.00459).