We combine several noisy-label training strategies with adversarial training methods to evaluate how well these previous methods hold up in terms of adversarial robustness.
- PyTorch (>=1.7.0)
- torchattacks
- numpy
- tqdm
- matplotlib
- scipy
```shell
python train.py --arch resnet --dataset cifar10 \
    --nr 0.2 --noise_type [sym, asy] \
    --method [pgd, trades, pgd_te, trades_te, pgd_sat, trades_sat, pencil, labelcorr, elr, selfie, plc] \
    --save save_name --exp experiment_name
```
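For reference, `--nr` is the label-noise rate and `--noise_type sym` injects symmetric noise, i.e. a fraction `nr` of training labels is replaced with a uniformly random *different* class. The sketch below illustrates that corruption process; `add_symmetric_noise` is a hypothetical helper for illustration, not this repo's implementation:

```python
import numpy as np

def add_symmetric_noise(labels, nr, num_classes, seed=0):
    """Flip a fraction `nr` of labels to a uniformly random different class.

    Hypothetical illustration of symmetric label noise, not the repo's code.
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels).copy()
    n = len(labels)
    # choose which samples get corrupted, without replacement
    flip_idx = rng.choice(n, size=int(nr * n), replace=False)
    for i in flip_idx:
        # adding an offset in [1, num_classes - 1] mod num_classes
        # guarantees the new label differs from the original
        offset = rng.integers(1, num_classes)
        labels[i] = (labels[i] + offset) % num_classes
    return labels

clean = np.zeros(100, dtype=int)          # 100 samples, all class 0
noisy = add_symmetric_noise(clean, nr=0.2, num_classes=10)
print((noisy != clean).sum())             # exactly 20 labels corrupted
```

Asymmetric (`asy`) noise instead flips labels between semantically similar class pairs, so the corruption is class-dependent rather than uniform.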