This repository implements CoTeaching++, an improved variant of Co-teaching (the NeurIPS'18 paper "Co-teaching: Robust Training of Deep Neural Networks with Extremely Noisy Labels").
CoTeaching++ combines the small-loss trick with selecting the samples that are predicted identically by the two networks.
It is implemented in TensorFlow.
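The small-loss trick can be sketched as follows. This is a minimal NumPy illustration, not the repository's actual code: each network keeps the fraction of samples in a batch with the smallest per-sample loss (treating them as likely clean) and feeds them to its peer. The names `small_loss_select` and `forget_rate` are illustrative assumptions.

```python
import numpy as np

def small_loss_select(losses, forget_rate):
    # Keep the (1 - forget_rate) fraction of samples with the smallest loss;
    # these are treated as likely clean under the small-loss assumption.
    num_keep = int(len(losses) * (1.0 - forget_rate))
    return np.argsort(losses)[:num_keep]

# Toy example: 8 per-sample losses, forgetting 25% (keep the 6 smallest).
losses = np.array([0.1, 2.3, 0.4, 5.0, 0.2, 0.9, 3.1, 0.3])
keep = small_loss_select(losses, forget_rate=0.25)
print(sorted(keep.tolist()))  # -> [0, 1, 2, 4, 5, 7]
```

In the real training loop each network would compute this selection on its own losses and pass the selected samples to the other network for the update.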
CoTeaching+ is the method from the ICML'19 paper "How does Disagreement Help Generalization against Label Corruption?".
CoTeaching++ differs from CoTeaching+ in one respect:
CoTeaching+ selects the samples that are predicted differently by the two networks (disagreement), whereas CoTeaching++ selects the samples predicted identically (agreement).
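The agreement/disagreement distinction amounts to one comparison between the two networks' predicted classes. A minimal NumPy sketch (the prediction values are made up for illustration):

```python
import numpy as np

# Hypothetical per-batch class predictions from the two networks.
pred1 = np.array([3, 1, 7, 2, 2, 5])
pred2 = np.array([3, 0, 7, 2, 1, 5])

agree_mask = pred1 == pred2      # CoTeaching++: keep samples both networks agree on
disagree_mask = pred1 != pred2   # CoTeaching+: keep samples they disagree on

print(np.flatnonzero(agree_mask).tolist())     # -> [0, 2, 3, 5]
print(np.flatnonzero(disagree_mask).tolist())  # -> [1, 4]
```

The small-loss selection is then applied only within the chosen subset.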
To run the code, install TensorFlow 1.4, CUDA 8, and cuDNN 6.
To compare CoTeaching and CoTeaching++, run the examples below.
${dataset_name} can be cifar10 or cifar100.
$ python main_tf.py --dataset ${dataset_name} --noise_type symmetric --fr_type type_1 --batch_size 128 --noise_rate 0.2 --mode_type coteaching
$ python main_tf.py --dataset ${dataset_name} --noise_type symmetric --fr_type type_1 --batch_size 128 --noise_rate 0.2 --mode_type coteaching_plus
$ python main_tf.py --dataset ${dataset_name} --noise_type symmetric --fr_type type_1 --batch_size 128 --noise_rate 0.5 --mode_type coteaching
$ python main_tf.py --dataset ${dataset_name} --noise_type symmetric --fr_type type_1 --batch_size 128 --noise_rate 0.5 --mode_type coteaching_plus
$ python main_tf.py --dataset ${dataset_name} --noise_type pairflip --fr_type type_1 --batch_size 128 --noise_rate 0.45 --mode_type coteaching
$ python main_tf.py --dataset ${dataset_name} --noise_type pairflip --fr_type type_1 --batch_size 128 --noise_rate 0.45 --mode_type coteaching_plus
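The full comparison grid above (two datasets, three noise settings, two modes) can be generated programmatically. A small dry-run sketch that only builds the command strings, assuming the main_tf.py flags shown above:

```python
# Build every CoTeaching vs CoTeaching++ comparison command from the grid above.
settings = [("symmetric", 0.2), ("symmetric", 0.5), ("pairflip", 0.45)]
commands = [
    f"python main_tf.py --dataset {ds} --noise_type {nt} --fr_type type_1"
    f" --batch_size 128 --noise_rate {nr} --mode_type {mode}"
    for ds in ("cifar10", "cifar100")
    for nt, nr in settings
    for mode in ("coteaching", "coteaching_plus")
]
print(len(commands))  # -> 12 runs in total
```

Each string in `commands` can then be launched with the shell or `subprocess.run`.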
To turn this repository into an implementation of CoTeaching+, replace the line
NonEqual = tf.equal(pred1, pred2)
in model_tf.py with
NonEqual = tf.not_equal(pred1, pred2)
so that disagreement samples are selected instead of agreement samples.