Learning-with-Label-Noise

A curated list of resources for Learning with Noisy Labels


Papers & Code

  • 2008-NIPS - Whose vote should count more: Optimal integration of labels from labelers of unknown expertise. [Paper] [Code]

  • 2009-ICML - Supervised learning from multiple experts: whom to trust when everyone lies a bit. [Paper]

  • 2011-NIPS - Bayesian Bias Mitigation for Crowdsourcing. [Paper]

  • 2012-ICML - Learning to Label Aerial Images from Noisy Data. [Paper]

  • 2013-NIPS - Learning with Multiple Labels. [Paper]

  • 2013-NIPS - Learning with Noisy Labels. [Paper] [Code]

  • 2014-ML - Learning from multiple annotators with varying expertise. [Paper]

  • 2014 - A Comprehensive Introduction to Label Noise. [Paper]

  • 2014 - Learning from Noisy Labels with Deep Neural Networks. [Paper]

  • 2015-ICLR_W - Training Convolutional Networks with Noisy Labels. [Paper] [Code]

  • 2015-CVPR - Learning from Massive Noisy Labeled Data for Image Classification. [Paper] [Code]

  • 2015-CVPR - Visual recognition by learning from web data: A weakly supervised domain generalization approach. [Paper] [Code]

  • 2015-ICLR_W - Training Deep Neural Networks on Noisy Labels with Bootstrapping. [Paper] [Loss-Code-Unofficial-1] [Loss-Code-Unofficial-2] [Code-Keras]

  • 2015-ICCV - Webly supervised learning of convolutional networks. [Paper] [Project Page]

  • 2015-TPAMI - Classification with noisy labels by importance reweighting. [Paper] [Code]

  • 2015-NIPS - Learning with Symmetric Label Noise: The Importance of Being Unhinged. [Paper] [Loss-Code-Unofficial]

  • 2015-Arxiv - Making Risk Minimization Tolerant to Label Noise. [Paper]

  • 2015 - Learning Discriminative Reconstructions for Unsupervised Outlier Removal. [Paper] [Code]

  • 2015-TNLS - Rboost: label noise-robust boosting algorithm based on a nonconvex loss function and the numerically stable base learners. [Paper]

  • 2016-AAAI - Robust semi-supervised learning through label aggregation. [Paper]

  • 2016-ICLR - Auxiliary Image Regularization for Deep CNNs with Noisy Labels. [Paper] [Code]

  • 2016-CVPR - Seeing through the Human Reporting Bias: Visual Classifiers from Noisy Human-Centric Labels. [Paper] [Code]

  • 2016-ICML - Loss factorization, weakly supervised learning and label noise robustness. [Paper]

  • 2016-RL - On the convergence of a family of robust losses for stochastic gradient descent. [Paper]

  • 2016-NC - Noise detection in the Meta-Learning Level. [Paper] [Additional information]

  • 2016-ECCV - The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition. [Paper] [Project Page]

  • 2016-ICASSP - Training deep neural-networks based on unreliable labels. [Paper] [Poster] [Code-Unofficial]

  • 2016-ICDM - Learning deep networks from noisy labels with dropout regularization. [Paper] [Code]

  • 2016-KBS - A robust multi-class AdaBoost algorithm for mislabeled noisy data. [Paper]

  • 2017-AAAI - Robust Loss Functions under Label Noise for Deep Neural Networks. [Paper]

  • 2017-PAKDD - On the Robustness of Decision Tree Learning under Label Noise. [Paper]

  • 2017-ICLR - Training deep neural-networks using a noise adaptation layer. [Paper] [Code]

  • 2017-ICLR - Who Said What: Modeling Individual Labelers Improves Classification. [Paper] [Code]

  • 2017-CVPR - Making Deep Neural Networks Robust to Label Noise: a Loss Correction Approach. [Paper] [Code]

  • 2017-CVPR - Learning From Noisy Large-Scale Datasets With Minimal Supervision. [Paper]

  • 2017-CVPR - Lean crowdsourcing: Combining humans and machines in an online system. [Paper] [Code]

  • 2017-CVPR - Attend in groups: a weakly-supervised deep learning framework for learning from web data. [Paper] [Code]

  • 2017-ICML - Robust Probabilistic Modeling with Bayesian Data Reweighting. [Paper] [Code]

  • 2017-ICCV - Learning From Noisy Labels With Distillation. [Paper] [Code]

  • 2017-NIPS - Toward Robustness against Label Noise in Training Deep Discriminative Neural Networks. [Paper]

  • 2017-NIPS - Active bias: Training more accurate neural networks by emphasizing high variance samples. [Paper] [Code]

  • 2017-NIPS - Decoupling "when to update" from "how to update". [Paper] [Code]

  • 2017-IEEE-TIFS - A Light CNN for Deep Face Representation with Noisy Labels. [Paper] [Code-Pytorch] [Code-Keras] [Code-Tensorflow]

  • 2017-TNLS - Improving Crowdsourced Label Quality Using Noise Correction. [Paper]

  • 2017-ML - Learning to Learn from Weak Supervision by Full Supervision. [Paper] [Code]

  • 2017-ML - Avoiding your teacher's mistakes: Training neural networks with controlled weak supervision. [Paper]

  • 2017-Arxiv - Deep Learning is Robust to Massive Label Noise. [Paper]

  • 2017-Arxiv - Fidelity-weighted learning. [Paper]

  • 2017 - Self-Error-Correcting Convolutional Neural Network for Learning with Noisy Labels. [Paper]

  • 2017-Arxiv - Learning with confident examples: Rank pruning for robust classification with noisy labels. [Paper] [Code]

  • 2017-Arxiv - Regularizing neural networks by penalizing confident output distributions. [Paper]

  • 2017 - Learning with Auxiliary Less-Noisy Labels. [Paper]

  • 2018-AAAI - Deep learning from crowds. [Paper]

  • 2018-ICLR - mixup: Beyond Empirical Risk Minimization. [Paper] [Code]

  • 2018-ICLR - Learning From Noisy Singly-labeled Data. [Paper] [Code]

  • 2018-ICLR_W - How Do Neural Networks Overcome Label Noise? [Paper]

  • 2018-CVPR - CleanNet: Transfer Learning for Scalable Image Classifier Training with Label Noise. [Paper] [Code]

  • 2018-CVPR - Joint Optimization Framework for Learning with Noisy Labels. [Paper] [Code] [Code-Unofficial-Pytorch]

  • 2018-CVPR - Iterative Learning with Open-set Noisy Labels. [Paper] [Code]

  • 2018-ICML - MentorNet: Learning Data-Driven Curriculum for Very Deep Neural Networks on Corrupted Labels. [Paper] [Code]

  • 2018-ICML - Learning to Reweight Examples for Robust Deep Learning. [Paper] [Code] [Code-Unofficial-PyTorch]

  • 2018-ICML - Dimensionality-Driven Learning with Noisy Labels. [Paper] [Code]

  • 2018-ECCV - CurriculumNet: Weakly Supervised Learning from Large-Scale Web Images. [Paper] [Code]

  • 2018-ECCV - Deep Bilevel Learning. [Paper] [Code]

  • 2018-ECCV - Learning with Biased Complementary Labels. [Paper] [Code]

  • 2018-ISBI - Training a neural network based on unreliable human annotation of medical images. [Paper]

  • 2018-WACV - Iterative Cross Learning on Noisy Labels. [Paper]

  • 2018-WACV - A semi-supervised two-stage approach to learning from noisy labels. [Paper]

  • 2018-NIPS - Co-teaching: Robust Training of Deep Neural Networks with Extremely Noisy Labels. [Paper] [Code]

  • 2018-NIPS - Masking: A New Perspective of Noisy Supervision. [Paper] [Code]

  • 2018-NIPS - Using Trusted Data to Train Deep Networks on Labels Corrupted by Severe Noise. [Paper] [Code]

  • 2018-NIPS - Robustness of conditional GANs to noisy labels. [Paper] [Code]

  • 2018-NIPS - Generalized Cross Entropy Loss for Training Deep Neural Networks with Noisy Labels. [Paper] [Loss-Code-Unofficial] (see the unofficial loss sketch after this list)

  • 2018-TIP - Deep learning from noisy image labels with quality embedding. [Paper]

  • 2018-TNLS - Progressive Stochastic Learning for Noisy Labels. [Paper]

  • 2018 - Multiclass Learning with Partially Corrupted Labels. [Paper]

  • 2018-Arxiv - Improving Multi-Person Pose Estimation using Label Correction. [Paper]

  • 2018 - Robust Determinantal Generative Classifier for Noisy Labels and Adversarial Attacks. [Paper]

  • 2019-AAAI - Safeguarded Dynamic Label Regression for Generalized Noisy Supervision. [Paper] [Code] [Slides] [Poster]

  • 2019-ICLR_W - SOSELETO: A Unified Approach to Transfer Learning and Training with Noisy Labels. [Paper] [Code]

  • 2019-CVPR - Learning to Learn from Noisy Labeled Data. [Paper] [Code]

  • 2019-CVPR - Learning a Deep ConvNet for Multi-label Classification with Partial Labels. [Paper]

  • 2019-CVPR - Label-Noise Robust Generative Adversarial Networks. [Paper] [Code]

  • 2019-CVPR - Learning From Noisy Labels by Regularized Estimation of Annotator Confusion. [Paper] [Code]

  • 2019-CVPR - Probabilistic End-to-end Noise Correction for Learning with Noisy Labels. [Paper] [Code]

  • 2019-CVPR - Graph Convolutional Label Noise Cleaner: Train a Plug-and-play Action Classifier for Anomaly Detection. [Paper] [Code]

  • 2019-CVPR - Improving Semantic Segmentation via Video Propagation and Label Relaxation. [Paper] [Code]

  • 2019-CVPR - Devil is in the Edges: Learning Semantic Boundaries from Noisy Annotations. [Paper] [Code] [Project-page]

  • 2019-CVPR - Noise-Tolerant Paradigm for Training Face Recognition CNNs. [Paper] [Code]

  • 2019-CVPR - A Nonlinear, Noise-aware, Quasi-clustering Approach to Learning Deep CNNs from Noisy Labels. [Paper]

  • 2019-IJCAI - Learning Sound Events from Webly Labeled Data. [Paper] [Code]

  • 2019-ICML - Unsupervised Label Noise Modeling and Loss Correction. [Paper] [Code]

  • 2019-ICML - Understanding and Utilizing Deep Neural Networks Trained with Noisy Labels. [Paper] [Code]

  • 2019-ICML - How does Disagreement Help Generalization against Label Corruption? [Paper] [Code]

  • 2019-ICML - Using Pre-Training Can Improve Model Robustness and Uncertainty. [Paper] [Code]

  • 2019-ICML - On Symmetric Losses for Learning from Corrupted Labels. [Paper] [Poster] [Slides] [Code]

  • 2019-ICML - Combating Label Noise in Deep Learning Using Abstention. [Paper] [Code]

  • 2019-ICML - SELFIE: Refurbishing unclean samples for robust deep learning. [Paper] [Code]

  • 2019-ICASSP - Learning Sound Event Classifiers from Web Audio with Noisy Labels. [Paper] [Code]

  • 2019-TGRS - Hyperspectral Image Classification in the Presence of Noisy Labels. [Paper] [Code]

  • 2019-ICCV - NLNL: Negative Learning for Noisy Labels. [Paper] [Code]

  • 2019-ICCV - Symmetric Cross Entropy for Robust Learning With Noisy Labels. [Paper] [Code]

  • 2019-ICCV - Co-Mining: Deep Face Recognition With Noisy Labels. [Paper]

  • 2019-ICCV - O2U-Net: A Simple Noisy Label Detection Approach for Deep Neural Networks. [Paper] [Code]

  • 2019-ICCV - Deep Self-Learning From Noisy Labels. [Paper] [Code]

  • 2019-ICCV_W - Photometric Transformer Networks and Label Adjustment for Breast Density Prediction. [Paper]

  • 2019-NIPS - Meta-Weight-Net: Learning an Explicit Mapping For Sample Weighting. [Paper] [Code]

  • 2019-TPAMI - Learning from Large-scale Noisy Web Data with Ubiquitous Reweighting for Image Classification. [Paper]

  • 2019-ISBI - Robust Learning at Noisy Labeled Medical Images: Applied to Skin Lesion Classification. [Paper]

  • 2019-AISTATS - Two-temperature logistic regression based on the Tsallis divergence. [Paper]

  • 2019-NIPS - Robust bi-tempered logistic loss based on Bregman divergences. [Paper] [Blog] [Code] [Demo]

  • 2019-NIPS - Are Anchor Points Really Indispensable in Label-Noise Learning? [Paper] [Code]

  • 2019-NIPS - Noise-tolerant fair classification. [Paper] [Code]

  • 2019-NIPS - Correlated Uncertainty for Learning Dense Correspondences from Noisy Labels. [Paper]

  • 2019-NIPS - Combinatorial Inference against Label Noise. [Paper] [Code]

  • 2019-NIPS - L_DMI: A Novel Information-theoretic Loss Function for Training Deep Nets Robust to Label Noise. [Paper] [Code]

  • 2019-Arxiv - ChoiceNet: Robust Learning by Revealing Output Correlations. [Paper]

  • 2019-Arxiv - Robust Learning Under Label Noise With Iterative Noise-Filtering. [Paper]

  • 2019-Arxiv - IMAE for Noise-Robust Learning: Mean Absolute Error Does Not Treat Examples Equally and Gradient Magnitude's Variance Matters. [Paper] [Project page]

  • 2019-Arxiv - Confident Learning: Estimating Uncertainty in Dataset Labels. [Paper] [Code]

  • 2019-Arxiv - Derivative Manipulation for General Example Weighting. [Paper] [Code]

  • 2020-ICPR - Towards Robust Learning with Different Label Noise Distributions. [Paper] [Code]

  • 2020-AAAI - Reinforcement Learning with Perturbed Rewards. [Paper] [Code]

  • 2020-AAAI - Less Is Better: Unweighted Data Subsampling via Influence Function. [Paper] [Code]

  • 2020-AAAI - Label Error Correction and Generation Through Label Relationships. [Paper]

  • 2020-AAAI - Self-Paced Robust Learning for Leveraging Clean Labels in Noisy Data. [Paper]

  • 2020-AAAI - Coupled-view Deep Classifier Learning from Multiple Noisy Annotators. [Paper]

  • 2020-AAAI - Partial Multi-label Learning with Noisy Label Identification. [Paper]

  • 2020-WACV - A Novel Self-Supervised Re-labeling Approach for Training with Noisy Labels. [Paper]

  • 2020-WACV - Disentangling Human Dynamics for Pedestrian Locomotion Forecasting with Noisy Supervision. [Paper]

  • 2020-WACV - Learning from Noisy Labels via Discrepant Collaborative Training. [Paper]

  • 2020-ICLR - SELF: Learning to Filter Noisy Labels with Self-Ensembling. [Paper]

  • 2020-ICLR - DivideMix: Learning with Noisy Labels as Semi-supervised Learning. [Paper] [Code]

  • 2020-ICLR - Can gradient clipping mitigate label noise? [Paper] [Code]

  • 2020-ICLR - Curriculum Loss: Robust Learning and Generalization against Label Corruption. [Paper]

  • 2020-ICLR - Simple and Effective Regularization Methods for Training on Noisily Labeled Data with Generalization Guarantee. [Paper]

  • 2020-ICLR - Learning from Rules Generalizing Labeled Exemplars. [Paper] [Code]

  • 2020-ICLR - Robust training with ensemble consensus. [Paper] [Code]

  • 2020-CVPR - Combating noisy labels by agreement: A joint training method with co-regularization. [Paper] [Code]

  • 2020-CVPR - Distilling Effective Supervision From Severe Label Noise. [Paper] [Code]

  • 2020-CVPR - Learning From Noisy Anchors for One-Stage Object Detection. [Paper] [Code]

  • 2020-CVPR - Self-Training With Noisy Student Improves ImageNet Classification. [Paper] [Code]

  • 2020-CVPR - Noise Robust Generative Adversarial Networks. [Paper] [Code]

  • 2020-CVPR - Noise-Aware Fully Webly Supervised Object Detection. [Paper] [Code]

  • 2020-CVPR - Global-Local GCN: Large-Scale Label Noise Cleansing for Face Recognition. [Paper]

  • 2020-CVPR - Training Noise-Robust Deep Neural Networks via Meta-Learning. [Paper] [Code]

  • 2020-ICML - Learning with Bounded Instance- and Label-dependent Label Noise. [Paper] [Matlab Code]

  • 2020-ICML - Label-Noise Robust Domain Adaptation. [Paper]

  • 2020-ICML - LTF: A Label Transformation Framework for Correcting Label Shift. [Paper]

  • 2020-ICML - Does label smoothing mitigate label noise? [Paper]

  • 2020-ICML - Error-Bounded Correction of Noisy Labels. [Paper] [Code]

  • 2020-ICML - Deep k-NN for Noisy Labels. [Paper]

  • 2020-ICML - Searching to Exploit Memorization Effect in Learning from Noisy Labels. [Paper] [Code]

  • 2020-ICML - Beyond Synthetic Noise: Deep Learning on Controlled Noisy Labels. [Paper] [Code]

  • 2020-ICML - Peer Loss Functions: Learning from Noisy Labels without Knowing Noise Rates. [Paper]

  • 2020-ICML - Improving Generalization by Controlling Label-Noise Information in Neural Network Weights. [Paper] [Code]

  • 2020-ICML - Training Binary Neural Networks through Learning with Noisy Supervision. [Paper] [Code]

  • 2020-ICML - SIGUA: Forgetting May Make Learning with Noisy Labels More Robust. [Paper] [Code]

  • 2020-ICML - Normalized Loss Functions for Deep Learning with Noisy Labels. [Paper] [Code]

  • 2020-ICML_W - How does Early Stopping Help Generalization against Label Noise? [Paper]

  • 2020-IJCAI - Learning with Noise: Improving Distantly-Supervised Fine-grained Entity Typing via Automatic Relabeling. [Paper]

  • 2020-IJCAI - Can Cross Entropy Loss Be Robust to Label Noise? [Paper]

  • 2020-ECCV - Graph convolutional networks for learning with few clean and many noisy labels. [Paper] [Code]

  • 2020-ECCV - Learning with Noisy Class Labels for Instance Segmentation. [Paper] [Code]

  • 2020-ECCV - Learning Noise-Aware Encoder-Decoder from Noisy Labels by Alternating Back-Propagation for Saliency Detection. [Paper] [Code]

  • 2020-ECCV - NoiseRank: Unsupervised Label Noise Reduction with Dependence Models. [Paper]

  • 2020-ECCV - Weakly-Supervised Learning with Side Information for Noisy Labeled Images. [Paper]

  • 2020-ECCV - Sub-center ArcFace: Boosting Face Recognition by Large-scale Noisy Web Faces. [Paper] [Code]

  • 2020-TASLP - Audio Tagging by Cross Filtering Noisy Labels. [Paper]

  • 2020-NIPS - Robust Optimization for Fairness with Noisy Protected Groups. [Paper] [Code]

  • 2020-NIPS - A Topological Filter for Learning with Label Noise. [Paper] [Code]

  • 2020-NIPS - Self-Adaptive Training: beyond Empirical Risk Minimization. [Paper] [Code]

  • 2020-NIPS - Parts-dependent Label Noise: Towards Instance-dependent Label Noise. [Paper] [Code]

  • 2020-NIPS - Dual T: Reducing Estimation Error for Transition Matrix in Label-noise Learning. [Paper]

  • 2020-NIPS - Early-Learning Regularization Prevents Memorization of Noisy Labels. [Paper] [Code]

  • 2020-NIPS - Disentangling Human Error from the Ground Truth in Segmentation of Medical Images. [Paper] [Code]

  • 2020-NIPS - Identifying Mislabeled Data using the Area Under the Margin Ranking. [Paper] [Code]

  • 2020-NIPS - Coresets for Robust Training of Neural Networks against Noisy Labels. [Paper] [Code]

  • 2020-IJCNN - Temporal Calibrated Regularization for Robust Noisy Label Learning. [Paper]

  • 2020-MICCAI - Characterizing Label Errors: Confident Learning for Noisy-labeled Image Segmentation. [Paper] [Code]

  • 2020-ICPR - Meta Soft Label Generation for Noisy Labels. [Paper] [Code]

  • 2020-IJCV - Rectifying Pseudo Label Learning via Uncertainty Estimation for Domain Adaptive Semantic Segmentation. [Paper] [Code]

  • 2020-IEEEAccess - Limited Gradient Descent: Learning With Noisy Labels. [Paper]

  • 2020-Arxiv - Multi-Class Classification from Noisy-Similarity-Labeled Data. [Paper]

  • 2020-Arxiv - Learning Adaptive Loss for Robust Learning with Noisy Labels. [Paper]

  • 2020-Arxiv - Class2Simi: A New Perspective on Learning with Label Noise. [Paper]

  • 2020-Arxiv - Confidence Scores Make Instance-dependent Label-noise Learning Possible. [Paper] [Code]

  • 2020-Arxiv - ProSelfLC: Progressive Self Label Correction for Training Robust Deep Neural Networks. [Paper] [Code]

  • 2020-Arxiv - Learning from Noisy Labels with Noise Modeling Network. [Paper]

  • 2020-Arxiv - ExpertNet: Adversarial Learning and Recovery Against Noisy Labels. [Paper]

  • 2020-Arxiv - Noisy Labels Can Induce Good Representations. [Paper]

  • 2020-Arxiv - Contrast to Divide: self-supervised pre-training for learning with noisy labels. [Paper] [Code]

  • 2021-AAAI - Robustness of Accuracy Metric and its Inspirations in Learning with Noisy Labels. [Paper] [Code]

  • 2021-AAAI - Beyond Class-Conditional Assumption: A Primary Attempt to Combat Instance-Dependent Label Noise. [Paper] [Code]

  • 2021-AAAI - Meta Label Correction for Noisy Label Learning. [Paper] [Code]

  • 2021-AAAI - From Label Smoothing to Label Relaxation. [Paper] [Code]

  • 2021-AAAI - Tackling Instance-Dependent Label Noise via a Universal Probabilistic Model. [Paper]

  • 2021-WACV - Do We Really Need Gold Samples for Sample Weighting Under Label Noise? [Paper] [Code]

  • 2021-WACV - EvidentialMix: Learning with Combined Open-set and Closed-set Noisy Labels. [Paper] [Code] [Blog]

  • 2021-CVPR - Improving Unsupervised Image Clustering With Robust Learning. [Paper] [Code]

  • 2021-CVPR - Multi-Objective Interpolation Training for Robustness to Label Noise. [Paper] [Code]

  • 2021-CVPR - Noise-resistant Deep Metric Learning with Ranking-based Instance Selection. [Paper] [Code]

  • 2021-CVPR - Augmentation Strategies for Learning with Noisy Labels. [Paper] [Code]

  • 2021-CVPR - A Second-Order Approach to Learning with Instance-Dependent Label Noise. [Paper] [Code]

  • 2021-CVPR - Faster Meta Update Strategy for Noise-Robust Deep Learning. [Paper] [Code]

  • 2021-CVPR - Partially View-aligned Representation Learning with Noise-robust Contrastive Loss. [Paper] [Code]

  • 2021-CVPR - Correlated Input-Dependent Label Noise in Large-Scale Image Classification. [Paper]

  • 2021-CVPR - Divergence Optimization for Noisy Universal Domain Adaptation. [Paper] [Code]

  • 2021-CVPR - Joint Noise-Tolerant Learning and Meta Camera Shift Adaptation for Unsupervised Person Re-Identification. [Paper] [Code]

  • 2021-CVPR - Jo-SRC: A Contrastive Approach for Combating Noisy Labels. [Paper] [Code]

  • 2021-CVPR - Learning Cross-Modal Retrieval With Noisy Labels. [Paper]

  • 2021-CVPR - Learning an Explicit Weighting Scheme for Adapting Complex HSI Noise. [Paper] [Code]

  • 2021-CVPR - DAT: Training Deep Networks Robust To Label-Noise by Matching the Feature Distributions. [Paper]

  • 2021-CVPR - Background-Aware Pooling and Noise-Aware Loss for Weakly-Supervised Semantic Segmentation. [Paper] [Code]

  • 2021-CVPR - Joint Negative and Positive Learning for Noisy Labels. [Paper] [Code]

  • 2021-CVPR - DualGraph: A Graph-Based Method for Reasoning About Label Noise. [Paper]

  • 2021-CVPR - AutoDO: Robust AutoAugment for Biased Data With Label Noise via Scalable Probabilistic Implicit Differentiation. [Paper] [Code]

  • 2021-CVPRW - Contrastive Learning Improves Model Robustness Under Label Noise. [Paper] [Code]

  • 2021-CVPRW - Boosting Co-teaching with Compression Regularization for Label Noise. [Paper] [Code]

  • 2021-ICLR - Learning with Feature-Dependent Label Noise: A Progressive Approach. [Paper] [Code]

  • 2021-ICLR - Robust early-learning: Hindering the memorization of noisy labels. [Paper] [Code]

  • 2021-ICLR - MoPro: Webly Supervised Learning with Momentum Prototypes. [Paper] [Code]

  • 2021-TIP - Delving Deep into Label Smoothing. [Paper] [Code]

  • 2021-PAKDD - Memorization in Deep Neural Networks: Does the Loss Function Matter? [Paper] [Code]

  • 2021-NIPS - FINE Samples for Learning with Noisy Labels. [Paper] [Code]

  • 2021-NeurIPS - Pervasive Label Errors in Test Sets Destabilize Machine Learning Benchmarks. [Paper] [Demo] [Code] [Blog Post]

  • 2021-NeurIPS - Understanding and Improving Early Stopping for Learning with Noisy Labels. [Paper]

  • 2021-Arxiv - Improving Medical Image Classification with Label Noise Using Dual-uncertainty Estimation. [Paper]

  • 2021-Arxiv - A Framework using Contrastive Learning for Classification with Noisy Labels. [Paper]

  • 2021-Arxiv - Adaptive Sample Selection for Robust Learning under Label Noise. [Paper] [Code]

  • 2021 - An Instance-Dependent Simulation Framework for Learning with Label Noise. [Paper] [Project Page]

  • 2021-ECML - Estimating the Electrical Power Output of Industrial Devices with End-to-End Time-Series Classification in the Presence of Label Noise. [Paper] [Code]

  • 2021-MM - Co-learning: Learning from Noisy Labels with Self-supervision. [Paper] [Code]

  • 2021-IJCAI - Towards Understanding Deep Learning from Noisy Labels with Small-Loss Criterion. [Paper]

  • 2022-WSDM - Towards Robust Graph Neural Networks for Noisy Graphs with Sparse Labels. [Paper] [Code]

  • 2022-Arxiv - Multi-class Label Noise Learning via Loss Decomposition and Centroid Estimation. [Paper]

  • 2022-AAAI - Deep Neural Networks Learn Meta-Structures from Noisy Labels in Semantic Segmentation. [Paper] [Code]

  • 2022-AAAI - Noise-robust Learning from Multiple Unsupervised Sources of Inferred Labels. [Paper]

  • 2022-ICLR - PiCO: Contrastive Label Disambiguation for Partial Label Learning. [Paper] [Code]

  • 2022-CVPR - UNICON: Combating Label Noise Through Uniform Selection and Contrastive Learning. [Paper] [Code]

  • 2022-CVPR - Few-shot Learning with Noisy Labels. [Paper]

  • 2022-CVPR - Scalable Penalized Regression for Noise Detection in Learning with Noisy Labels. [Paper] [Code]

  • 2022-CVPR - Large-Scale Pre-training for Person Re-identification with Noisy Labels. [Paper] [Code]

  • 2022-CVPR - Adaptive Early-Learning Correction for Segmentation from Noisy Annotations. [Paper] [Code]

  • 2022-CVPR - Selective-Supervised Contrastive Learning with Noisy Labels. [Paper] [Code]
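
Several of the robust losses above are linked only through unofficial implementations. As a concrete illustration, the following is a minimal, unofficial PyTorch sketch of the Generalized Cross Entropy (GCE) loss from the 2018-NIPS entry "Generalized Cross Entropy Loss for Training Deep Neural Networks with Noisy Labels"; the default q = 0.7 and the tensor shapes are assumptions for this example, not the authors' reference code or settings.

```python
import torch
import torch.nn.functional as F

def generalized_cross_entropy(logits: torch.Tensor, targets: torch.Tensor, q: float = 0.7) -> torch.Tensor:
    """L_q loss: (1 - p_y^q) / q. Tends to cross entropy as q -> 0 and to MAE at q = 1."""
    probs = F.softmax(logits, dim=1)                         # (N, C) predicted class probabilities
    p_y = probs.gather(1, targets.unsqueeze(1)).squeeze(1)   # probability assigned to the (possibly noisy) label
    return ((1.0 - p_y.clamp_min(1e-7) ** q) / q).mean()     # clamp for numerical stability

# Example usage on random data (drop-in replacement for F.cross_entropy in a training loop)
logits = torch.randn(8, 10, requires_grad=True)   # batch of 8 examples, 10 classes
noisy_labels = torch.randint(0, 10, (8,))
loss = generalized_cross_entropy(logits, noisy_labels)
loss.backward()
```

Relative to standard cross entropy, low-confidence (often mislabeled) examples receive a smaller gradient weight under this loss, which is the paper's motivation for treating it as a trade-off between cross entropy (fast convergence) and MAE (noise robustness).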

Survey

  • 2014-TNLS - Classification in the Presence of Label Noise: a Survey. [Paper]

  • 2019-KBS - Image Classification with Deep Learning in the Presence of Noisy Labels: A Survey. [Paper]

  • 2020-SIBGRAPI - A Survey on Deep Learning with Noisy Labels: How to train your model when you cannot trust on the annotations? [Paper] [Code]

  • 2020-MIA - Deep learning with noisy labels: exploring techniques and remedies in medical image analysis. [Paper]

  • 2020 - Learning from Noisy Labels with Deep Neural Networks: A Survey. [Paper] [Project Page]

Github

Others

Acknowledgements

Some of the above content is borrowed from Noisy-Labels-Problem-Collection.
