
kthyeon / fine_official


NeurIPS 2021, "Fine Samples for Learning with Noisy Labels"

Python 98.25% Shell 1.75%
noisy-label-learning semi-supervised-learning neurips2021 machine-learning pytorch

fine_official's Introduction

[Official] FINE Samples for Learning with Noisy Labels

This repository is the official implementation of the paper "FINE Samples for Learning with Noisy Labels", presented at NeurIPS 2021. It is a new version of the previous repository https://github.com/jaychoi12/FINE; future code modifications and official development will take place here. Thanks to the contributors to the previous repo.

  • Paper, NeurIPS 21, FINE Samples for Learning with Noisy Labels, [Arxiv][OpenReview]

Reference Codes

Parts of our implementation refer to the official implementation code of related methods.

Requirements

  • This codebase is written for Python 3 (developed with Python 3.7.6).
  • To install necessary python packages, run pip install -r requirements.txt.

Training

Sample-Selection Approaches and Collaboration with Noise-Robust loss functions

Semi-Supervised Approaches

  • Most of the code closely follows the original DivideMix implementation at https://github.com/LiJunnan1992/DivideMix.
  • To train the model with FINE (f-dividemix), change into the dividemix folder and run the bash files, following the README.md in that folder.

Results

You can reproduce all of the results in the paper with this code; every result, including those in the Appendix, is described in the paper. The experiments are too numerous to list here, but by varying the hyperparameter values in the .sh files you can reproduce all of our analyses.

Contact

License
This project is licensed under the terms of the MIT license.

Acknowledgements

This work was supported by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) [No.2019-0-00075, Artificial Intelligence Graduate School Program (KAIST)] and [No. 2021-0-00907, Development of Adaptive and Lightweight Edge-Collaborative Analysis Technology for Enabling Proactively Immediate Response and Rapid Learning].

fine_official's People

Contributors

forestnoobie, jaychoi12, jongwooko, kthyeon, sangwook-cho, sungnyun


fine_official's Issues

Non-existent functions

Hi, thanks for sharing your implementation.
There seem to be some errors in the code: the functions get_out_list and get_singular_value_vector, used in FINE_official/dynamic_selection/traintools/gtrobustlosstrain.py, do not appear to exist.
Can you give me details about these functions?

Motivation of the method

Hello,

I have read your paper and find it very interesting. However, I have some confusion about your method. If I understood correctly, the first eigenvector represents the latent distribution of a class, which functions similarly to a prototype. I have also seen methods that use the similarity between a sample and its class prototype to select clean samples. I would like to know what the advantage of using the eigenvector over prototypes is.

Thanks.
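The eigenvector-based selection discussed in this question can be sketched as follows. This is an illustrative reconstruction of the idea, not the repository's actual code: the function names (`fine_scores`, `select_clean`), the use of the unnormalized gram matrix, and the GMM split are assumptions based on the paper's description.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fine_scores(features, labels, num_classes):
    """Score each sample by its squared alignment with the principal
    eigenvector of its class's gram matrix."""
    scores = np.zeros(len(features))
    for c in range(num_classes):
        idx = np.where(labels == c)[0]
        X = features[idx]                    # (n_c, d) feature matrix for class c
        gram = X.T @ X                       # (d, d) gram matrix
        _, eigvecs = np.linalg.eigh(gram)    # eigenvectors, ascending eigenvalues
        u = eigvecs[:, -1]                   # first (largest) eigenvector
        scores[idx] = (X @ u) ** 2           # squared alignment per sample
    return scores

def select_clean(scores, labels, num_classes, p_threshold=0.5):
    """Per class, fit a 2-component GMM on the scores and keep samples
    likely to come from the higher-score (clean) component."""
    clean_mask = np.zeros(len(scores), dtype=bool)
    for c in range(num_classes):
        idx = np.where(labels == c)[0]
        s = scores[idx].reshape(-1, 1)
        gmm = GaussianMixture(n_components=2, random_state=0).fit(s)
        clean_comp = np.argmax(gmm.means_.ravel())   # higher-mean component = clean
        clean_mask[idx] = gmm.predict_proba(s)[:, clean_comp] > p_threshold
    return clean_mask
```

Roughly, the paper's argument for this over a mean prototype is that the principal eigenvector captures the dominant direction of the clean-majority feature distribution and is therefore less distorted by the noisy minority than a simple class average.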

tabular data/ noisy instances/ new datasets

Hi,
thanks for sharing your implementation. I have some questions about it:

  1. Does it also work on tabular data?
  2. Is the code tailored to the datasets used in the paper or can one apply it to any data?
  3. Is it possible to identify the noisy instances (return the noisy IDs or the clean set)?

Thanks!

Could you provide hyperparameters?

I found that some hyperparameters differ from the description in the paper, "All hyper-parameters settings are the same with [25], even for the clean probability threshold." For example, the threshold in the code is 0.6, but in DivideMix the threshold is 0.5. The number of warmup epochs also differs from DivideMix. Could you provide the hyperparameters for reproducing the results?
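For context, the clean-probability threshold in question comes from the DivideMix-style split: a two-component GMM is fit over per-sample training losses, and samples whose posterior probability of belonging to the low-loss component exceeds the threshold are treated as clean. A minimal sketch of that split (the function name and the min-max normalization are assumptions, not this repository's exact code):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def split_by_loss(losses, p_threshold=0.5):
    """Treat samples whose posterior for the low-loss GMM component
    exceeds p_threshold as clean (0.5 in DivideMix)."""
    losses = np.asarray(losses, dtype=float)
    # Normalize losses to [0, 1] before fitting the mixture
    losses = (losses - losses.min()) / (losses.max() - losses.min() + 1e-8)
    gmm = GaussianMixture(n_components=2, reg_covar=5e-4, random_state=0)
    gmm.fit(losses.reshape(-1, 1))
    clean_comp = np.argmin(gmm.means_.ravel())   # low-loss component = clean
    prob = gmm.predict_proba(losses.reshape(-1, 1))[:, clean_comp]
    return prob > p_threshold
```

Raising the threshold from 0.5 to 0.6 simply makes the selection stricter: fewer samples pass, but those that do are more confidently clean.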
