kdhht2334 / elim_fer

[NeurIPS 2022] The official repository of Expression Learning with Identity Matching for Facial Expression Recognition

License: MIT License

Python 98.47% Shell 1.53%
facial-expression-recognition feature-normalization human-computer-interaction optimal-transport valence-arousal facial-expressions pytorch real-time-demo domain-shift out-of-distribution-generalization

elim_fer's Introduction

ELIM_FER

Optimal Transport-based Identity Matching for Identity-invariant Facial Expression Recognition (NeurIPS 2022)

PAPER | DEMO

Ubuntu | Python | PyTorch

Daeha Kim, Byung Cheol Song

CVIP Lab, Inha University

Update

  • 2022.09.20: Initialized this repository.

Requirements

  • Python (>=3.8)
  • PyTorch (>=1.7.1)
  • pretrainedmodels (>=0.7.4)
  • Wandb
  • Fabulous (terminal color toolkit)

To install all dependencies, run:

pip install -r requirements.txt
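
For reference, a requirements.txt consistent with the list above might look like the following; the entries are taken from the requirements listed here, and the file shipped in the repository remains authoritative.

torch>=1.7.1
pretrainedmodels>=0.7.4
wandb
fabulous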

Datasets

  1. Download the four public benchmarks used for training and evaluation (please download them only after the usage agreement has been accepted).

(For more details, visit each dataset's website.)

  2. Follow the preprocessing rules for each dataset by referring to the official PyTorch custom dataset tutorial (a minimal dataset sketch is shown below).
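
As an illustration only, a custom Dataset for valence-arousal labels could be sketched as follows. The CSV layout (image_path, valence, arousal columns) is an assumption; adapt it to the annotation format of the benchmark you downloaded.

import csv

import torch
from PIL import Image
from torch.utils.data import Dataset

class VADataset(Dataset):
    """Reads (image, [valence, arousal]) pairs from an assumed CSV annotation file."""

    def __init__(self, csv_file, transform=None):
        with open(csv_file) as f:
            # Columns assumed: image_path, valence, arousal
            self.samples = list(csv.DictReader(f))
        self.transform = transform

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        row = self.samples[idx]
        image = Image.open(row["image_path"]).convert("RGB")
        if self.transform is not None:
            image = self.transform(image)
        label = torch.tensor([float(row["valence"]), float(row["arousal"])])
        return image, label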

Training

Just run the script below (a full example invocation follows the argument list):

chmod 755 run.sh
./run.sh <method> <gpu_no> <port_no> 
  • <method>: elim or elim_category
  • <gpu_no>: GPU number, such as 0 (or 0, 1, etc.)
  • <port_no>: port number used to distinguish workers (e.g., 12345)
  • Note: If you want to try the 7-class task (e.g., AffectNet), add the age_script folder to your training or validation script and turn on the elim_category option.
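
For example, training the elim method on GPU 0 with port 12345 would be:

./run.sh elim 0 12345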

Evaluation

  • Evaluation is performed automatically at each print_check point during the training phase.

Demo

  • Go to the demo folder, and then feel free to use it.
  • Real-time demo with pre-trained weights (a minimal single-image inference sketch is shown below).
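
Until the demo files and pre-trained weights are uploaded (see TODO), the following is only a sketch of single-image valence-arousal inference. The ResNet-18 backbone, the elim_pretrained.pth file name, and the 224x224 input size are placeholders, not artifacts of this repository.

import torch
import torch.nn as nn
from PIL import Image
from torchvision import models, transforms

# Placeholder backbone with a 2-unit (valence, arousal) head; substitute the
# real ELIM architecture and its released weights here once available.
model = models.resnet18()
model.fc = nn.Linear(model.fc.in_features, 2)
state = torch.load("elim_pretrained.pth", map_location="cpu")  # hypothetical weight file
model.load_state_dict(state, strict=False)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),  # assumed input resolution
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

image = preprocess(Image.open("face.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    valence, arousal = model(image).squeeze(0).tolist()
print(f"valence={valence:.3f}, arousal={arousal:.3f}")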

TODO

  • Refactoring
  • Upload pre-trained model weights
  • Upload demo files
  • Upload train/eval files

Note

  • For MLP-Mixer, please refer to the official repository (link).

Citation

If our work is useful for yours, please consider citing the BibTeX entry below:

@misc{kim2022elim,
    author = {Kim, Daeha and Song, Byung Cheol},
    title = {Optimal Transport-based Identity Matching for Identity-invariant Facial Expression Recognition},
    year = {2022},
    eprint = {arXiv:2209.12172}
}

Contact

If you have any questions, feel free to contact me at [email protected].

elim_fer's People

Contributors

kdhht2334


elim_fer's Issues

offset of valence and arousal

Hi.

Thank you for the amazing work.

I have a question about offsets in scripts. In your code, you set offset values for valence and arousal. Can you tell me why this is necessary and how you determined these values?

Reproducing AffectNet results.

Hello!

Thanks for sharing your work.

  • How can I reproduce the AffectNet results?
    The script seems to require some preprocessing.

  • Can you share the AffectNet pre-trained model?

Regards,
Sungguk Cha

How to create custom data for training?

First, thanks for your great work. I have a question about dataset annotation and need your help. I have 10,000 images that don't have labels. How could I label my dataset to apply it to your model? I hope you can explain this to me.

License

Hi.
Thank you for the amazing work!
Could you clarify the license for this code? I would like to utilize this valence-arousal estimation model in my research.

Pre-trained model

Hi,

I was wondering if you plan to upload the weights for a pre-trained model.
Currently, I couldn't find any.

Thanks

Estimation using my own facial image data.

Thanks for the amazing work!

For my own research, I would like to estimate valence-arousal using my own facial image data.
Which script should I use for this?
I thought this part was estimating valence and arousal, but I am not sure what "def hand_detection()" is doing.

Can you provide a script to estimate valence-arousal from a single image using pre-trained weights?
