genbing99 / mtsn-spot-me-mae
MTSN: A Multi-Temporal Stream Network for Spotting Facial Macro- and Micro-Expression with Hard and Soft Pseudo-labels

Python 100.00%
micro-expression multistream-network spotting deep-learning emotion

mtsn-spot-me-mae's Introduction

Hi there! I'm Gen Bing (Larry)

「 Innovative and Passionate 」
「 An AI Engineer with great enthusiasm 」
「 Never Stop Learning 」



  • 😄 My Pronouns: He/Him

  • ⚡ Research Interest: Artificial Intelligence, Affective Computing, Digital Twin, Pattern Recognition

  • ✨ Google Scholar: @profile

  • 📫 How to reach me: @email, @linkedin


mtsn-spot-me-mae's People

Contributors: genbing99

Forkers: vligh

mtsn-spot-me-mae's Issues

some question

Dear author, I want to ask how to obtain the dataset.

Feature extraction process?

Thanks for your work! After reading your paper, I found that feature processing is only mentioned briefly. Specifically, the input size is reduced from 42 x 42 to 28 x 28. Did you process the features as in the previous work SoftNet (extracting optical flow with TV-L1 and cropping out 3 ROI regions), but resize them to a smaller size?

Reproducing the training results

Hello, I tried retraining your code with your training set, using 1/4 of it for validation, but the F1-score is very low and does not match the paper. What could be the cause?

Code for evaluation on MEGC2021 benchmark?

Thanks for the great work. I noticed that there is only evaluation code for SAMM-Challenge and CAS(ME)^3. Would you please also provide the counterpart for SAMM-LV and CAS(ME)^2, so the F1-score results on those datasets can be reproduced?

about megc2022-processed-data

Hello, I am not very clear on how your processed data were obtained. Do CAS_Test_cropped_dataset_k1.pkl and CAS_Test_cropped_dataset_k6.pkl come from here? I found that they are the same size (28283).

Problems on result reproduction on MEGC2021 Benchmark

I've had a hard time trying to reproduce the results. Listed are what I've tried.

  1. I've re-organized the code in the way I'm used to and run experiments on CASME_sq using features I extracted myself as instructed. The overall F1-score is around 0.23, so I suspected something was wrong with my feature-extraction procedure and turned to the preprocessed features offered in the repo.
  2. Run experiments on CASME_sq using the features you provided in repo.
    Results:
    Final result: TP:101, FP:290, FN:256
    Precision = 0.2583
    Recall = 0.185
    F1-Score = 0.2156
    The results are still not so good, so I finally tried running the code in the Jupyter notebook provided in #3.
  3. Run experiments on CASME_sq & SAMMLV using the notebook & features you provided in repo. Here are the results.

Reproduction ipynb:

CASME:

Micro result: TP:3 FP:137 FN:54 F1_score:0.0305
Macro result: TP:100 FP:206 FN:200 F1_score:0.3300
Overall result: TP:103 FP:343 FN:254 F1_score:0.2565

SAMMLV:

Cumulative result until subject 30:
Micro result: TP:10 FP:169 FN:149 F1_score:0.0592
Macro result: TP:97 FP:277 FN:246 F1_score:0.2706
Overall result: TP:107 FP:446 FN:395 F1_score:0.2028

Orig ipynb:

CASME:
Micro result: TP:5 FP:77 FN:52 F1_score:0.0719
Macro result: TP:108 FP:166 FN:192 F1_score:0.3763
Overall result: TP:113 FP:243 FN:244 F1_score:0.3170

SAMM:

Micro result: TP:12 FP:104 FN:147 F1_score:0.0873
Macro result: TP:88 FP:198 FN:255 F1_score:0.2798
Overall result: TP:100 FP:302 FN:402 F1_score:0.2212
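The micro, macro, and overall F1-scores listed above follow the standard spotting metric F1 = 2·TP / (2·TP + FP + FN), with the overall counts being the sums of the micro and macro counts. A minimal check against the "Reproduction ipynb" CASME numbers:

```python
# Spotting F1 as used in the MEGC benchmarks: F1 = 2TP / (2TP + FP + FN).
# The counts below are the "Reproduction ipynb" CASME results from this issue.
def spotting_f1(tp, fp, fn):
    return 2 * tp / (2 * tp + fp + fn)

micro = spotting_f1(3, 137, 54)                       # -> 0.0305
macro = spotting_f1(100, 206, 200)                    # -> 0.3300
overall = spotting_f1(3 + 100, 137 + 206, 54 + 200)   # -> 0.2565
```

These reproduce the listed scores exactly, which confirms the overall row is just the element-wise sum of the micro and macro counts.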

As reported above, there is a huge gap between the reproduction result and the orig. performance on CASME_sq, while the gap on the SAMMLV dataset is much smaller.

I've also tried fixing the random seed (seed=1), but the result does not improve; replacing the mix of hard & soft label loss with a pure hard-label loss improves results. Moreover, I notice there are many subtle differences between the orig. code & the Jupyter notebook, and using the spotting method in the orig. code produces very bad results:
Final result: TP:53, FP:320, FN:304
Precision = 0.1421
Recall = 0.0849
F1-Score = 0.1063
Replacing it with the spotting method in the Jupyter notebook turns out better, with results:
Final result: TP:102, FP:299, FN:255
Precision = 0.2544
Recall = 0.1841
F1-Score = 0.2136
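Pinning every source of randomness, as attempted above, can be sketched as follows. This is a generic snippet, not the repo's code; it assumes the training stack is PyTorch and skips the torch/cuDNN part if torch is unavailable.

```python
# Generic reproducibility helper (a sketch, not the repo's actual code):
# seed every RNG the pipeline may draw from.
import random
import numpy as np

def set_seed(seed=1):
    random.seed(seed)
    np.random.seed(seed)
    try:
        import torch  # assuming a PyTorch stack; skipped if unavailable
        torch.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)
        # trade speed for determinism in cuDNN convolutions
        torch.backends.cudnn.deterministic = True
        torch.backends.cudnn.benchmark = False
    except ImportError:
        pass

set_seed(1)
a = np.random.rand(3)
set_seed(1)
b = np.random.rand(3)  # identical draw after re-seeding
```

Note that seeding alone does not guarantee identical results across machines: non-deterministic CUDA kernels and data-loader worker ordering can still introduce run-to-run variance.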

I also found a typo in the orig. code:

    if end-start > macro_min and end-start < macro_max and (score_plot_micro[peak] > 0.95 or (score_plot_macro[peak] > score_plot_macro[start] and score_plot_macro[peak] > score_plot_macro[end])):

I believe score_plot_micro[peak] > 0.95 should be score_plot_macro[peak] > 0.95.
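With the suggested fix applied, the macro-interval check would read as below. This is a self-contained reconstruction for illustration only; the macro_min/macro_max values are placeholders, not the repo's actual settings.

```python
# Hypothetical reconstruction of the spotting condition with the suggested
# fix applied: the 0.95 hard threshold is checked on the *macro* score plot.
def is_macro_interval(start, peak, end, score_plot_macro,
                      macro_min=15, macro_max=90):  # placeholder bounds
    return (macro_min < end - start < macro_max
            and (score_plot_macro[peak] > 0.95
                 or (score_plot_macro[peak] > score_plot_macro[start]
                     and score_plot_macro[peak] > score_plot_macro[end])))

scores = [0.1, 0.4, 0.9, 0.4, 0.1] * 10  # toy macro score plot
```

On this toy plot, an interval of valid length whose peak score dominates both endpoints is accepted, while one shorter than macro_min is rejected.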

I'm trying to build on your work as a baseline model, but I'm very frustrated by the reproduction results. Any insight/help would be very precious to me.
