wtomin / multitask-emotion-recognition-with-incomplete-labels

This is the repository containing the solution for the FG-2020 ABAW Competition.
License: MIT License
Hello, I am not sure whether my understanding is correct. In ./create_annotation_file/Aff_wild2/create_test_set_file.py, you read in three txt files describing the test set (../cropped_aligned): aus_test_set.txt, expression_test_set.txt, and va_test_set.txt. How did you obtain these three txt files, and how can I get them?
In the multitask-CNN-RNN training step, a resnet50_GRU model is needed. However, the models directory only contains the resnet50 model. Could you help fix this?
Hi Didan,
Congratulations on getting 1st place in the challenge!
We are currently trying to train your method from scratch, including the teacher model. We compared our teacher model with the ones you provided, and unfortunately we found a huge performance gap between them on the validation set.
We wonder which dataset you used to train the teacher model: was it the balanced mixed data, or Aff-Wild2 only? And what parameter settings did you use to train it?
We would really appreciate it if you could help us reproduce your results!
Thanks,
Shiyang
In order to replicate your results, the 4th step says:
Training: For Multitask-CNN, run python train.py --force_balance....
However, I cannot find train.py in the repository. Is it missing?
Hello!
I was trying to run the example.py script in the api folder but ran into the following issues:
1) The link http://www.robots.ox.ac.uk/~albanie/models/pytorch-mcn/resnet50_ferplus_dag.py does not work, so it is impossible to use this model. Is it possible for you to upload it again?
2) I also tried to use the pretrained imagenet model instead of ferplus. To do this I changed the __OPT variable in config.py to the following:
__OPT = {"AU_label_size": 8,
         "EXPR_label_size": 7,
         "VA_label_size": 2,
         "digitize_num": 20,
         "hidden_size": 128,
         "image_size": 112,
         "tasks": ['EXPR', 'AU', 'VA'],
         "pretrained_dataset": 'imagenet'}
However, I get the following error when loading the pretrained models:
RuntimeError: Error(s) in loading state_dict for Seq_Model:
Missing key(s) in state_dict: "backbone.backbone.model.conv1.weight", "backbone.backbone.model.bn1.weight",... (some weights are missing)
Perhaps, I am missing something and should change the config more.
I would be grateful if you could help me run the models.
Kind regards
Atilla
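One possible workaround for the missing-key error above (a sketch, not the repository's fix) is to copy over only the checkpoint weights whose names and shapes match the model, and inspect what was skipped. `load_matching_weights` and the toy linear layer below are illustrative, not part of the repo:

```python
import torch
import torch.nn as nn

def load_matching_weights(model: nn.Module, state_dict: dict) -> list:
    """Copy only parameters whose name and shape match; return skipped keys."""
    own = model.state_dict()
    skipped = []
    for name, tensor in state_dict.items():
        if name in own and own[name].shape == tensor.shape:
            own[name] = tensor
        else:
            skipped.append(name)
    model.load_state_dict(own)
    return skipped

# Toy demonstration with a 2x2 linear layer standing in for Seq_Model:
model = nn.Linear(2, 2)
ckpt = {"weight": torch.ones(2, 2), "backbone.missing": torch.zeros(1)}
print(load_matching_weights(model, ckpt))  # mismatched keys are reported
```

Printing the skipped keys makes it easy to see whether the gap is just naming (e.g. a `backbone.` prefix) or genuinely absent weights.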
Now I'm trying your code following "How to implement our models on your data", but I don't know how to get cropped and aligned face images.
Could you tell me how to crop and align face images?
And if possible, could you share some samples of cropped and aligned face images?
Thanks,
I know it's a bit of a stupid question, but could you please list the structure of all three datasets, or the layout of the folder where you put your data?
Hi, as far as I know, it's not possible to download the cropped and aligned face images now.
Could you provide some sample cropped and aligned files for reference?
Hey, I am facing an issue with the api in the code: when I run "run_example.py" in the api folder, it generates exactly the same csv even for different video files. What could be the possible reason for this?
Hi there,
I have been following the "Get Started" section of the ReadMe file in the "api" folder. In one of the steps, after git-cloning the pytorch-benchmark repo, it is recommended to download some folders through the link provided below,
which isn't working and gives the following error:
"Sorry, you cannot access this document. Please contact the person who shared it with you."
Is there some other way around this?
Thank you for sharing the code.
I am running "run_pretrained_model.py" as instructed in the readme, but every time I get the following error:
Can you advise, please?
For python run_pretrained_model.py --image_dir (path to faces cropped and aligned by openface) --model_type CNN --batch_size 2 --eval_with_teacher --eval_with_students --save_dir (path to a folder to save results) --workers 8 --ensemble
And in the run_pretrained_model.py file, I changed the path as follows:
MODEL_DIR = r'C:\Multitask-Emotion-Recognition-with-Incomplete-Labels-master\pytorch-benchmarks\models'
Here in the models folder, I made a folder named fer+ and put resnet50_feplus_dag.py and resnet50_feplus_dag.pth there.
And I am getting the following error:
Traceback (most recent call last):
  File "run_pretrained_model.py", line 664, in <module>
    main()
  File "run_pretrained_model.py", line 585, in main
    assert len(path) == 1
I downloaded their pre-trained models from
https://hkustconnect-my.sharepoint.com/personal/ddeng_connect_ust_hk/_layouts/15/onedrive.aspx?id=%2Fpersonal%2Fddeng%5Fconnect%5Fust%5Fhk%2FDocuments%2FFG2020%2DABAW
but it is not specified in which folder we should keep these pre-trained models, so I put them in the main directory. I think I am doing something wrong here?
Thank You
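For what it's worth, the failing assertion looks like a uniqueness check on a checkpoint search: the script presumably globs for exactly one model file per directory, and `len(path) != 1` means zero matched (models in the wrong folder) or several matched (duplicates). A hedged sketch of that pattern; the `*.pth` glob and helper name are assumptions, not the script's exact code:

```python
import glob
import os

def find_unique_checkpoint(model_dir: str, pattern: str = "*.pth") -> str:
    """Return the single checkpoint in model_dir, or raise with a clear message."""
    matches = glob.glob(os.path.join(model_dir, pattern))
    if len(matches) != 1:
        raise FileNotFoundError(
            f"Expected exactly one {pattern} in {model_dir}, "
            f"found {len(matches)}: {matches}"
        )
    return matches[0]
```

Running this check against each expected model subfolder would show immediately which directory the script fails to find a checkpoint in.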
Thank you for your contribution, I was impressed with your work.
By the way, could I get the pkl files of the labels for the Aff-Wild2 and DISFA datasets you used?
Could you share the detection results (F1 score and accuracy) for each AU annotated on Aff-Wild2?
I have tried to replicate the AU detection result from your paper (arXiv:2002.03557v1) as below:
But my result was 0.415 (FYI, F1: 0.251, Acc: 0.579), which is a big difference from yours (0.626).
Could you tell me how to replicate your result when using Aff-Wild2?
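For reference, the numbers quoted above are consistent with the FG-2020 ABAW AU challenge metric, which weights F1 and total accuracy equally:

```python
# ABAW FG-2020 AU score: 0.5 * F1 + 0.5 * total accuracy.
# Plugging in the values from the issue above:
f1, acc = 0.251, 0.579
score = 0.5 * f1 + 0.5 * acc
print(round(score, 3))  # 0.415, matching the reported result
```

So the gap to 0.626 is in the underlying F1/accuracy, not in how the score is combined.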
Thank you so much for your sharing.
I have a question about running run_example.py.
When I tried to run it, the following error occurred:
Traceback (most recent call last):
  File "/home/Multitask-Emotion/api/run_example.py", line 20, in <module>
    api.run(example_video, csv_output='examples/utterance_1_opface.csv')
  File "/home/Multitask-Emotion/api/Emotion_API.py", line 111, in run
    assert len(os.listdir(opface_output_dir)) > 0, "The OpenFace output directory should not be empty: {}".format(opface_output_dir)
AssertionError: The OpenFace output directory should not be empty: examples/utterance_1_opface
Could you tell me how to solve it? Thank you very much!
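The assertion above fires when OpenFace produced no output, typically because the OpenFace binary is missing or face detection failed on the video. A hedged sketch of a pre-check one could run before `api.run` (the function name and messages below are illustrative, not part of the repo's API):

```python
import os

def check_openface_output(opface_output_dir: str) -> int:
    """Return the number of files OpenFace produced, or raise a clear error."""
    if not os.path.isdir(opface_output_dir):
        raise FileNotFoundError(
            f"OpenFace output dir does not exist: {opface_output_dir}"
        )
    n = len(os.listdir(opface_output_dir))
    if n == 0:
        raise RuntimeError(
            f"The OpenFace output directory should not be empty: {opface_output_dir}"
        )
    return n
```

If this raises on `examples/utterance_1_opface`, the problem is in the OpenFace step rather than in the emotion model itself.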
Hello,
I read the description of using your pre-trained model and tried to follow it, but I failed because I don't have
resnet50_ferplus_dag.py
The exact error message was:
FileNotFoundError: [Errno 2] No such file or directory: 'models/fer+/resnet50_ferplus_dag.py'
I checked the pretrained-model link you added, but there were only .pth and .pkl files.
Can you give me some advice?
Hello!
I'm trying to implement my own pipeline to train ResNet-50 for VA on Aff-Wild2 only. I follow your idea of bucketizing the continuous values into a fixed number of classes and applying both a classification CE loss and a regression CCC loss, and I experimented with different weights for each.
I tried initialization from both fer+ and imagenet.
However, I've run into an issue: my validation loss increases after the first epoch (along with a decreasing CCC metric), even though I can successfully overfit my model on a single video from the training subset.
Did you face the same or a similar problem?
Would you mind providing some details on whether every frame of each video sequence is used, or how many frames were skipped during training/inference?
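For reference, the bucketization mentioned above (continuous V/A values in [-1, 1] digitized into a fixed number of classes, with `digitize_num = 20` in the repo's config) and the CCC metric can be sketched as follows. The uniform bin edges are my assumption, not necessarily the repo's exact scheme:

```python
import numpy as np

def digitize_va(values: np.ndarray, num_bins: int = 20) -> np.ndarray:
    """Map values in [-1, 1] to integer class ids in [0, num_bins - 1]."""
    edges = np.linspace(-1.0, 1.0, num_bins + 1)
    # Interior edges only, so digitize returns ids 0 .. num_bins - 1.
    return np.digitize(values, edges[1:-1])

def ccc(x: np.ndarray, y: np.ndarray) -> float:
    """Concordance correlation coefficient between predictions and targets."""
    vx, vy = x.var(), y.var()
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)
```

With this setup, CE is computed on the class ids while the CCC loss (1 - ccc) is computed on the expected value of the softmax over bin centers, which is one common way of combining the two objectives.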
I want to test on my own video, so I tried emotion_demo.py and modified the content of video_file. But I cannot find the three text files: AU.txt, EXPR.txt, and VA.txt. Where can I find or download them?
Thank you so much for your sharing.
I have a question about creating the aligned images of the ExpW dataset.
In ExpW, several people appear in the same image in some cases, so the annotation file contains multiple labels under the same image name. When I looked at 'create_annotation_file/ExpW/create_annotations.py', I found that in this case I could only get an image of one out of the several people. As a result, the image directory has only 68,096 images, while the annotation file has 91,793 instances. I wonder whether the result in the paper comes from applying the script as-is, or from a modified version.
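A hedged sketch of the mismatch described above: if annotation rows are stored in a plain dict keyed by image name, images with several faces keep only one row; grouping rows into lists keeps every face instance. The tuple fields below are an assumption about the ExpW annotation format, not its exact layout:

```python
from collections import defaultdict

def group_annotations(rows):
    """rows: iterable of (image_name, face_box, label) tuples.

    Returns a dict mapping image_name -> list of (face_box, label),
    so multiple faces in one image are all preserved.
    """
    by_image = defaultdict(list)
    for image_name, face_box, label in rows:
        by_image[image_name].append((face_box, label))
    return by_image
```

Iterating over all grouped entries (rather than one per key) would yield 91,793 crops instead of 68,096 images.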
Hi, I am only interested in VA. Please tell me: should I download all of the datasets (Aff-wild2, AFEW-VA, DISFA and ExpW) before starting training?
Hello, run_pretrained_model.py requires loading "resnet50_ferplus_dag.pth" or "resnet50_face_sfew_dag.pth". Where can these two pretrained models be downloaded? I couldn't find a link online; could you provide one?
Hello Didan,
I'm trying to reproduce the training of the student CNN models based on the provided teacher model (which seems easier than training everything from scratch). However, I'm unable to reach the same level as your results on the Aff-Wild2 validation set. Could you have a look to see if there are any misconfigured parameters?
The command to train is: python3.7 train.py --force_balance --name image_size_112_n_students_5_pretrained --image_size 112 --batch_size 64 --print_freq_s 240 --pretrained_teacher_model /home/mvu/Documents/pretrained/FG2020-ABAW/Multitask-CNN/net_epoch_teacher_id_resnet50.pth
My evaluation result:
Merged First method EXPR Validation 08:21: Eval_0 0.3389 Eval_1 0.4597 eval_res 0.3787
Merged First method AU Validation 08:21: Eval_0 0.1828 Eval_1 0.9453 eval_res 0.5640
Merged First method VA Validation 08:21: Eval_0 0.3517 Eval_1 0.4941 eval_res 0.8457
Merged Second method EXPR Validation 08:21: Eval_0 0.3386 Eval_1 0.4552 eval_res 0.3771
The best AU thresholds over models: [0.1823972 0.07081039 0.13026756 0.04179615 0.3001067 0.07159575 0.04400642 0.19260739]
Merged Second method AU Validation 08:21: Eval_0 0.3241 Eval_1 0.9305 eval_res 0.6273
Merged Second method VA Validation 08:21: Eval_0 0.3100 Eval_1 0.4623 eval_res 0.7724
The Aff-Wild2 dataset is the cropped, aligned version from the organizers. The training and evaluation logs are in the attachment:
multi-cnn-pretrained.log
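A hedged sketch of how the per-AU thresholds printed in the log above would be applied: each AU's sigmoid output is binarized with its own threshold before F1 is computed. The threshold values are from the log (rounded); the rest is illustrative, not the repo's exact evaluation code:

```python
import numpy as np

# Best per-AU thresholds from the log above, rounded to 4 decimals.
AU_THRESHOLDS = np.array([0.1824, 0.0708, 0.1303, 0.0418,
                          0.3001, 0.0716, 0.0440, 0.1926])

def binarize(probs: np.ndarray) -> np.ndarray:
    """probs: (N, 8) sigmoid outputs -> (N, 8) binary AU predictions."""
    return (probs >= AU_THRESHOLDS).astype(int)

def macro_f1(pred: np.ndarray, true: np.ndarray) -> float:
    """Unweighted mean of per-AU F1 scores."""
    f1s = []
    for j in range(pred.shape[1]):
        tp = int(((pred[:, j] == 1) & (true[:, j] == 1)).sum())
        fp = int(((pred[:, j] == 1) & (true[:, j] == 0)).sum())
        fn = int(((pred[:, j] == 0) & (true[:, j] == 1)).sum())
        f1s.append(2 * tp / (2 * tp + fp + fn) if tp else 0.0)
    return float(np.mean(f1s))
```

Since the thresholds are tuned on the validation set, a mismatch in how they are selected or applied could easily explain part of the gap between runs.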