rungjoo / compm
CoMPM: Context Modeling with Speaker's Pre-trained Memory Tracking for Emotion Recognition in Conversation (NAACL 2022)
Sorry to disturb you. I am having trouble with "CoMPM: Context Modeling with Speaker's Pre-trained Memory Tracking for Emotion Recognition in Conversation".
When I run the command "python train.py", it reports the following issue: "MELD_models/roberta-large/pretrained/no_freeze/emotion/1.0/model.bin" cannot be found. The model.bin file does not exist.
Could you please help me solve it? I want to reproduce your work based on your code.
Many thanks.
Best wishes.
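In the meantime, a minimal sketch (assuming the checkpoint path from the error message) of failing fast when the checkpoint file is missing:

```python
# A minimal sketch, assuming the checkpoint path from the error message:
# check that the file exists before loading, so the failure is explicit.
import os
import torch

ckpt_path = "MELD_models/roberta-large/pretrained/no_freeze/emotion/1.0/model.bin"
if not os.path.exists(ckpt_path):
    raise FileNotFoundError(
        f"{ckpt_path} is missing; train first or download the released checkpoint."
    )
state_dict = torch.load(ckpt_path, map_location="cpu")
```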
The paper explains the CoMPM figure on page 4 and states:
Figure 2: Our model consists of two modules: a context embedding module and a pre-trained memory module. The figure shows an example of predicting the emotion of u_6 from a 6-turn dialogue context. A, B, and C refer to the participants in the conversation, where s_A = p_{u_1} = p_{u_3} = p_{u_6}, s_B = p_{u_2} = p_{u_5}, s_C = p_{u_3}. W_o and W_p are linear matrices.
I believe s_C = p_{u_3} should be s_C = p_{u_4}: u_3 is already assigned to speaker A, and u_4 is the only utterance not yet assigned to any speaker.
I encountered the warning below:
/root/miniconda3/lib/python3.8/site-packages/sklearn/metrics/_classification.py:1471:
UndefinedMetricWarning: Precision and F-score are ill-defined and being set to 0.0 in labels with no predicted samples. Use zero_division
parameter to control this behavior.
_warn_prf(average, modifier, msg_start, len(result))
But the accuracy is still printed, and training seems to continue. I am wondering: will this warning affect training?
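This warning means some labels were never predicted in that evaluation pass, so their precision is set to 0.0 (as the message itself says). A small sketch with hypothetical labels that reproduces it, and silences it via sklearn's zero_division parameter:

```python
# Hypothetical labels reproducing the warning: class 2 is never predicted,
# so its precision is undefined. Passing zero_division=0 makes the 0.0
# substitution explicit and suppresses the warning.
from sklearn.metrics import f1_score

y_true = [0, 1, 2, 2]
y_pred = [0, 1, 1, 1]  # no sample is ever predicted as class 2

score = f1_score(y_true, y_pred, average="weighted", zero_division=0)
print(score)
```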
I clicked the link, but the message is as below:
"404. An error occurred. The requested URL could not be found on the server. No other cause could be identified."
Could you fix this problem?
Contextualized Emotion Recognition in Conversation as Sequence Tagging (CESTa) seems to be an influential article in the ERC area. I wonder why you didn't compare with CESTa on the DailyDialog dataset. Is there any specific reason?
I am reading your code, and I found that batch_speaker_tokens is not put on CUDA. (The code is as below:)
batch_input_tokens, batch_labels, batch_speaker_tokens = data
batch_input_tokens, batch_labels = batch_input_tokens.cuda(), batch_labels.cuda()
However, I found that batch_speaker_tokens is a list containing only one tensor. I am quite confused: should we remove the list and just use the tensor as batch_speaker_tokens? That way, the speaker tokens could be accelerated on the GPU.
By the way, I am wondering why you put a list around the tensor in batch_speaker_tokens. I think there might be a reason for doing so.
Many thanks.
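If the list is intentional (for example, because the number of speaker utterances varies per sample, so the tensors cannot be stacked), a minimal sketch (not the authors' code) of moving each element to the GPU while keeping the list structure:

```python
def batch_to_cuda(data):
    """Sketch: move one training batch to the GPU, assuming the unpacking
    used in the snippet above (not the repo's actual code)."""
    batch_input_tokens, batch_labels, batch_speaker_tokens = data
    batch_input_tokens = batch_input_tokens.cuda()
    batch_labels = batch_labels.cuda()
    # batch_speaker_tokens is a list of tensors, so .cuda() cannot be called
    # on the list directly; move each tensor individually instead.
    batch_speaker_tokens = [tokens.cuda() for tokens in batch_speaker_tokens]
    return batch_input_tokens, batch_labels, batch_speaker_tokens
```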
I don't see any operations on attention_mask; does that mean the RoBERTa model will set all attention_mask values to 1?
And what if all the previous utterances together exceed 512 tokens?
I know the max input token size of RoBERTa-large is 512. [link]
And in your paper, I could find this:
"The context embedding module (CoM) reflects all previous utterances as context.
...
We use a Transformer encoder as a context model (such as RoBERTa)."
If you ran into that kind of problem, did you use a sliding window or something similar?
Thank you
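For anyone hitting the length limit, a sketch under assumptions (I don't know whether this matches the authors' implementation, and the separator formatting is assumed) of keeping only the most recent tokens when the concatenated dialogue history exceeds RoBERTa's 512-token limit:

```python
# A sketch under assumptions (not confirmed to be the authors' approach):
# truncate the concatenated context to the most recent tokens.
from transformers import RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-large")

previous_utterances = ["Hello!", "Hi, how are you?", "Great, thanks."]  # hypothetical
context = " </s> ".join(previous_utterances)  # assumed separator formatting

input_ids = tokenizer(context)["input_ids"]
max_len = tokenizer.model_max_length  # 512 for roberta-large
if len(input_ids) > max_len:
    # keep the leading <s> token plus the last (max_len - 1) history tokens
    input_ids = [input_ids[0]] + input_ids[-(max_len - 1):]
```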
Hi,
Do you know which version of CUDA the code was built with? The prerequisites list PyTorch 1.8 but don't say which CUDA version. I am finding it very difficult to run because I couldn't find a torch 1.8 build anywhere. I'd appreciate your help!
Hi,
I am working on sentiment analysis of dyadic conversations. Can I use your work for this purpose?
I understand that this work helps with utterance-level sentiment/emotion recognition in conversations. From there, if I want to recognize the sentiment/emotion of the whole conversation, are you aware of any techniques that might help?
Thank you so much for your help!
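One naive baseline (my assumption, not something from the paper): aggregate the per-utterance predictions into a single conversation-level label by majority vote.

```python
# A naive baseline sketch (not from the paper): majority vote over the
# utterance-level emotion predictions to label the whole conversation.
from collections import Counter

utterance_predictions = ["joy", "neutral", "joy", "anger"]  # hypothetical model outputs
conversation_label = Counter(utterance_predictions).most_common(1)[0][0]
print(conversation_label)  # -> joy
```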
Your work is interesting and impressive: it doesn't introduce external knowledge and depends only on CoM and PM.
I am trying to reproduce your result on the MELD dataset, i.e., the reported F1 score of 66.52.
Is running the command "python test.py" correct, or do I need to set up other experiment options?
Could you please give me a hand? Many thanks.
When I run the command "python train.py --initial {pretrained} --cls {emotion} --dataset {MELD}", it reports the following error: "UnboundLocalError: local variable 'DATA_loader' referenced before assignment".
How can I fix it?
Many thanks.
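A hypothetical reconstruction (not the repo's actual code) of the pattern that typically produces this error: the loader variable is assigned only inside dataset-specific branches, so an unmatched --dataset value leaves it unbound. One possible cause, if the braces in the command are README placeholders, is passing the literal string "{MELD}" instead of MELD.

```python
# Hypothetical sketch of the failure mode, with made-up placeholder loaders.
def build_loader(dataset: str):
    if dataset == "MELD":
        DATA_loader = "meld_dataloader"    # stand-in for the real DataLoader
    elif dataset == "EMORY":
        DATA_loader = "emory_dataloader"   # stand-in for the real DataLoader
    else:
        # Without this branch, an unrecognized value such as "{MELD}" falls
        # through, and `return DATA_loader` raises:
        # UnboundLocalError: local variable 'DATA_loader' referenced before assignment
        raise ValueError(f"Unknown dataset: {dataset!r}")
    return DATA_loader

build_loader("MELD")
```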
Hi,
I am trying to reproduce your results and am getting the following error:
Any idea what could be causing this error?
Note that I have the latest version of torch (2.0.1), not 1.8, because I couldn't find a 1.8 build anywhere.
Also, what does the warning about RobertaModel not being initialized from the checkpoint mean?
Thank you for your help!