Comments (7)
Hi there, thanks for your interest in our paper.
|C| corresponds to the number of emotions and we used label names like anger, fear, joy, etc. Yes, it's fixed.
Hope this helps
from spanemo.
Thank you!
Yes, this helps the model assign a high probability to highly correlated emotions and a low probability to less correlated ones. If you'd like stronger penalization, you can also follow the implementation of the original paper, which negates the output of each comparison before applying the exp function; this would have a similar effect to the abs function. In our case, since we trained the LCA loss jointly with cross-entropy, we decided to go that way, and this appears to work pretty well. Hope this helps :)
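For concreteness, here is a minimal sketch of a pairwise LCA-style penalty (my own illustration, not the repository's code), assuming sigmoid scores `y` and a binary target vector `t`:

```python
import numpy as np

def lca_loss(y, t):
    """Pairwise label-correlation penalty (illustrative sketch).

    y: predicted scores, shape (|C|,)
    t: binary targets, shape (|C|,); 1 = label present
    Penalises every (negative, positive) pair in which the
    negative label outscores the positive one.
    """
    pos = y[t == 1]  # scores of labels that should be on
    neg = y[t == 0]  # scores of labels that should be off
    if len(pos) == 0 or len(neg) == 0:
        return 0.0
    # exp(y_neg - y_pos): large when a negative label scores above a positive one
    diffs = neg[:, None] - pos[None, :]
    return np.exp(diffs).mean()

y = np.array([0.9, 0.8, 0.1])  # predictions for three emotions
t = np.array([1, 1, 0])        # first two are the gold labels
print(lca_loss(y, t))          # small: the negative label already scores low
```

Swapping the predictions so that the negative label outscores the positives makes the loss grow, which is the behaviour the joint training relies on.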
Thank you! I will go through the original paper and do some experiments!
I have another problem: I couldn't understand the LCA loss equation. To understand it, I read the original paper, titled 'Multi-Label Neural Networks with Applications to Functional Genomics and Text Categorization', but I still can't understand it.
The LCA loss (Eq (3)) is as follows:

L_LCA = 1 / (|E0| * |E1|) * sum over (p, q) in E0 x E1 of exp(y_p - y_q),

where E0 is the set of negative labels, E1 the set of positive labels, and y_p, y_q are the predictions for a negative and a positive label, respectively.
I think this equation penalizes the case where y_p is much bigger than y_q, but does not penalize the case where y_p is much smaller than y_q. I don't think the LCA loss makes sense.
I would appreciate it if you could explain the LCA loss!
Yes, we want y_p to be smaller than y_q. Eq (3) compares a pair of emotions, where each is obtained from the positive (pos) and negative (neg) set, respectively. The number of comparisons is equal to the number of emotions in the pos set times the number in the neg set. This helps penalise the model when it predicts labels that shouldn't co-exist. As for how we obtained the label-label correlations: we extract label co-occurrences from the data. I'd suggest you go over our implementation of the loss and then provide it with a few examples; I think that would help you a lot.
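Following that suggestion, a quick worked example (illustrative values only, not taken from the repository) shows both the number of comparisons and how the penalty reacts when a negative label outscores the positives:

```python
import numpy as np

# Hypothetical scores for four emotions; the first two are the gold labels.
y = np.array([0.7, 0.6, 0.3, 0.2])
pos_idx, neg_idx = [0, 1], [2, 3]

# One comparison per (negative, positive) pair: |neg| * |pos| pairs in total.
pairs = [(p, q) for p in neg_idx for q in pos_idx]
terms = [np.exp(y[p] - y[q]) for p, q in pairs]
print(len(pairs))               # 4 comparisons
print(sum(terms) / len(terms))  # small: negatives score below positives

# Flip the predictions so one negative label outscores the positives:
y_bad = np.array([0.2, 0.3, 0.9, 0.1])
terms_bad = [np.exp(y_bad[p] - y_bad[q]) for p, q in pairs]
print(sum(terms_bad) / len(terms_bad))  # noticeably larger penalty
```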
Thank you for replying!!! I have read your implementation. Maybe I haven't made my question clear.
y_p being smaller than y_q means the probability of the negative label is smaller than that of the positive one. Does this give the model a tendency to assign a high probability to positive labels and a low probability to negative ones? I don't see why the equation works. Wouldn't exp(abs(y_p - y_q)) work better?
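To see how the two candidates differ (my own comparison, not from the paper): exp(y_p - y_q) leaves correctly ordered pairs almost unpenalised, whereas exp(abs(y_p - y_q)) also penalises pairs that are already well separated in the right direction:

```python
import math

# d = y_p - y_q, the (negative - positive) score gap for one pair
for d in [-0.8, 0.0, 0.8]:
    signed = math.exp(d)          # near zero when the ordering is correct (d < 0)
    symmetric = math.exp(abs(d))  # penalises a large gap in either direction
    print(f"d={d:+.1f}  exp(d)={signed:.3f}  exp(|d|)={symmetric:.3f}")
```

So the abs variant would actively punish a positive label for scoring well above a negative one, which is exactly the outcome the loss is supposed to encourage.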