
lsan's Introduction

This is the code for our paper "Label-Specific Document Representation for Multi-Label Text Classification".
If you use this code or the LSAN algorithm in your work, please cite the following paper:

@inproceedings{xiao2019label,
title={Label-Specific Document Representation for Multi-Label Text Classification},
author={Xiao, Lin and Huang, Xin and Chen, Boli and Jing, Liping},
booktitle={Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)},
pages={466--475},
year={2019}
}

Requirements: Ubuntu 16.04;
Python 3.6.7;
PyTorch 1.0.0;
mxnet 1.3.1;

Reproducibility: We provide the processed AAPD dataset; place it in the folder ./data/

Train: python classification.py

Processed data download: https://drive.google.com/file/d/1QoqcJkZBHsDporttTxaYWOM_ExSn7-Dz/view

lsan's People

Contributors

emnlp2019lsan


lsan's Issues

eur-lex training parameter

Hi Lin, here are some problems with the LSAN code; I need your help.

  1. The code lacks a mask mechanism.
  2. Results on the EUR-Lex dataset reproduce poorly; does it need more training epochs or extra parameter settings?
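Regarding the missing mask mechanism: the usual fix is to mask padded positions in the attention logits before the softmax, so padding tokens receive zero attention weight. A minimal sketch (the shapes and function name here are illustrative, not taken from the repo):

```python
import torch

def masked_attention(scores, lengths):
    """Mask padded positions before softmax so they get zero attention weight.

    scores:  (batch, n_heads, seq_len) raw attention logits
    lengths: (batch,) true (unpadded) sequence lengths
    """
    batch, _, seq_len = scores.shape
    # positions at index >= length are padding
    pad = torch.arange(seq_len).unsqueeze(0) >= lengths.unsqueeze(1)  # (batch, seq_len)
    scores = scores.masked_fill(pad.unsqueeze(1), float("-inf"))
    return torch.softmax(scores, dim=-1)

scores = torch.zeros(2, 1, 4)          # uniform logits for illustration
lengths = torch.tensor([2, 4])          # first sequence has 2 real tokens
attn = masked_attention(scores, lengths)
# attn[0, 0] -> [0.5, 0.5, 0.0, 0.0]: padding gets zero weight, rows still sum to 1
```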

What is the meaning of d_a?

def __init__(self, batch_size, lstm_hid_dim, d_a, n_classes, label_embed, embeddings):
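For context: d_a is most likely the hidden size of the two-layer attention MLP, as in the structured self-attention of Lin et al. (2017), which LSAN's word-level attention follows. A minimal sketch, with layer names and shapes assumed for illustration rather than copied from the repo:

```python
import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    """Two-layer attention MLP; d_a is the inner hidden size, a tunable
    bottleneck between the BiLSTM states and the per-label attention heads."""

    def __init__(self, lstm_hid_dim, d_a, n_classes):
        super().__init__()
        self.linear_first = nn.Linear(2 * lstm_hid_dim, d_a)   # project BiLSTM states down to d_a
        self.linear_second = nn.Linear(d_a, n_classes)         # one attention head per label

    def forward(self, h):                       # h: (batch, seq_len, 2 * lstm_hid_dim)
        a = torch.tanh(self.linear_first(h))    # (batch, seq_len, d_a)
        a = self.linear_second(a)               # (batch, seq_len, n_classes)
        return torch.softmax(a, dim=1)          # normalize over sequence positions

attn = SelfAttention(lstm_hid_dim=300, d_a=200, n_classes=54)
weights = attn(torch.randn(2, 10, 600))
# weights: (2, 10, 54); each label's attention column sums to 1 over positions
```

So d_a only controls the capacity of this attention bottleneck; it does not have to match the LSTM or embedding dimensions.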

Averaging label representations

Hi,

Thanks for this work, it's an interesting idea.
I am a bit confused about the final averaging step in your forward method though, this line:

avg_sentence_embeddings = torch.sum(doc, 1)/self.n_classes

Doesn't averaging all the label representations undo the effect of the adaptive fusion? Your code comment says a linear layer could be used here instead of averaging; can you explain more? Simply concatenating the label representations would require an output layer of shape (batch, num_classes * emb_dim * 2, num_classes), which would not fit in GPU memory for any reasonable number of classes. Would a max-pooling step help here? Curious to hear your thoughts.
Cheers.
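The alternatives discussed in this issue can be sketched side by side. Here `doc` stands for the label-specific document representations, with an assumed shape of (batch, n_classes, dim); the third variant (a shared per-label scoring vector) is a hypothetical option not present in the repo:

```python
import torch

batch, n_classes, dim = 4, 54, 600
doc = torch.randn(batch, n_classes, dim)  # label-specific document representations

# averaging over the label axis, as in the repo's forward():
avg = torch.sum(doc, 1) / n_classes        # (batch, dim)

# max pooling, as suggested in the issue: keep the strongest activation per feature
pooled, _ = torch.max(doc, dim=1)          # (batch, dim)

# hypothetical alternative: score each label from its OWN representation with a
# shared weight vector, avoiding both the averaging and the huge concat layer
w = torch.randn(dim)
logits = doc @ w                           # (batch, n_classes)
```

The third variant keeps the label axis intact all the way to the logits, which is one way to preserve the adaptive fusion without the memory cost of concatenation.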

Hope to get full model code

Hi! I like your work; can you push the full model code? I checked the model code and found that it is incomplete.

Data

Your method is very inspiring to me.
Can you provide other data sets?
Appreciate it!

F1 too low?

I added F1 to the evaluation code and found that the model plateaued at a maximum micro-F1 of 15%. Can you look into this issue?
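For anyone reproducing this, a self-contained way to compute micro-F1 for multi-label predictions (the 0.5 threshold is an assumption; the repo itself does not ship this metric):

```python
import numpy as np

def micro_f1(probs, targets, threshold=0.5):
    """Micro-averaged F1 = 2*TP / (2*TP + FP + FN), pooled over all labels.

    probs:   (n_samples, n_classes) predicted probabilities
    targets: (n_samples, n_classes) binary ground-truth labels
    """
    preds = (probs >= threshold).astype(int)
    tp = np.sum((preds == 1) & (targets == 1))
    fp = np.sum((preds == 1) & (targets == 0))
    fn = np.sum((preds == 0) & (targets == 1))
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

probs = np.array([[0.9, 0.2], [0.6, 0.7]])
targets = np.array([[1, 0], [1, 1]])
micro_f1(probs, targets)  # -> 1.0 here, since thresholded predictions match exactly
```

Checking the metric implementation first helps rule out an evaluation bug before debugging the model itself.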
