ruidan / aspect-level-sentiment

Code and dataset for ACL2018 paper "Exploiting Document Knowledge for Aspect-level Sentiment Classification"

License: Apache License 2.0

Python 98.71% Shell 1.29%
aspect-based-sentiment-analysis knowledge-transfer

aspect-level-sentiment's Introduction

Aspect-level Sentiment Classification

Code and dataset for ACL2018 [paper] "Exploiting Document Knowledge for Aspect-level Sentiment Classification".

Data

The preprocessed aspect-level datasets can be downloaded at [Download], and the document-level datasets can be downloaded at [Download]. The zip files should be decompressed and put in the main folder.

The pre-trained GloVe vectors (trained on 840B tokens) are used to initialise word embeddings. You can download the extracted subset of GloVe vectors for each dataset at [Download], which is much smaller. The zip file should be decompressed and put in the main folder.
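For illustration, here is a minimal sketch (not the repository's code) of turning such a GloVe text file into an embedding matrix that can initialise a Keras Embedding layer; the file name and the small vocabulary dictionary are hypothetical placeholders:

# Minimal sketch (not the repository's code): build an embedding matrix from
# a GloVe text file to initialise a Keras Embedding layer. 'glove.txt' and
# the vocab dict below are hypothetical placeholders.
import numpy as np

emb_dim = 300
vocab = {'<pad>': 0, 'food': 1, 'service': 2}          # word -> index (normally built from the dataset)

emb_matrix = np.random.uniform(-0.25, 0.25, (len(vocab), emb_dim))
emb_matrix[0] = 0.0                                     # keep the padding row at zero

with open('glove.txt') as f:                            # each line: word v1 v2 ... v300
    for line in f:
        parts = line.rstrip().split(' ')
        word, vec = parts[0], parts[1:]
        if word in vocab and len(vec) == emb_dim:
            emb_matrix[vocab[word]] = np.asarray(vec, dtype='float32')

# The matrix can then be passed to Embedding(..., weights=[emb_matrix]) in Keras.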

Training and evaluation

Pretraining on document-level dataset

The pretrained weights from document-level examples used in our experiments are provided at pretrained_weights/. You can use them directly for initialising aspect-level models.

Alternatively, if you want to retrain on the document-level datasets, execute the command below under code_pretrain/:

CUDA_VISIBLE_DEVICES="0" python pre_train.py \
--domain $domain

where $domain in ['yelp_large', 'electronics_large'] denotes the corresponding document-level domain. The trained model parameters will be saved under pretrained_weights/. You can find more arguments defined in pre_train.py with default values used in our experiments.
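As a rough illustration of how such saved weights can initialise an aspect-level model (this is not the repository's exact code; the file name and layer names below are hypothetical), Keras can load weights by layer name into a model whose shared layers match:

# Rough sketch (not the repository's exact code): initialise the shared layers
# of an aspect-level Keras model from weights saved under pretrained_weights/.
# The file name and layer names are hypothetical; with by_name=True only
# layers whose names match the saved file are initialised.
from keras.layers import Dense, Embedding, Input, LSTM
from keras.models import Model

inp = Input(shape=(80,), dtype='int32', name='word_ids')
h = Embedding(10000, 300, name='shared_embedding')(inp)
h = LSTM(300, name='shared_lstm')(h)
out = Dense(3, activation='softmax', name='aspect_output')(h)
aspect_model = Model(inputs=inp, outputs=out)

# Load only the layers whose names appear in the saved weight file.
aspect_model.load_weights('../pretrained_weights/yelp_large.h5', by_name=True)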

Training and evaluation on aspect-level dataset

To train the aspect-level sentiment classifier, execute the command below under code/:

CUDA_VISIBLE_DEVICES="0" python train.py \
--domain $domain \
--alpha 0.1 \
--is-pretrain 1

where $domain in ['res', 'lt', 'res_15', 'res_16'] denotes the corresponding aspect-level domain. --alpha denotes the weight of the document-level training objective (lambda in the paper). --is-pretrain is set to either 0 or 1, denoting whether to use pretrained weights from document-level examples for initialisation. You can find more arguments defined in train.py with the default values used in our experiments. At the end of each epoch, results on the training, validation and test sets are printed respectively.
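For intuition, here is a minimal sketch (not the paper's exact architecture; shapes and layer sizes are illustrative) of how a weight like --alpha can scale a document-level objective relative to the aspect-level one in a two-output Keras model:

# Minimal sketch (not the paper's exact architecture): a two-output Keras
# model where the document-level loss is scaled by alpha (lambda in the
# paper) relative to the aspect-level loss. Shapes and sizes are illustrative.
from keras.layers import Dense, Embedding, Input, LSTM
from keras.models import Model

alpha = 0.1                                             # value passed via --alpha

inp = Input(shape=(80,), dtype='int32')
shared = LSTM(300)(Embedding(10000, 300)(inp))          # layers shared by both tasks
aspect_out = Dense(3, activation='softmax', name='aspect')(shared)
doc_out = Dense(3, activation='softmax', name='doc')(shared)

model = Model(inputs=inp, outputs=[aspect_out, doc_out])
model.compile(optimizer='adam',
              loss={'aspect': 'categorical_crossentropy',
                    'doc': 'categorical_crossentropy'},
              loss_weights={'aspect': 1.0, 'doc': alpha})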

Dependencies

  • Python 2.7
  • Keras 2.1.2
  • tensorflow 1.4.1
  • numpy 1.13.3

Cite

If you use the code, please cite the following paper:

@InProceedings{he-EtAl:2018,
  author    = {He, Ruidan  and  Lee, Wee Sun  and  Ng, Hwee Tou  and  Dahlmeier, Daniel},
  title     = {Exploiting Document Knowledge for Aspect-level Sentiment Classification},
  booktitle = {Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics},
  year      = {2018},
  publisher = {Association for Computational Linguistics}
}

aspect-level-sentiment's People

Contributors

ruidan


aspect-level-sentiment's Issues

Missing a few training examples?

Thanks for sharing.
From the preprocessed data, I noticed that the counts of examples (from my script) do not match those reported in the paper.

For example, the counts for the SemEval 2014 training data are:
lt Counter({'positive': 987, 'negative': 866, 'neutral': 460})
res Counter({'positive': 2164, 'negative': 805, 'neutral': 633})

Did I make any mistake?

Error when running the code

Hi,

I got this error when trying to run the train.py file:

TypeError: add_weight() got multiple values for keyword argument 'name'

Any ideas, please?

Thank you.

About the precision

Hello, I modified the code to run with Python 3 and it runs, but I can't get the same precision you report in the paper: the precision was only 50%. After adjusting the learning rate it improved to 67%, but there is still a gap compared with your experiments. I used your preprocessed data. Do you have any idea how to improve the precision? Thank you!

Predicting output for new sentences with the new model

Hi,
I tried retraining the model; it trained and showed the loss at every epoch, but at the end of training it didn't save anything (a model or word vector file) that I could use for further predictions. Also, is there a way to use the model to find the sentiment of an input sentence?
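For reference, a generic Keras sketch (not part of this repository; the stand-in model, file name and sequence length are hypothetical) of saving a trained model and reloading it to predict on a new, already-encoded sentence:

# Generic Keras sketch (not from this repository): save a trained model and
# reload it to predict on a new, already-preprocessed sentence. The stand-in
# model, file name, vocabulary size and sequence length are hypothetical.
import numpy as np
from keras.layers import Dense, Embedding, GlobalAveragePooling1D, Input
from keras.models import Model, load_model

inp = Input(shape=(80,), dtype='int32')                 # padded word-index sequence
h = Embedding(input_dim=10000, output_dim=300)(inp)
h = GlobalAveragePooling1D()(h)
out = Dense(3, activation='softmax')(h)                 # e.g. negative / neutral / positive
model = Model(inputs=inp, outputs=out)
model.compile(optimizer='adam', loss='categorical_crossentropy')

model.save('aspect_model.h5')                           # persist architecture + weights
restored = load_model('aspect_model.h5')                # restore later for prediction

x = np.zeros((1, 80), dtype='int32')                    # one encoded sentence
print(restored.predict(x).argmax(axis=-1))              # predicted class index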
