
njunlp / gts


Code and data for the paper "Grid Tagging Scheme for Aspect-oriented Fine-grained Opinion Extraction", including aspect-opinion pair datasets and aspect triplet datasets.

License: Apache License 2.0

Python 100.00%

gts's People

Contributors

gillesj, wuzhen247


gts's Issues

predict.py

If so, how long will it take to add a predict.py for the OPE and OTE tasks?

Clarification on the source of the provided BERT model.

I am using this code to run some experiments on new ABSA data and will hopefully test new transformer models.

I was wondering whether the bert-base-uncased model weights provided in the README are the same model as the one released by Google here: https://github.com/google-research/bert.

I just want to make sure the provided model has not been fine-tuned on the task or on the Pontiki et al. SemEval ABSA datasets.
I assume it is the vanilla bert-base-uncased model, as the paper does not mention fine-tuning.

In future experiments I will probably use the huggingface/transformers model downloader to fetch pretrained models and tokenizers automatically instead of manually specifying paths.
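
For reference, a minimal sketch of that approach, pulling the vanilla bert-base-uncased checkpoint from the Hugging Face hub rather than pointing at local weight files (the model name and calls are the public transformers API, nothing specific to this repo):

    # Sketch: download the vanilla bert-base-uncased weights and tokenizer
    # via huggingface/transformers instead of specifying local paths.
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased")

    # Quick sanity check of the WordPiece tokenization.
    print(tokenizer.tokenize("The sushi was great but the service was slow."))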

Thank you for sharing this research code; it really helps a lot!

Some confusion about how data.py sets the aspect/opinion intersection positions to -1

In data.py, the tag-grid cells where an aspect span and an opinion span intersect are first set to -1:

        for al, ar in aspect_span:
            for pl, pr in opinion_span:
                for i in range(al, ar+1):
                    for j in range(pl, pr+1):
                        # map word indices to their BERT sub-token ranges
                        sal, sar = self.token_range[i]
                        spl, spr = self.token_range[j]
                        # first mark the whole sub-token block as -1
                        self.tags[sal:sar+1, spl:spr+1] = -1

I understand this initial -1 assignment as a way of excluding the "##"-prefixed sub-tokens produced by the BERT tokenizer. However, the paper designs all of its features in the upper half of the tag table, and this code does not rule out the case where the aspect appears later in the sentence than the opinion, while the code that follows

                            if i > j:
                                self.tags[spl][sal] = sentiment2id[triple['sentiment']]
                            else:
                                self.tags[sal][spl] = sentiment2id[triple['sentiment']]

only writes to the upper half of the table. Is this a bug, or is my understanding wrong?

json

Could the authors explain how the data was converted into the JSON format?
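
For later readers, a minimal sketch of writing sentence-level annotations to JSON with the standard library; the field names (sentence, triples, target_tags, opinion_tags, sentiment) are assumptions modelled on common GTS-style data files, not a confirmed description of this repo's exact schema:

    import json

    # Hypothetical example record; the exact schema used by this repo may differ.
    records = [
        {
            "id": "0",
            "sentence": "The sushi was great but the service was slow .",
            "triples": [
                {
                    "target_tags": "The\\O sushi\\B was\\O great\\O but\\O the\\O service\\O was\\O slow\\O .\\O",
                    "opinion_tags": "The\\O sushi\\O was\\O great\\B but\\O the\\O service\\O was\\O slow\\O .\\O",
                    "sentiment": "positive",
                },
            ],
        },
    ]

    with open("train.json", "w", encoding="utf-8") as f:
        json.dump(records, f, ensure_ascii=False, indent=2)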

Cite and name triplet datasets

I am about to submit a manuscript in which I ran GTS on my own data.
I benchmarked against the combined dataset from your GTS work, which merges the SemEval triplet sub-datasets into one large set.

I describe how the triplet dataset is obtained by taking the pair annotations of Fan et al. 2019 and adding the corresponding sentiment polarity, as in Wu et al. 2020. Right now I refer to the benchmark triplet dataset as Wu et al. 2019, as this seemed apt, but I see you have fairly recently changed the preferred reference and naming, which is quite confusing.

How should I refer to the triplet dataset(s) used in this repo under /data? ASTE V1, ASTE V2, or indeed GTS (Wu et al. 2020)?

How to understand the GTS tags where non-first aspect/opinion words are set to -1?

Hello! Recently I have been running experiments related to GTS tags, and I noticed that you set the tags of non-first aspect/opinion words to -1, which puzzles me. In my opinion, they should keep their initial label values, where aspect corresponds to 1 and opinion corresponds to 2. A screenshot of the relevant code is shown below.
I hope you can find the time to answer my question. Thank you.
[screenshot of the relevant code]
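
For context, one common reason for a -1 value in a tag grid is to mask those cells out of the loss computation (for example, cells covering non-first sub-tokens); whether that is the intent here would need confirmation from the authors. A minimal sketch of that pattern, with purely illustrative shapes and names:

    import torch
    import torch.nn as nn

    num_classes = 6
    logits = torch.randn(4, 4, num_classes)       # model scores for a 4x4 tag grid
    gold = torch.randint(0, num_classes, (4, 4))  # gold tags
    gold[0, 1] = -1                               # this cell is ignored by the loss

    loss_fn = nn.CrossEntropyLoss(ignore_index=-1)
    loss = loss_fn(logits.view(-1, num_classes), gold.view(-1))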

aspect term, opinion term

(Aspect term, opinion term) pairs and (aspect term, opinion term, sentiment polarity) triplets are not printed for the test.json data; only the metrics are printed. The code appears to be incomplete.

Performance with five repeats

The experimental results in your article are the average of 5 repeats, but there is no trace of those 5 repeats in the code. I tried your model and found that the performance is very unstable from run to run, with extremely large variance. What could be the reason for this?

For example, on the res14 dataset, the result is sometimes F1 = 0.70368 and sometimes F1 = 0.41351.
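
For reference, a minimal sketch of averaging F1 over five runs with different random seeds; train_and_evaluate is a hypothetical stand-in for the repo's training entry point, not a function it actually provides:

    import random

    import numpy as np
    import torch

    def set_seed(seed):
        random.seed(seed)
        np.random.seed(seed)
        torch.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)

    f1_scores = []
    for seed in range(5):
        set_seed(seed)
        # Hypothetical helper: train on res14 and return the test F1.
        f1_scores.append(train_and_evaluate(dataset="res14", seed=seed))

    print("mean F1:", np.mean(f1_scores), "std:", np.std(f1_scores))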

test data and BIO tagging

Hi, I have a question.

Is it possible to extract triplets by feeding in a raw sentence without BIO tagging?

Please reply when you can. Thank you.
