
de-cnn's People

Contributors

howardhsu


de-cnn's Issues

How is the data organized in npz files?

Hello, thank you for sharing the code.
It seems that the training data was processed and saved in an .npz file. Would you please explain how the data is organized in the .npz file? I found that the shape of train_X is [2895, 83] and valid_X is [150, 83]. What is the exact meaning of these dimensions? Many thanks!
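For reference, a minimal sketch of how such an .npz file can be inspected with NumPy. The array names and shapes below mirror the ones mentioned in the issue, but the toy file and its contents are purely illustrative, not the repo's actual preprocessing output:

```python
import numpy as np

# Build a toy .npz shaped like the shapes reported in the issue
# (hypothetical array names): each row is one sentence encoded as a
# sequence of word indices, padded/truncated to a fixed length of 83.
train_X = np.zeros((2895, 83), dtype=np.int64)
valid_X = np.zeros((150, 83), dtype=np.int64)
np.savez("toy_prep.npz", train_X=train_X, valid_X=valid_X)

# Inspect the archive: list the stored arrays and their shapes.
data = np.load("toy_prep.npz")
for name in data.files:
    print(name, data[name].shape)
```

Under this reading, 2895 and 150 are the number of training and validation sentences, and 83 is the maximum sentence length, with shorter sentences typically padded with a reserved index such as 0.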

eval.jar and A.jar

Hello,
Sorry, I am not a coding expert, so maybe this is a simple question. Where do I find eval.jar and A.jar? I have downloaded the other files and placed them in the folder, but I don't know how to create the jar files or where to find them. Can you please guide me? Thanks!

Using words in the test data to build the embedding matrix

As I understand it, we are not supposed to take any hints (or any information) from the test data when constructing the model. However, in your implementation, the words in the test set are also considered when word_idx.json is computed (as far as I can tell).

Can you please clarify this matter?
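For comparison, a hypothetical sketch of the leakage-free alternative: build the vocabulary from training sentences only and map unseen test-time words to a shared unknown index. All names here (build_word_idx, PAD, UNK) are illustrative, not the repo's identifiers:

```python
# Reserved indices (illustrative convention): 0 for padding, 1 for OOV.
PAD, UNK = 0, 1

def build_word_idx(train_sentences):
    """Build a word-to-index map from training sentences only."""
    word_idx = {"<pad>": PAD, "<unk>": UNK}
    for sent in train_sentences:
        for w in sent:
            word_idx.setdefault(w, len(word_idx))
    return word_idx

def encode(sentence, word_idx):
    """Encode any sentence; words never seen in training map to UNK."""
    return [word_idx.get(w, UNK) for w in sentence]

train = [["the", "screen", "is", "great"]]
idx = build_word_idx(train)
# "battery" never appeared in training, so it becomes UNK (1).
print(encode(["the", "battery", "is", "great"], idx))  # -> [2, 1, 4, 5]
```

With this scheme the embedding matrix has one row per training-vocabulary word plus the pad/UNK rows, and the test set contributes nothing to its construction.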

When I run evaluation.py, I get the following errors:

Error: Could not find or load main class Main.Aspects

Caused by: java.lang.ClassNotFoundException: Main.Aspects

subprocess.CalledProcessError: Command '['java', '-cp', 'script/eval.jar', 'Main.Aspects', 'data/official_data/pred.xml', 'data/official_data/Laptops_Test_Gold.xml']' returned non-zero exit status 1.

I do not know what is wrong here.
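Since a .jar file is just a zip archive, one quick sanity check (a hypothetical debugging sketch, not part of the repo) is to verify that eval.jar actually contains Main/Aspects.class before calling java; a ClassNotFoundException like the one above usually means the jar is missing, truncated, or does not contain that class. The toy jar built below stands in for script/eval.jar:

```python
import zipfile

def has_class(jar_path, class_name):
    """Return True if the jar (a zip archive) contains the given class."""
    entry = class_name.replace(".", "/") + ".class"
    with zipfile.ZipFile(jar_path) as jar:
        return entry in jar.namelist()

# Create a toy jar with a fake Main/Aspects.class entry for demonstration.
with zipfile.ZipFile("toy_eval.jar", "w") as jar:
    jar.writestr("Main/Aspects.class", b"\xca\xfe\xba\xbe")  # fake bytecode

print(has_class("toy_eval.jar", "Main.Aspects"))  # -> True
```

In practice you would point jar_path at script/eval.jar; if the check returns False, the downloaded jar is the wrong file or is corrupted, which would explain the exit status 1 from the subprocess call.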

F1 score replication

I ran the same code with the same hyperparameters but achieved an F1 score of 71.9 instead of the 74.37 reported in the paper. Can you please suggest what might have gone wrong?
Thanks in advance!
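A gap of a few F1 points can simply be run-to-run variance; a hedged first step when trying to replicate a reported score is to pin every RNG seed and average over several runs. A minimal sketch (the set_seed helper is illustrative, and framework-level seeding such as torch.manual_seed would also be needed in a real training script):

```python
import random
import numpy as np

def set_seed(seed):
    """Pin the Python and NumPy RNGs (framework RNGs omitted here)."""
    random.seed(seed)
    np.random.seed(seed)

# Reseeding makes random draws reproducible across runs.
set_seed(1337)
a = np.random.rand(3)
set_seed(1337)
b = np.random.rand(3)
print(np.allclose(a, b))  # identical draws after reseeding

# Averaging F1 over several seeded runs (toy numbers) gives a fairer
# comparison against a single reported figure.
f1_runs = [71.9, 73.1, 74.0]
print(sum(f1_runs) / len(f1_runs))
```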

Amazon laptop review dataset for training the embedding

Excuse me, I want to ask about the Amazon laptop dataset that you mention in your paper for training the domain embedding. How did you extract only the laptop reviews from the Amazon review dataset? Could you share an explanation or the code? Thanks in advance.
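One common recipe for the public Amazon review dump (an assumption about how it might be done, not confirmed from the paper) is to collect the ASINs whose metadata category path contains "Laptops", then keep only the reviews for those ASINs. The field names (asin, categories, reviewText) follow the public dump's schema; the in-memory lists below stand in for the real metadata and review JSON files:

```python
# Toy stand-ins for the metadata and review files of the Amazon dump.
meta = [
    {"asin": "A1", "categories": [["Electronics", "Computers", "Laptops"]]},
    {"asin": "A2", "categories": [["Electronics", "Camera"]]},
]
reviews = [
    {"asin": "A1", "reviewText": "great keyboard"},
    {"asin": "A2", "reviewText": "sharp lens"},
]

# Step 1: ASINs of products whose category path mentions "Laptops".
laptop_asins = {m["asin"] for m in meta
                if any("Laptops" in path for path in m["categories"])}

# Step 2: keep only reviews of those products.
laptop_reviews = [r["reviewText"] for r in reviews
                  if r["asin"] in laptop_asins]
print(laptop_reviews)  # -> ['great keyboard']
```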

Yelp dataset for training embeddings

Hello, I want to ask about the Yelp Dataset Challenge round 4 data used to train the embeddings.

  1. In the paper you said "We only use reviews from restaurant categories", and the second dataset is selected from Cuisines.wht. Do you filter by the restaurant category first and then filter based on the categories in the file Cuisines.wht, or do you do something else?
  2. Do you also filter by the restaurants' location, e.g. restaurants in New York only?
    Thank you
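To illustrate the first question, a hypothetical filtering sketch: keep a business only if it is tagged "Restaurants" and at least one of its categories appears in a cuisine whitelist (standing in for the contents of Cuisines.wht). No location filter is applied here; all data below is illustrative:

```python
# Stand-in for the whitelist loaded from Cuisines.wht.
cuisines = {"Italian", "Chinese", "Mexican"}

# Toy stand-ins for entries from the Yelp business.json file.
businesses = [
    {"business_id": "b1", "categories": ["Restaurants", "Italian"]},
    {"business_id": "b2", "categories": ["Restaurants", "Karaoke"]},
    {"business_id": "b3", "categories": ["Shopping"]},
]

# Keep businesses tagged "Restaurants" whose categories intersect the
# cuisine whitelist; reviews would then be filtered by these business ids.
keep = [b["business_id"] for b in businesses
        if "Restaurants" in b["categories"]
        and cuisines & set(b["categories"])]
print(keep)  # -> ['b1']
```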

Mask value in CRF

Hi,
Thanks for the code.
I wanted to know what x_mask is and how it is calculated when passed to the CRF.
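For context, a common convention (an assumption here, not confirmed from this repo's code) is that x holds padded word-index sequences with 0 as the pad id, and x_mask marks real tokens with 1 and padding with 0, so the CRF can ignore padded positions when accumulating emission and transition scores:

```python
import numpy as np

# Batch of 2 padded sentences of max length 5; 0 is the pad index.
x = np.array([[4, 9, 2, 0, 0],
              [7, 3, 5, 6, 1]])

# 1.0 over real tokens, 0.0 over padding.
x_mask = (x != 0).astype(np.float32)
print(x_mask)

# The mask also yields each sentence's effective length.
seq_lens = x_mask.sum(axis=1)
print(seq_lens)  # -> [3. 5.]
```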

Missing .jar files

Hi,

thanks for your great work!

Do you happen to know where one can download the two required .jar files?
I'm unable to download them from the official pages, which makes it impossible to evaluate my models :(

Am I doing something wrong, or looking in the wrong place?

Thanks in advance!

Another Language

First of all, thank you so much for your effort on this code. I want to ask whether we can use this code with another language, such as Arabic.

Thanks a lot,

CRF switch question

Hello, and thanks for open-sourcing this! I tried the CRF in your program, but the CRF loss value is very large. How is the CRF layer in your program meant to be used? Thanks!

Statistical significance

Hi, thanks for sharing. The paper mentions that the result is statistically significant at the 0.05 level. Can you explain how this significance was determined?
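The paper does not spell out the test; one common way to check significance at the 0.05 level for sequence labeling results is an approximate randomization (paired permutation) test over paired scores: randomly swap the two systems' scores and count how often the gap is at least as large as the observed one. A pure-stdlib sketch on toy numbers (all values illustrative):

```python
import random

def permutation_test(a, b, trials=10000, seed=0):
    """Estimate the p-value of the mean score gap between paired runs."""
    rng = random.Random(seed)
    observed = abs(sum(a) - sum(b)) / len(a)
    hits = 0
    for _ in range(trials):
        diff = 0.0
        for x, y in zip(a, b):
            if rng.random() < 0.5:
                x, y = y, x          # randomly swap this pair
            diff += x - y
        if abs(diff) / len(a) >= observed:
            hits += 1
    return hits / trials             # estimated p-value

# Toy per-run F1 scores for two systems (not real numbers from the paper).
a = [0.74, 0.75, 0.73, 0.76, 0.74]
b = [0.71, 0.72, 0.70, 0.72, 0.71]
p = permutation_test(a, b)
print(0.0 <= p <= 1.0)
```

If the estimated p-value falls below 0.05, the gap is unlikely to arise from chance swapping, which is what "significant at the 0.05 level" asserts.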

Could you share the original data?

Hello, could you share the original data, i.e. the files listed below? My downloads keep getting interrupted when I try to fetch them. Thank you!
data/official_data/Laptops_Test_Data_PhaseA.xml
data/official_data/Laptops_Test_Gold.xml
script/eval.jar
data/official_data/EN_REST_SB1_TEST.xml.A
data/official_data/EN_REST_SB1_TEST.xml.gold
script/A.jar

Data usage

Hi, first of all, thanks for open-sourcing the code.
While reading the code, I could not find where the original XML-format data is used during training; the training stage seems to use the .npz data instead. Is this inconsistent with the README? Thanks!
