
deepconvlstm's People

Contributors

fjordonez


deepconvlstm's Issues

Question about slide window

Hi, I appreciate your work on HAR. I first read your paper and then found DeepConvLSTM via Google. I forked your repo and want to reproduce your results, but there is something I don't understand: in DeepConvLSTM.ipynb, why do you apply the sliding window only to the test data and not to the training data?
It is located here:

    # Sensor data is segmented using a sliding window mechanism
    X_test, y_test = opp_sliding_window(X_test, y_test, SLIDING_WINDOW_LENGTH, SLIDING_WINDOW_STEP)
    print(" ..after sliding window (testing): inputs {0}, targets {1}".format(X_test.shape, y_test.shape))

I look forward to your reply, and I will acknowledge your help in my paper.
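
For reference, a minimal sketch of the kind of segmentation opp_sliding_window performs (this is not the repository's implementation, just an illustration); the commented-out lines show how the same call could in principle be applied to the training split as well:

    import numpy as np

    # Illustrative sketch: segment a (time, channels) sensor array and its
    # per-sample labels into fixed-length windows, labelling each window with
    # the class of its last sample.
    def segment_windows(data, labels, window_length, step):
        windows, window_labels = [], []
        for start in range(0, len(data) - window_length + 1, step):
            end = start + window_length
            windows.append(data[start:end])
            window_labels.append(labels[end - 1])  # label of the last sample
        return np.asarray(windows), np.asarray(window_labels)

    # The same segmentation could also be applied to the training split:
    # X_train, y_train = segment_windows(X_train, y_train,
    #                                    SLIDING_WINDOW_LENGTH, SLIDING_WINDOW_STEP)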

Transforms original labels into the range [0, nb_labels-1] when preprocessing data, but where is class '0'?

The function 'adjust_idx_labels' in preprocess_data.py transforms the original labels into the range [0, nb_labels-1], but why [0, nb_labels-1] and not [1, nb_labels]? And where is class 0 now?
I'm now using the dataset to do something in Torch, and I run into the error 'Assertion t >= 0 && t < n_classes failed'.

So I'm confused about the transform. Looking forward to your answer.
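
As a generic illustration (not the repository's adjust_idx_labels): labels are usually remapped onto the contiguous range [0, nb_labels-1] because 0-indexed cross-entropy losses expect exactly that range, whereas a loss that expects labels in [1, nb_labels] would need a +1 shift. A minimal sketch with a hypothetical helper:

    import numpy as np

    # Hypothetical helper: map arbitrary raw label IDs onto 0..nb_labels-1.
    # np.unique returns the sorted unique values and, with return_inverse=True,
    # the index of each original label within that sorted array.
    def remap_labels(y):
        unique_ids, y_remapped = np.unique(y, return_inverse=True)
        return y_remapped.astype(np.int64), unique_ids  # keep unique_ids to invert the mapping

    y = np.array([0, 406516, 406516, 404508, 0])  # made-up raw label IDs
    y_new, ids = remap_labels(y)                  # y_new -> [0, 2, 2, 1, 0]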

Use of forget gate from sequence to sequence

Hi:
I have reproduced your results for both training and testing.

I also appreciate your help with suggesting hyper-parameters for training.

One question I have is about the use of the LSTM forget gate. In other sequence classifier implementations, I see the use of a sequence marker to demarcate the start of a sequence. This is yet another input, in addition to the data input and the label input. The sequence marker is presumably used to erase the LSTM memory across sequences. This may be implicit in your network, as it is assumed that all sequences in the batch are of the same length and the erasure is "automatic".

It would be very helpful to have your thoughts.

Thank you
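
For what it's worth, a numpy-level sketch of the behaviour described above (lstm_step is a hypothetical stub standing in for the real LSTM update): when every sliding window is treated as an independent, fixed-length sequence, the recurrent state is simply re-initialised at the start of each window, so no explicit sequence/reset marker input is required.

    import numpy as np

    def lstm_step(x_t, h, c):
        # Placeholder for the real LSTM update (gates omitted for brevity).
        return h, c

    def classify_windows(windows, hidden_size):
        # windows: (n_windows, window_length, n_channels)
        outputs = []
        for window in windows:
            # State is re-initialised per window, i.e. memory is "erased"
            # across sequences without any explicit marker input.
            h = np.zeros(hidden_size)
            c = np.zeros(hidden_size)
            for x_t in window:
                h, c = lstm_step(x_t, h, c)
            outputs.append(h)  # final hidden state feeds the classifier
        return np.stack(outputs)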

Recurrent range

Hi, thanks for your great work; I am currently using your paper as a reference. I have a doubt about the recurrent range: according to your description, if I understand correctly, the recurrence operates within each sliding window. What if one wanted to do window-level recurrence instead? Could you give some ideas about that?
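
One way to read "window-level recurrence" is to carry the hidden state across consecutive windows instead of re-initialising it per window. A hypothetical sketch of that variant (again with lstm_step as a stub for the real LSTM update, and windows assumed to be in temporal order):

    import numpy as np

    def lstm_step(x_t, h, c):
        # Placeholder for the real LSTM update.
        return h, c

    def classify_windows_stateful(windows, hidden_size):
        # Carry h, c across consecutive windows (window-level recurrence)
        # instead of resetting them at the start of every window.
        h = np.zeros(hidden_size)
        c = np.zeros(hidden_size)
        outputs = []
        for window in windows:          # windows must be in temporal order
            for x_t in window:
                h, c = lstm_step(x_t, h, c)
            outputs.append(h)
        return np.stack(outputs)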

DeepConvLSTM training

Hi:

Your paper is exemplary in laying out the problem as a sequence-learning task and, to top it off, the accompanying "model deploy" script works!

I took a shot at the training script (attached). Being new to LSTMs and Lasagne, I could be missing many of the hyper-parameters that will recreate your model.

Could you please share your training script?

Alternatively, could you kindly suggest improvements to the attached training script?

DeepConvLSTM_train_val.txt

Thank you,
Auro
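
Not the authors' script, but for reference a generic Lasagne/Theano training-loop sketch (cross-entropy loss with RMSProp updates, which is what the paper describes). The tiny stand-in network, the learning rate, the epoch count, and the batch size are all placeholder assumptions, not values confirmed by the authors:

    import numpy as np
    import theano
    import theano.tensor as T
    import lasagne

    # Tiny stand-in network so the loop below is self-contained; it is NOT the
    # DeepConvLSTM architecture (no convolutional layers, single LSTM).
    input_var = T.tensor3('inputs')            # (batch, time, channels)
    target_var = T.ivector('targets')          # one integer label per window
    l_in = lasagne.layers.InputLayer((None, 24, 113), input_var=input_var)
    l_lstm = lasagne.layers.LSTMLayer(l_in, num_units=128, only_return_final=True)
    network = lasagne.layers.DenseLayer(l_lstm, num_units=18,
                                        nonlinearity=lasagne.nonlinearities.softmax)

    prediction = lasagne.layers.get_output(network)
    loss = lasagne.objectives.categorical_crossentropy(prediction, target_var).mean()
    params = lasagne.layers.get_all_params(network, trainable=True)
    # RMSProp as mentioned in the paper; the learning rate is a placeholder.
    updates = lasagne.updates.rmsprop(loss, params, learning_rate=1e-3)
    train_fn = theano.function([input_var, target_var], loss, updates=updates)

    def iterate_minibatches(inputs, targets, batch_size, shuffle=True):
        idx = np.arange(len(inputs))
        if shuffle:
            np.random.shuffle(idx)
        for start in range(0, len(inputs) - batch_size + 1, batch_size):
            batch = idx[start:start + batch_size]
            yield inputs[batch], targets[batch]

    # Dummy arrays standing in for the segmented training split.
    X_train = np.zeros((1000, 24, 113), dtype=theano.config.floatX)
    y_train = np.zeros(1000, dtype=np.int32)

    for epoch in range(10):                    # placeholder number of epochs
        epoch_loss = 0.0
        for X_batch, y_batch in iterate_minibatches(X_train, y_train, 100):
            epoch_loss += train_fn(X_batch, y_batch)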

Why is SLIDING_WINDOW_LENGTH = 24?

Why is SLIDING_WINDOW_LENGTH = 24?

In the paper, it is mentioned that the window is 500 milliseconds.

I don't see how setting the window length to be 24 ensures that its length is always 500 milliseconds.
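For reference, the window length in samples follows from the window duration and the sampling rate. Assuming the OPPORTUNITY recordings used here are sampled at roughly 30 Hz (an assumption, not stated in this thread), 24 samples correspond to 800 ms rather than 500 ms, which may be why the numbers do not seem to match:

    def window_length_in_samples(duration_s, sample_rate_hz):
        # Number of samples covered by a window of the given duration.
        return int(round(duration_s * sample_rate_hz))

    # Assuming a 30 Hz sampling rate:
    print(window_length_in_samples(0.5, 30))   # 500 ms -> 15 samples
    print(24 / 30)                             # 24 samples -> 0.8 s (800 ms)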

Problem in Gesture recognition WITHOUT NULL class

Hi @fjordonez @sussexwearlab,
I am implementing the DeepConvLSTM code in Keras and got an 89% F1 score for Task B (gesture recognition with the NULL class), but I am not getting the expected F1 score for Task B WITHOUT the NULL class. I have a few questions:

  1. For gesture recognition WITHOUT the NULL class, do we have to delete all the NULL-class data and set the number of classes to 17 (see the sketch below)? After deleting, there is much less data, so the F1 score comes out lower.

  2. Or do we have to train the model on data that includes the NULL class and test it only on data with the NULL class removed?
    Or is there any other method to increase the F1 score for Task B (gesture recognition WITHOUT the NULL class)?

Thank you
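
Regarding question 1, a minimal sketch (assuming segmented windows X, integer window labels y, and that the NULL class is encoded as label 0) of dropping NULL-class windows and remapping the remaining 17 gesture labels onto [0, 16] before training or evaluation:

    import numpy as np

    # Hypothetical post-processing: X is (n_windows, window_length, n_channels),
    # y holds integer window labels with 0 assumed to be the NULL class.
    def drop_null_class(X, y, null_label=0):
        keep = y != null_label
        X_kept, y_kept = X[keep], y[keep]
        # Remap the remaining labels onto 0..16 so a 17-way softmax can be used.
        _, y_remapped = np.unique(y_kept, return_inverse=True)
        return X_kept, y_remapped

Also, if the reference numbers are class-weighted F1 scores (e.g. sklearn.metrics.f1_score(y_true, y_pred, average='weighted')), comparing against a macro-averaged F1 can by itself explain part of the gap.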
