
deep-learning-har's People

Contributors: bhimmetoglu, davidbradway


deep-learning-har's Issues

Copyright Infringement!

Hi @bhimmetoglu,

Your work is very interesting. I really like how you managed to crank the test accuracy up to 94.8% with the LSTM alone; that is remarkable. I have also published work on LSTMs for HAR. I think you forgot to cite my work, which is under the MIT License, Copyright (c) 2016 Guillaume Chevalier: https://github.com/guillaume-chevalier/LSTM-Human-Activity-Recognition/blob/master/LICENSE

I used two stacked LSTM cells just as you did, on the same HAR dataset, and with TensorFlow, too. Overall, the part of your project where the LSTMs are used seems to have been entirely:

  1. Copied from my project,
  2. Rewritten a bit differently,
  3. With a few improvements added.

Congratulations on achieving an accuracy of 94.8%!

The structure of the code in your project follows the same progression as my original code. I also see that you have starred my project, so you were aware of it. Also, some lines of code, as well as the unique comments describing the logic of those lines, are nearly identical to mine, for example:

    lstm_in = tf.transpose(inputs_, [1,0,2]) # reshape into (seq_len, N, channels)
    lstm_in = tf.reshape(lstm_in, [-1, n_channels]) # Now (seq_len*N, n_channels)

Compared to my code:

    # input shape: (batch_size, n_steps, n_input)
    _X = tf.transpose(_X, [1, 0, 2])  # permute n_steps and batch_size
    # Reshape to prepare input to hidden activation
    _X = tf.reshape(_X, [-1, n_input]) 
    # new shape: (n_steps*batch_size, n_input)

To cite my work, I ask that you at least link to the homepage of my GitHub repository, as stated in the README.md of my project, for example:

Guillaume Chevalier, LSTMs for Human Activity Recognition, 2016
https://github.com/guillaume-chevalier/LSTM-Human-Activity-Recognition

Please act quickly: your work is already being copied and cited by others, without any citation to mine.

Citing someone is simple. Sorry for the trouble, and thanks in advance.

Use train data as test data?

The official code in HAR-CNN uses the "train" folder to generate the test data:

X_train, labels_train, list_ch_train = read_data(data_path="./data/", split="train") # train
X_test, labels_test, list_ch_test = read_data(data_path="./data/", split="train") # test

When I change them to:

X_train, labels_train, list_ch_train = read_data(data_path="./data/", split="train") # train
X_test, labels_test, list_ch_test = read_data(data_path="./data/", split="test") # test

The test accuracy drops to 0.877917 instead of 0.98.
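
A quick way to confirm that the two splits really differ is to compare the arrays returned by the two calls. This is only a sketch, and it assumes (as in the snippet above) that read_data returns NumPy arrays:

    import numpy as np

    # Load both splits; the UCI HAR dataset has roughly 7352 training and
    # 2947 test windows, so the shapes alone should already differ.
    X_train, labels_train, _ = read_data(data_path="./data/", split="train")
    X_test, labels_test, _ = read_data(data_path="./data/", split="test")

    print("train:", X_train.shape, "test:", X_test.shape)
    assert not np.array_equal(X_train, X_test), \
        "the two splits are identical -- check the split argument"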

CNN/LSTM diagram tool

Not really an issue (sorry), but I'd really like to know what tool you used to make the network diagrams. Thanks.

Regarding the LSTM

Hi,
I would like to ask about your LSTM: what is your initial input length?
And for every iteration within the LSTM in your picture, labelled with 128 steps,
what is the 27 in each loop for?

Also, you stacked 2 LSTM cells if I'm not mistaken, so how do you go from an input length of 9 to an LSTM input of 27?

Sorry, I'm kind of new to LSTMs and stacked LSTMs. Would you mind explaining how this works?

Would you mind explaining the part below?


lstm_in = tf.transpose(inputs_, [1,0,2]) # reshape into (seq_len, N, channels)
lstm_in = tf.reshape(lstm_in, [-1, n_channels]) # Now (seq_len*N, n_channels)

lstm_in = tf.layers.dense(lstm_in, lstm_size, activation=None) 
lstm_in = tf.split(lstm_in, seq_len, 0)
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob_)
cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
initial_state = cell.zero_state(batch_size, tf.float32)
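
For reference, the jump from an input of 9 to an LSTM input of 27 comes from the tf.layers.dense call: it maps each time step's n_channels = 9 features to lstm_size = 27 units before the sequence is split and handed to static_rnn. Below is a minimal shape-tracing sketch; the values seq_len = 128, n_channels = 9, and lstm_size = 27 are taken from the question and the figure, not re-verified against the notebook (TensorFlow 1.x):

    import tensorflow as tf  # TensorFlow 1.x, as in the notebook

    seq_len, n_channels, lstm_size = 128, 9, 27  # values assumed from the question
    inputs_ = tf.placeholder(tf.float32, [None, seq_len, n_channels])

    x = tf.transpose(inputs_, [1, 0, 2])  # (seq_len, N, n_channels) = (128, N, 9)
    x = tf.reshape(x, [-1, n_channels])   # (seq_len*N, 9)
    x = tf.layers.dense(x, lstm_size)     # (seq_len*N, 27): here 9 features become 27
    x = tf.split(x, seq_len, 0)           # list of 128 tensors, each (N, 27),
                                          # the input format static_rnn expects

    print(len(x), x[0].get_shape().as_list())  # 128 [None, 27]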

Regarding outputs, final_state = tf.contrib.rnn.static_rnn(cell, lstm_in, dtype=tf.float32, initial_state=initial_state)

Hi,

Thanks for sharing the code, but when running HAR-CNN_LSTM.ipynb I got the following error message, directly related to:

with graph.as_default():
    outputs, final_state = tf.contrib.rnn.static_rnn(cell, lstm_in, dtype=tf.float32,
                                                     initial_state = initial_state)

The error message is as follows; could you take a look and see how to fix it? Thanks.
ValueError: Attempt to reuse RNNCell <tensorflow.contrib.rnn.python.ops.core_rnn_cell_impl.BasicLSTMCell object at 0x2b53b31b2710> with a different variable scope than its first use. First use of cell was with scope 'rnn/multi_rnn_cell/cell_0/basic_lstm_cell', this attempt is with scope 'rnn/multi_rnn_cell/cell_1/basic_lstm_cell'. Please create a new instance of the cell if you would like it to use a different set of weights. If before you were using: MultiRNNCell([BasicLSTMCell(...)] * num_layers), change to: MultiRNNCell([BasicLSTMCell(...) for _ in range(num_layers)]). If before you were using the same cell instance as both the forward and reverse cell of a bidirectional RNN, simply create two instances (one for forward, one for reverse). In May 2017, we will start transitioning this cell's behavior to use existing stored weights, if any, when it is called with scope=None (which can lead to silent model degradation, so this error will remain until then.)
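
For what it's worth, the error message itself points to the fix: create a separate cell instance for each layer instead of reusing one instance via [drop] * lstm_layers. A hedged rewrite of the cell-construction code quoted in the earlier issue (variable names taken from that snippet, not re-verified against the notebook):

    def make_cell(lstm_size, keep_prob_):
        # Build one fresh BasicLSTMCell (with dropout) per layer
        lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
        return tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob_)

    with graph.as_default():
        cell = tf.contrib.rnn.MultiRNNCell(
            [make_cell(lstm_size, keep_prob_) for _ in range(lstm_layers)])
        initial_state = cell.zero_state(batch_size, tf.float32)
        outputs, final_state = tf.contrib.rnn.static_rnn(cell, lstm_in, dtype=tf.float32,
                                                         initial_state=initial_state)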
