
guillaume-chevalier / lstm-human-activity-recognition


Human Activity Recognition example using TensorFlow on smartphone sensors dataset and an LSTM RNN. Classifying the type of movement amongst six activity categories - Guillaume Chevalier

License: MIT License

Python 0.43% Jupyter Notebook 99.57%
machine-learning deep-learning lstm human-activity-recognition neural-network rnn recurrent-neural-networks tensorflow activity-recognition

lstm-human-activity-recognition's Introduction

Human Activity Recognition (HAR) using smartphones dataset and an LSTM RNN. Classifying the type of movement amongst six categories:

  • WALKING,
  • WALKING_UPSTAIRS,
  • WALKING_DOWNSTAIRS,
  • SITTING,
  • STANDING,
  • LAYING.

Compared to a classical approach, using a Recurrent Neural Network (RNN) with Long Short-Term Memory cells (LSTMs) requires little to no feature engineering. Data can be fed directly into the neural network, which acts like a black box that models the problem correctly. Other research on this activity recognition dataset may use a large amount of feature engineering, which is rather a signal processing approach combined with classical data science techniques. The approach here is, in contrast, very simple in terms of how much the data was preprocessed.

Let's use Google's neat Deep Learning library, TensorFlow, demonstrating the usage of an LSTM, a type of Artificial Neural Network that can process sequential data / time series.

Video dataset overview

Follow this link to see a video of the 6 activities recorded in the experiment with one of the participants:

Video of the experiment

[Watch video]

Details about the input data

I will be using an LSTM on the data (recorded from a cellphone attached to the waist) to learn to recognise the type of activity the user is doing. The dataset's description goes like this:

The sensor signals (accelerometer and gyroscope) were pre-processed by applying noise filters and then sampled in fixed-width sliding windows of 2.56 sec and 50% overlap (128 readings/window). The sensor acceleration signal, which has gravitational and body motion components, was separated using a Butterworth low-pass filter into body acceleration and gravity. The gravitational force is assumed to have only low frequency components, therefore a filter with 0.3 Hz cutoff frequency was used.

That said, I will use the almost-raw data: only the gravity effect has been filtered out of the accelerometer signal as a preprocessing step, yielding an extra 3D feature as an input to help learning. If you ever want to extract the gravity yourself, you could fork my code on using a Butterworth Low-Pass Filter (LPF) in Python and edit it to use the right cutoff frequency of 0.3 Hz, which is a good frequency for activity recognition from body sensors.
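For reference, here is a minimal sketch of such a gravity/body split using SciPy's Butterworth filter, assuming the dataset's 50 Hz sampling rate (the function name split_gravity and its defaults are illustrative, not the notebook's code):

import numpy as np
from scipy.signal import butter, filtfilt

def split_gravity(total_acc, fs=50.0, cutoff=0.3, order=3):
    # Design a Butterworth low-pass filter with a 0.3 Hz cutoff,
    # normalised by the Nyquist frequency (fs / 2)
    b, a = butter(order, cutoff / (fs / 2.0), btype='low')
    # The low-frequency component is assumed to be gravity...
    gravity = filtfilt(b, a, total_acc, axis=0)
    # ...and the remainder is the body motion component
    body_acc = total_acc - gravity
    return gravity, body_acc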

What is an RNN?

As explained in this article, an RNN takes many input vectors, processes them, and outputs other vectors. It can be roughly pictured as in the image below, imagining that each rectangle has a vectorial depth and other special hidden quirks. In our case, the "many to one" architecture is used: we accept time series of feature vectors (one vector per time step) and convert them to a probability vector at the output for classification. Note that a "one to one" architecture would be a standard feedforward neural network.

RNN Architectures Learn more on RNNs

What is an LSTM?

An LSTM is an improved RNN. It is more complex, but easier to train, avoiding what is called the vanishing gradient problem. To learn more about LSTMs, I recommend this course.
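For intuition, here is a minimal NumPy sketch of one LSTM cell step; the weight dictionaries W, U and b are illustrative placeholders, not this notebook's TensorFlow variables:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    # One LSTM time step: all gates look at the current input x
    # and the previous hidden state h
    i = sigmoid(x @ W['i'] + h @ U['i'] + b['i'])  # input gate
    f = sigmoid(x @ W['f'] + h @ U['f'] + b['f'])  # forget gate
    o = sigmoid(x @ W['o'] + h @ U['o'] + b['o'])  # output gate
    g = np.tanh(x @ W['g'] + h @ U['g'] + b['g'])  # candidate memory
    c = f * c + i * g   # cell state: forget some old memory, write some new
    h = o * np.tanh(c)  # new hidden state, exposed to the next time step
    return h, c

The additive update of the cell state c is what lets gradients flow across many time steps, which is how the vanishing gradient problem is mitigated.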

Learn more on LSTMs

Results

Scroll on! Nice visuals await.

# All Includes

import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import tensorflow as tf  # Version 1.0.0 (some previous versions are used in past commits)
from sklearn import metrics

import os
# Useful Constants

# Those are separate normalised input features for the neural network
INPUT_SIGNAL_TYPES = [
    "body_acc_x_",
    "body_acc_y_",
    "body_acc_z_",
    "body_gyro_x_",
    "body_gyro_y_",
    "body_gyro_z_",
    "total_acc_x_",
    "total_acc_y_",
    "total_acc_z_"
]

# Output classes to learn how to classify
LABELS = [
    "WALKING",
    "WALKING_UPSTAIRS",
    "WALKING_DOWNSTAIRS",
    "SITTING",
    "STANDING",
    "LAYING"
]

Let's start by downloading the data:

# Note: Linux bash commands start with a "!" inside those "ipython notebook" cells

DATA_PATH = "data/"

!pwd && ls
os.chdir(DATA_PATH)
!pwd && ls

!python download_dataset.py

!pwd && ls
os.chdir("..")
!pwd && ls

DATASET_PATH = DATA_PATH + "UCI HAR Dataset/"
print("\n" + "Dataset is now located at: " + DATASET_PATH)
/home/ubuntu/pynb/LSTM-Human-Activity-Recognition
data	 LSTM_files  LSTM_OLD.ipynb  README.md
LICENSE  LSTM.ipynb  lstm.py	     screenlog.0
/home/ubuntu/pynb/LSTM-Human-Activity-Recognition/data
download_dataset.py  source.txt

Downloading...
--2017-05-24 01:49:53--  https://archive.ics.uci.edu/ml/machine-learning-databases/00240/UCI%20HAR%20Dataset.zip
Resolving archive.ics.uci.edu (archive.ics.uci.edu)... 128.195.10.249
Connecting to archive.ics.uci.edu (archive.ics.uci.edu)|128.195.10.249|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 60999314 (58M) [application/zip]
Saving to: ‘UCI HAR Dataset.zip’

100%[======================================>] 60,999,314  1.69MB/s   in 38s    

2017-05-24 01:50:31 (1.55 MB/s) - ‘UCI HAR Dataset.zip’ saved [60999314/60999314]

Downloading done.

Extracting...
Extracting successfully done to /home/ubuntu/pynb/LSTM-Human-Activity-Recognition/data/UCI HAR Dataset.
/home/ubuntu/pynb/LSTM-Human-Activity-Recognition/data
download_dataset.py  __MACOSX  source.txt  UCI HAR Dataset  UCI HAR Dataset.zip
/home/ubuntu/pynb/LSTM-Human-Activity-Recognition
data	 LSTM_files  LSTM_OLD.ipynb  README.md
LICENSE  LSTM.ipynb  lstm.py	     screenlog.0

Dataset is now located at: data/UCI HAR Dataset/

Preparing dataset:

TRAIN = "train/"
TEST = "test/"


# Load "X" (the neural network's training and testing inputs)

def load_X(X_signals_paths):
    X_signals = []

    for signal_type_path in X_signals_paths:
        file = open(signal_type_path, 'r')
        # Read dataset from disk, dealing with text files' syntax
        X_signals.append(
            [np.array(serie, dtype=np.float32) for serie in [
                row.replace('  ', ' ').strip().split(' ') for row in file
            ]]
        )
        file.close()

    # Final shape: (n_samples, n_steps, n_input), e.g. (7352, 128, 9) for the training set
    return np.transpose(np.array(X_signals), (1, 2, 0))

X_train_signals_paths = [
    DATASET_PATH + TRAIN + "Inertial Signals/" + signal + "train.txt" for signal in INPUT_SIGNAL_TYPES
]
X_test_signals_paths = [
    DATASET_PATH + TEST + "Inertial Signals/" + signal + "test.txt" for signal in INPUT_SIGNAL_TYPES
]

X_train = load_X(X_train_signals_paths)
X_test = load_X(X_test_signals_paths)


# Load "y" (the neural network's training and testing outputs)

def load_y(y_path):
    file = open(y_path, 'r')
    # Read dataset from disk, dealing with text file's syntax
    y_ = np.array(
        [elem for elem in [
            row.replace('  ', ' ').strip().split(' ') for row in file
        ]],
        dtype=np.int32
    )
    file.close()

    # Subtract 1 from each output class for friendly 0-based indexing
    return y_ - 1

y_train_path = DATASET_PATH + TRAIN + "y_train.txt"
y_test_path = DATASET_PATH + TEST + "y_test.txt"

y_train = load_y(y_train_path)
y_test = load_y(y_test_path)

Additional Parameters:

Here are some core parameter definitions for the training.

For example, the whole neural network's structure could be summarised by enumerating those parameters, plus the fact that two LSTMs are stacked one on top of the other (output to input) as hidden layers unrolled through the time steps.

# Input Data

training_data_count = len(X_train)  # 7352 training series (with 50% overlap between each series)
test_data_count = len(X_test)  # 2947 testing series
n_steps = len(X_train[0])  # 128 timesteps per series
n_input = len(X_train[0][0])  # 9 input parameters per timestep


# LSTM Neural Network's internal structure

n_hidden = 32 # Hidden layer num of features
n_classes = 6 # Total number of output classes


# Training

learning_rate = 0.0025
lambda_loss_amount = 0.0015
training_iters = training_data_count * 300  # Loop 300 times on the dataset
batch_size = 1500
display_iter = 30000  # To show test set accuracy during training


# Some debugging info

print("Some useful info to get an insight on dataset's shape and normalisation:")
print("(X shape, y shape, every X's mean, every X's standard deviation)")
print(X_test.shape, y_test.shape, np.mean(X_test), np.std(X_test))
print("The dataset is therefore properly normalised, as expected, but not yet one-hot encoded.")
Some useful info to get an insight on dataset's shape and normalisation:
(X shape, y shape, every X's mean, every X's standard deviation)
(2947, 128, 9) (2947, 1) 0.0991399 0.395671
The dataset is therefore properly normalised, as expected, but not yet one-hot encoded.

Utility functions for training:

def LSTM_RNN(_X, _weights, _biases):
    # Function returns a tensorflow LSTM (RNN) artificial neural network from given parameters.
    # Moreover, two LSTM cells are stacked, which adds depth to the neural network.
    # Note: some code in this notebook is inspired by a slightly different
    # RNN architecture used on another dataset; some of the credit goes to
    # "aymericdamien" under the MIT license.

    # (NOTE: this step could be greatly optimised by shaping the dataset once.)
    # input shape: (batch_size, n_steps, n_input)
    _X = tf.transpose(_X, [1, 0, 2])  # permute n_steps and batch_size
    # Reshape to prepare input to hidden activation
    _X = tf.reshape(_X, [-1, n_input])
    # new shape: (n_steps*batch_size, n_input)

    # ReLU activation, thanks to Yu Zhao for adding this improvement here:
    _X = tf.nn.relu(tf.matmul(_X, _weights['hidden']) + _biases['hidden'])
    # Split data because rnn cell needs a list of inputs for the RNN inner loop
    _X = tf.split(_X, n_steps, 0)
    # new shape: n_steps * (batch_size, n_hidden)

    # Define two stacked LSTM cells (two recurrent layers deep) with tensorflow
    lstm_cell_1 = tf.contrib.rnn.BasicLSTMCell(n_hidden, forget_bias=1.0, state_is_tuple=True)
    lstm_cell_2 = tf.contrib.rnn.BasicLSTMCell(n_hidden, forget_bias=1.0, state_is_tuple=True)
    lstm_cells = tf.contrib.rnn.MultiRNNCell([lstm_cell_1, lstm_cell_2], state_is_tuple=True)
    # Get LSTM cell output
    outputs, states = tf.contrib.rnn.static_rnn(lstm_cells, _X, dtype=tf.float32)

    # Get last time step's output feature for a "many-to-one" style classifier,
    # as in the image describing RNNs at the top of this page
    lstm_last_output = outputs[-1]

    # Linear activation
    return tf.matmul(lstm_last_output, _weights['out']) + _biases['out']


def extract_batch_size(_train, step, batch_size):
    # Function to fetch a "batch_size" amount of data from "(X|y)_train" data.

    shape = list(_train.shape)
    shape[0] = batch_size
    batch_s = np.empty(shape)

    for i in range(batch_size):
        # Loop index, wrapping around the training set if needed
        index = ((step-1)*batch_size + i) % len(_train)
        batch_s[i] = _train[index]

    return batch_s


def one_hot(y_, n_classes=n_classes):
    # Function to encode neural one-hot output labels from number indexes
    # e.g.:
    # one_hot(y_=[[5], [0], [3]], n_classes=6):
    #     return [[0, 0, 0, 0, 0, 1], [1, 0, 0, 0, 0, 0], [0, 0, 0, 1, 0, 0]]

    y_ = y_.reshape(len(y_))
    return np.eye(n_classes)[np.array(y_, dtype=np.int32)]  # Returns FLOATS

Let's get serious and build the neural network:

# Graph input/output
x = tf.placeholder(tf.float32, [None, n_steps, n_input])
y = tf.placeholder(tf.float32, [None, n_classes])

# Graph weights
weights = {
    'hidden': tf.Variable(tf.random_normal([n_input, n_hidden])), # Hidden layer weights
    'out': tf.Variable(tf.random_normal([n_hidden, n_classes], mean=1.0))
}
biases = {
    'hidden': tf.Variable(tf.random_normal([n_hidden])),
    'out': tf.Variable(tf.random_normal([n_classes]))
}

pred = LSTM_RNN(x, weights, biases)

# Loss, optimizer and evaluation
l2 = lambda_loss_amount * sum(
    tf.nn.l2_loss(tf_var) for tf_var in tf.trainable_variables()
) # L2 loss prevents this overkill neural network from overfitting the data
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=pred)) + l2 # Softmax loss
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost) # Adam Optimizer

correct_pred = tf.equal(tf.argmax(pred,1), tf.argmax(y,1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

Hooray, now train the neural network:

# To keep track of training's performance
test_losses = []
test_accuracies = []
train_losses = []
train_accuracies = []

# Launch the graph
sess = tf.InteractiveSession(config=tf.ConfigProto(log_device_placement=True))
init = tf.global_variables_initializer()
sess.run(init)

# Perform Training steps with "batch_size" amount of example data at each loop
step = 1
while step * batch_size <= training_iters:
    batch_xs =         extract_batch_size(X_train, step, batch_size)
    batch_ys = one_hot(extract_batch_size(y_train, step, batch_size))

    # Fit training using batch data
    _, loss, acc = sess.run(
        [optimizer, cost, accuracy],
        feed_dict={
            x: batch_xs,
            y: batch_ys
        }
    )
    train_losses.append(loss)
    train_accuracies.append(acc)

    # Evaluate network only at some steps for faster training:
    if (step*batch_size % display_iter == 0) or (step == 1) or (step * batch_size > training_iters):

        # To not spam console, show training accuracy/loss in this "if"
        print("Training iter #" + str(step*batch_size) + \
              ":   Batch Loss = " + "{:.6f}".format(loss) + \
              ", Accuracy = {}".format(acc))

        # Evaluation on the test set (no learning made here - just evaluation for diagnosis)
        loss, acc = sess.run(
            [cost, accuracy],
            feed_dict={
                x: X_test,
                y: one_hot(y_test)
            }
        )
        test_losses.append(loss)
        test_accuracies.append(acc)
        print("PERFORMANCE ON TEST SET: " + \
              "Batch Loss = {}".format(loss) + \
              ", Accuracy = {}".format(acc))

    step += 1

print("Optimization Finished!")

# Accuracy for test data

one_hot_predictions, accuracy, final_loss = sess.run(
    [pred, accuracy, cost],
    feed_dict={
        x: X_test,
        y: one_hot(y_test)
    }
)

test_losses.append(final_loss)
test_accuracies.append(accuracy)

print("FINAL RESULT: " + \
      "Batch Loss = {}".format(final_loss) + \
      ", Accuracy = {}".format(accuracy))
WARNING:tensorflow:From <ipython-input-19-3339689e51f6>:9: initialize_all_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.
Instructions for updating:
Use `tf.global_variables_initializer` instead.
Training iter #1500:   Batch Loss = 5.416760, Accuracy = 0.15266665816307068
PERFORMANCE ON TEST SET: Batch Loss = 4.880829811096191, Accuracy = 0.05632847175002098
Training iter #30000:   Batch Loss = 3.031930, Accuracy = 0.607333242893219
PERFORMANCE ON TEST SET: Batch Loss = 3.0515167713165283, Accuracy = 0.6067186594009399
Training iter #60000:   Batch Loss = 2.672764, Accuracy = 0.7386666536331177
PERFORMANCE ON TEST SET: Batch Loss = 2.780435085296631, Accuracy = 0.7027485370635986
Training iter #90000:   Batch Loss = 2.378301, Accuracy = 0.8366667032241821
PERFORMANCE ON TEST SET: Batch Loss = 2.6019773483276367, Accuracy = 0.7617915868759155
Training iter #120000:   Batch Loss = 2.127290, Accuracy = 0.9066667556762695
PERFORMANCE ON TEST SET: Batch Loss = 2.3625404834747314, Accuracy = 0.8116728663444519
Training iter #150000:   Batch Loss = 1.929805, Accuracy = 0.9380000233650208
PERFORMANCE ON TEST SET: Batch Loss = 2.306251049041748, Accuracy = 0.8276212215423584
Training iter #180000:   Batch Loss = 1.971904, Accuracy = 0.9153333902359009
PERFORMANCE ON TEST SET: Batch Loss = 2.0835530757904053, Accuracy = 0.8771631121635437
Training iter #210000:   Batch Loss = 1.860249, Accuracy = 0.8613333702087402
PERFORMANCE ON TEST SET: Batch Loss = 1.9994492530822754, Accuracy = 0.8788597583770752
Training iter #240000:   Batch Loss = 1.626292, Accuracy = 0.9380000233650208
PERFORMANCE ON TEST SET: Batch Loss = 1.879166603088379, Accuracy = 0.8944689035415649
Training iter #270000:   Batch Loss = 1.582758, Accuracy = 0.9386667013168335
PERFORMANCE ON TEST SET: Batch Loss = 2.0341007709503174, Accuracy = 0.8361043930053711
Training iter #300000:   Batch Loss = 1.620352, Accuracy = 0.9306666851043701
PERFORMANCE ON TEST SET: Batch Loss = 1.8185184001922607, Accuracy = 0.8639293313026428
Training iter #330000:   Batch Loss = 1.474394, Accuracy = 0.9693333506584167
PERFORMANCE ON TEST SET: Batch Loss = 1.7638503313064575, Accuracy = 0.8747878670692444
Training iter #360000:   Batch Loss = 1.406998, Accuracy = 0.9420000314712524
PERFORMANCE ON TEST SET: Batch Loss = 1.5946787595748901, Accuracy = 0.902273416519165
Training iter #390000:   Batch Loss = 1.362515, Accuracy = 0.940000057220459
PERFORMANCE ON TEST SET: Batch Loss = 1.5285792350769043, Accuracy = 0.9046487212181091
Training iter #420000:   Batch Loss = 1.252860, Accuracy = 0.9566667079925537
PERFORMANCE ON TEST SET: Batch Loss = 1.4635565280914307, Accuracy = 0.9107565879821777
Training iter #450000:   Batch Loss = 1.190078, Accuracy = 0.9553333520889282
...
PERFORMANCE ON TEST SET: Batch Loss = 0.42567864060401917, Accuracy = 0.9324736595153809
Training iter #2070000:   Batch Loss = 0.342763, Accuracy = 0.9326667189598083
PERFORMANCE ON TEST SET: Batch Loss = 0.4292983412742615, Accuracy = 0.9273836612701416
Training iter #2100000:   Batch Loss = 0.259442, Accuracy = 0.9873334169387817
PERFORMANCE ON TEST SET: Batch Loss = 0.44131210446357727, Accuracy = 0.9273836612701416
Training iter #2130000:   Batch Loss = 0.284630, Accuracy = 0.9593333601951599
PERFORMANCE ON TEST SET: Batch Loss = 0.46982717514038086, Accuracy = 0.9093992710113525
Training iter #2160000:   Batch Loss = 0.299012, Accuracy = 0.9686667323112488
PERFORMANCE ON TEST SET: Batch Loss = 0.48389002680778503, Accuracy = 0.9138105511665344
Training iter #2190000:   Batch Loss = 0.287106, Accuracy = 0.9700000286102295
PERFORMANCE ON TEST SET: Batch Loss = 0.4670214056968689, Accuracy = 0.9216151237487793
Optimization Finished!
FINAL RESULT: Batch Loss = 0.45611169934272766, Accuracy = 0.9165252447128296

Training is good, but having visual insight is even better:

Okay, let's plot this simply in the notebook for now.

# (Inline plots: )
%matplotlib inline

font = {
    'family' : 'Bitstream Vera Sans',
    'weight' : 'bold',
    'size'   : 18
}
matplotlib.rc('font', **font)

width = 12
height = 12
plt.figure(figsize=(width, height))

indep_train_axis = np.array(range(batch_size, (len(train_losses)+1)*batch_size, batch_size))
plt.plot(indep_train_axis, np.array(train_losses),     "b--", label="Train losses")
plt.plot(indep_train_axis, np.array(train_accuracies), "g--", label="Train accuracies")

indep_test_axis = np.append(
    np.array(range(batch_size, len(test_losses)*display_iter, display_iter)[:-1]),
    [training_iters]
)
plt.plot(indep_test_axis, np.array(test_losses),     "b-", label="Test losses")
plt.plot(indep_test_axis, np.array(test_accuracies), "g-", label="Test accuracies")

plt.title("Training session's progress over iterations")
plt.legend(loc='upper right', shadow=True)
plt.ylabel('Training Progress (Loss or Accuracy values)')
plt.xlabel('Training iteration')

plt.show()

LSTM Training Testing Comparison Curve

And finally, the multi-class confusion matrix and metrics!

# Results

predictions = one_hot_predictions.argmax(1)

print("Testing Accuracy: {}%".format(100*accuracy))

print("")
print("Precision: {}%".format(100*metrics.precision_score(y_test, predictions, average="weighted")))
print("Recall: {}%".format(100*metrics.recall_score(y_test, predictions, average="weighted")))
print("f1_score: {}%".format(100*metrics.f1_score(y_test, predictions, average="weighted")))

print("")
print("Confusion Matrix:")
confusion_matrix = metrics.confusion_matrix(y_test, predictions)
print(confusion_matrix)
normalised_confusion_matrix = np.array(confusion_matrix, dtype=np.float32)/np.sum(confusion_matrix)*100

print("")
print("Confusion matrix (normalised to % of total test data):")
print(normalised_confusion_matrix)
print("Note: training and testing data is not equally distributed amongst classes, ")
print("so it is normal that more than a 6th of the data is correctly classifier in the last category.")

# Plot Results:
width = 12
height = 12
plt.figure(figsize=(width, height))
plt.imshow(
    normalised_confusion_matrix,
    interpolation='nearest',
    cmap=plt.cm.rainbow
)
plt.title("Confusion matrix \n(normalised to % of total test data)")
plt.colorbar()
tick_marks = np.arange(n_classes)
plt.xticks(tick_marks, LABELS, rotation=90)
plt.yticks(tick_marks, LABELS)
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.show()
Testing Accuracy: 91.65252447128296%

Precision: 91.76286479743305%
Recall: 91.65252799457076%
f1_score: 91.6437546304815%

Confusion Matrix:
[[466   2  26   0   2   0]
 [  5 441  25   0   0   0]
 [  1   0 419   0   0   0]
 [  1   1   0 396  87   6]
 [  2   1   0  87 442   0]
 [  0   0   0   0   0 537]]

Confusion matrix (normalised to % of total test data):
[[ 15.81269073   0.06786563   0.88225317   0.           0.06786563   0.        ]
 [  0.16966406  14.96437073   0.84832031   0.           0.           0.        ]
 [  0.03393281   0.          14.21784878   0.           0.           0.        ]
 [  0.03393281   0.03393281   0.          13.43739319   2.95215464
    0.20359688]
 [  0.06786563   0.03393281   0.           2.95215464  14.99830341   0.        ]
 [  0.           0.           0.           0.           0.          18.22192001]]
Note: training and testing data is not equally distributed amongst classes,
so it is normal that more than a 6th of the data is correctly classified in the last category.

Confusion Matrix

sess.close()

Conclusion

Outstandingly, the final accuracy is 91%! And it can peak at values such as 93.25% at some lucky moments during training, depending on how the neural network's weights were randomly initialized at the start.

This means that the neural network is almost always able to correctly identify the movement type! Remember, the phone is attached at the waist and each series to classify is just a 128-sample window from two internal sensors (i.e. 2.56 seconds at 50 Hz), so it amazes me that the predictions are this accurate given such a small window of context and raw data. I've validated and re-validated that there is no important bug, and the community has used and tested this code a lot. (Note: be sure to report bugs in the issues tab; otherwise Quora, StackOverflow, and other StackExchange sites are the places for asking questions.)

I especially did not expect such good results for distinguishing between the labels "SITTING" and "STANDING". Those are seemingly almost the same thing from the point of view of a device placed at waist level, given how the dataset was originally gathered. Though, it is still possible to see a little cluster on the matrix between those classes, which drifts away just a bit from the identity. This is great.

It is also possible to see that there was a slight difficulty in telling the difference between "WALKING", "WALKING_UPSTAIRS" and "WALKING_DOWNSTAIRS". Obviously, those activities are quite similar in terms of movements.

I also tried my code without the gyroscope, using only the accelerometer's 6 features (and not changing the training hyperparameters), and got an accuracy of 87%. In general, gyroscopes consume more power than accelerometers, so it is preferable to turn them off.
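To reproduce that accelerometer-only variant, one way (a sketch reusing the loading code above; the list name ACCEL_ONLY_SIGNAL_TYPES is mine) is to drop the gyroscope signals before building the file paths:

# Keep only the 6 accelerometer channels (body + total), dropping the gyroscope
ACCEL_ONLY_SIGNAL_TYPES = [s for s in INPUT_SIGNAL_TYPES if "gyro" not in s]

X_train_accel = load_X([
    DATASET_PATH + TRAIN + "Inertial Signals/" + signal + "train.txt"
    for signal in ACCEL_ONLY_SIGNAL_TYPES
])
# n_input then becomes 6 instead of 9; the rest of the pipeline is unchanged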

Improvements

In another open-source repository of mine, the accuracy is pushed up to nearly 94% using a special deep LSTM architecture which combines the concepts of bidirectional RNNs, residual connections, and stacked cells. This architecture is also tested on another, similar activity dataset. It resembles the nice architecture used in "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation", without an attention mechanism and with just the encoder part, as a "many to one" architecture instead of a "many to many", to adapt it to the Human Activity Recognition (HAR) problem. I also worked more on the problem and came up with the LARNN; however, it is complicated for just a little gain, so the current, original activity recognition project remains the better choice for its simplicity. We've also coded a non-deep-learning machine learning pipeline on the same dataset using classical featurization techniques and older machine learning algorithms.

If you want to learn more about deep learning, I have also built a list of the learning resources for deep learning that have proven the most useful to me, available here.

References

The dataset can be found on the UCI Machine Learning Repository:

Davide Anguita, Alessandro Ghio, Luca Oneto, Xavier Parra and Jorge L. Reyes-Ortiz. A Public Domain Dataset for Human Activity Recognition Using Smartphones. 21st European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, ESANN 2013. Bruges, Belgium, 24-26 April 2013.

Citation

Copyright (c) 2016 Guillaume Chevalier. To cite my code, you can point to the URL of the GitHub repository, for example:

Guillaume Chevalier, LSTMs for Human Activity Recognition, 2016, https://github.com/guillaume-chevalier/LSTM-Human-Activity-Recognition

My code is available for free, even for private usage, to anyone under the MIT License; however, I ask that you cite it if you use it.

Here is the BibTeX citation code:

@misc{chevalier2016lstms,
  title={LSTMs for human activity recognition},
  author={Chevalier, Guillaume},
  year={2016}
}

I have also published a second paper, with contributors, regarding a second iteration and improvement of this work, with deeper neural networks. The paper is available on arXiv. Here is the BibTeX citation for this newer work, which is based on this project:

@article{DBLP:journals/corr/abs-1708-08989,
  author    = {Yu Zhao and
               Rennong Yang and
               Guillaume Chevalier and
               Maoguo Gong},
  title     = {Deep Residual Bidir-LSTM for Human Activity Recognition Using Wearable
               Sensors},
  journal   = {CoRR},
  volume    = {abs/1708.08989},
  year      = {2017},
  url       = {http://arxiv.org/abs/1708.08989},
  archivePrefix = {arXiv},
  eprint    = {1708.08989},
  timestamp = {Mon, 13 Aug 2018 16:46:48 +0200},
  biburl    = {https://dblp.org/rec/bib/journals/corr/abs-1708-08989},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}

Extra links

Connect with me

Liked this project? Did it help you? Leave a star, fork and share the love!

This activity recognition project has been seen in:


# Let's convert this notebook to a README automatically for the GitHub project's title page:
!jupyter nbconvert --to markdown LSTM.ipynb
!mv LSTM.md README.md
[NbConvertApp] Converting notebook LSTM.ipynb to markdown
[NbConvertApp] Support files will be in LSTM_files/
[NbConvertApp] Making directory LSTM_files
[NbConvertApp] Making directory LSTM_files
[NbConvertApp] Writing 38654 bytes to LSTM.md

lstm-human-activity-recognition's People

Contributors

guillaume-chevalier, zhaoyu611


lstm-human-activity-recognition's Issues

Clean pipeline using Neuraxle

We should do something like the following to clean up the project using Neuraxle:

deep_learning_seq_classif_pipeline = EpochRepeater(Pipeline([
    TrainOnlyWrapper(DataShuffler(seed=42)),
    MiniBatchSequentialPipeline([
        ForEachDataInput(Pipeline([
            ToNumpy(np_dtype=np.float32),
            DefaultValuesFiller(0.0),
        ])),
        ClassificationLSTM(n_stacked=2, n_residual=3),
    ], batch_size=32),
]), epochs=200, fit_only=True)

Here, the ClassificationLSTM class would contain the actual TensorFlow code.

Can I use any smartphone sensors to test it?

I just wanted to test the model with a real smartphone. The data was gathered from a Samsung Galaxy S2, but I have a Huawei P20. Is the dataset suitable for other types of phones?

When I run train_and_save.py, I get this error

Traceback (most recent call last):
File "D:/GraduationCode/LSTM-HAR-latest/train_and_save.py", line 11, in
from neuraxle_tensorflow.tensorflow_v1 import TensorflowV1ModelStep
ModuleNotFoundError: No module named 'neuraxle_tensorflow'

Thanks.

ValueError: Variable rnn/multi_rnn_cell/cell_0/basic_lstm_cell/kernel already exists

When I run this code

# Graph input/output
x = tf.placeholder(tf.float32, [None, n_steps, n_input])
y = tf.placeholder(tf.float32, [None, n_classes])

# Graph weights
weights = {
    'hidden': tf.Variable(tf.random_normal([n_input, n_hidden])), # Hidden layer weights
    'out': tf.Variable(tf.random_normal([n_hidden, n_classes], mean=1.0))
}
biases = {
    'hidden': tf.Variable(tf.random_normal([n_hidden])),
    'out': tf.Variable(tf.random_normal([n_classes]))
}

pred = LSTM_RNN(x, weights, biases)

# Loss, optimizer and evaluation
l2 = lambda_loss_amount * sum(
    tf.nn.l2_loss(tf_var) for tf_var in tf.trainable_variables()
) # L2 loss prevents this overkill neural network from overfitting the data
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=pred)) + l2 # Softmax loss
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost) # Adam Optimizer

correct_pred = tf.equal(tf.argmax(pred,1), tf.argmax(y,1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

I get this error

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-20-7963db4edbf4> in <module>()
     14 }
     15 
---> 16 pred = LSTM_RNN(x, weights, biases)
     17 
     18 # Loss, optimizer and evaluation

<ipython-input-13-1da1ce9bcbd5> in LSTM_RNN(_X, _weights, _biases)
     24     lstm_cells = tf.contrib.rnn.MultiRNNCell([lstm_cell_1, lstm_cell_2], state_is_tuple=True)
     25     # Get LSTM cell output
---> 26     outputs, states = tf.contrib.rnn.static_rnn(lstm_cells, _X, dtype=tf.float32)
     27 
     28     # Get last time step's output feature for a "many to one" style classifier,

E:\Anaconda\lib\site-packages\tensorflow\python\ops\rnn.py in static_rnn(cell, inputs, initial_state, dtype, sequence_length, scope)
   1235             state_size=cell.state_size)
   1236       else:
-> 1237         (output, state) = call_cell()
   1238 
   1239       outputs.append(output)

E:\Anaconda\lib\site-packages\tensorflow\python\ops\rnn.py in <lambda>()
   1222         varscope.reuse_variables()
   1223       # pylint: disable=cell-var-from-loop
-> 1224       call_cell = lambda: cell(input_, state)
   1225       # pylint: enable=cell-var-from-loop
   1226       if sequence_length is not None:

E:\Anaconda\lib\site-packages\tensorflow\python\ops\rnn_cell_impl.py in __call__(self, inputs, state, scope)
    178       with vs.variable_scope(vs.get_variable_scope(),
    179                              custom_getter=self._rnn_get_variable):
--> 180         return super(RNNCell, self).__call__(inputs, state)
    181 
    182   def _rnn_get_variable(self, getter, *args, **kwargs):

E:\Anaconda\lib\site-packages\tensorflow\python\layers\base.py in __call__(self, inputs, *args, **kwargs)
    448         # Check input assumptions set after layer building, e.g. input shape.
    449         self._assert_input_compatibility(inputs)
--> 450         outputs = self.call(inputs, *args, **kwargs)
    451 
    452         # Apply activity regularization.

E:\Anaconda\lib\site-packages\tensorflow\python\ops\rnn_cell_impl.py in call(self, inputs, state)
    936                                       [-1, cell.state_size])
    937           cur_state_pos += cell.state_size
--> 938         cur_inp, new_state = cell(cur_inp, cur_state)
    939         new_states.append(new_state)
    940 

E:\Anaconda\lib\site-packages\tensorflow\python\ops\rnn_cell_impl.py in __call__(self, inputs, state, scope)
    178       with vs.variable_scope(vs.get_variable_scope(),
    179                              custom_getter=self._rnn_get_variable):
--> 180         return super(RNNCell, self).__call__(inputs, state)
    181 
    182   def _rnn_get_variable(self, getter, *args, **kwargs):

E:\Anaconda\lib\site-packages\tensorflow\python\layers\base.py in __call__(self, inputs, *args, **kwargs)
    448         # Check input assumptions set after layer building, e.g. input shape.
    449         self._assert_input_compatibility(inputs)
--> 450         outputs = self.call(inputs, *args, **kwargs)
    451 
    452         # Apply activity regularization.

E:\Anaconda\lib\site-packages\tensorflow\python\ops\rnn_cell_impl.py in call(self, inputs, state)
    399       c, h = array_ops.split(value=state, num_or_size_splits=2, axis=1)
    400 
--> 401     concat = _linear([inputs, h], 4 * self._num_units, True)
    402 
    403     # i = input_gate, j = new_input, f = forget_gate, o = output_gate

E:\Anaconda\lib\site-packages\tensorflow\python\ops\rnn_cell_impl.py in _linear(args, output_size, bias, bias_initializer, kernel_initializer)
   1037         _WEIGHTS_VARIABLE_NAME, [total_arg_size, output_size],
   1038         dtype=dtype,
-> 1039         initializer=kernel_initializer)
   1040     if len(args) == 1:
   1041       res = math_ops.matmul(args[0], weights)

E:\Anaconda\lib\site-packages\tensorflow\python\ops\variable_scope.py in get_variable(name, shape, dtype, initializer, regularizer, trainable, collections, caching_device, partitioner, validate_shape, use_resource, custom_getter)
   1063       collections=collections, caching_device=caching_device,
   1064       partitioner=partitioner, validate_shape=validate_shape,
-> 1065       use_resource=use_resource, custom_getter=custom_getter)
   1066 get_variable_or_local_docstring = (
   1067     """%s

E:\Anaconda\lib\site-packages\tensorflow\python\ops\variable_scope.py in get_variable(self, var_store, name, shape, dtype, initializer, regularizer, reuse, trainable, collections, caching_device, partitioner, validate_shape, use_resource, custom_getter)
    960           collections=collections, caching_device=caching_device,
    961           partitioner=partitioner, validate_shape=validate_shape,
--> 962           use_resource=use_resource, custom_getter=custom_getter)
    963 
    964   def _get_partitioned_variable(self,

E:\Anaconda\lib\site-packages\tensorflow\python\ops\variable_scope.py in get_variable(self, name, shape, dtype, initializer, regularizer, reuse, trainable, collections, caching_device, partitioner, validate_shape, use_resource, custom_getter)
    358           reuse=reuse, trainable=trainable, collections=collections,
    359           caching_device=caching_device, partitioner=partitioner,
--> 360           validate_shape=validate_shape, use_resource=use_resource)
    361     else:
    362       return _true_getter(

E:\Anaconda\lib\site-packages\tensorflow\python\ops\variable_scope.py in wrapped_custom_getter(getter, *args, **kwargs)
   1403     return custom_getter(
   1404         functools.partial(old_getter, getter),
-> 1405         *args, **kwargs)
   1406   return wrapped_custom_getter
   1407 

E:\Anaconda\lib\site-packages\tensorflow\python\ops\rnn_cell_impl.py in _rnn_get_variable(self, getter, *args, **kwargs)
    181 
    182   def _rnn_get_variable(self, getter, *args, **kwargs):
--> 183     variable = getter(*args, **kwargs)
    184     trainable = (variable in tf_variables.trainable_variables() or
    185                  (isinstance(variable, tf_variables.PartitionedVariable) and

E:\Anaconda\lib\site-packages\tensorflow\python\ops\rnn_cell_impl.py in _rnn_get_variable(self, getter, *args, **kwargs)
    181 
    182   def _rnn_get_variable(self, getter, *args, **kwargs):
--> 183     variable = getter(*args, **kwargs)
    184     trainable = (variable in tf_variables.trainable_variables() or
    185                  (isinstance(variable, tf_variables.PartitionedVariable) and

E:\Anaconda\lib\site-packages\tensorflow\python\ops\variable_scope.py in _true_getter(name, shape, dtype, initializer, regularizer, reuse, trainable, collections, caching_device, partitioner, validate_shape, use_resource)
    350           trainable=trainable, collections=collections,
    351           caching_device=caching_device, validate_shape=validate_shape,
--> 352           use_resource=use_resource)
    353 
    354     if custom_getter is not None:

E:\Anaconda\lib\site-packages\tensorflow\python\ops\variable_scope.py in _get_single_variable(self, name, shape, dtype, initializer, regularizer, partition_info, reuse, trainable, collections, caching_device, validate_shape, use_resource)
    662                          " Did you mean to set reuse=True in VarScope? "
    663                          "Originally defined at:\n\n%s" % (
--> 664                              name, "".join(traceback.format_list(tb))))
    665       found_var = self._vars[name]
    666       if not shape.is_compatible_with(found_var.get_shape()):

ValueError: Variable rnn/multi_rnn_cell/cell_0/basic_lstm_cell/kernel already exists, disallowed. Did you mean to set reuse=True in VarScope? Originally defined at:

  File "E:\Anaconda\lib\site-packages\tensorflow\python\framework\ops.py", line 1204, in __init__
    self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access
  File "E:\Anaconda\lib\site-packages\tensorflow\python\framework\ops.py", line 2630, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "E:\Anaconda\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 767, in apply_op
    op_def=op_def)

something about video

Hi, thank you for your wonderful tutorial. There are three lines in the video; they may represent the three data sets in your code. At every moment, each line has only one value. I wonder how you get the 3-dimensional data? Would you help me understand it? Thank you!

Realtime classification

Hi!

Thanks for posting this code. Do you think this approach will work for realtime classification of human activity?

Issue with building the neural network on the current version of TensorFlow

# Graph input/output
x = tf.placeholder(tf.float32, [None, n_steps, n_input])
y = tf.placeholder(tf.float32, [None, n_classes])

# Graph weights
weights = {
    'hidden': tf.Variable(tf.random_normal([n_input, n_hidden])), # Hidden layer weights
    'out': tf.Variable(tf.random_normal([n_hidden, n_classes], mean=1.0))
}
biases = {
    'hidden': tf.Variable(tf.random_normal([n_hidden])),
    'out': tf.Variable(tf.random_normal([n_classes]))
}

pred = LSTM_RNN(x, weights, biases)

# Loss, optimizer and evaluation
l2 = lambda_loss_amount * sum(
    tf.nn.l2_loss(tf_var) for tf_var in tf.trainable_variables()
) # L2 loss prevents this overkill neural network from overfitting the data
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=pred)) + l2 # Softmax loss
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost) # Adam Optimizer

correct_pred = tf.equal(tf.argmax(pred,1), tf.argmax(y,1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

for this part of the code

Validation on Saved Checkpoint

I want to validate on the validation data using saved checkpoints, but I am unable to load the saved checkpoints. Can you provide a validation script that runs on saved checkpoints? Thank you.

How to keep the best performance model

Hi Guillaume,
Thanks for your great post; it helps me a lot.
When training the RNN model, the model gives a very high performance in the middle of the training process, while after all the iterations the final performance is not the best. I am not sure whether this is normal. Is there any way to keep the best-performing model from the training process, instead of the final model after all the iterations?
Thanks!
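One common approach (a sketch, assuming this notebook's TF1 graph and a hypothetical checkpoint path): track the best test accuracy and save with tf.train.Saver only when it improves. The same Saver can later restore that checkpoint, which also addresses the validation-on-checkpoints question above.

saver = tf.train.Saver()
best_accuracy = 0.0

# ...inside the training loop, right after computing `acc` on the test set:
if acc > best_accuracy:
    best_accuracy = acc
    saver.save(sess, "./checkpoints/best_model.ckpt")  # hypothetical path

# Later, with the same graph built, restore the best weights for validation:
with tf.Session() as sess2:
    saver.restore(sess2, "./checkpoints/best_model.ckpt")
    val_acc = sess2.run(accuracy, feed_dict={x: X_test, y: one_hot(y_test)})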

Upgrade to the latest tensorflow ?

The TensorFlow version used here is out of date; the latest TensorFlow has some changes in rnn and rnn_cell. It would be great if someone could upgrade this project.

Less data

I used your LSTM black box on a small training dataset and got very poor results. Could you suggest any ideas?

How to know the values of all kind of weights

Hello,
I am really impressed by your work, but I have run into some issues.
I used tf.train.Saver to get all the ops and save the model. However, when I printed out all of tf.global_variables, I couldn't tell what each individual array means. Is there a way I can know the details of the variables that I saved?

Changing batch size

Hello
I tried to change the batch size from 1500 to 100, since I am using different features.
The input dimension of my features is 4096 instead of your 128.
But this caused a Resource exhausted problem, so I tried to reduce the batch size. Now the problem is that I am getting this error:
ValueError: Cannot feed value of shape (100, 4) for Tensor u'Placeholder_1:0', which has shape '(?, 14)'
Thank you in advance for the help.

live recognition activity

Hello, thanks for the great GitHub repo 👍
Is this model capable of detecting and recognizing activity live from a laptop's camera?

How to use this model

I am able to download and run this code. But could you please provide steps to use it for identifying human activity from a video or any image?

IndexError: index 3 is out of bounds for axis 0 with size 3

Hello,

I'm trying to run your code with a smaller Dataset. I have :
X_train.shape (61,100,75)
y_train.shape (61,1)
X_test.shape (27,100,75)
y_test.shape (27,1)

I changed the number of classes (3) but I still have this error:

[image of the error message]

Do you know where my problem is?

Thanks a lot for your help and thank you for your code !!

Robin Fays

LSTM + KNN

Hey,

Do you know if it is possible to use 1 LSTM and 1 KNN instead of 2 LSTMs?

Thanks in advance,

R.F.

Model application in android

Hi, @guillaume-chevalier
Can this work be applied to Android? I have looked at other open-source projects but encountered many problems in the application process. Can you give me some suggestions? Thank you.

Why change the dimensional of inputs from n_inputs to n_hidden?

I'm interested in this brilliant project but I have some doubts about it. Hopefully someone can help me a little, thank you very much!
I used to believe num_units (config.n_hidden=32) of BasicLSTMCell is the dimension of the hidden cell state, while input_size (config.n_inputs=9) is the dimension of an input vector, i.e. one set of 9-axis sensor data, at one time step of a time series, of one sliding window, out of a batch.
Could somebody tell me if I'm getting this right?

Also, in

# Linear activation
_X = tf.nn.relu(tf.matmul(_X, config.W['hidden']) + config.biases['hidden'])
# Split data because rnn cell needs a list of inputs for the RNN inner loop
_X = tf.split(_X, config.n_steps, 0)
# new shape: n_steps * (batch_size, n_hidden)

Why transform the input size from 9 to 32? Is it a necessary step to make the model mathematically right? My guess is that it intentionally makes the input, output and hidden state of a cell all the same dimension. (If true, how do we benefit from it? Because as far as I know, the LSTM cell concatenates the inputs _X and the hidden state ht-1, so with and without this ReLU transformation, the only difference is in the weight mapping inside the LSTM cell: (9 + 32)--->(32) versus (32 + 32)-->(32)?)

I kind of want to know for sure how tf.contrib.rnn.static_rnn() combines "inputs + hidden state" in the forward propagation, i.e. what the dimensions of the weight matrix inside the LSTM are.

Or maybe it is just a "fully connected input layer" or something like that, which helps the overall learning process? Actually, I've tried to remove it. The code still worked, but the accuracy dropped a little, so I'm even more confused. Please help me with this!
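For what it's worth, here is a sketch of the dimensions involved, based on how TF1's BasicLSTMCell builds its kernel (it concatenates the input with the previous hidden state, giving a kernel of shape (input_dim + n_hidden, 4 * n_hidden)):

n_input, n_hidden = 9, 32

# Without the ReLU projection, the first cell's kernel would be:
kernel_without_proj = (n_input + n_hidden, 4 * n_hidden)   # (41, 128)

# With the ReLU projection mapping 9 -> 32 features first, it becomes:
kernel_with_proj = (n_hidden + n_hidden, 4 * n_hidden)     # (64, 128)

# So the projection acts as a learned fully connected input layer; the model
# still works without it, at the cost of a little accuracy, as observed.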

Creating chunks of segments

Hi,

Thank you for putting these immense resources in one place.

  1. May I please ask why the training series is split into fixed-sized chunks/blocks?

  2. Suppose I choose to use a classical machine learning classifier, such as a random forest; would you also recommend creating these fixed-sized chunks first?

Regards.

Shape of input signal (7352, 128, 9)

Hi Guillaume,

First of all, thanks a lot for this wonderful walkthrough of Human Activity Recognition using LSTMs. Your code worked like a charm on my CPU. I'm new to deep learning and don't have a lot of practical experience with RNN/LSTM. Your code has provided an excellent guide to see what's happening under the hood.

This isn't really an issue, but I was wondering how the number 128 came about. I know it's the number of timesteps per series. Does this number 128 refer to the number of columns in each of the 9 input text files? More abstractly, I am not sure how to choose the number of time steps. Is there any guideline for doing this?

Thanks a lot,

  • Madhav
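As a quick check against the dataset description quoted earlier, the 128 falls straight out of the windowing parameters (and yes, each row of the 9 input text files has 128 columns):

window_seconds = 2.56    # fixed-width sliding window, from the dataset description
sampling_rate_hz = 50    # smartphone sensor sampling rate after preprocessing
n_steps = int(window_seconds * sampling_rate_hz)  # 2.56 * 50 = 128 readings/window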

LSTM model is giving a ValueError while predicting on X_test data

Hi, I need help solving a ValueError while running the LSTM. It seems everything works fine on the training data, but prediction generates fewer dimensions than expected.
My x_train data shape is (846, 30, 3), my y_train data shape is (846,), my x_test (363, 30, 3), my y_test (363).
yhat = modell.predict(test_X) generates (363, 100).

part of the code

# reshape input to be 3D [samples, timesteps, features]

train_X = tr_xval.reshape((tr_xval.shape[0], 30, 3))
test_X = ts_xval.reshape((ts_xval.shape[0], 30, 3))
train_y=tr_yval
test_y=ts_yval
print(train_X.shape, train_y.shape, test_X.shape, test_y.shape)

# design network

modell = Sequential()
modell.add(LSTM(200, activation='relu',input_shape=(train_X.shape[1], train_X.shape[2]),return_sequences=False,stateful=False))

#model.add(LSTM(neurons, batch_input_shape=(batch_size, X.shape[1], X.shape[2]), stateful=True))
modell.add(Dense(100, activation='relu'))
modell.compile(loss='mae', optimizer='adam',metrics=['accuracy'])
modell.summary()

# fit network

history = modell.fit(train_X, train_y, epochs=200, batch_size=72, validation_data=(test_X, test_y), verbose=2, shuffle=False)

# works fine until here, but then
yhat = modell.predict(test_X)

from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler(feature_range=(0, 1))
values=lnorm.values.astype('float32')
scaled = scaler.fit_transform(values)

# invert scaling for forecast

inv_yhat = np.concatenate((yhat, test_X[:, -2:]), axis=1)
inv_yhat = scaler.inverse_transform(inv_yhat)
inv_yhat = inv_yhat[:,0]

# invert scaling for actual

test_y = test_y.reshape((len(test_y), 1))
inv_y = np.concatenate((test_y, test_X[:, -2:]), axis=1)
inv_y = scaler.inverse_transform(inv_y)
inv_y = inv_y[:,0]


ValueError Traceback (most recent call last)
C:\Users\M55F1~1.AYU\AppData\Local\Temp/ipykernel_7572/1751946881.py in
5
6 # invert scaling for forecast
----> 7 inv_yhat = np.concatenate((yhat, test_X[:, -2:]), axis=1)
8 inv_yhat = scaler.inverse_transform(inv_yhat)
9 inv_yhat = inv_yhat[:,0]

<array_function internals> in concatenate(*args, **kwargs)

ValueError: all the input arrays must have same number of dimensions, but the array at index 0 has 2 dimension(s) and the array at index 1 has 3 dimension(s)

Query regarding data collection

Hi @guillaume-chevalier, your code is very useful. I wanted to know whether the collected data came from multiple users or from a single user? And if it is from multiple users, are all users present in both train and test (overlapping), or do train and test have different users?

How to set up number of Epoch

I want to set the number of times the algorithm sees the entire data set, i.e. the number of epochs.
Where can I set this in the code?
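There is no explicit epoch variable in this notebook; the number of passes over the data is implied by training_iters. A sketch of adjusting it, reusing the variables defined earlier:

n_epochs = 300  # the notebook effectively trains for 300 epochs
training_iters = training_data_count * n_epochs
# The training loop runs while step * batch_size <= training_iters, and
# extract_batch_size() wraps around the dataset, so each full pass over
# the 7352 training series counts as one epoch.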

IndexError when n_classes < 6

Hello,

First, thank you very much for your code. It has helped me out a lot.

I tried to change n_classes to 2, as I am only classifying between two states. However, I receive an IndexError whenever I reduce n_classes below 6. The error message is below:

---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
<ipython-input-15-581f85d3f7ff> in <module>()
     43             feed_dict={
     44                 x: X_test,
---> 45                 y: one_hot(y_test)
     46             }
     47         )

<ipython-input-13-7d65b978d73d> in one_hot(y_, n_classes)
     50     # Function to encode output labels from number indexes
     51     y_ = y_.reshape(len(y_))
---> 52     return np.eye(n_classes)[np.array(y_, dtype=np.int32)]  # Returns FLOATS

IndexError: index 2 is out of bounds for axis 0 with size 2

I'm not sure if I'm just misunderstanding what n_classes is supposed to represent or if there is a bug when it is reduced below 6 (any number I set it to that is greater than 6 still works).

My data has the shape:

X_train: (6312, 50, 9)
y_train: (6312, 1)
X_test: (1578, 50, 9)
y_test: (1578, 1) 

Where the sole feature in the y arrays is labelled either 1 or 2 for my two classes.

My hyperparameters are currently set to:

training_data_count = len(X_train)
test_data_count = len(X_test)
n_steps = len(X_train[0])
n_input = len(X_train[0][0])

# NN Internal Structure

n_hidden = 32
n_classes = 2

# Training

learning_rate = 0.001
lambda_loss_amount = 0.0015
training_iters = training_data_count * 300  # Loop 300 times on the dataset
batch_size = 1500
display_iter = 30000  # To show test set accuracy during training

and the one_hot function that I'm using is a fix that you suggested in another issue

def one_hot(y_, n_classes=n_classes):
    # Function to encode output labels from number indexes 
    y_ = y_.reshape(len(y_))
    return np.eye(n_classes)[np.array(y_, dtype=np.int32)]  # Returns FLOATS

Any help with this is much appreciated.

Best,
Sean
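A likely cause, given this traceback: one_hot indexes np.eye(n_classes) directly with the label values, so labels must be 0-based; with two classes labelled 1 and 2, the label 2 overflows np.eye(2). A sketch of a fix for 1-based labels (mirroring what load_y does for the original dataset):

def one_hot(y_, n_classes=n_classes):
    y_ = y_.reshape(len(y_))
    y_ = y_ - 1  # shift labels {1, 2} down to {0, 1} before one-hot encoding
    return np.eye(n_classes)[np.array(y_, dtype=np.int32)]  # Returns FLOATS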

Something about the coordinate of IMU

Hi, thank you for your great work!
I have a question about the IMU's coordinate frame in your dataset. When you collected the acc and gyro data, what was the IMU's coordinate frame? That is, what directions are the x-axis, y-axis and z-axis respectively?
Looking forward to your reply!
Thank you!

Different Performance Using the Current Version of lstm.py with TensorFlow r1.0

To fit the current code to the newly released TensorFlow r1.0, I made several modifications to the code:

In the Loading Function

#line 25:
file = open(signal_type_path, 'rb')    ===>>>     file = open(signal_type_path, 'r')

#line 40:
file = open(y_path, 'rb')     ===>>>    file = open(y_path, 'r')

In the LSTM_NETWORK() Function

#line 110:     
hidden = tf.split(0, config.n_steps, hidden)     ===>>>    hidden = tf.split(hidden, config.n_steps, 0)

#line 114    
lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(config.n_hidden, forget_bias=1.0)    ===>>>    lstm_cell = tf.contrib.rnn.BasicLSTMCell(config.n_hidden, forget_bias=1.0)

#line 117 
lsmt_layers = tf.nn.rnn_cell.MultiRNNCell([lstm_cell] * 2)    ===>>>    lsmt_layers = tf.contrib.rnn.MultiRNNCell([lstm_cell] * 2)

#line 120
outputs, _ = tf.nn.rnn(lsmt_layers, hidden, dtype=tf.float32)    ===>>>    outputs, _ = tf.contrib.rnn.static_rnn(lsmt_layers, hidden, dtype=tf.float32)

In main()

#line 216         
tf.nn.softmax_cross_entropy_with_logits(pred_Y, Y)) + l2    ===>>>    tf.nn.softmax_cross_entropy_with_logits(labels=pred_Y,logits= Y)) + l2

#line 228
tf.initialize_all_variables().run()    ===>>>    tf.global_variables_initializer().run()

However, when I ran the code, the performance did not seem as good as shown in the readme file. I want to know whether my modifications contain some mistakes. The results are shown below:

traing iter: 0, test accuracy : 0.34781134128570557, loss : 1.3058252334594727
traing iter: 1, test accuracy : 0.3338988721370697, loss : 1.5186371803283691
traing iter: 2, test accuracy : 0.287750244140625, loss : 1.7945531606674194
traing iter: 3, test accuracy : 0.2789277136325836, loss : 2.190826416015625
traing iter: 4, test accuracy : 0.36274176836013794, loss : 2.607555866241455
traing iter: 5, test accuracy : 0.3366135060787201, loss : 2.898186206817627
traing iter: 6, test accuracy : 0.235154390335083, loss : 3.007314443588257
traing iter: 7, test accuracy : 0.18154054880142212, loss : 3.0111827850341797
traing iter: 8, test accuracy : 0.18052256107330322, loss : 2.9800398349761963
traing iter: 9, test accuracy : 0.18052256107330322, loss : 2.953343391418457
[... iterations 10-146 omitted: test accuracy stays fixed at 0.18052256107330322 on every line while the loss falls steadily from 2.93 to 0.0088 ...]
traing iter: 147, test accuracy : 0.18052256107330322, loss : 0.004210382699966431
traing iter: 148, test accuracy : 0.18052256107330322, loss : -0.0002478770911693573
[... iterations 149-192 omitted: accuracy unchanged, loss keeps sinking below zero ...]
traing iter: 193, test accuracy : 0.18052256107330322, loss : -0.07821857929229736
[... iterations 194-298 omitted: accuracy unchanged, loss creeps back up toward zero ...]
traing iter: 299, test accuracy : 0.18052256107330322, loss : -0.01624903827905655

final test accuracy: 0.18052256107330322
best epoch's test accuracy: 0.36274176836013794
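
Judging from this output, the most likely culprit is the cross-entropy change at line 216. `tf.nn.softmax_cross_entropy_with_logits` returns a non-negative value whenever `labels` receives a valid probability distribution such as a one-hot target, so a loss that sinks below zero is a strong sign that raw logits were passed as labels. In the original positional call `(pred_Y, Y)` the logits came first, so the keyword form quoted above swaps the two tensors. A one-line fix, assuming as before that `Y` holds the one-hot targets and `pred_Y` the network outputs:

```python
# labels = one-hot ground truth, logits = raw network outputs
cost = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=Y, logits=pred_Y)) + l2
```

With the tensors swapped, the "loss" is essentially linear in the network outputs and carries no classification signal, which matches the log: the optimizer drives it below zero while the test accuracy stays pinned around 18%, roughly the share of a single class among the six activities.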
