
adriannunez / fall-detection-with-cnns-and-optical-flow


Repository containing the material required to reproduce the results of the paper "Vision-Based Fall Detection with Convolutional Neural Networks"

Home Page: https://www.hindawi.com/journals/wcmc/2017/9474806/

License: MIT License

Python 100.00%

fall-detection-with-cnns-and-optical-flow's People

Contributors

adriannunez


fall-detection-with-cnns-and-optical-flow's Issues

Transfer learning for slip detection

Hi Adrian,

I have successfully reproduced your paper's experiment and would like to try my own dataset. I am wondering whether a dataset of slips could be used to train a slip detection system.

Usually, slips are harder to notice, and the starting point of a slip is ambiguous.

Missing code lines in temporalnetmulticam.py

First of all, I really appreciate your work on this. I think there is a missing part in your code in the temporalnetmulticam file, around line #500, where you are doing batch training.

saveFeatures() may forget 'y_images'

Hi, in the Python file temporalnet_urfd.py, the function saveFeatures() loads the 'x_images' into the folders and classes lists, but the 'y_images' are never loaded.
Did you forget the code that loads the 'y_images' into those two lists?

Oh sorry, I figured it out myself.

bug in xxxxmulticam.py

Hi Adrian,

After trying to fix a few bugs in the code file, I ran into another error when running:

"Too many arguments for format string" at lines #241 and #242

I wasn't sure how you structured your dataset folder for Multicam. I was wondering if you could upload part of the dataset used in that experiment, since the whole dataset would be huge.

Best regards,

real-time applications

Hi! I want to ask whether this code can be used in real-time applications. For example, I want to monitor the actions of the elderly as they happen; would that work?

Not using VGG16?

Perhaps I am mistaken, but is VGG16 actually used?

Looking at the code tf2_temporalnet_urfd3.py, the model for VGG16 is no longer used after line 376: model.compile(optimizer=adam, loss='categorical_crossentropy',metrics=['accuracy'])

The input layer for the classifier is:
Input(shape=(num_features,),
dtype='float32', name='input_{}'.format(fold_number))
which is not connected to the model at all.

In fact, the code still runs when commenting out the entire VGG16 section, as well as any lines involving model, and the accuracy is (so far from my testing) unaffected!
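That observation is consistent with a pipeline where the features are precomputed and only the small classifier is trained, so removing the VGG16 section changes nothing at classification time. A minimal sketch of how the two parts would normally connect (the names `backbone` and `classifier` here are hypothetical, not from the repository):

```python
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.models import Model

# Hypothetical sketch: run the convolutional VGG16 backbone once to get a
# flat feature vector, then feed that vector to the small classifier whose
# input is Input(shape=(num_features,)). If the script instead loads
# precomputed features from disk, the in-memory VGG16 model is never used.
backbone = VGG16(weights=None, include_top=False, input_shape=(224, 224, 3))
features = backbone.predict(np.zeros((1, 224, 224, 3), dtype="float32"))
features = features.reshape(1, -1)            # (1, num_features)

num_features = features.shape[1]              # 7 * 7 * 512 = 25088
inp = Input(shape=(num_features,), dtype="float32", name="input_fold")
out = Dense(1, activation="sigmoid")(inp)
classifier = Model(inp, out)
pred = classifier.predict(features)           # shape (1, 1)
```

If the classifier is only ever fed features loaded from .h5 files, that would explain why commenting out VGG16 leaves the accuracy unchanged.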

Are these lines right in temporalnet_combined.py?

    # Step 2
    all0_multicam = np.random.choice(all0_multicam, size_0, replace=False)
    all1_multicam = np.random.choice(all1_multicam, size_0, replace=False)
    all0_urfd = np.random.choice(all0_urfd, size_0, replace=False)
    all1_urfd = np.random.choice(all1_urfd, size_0, replace=False)
    all0_fdd = np.random.choice(all0_fdd, size_0, replace=False)
    all1_fdd = np.random.choice(all1_fdd, size_0, replace=False)

All of them use size_0; size_1 is never used.
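If the intent was per-class undersampling, the likely fix is for the "1"-class arrays to use size_1. A minimal sketch with stand-in arrays (the variable names mirror the script, but the array sizes and budgets here are hypothetical):

```python
import numpy as np

# Stand-ins for the per-class index arrays in temporalnet_combined.py.
rng = np.random.default_rng(0)
all0_urfd = np.arange(100)   # class 0 samples (hypothetical sizes)
all1_urfd = np.arange(300)   # class 1 samples
size_0 = 50                  # sample budget for class 0
size_1 = 50                  # sample budget for class 1 -- presumably intended below
all0_urfd = rng.choice(all0_urfd, size_0, replace=False)
all1_urfd = rng.choice(all1_urfd, size_1, replace=False)  # was size_0 in the repo
```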

Test Fall detection on a new video

Hello,

First, thanks for sharing this work.
My question is: how can I apply your algorithm to a new video? What should I do?
(I tried passing frames to the model, but it didn't work.)

Thanks a lot

'keras.optimizers'

from keras.optimizers import Adam
ImportError: cannot import name 'Adam' from 'keras.optimizers' (C:\Users\vaska\AppData\Roaming\Python\Python310\site-packages\keras\optimizers.py)
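This error usually means the script was written for standalone Keras while a TF2-era Keras is installed, where the optimizer classes live under tensorflow.keras.optimizers. A hedged fix:

```python
# In TF2-era installs, Adam is imported from tensorflow.keras, not from
# the legacy standalone keras.optimizers module.
from tensorflow.keras.optimizers import Adam

opt = Adam(learning_rate=1e-4)  # 'lr' was renamed to 'learning_rate' in Keras 2.3+
```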

Not Able To Follow The Codes

Hello,
Hello,
I read the paper and found it quite interesting. I want to replicate it on my own dataset, but the code listed here is very difficult to follow. Could you please include a README file? Also, where are the weights obtained after training the network on the action recognition datasets? Please help me out.

global name 'feature_extractor' is not defined

Hi,

I tried to use the function 'test_video', but it seems the input 'feature_extractor' is never defined.

Is it defined in a file 'feature_extractor.py' or somewhere else? Where can I get that input variable?

Thank you.

Couldn't set up environment properly

Hi Adrian,

I read your paper and I am interested in trying it on my own dataset. However, I think the package versions are not compatible with each other. I tried installing requirements.txt in the terminal, but a few issues came up. For example, with Python 2.7.17 I could only install Pypubsub 3.3.0 instead of 4.0.0, and tensorflow 1.12.0 was not installable. I tried version 1.15.0 instead, but it prompted that it only supports Python 3.4 or higher.

My OS: Windows 10 Version 10.0.18362 Build 18362

Is it possible for you to update the dependency list a bit? Is there a way to install the modules with Python 3.7?

Thanks in advance!

Help with Keras version

Please, I need your help: I can't load the retrained VGG-16 model.
I get this error at WEIGHT INITIALIZATION:

File "", line 6, in
layer_dict[layer].W.set_value(w2)
AttributeError: 'Conv2D' object has no attribute 'W'

When I searched for this error, I found a post that says:

Please read: https://github.com/fchollet/keras/wiki/Keras-2.0-release-notes
Also, code tells you everything. Change to layer.kernel, layer.bias

I don't know what I have to do. Please help me by telling me your Keras version, or by updating the code to the latest Keras.
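As the linked release notes say, Keras 2 removed the .W/.b attributes; the modern way to overwrite a layer's weights is set_weights(). A minimal sketch with a stand-in model (the w2/b2 arrays here are dummy values, not the repository's pretrained weights):

```python
import numpy as np
from tensorflow.keras.layers import Conv2D, Input
from tensorflow.keras.models import Model

# Build a stand-in model with one named Conv2D layer.
inp = Input(shape=(8, 8, 3))
out = Conv2D(4, (3, 3), name="conv1")(inp)
model = Model(inp, out)

# Keras 2+: replace layer_dict[layer].W.set_value(w2) with set_weights().
layer = model.get_layer("conv1")
kernel, bias = layer.get_weights()
w2 = np.ones_like(kernel)   # dummy stand-in for the loaded kernel
b2 = np.zeros_like(bias)    # dummy stand-in for the loaded bias
layer.set_weights([w2, b2])
```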

Typo in saveFeatures function in temporalnet_multicam.py

y_images = glob.glob(not_fall + '/flow_x*.jpg')

should be

y_images = glob.glob(not_fall + '/flow_y*.jpg')

Question:
Were the extracted features and labels for Multicam produced with this script?
I tried to reproduce the results and got very poor sensitivity (less than 50%) with those extracted features and labels; I'm not sure whether this was caused by wrong features.

Analysis of False Alarms and Missed Detection

Hi Adrian,

How did you analyze false alarms and missed detections, as in section 4.3.1 of your paper?

I read through the scripts, but they don't seem to include such a part. Given the results in 4.3.1, I assume there is a way to trace the errors back to their original video clips?

I am not quite sure how to achieve that. I would much appreciate it if you could enlighten me.

test_video function

Hi!
I tried to run the test_video function:
saved_model = load_model("/home/VGG16/models/urfd_fold_5.h5")
path = "/home/OF_NewData_1/Falls/fall_fall-01"
pred, truth = test_video(saved_model, path, 1)
print(pred)
But I keep on getting the error : ValueError: Error when checking : expected input to have 2 dimensions, but got array with shape (1, 20, 224, 224)
at : prediction = feature_extractor.predict(np.expand_dims(flow[i, ...], 0))
I'm still very new to neural nets, so I might be doing something wrong, but I would really appreciate help on how to generate predictions using your code.
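The shape in the error suggests that urfd_fold_5.h5 is the small classifier (which expects a flat feature vector), while `flow` is the raw channels-first optical-flow stack. Under that assumption, a sketch of the reordering needed before prediction (the predict calls are left as comments since the models aren't loaded here):

```python
import numpy as np

# The error reports input shape (1, 20, 224, 224): a raw stack of 20
# optical-flow channels in channels-first order. The convolutional
# feature extractor (TensorFlow-backed Keras) expects channels last,
# and the saved classifier expects the flattened features it produces.
flow = np.zeros((1, 20, 224, 224), dtype="float32")   # shape from the error
flow = np.transpose(flow, (0, 2, 3, 1))               # -> (1, 224, 224, 20)
# features = feature_extractor.predict(flow)          # conv part of the network
# prediction = saved_model.predict(features)          # then the classifier
```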
Thanks!

Detect fall from Live Camera feed

Can I please know whether it is possible to capture frames from a live camera and detect a fall from each frame, or from a set of consecutive frames?

test a picture

Hello, may I ask: if I want to test whether a single picture shows a fall, how should I change the code? Thank you.

Original DataSet

Adrian, thanks for sharing this excellent work. I need to train on my own dataset, but I won't have that dataset for some time, and in the meantime I want to better understand your methodology. I see that you had some storage limitations. If I open an S3 bucket for you, could you transfer the original dataset you used for training? If not, could you link me to a good dataset I could train on?


FDD Dataset Missing Labels

Hi Adrian, thank you for putting your code on GitHub, it has been really useful for me as I work on my own project. I want to ask about where you get the labels for the FDD dataset. Specifically, for Lecture_room and Office 1 & 2, I couldn't find the labels in the original dataset on this website: http://le2i.cnrs.fr/Fall-detection-Dataset. Did you manually label the videos in those folders yourself, or did you get it from somewhere else, perhaps the author of the dataset? Thank you very much in advance!

Testing the model on live videos

I have gone through your paper and the "temporalneturfd.py" code, which was really helpful for building up the concept. I am able to build and test the model on the URFD dataset. Since I have a requirement to classify falls in live video, would it be possible for you to share code for streaming live video from a laptop webcam and converting it into RGB images? Once the test data is converted into RGB images, can we use the existing model to classify the RGB images rather than optical flow?
I want to avoid the optical flow conversion, since it takes a lot of time. For testing, can I use RGB images directly?

Size of the Optical Flow Images

Could you please specify the size of the computed optical flow images? If I am not wrong, it should be 224 x 224. My optical flow image is of size 640 x 480. Do I need to resize it? Please help me with this.

Thank You

No module named 'vgg16module'

This error comes up on line 13 when running temporalneturfd.py with Python 3.

I tried using the vgg16module.py from the deeplearning-activity-recognition project, but then the problem became: No module named 'imagenet_utils'.

Keras 2.1.2 is installed. weights.h5, flow_mean.mat and labels_urfd.h5 are also included in the folder.

Error running temporalneturfd.py

Hi,

I have tried running temporalneturfd.py to produce a trained model, but I am getting the following error, and I am not entirely sure how to go about fixing it.

Any suggestions would be helpful.

Traceback (most recent call last):
  File "temporalneturfd.py", line 602, in <module>
    main()
  File "temporalneturfd.py", line 422, in main
    train_indices_0, val_indices_0 = next(indices_0)
  File "/opt/apps/anaconda3/envs/wmlce-v1.6.2-py3.7/lib/python3.7/site-packages/sklearn/model_selection/_split.py", line 1323, in split
    for train, test in self._iter_indices(X, y, groups):
  File "/opt/apps/anaconda3/envs/wmlce-v1.6.2-py3.7/lib/python3.7/site-packages/sklearn/model_selection/_split.py", line 1624, in _iter_indices
    default_test_size=self._default_test_size)
  File "/opt/apps/anaconda3/envs/wmlce-v1.6.2-py3.7/lib/python3.7/site-packages/sklearn/model_selection/_split.py", line 1734, in _validate_shuffle_split
    '(0, 1) range'.format(test_size, n_samples))
ValueError: test_size=50.0 should be either positive and smaller than the number of samples 480 or a float in the (0, 1) range

Apologies for the bad formatting of the stack trace.
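The ValueError itself points at the fix: scikit-learn's splitters take test_size either as an integer sample count or as a float fraction strictly between 0 and 1, and 50.0 is neither. A sketch with stand-in data (480 samples, matching the error message):

```python
import numpy as np
from sklearn.model_selection import StratifiedShuffleSplit

X = np.zeros((480, 2))        # stand-in features
y = np.array([0, 1] * 240)    # stand-in balanced labels

# test_size=50 (an int) or test_size=0.1 (a fraction) are both accepted;
# test_size=50.0 raises the ValueError quoted above.
sss = StratifiedShuffleSplit(n_splits=1, test_size=50, random_state=0)
train_idx, test_idx = next(sss.split(X, y))
```

So casting the script's computed test size to int before passing it in would presumably resolve the error.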

Thanks,

How to test on a new set of optical flow image

Hi! I'm testing this excellent project on my server; my question may be silly, as I am just beginning to learn about computer vision, haha. I have already re-trained the model, and I noticed there is a function called 'test_video' in temporalneturfd.py. Is it used to test on a new video? If so, what should I do with the two returned variables, 'predictions' and 'truth', to know whether there is a fall action? Thank you very much.

Train the model on Multi-class problem ??

I'd like to apply this architecture to a multi-class problem: instead of a binary classification of fall vs. no-fall sequences, classify sequences into many classes, like moving or performing other actions. Is that possible?
