
Variational Animal Motion Embedding - A tool for time series embedding and clustering

License: GNU General Public License v3.0

Python 99.94% Shell 0.06%
neuroscience behavior-quantification variational-autoencoder vame behavior-analysis

vame's Introduction

VAME workflow

VAME in a Nutshell

VAME is a framework to cluster behavioral signals obtained from pose-estimation tools. It is a PyTorch-based deep learning framework that leverages the power of recurrent neural networks (RNNs) to model sequential data. To learn the underlying complex data distribution, we use the RNN in a variational autoencoder setting to extract the latent state of the animal at every step of the input time series.


The workflow of VAME consists of 5 steps and we explain them in detail here.

Installation

To get started, we recommend using Anaconda with Python 3.6 or higher. Here you can create a virtual environment to hold all the dependencies necessary for VAME. (You can also use the VAME.yaml file supplied here: open a terminal, run git clone https://github.com/LINCellularNeuroscience/VAME.git, then cd VAME, then run conda env create -f VAME.yaml.)

  • Go to the locally cloned VAME directory and run python setup.py install in order to install VAME in your active conda environment.
  • Install the current stable PyTorch release using the OS-dependent instructions from the PyTorch website. Currently, VAME is tested on PyTorch 1.5. (Note: if you use the conda file we supply, PyTorch is already installed and you don't need to do this step.)

Getting Started

First, make sure that you have a GPU powerful enough to train deep learning networks. In our paper, we used a single Nvidia GTX 1080 Ti GPU to train our network. A hardware guide can be found here. Once you have your hardware ready, try VAME by following the workflow guide.

If you want to follow an example first, you can download video-1 here and find the corresponding .csv file in our examples folder.

News

  • November 2022: The VAME paper is finally published! Check it out on the publisher's website. Compared to the preprint version, it also includes a practical workflow guide with many useful instructions on how to use VAME.
  • March 2021: We are happy to release VAME 1.0 with a bunch of improvements and new features! These include the community analysis script, a model allowing generation of unseen datapoints, new visualization functions, as well as the much requested function to generate GIF sequences containing UMAP embeddings and trajectories together with the video of the behaving animal. Big thanks also to @MMathisLab for contributing to the OS compatibility and usability of our code.
  • November 2020: We uploaded an egocentric alignment script to allow more researchers to use VAME.
  • October 2020: We updated our manuscript on bioRxiv.
  • May 2020: Our preprint "Identifying Behavioral Structure from Deep Variational Embeddings of Animal Motion" is out! Read it on bioRxiv!

Authors and Code Contributors

VAME was developed by Kevin Luxem and Pavol Bauer.

The development of VAME is heavily inspired by DeepLabCut. As such, the VAME project management codebase has been adapted from the DeepLabCut codebase. The DeepLabCut 2.0 toolbox is © A. & M.W. Mathis Labs deeplabcut.org, released under LGPL v3.0. The implementation of the VRAE model is partially adapted from the Timeseries clustering repository developed by Tejas Lodaya.

References

VAME preprint: Identifying Behavioral Structure from Deep Variational Embeddings of Animal Motion
Kingma & Welling: Auto-Encoding Variational Bayes
Pereira & Silveira: Learning Representations from Healthcare Time Series Data for Unsupervised Anomaly Detection

License: GPLv3

See the LICENSE file for the full statement.

Code Reference (DOI)

DOI

vame's People

Contributors

alexcwsmith, bramn22, kvnlxm, mmathislab, orena1, pavolbauer


vame's Issues

How to put labels of motifs and communities in the GIF?

I have generated the png by using:
vame.gif(config, pose_ref_index=[0,5], subtract_background=True, start=0, length=500, max_lag=30, label='community', file_format='.mp4', crop_size=(300,300))

However, unlike the GIF on the GitHub front page, there are no labels showing motifs and communities, and the movement of the animal is not that smooth.
k-means = 15, cut_tree = 2

The GIF looks like this:
https://drive.google.com/file/d/15LVOY8ps4VKlgMgB4uY-VTvabwNYH1cM/view?usp=sharing


Many thanks!

Reconstruction and Prediction Improvements

Hello! Thank you for sharing this amazing program. I just had a quick question about how I can improve my reconstruction and prediction. This is what I have so far:
data_size = 300k
(attached: a training screenshot, the MSE-and-KL-Loss plot, and Future_Reconstruction plots 1, 4, and 7)

I think the reconstruction does relatively well, but do you think I can do better? I am currently thinking of increasing zdims and time_window to pick up more granularity and to reconstruct longer movements better. The prediction, on the other hand, is not too good in my opinion.

I also applied UMAP to get these embeddings. (Note: I set n_cluster=30 for now, but I think I will change this later.)
(attached: UMAP embedding plots, ant and ant_scatter)

Function for batch analysis of new files

Hihi!

First of all, thanks a ton for the work you have put into this; so far I'm super happy with VAME. Second, below I outline a sort of request/idea:

  • I already have a model trained on a very large portion of my data (~500k data points) that I'm very happy with
  • I'd like to incorporate new data (being produced daily) into the analysis, namely using the already trained encoder to obtain the latent space representation of the data, and also their motif structure (i.e. clustering)
  • This would ideally be a single function, so that I can incorporate it into my pipeline, and where I can supply data independently from the VAME project folder, as I have my own data structure for other analyses that have nothing to do with VAME
  • I looked at pose_segmentation.py. If I get this right, it seems I would need to modify embedd_latent_vectors (as the data path is hardcoded to the VAME project folder structure) to get the latent representation, and also use parts of same_parameterization to get the motifs.
  • This also would mean I'll have to save the kmeans object created from the trained model the first time I run vame.pose_segmentation() (i.e. will have to expose it in the code).

The question is: is this something you are working on / have lying around / already there and I missed it, or should I just go ahead and try to code it myself (and send a pull request if this is of interest)? Hope this makes sense and I'm not completely off the mark. Thanks in advance for the help!
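For what it's worth, once the latent vectors are in hand, the motif-assignment half of this can be as small as a nearest-centroid lookup. Below is a minimal sketch, assuming a hypothetical assign_motifs helper and a centroids array saved from the original vame.pose_segmentation() run (VAME does not currently expose that object, as noted above):

```python
import numpy as np

def assign_motifs(latent, centroids):
    # latent: (n_samples, zdims) embeddings from the trained encoder.
    # centroids: (n_cluster, zdims) k-means centers saved from the
    # original segmentation run (hypothetical -- needs exposing in VAME).
    # Squared Euclidean distance of every sample to every centroid,
    # then the nearest centroid index is the motif label.
    d = ((latent[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    return d.argmin(axis=1)

# Toy example with two well-separated centroids:
centroids = np.array([[0.0, 0.0], [10.0, 10.0]])
latent = np.array([[0.5, -0.2], [9.8, 10.1], [0.1, 0.3]])
print(assign_motifs(latent, centroids))  # → [0 1 0]
```

With this, new recordings could be labeled daily without re-running the full segmentation, provided the encoder and centroids stay fixed.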

Is it possible to run VAME on google colab?

Hi,

I am currently trying to run VAME on my laptop without a GPU. I could not create the training data set, as I mentioned in #46.
I am wondering if it is possible to run VAME on Google Colab and use its GPU?

I tried to install Anaconda in Colab and successfully created the conda environment as follows:

from google.colab import drive
drive.mount('/content/drive')

import os
os.chdir("/content/drive/MyDrive/DeepLabCut_ROI/VAME")

!nvidia-smi

!wget https://repo.anaconda.com/archive/Anaconda3-5.2.0-Linux-x86_64.sh
!bash Anaconda3-5.2.0-Linux-x86_64.sh -bfp /usr/local

import sys
sys.path.insert(0, "/usr/local/lib/python3.7/site-packages/")

!conda env create -f VAME.yaml

!source activate VAME

However, when I tried to install VAME on Colab, I got the error message below:
!python setup.py install

/usr/local/lib/python3.6/distutils/dist.py:261: UserWarning: Unknown distribution option: 'long_description_content_type'
warnings.warn(msg)
running install
running bdist_egg
running egg_info
writing vame.egg-info/PKG-INFO
writing dependency_links to vame.egg-info/dependency_links.txt
writing entry points to vame.egg-info/entry_points.txt
writing requirements to vame.egg-info/requires.txt
writing top-level names to vame.egg-info/top_level.txt
reading manifest file 'vame.egg-info/SOURCES.txt'
writing manifest file 'vame.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_py
creating build/bdist.linux-x86_64/egg
creating build/bdist.linux-x86_64/egg/vame
copying build/lib/vame/__init__.py -> build/bdist.linux-x86_64/egg/vame
creating build/bdist.linux-x86_64/egg/vame/initialize_project
copying build/lib/vame/initialize_project/new.py -> build/bdist.linux-x86_64/egg/vame/initialize_project
copying build/lib/vame/initialize_project/__init__.py -> build/bdist.linux-x86_64/egg/vame/initialize_project
creating build/bdist.linux-x86_64/egg/vame/util
copying build/lib/vame/util/gif_pose_helper.py -> build/bdist.linux-x86_64/egg/vame/util
copying build/lib/vame/util/csv_to_npy.py -> build/bdist.linux-x86_64/egg/vame/util
copying build/lib/vame/util/__init__.py -> build/bdist.linux-x86_64/egg/vame/util
copying build/lib/vame/util/align_egocentrical.py -> build/bdist.linux-x86_64/egg/vame/util
copying build/lib/vame/util/auxiliary.py -> build/bdist.linux-x86_64/egg/vame/util
creating build/bdist.linux-x86_64/egg/vame/model
copying build/lib/vame/model/create_training.py -> build/bdist.linux-x86_64/egg/vame/model
copying build/lib/vame/model/evaluate.py -> build/bdist.linux-x86_64/egg/vame/model
copying build/lib/vame/model/dataloader.py -> build/bdist.linux-x86_64/egg/vame/model
copying build/lib/vame/model/rnn_model.py -> build/bdist.linux-x86_64/egg/vame/model
copying build/lib/vame/model/rnn_vae.py -> build/bdist.linux-x86_64/egg/vame/model
copying build/lib/vame/model/__init__.py -> build/bdist.linux-x86_64/egg/vame/model
creating build/bdist.linux-x86_64/egg/vame/analysis
copying build/lib/vame/analysis/segment_behavior.py -> build/bdist.linux-x86_64/egg/vame/analysis
copying build/lib/vame/analysis/tree_hierarchy.py -> build/bdist.linux-x86_64/egg/vame/analysis
copying build/lib/vame/analysis/__init__.py -> build/bdist.linux-x86_64/egg/vame/analysis
copying build/lib/vame/analysis/videowriter.py -> build/bdist.linux-x86_64/egg/vame/analysis
copying build/lib/vame/analysis/umap_visualization.py -> build/bdist.linux-x86_64/egg/vame/analysis
copying build/lib/vame/analysis/pose_segmentation.py -> build/bdist.linux-x86_64/egg/vame/analysis
copying build/lib/vame/analysis/community_analysis.py -> build/bdist.linux-x86_64/egg/vame/analysis
copying build/lib/vame/analysis/gif_creator.py -> build/bdist.linux-x86_64/egg/vame/analysis
copying build/lib/vame/analysis/generative_functions.py -> build/bdist.linux-x86_64/egg/vame/analysis
byte-compiling build/bdist.linux-x86_64/egg/vame/__init__.py to __init__.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/vame/initialize_project/new.py to new.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/vame/initialize_project/__init__.py to __init__.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/vame/util/gif_pose_helper.py to gif_pose_helper.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/vame/util/csv_to_npy.py to csv_to_npy.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/vame/util/__init__.py to __init__.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/vame/util/align_egocentrical.py to align_egocentrical.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/vame/util/auxiliary.py to auxiliary.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/vame/model/create_training.py to create_training.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/vame/model/evaluate.py to evaluate.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/vame/model/dataloader.py to dataloader.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/vame/model/rnn_model.py to rnn_model.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/vame/model/rnn_vae.py to rnn_vae.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/vame/model/__init__.py to __init__.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/vame/analysis/segment_behavior.py to segment_behavior.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/vame/analysis/tree_hierarchy.py to tree_hierarchy.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/vame/analysis/__init__.py to __init__.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/vame/analysis/videowriter.py to videowriter.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/vame/analysis/umap_visualization.py to umap_visualization.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/vame/analysis/pose_segmentation.py to pose_segmentation.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/vame/analysis/community_analysis.py to community_analysis.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/vame/analysis/gif_creator.py to gif_creator.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/vame/analysis/generative_functions.py to generative_functions.cpython-36.pyc
creating build/bdist.linux-x86_64/egg/EGG-INFO
copying vame.egg-info/PKG-INFO -> build/bdist.linux-x86_64/egg/EGG-INFO
copying vame.egg-info/SOURCES.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
copying vame.egg-info/dependency_links.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
copying vame.egg-info/entry_points.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
copying vame.egg-info/requires.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
copying vame.egg-info/top_level.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
zip_safe flag not set; analyzing archive contents...
creating 'dist/vame-1.0-py3.6.egg' and adding 'build/bdist.linux-x86_64/egg' to it
removing 'build/bdist.linux-x86_64/egg' (and everything under it)
Processing vame-1.0-py3.6.egg
Copying vame-1.0-py3.6.egg to /usr/local/lib/python3.6/site-packages
Adding vame 1.0 to easy-install.pth file
Installing vame script to /usr/local/bin

Installed /usr/local/lib/python3.6/site-packages/vame-1.0-py3.6.egg
Processing dependencies for vame==1.0
Searching for opencv-python
Reading https://pypi.org/simple/opencv-python/
Downloading https://files.pythonhosted.org/packages/bb/08/9dbc183a3ac6baa95fabf749ddb531bd26256edfff5b6c2195eca26258e9/opencv-python-4.5.1.48.tar.gz#sha256=78a6db8467639383caedf1d111da3510a4ee1a0aacf2117821cae2ee8f92ce37
Best match: opencv-python 4.5.1.48
Processing opencv-python-4.5.1.48.tar.gz
Writing /tmp/easy_install-9nd3ra8a/opencv-python-4.5.1.48/setup.cfg
Running opencv-python-4.5.1.48/setup.py -q bdist_egg --dist-dir /tmp/easy_install-9nd3ra8a/opencv-python-4.5.1.48/egg-dist-tmp-loujmu5p
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/setuptools/sandbox.py", line 154, in save_modules
yield saved
File "/usr/local/lib/python3.6/site-packages/setuptools/sandbox.py", line 195, in setup_context
yield
File "/usr/local/lib/python3.6/site-packages/setuptools/sandbox.py", line 250, in run_setup
_execfile(setup_script, ns)
File "/usr/local/lib/python3.6/site-packages/setuptools/sandbox.py", line 45, in _execfile
exec(code, globals, locals)
File "/tmp/easy_install-9nd3ra8a/opencv-python-4.5.1.48/setup.py", line 10, in
entry_points={"console_scripts": "vame = vame:main"},
ModuleNotFoundError: No module named 'skbuild'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "setup.py", line 29, in
"opencv-python",
File "/usr/local/lib/python3.6/site-packages/setuptools/__init__.py", line 129, in setup
return distutils.core.setup(**attrs)
File "/usr/local/lib/python3.6/distutils/core.py", line 148, in setup
dist.run_commands()
File "/usr/local/lib/python3.6/distutils/dist.py", line 955, in run_commands
self.run_command(cmd)
File "/usr/local/lib/python3.6/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "/usr/local/lib/python3.6/site-packages/setuptools/command/install.py", line 67, in run
self.do_egg_install()
File "/usr/local/lib/python3.6/site-packages/setuptools/command/install.py", line 117, in do_egg_install
cmd.run()
File "/usr/local/lib/python3.6/site-packages/setuptools/command/easy_install.py", line 412, in run
self.easy_install(spec, not self.no_deps)
File "/usr/local/lib/python3.6/site-packages/setuptools/command/easy_install.py", line 654, in easy_install
return self.install_item(None, spec, tmpdir, deps, True)
File "/usr/local/lib/python3.6/site-packages/setuptools/command/easy_install.py", line 701, in install_item
self.process_distribution(spec, dist, deps)
File "/usr/local/lib/python3.6/site-packages/setuptools/command/easy_install.py", line 746, in process_distribution
[requirement], self.local_index, self.easy_install
File "/usr/local/lib/python3.6/site-packages/pkg_resources/__init__.py", line 774, in resolve
replace_conflicting=replace_conflicting
File "/usr/local/lib/python3.6/site-packages/pkg_resources/__init__.py", line 1057, in best_match
return self.obtain(req, installer)
File "/usr/local/lib/python3.6/site-packages/pkg_resources/__init__.py", line 1069, in obtain
return installer(requirement)
File "/usr/local/lib/python3.6/site-packages/setuptools/command/easy_install.py", line 673, in easy_install
return self.install_item(spec, dist.location, tmpdir, deps)
File "/usr/local/lib/python3.6/site-packages/setuptools/command/easy_install.py", line 699, in install_item
dists = self.install_eggs(spec, download, tmpdir)
File "/usr/local/lib/python3.6/site-packages/setuptools/command/easy_install.py", line 884, in install_eggs
return self.build_and_install(setup_script, setup_base)
File "/usr/local/lib/python3.6/site-packages/setuptools/command/easy_install.py", line 1152, in build_and_install
self.run_setup(setup_script, setup_base, args)
File "/usr/local/lib/python3.6/site-packages/setuptools/command/easy_install.py", line 1138, in run_setup
run_setup(setup_script, args)
File "/usr/local/lib/python3.6/site-packages/setuptools/sandbox.py", line 253, in run_setup
raise
File "/usr/local/lib/python3.6/contextlib.py", line 99, in __exit__
self.gen.throw(type, value, traceback)
File "/usr/local/lib/python3.6/site-packages/setuptools/sandbox.py", line 195, in setup_context
yield
File "/usr/local/lib/python3.6/contextlib.py", line 99, in __exit__
self.gen.throw(type, value, traceback)
File "/usr/local/lib/python3.6/site-packages/setuptools/sandbox.py", line 166, in save_modules
saved_exc.resume()
File "/usr/local/lib/python3.6/site-packages/setuptools/sandbox.py", line 141, in resume
six.reraise(type, exc, self._tb)
File "/usr/local/lib/python3.6/site-packages/setuptools/_vendor/six.py", line 685, in reraise
raise value.with_traceback(tb)
File "/usr/local/lib/python3.6/site-packages/setuptools/sandbox.py", line 154, in save_modules
yield saved
File "/usr/local/lib/python3.6/site-packages/setuptools/sandbox.py", line 195, in setup_context
yield
File "/usr/local/lib/python3.6/site-packages/setuptools/sandbox.py", line 250, in run_setup
_execfile(setup_script, ns)
File "/usr/local/lib/python3.6/site-packages/setuptools/sandbox.py", line 45, in _execfile
exec(code, globals, locals)
File "/tmp/easy_install-9nd3ra8a/opencv-python-4.5.1.48/setup.py", line 10, in
entry_points={"console_scripts": "vame = vame:main"},
ModuleNotFoundError: No module named 'skbuild'

Bug when scheduler turned off

Hi,

When the 'scheduler' parameter is turned off, the learning rate decreases to 0 after 100 epochs. This is because of lines 432-435 of rnn_vae.py:

    if optimizer_scheduler:
        scheduler = StepLR(optimizer, step_size=100, gamma=0.2, last_epoch=-1)
    else:
        scheduler = StepLR(optimizer, step_size=100, gamma=0, last_epoch=-1)

With optimizer_scheduler deactivated, gamma should be 1, not 0; that will maintain the same learning rate throughout training. I am happy to submit a pull request if you'd like, but I'm not sure you're currently reviewing/merging them.

In my fork I've implemented a 'manual' learning-rate decay where, if scheduler=0, the LR decreases if/when BEST_LOSS does not improve after STEP_SIZE epochs (and I added step_size and gamma to the config). If you're interested in implementing this, I'm happy to submit a full pull request there as well; check out my fork if you want.

Best,
Alex
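For readers following along: the effect of gamma can be checked without touching PyTorch at all, since StepLR simply multiplies the learning rate by gamma once every step_size epochs. A minimal sketch of that rule (plain Python, not VAME code):

```python
def stepped_lr(base_lr, gamma, step_size, epoch):
    # StepLR-style schedule: the learning rate is multiplied
    # by gamma once every step_size epochs.
    return base_lr * gamma ** (epoch // step_size)

# gamma=0 zeroes the LR after the first step -- the reported bug:
print(stepped_lr(5e-4, 0.0, 100, 150))  # → 0.0
# gamma=1 keeps it constant, i.e. "scheduler off" behaves as expected:
print(stepped_lr(5e-4, 1.0, 100, 150))  # → 0.0005
```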

.npy file structure

Hi,

For the files that have to be added to the /data folder, it seems that .npy files are expected. How should the pose-estimation information be organized in these files (rows = frames, columns = x or y coordinate of a body part)?
Is there a script that makes the automatic conversion from e.g. DeepLabCut .csv files to .npy files?

Thanks,
Bram
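For anyone with the same question: the repo ships a vame/util/csv_to_npy.py module that does this conversion inside a VAME project. As a standalone sketch of the layout (rows = frames, columns = x/y per body part, likelihood columns dropped; the exact array orientation VAME expects may differ by version, so treat this as illustrative):

```python
import io
import numpy as np

# A minimal DeepLabCut-style CSV: three header rows (scorer, bodyparts,
# coords), then one row per frame: index, then x, y, likelihood per part.
csv_text = """scorer,model,model,model,model,model,model
bodyparts,nose,nose,nose,tail,tail,tail
coords,x,y,likelihood,x,y,likelihood
0,10.0,20.0,0.99,30.0,40.0,0.95
1,11.0,21.0,0.98,31.0,41.0,0.97
"""

# Skip the three header rows and the frame-index column...
raw = np.genfromtxt(io.StringIO(csv_text), delimiter=",", skip_header=3)[:, 1:]
# ...and drop every third column (the likelihoods), keeping x/y pairs.
xy = np.delete(raw, np.s_[2::3], axis=1)
print(xy.shape)  # → (2, 4)  i.e. 2 frames x (x, y) for 2 body parts
# np.save("video-1.npy", xy) would then write the array for the /data folder.
```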

Program is terminated because you tried to allocate too many memory regions.

Hello,

I installed VAME early last week using the git clone instructions shown on your main page. I am able to follow the workflow successfully until the pose_segmentation step. When I select multiple video/excel files for training, everything works until I use the pose_segmentation function, where I get the error "Program is terminated. Because you tried to allocate too many memory regions", and then the Anaconda PowerShell window I'm running in drops out of IPython and the VAME environment.

If I edit the config file after training and use only one video file at a time, I can get pose_segmentation and the later functions to work, so I think it may potentially be an issue with setting up parallel computing during analysis. For reference, I'm running on a GeForce RTX 2080 Super GPU with 8 GB of memory, the computer has 96 GB of RAM, and the OS is Windows 10.

Here is an example of what the "error" looks like during the pose_segmentation function execution. It looks like it's able to analyze the pose predictions, and then quits right after. Thanks for your help!

(screenshot of the error message)

Flat latent space

Hello,
I am wondering if you may be able to give some insight into parameter tuning. Running with a top-down view produces results as expected, but I am getting weird results when I try to run on some DLC data that was acquired from a side view (with 9 points labeled: nose, tail base, 3 points along the spine, and 4 paws). These videos are 30 FPS, and if I train with time_window=15, I get a mostly flat reconstruction.

(Future_Reconstruction plot)

I'm not positive how to interpret this, but I think this indicates that it is just not detecting any change in motion in the latent state? Would you recommend increasing the time window as a first step at improving this model?

In this experiment, about half the videos show the mice engaging in highly stereotyped behavior. I am wondering if that could contribute to a flat latent state, as there is not as much variation in animal motion in those videos. Would you recommend training a model on only non-experimental (i.e. non-stereotyped-behavior) videos and then using that model to analyze the experimental videos? I'm not sure how I could quantify accuracy if I do not include the stereotyped behavior in the training of the model.

Thanks,
Alex

Egocentric Coordinates - ValueError: array of sample points is empty

Hi,

I am running VAME on colab. I could go over the pipeline with de example video and .csv. However, when I try to use VAME to analyze my own DLC video with its .csv, I encountered with this error message:

vame.egocentric_alignment(config, pose_ref_index=[0,6]) # because I have labeled 7 body parts

Aligning data Trial_2_2739DLC_resnet50_EPMMay11shuffle1_10000_labeled, Pose confidence value: 0.99
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/envs/VAME/lib/python3.7/site-packages/vame-1.0-py3.7.egg/vame/util/align_egocentrical.py", line 314, in egocentric_alignment
File "/usr/local/envs/VAME/lib/python3.7/site-packages/vame-1.0-py3.7.egg/vame/util/align_egocentrical.py", line 290, in alignment
File "/usr/local/envs/VAME/lib/python3.7/site-packages/vame-1.0-py3.7.egg/vame/util/align_egocentrical.py", line 155, in align_mouse
File "/usr/local/envs/VAME/lib/python3.7/site-packages/vame-1.0-py3.7.egg/vame/util/align_egocentrical.py", line 87, in interpol
File "<__array_function__ internals>", line 6, in interp
File "/usr/local/envs/VAME/lib/python3.7/site-packages/numpy/lib/function_base.py", line 1428, in interp
return interp_func(x, xp, fp, left, right)
ValueError: array of sample points is empty

I guess it is because I only trained the DLC network for 10,000 iterations; therefore the pose confidence/likelihood is generally around 0.5.
It seems that the pose confidence threshold is set to 0.99, so in my .csv there is no sample point that meets this criterion.
Is it possible to change the pose confidence value to 0.5? I don't have a GPU and it is super slow to train a large DLC model on Colab; I just want to play with my own behavioral video.

Many thanks!
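To illustrate what is going on here: the alignment step interpolates low-confidence coordinates from the high-confidence ones, and if no frame clears the threshold there are no sample points left to interpolate from, hence the error. A minimal sketch of that logic (not VAME's actual code, the helper name is made up):

```python
import numpy as np

def interpolate_low_confidence(x, likelihood, threshold):
    # Keep samples whose likelihood clears the threshold and linearly
    # interpolate the rest from them.
    good = likelihood > threshold
    if not good.any():
        # With threshold=0.99 and likelihoods around 0.5, this branch is
        # exactly the "array of sample points is empty" situation.
        raise ValueError("no samples above the confidence threshold")
    idx = np.arange(len(x))
    return np.interp(idx, idx[good], x[good])

x = np.array([1.0, 50.0, 3.0, 4.0])      # index 1 is a tracking glitch
lik = np.array([0.99, 0.10, 0.98, 0.99])
print(interpolate_low_confidence(x, lik, 0.9))  # → [1. 2. 3. 4.]
```

Lowering the threshold (e.g. to 0.5) makes more frames count as "good", at the cost of letting noisier detections into the alignment.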

RuntimeError: svd_cuda: For batch 0: U(129,129) is zero, singular U.

Hey @KLuxem, testing out the new rnn_vae.py code I am hitting an issue with the cluster_loss. (Note: I also tried to set up a project with the demo video-1 but I actually couldn't make a training set.) However, I can confirm that the data I am using worked this past weekend, before the big change.

torch.__version__ is 1.8.0+cu101

Here is the traceback (source of error is here):

_ ,sv_2, _ = torch.svd(gram_matrix)

A VAME project has been created. 

Next use vame.create_trainset(config) to split your data into a train and test set. 
Afterwards you can use vame.rnn_model() to train the model on your data.

conversion from DeepLabCut csv to numpy complete...
DATA CONVERTED, CREATING TRAINING SET...

Creating training dataset...
Lenght of train data: 159570
Lenght of test data: 17730
batches size set to: 128
max epochs set to: 50
num_features set to: 22

TRAINING VAME MODEL...
Train Variational Autoencoder - Model name: VAME 

Using CUDA
GPU active: True
GPU used:  Tesla P100-PCIE-16GB
Latent Dimensions: 30, Time window: 30, Beta: 1, lr: 0.0005

Compute mean and std for temporal dataset.
Initialize train data. Datapoints 159570
Initialize test data. Datapoints 17730
Scheduler step size: 100, Scheduler gamma: 0.20

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/vame/model/rnn_vae.py", line 48, in cluster_loss
    _ ,sv_2, _ = torch.svd(gram_matrix)
RuntimeError: svd_cuda: For batch 0: U(129,129) is zero, singular U.

RuntimeError: input.size(-1) must be equal to input_size. Expected 10, got 12

Hi,
I am using VAME to analyse my own video.
However, when I run vame.train_model(config), I got the error message:
"RuntimeError: input.size(-1) must be equal to input_size. Expected 10, got 12"

I tried to fix it following the suggestion mentioned in #27 ("try to set the num_features in your config.yaml"). However, in my config.yaml the num_features is exactly 12; I tried changing it to 10, but it returns exactly the same error message.

I don't know how to fix it; any help would be much appreciated.

In [16]: vame.train_model(config)
Train Variational Autoencoder - model name: VAME

Latent Dimensions: 30, Time window: 30, Batch Size: 256, Beta: 1, lr: 0.0005

Initialize train data. Datapoints 8004
Initialize test data. Datapoints 889
Scheduler step size: 100, Scheduler gamma: 0.20

Start training...
Epoch: 1

RuntimeError Traceback (most recent call last)
in
----> 1 vame.train_model(config)

~/opt/anaconda3/envs/VAME/lib/python3.7/site-packages/vame-1.0-py3.7.egg/vame/model/rnn_vae.py in train_model(config)
336 FUTURE_STEPS, scheduler, MSE_REC_REDUCTION,
337 MSE_PRED_REDUCTION, KMEANS_LOSS, KMEANS_LAMBDA,
--> 338 TRAIN_BATCH_SIZE, noise)
339
340 current_loss, test_loss, test_list = test(test_loader, epoch, model, optimizer,

~/opt/anaconda3/envs/VAME/lib/python3.7/site-packages/vame-1.0-py3.7.egg/vame/model/rnn_vae.py in train(train_loader, epoch, model, optimizer, anneal_function, BETA, kl_start, annealtime, seq_len, future_decoder, future_steps, scheduler, mse_red, mse_pred, kloss, klmbda, bsize, noise)
120
121 if future_decoder:
--> 122 data_tilde, future, latent, mu, logvar = model(data_gaussian)
123
124 rec_loss = reconstruction_loss(data, data_tilde, mse_red)

~/opt/anaconda3/envs/VAME/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),

~/opt/anaconda3/envs/VAME/lib/python3.7/site-packages/vame-1.0-py3.7.egg/vame/model/rnn_model.py in forward(self, seq)
168
169 """ Encode input sequence """
--> 170 h_n = self.encoder(seq)
171
172 """ Compute the latent state via reparametrization trick """

~/opt/anaconda3/envs/VAME/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),

~/opt/anaconda3/envs/VAME/lib/python3.7/site-packages/vame-1.0-py3.7.egg/vame/model/rnn_model.py in forward(self, inputs)
34
35 def forward(self, inputs):
---> 36 outputs_1, hidden_1 = self.encoder_rnn(inputs)#UNRELEASED!
37
38 hidden = torch.cat((hidden_1[0,...], hidden_1[1,...], hidden_1[2,...], hidden_1[3,...]),1)

~/opt/anaconda3/envs/VAME/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),

~/opt/anaconda3/envs/VAME/lib/python3.7/site-packages/torch/nn/modules/rnn.py in forward(self, input, hx)
817 hx = self.permute_hidden(hx, sorted_indices)
818
--> 819 self.check_forward_args(input, hx, batch_sizes)
820 if batch_sizes is None:
821 result = _VF.gru(input, hx, self._flat_weights, self.bias, self.num_layers,

~/opt/anaconda3/envs/VAME/lib/python3.7/site-packages/torch/nn/modules/rnn.py in check_forward_args(self, input, hidden, batch_sizes)
224
225 def check_forward_args(self, input: Tensor, hidden: Tensor, batch_sizes: Optional[Tensor]):
--> 226 self.check_input(input, batch_sizes)
227 expected_hidden_size = self.get_expected_hidden_size(input, batch_sizes)
228

~/opt/anaconda3/envs/VAME/lib/python3.7/site-packages/torch/nn/modules/rnn.py in check_input(self, input, batch_sizes)
202 raise RuntimeError(
203 'input.size(-1) must be equal to input_size. Expected {}, got {}'.format(
--> 204 self.input_size, input.size(-1)))
205
206 def get_expected_hidden_size(self, input: Tensor, batch_sizes: Optional[Tensor]) -> Tuple[int, int, int]:

RuntimeError: input.size(-1) must be equal to input_size. Expected 10, got 12
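For context, this kind of mismatch usually comes from VAME's feature-count convention: num_features in config.yaml is 2 × (number of body parts), and for egocentrically aligned data the model is built with num_features - 2, since two coordinates become constant after alignment. That is my reading of rnn_vae.py; verify against your VAME version. A small sanity-check sketch:

```python
def model_input_size(n_bodyparts, egocentric=True):
    # config.yaml convention: num_features = 2 * body parts (x and y each).
    # For egocentrically aligned data VAME trains on num_features - 2,
    # because the two alignment coordinates are constant and dropped
    # (assumption based on rnn_vae.py; check your VAME version).
    num_features = 2 * n_bodyparts
    return num_features - 2 if egocentric else num_features

# 6 body parts -> num_features = 12 in the config, but the model
# expects inputs of size 10, matching the "Expected 10, got 12" error.
print(model_input_size(6))  # → 10
```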

Extremely slow training process

Hi Kevin

I have been using your pipeline frequently and it has provided some insight into the behavioural data collected in our lab. Your effort in developing and sharing this pipeline is highly appreciated.

I came across a case in which the training process is extremely slow: after about 24 h it had only reached about 150 epochs, and my PC has now crashed. I used 1,400,000 frames in total (0.2 test, 0.8 training), 12 features, and dim = 30.
1. Could you think of any possible source of this slowdown?
2. Since I probably have to restart my PC, could you help with how I can continue training the same network? If I'm not wrong, the code saves the network parameters every 50 epochs.

Best, and thanks for your help in advance,
Mohammad

VAME Motifs

Does VAME provide consistent color-coding for the motifs across different participants?
It appeared so in the publication, and I would like to know whether that was prepared by specifying certain parameters or whether it is built in.

Thank you.

I have a few questions

Hello. I am RIM-YU. I have a few questions.

  1. Is it correct to use the .csv file obtained through deeplabcut.analyze_videos(config_path, ['fullpath/analysis/project/videos/reachingvideo1.avi'], save_as_csv=True) in the VAME algorithm?

  2. Approximately how much data in the .csv file is optimal for the algorithm?

Thank you.
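For context: DLC writes its .csv with a three-row header (scorer / bodyparts / coords) and one x/y/likelihood triple per body part, which is the format VAME's csv conversion expects. A self-contained sketch of inspecting such a file (the in-memory csv here is a tiny stand-in, not real data):

```python
import io
import pandas as pd

# Minimal stand-in for a DLC .csv: 3 header rows, then x/y/likelihood
# triples per body part (two parts, three frames).
csv_text = """scorer,DLC,DLC,DLC,DLC,DLC,DLC
bodyparts,nose,nose,nose,tail,tail,tail
coords,x,y,likelihood,x,y,likelihood
0,10.0,20.0,0.99,30.0,40.0,0.95
1,11.0,21.0,0.98,31.0,41.0,0.97
2,12.0,22.0,0.97,32.0,42.0,0.96
"""
df = pd.read_csv(io.StringIO(csv_text), header=[0, 1, 2], index_col=0)
# Unique body parts in file order, and the number of frames:
bodyparts = list(dict.fromkeys(bp for _, bp, _ in df.columns))
print(bodyparts, len(df))
```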

Note: if using a pre-1.0 project with release 1.0, one needs a few extra variables in config.yaml for backwards compatibility.

Nice update! Small note for other users: if you were using an alpha release, you need to add the following to the config.yaml file for backwards compatibility (the values here are rather arbitrary; I would look to the authors' new config for better ones):

min_dist: 0.1
n_neighbors: 3
random_state: true
model_name: VAME
n_cluster: 30
num_points: 10000
length_of_motif_video: 500

and the DLC .csv files need to be added to the folder titled pose_estimation.

I tested:

vame.community(cfg, show_umap=True)

vame.visualization(cfg)

vame.community_videos(cfg, videoType='.avi')

I would suggest the wiki be updated with the minimum input required, i.e. vame.gif(cfg, pose_ref_index=[0,5], subtract_background=False, start=None, length=500, max_lag=30, label='community', file_format='.avi', crop_size=(300, 300))

Also, vame.rnn_model has been renamed to vame.train_model,

and vame.behavior_segmentation has been renamed to vame.pose_segmentation.

Conversion to numpy from csv [VAME-independent mistake]

Disclaimer:
As you can read below in my last post, this is not a VAME-dependent error but a mistake in the conversion on my side.

Original post:

Me again,

I created an egocentric representation of my mouse, with the mouse spanned on the y-axis between neck and tail root (I set a new origin and rotated the coordinates). I also normalized the mouse, for good measure, based on its length to create a somewhat standard mouse representation. This gave a dataframe with values from about -2 to 2 around the origin (0, 0).
When trying to create a training set (with vame.create_trainset(config)), I got an error at line 39 in vame/model/create_training.py:

    seq_inter = np.zeros((X.shape[0], X.shape[1]))
    for s in range(num_features):
        seq_temp = X[s, :]
        seq_pd = pd.Series(seq_temp)
        if np.isnan(seq_pd[0]):
            seq_pd[0] = next(x for x in seq_pd if not np.isnan(x))  # <-- raises here
        seq_pd_inter = seq_pd.interpolate(method="linear", order=None)
        seq_inter[s, :] = seq_pd_inter

The error simply stated "StopIteration".

As this directly followed issue #9, I tried to pinpoint the problem and investigated my input '.npy' again.
Truncating the data to 10 rows did not solve the issue, but I was able to convince myself that the data itself is clean: no NaNs, empty entries, or weird dtypes. Nothing, except that I had negative values.
So I just added +100 to all values... and it worked.

The model is happily training for now. Can you elaborate?
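For context: the StopIteration in that helper occurs when the first value of a feature row is NaN and no finite value exists anywhere else in the row, so the `next()` generator has nothing to return. A minimal, VAME-independent reproduction (the helper name is illustrative):

```python
import numpy as np
import pandas as pd

def first_finite(seq: pd.Series) -> float:
    # Mirrors the create_training.py pattern: seed the series with the
    # first non-NaN value before interpolating.
    return next(x for x in seq if not np.isnan(x))

row = pd.Series([np.nan, 1.5, 2.0])
print(first_finite(row))            # 1.5

all_nan = pd.Series([np.nan, np.nan])
try:
    first_finite(all_nan)           # generator is exhausted immediately
except StopIteration:
    print("StopIteration: the whole row is NaN")
```

So if this error appears, one thing worth checking is whether the upstream conversion produced rows that are entirely NaN.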

Does VAME run on Google Colab?

I know that, to date, it can be difficult to install VAME in the cloud since there is no pip release, but I would like to know whether there is already some way to work with VAME other than locally, since GPU access is an important limiting factor for being able to use this software.

Interaction with objects?

This looks like a great piece of software and something I would love to dig into. For my application I have a single animal of interest, which interacts with multiple tracked points: my DLC pose estimates cover the single animal plus the objects it interacts with. Is it possible to use the locations of these objects with VAME?

motif_video error

Hi,

Thank you for the great tool!

Sorry to bring this up again, but I'm experiencing the same problem as in issue #11.
Unfortunately I can't get around the error message: 'UnboundLocalError: local variable 'fps' referenced before assignment'

Do I have to implement the workaround for .avi files in videowriter.py under VAME\vame\analysis, under VAME\build\lib\vame, or in the VAME\dist .egg file?

Best,
Guillermo
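For context: on the "which copy do I edit" question, the file to patch is the one Python actually imports; the build\ and dist\ trees are only intermediates left behind by setup.py. Python can report the imported file directly. The stdlib module "json" is used below only so the snippet runs anywhere; in a VAME environment you would pass "vame.analysis.videowriter" instead:

```python
import importlib.util

# find_spec reports the file Python will load for a module name;
# that is the copy to edit (typically inside the installed .egg).
spec = importlib.util.find_spec("json")
print(spec.origin)
```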

Step 2: vame.rnn_model() UnboundLocalError: local variable 'idx' referenced before assignment

Hi,
After executing the first step in the guide successfully, I got "UnboundLocalError: local variable 'idx' referenced before assignment" in step 2 (the vame.rnn_model() step). The error refers to VAME/vame/model/rnn_vae.py, line 291. Do you know why I am getting this error and how I can fix it?

(vame) [eryilmaz@scc-q28 VAME]$ python3
Python 3.7.7 (default, May 21 2020, 14:57:43) 
[GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import vame
>>> config='/projectnb/dnn-motion/kubra/VAME/vame_trial_3-Dec2-2020/config.yaml'
>>> vame.rnn_model(config, model_name='VAME', pretrained_weights=False, pretrained_model=None)
Train RNN model!
Using CUDA
GPU active: True
GPU used: Tesla V100-PCIE-16GB
Latent Dimensions: 30, Beta: 1, lr: 0.0005
Initialize train data. Datapoints 160
Initialize test data. Datapoints 40
Epoch: 1
Train: 
/share/pkg.7/pytorch/1.6.0/install/3.7/lib/python3.7/site-packages/torch/optim/lr_scheduler.py:123: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`.  Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
  "https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate", UserWarning)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/projectnb2/dnn-motion/kubra/VAME/vame/model/rnn_vae.py", line 444, in rnn_model
    TRAIN_BATCH_SIZE)
  File "/projectnb2/dnn-motion/kubra/VAME/vame/model/rnn_vae.py", line 291, in train
    print('Average Train loss: {:.4f}, MSE-Loss: {:.4f}, MSE-Future-Loss {:.4f}, KL-Loss: {:.4f},  Kmeans-Loss: {:.4f}, weigt: {:.4f}'.format(train_loss / idx,
UnboundLocalError: local variable 'idx' referenced before assignment
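For context: the failing line divides by the loop variable of the training loop, and with only 160 training datapoints a batch size larger than the dataset (with last-batch dropping) can leave the DataLoader yielding zero batches, so the loop body never runs and `idx` is never bound. A minimal, VAME-independent reproduction (names are illustrative, not VAME's actual variables):

```python
def train_epoch(batches):
    # A for-loop that never iterates leaves its loop variable unbound.
    train_loss = 0.0
    for idx, batch in enumerate(batches):
        train_loss += batch
    return train_loss / idx   # UnboundLocalError if batches is empty

try:
    train_epoch([])           # e.g. dataset smaller than the batch size
except UnboundLocalError as e:
    print(e)
```

If this is the cause, reducing the batch size or enlarging the training set should make the error disappear.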

RuntimeError: input.size(-1) must be equal to input_size. Expected 12, got 18

Hi, I've been able to run VAME's csv_to_numpy and align_demo.py functions, followed by vame.create_trainset(config).

Lenght of train data: 9472
Lenght of test data: 2367

but when running

vame.rnn_model(config, model_name='VAME', pretrained_weights=False, pretrained_model=None)

I received the RuntimeError above. Is there a solution for this?
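For context: "Expected 12, got 18" is the same class of mismatch as the earlier input_size error: the model was built for 12 input features while the training set carries 18. A rough rule of thumb (an assumption about this VAME version; likelihood columns are dropped during alignment, so check the docs for your release) is two features per body part:

```python
# Illustrative helper, not part of the VAME API: with x and y kept per
# body part, 9 body parts -> 18 features. Set num_features in
# config.yaml to match, then rebuild the training set and retrain.
def expected_num_features(n_bodyparts: int) -> int:
    return 2 * n_bodyparts

print(expected_num_features(9))   # 18
```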

egocentric_alignment: ValueError: array of sample points is empty

Hello,

Note that issue #48 mentioned a problem similar to the one I have now, but the difference is that in my case I run VAME locally. I have modified the config file according to the characteristics of my project: I have marked 17 body parts, so I set num_features in config.yaml to 34 and pose_ref_index = [0,16]. I don't know whether it matters that four animals appear in four regions of my video and only one has been marked.
I hope you can help me, because this tool fits perfectly with what I was looking for to continue my project.
(attached: empty-array error screenshot and labelled frame img01744_bodypart)
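For context: "array of sample points is empty" is raised by np.interp when there are no known points to interpolate from. During alignment, frames below the likelihood threshold are treated as missing and filled from the remaining ones, so a body part with no confident detection in any frame triggers exactly this. A minimal reproduction (the threshold value is illustrative):

```python
import numpy as np

# A marker whose likelihood never clears the confidence cutoff leaves
# np.interp with an empty set of sample points.
likelihood = np.array([0.10, 0.20, 0.05])      # all below a 0.9 cutoff
good = likelihood > 0.9
x = np.arange(likelihood.size)
coords = np.array([5.0, 6.0, 7.0])
try:
    np.interp(x, x[good], coords[good])
except ValueError as e:
    print(e)                                   # array of sample points is empty
```

With four animals in frame and only one labelled, it is worth checking whether some marker is essentially never detected confidently.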

Install fails on Windows following the instructions as written

Creating a conda environment, installing PyTorch, then running setup.py fails on Windows. It seems to be an issue with installing scipy and can be solved by specifying scipy<=1.2.1 when creating the conda environment (I'm not sure how critical the exact version is; I pulled it from setup.py). Mostly putting this here in case anyone else runs into the same problem.

Steps to reproduce:

  1. Open Command Prompt in the vame repo folder
  2. conda create -n vametest
  3. conda activate vametest
  4. Install pytorch for your system, e.g. in my case conda install pytorch torchvision cudatoolkit=10.1 -c pytorch
  5. python setup.py install

Expected behaviour: VAME installed.

Actual behaviour: a huge error message, attached. Main problem seems to be the following:

Running scipy-1.2.1\setup.py -q bdist_egg --dist-dir C:\Users\LSPS2\AppData\Local\Temp\easy_install-xjjkkxxc\scipy-1.2.1\egg-dist-tmp-ckxdo8qj
C:\Users\LSPS2\AppData\Local\Temp\easy_install-xjjkkxxc\scipy-1.2.1\setup.py:114: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
  import imp
C:\Users\LSPS2\AppData\Local\Temp\easy_install-xjjkkxxc\scipy-1.2.1\setup.py:377: UserWarning: Unrecognized setuptools command, proceeding with generating Cython sources and expanding templates
  warnings.warn("Unrecognized setuptools command, proceeding with "
Running from scipy source directory.
Could not locate executable g77
Could not locate executable f77
Could not locate executable ifort
Could not locate executable ifl
Could not locate executable f90
Could not locate executable DF
Could not locate executable efl
Could not locate executable gfortran
Could not locate executable f95
Could not locate executable g95
Could not locate executable efort
Could not locate executable efc
Could not locate executable flang
don't know how to compile Fortran code on platform 'nt'

Additional notes: as mentioned up top, this can be fixed by changing step 2 to conda create -n vametest "scipy<=1.2.1" (quoting the spec so the shell does not treat <= as redirection).

System info:

Windows 10 x64
Conda 4.8.3
Python 3.7.7
CUDA 10.1.105

KeyError: 'random_state_kmeans'

Hello, first of all, thank you very much for this excellent project, which will provide a lot of help with the data analysis for my project.

I created a new VAME project on my data, and training and evaluating the network both ran successfully, but the following error was encountered when running vame.pose_segmentation():

(error screenshot attached)

Because I am not familiar with your code, I cannot tell the reason for this error, and I hope to get your help.

I think it may be caused by a parameter in my config.yaml that does not match my project data. I would therefore also like to ask whether there is a document that details the role of the parameters in the config.yaml file and how to modify the configuration, so that I know which parameters I should adjust to get better performance for my project data (training set size, etc.).

Many thanks for any reply and suggestions!
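For context: one plausible cause (an assumption, since the screenshot is not reproduced here) is a config.yaml created by an older VAME version that lacks the random_state_kmeans entry newer releases read during vame.pose_segmentation(). Adding it manually may unblock you; the value is just a seed:

```yaml
# config.yaml -- seed for the k-means initialisation used during
# pose segmentation; any fixed integer works for reproducibility.
random_state_kmeans: 42
```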

Installation VAME 1.0

Hi All,

thank you again for this tool, super exciting!

Problem:
I was having trouble installing the new VAME release with the VAME.yaml and setup.py, with some pip errors popping up, something like AttributeError: module 'brotli' has no attribute 'error'.

Solution:
After manually installing all the dependencies from VAME.yaml in the new environment, running python setup.py install worked again.

Best,
Guillermo

ValueError during behavior segmentation

Hi,

I'm testing VAME on some open-field videos I tracked with DLC, but when I call vame.behavior_segmentation(config, model_name='VAME', cluster_method='kmeans', n_cluster=[30]) I get:

(error screenshot attached)

Any idea what might be causing the issue?

Create_trainset error

Hi,
Thanks for your work. It's excellent!
I ran the demo.py file and got an error at 'create_trainset'.

It says: 'ValueError: need at least one array to concatenate'
Could you tell me how to fix it?
(screenshots attached)

Further Analysis?

Is it possible to get an output of how many times a motif is repeated? I am trying to compare the number of times an animal repeats a behavior between groups and was wondering if that is possible with this program.

Thanks,

John
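For context: VAME's segmentation assigns one motif label per frame, so counts can be read straight off a saved label array. A sketch with a stand-in array (the file path and array shown are illustrative; load your project's label .npy in practice):

```python
import numpy as np

labels = np.array([0, 0, 2, 2, 2, 1, 0])   # stand-in for a saved motif label file

# Frames spent in each motif:
motifs, frame_counts = np.unique(labels, return_counts=True)
print(dict(zip(motifs.tolist(), frame_counts.tolist())))   # {0: 3, 1: 1, 2: 3}

# Times each motif is *entered* (bouts), which is usually what
# "how often is a behavior repeated" means:
change = np.flatnonzero(np.r_[True, labels[1:] != labels[:-1]])
bouts = labels[change]                      # one label per bout
b_motifs, b_counts = np.unique(bouts, return_counts=True)
print(dict(zip(b_motifs.tolist(), b_counts.tolist())))     # {0: 2, 1: 1, 2: 1}
```

Comparing either count (or bout durations) across groups is then ordinary statistics on per-animal numbers.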

Resuming from pretrained weights

Hello,
Thanks for writing this, it was surprisingly easy to install and get started!

However, I'm having trouble resuming training from pretrained weights. I'm a bit confused about what I am supposed to specify in pretrained_model. Does it use a .pkl snapshot from a previously trained model? Do I need to copy the .pkl file into the pretrained_model folder within my project path? Any clarification on this would be great!

Also, once a model is trained, can I use it to analyze videos / egocentric CSVs that were not included in its training?

Thanks,
Alex

RuntimeError in vame.rnn_model()

Hi,
I tried to train the model using the steps provided in demo.py but ran into an issue. Can you help?

# Train rnn model:
vame.rnn_model(config, model_name='VAME', pretrained_weights=False, pretrained_model='pretrained')

Console Output:

Train RNN model!
Using CUDA
GPU active: True
GPU used: GeForce RTX 2070 with Max-Q Design
Latent Dimensions: 30, Beta: 1, lr: 0.0005
Initialize train data. Datapoints 45911
Initialize test data. Datapoints 11477
Epoch: 1
Train: 

Intel MKL ERROR: Parameter 4 was incorrect on entry to SLASCL.

Intel MKL ERROR: Parameter 4 was incorrect on entry to SLASCL.
Traceback (most recent call last):
  File "D:\PyCharm Community Edition 2019.3.3\plugins\python-ce\helpers\pydev\pydevd.py", line 1434, in _exec
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "D:\PyCharm Community Edition 2019.3.3\plugins\python-ce\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "C:/Users/schwa/PycharmProjects/VAME/examples/demo.py", line 31, in <module>
    vame.rnn_model(config, model_name='VAME', pretrained_weights=False, pretrained_model='pretrained')
  File "C:\Users\schwa\PycharmProjects\VAME\vame\model\rnn_vae.py", line 442, in rnn_model
    TRAIN_BATCH_SIZE)
  File "C:\Users\schwa\PycharmProjects\VAME\vame\model\rnn_vae.py", line 256, in train
    kmeans_loss = cluster_loss(latent.T, kloss, klmbda, bsize)
  File "C:\Users\schwa\PycharmProjects\VAME\vame\model\rnn_vae.py", line 186, in cluster_loss
    _ ,sv_2, _ = torch.svd(gram_matrix)
RuntimeError: svd_cuda: the updating process of SBDSDC did not converge (error: 14)

best,
Jens
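For context: the SVD here is taken of the Gram matrix of the latent vectors for the k-means loss, and iterative GPU SVD can fail to converge on rank-deficient or ill-conditioned matrices (e.g. early in training when latents are nearly identical). A common, VAME-independent workaround is a small diagonal ridge before decomposing, sketched with NumPy (the helper and eps value are assumptions, not VAME's code):

```python
import numpy as np

def stable_singular_values(gram: np.ndarray, eps: float = 1e-4) -> np.ndarray:
    # Adding eps * I bounds the condition number away from infinity,
    # which keeps iterative SVD solvers from diverging.
    gram = gram + eps * np.eye(gram.shape[0])
    return np.linalg.svd(gram, compute_uv=False)

latent = np.zeros((64, 30))              # worst case: all-zero latents
sv = stable_singular_values(latent.T @ latent)
print(sv.max())                          # 1e-4: the ridge alone
```

Other things that sometimes help with this error are computing the SVD on the CPU or updating the PyTorch/MKL build.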

VAME question

Hi Kevin

Thanks a lot for your response to my email and your help. My VAME code works now!
I could run the code and get some reconstructed trajectories, but they are not, at least visually, satisfactory.
I have attached the figure. I am not very familiar with the model, but I know there are many parameters that could be tuned, as in the config.yaml file.

I have attached my config file and evaluation images.
Which parameters do you think I should focus on playing around with to improve the reconstruction quality?

Thanks
Mohammad

(attached: Screenshot from 2020-05-24 13-43-43, Future_Reconstruction, MSE-and-KL-LossVAME)

How to deal with multi-animals?

The camera recorded 4 mice simultaneously during a 30 min open-field task. I have used DeepLabCut to extract the position of each animal. How can I use VAME to analyze their behavioral motifs based on the .csv files created by DLC?

Many thanks!
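For context: VAME models one animal per pose file, so one pragmatic route is to split the multi-animal DLC output into one single-animal .csv per mouse and treat each as its own recording. A sketch assuming the multi-animal DLC header layout (scorer / individuals / bodyparts / coords); the in-memory csv is a tiny stand-in:

```python
import io
import pandas as pd

# Stand-in for a multi-animal DLC .csv: 4 header rows, two animals
# (m1, m2) with one body part each, two frames.
csv_text = """scorer,DLC,DLC,DLC,DLC
individuals,m1,m1,m2,m2
bodyparts,nose,nose,nose,nose
coords,x,y,x,y
0,1.0,2.0,5.0,6.0
1,1.1,2.1,5.1,6.1
"""
df = pd.read_csv(io.StringIO(csv_text), header=[0, 1, 2, 3], index_col=0)
animals = df.columns.get_level_values(1).unique()
# One single-animal frame per mouse (the 'individuals' level dropped):
per_animal = {a: df.xs(a, axis=1, level=1, drop_level=True) for a in animals}
print(list(animals), per_animal["m1"].shape)
```

Each per-animal frame could then be written back out with to_csv and fed through VAME's usual csv conversion.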

Help with Creating Egocentric Coordinates

Hello,

I am currently trying to implement VAME using DeepLabCut tracking of mice in an open-field arena. I have finished analyzing the open-field videos with DeepLabCut, but I am having trouble transforming the resulting coordinates into egocentric coordinates. Does anyone have experience with this transformation, or any tips for going about it?

Also, I tried using the sample code from the README to convert our .csv to a NumPy array, but I receive an IndexError (see attached picture) when trying to create a training dataset. Is it related to not having egocentric coordinates?
(error screenshot attached)

Thanks,
Patrick
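For context: the usual egocentric recipe is to pick a reference body part as the new origin, subtract it from every point, then rotate each frame so a second reference (e.g. the nose) lies on a fixed axis. A single-frame sketch (the indices and axis choice are illustrative; VAME's own alignment helpers may differ):

```python
import numpy as np

def egocentric(points: np.ndarray, origin_idx: int, heading_idx: int) -> np.ndarray:
    # points: (n_bodyparts, 2) for one frame. Translate so the origin
    # body part sits at (0, 0), then rotate so the heading body part
    # lands on the positive y-axis.
    centred = points - points[origin_idx]
    hx, hy = centred[heading_idx]
    angle = np.arctan2(hx, hy)                 # heading's angle off +y
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, -s], [s, c]])
    return centred @ rot.T

frame = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])  # tail, mid, nose
out = egocentric(frame, origin_idx=0, heading_idx=2)
print(np.round(out, 3))   # nose ends up at (0, 2) on the y-axis
```

Applying this per frame over the whole recording yields body-centred coordinates; likelihood columns should be handled (thresholded/interpolated) beforehand.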

Using Setup.py: multiple errors in installation (Connection lost?)

Hi Kevin & Pavel;

awesome work! I found an error using your setup.py, though...

I start with the relevant sections and post the full output underneath. I will also add a conda list (after trying the installation once) to show the result, and a likely solved conda list (after 3 installations).

I am using a Windows 10 Laptop with PyCharm and Anaconda. I tested the pytorch installation before installing VAME and it works.

TL;DR:
[WinError 10054] An existing connection was forcibly closed by the remote host
breaks the connection at several points during the installation. Restarting setup.py (multiple times) seems to solve the issue.

Note on the likely solution:

Running setup.py again installed kiwisolver successfully and continued several other installations, but broke off again with the same error type:

[WinError 10054] An existing connection was forcibly closed by the remote host

Running setup.py a third time successfully installed VAME, or at least reached the end of the process:
Finished processing dependencies for vame==0.1

The first bug:

Failed to install kiwisolver:

Installed d:\anaconda\envs\vame\lib\site-packages\python_dateutil-2.8.1-py3.7.egg
Searching for kiwisolver>=1.0.1
Reading https://pypi.org/simple/kiwisolver/
Downloading https://files.pythonhosted.org/packages/7e/e5/d8bd2d063da3b6761270f29038d2bb9785c88ff385009bf61589cde6e6ef/kiwisolver-1.2.0-cp37-none-win_amd64.whl#sha256=4eadb361baf3069f278b055e3bb53fa189cea2fd02cb2c353b7a99ebb4477ef1
error: Download error for https://files.pythonhosted.org/packages/7e/e5/d8bd2d063da3b6761270f29038d2bb9785c88ff385009bf61589cde6e6ef/kiwisolver-1.2.0-cp37-none-win_amd64.whl#sha256=4eadb361baf3069f278b055e3bb53fa189cea2fd02cb2c353b7a99ebb4477ef1: [WinError 10054] An existing connection was forcibly closed by the remote host

Complete output:

(vame) C:\Users\schwa\PycharmProjects\VAME>python setup.py install
running install
running bdist_egg
running egg_info
creating vame.egg-info
writing vame.egg-info\PKG-INFO
writing dependency_links to vame.egg-info\dependency_links.txt
writing entry points to vame.egg-info\entry_points.txt
writing requirements to vame.egg-info\requires.txt
writing top-level names to vame.egg-info\top_level.txt
writing manifest file 'vame.egg-info\SOURCES.txt'
reading manifest file 'vame.egg-info\SOURCES.txt'
writing manifest file 'vame.egg-info\SOURCES.txt'
installing library code to build\bdist.win-amd64\egg
running install_lib
running build_py
creating build
creating build\lib
creating build\lib\vame
copying vame\__init__.py -> build\lib\vame
creating build\lib\vame\analysis
copying vame\analysis\behavior_structure.py -> build\lib\vame\analysis
copying vame\analysis\segment_behavior.py -> build\lib\vame\analysis
copying vame\analysis\videowriter.py -> build\lib\vame\analysis
copying vame\analysis\__init__.py -> build\lib\vame\analysis
creating build\lib\vame\initialize_project
copying vame\initialize_project\new.py -> build\lib\vame\initialize_project
copying vame\initialize_project\__init__.py -> build\lib\vame\initialize_project
creating build\lib\vame\model
copying vame\model\create_training.py -> build\lib\vame\model
copying vame\model\dataloader.py -> build\lib\vame\model
copying vame\model\evaluate.py -> build\lib\vame\model
copying vame\model\rnn_vae.py -> build\lib\vame\model
copying vame\model\__init__.py -> build\lib\vame\model
creating build\lib\vame\util
copying vame\util\auxiliary.py -> build\lib\vame\util
copying vame\util\__init__.py -> build\lib\vame\util
creating build\bdist.win-amd64
creating build\bdist.win-amd64\egg
creating build\bdist.win-amd64\egg\vame
creating build\bdist.win-amd64\egg\vame\analysis
copying build\lib\vame\analysis\behavior_structure.py -> build\bdist.win-amd64\egg\vame\analysis
copying build\lib\vame\analysis\segment_behavior.py -> build\bdist.win-amd64\egg\vame\analysis
copying build\lib\vame\analysis\videowriter.py -> build\bdist.win-amd64\egg\vame\analysis
copying build\lib\vame\analysis\__init__.py -> build\bdist.win-amd64\egg\vame\analysis
creating build\bdist.win-amd64\egg\vame\initialize_project
copying build\lib\vame\initialize_project\new.py -> build\bdist.win-amd64\egg\vame\initialize_project
copying build\lib\vame\initialize_project\__init__.py -> build\bdist.win-amd64\egg\vame\initialize_project
creating build\bdist.win-amd64\egg\vame\model
copying build\lib\vame\model\create_training.py -> build\bdist.win-amd64\egg\vame\model
copying build\lib\vame\model\dataloader.py -> build\bdist.win-amd64\egg\vame\model
copying build\lib\vame\model\evaluate.py -> build\bdist.win-amd64\egg\vame\model
copying build\lib\vame\model\rnn_vae.py -> build\bdist.win-amd64\egg\vame\model
copying build\lib\vame\model\__init__.py -> build\bdist.win-amd64\egg\vame\model
creating build\bdist.win-amd64\egg\vame\util
copying build\lib\vame\util\auxiliary.py -> build\bdist.win-amd64\egg\vame\util
copying build\lib\vame\util\__init__.py -> build\bdist.win-amd64\egg\vame\util
copying build\lib\vame\__init__.py -> build\bdist.win-amd64\egg\vame
byte-compiling build\bdist.win-amd64\egg\vame\analysis\behavior_structure.py to behavior_structure.cpython-37.pyc
byte-compiling build\bdist.win-amd64\egg\vame\analysis\segment_behavior.py to segment_behavior.cpython-37.pyc
byte-compiling build\bdist.win-amd64\egg\vame\analysis\videowriter.py to videowriter.cpython-37.pyc
byte-compiling build\bdist.win-amd64\egg\vame\analysis\__init__.py to __init__.cpython-37.pyc
byte-compiling build\bdist.win-amd64\egg\vame\initialize_project\new.py to new.cpython-37.pyc
byte-compiling build\bdist.win-amd64\egg\vame\initialize_project\__init__.py to __init__.cpython-37.pyc
byte-compiling build\bdist.win-amd64\egg\vame\model\create_training.py to create_training.cpython-37.pyc
byte-compiling build\bdist.win-amd64\egg\vame\model\dataloader.py to dataloader.cpython-37.pyc
byte-compiling build\bdist.win-amd64\egg\vame\model\evaluate.py to evaluate.cpython-37.pyc
byte-compiling build\bdist.win-amd64\egg\vame\model\rnn_vae.py to rnn_vae.cpython-37.pyc
byte-compiling build\bdist.win-amd64\egg\vame\model\__init__.py to __init__.cpython-37.pyc
byte-compiling build\bdist.win-amd64\egg\vame\util\auxiliary.py to auxiliary.cpython-37.pyc
byte-compiling build\bdist.win-amd64\egg\vame\util\__init__.py to __init__.cpython-37.pyc
byte-compiling build\bdist.win-amd64\egg\vame\__init__.py to __init__.cpython-37.pyc
creating build\bdist.win-amd64\egg\EGG-INFO
copying vame.egg-info\PKG-INFO -> build\bdist.win-amd64\egg\EGG-INFO
copying vame.egg-info\SOURCES.txt -> build\bdist.win-amd64\egg\EGG-INFO
copying vame.egg-info\dependency_links.txt -> build\bdist.win-amd64\egg\EGG-INFO
copying vame.egg-info\entry_points.txt -> build\bdist.win-amd64\egg\EGG-INFO
copying vame.egg-info\requires.txt -> build\bdist.win-amd64\egg\EGG-INFO
copying vame.egg-info\top_level.txt -> build\bdist.win-amd64\egg\EGG-INFO
zip_safe flag not set; analyzing archive contents...
creating dist
creating 'dist\vame-0.1-py3.7.egg' and adding 'build\bdist.win-amd64\egg' to it
removing 'build\bdist.win-amd64\egg' (and everything under it)
Processing vame-0.1-py3.7.egg
Copying vame-0.1-py3.7.egg to d:\anaconda\envs\vame\lib\site-packages
Adding vame 0.1 to easy-install.pth file
Installing vame-script.py script to D:\Anaconda\envs\vame\Scripts
Installing vame.exe script to D:\Anaconda\envs\vame\Scripts

Installed d:\anaconda\envs\vame\lib\site-packages\vame-0.1-py3.7.egg
Processing dependencies for vame==0.1
Searching for opencv-python
Reading https://pypi.org/simple/opencv-python/
Downloading https://files.pythonhosted.org/packages/85/17/bad54f67bbe27d88ba520c3f59315e95b4e254cd28767c20accacb0597d8/opencv_python-4.2.0.34-cp37-cp37m-win_amd64.whl#sha256=d8a55585631f9c9eca4b1a996e9732ae023169cf2f46f69e4518d67d96198226
Best match: opencv-python 4.2.0.34
Processing opencv_python-4.2.0.34-cp37-cp37m-win_amd64.whl
Installing opencv_python-4.2.0.34-cp37-cp37m-win_amd64.whl to d:\anaconda\envs\vame\lib\site-packages
Adding opencv-python 4.2.0.34 to easy-install.pth file

Installed d:\anaconda\envs\vame\lib\site-packages\opencv_python-4.2.0.34-py3.7-win-amd64.egg
Searching for pyyaml
Reading https://pypi.org/simple/pyyaml/
Downloading https://files.pythonhosted.org/packages/fb/aa/1a7ac452c52b93ab759d0b5b81c901ea122d95a5abf429decc660a44a2f1/PyYAML-5.3.1-cp37-cp37m-win_amd64.whl#sha256=73f099454b799e05e5ab51423c7bcf361c58d3206fa7b0d555426b1f4d9a3eaf
Best match: PyYAML 5.3.1
Processing PyYAML-5.3.1-cp37-cp37m-win_amd64.whl
Installing PyYAML-5.3.1-cp37-cp37m-win_amd64.whl to d:\anaconda\envs\vame\lib\site-packages
Adding PyYAML 5.3.1 to easy-install.pth file

Installed d:\anaconda\envs\vame\lib\site-packages\pyyaml-5.3.1-py3.7-win-amd64.egg
Searching for sklearn
Reading https://pypi.org/simple/sklearn/
Downloading https://files.pythonhosted.org/packages/1e/7a/dbb3be0ce9bd5c8b7e3d87328e79063f8b263b2b1bfa4774cb1147bfcd3f/sklearn-0.0.tar.gz#sha256=e23001573aa194b834122d2b9562459bf5ae494a2d59ca6b8aa22c85a44c0e31
Best match: sklearn 0.0
Processing sklearn-0.0.tar.gz
Writing C:\Users\schwa\AppData\Local\Temp\easy_install-xvhf09u4\sklearn-0.0\setup.cfg
Running sklearn-0.0\setup.py -q bdist_egg --dist-dir C:\Users\schwa\AppData\Local\Temp\easy_install-xvhf09u4\sklearn-0.0\egg-dist-tmp-2_0b8kpp
file wheel-platform-tag-is-broken-on-empty-wheels-see-issue-141.py (for module wheel-platform-tag-is-broken-on-empty-wheels-see-issue-141) not found
file wheel-platform-tag-is-broken-on-empty-wheels-see-issue-141.py (for module wheel-platform-tag-is-broken-on-empty-wheels-see-issue-141) not found
file wheel-platform-tag-is-broken-on-empty-wheels-see-issue-141.py (for module wheel-platform-tag-is-broken-on-empty-wheels-see-issue-141) not found
warning: install_lib: 'build\lib' does not exist -- no Python modules to install

creating d:\anaconda\envs\vame\lib\site-packages\sklearn-0.0-py3.7.egg
Extracting sklearn-0.0-py3.7.egg to d:\anaconda\envs\vame\lib\site-packages
Adding sklearn 0.0 to easy-install.pth file

Installed d:\anaconda\envs\vame\lib\site-packages\sklearn-0.0-py3.7.egg
Searching for ruamel.yaml
Reading https://pypi.org/simple/ruamel.yaml/
Downloading https://files.pythonhosted.org/packages/a6/92/59af3e38227b9cc14520bf1e59516d99ceca53e3b8448094248171e9432b/ruamel.yaml-0.16.10-py2.py3-none-any.whl#sha256=0962fd7999e064c4865f96fb1e23079075f4a2a14849bcdc5cdba53a24f9759b
Best match: ruamel.yaml 0.16.10
Processing ruamel.yaml-0.16.10-py2.py3-none-any.whl
Installing ruamel.yaml-0.16.10-py2.py3-none-any.whl to d:\anaconda\envs\vame\lib\site-packages
Adding ruamel.yaml 0.16.10 to easy-install.pth file

Installed d:\anaconda\envs\vame\lib\site-packages\ruamel.yaml-0.16.10-py3.7.egg
Searching for pandas
Reading https://pypi.org/simple/pandas/
Downloading https://files.pythonhosted.org/packages/69/69/c35fbbc9bec374c44e9c800e491e914a521dc3926fc6cee80d4821771295/pandas-1.0.3-cp37-cp37m-win_amd64.whl#sha256=12f492dd840e9db1688126216706aa2d1fcd3f4df68a195f9479272d50054645
Best match: pandas 1.0.3
Processing pandas-1.0.3-cp37-cp37m-win_amd64.whl
Installing pandas-1.0.3-cp37-cp37m-win_amd64.whl to d:\anaconda\envs\vame\lib\site-packages
Adding pandas 1.0.3 to easy-install.pth file

Installed d:\anaconda\envs\vame\lib\site-packages\pandas-1.0.3-py3.7-win-amd64.egg
Searching for pathlib
Reading https://pypi.org/simple/pathlib/
Downloading https://files.pythonhosted.org/packages/ac/aa/9b065a76b9af472437a0059f77e8f962fe350438b927cb80184c32f075eb/pathlib-1.0.1.tar.gz#sha256=6940718dfc3eff4258203ad5021090933e5c04707d5ca8cc9e73c94a7894ea9f
Best match: pathlib 1.0.1
Processing pathlib-1.0.1.tar.gz
Writing C:\Users\schwa\AppData\Local\Temp\easy_install-503ds2u8\pathlib-1.0.1\setup.cfg
Running pathlib-1.0.1\setup.py -q bdist_egg --dist-dir C:\Users\schwa\AppData\Local\Temp\easy_install-503ds2u8\pathlib-1.0.1\egg-dist-tmp-4w1lin85
zip_safe flag not set; analyzing archive contents...
Copying pathlib-1.0.1-py3.7.egg to d:\anaconda\envs\vame\lib\site-packages
Adding pathlib 1.0.1 to easy-install.pth file

Installed d:\anaconda\envs\vame\lib\site-packages\pathlib-1.0.1-py3.7.egg
Searching for matplotlib
Reading https://pypi.org/simple/matplotlib/
Downloading https://files.pythonhosted.org/packages/b4/4d/8a2c06cb69935bb762738a8b9d5f8ce2a66be5a1410787839b71e146f000/matplotlib-3.2.1-cp37-cp37m-win_amd64.whl#sha256=56d3147714da5c7ac4bc452d041e70e0e0b07c763f604110bd4e2527f320b86d
Best match: matplotlib 3.2.1
Processing matplotlib-3.2.1-cp37-cp37m-win_amd64.whl
Installing matplotlib-3.2.1-cp37-cp37m-win_amd64.whl to d:\anaconda\envs\vame\lib\site-packages
Adding matplotlib 3.2.1 to easy-install.pth file

Installed d:\anaconda\envs\vame\lib\site-packages\matplotlib-3.2.1-py3.7-win-amd64.egg
Searching for scipy
Reading https://pypi.org/simple/scipy/
Downloading https://files.pythonhosted.org/packages/61/51/046cbc61c7607e5ecead6ff1a9453fba5e7e47a5ea8d608cc7036586a5ef/scipy-1.4.1-cp37-cp37m-win_amd64.whl#sha256=a2d6df9eb074af7f08866598e4ef068a2b310d98f87dc23bd1b90ec7bdcec802
Best match: scipy 1.4.1
Processing scipy-1.4.1-cp37-cp37m-win_amd64.whl
Installing scipy-1.4.1-cp37-cp37m-win_amd64.whl to d:\anaconda\envs\vame\lib\site-packages
Adding scipy 1.4.1 to easy-install.pth file

Installed d:\anaconda\envs\vame\lib\site-packages\scipy-1.4.1-py3.7-win-amd64.egg
Searching for pytest-shutil
Reading https://pypi.org/simple/pytest-shutil/
Downloading https://files.pythonhosted.org/packages/26/b7/ef48a8f1f81ae4cd6f22992f6ffb7e9bf030d6e6654e2e626a05aaf5e880/pytest_shutil-1.7.0-py2.py3-none-any.whl#sha256=b3568a675cb092c9b15c789ebd3046b79cfaca476868939748729d14557a98ff
Best match: pytest-shutil 1.7.0
Processing pytest_shutil-1.7.0-py2.py3-none-any.whl
Installing pytest_shutil-1.7.0-py2.py3-none-any.whl to d:\anaconda\envs\vame\lib\site-packages
Adding pytest-shutil 1.7.0 to easy-install.pth file

Installed d:\anaconda\envs\vame\lib\site-packages\pytest_shutil-1.7.0-py3.7.egg
Searching for scikit-learn
Reading https://pypi.org/simple/scikit-learn/
Downloading https://files.pythonhosted.org/packages/70/8e/682770fc1da047bb56443150bfd8d87d850459cd7cc412a5311de3abaa4a/scikit_learn-0.23.1-cp37-cp37m-win_amd64.whl#sha256=04799686060ecbf8992f26a35be1d99e981894c8c7860c1365cda4200f954a16
Best match: scikit-learn 0.23.1
Processing scikit_learn-0.23.1-cp37-cp37m-win_amd64.whl
Installing scikit_learn-0.23.1-cp37-cp37m-win_amd64.whl to d:\anaconda\envs\vame\lib\site-packages
Adding scikit-learn 0.23.1 to easy-install.pth file

Installed d:\anaconda\envs\vame\lib\site-packages\scikit_learn-0.23.1-py3.7-win-amd64.egg
Searching for ruamel.yaml.clib>=0.1.2
Reading https://pypi.org/simple/ruamel.yaml.clib/
Downloading https://files.pythonhosted.org/packages/df/b7/0a84f9a762282314a9df54c56aeec8c2b4f17404ee3d3a05faa76e27e006/ruamel.yaml.clib-0.2.0-cp37-cp37m-win_amd64.whl#sha256=b1b7fcee6aedcdc7e62c3a73f238b3d080c7ba6650cd808bce8d7761ec484070
Best match: ruamel.yaml.clib 0.2.0
Processing ruamel.yaml.clib-0.2.0-cp37-cp37m-win_amd64.whl
Installing ruamel.yaml.clib-0.2.0-cp37-cp37m-win_amd64.whl to d:\anaconda\envs\vame\lib\site-packages
Adding ruamel.yaml.clib 0.2.0 to easy-install.pth file

Installed d:\anaconda\envs\vame\lib\site-packages\ruamel.yaml.clib-0.2.0-py3.7-win-amd64.egg
Searching for pytz>=2017.2
Reading https://pypi.org/simple/pytz/
Downloading https://files.pythonhosted.org/packages/4f/a4/879454d49688e2fad93e59d7d4efda580b783c745fd2ec2a3adf87b0808d/pytz-2020.1-py2.py3-none-any.whl#sha256=a494d53b6d39c3c6e44c3bec237336e14305e4f29bbf800b599253057fbb79ed
Best match: pytz 2020.1
Processing pytz-2020.1-py2.py3-none-any.whl
Installing pytz-2020.1-py2.py3-none-any.whl to d:\anaconda\envs\vame\lib\site-packages
Adding pytz 2020.1 to easy-install.pth file

Installed d:\anaconda\envs\vame\lib\site-packages\pytz-2020.1-py3.7.egg
Searching for python-dateutil>=2.6.1
Reading https://pypi.org/simple/python-dateutil/
Downloading https://files.pythonhosted.org/packages/d4/70/d60450c3dd48ef87586924207ae8907090de0b306af2bce5d134d78615cb/python_dateutil-2.8.1-py2.py3-none-any.whl#sha256=75bb3f31ea686f1197762692a9ee6a7550b59fc6ca3a1f4b5d7e32fb98e2da2a
Best match: python-dateutil 2.8.1
Processing python_dateutil-2.8.1-py2.py3-none-any.whl
Installing python_dateutil-2.8.1-py2.py3-none-any.whl to d:\anaconda\envs\vame\lib\site-packages
Adding python-dateutil 2.8.1 to easy-install.pth file

Installed d:\anaconda\envs\vame\lib\site-packages\python_dateutil-2.8.1-py3.7.egg
Searching for kiwisolver>=1.0.1
Reading https://pypi.org/simple/kiwisolver/
Downloading https://files.pythonhosted.org/packages/7e/e5/d8bd2d063da3b6761270f29038d2bb9785c88ff385009bf61589cde6e6ef/kiwisolver-1.2.0-cp37-none-win_amd64.whl#sha256=4eadb361baf3069f278b055e3bb53fa189cea2fd02cb2c353b7a99ebb4477ef1
error: Download error for https://files.pythonhosted.org/packages/7e/e5/d8bd2d063da3b6761270f29038d2bb9785c88ff385009bf61589cde6e6ef/kiwisolver-1.2.0-cp37-none-win_amd64.whl#sha256=4eadb361baf3069f278b055e3bb53fa189cea2fd02cb2c353b7a99ebb4477ef1: [WinError 10054] An existing connection was forcibly closed by the remote host

conda list output:

(vame) C:\Users\schwa\PycharmProjects\VAME>conda list
# packages in environment at D:\Anaconda\envs\vame:
#
# Name                    Version                   Build  Channel
blas                      1.0                         mkl
ca-certificates           2020.1.1                      0
certifi                   2020.4.5.1               py37_0
cudatoolkit               10.1.243             h74a9793_0
freetype                  2.9.1                ha9979f8_1
icc_rt                    2019.0.0             h0cc432a_1
intel-openmp              2020.1                      216
jpeg                      9b                   hb83a4c4_2
libpng                    1.6.37               h2a8f88b_0
libtiff                   4.1.0                h56a325e_0
matplotlib                3.2.1                    pypi_0    pypi
mkl                       2020.1                      216
mkl-service               2.3.0            py37hb782905_0
mkl_fft                   1.0.15           py37h14836fe_0
mkl_random                1.1.0            py37h675688f_0
ninja                     1.9.0            py37h74a9793_0
numpy                     1.18.1           py37h93ca92e_0
numpy-base                1.18.1           py37hc3f5095_1
olefile                   0.46                     py37_0
opencv-python             4.2.0.34                 pypi_0    pypi
openssl                   1.1.1g               he774522_0
pandas                    1.0.3                    pypi_0    pypi
pillow                    7.1.2            py37hcc1f983_0
pip                       20.0.2                   py37_3
pytest-shutil             1.7.0                    pypi_0    pypi
python                    3.7.7                h81c818b_4
python-dateutil           2.8.1                    pypi_0    pypi
pytorch                   1.5.0           py3.7_cuda101_cudnn7_0    pytorch
pytz                      2020.1                   pypi_0    pypi
pyyaml                    5.3.1                    pypi_0    pypi
ruamel-yaml               0.16.10                  pypi_0    pypi
ruamel-yaml-clib          0.2.0                    pypi_0    pypi
scikit-learn              0.23.1                   pypi_0    pypi
scipy                     1.4.1                    pypi_0    pypi
setuptools                46.4.0                   py37_0
six                       1.14.0                   py37_0
sklearn                   0.0                      pypi_0    pypi
sqlite                    3.31.1               h2a8f88b_1
tk                        8.6.8                hfa6e2cd_0
torchvision               0.6.0                py37_cu101    pytorch
vc                        14.1                 h0510ff6_4
vs2015_runtime            14.16.27012          hf0eaf9b_1
wheel                     0.34.2                   py37_0
wincertstore              0.2                      py37_0
xz                        5.2.5                h62dcd97_0
zlib                      1.2.11               h62dcd97_4
zstd                      1.3.7                h508b16e_0

conda list after running setup.py 3 times:

(vame) C:\Users\schwa\PycharmProjects\VAME>conda list
# packages in environment at D:\Anaconda\envs\vame:
#
# Name                    Version                   Build  Channel
apipkg                    1.5                      pypi_0    pypi
blas                      1.0                         mkl
ca-certificates           2020.1.1                      0
certifi                   2020.4.5.1               py37_0
contextlib2               0.6.0.post1              pypi_0    pypi
cudatoolkit               10.1.243             h74a9793_0
cycler                    0.10.0                   pypi_0    pypi
execnet                   1.7.1                    pypi_0    pypi
freetype                  2.9.1                ha9979f8_1
icc_rt                    2019.0.0             h0cc432a_1
intel-openmp              2020.1                      216
joblib                    0.15.1                   pypi_0    pypi
jpeg                      9b                   hb83a4c4_2
kiwisolver                1.2.0                    pypi_0    pypi
libpng                    1.6.37               h2a8f88b_0
libtiff                   4.1.0                h56a325e_0
matplotlib                3.2.1                    pypi_0    pypi
mkl                       2020.1                      216
mkl-service               2.3.0            py37hb782905_0
mkl_fft                   1.0.15           py37h14836fe_0
mkl_random                1.1.0            py37h675688f_0
mock                      4.0.2                    pypi_0    pypi
ninja                     1.9.0            py37h74a9793_0
numpy                     1.18.1           py37h93ca92e_0
numpy-base                1.18.1           py37hc3f5095_1
olefile                   0.46                     py37_0
opencv-python             4.2.0.34                 pypi_0    pypi
openssl                   1.1.1g               he774522_0
pandas                    1.0.3                    pypi_0    pypi
path                      13.1.0                   pypi_0    pypi
path-py                   12.4.0                   pypi_0    pypi
pillow                    7.1.2            py37hcc1f983_0
pip                       20.0.2                   py37_3
pytest-shutil             1.7.0                    pypi_0    pypi
python                    3.7.7                h81c818b_4
python-dateutil           2.8.1                    pypi_0    pypi
pytorch                   1.5.0           py3.7_cuda101_cudnn7_0    pytorch
pytz                      2020.1                   pypi_0    pypi
pyyaml                    5.3.1                    pypi_0    pypi
ruamel-yaml               0.16.10                  pypi_0    pypi
ruamel-yaml-clib          0.2.0                    pypi_0    pypi
scikit-learn              0.23.1                   pypi_0    pypi
scipy                     1.4.1                    pypi_0    pypi
setuptools                46.4.0                   py37_0
six                       1.14.0                   py37_0
sklearn                   0.0                      pypi_0    pypi
sqlite                    3.31.1               h2a8f88b_1
threadpoolctl             2.0.0                    pypi_0    pypi
tk                        8.6.8                hfa6e2cd_0
torchvision               0.6.0                py37_cu101    pytorch
vc                        14.1                 h0510ff6_4
vs2015_runtime            14.16.27012          hf0eaf9b_1
wheel                     0.34.2                   py37_0
wincertstore              0.2                      py37_0
xz                        5.2.5                h62dcd97_0
zlib                      1.2.11               h62dcd97_4
zstd                      1.3.7                h508b16e_0

ModuleNotFoundError: No module named 'ruamel'

Hello,

After installing VAME, when I call import vame, I get the following error:

(base) C:\Users\serce>cd VAME

(base) C:\Users\serce\VAME>conda activate vame

(vame) C:\Users\serce\VAME>ipython
Python 3.7.7 (default, Apr 15 2020, 05:09:04) [MSC v.1916 64 bit (AMD64)]
Type 'copyright', 'credits' or 'license' for more information
IPython 7.13.0 -- An enhanced Interactive Python. Type '?' for help.

In [1]: import vame
---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
<ipython-input-1-6e42d1f2837d> in <module>
----> 1 import vame

~\VAME\vame\__init__.py in <module>
     10
     11 from vame.initialize_project import init_new_project
---> 12 from vame.model import create_trainset
     13 from vame.model import rnn_model
     14 from vame.model import evaluate_model

~\VAME\vame\model\__init__.py in <module>
     10 sys.dont_write_bytecode = True
     11
---> 12 from vame.model.create_training import create_trainset
     13 from vame.model.dataloader import SEQUENCE_DATASET
     14 from vame.model.rnn_vae import rnn_model

~\VAME\vame\model\create_training.py in <module>
     13 import scipy.signal
     14
---> 15 from vame.util.auxiliary import read_config
     16
     17

~\VAME\vame\util\__init__.py in <module>
      9 sys.dont_write_bytecode = True
     10
---> 11 from vame.util.auxiliary import *

~\VAME\vame\util\auxiliary.py in <module>
     18 import os, yaml
     19 from pathlib import Path
---> 20 import ruamel.yaml
     21
     22

ModuleNotFoundError: No module named 'ruamel'

Although the module is listed in the setup.py file and I can see it installed with the conda list command:

(vame) C:\Users\serce\VAME>conda list
# packages in environment at C:\Users\serce\AppData\Local\Continuum\anaconda3\envs\vame:
#
# Name                    Version                   Build  Channel
apipkg                    1.5                      pypi_0    pypi
blas                      1.0                         mkl
ca-certificates           2020.1.1                      0
certifi                   2020.4.5.1               py38_0
contextlib2               0.6.0.post1              pypi_0    pypi
cudatoolkit               10.2.89              h74a9793_1
cycler                    0.10.0                   pypi_0    pypi
execnet                   1.7.1                    pypi_0    pypi
freetype                  2.9.1                ha9979f8_1
icc_rt                    2019.0.0             h0cc432a_1
intel-openmp              2020.1                      216
joblib                    0.15.1                   pypi_0    pypi
jpeg                      9b                   hb83a4c4_2
kiwisolver                1.2.0                    pypi_0    pypi
libpng                    1.6.37               h2a8f88b_0
libtiff                   4.1.0                h56a325e_1
lz4-c                     1.9.2                h62dcd97_0
matplotlib                3.2.1                    pypi_0    pypi
mkl                       2020.1                      216
mkl-service               2.3.0            py38hb782905_0
mkl_fft                   1.0.15           py38h14836fe_0
mkl_random                1.1.1            py38h47e9c7a_0
mock                      4.0.2                    pypi_0    pypi
ninja                     1.9.0            py38h74a9793_0
numpy                     1.18.1           py38h93ca92e_0
numpy-base                1.18.1           py38hc3f5095_1
olefile                   0.46                       py_0
opencv-python             4.2.0.34                 pypi_0    pypi
openssl                   1.1.1g               he774522_0
pandas                    1.0.4                    pypi_0    pypi
path-py                   12.4.0                   pypi_0    pypi
pillow                    7.1.2            py38hcc1f983_0
pip                       20.0.2                   py38_3
pytest-shutil             1.7.0                    pypi_0    pypi
python                    3.8.3                he1778fa_0
python-dateutil           2.8.1                    pypi_0    pypi
pytorch                   1.5.0           py3.8_cuda102_cudnn7_0    pytorch
pytz                      2020.1                   pypi_0    pypi
pyyaml                    5.3.1                    pypi_0    pypi
ruamel-yaml               0.16.10                  pypi_0    pypi
scikit-learn              0.23.1                   pypi_0    pypi
scipy                     1.5.0rc1                 pypi_0    pypi
setuptools                47.1.1                   py38_0
six                       1.15.0                     py_0
sklearn                   0.0                      pypi_0    pypi
sqlite                    3.31.1               h2a8f88b_1
threadpoolctl             2.1.0                    pypi_0    pypi
tk                        8.6.8                hfa6e2cd_0
torchvision               0.6.0                py38_cu102    pytorch
vc                        14.1                 h0510ff6_4
vs2015_runtime            14.16.27012          hf0eaf9b_2
wheel                     0.34.2                   py38_0
wincertstore              0.2                      py38_0
xz                        5.2.5                h62dcd97_0
zlib                      1.2.11               h62dcd97_4
zstd                      1.4.4                ha9fde0e_3

Any ideas on that?
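One quick diagnostic for this kind of problem (a sketch): confirm that the interpreter you are running is the one from the vame environment and that it can locate the ruamel package at all; an interpreter/environment mismatch is the usual cause.

```python
import importlib.util
import sys

# Print which interpreter is running and whether 'ruamel' resolves for it.
# If the path is not inside the vame environment, the import will fail even
# though conda list (run in that environment) shows the package.
print("interpreter:", sys.executable)
spec = importlib.util.find_spec("ruamel")
print("ruamel importable:", spec is not None)
```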

Memory issue

When I run 'vame.train_model(config)' I get this error. Is there a fix for "RuntimeError: CUDA error: out of memory" on a graphics card that only has 6 GB of memory?
I am new to this and I am curious whether there is any workaround for having a lousy graphics card.
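One common workaround for CUDA out-of-memory errors is to reduce the training batch size. A minimal sketch, assuming the project's config.yaml uses a batch_size key (check your own config for the exact name):

```python
import yaml

# Sketch of a common OOM workaround: lower the training batch size.
# The 'batch_size' key is an assumption about the project's config.yaml;
# verify the key name in your own config file.
def set_batch_size(cfg_path, batch_size):
    with open(cfg_path) as f:
        cfg = yaml.safe_load(f)
    cfg['batch_size'] = batch_size  # e.g. halve it until the OOM error disappears
    with open(cfg_path, 'w') as f:
        yaml.safe_dump(cfg, f)
```

If that is not enough, shortening the input time window or training on CPU (slow) may be further options, depending on what the training code supports.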

Thanks.

What is the pose file?

Hi,

After creating a new project, it is mentioned that I have to add my pose file into the /videos/pose-estimation folder. What is this pose file? Is it one of the .csv or .h5 files that were created by DeepLabCut?

Thanks,
Bram

behavior segmentation has no output

I was able to train the model with no errors, but when I run behavior_segmentation (like this: vame.behavior_segmentation(config, model_name='VAME', cluster_method='kmeans', n_cluster=[15,30,60])), it prints that it is computing latent space for each of my videos, but the results/<video_name>/<model_name> folder is empty, and I get the error message below, suggesting that it's not saving anything and therefore can't proceed. I have a large dataset (~1500 videos).


Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/n/home06/ktyssowski/.local/lib/python3.7/site-packages/vame/analysis/segment_behavior.py", line 83, in behavior_segmentation
    z, z_logger = temporal_quant(cfg, model_name, files, use_gpu)
  File "/n/home06/ktyssowski/.local/lib/python3.7/site-packages/vame/analysis/segment_behavior.py", line 149, in temporal_quant
    z_temp = np.concatenate(x_decoded,axis=0)
ValueError: need at least one array to concatenate
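The np.concatenate call fails because the list it receives is empty, i.e. no latent vectors were computed or collected for any video. A guard like this (a sketch, not VAME's actual code) makes that failure mode explicit while debugging:

```python
import numpy as np

# Debugging sketch: fail with a clear message when nothing was collected,
# instead of np.concatenate's generic "need at least one array" error.
def safe_concat(arrays, axis=0):
    if not arrays:
        raise RuntimeError("No latent vectors were collected; check that the "
                           "per-video inference step actually saved its output.")
    return np.concatenate(arrays, axis=axis)
```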

community errors / endless loop

Hey Kevin,
The update looks great, excited to try it out. But on plotting vame.community(config, show_umap=False), I get this error:

AttributeError: 'FigureManagerBase' object has no attribute 'window'

this is fixed by specifying to use Qt5 as the window manager with:

import matplotlib
matplotlib.use('Qt5Agg')

but that window is still very glitchy and the 'save' button doesn't work. I'm using qt=5.12.9. Can you let me know if you use a different version of Qt?

I also get an error in true_divide when I run this, and it gets stuck in an endless loop (i.e. inputting 'yes' does not break the loop). I looked at the code in community_analysis.py:

/d1/studies/VAME/vame/analysis/community_analysis.py:56: RuntimeWarning: invalid value encountered in true_divide
  transition_matrix = adjacency_matrix/row_sum[:,np.newaxis]
/d1/studies/VAME/vame/analysis/tree_hierarchy.py:66: RuntimeWarning: divide by zero encountered in double_scalars
  cost = motif_norm[i] + motif_norm[j] / np.abs(transition_matrix[i,j] + transition_matrix[j,i] )
/d1/studies/VAME/vame/analysis/tree_hierarchy.py:66: RuntimeWarning: invalid value encountered in double_scalars
  cost = motif_norm[i] + motif_norm[j] / np.abs(transition_matrix[i,j] + transition_matrix[j,i] )

Where do you want to cut the Tree? 0/1/2/3/...1
[[3, 2, 12, 7, 11, 6, 5, 0, 4, 10, 14], [13, 9, 1, 8]]


Are all motifs in the list? (yes/no/restart)yes

Where do you want to cut the Tree? 0/1/2/3/...1
[[3, 2, 12, 11, 7, 6, 5, 0, 4, 10, 14], [13, 9, 8, 1]]


Are all motifs in the list? (yes/no/restart)yes

From looking at the "while flag_1=='no'" loop I can't figure out why the logic is breaking there, but I can tell that isn't the expected behavior. But if I specify the cut_tree parameter in the vame.community function call then it seems to break Qt completely, nothing is displayed and the kernel dies (that again may be a Qt issue).

Very strangely, if I run the code in create_community_bag on its own, not in a function, it works as expected IF I comment out the plt.pause(0.5) line. But having that line commented out while running it from the vame.community() function still gets stuck in an endless loop. I'm officially stumped...

Motif_video is limited to mp4 files

Me again,

The OpenCV VideoWriter you use only looks for mp4 files and does not report an error if it encounters other filetypes, which results in an empty variable that gets passed further down.

The first error happens in line 38 when trying to write the first video:

 video = cv.VideoWriter(output, cv.VideoWriter_fourcc('M','J','P','G'), fps, (int(width), int(height)))

The error says "fps" is referenced before assignment.

Most likely origin:

line 19 of vame/analysis/videowriter.py, when calling vame.motif_videos():

def get_cluster_vid(cfg, path_to_file, file, n_cluster):
    print("Videos get created for "+file+" ...")
    labels = np.load(path_to_file+'/'+str(n_cluster)+'_km_label_'+file+'.npy')
    --> capture = cv.VideoCapture(cfg['project_path']+'videos/'+file+'.mp4')
    

My workaround:
capture = cv.VideoCapture(cfg['project_path'] + 'videos/' + file + '.avi')

Suggested solution:
allow a filetype parameter in the config file (as early DLC did).
This would also help if you plan to include automatic video/PE copying when initializing the project.

Alternative:
You could search for common filetypes within OpenCV's limits.
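That alternative could be sketched like this (the extension list and helper name are illustrative, not VAME code):

```python
import os

# Probe common container formats instead of hard-coding '.mp4'.
COMMON_EXTS = ('.mp4', '.avi', '.mov', '.mkv')

def find_video(video_dir, name):
    """Return the first existing video_dir/name+ext, or None if no match."""
    for ext in COMMON_EXTS:
        path = os.path.join(video_dir, name + ext)
        if os.path.exists(path):
            return path
    return None
```

The resolved path can then be handed to cv.VideoCapture; checking capture.isOpened() afterwards catches files OpenCV cannot decode.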

best,
Jens

Compatibility with GeForce 30xx series GPUs

I'm a new user setting up VAME on my desktop (OS: Ubuntu 18.04, GPU: NVIDIA GeForce RTX 3070). Initially, when I tried to run the demo, I received an error when running import vame:
GeForce RTX 3070 with CUDA capability sm_86 is not compatible with the current PyTorch installation
I resolved this by updating to CUDA 11 and PyTorch 1.8 nightly in my VAME environment with the following command:
conda install pytorch torchvision cudatoolkit=11 -c pytorch-nightly
I was then able to successfully run the full demo script. This is a closed issue, but I wanted to add it as documentation for other 30xx-series users.

Plotting and further analysis

Hi,

Not really an issue but rather a question. It's my first time using an autoencoder and I'm still learning Python. Could you point me in the right direction on where to start with my analysis? I finished the workflow and looked at the NumPy arrays in results, but I'm not sure what they are or what to do with them.

If anyone could help me, even by giving me a link to a tutorial on analysing this kind of data, or telling me what to use to plot a video like the one in the README with motifs and cluster changes, I'd be super grateful!
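As one concrete starting point (a sketch; the assumption, based on other issues in this thread, is that the *_km_label_*.npy files hold one integer motif label per frame):

```python
import numpy as np

def motif_usage(labels):
    """Count how many frames fall into each motif label."""
    motifs, counts = np.unique(labels, return_counts=True)
    return motifs, counts

# e.g. labels = np.load('results/<video>/<model>/kmeans-15/15_km_label_<video>.npy')
# (path pattern is illustrative; adjust to your project)
```

plt.bar(*motif_usage(labels)) then gives a motif-usage histogram, and the latent-vector files can similarly be embedded with UMAP for visualization.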

Thank you,
Konrad

Could not create trainset

Hi,

I tried to run VAME locally on my laptop, following the instructions, and could successfully install and import VAME in the conda environment.
However, when I try to convert .csv to .npy using vame.csv_to_npy(), it runs without error, but I could not find the .npy file in the working directory.
I then tried to run vame.create_trainset(config) and
got the error message below.

vm.csv_to_numpy(config, datapath='/Users/virginiali/VAME/Muhang-VAME-Project-May14-2021/videos/pose_estimation')
Your data is now in the right format and you can call vame.create_trainset()
vm.create_trainset(config)
Creating training dataset...
Using robust setting to eliminate outliers! IQR factor: 4
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/virginiali/VAME/VAME-master/vame/model/create_training.py", line 186, in create_trainset
    traindata(cfg, files, cfg['test_fraction'], cfg['num_features'], cfg['savgol_filter'])
  File "/Users/virginiali/VAME/VAME-master/vame/model/create_training.py", line 66, in traindata
    X = np.concatenate(X_train, axis=0)
  File "<__array_function__ internals>", line 6, in concatenate
ValueError: need at least one array to concatenate

Could anyone kindly give me some suggestions to fix it? Many thanks!

I am using a MacBook Air, macOS Catalina 10.15.7.

Structure of latent_vector.npy

Hi, and thank you for the nice toolbox!

I am trying to work with the outputs, i.e., separating data by motifs, running UMAPs etc., and I was wondering about the structure of the latent_vector numpy file.
The latent_vector output seems to be twice the size of input_data_PE-seq-clean.npy (twice the num_features). Is it safe to assume the first half corresponds to the reconstruction and the second to the prediction?
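If that first-half/second-half layout holds (an assumption worth verifying against the model code), splitting the loaded array is straightforward; names are illustrative:

```python
import numpy as np

def split_latent(latent):
    """Split a (T, 2*zdims) latent array into two (T, zdims) halves.

    Hypothesis from the observation above: first half ~ reconstruction,
    second half ~ prediction. Verify against the model code before relying on it.
    """
    zdims = latent.shape[1] // 2
    return latent[:, :zdims], latent[:, zdims:]
```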

Best Regards,
Guillermo

Missing dependencies

There seem to be some dependencies not listed in the setup.py, specifically pytest, pyyaml, and opencv-python, that cause install to fail when following the installation instructions.

Steps to reproduce:-

  1. Open a command prompt in the VAME repo folder
  2. conda create -n vametest2 scipy<=1.2.1 (see issue #1 for why specifying scipy is necessary)
  3. conda activate vametest2
  4. Install pytorch, e.g. conda install pytorch torchvision cudatoolkit=10.1 -c pytorch
  5. python setup.py install---fails with error "The 'pytest' distribution was not found and is required by pytest-shutil"
  6. conda install pytest
  7. Repeat step 5
  8. Open python, type import vame---fails with "ModuleNotFoundError: No module named 'yaml'"
  9. Quit python, enter conda install pyyaml
  10. Repeat step 8---fails with "ModuleNotFoundError: No module named 'cv2'"
  11. Quit python, enter conda install opencv-python
  12. Repeat step 8.

Expected behaviour: running setup.py installs VAME successfully the first time (i.e. step 5) and then VAME can be imported in Python

Actual behaviour: there are a few missing dependencies, namely pytest, pyyaml, and opencv-python, that have to be installed before running setup.py so that VAME can be used.

Also, there's a typo on line 44 of vame/model/create_training.py: it should be test = int(num_frames*cfg['test_fraction']) (not TestFraction).

System info:-

Windows 10 x64
Conda 4.8.3
Python 3.7.7
CUDA 10.1.105

TypeError: ufunc 'isnan' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''

  1. Is it correct to execute align_demo.py in order to display the output like the picture on the web page?


  2. When align_demo.py is executed, the following error occurs. How can I fix this error?

C:\Users\kangel\AppData\Local\Continuum\anaconda3\envs\vame\python.exe D:/Project/AI/Dog/VAME-master/examples/align_demo.py
Traceback (most recent call last):
  File "D:/Project/AI/Dog/VAME-master/examples/align_demo.py", line 300, in <module>
    egocentric_time_series = align_demo(path_to_file, filename, file_format, crop_size, use_video=False, check_video=False)
  File "D:/Project/AI/Dog/VAME-master/examples/align_demo.py", line 284, in align_demo
    pose_flip_ref, bg, frame_count, use_video)
  File "D:/Project/AI/Dog/VAME-master/examples/align_demo.py", line 152, in align_mouse
    i = interpol(i)
  File "D:/Project/AI/Dog/VAME-master/examples/align_demo.py", line 83, in interpol
    nans, x = nan_helper(y[0])
  File "D:/Project/AI/Dog/VAME-master/examples/align_demo.py", line 75, in nan_helper
    return np.isnan(y), lambda z: z.nonzero()[0]
TypeError: ufunc 'isnan' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''

Process finished with exit code 1
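This TypeError usually means the array passed to np.isnan has dtype=object (e.g. strings or None values read in from the CSV); np.isnan only supports numeric dtypes. A minimal sketch of the cast that fixes it, with illustrative data:

```python
import numpy as np

# Cast a possibly object-dtype pose array to float so np.isnan works.
# Non-numeric placeholders (None, 'nan') become NaN in the float array.
def to_float(y):
    return np.asarray(y, dtype=np.float64)

y = np.array([1.0, None, 'nan', 2.5], dtype=object)  # illustrative data
nans = np.isnan(to_float(y))
```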

Please help me :(

Wrong cost expression in merge_func

The cost expression in merge_func (tree_hierarchy.py:66) seems to be missing parentheses around the numerator, based on equation (21) in Luxem et al. (2020).
Furthermore, the cases of zero numerator and/or denominator should be explicitly handled.
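Based on those two points, a corrected version might look like this (a sketch; names follow the line quoted from tree_hierarchy.py, and returning infinity for a zero denominator is one possible policy, not the authors' confirmed fix):

```python
import numpy as np

def merge_cost(motif_norm, transition_matrix, i, j):
    """Corrected cost per eq. (21): parenthesized numerator, explicit
    handling of a zero denominator (no observed transitions between i and j)."""
    denom = np.abs(transition_matrix[i, j] + transition_matrix[j, i])
    if denom == 0:
        return np.inf  # assumed policy: never merge motifs with no transitions
    return (motif_norm[i] + motif_norm[j]) / denom
```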

Possible NumPy Version issue in createTrainset()

Hi Kevin,
Hi Pavel,

Quick report, so others might find it. This is not a VAME issue but might be connected to the installation process. Please feel free to remove it, if you think it is unnecessary!

I ran into an error when creating the trainset using numpy 1.18.1.
This seems to be connected to np.load()
ValueError: Object arrays cannot be loaded when allow_pickle=False

A quick google suggested a version conflict: Stackoverflow link
Apparently newer versions of NumPy no longer default allow_pickle to True.

Downgrading NumPy to 1.16.1 solved the issue for me.

pip install numpy==1.16.1
Although the thread suggests another workaround.
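For reference, the call-site workaround is to pass allow_pickle=True explicitly when loading object arrays (a minimal sketch):

```python
import numpy as np
import os
import tempfile

# NumPy >= 1.16.3 defaults allow_pickle to False on load, so object arrays
# must opt in explicitly. Round-trip demonstration with a ragged object array:
path = os.path.join(tempfile.mkdtemp(), 'trainset.npy')
data = np.array([np.arange(3), np.arange(5)], dtype=object)
np.save(path, data)
loaded = np.load(path, allow_pickle=True)
```

Note that allow_pickle=False is the safer default: only enable it for files you trust, since unpickling can execute arbitrary code.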

best,
Jens

Issue about storing the latent vectors of videos (segment_behavior.py)

In the file "segment_behavior.py"
def cluster_latent_space(cfg, files, z_data, z_logger, cluster_method, n_cluster, model_name):
In this function, there is an input called z_logger, but z_logger is not used within the function at all. There is also a variable idx inside the for loop of cluster_latent_space that is never used.

When I have a group of videos, I've noticed that the labels saved for each video all have exactly the same length, equal to the number of labels for the whole video library. This might cause mislabeling when generating the corresponding videos in the function get_cluster_vid. Also, the purpose of z_logger is to log the segmentation of z_data, but it is not stored in the results.

Besides, I would suggest storing both the z_data for the whole video library and the z_data for each video.

Personally, I edited your code a bit to avoid these minor problems:

def cluster_latent_space(cfg, files, z_data, z_logger, cluster_method, n_cluster, model_name):    
    for cluster in n_cluster:
        if cluster_method == 'kmeans':
            print('Behavior segmentation via k-Means for %d cluster.' %cluster)
            data_labels = kmeans_clustering(z_data, n_clusters=cluster)
            data_labels = np.int64(scipy.signal.medfilt(data_labels, cfg['median_filter']))
            
        if cluster_method == 'GMM':
            print('Behavior segmentation via GMM.')
            data_labels = gmm_clustering(z_data, n_components=cluster)
            data_labels = np.int64(scipy.signal.medfilt(data_labels, cfg['median_filter']))
            
        #save latent vector
        print("Saving latent vector..." )
        if not os.path.exists(cfg['project_path']+'results_latent'):
            os.mkdir(cfg['project_path']+'results_latent')
        
        np.save(cfg['project_path']+'results_latent'+os.sep+str(cluster)+'_zdata', z_data)
        np.save(cfg['project_path']+'results_latent'+os.sep+str(cluster)+'_zlogger', z_logger)
        if cluster_method == 'kmeans':
            np.save(cfg['project_path']+'results_latent'+os.sep+str(cluster)+'_km_label', data_labels)
        
        for idx, file in enumerate(files):
            print("Segmentation for file %s..." %file )
            if not os.path.exists(cfg['project_path']+'results/'+file+'/'+model_name+'/'+cluster_method+'-'+str(cluster)):
                os.mkdir(cfg['project_path']+'results/'+file+'/'+model_name+'/'+cluster_method+'-'+str(cluster))
        
            save_data = cfg['project_path']+'results/'+file+'/'+model_name+'/'
            labels = data_labels[z_logger[idx]:z_logger[idx+1]-1]
            latent_v = z_data[z_logger[idx]:z_logger[idx+1]-1]

                
            if cluster_method == 'kmeans':
                np.save(save_data+cluster_method+'-'+str(cluster)+'/'+str(cluster)+'_km_label_'+file, labels)
                
            if cluster_method == 'GMM':
                np.save(save_data+cluster_method+'-'+str(cluster)+'/'+str(cluster)+'_gmm_label_'+file, labels)
            
            if cluster_method == 'all':
                np.save(save_data+cluster_method+'-'+str(cluster)+'/'+str(cluster)+'_km_label_'+file, labels)
                np.save(save_data+cluster_method+'-'+str(cluster)+'/'+str(cluster)+'_gmm_label_'+file, labels)
                
            # store z data
            np.save(save_data+cluster_method+'-'+str(cluster)+'/'+str(cluster)+'_latent_vector_'+file, latent_v)

I am a beginner on GitHub and this is my first time posting an issue, so I have no idea how to report something like this. Please forgive me if I have posted anything inappropriate. If you think it is better to contact you by email, please let me know.

Many thanks in advance.
