
deeplabcut / deeplabcut-core


Headless DeepLabCut (no GUI support)

Home Page: http://deeplabcut.org

License: GNU Lesser General Public License v3.0

Python 97.67% Shell 0.07% Jupyter Notebook 2.27%
behavior-analysis deep-learning deeplabcut pose-estimation pose-tracking

deeplabcut-core's Introduction

Welcome! 👋

DeepLabCut™️ is a toolbox for state-of-the-art markerless pose estimation of animals performing various behaviors. As long as you can see (label) what you want to track, you can use this toolbox, as it is animal and object agnostic. Read a short development and application summary below.

Please click the link above for all the information you need to get started! Please note that currently we support only Python 3.10+ (see conda files for guidance).
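Since only Python 3.10+ is supported, it can save time to verify the interpreter before installing. A minimal sketch (the `meets_requirement` helper is illustrative, not part of DeepLabCut):

```python
import sys

def meets_requirement(version_info=sys.version_info, minimum=(3, 10)):
    """Return True if the interpreter version is at least `minimum` (major, minor)."""
    return tuple(version_info[:2]) >= minimum

# Report whether this interpreter falls in the supported range.
print("Python", sys.version.split()[0], "-", "OK" if meets_requirement() else "too old")
```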

Developers Stable Release:

  • Very quick start: you need TensorFlow installed (up to v2.10 is supported across platforms). Run pip install "deeplabcut[gui,tf]", which includes all functions plus the GUI, or pip install "deeplabcut[tf]" for the headless version (with PyTorch and TensorFlow).

Developers Alpha Release:

We recommend using our conda file, see here or the new deeplabcut-docker package.

Our docs walk you through using DeepLabCut and its key API points. For an overview of the toolbox and the workflow for project management, see our step-by-step Nature Protocols paper.

For a deeper understanding and more resources for you to get started with Python and DeepLabCut, please check out our free online course! http://DLCcourse.deeplabcut.org

🐭 pose tracking of single animals demo Open in Colab

🐭🐭🐭 pose tracking of multiple animals demo Open in Colab

  • See more demos here. We provide data and several Jupyter Notebooks: one that walks you through a demo dataset to test your installation, and another Notebook to run DeepLabCut from the beginning on your own data. We also show you how to use the code in Docker, and on Google Colab.

Why use DeepLabCut?

In 2018, we demonstrated the capabilities for trail tracking, reaching in mice, and various Drosophila behaviors during egg-laying (see Mathis et al. for details). There is, however, nothing specific that makes the toolbox applicable only to these tasks and/or species. The toolbox has already been successfully applied (by us and others) to rats, humans, various fish species, bacteria, leeches, various robots, cheetahs, mouse whiskers, and race horses. DeepLabCut utilized the feature detectors (ResNets + readout layers) of one of the state-of-the-art algorithms for human pose estimation by Insafutdinov et al., called DeeperCut, which inspired the name for our toolbox (see references below).

Since then, the package has changed substantially. The code has been re-tooled and re-factored since 2.1+: we have added faster and higher-performance variants with MobileNetV2, EfficientNet, and our own DLCRNet backbones (see Pretraining boosts out-of-domain robustness for pose estimation and Lauer et al. 2022). Additionally, we have improved inference speed, provided additional and novel augmentation methods, and added real-time and multi-animal support. In v3.0+ we changed the backend to support PyTorch. This brings not only an easier installation process for users, but performance gains, developer flexibility, and a lot of new tools! Importantly, the high-level API stays the same, so it will be a seamless transition for users 💜! We currently provide state-of-the-art performance for animal pose estimation, and the labs (M. Mathis Lab and A. Mathis Group) have both top journal and computer vision conference papers.

Left: Due to transfer learning it requires little training data for multiple, challenging behaviors (see Mathis et al. 2018 for details). Mid Left: The feature detectors are robust to video compression (see Mathis/Warren for details). Mid Right: It allows 3D pose estimation with a single network and camera (see Mathis/Warren). Right: It allows 3D pose estimation with a single network trained on data from multiple cameras together with standard triangulation methods (see Nath* and Mathis* et al. 2019).

DeepLabCut is embedded in a larger open-source ecosystem, providing behavioral tracking for neuroscience, ecology, medical, and technical applications. Moreover, many new tools are being actively developed. See DLC-Utils for some helper code.

Code contributors:

DLC code was originally developed by Alexander Mathis & Mackenzie Mathis, and was extended in 2.0 with the core dev team consisting of Tanmay Nath (2.0-2.1), and currently (2.1+) with Jessy Lauer and (2.3+) Niels Poulsen. DeepLabCut is an open-source tool and has benefited from suggestions and edits by many individuals including Mert Yuksekgonul, Tom Biasi, Richard Warren, Ronny Eichler, Hao Wu, Federico Claudi, Gary Kane and Jonny Saunders as well as the 100+ contributors. Please see AUTHORS for more details!

This is an actively developed package and we welcome community development and involvement.

Get Assistance & be part of the DLC Community✨:

| 🚉 Platform | 🎯 Goal | ⏱️ Estimated Response Time | 📢 Support Squad |
| --- | --- | --- | --- |
| Image.sc forum (🐭 tag: DeepLabCut) | To ask help and support questions 👋 | Promptly 🔥 | DLC Team and The DLC Community |
| GitHub DeepLabCut/Issues | To report bugs and code issues 🐛 (we encourage you to search issues first) | 2-3 days | DLC Team |
| Gitter | To discuss with other users, share ideas and collaborate 💡 | 2 days | The DLC Community |
| GitHub DeepLabCut/Contributing | To contribute your expertise and experience 🙏💯 | Promptly 🔥 | DLC Team |
| 🚧 GitHub DeepLabCut/Roadmap | To learn more about our journey ✈️ | N/A | N/A |
| Twitter Follow | To keep up with our latest news and updates 📢 | Daily | DLC Team |
| The DeepLabCut AI Residency Program | To come and work with us next summer 👏 | Annually | DLC Team |

References:

If you use this code or data we kindly ask that you please cite Mathis et al, 2018 and, if you use the Python package (DeepLabCut2.x) please also cite Nath, Mathis et al, 2019. If you utilize the MobileNetV2s or EfficientNets please cite Mathis, Biasi et al. 2021. If you use versions 2.2beta+ or 2.2rc1+, please cite Lauer et al. 2022.

DOIs (#ProTip, for helping you find citations for software, check out CiteAs.org!):

Please check out the following references for more details:

@article{Mathisetal2018,
    title = {DeepLabCut: markerless pose estimation of user-defined body parts with deep learning},
    author = {Alexander Mathis and Pranav Mamidanna and Kevin M. Cury and Taiga Abe  and Venkatesh N. Murthy and Mackenzie W. Mathis and Matthias Bethge},
    journal = {Nature Neuroscience},
    year = {2018},
    url = {https://www.nature.com/articles/s41593-018-0209-y}}

 @article{NathMathisetal2019,
    title = {Using DeepLabCut for 3D markerless pose estimation across species and behaviors},
    author = {Nath*, Tanmay and Mathis*, Alexander and Chen, An Chi and Patel, Amir and Bethge, Matthias and Mathis, Mackenzie W},
    journal = {Nature Protocols},
    year = {2019},
    url = {https://doi.org/10.1038/s41596-019-0176-0}}
    
@InProceedings{Mathis_2021_WACV,
    author    = {Mathis, Alexander and Biasi, Thomas and Schneider, Steffen and Yuksekgonul, Mert and Rogers, Byron and Bethge, Matthias and Mathis, Mackenzie W.},
    title     = {Pretraining Boosts Out-of-Domain Robustness for Pose Estimation},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2021},
    pages     = {1859-1868}}
    
@article{Lauer2022MultianimalPE,
    title={Multi-animal pose estimation, identification and tracking with DeepLabCut},
    author={Jessy Lauer and Mu Zhou and Shaokai Ye and William Menegas and Steffen Schneider and Tanmay Nath and Mohammed Mostafizur Rahman and Valentina Di Santo and Daniel Soberanes and Guoping Feng and Venkatesh N. Murthy and George Lauder and Catherine Dulac and M. Mathis and Alexander Mathis},
    journal={Nature Methods},
    year={2022},
    volume={19},
    pages={496 - 504}}

@article{insafutdinov2016eccv,
    title = {DeeperCut: A Deeper, Stronger, and Faster Multi-Person Pose Estimation Model},
    author = {Eldar Insafutdinov and Leonid Pishchulin and Bjoern Andres and Mykhaylo Andriluka and Bernt Schiele},
    booktitle = {ECCV'16},
    url = {http://arxiv.org/abs/1605.03170}}

Review & Educational articles:

@article{Mathis2020DeepLT,
    title={Deep learning tools for the measurement of animal behavior in neuroscience},
    author={Mackenzie W. Mathis and Alexander Mathis},
    journal={Current Opinion in Neurobiology},
    year={2020},
    volume={60},
    pages={1-11}}

@article{Mathis2020Primer,
    title={A Primer on Motion Capture with Deep Learning: Principles, Pitfalls, and Perspectives},
    author={Alexander Mathis and Steffen Schneider and Jessy Lauer and Mackenzie W. Mathis},
    journal={Neuron},
    year={2020},
    volume={108},
    pages={44-65}}

Other open-access pre-prints related to our work on DeepLabCut:

@article{MathisWarren2018speed,
    author = {Mathis, Alexander and Warren, Richard A.},
    title = {On the inference speed and video-compression robustness of DeepLabCut},
    year = {2018},
    doi = {10.1101/457242},
    publisher = {Cold Spring Harbor Laboratory},
    URL = {https://www.biorxiv.org/content/early/2018/10/30/457242},
    eprint = {https://www.biorxiv.org/content/early/2018/10/30/457242.full.pdf},
    journal = {bioRxiv}}

License:

This project is primarily licensed under the GNU Lesser General Public License v3.0. Note that the software is provided "as is", without warranty of any kind, express or implied. If you use the code or data, please cite us! Note, artwork (DeepLabCut logo) and images are copyrighted; please do not take or use these images without written permission.

SuperAnimal models are provided for research use only (non-commercial use).

Major Versions:

  • For all versions, please see here.

VERSION 3.0: A whole new experience with PyTorch🔥. While the high-level API remains the same, the backend and developer friendliness have greatly improved, along with performance gains!

VERSION 2.3: Model Zoo SuperAnimals, and a whole new GUI experience.

VERSION 2.2: Multi-animal pose estimation, identification, and tracking with DeepLabCut is supported (as well as single-animal projects).

VERSION 2.0-2.1: This is the Python package of DeepLabCut that was originally released in Oct 2018 with our Nature Protocols paper (preprint here). This package includes graphical user interfaces to label your data, and take you from data set creation to automatic behavioral analysis. It also introduces an active learning framework to efficiently use DeepLabCut on large experimental projects, and data augmentation tools that improve network performance, especially in challenging cases (see panel b).

VERSION 1.0: The initial, Nature Neuroscience version of DeepLabCut can be found in the history of git, or here: https://github.com/DeepLabCut/DeepLabCut/releases/tag/1.11

News (and in the news):

💜 We released a major update, moving from 2.x --> 3.x with the backend change to PyTorch

💜 The DeepLabCut Model Zoo launches SuperAnimals, see more here.

💜 DeepLabCut supports multi-animal pose estimation! maDLC is out of beta/rc mode and beta is deprecated, thanks to the testers out there for feedback! Your labeled data will be backwards compatible, but not all other steps. Please see the new 2.2+ releases for what's new & how to install it, please see our new paper, Lauer et al 2022, and the new docs on how to use it!

💜 We support multi-animal re-identification, see Lauer et al 2022.

💜 We have a real-time package available! http://DLClive.deeplabcut.org

deeplabcut-core's People

Contributors

alexemg, alyetama, jpellman, maflister, mmathislab, stes


deeplabcut-core's Issues

Multi-animal DLC Compatibility with RTX 3070 GPU

OS: Win 10
DeepLabCut Version: 2.2rc1
Anaconda env used: DLC Core
Tensorflow version: 2.4
Cuda version: 11.3
driver: 466.27
cuDNN: 8.2

Hi folks,

Using DLCore, I have managed to train a dataset on a single animal using my RTX 3070 GPU. However, I am having issues running maDLC on the same. My understanding is that DLCore does not have the code to run maDLC, but has someone found a way around this or is this yet to be released?

Hope to remove the Intel-openmp dependency

I tried to install deeplabcut-core to download a pretrained model on my Jetson Nano. However, I found that DLC needs intel-openmp, and ARM-based CPUs are becoming more and more popular (like the Apple M1), so maybe you can consider my suggestion. Also, I wonder how you test your models on the Jetson Xavier: if I train a model on x86, can it be deployed on ARM directly with TensorRT support? Thanks!

UnboundLocalError: local variable 'cfg' referenced before assignment

Hi all,

I am having this issue when I attempt dlc.extract_frames(config_path, crop=True).
I am using Visual Studio Code as a text editor instead of Atom; could this be the source of the problem? It seems to be in the config.yaml, and I even tried an old config file that used to work.
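For what it's worth, this error is plain Python behavior rather than anything editor-specific: if reading the config fails inside a try/except (or a branch that never runs) and the code later uses cfg anyway, Python raises UnboundLocalError. A minimal stand-alone reproduction (the `load` function is a hypothetical stand-in, not DLC code):

```python
def load(path):
    try:
        if not path.endswith(".yaml"):
            raise ValueError("not a yaml file")
        cfg = {"path": path}  # stand-in for parsing config.yaml
    except ValueError:
        pass  # swallowing the error leaves cfg unassigned
    return cfg  # UnboundLocalError whenever the except branch ran

try:
    load("config.txt")
except UnboundLocalError as exc:
    print("reproduced:", exc)
```

So the first thing to check is that config_path actually points at an existing, parseable config.yaml, independent of which editor wrote it.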

WIP tf 2.2+ migration

I started a branch called TF2.2alpha that is a start at migrating to TF2.

I followed the guide here:
https://www.tensorflow.org/guide/migrate
and am utilizing pip install tf_slim
also see issue DeepLabCut/DeepLabCut#601

Here is the log of the outstanding issues, some of which are resolved and some of which need more work. Zero rush; I just did this for a bit of fun.

short list:

Converted 187 files
Detected 3 issues that require attention
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
File: DeepLabCut-core/build/lib/deeplabcutcore/pose_estimation_tensorflow/train 2.py
--------------------------------------------------------------------------------
DeepLabCut-core/build/lib/deeplabcutcore/pose_estimation_tensorflow/train 2.py:205:12: WARNING: *.save requires manual check. (This warning is only applicable if the code saves a tf.Keras model) Keras model.save now saves to the Tensorflow SavedModel format by default, instead of HDF5. To continue saving to HDF5, add the argument save_format='h5' to the save() function.
--------------------------------------------------------------------------------
File: DeepLabCut-core/build/lib/deeplabcutcore/pose_estimation_tensorflow/train.py
--------------------------------------------------------------------------------
DeepLabCut-core/build/lib/deeplabcutcore/pose_estimation_tensorflow/train.py:207:12: WARNING: *.save requires manual check. (This warning is only applicable if the code saves a tf.Keras model) Keras model.save now saves to the Tensorflow SavedModel format by default, instead of HDF5. To continue saving to HDF5, add the argument save_format='h5' to the save() function.
--------------------------------------------------------------------------------
File: DeepLabCut-core/deeplabcutcore/pose_estimation_tensorflow/train.py
--------------------------------------------------------------------------------
DeepLabCut-core/deeplabcutcore/pose_estimation_tensorflow/train.py:207:12: WARNING: *.save requires manual check. (This warning is only applicable if the code saves a tf.Keras model) Keras model.save now saves to the Tensorflow SavedModel format by default, instead of HDF5. To continue saving to HDF5, add the argument save_format='h5' to the save() function.

report.txt

remaining issues:

(1) testscript is failing due to no video to load into project

  • CREATING PROJECT
    Created "/Users/mwmathis/Documents/DeepLabCut-core_v3/Testcore-Alex-2020-06-24/videos"
    Created "/Users/mwmathis/Documents/DeepLabCut-core_v3/Testcore-Alex-2020-06-24/labeled-data"
    Created "/Users/mwmathis/Documents/DeepLabCut-core_v3/Testcore-Alex-2020-06-24/training-datasets"
    Created "/Users/mwmathis/Documents/DeepLabCut-core_v3/Testcore-Alex-2020-06-24/dlc-models"
    Copying the videos
    WARNING: No valid videos were found. The project was not created ... Verify the video files and re-create the project.
    Traceback (most recent call last):
      File "testscript.py", line 57, in <module>
        cfg=dlc.auxiliaryfunctions.read_config(path_config_file)
      File "/Users/mwmathis/Documents/DeepLabCut-core_v3/deeplabcutcore/utils/auxiliaryfunctions.py", line 132, in read_config
        "Config file is not found. Please make sure that the file exists and/or that you passed the path of the config file correctly!")
    FileNotFoundError: Config file is not found. Please make sure that the file exists and/or that you passed the path of the config file correctly!

  • dlccore.train_network():
    File "pose_estimation_tensorflow/nnet/pose_net.py", line 69, in prediction_layers
      with tf.variable_scope('pose', reuse=reuse):
    AttributeError: module 'tensorflow' has no attribute 'variable_scope'
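The first failure above cascades from "No valid videos were found": the project directory is never populated, so reading the config then fails. A small pre-check before creating a test project makes the root cause explicit (`valid_videos` is a hypothetical helper, not part of the DLC API, and the extension set is an assumption):

```python
from pathlib import Path

VIDEO_EXTENSIONS = {".avi", ".mp4", ".mov"}  # assumed typical video types

def valid_videos(paths):
    """Return only the paths that exist on disk and carry a video extension."""
    return [p for p in map(Path, paths)
            if p.suffix.lower() in VIDEO_EXTENSIONS and p.is_file()]

candidates = ["videos/reaching.avi", "notes.txt"]
if not valid_videos(candidates):
    print("No valid videos; project creation would fail with the warning above.")
```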

Compatibility with RTX 3080

OS: Win 10
DeepLabCut Version: DeepLabCut-core tf 2.2 alpha
Anaconda env used: DLC-GPU (clone the DLC-GPU env and uninstall the CUDA and cudnn)
Tensorflow Version: TF2.3, TF2.4, or tf-nightly, installed with pip (see below)
Cuda version: 11.0 and 11.1 (see below)

Hi everyone,
First of all, I want to say thank you to the deeplabcut team! I have been using the DLC for whisker tracking on an RTX 2060 for a while and it significantly facilitates my project.
Recently, I got an RTX 3080 in the lab. However, I had a hard time setting it up for DLC due to compatibility issues. First, I noticed that the RTX 3000 series does not support CUDA 10.x or earlier, so I installed CUDA 11.0 or CUDA 11.1 with the corresponding cuDNN on my Windows machine. I also cloned the DLC-GPU conda environment and uninstalled its original CUDA and cuDNN to prevent conflicts.
TensorFlow starts to support CUDA 11.0 from TensorFlow 2.4, so I installed the TensorFlow 2.4 or tf-nightly-2.5 in the conda environment (via pip). I also tried TF-2.3 to check whether TF-2.3 is indeed incompatible with CUDA 11.x. I followed the
https://github.com/DeepLabCut/DeepLabCut-core/blob/tf2.2alpha/Colab_TrainNetwork_VideoAnalysis_TF2.ipynb
to install DeepLabCut-core tf 2.2 alpha and tf-slim and run the deeplabcut-core. However, I could not get it to start training in any of the settings.
Here is the summary:

| CUDA | TensorFlow | Result |
| --- | --- | --- |
| 11.0 | TF-2.3 | TF cannot recognize the GPU, as it looks for .dll files that only exist in CUDA 10.x |
| 11.0 | TF-2.4 | TF recognizes the GPU smoothly; cannot start training, with an error message (see Notes 1) |
| 11.0 | tf-nightly | TF recognizes the GPU smoothly; cannot start training, with an error message (see Notes 1) |
| 11.1 | TF-2.4 | TF recognizes the GPU with a trick (see Notes 2); cannot start training, with no error message |
| 11.1 | tf-nightly | TF recognizes the GPU with a trick (see Notes 2); cannot start training, with no error message |
I tested some simple TensorFlow script (https://www.tensorflow.org/tutorials/quickstart/advanced), they seemed to work fine on GPU in the last 4 configurations that I listed above.

Notes 1: Error message: failed to create cublas handle: CUBLAS_STATUS_ALLOC_FAILED. And I saw the VRAM exploded in Windows Task manager after I started training. I tried to restrict the memory to a lower use by "config.gpu_options.per_process_gpu_memory_fraction = 0.6". It did not help, unfortunately.

Notes 2: TF could not recognize the GPU because it could not find "cusolver64_10.dll", which exists in CUDA 11.0 but was replaced by "cusolver64_11.dll" in CUDA 11.1. So I copied "cusolver64_11.dll" and renamed it "cusolver64_10.dll". Although TF can recognize the GPU after that, it cannot start training: I saw VRAM usage increase (but not explode) in Task Manager after training started, and after ~30 seconds ipython or python just closed itself without any error message.

I also carefully followed the suggestions in DeepLabCut/DeepLabCut#944. They are very useful suggestions. However, I still cannot get my RTX 3080 to work.

Do you have any more suggestions that I could try?
Does anyone have a guide to set DLC-Core on RTX 3000 Series?

Thank you in advance

Issues with installing matplotlib

Specs:
OS: Windows 10
Graphics card: RTX3070
CUDA: 9.0
Python: 3.9

Due to series 3000 cards not working with Tensorflow 1.x, I'm trying to run the headless DeepLabCut with tensorflow 2.0.

Issue:
When I make a fresh anaconda environment and run pip install git+https://github.com/DeepLabCut/DeepLabCut-core.git@tf2.2alpha
(code I retrieved from the colab), I am unable to install matplotlib.

>>>pip install git+https://github.com/DeepLabCut/DeepLabCut-core.git@tf2.2alpha
Collecting git+https://github.com/DeepLabCut/DeepLabCut-core.git@tf2.2alpha
  Cloning https://github.com/DeepLabCut/DeepLabCut-core.git (to revision tf2.2alpha) to c:\users\jc\appdata\local\temp\pip-req-build-r3dhdv6n
Collecting certifi
  Using cached certifi-2020.12.5-py2.py3-none-any.whl (147 kB)
Collecting chardet
  Using cached chardet-4.0.0-py2.py3-none-any.whl (178 kB)
Collecting click
  Using cached click-7.1.2-py2.py3-none-any.whl (82 kB)
Collecting easydict
  Using cached easydict-1.9.tar.gz (6.4 kB)
Collecting h5py~=2.7
  Using cached h5py-2.10.0.tar.gz (301 kB)
Collecting intel-openmp
  Using cached intel_openmp-2021.1.2-py2.py3-none-win_amd64.whl (3.3 MB)
Collecting imgaug
  Using cached imgaug-0.4.0-py2.py3-none-any.whl (948 kB)
Collecting ipython
  Using cached ipython-7.19.0-py3-none-any.whl (784 kB)
Collecting ipython-genutils
  Using cached ipython_genutils-0.2.0-py2.py3-none-any.whl (26 kB)
Collecting matplotlib==3.0.3
  Using cached matplotlib-3.0.3.tar.gz (36.6 MB)
    ERROR: Command errored out with exit status 1:
     command: 'C:\Users\JC\AppData\Local\Microsoft\WindowsApps\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\python.exe' -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\JC\\AppData\\Local\\Temp\\pip-install-cy4dervr\\matplotlib\\setup.py'"'"'; __file__='"'"'C:\\Users\\JC\\AppData\\Local\\Temp\\pip-install-cy4dervr\\matplotlib\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base 'C:\Users\JC\AppData\Local\Temp\pip-pip-egg-info-wqn2llym'
         cwd: C:\Users\JC\AppData\Local\Temp\pip-install-cy4dervr\matplotlib\
    Complete output (47 lines):
    ============================================================================
    Edit setup.cfg to change the build options

    BUILDING MATPLOTLIB
                matplotlib: yes [3.0.3]
                    python: yes [3.9.1 (tags/v3.9.1:1e5d33e, Dec  7 2020,
                            17:08:21) [MSC v.1927 64 bit (AMD64)]]
                  platform: yes [win32]

    REQUIRED DEPENDENCIES AND EXTENSIONS
                     numpy: yes [not found. pip may install it below.]
          install_requires: yes [handled by setuptools]
                    libagg: yes [pkg-config information for 'libagg' could not
                            be found. Using local copy.]
                  freetype: no  [The C/C++ header for freetype
                            (freetype2\ft2build.h) could not be found.  You may
                            need to install the development package.]
                       png: no  [The C/C++ header for png (png.h) could not be
                            found.  You may need to install the development
                            package.]
                     qhull: yes [pkg-config information for 'libqhull' could not
                            be found. Using local copy.]

    OPTIONAL SUBPACKAGES
               sample_data: yes [installing]
                  toolkits: yes [installing]
                     tests: no  [skipping due to configuration]
            toolkits_tests: no  [skipping due to configuration]

    OPTIONAL BACKEND EXTENSIONS
                       agg: yes [installing]
                     tkagg: yes [installing; run-time loading from Python Tcl /
                            Tk]
                    macosx: no  [Mac OS-X only]
                 windowing: yes [installing]

    OPTIONAL PACKAGE DATA
                      dlls: no  [skipping due to configuration]

    ============================================================================
                            * The following required packages can not be built:
                            * freetype, png
                            * Please check http://gnuwin32.sourceforge.net/packa
                            * ges/freetype.htm for instructions to install
                            * freetype
                            * Please check http://gnuwin32.sourceforge.net/packa
                            * ges/libpng.htm for instructions to install png
    ----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.

I've tried installing the freetype and libpng libraries; however, I only run into more errors once I have done that. Any idea what the issue is here?

Joe

Attributes

Hi! Thank you for making such a great tool!

Your Operating system and DeepLabCut version

OS: Windows 10
DeepLabCut Version: DeepLabCut 2.1.8.2
Anaconda env used: DeepLabCut & TF 1.13.1

OS: Linux (High Performance Cluster)
DeepLabCut Version: DeepLabCut 2.1.8.1
Virtual env used: DeepLabCut-core & TF 1.13.1 (GPU)

Describe the problem

I created a project with the full package (DeepLabCut 2.1.8.2) and labelled on Windows. Everything was transferred to the cluster and the paths were adapted in the config.yaml from Windows to Linux. However, since I did not label all images I wanted to call dropimagesduetolackofannotation with DeepLabCut-core. I got an AttributeError: module 'deeplabcutcore' has no attribute 'dropimagesduetolackofannotation'.

dir(deeplabcut) gives different outputs on Windows and Linux: the Windows output includes dropimagesduetolackofannotation, whereas it is not in the core-version output.

After I checked trainingsetmanipulation.py and saw it was included, I thought something might be going on, since create_training_dataset works just fine. I am planning on doing some more training-set manipulation (selecting my own train & test sets, etc.), so I thought I would check. I am not sure if this was intended or not.

Traceback

unload python 3.7.4.
load python 3.6.8.
load cuda 10.0 library and binaries.
load cudnn 7.5.0.56 library and binaries.
Traceback (most recent call last):
File "DLC_CustomTrainingSet.py", line 9, in
deeplabcut.dropimagesduetolackofannotation
AttributeError: module 'deeplabcutcore' has no attribute 'dropimagesduetolackofannotation'
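Until the missing function is ported into the core package, a generic guard avoids the hard crash and reports what is absent (illustrated on a stdlib module here, not on deeplabcutcore):

```python
import math

def call_if_available(module, name, *args):
    """Call module.<name>(*args) if the attribute exists; otherwise report it."""
    fn = getattr(module, name, None)
    if fn is None:
        return f"{module.__name__} has no attribute {name!r}"
    return fn(*args)

print(call_if_available(math, "sqrt", 9.0))   # attribute present
print(call_if_available(math, "not_a_func"))  # attribute missing
```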

How to Reproduce the problem
Steps to reproduce the behavior:
Run the following script:

import deeplabcutcore as deeplabcut

# Set config path
config_path = '/home/.../..../.../.../config.yaml'

# Remove images without annotations
deeplabcut.dropimagesduetolackofannotation(config_path)

Additional context
A successful (training, evaluation and analyzing) transition between Windows and Linux was already achieved.

Output dir(deeplabcutcore)

['CropVideo', 'DEBUG', 'DownSampleVideo', 'ShortenVideo', 'builtins', 'cached', 'doc', 'file', 'loader', 'name', 'package', 'path', 'spec', 'add_new_videos', 'analyze_time_lapse_frames', 'analyze_videos', 'analyze_videos_converth5_to_csv', 'analyzeskeleton', 'auxfun_videos', 'auxiliaryfunctions', 'calibrate_cameras', 'check_labels', 'check_undistortion', 'convertannotationdata_fromwindows2unixstyle', 'convertcsv2h5', 'create_labeled_video', 'create_labeled_video_3d', 'create_new_project', 'create_new_project_3d', 'create_pretrained_human_project', 'create_pretrained_project', 'create_project', 'create_training_dataset', 'create_training_model_comparison', 'evaluate_network', 'export_model', 'extract_frames', 'extract_outlier_frames', 'filterpredictions', 'generate_training_dataset', 'load_demo_data', 'merge_datasets', 'mergeandsplit', 'os', 'platform', 'plot_trajectories', 'pose_estimation_3d', 'pose_estimation_tensorflow', 'post_processing', 'refine_training_dataset', 'return_evaluate_network_data', 'return_train_network_path', 'select_cropping_area', 'train_network', 'triangulate', 'utils']

Output dir(deeplabcut)

['CropVideo', 'DEBUG', 'DownSampleVideo', 'ShortenVideo', 'VERSION', '__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__', '__spec__', '__version__', 'add_new_videos', 'adddatasetstovideolistandviceversa', 'analyze_time_lapse_frames', 'analyze_videos', 'analyze_videos_converth5_to_csv', 'analyzeskeleton', 'auxfun_videos', 'auxiliaryfunctions', 'calibrate_cameras', 'check_labels', 'check_undistortion', 'comparevideolistsanddatafolders', 'convertannotationdata_fromwindows2unixstyle', 'convertcsv2h5', 'create_labeled_video', 'create_labeled_video_3d', 'create_new_project', 'create_new_project_3d', 'create_pretrained_human_project', 'create_pretrained_project', 'create_project', 'create_training_dataset', 'create_training_model_comparison', 'dropannotationfileentriesduetodeletedimages', 'dropduplicatesinannotatinfiles', 'dropimagesduetolackofannotation', 'evaluate_network', 'export_model', 'extract_frames', 'extract_outlier_frames', 'filterpredictions', 'generate_training_dataset', 'gui', 'label_frames', 'launch_dlc', 'load_demo_data', 'merge_datasets', 'mergeandsplit', 'mpl', 'multiple_individuals_labeling_toolbox', 'os', 'platform', 'plot_trajectories', 'pose_estimation_3d', 'pose_estimation_tensorflow', 'post_processing', 'refine_labels', 'refine_training_dataset', 'return_evaluate_network_data', 'return_train_network_path', 'select_crop_parameters', 'select_cropping_area', 'train_network', 'triangulate', 'utils', 'version']
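To pinpoint exactly which functions the core build lacks, the two dir() listings can be diffed with plain set arithmetic (the sets below are abbreviated stand-ins for the full outputs above):

```python
core = {"analyze_videos", "create_training_dataset", "train_network"}
full = core | {"dropimagesduetolackofannotation", "label_frames", "refine_labels"}

# Attributes present in the full package but missing from deeplabcut-core.
missing = sorted(full - core)
print(missing)
```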

unable to triangulate

I have an RTX 3090 GPU, so I chose to use deeplabcutcore.

I got a TypeError: unhashable type: 'CommentedMap' while running deeplabcut.triangulate(config3d_path, video_path, videotype='avi', gputouse=0, filterpredictions=True) (already import deeplabcutcore as deeplabcut).

And I found that if I set filterpredictions=False, I got another error IndexError: list index out of range.

If I use import deeplabcut, it works well but really slowly!

Hope you can help.

IndexError: list index out of range

Analyzing video D:\deeplabcut-video\3dvideos\finger-camera-1.avi using config_file_camera-1
Using snapshot-2000 for model D:/deeplabcut-video/finger3d-camera1-cshh-2021-03-05\dlc-models\iteration-0\finger3d-camera1Mar5-trainset95shuffle1
Initializing ResNet
INFO:tensorflow:Restoring parameters from D:/deeplabcut-video/finger3d-camera1-cshh-2021-03-05\dlc-models\iteration-0\finger3d-camera1Mar5-trainset95shuffle1\train\snapshot-2000
INFO:tensorflow:Restoring parameters from D:/deeplabcut-video/finger3d-camera1-cshh-2021-03-05\dlc-models\iteration-0\finger3d-camera1Mar5-trainset95shuffle1\train\snapshot-2000
Starting to analyze %  D:\deeplabcut-video\3dvideos\finger-camera-1.avi
Video already analyzed! D:\deeplabcut-video\3dvideos\finger-camera-1DLC_resnet50_finger3d-camera1Mar5shuffle1_2000.h5
The videos are analyzed. Now your research can truly start! 
 You can create labeled videos with 'create_labeled_video'.
If the tracking is not satisfactory for some videos, consider expanding the training set. You can use the function 'extract_outlier_frames' to extract any outlier frames!
D:\deeplabcut-video\3dvideos finger-camera-1 DLC_resnet50_finger3d-camera1Mar5shuffle1_2000
Analyzing video D:\deeplabcut-video\3dvideos\finger-camera-5.avi using config_file_camera-5
Snapshotindex is set to 'all' in the config.yaml file. Running video analysis with all snapshots is very costly! Use the function 'evaluate_network' to choose the best the snapshot. For now, changing snapshot index to -1!
Using snapshot-2000 for model D:/deeplabcut-video/finger3d-camera5-cshh-2021-03-05\dlc-models\iteration-0\finger3d-camera5Mar5-trainset95shuffle1
Initializing ResNet
INFO:tensorflow:Restoring parameters from D:/deeplabcut-video/finger3d-camera5-cshh-2021-03-05\dlc-models\iteration-0\finger3d-camera5Mar5-trainset95shuffle1\train\snapshot-2000
INFO:tensorflow:Restoring parameters from D:/deeplabcut-video/finger3d-camera5-cshh-2021-03-05\dlc-models\iteration-0\finger3d-camera5Mar5-trainset95shuffle1\train\snapshot-2000
Starting to analyze %  D:\deeplabcut-video\3dvideos\finger-camera-5.avi
Video already analyzed! D:\deeplabcut-video\3dvideos\finger-camera-5DLC_resnet50_finger3d-camera5Mar5shuffle1_2000.h5
The videos are analyzed. Now your research can truly start! 
 You can create labeled videos with 'create_labeled_video'.
If the tracking is not satisfactory for some videos, consider expanding the training set. You can use the function 'extract_outlier_frames' to extract any outlier frames!
D:\deeplabcut-video\3dvideos finger-camera-5 DLC_resnet50_finger3d-camera5Mar5shuffle1_2000
Undistorting...
---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
<ipython-input-24-682fd20e3c04> in <module>
      4 video_path = 'D:\\deeplabcut-video\\3dvideos'
      5 
----> 6 deeplabcut.triangulate(config3d_path, video_path, videotype='avi', gputouse=0, filterpredictions=False)

~\.conda\envs\deeplabcutcore\lib\site-packages\deeplabcutcore\pose_estimation_3d\triangulation.py in triangulate(config, video_path, videotype, filterpredictions, filtertype, gputouse, destfolder, save_as_csv)
    212             #undistort points for this pair
    213             print("Undistorting...")
--> 214             dataFrame_camera1_undistort,dataFrame_camera2_undistort,stereomatrix,path_stereo_file = undistort_points(config,dataname,str(cam_names[0]+'-'+cam_names[1]),destfolder)
    215             if len(dataFrame_camera1_undistort) != len(dataFrame_camera2_undistort):
    216                 import warnings

~\.conda\envs\deeplabcutcore\lib\site-packages\deeplabcutcore\pose_estimation_3d\triangulation.py in undistort_points(config, dataframe, camera_pair, destfolder)
    314     if True:
    315         # Create an empty dataFrame to store the undistorted 2d coordinates and likelihood
--> 316         dataframe_cam1 = pd.read_hdf(dataframe[0])
    317         dataframe_cam2 = pd.read_hdf(dataframe[1])
    318         scorer_cam1 = dataframe_cam1.columns.get_level_values(0)[0]

IndexError: list index out of range
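Judging from the pasted traceback, `dataname` is only appended to inside the `if filterpredictions:` branch (line 208 of `triangulation.py`), so with `filterpredictions=False` the list stays empty and `undistort_points` fails at `dataname[0]`. A minimal sketch of that failure mode, assuming this control flow (the function name `collect_datanames` and the path suffix are hypothetical, not DeepLabCut API):

```python
# Sketch, based only on the traceback above: dataname is populated solely in
# the filterpredictions branch, so filterpredictions=False leaves it empty
# and the later dataname[0] lookup raises IndexError.
def collect_datanames(videos, filterpredictions):
    dataname = []
    for vname in videos:
        if filterpredictions:
            # in the real code this appends the filtered .h5 path
            dataname.append(vname + "_filtered.h5")
        # with filterpredictions=False, nothing is ever appended
    return dataname

names = collect_datanames(["finger-camera-1", "finger-camera-5"],
                          filterpredictions=False)
try:
    first = names[0]
except IndexError as e:
    print(e)  # list index out of range
```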

TypeError: unhashable type: 'CommentedMap'

Analyzing video D:\deeplabcut-video\3dvideos\finger-camera-1.avi using config_file_camera-1
Using snapshot-2000 for model D:/deeplabcut-video/finger3d-camera1-cshh-2021-03-05\dlc-models\iteration-0\finger3d-camera1Mar5-trainset95shuffle1
Initializing ResNet
INFO:tensorflow:Restoring parameters from D:/deeplabcut-video/finger3d-camera1-cshh-2021-03-05\dlc-models\iteration-0\finger3d-camera1Mar5-trainset95shuffle1\train\snapshot-2000
INFO:tensorflow:Restoring parameters from D:/deeplabcut-video/finger3d-camera1-cshh-2021-03-05\dlc-models\iteration-0\finger3d-camera1Mar5-trainset95shuffle1\train\snapshot-2000
0it [00:00, ?it/s]
Starting to analyze %  D:\deeplabcut-video\3dvideos\finger-camera-1.avi
Video already analyzed! D:\deeplabcut-video\3dvideos\finger-camera-1DLC_resnet50_finger3d-camera1Mar5shuffle1_2000.h5
The videos are analyzed. Now your research can truly start! 
 You can create labeled videos with 'create_labeled_video'.
If the tracking is not satisfactory for some videos, consider expanding the training set. You can use the function 'extract_outlier_frames' to extract any outlier frames!
D:\deeplabcut-video\3dvideos finger-camera-1 DLC_resnet50_finger3d-camera1Mar5shuffle1_2000
Filtering with median model D:\deeplabcut-video\3dvideos\finger-camera-1.avi

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
~\.conda\envs\deeplabcutcore\lib\site-packages\pandas\core\arrays\categorical.py in __init__(self, values, categories, ordered, dtype, fastpath)
    342             try:
--> 343                 codes, categories = factorize(values, sort=True)
    344             except TypeError as err:

~\.conda\envs\deeplabcutcore\lib\site-packages\pandas\core\algorithms.py in factorize(values, sort, na_sentinel, size_hint)
    677         codes, uniques = _factorize_array(
--> 678             values, na_sentinel=na_sentinel, size_hint=size_hint, na_value=na_value
    679         )

~\.conda\envs\deeplabcutcore\lib\site-packages\pandas\core\algorithms.py in _factorize_array(values, na_sentinel, size_hint, na_value, mask)
    500     uniques, codes = table.factorize(
--> 501         values, na_sentinel=na_sentinel, na_value=na_value, mask=mask
    502     )

pandas\_libs\hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.factorize()

pandas\_libs\hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable._unique()

TypeError: unhashable type: 'CommentedMap'

During handling of the above exception, another exception occurred:

TypeError                                 Traceback (most recent call last)
<ipython-input-25-3fd320d1d100> in <module>
      4 video_path = 'D:\\deeplabcut-video\\3dvideos'
      5 
----> 6 deeplabcut.triangulate(config3d_path, video_path, videotype='avi', gputouse=0, filterpredictions=True)

~\.conda\envs\deeplabcutcore\lib\site-packages\deeplabcutcore\pose_estimation_3d\triangulation.py in triangulate(config, video_path, videotype, filterpredictions, filtertype, gputouse, destfolder, save_as_csv)
    205                     print(destfolder, vname , DLCscorer)
    206                     if filterpredictions:
--> 207                         filtering.filterpredictions(config_2d,[video],videotype=videotype,shuffle=shuffle,trainingsetindex=trainingsetindex,filtertype=filtertype,destfolder=destfolder)
    208                         dataname.append(os.path.join(destfolder,vname + DLCscorer + '.h5'))
    209 

~\.conda\envs\deeplabcutcore\lib\site-packages\deeplabcutcore\post_processing\filtering.py in filterpredictions(config, video, videotype, shuffle, trainingsetindex, filtertype, windowlength, p_bound, ARdegree, MAdegree, alpha, save_as_csv, destfolder)
    108                     Dataframe = pd.read_hdf(sourcedataname,'df_with_missing')
    109                     for bpindex,bp in tqdm(enumerate(cfg['bodyparts'])):
--> 110                         pdindex = pd.MultiIndex.from_product([[scorer], [bp], ['x', 'y','likelihood']],names=['scorer', 'bodyparts', 'coords'])
    111                         x,y,p=Dataframe[scorer][bp]['x'].values,Dataframe[scorer][bp]['y'].values,Dataframe[scorer][bp]['likelihood'].values
    112 

~\.conda\envs\deeplabcutcore\lib\site-packages\pandas\core\indexes\multi.py in from_product(cls, iterables, sortorder, names)
    558             iterables = list(iterables)
    559 
--> 560         codes, levels = factorize_from_iterables(iterables)
    561         if names is lib.no_default:
    562             names = [getattr(it, "name", None) for it in iterables]

~\.conda\envs\deeplabcutcore\lib\site-packages\pandas\core\arrays\categorical.py in factorize_from_iterables(iterables)
   2723         # For consistency, it should return a list of 2 lists.
   2724         return [[], []]
-> 2725     return map(list, zip(*(factorize_from_iterable(it) for it in iterables)))

~\.conda\envs\deeplabcutcore\lib\site-packages\pandas\core\arrays\categorical.py in <genexpr>(.0)
   2723         # For consistency, it should return a list of 2 lists.
   2724         return [[], []]
-> 2725     return map(list, zip(*(factorize_from_iterable(it) for it in iterables)))

~\.conda\envs\deeplabcutcore\lib\site-packages\pandas\core\arrays\categorical.py in factorize_from_iterable(values)
   2695         # but only the resulting categories, the order of which is independent
   2696         # from ordered. Set ordered to False as default. See GH #15457
-> 2697         cat = Categorical(values, ordered=False)
   2698         categories = cat.categories
   2699         codes = cat.codes

~\.conda\envs\deeplabcutcore\lib\site-packages\pandas\core\arrays\categorical.py in __init__(self, values, categories, ordered, dtype, fastpath)
    343                 codes, categories = factorize(values, sort=True)
    344             except TypeError as err:
--> 345                 codes, categories = factorize(values, sort=False)
    346                 if dtype.ordered:
    347                     # raise, as we don't have a sortable data structure and so

~\.conda\envs\deeplabcutcore\lib\site-packages\pandas\core\algorithms.py in factorize(values, sort, na_sentinel, size_hint)
    676 
    677         codes, uniques = _factorize_array(
--> 678             values, na_sentinel=na_sentinel, size_hint=size_hint, na_value=na_value
    679         )
    680 

~\.conda\envs\deeplabcutcore\lib\site-packages\pandas\core\algorithms.py in _factorize_array(values, na_sentinel, size_hint, na_value, mask)
    499     table = hash_klass(size_hint or len(values))
    500     uniques, codes = table.factorize(
--> 501         values, na_sentinel=na_sentinel, na_value=na_value, mask=mask
    502     )
    503 

pandas\_libs\hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.factorize()

pandas\_libs\hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable._unique()

TypeError: unhashable type: 'CommentedMap'
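`CommentedMap` is ruamel.yaml's dict subclass, so this error suggests that one of the `cfg['bodyparts']` entries was parsed from config.yaml as a mapping rather than a plain string (e.g. a stray trailing colon on a list item); `pd.MultiIndex.from_product` then tries to hash it and fails, because dict-like objects are unhashable. A pure-Python sketch of the mechanism (no pandas or ruamel needed; `can_index` is a hypothetical stand-in for the hashing that `MultiIndex.from_product` performs):

```python
# Assumption: a malformed bodyparts entry in config.yaml yields a mapping
# (ruamel's CommentedMap, a dict subclass) instead of a string. Dict-like
# objects cannot be used as hash keys, which is what pandas needs here.
good_bodyparts = ["finger1", "finger2"]          # what the code expects
bad_bodyparts = ["finger1", {"finger2": None}]   # what malformed YAML yields

def can_index(bodyparts):
    """Mimic the hashing step inside pd.MultiIndex.from_product."""
    try:
        {bp: i for i, bp in enumerate(bodyparts)}
        return True
    except TypeError as e:
        print(e)  # unhashable type: 'dict'
        return False

print(can_index(good_bodyparts))  # True
print(can_index(bad_bodyparts))   # False
```

A reasonable first debugging step is therefore to check that every entry under `bodyparts:` in the project's config.yaml is a plain `- name` string with no trailing colon.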

List of functions that need updating from deeplabcut to deeplabcut-core

deeplabcutcore import error (RTX 3070)

Hello everyone,
I have an issue: I installed deeplabcutcore and its dependencies, but when I import deeplabcutcore I get ModuleNotFoundError: No module named 'tensorflow.contrib'.
What should I do next?

(DLC-GPU) C:\Windows\system32>pip list
Package Version


tensorflow-gpu 2.3.0
absl-py 0.11.0
argon2-cffi 20.1.0
astor 0.8.1
astunparse 1.6.3
async-generator 1.10
attrs 20.3.0
backcall 0.2.0
bayesian-optimization 1.2.0
bleach 3.2.1
cachetools 4.2.0
certifi 2020.12.5
cffi 1.14.4
chardet 4.0.0
click 7.1.2
colorama 0.4.4
cycler 0.10.0
Cython 0.29.21
decorator 4.4.2
deeplabcut-core 0.0b0
deeplabcutcore 0.0b3
defusedxml 0.6.0
easydict 1.9
entrypoints 0.3
filterpy 1.4.5
gast 0.3.3
google-auth 1.24.0
google-auth-oauthlib 0.4.2
google-pasta 0.2.0
grpcio 1.31.0
h5py 2.10.0
idna 2.10
imageio 2.9.0
imageio-ffmpeg 0.4.3
imgaug 0.4.0
importlib-metadata 2.0.0
intel-openmp 2021.1.2
ipykernel 5.3.4
ipython 7.19.0
ipython-genutils 0.2.0
ipywidgets 7.6.3
jedi 0.18.0
Jinja2 2.11.2
joblib 1.0.0
jsonschema 3.2.0
jupyter 1.0.0
jupyter-client 6.1.7
jupyter-console 6.2.0
jupyter-core 4.7.0
jupyterlab-pygments 0.1.2
jupyterlab-widgets 1.0.0
Keras-Applications 1.0.8
Keras-Preprocessing 1.1.2
kiwisolver 1.3.1
llvmlite 0.34.0
Markdown 3.3.3
MarkupSafe 1.1.1
matplotlib 3.0.3
mistune 0.8.4
mkl-fft 1.2.0
mkl-random 1.1.1
mkl-service 2.3.0
mock 4.0.3
moviepy 1.0.1
msgpack 1.0.2
msgpack-numpy 0.4.7.1
nb-conda 2.2.1
nb-conda-kernels 2.3.1
nbclient 0.5.1
nbconvert 6.0.7
nbformat 5.0.8
nest-asyncio 1.4.3
networkx 2.5
notebook 6.1.6
numba 0.51.1
numexpr 2.7.2
numpy 1.16.4
oauthlib 3.1.0
opencv-python 3.4.13.47
opencv-python-headless 4.5.1.48
opt-einsum 3.3.0
packaging 20.8
pandas 1.1.5
pandocfilters 1.4.3
parso 0.8.1
patsy 0.5.1
pickleshare 0.7.5
Pillow 8.1.0
pip 20.3.3
proglog 0.1.9
prometheus-client 0.9.0
prompt-toolkit 3.0.8
protobuf 3.13.0
psutil 5.8.0
pyasn1 0.4.8
pyasn1-modules 0.2.8
pycparser 2.20
Pygments 2.7.4
pyparsing 2.4.7
pyreadline 2.1
pyrsistent 0.17.3
python-dateutil 2.8.1
pytz 2020.5
PyWavelets 1.1.1
pywin32 227
pywinpty 0.5.7
PyYAML 5.3.1
pyzmq 20.0.0
qtconsole 4.7.7
QtPy 1.9.0
requests 2.25.1
requests-oauthlib 1.3.0
rsa 4.7
ruamel.yaml 0.16.12
ruamel.yaml.clib 0.2.2
scikit-image 0.17.2
scikit-learn 0.24.0
scipy 1.4.1
Send2Trash 1.5.0
setuptools 51.1.2.post20210112
Shapely 1.7.1
six 1.15.0
statsmodels 0.12.1
tables 3.6.1
tabulate 0.8.7
tb-nightly 1.14.0a20190301
tensorboard 2.4.1
tensorboard-plugin-wit 1.7.0
tensorflow 2.3.0
tensorflow-estimator 2.3.0
tensorflow-gpu-estimator 2.3.0
tensorpack 0.9.8
termcolor 1.1.0
terminado 0.9.2
testpath 0.4.4
tf-estimator-nightly 1.14.0.dev2019030115
tf-slim 1.1.0
threadpoolctl 2.1.0
tifffile 2021.1.11
tornado 6.1
tqdm 4.56.0
traitlets 5.0.5
urllib3 1.26.2
wcwidth 0.2.5
webencodings 0.5.1
Werkzeug 1.0.1
wheel 0.36.2
widgetsnbextension 3.5.1
wincertstore 0.2
wrapt 1.12.1
wxPython 4.0.4
zipp 3.4.0

(DLC-GPU) C:\Windows\system32>python
Python 3.7.9 (default, Aug 31 2020, 17:10:11) [MSC v.1916 64 bit (AMD64)] :: Anaconda, Inc. on win32
Type "help", "copyright", "credits" or "license" for more information.
Failed calling sys.__interactivehook__
Traceback (most recent call last):
  File "C:\Users\htomi\anaconda3\envs\DLC-GPU\lib\site.py", line 408, in register_readline
    import readline
  File "C:\Users\htomi\anaconda3\envs\DLC-GPU\lib\site-packages\readline.py", line 6, in <module>
    from pyreadline.rlmain import Readline
  File "C:\Users\htomi\anaconda3\envs\DLC-GPU\lib\site-packages\pyreadline\__init__.py", line 12, in <module>
    from . import logger, clipboard, lineeditor, modes, console
  File "C:\Users\htomi\anaconda3\envs\DLC-GPU\lib\site-packages\pyreadline\modes\__init__.py", line 3, in <module>
    from . import emacs, notemacs, vi
  File "C:\Users\htomi\anaconda3\envs\DLC-GPU\lib\site-packages\pyreadline\modes\emacs.py", line 15, in <module>
    import pyreadline.lineeditor.history as history
  File "C:\Users\htomi\anaconda3\envs\DLC-GPU\lib\site-packages\pyreadline\lineeditor\history.py", line 257
    q.add_history(RL("aaaa"),encoding='utf8'))
                                             ^
SyntaxError: invalid syntax

>>> import deeplabcutcore
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\htomi\anaconda3\envs\DLC-GPU\lib\site-packages\deeplabcutcore\__init__.py", line 20, in <module>
    from deeplabcutcore.create_project import create_new_project, create_new_project_3d, add_new_videos, load_demo_data
  File "C:\Users\htomi\anaconda3\envs\DLC-GPU\lib\site-packages\deeplabcutcore\create_project\__init__.py", line 4, in <module>
    from deeplabcutcore.create_project.demo_data import load_demo_data
  File "C:\Users\htomi\anaconda3\envs\DLC-GPU\lib\site-packages\deeplabcutcore\create_project\demo_data.py", line 14, in <module>
    from deeplabcutcore.utils import auxiliaryfunctions
  File "C:\Users\htomi\anaconda3\envs\DLC-GPU\lib\site-packages\deeplabcutcore\utils\__init__.py", line 1, in <module>
    from deeplabcutcore.utils.make_labeled_video import *
  File "C:\Users\htomi\anaconda3\envs\DLC-GPU\lib\site-packages\deeplabcutcore\utils\make_labeled_video.py", line 38, in <module>
    from deeplabcutcore.pose_estimation_tensorflow.config import load_config
  File "C:\Users\htomi\anaconda3\envs\DLC-GPU\lib\site-packages\deeplabcutcore\pose_estimation_tensorflow\__init__.py", line 13, in <module>
    from deeplabcutcore.pose_estimation_tensorflow.nnet import *
  File "C:\Users\htomi\anaconda3\envs\DLC-GPU\lib\site-packages\deeplabcutcore\pose_estimation_tensorflow\nnet\__init__.py", line 16, in <module>
    from deeplabcutcore.pose_estimation_tensorflow.nnet.pose_net import *
  File "C:\Users\htomi\anaconda3\envs\DLC-GPU\lib\site-packages\deeplabcutcore\pose_estimation_tensorflow\nnet\pose_net.py", line 9, in <module>
    import tensorflow.contrib.slim as slim
ModuleNotFoundError: No module named 'tensorflow.contrib'
^Z

(DLC-GPU) C:\Windows\system32>ipython
Python 3.7.9 (default, Aug 31 2020, 17:10:11) [MSC v.1916 64 bit (AMD64)]
Type 'copyright', 'credits' or 'license' for more information
IPython 7.19.0 -- An enhanced Interactive Python. Type '?' for help.

In [1]: import deeplabcutcore

ModuleNotFoundError Traceback (most recent call last)
in
----> 1 import deeplabcutcore

~\anaconda3\envs\DLC-GPU\lib\site-packages\deeplabcutcore\__init__.py in <module>
18
19
---> 20 from deeplabcutcore.create_project import create_new_project, create_new_project_3d, add_new_videos, load_demo_data
21 from deeplabcutcore.create_project import create_pretrained_project, create_pretrained_human_project
22 from deeplabcutcore.generate_training_dataset import extract_frames, select_cropping_area

~\anaconda3\envs\DLC-GPU\lib\site-packages\deeplabcutcore\create_project\__init__.py in <module>
2 from deeplabcutcore.create_project.new_3d import create_new_project_3d
3 from deeplabcutcore.create_project.add import add_new_videos
----> 4 from deeplabcutcore.create_project.demo_data import load_demo_data
5 from deeplabcutcore.create_project.modelzoo import create_pretrained_human_project, create_pretrained_project

~\anaconda3\envs\DLC-GPU\lib\site-packages\deeplabcutcore\create_project\demo_data.py in
12 from pathlib import Path
13 import deeplabcutcore
---> 14 from deeplabcutcore.utils import auxiliaryfunctions
15
16 def load_demo_data(config,createtrainingset=True):

~\anaconda3\envs\DLC-GPU\lib\site-packages\deeplabcutcore\utils\__init__.py in <module>
----> 1 from deeplabcutcore.utils.make_labeled_video import *
2 from deeplabcutcore.utils.auxiliaryfunctions import *
3 from deeplabcutcore.utils.video_processor import *
4 from deeplabcutcore.utils.plotting import *
5

~\anaconda3\envs\DLC-GPU\lib\site-packages\deeplabcutcore\utils\make_labeled_video.py in
36
37 from deeplabcutcore.utils import auxiliaryfunctions
---> 38 from deeplabcutcore.pose_estimation_tensorflow.config import load_config
39 from skimage.util import img_as_ubyte
40 from skimage.draw import circle_perimeter, circle, line,line_aa

~\anaconda3\envs\DLC-GPU\lib\site-packages\deeplabcutcore\pose_estimation_tensorflow\__init__.py in <module>
11 from deeplabcutcore.pose_estimation_tensorflow.dataset import *
12 from deeplabcutcore.pose_estimation_tensorflow.models import *
---> 13 from deeplabcutcore.pose_estimation_tensorflow.nnet import *
14 from deeplabcutcore.pose_estimation_tensorflow.util import *
15

~\anaconda3\envs\DLC-GPU\lib\site-packages\deeplabcutcore\pose_estimation_tensorflow\nnet\__init__.py in <module>
14 from deeplabcutcore.pose_estimation_tensorflow.nnet.losses import *
15 from deeplabcutcore.pose_estimation_tensorflow.nnet.net_factory import *
---> 16 from deeplabcutcore.pose_estimation_tensorflow.nnet.pose_net import *
17 from deeplabcutcore.pose_estimation_tensorflow.nnet.predict import *

~\anaconda3\envs\DLC-GPU\lib\site-packages\deeplabcutcore\pose_estimation_tensorflow\nnet\pose_net.py in
7 import re
8 import tensorflow as tf
----> 9 import tensorflow.contrib.slim as slim
10 from tensorflow.contrib.slim.nets import resnet_v1
11 from deeplabcutcore.pose_estimation_tensorflow.dataset.pose_dataset import Batch

ModuleNotFoundError: No module named 'tensorflow.contrib'
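`tensorflow.contrib` existed only in TensorFlow 1.x and was removed in 2.x, so `pose_net.py` as imported here cannot run against the TF 2.3.0 shown in the pip list above; `tf_slim` (also in the list) is the standalone replacement that TF-2-compatible code imports instead. A small, hedged probe (the helper `module_available` is illustrative, not part of any library) to check which of the two an environment actually provides before importing the package:

```python
import importlib.util

def module_available(name: str) -> bool:
    """Return True if `name` resolves to an importable module here."""
    try:
        return importlib.util.find_spec(name) is not None
    except (ImportError, ValueError):
        # find_spec raises if a parent package is missing or name is invalid
        return False

# tensorflow.contrib exists only under TF 1.x; tf_slim is the standalone
# replacement that TF-2-era DeepLabCut code imports instead.
for mod in ("tensorflow.contrib", "tf_slim"):
    print(mod, "->", module_available(mod))
```

On a TF 2.x environment like the one logged above, the first probe should report False, which matches the `ModuleNotFoundError` in the traceback.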
