TAO Toolkit - PyTorch Backend

Overview

TAO Toolkit is a Python package hosted on the NVIDIA Python Package Index. It interacts with lower-level TAO dockers available from the NVIDIA GPU Accelerated Container Registry (NGC). The TAO containers come pre-installed with all dependencies required for training. The output of the TAO workflow is a trained model that can be deployed for inference on NVIDIA devices using DeepStream, TensorRT and Triton.

This repository contains the required implementation for all the deep learning components and networks using the PyTorch backend. These routines are packaged as part of the TAO Toolkit PyTorch container in the Toolkit package. The source code here is compatible with PyTorch version > 2.0.0.

Getting Started

Once the repository is cloned, run the envsetup.sh script to check that the build environment has the necessary dependencies and that the required environment variables are set.

source ${PATH_TO_REPO}/scripts/envsetup.sh

We recommend adding this command to your local ~/.bashrc file so that every new terminal instance runs this setup automatically.
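
For example, the following line could be appended to ~/.bashrc (the repository path is illustrative and should match your local checkout):

source /path/to/tao_pytorch_backend/scripts/envsetup.sh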

Requirements

Hardware Requirements

Minimum system configuration
  • 8 GB system RAM
  • 4 GB of GPU RAM
  • 8 core CPU
  • 1 NVIDIA GPU
  • 100 GB of SSD space
Recommended system configuration
  • 32 GB system RAM
  • 32 GB of GPU RAM
  • 8 core CPU
  • 1 NVIDIA GPU
  • 100 GB of SSD space

Software Requirements

Software                  Version
Ubuntu LTS                >=18.04
python                    >=3.10.x
docker-ce                 >19.03.5
docker-API                1.40
nvidia-container-toolkit  >1.3.0-1
nvidia-container-runtime  3.4.0-1
nvidia-docker2            2.5.0-1
nvidia-driver             >535.85
python-pip                >21.06
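
To sanity-check some of these requirements on the host, commands such as the following may help (illustrative; the exact output depends on your system):

docker --version
nvidia-smi --query-gpu=driver_version --format=csv,noheader
python3 --version
pip3 --version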

Instantiating the development container

In order to maintain a uniform development environment across all users, TAO Toolkit provides a base environment Dockerfile in docker/Dockerfile that contains all the required third-party dependencies for developers. To instantiate the docker, simply run the tao_pt CLI. The usage of the command line launcher is shown below.

usage: tao_pt [-h] [--gpus GPUS] [--volume VOLUME] [--env ENV]
              [--mounts_file MOUNTS_FILE] [--shm_size SHM_SIZE]
              [--run_as_user] [--tag TAG] [--ulimit ULIMIT] [--port PORT]

Tool to run the pytorch container.

optional arguments:
  -h, --help                show this help message and exit
  --gpus GPUS               Comma separated GPU indices to be exposed to the docker.
  --volume VOLUME           Volumes to bind.
  --env ENV                 Environment variables to bind.
  --mounts_file MOUNTS_FILE Path to the mounts file.
  --shm_size SHM_SIZE       Shared memory size for docker
  --run_as_user             Flag to run as user
  --tag TAG                 The tag value for the local dev docker.
  --ulimit ULIMIT           Docker ulimits for the host machine.
  --port PORT               Port mapping (e.g. 8889:8889).

A sample command to instantiate an interactive session in the base development docker is mentioned below.

tao_pt --gpus all \
       --volume /path/to/data/on/host:/path/to/data/on/container \
       --volume /path/to/results/on/host:/path/to/results/in/container \
       --env PYTHONPATH=/tao-pt

Training deep neural networks implies working with large datasets. These datasets are usually stored on network share drives with significantly higher storage capacity. Since the tao_pt CLI wrapper uses docker containers under the hood, these drives/mount points need to be mapped into the docker.

There are 2 ways to configure the tao_pt CLI wrapper.

  1. Via the command line options
  2. Via the mounts file. By default, at ~/.tao_mounts.json.

Command line options

Option Description Default
gpus Comma separated GPU indices to be exposed to the docker 1
volume Paths on the host machine to be exposed to the container. This is analogous to the -v option in the docker CLI. You may define multiple mount points by using the --volume option multiple times. None
env Environment variables to be defined inside the interactive container. You may set them as --env VAR=<value>. Multiple environment variables can be set by repeatedly defining the --env option. None
mounts_file Path to the mounts file, explained more in the next section. ~/.tao_mounts.json
shm_size Shared memory size for docker in Bytes. 16G
run_as_user Flag to run as default user account on the host machine. This helps with maintaining permissions for all directories and artifacts created by the container.
tag The tag value for the local dev docker None
ulimit Docker ulimits for the host machine
port Port mapping (e.g. 8889:8889) None
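
For instance, a session that exposes two GPUs, forwards a Jupyter port, and runs as the host user could be launched as follows (paths are illustrative):

tao_pt --gpus 0,1 \
       --volume /path/to/data/on/host:/path/to/data/on/container \
       --shm_size 16G \
       --run_as_user \
       --port 8889:8889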

Using the mounts file

The tao_pt CLI wrapper instance can be configured by using a mounts file. By default, the wrapper expects the mounts file to be at ~/.tao_mounts.json. However, you may point the wrapper to a different file via the --mounts_file option.

The launcher config file consists of three sections:

  • Mounts

The Mounts parameter defines the paths on the local machine that should be mapped into the docker. This is a list of JSON dictionaries, each containing the source path on the local machine and the destination path it is mapped to inside the container.

A sample config file containing 2 mount points and no docker options is shown below.

{
    "Mounts": [
        {
            "source": "/path/to/your/experiments",
            "destination": "/workspace/tao-experiments"
        },
        {
            "source": "/path/to/config/files",
            "destination": "/workspace/tao-experiments/specs"
        }
    ]
}
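
If the mounts file lives somewhere other than the default location, it can be passed explicitly via the --mounts_file option (the path below is illustrative):

tao_pt --mounts_file /path/to/custom_mounts.json --gpus all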

Updating the base docker

There will be situations where developers need to update third-party dependencies to newer versions, upgrade CUDA, etc. In such cases, please follow the steps below:

Build base docker

The base dev docker is defined in $NV_TAO_PYTORCH_TOP/docker/Dockerfile. The python packages required for TAO development are defined in $NV_TAO_PYTORCH_TOP/docker/requirements-pip.txt, and the third-party apt packages are defined in $NV_TAO_PYTORCH_TOP/docker/requirements-apt.txt. Once you have made the required changes, please update the base docker using the build script in the same directory.
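
For example, a new pip dependency could be appended to the requirements file before rebuilding (the package name and version below are illustrative):

echo "somepackage==1.2.3" >> $NV_TAO_PYTORCH_TOP/docker/requirements-pip.txt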

cd $NV_TAO_PYTORCH_TOP/docker
./build.sh --build

Test the newly built base docker

The build script tags the newly built base docker with the username of the account on the user's local machine. Therefore, developers may test their new docker by using the tao_pt command with the --tag option.

tao_pt --tag $USER -- script args
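
As a quick smoke test, you could run a one-liner inside the freshly tagged image to confirm that PyTorch sees the GPUs (the command after -- is illustrative):

tao_pt --tag $USER -- python -c "import torch; print(torch.cuda.is_available())"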

Update the new docker

Once you are sufficiently confident about the newly built base docker, please do the following

  1. Push the newly built base docker to the registry

    bash $NV_TAO_PYTORCH_TOP/docker/build.sh --build --push
  2. The above step produces a digest file associated with the docker. This is a unique identifier for the docker, so please note it and update all references to the old digest in the repository with the new one. You may find the old digest in $NV_TAO_PYTORCH_TOP/docker/manifest.json.
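
To locate the references that need updating, a recursive search such as the following may help (the digest below is a placeholder; copy the actual value from manifest.json):

grep -rn "sha256:<old_digest>" $NV_TAO_PYTORCH_TOP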

Push your final changes to the repository so that other developers can leverage and sync with the new dev environment.

Please note that if for some reason you would like to force build the docker without using a cache from the previous docker, you may do so by using the --force option.

bash $NV_TAO_PYTORCH_TOP/docker/build.sh --build --push --force

Building a release container

The TAO docker is built on top of the TAO PyTorch base dev docker, by building a python wheel for the nvidia_tao_pyt module in this repository and installing that wheel in the Dockerfile defined in release/docker/Dockerfile. The whole build process is captured in a single shell script, which may be run as follows:

git lfs install
git lfs pull
source scripts/envsetup.sh
cd $NV_TAO_PYTORCH_TOP/release/docker
./deploy.sh --build --wheel

In order to build a new docker, please edit the deploy.sh file in $NV_TAO_PYTORCH_TOP/release/docker to update the patch version and re-run the steps above.

Contribution Guidelines

TAO Toolkit PyTorch backend is not accepting contributions as part of the TAO 5.0 release, but will be open in the future.

License

This project is licensed under the Apache-2.0 License.

tao_pytorch_backend's Issues

[QST] Structured pruning feature of TAO Pytorch backend

What is your question?

Hi,

First of all, thanks for the amazing work!

I am looking into the structured pruning support within TAO. Looking at the code, it seems to be based on torch-pruning, but with modifications and varying implementations. E.g., torch-pruning implements high-level pruners, but TAO seems to go about it a bit differently. I am curious about the rationale behind TAO's approach. Is there perhaps a feature that TAO's implementation supports but torch-pruning doesn't?

Thanks in advance for your time!

Building the base docker fails -- ERROR: failed to solve: process "/bin/sh -c pip install parametrized ninja" did not complete successfully: exit code: 1

I am trying to build the base docker and it fails

(sdgpose) mona@ada:/data/tao_pytorch_backend/docker$ ./build.sh --build
Building base docker ...
[+] Building 5.0s (10/29)                                                                                                                                                                    docker:default
 => [internal] load .dockerignore                                                                                                                                                                      0.0s
 => => transferring context: 2B                                                                                                                                                                        0.0s
 => [internal] load build definition from Dockerfile                                                                                                                                                   0.0s
 => => transferring dockerfile: 2.73kB                                                                                                                                                                 0.0s
 => [internal] load metadata for nvcr.io/nvidia/pytorch:23.12-py3                                                                                                                                      2.6s
 => [auth] nvidia/pytorch:pull,push token for nvcr.io                                                                                                                                                  0.0s
 => [internal] load build context                                                                                                                                                                      0.0s
 => => transferring context: 230B                                                                                                                                                                      0.0s
 => [ 1/24] FROM nvcr.io/nvidia/pytorch:23.12-py3@sha256:da3d1b690b9dca1fbf9beb3506120a63479e0cf1dc69c9256055125460eb44f7                                                                              0.0s
 => CACHED [ 2/24] COPY docker/requirements-apt.txt requirements-apt.txt                                                                                                                               0.0s
 => CACHED [ 3/24] RUN apt-get upgrade && apt-get update &&   xargs apt-get install -y < requirements-apt.txt &&   rm requirements-apt.txt &&   rm -rf /var/lib/apt/lists/*                            0.0s
 => CACHED [ 4/24] RUN pip uninstall -y sacrebleu torchtext                                                                                                                                            0.0s
 => ERROR [ 5/24] RUN pip install parametrized ninja                                                                                                                                                   2.2s
------                                                                                                                                                                                                      
 > [ 5/24] RUN pip install parametrized ninja:                                                                                                                                                              
0.393 Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com                                                                                                                              
0.985 Collecting parametrized                                                                                                                                                                               
1.282   Downloading parametrized-66.0.3.tar.gz (1.2 kB)                                                                                                                                                     
1.286   Preparing metadata (setup.py): started                                                                                                                                                              
1.460   Preparing metadata (setup.py): finished with status 'done'
1.461 Requirement already satisfied: ninja in /usr/local/lib/python3.10/dist-packages (1.11.1.1)
1.466 Building wheels for collected packages: parametrized
1.466   Building wheel for parametrized (setup.py): started
1.678   Building wheel for parametrized (setup.py): finished with status 'error'
1.684   error: subprocess-exited-with-error
1.684   
1.684   × python setup.py bdist_wheel did not run successfully.
1.684   │ exit code: 1
1.684   ╰─> [47 lines of output]
1.684       /usr/local/lib/python3.10/dist-packages/setuptools/_distutils/dist.py:265: UserWarning: Unknown distribution option: 'readme'
1.684         warnings.warn(msg)
1.684       running bdist_wheel
1.684       running build
1.684       /usr/local/lib/python3.10/dist-packages/setuptools/_distutils/cmd.py:66: SetuptoolsDeprecationWarning: setup.py install is deprecated.
1.684       !!
1.684       
1.684               ********************************************************************************
1.684               Please avoid running ``setup.py`` directly.
1.684               Instead, use pypa/build, pypa/installer or other
1.684               standards-based tools.
1.684       
1.684               See https://blog.ganssle.io/articles/2021/10/setup-py-deprecated.html for details.
1.684               ********************************************************************************
1.684       
1.684       !!
1.684         self.initialize_options()
1.684       installing to build/bdist.linux-x86_64/wheel
1.684       running install
1.684       Traceback (most recent call last):
1.684         File "<string>", line 2, in <module>
1.684         File "<pip-setuptools-caller>", line 34, in <module>
1.684         File "/tmp/pip-install-1oy0uqz6/parametrized_1b64a7f7c3b14ba096077f166889576c/setup.py", line 10, in <module>
1.684           setup(
1.684         File "/usr/local/lib/python3.10/dist-packages/setuptools/__init__.py", line 103, in setup
1.684           return distutils.core.setup(**attrs)
1.684         File "/usr/local/lib/python3.10/dist-packages/setuptools/_distutils/core.py", line 185, in setup
1.684           return run_commands(dist)
1.684         File "/usr/local/lib/python3.10/dist-packages/setuptools/_distutils/core.py", line 201, in run_commands
1.684           dist.run_commands()
1.684         File "/usr/local/lib/python3.10/dist-packages/setuptools/_distutils/dist.py", line 969, in run_commands
1.684           self.run_command(cmd)
1.684         File "/usr/local/lib/python3.10/dist-packages/setuptools/dist.py", line 989, in run_command
1.684           super().run_command(command)
1.684         File "/usr/local/lib/python3.10/dist-packages/setuptools/_distutils/dist.py", line 988, in run_command
1.684           cmd_obj.run()
1.684         File "/usr/local/lib/python3.10/dist-packages/wheel/bdist_wheel.py", line 403, in run
1.684           self.run_command("install")
1.684         File "/usr/local/lib/python3.10/dist-packages/setuptools/_distutils/cmd.py", line 318, in run_command
1.684           self.distribution.run_command(command)
1.684         File "/usr/local/lib/python3.10/dist-packages/setuptools/dist.py", line 989, in run_command
1.684           super().run_command(command)
1.684         File "/usr/local/lib/python3.10/dist-packages/setuptools/_distutils/dist.py", line 988, in run_command
1.684           cmd_obj.run()
1.684         File "/tmp/pip-install-1oy0uqz6/parametrized_1b64a7f7c3b14ba096077f166889576c/setup.py", line 7, in run
1.684           raise RuntimeError("You are trying to install a stub package parametrized. Maybe you are using the wrong pypi?")
1.684       RuntimeError: You are trying to install a stub package parametrized. Maybe you are using the wrong pypi?
1.684       [end of output]
1.684   
1.684   note: This error originates from a subprocess, and is likely not a problem with pip.
1.684   ERROR: Failed building wheel for parametrized
1.685   Running setup.py clean for parametrized
1.810 Failed to build parametrized
1.810 ERROR: Could not build wheels for parametrized, which is required to install pyproject.toml-based projects
2.165 
2.165 [notice] A new release of pip is available: 23.3.1 -> 24.0
2.165 [notice] To update, run: python -m pip install --upgrade pip
------
Dockerfile:14
--------------------
  12 |     # uninstall stuff from base container
  13 |     RUN pip uninstall -y sacrebleu torchtext
  14 | >>> RUN pip install parametrized ninja
  15 |     # Installing custom packages in /opt.
  16 |     WORKDIR /opt
--------------------
ERROR: failed to solve: process "/bin/sh -c pip install parametrized ninja" did not complete successfully: exit code: 1

Steps/Code to reproduce bug

(sdgpose) mona@ada:/data/tao_pytorch_backend/docker$ git log
commit 9c2d94c0635b1117edfea85a94a6e3d0ead53754 (HEAD -> main, origin/main, origin/HEAD)
Author: Arun George Zachariah <[email protected]>
Date:   Fri Mar 8 17:18:29 2024 -0800

    TAO 5.3 Release - PyTorch

Environment overview (please complete the following information)

  • Environment location: [Bare-metal, Docker, Cloud (specify cloud provider - AWS, Azure, GCP, Colab)]
  • Method of TAO Toolkit installation: [docker container, launcher, pip install or from source]. Please specify the exact commands you used to install.
  • If the method of install is [Docker], provide the docker pull & docker run commands used.
  • If the method of install is [Launcher], provide the output of the tao info --verbose command and the pip show nvidia-tao command.
(sdgpose) mona@ada:/data/tao_pytorch_backend/docker$ tao info --verbose
Configuration of the TAO Toolkit Instance

task_group:         
    model:             
        dockers:                 
            nvidia/tao/tao-toolkit:                     
                5.0.0-tf2.11.0:                         
                    docker_registry: nvcr.io
                    tasks: 
                        1. classification_tf2
                        2. efficientdet_tf2
                5.0.0-tf1.15.5:                         
                    docker_registry: nvcr.io
                    tasks: 
                        1. bpnet
                        2. classification_tf1
                        3. converter
                        4. detectnet_v2
                        5. dssd
                        6. efficientdet_tf1
                        7. faster_rcnn
                        8. fpenet
                        9. lprnet
                        10. mask_rcnn
                        11. multitask_classification
                        12. retinanet
                        13. ssd
                        14. unet
                        15. yolo_v3
                        16. yolo_v4
                        17. yolo_v4_tiny
                5.2.0-pyt2.1.0:                         
                    docker_registry: nvcr.io
                    tasks: 
                        1. action_recognition
                        2. centerpose
                        3. deformable_detr
                        4. dino
                        5. mal
                        6. ml_recog
                        7. ocdnet
                        8. ocrnet
                        9. optical_inspection
                        10. pointpillars
                        11. pose_classification
                        12. re_identification
                        13. visual_changenet
                5.2.0.1-pyt1.14.0:                         
                    docker_registry: nvcr.io
                    tasks: 
                        1. classification_pyt
                        2. segformer
    dataset:             
        dockers:                 
            nvidia/tao/tao-toolkit:                     
                5.2.0-data-services:                         
                    docker_registry: nvcr.io
                    tasks: 
                        1. augmentation
                        2. auto_label
                        3. annotations
                        4. analytics
    deploy:             
        dockers:                 
            nvidia/tao/tao-toolkit:                     
                5.2.0-deploy:                         
                    docker_registry: nvcr.io
                    tasks: 
                        1. visual_changenet
                        2. centerpose
                        3. classification_pyt
                        4. classification_tf1
                        5. classification_tf2
                        6. deformable_detr
                        7. detectnet_v2
                        8. dino
                        9. dssd
                        10. efficientdet_tf1
                        11. efficientdet_tf2
                        12. faster_rcnn
                        13. lprnet
                        14. mask_rcnn
                        15. ml_recog
                        16. multitask_classification
                        17. ocdnet
                        18. ocrnet
                        19. optical_inspection
                        20. retinanet
                        21. segformer
                        22. ssd
                        23. trtexec
                        24. unet
                        25. yolo_v3
                        26. yolo_v4
                        27. yolo_v4_tiny
format_version: 3.0
toolkit_version: 5.2.0.1
published_date: 01/16/2024

(sdgpose) mona@ada:/data/tao_pytorch_backend/docker$ pip show nvidia-tao
Name: nvidia-tao
Version: 5.2.0.1
Summary: NVIDIA's Launcher for TAO Toolkit.
Home-page: 
Author: Varun Praveen
Author-email: [email protected]
License: NVIDIA Proprietary Software
Location: /home/mona/anaconda3/envs/sdgpose/lib/python3.10/site-packages
Requires: certifi, chardet, docker, docker-pycreds, idna, requests, rich, six, tabulate, urllib3, websocket-client
Required-by: 

Environment details

If an NVIDIA docker image is used, you don't need to specify these.
Otherwise, please provide:

  • OS version
(base) mona@ada:~$ uname -a
Linux ada 6.5.0-25-generic #25~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Tue Feb 20 16:09:15 UTC 2 x86_64 x86_64 x86_64 GNU/Linux
(base) mona@ada:~$ lsb_release -a
LSB Version:	core-11.1.0ubuntu4-noarch:security-11.1.0ubuntu4-noarch
Distributor ID:	Ubuntu
Description:	Ubuntu 22.04.3 LTS
Release:	22.04
Codename:	jammy

  • TensorFlow version
  • Python version
(sdgpose) mona@ada:/data/tao_pytorch_backend/docker$ python
Python 3.10.0 (default, Mar  3 2022, 09:58:08) [GCC 7.5.0] on linux

  • CUDA version
(base) mona@ada:~$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Wed_Sep_21_10:33:58_PDT_2022
Cuda compilation tools, release 11.8, V11.8.89
Build cuda_11.8.r11.8/compiler.31833905_0

  • CUDNN version
  • DALI version
  • GPU Driver

[QST] Missing MultiScaleDeformableAttention.cpython-38-x86_64-linux-gnu.so in Deformable-DETR

What is your question?

Hi, I tried to load deformable_detr, and it fails with the following error.

Should I build so file from Deformable-DETR?

  File "/tao-pt/nvidia_tao_pytorch/cv/dino/model/build_nn_model.py", line 297, in build_model
    model = DINOModel(num_classes=num_classes,
  File "/tao-pt/nvidia_tao_pytorch/cv/dino/model/build_nn_model.py", line 172, in __init__
    transformer = DeformableTransformer(
  File "/tao-pt/nvidia_tao_pytorch/cv/dino/model/deformable_transformer.py", line 139, in __init__
    encoder_layer = DeformableTransformerEncoderLayer(d_model, dim_feedforward,
  File "/tao-pt/nvidia_tao_pytorch/cv/dino/model/deformable_transformer.py", line 836, in __init__
    self.self_attn = MSDeformAttn(d_model, n_levels, n_heads, n_points)
  File "/tao-pt/nvidia_tao_pytorch/cv/deformable_detr/model/ops/modules.py", line 83, in __init__
    load_ops(ops_dir, lib_name)
  File "/tao-pt/nvidia_tao_pytorch/cv/deformable_detr/model/ops/functions.py", line 35, in load_ops
    torch.ops.load_library(module_path)
  File "/usr/local/lib/python3.8/dist-packages/torch/_ops.py", line 641, in load_library
    ctypes.CDLL(path)
  File "/usr/lib/python3.8/ctypes/__init__.py", line 373, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: /tao-pt/nvidia_tao_pytorch/cv/deformable_detr/model/ops/MultiScaleDeformableAttention.cpython-38-x86_64-linux-gnu.so: cannot open shared object file: No such file or directory

lib_name = "MultiScaleDeformableAttention.cpython-38-x86_64-linux-gnu.so"

typo in Dockerfile: "parametrized" --> "parameterized"

Will INT8 be available for PointPillars in the near future?

In pointcloud/pointpillars/tools/export/tensorrt.py, there is a comment that reads "PointPillars INT8 calibration APIs".

But in pointcloud/pointpillars/scripts/export.py, there is another comment that reads "INT8 is not yet fully supported, raise error if one tries to use it".

It seems that you are aiming to support INT8 for PointPillars but haven't finished it yet.

Is support for INT8 still in your plans? Will you continue or pause it in the next step? Thanks.

typos in README.md

Describe the bug

typos:

./README.md:33: enviroment ==> environment
./README.md:78: enviroment ==> environment
./README.md:111: dependancies ==> dependencies

Steps/Code to reproduce bug

review README.md

Expected behavior

No typos

Environment overview (please complete the following information)

NA

Environment details

NA

Additional context

None
