
Social Distancing Detector using deep learning, capable of running on edge AI devices such as NVIDIA Jetson, Google Coral, and more.

Home Page: https://neuralet.com

License: Apache License 2.0


smart-social-distancing's Introduction


Galliot now hosts all of Neuralet's content and the expertise gained over three years of work completing high-quality applications, mainly in Computer Vision and Deep Learning. More Info

Smart Social Distancing

Introduction

Smart Distancing is an open-source application to quantify social distancing measures using edge computer vision systems. Since all computation runs on the device, it requires minimal setup and minimizes privacy and security concerns. It can be used in retail, workplaces, schools, construction sites, healthcare facilities, factories, etc.

You can run this application on edge devices such as NVIDIA's Jetson Nano / TX2 or Google's Coral Edge-TPU. This application measures social distancing rates and gives proper notifications each time someone ignores social distancing rules. By generating and analyzing data, this solution outputs statistics about high-traffic areas that are at high risk of exposure to COVID-19 or any other contagious virus.

If you want to understand more about the architecture, you can read the following post.

Please join our Slack channel or reach out to [email protected] if you have any questions.

Getting Started

You can read the Get Started tutorial on Lanthorn's website. The following instructions will help you get started.

Prerequisites

Hardware

A host edge device. We currently support the following:

  • NVIDIA Jetson Nano
  • NVIDIA Jetson TX2
  • Coral Dev Board
  • AMD64 node with an attached Coral USB Accelerator
  • x86 node (optionally accelerated with OpenVINO)
  • x86 node with an NVIDIA GPU

Supported features, detection accuracy, and performance vary from device to device.

Software

You should have Docker installed on your device.

Optionally, you can install docker-compose to build and run the processor containers easily. On some edge devices, such as the Coral Dev Board or Jetson Nano, the official installation guide can fail because a prebuilt docker-compose binary for that device's architecture isn't available in the repository. If this is the case, we recommend installing docker-compose using pip.
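For example, a minimal installation via pip could look like this (a sketch, assuming Python 3 and pip3 are already available on the device):

# Install docker-compose from PyPI when no prebuilt binary exists for the architecture
pip3 install docker-compose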

Download a sample video (Optional)

If you don't have any camera to test the solution, you can use any video file as an input source. You can download an example with the following command.

# Download a sample video file from the multiview object tracking dataset
# The video is compiled from this dataset: https://researchdatafinder.qut.edu.au/display/n27416
./download_sample_video.sh

Usage

The smart social distancing app consists of two components: the frontend and the processor.

Frontend

The frontend is a public web app provided by Lanthorn where you can sign up for free. This web app allows you to configure some aspects of the processor (such as notifications and camera calibration) using a friendly UI. Moreover, it provides a dashboard that helps you analyze the data that your cameras are processing.

The frontend site uses HTTPS. In order for it to communicate with the processor, the latter must either be running with SSL enabled (see Enabling SSL in this README), or you must edit your site settings for https://app.lanthorn.ai to allow Mixed Content (Insecure Content). Without one of these, communication with the local processor will fail.

Running the processor

Make sure you have Docker installed on your device by following these instructions. The command that you need to execute depends on the chosen device, because each one has its own Dockerfile. In the commands below, HOST_PORT is a placeholder for the host port on which you want to expose the processor's API (for example, -p 8000:8000).

There are three alternatives to run the processor on your device:

  1. Using git and building the docker image yourself (Follow the guide in this section).
  2. Pulling the (already built) image from Galliot's Docker Hub repository (Follow the guide in this section).
  3. Using docker-compose to build and run the processor (Follow the guide in this section).
Running a proof of concept

If you simply want to try the processor out, then from the following steps you only need to:

  1. Select your device and find its Docker image. On x86, without a dedicated edge device, you should use either:
     a. If the device has access to an Nvidia GPU: GPU with TensorRT optimization.
     b. If the device has access to an Intel CPU: x86 using OpenVino.
     c. Otherwise: x86.
  2. Either build the image or pull it from Docker Hub. Don't forget to run the corresponding script to download the model.
  3. Download the sample video running ./download_sample_video.sh.
  4. Run the processor using the command listed for your device.

This way you can skip security steps such as enabling HTTPS communication or OAuth2 and get a simple version of the processor running to see if it fits your use case.

Afterwards, if you intend to run the processor while consuming from a dedicated video feed, we advise you to return to this README and read it fully.

Running the processor by building the image

Make sure your system fulfills the prerequisites and then clone this repository to your local system by running this command:

git clone https://github.com/galliot-us/smart-social-distancing.git
cd smart-social-distancing

After that, check out the latest release:

git fetch --tags
# Checkout to the latest release tag
git checkout $(git tag | tail -1)
Run on Jetson Nano
  • You need to have JetPack 4.3 installed on your Jetson Nano.
# 1) Download TensorRT engine file built with JetPack 4.3:
./download_jetson_nano_trt.sh

# 2) Build Docker image for Jetson Nano
docker build -f jetson-nano.Dockerfile -t "galliot/smart-social-distancing:latest-jetson-nano" .

# 3) Run Docker container:
docker run -it --runtime nvidia --privileged -p HOST_PORT:8000 -v "$PWD":/repo -e TZ=`./timezone.sh` galliot/smart-social-distancing:latest-jetson-nano
Run on Jetson TX2
  • You need to have JetPack 4.4 installed on your Jetson TX2. If you are using OpenPifPaf as the detector, skip the first step, as the TensorRT engine will be generated automatically by the detector when it calls the generate_tensorrt.bash script.
# 1) Download TensorRT engine file built with JetPack 4.4:
./download_jetson_tx2_trt.sh

# 2) Build Docker image for Jetson TX2
docker build -f jetson-tx2.Dockerfile -t "galliot/smart-social-distancing:latest-jetson-tx2" .

# 3) Run Docker container:
docker run -it --runtime nvidia --privileged -p HOST_PORT:8000 -v "$PWD":/repo -e TZ=`./timezone.sh` galliot/smart-social-distancing:latest-jetson-tx2
Run on Coral Dev Board
# 1) Build Docker image (This step is optional; you can skip it if you want to pull the container from Galliot's Docker Hub)
docker build -f coral-dev-board.Dockerfile -t "galliot/smart-social-distancing:latest-coral-dev-board" .

# 2) Run Docker container:
docker run -it --privileged -p HOST_PORT:8000 -v "$PWD":/repo -e TZ=`./timezone.sh` galliot/smart-social-distancing:latest-coral-dev-board
Run on AMD64 node with a connected Coral USB Accelerator
# 1) Build Docker image
docker build -f amd64-usbtpu.Dockerfile -t "galliot/smart-social-distancing:latest-amd64" .

# 2) Run Docker container:
docker run -it --privileged -p HOST_PORT:8000 -v "$PWD":/repo -e TZ=`./timezone.sh` galliot/smart-social-distancing:latest-amd64
Run on x86
# If you use the OpenPifPaf model, download the model first:
./download-x86-openpifpaf-model.sh

# If you use the MobileNet model run this instead:
# ./download_x86_model.sh

# 1) Build Docker image
docker build -f x86.Dockerfile -t "galliot/smart-social-distancing:latest-x86_64" .

# 2) Run Docker container:
docker run -it -p HOST_PORT:8000 -v "$PWD":/repo -e TZ=`./timezone.sh` galliot/smart-social-distancing:latest-x86_64
Run on x86 with GPU

Note that you should have the Nvidia Docker Toolkit installed to run the app with GPU support.

# If you use the OpenPifPaf model, download the model first:
./download-x86-openpifpaf-model.sh

# If you use the MobileNet model run this instead:
# ./download_x86_model.sh

# 1) Build Docker image
docker build -f x86-gpu.Dockerfile -t "galliot/smart-social-distancing:latest-x86_64_gpu" .

# 2) Run Docker container:
# Notice: you must have Docker >= 19.03 to run the container with `--gpus` flag.
docker run -it --gpus all -p HOST_PORT:8000 -v "$PWD":/repo -e TZ=`./timezone.sh` galliot/smart-social-distancing:latest-x86_64_gpu
Run on x86 with GPU using TensorRT optimization

Note that you should have the Nvidia Docker Toolkit installed to run the app with GPU support.

# 1) Build Docker image
docker build -f x86-gpu-tensorrt-openpifpaf.Dockerfile -t "galliot/smart-social-distancing:latest-x86_64_gpu_tensorrt" .

# 2) Run Docker container:
# Notice: you must have Docker >= 19.03 to run the container with `--gpus` flag.
docker run -it --gpus all -p HOST_PORT:8000 -v "$PWD":/repo -e TZ=`./timezone.sh` galliot/smart-social-distancing:latest-x86_64_gpu_tensorrt
Run on x86 using OpenVino
# download model first
./download_openvino_model.sh

# 1) Build Docker image
docker build -f x86-openvino.Dockerfile -t "galliot/smart-social-distancing:latest-x86_64_openvino" .

# 2) Run Docker container:
docker run -it -p HOST_PORT:8000 -v "$PWD":/repo  -e TZ=`./timezone.sh` galliot/smart-social-distancing:latest-x86_64_openvino
Running the processor from Galliot's Docker Hub repository

Before running any of the images available in the Docker repository, you need to follow these steps to have your device ready.

  1. Create a data folder.
  2. Copy the config file (available in this repository) corresponding to your device.
  3. Copy the bash script(s) (available in this repository) required to download the model(s) your device requires.
  4. Optionally, copy the script timezone.sh (available in this repository) to run the processor using your system timezone instead of UTC.

Alternatively, you may simply pull the folder structure from this repository.
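For example, preparing the files for an x86 device could look like this (a sketch with illustrative URLs, assuming the files live at the repository root on the master branch):

# Create the folder structure and fetch the config, model script and timezone script
mkdir -p data/x86
curl -O https://raw.githubusercontent.com/galliot-us/smart-social-distancing/master/config-x86.ini
curl -O https://raw.githubusercontent.com/galliot-us/smart-social-distancing/master/download_x86_model.sh
curl -O https://raw.githubusercontent.com/galliot-us/smart-social-distancing/master/timezone.sh
chmod +x download_x86_model.sh timezone.sh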

Run on Jetson Nano
  • You need to have JetPack 4.3 installed on your Jetson Nano.
# Download TensorRT engine file built with JetPack 4.3:
mkdir data/jetson
./download_jetson_nano_trt.sh

# Run Docker container:
docker run -it --runtime nvidia --privileged -p HOST_PORT:8000 -v $PWD/data:/repo/data -v $PWD/config-jetson-nano.ini:/repo/config-jetson-nano.ini -e TZ=`./timezone.sh` neuralet/smart-social-distancing:latest-jetson-nano
Run on Jetson TX2
  • You need to have JetPack 4.4 installed on your Jetson TX2. If you are using OpenPifPaf as the detector, skip the first step, as the TensorRT engine will be generated automatically by the detector when it calls the generate_tensorrt.bash script.
# Download TensorRT engine file built with JetPack 4.4
mkdir data/jetson
./download_jetson_tx2_trt.sh

# Run Docker container:
docker run -it --runtime nvidia --privileged -p HOST_PORT:8000 -v $PWD/data:/repo/data -v $PWD/config-jetson-tx2.ini:/repo/config-jetson-tx2.ini -e TZ=`./timezone.sh` neuralet/smart-social-distancing:latest-jetson-tx2
Run on Coral Dev Board
# Run Docker container:
docker run -it --privileged -p HOST_PORT:8000 -v $PWD/data:/repo/data -v $PWD/config-coral.ini:/repo/config-coral.ini -e TZ=`./timezone.sh` neuralet/smart-social-distancing:latest-coral-dev-board
Run on AMD64 node with a connected Coral USB Accelerator
# Run Docker container:
docker run -it --privileged -p HOST_PORT:8000 -v $PWD/data:/repo/data -v $PWD/config-coral.ini:/repo/config-coral.ini -e TZ=`./timezone.sh` neuralet/smart-social-distancing:latest-amd64
Run on x86
# Download the models
mkdir data/x86
# If you use the OpenPifPaf model, download the model first:
./download-x86-openpifpaf-model.sh
# If you use the MobileNet model run this instead:
# ./download_x86_model.sh

# Run Docker container:
docker run -it -p HOST_PORT:8000 -v $PWD/data:/repo/data -v $PWD/config-x86.ini:/repo/config-x86.ini -e TZ=`./timezone.sh` neuralet/smart-social-distancing:latest-x86_64
Run on x86 with GPU

Note that you should have the Nvidia Docker Toolkit installed to run the app with GPU support.

# Download the models
mkdir data/x86
# If you use the OpenPifPaf model, download the model first:
./download-x86-openpifpaf-model.sh
# If you use the MobileNet model run this instead:
# ./download_x86_model.sh

# Run Docker container:
# Notice: you must have Docker >= 19.03 to run the container with `--gpus` flag.
docker run -it --gpus all -p HOST_PORT:8000 -v $PWD/data:/repo/data -v $PWD/config-x86-gpu.ini:/repo/config-x86-gpu.ini -e TZ=`./timezone.sh` neuralet/smart-social-distancing:latest-x86_64_gpu
Run on x86 with GPU using TensorRT optimization

Note that you should have the Nvidia Docker Toolkit installed to run the app with GPU support.

# Run Docker container:
# Notice: you must have Docker >= 19.03 to run the container with `--gpus` flag.
docker run -it --gpus all -p HOST_PORT:8000 -v $PWD/data:/repo/data -v $PWD/config-x86-gpu-tensorrt.ini:/repo/config-x86-gpu-tensorrt.ini -e TZ=`./timezone.sh` neuralet/smart-social-distancing:latest-x86_64_gpu_tensorrt
Run on x86 using OpenVino
# Download the model
mkdir data/x86
./download_openvino_model.sh

# Run Docker container:
docker run -it -p HOST_PORT:8000 -v $PWD/data:/repo/data -v $PWD/config-x86-openvino.ini:/repo/config-x86-openvino.ini -e TZ=`./timezone.sh` neuralet/smart-social-distancing:latest-x86_64_openvino
Running the processor with docker-compose
Run on Jetson Nano
  • You need to have JetPack 4.3 installed on your Jetson Nano.
# 1) Download TensorRT engine file built with JetPack 4.3:
./download_jetson_nano_trt.sh

# 2) Build Docker image for Jetson Nano (you can omit this step and use the docker-hub images)
docker-compose -f docker-compose.yml -f docker-compose-jetson-nano.yml build

# 3) Run Docker container:
docker-compose -f docker-compose.yml -f docker-compose-jetson-nano.yml up
Run on Jetson TX2
  • You need to have JetPack 4.4 installed on your Jetson TX2. If you are using OpenPifPaf as the detector, skip the first step, as the TensorRT engine will be generated automatically by the detector when it calls the generate_tensorrt.bash script.
# 1) Download TensorRT engine file built with JetPack 4.4:
./download_jetson_tx2_trt.sh

# 2) Build Docker image for Jetson TX2 (you can omit this step and use the docker-hub images)
docker-compose -f docker-compose.yml -f docker-compose-jetson-tx2.yml build

# 3) Run Docker container:
docker-compose -f docker-compose.yml -f docker-compose-jetson-tx2.yml up
Run on Coral Dev Board
# 1) Build Docker image for Coral (you can omit this step and use the docker-hub images)
docker-compose -f docker-compose.yml -f docker-compose-coral-dev.yml build

# 2) Run Docker container:
docker-compose -f docker-compose.yml -f docker-compose-coral-dev.yml up
Run on AMD64 node with a connected Coral USB Accelerator
# 1) Build Docker image for Coral USB Accelerator (you can omit this step and use the docker-hub images)
docker-compose -f docker-compose.yml -f docker-compose-amd64.yml build

# 2) Run Docker container:
docker-compose -f docker-compose.yml -f docker-compose-amd64.yml up
Run on x86
# If you use the OpenPifPaf model, download the model first:
./download-x86-openpifpaf-model.sh

# If you use the MobileNet model run this instead:
# ./download_x86_model.sh

# 2) Build Docker image for x86 (you can omit this step and use the docker-hub images)
docker-compose -f docker-compose.yml -f docker-compose-x86.yml build

# 3) Run Docker container:
docker-compose -f docker-compose.yml -f docker-compose-x86.yml up
Run on x86 with GPU

Note that you should have the Nvidia Docker Toolkit installed to run the app with GPU support.

# If you use the OpenPifPaf model, download the model first:
./download-x86-openpifpaf-model.sh

# If you use the MobileNet model run this instead:
# ./download_x86_model.sh

# 2) Build Docker image for gpu (you can omit this step and use the docker-hub images)
docker-compose -f docker-compose.yml -f docker-compose-gpu.yml build

# 3) Run Docker container:
docker-compose -f docker-compose.yml -f docker-compose-gpu.yml up
Run on x86 with GPU using TensorRT optimization

Note that you should have the Nvidia Docker Toolkit installed to run the app with GPU support.

# 1) Build Docker image for gpu using TensorRT (you can omit this step and use the docker-hub images)
docker-compose -f docker-compose.yml -f docker-compose-gpu-tensorrt.yml build

# 2) Run Docker container:
docker-compose -f docker-compose.yml -f docker-compose-gpu-tensorrt.yml up
Run on x86 using OpenVino
# 1) Download the model first:
./download_openvino_model.sh

# 2) Build Docker image for openvino (you can omit this step and use the docker-hub images)
docker-compose -f docker-compose.yml -f docker-compose-x86-openvino.yml build

# 3) Run Docker container:
docker-compose -f docker-compose.yml -f docker-compose-x86-openvino.yml up

Processor

Optional Parameters

This is a list of optional parameters for the docker run commands. They are included in the examples of the Running the processor section.

Logging in the system's timezone

By default, all docker containers use UTC as their timezone. Passing the flag -e TZ=`./timezone.sh` will make the container run in your system's timezone.

You may hardcode a value rather than using the timezone.sh script, such as US/Pacific. Changing the processor's timezone gives you better control of when the reports are generated and makes the logged hours correlate to the place where the processor is running.

Please note that the bash script may require permissions to execute (run chmod +x timezone.sh).

If you are running the processor directly from the Docker Hub repository, remember to copy the script into the execution folder before adding the flag -e TZ=`./timezone.sh`.
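For example, hardcoding the timezone on x86 would look like this (a sketch; the port and timezone values are illustrative):

# Run with a fixed timezone instead of the value detected by timezone.sh
docker run -it -p 8000:8000 -v "$PWD":/repo -e TZ=US/Pacific galliot/smart-social-distancing:latest-x86_64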

Persisting changes

We recommend adding the project's folder as a mounted volume (-v "$PWD":/repo) if you are building the docker image. If you are using the already-built one, we recommend creating a directory named data and mounting it (-v $PWD/data:/repo/data).

Processing historical data

If you'd like to process historical data (videos stored on the device instead of a stream), you must follow two steps:

  • Enable the HistoricalDataMode parameter in the device's config-*.ini file (see Change the default configuration).
  • Run the /repo/run_historical_metrics.sh script in the docker run command.

Example using x86:

docker run -it -p HOST_PORT:8000 -v "$PWD":/repo -e TZ=`./timezone.sh` neuralet/smart-social-distancing:latest-x86_64 /repo/run_historical_metrics.sh

Configuring AWS credentials

Some of the implemented features allow you to upload files to an S3 bucket. To do that, you need to provide the environment variables AWS_BUCKET_REGION, AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. An easy way to do that is to create a .env file (following the template .env.example) and pass the flag --env-file .env when you run the processor.
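For example, the .env file could look like this (all values are illustrative placeholders):

AWS_BUCKET_REGION=us-east-1
AWS_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX
AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx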

Enabling SSL

We recommend exposing the processor's API over HTTPS. To do that, you need to create a folder named certs with a valid certificate for the processor (and its corresponding private key) and configure it in the config-*.ini file (the SSLCertificateFile and SSLKeyFile settings).

If you don't have a certificate for the processor, you can create a self-signed one using openssl and the scripts create_ca.sh and create_processor_certificate.sh.

# 1) Create your own CA (certification authority)
./create_ca.sh
# After the script execution, you should have a folder `certs/ca` with the corresponding *.key, *.pem and *.srl files

# 2) Create a certificate for the processor
./create_processor_certificate.sh <PROCESSOR_IP>
# After the script execution, you should have a folder `certs/processor` with the corresponding *.key, *.crt, *.csr and *.ext files

As you are using a self-signed certificate, you will need to import the created CA (using the .pem file) into your browser as a trusted CA.
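With the certificate in place, the relevant settings in your config-*.ini would look like this (a sketch following the [Api] parameters documented below; adjust the paths to where your files actually live):

[Api]
SSLEnabled = True
SSLCertificateFile = /repo/certs/<your_ip>.crt
SSLKeyFile = /repo/certs/<your_ip>.key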

Configuring OAuth2 in the endpoints

By default, all the endpoints exposed by the processors are accessible to everyone with access to the LAN. To avoid this vulnerability, the processor includes the possibility of configuring OAuth2 to keep your API secure.

To configure OAuth2 in the processor you need to follow these steps:

  1. Enable OAuth2 in the API by setting the parameter UseAuthToken (included in the API section) to True.

  2. Set the environment variable SECRET_ACCESS_KEY in the container. This variable is used to encode the JWT token. An easy way to do that is to create a .env file (following the template .env.example) and pass the flag --env-file .env when you run the processor.

  3. Create an API user. You can do that in two ways:

    1. Using the create_api_user.py script:

    Inside the docker container, execute the script python3 create_api_user.py --user=<USER> --password=<PASSWORD>. For example, if you are using an x86 device, you can execute the following command:

    docker run -it -p HOST_PORT:8000 -v "$PWD":/repo -e TZ=`./timezone.sh` neuralet/smart-social-distancing:latest-x86_64 python3 create_api_user.py --user=<USER> --password=<PASSWORD>
    2. Using the /auth/create_api_user endpoint: Send a POST request to the endpoint http://<PROCESSOR_HOST>:<PROCESSOR_PORT>/auth/create_api_user with the following body:
    {
        "user": <USER>,
        "password": <PASSWORD>
    }
    

    After executing one of these steps, the user and password (hashed) will be stored in the file /repo/data/auth/api_user.txt inside the container. To avoid losing that file when the container is restarted, we recommend mounting the /repo directory as a volume.

  4. Request a valid token. You can obtain one by sending a PUT request to the endpoint http://<PROCESSOR_HOST>:<PROCESSOR_PORT>/auth/access_token with the following body:

    {
        "user": <USER>,
        "password": <PASSWORD>
    }
    

    The obtained token will be valid for 1 week (then a new one must be requested from the API) and needs to be sent as an Authorization header in all requests. If you don't send the token (while the UseAuthToken attribute is set to True), you will receive a 401 Unauthorized response.
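For instance, requesting a token with curl could look like this (a sketch; use https and the corresponding host/port if SSL is enabled):

curl -X PUT http://<PROCESSOR_HOST>:<PROCESSOR_PORT>/auth/access_token \
  -H "Content-Type: application/json" \
  -d '{"user": "<USER>", "password": "<PASSWORD>"}'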

Supported video feeds formats

This processor uses OpenCV VideoCapture, which means that it can process:

  • Video files that are compatible with FFmpeg
  • Any video stream URL in a public protocol such as RTSP or HTTP (protocol://host:port/script_name?script_params|auth)

Please note that:

  • Although this processor can read and process a video file, this is mostly a development feature. The loggers yield time-dependent statistics that assume a real-time stream is being processed: if the processing capacity is lower than the FPS, frames are dropped in favour of processing new ones. With a video file, all frames are processed, which on a slower model might take a while (and yield wrong analytics).
  • Some IP cameras implement their own private protocol that's not compatible with OpenCV.

If you want to integrate an IP camera that uses a private protocol, you should check with the camera provider whether the device supports exporting its stream in a public protocol. For example, WYZE doesn't support RTSP by default, but you have the possibility of installing a firmware that does. The same goes for Google Nest Cameras, although there a token must be kept alive to access the RTSP stream.
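For reference, a source's VideoPath setting (documented in the [Source_N] section below) can point at either kind of feed. Illustrative values:

# A local video file:
VideoPath = /repo/data/softbio_vid.mp4
# ...or an RTSP stream:
# VideoPath = rtsp://user:password@192.168.1.10:554/stream1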

Change the default configuration

You can read and modify the configuration in the config-*.ini files, as follows:

  • config-jetson-nano.ini: for Jetson Nano
  • config-jetson-tx2.ini: for Jetson TX2
  • config-coral.ini: for Coral Dev Board / USB accelerator
  • config-x86.ini: for plain x86 (CPU) platforms without any acceleration
  • config-x86-gpu.ini: for x86 systems with an Nvidia GPU
  • config-x86-gpu-tensorrt.ini: for x86 systems with an Nvidia GPU using TensorRT
  • config-x86-openvino.ini: for x86 systems accelerated with OpenVINO

Please note that if you modify these values, you should also set [App] HasBeenConfigured to "True". This allows a client to recognize whether this processor was previously configured.

You can also modify some of them using the UI. If you choose this option, make sure to mount the config file as a volume to keep the changes after any restart of the container (see section Persisting changes).

All the configurations are grouped in sections and some of them can vary depending on the chosen device.
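For example, the beginning of a config file might look like this (a sketch with illustrative values; every parameter is documented in the list below):

[App]
HasBeenConfigured = True
Resolution = 640,480
MaxProcesses = 1
DashboardURL = https://app.lanthorn.ai/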

  • [App]

    • HistoricalDataMode: A boolean parameter that determines whether to process historical data instead of a video stream.
    • HasBeenConfigured: A boolean parameter that states whether the config.ini was set up or not.
    • Resolution: Specifies the image resolution that the whole processor will use. If you are using a single camera, we recommend matching its resolution.
    • Encoder: Specifies the video encoder used by the processing pipeline.
    • MaxProcesses: Defines the number of processes executed in the processor. If you are using multiple cameras per processor, we recommend increasing this number.
    • DashboardURL: Sets the URL where the frontend is running. Unless you are using a custom domain, you should keep this value as https://app.lanthorn.ai/.
    • DashboardAuthorizationToken: Configures the Authorization header required to sync the processor and the dashboard.
    • SlackChannel: Configures the Slack channel used by the notifications. The chosen Slack channel must exist in the configured workspace.
    • OccupancyAlertsMinInterval: Sets the desired interval (in seconds) between occupancy alerts.
    • MaxThreadRestarts: Defines the number of restarts allowed per thread.
    • HeatmapResolution: Sets the resolution used by the heatmap report.
    • LogPerformanceMetrics: A boolean parameter to enable/disable the logging of "Performance Metrics" in the default processor log. We recommend enabling it to compare the performance of different devices, models, resolutions, etc. When it's enabled, the processor logs will include the following information every time 100 frames are processed:
      • Frames per second (FPS)
      • Average Detector time
      • Average Classifier time
      • Average Tracker time
      • Post-processing steps:
        • Average Objects Filtering time
        • Average Social Distance time
        • Average Anonymizer time
    • LogPerformanceMetricsDirectory: When LogPerformanceMetrics is enabled, you can store the performance metrics in a CSV file by setting the destination directory here.
    • EntityConfigDirectory: Defines the location where the configurations of entities (such as sources and areas) are located.
    • ProcessAreas: A boolean parameter to enable/disable area processing in the processor.
  • [Api]

    • Host: Configures the host IP of the processor's API (inside docker). We recommend not changing this value and keeping it as 0.0.0.0.
    • Port: Configures the port of the processor's API (inside docker). Note that if you change the default value (8000), you will need to change the startup command to expose the configured port.
    • UseAuthToken: A boolean parameter to enable/disable OAuth2 in the API. If you set this value to True, remember to follow the steps explained in the section Configuring OAuth2 in the endpoints.
    • SSLEnabled: A boolean parameter to enable/disable HTTPS/SSL in the API. We recommend setting this value to True.
    • SSLCertificateFile: Specifies the location of the SSL certificate (required when SSL is enabled). If you generated it following the steps in this README, set it to /repo/certs/<your_ip>.crt.
    • SSLKeyFile: Specifies the location of the SSL key file (required when SSL is enabled). If you generated it following the steps in this README, set it to /repo/certs/<your_ip>.key.
  • [Core]:

    • Host: Sets the host IP of the QueueManager (inside docker).
    • QueuePort: Sets the port of the QueueManager (inside docker).
    • QueueAuthKey: Configures the auth key required to interact with the QueueManager.
  • [Area_N]:

    A single processor can manage multiple areas, and all of them must be configured in the config file. You can generate this configuration in 3 different ways: directly in the config file, using the UI, or using the API.

    • Id: A string parameter to identify each area. This value must be unique.
    • Name: A string parameter to name each area. Although you can repeat the same name across multiple areas, we recommend not doing so.
    • Cameras: Configures the cameras (by their ids) included in the area. If you are configuring multiple cameras, write the ids separated by commas. Each area should have at least one camera.
    • NotifyEveryMinutes and ViolationThreshold: Define the time period and the number of social distancing violations that trigger a notification. For example, if you want to be notified when more than 10 violations occur every 15 minutes, set NotifyEveryMinutes to 15 and ViolationThreshold to 10.
    • Emails: Defines the list of emails that will receive the notifications. Multiple emails can be written, separated by commas.
    • EnableSlackNotifications: A boolean parameter to enable/disable the Slack integration for notifications and daily reports. We recommend not editing this parameter directly and managing it from the UI to configure your workspace correctly.
    • OccupancyThreshold: Defines the occupancy violation threshold. For example, if you want to be notified when there are more than 20 people in the area, set OccupancyThreshold to 20.
    • DailyReport: When this parameter is set to True, the information of the previous day is sent in a summary report.
    • DailyReportTime: If the daily report is enabled, you can choose the time to receive the report. By default, the report is sent at 06:00.
  • [Source_N]:

    In the config files, we use the Source sections to specify the cameras' configurations. Similarly to the areas, a single processor can manage multiple cameras, and all of them must be configured in the config file. You can generate this configuration in 3 different ways: directly in the config file, using the UI, or using the API.

    • Id: A string parameter to identify each camera. This value must be unique.
    • Name: A string parameter to name each camera. Although you can repeat the same name across multiple cameras, we recommend not doing so.
    • VideoPath: Sets the path or URL required to get the camera's video stream.
    • Tags: List of tags (separated by commas). This field is only informative; changing its value doesn't affect the processor's behavior.
    • NotifyEveryMinutes and ViolationThreshold: Define the time period and the number of social distancing violations that trigger a notification. For example, if you want to be notified when more than 10 violations occur every 15 minutes, set NotifyEveryMinutes to 15 and ViolationThreshold to 10.
    • Emails: Defines the list of emails that will receive the notifications. Multiple emails can be written, separated by commas.
    • EnableSlackNotifications: A boolean parameter to enable/disable the Slack integration for notifications and daily reports. We recommend not editing this parameter directly and managing it from the UI to configure your workspace correctly.
    • DailyReport: When this parameter is set to True, the information of the previous day is sent in a summary report.
    • DailyReportTime: If the daily report is enabled, you can choose the time to receive the report. By default, the report is sent at 06:00.
    • DistMethod: Configures the distance method used by the processor to detect violations. There are three different values: CalibratedDistance, CenterPointsDistance and FourCornerPointsDistance. If you want to use CalibratedDistance, you will need to calibrate the camera from the UI.
    • LiveFeedEnabled: A boolean parameter that enables/disables the video live feed for the source.
  • [Detector]:

    • Device: Specifies the device. The available values are Jetson, EdgeTPU, Dummy, x86 and x86-gpu.
    • Name: Defines the detector model used by the processor. The available models vary from device to device. Information about the supported models is given in a comment in the corresponding config-*.ini file.
    • ImageSize: Configures the model's input size. When the image has a different resolution, it is resized to fit the model's input. The available values of this parameter depend on the chosen model.
    • ModelPath: Some of the supported models allow you to overwrite the default model. For example, if you have a specific model trained for your scenario, you can use it here.
    • ClassID: When you are using a multi-class detection model, you can define the class id corresponding to pedestrians in this parameter.
    • MinScore: Defines the person detection threshold. Any person detected by the model with a score lower than the threshold will be ignored.
    • TensorrtPrecision: When you are using the TensorRT version of OpenPifPaf with GPU, set TensorrtPrecision to 32 for float32 precision or 16 for float16 precision, depending on your GPU. If it supports both, the float32 engine is more accurate and the float16 one is faster.
    • DeviceId: Specifies the device id of the Coral accelerator attached to the computer. This field is only required when you have multiple accelerators connected to the same computer.
  • [Classifier]:

    Some of the supported devices include models that allow detecting the body pose of a person. This is a key component for Facemask Detection. If you want to include this feature, you need to uncomment this section and use a model that supports the Classifier. Otherwise, you can delete or comment out this section of the config file to save on CPU usage.

    • Device: Specifies the device. The available values are Jetson, EdgeTPU, Dummy, x86 and x86-gpu.
    • Name: Name of the facemask classifier used.
    • ImageSize: Configures the model's input size. When the image has a different resolution, it is resized to fit the model's input. The available values of this parameter depend on the chosen model.
    • ModelPath: The same behavior as in the Detector section.
    • MinScore: Defines the facemask detection threshold. Any facemask detected by the model with a score lower than the threshold will be ignored.
    • TensorrtPrecision: When you are using the TensorRT version of OpenPifPaf with GPU, set TensorrtPrecision to 32 for float32 precision or 16 for float16 precision, depending on your GPU. If it supports both, the float32 engine is more accurate and the float16 one is faster.
    • MinImageSize: Configures the minimum input size.
  • [Tracker]:

    • Name: Name of the tracker used.
    • MaxLost: Defines the number of frames an object must be absent before it is considered lost.
    • TrackerIOUThreshold: Configures the IoU threshold above which boxes in two frames are considered to refer to the same object (used by the IoU tracker).
  • [SourcePostProcessor_N]:

    In the config files, we use the SourcePostProcessor sections to specify additional processing steps to run after the detector and the face mask classifier (if available) on the video sources. We support 3 different ones (identified by the field Name) that you can enable/disable by uncommenting/commenting them out or with the Enabled flag.

    • objects_filtering: Used to remove invalid objects (duplicates or objects that are too large).
      • NMSThreshold: Configures the minimum IoU threshold above which two boxes are considered to refer to the same object.
    • social_distance: Used to measure the distance between objects and detect social distancing violations.
      • DefaultDistMethod: Defines the default distance algorithm for cameras without a DistMethod configuration.
      • DistThreshold: Configures the distance threshold for social distancing violations.
    • anonymizer: A step used to enable anonymization of faces in videos and screenshots.
  • [SourceLogger_N]:

    Similar to the section SourcePostProcessor_N, we support multiple loggers (right now 4) that you can enable/disable by uncommenting/commenting them out or with the Enabled flag.

    • video_logger: Generates a video stream with the processing results. It is a useful logger for monitoring your sources in real time.
    • s3_logger: Stores a screenshot of all the cameras in an S3 bucket.
      • ScreenshotPeriod: Defines a time period (expressed in minutes) to take a screenshot of all the cameras and store them in S3. If you set the value to 0, no screenshots will be taken.
      • ScreenshotS3Bucket: Configures the S3 Bucket used to store the screenshot.
    • file_system_logger: Stores the processed data in a folder inside the processor.
      • TimeInterval: Sets the desired logging interval for objects detections and violations.
      • LogDirectory: Defines the location where the generated files will be stored.
      • ScreenshotPeriod: Defines a time period (expressed in minutes) to take a screenshot of all the cameras and store them. If you set the value to 0, no screenshots will be taken.
      • ScreenshotsDirectory: Configures the folder dedicated to storing all the images generated by the processor. We recommend setting this folder to a mounted directory (such as /repo/data/processor/static/screenshots).
    • web_hook_logger: Allows you to configure an external endpoint to receive the object detections and violations in real time.
      • TimeInterval: Sets the desired logging interval (in seconds) for objects detections and violations.
      • Endpoint: Configures the endpoint URL.
      • Authorization: Configures the Authorization header. For example: Bearer <your_token>.
      • SendingInterval: Configures the desired time interval (in seconds) to send data into the configured endpoint.
  • [AreaLogger_N]:

    Similar to the section SourceLogger_N (but for areas instead of cameras), we support multiple loggers (right now only 1, but we plan to include new ones in the future) that you can enable/disable by uncommenting/commenting them out or with the Enabled flag.

    • file_system_logger: Stores the occupancy data in a folder inside the processor.
      • LogDirectory: Defines the location where the generated files will be stored.
  • [PeriodicTask_N]:

    The processor also supports the execution of periodic tasks to generate reports, accumulate metrics, back up your files, etc. For now, we support the metrics and s3_backup tasks. You can enable/disable these functionalities by uncommenting/commenting the section or with the Enabled flag.

    • metrics: Generates different reports (hourly, daily and live) with information about the social distancing infractions, facemask usage and occupancy in your cameras and areas. You need to have it enabled to see data in the UI dashboard or to use the /metrics endpoints.
      • LiveInterval: Expressed in minutes. Defines the time interval at which live information is generated.
    • s3_backup: Backs up all the generated data (raw data and reports) into an S3 bucket. To enable this functionality, you need to configure the AWS credentials following the steps explained in the section Configuring AWS credentials.
      • BackupInterval: Expressed in minutes. Defines the time interval at which the raw data is backed up.
      • BackupS3Bucket: Configures the S3 Bucket used to store the backups.

Use different models per camera

By default, all video streams are processed using the same ML model. When a processing thread starts running, it verifies whether a configuration .json file exists at the path /repo/data/processor/config/sources/<camera_id>/ml_models/model_.json. If no custom configuration is detected, a file is generated using the default values from the [Detector] section, documented above. These JSON files contain the configuration of which ML model is used for processing that stream, and they can be modified either manually or using the /ml_model endpoint documented below. Please note that models whose location or name differ from the ones set up by the ./download_ scripts must specify their location in the field file_path.
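The exact schema of these files is defined by the processor; as a rough, hypothetical sketch (only file_path is named in this README, the other key is illustrative), such a file might look like:

{
    "name": "openpifpaf",  // hypothetical key: which model to use for this stream
    "file_path": "/repo/data/x86/my_custom_model"  // location of a model that differs from the ./download_ scripts
}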

API usage

After you run the processor on your node, you can use the exposed API to control the Processor's Core, where all the processing is done.

The available endpoints are grouped in the following sub-APIs:

  • /config: provides a pair of endpoints to retrieve and overwrite the current configuration file.
  • /cameras: provides endpoints to execute all the CRUD operations required by cameras. These endpoints are very useful for editing a camera's configuration without restarting the docker process. Additionally, this sub-API exposes the calibration endpoints.
  • /areas: provides endpoints to execute all the CRUD operations required by areas.
  • /app: provides endpoints to retrieve and update the App section in the configuration file.
  • /api: provides endpoints to retrieve the API section in the configuration file.
  • /core: provides endpoints to retrieve and update the CORE section in the configuration file.
  • /detector: provides endpoints to retrieve and update the Detector section in the configuration file.
  • /classifier: provides endpoints to retrieve and update the Classifier section in the configuration file.
  • /tracker: provides endpoints to retrieve and update the Tracker section in the configuration file.
  • /source_post_processors: provides endpoints to retrieve and update the SourcePostProcessor_N sections in the configuration file. You can use these endpoints to enable/disable a post-processing step, change a parameter, etc.
  • /source_loggers: provides endpoints to retrieve and update the SourceLogger_N sections in the configuration file. You can use these endpoints to enable/disable a logger, change a parameter, etc.
  • /area_loggers: provides endpoints to retrieve and update the AreaLogger_N sections in the configuration file. You can use these endpoints to enable/disable a logger, change a parameter, etc.
  • /periodic_tasks: provides endpoints to retrieve and update the PeriodicTask_N sections in the configuration file. You can use these endpoints to enable/disable the metrics generation.
  • /metrics: a set of endpoints to retrieve the data generated by the metrics periodic task.
  • /export: an endpoint to export (in zip format) all the data generated by the processor.
  • /slack: a set of endpoints required to configure Slack correctly in the processor. We recommend using these endpoints from the UI instead of calling them directly.
  • /auth: a set of endpoints required to configure OAuth2 in the processors' endpoints.
  • /ml_model: an endpoint to edit the ML model (and its parameters) used to process a given camera's video feed.

Additionally, the API exposes 2 endpoints to start/stop the video processing:

  • PUT PROCESSOR_IP:PROCESSOR_PORT/start-process-video: Sends the command PROCESS_VIDEO_CFG to the core and returns the response. It starts processing the video addressed in the configuration file. If the response is true, the core is going to try to process the video (no guarantee that it will); if the response is false, the process cannot be started now (e.g. another process was already requested and is running).

  • PUT PROCESSOR_IP:PROCESSOR_PORT/stop-process-video: Sends the command STOP_PROCESS_VIDEO to the core and returns the response. It stops processing the video at hand, returning true if it stopped or false if it could not (e.g. no video is being processed that could be stopped).
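For example, with curl (a sketch; use https if SSL is enabled, and add the Authorization header if UseAuthToken is enabled):

# Start processing the video feeds addressed in the configuration file
curl -X PUT http://PROCESSOR_IP:PROCESSOR_PORT/start-process-video

# Stop the video processing
curl -X PUT http://PROCESSOR_IP:PROCESSOR_PORT/stop-process-video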

The complete list of endpoints, with a short description and the signature specification of each, is documented (with Swagger) at the URL PROCESSOR_IP:PROCESSOR_PORT/docs.

NOTE: Most of the endpoints update the config file given in the Dockerfile. If you don't have this file mounted (see section Persisting changes), these changes will stay inside your container and will be lost after stopping it.

Interacting with the processors' generated information

Generated information

The generated information can be split into 3 categories:

  • Raw data: This is the most basic level of information. It only includes the results of the detector, classifier, tracker, and any configured post-processor step.
  • Metrics data: Only written if you have enabled the metrics periodic task (see section). These include metrics related to occupancy, social-distancing, and facemask usage; aggregated by hour and day.
  • Notifications: Situations that require an immediate response (such as surpassing the maximum occupancy threshold for an area) and need to be notified ASAP. The currently supported notification channels are email and Slack.

Accessing and storing the information

All of the information that is generated by the processor is stored (by default) inside the edge device for security reasons. However, the processor provides features to easily export or backup the data to another system if required.

Storing the raw data

The raw data storage is managed by the SourceLogger and AreaLogger steps. By default, only the video_logger and the file_system_logger are enabled. As both steps store the data inside the processor (by default in the folder /repo/data/processor/static/), we strongly recommend mounting that folder to keep the data safe when the process is restarted (see Persisting changes). Moreover, we recommend keeping these steps active because the frontend and the metrics need them.

If you need to store (or process) the raw data in real time outside the processor, you can activate the web_hook_logger and implement an endpoint that handles its events. The web_hook_logger step is configured to send an event (a PUT request) using the following format:

{
    "version": ...,
    "timestamp": ...,
    "detected_objects": ...,
    "violating_objects": ...,
    "environment_score": ...,
    "detections": ...,
    "violations_indexes": ...
}

You only need to implement an endpoint that matches the previous signature and configure its URL in the config file; the integration will then be done. We recommend this approach if you want to integrate "Smart social distancing" with another existing system with real-time data.
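For instance, the corresponding logger configuration in the config-*.ini file could look like this (a sketch; the section number N and all values are illustrative):

[SourceLogger_N]
Name = web_hook_logger
Enabled = True
Endpoint = https://example.com/social-distancing-events
Authorization = Bearer <your_token>
SendingInterval = 2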

Another alternative is to activate the periodic task s3_backup. This task backs up all the generated data (raw data and metrics) into the configured S3 bucket, according to the time interval defined by the BackupInterval parameter. Before enabling this feature, remember to configure AWS following the steps defined in the section Configuring AWS credentials.

Accessing the metrics data

The aggregated metrics data is stored in a set of CSV files inside the device. For now, we haven't implemented any mechanism to store these files outside the processor (the web_hook_logger only sends "raw data" events). However, if you enable the s3_backup task, the previous day's metrics files will be backed up to AWS at the beginning of each day.

You can easily visualize the metrics in the dashboard exposed by the frontend. In addition, you can retrieve the same information through the API (see the Metrics section in the API documentation exposed at http://<PROCESSOR_HOST>:<PROCESSOR_PORT>/docs#/Metrics).

Exporting the data

In addition to the previous features, the processor exposes an endpoint to export all the generated data in zip format. The signature of this endpoint can be found at http://<PROCESSOR_HOST>:<PROCESSOR_PORT>/docs#/Export.

Issues and Contributing

The project is under substantial active development; you can find our roadmap at https://github.com/galliot-us/neuralet/projects/1. Feel free to open an issue, send a Pull Request, or reach out if you have any feedback.

Contact Us

License

Most of the code in this repo is licensed under the Apache 2.0 license. However, some sections/classifiers include separate licenses.

These include:

  • Openpifpaf model for x86 (see license)
  • OFM facemask classifier model (see license)

smart-social-distancing's People

Contributors

alpha-carinae29, dependabot[bot], emmawdev, gpicart, jfer11, jsonsadler, kkrampa, lucabenvenuto, mats-claassen, mdegans, mhejrati, mohammad7t, mrn-mln, mrupgrade, pgrill, renzodgc, robert-p97, sasikiran, undefined-references


smart-social-distancing's Issues

Error in face anonymizer when using PoseNet

I got the following error in the face anonymizer when I ran the app with the posenet_1281x721 model on a sample video file:

Exception in thread Thread-1:
Traceback (most recent call last):
  File "/usr/lib/python3.7/threading.py", line 917, in _bootstrap_inner
    self.run()
  File "/repo/libs/engine_threading.py", line 46, in run
    self.engine.process_video(self.source['url'])
  File "/repo/libs/distancing.py", line 291, in process_video
    cv_image, objects, distancings = self.__process(cv_image)
  File "/repo/libs/distancing.py", line 127, in __process
    cv_image = self.anonymize_image(cv_image, objects_list)
  File "/repo/libs/distancing.py", line 557, in anonymize_image
    roi = self.anonymize_face(roi)
  File "/repo/libs/distancing.py", line 573, in anonymize_face
    return cv.GaussianBlur(image, (kernel_w, kernel_h), 0)
cv2.error: OpenCV(4.3.0) /tmp/opencv-4.3.0/modules/imgproc/src/smooth.dispatch.cpp:296: error: (-215:Assertion failed) ksize.width > 0 && ksize.width % 2 == 1 && ksize.height > 0 && ksize.height % 2 == 1 in function 'createGaussianKernels'

[TensorRT] ERROR: Could not register plugin creator: FlattenConcat_TRT in namespace: INFO:libs.detectors.jetson.mobilenet_ssd_v2:allocated buffers

After making a fresh installation of JetPack 4.3, I built the docker images, but I am facing the following issue on the Jetson Nano device:

[TensorRT] ERROR: Could not register plugin creator: FlattenConcat_TRT in namespace:
INFO:libs.detectors.jetson.mobilenet_ssd_v2:allocated buffers

I did not make any changes to the config file and I am running the following command:
sudo docker run -it --runtime nvidia --privileged -p 8080:8000 -v "$PWD/data":/repo/data neuralet/smart-social-distancing:latest-jetson-nano

[Screenshot from 2020-07-29 19-12-35]

Interface for analytics storage

I thought about an interface for analytics storage. The main Logger class would obtain these values and send them to whichever concrete logger is configured, which would be responsible for saving them in some supported format (CSV, protobuf, JSON, etc.) following this spec:

  • Version
  • Timestamp
  • Number of detections
  • List of detections:
    • Position (x, y, z)
    • Optional:
      • Wearing face mask
      • BBox (Xmin, Ymin, Xmax, Ymax)
      • Id (Movement Tracking)
      • Orientation (needs pose detection)
      • Body Pose keypoints (needs pose detection)

An efficient way of storing this would be using something like protobufs, for example. Here is an example in JSON to show how it could look:
Example:

{
   "version": "1.0",
   "timestamp": "2020-07-08T13:36:53+0000",
   "detection_number": 1,
   "detections": [
      {
         "position": [2.32, 3.10, 0.24], // x,y,z
         "face_mask": true,
         "tracking_id": 3241,
         "bbox": [23, 50, 120, 80], // pixel values of box
         "orientation": 60, // degrees, explained below
         "keypoints": [
            [2.32, 3.02, 0.24], // x,y,z of each point
            "..."
         ]
      }
   ]
}

For both position and orientation, we could define a line through the middle of the image which serves as the z axis as well as orientation 0 (if the person looks into the camera). Orientation would mean degrees from that line to the right, from the person's perspective. It could be in radians as well.

Float values could be saved with a precision of 0.1 or 0.01.

Any comments and suggestions are welcome. What do you think @mhejrati ?

Include Jetpack 4.4 support for jetson nano

The master branch is broken because the Jetson Nano and Jetson TX2 devices use the same config file but different Dockerfiles. The TX2 image uses JetPack 4.4, while the Nano image uses JetPack 4.3.

To fix the issue, I see 2 approaches:

  • Split the config file into 2 (#117)
  • Make the jetson-nano image compatible with Jetpack 4.4.

@mhejrati , what do you think?

libnvinfer and wrong data directory, not writable?

Hello

I've been trying to get this to run for use in our facility, but to no avail. I feel I'm close, though. I have two issues when trying to run the docker command:

sudo docker run -it --runtime nvidia --privileged -p 8000:8000 -v "$PWD":/repo neuralet/smart-social-distancing:latest-jetson-nano

INFO:libs.area_threading:[70] taking on notifications for 1 areas
ERROR:root:libnvinfer.so.6: cannot open shared object file: No such file or directory
Traceback (most recent call last):
File "/repo/libs/engine_threading.py", line 49, in run
self.engine = CvEngine(self.config, self.source["section"], self.live_feed_enabled)
File "/repo/libs/cv_engine.py", line 23, in init
self.detector = Detector(self.config)
File "/repo/libs/detectors/detector.py", line 23, in init
self.detector = JetsonDetector(self.config)
File "/repo/libs/detectors/jetson/detector.py", line 22, in init
from . import mobilenet_ssd_v2
File "/repo/libs/detectors/jetson/mobilenet_ssd_v2.py", line 3, in
import tensorrt as trt
File "/usr/local/lib/python3.6/dist-packages/tensorrt/init.py", line 1, in
from .tensorrt import *
ImportError: libnvinfer.so.6: cannot open shared object file: No such file or directory
100 4 100 4 0 0 8 0 --:--:-- --:--:-- --:--:-- 8
ok video is going to be processed

The processor keeps going though, so I'm not sure this is an actual problem. Then it starts looping this:

INFO:root:Exception processing area Kitchen
INFO:root:Restarting the area processing
INFO:libs.area_engine:Enabled processing area - area0: Kitchen with 1 cameras
INFO:libs.area_engine:Area reporting on - area0: Kitchen is waiting for reports to be created
ERROR:root:[Errno 2] No such file or directory: '/repo/data/processor/static/data/sources/default/objects_log/2020-12-30.csv'
Traceback (most recent call last):
File "/repo/libs/area_threading.py", line 46, in run
self.engine.process_area()
File "/repo/libs/area_engine.py", line 67, in process_area
with open(os.path.join(camera["file_path"], str(date.today()) + ".csv"), "r") as log:
FileNotFoundError: [Errno 2] No such file or directory: '/repo/data/processor/static/data/sources/default/objects_log/2020-12-30.csv'
INFO:root:Exception processing area Kitchen
ERROR:root:[Errno 2] No such file or directory: '/repo/data/processor/static/data/sources/default/objects_log/2020-12-30.csv'
Traceback (most recent call last):
File "/repo/libs/area_threading.py", line 54, in run
raise e
File "/repo/libs/area_threading.py", line 46, in run
self.engine.process_area()
File "/repo/libs/area_engine.py", line 67, in process_area
with open(os.path.join(camera["file_path"], str(date.today()) + ".csv"), "r") as log:
FileNotFoundError: [Errno 2] No such file or directory: '/repo/data/processor/static/data/sources/default/objects_log/2020-12-30.csv'

The 'sources' directory that it is looking for does not exist. There is a directory data/processor/static/data/default/objects_log, but even when I create the sources directory, the bug persists. When I create the csv manually (which can't be intended), I get this:

IndexError: deque index out of range
INFO:root:Exception processing area Kitchen
INFO:root:Restarting the area processing
INFO:libs.area_engine:Enabled processing area - area0: Kitchen with 1 cameras
ERROR:root:deque index out of range
Traceback (most recent call last):
File "/repo/libs/area_threading.py", line 46, in run
self.engine.process_area()
File "/repo/libs/area_engine.py", line 68, in process_area
last_log = deque(csv.DictReader(log), 1)[0]
IndexError: deque index out of range
INFO:root:Exception processing area Kitchen
ERROR:root:deque index out of range
Traceback (most recent call last):
File "/repo/libs/area_threading.py", line 54, in run
raise e
File "/repo/libs/area_threading.py", line 46, in run
self.engine.process_area()
File "/repo/libs/area_engine.py", line 68, in process_area
last_log = deque(csv.DictReader(log), 1)[0]
IndexError: deque index out of range

I'm by no means a Linux or Docker expert, so this might be a small issue to resolve, but I need help.
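
For context, here is my reading of what the failing line does (a minimal sketch, assuming area_engine.py keeps only the last row of today's per-camera CSV; the guard at the end is hypothetical, not the project's code):

```python
import csv
from collections import deque

# Example path taken from the traceback above
log_path = "/repo/data/processor/static/data/sources/default/objects_log/2020-12-30.csv"

# What line 68 of area_engine.py appears to do: keep only the last row of the CSV
with open(log_path, "r") as log:
    rows = deque(csv.DictReader(log), maxlen=1)

# An empty or header-only file yields an empty deque, so rows[0] raises
# "IndexError: deque index out of range", exactly the error looping above.
last_log = rows[0] if rows else None  # hypothetical guard; the project instead restarts the area thread
```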

Thanks in advance and great work so far!

Alphapose Onnx and TensorRT models

Hi, I've been working on converting Alphapose (the model that contributors developed for x86 devices in #113) to ONNX and TensorRT.
I will send a PR after finishing it; here I want to share my experience with the procedure.
I tried to convert a PyTorch-based HRNet model (one of the available backbones of Alphapose) to ONNX, but the path was not straightforward.
First of all, I used PyTorch 1.1.0 to export the ONNX model, but that version was buggy and I faced several errors. After trying several versions of PyTorch, the versions below finally worked for me.

```
torch==1.5.1
torchvision==0.6.1
```

```python
# Exporting parameters (checkpoint path quoted; map_location assumed to be defined, e.g. "cuda")
pose_model.load_state_dict(torch.load("MODEL.pt", map_location=map_location))
dummy_input = torch.randn(1, 3, 256, 192, requires_grad=True).cuda()
torch.onnx.export(pose_model, dummy_input, "alphapose.onnx", export_params=True, opset_version=11)
```
As the next step, I will check the ONNX model results and compare them to the x86 model results to make sure the ONNX model was exported successfully.
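
Something like the following should work for that comparison (a sketch, assuming the pose_model and dummy input from the export snippet above; onnxruntime is my choice for the check, not necessarily what the project uses):

```python
import numpy as np
import onnxruntime as ort
import torch

# Run the original PyTorch model and the exported ONNX model on the same input
dummy_input = torch.randn(1, 3, 256, 192).cuda()
with torch.no_grad():
    torch_out = pose_model(dummy_input).cpu().numpy()  # assuming a single output head

sess = ort.InferenceSession("alphapose.onnx")
input_name = sess.get_inputs()[0].name
onnx_out = sess.run(None, {input_name: dummy_input.cpu().numpy()})[0]

# If the export succeeded, the outputs should agree within a small numerical tolerance
print("max abs diff:", np.max(np.abs(torch_out - onnx_out)))
```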

I will update the procedure here.
Please feel free to ask for more details if you need them.

Master is broken for x86-gpu

Hi,

Master does not work well for x86-gpu. It seems the prints from the Detector class are not showing (the first thing that made me think it is not working), but I can see some output in the Lanthorn UI, which stops after a few frames, and the FPS becomes zero.

The weird thing is that the lines:
INFO:libs.distancing:processed frame 1 for /repo/data/softbio_vid.mp4
INFO:openpifpaf.decoder.generator.cifcaf:3 annotations: [13, 10, 9]
...
are showing for x86, but not for x86-gpu (while it processes some frames initially). What has been changed?

ahmet@ahmet-desktop:~/smart-social-distancing$ docker run -it --runtime nvidia --privileged -p 8000:8000 -v "$PWD/data":/repo/data neuralet/smart-social-distancing:latest-jetson-nano
docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused \"process_linux.go:432: running prestart hook 1 caused \\\"error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: initialization error: driver error: failed to process request\\\\n\\\"\"": unknown.
ERRO[0001] error waiting for container: context canceled

Please help me; I don't understand how to solve this.

Crash on launch

Hi, I followed the instructions. I have JetPack 4.3 installed on a TX2. This is the error I get; I tried the other repo and the result is the same.

INFO:__main__:Services Started.
INFO:     Started server process [14]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
[TensorRT] ERROR: Could not register plugin creator:  FlattenConcat_TRT in namespace: 
[TensorRT] ERROR: INVALID_CONFIG: The engine plan file is generated on an incompatible device, expecting compute 6.2 got compute 5.3, please rebuild.
[TensorRT] ERROR: engine.cpp (1324) - Serialization Error in deserialize: 0 (Core engine deserialization failure)
[TensorRT] ERROR: INVALID_STATE: std::exception
[TensorRT] ERROR: INVALID_CONFIG: Deserialize the cuda engine failed.
Process Process-2:
Traceback (most recent call last):
  File "/usr/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/usr/lib/python3.6/multiprocessing/process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "neuralet-distancing.py", line 14, in start_engine
    engine = CvEngine(config)
  File "/repo/libs/core.py", line 25, in __init__
    self.detector = Detector(self.config)
  File "/repo/libs/detectors/jetson/detector.py", line 18, in __init__
    self.net = mobilenet_ssd_v2.Detector(self.config)
  File "/repo/libs/detectors/jetson/mobilenet_ssd_v2.py", line 66, in __init__
    self.context = self._create_context()
  File "/repo/libs/detectors/jetson/mobilenet_ssd_v2.py", line 34, in _create_context
    for binding in self.engine:
TypeError: 'NoneType' object is not iterable
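
For what it's worth, the TensorRT errors above suggest the serialized engine plan was built for a different GPU (compute 5.3, i.e. a Jetson Nano) than the TX2 (compute 6.2), so deserialization returns None and the detector then iterates over it. A minimal sketch of a guard (my addition, following the usual TensorRT loading pattern, not the project's exact code):

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def load_engine(engine_path):
    """Deserialize a TensorRT engine plan, failing loudly on a device mismatch."""
    with open(engine_path, "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
        engine = runtime.deserialize_cuda_engine(f.read())
    if engine is None:
        # Engine plans are device-specific: one built on a Nano (compute 5.3)
        # cannot be deserialized on a TX2 (compute 6.2); it must be rebuilt there.
        raise RuntimeError(f"could not deserialize {engine_path}; rebuild the engine on this device")
    return engine
```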

Black feed previews when input feed is from IP Camera

Hello, I'm testing smart-social-distancing using a Jetson Nano and a D-Link DCS-5222LB IP camera.
The frontend is running locally on my laptop, the processor is running on the Jetson, and the laptop, Jetson, and D-Link camera are all on the same subnet.

Laptop: 192.168.188.20
Dlink camera: 192.168.188.23
Jetson: 192.168.188.81

Here are my configurations:

config-frontend.ini

[App]
Host: 0.0.0.0
Port: 8000

[Processor]
Host: 192.168.188.81
Port: 8000

config-jetson.ini

[App]
VideoPath = rtsp://username:[email protected]/live1.sdp
Resolution = 640,360

Encoder: videoconvert ! video/x-raw,format=I420 ! x264enc speed-preset=ultrafast

[API]
Host = 0.0.0.0
Port = 8000

[CORE]
Host = 0.0.0.0
QueuePort = 8010
QueueAuthKey = shibalba

I'm running the Processor on the Jetson with this command:

docker build -f jetson-nano.Dockerfile -t "neuralet/smart-social-distancing:latest-jetson-nano" .
docker run -it --runtime nvidia --privileged -p 8000:8000 -v "$PWD/data":/repo/data neuralet/smart-social-distancing:latest-jetson-nano

and this is the output I get on the console:

ok video is going to be processed
[TensorRT] ERROR: Could not register plugin creator:  FlattenConcat_TRT in namespace:
INFO:libs.detectors.jetson.mobilenet_ssd_v2:allocated buffers
Device is:  Jetson
Detector is:  ssd_mobilenet_v2_pedestrian_softbio
image size:  [300, 300, 3]
INFO:libs.distancing:opened video rtsp://admin:[email protected]/live2.sdp
error: XDG_RUNTIME_DIR not set in the environment.
0:00:02.002293373    68   0x55a7901730 ERROR                default gstvaapi.c:254:plugin_init: Cannot create a VA display
INFO:libs.distancing:processed frame 1 for rtsp://username:[email protected]/live1.sdp
INFO:libs.distancing:processed frame 11 for rtsp://username:[email protected]/live1.sdp
INFO:libs.distancing:processed frame 21 for rtsp://username:[email protected]/live1.sdp
INFO:libs.distancing:processed frame 31 for rtsp://username:[email protected]/live1.sdp
INFO:libs.distancing:processed frame 41 for rtsp://username:[email protected]/live1.sdp
INFO:libs.distancing:processed frame 51 for rtsp://username:[email protected]/live1.sdp
INFO:libs.distancing:processed frame 61 for rtsp://username:[email protected]/live1.sdp
INFO:libs.distancing:processed frame 71 for rtsp://username:[email protected]/live1.sdp

[...] and so on
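
To rule out the stream itself, here is a quick check I can run on the Jetson (a sketch: the URL mirrors my config above, and OpenCV needs GStreamer or FFmpeg support for RTSP):

```python
import cv2

# Try to open the camera's RTSP stream and grab a single frame
cap = cv2.VideoCapture("rtsp://username:password@192.168.188.23/live1.sdp")
ok, frame = cap.read()
print("opened:", cap.isOpened(), "| frame read:", ok,
      "| shape:", frame.shape if ok else None)
cap.release()
```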

I'm running the webapp locally on my laptop with this command:

docker build -f frontend.Dockerfile -t "neuralet/smart-social-distancing:latest-frontend" .
docker build -f web-gui.Dockerfile -t "neuralet/smart-social-distancing:latest-web-gui" .
docker run -it -p 8000:8000 --rm neuralet/smart-social-distancing:latest-web-gui 

This is the output I get from the console:

Successfully built 0047284f1577
Successfully tagged neuralet/smart-social-distancing:latest-web-gui
INFO:     Started server process [1]
INFO:uvicorn.error:Started server process [1]
INFO:     Waiting for application startup.
INFO:uvicorn.error:Waiting for application startup.
INFO:     Application startup complete.
INFO:uvicorn.error:Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
INFO:uvicorn.error:Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)

When I browse the frontend locally with the latest Chrome at http://0.0.0.0:8000/panel/#/live, I see the Camera Feed and the Bird's View boxes completely black. See the screenshot below:

[screenshot: screen_0]

The plot underneath the camera feeds is working, because if I step back and forth in front of the camera it seems to recognize me.

If I open Chrome's inspector I can see that there are two errors in the console.

[screenshot: screen_errors]

And if I try performing a GET to that URL (or any similar path) I get a 404.

[screenshot: error details]

and a "Not Found" response.

[screenshot: screen_not_found]

Has anyone of you experienced something similar when working with IP Cams? Any idea on how to fix this?

Thank you very much and big ups for the beautiful project!

Cheers

Import error for Detector

I am trying to run Smart Distancing on a Jetson TX2 and am facing the issue below. Please advise. Thank you.

Env -

  1. Jetson Tx2
  2. Jetpack 4.3
  3. CUDA 10.2

Steps to reproduce -

  1. Follow all instructions for Jetson Tx2.
  2. Run the Docker command below.
docker run -it --runtime nvidia --privileged -p HOST_PORT:8000 -v "$PWD/data":/repo/data neuralet/smart-social-distancing:latest-jetson-nano

Output -

[screenshots of the console output]

Expected -
Live feed with smart distancing for the video present in the data folder.
I also tried correcting the download path, as per this PR.

Issue with video feeds buffering and becoming out-of-sync with each other

Hi all,

I managed to get this up and running on a Jetson Nano a couple of months ago.
Coming back to it now everything has changed! 👍

I'm running both the front-end and the processor on the Nano. I have no experience with Docker, so I was not sure how to get the two parts communicating; eventually I got something working, though. I detail the steps here just in case.

I built and ran everything on the Nano itself in headless mode, as follows:

Modified the config-frontend.ini file as follows:

[App]
Host: 0.0.0.0
Port: 8000

[Processor]
; The IP and Port on which your Processor node is running (according to the -p HOST_PORT:8000 mapping in the processor's docker run command)
Host: 192.168.1.104 <-- changed this line to reflect the IP address of the Nano
Port: 8001

Build the Docker images:

docker build -f frontend.Dockerfile -t "neuralet/smart-social-distancing:latest-frontend" .
docker build -f jetson-web-gui.Dockerfile -t "neuralet/smart-social-distancing:latest-web-gui-jetson" .

docker build -f jetson-nano.Dockerfile -t "neuralet/smart-social-distancing:latest-jetson-nano" .

Start the front-end:

sudo docker run -it -p 8000:8000 --rm neuralet/smart-social-distancing:latest-web-gui-jetson

# Output
INFO:     Started server process [1]
INFO:uvicorn.error:Started server process [1]
INFO:     Waiting for application startup.
INFO:uvicorn.error:Waiting for application startup.
INFO:     Application startup complete.
INFO:uvicorn.error:Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
INFO:uvicorn.error:Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)

Start the processor:

sudo docker run -it --runtime nvidia --privileged -p 8001:8000 -v "$PWD/data":/repo/data -v "$PWD/config-jetson.ini":/repo/config-jetson.ini -e TZ=`./timezone.sh` neuralet/smart-social-distancing:latest-jetson-nano

# Output
video file at  /repo/data/softbio_vid.mp4 not exists, downloading...
--2020-10-15 10:46:41--  https://social-distancing-data.s3.amazonaws.com/softbio_vid.mp4
Resolving social-distancing-data.s3.amazonaws.com (social-distancing-data.s3.amazonaws.com)... 52.217.65.116
Connecting to social-distancing-data.s3.amazonaws.com (social-distancing-data.s3.amazonaws.com)|52.217.65.116|:443... connected.
HTTP request sent, awaiting response... INFO:__main__:Reporting disabled!
200 OK
Length: 25371423 (24M) [video/mp4]
Saving to: 'data/softbio_vid.mp4'

data/softbio_vid.mp4            2%[>                                                 ] 534.65K   730KB/s               INFO:libs.processor_core:Core's queue has been initiated
INFO:__main__:Core Started.
INFO:root:Starting processor core
INFO:libs.processor_core:Core is listening for commands ...
data/softbio_vid.mp4            7%[==>                                               ]   1.81M  1.17MB/s               INFO:api.processor_api:Connection established to Core's queue
data/softbio_vid.mp4            8%[===>                                              ]   2.15M  1.23MB/s               INFO:__main__:API Started.
data/softbio_vid.mp4           10%[====>                                             ]   2.47M  1.26MB/s               INFO:     Started server process [11]
INFO:uvicorn.error:Started server process [11]
INFO:     Waiting for application startup.
INFO:uvicorn.error:Waiting for application startup.
INFO:     Application startup complete.
INFO:uvicorn.error:Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
INFO:uvicorn.error:Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
data/softbio_vid.mp4          100%[=================================================>]  24.20M  1.62MB/s    in 16s

2020-10-15 10:46:57 (1.52 MB/s) - 'data/softbio_vid.mp4' saved [25371423/25371423]

running curl 0.0.0.0:8000/process-video-cfg
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0INFO:api.processor_api:process-video-cfg requests on api
INFO:api.processor_api:waiting for core's response...
INFO:libs.processor_core:command received: Commands.PROCESS_VIDEO_CFG
INFO:libs.processor_core:Setup scheduled tasks
INFO:libs.processor_core:should not send notification for camera default
INFO:libs.processor_core:started to process video ...
100     4  100     4    0     0     59      0 --:--:-- --:--:-- --:--:--    60
ok video is going to be processed
INFO:libs.engine_threading:[68] taking on 1 cameras
[TensorRT] ERROR: Could not register plugin creator:  FlattenConcat_TRT in namespace:
INFO:libs.detectors.jetson.mobilenet_ssd_v2:allocated buffers
Device is:  Jetson
Detector is:  ssd_mobilenet_v2_coco
image size:  [300, 300, 3]
INFO:libs.distancing:opened video /repo/data/softbio_vid.mp4
error: XDG_RUNTIME_DIR not set in the environment.
0:00:00.812662754    79   0x55b7599660 ERROR                default gstvaapi.c:254:plugin_init: Cannot create a VA display
INFO:libs.distancing:processed frame 1 for /repo/data/softbio_vid.mp4
INFO:libs.distancing:processed frame 101 for /repo/data/softbio_vid.mp4
INFO:libs.distancing:processed frame 201 for /repo/data/softbio_vid.mp4
INFO:libs.distancing:processed frame 301 for /repo/data/softbio_vid.mp4

This seems to be good so far, I start getting the graph appearing in my browser when I visit 192.168.1.104:8000.

However, the video feeds seem to buffer and stutter a lot. Furthermore, the birds-eye view and the video feed quickly become out-of-sync with one another. If I look back at the running processor I observe something along the following lines (which happens an awful lot):

INFO:libs.distancing:processed frame 3301 for /repo/data/softbio_vid.mp4
INFO:libs.distancing:processed frame 3401 for /repo/data/softbio_vid.mp4
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/uvicorn/protocols/http/h11_impl.py", line 389, in run_asgi
    result = await app(self.scope, self.receive, self.send)
  File "/usr/local/lib/python3.6/dist-packages/uvicorn/middleware/proxy_headers.py", line 45, in __call__
    return await self.app(scope, receive, send)
  File "/usr/local/lib/python3.6/dist-packages/fastapi/applications.py", line 181, in __call__
    await super().__call__(scope, receive, send)  # pragma: no cover
  File "/usr/local/lib/python3.6/dist-packages/starlette/applications.py", line 111, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/usr/local/lib/python3.6/dist-packages/starlette/middleware/errors.py", line 181, in __call__
    raise exc from None
  File "/usr/local/lib/python3.6/dist-packages/starlette/middleware/errors.py", line 159, in __call__
    await self.app(scope, receive, _send)
  File "/usr/local/lib/python3.6/dist-packages/starlette/middleware/cors.py", line 86, in __call__
    await self.simple_response(scope, receive, send, request_headers=headers)
  File "/usr/local/lib/python3.6/dist-packages/starlette/middleware/cors.py", line 142, in simple_response
    await self.app(scope, receive, send)
  File "/usr/local/lib/python3.6/dist-packages/starlette/exceptions.py", line 82, in __call__
    raise exc from None
  File "/usr/local/lib/python3.6/dist-packages/starlette/exceptions.py", line 71, in __call__
    await self.app(scope, receive, sender)
  File "/usr/local/lib/python3.6/dist-packages/starlette/routing.py", line 566, in __call__
    await route.handle(scope, receive, send)
  File "/usr/local/lib/python3.6/dist-packages/starlette/routing.py", line 376, in handle
    await self.app(scope, receive, send)
  File "/usr/local/lib/python3.6/dist-packages/starlette/staticfiles.py", line 94, in __call__
    await response(scope, receive, send)
  File "/usr/local/lib/python3.6/dist-packages/starlette/responses.py", line 314, in __call__
    "more_body": more_body,
  File "/usr/local/lib/python3.6/dist-packages/starlette/exceptions.py", line 68, in sender
    await send(message)
  File "/usr/local/lib/python3.6/dist-packages/starlette/middleware/cors.py", line 148, in send
    await send(message)
  File "/usr/local/lib/python3.6/dist-packages/starlette/middleware/errors.py", line 156, in _send
    await send(message)
  File "/usr/local/lib/python3.6/dist-packages/uvicorn/protocols/http/h11_impl.py", line 483, in send
    output = self.conn.send(event)
  File "/usr/local/lib/python3.6/dist-packages/h11/_connection.py", line 469, in send
    data_list = self.send_with_data_passthrough(event)
  File "/usr/local/lib/python3.6/dist-packages/h11/_connection.py", line 502, in send_with_data_passthrough
    writer(event, data_list.append)
  File "/usr/local/lib/python3.6/dist-packages/h11/_writers.py", line 78, in __call__
    self.send_data(event.data, write)
  File "/usr/local/lib/python3.6/dist-packages/h11/_writers.py", line 98, in send_data
    raise LocalProtocolError("Too much data for declared Content-Length")
h11._util.LocalProtocolError: Too much data for declared Content-Length
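
My guess at the mechanism (an assumption on my part, not something I have confirmed in the code): the static-file response captures the file's size when it starts sending, while the processor keeps writing to the very files being served, so more bytes arrive than the declared Content-Length. A toy illustration:

```python
import os
import tempfile

# A static-file response records the size up front to set Content-Length...
path = os.path.join(tempfile.mkdtemp(), "feed.bin")
with open(path, "wb") as f:
    f.write(b"x" * 10)
declared_content_length = os.path.getsize(path)  # what would go into the header

# ...but if the file keeps growing while it is streamed, the body outgrows the header
with open(path, "ab") as f:
    f.write(b"y" * 5)

actual_size = os.path.getsize(path)
print(declared_content_length, actual_size)  # 10 vs 15: "Too much data for declared Content-Length"
```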

My questions are: Is this expected behavior? Is the software not robust/mature enough at this stage? Is there something wrong with how I have configured the Docker images or how I am running them? Or is the Nano just not powerful enough (e.g. in terms of FPS, or for running both the processor and the front-end)?

I only ask because I seem to recall that the simpler version I used some months back worked better for me. The front-end was super trivial, but it didn't suffer from these issues. I'm happy to roll back to that older version if required, but first I wanted to check that this is not something that can be addressed.

Many thanks

Pedestrian plot and environment score plot: yet to be implemented?

The tutorial page (https://neuralet.com/docs/tutorials/smart-social-distancing/) mentions the output view and presents two nice analytic graphs: the pedestrian plot and the environment score plot. Using the newest version of the master branch on a Jetson Nano with JetPack 4.3, I see the video stream with detections and the bird's-eye view, yet I do not see the pedestrian and environment plots. Are they yet to be implemented, or am I perhaps missing something? Any hints would be greatly appreciated!

Edit: In #25 I see a picture of the plot graph; strangely, it is not displaying for me.

XDG_RUNTIME_DIR Error and Video Loading Problem

Hello, thank you to everyone who contributed to this project.
When I try it on a Jetson Nano, the video feels like it is loading continuously: it plays for 5 seconds, loads again, plays for another 5 seconds, then pauses. I refresh the page but the same problem continues. It also prints errors related to XDG_RUNTIME_DIR on the console.

I created the docker image with the command below.
docker build -f jetson-nano.Dockerfile -t "neuralet/smart-social-distancing:latest-jetson-nano" .

I also tried it on my PC; the result is the same. What am I missing?

[screenshot: Screen Shot 2020-06-26 at 20 45 45]

docker: invalid publish opts format (should be name=value but got 'HOST_PORT:8000')

Hi, I am new to Docker and followed the instructions exactly:
$ sudo docker run -it --runtime nvidia --privileged -p HOST_PORT:8000 -v "$PWD/data":/data neuralet/smart-social-distancing:latest-jetson-nano
Here's what I got:
docker: invalid publish opts format (should be name=value but got 'HOST_PORT:8000')

Please advise on what I am probably missing; a likely fix is sketched below.

thank you
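
A likely cause (my reading of the setup instructions, not confirmed): HOST_PORT is a placeholder for the host port you want to publish, so it must be replaced with an actual port number, for example:

sudo docker run -it --runtime nvidia --privileged -p 8000:8000 -v "$PWD/data":/data neuralet/smart-social-distancing:latest-jetson-nano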

How to use PI camera V2.0 as a video input?

Thanks for such a great and useful project.

Recently, I've been trying to test it with my Pi Camera V2.0 (a CSI camera) on a Jetson Nano with JetPack 4.3, Ubuntu 18.04, and CUDA 10.0.

I modified the Nano Dockerfile you provide to disable running the command. Then I tried to run the GStreamer command inside the container, but I got the error shown here:

[screenshot of the GStreamer error]

And the same command works on the same machine outside the docker container.

Could you help me get it running with this CSI camera?
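
For reference, this is the kind of capture pipeline I am trying (a sketch using the standard nvarguscamerasrc pipeline for CSI cameras on Jetson; it assumes OpenCV is built with GStreamer support, and inside Docker the container typically also needs /tmp/argus_socket mounted and --privileged):

```python
import cv2

# Standard Jetson CSI pipeline: capture via nvarguscamerasrc, convert to BGR for OpenCV
gst = (
    "nvarguscamerasrc ! "
    "video/x-raw(memory:NVMM), width=1280, height=720, framerate=30/1 ! "
    "nvvidconv ! video/x-raw, format=BGRx ! "
    "videoconvert ! video/x-raw, format=BGR ! appsink"
)
cap = cv2.VideoCapture(gst, cv2.CAP_GSTREAMER)
print("opened:", cap.isOpened())
ok, frame = cap.read()
print("frame read:", ok)
cap.release()
```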
