
iotedge-iva-nano's Introduction

Azure IoT Edge Workshop: Visual Anomaly Detection over multiple cameras with NVIDIA Jetson Nano devices

In this workshop, you'll discover how to build a solution that can process several real-time video streams with an AI model on a $100 device, how to build your own AI model to detect custom anomalies, and finally how to operate it all remotely.

We'll put ourselves in the shoes of a soda can manufacturer who wants to improve the efficiency of its plant. In particular, it wants to detect soda cans that have fallen over on its production lines, monitor those lines from home, and be alerted when this happens. There are 3 production lines, all moving at a fairly quick pace.

To satisfy the real-time, multi-camera, custom AI model requirements, we'll build this solution using NVIDIA DeepStream on an NVIDIA Jetson Nano device. We'll build our own AI model with Azure Custom Vision, and we'll deploy and connect everything to the cloud with Azure IoT Edge and Azure IoT Central. Azure IoT Central will handle the monitoring and alerting.

Check out this video recording to help you go through all the steps and concepts used in this workshop: Workshop Recording

Prerequisites

Jetson Nano

  • Flash your Jetson Nano SD card: download and flash either this JetPack version 4.3 image if you have an HDMI screen, or the image from this NVIDIA course otherwise (which is a great learning resource anyway!). The image from the course is also based on JetPack 4.3 but includes a USB Device Mode to use the Jetson Nano without an HDMI screen. The rest of this tutorial assumes that you use the device in USB Device Mode. In any case, you can use the BalenaEtcher tool to flash your SD card. Both of these images are based on Ubuntu 18.04 and already include the NVIDIA drivers, CUDA and nvidia-docker. To double-check your JetPack version, you can use the following command (JetPack 4.3 = Release 32, Revision 3):
head -n 1 /etc/nv_tegra_release
  • A developer's machine: you need a developer machine (Windows, Linux or Mac) to connect to your Jetson Nano device and view its results with a browser and VLC.

  • A Micro-B to Type-A USB cable to connect your Jetson Nano to your developer machine in USB Device Mode: we'll use the USB Device Mode provided in NVIDIA's course base image. With this mode, you do not need to hook up a monitor directly to your Jetson Nano. Instead, boot your device, wait for 30 seconds, then open your favorite browser, go to http://192.168.55.1:8888 and enter the password dlinano to get access to a command-line terminal on your Jetson Nano. You can use this terminal to run instructions on your Jetson Nano (Ctrl+V is a handy shortcut to paste instructions), or use your favorite SSH client if you prefer (ssh dlinano@your-nano-ip-address with password dlinano; your Nano's IP address can be found with the command /sbin/ifconfig eth0 | grep "inet" | head -n 1).

Jupyter Notebook

  • Connect your Jetson Nano to the internet: either use an Ethernet connection, in which case you can skip this section, or, if your device supports WiFi (which is not the case out of the box for standard dev kits), connect it to WiFi with the following commands from the USB Device Mode terminal:

    1. Re-scan available WiFi networks

      nmcli device wifi rescan
    2. List available WiFi networks, and find the ssid_name of your network.

      nmcli device wifi list
    3. Connect to a selected WiFi network

      nmcli device wifi connect <ssid_name> password <password>
  • VLC to view RTSP video streams: to visualize the output of the Jetson Nano without an HDMI screen (there is only one per table), we'll use VLC on your laptop to view an RTSP video stream of the processed videos. Install VLC if you don't have it yet.

  • An Azure subscription: You need an Azure subscription to create an Azure IoT Central application.

  • A phone with the IP Camera Lite app: to view & process a live video stream, you can use your phone with the IP Camera Lite app (iOS, Android) as an IP camera.

The next sections walk you step by step through deploying DeepStream on an IoT Edge device, updating its configuration via a pre-built IoT Central application, and building a custom AI model with Custom Vision. Concepts are explained along the way.

Understanding the solution running at the Edge

The soda can manufacturer has already asked a partner to build a first prototype solution that can analyze video streams with a given AI model and connect it to the cloud. The solution built by this partner is composed of two main blocks:

  1. NVIDIA DeepStream, which does all the video processing

DeepStream is a highly optimized video processing pipeline capable of running one or more deep neural networks, i.e. AI models. It provides outstanding performance thanks to several techniques that we'll discover below. It is a must-have tool whenever you have complex video analytics requirements like real-time object detection or cascading AI models.

DeepStream runs as a container, which can be deployed and managed by IoT Edge. It is also integrated with IoT Edge to send all its outputs to the IoT Edge runtime.

The DeepStream application we are using was easy to build since we use the out-of-the-box one provided by NVIDIA in the Azure Marketplace here. We're using this module as-is and are only configuring it from the IoT Central bridge module.

DeepStream in the Azure Marketplace

  2. A bridge to IoT Central, which transforms the telemetry sent by DeepStream into a format understood by IoT Central and configures DeepStream remotely.

It formats all telemetry, properties, and commands using IoT Plug and Play (aka PnP), the declarative language used by IoT Central to understand how to communicate with a device.
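As an illustration, here is a minimal sketch of how an IoT Edge module can send such PnP-style telemetry with the Node.js Module SDK in TypeScript. This is not the repo's actual bridge code, which is far more elaborate; the tlSystemHeartbeat field name is one of the telemetry names this solution emits, and the promise-based calls assume the azure-iot-device package:

    import { ModuleClient, Message } from 'azure-iot-device';
    import { Mqtt } from 'azure-iot-device-mqtt';

    async function sendHeartbeat(): Promise<void> {
      // fromEnvironment reads the connection settings that the IoT Edge
      // runtime injects into every module's environment
      const client = await ModuleClient.fromEnvironment(Mqtt);
      await client.open();

      // Telemetry field names (e.g. tlSystemHeartbeat) must match the device
      // capability model so that IoT Central knows how to interpret them
      const message = new Message(JSON.stringify({ tlSystemHeartbeat: 1 }));
      await client.sendEvent(message);

      await client.close();
    }

    sendHeartbeat().catch(console.error);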

Understanding NVIDIA DeepStream

DeepStream is an SDK based on GStreamer, an open-source, battle-tested platform for creating video pipelines. It is very modular thanks to its concept of plugins: each plugin has sinks and sources. NVIDIA provides several plugins as part of DeepStream which are optimized to leverage NVIDIA GPUs and other NVIDIA hardware like dedicated encoding/decoding chips. How these plugins are connected with each other is defined in the application's configuration file.
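To make the plugin model concrete, here is roughly what a single-stream DeepStream pipeline looks like when expressed directly with gst-launch-1.0 (a sketch modeled on NVIDIA's quickstart examples; the input file name and the inference config path are placeholders):

    gst-launch-1.0 filesrc location=sample_720p.h264 ! h264parse ! nvv4l2decoder ! \
      m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! \
      nvinfer config-file-path=config_infer_primary.txt ! \
      nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink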

Here is an example of what an end-to-end DeepStream pipeline looks like:

NVIDIA Deepstream Application Architecture.

You can learn more about its architecture in NVIDIA's official documentation.

To better understand how NVIDIA DeepStream works, let's have a look at its default configuration file, copied here in this repo (called Demo Mode in the IoT Central UI later on).

Observe in particular:

  • The sources sections: they define where the source videos are coming from. We're using local videos to begin with and will switch to live RTSP streams later on.
  • The sink sections: they define where to output the processed videos and the output messages. We use RTSP to stream a video feed out, and all output messages are sent to the Azure IoT Edge runtime.
  • The primary-gie section: it defines which AI model is used to detect objects, and how this AI model is applied. As an example, note the interval property set to 4: this means that inferencing is executed only once every 5 frames. Bounding boxes are still displayed continuously because a tracking algorithm, which is computationally less expensive than inferencing, takes over in between (the tracking algorithm used is set in the tracking section). This is the kind of out-of-the-box optimization provided by DeepStream that enables us to process 240 frames per second on a $100 device. Other notable optimizations include using dedicated encoding/decoding hardware, loading frames in memory only once (zero in-memory copy), pushing the vast majority of the processing to the GPU, and batching frames from multiple streams (see the abbreviated config sketch below).
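For reference, these sections look roughly like the following in the configuration file (an abbreviated sketch; paths and values are illustrative, see the copy of the file in this repo for the real ones):

    [source0]
    enable=1
    type=3            # 3 = multiple file URIs in deepstream-app configs
    uri=file:///data/misc/storage/sampleStreams/sample.mp4   # illustrative path
    num-sources=4

    [sink1]
    enable=1
    type=4            # 4 = RTSP streaming output
    rtsp-port=8554

    [primary-gie]
    enable=1
    interval=4        # infer once every 5 frames; the tracker fills the gaps

    [tracker]
    enable=1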

Understanding the connection to IoT Central

IoT Edge connects to IoT Central with the regular Module SDK (you can look at the source code here). The telemetry, properties and commands that the IoT Central bridge module receives and sends follow the IoT Plug and Play (aka PnP) format, which is enforced in the cloud by IoT Central against a Device Capability Model (DCM), a file that defines what this IoT Edge device is capable of doing.

  • Click on Devices in the left nav of the IoT Central application
  • Observe the templates in the second column: they define all the devices that this IoT Central application understands. All the Jetson Nano devices of this workshop use a version of the NVIDIA Jetson Nano DCM device template. In the case of IoT Edge, an IoT Edge deployment manifest is also attached to a DCM version to create a device template. If you want to see what the device template we use looks like, you can look at this Device Capability Model and at this IoT Edge deployment manifest.
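As an illustration of what such a model contains, a single telemetry definition inside a DCM looks roughly like this (a hypothetical, abbreviated excerpt; see NVIDIAJetsonNanoDcm.json in this repo for the real model):

    {
      "@type": "Telemetry",
      "name": "tlSystemHeartbeat",
      "displayName": "System Heartbeat",
      "schema": "integer"
    }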

Enough documentation! Let's now see the solution built by our partner in action.

Operating the solution with IoT Central app

Let's start by creating a new IoT Central app to remotely control the Jetson Nano.

Create a new IoT Central app

Warning

Steps to create an IoT Central application have changed compared to the recording because the IoT Central team deprecated copying an IoT Central application that contains IoT Edge devices. The following steps have thus been revised to create an IoT Central application from scratch and manually add and configure the Jetson Nano device as an IoT Edge device in IoT Central.

We'll start from a new IoT Central application, add a Device Capability Model and an IoT Edge deployment manifest that describe the video analytics solution running on the NVIDIA Jetson Nano, and optionally customize our IoT Central application.

  • Create a new IoT Central application:
    • From your browser, go to: https://apps.azureiotcentral.com/build
    • Sign-in with your Azure account
    • Click on Custom apps
    • Give a name and URL to your application
    • Select your Azure subscription (you can opt-in for a 7 day free trial)
    • Select your location
    • Click on Create
  • Create a new Device Template:
    • Click on Device templates
    • Click on Azure IoT Edge
    • Click on Next: Customize
    • Click on Skip + Review
    • Click on Create
    • Rename the newly created device template to NVIDIA Jetson Nano DCM and hit Enter
    • Click on Import capability model
    • Select a local copy of the file NVIDIAJetsonNanoDcm.json from this repo
  • Configure your device dashboard:
    • Click on Views
    • Click on Visualizing the device
    • Rename this View Dashboard
    • From the Telemetry section, select Primary Detection Count and click on Add tile
    • Click on the Settings button of the Primary Detection Count tile, select Count instead of Average and click on Update
    • From the Telemetry section, select Secondary Detection Count and click on Add tile
    • Click on the Settings button of the Secondary Detection Count tile, select Count instead of Average and click on Update
    • From the Telemetry section, select Free Memory and System Heartbeat and click on Add tile
    • From the Telemetry section, select Change Video Model, Device Restart, Processing Started, Processing Stopped and click on Add tile
    • From the Telemetry section, select Pipeline State and click on Add tile
    • Optionally, rearrange the tiles to your taste
    • Hit Save
  • Configure your device properties:
    • Click on Views
    • Click on Visualizing the device
    • Rename this View Device
    • From the Properties section, select Device model, Manufacturer, Operating system name, Processor architecture, Processor manufacturer, Software version, Total memory, Total storage, RTSP Video Url and click on Add tile
    • Resize the tile appropriately
    • Hit Save
  • Optionally, add an About view to give a description of your device
  • Optionally, white label your IoT Central application by going to Administration > Your Application and Administration > Customize your application
  • Upload your IoT Edge deployment manifest:
    • Click on Replace manifest
    • Click on Upload
    • Select a local copy of the file deployment.json in the config folder of this repo
    • Click on Replace
  • Finally, publish your device template so that it can be used:
    • Click on Publish and confirm
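For reference, the deployment manifest you just uploaded follows the standard IoT Edge schema. Heavily elided, its shape looks like this (module names match this repo's deployment.json; images and options are omitted here):

    {
      "modulesContent": {
        "$edgeAgent": {
          "properties.desired": {
            "systemModules": { "edgeAgent": { ... }, "edgeHub": { ... } },
            "modules": {
              "deepstream": { ... },
              "IoTCentralBridge": { ... }
            }
          }
        }
      }
    }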

Create an IoT Edge device from your IoT Central app

We'll create a new IoT Edge device in your IoT Central application with the device template created above that will enable the NVIDIA Jetson Nano to connect to IoT Central.

  • Go to the Devices tab
  • Select the NVIDIA Jetson Nano DCM device template
  • Click on New
  • Give a name to your device by editing the Device ID and the Device name fields (let's use the same name for both of these fields in this workshop)
  • Click on Create
  • Click on your new device
  • Click on the Connect button in the top right corner
  • Copy your ID Scope value, Device ID value and Primary key value and save them for later.

Setting up your device to be used with your IoT Central application

We'll start from a blank Jetson installation (JetPack v4.3), copy locally a few files needed by the application, such as video files to simulate RTSP cameras and DeepStream configuration files, then install IoT Edge and configure it to connect to your IoT Central instance.

  1. On your Jetson Nano, create a folder named data at the root:

    sudo mkdir /data
  2. Download and extract the setup files into the data directory:

    cd /data
    sudo wget -O setup.tar.bz2 --no-check-certificate "https://onedrive.live.com/download?cid=0C0A4A69A0CDCB4C&resid=0C0A4A69A0CDCB4C%21588625&authkey=ACUlRaKkskctLOA"
    sudo tar -xjvf setup.tar.bz2
  3. Make the folder accessible from a normal user account:

    sudo chmod -R 777 /data
  4. Install IoT Edge (instructions copied from here for convenience):

    curl https://packages.microsoft.com/config/ubuntu/18.04/multiarch/prod.list > ./microsoft-prod.list
    sudo cp ./microsoft-prod.list /etc/apt/sources.list.d/
    curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.gpg
    sudo cp ./microsoft.gpg /etc/apt/trusted.gpg.d/
    sudo apt-get update
    sudo apt-get install iotedge
  5. Connect your device to your IoT Central application by editing IoT Edge configuration file:

    • Use your favorite text editor to edit IoT Edge configuration file:
    sudo nano /etc/iotedge/config.yaml
    • Comment out the "Manual provisioning configuration" section so it looks like this:
    # Manual provisioning configuration
    #provisioning:
    #  source: "manual"
    #  device_connection_string: ""
    • Uncomment the "DPS symmetric key provisioning configuration" section (not the TPM section but the symmetric key one) and add your IoT Central app's Scope ID, the registration_id (which is your Device ID), and its primary symmetric key:

    ⚠️ Beware of spaces since YAML is space sensitive. In YAML, exactly 2 spaces = 1 indentation level, and make sure not to leave any trailing spaces.

    # DPS symmetric key provisioning configuration
    provisioning:
        source: "dps"
        global_endpoint: "https://global.azure-devices-provisioning.net"
        scope_id: "<ID Scope>"
        attestation:
          method: "symmetric_key"
          registration_id: "<Device ID>"
          symmetric_key: "<Primary Key>"
    • Save and exit your editor (Ctrl+O, Ctrl+X)

    • Now restart the Azure IoT Edge runtime with the following command:

    sudo systemctl restart iotedge
    • And let's verify that the connection to the cloud has been correctly established. If it isn't the case, please check your IoT Edge config file.
    sudo systemctl status iotedge

As you can guess from this last step, behind the scenes IoT Central is actually using Azure Device Provisioning Service to provision devices at scale.
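Beyond systemctl, the iotedge CLI is also handy to confirm that modules are being deployed and to inspect their logs (module names such as deepstream come from the deployment manifest):

    sudo iotedge list               # lists modules and their runtime status
    sudo iotedge logs deepstream    # shows the logs of a given module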

With the IoT Edge device connected to the cloud, it can now report back its IP address to IoT Central. Let's verify that it is the case:

  1. Go to your IoT Central application
  2. Go to Devices tab from the left navigation
  3. Click on your device
  4. Click on its Device tab
  5. Verify that the RTSP Video URL starts with the IP address of your device

After a minute or so, IoT Edge should have had enough time to download all the containers from the cloud per IoT Central's instructions, and DeepStream should have had enough time to start the default video pipeline, called Demo mode in the IoT Central UI. Let's see how it looks:

  1. In IoT Central, copy the RTSP Video URL from the Device tab
  2. Open VLC and go to Media > Open Network Stream and paste the RTSP Video URL copied above as the network URL and click Play
  3. In IoT Central, go to the Dashboard tab of your device (e.g. from the left nav: Devices > your-device > Dashboard)
  4. Verify that active telemetry is being sent by the device to IoT Central. In particular, the number of primary detections, which is set to car by default, should map to the objects detected by the 4 cameras.

At this point, you should see 4 real-time video streams being processed to detect cars and people with a ResNet 10 AI model.

4 video streams processed in real time

Operating the solution

To demonstrate how to remotely manage this solution, we'll send a command to the device to change its input cameras, using your phone as a new RTSP input camera.

IoT Central

Changing input cameras

Let's first verify that your phone works properly as an RTSP camera:

  • Open the IP Camera Lite app
  • Go to Settings and remove the User and Password on the RTSP feed
  • Click on Turn on IP Camera Server

Let's just verify that the camera is functional. With VLC:

  • Go to Media > Open Network Stream
  • Paste the following RTSP Video URL: rtsp://your-phone-ip-address:8554/live
  • Click Play and verify that the phone's camera feed is properly displayed.
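If you prefer the command line, you can do the same check by launching VLC directly with the stream URL (assuming vlc is on your PATH):

    vlc rtsp://your-phone-ip-address:8554/live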

Let's now update your Jetson Nano to use your phone's camera. In IoT Central:

  • Go to the Manage tab
  • Unselect Demo Mode, which uses several hardcoded video files of car traffic as input
  • Update the Video Stream 1 property:
    • In the cameraId, name your camera, for instance My Phone
    • In the videoStreamUrl, enter the RTSP stream of this camera: rtsp://your-phone-ip-address:8554/live
  • Keep the default AI model of DeepStream by keeping the value DeepStream ResNet 10 as the AI model type.
  • Keep the default Secondary Detection Class as person
  • Hit Save

This sends a command to the device to update its DeepStream configuration file with these new properties and to restart DeepStream. If you were still streaming the output of the DeepStream application, this stream will be taken down as DeepStream will restart.

Let's have a closer look at DeepStream configuration to see what has changed compared to the initial Demo Mode configuration which is copied here. From a terminal connected to your Jetson Nano:

  1. Open up the default configuration file of DeepStream to understand its structure:

    nano /data/misc/storage/DSConfig.txt
  2. Look for the first source section and observe how the parameters provided in the IoT Central UI got copied there.
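After the update, the first source section should look roughly like the following (an illustrative sketch; in this application's configuration, source type 4 denotes an RTSP input):

    [source0]
    enable=1
    type=4                                        # RTSP input
    uri=rtsp://your-phone-ip-address:8554/live    # the URL entered in IoT Central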

Within a minute, DeepStream should restart. You can observe its status in IoT Central via the Modules tab. Once the deepstream module is back to Running, copy the RTSP Video URL field from the Device tab again and open it in VLC (Media > Open Network Stream > paste the RTSP Video URL > Play).

You should now detect people from your phone's camera. The count of Person in the dashboard tab of your device in IoT Central should go up. We've just remotely updated the configuration of this intelligent video analytics solution!

Use an AI model to detect custom visual anomalies

We'll use simulated cameras to monitor each of the soda can production lines, collect images, and build a custom AI model to detect cans that are up or down. We'll then deploy this custom AI model to DeepStream via IoT Central. To do a quick proof of concept, we'll use the Custom Vision service, a no-code computer vision AI model builder.

As a pre-requisite, let's create a new Custom Vision project in your subscription:

  • Go to http://customvision.ai
  • Sign-in
  • Create a new Project
  • Give it a name like Soda Cans Down
  • Pick your resource; if you don't have one, select create new with SKU F0 (or S0)
  • Select Project Type = Object Detection
  • Select Domains = General (Compact)

We then need to collect images to build a custom AI model. In the interest of time, here is a set of images that has already been captured for you and that you can upload to Custom Vision. Download it, unzip it, and upload all the images into your Custom Vision project.

We then need to label our images:

  • Click on an image
  • Label the cans that are up as Up and the ones that are down as Down
  • Hit the right arrow to move on to the next image and label the remaining 70+ images...or read below to use a pre-built model with this set of images

Labelling in Custom Vision

Once you're done labeling, let's train and export your model:

  • Train your model by clicking on Train
  • Export it by going to the Performance tab, clicking on Export and choosing ONNX
  • Right-click on the Download button and select copy link address to copy the anonymous location of a zip file of your custom model
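Optionally, you can inspect what Custom Vision exported by downloading and unzipping the archive on any machine (file names below are indicative; the archive contains the ONNX model plus its labels file):

    wget -O model.zip "<copied-link-address>"
    unzip model.zip    # typically yields model.onnx and labels.txt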

In the interest of time, you can also use this link to a pre-built Custom Vision model.

Finally, we'll deploy this custom vision model to the Jetson Nano using IoT Central. In IoT Central:

  • Go to the Manage tab (beware of the sorting of the fields)
  • Make sure the Demo Mode is unchecked
  • Update the first three Video Stream Input to the following values:
    • Video Stream Input 1 > CameraId = Cam01
    • Video Stream Input 1 > videoStreamUrl = file:///data/misc/storage/sampleStreams/cam-cans-00.mp4
    • Video Stream Input 2 > CameraId = Cam02
    • Video Stream Input 2 > videoStreamUrl = file:///data/misc/storage/sampleStreams/cam-cans-01.mp4
    • Video Stream Input 3 > CameraId = Cam03
    • Video Stream Input 3 > videoStreamUrl = file:///data/misc/storage/sampleStreams/cam-cans-02.mp4
  • Select Custom Vision as the AI model Type
  • Paste the URI of your custom vision model into the Custom Vision Model Url field, for instance https://onedrive.live.com/download?cid=0C0A4A69A0CDCB4C&resid=0C0A4A69A0CDCB4C%21587636&authkey=AOCf3YsqcZM_3WM for the pre-built one.
  • Update the detection classes:
    • Primary Detection Class = Up
    • Secondary Detection Class = Down
  • Hit Save
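As with the camera change earlier, the bridge rewrites the DeepStream configuration behind the scenes. Hypothetically, the inference section would now point at the downloaded model, along these lines (property names follow nvinfer conventions; the exact file names and layout under /data/misc/storage/ONNXSetup/detector may differ):

    [primary-gie]
    enable=1
    # illustrative path: the labels file that ships with the downloaded model
    labelfile-path=/data/misc/storage/ONNXSetup/detector/labels.txt
    interval=4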

After a few moments, the deepstream module should restart. Once it is in the Running state again, look at the output RTSP stream via VLC (Media > Open Network Stream > paste the RTSP Video URL that you got from IoT Central's Device tab > Play).

We are now visualizing the processing of 3 real-time (i.e. 30 fps, 1080p) video feeds with a custom vision AI model that we built in minutes to detect visual anomalies!

Custom Vision

Creating an alert

To be alerted as soon as a soda can is down, we'll set up a rule to send an email whenever a new soda can is detected as being down.

With IoT Central, you can easily define rules and alerts based on the telemetry it receives. Let's create one that fires whenever a soda can is down.

  1. Go to the Rules tab in the left nav
  2. Click on New
  3. Give it a name like Soda can down!
  4. Select your device template NVIDIA Jetson Nano DCM
  5. Create a Condition with the following attributes:
    • Telemetry = Secondary Detection Count
    • Operator = Is greater than
    • Value = 1 and hit Enter
  6. Create an email Action with the following attributes:
    • Display name = Soda can down
    • To = your email address used to login to your IoT Central application
    • Hit Done
  7. Save

In a few seconds, you should start receiving some emails :)

Clean-up

This is the end of the workshop. Because another session will use the same device and Azure account after you, please clean up the resources you've installed to let others start fresh:

  • Clean up on the Jetson Nano, via a terminal connected to your Jetson Nano:

    sudo rm -r /data
    sudo apt-get remove --purge -y iotedge
  • Delete your IoT Central application, from your browser:

    • Go to your IoT Central application
    • Click on the Administration tab from the left nav
    • Click on Delete the application and confirm
  • Delete your Custom Vision project, from your browser:

    • Go to Custom Vision
    • Click on Delete your Custom Vision project and confirm

Going further

Thank you for going through this workshop! We hope that you enjoyed it and found it valuable.

There is other content that you can try with your Jetson Nano at http://aka.ms/jetson-on-azure!

iotedge-iva-nano's People

Contributors

emmanuel-bv, kartben, sseiber, syntaxc4, toolboc, y07yoyo


iotedge-iva-nano's Issues

Issue with using IoT Central API Token

Hi,

I have run this example program, but when I generate the API token and run az rest -m get -u https://{subdomain}.{centralDnsSuffixInPath}/api/preview/devices/{device_id}/modules/{module_name}/components/ --headers Authorization={API_TOKEN}
I get the following:

{
"value": []
}

IoT central connectivity

I have run this demo on a proxy network and all containers are running fine, but with "sudo iotedge check" I am getting the error logs below:

Connectivity checks

× host can connect to and perform TLS handshake with DPS endpoint - Error
Could not connect to global.azure-devices-provisioning.net

No data is coming into the IoT Central dashboard. Is there anything extra I need to do for the proxy?

IoTCentral module needs restarting every time iotedge is restarted

Hi @ebertrams,

I am finding another issue when I restart IoTEdge:
sudo systemctl restart iotedge
When I check the logs of the IoTCentralBridge module I get the following:
iotcentral module error
I then have to restart the IoTCentral module only and it works.
sudo iotedge restart IoTCentralBridge

Any idea as to why it does not connect the first time iotedge is restarted?

Kind Regards,

Yusuf

Health Check Restarts Device not IoTCentralBridge Module

Hi @ebertrams,

When running the new Docker image, if the IoTCentralBridge module fails the healthcheck it appears that the whole device is restarted as opposed to just the IoTCentralBridge module. Can you confirm this?

When the device restarts, the same issue takes place again as the IoTCentralBridge module does not connect on its first go and hence, the device keeps restarting.

Kind Regards,

Error building new docker

Hi @ebertrams,

When trying to build the new Docker with the updated health checks, I get the following errors:

src/apis/module.ts(32,31): error TS2345: Argument of type 'void' is not assignable to parameter of type 'string | object'. src/manifest.ts(7,26): error TS6133: 'config' is declared but its value is never read.

The build command returned a non-zero code: 2

Kind Regards,

Yusuf

New IoT Edge Module

Hi @ebertrams and @sseiber

I am trying to create a new IoT Edge module that accesses an API to get some data, constructs a message, and sends it to the IoTCentralBridge module. I have tried using VS Code (right-click deployment.json -> Add IoT Edge Module) but the generated templates listen for an input, whereas the module I am creating won't be listening for an input, rather just sending output messages.

A bit like the simulatedTemperature module, which does not listen to an input and only sends an output for the IoTCentralBridge module to process and forward to the cloud.

Can you give me some insights as to how I can do this?

Kind Regards,

Yusuf

Count Per Video Stream

Hi,

Is it possible to modify this program to have a separate dashboard display of the count for each video stream? (i.e. a count for Cam01, a count for Cam02, a count for Cam03, etc.)

Kind Regards,

Error when building IoTCentralBridge Module

Hi @ebertrams and @sseiber,

I have been using this module for a few different use cases, I am working on one now but I am now having issues building the docker for this.

When I have gone back to my older use cases and tried to rebuild those Docker images, I am getting this new error again. This was not there previously when I was building them, so I am not sure what the issue is. I am hoping you can help me as I urgently need this Docker image built:

npm WARN deprecated [email protected]: request has been deprecated, see request/request#3142
npm WARN deprecated [email protected]: this library is no longer supported
npm WARN deprecated [email protected]: request-promise-native has been deprecated because it extends the now deprecated request package, see request/request#3142
npm WARN deprecated [email protected]: https://github.com/lydell/resolve-url#deprecated
npm WARN deprecated [email protected]: Please see https://github.com/lydell/urix#deprecated
npm WARN lifecycle [email protected]~postinstall: cannot run in wd [email protected] node ./scripts/setupDevEnvironment.js (wd=/app)
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents@^2.1.2 (node_modules/jest-haste-map/node_modules/fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for [email protected]: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"})

added 764 packages from 586 contributors and audited 766 packages in 81.746s

30 packages are looking for funding
run npm fund for details

found 0 vulnerabilities

node_modules/@types/hapi__hapi/index.d.ts(256,18): error TS2430: Interface 'RequestEvents' incorrectly extends interface 'Podium'.
Types of property 'on' are incompatible.
Type '{ (criteria: "peek", listener: PeekListener): void; (criteria: "finish" | "disconnect", listener: (data: undefined) => void): void; }' is not assignable to type '(criteria: string | CriteriaObject, listener: Listener, context?: Tcontext) => this'.
Types of parameters 'listener' and 'listener' are incompatible.
Types of parameters 'tags' and 'encoding' are incompatible.
Type 'string' is not assignable to type '{ [tag: string]: true; }'.
node_modules/@types/hapi__hapi/index.d.ts(629,18): error TS2430: Interface 'ResponseEvents' incorrectly extends interface 'Podium'.
Types of property 'on' are incompatible.
Type '{ (criteria: "peek", listener: PeekListener): void; (criteria: "finish", listener: (data: undefined) => void): void; }' is not assignable to type '(criteria: string | CriteriaObject, listener: Listener, context?: Tcontext) => this'.
Types of parameters 'listener' and 'listener' are incompatible.
Types of parameters 'tags' and 'encoding' are incompatible.
Type 'string' is not assignable to type '{ [tag: string]: true; }'.
node_modules/@types/hapi__hapi/index.d.ts(2359,43): error TS2314: Generic type 'Listener' requires 1 type argument(s).
node_modules/@types/hapi__hapi/index.d.ts(2377,18): error TS2430: Interface 'ServerEvents' incorrectly extends interface 'Podium'.
The types returned by 'on(...)' are incompatible between these types.
Type 'void' is not assignable to type 'this'.
'this' could be instantiated with an arbitrary type which could be unrelated to 'void'.
node_modules/@types/hapi__hapi/index.d.ts(2443,44): error TS2314: Generic type 'Listener' requires 1 type argument(s).
The command '/bin/sh -c npm install -q && ./node_modules/typescript/bin/tsc -p . && ./node_modules/tslint/bin/tslint -p ./tsconfig.json && npm prune --production && rm -f tslint.json && rm -f tsconfig.json && rm -rf src' returned a non-zero code: 2

Error with IoTCentralBridge connectivity

Hi @ebertrams and @sseiber,

I have just started a new IoT Central application and when I look at the IoTCentralBridge logs I get the following:

[2020-11-18T11:14:21+0000] ERROR : [IoTCentralService,error] IoT Central connection error: mqtt.js returned Failure on first connection (Not authorized): getaddrinfo ENOTFOUND pm-cs-anly-svr01 error

Can you shed some light on why this may be happening? This causes the device to constantly restart, but I was able to find the device.restart call and comment it out for troubleshooting.

I have restarted iotedge and the module a few times with no luck.

IoTEdge Version: iotedge 1.0.10.2

Kind Regards,

Yusuf

Shutdown when running DeepStream container

Hi,
When I modify the config file and restart the Azure IoT Edge runtime (i.e., sudo systemctl restart iotedge), the Jetson Nano shuts down.

I inspected the deepstream container with sudo docker logs --tail 200 deepstream, and I can see the object detection model is already running.

But after 10 seconds, the device just shuts down.

Do you know what is the issue?

Many thanks,
Hieu

export data from iot central app to blob storage

Hi,

I am trying to export the data coming from the IoT Central application to blob storage. I have set everything up per the docs, but when I go to view the telemetry data in blob storage I get the following:

{"EnqueuedTimeUtc":"2020-06-18T06:04:34.5250000Z","Properties":{},"SystemProperties":{"connectionDeviceId":"NanoTest01","connectionModuleId":"IoTCentralBridge","connectionAuthMethod":"{\"scope\":\"module\",\"type\":\"sas\",\"issuer\":\"iothub\",\"acceptingIpFilterRule\":null}","connectionDeviceGenerationId":"637269117958401923","enqueuedTime":"2020-06-18T06:04:34.5250000Z"},"Body":"eyJ0bFN5c3RlbUhlYXJ0YmVhdCI6MX0="}

Do you have any insight as to why the "Body" field is displaying this string instead of all the telemetry data?

Thanks

How to add GPU in deployment Manifest when using Moby instead of Docker-CE

Hi @ebertrams and @sseiber,

I am trying to install IoT Edge on an AMD64 machine and run DeepStream as well. When I install the Moby engine I cannot use runtime: nvidia in my deployment manifest.

I have seen that in IoT Edge version 1.0.10.1 and upwards, you can add the GPU in the create options. I have tried the following but I am getting errors. Can you please shed some light on this?

Using the options below:
"createOptions": {​​​​"HostConfig": {​​​​"DeviceRequests": [{​​​​"Capabilities": ["gpu"], "Count": -1}​​​​]}​​​

I get the following error:
One or more errors occurred. (Error calling Create module hydrogen_counting: Could not create module hydrogen_counting caused by: Could not create module hydrogen_counting caused by: json: cannot unmarshal string into Go struct field DeviceRequest.HostConfig.DeviceRequests.Capabilities of type []string) ---> Microsoft.Azure.Devices.Edge.Agent.Edgelet.EdgeletCommunicationException- Message:Error calling Create module hydrogen_counting: Could not create module hydrogen_counting caused by: Could not create module hydrogen_counting caused by: json: cannot unmarshal string into Go struct field DeviceRequest.HostConfig.DeviceRequests.Capabilities of type []string, StatusCode:500, at: at Microsoft.Azure.Devices.Edge.Agent.Edgelet.Version_2020_07_07.ModuleManagementHttpClient.HandleException(Exception exception, String operation) in /home/vsts/work/1/s/edge-agent/src/Microsoft.Azure.Devices.Edge.Agent.Edgelet/version_2020_07_07/ModuleManagementHttpClient.cs:line 235

I have also followed instructions from this page: #3183
and implemented this code:
"createOptions": { "HostConfig": { "DeviceRequests": [ { "Driver": "", "Count": -1, "DeviceIDs": null, "Capabilities": [ [ "gpu" ] ], "Options": {} } ] } }
But I get the following error:
One or more errors occurred. (Error calling start module hydrogen_counting: Could not start module hydrogen_counting caused by: Could not start module hydrogen_counting caused by: could not select device driver "" with capabilities: [[gpu]]) ---> Microsoft.Azure.Devices.Edge.Agent.Edgelet.EdgeletCommunicationException- Message:Error calling start module hydrogen_counting: Could not start module hydrogen_counting caused by: Could not start module hydrogen_counting caused by: could not select device driver "" with capabilities: [[gpu]], StatusCode:500, at: at Microsoft.Azure.Devices.Edge.Agent.Edgelet.Version_2020_07_07.ModuleManagementHttpClient.HandleException(Exception exception, String operation) in /home/vsts/work/1/s/edge-agent/src/Microsoft.Azure.Devices.Edge.Agent.Edgelet/version_2020_07_07/ModuleManagementHttpClient.cs:line 235 at Microsoft.Azure.Devices.Edge.Agent.Edgelet.Versioning.ModuleManagementHttpClientVersioned.Execute[T](Func1 func, String operation) in /home/vsts/work/1/s/edge-agent/src/Microsoft.Azure.Devices.Edge.Agent.Edgelet/versioning/ModuleManagementHttpClientVersioned.cs:line 144

I am running IoT Edge version iotedge 1.0.10.2. I am trying to build on an AMD64 machine, and I have installed the Moby engine.

Kind Regards,

Yusuf

Operating the solution with IoT Central app - instructions outdated

Hi, I tried to follow the instructions regarding Operating the solution with IoT Central app but I was not able to; the instructions seem outdated, and even the deployment template is wrong. I tried to upload it and it doesn't accept the parameters in the CreateOptions section, see the error below:

{"code":"400.020.006.800","message":"Error validating uploaded manifest","innerError":{"code":"500.020.999.999","message":"An unexpected error occurred","context":[{"dataPath":"/modulesContent/$edgeAgent/properties.desired/systemModules/edgeHub/settings/createOptions","message":"type should be string"},{"dataPath":"/modulesContent/$edgeAgent/properties.desired/modules/deepstream/settings/createOptions","message":"type should be string"},{"dataPath":"/modulesContent/$edgeAgent/properties.desired/modules/IoTCentralBridge/settings/createOptions","message":"type should be string"}]}}

Would you be so kind as to review the process?

Regards.

Certificate Expired

Hi @ebertrams and @sseiber,

I recently came across an issue with a certificate causing our IoT Central application to stop working.
I have attached the logs from the edgeHub and edgeAgent from one of our applications. (Please scroll down to the last section of the logs)

I have also attached the logs from the IoTCentralBridge module of another application experiencing a certificate issue.

edgeHubLogs.txt
edgeAgentLogs.txt
IoTCentralBridgeLogs.txt

Do you know why this might be happening?

Kind Regards,

Yusuf Abdulhussein

Cannot View messages being sent to IOT Hub

Hi,

I have followed the instructions on this GitHub repo and created the device using IoT Central.
I would like to monitor the messages being sent to see what the data is. I have created an IoT Edge device on IoT Hub with the same connection parameters as the one from IoT Central. I have clicked on monitor built-in events but there are no messages being sent.

Is this a feature that is not available? I do see a line in the deployment.json that specifically states to route messages from IoTCentralBridge to upstream (FilterToIotHub)?

Thanks

Consider explaining what "jetcard" is used for

I think it would make sense to explain why the system needs to have jetcard installed, and what it adds to the stock Jetson Nano distro.

FWIW, since the jetcard README says the following, and since we explicitly don't install PyTorch and TensorFlow, it is hard to comprehend what/why we are installing.

JetCard is a system configuration that makes it easy to get started with AI. It comes pre-loaded with

* A Jupyter Lab server that starts on boot for easy web programming
* A script to display the Jetson's IP address (and other stats)
* The popular deep learning frameworks PyTorch and TensorFlow

Whitespace character breaks parsing of classifier labels in IoTCentralBridge

I noticed that when using a Custom Vision AI model, the output of DeepStream seems to append a "\r" to the secondary classifier name. This has a side effect in the IoTCentralBridge which causes it not to send telemetry for the secondary classifier.


It seems the whitespace character is being introduced through labels.txt in /data/misc/storage/ONNXSetup/detector


Notice that the primary detector is not affected.


Converting labels.txt to use Unix line endings with dos2unix and restarting the deepstream container alleviates the issue, but it will return if the IoTCentralBridge is restarted, as it will re-pull the Custom Vision model, which comes down with Windows line endings.
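For reference, the workaround described above boils down to something like this (paths as given in this issue; deepstream is the container name from the deployment manifest):

    sudo apt-get install -y dos2unix
    sudo dos2unix /data/misc/storage/ONNXSetup/detector/labels.txt
    sudo docker restart deepstream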

Issues updating Device Properties

Hi @ebertrams and @sseiber ,

I am currently experiencing issues updating the device properties. I have tried three different ways and all three got errors.

  1. Going to the Device Manage tab and updating the property:
    update property error

  2. Creating a job in the IoT Central Application to update a property:
    failedJob

  3. Running a function app to send a PUT Request to update the property:
    {
    "error": {
    "code": "InternalServerError",
    "message": "Something went wrong on our end. Please try again. You can contact support at https://aka.ms/iotcentral-support. Please include the following information. Request ID: ovtnqe, Time: Fri, 09 Oct 2020 00:08:44 GMT.",
    "requestId": "ovtnqe",
    "time": "Fri, 09 Oct 2020 00:08:44 GMT"
    }
    }

All three of these options were working normally for the past few days and months; I am just having this issue today.

Cannot install moby-engine on Arm64 device

Hi @ebertrams and @sseiber,

I am trying to install IoT Edge on an arm64 device running NVIDIA JetPack 4.4, but I am getting errors when I try to install the moby-engine.
It has something to do with having docker and nvidia-docker2 already installed on the device. I require these to run DeepStream.

The error is:

ddi@ddi-nx:~$ sudo apt-get install moby-engine
[sudo] password for ddi: 
Reading package lists... Done
Building dependency tree       
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
 moby-engine : Depends: moby-containerd (>= 1.2) but it is not going to be installed
               Depends: moby-runc (>= 1.0.0~rc10) but it is not going to be installed
E: Unable to correct problems, you have held broken packages.

I have followed the instructions provided here: https://docs.microsoft.com/en-us/azure/iot-edge/how-to-install-iot-edge-linux

Do you know how I can overcome this and get iotedge to work?

Kind Regards,

Possibly incorrect source id

Following on from the Demo Mode not being flagged, I managed to modify my DSConfig for it to work. I don't think the issue was the Demo Mode flag but the [source*] type within DSConfig.

In the file it mentions #4 is of type RTSP; however, in every other DeepStream config I have always set this to #2. Changing DSConfig to this value and restarting the deepstream module allowed me to view the live camera.
