aws-samples / aws-panorama-samples

This repository contains samples that demonstrate various aspects of the AWS Panorama device and the Panorama SDK.

Home Page: https://docs.aws.amazon.com/panorama/

License: MIT No Attribution

Languages: Jupyter Notebook 17.17%, Python 82.83%
Topics: aws, edge, computer-vision


AWS Panorama Samples and Test Utility

Introduction

AWS Panorama is a machine learning appliance and SDK that let you add computer vision (CV) to your on-premises cameras or to new AWS Panorama-enabled cameras. AWS Panorama enables you to make real-time decisions that improve your operations by giving you compute power at the edge.

This repository contains sample applications for AWS Panorama, along with a Test Utility that lets you run Panorama applications in a simulated environment without a real Panorama appliance.

About Test Utility

The Test Utility is a set of Python libraries and command-line tools that lets you test-run Panorama applications without a Panorama appliance. With the Test Utility, you can start running the sample applications and developing your own Panorama applications before acquiring a real Panorama appliance. The sample applications in this repository also use the Test Utility.

For more about the Test Utility and its current capabilities, please refer to the Introducing AWS Panorama Test Utility document.

To set up your environment for the Test Utility, please refer to Test Utility environment setup.

To learn how to use the Test Utility, please refer to How to use Test Utility.
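
For orientation, every sample follows the same application-code pattern that the Test Utility executes. Below is a minimal sketch, assuming the panoramasdk.node interface that these samples use; the class body and label text are illustrative, not taken from a specific sample:

import panoramasdk

class Application(panoramasdk.node):
    def process_streams(self):
        # Get the latest frame from each attached camera stream.
        streams = self.inputs.video_in.get()
        for stream in streams:
            # stream.image is the frame as a numpy array; run inference here,
            # then draw results onto the frame for the HDMI output.
            stream.add_label('hello', 0.1, 0.1)
        # Hand the annotated frames back to the runtime (or the Test Utility).
        self.outputs.video_out.put(streams)

def main():
    app = Application()
    while True:
        app.process_streams()

main()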

List of Samples

| Application | Description | Framework | Use case | Complexity | Model | Python Version |
| --- | --- | --- | --- | --- | --- | --- |
| People Counter | A sample computer vision application that counts the number of people in each frame of a streaming video (start with this one) | MXNet | Object Detection | Easy | Download | 3.8 |
| Car Detector and Tracker | A sample computer vision application that detects and tracks cars | TensorFlow | Object Detection | Medium | Download | 3.7 |
| Pose Estimation | A sample computer vision application that detects people and estimates their poses | MXNet | Pose Estimation | Advanced | yolo3_mobilenet1.0_coco, simple_pose_resnet152_v1d | 3.7 |
| Object Detection TensorFlow SSD (TF37_opengpu) | Shows how to run a TF SSD MobileNet model using TensorFlow | TensorFlow (Open GPU) | Object Detection (BYO Container) | Advanced | N/A | 3.7 |
| Object Detection PyTorch Yolov5s (PT37_opengpu) | Shows how to run your own YoloV5s model using PyTorch | PyTorch (Open GPU) | Object Detection (BYO Container) | Advanced | N/A | 3.7 |
| Object Detection ONNX Runtime Yolov5s (ONNX_opengpu) | Shows how to run your own YoloV5s model using ONNX Runtime | ONNX Runtime (Open GPU) | Object Detection (BYO Container) | Advanced | N/A | 3.8 |
| Object Detection with Yolov5 ONNX model optimized for TensorRT (ONNX2TRT_opengpu) | Shows how to run a Yolov5 ONNX model optimized for TensorRT | TensorRT Runtime (Open GPU) | Object Detection (BYO Container) | Advanced | N/A | 3.6 |
| Object Detection using TensorRT network definition APIs (TRTPT36_opengpu) | Shows how to get inference from a YoloV6s model optimized using the TensorRT network definition APIs | TensorRT (Open GPU) | Object Detection (BYO Container) | Advanced | N/A | 3.6 |
| Inbound Networking | Shows how to enable an inbound networking port on the Panorama device and run a simple HTTP server within a Panorama application | N/A | Network | Easy | N/A | 3.7 |
| MOT Analysis | Shows how to build an end-to-end multi-object tracking solution using a pretrained YOLOX model, Kinesis Video Streams upstreaming via GStreamer, and a dashboard | PyTorch | Object Tracking | Advanced | YOLOX | 3.7 |
| Kinesis Video Streams | Shows how to build an application that pushes multiple video streams from Panorama to Amazon Kinesis Video Streams with AWS IoT | N/A | Media | Advanced | N/A | 3.8 |

Running the Samples

Step 1: Go to aws-panorama-samples/samples and open your choice of project.
Step 2: Open the .ipynb notebook and follow the instructions in the notebook.
Step 3: To make any changes, edit the corresponding node's package.json or the graph.json in the application folder (see the sketch below).
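
To illustrate Step 3, here is a hypothetical way to inspect and edit an application's graph.json from a notebook. The path and node names are placeholders rather than a specific sample's; the nodeGraph/edges layout follows the Panorama application manifest format:

import json

path = 'my_app/graphs/my_app/graph.json'
with open(path) as f:
    graph = json.load(f)

# Edges wire a producer node's output to a consumer node's input,
# e.g. a camera node's video_out into the code node's video_in.
for edge in graph['nodeGraph']['edges']:
    print(edge['producer'], '->', edge['consumer'])

with open(path, 'w') as f:
    json.dump(graph, f, indent=4)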

For more information, check out the documentation for the AWS Panorama DX CLI here

Documentation

Tools

A list of tools that ease Panorama application development. Please see the corresponding tool page for details.

Getting Help

We use AWS Panorama Samples GitHub issues for tracking questions, bugs, and feature requests.

License

This library is licensed under the MIT-0 License.


Contributors

abest0, ahmadzaheera, amazon-auto, animesh-bhadouria, aws-tec, dependabot[bot], ericliu2000, hightensan, kevhsu-k, lisaleejz, mtalreja-a, mwunderl, niklongstone, ranrotx, scottrfrancis, shimomut, suryakari, trellixvulnteam, ulmasov, zxinming


Issues

Test Utility and ARM preferences for Kinesis sample - CloudFormation link & template: instance Create fails in region: ca-central-1

The CF template itself does include an AMI for ca-central-1.

I should first ask: is this Panorama product & repo still active?

Running the template fails at the Create Instance step ... so forget the preferred env for the moment ... I tried using plain JupyterLab:

  • I don't have time to become a CloudFormation expert

  • Despite not being able to have the preferred Test Utility environment, I can run & deploy most of the samples in a CPU-only JupyterLab notebook instance. They have all been x86_64 instances, and they are all able to deploy the samples and connect to my RTSP camera, as expected.

As far as I can tell, SageMaker images in ca-central-1 do not support ARM in any case! Is this a Canada availability issue? Which should I select when I create my notebook, if ARM is preferred? Your GitHub note strongly suggests it: "It is highly recommended to use ARM64 based EC2 instance for the Test Utility."

  • I now need to get the Kinesis multi-camera sample to deploy. The notebook steps all succeed for build, package, etc. It fails to deploy, despite all cameras having been validated in other sample tests (connection strings & usernames/passwords):
    • samples/kinesis_video_streams:
    • I am building & packaging on x86_64:
      • Could this be the cause of the failure? There are no debug hints in the AWS console, such as "ARM machine cannot load binaries from x86_64 build" (... as sub-parts are gstreamer & KVS SDK libs).
      • Any suggestions for deploying this Kinesis sample? What is a simple way to set up a preferred ARM environment for JupyterLab, if that is the likely cause?
      • Any suggestions for debugging the workflow?

Thanks for your time.

car_tracker_tutorial.ipynb failed with S3 404 error due to the wrong path `--model-s3-uri` of `panorama-cli add-raw-model`

In the car_tracker_tutorial.ipynb sample notebook, the cell below fails with an S3 404 error.

!cd ./car_tracker_app && panorama-cli add-raw-model \
    --model-asset-name {model_asset_name} \
    --model-s3-uri s3://{S3_BUCKET}/{app_name}/{ML_MODEL_FNAME}.tar.gz \
    --descriptor-path {model_descriptor_path}  \
    --packages-path {model_package_path}

The cause is that --model-s3-uri points to the wrong path, which does not contain {model_node_name}.

  • Fail : --model-s3-uri s3://{S3_BUCKET}/{app_name}/{ML_MODEL_FNAME}.tar.gz
  • Success : --model-s3-uri s3://{S3_BUCKET}/{app_name}/{model_node_name}/{ML_MODEL_FNAME}.tar.gz

The SageMaker compilation job's output is specified as s3://{S3_BUCKET}/{app_name}/{model_node_name}/, so the path should contain the /{model_node_name}/ prefix.

$ aws sagemaker describe-compilation-job --compilation-job-name Comp-Job20230714012911AM --region us-east-1 | jq '.OutputConfig'
{
  "S3OutputLocation": "s3://test-aws-panorama-samples-bucket-us-east-1/car_tracker_app/model_node/",
  "TargetPlatform": {
    "Os": "LINUX",
    "Arch": "X86_64"
  }
}
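
With that fix applied, the notebook cell becomes:

!cd ./car_tracker_app && panorama-cli add-raw-model \
    --model-asset-name {model_asset_name} \
    --model-s3-uri s3://{S3_BUCKET}/{app_name}/{model_node_name}/{ML_MODEL_FNAME}.tar.gz \
    --descriptor-path {model_descriptor_path}  \
    --packages-path {model_package_path}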


Deployment error - GPU containers - PT37 sample

Hi,

I am trying to deploy the PT37 model (which uses the GPU) on an AWS Panorama device that already has people_counter_app and pose_estimation deployed.
The deployment error given is:
" Containers requesting GPU privileges cannot coexist with existing applications on the device that have model nodes. Failed validation for account:{accound_id}, device:arn:aws:panorama:us-east-1:{accound_id}:device/device-*"

Can't we deploy models that use/don't use the GPU on the same machine at the same time?

Is it that we can deploy either models that don't use the GPU or models that do, but not both, on the same machine?

Please advise, as there is no documentation on this. I was under the impression that we could deploy multiple models (some of which might use the GPU) on the same machine concurrently!

Thanks.

PT37_opengpu deployment error in yolov5s_pt37_app_node

Hi,

I have been trying to deploy the PT37_opengpu sample (through PT37_opengpu/pytorch_example.ipynb) but get the error:

An error occurred (ValidationException) when calling the CreateApplicationInstance operation: {"reason":"No registered package version found for accountId:{account_id}, packageName:test_rtsp_camera_lab3, packageVersion:2.0","message":"The input fails to satisfy the constraints specified by an AWS service.","fields":[]}

The CloudWatch logs show a warning, though:

2023-02-14 11:53:09.240 WARN loadApplicationGraph(338) Unable to open node state override file: /data/cloud/graphs/applicationInstance-atbspxydjsr6mcfmulyowfleiq/nodeDesiredStateOverrides.json

2023-02-14 11:53:09.240 WARN loadApplicationGraph(403) Empty node state override file /data/cloud/graphs/applicationInstance-atbspxydjsr6mcfmulyowfleiq/nodeDesiredStateOverrides.json

2023-02-14 11:53:09.240 WARN loadApplicationGraph(410) Recreating default node state override file /data/cloud/graphs/applicationInstance-atbspxydjsr6mcfmulyowfleiq/nodeDesiredStateOverrides.json

The issue included a screenshot of the project's tree structure.

The final error in the yolov5s_pt37_app_node log group (https://us-east-1.console.aws.amazon.com/cloudwatch/home?region=us-east-1#logsV2:log-groups/log-group/$252Faws$252Fpanorama$252Fdevices$252Fdevice-d3bqsk4okufrmw2nuttbjsibxq$252Fapplications$252FapplicationInstance-atbspxydjsr6mcfmulyowfleiq/log-events/yolov5s_pt37_app_node) reads:

botocore.exceptions.CredentialRetrievalError: Error when retrieving credentials from container-role: Error retrieving metadata: Received non 200 response (500) from ECS metadata: "Unable to fetch credentials"

Requesting help in resolving this.

Thanks.

Unknown service: 'panorama'

Hello, I successfully provisioned an EC2 instance with the CF stack linked in the README. I managed to connect to JupyterLab and tried to run the people_counter_tutorial notebook. However, one of the first lines of the first cell:

panorama_client = boto3.client('panorama')

fails with an error saying UnknownServiceError: Unknown service: 'panorama'. I also tried upgrading boto3, but it's already the newest version. Do I need to do something special with my AWS account for the panorama service to be enabled?
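
For reference, a quick sanity check (a sketch) of whether the installed SDK knows about the service at all; UnknownServiceError usually means the bundled botocore service models predate the Panorama launch:

import boto3
import botocore

# If 'panorama' is missing from this list, the installed botocore is too old
# and needs upgrading in the environment the notebook kernel actually uses.
print(botocore.__version__)
print('panorama' in boto3.session.Session().get_available_services())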

Deployment error!

Hi,

I am trying to install the basic people counter app on the Panorama device, as per aws-panorama-samples/people_counter_tutorial.ipynb.

I am getting the following set of errors at the deployment stage.

time 0: ERROR fetchFromS3(76) Error: GetObject: : curlCode: 28, Timeout was reached

time 1: ERROR downloadArtifactsS3(76) download file failed: {accountid}/payloads/device-*/deployments/xxxabc.json

time 2:ERROR downloadApplicationGraph(102) Downloading application payload for application: applicationInstance-* failed!

time 3: ERROR installApplicationTask(264) Failed to install application: applicationInstance-* :
EXCEPTION THROWN:
at <time 3> ERROR loadApplicationGraph(301) Failed to install application:Unable to open application graph file: /data/cloud/graphs/applicationInstance-*/graph.json:Please redeploy the application or reset your device. If issue persists, ask Panorama team for assistance

The device is online the whole time, and all other prerequisites are satisfied. I have tried many times but still get the same error.
I would appreciate some help from the AWS Panorama team in this regard.

Thanks.

How to pump streams into KVS?

Hello,

Playing with these boxes, and they look appealing. I'm noticing that the current architecture is geared toward processing things on a frame-by-frame basis. I'm interested in pumping data into KVS, but KVS only accepts h264/encoded stream data. This is normally very easy when I have an RTSP URL, because I can use the KVS GStreamer sink to just pipe them together. However, since I don't have access to the underlying URL (I can get stream_url but no credentials), I have to work with single image frames.

The docs say:

AWS Panorama application results can be integrated with on-premises line of business applications for automation and can be routed to services such as Amazon Simple Storage Service (S3), Amazon Kinesis Video Streams, or Amazon CloudWatch for deriving actionable insights to drive process improvements.

Is there a preferred approach to get data pumped into KVS?

Thanks!
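
One hedged approach, given the single-frame constraint described above, is to re-encode frames through a local GStreamer pipeline that ends in the KVS producer sink. This is a sketch only: it assumes OpenCV built with GStreamer support and the KVS producer SDK's kvssink plugin installed in the container, and the stream name, region, resolution, and frame rate are placeholders.

import cv2

pipeline = (
    'appsrc ! videoconvert ! x264enc tune=zerolatency ! h264parse '
    '! kvssink stream-name=my-panorama-stream aws-region=us-east-1'
)
# fourcc is 0 because the GStreamer pipeline handles the encoding.
writer = cv2.VideoWriter(pipeline, cv2.CAP_GSTREAMER, 0, 30.0, (1280, 720))

def push_frame(frame):
    # frame: the BGR numpy array taken from the Panorama media object.
    writer.write(frame)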

Using real RTSP streams from IP cameras in change_video_source(video_file):

Hello Dear AWS Panorama,

I am using the samples provided to learn the Panorama SDK and would like to connect real RTSP streams from my IP camera (AXIS) into the examples. We do not have a real Panorama appliance yet.
Is there any way to do that with the emulator?
Thanks in advance
Ari Telias
AiDANT.ai
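
Since the Test Utility consumes video files, one workaround is to record a short clip from the RTSP camera and point change_video_source() at the resulting file. A hedged sketch; the URL, resolution, and duration below are placeholders:

import cv2

cap = cv2.VideoCapture('rtsp://user:password@192.168.1.10/axis-media/media.amp')
out = cv2.VideoWriter('camera_clip.mp4', cv2.VideoWriter_fourcc(*'mp4v'), 25.0, (1280, 720))
for _ in range(25 * 30):  # roughly 30 seconds at 25 fps
    ok, frame = cap.read()
    if not ok:
        break
    out.write(cv2.resize(frame, (1280, 720)))
cap.release()
out.release()
# Then: jupyter_utils.change_video_source('camera_clip.mp4')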

Errors in setup scripts on WSL Ubuntu 20.04

Following the instructions found here.

First, python is not linked as python3; a simple fix is to change the line in 0-test-compile.sh to:
python3 -m py_compile packages/*-SAMPLE_CODE-1.0/application.py

Second, 3-build-container.sh fails on the gzip command:

=> => naming to docker.io/library/code_asset                                                                                                              0.0s

Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them
docker export --output=code_asset.tar $(docker create code_asset:latest)
WARNING: The requested image's platform (linux/arm64/v8) does not match the detected host platform (linux/amd64) and no specific platform was requested
gzip -9 code_asset.tar
gzip: code_asset.tar.gz: Operation not permitted
gzip: code_asset.tar.gz: Operation not permitted
Error while compressing the exported docker tar file

AMIs do not exist

Please provide an AMI stack for ap-southeast-2. The AMI IDs provided in the CloudFormation template do not exist in the ap-southeast-2 region.

Yolov5 SageMaker Neo Compilation failed

I have followed the AWS Panorama Object Detection with YOLOv5 Example in this repo. I ran the Jupyter notebook on a SageMaker instance and followed the code (without changing any model parameters, just some paths or directory names) up to the point where you create the traced model and then export it. Once the YoloV5 model was saved as a tar file, I uploaded it to an AWS S3 bucket.
Then in SageMaker Neo I started a compilation job.
The parameters used were:
Location of model artifacts: the S3 URL to the model tar file
Data input configuration: [[1,3,640,640]]
Machine learning framework: PYTORCH
Stopping condition: 900
Target device: jetson_xavier
Output location: S3 bucket

Once the compilation job starts, this compilation error shows up:

Failure reason ClientError: InputConfiguration: TVM cannot convert the PyTorch model. Invalid model or input-shape mismatch. Make sure that inputs are lexically ordered and of the correct dimensionality. Traceback (most recent call last):\n [bt] (4) /opt/amazon/lib/python3.6/site-packages/tvm/libtvm.so(TVMFuncCall+0x61) [0x7fa82733bf41]\n [bt] (3) /opt/amazon/lib/python3.6/site-packages/tvm/libtvm.so(+0x1ada205) [0x7fa8271fd205]\n [bt] (2) /opt/amazon/lib/python3.6/site-packages/tvm/libtvm.so(tvm::runtime::TVMMovableArgValueWithContext_::operator tvm::runtime::Array<tvm::RelayExpr, void><tvm::runtime::Array<tvm::RelayExpr, void> >() const+0x5e) [0x7fa82720359e]\n [bt] (1) /opt/amazon/lib/python3.6/site-packages/tvm/libtvm.so(tvm::runtime::Array<tvm::RelayExpr, void> tvm::runtime::TVMPODValue_::AsObjectRef<tvm::runtime::Array<tvm::RelayExpr, void> >() const+0x413) [0x7fa826d0b853]\n [bt] (0) /opt/amazon/lib/python3.6/site-packages/tvm/libtvm.so(+0x15ddf7f) [0x7fa826d00f7f]\n File "/tvm/include/tvm/runtime/packed_func.h", line 687\nTVMError: In function relay.ir.Tuple: error while converting argument 0: [13:24:41] /tvm/include/tvm/runtime/packed_func.h:1564: \n---------------------------------------------------------------\nAn internal invariant was violated during the execution of TVM.\nPlease read TVM\'s error reporting guidelines.\nMore details can be found here: https://discuss.tvm.ai/t/error-reporting/7793.\n---------------------------------------------------------------\n Check failed: !checked_type.defined() == false: Expected Array[RelayExpr], but got Array[index 1: Array]\n
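
For context, the "traced model" step referenced above boils down to torch.jit.trace plus packaging into a tarball for Neo. A minimal sketch, in which `model` and the file names are placeholders rather than the notebook's exact code:

import tarfile
import torch

model.eval()
example = torch.zeros(1, 3, 640, 640)  # must match the Data input configuration
traced = torch.jit.trace(model, example)
traced.save('model.pth')

with tarfile.open('model.tar.gz', 'w:gz') as tar:
    tar.add('model.pth')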


b and r channels reversed in tutorial

In people_detection_tutorial.ipynb, the preprocess function uses cv2.split() to extract the individual RGB channels from the image:

r, g, b = cv2.split(img)

However, OpenCV stores images in BGR order, so it should be:

b, g, r = cv2.split(img)
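
An equivalent fix, assuming img is the BGR frame the preprocess function receives, is to convert the color order once and then split in RGB order:

import cv2

img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
r, g, b = cv2.split(img_rgb)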

Few suggested changes/corrections

1- Instructions in README.md and Jupyter Notebooks. The aws configure command is only required to set the region. Adding the credentials will override the temporary credentials from the role attached to the EC2 instance.

2- In the CloudFormation template, the default value for the KeyName should be removed.

3- In the Jupyter Notebook, under "Notebook Parameter", the string "PLEASE ENTER YOUR DEVICE ID" may confuse the user into entering the device ID there (replacing the text) instead of on the previous line.

4- Instructions in the README.md. It would be nice to note that the test utility requires a correct device ID for the deployment part, so a customer who does not own a device can try all cells before the deployment cell by using any fake device ID.
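
Regarding point 1, a hedged illustration of configuring only the region so that the EC2 instance role's temporary credentials stay in effect (the region value is illustrative):

import boto3

# Passing region_name explicitly avoids running `aws configure` at all,
# so no static credentials are written that would shadow the instance role.
panorama_client = boto3.client('panorama', region_name='us-east-1')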

TRTPT36_opengpu sample panorama-cli-dev command not found

In trtpt_example.ipynb, an error occurs when executing the following line

!cd ./trtpt_36_2_app && pwd && panorama-cli-dev setup-panorama-service-cli --region PDX --stage Gamma && panorama-cli-dev package-application

because the panorama-cli-dev command is not found.

If panorama-cli-dev is changed to panorama-cli, it complains that setup-panorama-service-cli is not a valid option.

Commands in run_after_clone.sh script in panorama_sdk are incorrect

In the README.md file, there are instructions to run sh-4.2$ sudo sh aws-panorama-samples/panorama_sdk/run_after_clone.sh but this produces the following:

chown: cannot access ‘AWSOmniCVExamples/’: No such file or directory
chgrp: cannot access ‘AWSOmniCVExamples/’: No such file or directory

It appears that the folder structure has changed. Recommend either removing this step or telling users to manually run the appropriate chown and chgrp commands.

Creating and exporting a Docker image fails; testing the Docker image hits the same issue

[3/3] RUN python3.8 -m pip install opencv-python boto3:
0.250 exec /bin/sh: exec format error


Dockerfile:3

1 | FROM public.ecr.aws/panorama/panorama-application/sdkv1/python3.8/aarch64:latest
2 | COPY src /panorama
3 | >>> RUN python3.8 -m pip install opencv-python boto3
4 |

ERROR: failed to solve: process "/bin/sh -c python3.8 -m pip install opencv-python boto3" did not complete successfully: exit code: 1
Error while creating and exporting a docker image

pose_estimation sample compile error

Repro steps

  1. Create a Jupyter instance through the provided CF template for running the Test Utility.

  2. Open the pose_estimation sample and run the first cell, which contains 'import gluoncv'.

  3. The import fails because the required packages are not installed.

  4. pip install gluoncv mxnet (1.9.0)

  5. 'libarmpl_lp64_mp.so' cannot be located when importing gluoncv again.

  6. Install the Arm Performance Libraries from https://developer.arm.com/tools-and-software/server-and-hpc/downloads/arm-performance-libraries and add the lib path to LD_LIBRARY_PATH.

  7. Model import then fails on
    people_detection_model = gluoncv.model_zoo.get_model('yolo3_mobilenet1.0_coco', pretrained=True)

    message:
    MXNetError: MXNetError: Invalid Parameter format for layout expect int or None but value='', in operator Convolution(name="", stride="(2, 2)", no_bias="True", kernel="(3, 3)", layout="NCHW", num_group="1", dilate="(1, 1)", pad="(1, 1)", num_filter="32")

  8. Downgrading to mxnet==1.6.0 and importing gluoncv again fails with the following message:
    OSError: /usr/local/lib/python3.6/dist-packages/mxnet/libmxnet.so: cannot open shared object file:

But if I try this sample on SageMaker with the conda_python3_mxnet kernel, it does not fail, though it eventually does not run at the testing stage (probably an arch mismatch).

Protobuf update breaks TF37 base image

The May 28 update of Protobuf contains changes that cause TensorFlow to install but then fail to import into Python at runtime when installed as per the TF37 sample. I suggest changing the base Dockerfile in the TF37 example to pin a version of Protobuf (such as 3.20.1) where this is not an issue.

Suggested change to the Dockerfile for the TF37 base image, which is downloaded by panorama_test_utility.download_artifacts_gpu_sample('tensorflow', account_id):

Current:
RUN python3.7 -m pip install --no-cache-dir --verbose future==0.18.2 mock==3.0.5 h5py==2.10.0 keras_preprocessing==1.1.1 keras_applications==1.0.8 gast==0.2.2 futures protobuf pybind11

Suggestion:
RUN python3.7 -m pip install --no-cache-dir --verbose future==0.18.2 mock==3.0.5 h5py==2.10.0 keras_preprocessing==1.1.1 keras_applications==1.0.8 gast==0.2.2 futures protobuf==3.20.1 pybind11

More info:
https://pypi.org/project/protobuf/3.20.1/#history
https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates
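
A quick runtime check (a sketch) inside the container that the pin took effect and TensorFlow imports cleanly:

import google.protobuf
import tensorflow as tf

print(google.protobuf.__version__)  # expect 3.20.1 with the suggested pin
print(tf.__version__)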

PyTorch YoloV5 Model is not getting compiled with Neo.

I have a customer (IHAC) with use cases where they want to build an object detection model using PyTorch YoloV5. They were able to build the model but not to compile it through Neo. Here is the error we are getting:

"ClientError: InputConfiguration: TVM cannot convert the PyTorch model. Invalid model or input-shape mismatch. Make sure that inputs are lexically ordered and of the correct dimensionality. Traceback (most recent call last):\n 2: TVMFuncCall\n 1: std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), tvm::runtime::TypedPackedFunc<tvm::relay::Tuple (tvm::runtime::Array<tvm::RelayExpr, void>, tvm::Span)>::AssignTypedLambda<tvm::relay::{lambda(tvm::runtime::Array<tvm::RelayExpr, void>, tvm::Span)#5}>(tvm::relay::{lambda(tvm::runtime::Array<tvm::RelayExpr, void>, tvm::Span)#5}, std::__cxx11::basic_string<char, std::char_traits, std::allocator >)::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}>::_M_invoke(std::Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)\n 0: tvm::runtime::TVMMovableArgValueWithContext::operator tvm::runtime::Array<tvm::RelayExpr, void><tvm::runtime::Array<tvm::RelayExpr, void> >() const\n 3: TVMFuncCall\n 2: std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), tvm::runtime::TypedPackedFunc<tvm::relay::Tuple (tvm::runtime::Array<tvm::RelayExpr, void>, tvm::Span)>::AssignTypedLambda<tvm::relay::{lambda(tvm::runtime::Array<tvm::RelayExpr, void>, tvm::Span)#5}>(tvm::relay::{lambda(tvm::runtime::Array<tvm::RelayExpr, void>, tvm::Span)#5}, std::__cxx11::basic_string<char, std::char_traits, std::allocator >)::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}>::M_invoke(std::Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)\n 1: tvm::runtime::TVMMovableArgValueWithContext::operator tvm::runtime::Array<tvm::RelayExpr, void><tvm::runtime::Array<tvm::RelayExpr, void> >() const\n 0: tvm::runtime::Array<tvm::RelayExpr, void> tvm::runtime::TVMPODValue::AsObjectRef<tvm::runtime::Array<tvm::RelayExpr, void> >() const\n File "/tvm/include/tvm/runtime/packed_func.h", line 714\nTVMError: In function relay.ir.Tuple: error while converting argument 0: [09:36:18] /tvm/include/tvm/runtime/packed_func.h:1591: \n---------------------------------------------------------------\nAn error occurred during the execution of TVM.\nFor more information, please see: https://tvm.apache.org/docs/errors.html\n---------------------------------------------------------------\n Check failed: (!checked_type.defined()) is false: Expected Array[RelayExpr], but got Array[index 1: Array]\n”

S3UploadFailedError

When running the cell in the people counter example notebook:

s3.meta.client.upload_file('{}/panorama_sdk/models/{}.tar.gz'.format(workdir,ML_MODEL_FNAME), 
                           S3_BUCKET, '{}/{}.tar.gz'.format(project_name, ML_MODEL_FNAME))

I get this error:

S3UploadFailedError: Failed to upload /home/ubuntu/awspanoramasamples/samples/panorama_sdk/models/ssd_512_resnet50_v1_voc.tar.gz to panorama-xyz/people_counter/ssd_512_resnet50_v1_voc.tar.gz: An error occurred (AccessDenied) when calling the CreateMultipartUpload operation: Access Denied

I have tried changing the S3 bucket policy to allow access, but that didn't solve it. My setup is the same as in the tutorial. Any help would be appreciated.
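
A hedged debugging step: AccessDenied on CreateMultipartUpload usually points at the caller's IAM permissions rather than the bucket policy, so first confirm which identity the notebook is actually using:

import boto3

print(boto3.client('sts').get_caller_identity()['Arn'])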

AttributeError: module 'jupyter_utils' has no attribute 'declare_globals'

In the HandWash-Panorama-Examples notebook, running the following cell:


import os
import sys
path = os.path.abspath(os.path.join(os.path.dirname("panorama_sdk"), '../..'))
sys.path.insert(1, path + '/panorama_sdk')
import jupyter_utils

jupyter_utils.declare_globals({'mxnet_modelzoo_example': True,
                               'custom_model': False, 'task': 'Action_Detection', 'framework': 'MXNET'})

jupyter_utils.change_video_source(video_to_use)


AttributeError                            Traceback (most recent call last)
<ipython-input> in <module>
      5 import jupyter_utils
      6
----> 7 jupyter_utils.declare_globals({'mxnet_modelzoo_example': True,
      8                                'custom_model': False, 'task':'Action_Detection', 'framework':'MXNET'})
      9

AttributeError: module 'jupyter_utils' has no attribute 'declare_globals'
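
A hedged debugging step: confirm which jupyter_utils module was actually imported and what it exposes, since an older copy elsewhere on sys.path can shadow the repo's version:

import jupyter_utils

print(jupyter_utils.__file__)
print([name for name in dir(jupyter_utils) if not name.startswith('_')])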
