aws-samples / amazon-rekognition-video-analyzer

A working prototype for capturing frames off of a live MJPEG video stream, identifying objects in near real-time using deep learning, and triggering actions based on an object watch list.


amazon-rekognition-video-analyzer's Introduction

Create a Serverless Pipeline for Video Frame Analysis and Alerting

Introduction

Imagine being able to capture live video streams, identify objects using deep learning, and then trigger actions or notifications based on the identified objects -- all with low latency and without a single server to manage.

This is exactly what this project is going to help you accomplish with AWS. You will be able to set up and run a live video capture, analysis, and alerting solution prototype.

The prototype was conceived to address a specific use case, which is alerting based on a live video feed from an IP security camera. At a high level, the solution works as follows. A camera surveils a particular area, streaming video over the network to a video capture client. The client samples video frames and sends them over to AWS, where they are analyzed and stored along with metadata. If certain objects are detected in the analyzed video frames, SMS alerts are sent out. Once a person receives an SMS alert, they will likely want to know what caused it. For that, sampled video frames can be monitored with low latency using a web-based user interface.

Here's the prototype's conceptual architecture:

Architecture

Let's go through the steps necessary to get this prototype up and running. If you are starting from scratch and are not familiar with Python, completing all steps can take a few hours.

Preparing your development environment

Here’s a high-level checklist of what you need to do to set up your development environment.

  1. Sign up for an AWS account if you haven't already and create an Administrator User. The steps are published here.

  2. Ensure that you have Python 2.7+ and Pip on your machine. Instructions vary based on your operating system and OS version.

  3. Create a Python virtual environment for the project with Virtualenv. This helps keep the project’s Python dependencies neatly isolated from your operating system’s default Python installation. Once you’ve created the virtual environment, activate it before moving on with the following steps.
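For example, on macOS or Linux (assuming you name the environment directory venv):

pip install virtualenv # Install Virtualenv if you don't already have it

virtualenv venv # Create a virtual environment in the venv/ directory

source venv/bin/activate # Activate the environment (leave it later with 'deactivate')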

  4. Use Pip to install the AWS CLI, then configure it (example commands follow this list). It is recommended that the access keys you configure are associated with an IAM User who has full access to the following:

  • Amazon S3
  • Amazon DynamoDB
  • Amazon Kinesis
  • AWS Lambda
  • Amazon CloudWatch and CloudWatch Logs
  • AWS CloudFormation
  • Amazon Rekognition
  • Amazon SNS
  • Amazon API Gateway
  • Creating IAM Roles

The IAM User can be the Administrator User you created in Step 1.
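For reference, the install and configure steps in Step 4 look like this:

pip install awscli # Install the AWS CLI in your virtual environment

aws configure # Prompts for access key ID, secret access key, default region, and output format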

  5. Make sure you choose a region where all of the above services are available. Regions us-east-1 (N. Virginia), us-west-2 (Oregon), and eu-west-1 (Ireland) fulfill this criterion. Visit this page to learn more about service availability in AWS regions.

  6. Use Pip to install Open CV 3 Python dependencies and then compile, build, and install Open CV 3 (required by the Video Cap clients). You can follow this guide to get Open CV 3 up and running on OS X Sierra with Python 2.7. There's another guide for Open CV 3 and Python 3.5 on OS X Sierra. Other guides exist as well for Windows and Raspberry Pi.

  7. Use Pip to install Boto3. Boto is the Amazon Web Services (AWS) SDK for Python, which allows Python developers to write software that makes use of Amazon services like S3 and EC2. Boto provides an easy-to-use, object-oriented API as well as low-level direct access to AWS services.

  8. Use Pip to install Pynt. Pynt enables you to write project build scripts in Python.
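For reference, steps 7 and 8 come down to two Pip commands:

pip install boto3 # Install the AWS SDK for Python

pip install pynt # Install the Pynt build tool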

  9. Clone this GitHub repository. Choose a directory path for your project that does not contain spaces (I'll refer to the full path to this directory as <path-to-project-dir>).
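For example:

git clone https://github.com/aws-samples/amazon-rekognition-video-analyzer.git <path-to-project-dir>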

  10. Use Pip to install pytz. Pytz is needed for timezone calculations. Use the following commands:

pip install pytz # Install pytz in your virtual python env

pip install pytz -t <path-to-project-dir>/lambda/imageprocessor/ # Install pytz to be packaged and deployed with the Image Processor lambda function

Finally, obtain an IP camera. If you don’t have an IP camera, you can use your smartphone with an IP camera app. This is useful in case you want to test things out before investing in an IP camera. Also, you can simply use your laptop’s built-in camera or a connected USB camera. If you use an IP camera, make sure your camera is connected to the same Local Area Network as the Video Capture client.

Configuring the project

In this section, I list every configuration file, parameters within it, and parameter default values. The build commands detailed later extract the majority of their parameters from these configuration files. Also, the prototype's two AWS Lambda functions - Image Processor and Frame Fetcher - extract parameters at runtime from imageprocessor-params.json and framefetcher-params.json respectively.

NOTE: Do not remove any of the attributes already specified in these files.

NOTE: You must set the value of any parameter that has the tag NO-DEFAULT.

config/global-params.json

Specifies “global” build configuration parameters. It is read by multiple build scripts.

{
    "StackName" : "video-analyzer-stack"
}

Parameters:

  • StackName - The name of the stack to be created in your AWS account.

config/cfn-params.json

Specifies and overrides default values of AWS CloudFormation parameters defined in the template (located at aws-infra/aws-infra-cfn.yaml). This file is read by a number of build scripts, including createstack, deploylambda, and webui.

{
    "SourceS3BucketParameter" : "<NO-DEFAULT>",
    "ImageProcessorSourceS3KeyParameter" : "src/lambda_imageprocessor.zip",
    "FrameFetcherSourceS3KeyParameter" : "src/lambda_framefetcher.zip",

    "FrameS3BucketNameParameter" : "<NO-DEFAULT>",

    "FrameFetcherApiResourcePathPart" : "enrichedframe",
    "ApiGatewayRestApiNameParameter" : "VidAnalyzerRestApi",
    "ApiGatewayStageNameParameter": "development",
    "ApiGatewayUsagePlanNameParameter" : "development-plan"
}

Parameters:

  • SourceS3BucketParameter - The Amazon S3 bucket to which your AWS Lambda function packages (.zip files) will be deployed. If a bucket with such a name does not exist, the deploylambda build command will create it for you with appropriate permissions. AWS CloudFormation will access this bucket to retrieve the .zip files for Image Processor and Frame Fetcher AWS Lambda functions.

  • ImageProcessorSourceS3KeyParameter - The Amazon S3 key under which the Image Processor function .zip file will be stored.

  • FrameFetcherSourceS3KeyParameter - The Amazon S3 key under which the Frame Fetcher function .zip file will be stored.

  • FrameS3BucketNameParameter - The Amazon S3 bucket that will be used for storing video frame images. There must not be an existing S3 bucket with the same name.

  • FrameFetcherApiResourcePathPart - The name of the Frame Fetcher API resource path part in the API Gateway URL.

  • ApiGatewayRestApiNameParameter - The name of the API Gateway REST API to be created by AWS CloudFormation.

  • ApiGatewayStageNameParameter - The name of the API Gateway stage to be created by AWS CloudFormation.

  • ApiGatewayUsagePlanNameParameter - The name of the API Gateway usage plan to be created by AWS CloudFormation.

config/imageprocessor-params.json

Specifies configuration parameters to be used at run-time by the Image Processor lambda function. This file is packaged along with the Image Processor lambda function code in a single .zip file using the packagelambda build script.

{
	"s3_bucket" : "<NO-DEFAULT>",
	"s3_key_frames_root" : "frames/",

	"ddb_table" : "EnrichedFrame",

	"rekog_max_labels" : 123,
    "rekog_min_conf" : 50.0,

	"label_watch_list" : ["Human", "Pet", "Bag", "Toy"],
	"label_watch_min_conf" : 90.0,
	"label_watch_phone_num" : "",
	"label_watch_sns_topic_arn" : "",
	"timezone" : "US/Eastern"
}
  • s3_bucket - The Amazon S3 bucket in which Image Processor will store captured video frame images. The value specified here must match the value specified for the FrameS3BucketNameParameter parameter in the cfn-params.json file.

  • s3_key_frames_root - The Amazon S3 key prefix that will be prepended to the keys of all stored video frame images.

  • ddb_table - The Amazon DynamoDB table in which Image Processor will store video frame metadata. The default value, EnrichedFrame, matches the default value of the AWS CloudFormation template parameter DDBTableNameParameter in the aws-infra/aws-infra-cfn.yaml template file.

  • rekog_max_labels - The maximum number of labels that Amazon Rekognition can return to Image Processor.

  • rekog_min_conf - The minimum confidence required for a label identified by Amazon Rekognition. Any labels with confidence below this value will not be returned to Image Processor. (A sketch of how the two rekog_* parameters map onto the Rekognition API follows this parameter list.)

  • label_watch_list - A list of labels to watch for. If any of the labels specified in this parameter are returned by Amazon Rekognition, an SMS alert will be sent via Amazon SNS. The label's confidence must exceed label_watch_min_conf.

  • label_watch_min_conf - The minimum confidence required for a label to trigger a Watch List alert.

  • label_watch_phone_num - The mobile phone number to which a Watch List SMS alert will be sent. Does not have a default value. You must configure a valid phone number adhering to the E.164 format (e.g. +1404XXXYYYY) for the Watch List feature to become active.

  • label_watch_sns_topic_arn - The SNS topic ARN to which you want Watch List alert messages to be sent. The alert message contains a notification text in addition to a JSON formatted list of Watch List labels found. This can be used to publish alerts to any SNS subscribers, such as Amazon SQS queues.

  • timezone - The timezone used to report time and date in SMS alerts. By default, it is "US/Eastern". See this list of country codes, names, continents, capitals, and pytz timezones.
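To make the rekog_max_labels and rekog_min_conf parameters concrete, here is a minimal sketch of the underlying Amazon Rekognition call with the default values above plugged in (illustrative only; the frame file name is hypothetical, and the prototype's actual call lives under lambda/imageprocessor/):

import boto3

rekog_client = boto3.client('rekognition')

# Load a captured frame from disk (hypothetical local file, for illustration).
with open('frame.jpg', 'rb') as image_file:
    image_bytes = image_file.read()

response = rekog_client.detect_labels(
    Image={'Bytes': image_bytes},
    MaxLabels=123,       # rekog_max_labels
    MinConfidence=50.0)  # rekog_min_conf

for label in response['Labels']:
    print(label['Name'], label['Confidence'])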

config/framefetcher-params.json

Specifies configuration parameters to be used at run-time by the Frame Fetcher lambda function. This file is packaged along with the Frame Fetcher lambda function code in a single .zip file using the packagelambda build script.

{
    "s3_pre_signed_url_expiry" : 1800,

    "ddb_table" : "EnrichedFrame",
    "ddb_gsi_name" : "processed_year_month-processed_timestamp-index",

    "fetch_horizon_hrs" : 24,
    "fetch_limit" : 3
}
  • s3_pre_signed_url_expiry - Frame Fetcher returns video frame metadata. Along with the returned metadata, Frame Fetcher generates and returns a pre-signed URL for every video frame. Using a pre-signed URL, a client (such as the Web UI) can securely access the JPEG image associated with a particular frame. By default, the pre-signed URLs expire in 30 minutes. (A short example follows this parameter list.)

  • ddb_table - The Amazon DynamoDB table from which Frame Fetcher will fetch video frame metadata. The default value, EnrichedFrame, matches the default value of the AWS CloudFormation template parameter DDBTableNameParameter in the aws-infra/aws-infra-cfn.yaml template file.

  • ddb_gsi_name - The name of the Amazon DynamoDB Global Secondary Index that Frame Fetcher will use to query frame metadata. The default value matches the default value of the AWS CloudFormation template parameter DDBGlobalSecondaryIndexNameParameter in the aws-infra/aws-infra-cfn.yaml template file.

  • fetch_horizon_hrs - Frame Fetcher will exclude any video frames that were ingested prior to the point in the past represented by (time now - fetch_horizon_hrs).

  • fetch_limit - The maximum number of video frame metadata items that Frame Fetcher will retrieve from Amazon DynamoDB.
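To illustrate s3_pre_signed_url_expiry, here is a minimal sketch of how a pre-signed URL can be generated with Boto3 (bucket and key names are hypothetical; see the Frame Fetcher source for the actual code):

import boto3

s3_client = boto3.client('s3')

url = s3_client.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'my-frame-bucket', 'Key': 'frames/example.jpg'},
    ExpiresIn=1800)  # seconds; matches the default s3_pre_signed_url_expiry

print(url)  # a time-limited HTTPS URL to the frame's JPEG image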

Building the prototype

Common interactions with the project have been simplified for you. Using pynt, the following tasks are automated with simple commands:

  • Creating, deleting, and updating the AWS infrastructure stack with AWS CloudFormation
  • Packaging lambda code into .zip files and deploying them into an Amazon S3 bucket
  • Running the video capture client to stream from a built-in laptop webcam or a USB camera
  • Running the video capture client to stream from an IP camera (MJPEG stream)
  • Building a simple web user interface (Web UI)
  • Running a lightweight local HTTP server to serve the Web UI for development and demo purposes

For a list of all available tasks, enter the following command in the root directory of this project:

pynt -l

The output represents the list of build commands available to you:

pynt -l output

Build commands are implemented as python scripts in the file build.py. The scripts use the AWS Python SDK (Boto) under the hood. They are documented in the following section.

Prior to using these build commands, you must configure the project. Configuration parameters are split across JSON-formatted files located under the config/ directory. Configuration parameters are described in detail in an earlier section.

Build commands

This section describes important build commands and how to use them. If you want to use these commands right away to build the prototype, you may skip to the section titled "Deploy and run the prototype".

The packagelambda build command

Run this command to package the prototype's AWS Lambda functions and their dependencies (Image Processor and Frame Fetcher) into separate .zip packages (one per function). The deployment packages are created under the build/ directory.

pynt packagelambda # Package both functions and their dependencies into zip files.

pynt packagelambda[framefetcher] # Package only Frame Fetcher.

Currently, only Image Processor requires an external dependency, pytz. If you add features to Image Processor or Frame Fetcher that require external dependencies, you should install the dependencies using Pip by issuing the following command.

pip install <module-name> -t <path-to-project-dir>/lambda/<lambda-function-dir>

For example, let's say you want to perform image processing in the Image Processor Lambda function. You may decide to use the Pillow image processing library. To ensure Pillow is packaged with your Lambda function in one .zip file, issue the following command:

pip install Pillow -t <path-to-project-dir>/lambda/imageprocessor #Install Pillow dependency

You can find more details on installing AWS Lambda dependencies here.

The deploylambda build command

Run this command before you run createstack. The deploylambda command uploads the Image Processor and Frame Fetcher .zip packages to Amazon S3 for pickup by AWS CloudFormation while creating the prototype's stack. This command parses the deployment Amazon S3 bucket name and key names from the cfn-params.json file. If the bucket does not exist, the script will create it. This bucket must be in the same AWS region as the AWS CloudFormation stack, or else the stack creation will fail. Without parameters, the command will deploy the .zip packages of both Image Processor and Frame Fetcher. You can specify either “imageprocessor” or “framefetcher” as a parameter between square brackets to deploy an individual function.

Here are sample command invocations.

pynt deploylambda # Deploy both functions to Amazon S3.

pynt deploylambda[framefetcher] # Deploy only Frame Fetcher to Amazon S3.

The createstack build command

The createstack command creates the prototype's AWS CloudFormation stack behind the scenes by invoking the create_stack() API. The AWS CloudFormation template used is located at aws-infra/aws-infra-cfn.yaml under the project’s root directory. The prototype's stack requires a number of parameters to be successfully created. The createstack script reads parameters from both global-params.json and cfn-params.json configuration files. The script then passes those parameters to the create_stack() call.

Note that you must first package and deploy the Image Processor and Frame Fetcher functions to Amazon S3 using the packagelambda and deploylambda commands (documented above) for the AWS CloudFormation stack creation to succeed.

You can issue the command as follows:

pynt createstack
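Under the hood, the task boils down to roughly the following (a simplified sketch; build.py is the authoritative implementation):

import json
import boto3

with open('config/global-params.json') as f:
    global_params = json.load(f)
with open('config/cfn-params.json') as f:
    cfn_params = json.load(f)
with open('aws-infra/aws-infra-cfn.yaml') as f:
    template_body = f.read()

cfn_client = boto3.client('cloudformation')
cfn_client.create_stack(
    StackName=global_params['StackName'],
    TemplateBody=template_body,
    Parameters=[{'ParameterKey': k, 'ParameterValue': v}
                for k, v in cfn_params.items()],
    Capabilities=['CAPABILITY_IAM'])  # the stack creates IAM roles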

Stack creation should take only a couple of minutes. At any time, you can check on the prototype's stack status either through the AWS CloudFormation console or by issuing the following command.

pynt stackstatus

Congratulations! You’ve just created the prototype's entire architecture in your AWS account.

The deletestack build command

The deletestack command, once issued, does a few things. First, it empties the Amazon S3 bucket used to store video frame images. Next, it calls the AWS CloudFormation delete_stack() API to delete the prototype's stack from your account. Finally, it removes any unneeded resources not deleted by the stack (for example, the prototype's API Gateway Usage Plan resource).

You can issue the deletestack command as follows.

pynt deletestack

As with createstack, you can monitor the progress of stack deletion using the stackstatus build command.

The deletedata build command

The deletedata command, once issued, does two things. First, it empties the Amazon S3 bucket used to store video frame images. Then, it deletes all items in the DynamoDB table used to store frame metadata.

Use this command to clear all previously ingested video frames and associated metadata. The command will ask for confirmation [Y/N] before proceeding with deletion.

You can issue the deletedata command as follows.

pynt deletedata

The stackstatus build command

The stackstatus command will query AWS CloudFormation for the status of the prototype's stack. This command is most useful for quickly checking that the prototype is up and running (i.e. status is "CREATE_COMPLETE" or "UPDATE_COMPLETE") and ready to serve requests from the Web UI.

You can issue the command as follows.

pynt stackstatus # Get the prototype's Stack Status

The webui build command

Run this command when the prototype's stack has been created (using createstack). The webui command “builds” the Web UI through which you can monitor incoming captured video frames. First, the script copies the webui/ directory verbatim into the project’s build/ directory. Next, the script generates an apigw.js file containing the API Gateway base URL and the API key to be used by the Web UI for invoking the Frame Fetcher function deployed in AWS Lambda. This file is created in the Web UI build directory.

You can issue the Web UI build command as follows.

pynt webui

The webuiserver build command

The webuiserver command starts a local, lightweight, Python-based HTTP server on your machine to serve the Web UI from the build/web-ui/ directory. Use this command to serve the prototype's Web UI for development and demonstration purposes. You can specify the server’s port as a pynt task parameter, between square brackets.

Here’s a sample invocation of the command.

pynt webuiserver # Starts lightweight HTTP Server on port 8080.
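To serve on a different port, pass it as the task parameter. For example:

pynt webuiserver[8082] # Starts the server on port 8082 instead.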

The videocaptureip and videocapture build commands

The videocaptureip command fires up the MJPEG-based video capture client (source code under the client/ directory). This command accepts, as parameters, an MJPEG stream URL and an optional frame capture rate. The capture rate is defined as 1 frame captured out of every X frames. Captured frames are packaged, serialized, and sent to the Kinesis Frame Stream. The video capture client for IP cameras uses Open CV 3 to do simple image processing operations on captured frame images – mainly image rotation.

Here’s a sample command invocation.

pynt videocaptureip["http://192.168.0.2/video",20] # Captures 1 frame every 20.

On the other hand, the videocapture command (without the trailing 'ip') fires up a video capture client that captures frames from a camera attached to the machine on which it runs. If you run this command on your laptop, for instance, the client will attempt to access its built-in video camera. This video capture client relies on Open CV 3 to capture video from physically connected cameras. Captured frames are packaged, serialized, and sent to the Kinesis Frame Stream (a simplified sketch of this loop follows the sample invocation below).

Here’s a sample invocation.

pynt videocapture[20] # Captures one frame every 20.
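Both clients follow the same basic capture-serialize-send loop. Here is a simplified sketch (the stream name and frame package fields are illustrative assumptions; see the client/ directory for the real implementation):

import pickle
import time

import boto3
import cv2

kinesis_client = boto3.client('kinesis')
capture = cv2.VideoCapture(0)  # 0 = first camera attached to this machine
frame_count = 0

while True:
    success, frame = capture.read()
    if not success:
        break
    if frame_count % 20 == 0:  # capture 1 frame every 20
        retval, jpeg = cv2.imencode('.jpg', frame)
        frame_package = {
            'ApproximateCaptureTime': time.time(),  # hypothetical field name
            'FrameJpegData': jpeg.tobytes()         # hypothetical field name
        }
        kinesis_client.put_record(
            StreamName='FrameStream',  # hypothetical stream name
            Data=pickle.dumps(frame_package),
            PartitionKey='frame-partition-key')
    frame_count += 1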

Deploy and run the prototype

In this section, we are going to use the project's build commands to deploy and run the prototype in your AWS account. We’ll use the commands to create the prototype's AWS CloudFormation stack, build and serve the Web UI, and run the Video Cap client.

  • Prepare your development environment, and ensure configuration parameters are set as you wish.

  • On your machine, in a command line terminal, change into the root directory of the project. Activate your virtual Python environment. Then, enter the following commands:

$ pynt packagelambda #First, package code & configuration files into .zip files

#Command output without errors

$ pynt deploylambda #Second, deploy your lambda code to Amazon S3

#Command output without errors

$ pynt createstack #Now, create the prototype's CloudFormation stack

#Command output without errors

$ pynt webui #Build the Web UI

#Command output without errors
  • On your machine, in a separate command line terminal:
$ pynt webuiserver #Start the Web UI server on port 8080 by default
  • In your browser, go to http://localhost:8080 to access the prototype's Web UI. You should see a screen similar to this:

Empty Web UI

  • Now turn on your IP camera or launch the app on your smartphone. Ensure that your camera is accepting connections for streaming MJPEG video over HTTP, and identify the local URL for accessing that stream.

  • Then, in a terminal window at the root directory of the project, issue this command:

$ pynt videocaptureip["<your-ip-cam-mjpeg-url>",<capture-rate>]
  • Or, if you don’t have an IP camera and would like to use a built-in camera:
$ pynt videocapture[<frame-capture-rate>]
  • A few seconds after you execute this step, the dashed area in the Web UI will auto-populate with captured frames, side by side with the labels recognized in them.

When you are done

After you are done experimenting with the prototype, perform the following steps to avoid unwanted costs.

  • Terminate the video capture client(s) (press Ctrl+C in the command line terminal where the client is running)
  • Close all open Web UI browser windows or tabs.
  • Execute the pynt deletestack command (see docs above)
  • After you run deletestack, visit the AWS CloudFormation console to double-check the stack is deleted.
  • Ensure that Amazon S3 buckets and objects within them are deleted.

Remember, you can always set up the entire prototype again with a few simple commands.

License

Licensed under the Amazon Software License.

A copy of the License is located at

http://aws.amazon.com/asl/

The AWS CloudFormation Stack (optional read)

Let’s quickly go through the stack that AWS CloudFormation sets up in your account based on the template. AWS CloudFormation uses as much parallelism as possible while creating resources. As a result, some resources may be created in an order different than what I’m going to describe here.

First, AWS CloudFormation creates the IAM roles necessary to allow AWS services to interact with one another. This includes the following.

  • ImageProcessorLambdaExecutionRole – a role to be assumed by the Image Processor lambda function. It allows full access to Amazon DynamoDB, Amazon S3, Amazon SNS, and AWS CloudWatch Logs. The role also allows read-only access to Amazon Kinesis and Amazon Rekognition. For simplicity, only managed AWS role permission policies are used.

  • FrameFetcherLambdaExecutionRole – a role to be assumed by the Frame Fetcher lambda function. It allows full access to Amazon S3, Amazon DynamoDB, and AWS CloudWatch Logs. For simplicity, only managed AWS permission policies are used.

In parallel, AWS CloudFormation creates the Amazon S3 bucket to be used to store the captured video frame images. It also creates the Kinesis Frame Stream to receive captured video frame images from the Video Cap client.

Next, the Image Processor lambda function is created in addition to an AWS Lambda Event Source Mapping to allow Amazon Kinesis to trigger Image Processor once new captured video frames are available.

The Frame Fetcher lambda function is also created. Frame Fetcher is a simple lambda function that responds to a GET request by returning the latest list of frames, in descending order by processing timestamp, up to a configurable number of hours, called the “fetch horizon” (check the framefetcher-params.json file for more run-time configuration parameters). Necessary AWS Lambda Permissions are also created to permit Amazon API Gateway to invoke the Frame Fetcher lambda function.

AWS CloudFormation also creates the DynamoDB table where Enriched Frame metadata is stored by the Image Processor lambda function, as described in the architecture overview section of this post. A Global Secondary Index (GSI) is also created, to be used by the Frame Fetcher lambda function in fetching Enriched Frame metadata in descending order by time of capture.

Finally, AWS CloudFormation creates the Amazon API Gateway resources necessary to allow the Web UI to securely invoke the Frame Fetcher lambda function with a GET request to a public API Gateway URL.

The following API Gateway resources are created.

  • A REST API named “VidAnalyzerRestApi” by default.

  • An API Gateway resource with a path part set to “enrichedframe” by default.

  • A GET API Gateway method associated with the “enrichedframe” resource. This method is configured with Lambda proxy integration with the Frame Fetcher lambda function (learn more about AWS API Gateway proxy integration here). The method is also configured such that an API key is required.

  • An OPTIONS API Gateway method associated with the “enrichedframe” resource. This method’s purpose is to enable Cross-Origin Resource Sharing (CORS). Enabling CORS allows the Web UI to make Ajax requests to the Frame Fetcher API Gateway URL. Note that the Frame Fetcher lambda function must, itself, also return the Access-Control-Allow-Origin CORS header in its HTTP response (a minimal example follows this list).

  • A “development” API Gateway deployment to allow the invocation of the prototype's API over the Internet.

  • A “development” API Gateway stage for the API deployment along with an API Gateway usage plan named “development-plan” by default.

  • An API Gateway API key, named “DevApiKey” by default. The key is associated with the “development” stage and the “development-plan” usage plan.
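To illustrate the CORS note above, a Lambda proxy integration response carrying the required header has roughly this shape (a minimal sketch, not the prototype's actual Frame Fetcher code):

import json

def handler(event, context):
    frames = []  # in the real function, frame metadata fetched from DynamoDB
    return {
        'statusCode': 200,
        'headers': {'Access-Control-Allow-Origin': '*'},
        'body': json.dumps(frames)
    }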

All defaults can be overridden in the cfn-params.json configuration file. That’s it for the prototype's AWS CloudFormation stack! This stack was designed primarily for development/demo purposes, especially in how the Amazon API Gateway resources are set up.

FAQ

Q: Why is this project titled "amazon-rekognition-video-analyzer" despite the security-focused use case?

A: Although this prototype was conceived to address the security monitoring and alerting use case, you can use the prototype's architecture and code as a starting point to address a wide variety of use cases involving low-latency analysis of live video frames with Amazon Rekognition.

amazon-rekognition-video-analyzer's People

Contributors

moanany

amazon-rekognition-video-analyzer's Issues

deploylambda build script fails if region is "us-east-1" and S3 bucket doesn't exist

If the S3 bucket used as a deployment target does not exist, the deploylambda build script attempts to create it by calling the create_bucket() Boto3 API specifying a LocationConstraint set to the region name of the current AWS CLI profile.

Boto3 generates an InvalidLocationConstraint exception if the region specified as an argument is us-east-1 (see boto/boto3#125).

Issue originally reported by Jenil Shah as a comment on the blog post:

https://aws.amazon.com/blogs/ai/create-a-serverless-solution-for-video-frame-analysis-and-alerting/

Same Local Area Network as Video Capture Client

Hello,
We installed all the software on EC2. Our IP camera is working, but when I run the command pynt videocaptureip["http://[email protected]******/video/mjpg.cgi",20] the connection is not made.

Would this have been related to the comment in your documentation below:

If you use an IP camera, make sure your camera is connected to the same Local Area Network as the Video Capture client?

Thank you

'JobStatus': 'FAILED', 'StatusMessage': 'Unsupported codec/format.'

Hi, I am able to search for matching faces in a video, but sometimes I get the above error. The error occurs when the video format is not supported by Amazon Rekognition.
When the video comes from an Android phone it works, but if it comes from other sources, like the web, it throws the error.

I want to know how I can convert any video (whatever the source) to an AWS-supported format using ffmpeg.
Thanks

Typo in the Readme: parameter RtRekogS3BucketNameParameter in cfn-params.json does not exist

For parameter s3_bucket of imageprocessor-params.json, the instructions in the readme file say:

The value specified here must match the value specified for the RtRekogS3BucketNameParameter parameter in the cfn-params.json file.

However, the parameter RtRekogS3BucketNameParameter is not defined in cfn-params.json.
This parameter is not used in any programs either.

This could be a typo, or is it the FrameS3BucketNameParameter that it is trying to refer to?

Unable to access CCTV IP camera

Hi,
I have cloned amazon-rekognition-video-analyzer, installed all the prerequisite software, and got an IAM user with access to all the necessary AWS services.

After setting up the environment, I configured the code and executed it.

The amazon-rekognition-video-analyzer works fine with my built-in camera when I run the following command:
> pynt videocapture[20]

But I am unable to access the CCTV IP camera. The CCTV IP camera has credentials too.
I have tried the following commands, but the video is not being sent to the Kinesis stream:

> pynt videocaptureip["12..*.161", 20], and as it has credentials I have also tried

pynt videocaptureip["http://username:password@12.**..161", 20]
but it is not working.
The capturing terminal shows "Capturing", but nothing streams to the Kinesis service.
I have also tried accessing a camera connected to the same network by providing its IP. I am able to ping it, but it raises an I/O error.

Kindly help me to access the CCTV IP Camera with credentials.

Duplicate SNS text

I have this demo code up and running against my front door IP camera. It's working well, but when a person is in more than one frame, I get multiple SMS notifications. What's the best way to de-dup notifications?

Would it be possible to group frames and send one message?

Multiple streams

Is there a way to set up multiple stream inputs, for example multiple IP cameras?

Realtime Analysis

How do I get real-time analytics of video frames in the Web UI and the DynamoDB table?

Making it work with a USB camera

Thank you for this wonderful project and the extensive detailed documentation. Really appreciate the effort you have put into this.

All the steps worked wonderfully well on a Fedora 26 x86_64 machine with compiled opencv 3.3.1. The stack is created successfully too.

I do not have an IP camera (yet) and am working on installing an IP camera app on my phone to test. In the meantime, I'm using a Logitech C930e USB webcam to test. pynt videocapture[20] does not seem to capture video frames.

pynt videocapture[20]
[ build.pyc - Starting task "videocapture" ]
[ build.pyc - Completed task "videocapture" ]

Any help would be appreciated on whether I have to pass any additional parameters for a USB device. I will try the IP camera in the meantime.

Invalid parameter: TopicArn Reason: An ARN must have at least 6 elements, not 1

I successfully built the entire project, ran it several times, and it worked with an IP cam. Then I started to insert some elements into the code to time the different stages, and now the video feed doesn't show up in the Web UI.

I have checked the logs of the lambda functions, and the bug seems to come from Image Processor. Indeed, this is what I get when I run the script on my built-in webcam:

 07:18:01 On 09/19/18, 9:18 AM CEST...
 07:18:01 - "Human" was detected with 98.52% confidence.
 07:18:02 An error occurred (InvalidParameter) when calling the Publish operation: Invalid parameter: TopicArn Reason: An ARN must have at least 6 elements, not 1: InvalidParameterException
 Traceback (most recent call last):
   File "/var/task/imageprocessor.py", line 176, in handler
     return process_image(event, context)
   File "/var/task/imageprocessor.py", line 138, in process_image
     "labels": labels
 07:18:02 END RequestId: a33af59c...

So I tried removing the phone number and topic ARN in imageprocessor-params.json and rebuilt the entire project multiple times (deletedata, packagelambda, deploylambda, updatestack, webui, webuiserver). I also tried the updatelambda command, but it still doesn't work and I don't understand why.

Please help!

Stuck in the createstack

Well, I have done everything I can possibly think of; if anyone can help, I'd really appreciate it.
pynt createstack
[ build.pyc - Starting task "createstack" ]
[ build.pyc - Error in task "createstack" ]
[ build.pyc - Aborting build ]
Traceback (most recent call last):
File "/usr/bin/pynt", line 11, in
load_entry_point('pynt==0.8.2', 'console_scripts', 'pynt')()
File "/usr/lib/python2.7/dist-packages/pynt/_pynt.py", line 298, in main
build(sys.argv[1:])
File "/usr/lib/python2.7/dist-packages/pynt/_pynt.py", line 59, in build
_run_from_task_names(module,args.tasks)
File "/usr/lib/python2.7/dist-packages/pynt/_pynt.py", line 109, in _run_from_task_names
_run(module, logger, task, completed_tasks, True, args, kwargs)
File "/usr/lib/python2.7/dist-packages/pynt/_pynt.py", line 180, in _run
task(*(args or []),**(kwargs or {}))
File "/usr/lib/python2.7/dist-packages/pynt/_pynt.py", line 245, in call
self.func.call(*args,**kwargs)
File "build.py", line 156, in createstack
stack_name = global_params_dict["StackName"]

deploylambda error: Invalid stage identifier specified

Hi,

I got an error while deploying my CloudFormation Script.

botocore.exceptions.WaiterError: Waiter StackCreateComplete failed: Waiter encountered a terminal failure state

The Issue is somewhere around the VidAnalyzerApiKey scope, because I got this error:
CREATE_FAILED | AWS::ApiGateway::ApiKey | VidAnalyzerApiKey | Invalid stage identifier specified

Any ideas on this?
screen shot 2018-04-24 at 14 15 10

pynt webui:

hi,

pynt webui gives the error: an error occurred when calling the DescribeStackResource operation: resource VidAnalyzerRestApi does not exist for stack video-analyzer-stack.

please help me

Thanks

Final step videocapture

I've made it to the last step, pynt videocaptureip["<your-ip-cam-mjpeg-url>"], but I'm running into some issues. I'm hoping someone can tell me if I'm missing something when executing the last part.

Via IP camera:

I keep getting an error; it connects for a second and then it dies. The message that I'm getting is this:

[ build.py - Starting task "videocaptureip" ]
Capturing from 'http://192.168.1.78:8081' at a rate of 1 every 20 frames...
Traceback (most recent call last):
  File "video_cap_ipcam.py", line 141, in <module>
    main()
  File "video_cap_ipcam.py", line 105, in main
    bytes += stream.read(16384*2)
TypeError: can only concatenate str (not "bytes") to str
[ build.py - Completed task "videocaptureip" ]

Via web cam:

When I run it, it seems to work because I keep getting:

{'ShardId': 'shardId-000000000000', 'SequenceNumber': '49597758662347182382206873118270965332318602047252332546', 'ResponseMetadata': {'RequestId': 'c2c360dd-dd9c-3541-976f-38d97fe5e46b', 'HTTPStatusCode': 200, 'HTTPHeaders': {'x-amzn-requestid': 'c2c360dd-dd9c-3541-976f-38d97fe5e46b', 'x-amz-id-2': 'WoUPxGafWCb/ZB3aakhu/0/F8jCK9Hzbylv22q4UZHrK0pQ4XlWF9stz0GoBX7jbX9UUWDxuZmXXvsxZGuRD2rpdV38z8HFWPhw+Jg7iSVk=', 'date': 'Fri, 19 Jul 2019 23:44:04 GMT', 'content-type': 'application/x-amz-json-1.1', 'content-length': '110'}, 'RetryAttempts': 0}}

But after I execute this step, nothing appears in the dashed area of the Web UI (I waited a couple of minutes).

Important note: I did change/convert some files to Python 3 since I'm running the project on it.
Files changed:

  • imageprocessor.py
  • video_cap.py
  • video_cap_ipcam.py
  • build.py
    also changed the cPickle lib to pickle

PS: pynt stackstatus gives a green light:
[ build.py - Starting task "stackstatus" ]
Stack 'video-analyzer-stack' has the status 'CREATE_COMPLETE'
[ build.py - Completed task "stackstatus" ]
Any help on this is gladly appreciated

ImportError: No module named cv2

Hi,
I followed all the steps. When I run pynt videocaptureip["",], I get:
[ build.pyc - Starting task "videocaptureip" ]
Traceback (most recent call last):
File "video_cap_ipcam.py", line 13, in
import cv2
ImportError: No module named cv2
[ build.pyc - Completed task "videocaptureip" ]
Can anyone help me?

Sending image to kenisis on an EC2 instance

I'm running this on an EC2 instance and believe that I have everything properly installed. I'm using an MJPEG URL with videocaptureip. When I run the program, it appears as though it is capturing frames, but they are not populating in the Web UI.

i.e. it says:
Sending image to Kinesis
{u'ShardId': u'shardId-000000000000', 'ResponseMetadata': {'RetryAttempts': 0, 'HTTPStatusCode': 200, 'RequestId': 'ce3652dd-a3b3-457b-80e8-01084f372dc3', 'HTTPHeaders': {'x-amzn-requestid': 'ce3652dd-a3b3-457b-80e8-01084f372dc3', 'x-amz-id-2': 'tvUJ1BPccjt617BADReelAm0Mw/Q5mK/bn10D+Jv+++dAuC7gE22Ti9/vWrNcOwxR1mM9RioAjcdbfCH5tJVCiH7BUoF1F6y', 'content-length': '110', 'date': 'Thu, 31 May 2018 22:07:51 GMT', 'content-type': 'application/x-amz-json-1.1'}}, u'SequenceNumber': u'49584995657566029379979076055971568312156861850135298050'}

But when I check my S3 bucket and the Web UI on port 8080, there is nothing there. What could be causing this?

localhost:8080 page is static

I completed all the build steps successfully on macOS 10.13.1.

webui is running:

[ build.pyc - Starting task "webuiserver" ]
Starting local Web UI Server in directory 'build/web-ui/' on port 8080
127.0.0.1 - - [14/Dec/2017 09:08:31] "GET / HTTP/1.1" 200 -
127.0.0.1 - - [14/Dec/2017 09:08:31] "GET /src/app.css HTTP/1.1" 200 -
127.0.0.1 - - [14/Dec/2017 09:08:31] "GET /src/js/axios.js HTTP/1.1" 200 -
127.0.0.1 - - [14/Dec/2017 09:08:31] "GET /src/js/vue.js HTTP/1.1" 200 -
127.0.0.1 - - [14/Dec/2017 09:08:31] "GET /logo.png HTTP/1.1" 200 -
127.0.0.1 - - [14/Dec/2017 09:08:31] "GET /src/apigw.js HTTP/1.1" 200 -
127.0.0.1 - - [14/Dec/2017 09:08:31] "GET /src/app.js HTTP/1.1" 200 -
127.0.0.1 - - [14/Dec/2017 09:08:32] "GET /favicon-16x16.png HTTP/1.1" 200 -
127.0.0.1 - - [14/Dec/2017 09:08:32] "GET /favicon-32x32.png HTTP/1.1" 200 -
127.0.0.1 - - [14/Dec/2017 09:08:32] "GET /favicon-96x96.png HTTP/1.1" 200 -
127.0.0.1 - - [14/Dec/2017 09:08:42] "GET / HTTP/1.1" 200 -
127.0.0.1 - - [14/Dec/2017 09:08:44] "GET / HTTP/1.1" 200 -
127.0.0.1 - - [14/Dec/2017 09:08:45] "GET / HTTP/1.1" 200 -
127.0.0.1 - - [14/Dec/2017 09:10:59] "GET / HTTP/1.1" 200 -
127.0.0.1 - - [14/Dec/2017 09:10:59] "GET /src/apigw.js HTTP/1.1" 200 -
127.0.0.1 - - [14/Dec/2017 09:11:07] "GET / HTTP/1.1" 200 -
127.0.0.1 - - [14/Dec/2017 09:11:07] "GET /src/app.css HTTP/1.1" 200 -
127.0.0.1 - - [14/Dec/2017 09:11:07] "GET /src/js/axios.js HTTP/1.1" 200 -
127.0.0.1 - - [14/Dec/2017 09:11:07] "GET /src/js/vue.js HTTP/1.1" 200 -
127.0.0.1 - - [14/Dec/2017 09:11:07] "GET /logo.png HTTP/1.1" 200 -
127.0.0.1 - - [14/Dec/2017 09:11:07] "GET /src/apigw.js HTTP/1.1" 200 -
127.0.0.1 - - [14/Dec/2017 09:11:07] "GET /src/app.js HTTP/1.1" 200 -
127.0.0.1 - - [14/Dec/2017 09:11:08] "GET /favicon-96x96.png HTTP/1.1" 200 -
127.0.0.1 - - [14/Dec/2017 09:11:08] "GET /favicon-32x32.png HTTP/1.1" 200 -
127.0.0.1 - - [14/Dec/2017 09:11:08] "GET /favicon-16x16.png HTTP/1.1" 200 -
127.0.0.1 - - [14/Dec/2017 09:11:13] "GET / HTTP/1.1" 200 -
127.0.0.1 - - [14/Dec/2017 09:11:14] "GET / HTTP/1.1" 200 -

videocapture[20] is returning output.

But the http://localhost:8080 page is static and the dotted box contains no output.

What could be going wrong?

Thanks!

Error with build.py file

Hi,

I am trying to run pynt -l and I am getting a syntax error:

File "build.py", line 25
print 'zipping %s as %s' % (os.path.join(dirname, filename),
^
SyntaxError: invalid syntax

Looking at the code in build.py, the for loop does not seem to be complete. Could you please assist? Thanks.

Accessing VideoCaptureIP

Thanks for this wonderful project @moanany. I hope you can help me out connecting to an IP camera. Whenever I try to access the analyzer via an IP camera, I keep getting the same issue. I already checked the URL and it is correct; I have tried it in a browser and it works just fine. The camera configuration is set to MJPEG format and I'm not sure what to do from here. The message that I'm getting is the following:

(cv) Hugos-MacBook-Air:amazon-rekognition-video-analyzer luismora$ pynt videocaptureip["http://user:[email protected]:80/Streaming/Channels/101/httpPreview",30]
[ build.py - Starting task "videocaptureip" ]
Capturing from 'http://user:[email protected]:80/Streaming/Channels/101/httpPreview' at a rate of 1 every 30 frames...
Traceback (most recent call last):
File "/usr/local/Cellar/python/3.7.4/Frameworks/Python.framework/Versions/3.7/lib/python3.7/urllib/request.py", line 1317, in do_open
encode_chunked=req.has_header('Transfer-encoding'))
File "/usr/local/Cellar/python/3.7.4/Frameworks/Python.framework/Versions/3.7/lib/python3.7/http/client.py", line 1244, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/usr/local/Cellar/python/3.7.4/Frameworks/Python.framework/Versions/3.7/lib/python3.7/http/client.py", line 1290, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/usr/local/Cellar/python/3.7.4/Frameworks/Python.framework/Versions/3.7/lib/python3.7/http/client.py", line 1239, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/usr/local/Cellar/python/3.7.4/Frameworks/Python.framework/Versions/3.7/lib/python3.7/http/client.py", line 1026, in _send_output
self.send(msg)
File "/usr/local/Cellar/python/3.7.4/Frameworks/Python.framework/Versions/3.7/lib/python3.7/http/client.py", line 966, in send
self.connect()
File "/usr/local/Cellar/python/3.7.4/Frameworks/Python.framework/Versions/3.7/lib/python3.7/http/client.py", line 938, in connect
(self.host,self.port), self.timeout, self.source_address)
File "/usr/local/Cellar/python/3.7.4/Frameworks/Python.framework/Versions/3.7/lib/python3.7/socket.py", line 707, in create_connection
for res in getaddrinfo(host, port, 0, SOCK_STREAM):
File "/usr/local/Cellar/python/3.7.4/Frameworks/Python.framework/Versions/3.7/lib/python3.7/socket.py", line 748, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno 8] nodename nor servname provided, or not known

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "video_cap_ipcam.py", line 140, in
main()
File "video_cap_ipcam.py", line 94, in main
stream = urllib.request.urlopen(ip_cam_url)
File "/usr/local/Cellar/python/3.7.4/Frameworks/Python.framework/Versions/3.7/lib/python3.7/urllib/request.py", line 222, in urlopen
return opener.open(url, data, timeout)
File "/usr/local/Cellar/python/3.7.4/Frameworks/Python.framework/Versions/3.7/lib/python3.7/urllib/request.py", line 525, in open
response = self._open(req, data)
File "/usr/local/Cellar/python/3.7.4/Frameworks/Python.framework/Versions/3.7/lib/python3.7/urllib/request.py", line 543, in _open
'_open', req)
File "/usr/local/Cellar/python/3.7.4/Frameworks/Python.framework/Versions/3.7/lib/python3.7/urllib/request.py", line 503, in _call_chain
result = func(*args)
File "/usr/local/Cellar/python/3.7.4/Frameworks/Python.framework/Versions/3.7/lib/python3.7/urllib/request.py", line 1345, in http_open
return self.do_open(http.client.HTTPConnection, req)
File "/usr/local/Cellar/python/3.7.4/Frameworks/Python.framework/Versions/3.7/lib/python3.7/urllib/request.py", line 1319, in do_open
raise URLError(err)
urllib.error.URLError: <urlopen error [Errno 8] nodename nor servname provided, or not known>
[ build.py - Completed task "videocaptureip" ]

The project is built on Python 3; I converted the files so it would be compatible (#50).
If I run it from the computer's built-in camera, it works perfectly fine.

pynt createstack:

Hi,
I had created my own bucket and deployed the lambda.
I had only AWSCloudFormationReadOnlyAccess, so I added the below policy for complete access:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"cloudformation:DetectStackResourceDrift",
"cloudformation:EstimateTemplateCost",
"cloudformation:SetStackPolicy",
"cloudformation:DetectStackDrift",
"cloudformation:CreateStack",
"cloudformation:UpdateStack",
"cloudformation:CreateChangeSet",
"cloudformation:ExecuteChangeSet",
"cloudformation:ValidateTemplate",
"cloudformation:DeleteStack",
"cloudformation:ListStacks",
"cloudformation:DescribeStackEvents",
"cloudformation:DescribeStackResource",
"cloudformation:CreateChangeSet",
"cloudformation:DescribeChangeSet",
"cloudformation:ExecuteChangeSet",
"cloudformation:ValidateTemplate"
],
"Resource": "*"
}
]
}

If I create a stack, I get an error. Please find the attachment below.
image

Can you please help me figure out exactly what I am missing?

unsupported pickle protocol: 3

I think it has something to do with running Python 3 on my computer for pynt videocapture while imageprocessor.py runs in Python 2? I don't want to set up my environment again. Is this what's happening? Is there another fix?

unsupported pickle protocol: 3: ValueError
Traceback (most recent call last):
File "/var/task/imageprocessor.py", line 243, in handler
return process_image(event, context)
File "/var/task/imageprocessor.py", line 69, in process_image
frame_package = cPickle.loads(base64.b64decode(frame_package_b64))
ValueError: unsupported pickle protocol: 3

Raspberry pi Camera

My project is to run a Raspberry Pi camera as a slave to other devices already in place for an experiment. Once the camera is armed, it will continuously record a circular stream of video in memory. When a digital trigger is received, the video will begin being saved to disk. In addition to saving the video after a trigger, the video before the trigger will also be saved. So basically it is a master/slave setup, where the slave checks the master's output, which has to go high to trigger the slave's input without any delay. Can someone help me with code for the above condition ASAP?

Video takes infinite time to analyze

EDIT: It started working now. I want to know, is there any limit on how many videos we can process? If yes, how can I increase this limit?

Hi, some time ago it was working fine and I was able to analyze videos within a minute or so, but suddenly my video is not getting analyzed. It just goes on forever. I am trying to search for faces in videos. The faces are indexed; there are about 300 faces indexed.

I am analyzing 3553477.mp4 (only 10 second video)

here is the output

Faces indexed:
  Face ID: a1c8847f-7e80-4dd7-8b51-63c0d22ab765
  Location: {'Width': 0.7492865920066833, 'Height': 0.8562403321266174, 'Left': 0.1664663851261139, 'Top': 0.032046884298324585}
started video analyzing 
Start Job Id: 952741e2adaf526ca8fd270912d2cf578d24b69039edcac05dcc507b7b2193fb
....................
....................
....................
(the same line of dots repeats many times)
It goes on and does not stop

here is the code

import boto3
import json
import sys


class VideoDetect:
    rek = boto3.client('rekognition')
    queueUrl = 'https://sqs.us-west-2.amazonaws.com/655136467581/Rekoginition'
    roleArn = 'arn:aws:iam::655136467581:role/face_matching'
    topicArn = 'arn:aws:sns:us-west-2:655136467581:image-rekoginition-sns'
    bucket = 'face-reckognition-video-1'
    video = ''

    def __init__(self, video):
        self.video = video

    def main(self):

        jobFound = False
        sqs = boto3.client('sqs')

        # =====================================
        response = self.rek.start_face_search(Video={'S3Object': {'Bucket': self.bucket, 'Name': self.video}},
                                              CollectionId='FaceCollection',
                                              NotificationChannel={'RoleArn': self.roleArn,
                                                                   'SNSTopicArn': self.topicArn})

        # =====================================
        print('Start Job Id: ' + response['JobId'])
        dotLine = 0

        faces = set()
        while not jobFound:
            sqsResponse = sqs.receive_message(QueueUrl=self.queueUrl, MessageAttributeNames=['ALL'],
                                              MaxNumberOfMessages=10)

            if sqsResponse:

                if 'Messages' not in sqsResponse:
                    if dotLine < 20:
                        print('.', end='')
                        dotLine = dotLine + 1
                    else:
                        print()
                        dotLine = 0
                    sys.stdout.flush()
                    continue

                for message in sqsResponse['Messages']:
                    notification = json.loads(message['Body'])
                    rekMessage = json.loads(notification['Message'])
                    print(rekMessage['JobId'])
                    print(rekMessage['Status'])
                    if str(rekMessage['JobId']) == response['JobId']:
                        print('Matching Job Found:' + rekMessage['JobId'])
                        jobFound = True
                        # =============================================
                        f = self.GetResultsFaceSearchCollection(rekMessage['JobId'])

                        faces.update(f)

                        # =============================================

                        sqs.delete_message(QueueUrl=self.queueUrl,
                                           ReceiptHandle=message['ReceiptHandle'])
                    else:
                        print("Job didn't match:" +
                              str(rekMessage['JobId']) + ' : ' + str(response['JobId']))
                    # Delete the unknown message. Consider sending to dead letter queue
                    sqs.delete_message(QueueUrl=self.queueUrl,
                                       ReceiptHandle=message['ReceiptHandle'])

        print('done')

        return faces

    def GetResultsFaceSearchCollection(self, jobId):
        maxResults = 10
        paginationToken = ''
        faces = []
        finished = False

        while not finished:
            response = self.rek.get_face_search(JobId=jobId,
                                                MaxResults=maxResults,
                                                NextToken=paginationToken)
            print(response)
            print(response['VideoMetadata']['Codec'])
            print(str(response['VideoMetadata']['DurationMillis']))
            print(response['VideoMetadata']['Format'])
            print(response['VideoMetadata']['FrameRate'])

            for personMatch in response['Persons']:

                print('Person Index: ' + str(personMatch['Person']['Index']))
                print('Timestamp: ' + str(personMatch['Timestamp']))

                if 'FaceMatches' in personMatch:
                    for faceMatch in personMatch['FaceMatches']:
                        print('Face ID: ' + faceMatch['Face']['FaceId'])
                        faces.append(faceMatch['Face']['FaceId'])
                        print('Similarity: ' + str(faceMatch['Similarity']))
                print()
            if 'NextToken' in response:
                paginationToken = response['NextToken']
            else:
                finished = True
            print()

        return faces

Can anyone help?

pynt deploylambda
[ build.pyc - Starting task "deploylambda" ]
Checking if S3 Bucket 'anbanglee2' exists...
Uploading function 'framefetcher' to 'src/lambda_framefetcher.zip'
[ build.pyc - Error in task "deploylambda" ]
[ build.pyc - Aborting build ]
Traceback (most recent call last):
File "/Users/anbanglee/anaconda2/bin/pynt", line 11, in
sys.exit(main())
File "/Users/anbanglee/anaconda2/lib/python2.7/site-packages/pynt/_pynt.py", line 298, in main
build(sys.argv[1:])
File "/Users/anbanglee/anaconda2/lib/python2.7/site-packages/pynt/_pynt.py", line 59, in build
_run_from_task_names(module,args.tasks)
File "/Users/anbanglee/anaconda2/lib/python2.7/site-packages/pynt/_pynt.py", line 109, in _run_from_task_names
_run(module, logger, task, completed_tasks, True, args, kwargs)
File "/Users/anbanglee/anaconda2/lib/python2.7/site-packages/pynt/_pynt.py", line 180, in _run
task(*(args or []),**(kwargs or {}))
File "/Users/anbanglee/anaconda2/lib/python2.7/site-packages/pynt/_pynt.py", line 245, in call
self.func.call(*args,**kwargs)
File "build.py", line 143, in deploylambda
s3_client.upload_fileobj(data, src_s3_bucket_name, s3_keys[function])
File "/Users/anbanglee/anaconda2/lib/python2.7/site-packages/boto3/s3/inject.py", line 539, in upload_fileobj
return future.result()
File "/Users/anbanglee/.local/lib/python2.7/site-packages/s3transfer/futures.py", line 73, in result
return self._coordinator.result()
File "/Users/anbanglee/.local/lib/python2.7/site-packages/s3transfer/futures.py", line 233, in result
raise self._exception
botocore.exceptions.ClientError: An error occurred (SignatureDoesNotMatch) when calling the PutObject operation: The request signature we calculated does not match the signature you provided. Check your key and signing method.

pynt createstack throwing error

This is the error I am getting

(aws) vi@vi-Vostro-3558:/project/amazon-rekognition-video-analyzer:anaconda2-5.0.1$ pynt createstack
[ build.pyc - Starting task "createstack" ]
Attempting to CREATE 'video-analyzer-stack' stack using CloudFormation.
Waiting until 'video-analyzer-stack' stack status is CREATE_COMPLETE
[ build.pyc - Error in task "createstack" ]
[ build.pyc - Aborting build ]
Traceback (most recent call last):
  File "/pkg/anaconda2-5.0.1/bin/pynt", line 11, in <module>
    sys.exit(main())
  File "/pkg/anaconda2-5.0.1/lib/python2.7/site-packages/pynt/_pynt.py", line 295, in main
    build(sys.argv[1:])
  File "/pkg/anaconda2-5.0.1/lib/python2.7/site-packages/pynt/_pynt.py", line 59, in build
    _run_from_task_names(module,args.tasks)
  File "/pkg/anaconda2-5.0.1/lib/python2.7/site-packages/pynt/_pynt.py", line 109, in _run_from_task_names
    _run(module, logger, task, completed_tasks, True, args, kwargs)
  File "/pkg/anaconda2-5.0.1/lib/python2.7/site-packages/pynt/_pynt.py", line 180, in _run
    task(*(args or []),**(kwargs or {}))
  File "/pkg/anaconda2-5.0.1/lib/python2.7/site-packages/pynt/_pynt.py", line 242, in __call__
    self.func.__call__(*args,**kwargs)
  File "build.py", line 184, in createstack
    cfn_stack_delete_waiter.wait(StackName=stack_name)
  File "/pkg/anaconda2-5.0.1/lib/python2.7/site-packages/botocore/waiter.py", line 53, in wait
    Waiter.wait(self, **kwargs)
  File "/pkg/anaconda2-5.0.1/lib/python2.7/site-packages/botocore/waiter.py", line 323, in wait
    last_response=response,
botocore.exceptions.WaiterError: Waiter StackCreateComplete failed: Waiter encountered a terminal failure state

Would appreciate any help on this. Thanks.

createstack error

Getting stack ROLLBACK_COMPLETE status.

[ build.pyc - Error in task "createstack" ]
[ build.pyc - Aborting build ]
Traceback (most recent call last):
  File "/usr/local/bin/pynt", line 9, in <module>
    load_entry_point('pynt==0.8.1', 'console_scripts', 'pynt')()
  File "/usr/local/lib/python2.7/dist-packages/pynt/_pynt.py", line 295, in main
    build(sys.argv[1:])
  File "/usr/local/lib/python2.7/dist-packages/pynt/_pynt.py", line 59, in build
    _run_from_task_names(module,args.tasks)
  File "/usr/local/lib/python2.7/dist-packages/pynt/_pynt.py", line 109, in _run_from_task_names
    _run(module, logger, task, completed_tasks, True, args, kwargs)
  File "/usr/local/lib/python2.7/dist-packages/pynt/_pynt.py", line 180, in _run
    task(*(args or []),**(kwargs or {}))
  File "/usr/local/lib/python2.7/dist-packages/pynt/_pynt.py", line 242, in __call__
    self.func.__call__(*args,**kwargs)
  File "build.py", line 184, in createstack
    cfn_stack_delete_waiter.wait(StackName=stack_name)
  File "/usr/local/lib/python2.7/dist-packages/botocore/waiter.py", line 53, in wait
    Waiter.wait(self, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/botocore/waiter.py", line 323, in wait
    last_response=response,
botocore.exceptions.WaiterError: Waiter StackCreateComplete failed: Waiter encountered a terminal failure state

pynt createstack failed

pynt createstack failed in CloudFormation with the status reason:
The following resource(s) failed to delete: [EnrichedFrameTable]. Help needed.

pynt createstack - Waiter encountered a terminal failure state

When calling pynt createstack, I am encountering the following error: botocore.exceptions.WaiterError: Waiter StackCreateComplete failed: Waiter encountered a terminal failure state.

If I turn on boto3 logging, the following DEBUG information is output before the build crashes:

botocore.hooks [DEBUG] Event needs-retry.cloudformation.DescribeStacks: calling handler <botocore.retryhandler.RetryHandler object at 0x7fcd5c83ad10>
botocore.retryhandler [DEBUG] No retry needed.

If I run pynt stackstatus after the error, it gives me the status ROLLBACK_COMPLETE.

It seems to retry this a couple of times before finally giving up and aborting. Does anyone have insight into why this might be happening?

Thanks in advance.
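A note for everyone hitting this waiter error: the WaiterError only reports that stack creation failed; the actual root cause is recorded in the stack's event history. A minimal sketch for surfacing the failed resources and CloudFormation's stated reasons (the stack name assumed here is the project default):

    import boto3

    cfn = boto3.client('cloudformation')

    # Walk the event history oldest-first and print every resource that
    # failed, together with CloudFormation's reason for the failure.
    events = cfn.describe_stack_events(StackName='video-analyzer-stack')['StackEvents']
    for event in reversed(events):
        if event['ResourceStatus'].endswith('FAILED'):
            print(event['LogicalResourceId'],
                  event['ResourceStatus'],
                  event.get('ResourceStatusReason'))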

CloudWatch log

Unable to import module 'imageprocessor': No module named pytz
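This import error means the pytz package was not bundled into the Lambda deployment zip. A possible fix, assuming the image processor sources live under lambda/imageprocessor/ (the exact path may differ in your checkout): install the dependency into that directory with pip install pytz -t lambda/imageprocessor/, then re-run the packaging and deploy tasks so the uploaded zip actually contains pytz.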

Speeding up processing

I'm noticing that the screenshots in the web ui are about a minute behind real time. What part of this configuration is the main cause of the slowdown? Is it Kinesis? How would you increase Kinesis's processing speed?
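Several knobs influence end-to-end latency here: the capture client's frame rate, the Kinesis polling behavior of the Lambda event source, and the event source mapping's batch size. One hedged experiment is to shrink the batch size so each invocation processes fewer frames sooner (the function name below is an assumption; substitute the name your stack actually deployed):

    import boto3

    lam = boto3.client('lambda')

    # Find the Kinesis event source mapping feeding the image processor.
    # 'imageprocessor' is a placeholder for the deployed function's name.
    mappings = lam.list_event_source_mappings(FunctionName='imageprocessor')
    uuid = mappings['EventSourceMappings'][0]['UUID']

    # Smaller batches mean each invocation handles fewer frames,
    # trading throughput for lower per-frame latency.
    lam.update_event_source_mapping(UUID=uuid, BatchSize=1)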

videocapture and videocaptureip not working as expected

Hi
I have done all the setup and I am able to see the web UI, but when I invoke videocaptureip['video_url',20] I am not getting any output in the web UI. Are there any other settings I have to configure? If so, please mention those steps as well. When building the stack I get CREATE_COMPLETE for the overall stack, but for many components I see CREATE_IN_PROGRESS. Please find the attached image.
[attached image: stack_build_issue]

Having Trouble Creating Stack

I am getting the 'ROLLBACK_COMPLETE' status when I check on my stack. When I try to run pynt createstack again, I get the error: CreateStack operation: Stack [video-analyzer-stack] already exists.

When I try to check on the status of my stack via the CloudFormation console attached to my AWS account, it says that I do not have any stacks.

I am a little bit stuck because I cannot create a new stack since apparently it exists somewhere... That being said, it does not appear in the CloudFormation console associated with my account. I have checked that my AWS CLI is configured correctly, and the Lambda deployment succeeded in creating a bucket.

Please let me know if you have any next steps for debugging. Thanks!
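A likely explanation, offered as a hedged note: a stack in ROLLBACK_COMPLETE still exists as far as CloudFormation is concerned, but it cannot be updated or recreated under the same name; it can only be deleted. Also double-check that the console is pointed at the same region the CLI is configured for, since a region mismatch makes the stack invisible in the console. A minimal cleanup sketch:

    import boto3

    cfn = boto3.client('cloudformation')

    # A ROLLBACK_COMPLETE stack can only be deleted. Delete it, wait for
    # the deletion to finish, then re-run `pynt createstack`.
    cfn.delete_stack(StackName='video-analyzer-stack')
    cfn.get_waiter('stack_delete_complete').wait(StackName='video-analyzer-stack')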

CF Template is giving problems

The initial steps went well, but when I run pynt createstack the stack goes into ROLLBACK_FAILED state and the remaining steps never finish. The rollback status reason shown is:
The following resource(s) failed to delete: [EnrichedFrameTable].

pynt createstack:

Hi,
The command pynt createstack gives this error: AlreadyExistsException: An error occurred when calling the CreateStack operation: Stack [video-analyzer-stack] already exists.

Please help me.

Thanks

pynt webui gives error

Hello,

When I run pynt webui, it gives the following error:

[ build.pyc - Starting task "webui" ]
Copying Web UI source from 'web-ui/' to build directory.
Retrieving API key from stack 'video-analyzer-stack'.
[ build.pyc - Error in task "webui" ]
[ build.pyc - Aborting build ]
Traceback (most recent call last):
  File "/home/einfochips/kinesisvideo/bin/pynt", line 11, in <module>
    sys.exit(main())
  File "/home/einfochips/kinesisvideo/local/lib/python2.7/site-packages/pynt/_pynt.py", line 295, in main
    build(sys.argv[1:])
  File "/home/einfochips/kinesisvideo/local/lib/python2.7/site-packages/pynt/_pynt.py", line 59, in build
    _run_from_task_names(module,args.tasks)
  File "/home/einfochips/kinesisvideo/local/lib/python2.7/site-packages/pynt/_pynt.py", line 109, in _run_from_task_names
    _run(module, logger, task, completed_tasks, True, args, kwargs)
  File "/home/einfochips/kinesisvideo/local/lib/python2.7/site-packages/pynt/_pynt.py", line 180, in _run
    task(*(args or []),**(kwargs or {}))
  File "/home/einfochips/kinesisvideo/local/lib/python2.7/site-packages/pynt/_pynt.py", line 242, in __call__
    self.func.__call__(*args,**kwargs)
  File "build.py", line 326, in webui
    LogicalResourceId=cfn_params_dict["ApiGatewayRestApiNameParameter"]
  File "/home/einfochips/kinesisvideo/local/lib/python2.7/site-packages/botocore/client.py", line 317, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/home/einfochips/kinesisvideo/local/lib/python2.7/site-packages/botocore/client.py", line 615, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (ValidationError) when calling the DescribeStackResource operation: Resource VidAnalyzerRestApi does not exist for stack video-analyzer-stack

build.py assumes Python 2

Although early in the instructions you mention Python 2 or Python 3, when I got to

pynt -l

I got an error from build.py

I took a quick look and saw that, at the very least, the print statements are written for Python 2 only.

Are you intending to release a Python 3 version of this?

I'm bummed because I now have to go back and build a new environment using Python 2.
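Until an official Python 3 port exists, one hedged stopgap is to make the print statements bilingual rather than rebuilding an environment: add the future import at the top of build.py and convert `print x` statements to `print(x)` calls. This is a workaround sketch, not a supported change to the project:

    # Added at the top of build.py, this makes print a function under
    # Python 2 as well, so print("...") calls behave the same in both
    # interpreters.
    from __future__ import print_function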

createstack issue

Sometimes it creates the stack and sometimes it fails. Is there a way to make the first run of pynt createstack reliably create the stack in CloudFormation?

where to write these settings ?

Hi everyone!
I have completed the first 10 steps and now I am stuck. I have no idea how to proceed further or where to write these configurations. Please help me.
Thanks

video not appearing on web ui

Sir, thank you so much for such a wonderful project. The problem I am facing is that videos are not showing up in the web UI. I checked the CloudWatch logs for the imageprocessor Lambda and it showed the error I am posting below:


13:15:25
END RequestId: 0b6d13ac-f996-4f68-95ce-66f345f9bbc4

13:15:25
REPORT RequestId: 0b6d13ac-f996-4f68-95ce-66f345f9bbc4 Duration: 2029.31 ms Billed Duration: 2100 ms Memory Size: 128 MB Max Memory Used: 53 MB

13:15:26
START RequestId: 0b6d13ac-f996-4f68-95ce-66f345f9bbc4 Version: $LATEST

13:15:26
Expecting property name: line 10 column 2 (char 206): ValueError
Traceback (most recent call last):
  File "/var/task/imageprocessor.py", line 182, in handler
    return process_image(event, context)
  File "/var/task/imageprocessor.py", line 48, in process_image
    config = load_config()
  File "/var/task/imageprocessor.py", line 24, in load_config
    return json.loads(conf_json)
  File "/usr/lib64/python2.7/json/__init__.py", line 339, in loads
    return _default_decoder.decode(s)
  File "/usr/lib64/python2.7/json/decoder.py", line 364, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/usr/lib64/python2.7/json/decoder.py", line 380, in raw_decode
    obj, end = self.scan_once(s, idx)
ValueError: Expecting property name: line 10 column 2 (char 206)


(The same ValueError repeats on every subsequent invocation, in identical START/END/REPORT cycles from 13:15:26 through 13:15:34.)
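A hedged diagnosis of this log: the ValueError from json.loads, "Expecting property name: line 10 column 2", almost always means a trailing comma (or a missing key) at that spot in the JSON configuration the image processor loads, since Python's json module rejects trailing commas. You can reproduce the exact line and column locally; the file path below is an assumption, so point it at whichever file load_config() actually reads:

    import json

    # Fails with the same "Expecting property name" message, including the
    # offending line and column, if the file contains a trailing comma.
    with open('config/imageprocessor-params.json') as f:
        json.load(f)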

ValidationError

I am getting the following error while creating the stack.

An error occurred (ValidationError) when calling the DescribeStackResource operation: Resource VidAnalyzerRestApi does not exist for stack video-analyzer-stack @moanany
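A hedged note on this ValidationError: it typically means the stack never reached CREATE_COMPLETE (for example, it rolled back), so the VidAnalyzerRestApi resource was never created and the webui task cannot look it up. Checking the stack status first narrows it down:

    import boto3

    cfn = boto3.client('cloudformation')

    # If this prints ROLLBACK_COMPLETE instead of CREATE_COMPLETE, fix the
    # stack creation failure before re-running `pynt webui`.
    stack = cfn.describe_stacks(StackName='video-analyzer-stack')['Stacks'][0]
    print(stack['StackStatus'])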

Where do Lambda Functions print to?

I'm curious to see the content of the Rekognition JSON response. I've noticed that there are a few print statements in imageprocessor.py -- where do these print to? For example, line 146:
print("Successfully published alert message to SNS.")
Where can I see this output?

If I know the location, I can simply print out the contents of the Rekognition response.
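Anything a Lambda function writes to stdout, including these print statements, lands in CloudWatch Logs, in a log group named /aws/lambda/<function name>. You can read it in the CloudWatch console or pull it programmatically; a minimal sketch (the function name here is a placeholder for whatever name the stack gave the image processor function):

    import boto3

    logs = boto3.client('logs')

    # Print recent output from the function's log group. Replace
    # 'imageprocessor' with the deployed function's actual name.
    response = logs.filter_log_events(
        logGroupName='/aws/lambda/imageprocessor',
        limit=50)
    for event in response['events']:
        print(event['message'].rstrip())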
