aws-solutions / video-on-demand-on-aws

An automated reference implementation leveraging AWS Step Functions and AWS Media Services to deploy a scalable, fault-tolerant video-on-demand workflow

Home Page: https://aws.amazon.com/solutions/video-on-demand-on-aws/

License: Apache License 2.0

JavaScript 63.66% Shell 2.85% Python 5.30% TypeScript 28.19%

video-on-demand-on-aws's Introduction

Video on Demand on AWS

How to implement a video-on-demand workflow on AWS leveraging AWS Step Functions, AWS Elemental MediaConvert, and AWS Elemental MediaPackage. This is the source code for the Video on Demand on AWS solution.

Architecture Overview

(Architecture diagram)

Deployment

The solution is deployed using a CloudFormation template with a lambda-backed custom resource. For details on deploying the solution, please see the solution home page: Video on Demand on AWS

Please ensure you test the new template before updating any production deployments.

Workflow Configuration

The workflow configuration is set at deployment and is defined as environment variables for the input-validate lambda function (the first step in the ingest process).

Environment Variables:

  • Archive Source: If enabled, the source video file will be tagged for archiving to Amazon S3 Glacier at the end of the workflow
  • CloudFront: CloudFront domain name, used to generate the playback URLs for the MediaConvert outputs
  • Destination: The name of the destination S3 bucket for all of the MediaConvert outputs
  • FrameCapture: If enabled, frame capture is added to the job submitted to MediaConvert
  • InputRotate: Defines how MediaConvert rotates your video
  • MediaConvert_Template_2160p: The name of the UHD template in MediaConvert
  • MediaConvert_Template_1080p: The name of the HD template in MediaConvert
  • MediaConvert_Template_720p: The name of the SD template in MediaConvert
  • Source: The name of the source S3 bucket
  • WorkflowName: Used to tag all of the MediaConvert encoding jobs
  • acceleratedTranscoding: Enables accelerated transcoding in MediaConvert. Options include ENABLED, DISABLED, and PREFERRED. For more details, see Accelerated Transcoding below.
  • enableSns: Send SNS notifications for the workflow results
  • enableSqs: Send the workflow results to an SQS queue

Workflow Triggers

Source Video Option

If deployed with the workflow trigger parameter set to VideoFile, the CloudFormation template will configure S3 event notifications on the source S3 bucket to trigger the workflow whenever a video file (mpg, mp4, m4v, mov, or m2ts) is uploaded.

Source Metadata Option

If deployed with the workflow trigger parameter set to MetadataFile, the S3 notification is configured to trigger the workflow whenever a JSON file is uploaded. This allows a different workflow configuration to be defined for each source video processed by the workflow.

Important: The source video file must be uploaded to S3 before the metadata file is uploaded, and the metadata file must be valid JSON with a .json file extension. With source metadata enabled, uploading video files to Amazon S3 will not trigger the workflow.

Example JSON metadata file:

{
    "srcVideo": "example.mpg",
    "archiveSource": true,
    "frameCapture": false,
    "jobTemplate": "custom-job-template"
}

The only required field in the metadata file is srcVideo. For any settings not defined in the metadata file, the workflow defaults to the environment variable settings of the input-validate lambda function.

Full list of options:

{
    "srcVideo": "string",
    "archiveSource": boolean,
    "frameCapture": boolean,
    "srcBucket": "string",
    "destBucket": "string",
    "cloudFront": "string",
    "jobTemplate_2160p": "string",
    "jobTemplate_1080p": "string",
    "jobTemplate_720p": "string",
    "jobTemplate": "custom-job-template",
    "inputRotate": "DEGREE_0|DEGREES_90|DEGREES_180|DEGREES_270|AUTO",
    "captions": {
        "srcFile": "string",
        "fontSize": integer,
        "fontColor": "WHITE|BLACK|YELLOW|RED|GREEN|BLUE"
    }
}

The solution also supports adding additional metadata, such as title, genre, or any other information you want to store in Amazon DynamoDB.
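For example, a metadata file could carry such extra fields alongside the workflow settings (the title and genre fields below are purely illustrative):

{
    "srcVideo": "example.mpg",
    "jobTemplate": "custom-job-template",
    "title": "My Example Title",
    "genre": "documentary"
}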

Encoding Templates

At launch, the solution creates three MediaConvert job templates which are used as the default encoding templates for the workflow:

  • MediaConvert_Template_2160p
  • MediaConvert_Template_1080p
  • MediaConvert_Template_720p

By default, the profiler step in the process step function checks the source video height and sets the "jobTemplate" parameter to one of the available templates. This variable is then passed to the encoding step, which submits a job to AWS Elemental MediaConvert. To customize the encoding templates used by the solution, you can either replace the existing templates or use the source metadata version of the workflow and define the jobTemplate as part of the source metadata file.
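As a rough sketch (not the solution's exact source), the profiler's selection amounts to mapping the source height onto the template variables listed earlier; event.srcHeight here comes from the MediaInfo output:

// Sketch of the profiler's default template selection (illustrative only).
if (!event.jobTemplate) {
    if (event.srcHeight > 1080) {
        event.jobTemplate = event.jobTemplate_2160p;
    } else if (event.srcHeight > 720) {
        event.jobTemplate = event.jobTemplate_1080p;
    } else {
        event.jobTemplate = event.jobTemplate_720p;
    }
}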

To replace the templates:

  1. Use the system templates or create 3 new templates through the MediaConvert console (see the AWS Elemental MediaConvert documentation for details).
  2. Update the environment variables for the input-validate lambda function with the names of the new templates.

To define the job template using metadata:

  1. Launch the solution with the source metadata parameter. See Appendix E for more details.
  2. Use the system templates or create a new template through the MediaConvert console (see the AWS Elemental MediaConvert documentation for details).
  3. Add "jobTemplate":"name of the template" to the metadata file; this overrides the template selection made by the profiler step in the process Step Functions.

QVBR Mode

The AWS Elemental MediaConvert Quality-Defined Variable Bitrate (QVBR) control mode gets the best video quality for a given file size and is recommended for OTT and video-on-demand content. The solution supports this feature and creates HLS, MP4, and DASH custom presets with the following QVBR levels and single-pass HQ encoding:

Resolution  MaxBitrate  QvbrQualityLevel
2160p       15000Kbps   9
1080p       8500Kbps    8
720p        6000Kbps    8
720p        5000Kbps    8
540p        3500Kbps    7
360p        1500Kbps    7
270p        400Kbps     7

For more detail please see QVBR and MediaConvert.

Accelerated Transcoding

Version 5.1.0 introduces support for accelerated transcoding, a pro-tier feature of AWS Elemental MediaConvert. This feature can be configured when launching the template with one of the following options:

  • ENABLED: All files uploaded will have acceleration enabled. Files that are not supported will not be processed, and the workflow will fail.
  • PREFERRED: All files uploaded will be processed, but only supported files will have acceleration enabled; the workflow will not fail.
  • DISABLED: No acceleration.

For more detail please see Accelerated Transcoding.

Source code

Node.js 18

  • archive-source: Lambda function to tag the source video in S3 to enable the Glacier lifecycle policy.
  • custom-resource: Lambda-backed CloudFormation custom resource to deploy MediaConvert templates and configure S3 event notifications.
  • dynamo: Lambda function to update DynamoDB.
  • encode: Lambda function to submit an encoding job to AWS Elemental MediaConvert.
  • error-handler: Lambda function to handle any errors created by the workflow or MediaConvert.
  • input-validate: Lambda function to parse S3 event notifications and define the workflow parameters.
  • media-package-assets: Lambda function to ingest an asset into MediaPackage-VOD.
  • output-validate: Lambda function to parse MediaConvert CloudWatch Events.
  • profiler: Lambda function to check the source video height and set the encoding job template.
  • step-functions: Lambda function to trigger AWS Step Functions.

Python 3.9

  • mediainfo: Lambda function to run MediaInfo on an S3 signed URL.

./source/mediainfo/bin/mediainfo must be made executable before deploying to Lambda.

Creating a custom build

The solution can be deployed through the CloudFormation template available on the solution home page: Video on Demand on AWS. To make changes to the solution, download or clone this repo, update the source code, and then run the deployment/build-s3-dist.sh script to deploy the updated Lambda code to an Amazon S3 bucket in your account.

Prerequisites:

1. Running unit tests for customization

Run unit tests to make sure added customization passes the tests:

cd ./deployment
chmod +x ./run-unit-tests.sh
./run-unit-tests.sh

2. Create an Amazon S3 Bucket

The CloudFormation template is configured to pull the Lambda deployment packages from an Amazon S3 bucket in the region the template is being launched in. Create a bucket in the desired region with the region name appended to the name of the bucket (e.g. for us-east-1, create a bucket named my-bucket-us-east-1).

aws s3 mb s3://my-bucket-us-east-1

3. Build MediaInfo

Build MediaInfo using the following commands on an EC2 instance running an Amazon Linux AMI.

sudo yum update -y
sudo yum groupinstall 'Development Tools' -y
sudo yum install libcurl-devel -y
wget https://mediaarea.net/download/binary/mediainfo/20.09/MediaInfo_CLI_20.09_GNU_FromSource.tar.xz
tar xvf MediaInfo_CLI_20.09_GNU_FromSource.tar.xz
cd MediaInfo_CLI_GNU_FromSource/
./CLI_Compile.sh --with-libcurl

Run these commands to confirm the compilation was successful:

cd MediaInfo/Project/GNU/CLI/
./mediainfo --version

Copy the mediainfo binary into the source/mediainfo/bin directory of your cloned repository.

If you'd like to use a precompiled MediaInfo binary for Lambda built by the MediaArea team, you can download it here. For more information, check out the MediaInfo site.

4. Create the deployment packages

First change directory into the deployment directory. Run the following commands to build the distribution.

chmod +x ./build-s3-dist.sh
./build-s3-dist.sh my-bucket video-on-demand-on-aws version

Note: The build-s3-dist script expects the bucket name as one of its parameters, and this value should not include the region suffix.

Run this command to ensure that you are an owner of the AWS S3 bucket you are uploading files to.

aws s3api head-bucket --bucket my-bucket-us-east-1 --expected-bucket-owner <YOUR-AWS-ACCOUNT-ID>

Deploy the distributable to the Amazon S3 bucket in your account:


aws s3 sync ./regional-s3-assets/ s3://my-bucket-us-east-1/video-on-demand-on-aws/<version>/ 
aws s3 sync ./global-s3-assets/ s3://my-bucket-us-east-1/video-on-demand-on-aws/<version>/

5. Launch the CloudFormation template.

  • Get the link of the video-on-demand-on-aws.template uploaded to your Amazon S3 bucket.
  • Deploy the Video on Demand to your account by launching a new AWS CloudFormation stack using the link of the video-on-demand-on-aws.template.

Additional Resources

Services

Other Solutions and Demos


Collection of operational metrics

This solution collects anonymized operational metrics to help AWS improve the quality and features of the solution. For more information, including how to disable this capability, please see the implementation guide.


Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.

Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

video-on-demand-on-aws's People

Contributors

aassadza, aws-solutions-github-bot, dch90, dependabot[bot], eggoynes, georgebearden, jimtharioamazon, jpeddicord, sandimciin, shsenior, stevemorad, tomnight


video-on-demand-on-aws's Issues

Feature Request: Ability to use already existing Destination S3 Bucket when deploying new stack

Hi Daniel,

We recently upgraded our VOD workflow from version 4 to 5.0.0. Everything went smoothly with the exception of the below.

In the previous version 4 stack we had multiple assets generated and made available to customers via CDN.

Requirement:
Even after we deploy the new version 5.0.0 stack we still wanted customers to be able to have access to previous assets.

Issue:
As you are aware, each time we deploy a new VOD stack it generates a new destination bucket. So if we want to make previous assets available to users, we have to copy the assets from the previous destination bucket to the new bucket.

Due to the very large number of assets we had, it took us a little over 4 1/2 days to copy the assets from the previous destination bucket to the new destination bucket.

I was wondering if you can update the CloudFormation template such that:

IF destination_bucket_specified_by_user THEN
    use the existing destination bucket
ELSE
    create a new destination bucket
END

Thanks

Sam

'NoneType' object is not subscriptable

Hi, I am getting the error "'NoneType' object is not subscriptable" in the mediainfo lambda which is part of the video-on-demand template.

Any idea what I am missing? There are very few logs to debug this!

SQS

Would love the ability to optionally subscribe an SQS queue to the SNS topic.

Error when encoding video with no audio

I'm getting the following error when I attempt to encode a video with no audio:

Invalid audio track specified for audio_selector [1]. Audio track [1] not found in input container.

Looking through some of the solutions, I don't see an easy way to modify the output presets to allow for the successful processing of video with no audio. I found this solution on StackOverflow, but I'm missing how I might make that change in the presets defined in the solution. Thanks for your time!
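One possible direction (a sketch only, not a tested fix): have the encode function drop the audio selector when the upstream MediaInfo step found no audio track. event.srcAudio is an assumed field here, not necessarily what the solution populates:

// Hypothetical guard before submitting the MediaConvert job.
if (!event.srcAudio || event.srcAudio.length === 0) {
    // No audio track in the source, so don't reference one in the input.
    delete job.Settings.Inputs[0].AudioSelectors;
}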

Acceleration mode causes side effects

hi there,

I am using Video On Demand on AWS . (https://github.com/awslabs/video-on-demand-on-aws#source-code, https://aws.amazon.com/solutions/video-on-demand-on-aws/)

When we have jobs running in acceleration mode ENABLED/PREFERRED, we have the following two side effects.

  1. Thumbnail URLs not generated

It was noted that for jobs that run with acceleration there is no thumbnail URL information in the DynamoDB table used by the VOD solution. However, in the destination S3 bucket the thumbnail folders and the thumbnails themselves seem to be generated correctly.

Is there a quick fix you can do or suggest so that the DynamoDB table entry used by VOD will contain the correct URL for the thumbnail information?

  2. Status update SNS message information seems to be incorrect

We have configured the status update interval to SECONDS_10.
With no acceleration we get updates at regular intervals with progress such as 10%, 30%, 55%, 75%, 80%, 100%, etc.

However, with acceleration enabled, the updates we get contain the first value for a long period, e.g. 5%, 5%, 5%, 5%... 100% (basically you see 5% and then 100%). When we log on to the AWS MediaConvert console directly and continue to monitor job progress (by refreshing the screen continuously), the job status shown is identical to the progress updates we get (it stays at around 5% for a long time and then skips directly to 100%).
Is this an issue in the AWS VOD solution or in AWS MediaConvert itself?
Can you suggest any workaround?

Additional information:
We are using custom job templates on AWS MediaConvert. We have been told that there is currently a known issue where, when submitting jobs via the AWS VOD solution, template settings such as acceleration will not work correctly.

Therefore, to get acceleration working with custom templates we were asked to modify the VOD code as below. In the code below, search for the comments "// Christer's fix".

Thanks

Sam

/*******************************************************************************
 * Copyright 2019 Amazon.com, Inc. and its affiliates. All Rights Reserved.
 * Licensed under the Amazon Software License (the "License").
 * You may not use this file except in compliance with the License.
 * A copy of the License is located at
 *     http://aws.amazon.com/asl/
 * or in the "license" file accompanying this file. This file is distributed
 * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
 * express or implied. See the License for the specific language governing
 * permissions and limitations under the License.
 *******************************************************************************/

const AWS = require('aws-sdk');
const error = require('./lib/error.js');

//fix patch
const _ = require('lodash');

const applySettingsIfNeeded = (isCustomTemplate, originalGroup, customGroup) => {
    if (isCustomTemplate) {
        return _.merge({}, originalGroup, customGroup);
    }

    return originalGroup;
};
//fix patch

exports.handler = async (event) => {
    console.log('REQUEST:: ', JSON.stringify(event, null, 2));

    const mediaconvert = new AWS.MediaConvert({
        endpoint: process.env.EndPoint
    });

    try {
        // define paths:
        let inputPath = 's3://' + event.srcBucket + '/' + event.srcVideo;
        let outputPath = 's3://' + event.destBucket + '/' + event.guid;

        // Baseline for the job parameters
        let job = {
            "JobTemplate": event.jobTemplate,
            "Role": process.env.MediaConvertRole,
            "UserMetadata": {
                "guid": event.guid,
                "workflow": event.workflowName,
                "Qvbr": process.env.Qvbr
            },
            // Christer's fix
            "StatusUpdateInterval": "SECONDS_10",
            "AccelerationSettings": {
                // "Mode": "PREFERRED"
                "Mode": "ENABLED"
                // "Mode": "DISABLED"
            },
            "Settings": {
                // Christer's fix
                "TimecodeConfig": {
                    "Source": "ZEROBASED"
                },
                "Inputs": [{
                    "AudioSelectors": {
                        "Audio Selector 1": {
                            "Offset": 0,
                            "DefaultSelection": "NOT_DEFAULT",
                            "ProgramSelection": 1,
                            "SelectorType": "TRACK",
                            "Tracks": [
                                1
                            ]
                        }
                    },
                    "VideoSelector": {
                        "ColorSpace": "FOLLOW"
                    },
                    "FilterEnable": "AUTO",
                    "PsiControl": "USE_PSI",
                    "FilterStrength": 0,
                    "DeblockFilter": "DISABLED",
                    "DenoiseFilter": "DISABLED",
                    // Christer's fix
                    // "TimecodeSource": "EMBEDDED",
                    "TimecodeSource": "ZEROBASED",
                    "FileInput": inputPath,
                }],
                "OutputGroups": []
            }
        };

        let mp4 = {
            "Name": "File Group",
            "OutputGroupSettings": {
                "Type": "FILE_GROUP_SETTINGS",
                "FileGroupSettings": {
                    "Destination": outputPath + '/mp4/',
                }
            },
            "Outputs": []
        };

        let hls = {
            "Name": "HLS Group",
            "OutputGroupSettings": {
                "Type": "HLS_GROUP_SETTINGS",
                "HlsGroupSettings": {
                    "SegmentLength": 5,
                    "MinSegmentLength": 0,
                    "Destination": outputPath + '/hls/',
                }
            },
            "Outputs": []
        };

        let dash = {
            "Name": "DASH ISO",
            "OutputGroupSettings": {
                "Type": "DASH_ISO_GROUP_SETTINGS",
                "DashIsoGroupSettings": {
                    "SegmentLength": 30,
                    "FragmentLength": 3,
                    "Destination": outputPath + '/dash/',
                }
            },
            "Outputs": []
        };

        let cmaf = {
            "Name": "CMAF",
            "OutputGroupSettings": {
                "Type": "CMAF_GROUP_SETTINGS",
                "CmafGroupSettings": {
                    "SegmentLength": 30,
                    "FragmentLength": 3,
                    "Destination": outputPath + '/cmaf/',
                }
            },
            "Outputs": []
        };

        let mss = {
            "Name": "MS Smooth",
            "OutputGroupSettings": {
                "Type": "MS_SMOOTH_GROUP_SETTINGS",
                "MsSmoothGroupSettings": {
                    "FragmentLength": 2,
                    "ManifestEncoding": "UTF8",
                    "Destination": outputPath + '/mss/',
                }
            },
            "Outputs": []
        };

        let frameCapture = {
            "CustomName": "Frame Capture",
            "Name": "File Group",
            "OutputGroupSettings": {
                "Type": "FILE_GROUP_SETTINGS",
                "FileGroupSettings": {
                    "Destination": outputPath + "/thumbnails/"
                }
            },
            "Outputs": [{
                "NameModifier": "_tumb",
                "ContainerSettings": {
                    "Container": "RAW"
                },
                "VideoDescription": {
                    "ColorMetadata": "INSERT",
                    "AfdSignaling": "NONE",
                    "Sharpness": 100,
                    "Height": event.frameHeight,
                    "RespondToAfd": "NONE",
                    "TimecodeInsertion": "DISABLED",
                    "Width": event.frameWidth,
                    "ScalingBehavior": "DEFAULT",
                    "AntiAlias": "ENABLED",
                    "CodecSettings": {
                        "FrameCaptureSettings": {
                            "MaxCaptures": 10000000,
                            "Quality": 80,
                            "FramerateDenominator": 5,
                            "FramerateNumerator": 1
                        },
                        "Codec": "FRAME_CAPTURE"
                    },
                    "DropFrameTimecode": "ENABLED"
                }
            }]
        };

        let params = {
            Name: event.jobTemplate
        };

        let tmpl = await mediaconvert.getJobTemplate({
            Name: event.jobTemplate
        }).promise();
        console.log(`TEMPLATE:: ${JSON.stringify(tmpl, null, 2)}`);

        tmpl.JobTemplate.Settings.OutputGroups.forEach(group => {
            let found = false,
                defaultGroup = {};

            if (group.OutputGroupSettings.Type === 'FILE_GROUP_SETTINGS') {
                found = true;
                defaultGroup = mp4;
            }

            if (group.OutputGroupSettings.Type === 'HLS_GROUP_SETTINGS') {
                found = true;
                defaultGroup = hls;
            }

            if (group.OutputGroupSettings.Type === 'DASH_ISO_GROUP_SETTINGS') {
                found = true;
                defaultGroup = dash;
            }

            if (group.OutputGroupSettings.Type === 'MS_SMOOTH_GROUP_SETTINGS') {
                found = true;
                defaultGroup = mss;
            }

            if (group.OutputGroupSettings.Type === 'CMAF_GROUP_SETTINGS') {
                found = true;
                defaultGroup = cmaf;
            }

            if (found) {
                console.log(`${group.Name} found in Job Template`);

                const outputGroup = applySettingsIfNeeded(event.isCustomTemplate, defaultGroup, group);
                job.Settings.OutputGroups.push(outputGroup);
            }
        });

        // let tmpl = await mediaconvert.getJobTemplate(params).promise();

        // // OutputGroupSettings:Type is required and must be one of the following:
        // // HLS_GROUP_SETTINGS | DASH_ISO_GROUP_SETTINGS | FILE_GROUP_SETTINGS | MS_SMOOTH_GROUP_SETTINGS | CMAF_GROUP_SETTINGS
        // // Using this to determine the output types in the job Template

        // tmpl.JobTemplate.Settings.OutputGroups.forEach(function(output) {

        //     if (output.OutputGroupSettings.Type === 'FILE_GROUP_SETTINGS') {
        //         console.log(output.Name, ' found in Job Template');
        //         job.Settings.OutputGroups.push(mp4);
        //     }

        //     if (output.OutputGroupSettings.Type === 'HLS_GROUP_SETTINGS') {
        //         console.log(output.Name, ' found in Job Template');
        //         job.Settings.OutputGroups.push(hls);
        //     }

        //     if (output.OutputGroupSettings.Type === 'DASH_ISO_GROUP_SETTINGS') {
        //         console.log(output.Name, ' found in Job Template');
        //         job.Settings.OutputGroups.push(dash);
        //     }

        //     if (output.OutputGroupSettings.Type === 'MS_SMOOTH_GROUP_SETTINGS') {
        //         console.log(output.Name, ' found in Job Template');
        //         job.Settings.OutputGroups.push(mss);
        //     }

        //     if (output.OutputGroupSettings.Type === 'CMAF_GROUP_SETTINGS') {
        //         console.log(output.Name, ' found in Job Template');
        //         job.Settings.OutputGroups.push(cmaf);
        //     }

        // });

        if (event.frameCapture) {
            job.Settings.OutputGroups.push(frameCapture);
        }

        let data = await mediaconvert.createJob(job).promise();
        event.encodingJob = job;
        event.encodeJobId = data.Job.Id;
        console.log(JSON.stringify(data, null, 2));
    } catch (err) {
        console.log(err);
        await error.handler(event, err);
        throw err;
    }

    return event;
};

Assistance with investigating a VOD V5.0.0 missing job issue

Hi @dscpinheiro,

Need your assistance in investigating a VOD V5.0.0 workflow issue where sometimes jobs submitted to VOD do NOT trigger a MediaConvert job.

To give you some background on the issue:

We have configured our VOD stack to trigger the workflow on MetaDataFile. We used the prior VOD version 4.0 for about 4 months and submitted 2890 VOD jobs. For that entire duration the occurrence of this particular issue was zero; the issue did not exist in the previous version of VOD.

We have been using VOD V5.0.0 for about 2 months and have submitted around 1280 jobs, and this particular issue has occurred approximately 4 times.

Expected Behaviour:
After the media file is added to the source S3 bucket and then the metadata file is added, we expect the VOD workflow to trigger an AWS MediaConvert job.

Problem Behaviour:
On some occasions after the metadata file is added to S3 we have noticed no corresponding AWS MediaConvert job getting created.

Process of elimination

  • We can eliminate the possibility of a corrupted source media file as the cause, as I have manually verified the integrity of the media files for the problem jobs
  • We can eliminate the possibility of an invalid metadata file, as manually adding the same file a second time for failed jobs triggers the job successfully
  • aws-vod-v5-0-0-error-handler logs indicate no corresponding errors
  • Logs for aws-vod-v5-0-0-mediainfo, aws-vod-v5-0-0-step-functions, etc. do show that the particular lost jobs actually went through these functions; therefore we know that when the metadata file was added to the source bucket it did trigger the VOD V5.0.0 workflow, but it didn't end up triggering the AWS MediaConvert job

For the most part, VOD 5.0.0 is a black box for me. Therefore it would be great if you could guide me on the expected function flow (function call sequence), as well as give any tips on where to look to identify the potential root cause of the issue.

Thanks in advance

Sam

MOV files not supported?

Hi, I see that the whole suite works well when I place a .MP4 file in the source bucket. Roughly around 4 minutes later I can see the 3 different types of streamable CDN links.
However, when I upload a .MOV file (I recorded a 1-minute video on my iPhone 6s with default settings, which is around 120MB), I don't see the workflow getting triggered at all! There is no notification to my email.
Where can I find the input video file types that are supported?

Want to create different buckets in the end

I want to implement a lambda function where every video converted will automatically create a new bucket with a tag, and if the video is from the same customer, that video will go into that existing bucket.

How often will Node.js updates cause breaking changes?

Disclaimer: I am a new user and also new to AWS in general, so forgive me if I've misunderstood anything.

I have been doing some testing and trying to build a video upload & streaming website using the Video On Demand template and I recently started receiving the following automated emails:

Subject: AWS Lambda: Node.js 8.10 is EOL, please migrate your functions to a newer runtime version

Hello,

We are contacting you as we have identified that your AWS Account currently has one or more Lambda functions using Node.js 8.10, which will reach its EOL at the end of 2019.

What’s happening?

The Node community has decided to end support for Node.js 8.x on December 31, 2019 [1]. From this date forward, Node.js 8.x will stop receiving bug fixes, security updates, and/or performance improvements. To ensure that your new and existing functions run on a supported and secure runtime, language runtimes that have reached their EOL are deprecated in AWS [2].

For Node.js 8.x, there will be 2 stages to the runtime deprecation process:

  1. Disable Function Create – Beginning January 6, 2020, customers will no longer be able to create functions using Node.js 8.10

  2. Disable Function Update – Beginning February 3, 2020, customers will no longer be able to update functions using Node.js 8.10

After this period, both function creation and updates will be disabled permanently. However, existing Node 8.x functions will still be available to process invocation events.

What do I need to do?

We encourage you to update all of your Node.js 8.10 functions to the newer available runtime version, Node.js 10.x[3] or Node.js 12.x[4]. You should test your functions for compatibility with either the Node.js 10.x or Node.js 12.x language version before applying changes to your production functions.

What if I have issues/What if I need help?

Please contact us through AWS Support [5] or the AWS Developer Forums [6] should you have any questions or concerns.

[1] https://github.com/nodejs/Release
[2] https://docs.aws.amazon.com/lambda/latest/dg/runtime-support-policy.html
[3] https://aws.amazon.com/about-aws/whats-new/2019/05/aws_lambda_adds_support_for_node_js_v10/
[4] https://aws.amazon.com/about-aws/whats-new/2019/11/aws-lambda-supports-node-js-12/
[5] https://aws.amazon.com/support
[6] https://forums.aws.amazon.com/forum.jspa?forumID=186

Sincerely,
Amazon Web Services

The video on demand solution was 'sold' to us by an AWS sales rep as a more or less set-and-forget solution; it took me some time, reading, and going back and forth with your support staff to establish that this had something to do with the lambda functions that are part of the VOD workflow. The PDF documentation was updated last month to include the following notice (easily missed, in my opinion):

Solution Updates:
Video on Demand on AWS version 5.0 features MediaPackage functionality, and uses the most up-to-date Node.js runtime. Version 4.2 uses the Node.js 8.10 runtime, which reaches end-of-life on December 31, 2019. In January, AWS Lambda will block the create operation and, in February, Lambda will block the update operation. For more information, see Runtime Support Policy in the AWS Lambda Developer Guide.

To continue using this solution with the latest features and improvements, you must deploy version 5.0 as a new stack. For customers who do not want to use the new functionality, you can update your existing stack to version 4.3. Version 4.3 keeps the same functionality as version 4.2 but uses the most up-to-date runtimes. You can find both versions in the solution’s GitHub repository. You can also download the version 4.3 template.

(please forgive all the copy pastes, I'm trying to include as much info as possible in one place just for the benefit of anyone else in a similar boat to me!)

My concern was that deploying a new stack would involve losing the crucial data in the DynamoDB table associated with the stack (on a live version of my site this could be hundreds of videos plus metadata), and whether or not the contents of the destination S3 buckets would be somehow orphaned. I don't think it's very clear from your documentation how to perform an update to version 5.0 on a live site or project that depends on this stack, and what is affected.

When I asked support staff for some guidance on this I was told:

  • Create a new stack with version 5.0
  • Update this stack by deleting/removing dynamodb::table resource from the template
  • Once the above update is completed successfully, perform a second update on this stack to import the dynambodb table which you were using into this stack.

This might make sense to someone with a bit more AWS experience but for a relative newbie like me it didn't fully address my concerns about losing my current data.

When I enquired further I was given instructions for updating from version 4.2 to version 4.3 instead, but I don't see that as a future-proof solution at all; how long until 4.3 is obsoleted due to the use of some deprecated/EOL code?

I'll include the response to the rest of my questions from the support staff who also recommended I post here for more info. Again, apologies for so much reading, but this is useful info for anyone stuck in my position.

Below answers are not relevant when updating from 4.2 to 4.3. These are concerned with version 5.0 (as creating a new stack is the only option for version 5.0), and you can completely ignore this information if you would like to.

===========

Q1: When you say create a new stack do you mean to run "Launch Solution" again from the PDF like I did last time?

Yes, the pdf that you were referring to [2] is now updated and "Launch solution" will now launch a new stack with version 5.0. This is a quick solution. But there are other methods to upload a template to cloudformation [4].

Q2: How do I delete/remove the dynamodb::table resource from "the template"? Which "template" are you referring to here, is the CloudFormation "stack" a template?

Here is a documentation[5] which details steps to modify a cloudformation template. Once you get to "step 3: Modify the template", search for AWS::DynamoDB::Table resource and remove this resource section from the template file.

Q3: How do I perform updates on "stacks", and how would I perform an update which imports a dynamodb table?

To import dynamodb table, please see this documentation[6]. And, here is a documentation[3] that describes steps to perform updates on a stack.

Q4: Do I need to export my current dynamoDB table and if so how do I do that? (and also reimport).

If you follow documentation [6] it is not necessary to export dynamodb resource. Importing itself helps.

Q5: Also I need to know how often there will be breaking changes like this because my concern is that if this is something that can happen often then I'm not sure I can trust what I'm using. Will there be potential of losing data every time I attempt to update the stack/workflow/cloudformation/template? (I'm still not clear what the correct terminology is here).

I would suggest moving from version 4.2 to 4.3, which is not a breaking change as this is not replacing the resources. As Premium Support members do not have an idea of future updates, I would recommend you try creating a GitHub issue under the VideoOnDemand GitHub issues [7]. One of our solutions architects will address your concern.

Q6: will all of the security credentials and IAM roles and Cognito Identity pools etc. need to be redone as well?

These resources need not be redone.

======

I hope this information helps you. Please do not hesitate to reach back to me if you have further questions or concerns. I will be glad to assist you further.

Thank you and have a great rest of the day!

References:

  1. GitHub ChangeLog: https://github.com/awslabs/video-on-demand-on-aws/blob/master/CHANGELOG.md
  2. VideoOnDemand: https://s3.amazonaws.com/solutions-reference/video-on-demand-on-aws/latest/video-on-demand-on-aws.pdf
  3. Updating stack: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-direct.html
  4. Selecting a template: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-using-console-create-stack-template.html
  5. Modifying a template: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-get-template.html
  6. Importing resources into stack: https://aws.amazon.com/blogs/aws/new-import-existing-resources-into-a-cloudformation-stack/
  7. github issues page: https://github.com/awslabs/video-on-demand-on-aws/issues

Can anyone shed some light on whether version 5.0 is something I can rely on going forward, or should I expect to have to do the above steps relatively often? I imagine a lot of users would need to know this so that they can budget accordingly for maintenance and manual updating costs. Is there any way to automate updating the stack? Is that wise? How would you suggest maintaining this stack in a production environment?

HTTP/1.1 307 Temporary Redirect

How to reproduce

  1. Deploy the solution in eu-west-1
  2. upload the video
  3. Download the transcoded file: HTTP GET https://xxx.cloudfront.net/yyyy/dash/zzzz.mpd

Issue
HTTP/1.1 307 Temporary Redirect

<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>TemporaryRedirect</Code><Message>Please re-send this request to the specified temporary endpoint. Continue to use the original request endpoint for future requests.</Message>

Solution
Add the Region to the S3 origin domain name.
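In other words (bucket name hypothetical), the CloudFront origin should point at the regional S3 endpoint rather than the global one:

my-bucket.s3.amazonaws.com            (global endpoint; can return 307 for a newly created bucket)
my-bucket.s3.eu-west-1.amazonaws.com  (regional endpoint)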

Unexpected behaviour while using additional MC template

hi there,
I have been using your AWS VOD solution for a while (https://github.com/awslabs/video-on-demand-on-aws#source-code, https://aws.amazon.com/solutions/video-on-demand-on-aws/).

I am creating metadata JSON files to trigger workflows:

{
    "srcVideo": "02067ab9-953a-4764-9683-5638d93e14d3",
    "jobTemplate": "VOD-25fps-DRM",
    "priority": 50,
    "archiveSource": false,
    "frameCapture": false,
    "qvbr": true,
    "StatusUpdateInterval": "SECONDS_60",
    "accelerationSettings": { "Mode": "ENABLED" }
}

I have attached the MC templates I am using for this.

We verified that the DASH version (not the HLS) was generated with an unexpected segment length (30 secs vs 6 secs) despite the template settings (attached). I am also attaching the DynamoDB record for this job and the generated mpd.

aws-mediaconvert-job-1570075372526-rrcqr6.json.zip
MediaConvert-25fps-DRM.zip
dydb-record.zip

Any idea on this? Your advice?

Additionally, do you advise changing your own MC templates or just adding new ones?
Ideally, as we did, I would prefer just to add them so that we do not modify your solution at all, just putting a wrapper on it.
4a36aeda-6aaa-4a6c-9a3e-45b3c3c9bd8b.mpd.zip

output-validate doesn't support non-root file paths

How to reproduce

This happens if you're using the Source Metadata Option trigger and passing a bucket folder in destBucket.

Ex:

  {
    srcVideo: any,
    destBucket: `${environment.assetsBucket}/folder1/folder2`
  };

Reason

I believe the reason is the buildUrl function, which is ignoring everything after the 3rd item when calling splice. In the example above, all URLs would be incomplete because we would have more items.

Test case

I have created this fork with a test case and a fix for the problem. I would gladly create a PR. I just need to confirm whether there is any reason why the buildUrl function is ignoring everything after the 3rd item.
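For reference, here is a sketch of the kind of fix described above; the function name follows the issue, but the body is illustrative rather than the fork's actual code:

// Keep every path segment after the bucket name instead of splicing only the
// first few, so nested "directories" survive in the playback URL.
const buildUrl = (cloudFront, outputLocation) => {
    // outputLocation example: 's3://assets-bucket/folder1/folder2/guid/hls/index.m3u8'
    const segments = outputLocation.split('/');
    const key = segments.slice(3).join('/'); // drop 's3:', '' and the bucket name

    return `https://${cloudFront}/${key}`;
};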

SCTE management and support

Hi, we would need to add SCTE markers for advertising.

Apart from using a template that already has the ESAM XML embedded, is there any other way using the solution? Is the management of an external XML file for that already on your roadmap?

Add support for MongoDB

Very nice work with this stack. I was wondering if you would be able to add support for MongoDB. I love how it auto-updates DynamoDB when it is done processing (and at each stage of processing), but sadly we don't use DynamoDB and I can't figure out how to get it to link with MongoDB instead.

How to use custom job template

I have deployed the solution without any changes. I enabled the workflow that triggers when a metadata file is uploaded.

I have to make a few simple changes:

  1. I don't need to output files in DASH format
  2. I also wanted to keep the file name of the mp4 output the same as the input file.

I cloned an existing template.
I removed the "DASH ISO" outputs.
I removed one of the outputs from the file group.
I changed the name modifier from "NameModifier": "_Mp4_Avc_Aac_16x9_1920x1080p_24Hz_6Mbps_qvbr" to blank. I assumed that removing the modifier would let me keep the original file name.

My hope was that this would be enough to remove the DASH output and keep the name of the mp4 file the same as the input file.

I created a job template name and referred to the new template in a new metadata file.

Here is my metadata file:

{
    "srcVideo": "99_Overview_Lecture.mp4",
    "ArchiveSource": true,
    "FrameCapture": false,
    "JobTemplate": "video-on-demand-v102_Ott_1080p_Avc_Aac_16x9_qvbr_v2"
}

When I ran the workflow it still created DASH files and appended _Mp4_Avc_Aac_16x9_1920x1080p_24Hz_6Mbps_qvbr to the mp4 file name.

So it seems to me that even though the metadata file refers to my template, it was not used.

How can I control the name of output mp4 file?
Why is my custom template not used?

"directory" support

There are some lines of code that assume no directory structure (at least as far as S3 supports directories by the "/" convention) or assume at most one directory in the key, resulting in errors like:

"function": "VidTest-validate-outputs",
 "error": "Forbidden: null"

Confusion on the inputRotate option

I see that there is an environment variable inputRotate for the input-validate lambda. However, when I set it to AUTO in the lambda web interface and add the line inputRotate: process.env.InputRotate to index.js, the video output doesn't seem to respect the rotation settings.

I'm trying to understand if I'm just misreading the documentation. Do I need to use the Source Metadata Option and pass inputRotate there, or should it be sufficient to modify the input-validate lambda with the environment variable? Thanks!
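For reference, the options list earlier on this page suggests the metadata-file route would look like this:

{
    "srcVideo": "example.mov",
    "inputRotate": "AUTO"
}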

Custom template in metadata file not working - which minimal fix? / Failing to find the right preset for a custom template

Hi,
we need to personalise the solution using our own video settings, creating at least 2 custom templates to be used for different business scenarios.
We created our own template and I am trying to use it by specifying it in my metadata file:
{
    "srcVideo": "DIG_MINIMATCH_ARS_TOT-Test-AWS-Sol.mp4",
    "ArchiveSource": false,
    "FrameCapture": false,
    "JobTemplate": "Company-XX_VOD_50fps",
    "Priority": 100,
    "AccelerationSettings": {
        "Mode": "ENABLED"
    }
}

I have 2 issues and am not sure if they are related, so for the time being I am creating this single issue; happy to split it in 2.

Issues:

Issue 1)
The solution does not seem to properly use the template specified in the metadata file, as you can see from the attached logs. Here is a quick summary for your analysis/review.

  • In the input-validate function the event object does not have the template value in it, while that value is present in the parameters data ("JobTemplate": "Company-XX_VOD_50fps")

  • That value ("JobTemplate": "Company-XX_VOD_50fps") is still present in the vod-profiler function parameters data.

  • In the profiler function code (line 73), the original code seems to look for the value in event.jobTemplate, i.e. if (!event.jobTemplate):

    console.log("EVENT\n" + JSON.stringify(event, null, 2));
    // Update:: added support to pass in a custom encoding Template instead of using the
    if (!event.jobTemplate) {
    // Match the jobTemplate to the encoding Profile.
    const jobTemplates = {
    '2160': event.jobTemplate_2160p,
    '1080': event.jobTemplate_1080p,
    '720': event.jobTemplate_720p
    };

So, before that line, I added some print code:

// ff
console.log('ff - event jobTemplate before:: ', event.jobTemplate); // checking the value of the event object attribute: it is undefined

Question: why is that, and how would we need to fix it compatibly with the original codebase?

Issue 2:

To overcome that for debugging purposes, and to test our template programmatically, I added the following code in the encode function before the if (!event.jobTemplate):
event.jobTemplate = 'Company-XX-Vod-Hd'

When I hard-code the template in the encode function, it finally uses the right template, but then I get the below error: "The specified preset was not found: presetArn=arn:aws:mediaconvert:ap-southeast-2:160924941480:presets/VOD_Mp4_Avc_Aac_16x9_1920x1080p_24Hz_6Mbps_qvbr"

Question: in which function is it specified to look for VOD_Mp4_Avc_Aac_16x9_1920x1080p_24Hz_6Mbps_qvbr rather than the presets assigned to the new template? Please keep in mind that the same template works in the AWS console UI, so it would seem that the template's association with the presets is correct.

So, summarising, my questions are:

  • how to properly pass a custom template in the metadata file.
  • is that really working? Has anyone ever used it?
  • what do you make of my investigation?
    Finally, I could fix this myself, but I would like to keep the current codebase unchanged (or minimally changed); according to my investigation some changes seem necessary. Any advice on this?

How do we need to manage the presets for custom templates? Ideally, we would create at least 2 custom templates.

2019-09-18T04:57:50.146Z 82bc3222-4094-46e4-98ea-6498c4a93cf0 { NotFoundException: The specified preset was not found: presetArn=arn:aws:mediaconvert:ap-southeast-2:160924941480:presets/VOD_Mp4_Avc_Aac_16x9_1920x1080p_24Hz_6Mbps_qvbr.
at Object.extractError (/var/runtime/node_modules/aws-sdk/lib/protocol/json.js:51:27)
at Request.extractError (/var/runtime/node_modules/aws-sdk/lib/protocol/rest_json.js:55:8)
at Request.callListeners (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:106:20)
at Request.emit (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:78:10)
at Request.emit (/var/runtime/node_modules/aws-sdk/lib/request.js:683:14)
at Request.transition (/var/runtime/node_modules/aws-sdk/lib/request.js:22:10)
at AcceptorStateMachine.runTo (/var/runtime/node_modules/aws-sdk/lib/state_machine.js:14:12)
at /var/runtime/node_modules/aws-sdk/lib/state_machine.js:26:10
at Request.<anonymous> (/var/runtime/node_modules/aws-sdk/lib/request.js:38:9)
at Request.<anonymous> (/var/runtime/node_modules/aws-sdk/lib/request.js:685:12)
message: 'The specified preset was not found: presetArn=arn:aws:mediaconvert:ap-southeast-2:160924941480:presets/VOD_Mp4_Avc_Aac_16x9_1920x1080p_24Hz_6Mbps_qvbr.',
code: 'NotFoundException',
time: 2019-09-18T04:57:50.145Z,
requestId: '9fab83da-0777-450d-a0e8-7ba8adbef9e3',
ticket1.txt
ticket1.1.txt

MediaInfo throwing a fatal error

2018-10-17T05:19:55.755Z 547088fc-235d-497e-9a1c-1cebb8f4ce9c TypeError: result.Mediainfo.File.track.forEach is not a function
at $xmlParserInstance.parseString (/var/task/lib/mediaInfoCommand.js:82:35)
at Parser.<anonymous> (/var/task/node_modules/xml2js/lib/parser.js:303:18)
at emitOne (events.js:96:13)
at Parser.emit (events.js:188:7)
at Object.onclosetag (/var/task/node_modules/xml2js/lib/parser.js:261:26)
at emit (/var/task/node_modules/xml2js/node_modules/sax/lib/sax.js:624:35)
at emitNode (/var/task/node_modules/xml2js/node_modules/sax/lib/sax.js:629:5)
at closeTag (/var/task/node_modules/xml2js/node_modules/sax/lib/sax.js:889:7)
at Object.write (/var/task/node_modules/xml2js/node_modules/sax/lib/sax.js:1436:13)
at Parser.exports.Parser.Parser.parseString (/var/task/node_modules/xml2js/lib/parser.js:322:31)

I'm still trying to understand what's going on here, but I think it's from a corrupted source video. When pulling the source video, it's not playable, and various desktop video players aren't able to inspect it.

This error would make sense, but it's misleading; there should probably be a validation check that this collection exists before trying to iterate with forEach.
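A minimal sketch of the validation check suggested above, assuming the xml2js result shape implied by the stack trace:

// Guard against corrupted or truncated sources before iterating the tracks.
const file = result && result.Mediainfo && result.Mediainfo.File;
const tracks = file && file.track;

if (!Array.isArray(tracks)) {
    throw new Error('MediaInfo returned no readable tracks; the source file may be corrupted');
}

tracks.forEach((track) => {
    // ... existing per-track handling ...
});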

Python throws exception when ./build-s3-dist.sh is run

Hi,

When I run ./build-s3-dist.sh <bucket_name> video-on-demand-on-aws v5.0.0-custom, Python3.7 throws the error pasted below:

Installing collected packages: urllib3, six, python-dateutil, jmespath, docutils, botocore, s3transfer, boto3
Exception:
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/pip/basecommand.py", line 215, in main
    status = self.run(options, args)
  File "/usr/lib/python3/dist-packages/pip/commands/install.py", line 360, in run
    prefix=options.prefix_path,
  File "/usr/lib/python3/dist-packages/pip/req/req_set.py", line 784, in install
    **kwargs
  File "/usr/lib/python3/dist-packages/pip/req/req_install.py", line 851, in install
    self.move_wheel_files(self.source_dir, root=root, prefix=prefix)
  File "/usr/lib/python3/dist-packages/pip/req/req_install.py", line 1064, in move_wheel_files
    isolated=self.isolated,
  File "/usr/lib/python3/dist-packages/pip/wheel.py", line 247, in move_wheel_files
    prefix=prefix,
  File "/usr/lib/python3/dist-packages/pip/locations.py", line 153, in distutils_scheme
    i.finalize_options()
  File "/usr/lib/python3.6/distutils/command/install.py", line 274, in finalize_options
    raise DistutilsOptionError("can't combine user with prefix, "
distutils.errors.DistutilsOptionError: can't combine user with prefix, exec_prefix/home, or install_(plat)base
Command failed with exit code: 2
This occurs when creating a deployment package for mediainfo.

"Should have at least 1 items" If Settings Are Incorrect

When running the MediaConvert template, the following error gets thrown if your preset choices aren't correct or if you leave one empty (if you don't want DASH, for example):

Error
"BadRequestException"
Cause
{
  "errorMessage": "/outputGroups/0/outputs: Should have at least 1 items",
  "errorType": "BadRequestException",
  "stackTrace": [
    "Object.extractError (/var/task/node_modules/aws-sdk/lib/protocol/json.js:48:27)",
    "Request.extractError (/var/task/node_modules/aws-sdk/lib/protocol/rest_json.js:52:8)",
    "Request.callListeners (/var/task/node_modules/aws-sdk/lib/sequential_executor.js:105:20)",
    "Request.emit (/var/task/node_modules/aws-sdk/lib/sequential_executor.js:77:10)",
    "Request.emit (/var/task/node_modules/aws-sdk/lib/request.js:683:14)",
    "Request.transition (/var/task/node_modules/aws-sdk/lib/request.js:22:10)",
    "AcceptorStateMachine.runTo (/var/task/node_modules/aws-sdk/lib/state_machine.js:14:12)",
    "/var/task/node_modules/aws-sdk/lib/state_machine.js:26:10",
    "Request.<anonymous> (/var/task/node_modules/aws-sdk/lib/request.js:38:9)",
    "Request.<anonymous> (/var/task/node_modules/aws-sdk/lib/request.js:685:12)"
  ]
}

For example, if you exclude one:

. . .
MP4:
    Description: Specify the Mp4 presets to be used, leave blank to disable
    Type: String
    Default: "720"

  HLS:
    Description: Specify the Hls presets to be used, leave blank to disable
    Type: String
    Default: "1080,720,540,360,270"

  DASH:
    Description: Specify the Dash presets to be used, leave blank to disable
    Type: String
    Default: ""
. . .

or enter an invalid choice (270 isn't a preset for MP4):

. . .
MP4:
    Description: Specify the Mp4 presets to be used, leave blank to disable
    Type: String
    Default: "270"

  HLS:
    Description: Specify the Hls presets to be used, leave blank to disable
    Type: String
    Default: "1080,720,540,360,270"

  DASH:
    Description: Specify the Dash presets to be used, leave blank to disable
    Type: String
    Default: "720"
. . .

The step function for the ingest process (stackname-Process) sends the following input to the Lambda that calls process/media-convert-encode.js:

{
  "guid": "4ed0adb8-4609-4e53-9e4e-d9587d04d764",
  "startTime": "2018-03-15 23:18.9",
  "workflowStatus": "ingest",
  "frameCapture": false,
  "mp4": [
    270
  ],
  "srcVideo": "source.mp4",
  "srcBucket": "STACK-source-fjgtig8f0qyl",
  "mp4Bucket": "STACK-mp4destination-864x0xmc99nb",
  "abrBucket": "STACK-abrdestination-svrx46v74rkl",
  "hls": [
    720
  ],
  "dash": [
    720
  ],
  "srcHeight": 720,
  "srcWidth": 1280
}

The process then checks each type of video output defined in the templates and tries to create an output group for the job. If the presets are empty or invalid, the following lines will insert an empty OutputGroup into the Outputs array for each preset:
https://github.com/awslabs/video-on-demand-on-aws/blob/master/source/process/media-convert-encode.js#L236
https://github.com/awslabs/video-on-demand-on-aws/blob/master/source/process/media-convert-encode.js#L288
https://github.com/awslabs/video-on-demand-on-aws/blob/master/source/process/media-convert-encode.js#L344

I will try to get a bugfix in for this, but making sure the presets are correct will prevent this from happening :)
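A minimal sketch of such a guard (variable names follow the encode code pasted earlier on this page; treat it as illustrative, not the actual bugfix):

// Only add an output group to the job when it actually contains outputs.
[mp4, hls, dash, mss, cmaf].forEach((group) => {
    if (group.Outputs.length > 0) {
        job.Settings.OutputGroups.push(group);
    }
});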

MediaInfo step: stdout maxBuffer exceeded

Hi,

I deployed the stack and tried to upload a file; it's a 22.6 MB 1080p mp4. The MediaInfo lambda function fails with the following error:

{
  "errorMessage": "stdout maxBuffer exceeded",
  "errorType": "Error",
  "stackTrace": [
    "Socket.<anonymous> (child_process.js:257:14)",
    "emitOne (events.js:96:13)",
    "Socket.emit (events.js:188:7)",
    "readableAddChunk (_stream_readable.js:176:18)",
    "Socket.Readable.push (_stream_readable.js:134:10)",
    "Pipe.onread (net.js:547:20)"
  ]
}

My guess is that the child process started to call MediaInfo fails here.

I guess you could pass a maxBuffer parameter to that call. The default value is 200KB if I'm not mistaken. Maybe it should be exposed as an environment variable for the lambda function, as "ErrorHandler" is. But I'm not sure this is the problem.
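A minimal sketch of that suggestion, assuming the MediaInfo step shells out via child_process.exec (the command string and the signedUrl variable are illustrative):

const { exec } = require('child_process');

// Raise maxBuffer (in bytes) above the ~200KB default so large MediaInfo
// output no longer kills the child process.
exec(`mediainfo --Output=XML '${signedUrl}'`, { maxBuffer: 10 * 1024 * 1024 }, (err, stdout) => {
    if (err) throw err;
    console.log(stdout);
});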

AWS VOD Template - Missing Subscription Notification

I successfully created the VOD stack, yet I only received one subscription notification at the admin email address I provided. The PDF instructions mention three subscriptions that I need to activate: encoding, publishing, and notifications. I checked spam folders. I tried the installation of the stack three times and got the same result. I tested uploading a video and the event didn't call the respective lambda function. Can you help me here? Thank you. https://aws.amazon.com/solutions/video-on-demand-on-aws/

Possible missing IAM role policy - 403 AccessDenied for GET HEAD

Just a heads up: something using the MediaConvert role is triggering HTTP 403 on both the mp4 and abr buckets.

be9fe0fecc654069de91eae291bd5c9ced5738e6ee018e8fe406a46d35a1b5a9 [redacted]-mp4destination-[redacted] [01/Nov/2018:00:28:18 +0000] 172.31.52.255 arn:aws:sts::[redacted]:assumed-role/[redacted]-MediaConvertRole-[redacted]/EmeSession_[redacted] 232D99AD8CED877E REST.HEAD.BUCKET - "HEAD /[redacted]-mp4destination-[redacted] HTTP/1.1" 403 AccessDenied 243 - 9 - "-" "-" -

Add support for captions/subtitles

Enjoying the project and using it a lot. Up to this point we've been burning subtitles into the videos themselves (which has been fine), but we recently got a request for more languages to be supported. I know that MediaConvert supports subtitles, but has anyone modified this project to support passing in those files, or does the project intend to in the future?

I'm also not against doing that work myself if there's no interest in supporting that as a feature. Thanks for your time.

Missing required key 'Name' in params (VOD encode)

I pulled the latest version of the template (v5.0.0), customized the output options (removed low-quality options), and deployed the template successfully. But after uploading a video to the source bucket I get this error in the encode lambda function:

"errorType": "MissingRequiredParameter",
 "errorMessage": "Missing required key 'Name' in params",
 "code": "MissingRequiredParameter",

I just checked the code in encode/index.js; the code expects the request event to have a jobTemplate parameter. For example:

182: let tmpl = await mediaconvert.getJobTemplate({ Name: event.jobTemplate }).promise();

The lambda function prints the event variables like below before failing:

{
    "guid": "b26b7974-5c55-47c6-9dba-fa1bf1cf5e28",
    "destBucket": "video-stream-destination-njqsuvm4k2q1",
    "workflowStatus": "Ingest",
    "frameCapture": true,
    "jobTemplate_2160p": "video-stream_Ott_2160p_Avc_Aac_16x9_qvbr",
    "jobTemplate_720p": "video-stream_Ott_720p_Avc_Aac_16x9_qvbr",
    "jobTemplate_1080p": "video-stream_Ott_1080p_Avc_Aac_16x9_qvbr",
    "workflowName": "video-stream",
    "workflowTrigger": "Video",
    ....

As you can see, there is no jobTemplate parameter, which the function expects. Did I miss something or is this an issue?

Trouble deploying the solution

I downloaded the entire repo and checked it into AWS CodeCommit. We will make code changes later; my initial attempt was to follow the instructions to deploy the solution.
I had an existing S3 bucket. We use the us-east-1 region.

./build-s3-dist.sh my-bucket 1.00
aws s3 sync dist/ s3://my-bucket/video-on-demand-on-aws/1.00/
It did upload the template and zip files to the S3 bucket.
I went to the AWS CloudFormation console and referred to the template in the S3 bucket.

My stack failed to deploy.
CustomResource failed to provision.
Error was: Error occurred while GetObject. S3 Error Code: NoSuchBucket. S3 Error Message: The specified bucket does not exist (Service: AWSLambdaInternal; Status Code: 400; Error Code: InvalidParameterValueException; Request ID: 4efa5ade-4a90-4dcd-b08d-c4416408da51)

# Custom Resource Lambda Function
CustomResource:
  Type: AWS::Lambda::Function
  Properties:
    FunctionName: !Sub ${AWS::StackName}-custom-resource
    Description: Used to deploy Step Functions and additional, cloudfront s3 and sns Configuration
    Handler: index.handler
    Role: !GetAtt CustomResourceRole.Arn
    Code:
      S3Bucket: !Join ["-", [!FindInMap ["SourceCode", "General", "S3Bucket"], Ref: "AWS::Region"]]
      S3Key: !Join ["/", [!FindInMap ["SourceCode", "General", "KeyPrefix"], "custom-resource.zip"]]
    Runtime: nodejs8.10
    Timeout: 180

I examined the template file and it did have the correct bucket name and path.
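Although, looking again at the !Join in the Code property above: it appends the region to the base bucket name, so perhaps the artifacts need to live in a regional bucket (my assumption, not a confirmed diagnosis), e.g.:

aws s3 mb s3://my-bucket-us-east-1
./build-s3-dist.sh my-bucket 1.00
aws s3 sync dist/ s3://my-bucket-us-east-1/video-on-demand-on-aws/1.00/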
I did this from a Mac, not a Windows PC.

When I refer directly to the template in the AWS S3 bucket, the deployment is successful.
This issue is similar to aws-solutions/live-streaming-on-aws#5

Let me know if you have a suggestion for me.

Corrupted Media Files cause Video On Demand Crashes

Hi @dscpinheiro,

We recently encountered an issue where the Video on Demand workflow was crashing; after investigation it turned out to be due to a corrupted media file.

Would it be possible for you to suggest a quick fix so that, rather than crashing, it will send an error code and gracefully abort the job?

Below is our high-level understanding of the problem:

Normally, when a file is handed over to VOD, it is passed to mediainfo to extract information, and the JSON-formatted output is redirected to a file.
Elsewhere in the code, the JSON structure in the file is iterated over and the media info is extracted.

When a corrupted file goes through the above process, the mediainfo output indicates an incomplete file (it sets IsTruncated to Yes). However, the code that reads the MediaInfo output does not handle the scenario where the JSON file does not contain the required fields, and it crashes.

Below are the details of our investigation.

The error-handler CloudWatch log contains the following entries:

2019-11-18T20:25:41.195Z	c8c66424-aabf-464c-8e3a-746b673980c4	{ guid: 'fef1131d-0f55-4225-95b7-d95573ca08af',
  workflowStatus: 'Error',
  workflowErrorAt: 'V0d-R3v-mediainfo',
  errorMessage: 'TypeError: tracks.forEach is not a function',
  errorDetails: 'https://console.aws.amazon.com/cloudwatch/home?region=ap-southeast-2#logStream:group=/aws/lambda/V0d-R3v-mediainfo' }

The MediaInfo CloudWatch log contains the following entries:

2019-11-18T20:25:40.334Z	9c940511-1ea9-4847-ac76-40a4103ae8c6	TypeError: tracks.forEach is not a function at MediaInfo.analyze (/var/task/lib/mediaInfo.js:425:14) at <anonymous> at process._tickDomainCallback (internal/process/next_tick.js:228:7)
2019-11-18T20:25:40.334Z	9c940511-1ea9-4847-ac76-40a4103ae8c6	TypeError: tracks.forEach is not a function
    at MediaInfo.analyze (/var/task/lib/mediaInfo.js:425:14)
    at <anonymous>
    at process._tickDomainCallback (internal/process/next_tick.js:228:7)
2019-11-18T20:25:42.627Z	9c940511-1ea9-4847-ac76-40a4103ae8c6	{"errorMessage":"tracks.forEach is not a function","errorType":"TypeError","stackTrace":["MediaInfo.analyze (/var/task/lib/mediaInfo.js:425:14)","<anonymous>","process._tickDomainCallback (internal/process/next_tick.js:228:7)"]}
2019-11-18T20:25:42.627Z	9c940511-1ea9-4847-ac76-40a4103ae8c6	
{
    "errorMessage": "tracks.forEach is not a function",
    "errorType": "TypeError",
    "stackTrace": [
        "MediaInfo.analyze (/var/task/lib/mediaInfo.js:425:14)",
        "<anonymous>",
        "process._tickDomainCallback (internal/process/next_tick.js:228:7)"
    ]
}

The actual crash seems to occur in mediaInfo.js at the start of the following forEach loop:

      tracks.forEach((track) => {
        switch (track.$.type) {
          case 'General':
            this.container = MediaInfo.parseGeneralAttributes(track);
            break;
          case 'Video':
            this.videoES.push(MediaInfo.parseVideoAttributes(track));
            break;
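A minimal sketch of the kind of guard we had in mind, placed just before the loop (assuming the MediaInfo.analyze structure quoted above; a suggestion, not a confirmed fix):

      // A truncated or corrupted file can leave `tracks` undefined or
      // non-iterable, so validate it before iterating and raise an error
      // the error-handler step can report cleanly.
      if (!Array.isArray(tracks)) {
          throw new Error('MediaInfo returned no track list; the source file may be corrupted or truncated');
      }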

DynamoDB contained the following entry for the problem job

{
  "errorDetails": "https://console.aws.amazon.com/cloudwatch/home?region=ap-southeast-2#logStream:group=/aws/lambda/Optus-Sp0rt-V0d-R3v-mediainfo",
  "errorMessage": "TypeError: tracks.forEach is not a function",
  "guid": "fef1131d-0f55-4225-95b7-d95573ca08af",
  "workflowErrorAt": "V0d-R3v-mediainfo",
  "workflowStatus": "Error"
}

I am assuming you will be able to reproduce the issue with any invalid file.

Any suggestions for a quick fix for this issue would be very much appreciated.

Thanks

Sam

CF Template fails with "Custom Resource failed to stabilize in expected time"

I used build-s3-dist.sh and created my own bucket named "bucket-<region-name>", and with respect to S3 everything is correct, as the stack is properly able to fetch the contents.
But my template always fails here.
The following resource(s) failed to create: [S3Config, Uuid, MediaConvertEndPoint]. . Rollback requested by user.

Not sure what exactly the problem is. If I launch the template directly from the AWS VOD link it works correctly; the problem only appears when I use the code uploaded to my own S3 bucket.
Any help will be highly appreciated.
Thanks

How to customise the CloudFormation template and deploy to AWS?

I've managed to launch the VOD CloudFormation stack on my account and get everything working. I'd like to do some customisation of the code and CloudFormation template to achieve two things: use different presets for MediaConvert to support portrait video, and include the MP4 destination bucket in the CloudFront distribution as an origin.

I'm looking through the VOD source code and I see there is a build bash script, but all it does is zip up the lambda functions and customise the code bucket name in the CloudFormation template. There doesn't seem to be any documentation on what needs to be done once the lambda functions are zipped up and the template is updated. Do I need to write another script that uploads those lambda functions to an S3 bucket? I could do that, but isn't there an issue with the cfn-response module not being available to lambda code in an S3 bucket? According to the AWS docs: "The cfn-response module isn't available for source code that's stored in Amazon S3 buckets. To send responses, write your own functions."

So it seems like uploading the custom-resource lambda function to S3 won't work? How should the custom-resource lambda function be deployed then?
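For reference, my understanding is that a custom resource can always send its own response by PUT-ing a JSON document to the pre-signed event.ResponseURL; here's a rough sketch of a hand-rolled replacement (my own version, not necessarily what this solution's custom-resource lambda actually does):

const https = require('https');
const url = require('url');

// Rough hand-rolled stand-in for the cfn-response module.
function sendResponse(event, context, status, data) {
    return new Promise((resolve, reject) => {
        const body = JSON.stringify({
            Status: status, // 'SUCCESS' or 'FAILED'
            Reason: `See CloudWatch log stream: ${context.logStreamName}`,
            PhysicalResourceId: context.logStreamName,
            StackId: event.StackId,
            RequestId: event.RequestId,
            LogicalResourceId: event.LogicalResourceId,
            Data: data
        });

        const { hostname, path } = url.parse(event.ResponseURL);
        const req = https.request({
            hostname,
            path,
            method: 'PUT',
            headers: { 'content-type': '', 'content-length': Buffer.byteLength(body) }
        }, (res) => resolve(res.statusCode));

        req.on('error', reject);
        req.write(body);
        req.end();
    });
}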

Any help in this matter would be greatly appreciated.

support for MediaPackage VOD

Hi @ffiore1,
Yes, we do plan on working on this solution in the next few months. Our main priority for the next
release is to add support for MediaPackage VOD and address any outstanding issues (such as the
metadata file not overwriting the defaults).

@dscpinheiro that would be very interesting.

Just trying to use MediaConvert with this stack, but facing a few issues with one of our partners: their mini set-top box seems able to play clips encoded/DRM-ed with MediaPackage but not with MediaConvert. Probably we need to play around with the MediaConvert settings.

How might we integrate the two with this solution/stack? We could extend it and use the MediaPackage API (create asset, etc.) to generate the new asset, as we are already doing for live2vod.

To implement that, going in the same direction as you, how are you planning to do it? A .smil file?
https://docs.aws.amazon.com/mediapackage/latest/ug/supported-inputs-vod.html

Would you have any beta of that I might start playing with? Any indicative deadline for it?

Thanks, any help would be really appreciated.

Possibility of updating the VOD workflow to validate media files

Hi Daniel,

Over the weekend our production system was impaired by AWS MediaConvert failing on multiple VOD jobs. Even though the particular files were submitted multiple times, they continued to fail over and over again, while at the same time other files seemed to be processed without an issue.

MC simply gave error code 1999 - Unknown error. We have opened case ID 6814048611 to investigate the issue.

We have no insight into the internal implementation of AWS MediaConvert, but we suspect that, for whatever reason, MC didn't like a batch of particular files.

Since your team has more insight into the MC implementation, would it be possible to update the AWS VOD workflow so that files which might cause MC to throw an error are rejected by VOD?

P.S.: If you can provide a secure way for us to upload the files, I can give you access to the files which caused AWS MediaConvert to error; alternatively, if you can get access to the above case number, the files have been uploaded there.

Thanks

Sam

I am using the video-on-demand template but I need a predictable output folder that will be served on CloudFront.

Hi,

I am building a learning website and using the Video on Demand solution to store and process the videos: https://aws.amazon.com/answers/media-entertainment/video-on-demand-on-aws/

My problem is that I'm using WordPress and my users will upload the file to the source S3 bucket, and the WordPress plugin expects to be able to map the source file to the CloudFront root folder, e.g.:

Source File name : abcdef.mp4
Destination Address : xxxxx.cloudfront.net/hls/abcdef.mp4 ( or some known folder pattern )

The CloudFormation solution generates a UUID folder for each file and drops the output inside this random, unknown folder in the destination bucket. Is there a way to configure it to always use a predictable destination folder, so I can avoid writing a lot of custom code to find the file? (A sketch of the kind of mapping I mean follows the use case below.)

Base Use Case :

  1. Drop a file in the source folder
  2. AWS Video on demand will process the file
  3. I can refer to the target file on CloudFront without having to code my application to discover the file's destination.
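For the record, the mapping I'm after looks something like this (a hypothetical sketch; as far as I can tell the solution keys everything by guid, and I don't know of a supported setting for this):

const path = require('path');

// Hypothetical: derive a predictable destination prefix from the source
// key instead of the generated uuid, e.g. 'abcdef.mp4' -> 'abcdef', so
// HLS output would land at xxxxx.cloudfront.net/abcdef/hls/...
function destinationPrefix(srcVideo) {
    return path.parse(srcVideo).name;
}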

Add support for Audio Only files

Hi,

I have a use case where I need to use the workflow to transcode videos and audio-only files.
Right now the Profiler function applies the MediaConvert template based on the resolution gathered from MediaInfo.
I know that I can overwrite the template by uploading a manifest file along with the media. However, it would be nice to implement logic that can process audio-only files and detect the template to use once they're uploaded to the source bucket.

Can you please give me some thoughts/recommendations on how this can be accomplished?
I want to keep the video template profiling based on height, but I want to add support for audio-only .wav files.
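A rough sketch of the kind of logic I have in mind (field and variable names are guesses, not the Profiler's actual structure; jobTemplate_audio is an invented name):

// Hypothetical profiling helper: pick an audio-only template when
// MediaInfo reports no video track, otherwise keep the height-based
// selection used today.
function chooseTemplate(event, mediaInfo) {
    const hasVideo = Array.isArray(mediaInfo.video) && mediaInfo.video.length > 0;
    if (!hasVideo) {
        return event.jobTemplate_audio;
    }
    const height = mediaInfo.video[0].height;
    if (height > 1080) return event.jobTemplate_2160p;
    if (height > 720) return event.jobTemplate_1080p;
    return event.jobTemplate_720p;
}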

Thanks,
-Kader

Encountered Permission Issue in MediaInfo

After successfully provisioning the resources from the CloudFormation template, I tried uploading a video to test. However, I encountered a permission issue at the MediaInfo step when accessing the video via the signed S3 URL.

Any advice on how to fix this?

Add support for portrait videos

Please add support for videos shot in portrait resolutions.

  • Include MediaConvert templates for portrait resolutions
  • Include support for Rotate parameter

Add thumbnail URLs to event data

If thumbnails are enabled, there isn't any information in the events about their location in CloudFront. Any thoughts on how to tackle this? Happy to submit a PR, just need some guidance on how to get started.
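To make the ask concrete, here's the kind of helper I'd imagine (the thumbnails/ prefix and bucket layout are guesses on my part, not the documented output schema):

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// Hypothetical: list frame-capture objects under the job's guid prefix
// in the destination bucket and map them to CloudFront URLs.
async function thumbnailUrls(destBucket, guid, cloudFrontDomain) {
    const resp = await s3.listObjectsV2({
        Bucket: destBucket,
        Prefix: `${guid}/thumbnails/`
    }).promise();
    return (resp.Contents || []).map((obj) => `https://${cloudFrontDomain}/${obj.Key}`);
}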

error 404

I have configured a MediaPackage VOD asset in the AWS MediaPackage console, but the AWS-provided URL does not work; it throws a 404 error.

Getting an error with input (.json file in S3)

I have modified my encode lambda to enable InputClippings, and it works perfectly whenever I specify 'start' and 'stop' times in the .json file.
But the workflow fails if I don't mention the start/stop times in the .json file (e.g., I might not want to cut some videos).
Is this because of MediaConvert's internal structure?

Settings: {
    Inputs: [{
        InputClippings: [
            { EndTimecode: event.specifiedEnd, StartTimecode: event.specifiedStart },
            { EndTimecode: event.specifiedEnd1, StartTimecode: event.specifiedStart1 }
        ],
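My guess (unverified) is that the undefined timecodes inside InputClippings are what MediaConvert rejects. A sketch, assuming a job object like the fragment above, that only attaches clippings when both values are present:

// Only include InputClippings entries whose timecodes are both defined,
// and omit the array entirely when no clipping was requested.
const clippings = [];
if (event.specifiedStart && event.specifiedEnd) {
    clippings.push({ StartTimecode: event.specifiedStart, EndTimecode: event.specifiedEnd });
}
if (event.specifiedStart1 && event.specifiedEnd1) {
    clippings.push({ StartTimecode: event.specifiedStart1, EndTimecode: event.specifiedEnd1 });
}
if (clippings.length > 0) {
    job.Settings.Inputs[0].InputClippings = clippings;
}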
