gulp-s3-upload's Introduction

gulp-s3-upload

Version 1.7.3

Use this plugin to upload assets to Amazon S3 as an easy gulp task.

This package uses the aws-sdk (node).

NPM / Changelog

See full details in the Changelog.

Install

npm install gulp-s3-upload

Usage

Including + Setting Up Config

    var gulp = require('gulp');
    var s3 = require('gulp-s3-upload')(config);

...where config is something like...

var config = {
    accessKeyId: "YOURACCESSKEY",
    secretAccessKey: "YOUACCESSSECRET"
}

//  ...or...

var config = JSON.parse(fs.readFileSync('private/awsaccess.json'));

//  ...or to use IAM settings...

var config = { useIAM: true };

// ...or to use IAM w/ S3 config settings ...

var s3 = require('gulp-s3-upload')(
    {useIAM:true},  // or {} / null
    { /* S3 Config */ }
);

The optional config argument can include any option available in the AWS Config constructor (such as region). By default, all settings are undefined.
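
For example, a region can be set alongside the credentials (a minimal sketch; the key values and region below are placeholders):

var config = {
    accessKeyId: "YOURACCESSKEY",       // placeholder
    secretAccessKey: "YOUACCESSSECRET", // placeholder
    region: "us-east-1"                 // or any other AWS Config option
};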

Per AWS best practices, the recommended approach for loading credentials is to use the shared credentials file (~/.aws/credentials). You can also set the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables, or specify values directly in the gulpfile via the accessKeyId and secretAccessKey options.

If you want to use an AWS profile in your ~/.aws/credentials file just set the environment variable AWS_PROFILE with your profile name before invoking your gulp task:

AWS_PROFILE=myprofile gulp upload
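
For reference, a shared credentials file with a named profile typically looks like the following (the values are placeholders):

[default]
aws_access_key_id = YOURACCESSKEY
aws_secret_access_key = YOUACCESSSECRET

[myprofile]
aws_access_key_id = ANOTHERACCESSKEY
aws_secret_access_key = ANOTHERACCESSSECRET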

If you are using IAM settings, just pass the noted config ({useIAM:true}) in order to default to using IAM. More information on using IAM settings here.

You can also use a node module like config (together with js-yaml) to load config files in your gulpfile.js, or use fs.readFileSync to read your config from a local file.

Feel free to also include credentials straight into your gulpfile.js, though be careful about committing files with secret credentials in your projects!
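
One way to keep secrets out of version control is to read them from environment variables in the gulpfile (a minimal sketch; the variable names are the standard ones read by the AWS SDK):

var s3 = require('gulp-s3-upload')({
    accessKeyId:     process.env.AWS_ACCESS_KEY_ID,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY
});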

Depending on your AWS/IAM settings, an AWS key/secret pair may not be required. Any permission problems will surface as errors thrown by the request.

Gulp Task

The s3 plugin can take a second object parameter that exposes the options hash for the AWS S3 constructor. Please note, if you have different configurations for different upload sets, you'll need to make a separate task for each set. You won't need to repeat the accessKeyId and secretAccessKey here, since the plugin already took those in for the AWS constructor.

Create a task.

gulp.task("upload", function() {
    gulp.src("./dir/to/upload/**")
        .pipe(s3({
            Bucket: 'your-bucket-name', //  Required
            ACL:    'public-read'       //  Needs to be user-defined
        }, {
            // S3 Constructor Options, ie:
            maxRetries: 5
        }))
    ;
});

Options

Bucket (bucket) (required)

Type: string

The bucket that the files will be uploaded to.

Other available options are the same as the ones found in the AWS-SDK docs for S3. See the end of this readme for a list of the AWS-SDK resources that this plugin references.

NOTE: Key, Body, and ContentType are the only putObject options that do NOT need to be defined, because the plugin handles them for you. If they are defined, the plugin will filter them out.

gulp-s3-plugin options

charset

Type: string

Use this to add a charset to the mimetype. "charset=[CHARSET]" gets appended to the mimetype if this is defined.
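
For example (a minimal sketch; with this setting, an HTML file would be uploaded with a mimetype along the lines of "text/html; charset=utf-8"):

    gulp.task("upload", function() {
        return gulp.src("./dir/to/upload/**")
        .pipe(s3({
            Bucket: 'example-bucket',
            ACL: 'public-read',
            charset: 'utf-8'
        }));
    });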

etag_hash

Type: string

Default: 'md5'

Use this to change the hashing of the files' ETags. The default is MD5. More information on AWS's Common Response Headers can be found here. You shouldn't have to change this, but AWS says the "ETag may or may not be an MD5 digest of the object data", so this option has been implemented should any other case arise.
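
A minimal sketch, assuming your bucket returns non-MD5 ETags and that the value is a hash algorithm name understood by Node's crypto module (both are assumptions; the default 'md5' is all most setups need):

    gulp.task("upload", function() {
        return gulp.src("./dir/to/upload/**")
        .pipe(s3({
            Bucket: 'example-bucket',
            ACL: 'public-read',
            etag_hash: 'sha256' // assumption: a Node crypto hash name
        }));
    });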

keyTransform (nameTransform)

Type: function

Use this to transform your file names before they're uploaded to your S3 bucket. (Previously known as name_transform).

    gulp.task("upload_transform", function() {
        gulp.src("./dir/to/upload/**")
            .pipe(s3({
                Bucket: 'example-bucket',
                ACL: 'public-read',
                keyTransform: function(relative_filename) {
                    var new_name = changeFileName(relative_filename);
                    // or do whatever you want
                    return new_name;
                }
            }))
        ;
    });

maps.ParamName {}

Type: object + function

Upon reviewing an issue with metadataMap and manualContentEncoding, a standard method for mapping each s3.putObject param was created. For now, metadataMap and manualContentEncoding are still available, but they will be deprecated in the next major version (2.0).

Each property of the maps option must be a function and must match the parameter being mapped. The file's keyname (after any keyTransform calls) is passed to the function, which should return the output S3 expects. You can find more information and the available options here.

For example, to map metadata and separate expirations per file:

    var path = require('path');
    var metadata_collection = { /* your info here */ };
    var expirations = { /* your info here */ };

    gulp.task("upload", function() {
        return gulp.src("./dir/to/upload/**")
        .pipe(s3({
            Bucket: 'example-bucket',
            ACL: 'public-read',
            maps: {
                Metadata: function(keyname) {
                    var filename = path.basename(keyname); // just get the filename
                    return metadata_collection[filename];  // return an object
                },
                Expires: function(keyname) {
                    var filename = path.basename(keyname); // just get the filename
                    return new Date(expirations[filename]);
                }
            }
        }));
    });

If anything other than a function is passed, it is ignored. If you want to send the same value for all of your files, simply set the option directly in the main options, like so:

    var expires = new Date();
    expires.setUTCFullYear(2020);

    gulp.task("upload", function() {
        gulp.src("./dir/to/upload/**")
        .pipe(s3({
            Bucket: 'example-bucket',
            ACL: 'public-read',
            Metadata: {
                "example1": "This is an example"
            },
            Expires: expires
        }));
    });

metadataMap

NOTE: It is preferred that you use the maps.ParamName method to define and map specific metadata to files. Also, if you set both maps.Metadata and metadataMap, metadataMap will take precedence.

Type: object or function

If you have constant metadata you want to attach to each object, just define the object, and it will be included with each file object being uploaded.

If you wish to change it per object, you can pass a function through to modify the metadata based on the (transformed) keyname.

Example (passing an object):

    gulp.task("upload", function() {
        gulp.src("./dir/to/upload/**")
        .pipe(s3({
            Bucket: 'example-bucket',
            ACL: 'public-read',
            metadataMap: {
                "uploadedVia": "gulp-s3-upload",
                "exampleFlag":  "Asset Flag"
            }
        }));
    });

Passing the s3.putObject param option Metadata is effectively the same thing as passing an object to metadataMap. If Metadata is defined and metadataMap is not, the object passed to Metadata is used as the metadata for all of the files being uploaded. If both Metadata and metadataMap are defined, Metadata will take precedence and be added to each file being uploaded.

Example (passing a function):

    // ... setup gulp-s3-upload ...
    var path = require('path');
    var metadata_collection = {
        "file1.txt": {
            "uploadedVia": "gulp-s3-upload",
            "example": "Example Data"
        },
        "file2.html": {
            "uploadedVia": "gulp-s3-upload"
        }
    };

    gulp.task("uploadWithMeta", function() {
        gulp.src("./upload/**")
        .pipe(s3({
            Bucket: 'example-bucket',
            ACL: 'public-read',
            metadataMap: function(keyname) {
                path.basename(keyname); // just get the filename
                return metadata_collection[keyname]; // return an object
            }
        }));
    });

When passing a function, note that the keyname will already have been transformed, either by the keyTransform you defined or by the default behaviour, which creates a keyname relative to your S3 bucket. For example, you may get "example.txt" or "docs/example.txt" depending on how the files were structured locally (hence the use of the path module in the examples to get just the filename).

Note: You are responsible for handling mismatched or unmatched keynames in the metadata you're mapping.

mimeTypeLookup

Type: function

Use this to transform the key that is used to look up the MIME type when uploading to S3.

    gulp.task("upload", function() {
        gulp.src("./dir/to/upload/**")
        .pipe(s3({
            Bucket: 'example-bucket',
            ACL: 'public-read',
            mimeTypeLookup: function(original_keyname) {
                return original_keyname.replace('.gz', ''); // ignore gzip extension
            },
        }));
    });

manualContentEncoding

NOTE: It is preferred that you use the maps.ParamName method to define and map specific Content-Encoding values to files. If you set both maps.ContentEncoding and manualContentEncoding, manualContentEncoding will take priority.

Type: string or function

If you want to add a custom content-encoding header on a per-file basis, you can define a function that determines the content encoding based on the keyname. Defining a string is equivalent to passing the s3.putObject param option ContentEncoding.

Example (passing a string):

    gulp.task("upload", function() {
        gulp.src("./dir/to/upload/**")
        .pipe(s3({
            Bucket: 'example-bucket',
            ACL: 'public-read',
            manualContentEncoding: 'gzip'
        }));
    });

Example (passing a function):

    gulp.task("upload", function() {
        gulp.src("./dir/to/upload/**")
        .pipe(s3({
            Bucket: 'example-bucket',
            ACL: 'public-read',
            manualContentEncoding: function(keyname) {
                var contentEncoding = null;

                if (keyname.indexOf('.gz') !== -1) {
                  contentEncoding = 'gzip';
                }
                return contentEncoding;
            }
        }));
    });

Post-Upload Callbacks

onChange

Type: function

This function gets called with the S3 keyname as the first parameter if the uploaded file resulted in a change. Note the keyname passed is after any keyTransform modifications.

Example:

    gulp.task("upload", function() {
        gulp.src("./dir/to/upload/**")
        .pipe(s3({
            Bucket: 'example-bucket',
            ACL: 'public-read',
            onChange: function(keyname) {
                logChangedFiles(keyname);   // or whatever you want
            }
        }));
    });

onNoChange

Type: function

This function gets called with the S3 keyname as the first parameter if the uploaded file did not result in a change, much like onChange.

onNew

Type: function

This function gets called with the S3 keyname as the first parameter if the uploaded file is a new file in the bucket, much like onChange.
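
For example, the three callbacks can be combined to log what happened to each keyname (a minimal sketch):

    gulp.task("upload", function() {
        return gulp.src("./dir/to/upload/**")
        .pipe(s3({
            Bucket: 'example-bucket',
            ACL: 'public-read',
            onNew: function(keyname) {
                console.log("Created ......", keyname);
            },
            onChange: function(keyname) {
                console.log("Updated ......", keyname);
            },
            onNoChange: function(keyname) {
                console.log("No Change ....", keyname);
            }
        }));
    });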

uploadNewFilesOnly

Type: boolean

Set uploadNewFilesOnly: true if you only want to upload new files and not overwrite existing ones.
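
For example (a minimal sketch):

    gulp.task("upload_new_only", function() {
        return gulp.src("./dir/to/upload/**")
        .pipe(s3({
            Bucket: 'example-bucket',
            ACL: 'public-read',
            uploadNewFilesOnly: true // existing keys in the bucket are left untouched
        }));
    });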

Stream Support

When uploading large files you may want to use gulp.src without buffers. Normally this plugin calculates an ETag hash of the contents and compares that to the existing files in the bucket. However, when using streams, we can't do this comparison.

Furthermore, the AWS SDK requires us to have a ContentLength in bytes of contents uploaded as a stream. This means streams are currently only supported for gulp sources that indicate the file size in file.stat.size, which is automatic when using a file system source.

Example:

    gulp.task("upload", function() {
        gulp.src("./dir/to/upload/**", {buffer:false}) // buffer:false for streams
        .pipe(s3({
            Bucket: 'example-bucket',
            ACL: 'public-read'
        }));
    });

Added by @algesten

AWS-SDK References


License

Copyright (c) 2015, Caroline Amaba

Permission to use, copy, modify, and/or distribute this software for any purpose with or without fee is hereby granted, provided that the above copyright notice and this permission notice appear in all copies.

THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

gulp-s3-upload's People

Contributors

algesten, arqex, benib, benthemonkey, bewong-atl, clineamb, combostyle, eagletmt, joshhunt, jozecuervo, kevinrob, leemhenson, rufman, stayman, thedancingcode, thomaswelton, tobiasweibel, xdissent

gulp-s3-upload's Issues

TypeError: dest.on is not a function

I've dropped the plugin in, using node 4.3.2 and Gulp (CLI version 1.2.1, local version 3.9.1).

My upload task looks like this:

$.util.log(`S3 upload to profile [${process.env.AWS_PROFILE}]`);
    gulp.src(["./S3/config/*.json"])
        .pipe($.s3Upload({
            Bucket: 'a bucket',
            ACL: 'public-read',
            Metadata: {
                'Content-Type': 'application/json',
                'Cache-Control': 'public, max-age=3600'
            }
        }));

My attempt to upload fails with the following error:

[00:29:39] TypeError: dest.on is not a function
    at DestroyableTransform.Readable.pipe (/Users/luke/git/roc/roc-aws/node_modules/gulp/node_modules/vinyl-fs/node_modules/through2/node_modules/readable-stream/lib/_stream_readable.js:516:8)
    at Gulp.<anonymous> (/Users/luke/git/roc/roc-aws/gulpfile.js:37:10)

Have I missed something obvious?

Timeout Error after 2 min of start of upload

Hi,

I am getting an exception from the AWS-SDK two minutes after the upload starts, while uploading 1K+ files to S3.
This occurs every time the upload process takes more than 2 minutes.

Below is the trace of the exception.
...\node_modules\aws-sdk\lib\request.js:31
            throw err;
            ^

Error: S3 headObject Error: Error: write EPROTO
    at Object.exports._errnoException (util.js:870:11)
    at exports._exceptionWithHostPort (util.js:893:20)
    at WriteWrap.afterWrite (net.js:763:14)
    at Request.callListeners (...\node_modules\aws-sdk\lib\sequential_executor.js:108:43)
    at Request.emit (...\node_modules\aws-sdk\lib\sequential_executor.js:78:10)
    at Request.emit (...\node_modules\aws-sdk\lib\request.js:668:14)
    at Request.transition (...\node_modules\aws-sdk\lib\request.js:22:10)
    at AcceptorStateMachine.runTo (...\node_modules\aws-sdk\lib\state_machine.js:14:12)
    at ...\node_modules\aws-sdk\lib\state_machine.js:26:10
    at Request.<anonymous> (...\node_modules\aws-sdk\lib\request.js:38:9)
    at Request.<anonymous> (...\node_modules\aws-sdk\lib\request.js:670:12)
    at Request.callListeners (...\node_modules\aws-sdk\lib\sequential_executor.js:116:18)
    at Request.emit (...\node_modules\aws-sdk\lib\sequential_executor.js:78:10)
    at Request.emit (...\node_modules\aws-sdk\lib\request.js:668:14)
    at Request.transition (...\node_modules\aws-sdk\lib\request.js:22:10)
    at AcceptorStateMachine.runTo (...\node_modules\aws-sdk\lib\state_machine.js:14:12)
    at ...\node_modules\aws-sdk\lib\state_machine.js:26:10
    at Request.<anonymous> (...\node_modules\aws-sdk\lib\request.js:38:9)
    at Request.<anonymous> (...\node_modules\aws-sdk\lib\request.js:670:12)

feature request: ACL transform

First, I would like to thank you for all the hard work you've done. It's extremely useful for my needs.

I have one issue wherein I don't want all files to be uploaded to the bucket with the same ACL type. Specifically, I'm using an S3 bucket for web pages (and then using Cloudfront to serve the page).

I would like to include .map files but I would want the files to be private instead of public-read. If there were some sort of translation I could perform based on the filename/keyname, that would easily solve my issues. For the time being I'm going to try to use multiple pipes.

Using IAM from Lambda Function fails

I've been wrestling with this for a while and have determined that the issue lies with this package. Here's the code I am using in a Lambda Function:

        var config = require("config.json");

	var AWS = require("aws-sdk");
	var gulp = require("gulp");
	var gulpS3Upload = require("gulp-s3-upload");
	var gulpS3 = gulpS3Upload(config.s3);
	var s3 = new AWS.S3(config.s3);

	var myBucket = "mycdnbucket";
	var myKey = "test/test.txt";

	await gulp
		.src("config.json")
		.pipe(
			gulpS3({
				Bucket: `${myBucket}/test`,
				ACL: "public-read",
			}, {
				maxRetries: 5
			})
		)
		.on("error", function (err) {
			console.log("S3 error", err);
		})
		.on("end", function () {
			console.log(
				`Files successfully uploaded to S3 to the path: /${myBucket}/${myKey}`
			);
		});

	return s3.putObject({
			Bucket: myBucket,
			Key: myKey,
			Body: "Hello world!"
		},
		function (err, data) {
			if (err) {
				console.log("error!", err);
			} else {
				console.log(`Successfully uploaded data to ${myBucket}/${myKey}`);
				console.log(data);
			}
		}
	);

The above code fails for the gulpS3 code, but succeeds with the aws-sdk code. Any help with determining the actual issue within the code would be appreciated. Here's the error stacktrace I am getting:

S3 error { [Error: S3 putObject Error: AccessDenied: Access Denied
at Request.extractError (/var/task/node_modules/gulp-s3-upload/node_modules/aws-sdk/lib/services/s3.js:580:35)
at Request.callListeners (/var/task/node_modules/gulp-s3-upload/node_modules/aws-sdk/lib/sequential_executor.js:106:20)
at Request.emit (/var/task/node_modules/gulp-s3-upload/node_modules/aws-sdk/lib/sequential_executor.js:78:10)
at Request.emit (/var/task/node_modules/gulp-s3-upload/node_modules/aws-sdk/lib/request.js:683:14)
at Request.transition (/var/task/node_modules/gulp-s3-upload/node_modules/aws-sdk/lib/request.js:22:10)
at AcceptorStateMachine.runTo (/var/task/node_modules/gulp-s3-upload/node_modules/aws-sdk/lib/state_machine.js:14:12)
at /var/task/node_modules/gulp-s3-upload/node_modules/aws-sdk/lib/state_machine.js:26:10
at Request. (/var/task/node_modules/gulp-s3-upload/node_modules/aws-sdk/lib/request.js:38:9)
at Request. (/var/task/node_modules/gulp-s3-upload/node_modules/aws-sdk/lib/request.js:685:12)
at Request.callListeners (/var/task/node_modules/gulp-s3-upload/node_modules/aws-sdk/lib/sequential_executor.js:116:18)]
message: 'S3 putObject Error: AccessDenied: Access Denied\n at Request.extractError (/var/task/node_modules/gulp-s3-upload/node_modules/aws-sdk/lib/services/s3.js:580:35)\n at Request.callListeners (/var/task/node_modules/gulp-s3-upload/node_modules/aws-sdk/lib/sequential_executor.js:106:20)\n at Request.emit (/var/task/node_modules/gulp-s3-upload/node_modules/aws-sdk/lib/sequential_executor.js:78:10)\n at Request.emit (/var/task/node_modules/gulp-s3-upload/node_modules/aws-sdk/lib/request.js:683:14)\n at Request.transition (/var/task/node_modules/gulp-s3-upload/node_modules/aws-sdk/lib/request.js:22:10)\n at AcceptorStateMachine.runTo (/var/task/node_modules/gulp-s3-upload/node_modules/aws-sdk/lib/state_machine.js:14:12)\n at /var/task/node_modules/gulp-s3-upload/node_modules/aws-sdk/lib/state_machine.js:26:10\n at Request. (/var/task/node_modules/gulp-s3-upload/node_modules/aws-sdk/lib/request.js:38:9)\n at Request. (/var/task/node_modules/gulp-s3-upload/node_modules/aws-sdk/lib/request.js:685:12)\n at Request.callListeners (/var/task/node_modules/gulp-s3-upload/node_modules/aws-sdk/lib/sequential_executor.js:116:18)',

Feature Request: Specify s3 Path

Is it possible to add an option to specify the upload path? The AWS API suggests prepending the file name with /folder/to/upload/to/. I can't figure out how to implement this here.

slow / not multithreaded

Been getting complaints from my team that this is much slower than the old plugin we were using (gulp-s3) and, looking at the source, I think I have an idea why - are they multithreaded and this one isn't? Their code uses event-stream and map to split out the uploads to do multiple at once. Is that something you'd consider implementing?

https://github.com/nkostelnik/gulp-s3/blob/master/index.js

contentEncoding = null results in contentEncoding `gzip`

I'm running into some issues regarding compression. I'm using manualContentEncoding to mark which files should be flagged in the metadata as gzipped, like this:

gulp.task('uploadToS3', function() {
  var expires = new Date();
  expires.setUTCFullYear(2020);

  return pump([
      gulp.src('./dist/**'),
      s3({
          Bucket: process.env.S3_BUCKET,
          ACL:    'public-read',
          Expires: expires,
          manualContentEncoding: function(keyname) {
              var contentEncoding = null;

              if (keyname=='index.html') contentEncoding = 'gzip'
              if (keyname=='js/scripts.js') contentEncoding = 'gzip'
              if (keyname=='css/css.css') contentEncoding = 'gzip'

              return contentEncoding;
          }
        },{
          maxRetries: 5
      })
    ]);
});

This results in some files outside that list (for example, files in the img folder) being uploaded with:

[Screenshot: screen shot 2016-11-08 at 4 23 00 pm]

This is not the expected behaviour, right?

I'll do more digging tonight in order to debug the request to AWS

PS: I'm using version "1.6.1"

Bug: TypeError: mime.lookup is not a function

env:

  • node 8.15.0
  • npm 6.4.1
  • gulp 3.9.1
  • gulp-s3-upload 1.7.2

When trying to upload my project I received the error:
TypeError: mime.lookup is not a function

The same script works fine with gulp-s3-upload 1.7.1.

maps - shared mutable options

gulp.src('./**')
  .pipe(s3({
    maps: {
      ContentDisposition: function (keyname) { return keyname; }
    }
  }));

It sends all files with the value from the latest file, probably because it overrides options.ContentDisposition on each file.

The solution might be to make a copy of the options for each file.

Unable to override metadata for existing files

Hi,

I am trying to override the metadata every time I run the upload script. New files look good, but I am unable to override the metadata for existing files. Please do the needful.

For example:

    Metadata: {
        'Cache-Control': 'max-age=31536000, no-transform, public',
        'Last-Modified': today.getTime().toString(),
        'x-amz-acl': 'private'
    }

Last-Modified is not updated and I see "No Change" in the logger

dest.on is not a function at DestroyableTransform.Readable.pipe

Hey,

I'm getting the following error when running this code:

const gulp = require('gulp');
const s3 = require('gulp-s3-upload');
const config = require('../config');

gulp.task('s3-assets', function () {
  gulp.src(config.paths.assets + '/css/**')
    .pipe(s3({
      Bucket: config.s3.bucket,
      ACL: config.s3.acl,
      keyTransform: function(filename) {
        var newKey = "folder/" + filename;
        return newKey;
      }
    }));
});

which throws the following error:

TypeError: dest.on is not a function
    at DestroyableTransform.Readable.pipe (MYFOLDER/node_modules/gulp/node_modules/vinyl-fs/node_modules/readable-stream/lib/_stream_readable.js:516:8)
    at Gulp.<anonymous> (MYFOLDER/gulp/tasks/s3-assets.js:19:4)
    at module.exports (MYFOLDER/node_modules/gulp/node_modules/orchestrator/lib/runTask.js:34:7)
    at Gulp.Orchestrator._runTask (MYFOLDER/node_modules/gulp/node_modules/orchestrator/index.js:273:3)
    at Gulp.Orchestrator._runStep (MYFOLDER/node_modules/gulp/node_modules/orchestrator/index.js:214:10)
    at Gulp.Orchestrator.start (MYFOLDER/node_modules/gulp/node_modules/orchestrator/index.js:134:8)
    at /usr/local/lib/node_modules/gulp-cli/lib/versioned/^3.7.0/index.js:46:20
    at _combinedTickCallback (internal/process/next_tick.js:67:7)
    at process._tickCallback (internal/process/next_tick.js:98:9)
    at Module.runMain (module.js:607:11)

I'm using the ~/.aws/credentials way of authorising s3.

Cheers

Craig.

Adding support for custom Keys

This is necessary since, in use cases like uploading multiple resized versions of images into an S3 bucket, a rename system won't work because gulp will add its own key. The key filtering just needs to be removed.

using profiles from ~/.aws

I use multiple profiles in my ~/.aws/credentials file. They are easy to use with the aws-cli.

But I don't understand your comment "you can specify the profile name inline with the call to gulp."

Are you talking about gulp.task? Or in the require statement?

Example of this usage would be helpful here and in the documentation.

Thx.

Possible to set custom HTTP headers?

Can we use gulp-s3-upload to set custom HTTP headers? I know of the Metadata argument, but with code like this:

gulp.task('uploadS3', function() {
    return gulp.src('deploy/assets/style.css')
        .pipe(s3({
            Bucket: 'www.example.net',
            ACL: 'public-read',
            uploadNewFilesOnly: false,
            Metadata: {
                "Example-Header": "Example"
            },
        }, {
            maxRetries: 5,
        }));
});

The HTTP header that's set is:

x-amz-meta-example-header:"Example"

I'm not sure whether the plugin or aws-sdk-js adds the 'x-amz-meta' prefix and performs lowercasing, but I do know from these issues (here and here) that it's possible to upload objects with custom headers.

Am I overlooking a feature of this plugin, or isn't this possible?

Edit: I just learned now that AWS' PUT object operation can specify a couple of HTTP headers. Because the HTTP header that I want to add is not in that allowed list, it doesn't seem to be possible to set the custom header I'm looking for.

(This issue can be closed from my standpoint.)

Replace deprecated dependency gulp-util

gulp-util has been deprecated recently. Continuing to use this dependency may prevent your library from working with the latest release of Gulp 4, so it is important to replace gulp-util.

The README.md lists alternatives for all of its components, so a simple replacement should be enough.

Your package is popular but still relies on gulp-util; it would be good to publish a fixed version to npm as soon as possible.

See:

Method not taking IAM role

When running the command as part of our build process we have an IAM role assigned which has full access to S3; however, gulp-s3-upload throws an exception every time.

If I pass an empty constructor to the require statement, the exception is that 'key' is not passed in; if there's no constructor, there's an exception about a missing 'on' method.

Is there any special config required for using IAM or is this just not working?

Add option for SSL

I am trying to upload static files to S3. It works with node 4.10 but throws the following error with 8.1.2.

here is the task

var s3 = require('gulp-s3-upload')(config.aws_credentials);

gulp.task("upload", function () {
  log('Syncing files with s3 bucket');
  gulp.src(config.build + '**/*')
    .pipe(s3({
      Bucket: config.bucket, //  Required
      ACL: 'public-read',       //  Needs to be user-defined,
      keyTransform: function (relative_filename) {
        var new_name = 'build/' + relative_filename;
        log(new_name);
        // or do whatever you want
        return new_name;
      }
    }, {
      // S3 Constructor Options, ie:
      maxRetries: 5
    }));
});

I am sure that I have access to S3, as I can upload files with node 4.1.0. The issue is with SSL.
Could we simply add an option for whether to use SSL, like sslEnabled: false in the aws-sdk?

[14:26:00] build/images/img/default/fancybox/fancybox_overlay.png
[14:26:00] build/images/img/default/fancybox/fancybox_sprite.png
/home/saad/PycharmProjects/sample/web-app/node_modules/aws-sdk/lib/request.js:31
            throw err;
            ^

Error: S3 headObject Error: Error: Hostname/IP doesn't match certificate's altnames: "Host: sample2017-staging.s3.amazonaws.com. is not in the cert's altnames: DNS:*.s3.amazonaws.com, DNS:s3.amazonaws.com"
    at Object.checkServerIdentity (tls.js:221:17)
    at TLSSocket.<anonymous> (_tls_wrap.js:1104:29)
    at emitNone (events.js:105:13)
    at TLSSocket.emit (events.js:207:7)
    at TLSSocket._finishInit (_tls_wrap.js:628:8)
    at TLSWrap.ssl.onhandshakedone (_tls_wrap.js:458:38)
    at Request.callListeners (/home/saad/PycharmProjects/sample/web-app/node_modules/aws-sdk/lib/sequential_executor.js:107:43)
    at Request.emit (/home/saad/PycharmProjects/sample/web-app/node_modules/aws-sdk/lib/sequential_executor.js:77:10)
    at Request.emit (/home/saad/PycharmProjects/sample/web-app/node_modules/aws-sdk/lib/request.js:668:14)
    at Request.transition (/home/saad/PycharmProjects/sample/web-app/node_modules/aws-sdk/lib/request.js:22:10)
    at AcceptorStateMachine.runTo (/home/saad/PycharmProjects/sample/web-app/node_modules/aws-sdk/lib/state_machine.js:14:12)
    at /home/saad/PycharmProjects/sample/web-app/node_modules/aws-sdk/lib/state_machine.js:26:10
    at Request.<anonymous> (/home/saad/PycharmProjects/sample/web-app/node_modules/aws-sdk/lib/request.js:38:9)
    at Request.<anonymous> (/home/saad/PycharmProjects/sample/web-app/node_modules/aws-sdk/lib/request.js:670:12)
    at Request.callListeners (/home/saad/PycharmProjects/sample/web-app/node_modules/aws-sdk/lib/sequential_executor.js:115:18)
    at Request.emit (/home/saad/PycharmProjects/sample/web-app/node_modules/aws-sdk/lib/sequential_executor.js:77:10)

stops at 16 files

I am testing out your plugin, but it seems to stop after the first 16 files that I include as source files.

Is there any limit to the size or number of files that it can handle?

S3 Error: null

An error occurs when uploading files to a bucket whose name contains dots (like example.com).

Gulp-s3-upload breaks pipe

A very simple gulp task where s3-upload breaks the pipe.

// Dependencies
var s3Download    = require("gulp-download");
var s3Upload      = require('gulp-s3-upload')(config); //config is aws config

// Task
gulp.task('default', function() {
    s3Download(url)
    .pipe(imageResize({
        width: 300,
        upscale: true,
        format: 'jpeg'
    }))
    .pipe(rename({
        dirname: "resized",
        basename: "baseName",
        prefix: "",
        suffix: "-300px"
    }))
    .pipe(s3Upload({
        Bucket: 'mybucket',
        ACL:    'public-read',
    }, {
        maxRetries: 5
    }))
    .pipe(gulp.dest('./dist'));
});

The last operation in the pipeline (saving the file to ./dist) never happens, which most likely means the upload pipe does not return the stream properly.

Even with uploadNewFilesOnly: false, doesn't always update files

Need an option that force-uploads files, even if they don't "look" like they've changed - sometimes a file will get corrupted while uploading, and the only way to fix it right now is to manually log in to the server and delete it.

For what it's worth, the check to see whether it should update the file also dramatically increases the upload time because of the extra server call.

[12:25:48] Uploading ..... todd/low/1/sled_arrow.png
[12:25:49] Updated ....... todd/low/1/sled_arrow.png
[12:25:49] Uploading ..... todd/med/1/ad.css
[12:25:50] No Change ..... todd/med/1/ad.css
[12:25:50] Uploading ..... todd/med/1/ad.html
[12:25:51] No Change ..... todd/med/1/ad.html

Unchanged files are uploaded

Here a call to headObject is made: https://github.com/clineamb/gulp-s3-upload/blob/master/index.js#L104
This call returns info about a file in the bucket, including an ETag. The ETag is often (but not always, depending on server-side encryption) an MD5 hash of the file.
So it is safe to assume that if the ETag returned by AWS matches the MD5 hash of a local file, then that file should not be uploaded.

Checking the md5 hash of a local file would greatly speed up this task after repeated runs.

Example access policy?

Sorry to be a newb on AWS, but what's an example IAM policy?

var config = {
  "useIAM": true,
  "accessKeyId": "asdfasdfasdf",
  "secretAccessKey": "adsfasdfasdf"
}

w/ a policy of:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::bucket.com"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::bucket.com/*"
            ]
        }
    ]
}

Overriding ContentType

I noticed in the docs that it says ContentType does not need to be defined, and that the plugin will filter out any attempts at changing it.

But for files where I have gzip compression, I need to set the Content-Type to "application/javascript" and "text/css" for ".js.gz" and ".css.gz" respectively.

Can this be done at all with gulp-s3-upload?

Add a dry-run option

Sometimes while testing it would be useful to run the upload in a "dry-run" mode, where all the maps are executed but the final upload step is not run.

Feature Request: send files with their paths to s3

Can you change code in /gulp-s3-upload/index.js

from:
keyname = keyTransform(file.relative);
to:
keyname = keyTransform(file.relative, file.path);

in this code:

if(keyTransform) {

    //  Allow the transform function to take the
    //  complete path in case the user wants to change
    //  the path of the file, too.

    // >>>>>>> change here
    keyname = keyTransform(file.relative);

} else {
    // ...Otherwise keep it exactly parallel.

    keyparts = helper.parsePath(file.relative);
    keyname  = helper.buildName(keyparts.dirname, keyparts.basename + keyparts.extname);
}

It helps send files with their paths when we use keyTransform (see my answer in #33).
Thank you

Can't run the upload

Hi, I implemented gulp-s3-upload, but I can't make it work.
What am I doing wrong?

the config.js file

  deploy: {
    key: "s3key",
    secret: "s3secret"
  }

gulptask:

var gulp = require('gulp');
var config = require('../config');
var s3 = require('gulp-s3-upload')(config.deploy);

gulp.task('upload', function(){
    return gulp.src('../../build/index.html')
    .pipe(s3({
        Bucket: "somebucket-test",
        ACL: 'public-read'
    }))
});

console output:

gulp upload
[15:50:42] Using gulpfile /var/www/admin/gulpfile.js
[15:50:42] Starting 'upload'...
[15:50:42] Finished 'upload' after 25 ms

It doesn't run this function
stream = es.map(function (file, callback) {

Is it possible to use routingRules?

WebsiteConfiguration: { /* required */
    ErrorDocument: {
        Key: 'STRING_VALUE' /* required */
    },
    IndexDocument: {
        Suffix: 'STRING_VALUE' /* required */
    },
    RedirectAllRequestsTo: {
        HostName: 'STRING_VALUE', /* required */
        Protocol: http | https
    },
    RoutingRules: [
        {
            Redirect: { /* required */
                HostName: 'STRING_VALUE',
                HttpRedirectCode: 'STRING_VALUE',
                Protocol: http | https,
                ReplaceKeyPrefixWith: 'STRING_VALUE',
                ReplaceKeyWith: 'STRING_VALUE'
            },
            Condition: {
                HttpErrorCodeReturnedEquals: 'STRING_VALUE',
                KeyPrefixEquals: 'STRING_VALUE'
            }
        },
        /* more items */
    ]
}

version 1.7.2+ doesn't work in node 4.4

We're using gulp-s3-upload with Node 4.4.

The 1.7.2 update modified the ansi-colors dependency from ^1.0.1 to ^3.2.3.

ansi-colors 3.2.3 is not compatible with node 4.4
