grunt-s3's Introduction

UNMAINTAINED!

I wrote this plugin a long time ago, back in the pre-grunt-0.4.0 days! I think it has only really lived on this long because of the nice, succinct name.

I do not maintain this plugin anymore. It doesn't really behave the way modern grunt plugins behave. It has this stupid rel option that everybody needs but nobody knows about. I urge you to use a better plugin! Here is a list of alternatives:


NOTE: This is the README for grunt-s3 v0.2.0-alpha. For v0.1.0, go here.

Grunt 0.4.x + Amazon S3

About

Amazon S3 is a great tool for storing/serving data. Thus, there is a chance it is part of your build process. This task can help you automate uploading/downloading files to/from Amazon S3. All file transfers are verified and will produce errors if incomplete.

Dependencies

  • knox
  • mime
  • async
  • underscore
  • underscore.deferred

Installation

npm install grunt-s3 --save-dev

Then add this line to your project's Gruntfile.js:

grunt.loadNpmTasks('grunt-s3');

S3 User Setup

Log into your AWS Console and go to the Users management console. Click the Create New Users button and enter a username.

Credentials File

Have AWS create a new key pair for the user and copy the contents into a grunt-aws.json file in your home directory.

{ 
    "key": "PUBLIC_KEY", 
    "secret": "SECRET_KEY", 
    "bucket": "BUCKET_NAME" 
}

User Permissions

From the AWS IAM Users console, select the newly created user, then the Permissions tab, and click the Attach User Policy button. Paste in the following (substituting BUCKET_NAME as appropriate).

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:*"
      ],
      "Sid": "AllowNewUserAccessToMyBucket",
      "Resource": [
        "arn:aws:s3:::BUCKET-NAME",
        "arn:aws:s3:::BUCKET-NAME/*"
      ],
      "Effect": "Allow"
    }
  ]
}

Options

The grunt-s3 task is now a multi-task, meaning you can specify different targets for it to run as.

A quick reference of options

  • key - (string) An Amazon S3 credentials key

  • secret - (string) An Amazon S3 credentials secret

  • bucket - (string) An Amazon S3 bucket

  • region - (string) An Amazon AWS region (see http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region)

  • maxOperations - (number) max number of concurrent transfers - if set to 0, will be unlimited. Default: 20

  • encodePaths - (boolean) if set to true, will encode the uris of destinations to prevent 505 errors. Default: false

  • headers - (object) An object containing any headers you would like to send along with the transfers, e.g. { 'X-Awesomeness': 'Out-Of-This-World', 'X-Stuff': 'And Things!' }

  • access - (string) A specific Amazon S3 ACL. Available values: private, public-read, public-read-write, authenticated-read, bucket-owner-read, bucket-owner-full-control

  • gzip - (boolean) If true, uploads will be gzip-encoded.

  • gzipExclude - (array) Define extensions of files you don't want to run gzip on, as an array of strings, e.g. ['.jpg', '.jpeg', '.png'].

  • upload - (array) An array of objects, each object representing a file upload and containing a src and a dest. Any of the above values may also be overridden.

    Passing rel:DIR will (see the sketch after this list):

    • Cause the filenames to be expanded relative to some relative or absolute path on the filesystem (DIR). This operation is exclusive of DIR, i.e., DIR itself will not be included in the expansion.
    • Cause wildcards in 'src' to be replaced with actual paths and/or filenames.
  • download - (array) An array of objects, each object representing a file download and containing a src and a dest. Any of the above values may also be overridden.

  • del - (array) An array of objects, each object containing a src to delete from S3. Any of the above values may also be overridden.

  • sync - (array) An array of objects, each object containing a src and dest. The default behavior is to upload only new files (those that don't already exist). Set a key called verify with the value true on the object's options property (i.e. options: {verify: true}) to also upload existing files, if and only if they are newer than the versions of those same files on the server. This is implemented via an MD5 hash and by checking the modified times of the files.

  • debug - (boolean) If true, no transfers with S3 will occur; all actions will be printed for review by the user.

  • logSuccess - (boolean) If false, output for successful transfers will be ignored. Default: true

  • logErrors - (boolean) If false, output for failed transfers will be ignored. Default: true
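
The rel option deserves a closer look, since (as the note at the top of this README admits) everybody needs it but nobody knows about it. Here is a minimal sketch of the expansion it performs; the paths are hypothetical:

upload: [
  {
    // Matches e.g. build/js/app.js and build/css/style.css.
    src: 'build/**/*',
    dest: 'assets/',
    // With rel, each file keeps its path relative to build/, landing at
    // assets/js/app.js and assets/css/style.css. Without rel, only the
    // basenames are used, so everything would land directly in assets/.
    rel: 'build/'
  }
]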

Example

Template strings in grunt allow you to easily include values from other files. The example below demonstrates loading AWS settings from a separate file, where grunt-aws.json is just a JSON key:value file like package.json. (Special thanks to @nanek)

This is important because you should never check your S3 credentials into GitHub! Load them from an external file that lives outside of the repo.

grunt.initConfig({
  aws: grunt.file.readJSON('~/grunt-aws.json'),
  s3: {
    options: {
      key: '<%= aws.key %>',
      secret: '<%= aws.secret %>',
      bucket: '<%= aws.bucket %>',
      access: 'public-read',
      headers: {
        // Two year cache policy (60 * 60 * 24 * 730 seconds)
        "Cache-Control": "max-age=63072000, public",
        "Expires": new Date(Date.now() + 63072000000).toUTCString()
      }
    },
    dev: {
      // These options override the defaults
      options: {
        encodePaths: true,
        maxOperations: 20
      },
      // Files to be uploaded.
      upload: [
        {
          src: 'important_document.txt',
          dest: 'documents/important.txt',
          options: { gzip: true }
        },
        {
          src: 'passwords.txt',
          dest: 'documents/ignore.txt',

          // These values will override the above settings.
          options: {
            bucket: 'some-specific-bucket',
            access: 'authenticated-read'
          }
        },
        {
          // Wildcards are valid *for uploads only* until I figure out a good implementation
          // for downloads.
          src: 'documents/*.txt',

          // But if you use wildcards, make sure your destination is a directory.
          dest: 'documents/'
        }
      ],

      // Files to be downloaded.
      download: [
        {
          src: 'documents/important.txt',
          dest: 'important_document_download.txt'
        },
        {
          src: 'garbage/IGNORE.txt',
          dest: 'passwords_download.txt'
        }
      ],

      del: [
        {
          src: 'documents/launch_codes.txt'
        },
        {
          src: 'documents/backup_plan.txt'
        }
      ],

      sync: [
        {
          // only upload this document if it does not exist already
          src: 'important_document.txt',
          dest: 'documents/important.txt',
          options: { gzip: true }
        },
        {
          // make sure this document is newer than the one on S3 and replace it
          options: { verify: true },
          src: 'passwords.txt',
          dest: 'documents/ignore.txt'
        },
        {
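          // Note: assumes `var path = require('path');` at the top of the
          // Gruntfile and a `variable` object defined elsewhere.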
          src: path.join(variable.to.release, "build/cdn/js/**/*.js"),
          dest: "jsgz",
          // make sure the wildcard paths are fully expanded in the dest
          rel: path.join(variable.to.release, "build/cdn/js"),
          options: { gzip: true }
        }
      ]
    }

  }

});

Running grunt s3 using the above config produces the following output:

$ grunt s3
Running "s3" task
>> ↙ Downloaded: documents/important.txt (e704f1f4bec2d17f09a0e08fecc6cada)
>> ↙ Downloaded: garbage/IGNORE.txt (04f7cb4c893b2700e4fa8787769508e8)
>> ↗ Uploaded: documents/document1.txt (04f7cb4c893b2700e4fa8787769508e8)
>> ↗ Uploaded: passwords.txt (04f7cb4c893b2700e4fa8787769508e8)
>> ↗ Uploaded: important_document.txt (e704f1f4bec2d17f09a0e08fecc6cada)
>> ↗ Uploaded: documents/document2.txt (04f7cb4c893b2700e4fa8787769508e8)
>> ✗ Deleted: documents/launch_codes.txt
>> ✗ Deleted: documents/backup_plan.txt
Done, without errors.
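
Note that Node does not expand ~ in file paths, so the grunt.file.readJSON('~/grunt-aws.json') call above may fail to locate the file on some systems. A minimal sketch of a more portable approach, assuming the credentials file lives in your home directory:

var path = require('path');

module.exports = function (grunt) {
  // HOME is set on Unix-like systems, USERPROFILE on Windows.
  var home = process.env.HOME || process.env.USERPROFILE;

  grunt.initConfig({
    aws: grunt.file.readJSON(path.join(home, 'grunt-aws.json')),
    // ... s3 configuration as above ...
  });
};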

Alternative ways of including your s3 configuration

Environment variables

If you do not pass in a key and secret with your config, grunt-s3 will fall back to the following environment variables:

  • AWS_ACCESS_KEY_ID
  • AWS_SECRET_ACCESS_KEY
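
For illustration, reading those variables explicitly in your config is equivalent to omitting key and secret and letting the fallback kick in; the bucket name here is hypothetical:

s3: {
  options: {
    // Explicit form of the fallback grunt-s3 performs on its own.
    key: process.env.AWS_ACCESS_KEY_ID,
    secret: process.env.AWS_SECRET_ACCESS_KEY,
    bucket: 'my-bucket'
  }
}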

Helpers

Helpers have been removed from Grunt 0.4. To access these methods directly, you can now require the s3 library files like so:

var s3 = require('grunt-s3').helpers;

Make sure you explicitly pass the options into the method. If you've used grunt.initConfig() you can use grunt.config.get('s3') to access them.
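
Putting those two pieces together, a minimal sketch (the file paths are hypothetical):

var s3 = require('grunt-s3').helpers;

// Pull the task options out of the grunt config and pass them explicitly.
var options = grunt.config.get('s3');

s3.upload('dist/app.js', 'js/app.js', options)
  .done(function (msg) {
    console.log(msg);
  })
  .fail(function (err) {
    console.error(err);
  });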

s3.upload(src, dest, options)

Upload a file to s3. Returns a Promises/J-style Deferred object.

src (required) - The path to the file to be uploaded. Accepts wildcards, e.g. files/*.txt

dest (required) - The path on s3 where the file will be uploaded, relative to the bucket. If you use a wildcard for src, this should be a directory.

options (optional) - An object containing any of the following values. These values override any values specified in the main config.

  • key - An Amazon S3 credentials key
  • secret - An Amazon S3 credentials secret
  • bucket - An Amazon S3 bucket
  • headers - An object containing any headers you would like to send along with the upload.
  • access - A specific Amazon S3 ACL. Available values: private, public-read, public-read-write, authenticated-read, bucket-owner-read, bucket-owner-full-control
  • gzip - (boolean) If true, uploads will be gzip-encoded.

s3.download(src, dest, options)

Download a file from s3. Returns a Promises/J-style Deferred object.

src (required) - The path on S3 from which the file will be downloaded, relative to the bucket. Does not accept wildcards.

dest (required) - The local path where the file will be saved.

options (optional) - An object containing any of the following values. These values override any values specified in the main config.

  • key - An Amazon S3 credentials key
  • secret - An Amazon S3 credentials secret
  • bucket - An Amazon S3 bucket
  • headers - An object containing any headers you would like to send along with the request.

s3.delete(src, options)

Delete a file from s3. Returns a Promises/J-style Deferred object.

src (required) - The path on S3 of the file to delete, relative to the bucket. Does not accept wildcards.

options (optional) - An object containing any of the following values. These values override any values specified in the main config.

  • key - An Amazon S3 credentials key
  • secret - An Amazon S3 credentials secret
  • bucket - An Amazon S3 bucket
  • headers - An object containing any headers you would like to send along with the request.

Examples

var upload = s3.upload('dist/my-app-1.0.0.tar.gz', 'archive/my-app-1.0.0.tar.gz');

upload
  .done(function(msg) {
    console.log(msg);
  })
  .fail(function(err) {
    console.log(err);
  })
  .always(function() {
    console.log('dance!');
  });

var download = s3.download('dist/my-app-0.9.9.tar.gz', 'local/my-app-0.9.9.tar.gz');

download.done(function() {
  s3.delete('dist/my-app-0.9.9.tar.gz');
});

Changelog

v0.1.0

  • Update to be compatible with grunt version 0.4.x.

v0.0.9

  • Bump version of knox to 0.4.1.

v0.0.6

  • Bump version of underscore.deferred to 0.1.4. Version 0.1.3 would fail to install sometimes due to there being two versions of the module with different capitalizations in npm.


grunt-s3's Issues

Removal of helpers

Hi.

I can't see any issue regarding this. But if it's been covered, I'm sorry.

Helpers have been removed in Grunt 0.4 in favour of using require(). I notice your Readme still gives examples of using helpers. These no longer work.

I've managed to get them working by requiring the lib file directly like so:

var s3 = require('grunt-s3/tasks/lib/s3').init(grunt);

This all seems to work as long as you explicitly pass in the s3 options when you use it. Like so:

var pull = s3.pull('file.txt', 'file.txt', grunt.config.get('s3'));

I'm happy to update the ReadMe documentation with this method. But I was wondering if there would be a cleaner way of requiring the s3 lib file directly. Perhaps by changing something in tasks/s3.js to return a reference to it when required directly? I'm not sure how this would work.

Ideally you could do this to get access to the s3 methods in lib/s3.js:

var s3 = require('grunt-s3').init(grunt);

security?

I have one question. What do you think about security? I mean having access and secret keys written in plain text in your grunt.js config file. In my configuration grunt.js is in my /webroot folder so anyone (who knows the url) can access it. I don't want to expose my S3 login info to the whole world!
Right now I am using s3cmd to sync my files with S3, and I am looking at how to automate this with grunt (as I am already using it for other tasks).
What is your solution for this? Thanks!

Kris

No debug output when running `grunt s3`

I'm having trouble getting the s3 grunt task working. I'm currently on grunt 0.4.0. Here is what I have in my config:

    s3: {
        options: {
            key: '<%= localConfig.aws.key %>',
            secret: '<%= localConfig.aws.secret %>',
            bucket: '<%= localConfig.aws.bucket %>',
            access: "public-read"
        },
        deploy: {
            options: {},
            upload: [
                {
                    src: "build/example-<%= pkg.version %>.min.js",
                    dest: "shawn/",
                    gzip: true
                },
                {
                    src: "build/example-<%= pkg.version %>.js",
                    dest: "shawn/"
                }
            ]
        }
    }

The keys are definitely being substituted properly. This is the output:

$ grunt s3
Running "s3" task

Done, without errors.

Without any debug info it's a bit hard to figure out what is wrong. Any idea what's wrong?

File with full path not getting uploaded

My Gruntfile.js contains the files below to be uploaded.

      upload: [
        {
          src: 'public/javascripts/widget/*.*',
          dest: 'widget/',
          gzip: true
        },
        {
          src: 'public/assets/widget.css',
          dest: 'widget/',
          gzip: true
        },
        {
          src: 'public/assets/widget_body.css',
          dest: 'widget/',
          gzip: true
        },
        {
          src: 'public/images/widget/*',
          dest: 'widget/',
          gzip: true
        } 
      ]

Paths containing wildcards were getting uploaded to S3, but files specified with a full path were not. Then I tried changing it to

 upload: [
        {
          src: 'public/javascripts/widget/*.*',
          dest: 'widget/',
          gzip: true
        },
        {
          src: 'public/assets/widget*',
          dest: 'widget/',
          gzip: true
        },
        {
          src: 'public/images/widget/*',
          dest: 'widget/',
          gzip: true
        } 
      ],

and this works. Files specified with a full path are not getting uploaded; can you have a look into this?

Do not upload if file exists?

Is it possible to not upload when a file already exists? We use md5 hashed assets and a lot never change, so we are prolonging the deployment process significantly

Thanks
PS: Great project.

Error: getaddrinfo ENOTFOUND

I'm running into this issue with the latest version of grunt-s3:
Automattic/knox#192

I get Error: getaddrinfo ENOTFOUND after a while when uploading many files.

I found that adding res.resume() to the client.putFile callback in tasks/lib/s3.js solved the problem.
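
For reference, a sketch of where that call would sit, following knox's client.putFile(src, dest, headers, callback) shape; the surrounding wiring is illustrative, not the plugin's actual code:

client.putFile(file, dest, headers, function (err, res) {
  if (err) { return callback(err); }
  // Drain the response so the socket is released back to the agent;
  // without this, sockets accumulate and lookups eventually fail
  // with ENOTFOUND.
  res.resume();
  callback(null, res);
});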

log destination name, not source name

when using gzip: true, the source is a temporary directory and filename, making the log output unreadable and useless:

>> ↗ Uploaded: /var/folders/7n/01l92jms11j_bzh0wl4bt42r0000gp/T/1371067271643.1118 (28bc2e0afb2333a24a47ceab21f533e8)
>> ↗ Uploaded: /var/folders/7n/01l92jms11j_bzh0wl4bt42r0000gp/T/1371067271645.8916 (fedddb00b156319fa99a2da566cfdcbd)
>> ↗ Uploaded: /var/folders/7n/01l92jms11j_bzh0wl4bt42r0000gp/T/1371067271646.9639 (71b70c273bc0f3ed2869fbcd3bfe1807)
>> ↗ Uploaded: /var/folders/7n/01l92jms11j_bzh0wl4bt42r0000gp/T/1371067271639.2773 (89fe4b001560d43ffc150eeb412761a8)
>> ↗ Uploaded: /var/folders/7n/01l92jms11j_bzh0wl4bt42r0000gp/T/1371067271644.859 (b42486b3e2b7bf1983ea28a55e316012)
>> ↗ Uploaded: /var/folders/7n/01l92jms11j_bzh0wl4bt42r0000gp/T/1371067271647.9175 (6f3f28c887ff5f155c27d89f60e8b766)
>> ↗ Uploaded: /var/folders/7n/01l92jms11j_bzh0wl4bt42r0000gp/T/1371067271647.2693 (0d3c6ccd5d26a0a5f958a176ec1f2345)
>> ↗ Uploaded: /var/folders/7n/01l92jms11j_bzh0wl4bt42r0000gp/T/1371067271648.5952 (7f87204f61c66867c2d4bc4346da949f)
>> ↗ Uploaded: /var/folders/7n/01l92jms11j_bzh0wl4bt42r0000gp/T/1371067271670.9746 (e96a13bf74c41f2009bdac7f6d1a2580)
>> ↗ Uploaded: /var/folders/7n/01l92jms11j_bzh0wl4bt42r0000gp/T/1371067271667.337 (3395ee25e1c1a681f8861de9533bfad7)
>> ↗ Uploaded: /var/folders/7n/01l92jms11j_bzh0wl4bt42r0000gp/T/1371067271661.6167 (71a044130fb520427ee460693870165e)
>> ↗ Uploaded: /var/folders/7n/01l92jms11j_bzh0wl4bt42r0000gp/T/1371067271660.1301 (0347b01dbbcae8655bd30a9c27ec384d)
>> ↗ Uploaded: /var/folders/7n/01l92jms11j_bzh0wl4bt42r0000gp/T/1371067271644.9058 (39e20b85fffc903172b74fb66c3d824b)
>> ↗ Uploaded: /var/folders/7n/01l92jms11j_bzh0wl4bt42r0000gp/T/1371067271669.4417 (090591ee91599a5cf9d1163715cd6c22)
>> ↗ Uploaded: /var/folders/7n/01l92jms11j_bzh0wl4bt42r0000gp/T/1371067271672.44 (a8e8d0ea25b6d56338c1807ed9d64eed)
>> ↗ Uploaded: /var/folders/7n/01l92jms11j_bzh0wl4bt42r0000gp/T/1371067271659.609 (6e72c8691437324b198e7a4753711b01)
>> ↗ Uploaded: /var/folders/7n/01l92jms11j_bzh0wl4bt42r0000gp/T/1371067271671.025 (e45d8cc4675ce8a5c33aa720d4d33232)
>> ↗ Uploaded: /var/folders/7n/01l92jms11j_bzh0wl4bt42r0000gp/T/1371067271674.4954 (01fa9ea9f18dfb89751b29250bd40830)
>> ↗ Uploaded: /var/folders/7n/01l92jms11j_bzh0wl4bt42r0000gp/T/1371067271673.5227 (b14e178bf59923145088a1a093be344c)
>> ↗ Uploaded: /var/folders/7n/01l92jms11j_bzh0wl4bt42r0000gp/T/1371067271665.5266 (72c3fcf9f71fc3782433adc70c0d588b)
>> ↗ Uploaded: /var/folders/7n/01l92jms11j_bzh0wl4bt42r0000gp/T/1371067271678.92 (124ad5de5f812e983c831e8e385ccd72)
>> ↗ Uploaded: /var/folders/7n/01l92jms11j_bzh0wl4bt42r0000gp/T/1371067271683.458 (2496a8b9c80137d41decefa69fadbe1a)

Bump NPM Version

I was just about to submit a pull request to fix directories in windows, switching '\' to '/' for uploads; but I noticed the fix is already applied, it just hasn't been pushed to NPM.

I know it is an awkward version, since you are on an alpha right now, just wondering how long you expect until you push the next version?

Thank you!

Feature Request: Allow for "Dry Run" logging/output

I have a moderately complex build process I'm managing, and I'd like to be able to prevent my build script from uploading to S3 until I've verified that all the rules I have in place are correct.

Could a debug: true option be added that would cause all the input files, as well as their intended destinations, to be logged, rather than performing an actual upload to S3?

'rel' option not documented

I found the 'rel' option for uploads by reading the source code and it is really useful (the only way to expand /**/ wildcard paths from src). It should probably be added to the readme.md so people know about it...

No "sync" capabilities

The command line tool s3cmd has a great little sync command that only uploads files that differ/do not exist in the target bucket.

This is a pretty common case for deploying static sites, and could save a lot of time/bandwidth for large projects.

As outlined on the s3cmd site, it could be done by comparing MD5 checksum and filesize with the files in the bucket before attempting to upload them.

Different JSON structure needed to upload

Hi, first of all thanks for this project.

I'm using grunt v0.4.1 and grunt-s3 v0.1.0.
I've tried using the example provided on the project's home page, but without any luck. I've looked here https://github.com/pifantastic/grunt-s3/blob/master/tasks/lib/s3.js#L97-L99 and found out that I needed a slightly different JSON structure:

s3: {
  key: 'key',
  secret: 'secret',
  bucket: 'bucket',
  access: 'public-read',
  upload: [{
    src: 'dist/scripts/main.min.js',
    dest: 'main.min.js',
    gzip: true
  }]
}

Am I doing something wrong?
With the above JSON my file gets correctly uploaded.
I'm kind of new with grunt so I don't know if there's a convention I'm missing here.
If the above JSON is correct I could submit a PR.

Cheers

enable 'region' configuration of the knox client

Here (and in some other place): https://github.com/pifantastic/grunt-s3/blob/master/tasks/lib/s3.js#L98
you use:

// Pick out the configuration options we need for the client.
var client = knox.createClient(_(config).pick([
  'endpoint', 'port', 'key', 'secret', 'access', 'bucket', 'secure'
]));

could you enable also the 'region' attribute? Like this:

var client = knox.createClient(_(config).pick([
  'region', 'endpoint', 'port', 'key', 'secret', 'access', 'bucket', 'secure'
]));

Feature Request: Upload both compressed/uncompressed (gzip) version of same file

Howdy, just a small feature request. Could the gzip configuration option be extended to allow both an uncompressed and a compressed version of the same file to be uploaded to s3?

My thought was this: if gzip: true, it will continue to act as it does now. But if gzip: "some-file-name.gz.ext", then the original file is uploaded to dest and a gzip-compressed version is uploaded as name.gz.ext. Either that, or the extension is simply replaced on the fly (i.e. .js → .gz.js).

option to not log every transfer

We're uploading hundreds of files; there's no reason to log them all with grunt.log.ok. grunt.verbose.ok would be fine, with a grunt.log.ok of total stats, such as total files uploaded/downloaded/deleted, etc.

Error: Hostname/IP doesn't match certificate's altnames

Hi,

I am trying to use a bucket name with dots in it (ex: media.mywebsite.com), and it fails with the message: Error: Hostname/IP doesn't match certificate's altnames.

This issue has been fixed with "knox": "0.8.0" (see the "style" section of the Knox documentation). I have updated my local grunt-s3 with the change, and it looks like it's working for my use case. Not sure about the rest, though...

Best,
Olivier

knox-0.0.11 seems to break grunt-s3

Just a heads up on an issue we ran into today -- it looks like the latest version of knox, 0.0.11, breaks grunt-s3... on upload/change/delete, only HTTP 400 errors are returned.

I'll update this ticket when I get a chance to look into it more, but locking down the version of knox to 0.0.9 seems to be the best workaround for now.

Gzipped files have their 'Content-Type' overwritten

This may not be best practice, but... in my project I've been uploading HTML views without the .html suffix, so that the URLs show up as .../about rather than .../about.html. Haven't been able to find a better solution for this problem, given S3's routing limitations.

It works rather well – I simply specify Content-Type: text/html in the headers, and the page renders correctly in the browser.

However, when I gzip these files – that same text/html is overwritten with application/octet-stream, and the HTML view is downloaded as opposed to rendering in-browser.

Hardly ideal!

Is there a way to circumvent this issue? Perhaps by favoring any manually-designated headers over the ones provided by gzip?
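
One possible shape for that precedence, sketched with the underscore and mime libraries the plugin already depends on (upload.headers and file are illustrative names, not the plugin's actual internals):

// User-supplied headers win; gzip-related defaults only fill in
// whatever the user did not set explicitly.
var headers = _.defaults({}, upload.headers, {
  'Content-Encoding': 'gzip',
  'Content-Type': mime.lookup(file)
});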

Documentation update request for upload.rel

My discovery of the upload item .rel property (below) allowed me to remove about 75 lines of Gruntfile.js configuration. Unfortunately that very useful property is entirely undocumented and left out of your README examples. Might be helpful for others if you could add that to the doc/examples.

upload: [
  {
    src: 'dist/**',
    dest: 'myapp/',
    rel: 'dist/'
  }
]

getConfig() is not defined

I'm trying to use the delete functionality:

        s3: {
            options: {
                key: process.env.AWS_KEY,
                secret: process.env.AWS_SECRET,
                access: 'public-read'
            },

            clean: {
                options: {
                    del: [{
                        src: '**/*.*'
                    }]
                }
            }
        },

I get the following error:

$ grunt s3:clean --stack
Running "s3:clean" (s3) task
Warning: getConfig is not defined Use --force to continue.
ReferenceError: getConfig is not defined
    at Object.exports.del (/Users/nick.heiner/opower/x-web-deploy/node_modules/grunt-s3/tasks/lib/s3.js:342:38)
    at /Users/nick.heiner/opower/x-web-deploy/node_modules/grunt-s3/tasks/lib/S3Task.js:53:22
    at /Users/nick.heiner/opower/x-web-deploy/node_modules/grunt/node_modules/async/lib/async.js:86:13
    at Array.forEach (native)
    at _forEach (/Users/nick.heiner/opower/x-web-deploy/node_modules/grunt/node_modules/async/lib/async.js:26:24)
    at Object.async.forEach (/Users/nick.heiner/opower/x-web-deploy/node_modules/grunt/node_modules/async/lib/async.js:85:9)
    at Object.S3Task.run (/Users/nick.heiner/opower/x-web-deploy/node_modules/grunt-s3/tasks/lib/S3Task.js:52:5)
    at Object.<anonymous> (/Users/nick.heiner/opower/x-web-deploy/node_modules/grunt-s3/tasks/s3.js:37:10)
    at Object.<anonymous> (/Users/nick.heiner/opower/x-web-deploy/node_modules/grunt/lib/grunt/task.js:258:15)
    at Object.thisTask.fn (/Users/nick.heiner/opower/x-web-deploy/node_modules/grunt/lib/grunt/task.js:78:16)

Aborted due to warnings.

Am I doing something wrong?

301, write ECONNRESET

I got the following error

Running "s3:dev" (s3) task
>> Error: Upload error: /home/user/node_project/public/file1 (301)
Fatal error: write ECONNRESET

with the following configuration

s3:{
        options:{
            key: '***',
            secret: '***',
            bucket: 'bucket.01',
            access: 'public-read'
        },
        dev:{
            // Files to be uploaded.
            upload:[
                {
                    src:'public/*',
                    dest:'/'
                }
            ]
        }
    }

How to keep subdirectories when uploading?

Hi,

Is it possible to copy a directory-structure from src to dst?

So let's say you have:

src/scripts/main.js
src/scripts/vendor/require.js
src/scripts/vendor/jquery.js

But after running an upload with:

upload: [
          {
            src: 'src/scripts/**/*.js',
            dest: '/scripts/'
          }
]

the require.js and jquery.js files are in the /scripts/ dir, instead of /scripts/vendor/ dir.

Cheers,
Alwin

Use temporary folder during deployment

A few times, the .gz files created when using gzip: true stuck around after the deploy. If they were moved to a temp folder instead, then even if there is a problem with the cleanup, they will no longer litter the folders.

This issue was created from #51

After 5th file, uploads slow drastically

Hey there, I'm using the 2.0 alpha off of master, and I seem to have an issue that the first 5 uploads go very quickly, then they drastically slow afterwards.

We're talking ~3 seconds total for the first 5 uploads, then ~ seconds per upload thereafter. Filtering a tcpdump by "s3" seems to show that there's just nothing going on. The first 5 uploads seem to happen nearly immediately, then there's a 4-5 minute pause, then uploads resume at a slower speed, with perhaps 15-30 seconds between uploads.

Here is what my configuration looks like

s3:
  options:
    key: "key"
    secret: "secret"
    bucket: "my.bucket.with.periods"
    secure: false
  production:
    options: {}
    upload: [
      {src: "build/img/*", dest: "/img"},
      {src: "build/js/*", dest: "/js"},
      {src: "build/css/*", dest: "/css"},
      {src: "build/*", dest: "/"}
    ]

I completely realize I'm using alpha software, so bugs might exist :) Unfortunately, I need the exposed secure: false flag for my bucket name with periods.

Any ideas what could be going on or how I could help to further debug?

Per-file options seem to get ignored/overridden

I'm using this config:

s3: {
    options: {
        key: process.env.AWS_KEY,
        secret: process.env.AWS_SECRET,
        bucket: 'static.pagerank.nfriedly.com',
        access: 'public-read',
        maxOperations: 4,
        gzip: true,
        headers: {
            'Cache-Control': 'max-age=' + 60*60*24*365 // 1 year
        }
    },
    'prod': {
        // These options override the defaults
        options: {

        },
        // Files to be uploaded.
        upload: [{
            src: 'public/*.html',
            dest: '/',
            // do gzip (default)
            headers: {
                'Cache-Control': 'max-age=' + 60*1 // 1 minute
            }
        }, {
            src: 'public/*.{js,css}',
            dest: '/',
            // do gzip (default)
            // 1-year caching (default)
        }, {
            src: 'public/*.{jpg,png,gif}',
            dest: '/',
            gzip: false
            // 1-year caching(default)
        }]
    }
}

And everything uploads correctly, but all files have the default options: my .html files have a 1-year cache control header and my images are gzipped.

I know I can work around this with multiple sub-tasks, but I thought I'd let you know (and check if I'm doing anything wrong)

Problems with AWS regions and windows

The current version of grunt-s3 uses an old version of knox; package.json explicitly pins "knox": "0.0.9". The latest available version is 0.4.1.
The old knox doesn't know how to calculate correct URLs for s3 buckets that are not in the default US region, e.g. in Ireland, so all uploads fail with a 307 status code.

Also the current system doesn't allow nested folder structure. The destination path is generated using path.join(upload.dest, path.basename(file)); which removes the path from the source file.

Better alternative would be to allow users to define a base path, and then generate the destination using something like
var dest = file.replace(options.basePath, "");

At the moment I am using

upload: [
  {
    src: 'release/**',
    dest: '',
    gzip: true,
    basePath: 'release/'
  }
]

with my custom version, where I want to upload the whole release folder without the source folder name.

Migrate Away from Knox to Official AWS Node SDK

As Amazon has released an official AWS SDK for Node (http://aws.amazon.com/sdkfornodejs/), this plugin should move away from using Knox to using the official SDK. This will make it easier to add needed functionality such as the ability to set Cache-Control headers when uploading new files to S3. This also takes Knox out of the loop as a potential bottleneck for new functionality.

npm 0.0.4 release upload not working

Just noting that the npm 0.0.4 release upload is not working.

The master branch does work, so you may have just forgotten to publish the updates to npm? Or the config examples I'm using from the repo may be more up to date than what is in npm.

Deleting S3 objects?

Just curious if support for deleting objects in s3 is in the general roadmap, or if you're looking for someone to contribute delete support? Thanks!

Temp file shown when uploading files

I'm getting the following output for every file when uploading since upgrading from alpha to alpha.2:

↗ Uploaded: /var/folders/9_/wpwnc64j71n7fn_vc4h1bdqr0000gn/T/1372195118776.9172

How to do glob uploads?

I tried the normal Grunt expansion syntax:

                upload: [{
                    expand: true,
                    cwd: "release/",
                    src: ["**/*.js"],
                    dest: ""
                }]

but this was not working :(

Error: socket hang up

I sometimes get socket errors when trying to push content to s3 (screenshot of the error omitted).

Why does this happen? It seems to be non-deterministic.

grunt-s3 doesn't work with node-0.8

It seems like the underscore.deferred version that grunt-s3 requires doesn't work with node-0.8. When you do an npm install in grunt-s3 after upgrading to 0.8, you get this error:

npm ERR! Error: No compatible version found: underscore.deferred@'>=0.1.2- <0.2.0-'
npm ERR! No valid targets found.
npm ERR! Perhaps not compatible with your version of node?
npm ERR!     at installTargetsError (/usr/local/lib/node_modules/npm/lib/cache.js:506:10)
npm ERR!     at next_ (/usr/local/lib/node_modules/npm/lib/cache.js:452:17)
npm ERR!     at next (/usr/local/lib/node_modules/npm/lib/cache.js:427:44)
npm ERR!     at /usr/local/lib/node_modules/npm/lib/cache.js:419:5
npm ERR!     at saved (/usr/local/lib/node_modules/npm/node_modules/npm-registry-client/lib/get.js:136:7)
npm ERR!     at /usr/local/lib/node_modules/npm/node_modules/graceful-fs/graceful-fs.js:230:7
npm ERR!     at Object.oncomplete (fs.js:297:15)
npm ERR!  [Error: No compatible version found: underscore.deferred@'>=0.1.2- <0.2.0-'
npm ERR! No valid targets found.
npm ERR! Perhaps not compatible with your version of node?]

Doesn't seem to be doing anything

It simply says:

$ grunt s3
Running "s3" task

Done, without errors.

But it never actually does anything.

s3: {
            options: {
                key: 'My key',
                secret: 'my secret',
                bucket: 'mybucket',
                access: 'public-read'
            },
            dev: {
                options: {
                    encodePaths: true,
                    maxOperations: 20
                },
                upload: [
                    {
                        src: 'important_document.txt',
                        dest: 'documents/important.txt',
                    }
                ]
            }
        }

Deleting with wildcard

👍 I am able to remove individual files by name after updating to 40f01fe, thanks! But trying

clean: {
  del: [{
    src: '**/*.*'
  }]
}

as per #70 doesn't remove anything. Is there a way to wipe a folder/bucket?

Download (and only download) fails with 'aws "key" required'

Same settings work for upload, but not download:

s3: {
            options: {
                encodePaths: true,
                maxOperations: 50,
                access: 'public-read',
                key: '<%= aws.key %>',
                secret: '<%= aws.secret %>',
                region: '<%= aws.region %>',
                bucket: '<%= aws.bucket %>'
            },
            upload_file: {                
                upload: [
                    {
                        src: '/path/to/a/file1.tar.gz',
                        dest: 'file1.tar.gz'
                    }
                ]
            },
            download_file: {                
                download: [
                    {
                      src: 'file2.tar.gz',
                      dest: 'file2.tar.gz'
                    }
                ]
            }
        }

grunt s3:upload_file works, grunt s3:download_file fails with Warning: aws "key" required Use --force to continue

Error: "has no method 'replace'" when src is array

Using fresh install from Git repository link (0.1.0 with Grunt 0.4.0rc7).

Here is my grunt-s3 configuration:

s3: {
    key: '<%= aws.key %>',
    secret: '<%= aws.secret %>',
    bucket: '<%= aws.bucket %>',
    access: 'public-read',
    upload: [
        {
            rel: '<%= siteConfig.output %>',
            src: ['<%= siteConfig.output %>/**/*.*', '!<%= siteConfig.output %>/js/*.js', '!<%= siteConfig.output %>/css/*.css', '!<%= siteConfig.output %>/img/*.*' ],
            dest: '/',
            gzip: true
        },
        {
            rel: '<%= siteConfig.output %>',
            src: ['<%= siteConfig.output %>/js/*.js', '<%= siteConfig.output %>/css/*.css', '<%= siteConfig.output %>/img/*.*'],
            dest: '/',
            gzip: true,
            headers: { 'Cache-Control': 'public, max-age=' + (60 * 60 * 24 * 365) }
        }
    ]
}

Seems to work fine when src properties are not arrays, but I get the following error with the above configuration:

Warning: Object /Users/Andrew/Dropbox/Projects/andrewduthie.com/output/**/*.*,!/Users/Andrew/Dropbox/Projects/andrewduthie.com/output/js/*.js,!/Users/Andrew/Dropbox/Projects/andrewduthie.com/output/css/*.css,!/Users/Andrew/Dropbox/Projects/andrewduthie.com/output/img/*.* has no method 'replace' Use --force to continue.

From my own debugging, seems to be caused at s3.js:54

upload.src = path.resolve(grunt.template.process(upload.src));

When I change upload.src to file, it seems to work correctly for me, but I'm not familiar enough with it to be able to say it's a fix in all cases.

Feature Request: Multi-Task Support?

Would it be easy to make this task into a multi-task?

I could see a situation where a user may have several different s3 operations for their project. For example:

  • A cdn upload script that uploads built files to s3
  • A deployment script that downloads config files from s3

The ideal situation would be for those tasks to be able to be separated from one another, which a multi-task would do nicely.

has no method 'init'

Trying to use with Yeoman 1.0.

  1. Installed grunt-s3 via npm.
  2. Added the s3 config section as per the documentation.
  3. Registered the npm task with grunt.loadNpmTasks('grunt-s3');.

When I run grunt s3 I get:

Loading "s3.js" tasks...ERROR

TypeError: Object #<Object> has no method 'init'

Any suggestions?

Grunt 0.4 Release

I'm posting this issue to let you know that we will be publishing Grunt 0.4 on Monday, February 18th.

If your plugin is not already Grunt 0.4 compatible, would you please consider updating it? For an overview of what's changed, please see our migration guide.

If you'd like to develop against the final version of Grunt before Monday, please specify "grunt": "0.4.0rc8" as a devDependency in your project. After Monday's release, you'll be able to use "grunt": "~0.4.0" to actually publish your plugin. If you depend on any plugins from the grunt-contrib series, please see our list of release candidates for compatible versions. All of these will be updated to final status when Grunt 0.4 is published.

Also, in an effort to reduce duplication of effort and fragmentation in the developer community, could you review the grunt-contrib series of plugins to see if any of your functionality overlaps significantly with them? Grunt-contrib is community maintained with 40+ contributors—we'd love to discuss any additions you'd like to make.

Finally, we're working on a new task format that doesn't depend on Grunt: it's called node-task. Once this is complete, there will be one more conversion, and then we'll never ask you to upgrade your plugins to support our changes again. Until that happens, thanks for bearing with us!

If you have any questions about how to proceed, please respond here, or join us in #grunt on irc.freenode.net.

Thanks, we really appreciate your work!
