s3-folder-upload's Introduction

s3 folder upload


A little script to upload static files to an S3 bucket using the official Amazon SDK.

AWS Credentials

In order to use this module, you'll need AWS credentials. You can load them in two ways:

  • By passing them directly to the method as the second parameter.
  • By setting an environment variable with the path to a file containing the credentials. The variable is AWS_CREDENTIALS_PATH, and the file should define accessKeyId, secretAccessKey, region and bucket (a sketch of such a file is shown below).
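
The exact format of that file isn't documented in this section, but given the fields it must define, a plausible JSON credentials file looks like this (all values are placeholders):

{
  "accessKeyId": "<Your Access Key Id>",
  "secretAccessKey": "<Your Secret Access Key>",
  "region": "<Your Aimed Region>",
  "bucket": "<Your Bucket Name>"
}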

Install

npm install s3-folder-upload -D

If you want to use the CLI without installing the package, you can run it with npx:

npx s3-folder-upload

Require

const s3FolderUpload = require('s3-folder-upload')
// or the ES6 way
// import s3FolderUpload from 's3-folder-upload'

const directoryName = 'statics'
// It's strongly recommended to load your credentials from a JSON file, environment variables or command-line arguments
const credentials = {
  "accessKeyId": "<Your Access Key Id>",
  "secretAccessKey": "<Your Secret Access Key>",
  "region": "<Your Aimed Region>",
  "bucket": "<Your Bucket Name>"
}

// optional options to be passed as a parameter to the method
const options = {
  useFoldersForFileTypes: false,
  useIAMRoleCredentials: false
}

// optional CloudFront invalidation rule
const invalidation = {
  awsDistributionId: "<Your CloudFront Distribution Id>",
  awsInvalidationPath: "<The Path to Invalidate>"
}

s3FolderUpload(directoryName, credentials, options, invalidation)

Options

  • useFoldersForFileTypes (default: true): Upload files to a subdirectory named after their file type.
  • useIAMRoleCredentials (default: false): Ignore all credentials passed via parameters or environment variables and use the instance's IAM role credentials instead.
  • uploadFolder (default: undefined): If specified, the static files will be uploaded to that folder. For example, if you upload static.js to https://statics.s3.eu-west-1.amazonaws.com with an uploadFolder of my-statics, the file will end up at https://statics.s3.eu-west-1.amazonaws.com/my-statics/static.js.
  • ACL (default: public-read): Defines which AWS accounts or groups are granted access and the type of access (see the sketch after this list).
  • CacheControl (default: public, max-age=31536000): HTTP header holding directives (instructions) for caching in both requests and responses.
  • Expires (default: 31536000): Header containing the date/time after which the response is considered stale. If the response has a Cache-Control header with the max-age or s-maxage directive, the Expires header is ignored.
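
Since ACL, CacheControl and Expires appear in the options list, a plausible way to change them for a whole upload is to set them in the options object. A minimal sketch (the values here are illustrative, not the defaults):

const options = {
  ACL: 'private',
  CacheControl: 'public, max-age=3600',
  Expires: 3600
}

s3FolderUpload(directoryName, credentials, options)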

If you use the library programmatically, you can also override the ACL, CacheControl and Expires values at file level:

const options = {
  useFoldersForFileTypes: false,
  useIAMRoleCredentials: false,
}

const filesOptions = {
  'index.html': {
    CacheControl: 'public, max-age=300',
    Expires: new Date("Fri, 01 Jan 1971 00:00:00 GMT")
  }
}

s3FolderUpload(directoryName, credentials, options, filesOptions)

CLI

s3-folder-upload <folder>

Example:
s3-folder-upload statics

For the AWS Credentials

  • you can define an environment variable called AWS_CREDENTIALS_PATH with the path to a file containing the needed info.
  • you can pass the needed info via command line parameters:
    s3-folder-upload <folder> --accessKeyId=<your access key id> --bucket=<destination bucket> --region=<region> --secretAccessKey=<your secret access key>
  • you can use the useIAMRoleCredentials option to rely on the instance IAM profile instead of any credentials passed via parameters or environment variables

For Options

  • you can pass the needed info via command line parameters:
    s3-folder-upload <folder> <credentials parameters> --useFoldersForFileTypes=false

For CloudFront invalidation

  • you can pass the needed info via command line parameters; the invalidation needs both parameters:
    s3-folder-upload <folder> <credentials parameters> --awsDistributionId=<distributionId> --awsInvalidationPath="/js/*"
    

Environment Variables

S3_FOLDER_UPLOAD_LOG: Specifies the logging level for the library.

  • none: No logging output
  • only_errors: Only errors are logged
  • all (default): Errors, progress and useful messages are logged.

Example of use:

S3_FOLDER_UPLOAD_LOG=only_errors s3-folder-upload <folder>

If you use the library programmatically, this environment variable will be read as well. For example:

S3_FOLDER_UPLOAD_LOG=only_errors node upload-script.js
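
Because the value is read from the environment, it should also be possible to set it from inside a script before the upload starts. A minimal sketch, assuming the library reads process.env when it is invoked:

// upload-script.js
process.env.S3_FOLDER_UPLOAD_LOG = 'only_errors' // set before requiring/running the upload

const s3FolderUpload = require('s3-folder-upload')

const credentials = {
  accessKeyId: '<Your Access Key Id>',
  secretAccessKey: '<Your Secret Access Key>',
  region: '<Your Aimed Region>',
  bucket: '<Your Bucket Name>'
}

s3FolderUpload('statics', credentials)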

Wish list

  • Upload an entire folder to S3 instead of a single file
  • Upload files asynchronously to improve speed
  • Automatically detect the content type of files (limited support)
  • Return the list of uploaded files with their final URLs
  • Better support for parameters with the CLI
  • Improve the content type function to detect more and better file types
  • Avoid re-uploading files that didn't change
  • Check if caching is blocking updates of statics on the website.
  • Map uploaded paths to create a default invalidation path rule in CloudFront.

s3-folder-upload's People

Contributors

danielbalog86, danil-smirnov, kud, midudev, miduga, oscar-raig


s3-folder-upload's Issues

path.resolve is not compatible with globby on Windows

Basically, it's about the backslash paths that only occur on Windows:
path.resolve() returns "C:\Users\User\myproject",
which causes globby to find no files under it, and stripping the folder prefix from the file paths doesn't work either.

My suggestion is simply to add .replace(/\\/g, "/") after the path, i.e.:

globby([`${directoryPath}/**/*`.replace(/\\/g, "/")], { onlyFiles: true })
file.replace(`${directoryPath}/`.replace(/\\/g, "/"), '')
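
A self-contained sketch of that workaround, assuming a CommonJS-compatible version of globby (the directory name is illustrative):

const path = require('path')
const globby = require('globby')

// Normalize Windows backslashes to forward slashes, since glob
// patterns expect forward slashes on every platform.
const directoryPath = path.resolve('statics').replace(/\\/g, '/')

globby([`${directoryPath}/**/*`], { onlyFiles: true }).then(files => {
  // Strip the directory prefix to get bucket-relative keys
  const keys = files.map(file => file.replace(`${directoryPath}/`, ''))
  console.log(keys)
})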

Exclude file/folder

I want to exclude a file and a folder under a specific path. For example:
exclude app.js and the fonts folder from 'C://workstation/test/2022-11-17T13_03_00.519Z/'

Feature Request: Make ACL configurable

I noticed that the ACL in the upload config is hardcoded as ACL: 'public-read'.

I need the uploaded files to have configurable ACL.

Please make this configurable.

doesn't overwrite files

I need to overwrite files that already exist (and it would be better if I could also delete them), but apparently this isn't supported.

Async / Callback

Hi, could you implement an async/callback way of doing this job?

Debug option

Hi,

could you add a debug option to the logging so it can be turned off when not needed? Or maybe use the debug npm module?
Thank you!


Feature Request: Make the folder name the name of the folder you're uploading.

I noticed that if the folder you are uploading to S3 is not within the path, the name of the folder defaults to other.

The feature request would be to simply parse the given folder path input in order to grab the folder name.

E.g. if I run s3folderUpload('path/to/myFolder', credentials),
the folder name created on S3 would be myFolder and not other.
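
The requested parsing is essentially what Node's built-in path.basename already does; a minimal sketch:

const path = require('path')

// Derive the remote folder name from the last segment of the given
// local path, instead of falling back to 'other'.
const folderName = path.basename('path/to/myFolder') // => 'myFolder'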

useIAMRoleCredentials does not seem to work in CLI mode

Has the useIAMRoleCredentials option been carefully tested?

I didn't manage to make it work. Credentials from the IAM role are attached and available when checking with curl:

curl http://169.254.169.254/latest/meta-data/iam/security-credentials/my_role
... creds ...

But if I run the task using npx:

npx s3-folder-upload test --region=eu-west-1 --bucket=my.bucket --useIAMRoleCredentials=true --uploadFolder=test/1234567

I keep getting this error:

root/aws-credentials.json
[warn] Impossible load credentials from /root/aws-credentials.json
[error] Invalid credentials

Cannot upload files to a bucket that has ACL disabled

We can't upload files to a bucket that has ACLs disabled. This is because we can't set the ACL option to undefined, since it will be overwritten in this line.

This is the output when trying to upload to the bucket:

Uploading files from "dist/apps/**** to "s3://***"
[credentials] Checking credentials from parameters...
[credentials] Using credentials passed by parameters
[config] Directory to upload:
         /home/**************
[aws] Initialize AWS Client...
[config] Load config credentials and start AWS Client
[s3] Amazon S3 initialized
[fs] Reading directory...
[fs] Got 15 files to upload

[network] Upload 15 files...
- Uploading 679.f108534251926380.esm.js.LICENSE.txt...
AccessControlListNotSupported: The bucket does not allow ACLs
