
s3deploy's Introduction

s3deploy

Project status: active – The project has reached a stable, usable state and is being actively developed.

A simple tool to deploy static websites to Amazon S3 and CloudFront with Gzip and custom headers support (e.g. "Cache-Control"). It uses ETag hashes to check if a file has changed, which makes it optimal in combination with static site generators like Hugo.
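
A typical invocation uploads a directory of generated files to a bucket. A minimal sketch (the bucket name and region below are placeholders):

s3deploy -source=public/ -bucket=example.com -region=eu-west-1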

Install

Pre-built binaries can be found on the project's GitHub Releases page.

s3deploy is a Go application, so you can also install the latest version with:

 go install github.com/bep/s3deploy/v2@latest

To install on macOS using Homebrew:

brew install bep/tap/s3deploy

Note: the Homebrew tap above currently stops at v2.8.1; see this issue for more info.

Note that s3deploy works well with a continuous integration tool such as CircleCI. See this tutorial for a full walkthrough of using s3deploy with CircleCI; a minimal configuration sketch follows.
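
A minimal CircleCI configuration sketch, assuming the generated site is already available in public/ and that AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are set as project environment variables (the image tag, bucket, and region are placeholders, not part of the official tutorial):

version: 2.1
jobs:
  deploy:
    docker:
      - image: cimg/go:1.22
    steps:
      - checkout
      # Site build step omitted; it should leave the generated files in public/.
      - run: go install github.com/bep/s3deploy/v2@latest
      - run: s3deploy -source=public/ -bucket=example.com -region=eu-west-1
workflows:
  deploy-site:
    jobs:
      - deploy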

Configuration

Flags

The list of flags from running s3deploy -h:

-V print version and exit
-acl string
    provide an ACL for uploaded objects. to make objects public, set to 'public-read'. all possible values are listed here: https://docs.aws.amazon.com/AmazonS3/latest/userguide/acl-overview.html#canned-acl (default "private")
-bucket string
    destination bucket name on AWS
-config string
    optional config file (default ".s3deploy.yml")
-distribution-id value
    optional CDN distribution ID for cache invalidation, repeat flag for multiple distributions
-endpoint-url url
	optional AWS endpoint URL override
-force
    upload even if the etags match
-h	help
-ignore string
    regexp pattern for ignoring files
-key string
    access key ID for AWS
-max-delete int
    maximum number of files to delete per deploy (default 256)
-path string
    optional bucket sub path
-public-access
    DEPRECATED: please set -acl='public-read'
-quiet
    enable silent mode
-region string
    name of AWS region
-secret string
    secret access key for AWS
-source string
    path of files to upload (default ".")
-try
    trial run, no remote updates
-v	enable verbose logging
-workers int
    number of workers to upload files (default -1)

The flags can be set in one of the following ways (in priority order), as illustrated in the example after this list:

  1. As a flag, e.g. s3deploy -path public/
  2. As an OS environment variable prefixed with S3DEPLOY_, e.g. S3DEPLOY_PATH="public/".
  3. As a key/value in .s3deploy.yml, e.g. path: "public/"
  4. For key and secret resolution, the OS environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY (and AWS_SESSION_TOKEN) will also be checked. This way you don't need to do anything special to make it work with AWS Vault and similar tools.
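
For example, the same path setting can be expressed in any of these equivalent forms:

# 1. As a flag
s3deploy -path public/

# 2. As an OS environment variable
S3DEPLOY_PATH="public/" s3deploy

# 3. As a key/value in .s3deploy.yml
path: "public/"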

Environment variable expressions of the form ${VAR} in .s3deploy.yml will be expanded before the file is parsed:

path: "${MYVARS_PATH}"
max-delete: "${MYVARS_MAX_DELETE@U}"

Note the special @U (Unquote) syntax for the int field.

Routes

The .s3deploy.yml configuration file can also contain one or more routes. A route matches files against a regular expression. Each route can apply:

headers : Header values, the most notable probably being Cache-Control. Note that the list of system-defined metadata that S3 currently supports and returns as HTTP headers when hosting a static site is very short. If you have more advanced requirements (e.g. security headers), see this comment.

gzip : Set to true to gzip the content when stored in S3. This will also set the correct Content-Encoding when fetching the object from S3.

Example:

routes:
    - route: "^.+\\.(js|css|svg|ttf)$"
      #  cache static assets for 1 year.
      headers:
         Cache-Control: "max-age=31536000, no-transform, public"
      gzip: true
    - route: "^.+\\.(png|jpg)$"
      headers:
         Cache-Control: "max-age=31536000, no-transform, public"
      gzip: false
    - route: "^.+\\.(html|xml|json)$"
      gzip: true

Global AWS Configuration

See https://docs.aws.amazon.com/sdk-for-go/api/aws/session/#hdr-Sessions_from_Shared_Config

The AWS SDK will fall back to credentials from ~/.aws/credentials.

If you set the AWS_SDK_LOAD_CONFIG environment variable, it will also load shared config from ~/.aws/config, where you can for example set a default region to use when none is provided.
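
A minimal shared setup might look like this (the profile name and region are placeholders; the key values are AWS's documentation examples):

~/.aws/credentials:

[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

~/.aws/config:

[default]
region = eu-west-1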

Example IAM Policy

{
   "Version": "2012-10-17",
   "Statement":[
      {
         "Effect":"Allow",
         "Action":[
            "s3:ListBucket",
            "s3:GetBucketLocation"
         ],
         "Resource":"arn:aws:s3:::<bucketname>"
      },
      {
         "Effect":"Allow",
         "Action":[
            "s3:PutObject",
            "s3:PutObjectAcl",
            "s3:DeleteObject"
         ],
         "Resource":"arn:aws:s3:::<bucketname>/*"
      }
   ]
}

Replace <bucketname> with your own bucket name.

CloudFront CDN Cache Invalidation

If you have configured a CloudFront CDN in front of your S3 bucket, you can supply the distribution ID with the -distribution-id flag. s3deploy will then invalidate the CloudFront cache for the updated files after the deployment to S3. Note that the AWS user must have the needed access rights.

Note that CloudFront allows 1,000 invalidation paths per month at no charge, so s3deploy tries to be smart about the invalidation strategy: it tries to reduce the number of paths to 8. If that isn't possible, it falls back to a full invalidation, e.g. "/*".
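
For example, to deploy and invalidate the caches of two CloudFront distributions in one run (the distribution IDs, bucket, and region below are placeholders):

s3deploy -source=public/ -bucket=example.com -region=eu-west-1 -distribution-id=E1ABCDEF2GHIJK -distribution-id=E2LMNOPQ3RSTUV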

Example IAM Policy With CloudFront Config

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation"
            ],
            "Resource": "arn:aws:s3:::<bucketname>"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:DeleteObject",
                "s3:PutObjectAcl"
            ],
            "Resource": "arn:aws:s3:::<bucketname>/*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "cloudfront:GetDistribution",
                "cloudfront:CreateInvalidation"
            ],
            "Resource": "*"
        }
    ]
}

Background Information

If you're looking at s3deploy, then you've probably already seen the aws s3 sync command. That command uses a sync strategy that is not optimised for static sites: it compares the timestamp and size of your files to decide whether to upload each file.

Because static-site generators can recreate every file (even when the content is identical), the timestamps are updated, so aws s3 sync will needlessly upload every single file. s3deploy instead compares ETag hashes to detect actual changes and uploads only what changed.

Alternatives

  • go3up by Alexandru Ungur
  • s3up by Nathan Youngman (the starting-point of this project)

Stargazers over time


s3deploy's People

Contributors

bep, blimmer, deining, dependabot[bot], earthboundkid, joona, jsibbiso, mistobaan, nathany, natrim, oodavid, satotake, titanous


s3deploy's Issues

Upload non-public objects to S3

When CloudFront is used together with S3, you can restrict access to S3 by using an Origin Access Identity (see https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html). However, as objects are uploaded with a public-read ACL (see https://github.com/bep/s3deploy/blob/master/lib/s3.go#L100), these objects can be accessed directly via S3.

Maybe this is the most common behaviour, but it would be nice to be able to parameterise it (maybe via the YAML config).

Force setting does not force?

Perhaps I misunderstand, but I read this in the README:

-force
upload even if the etags match

I run s3deploy 2.3.0 on CircleCI with this command:

s3deploy -bucket mysite -region eu-central-1 -source public -key $AWS_KEY -secret $AWS_SECRET -config .s3deploy.yml -public-access true -force

I added the -force flag at the end of that command. This is however the output:

s3deploy 2.3.0, commit ed74ea6018a0859b89806a809368ec812e9cc9dd, built at 2019-01-01T14:01:40Z
404.html (ETag) ↑ about/index.html (size) ↑ contact/index.html (size) ↑ csharp/atom.xml (size) ↑ csharp/computer-drive/check-ready-drive/index.html (size) ↑ csharp/computer-drive/difference-total-free-space/index.html (size) ↑ csharp/computer-drive/drive-free-space/index.html (size) ↑ csharp/computer-drive/drive-root-directory/index.html (size) ↑ csharp/computer-drive/filter-computer-drives/index.html (size) ↑ csharp/computer-drive/get-drive-info/index.html (size) ↑ csharp/computer-drive/inaccessible-drives/index.html (size) ↑ csharp/computer-drive/index.html (size) 

[... removed the hundreds of HTML files for brevity ]

Total in 2.93 seconds
Deleted 0 of 0, uploaded 306, skipped 1006 (23% changed)

I don't get why s3deploy still skipped 1,006 files when I used force. Am I using the feature wrong?

Security-related Headers in s3deploy.yml?

Hi - in .s3deploy.yml the README shows examples of adding caching and gzip headers. Will it also work for security-related headers like these?

Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
X-Frame-Options: SAMEORIGIN
Referrer-Policy: strict-origin

Performance issues when running in a Docker container on CircleCI

I'm running s3deploy on CircleCI using version 2.3.0. The performance is very poor, even though I've increased the size of my container to 8 vCPUs and 16 GB of RAM. I've also increased the number of s3deploy workers to 8. Often, the process runs out of memory and is shut down by Circle. If it succeeds, it takes a very long time, nearly 10 minutes or more. My project is quite large, and the particular set of uploads that breaks involves lots of image files in several subfolders.

I've tried a variety of optimizations, in particular modifying the config file to optimize the regex or reducing the search paths (my hunch being that there's some deep recursion happening and requiring heavy memory use).

I'm able to run the same routine on OS X which has similar specs as the Docker container I'm running on CircleCI. It runs very quickly in my local environment.

Also note that there are no new files that need uploading, so this slowness is not due to upload speeds. Are there any performance tips anyone can offer?

No binaries for the 2.10 release

I have just noticed (due to broken automation on my side) that the s3deploy 2.10.0 release has no binaries available. I can easily work around this and point to the previous release (instead of latest) or use go install (which installs 2.10.0).

But I wanted to report this in case you missed it and you want to fix it.

Gzipping files corrupts binary files

If you set gzip: true for files that are not text (e.g. .wasm files), they end up with Content-Type: application/x-gzip in S3, which means they are not automatically un-gzipped in the browser even with Content-Encoding: gzip, causing errors (as they are still gzipped).

The workaround I am using is to set Content-Type manually, but it gets tedious if you have many of these files, as you need to set Content-Type for every extension separately.

It would be nice if s3deploy detected the Content-Type before gzipping and set it appropriately.

Add some basic tests

I started this as a fork, and it had no tests. I continued to use it to deploy my own sites, which was the testing I needed.

But I notice other people are using this, so it would be nice to have some basic tests. Not sure I want to bother with the S3 integration part ...

Invalid memory address or nil pointer dereference

It looks like the latest release, v2.8.0, somehow broke s3deploy in a pipeline we have on CircleCI, with the following error:

s3deploy v2, commit none, built at unknown
iframe.html (ETag) ↑ panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x20 pc=0x8b65cc]

goroutine 6 [running]:
github.com/bep/s3deploy/v2/lib.(*osFile).ACL(0xc0002da510, 0xc0004f4060, 0xc000381000)
	/home/circleci/go/pkg/mod/github.com/bep/s3deploy/[email protected]/lib/files.go:130 +0xc
github.com/bep/s3deploy/v2/lib.(*s3Store).Put(0xc0000d07d0, 0xaddf58, 0xc0000c5040, 0xae0408, 0xc0002da510, 0xc000402018, 0x1, 0x1, 0x0, 0xc000114e88)
	/home/circleci/go/pkg/mod/github.com/bep/s3deploy/[email protected]/lib/s3.go:88 +0x6d
github.com/bep/s3deploy/v2/lib.(*store).Put(0xc00018f400, 0xaddf58, 0xc0000c5040, 0xae0408, 0xc0002da510, 0xc000402018, 0x1, 0x1, 0x0, 0x0)
	/home/circleci/go/pkg/mod/github.com/bep/s3deploy/[email protected]/lib/store.go:66 +0xe7
github.com/bep/s3deploy/v2/lib.(*Deployer).upload(0xc00030e0e0, 0xaddf58, 0xc0000c5040, 0x0, 0x0)
	/home/circleci/go/pkg/mod/github.com/bep/s3deploy/[email protected]/lib/deployer.go:317 +0x1a6
github.com/bep/s3deploy/v2/lib.Deploy.func2(0x0, 0x0)
	/home/circleci/go/pkg/mod/github.com/bep/s3deploy/[email protected]/lib/deployer.go:112 +0x3f
golang.org/x/sync/errgroup.(*Group).Go.func1(0xc0002de750, 0xc00000e3a8)
	/home/circleci/go/pkg/mod/golang.org/x/[email protected]/errgroup/errgroup.go:58 +0x59
created by golang.org/x/sync/errgroup.(*Group).Go
	/home/circleci/go/pkg/mod/golang.org/x/[email protected]/errgroup/errgroup.go:55 +0x66

Exited with code exit status 2
CircleCI received exit code 2

We don't have any special config when it comes to s3deploy arguments:

s3deploy -source=storybook-static/ -region=$AWS_REGION -key=$AWS_ACCESS_KEY_ID -secret=$AWS_SECRET_ACCESS_KEY -distribution-id=$AWS_CLOUDFRONT_DISTRIBUTION_ID -bucket=$AWS_BUCKET_NAME

Currently we have downgraded to v2.7.0 and the pipeline is back; not sure exactly what causes the problem in the newer version.

Allow configuration in yml file

  • I examined open issues in this repository.
  • I read the README file.
  • I Googled for examples of the .s3deploy.yml configuration file.
  • I Googled for how other people use s3deploy.

I like the tool and the advanced route configuration that we can do in the .s3deploy.yml file. But I think the tool would be easier to use from the command line if we could just type s3deploy and have all other settings loaded from the config file. That would make version control of the settings possible too.


If this feature is already possible, I'm asking in this issue for a quick example in the README file for reference.

My approach at least didn't work:

bucket: example.com
key: xcsds
region: us-east-2
secret: xdsfdsf
source: public

routes:
    - route: "^.+\\.(js|css|svg|ttf)$"
      headers:
         Cache-Control: "max-age=31536000, no-transform, public"
      gzip: true
    - route: "^.+\\.(png|jpg)$"
      headers:
         Cache-Control: "max-age=31536000, no-transform, public"
      gzip: true
    - route: "^.+\\.(html|xml|json|js)$"
      gzip: true
C:\site>s3deploy -try -config .s3deploy-us.yml
s3deploy 2.0.2, commit cc7116a41bbeed8cc9f250b48143c461a1fb4ef6, built at 2018-04-24T20:31:38Z
error: AWS bucket is required

Thanks for the time and effort put in making this tool. 🙂

Invalidating multiple CDN

Hello,

Is it possible to invalidate multiple CDNs? I'm hosting my website in an S3 bucket with an apex domain (without www) through a CDN. However, for SEO purposes I did not just point my www subdomain CNAME at the same CDN. Instead, I created a different S3 bucket that redirects access requests to the other (main) S3 bucket; it is also served through a different CDN. When I push changes using s3deploy, it updates my main S3 bucket, which is fine, and it also invalidates my main CDN with my apex domain. However, my www subdomain keeps showing the older version.

Filenames with Unicode differs in MacOS and S3

I am using Mac OS to build my Hugo website.

It works perfectly when served with hugo server

But when I uploaded the generated files to an S3 bucket, I noticed that some links with Unicode characters are not accessible via inner links or direct input.

It seems the problem is macOS related: the OS uses NFD-normalized filenames.

Another S3 CLI tool has the same problem (unresolved):

s3tools/s3cmd#639

As far as I can see, the problem was solved for Hugo in server mode:
https://discourse.gohugo.io/t/categories-with-accented-characters/505

Can you add the same feature to this package too?

fatal error: all goroutines are asleep - deadlock!

When I try to deploy, with e.g. s3deploy -v -source=public/ -region=eu-west-2 -bucket=mybucketname.com -key=<mykey> -secret=<mysecret>, I just get:

s3deploy 1.1, commit deb2d965d340f5114b7a307a52f30cdb7a6aa596, built at 2017-08-28T12:04:39Z
fatal error: all goroutines are asleep - deadlock!

goroutine 1 [semacquire]:
sync.runtime_Semacquire(0xc4200ea44c)
	/usr/local/go/src/runtime/sema.go:56 +0x39
sync.(*WaitGroup).Wait(0xc4200ea440)
	/usr/local/go/src/sync/waitgroup.go:131 +0x72
github.com/bep/s3deploy/lib.Deploy(0xc420118000, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/go/src/github.com/bep/s3deploy/lib/deployer.go:206 +0x4c2
main.main()
	/go/src/github.com/bep/s3deploy/main.go:46 +0x1f7

goroutine 5 [chan receive]:
github.com/bep/s3deploy/lib.(*Deployer).worker(0xc4200ea440, 0xc420016180, 0xc4200ea460, 0xc42005e300, 0xc4200161e0)
	/go/src/github.com/bep/s3deploy/lib/deployer.go:383 +0xca
created by github.com/bep/s3deploy/lib.Deploy
	/go/src/github.com/bep/s3deploy/lib/deployer.go:198 +0x3c9

goroutine 6 [chan receive]:
github.com/bep/s3deploy/lib.(*Deployer).worker(0xc4200ea440, 0xc420016180, 0xc4200ea460, 0xc42005e300, 0xc4200161e0)
	/go/src/github.com/bep/s3deploy/lib/deployer.go:383 +0xca
created by github.com/bep/s3deploy/lib.Deploy
	/go/src/github.com/bep/s3deploy/lib/deployer.go:198 +0x3c9

goroutine 7 [chan receive]:
github.com/bep/s3deploy/lib.(*Deployer).worker(0xc4200ea440, 0xc420016180, 0xc4200ea460, 0xc42005e300, 0xc4200161e0)
	/go/src/github.com/bep/s3deploy/lib/deployer.go:383 +0xca
created by github.com/bep/s3deploy/lib.Deploy
	/go/src/github.com/bep/s3deploy/lib/deployer.go:198 +0x3c9

goroutine 8 [chan receive]:
github.com/bep/s3deploy/lib.(*Deployer).worker(0xc4200ea440, 0xc420016180, 0xc4200ea460, 0xc42005e300, 0xc4200161e0)
	/go/src/github.com/bep/s3deploy/lib/deployer.go:383 +0xca
created by github.com/bep/s3deploy/lib.Deploy
	/go/src/github.com/bep/s3deploy/lib/deployer.go:198 +0x3c9

goroutine 9 [chan receive]:
github.com/bep/s3deploy/lib.(*Deployer).worker(0xc4200ea440, 0xc420016180, 0xc4200ea460, 0xc42005e300, 0xc4200161e0)
	/go/src/github.com/bep/s3deploy/lib/deployer.go:383 +0xca
created by github.com/bep/s3deploy/lib.Deploy
	/go/src/github.com/bep/s3deploy/lib/deployer.go:198 +0x3c9

goroutine 10 [chan receive]:
github.com/bep/s3deploy/lib.(*Deployer).worker(0xc4200ea440, 0xc420016180, 0xc4200ea460, 0xc42005e300, 0xc4200161e0)
	/go/src/github.com/bep/s3deploy/lib/deployer.go:383 +0xca
created by github.com/bep/s3deploy/lib.Deploy
	/go/src/github.com/bep/s3deploy/lib/deployer.go:198 +0x3c9

goroutine 11 [chan receive]:
github.com/bep/s3deploy/lib.(*Deployer).worker(0xc4200ea440, 0xc420016180, 0xc4200ea460, 0xc42005e300, 0xc4200161e0)
	/go/src/github.com/bep/s3deploy/lib/deployer.go:383 +0xca
created by github.com/bep/s3deploy/lib.Deploy
	/go/src/github.com/bep/s3deploy/lib/deployer.go:198 +0x3c9

goroutine 12 [chan receive]:
github.com/bep/s3deploy/lib.(*Deployer).worker(0xc4200ea440, 0xc420016180, 0xc4200ea460, 0xc42005e300, 0xc4200161e0)
	/go/src/github.com/bep/s3deploy/lib/deployer.go:383 +0xca
created by github.com/bep/s3deploy/lib.Deploy
	/go/src/github.com/bep/s3deploy/lib/deployer.go:198 +0x3c9

Any ideas?

CircleCI auto-deploy suddenly stopped working

The CircleCI auto-deploy to AWS S3 has stopped working. Here is the error that comes up while deploying.

go get -v github.com/bep/s3deploy

Exit code: 1
#!/bin/bash -eo pipefail
go get -v github.com/bep/s3deploy
github.com/bep/s3deploy (download)
package github.com/bep/s3deploy/v2/lib: cannot find package "github.com/bep/s3deploy/v2/lib" in any of:
/usr/local/go/src/github.com/bep/s3deploy/v2/lib (from $GOROOT)
/go/src/github.com/bep/s3deploy/v2/lib (from $GOPATH)

Exited with code exit status 1


The -path flag seems to be partially ignored

I am trying to use the -path flag to put my static site in a subdirectory of my bucket, as I believe is supported. However, regardless of what I try, all the source files are put in the root of the bucket. Oddly, when specifying a -path parameter, s3deploy claims that all the files are missing from the destination, regardless of how many times I run the command.

Repro steps:
s3deploy -bucket mydomain.com -path web -region us-east-1 -key AAAA -secret SSS -source public -v -path web/

Results:
All files are in the root of the mybucket.com domain. Subsequent runs of the same command show Deleted 0 of 0, uploaded X, skipped 0 (100% changed), seemingly as if it's looking in the subdirectory to see what exists, but then uploading to the root directory.

Expected results:
All files are placed in the given path/subdirectory.

Version/System
s3deploy v2, commit none, built at unknown running on Mac OS X 10.12.6.
Installed via go get -v github.com/bep/s3deploy this evening.

s3deploy_2.9.0_windows-amd64.zip contains `hugo.exe`

Thank you for your great project.

I found that the content of s3deploy_2.9.0_windows-amd64.zip looks wrong.

$ zipinfo s3deploy_2.9.0_windows-amd64.zip 
Archive:  s3deploy_2.9.0_windows-amd64.zip
Zip file size: 5325529 bytes, number of entries: 3
-rwx---     2.0 fat  9677312 bl defN 80-000-00 00:00 hugo.exe
-rw----     2.0 fat     8488 bl defN 80-000-00 00:00 README.md
-rw----     2.0 fat     1086 bl defN 80-000-00 00:00 LICENSE
3 files, 9686886 bytes uncompressed, 5325183 bytes compressed:  45.0%

On the other hand, the content of s3deploy_2.8.1_Windows-64bit.zip looks fine.

$ zipinfo s3deploy_2.8.1_Windows-64bit.zip 
Archive:  s3deploy_2.8.1_Windows-64bit.zip
Zip file size: 3099348 bytes, number of entries: 3
-rw-r--r--  2.0 unx     1086 bX defN 22-Aug-25 06:33 LICENSE
-rw-r--r--  2.0 unx     7616 bX defN 22-Aug-25 06:33 README.md
-rwxr-xr-x  2.0 unx  9157120 bX defN 22-Aug-25 06:36 s3deploy.exe
3 files, 9165822 bytes uncompressed, 3098940 bytes compressed:  66.2%

Probably this change is related.

https://github.com/bep/s3deploy/releases/tag/v2.9.0

We have ported the release script to Hugoreleaser. This means that the archive names have changed (standardised), but it also means that you get only one universal, notarized macOS PKG archive.

Encode unsafe characters in CloudFront cache invalidation paths

The CloudFront cache invalidation implemented in #32 throws an error in a specific case.

If the file name contains unsafe characters, then CloudFront cache invalidation fails with a 400 Bad Request status code.

For instance, files generated by webpack can have names like vendors~about.js. To invalidate the cache for such paths, we should be requesting /vendors%7Eabout.js and not /vendors~about.js, as the latter doesn't work.

s3deploy credentials sourcing

Maybe a stupid question but... I use s3deploy as part of an AWS CodeBuild project. In that project I pull the latest version available on the site and launch:

s3deploy -bucket $BUCKET -source ./public/ -v -region $REGION

This was working the last time I ran it (many months ago). I launched the same build project today and it failed with an Access Denied error message (for the bucket).

Apparently something has changed in the way s3deploy sources the credentials (the CodeBuild project has a role associated that gives full access to all buckets). I even tried to run the same command in the AWS Shell (with an administrative user) and got the same error. The only way I was able to work around this was by explicitly using the --key and --secret flags.

Was this behavior introduced with the latest releases? Is this a regression?

Thanks!

Build instructions? [Windows]

I wanted to make a small contribution, my first for Go, to this project. But I cannot build my adjusted code for testing: go build, go build main.go, and go build ./... all produce an exe file (when called from the project folder). But when I run that file on the command line, I still get the same program behaviour. In other words, my code changes are not being built.

I didn't experience this with other Go projects I made and built on my computer, so something seems 'special' about this project. After an hour of Googling and trying, I can't make it work. So I think I'm missing some kind of tool on my computer to build this project.

Can you list out what tools are needed? Thanks (and sorry to bother you for this). 🙂

Edit: I found the problem; the fork ended up under the wrong directory of GOPATH. I fixed that and now it works. 🙂

Inconsistent verbose output

In lib/deployer.go:

func (d *Deployer) skipFile(f *osFile) prints newline

but

func (d *Deployer) enqueueUpload(ctx context.Context, f *osFile) does not.

It makes verbose output a little hard to read if you are looking for just a few uploaded files.

Content-type header for JSON incorrect?

I see that here in the code s3deploy sets the content-type header.

But for JSON files, the header becomes application/octet-stream in my bucket, which seems to be the basic header for arbitrary binary data from what I know. The proper header would be application/json for JSON files.

For me this is relevant because Firefox and Chrome now download the JSON file from my website each time, instead of letting people see the file's content. I attribute that to the improper header.

Environment variable interpolation in config file

It would be convenient if you could include environment variables in the YAML config. In particular, I'm trying to set the Expires header which in my case requires a calculated value (the actual date 1 week from now, for example). If you could populate the config file using environment variables you could include any number of dynamic values if necessary.

Resetting so s3deploy will re-upload all the files

Hello @bep, thanks for this utility. It works great and even picks up on the awscli settings. Nice!

A question: I uploaded my files to S3 via an FTP client at first, for testing, then I learned about this tool. The first time it ran, it appeared to upload everything; on subsequent runs, I could see that it only processed a delta of files.

How do I reset it, so that it will "re-upload" everything? Is there a manifest somewhere that it is looking at, that I can delete?

Delete source options

It seems like this lib will delete data in S3 if it does not match the data being uploaded.

Can you add an option to toggle this feature? I think most users will not expect this package to delete their data.

flag provided but not defined: -distribution-id

For the last few weeks I've been getting the error flag provided but not defined: -distribution-id on my CircleCI builds, even though there is no new release.

This is the command running:
s3deploy -source=out/ -region=us-east-1 -key=$AWS_ACCESS_KEY_ID -secret=$AWS_SECRET_ACCESS_KEY -distribution-id=$PRODUCTION_AWS_CLOUDFRONT_DISTRIBUTION_ID -bucket=$PRODUCTION_BUCKET_NAME

Decrease max-age Cache-control value used in examples to follow RFC

The default max-age Cache-Control value in the project's README.md and tests suggests users use the very high value 630720000 (20 years). RFC 2616 says the Expires value SHOULD NOT be more than one year in the future.

I suggest using a value of 31536000 (1 year) in the docs/tests/examples to encourage this awesome project's users to be good netizens.

Add support for SSE aws:kms and kms-key-id

I would love to use s3deploy instead of aws s3 cp, but I have to specify aws s3 cp --sse aws:kms --sse-kms-key-id arn:aws:kms:my-key in order to transfer files into my bucket. I do not see any options for using a customer-managed KMS key with s3deploy. If you could please add this feature, I would be able to take advantage of your tool. Thank you!

Missing newline in output when enqueuing file

I am not sure if this is a bug or feature, thus I have not done a PR to fix this.

When running s3deploy and a file is marked for uploading, the output looks skewed to me because of a missing newline:

posts/index.html skipping …
presentations.html (size) ↑ projects.html skipping …
search/feed.rss skipping …

The issue is in deployer.go#L165, where a space is used instead of a newline. The space indicates to me that someone thought about this case, so I am hesitant to just send a PR :-)

If it's a feature feel free to close, if it's a bug I am happy to do a famous one liner!

Ignore Sub-folders Feature

Is it possible to ignore folders (local and remote) for a deploy?

What I would like to achieve is to sync everything from a folder except files in some sub-folder, like "pdf-docs", preserving any remote files (in S3) in that folder.
