circleci-public / aws-s3-orb

Integrate Amazon AWS S3 with your CircleCI CI/CD pipeline easily with the aws-s3 orb.

Home Page: https://circleci.com/orbs/registry/orb/circleci/aws-s3

License: MIT License

Language: Shell (100.00%)

aws-s3-orb's People

Contributors

brivu, christopherhackett, felicianotech, jaryt, kbravh, kyletryon


aws-s3-orb's Issues

Add a Changelog

I'm currently upgrading from 1.0.16 to 1.1.1 as a result of #13 getting merged, but I noticed there are no release notes and no changelog, so there's no way of knowing what changed without analyzing the source myself. This request is to add a changelog and use git tags aligned with the orb version so it's easier to see the diffs.
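For illustration, a minimal sketch of the requested workflow, using only version numbers already mentioned above:

git tag -a v1.1.1 -m "aws-s3 orb v1.1.1"   # tag the commit the orb was published from
git push origin v1.1.1
git diff v1.0.16..v1.1.1                   # diffs between releases become trivial to inspect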

Default env vars are aggressively chosen over specific env vars

Orb Version 1.0.15

The default env vars AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY win over specific env vars like AWS_ACCESS_KEY_ID_BLUE and AWS_SECRET_ACCESS_KEY_BLUE when all are present. Consider a config like the one found in the example:

      - aws-s3/sync:
          arguments: |
            --acl public-read \
            --cache-control "max-age=86400"
          aws-access-key-id: AWS_ACCESS_KEY_ID_BLUE
          aws-region: AWS_REGION_BLUE
          aws-secret-access-key: AWS_SECRET_ACCESS_KEY_BLUE

You can break that example by creating environment vars named AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY with junk values.

I expect "defaults" to be fall back values in this scenario.

We worked around it by naming both sets with suffixes and staying away from the default key names, as sketched below.
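For reference, a minimal sketch of that workaround. The _GREEN names are illustrative (only _BLUE appears above); the point is that neither credential set uses the default key names, so nothing can shadow them:

- aws-s3/sync:
    aws-access-key-id: AWS_ACCESS_KEY_ID_BLUE
    aws-secret-access-key: AWS_SECRET_ACCESS_KEY_BLUE
    aws-region: AWS_REGION_BLUE
    from: bucket
    to: 's3://blue-bucket'     # placeholder bucket
- aws-s3/sync:
    aws-access-key-id: AWS_ACCESS_KEY_ID_GREEN
    aws-secret-access-key: AWS_SECRET_ACCESS_KEY_GREEN
    aws-region: AWS_REGION_GREEN
    from: bucket
    to: 's3://green-bucket'    # placeholder bucket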

Cannot find a definition for command named copy

Orb Version
circleci/[email protected]

Describe the bug
I'm getting a build error when trying to run aws-s3/copy

To Reproduce

I added the following step (it's inside a custom command).
All AWS env params are set.

- aws-s3/copy:
    arguments: |
      --acl public-read \
      --cache-control "max-age=10"
    from: <FILE_LOCATION>
    to: <S3_LOCATION>

Expected behavior

Being able to run the copy command.

s3 commands can only be executed with the default profile. Need to add --profile flag

Describe the solution you'd like
After authenticating with the aws-cli orb, the s3 commands run by this orb always use the default profile. The aws-cli orb lets users specify a profile to configure; if a profile other than the default is configured, these s3 commands cannot be run with it.

Describe alternatives you've considered
The --profile flag needs to be added to the sync and copy commands in order to use different profiles; see the workaround sketch below.
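Until such a parameter exists, one possible workaround (a sketch, not verified against every orb version) is to pass the flag through the existing arguments parameter, since its contents are appended to the aws s3 invocation; my-profile is a placeholder:

- aws-s3/sync:
    from: build
    to: 's3://my-bucket'
    # Hypothetical workaround: pass --profile in via arguments.
    arguments: --profile my-profile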

aws command not found (orb 3.1.1)

Orb version
3.1.1

Describe the bug
I'm getting "/bin/bash: line 14: aws: command not found" when using "aws-s3/sync"

To Reproduce

my_job:
  docker:
    - image: cimg/node:lts-browsers
  steps:
    - aws-s3/sync:
        arguments: |
          --cache-control "max-age=86400"
        from: screenshots
        to: 's3://my_s3_bucket/yyyy'
        when: on_fail

Expected behavior

The AWS CLI should be set up successfully, because the "install-aws-cli" parameter defaults to true.

Additional context
I think this is because of the shebang used in the orb source, #!/bin/sh,
and this might be related to #4.
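A possible workaround while this is open, assuming the orb's install step short-circuits when aws is already on the PATH (as its which aws check suggests): use an image that ships the AWS CLI, so the broken install path is never exercised. The image tag below is illustrative:

my_job:
  docker:
    - image: cimg/aws:2023.09   # illustrative tag; any image with the AWS CLI preinstalled
  steps:
    - aws-s3/sync:
        arguments: |
          --cache-control "max-age=86400"
        from: screenshots
        to: 's3://my_s3_bucket/yyyy'
        when: on_fail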

[FEATURE] Support AWS endpoint

Is your feature request related to a problem? Please describe.
I want to use this orb with DigitalOcean Spaces. DO Spaces is an S3-compatible object store.

Describe the solution you'd like
I would like to use a variable to set the default AWS endpoint. See the s3cmd example.

Describe alternatives you've considered
Using s3cmd in CircleCI.

Additional context
This would make the orb suitable for many S3-compatible storage services. A possible interim workaround is sketched below.
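Until a dedicated parameter exists, the AWS CLI's global --endpoint-url option may already work when passed through the orb's arguments parameter (an untested sketch; the Spaces hostname is a placeholder):

- aws-s3/sync:
    from: public
    to: 's3://my-space'
    # Point the S3 call at an S3-compatible endpoint such as DO Spaces.
    arguments: --endpoint-url https://nyc3.digitaloceanspaces.com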

Use AWS_DEFAULT_REGION instead of AWS_REGION by default

Is your feature request related to a problem? Please describe.
All circleci/aws-* orbs I checked (cli, ecs, ecr) use AWS_DEFAULT_REGION as the default environment variable for the AWS region, except aws-s3, which uses AWS_REGION.

Describe the solution you'd like
Instead of overriding the env variable name when running the job, it would be more intuitive to use the same env variable the other aws-* orbs use (see the stopgap sketch below).
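As a stopgap, the region parameter can already be pointed at the conventional variable explicitly, since the orb's aws-region parameter takes an env var name:

- aws-s3/sync:
    # Reuse the env var name the other circleci/aws-* orbs expect.
    aws-region: AWS_DEFAULT_REGION
    from: build
    to: 's3://my-bucket'   # placeholder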

About arguments in aws s3 sync

Orb Version
aws-s3: circleci/[email protected]

Describe the bug

In orb version 4.0, I tried to ignore hidden files when running aws sync.
To do that, I used:

      - aws-s3/sync:
          arguments: --delete --exclude '.*'
          from: ./
          to: {target}

but CI runs the command like this, and it then fails to ignore the hidden files:

aws s3 sync ./ s3://web-docs-apne1-stg/CINF-2620-test/ --profile default --delete --exclude ''\''.*'\'''

Is there any way to do that without escaping? Thanks!
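Judging from the logged command, the orb appears to shell-quote each argument itself, so the literal single quotes survive into the pattern. A possible workaround (a sketch under that assumption, not verified) is to drop the quotes and let the orb's own quoting protect the pattern:

- aws-s3/sync:
    # Pass the pattern unquoted so it reaches the CLI as .* rather than '.*'
    arguments: --delete --exclude .*
    from: ./
    to: {target}   # placeholder, as above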

Windows support

Is your feature request related to a problem? Please describe.
Add support for Windows environments.

I might be missing something, but it seems that this orb doesn't work on Windows. Could you please add support or provide a workaround in the orb description?

Thanks

bash: line 37: sudo: command not found

Add optional override AWS credentials flag

Is your feature request related to a problem? Please describe.

When you set up the AWS CLI based on an assumed role (from the aws-cli orb, for example) and you don't set the default env_var_name values on this orb, the AWS CLI setup is overridden, making the S3 operation fail.

Describe the solution you'd like

Add an optional flag at the aws-cli/setup step level, set to false by default.

Describe alternatives you've considered

Another option is to set the AWS override variables, but I think the orb shouldn't repeat the CLI setup if it was already done before.

Additional context

The AWS CLI credentials can be set up via env vars, via an IAM role, or via STS credentials (session token), all from the aws-cli orb. This orb should be aware of those alternatives. A hypothetical sketch follows.
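A sketch of how the requested flag might look; the parameter name override-credentials is hypothetical and not part of the orb:

- aws-s3/sync:
    # Hypothetical parameter: when false, skip the orb's credential
    # setup and trust whatever the aws-cli orb already configured
    # (env vars, IAM role, or STS session token).
    override-credentials: false
    from: build
    to: 's3://my-bucket'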

error: argument --acl: Invalid choice / Unknown options: --acl

Orb Version
circleci/[email protected]

Describe the bug
In Orb version 2.x, I was using the following syntax:

- aws-s3/sync:
    arguments: |
      --acl=public-read \
      --cache-control="max-age=86400"

Note that --acl=public-read uses = in the command. The AWS CLI (to my knowledge) supports both variations; CLI tools typically support both, the reason for supporting = being to avoid ambiguity when an argument's value starts with a dash (e.g. --foo -1 vs. --foo=-1, where the latter makes clear that the value is not another option or a syntax error).

After upgrading the Orb to 3.x, the above syntax fails with:

usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:

  aws help
  aws <command> help
  aws <command> <subcommand> help

aws: error: argument --acl: Invalid choice, valid choices are:

private                                  | public-read                             
public-read-write                        | authenticated-read                      
aws-exec-read                            | bucket-owner-read                       
bucket-owner-full-control                | log-delivery-write                      


Exited with code exit status 252

The CircleCI docs document this command with a space instead of =, fair enough, so I changed it to:

- aws-s3/sync:
    arguments: |
      --acl public-read \
      --cache-control "max-age=8640"

which now results in

Unknown options: --acl public-read --cache-control max-age=86400

Exited with code exit status 252

Here is the full snippet I use:

jobs:
  sync-bucket:
    resource_class: small
    docker:
      - image: cimg/base:current
    steps:
      - checkout
      - attach_workspace:
          at: .
      - aws-s3/sync:
          arguments: |
            --acl public-read \
            --cache-control "max-age=86400"
          from: public
          to: 's3://my-bucket'

Using orb version 2.x, both variations (space and =) work, by the way.

To Reproduce

see above snippet

Expected behavior

It's expected to work as documented

Additional context

I saw that the example uses cimg/python:3.10 as an image, and I wonder why. Why should I, as a user, need to know or care that the AWS CLI under the hood is written in Python? There is no obvious use of Python; plus, AWS might change the language.

Shouldn't this dependency rather live inside the orb, so that I can use the base image?

https://github.com/CircleCI-Public/aws-s3-orb/blob/master/src/executors/default.yml#L16

The orb actually uses the AWS image, which builds on top of cimg/deploy, which does install Python: https://github.com/CircleCI-Public/cimg-deploy/blob/main/2023.07/Dockerfile#L14

I've also tested running the orb in a job that uses the base image, and that works.

Quite frankly, I've come across breaking changes with almost every orb major-version update, and I never found any proper documentation of those breaking changes. Often, they were bugs :(
For lots of the AWS-related orbs, I'm using year-old versions, since newer versions are all way too buggy :( -- I suspect some underlying issue with the quality of the codebase.

Unclear that overwrite means --delete for sync command

Orb Version
All versions

Describe the bug
This is more of a question/discussion about the overwrite parameter of the sync command. It is not obvious what this parameter actually does.

aws s3 sync will always overwrite files that already exist in the bucket if the files being synced are newer.

What the overwrite parameter actually does is add the --delete flag to the s3 command. This has a very different meaning from what you would expect of overwrite: --delete removes any files that already exist in the bucket but are not in the directory being synced. In other words, existing files are overwritten regardless of the overwrite parameter; what this parameter does is remove files from the bucket when they are not found in the source directory of the sync command.

Ideally, the overwrite parameter would be renamed to delete to match what it is actually doing to the s3 sync command.

This unexpected behaviour caused our web app to break due to files being deleted from the bucket when running the sync command with overwrite: true.

Expected behavior

Either the documentation for this orb should make it clear that overwrite: true means --delete, or the parameter should be renamed to delete. The sketch below illustrates the difference.
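To make the distinction concrete, a short sketch of the underlying CLI behaviour (paths and bucket are placeholders):

# Without --delete: newer local files still replace their S3 copies,
# but objects that exist only in the bucket are left alone.
aws s3 sync ./build s3://my-bucket

# With --delete (what overwrite: true actually adds): objects in the
# bucket with no counterpart in ./build are removed.
aws s3 sync ./build s3://my-bucket --delete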

AWS-S3 orb not working, "aws: command not found"

Orb version

circleci/[email protected]

What happened

(Screenshot from 2020-02-04: the job fails with "aws: command not found".)

Expected behavior

AWS CLI should be setup properly.
Relevant config pieces from config.yml:

version: 2.1
orbs:
  aws-s3: circleci/[email protected]
s3config: &s3config
  aws-access-key-id: AWS_ACCESS_KEY_ID
  aws-secret-access-key: AWS_SECRET_ACCESS_KEY
  aws-region: AWS_REGION
  arguments: |
    --acl public-read \
    --cache-control "max-age=86400"
  overwrite: true

executors:
  gcloud:
    docker:
      - image: google/cloud-sdk

jobs:
  deploy-prod:
    executor: gcloud
    steps:
      - checkout
      - run: bash ./scripts/auth-gcp.sh
      - setup_remote_docker
      - attach_workspace:
          at: workspace
      - run: bash ./scripts/deploy-prod.sh
      - aws-s3/sync:
          <<: *s3config
          from: workspace/build/static/js
          to: 's3://path/app/static/js'
      - aws-s3/sync:
          <<: *s3config
          from: workspace/build/static/css
          to: 's3://path/app/static/css'
      - aws-s3/sync:
          <<: *s3config
          from: workspace/build/static/media
          to: 's3://path/app/static/media'

Sync from container

Is your feature request related to a problem? Please describe.
No

Describe the solution you'd like
I would like to be able to use aws-ecr/build-and-push-image to build my frontend and then use aws-s3/sync to sync my build directory from said image into S3 before triggering a CloudFront invalidation.

Describe alternatives you've considered
Because I can only use aws-s3/sync to sync from a workspace, I resort to building my frontend within my workspace and forgo building this part as a Docker image.

Additional context
Docker image layer caching is substantially faster than restoring/storing a dependency cache. Being able to sync directly from my built image would be a substantially faster workflow and would create better consistency, since everything could be managed via a parallel aws-ecr/build-and-push-image and a follow-up helm install/upgrade + aws-s3/sync + CloudFront invalidation (see the workaround sketch below).
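A possible interim workaround (an untested sketch; the image name and paths are placeholders): extract the build directory from the built image with docker create/docker cp, then sync the extracted files:

- setup_remote_docker
- run:
    name: Extract build output from the image
    command: |
      # Create a stopped container from the built image and copy
      # its build directory onto the job's filesystem.
      docker create --name frontend my-registry/frontend:latest
      docker cp frontend:/app/build ./build
      docker rm frontend
- aws-s3/sync:
    from: ./build
    to: 's3://my-bucket'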

"Unknown options" error when running copy

Orb Version
3.1.1

Describe the bug
After updating from 3.0.0 to 3.1.1, the copy command fails with the error Unknown options: --content-type application/json. According to the AWS CLI docs, this argument is supported (https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3/cp.html).

To Reproduce

Steps to reproduce the behavior:

  1. Go to https://app.circleci.com/pipelines/github/mui/material-ui/86615/workflows/7be373c8-4164-4c81-9645-bbf0c4743e0e/jobs/460113
  2. Click on the failed step
  3. See error

Expected behavior
The command succeeds, as in the previous version.

Additional context
The config of the failing step is at https://github.com/mui/material-ui/blob/0d3ea74d1ef76f06deff16984d13608d388cfed9/.circleci/config.yml#L721-L737

S3 Orb broken on CircleCI Provided (non-python) images based on Ubuntu 20.04

Orb Version
1.0.16

If you use any CircleCI-provided (non-Python) image based on Ubuntu 20.04, for example cimg/go:1.14, this orb will fail to execute, as Ubuntu 20.04 no longer has python in the PATH, only python3. This does not manifest with cimg/python:3.6, but the orbs aren't much use if I have to do a bunch of OS preparation first.

To Reproduce
Run this job:

jobs:
  build:
    docker:
      - image: 'cimg/go:1.14'
    steps:
      - checkout
      - run: mkdir bucket && echo "lorem ipsum" > bucket/build_asset.txt
      - aws-s3/sync:
          arguments: |
            --acl public-read \
            --cache-control "max-age=86400"
          aws-access-key-id: AWS_ACCESS_KEY_ID_BLUE
          aws-region: AWS_REGION_BLUE
          aws-secret-access-key: AWS_SECRET_ACCESS_KEY_BLUE
          from: bucket
          overwrite: true
          to: 's3://my-s3-bucket-name/prefix'
      - aws-s3/copy:
          arguments: '--dryrun'
          from: bucket/build_asset.txt
          to: 's3://my-s3-bucket-name'
orbs:
  aws-s3: circleci/[email protected]
version: 2.1

Error Message:

#!/bin/bash -eo pipefail
if [ "false" == "false" ] && which aws > /dev/null; then
  echo "The AWS CLI is already installed. Skipping."
  exit 0
fi

export PIP=$(which pip pip3 | head -1)
if [[ -n $PIP ]]; then
  if which sudo > /dev/null; then
    sudo $PIP install awscli --upgrade
  else
    # This installs the AWS CLI to ~/.local/bin. Make sure that ~/.local/bin is in your $PATH.
    $PIP install awscli --upgrade --user
  fi
elif [[ $(which unzip curl | wc -l) -eq 2 ]]; then
  cd
  curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
  unzip awscli-bundle.zip
  if which sudo > /dev/null; then
    sudo ~/awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws
  else
    # This installs the AWS CLI to the default location (~/.local/lib/aws) and create a symbolic link (symlink) at ~/bin/aws. Make sure that ~/bin is in your $PATH.
    awscli-bundle/install -b ~/bin/aws
  fi
  rm -rf awscli-bundle*
  cd -
else
  echo "Unable to install AWS CLI. Please install pip."
  exit 1
fi
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 16.1M  100 16.1M    0     0  77.0M      0 --:--:-- --:--:-- --:--:-- 77.0M
Archive:  awscli-bundle.zip
  inflating: awscli-bundle/install   
  inflating: awscli-bundle/packages/six-1.15.0.tar.gz  
  inflating: awscli-bundle/packages/pyasn1-0.4.8.tar.gz  
  inflating: awscli-bundle/packages/rsa-3.4.2.tar.gz  
  inflating: awscli-bundle/packages/botocore-1.17.62.tar.gz  
  inflating: awscli-bundle/packages/PyYAML-5.3.1.tar.gz  
  inflating: awscli-bundle/packages/colorama-0.4.3.tar.gz  
  inflating: awscli-bundle/packages/futures-3.3.0.tar.gz  
  inflating: awscli-bundle/packages/urllib3-1.25.10.tar.gz  
  inflating: awscli-bundle/packages/virtualenv-16.7.8.tar.gz  
  inflating: awscli-bundle/packages/docutils-0.15.2.tar.gz  
  inflating: awscli-bundle/packages/colorama-0.4.1.tar.gz  
  inflating: awscli-bundle/packages/python-dateutil-2.8.0.tar.gz  
  inflating: awscli-bundle/packages/jmespath-0.10.0.tar.gz  
  inflating: awscli-bundle/packages/s3transfer-0.3.3.tar.gz  
  inflating: awscli-bundle/packages/awscli-1.18.139.tar.gz  
  inflating: awscli-bundle/packages/urllib3-1.25.7.tar.gz  
  inflating: awscli-bundle/packages/PyYAML-5.2.tar.gz  
  inflating: awscli-bundle/packages/setup/setuptools_scm-3.3.3.tar.gz  
  inflating: awscli-bundle/packages/setup/wheel-0.33.6.tar.gz  
/usr/bin/env: ‘python’: No such file or directory

Exited with code exit status 127
CircleCI received exit code 127

Expected behavior
Orb would run after installing.

"The config profile (default) could not be found" in 4.0.0

Orb Version
4.0.0

Describe the bug
The refactoring from #51 started using --profile default by default, instead of omitting the option entirely. The AWS CLI seems to consider these two different things, so our build fails if we try to use the latest release.

To Reproduce
Set environment variables only, without configuring an AWS CLI profile: AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN. In our case, these values came from aws sts assume-role.

Then run the aws-s3/sync job. Starting with v4.0.0, it fails with an error: "The config profile (default) could not be found"

Expected behavior
It works fine in v3.1.1 of the orb.

Additional context
v4.0.0 happens to be the only version that doesn't trigger a deprecation warning from CircleCI, so we're strongly motivated to upgrade. Otherwise we see this: "This job is using a deprecated deploy step, please update .circleci/config.yml to use a run step"
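A workaround sketch while this is open, assuming the AWS CLI is already installed in the image: materialize a default profile from the env vars before invoking the orb, so the hard-coded --profile default can resolve (aws configure set writes to the default profile when no --profile is given):

- run:
    name: Create a default profile from env vars
    command: |
      aws configure set aws_access_key_id "$AWS_ACCESS_KEY_ID"
      aws configure set aws_secret_access_key "$AWS_SECRET_ACCESS_KEY"
      aws configure set aws_session_token "$AWS_SESSION_TOKEN"
- aws-s3/sync:
    from: build
    to: 's3://my-bucket'   # placeholder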

Upload to non-amazon S3

Is your feature request related to a problem? Please describe.
I need to be able to deploy my files to Cloudflare R2. S3 is too expensive.

Describe the solution you'd like
I would like an s3-url argument, so I can specify a different upload location.

Invalid endpoint: https://s3..amazonaws.com at step "S3 Sync"

Orb Version
circleci/[email protected]

Describe the bug
I followed the guide at https://circleci.com/developer/orbs/orb/circleci/aws-s3, but the build failed at the "S3 Sync" step.
I did add AWS_ACCESS_KEY_ID, AWS_REGION, and AWS_SECRET_ACCESS_KEY for my IAM user (AmazonS3FullAccess) in the Context.

Here is the config.yaml file:

version: '2.1'
orbs:
  aws-s3: circleci/[email protected]
jobs:
  build:
    docker:
      - image: 'cimg/python:3.6'
    steps:
      - checkout
      - run: mkdir bucket && echo "lorem ipsum" > bucket/build_asset.txt
      - aws-s3/sync:
          arguments: |
            --acl public-read \
            --cache-control "max-age=86400"
          from: bucket
          to: 's3://awsbucketduyl/circleCI-test/'
      - aws-s3/copy:
          arguments: '--dryrun'
          from: bucket/build_asset.txt
          to: 's3://awsbucketduyl'
workflows:
  s3-example:
    jobs:
      - build

Here is the failure log:

#!/bin/bash -eo pipefail
aws s3 sync \
  bucket s3://awsbucketduyl/circleCI-test/  \
  --acl public-read \
--cache-control "max-age=86400"



Invalid endpoint: https://s3..amazonaws.com

Exited with code exit status 255
CircleCI received exit code 255
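The doubled dot in https://s3..amazonaws.com suggests the region expanded to an empty string, i.e. AWS_REGION was not visible to the job, for example because the context holding it was never attached. A sketch of the likely fix (the context name is a placeholder):

workflows:
  s3-example:
    jobs:
      - build:
          context: aws-credentials   # the context that defines the AWS_* variables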

Set S3 upload to run on failure.

Original request: https://discuss.circleci.com/t/run-orb-commands-on-fail/34396

Is your feature request related to a problem? Please describe.
The "when" attribute is not automatically applied to orb commands, meaning by default these orb commands can not be given the instruction to run "on_failure" as standard job steps are.
https://circleci.com/docs/2.0/configuration-reference/#the-when-attribute

This functionality should likely be extended to all orbs so I have created this feature request here: https://ideas.circleci.com/ideas/CCI-I-1360

Describe the solution you'd like
For the time being, to allow users to upload important logs etc., a parameter for setting the "when" condition can be added.

Describe alternatives you've considered
Two options described above.

Additional context
Slight concern that naming the parameter "when" (if able) may result in compatibility issues in the future if the above feature request is implemented.
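Until the orb grows such a parameter, one alternative is a plain run step with when: on_fail that shells out to the CLI directly (a sketch; it assumes the AWS CLI is already installed and configured in the job):

- run:
    name: Upload logs on failure
    when: on_fail
    command: aws s3 sync ./logs s3://my-bucket/logs   # placeholder paths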

3.1 version not available via orb registry

Orb Version
3.1

Describe the bug
Are there plans to release 3.1 officially? The CircleCI-hosted version is 3.0 (https://circleci.com/developer/orbs/orb/circleci/aws-s3) and doesn't include the IAM role authentication ARN option.
If I try to use the newer release in my pipeline, it fails.

To Reproduce

Use the example code:

orbs:
  aws-s3: circleci/[email protected]

CircleCI gives an error:

"Cannot find circleci/[email protected] in the orb registry. Check that the namespace, orb name and version are correct."

Expected behavior

CircleCI will use the correct version of the orb, which supports role-arn as a parameter.


Bump to aws-cli 2.x

I'm using cimg/node:16.0.0 with an orb that transitively uses circleci/aws-s3 and hence circleci/aws-cli.

circleci/aws-cli 1.x doesn't work on cimg/node because of a python vs. python3 issue, but circleci/aws-cli 2.x works.

I can work around this by manually running aws-cli/setup first, before calling into aws-s3, but it would be nice to bump (see the sketch below).
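For reference, the workaround mentioned above looks roughly like this (orb versions are illustrative):

orbs:
  aws-cli: circleci/aws-cli@2.0   # 2.x installs AWS CLI v2, which works on cimg/node
  aws-s3: circleci/aws-s3@3.0

steps:
  - aws-cli/setup        # install the CLI up front, before the aws-s3 command runs
  - aws-s3/sync:
      from: build
      to: 's3://my-bucket'   # placeholder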
