oxyno-zeta / s3-proxy
267 stars · 5 watchers · 27 forks · 2.61 MB

S3 Reverse Proxy with GET, PUT and DELETE methods and authentication (OpenID Connect and Basic Auth)

Home Page: https://oxyno-zeta.github.io/s3-proxy/

License: Apache License 2.0

Dockerfile 0.05% Makefile 0.79% Go 98.06% Smarty 0.62% JavaScript 0.46% Shell 0.01% Open Policy Agent 0.01%
s3 openid-connect basic-authentication s3-proxy s3-bucket serve-static opa reverse-proxy

s3-proxy's Introduction



Features

  • Multi S3 bucket proxy
  • Index document (display index document instead of listing when found)
  • Custom templates
  • Custom S3 endpoints supported
  • Basic Authentication support
  • Multiple Basic Authentication support
  • OpenID Connect Authentication support
  • Multiple OpenID Connect Provider support
  • Redirect to original host and path with OpenID Connect authentication
  • Bucket mount point configuration with hostname and multiple path support
  • Authentication by path and HTTP method on each bucket
  • Prometheus metrics
  • Upload (publish) files to an S3 bucket
  • Delete files from an S3 bucket

And many others.

Documentation

Online documentation is generated for this project.

You can find it here: https://oxyno-zeta.github.io/s3-proxy/

Advanced interfaces

Looking for more advanced interfaces? Take a look at this project: s3-proxy-interfaces.

Want to contribute?

Inspired by

Thanks

  • My wife BH for supporting me in doing this

Author

  • Oxyno-zeta (Havrileck Alexandre)

License

Apache 2.0 (see LICENSE)

s3-proxy's People

Contributors

abeltay · dacut · iskandar · oxyno-zeta · redat00 · renovate-bot · renovate[bot]


s3-proxy's Issues

CORS - header in the response must not be the wildcard '*' when the request's credentials mode is 'include'

Error reported in Chrome DevTools:

Access to CSS stylesheet at 'https://..' from origin 'https://...' has been blocked by CORS policy:
 The value of the 'Access-Control-Allow-Origin' header in the response must not be the wildcard
  '*' when the request's credentials mode is 'include'.
16:11:31.942

So I think the CORS support needs to change: instead of sending *, the proxy should send the origin of the request in the Access-Control-Allow-Origin header.
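
For reference, a minimal sketch of what that could look like at the configuration level. The key names below are assumptions based on the go-chi/cors dependency listed in the Dependency Dashboard further down this page; check the project documentation for the exact schema:

server:
  cors:
    enabled: true
    # Assumption: list explicit origins instead of an "allow all" mode that
    # sends "*", so the proxy can echo the request origin back; this is
    # required when the request's credentials mode is "include"
    allowOrigins:
      - https://my-frontend.example.com
    allowCredentials: true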

Uploading large objects using multipart

Is your feature request related to a problem? Please describe.
I'm trying to upload large files through s3-proxy, but I get EntityTooLarge: Your proposed upload exceeds the maximum allowed size error.

Describe the solution you'd like
After some research, I found this AWS S3 doc which describes how to perform a multipart upload.
Maybe it works without changes, but I didn't manage to upload a file using curl:

curl -XPUT -H "Content-Type: multipart/form-data" --form file='@gentoo_root.img' -L https://s3-proxy.local/ -v
* using HTTP/2
* [HTTP/2] [1] OPENED stream for https://REDACTED
* [HTTP/2] [1] [:method: PUT]
* [HTTP/2] [1] [:scheme: https]
* [HTTP/2] [1] [:path: /]
* [HTTP/2] [1] [user-agent: curl/8.4.0]
* [HTTP/2] [1] [accept: */*]
* [HTTP/2] [1] [content-length: 6442451163]
* [HTTP/2] [1] [content-type: multipart/form-data; boundary=------------------------FL8SuDEZRyPHrR90fgYD9T]
> PUT / HTTP/2
> Host: s3-proxy.local
> User-Agent: curl/8.4.0
> Accept: */*
> Content-Length: 6442451163
> Content-Type: multipart/form-data; boundary=------------------------FL8SuDEZRyPHrR90fgYD9T
< HTTP/2 500
< date: Thu, 19 Oct 2023 13:39:06 GMT
< content-type: text/html; charset=utf-8
< content-length: 300
< cache-control: no-cache, no-store, no-transform, must-revalidate, private, max-age=0
< expires: Thu, 01 Jan 1970 00:00:00 UTC
< pragma: no-cache
< strict-transport-security: max-age=15724800; includeSubDomains

<!DOCTYPE html>
<html>
  <body>
    <h1>Internal Server Error</h1>
    <p>EntityTooLarge: Your proposed upload exceeds the maximum allowed size
	status code: 400, **********
  </body>

I don't know whether my usage is wrong or whether there is a missing feature.

Thanks in advance

Potential memory leak

After startup, it slowly starts using more memory; in a few hours it reaches over 1 GB.
Because it works just fine with much less memory usage at startup, I think it's caused by a memory leak.

To Reproduce
Here is the config file I'm using; my provider is IDrive e2.

# Log configuration
log:
  # Log level
  level: info
  # Log format
  format: text
  # Log file path
  # filePath:

#Server configurations
server:
  listenAddr: ""
  port: 8080

# Targets map
targets:
  first-bucket:
    ## Mount point
    mount:
      path:
        - /mybucket/
    # ## Actions
    actions:
      # Action for GET requests on target
      GET:
        # Will allow GET requests
        enabled: true
        # Configuration for GET requests
        config:
          # Redirect with trailing slash when a file isn't found
          redirectWithTrailingSlashForNotFoundFile: true
          # Index document to display if exists in folder
          indexDocument: index.html
          # Allow to add headers to streamed files (can be templated)
          streamedFileHeaders: {}
          # Redirect to a S3 signed URL
          redirectToSignedUrl: false
          # Signed URL expiration time
          signedUrlExpiration: 15m
          # Disable listing
          # Note: This will return an empty list or you should change the folder list template (in general or in this target)
          disableListing: true
          # Webhooks
          webhooks: []
      # Action for PUT requests on target
      PUT:
        # Will allow PUT requests
        enabled: false
        # Configuration for PUT requests
        config:
          # Metadata key/values that will be put on S3 objects.
          # Values can be templated. Empty values will be flushed.
          metadata:
            key: value
          # System Metadata cases.
          # Values can be templated. Empty values will be flushed.
          systemMetadata:
            # Cache-Control value (will be put as header after)
            cacheControl: ""
            # Content-Disposition value (will be put as header after)
            contentDisposition: ""
            # Content-Encoding value (will be put as header after)
            contentEncoding: ""
            # Content-Language value (will be put as header after)
            contentLanguage: ""
            # Expires value (will be put as header after)
            # Side note: This must have the RFC3339 date format at the end.
            expires: ""
          # Storage class that will be used for uploaded objects
          # See storage class here: https://docs.aws.amazon.com/AmazonS3/latest/dev/storage-class-intro.html
          # Values can be templated. Empty values will be flushed.
          storageClass: STANDARD # GLACIER, ...
          # Will allow override objects if enabled
          allowOverride: false
          # Canned ACL put on each file uploaded.
          # https://docs.aws.amazon.com/AmazonS3/latest/userguide/acl-overview.html#canned-acl
          # cannedACL: ""
          # Webhooks
          webhooks: []
      # Action for DELETE requests on target
      DELETE:
        # Will allow DELETE requests
        enabled: false
        # Configuration for DELETE requests
        config:
          # Webhooks
          webhooks: []
   
    ## Bucket configuration
    bucket:
      name: mybucket
      prefix:
      region: eu-west-1
      s3Endpoint: "idrive-e2.com"
      disableSSL: false
      credentials:
        accessKey:
          env: S3_ACCESS
        secretKey:
          env: S3_KEY

Expected behavior
less and stable ram usage

Version and platform (please complete the following information):

  • Platform: Linux
  • Arch: amd64
  • Version: 4.12 (should be latest)

Could not run docker container due to config file not found

Hello friend!

I'm trying to run your s3-proxy tool from Docker but I'm getting this error:

$ docker run -d --name s3-proxy -p 8080:8080 -p 9090:9090 -v $PWD/config:/config oxynozeta/s3-proxy
fe4fb5a7c43324170e8879fd9516fa860556dc166e8043bec3baef2e4bcee582

$ docker logs fe4fb5a7c43324170e8879fd9516fa860556dc166e8043bec3baef2e4bcee582
{"level":"fatal","msg":"Config File "config" Not Found in "[/conf /]"","time":"2020-05-22T11:40:54Z"}

However, I have a config directory in my $HOME dir with config.yaml inside:
$ ls -la $HOME/config/config.yaml
-rw-rw-r-- 1 db db 6291 May 22 14:17 /home/db/config/config.yaml

Any ideas what can be wrong here?

Thank you
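
A minimal docker-compose sketch of a mount that should be found. Assumptions: the image searches /proxy/conf, as in the docker run example from the "PUT and DELETE operations not working" report further down this page (the log above shows an older image searching /conf), and the directory contains a file named config.yaml:

services:
  s3-proxy:
    image: oxynozeta/s3-proxy
    ports:
      - "8080:8080"
      - "9090:9090"
    volumes:
      # Mount the directory at a path the binary actually searches; the
      # original mount at /config is not on the search path shown in the log
      - ./config:/proxy/conf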

Consider supporting Ranger authorization systems?

Is your feature request related to a problem? Please describe.
Apache Ranger is a popular solution for big-data permission verification, but it does not support AWS S3 permission verification at present.

Describe the solution you'd like
At present, my idea is that the proxy intercepts the S3 request, sends it to the ranger-S3-Plugin for authorization verification, and then forwards it to the AWS S3 server after verification.

This way, if my data is stored on S3, I can use the proxy for request interception and permission authentication as well.

Reference link: https://github.com/apache/ranger

Provide a way to disable folder listing

We are using your component in our application, and during a pentest it was highlighted that XSS vulnerabilities are present in the folder listing and 404 templates. We were able to modify the templates to avoid these, but as an additional step we would like to be able to disable folder listing as a global parameter. I saw it was present in your inspiration project pottava/aws-s3-proxy; maybe it would be good to re-integrate it.

Describe the solution you'd like
By providing a config parameter or environment variable, folder listing would be disabled and return 403

Describe alternatives you've considered
Today we have updated the folder-listing.tpl template to return HTTP status code 403

Additional context
NA
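
For reference, newer versions expose exactly this: the memory-leak report earlier on this page shows a disableListing option under the GET action config. Note the comment there: it returns an empty list rather than a 403 unless the folder list template is also changed. A minimal sketch:

targets:
  my-target:
    actions:
      GET:
        enabled: true
        config:
          # Disable listing (returns an empty list unless the folder list
          # template is also changed)
          disableListing: true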

Add support for filesystem as a target

Is your feature request related to a problem? Please describe.
I'd like to expose a filesystem directory as an S3 bucket.
MinIO Gateway is now deprecated, so I won't use it, and I'm looking for an alternative.

Describe the solution you'd like
I'd love a configurable target pointing to a directory so I can use this project as an S3 server.

Describe alternatives you've considered
https://github.com/gaul/s3proxy is an option. I'd love a Golang alternative to this nice Java tool.

Invalid redirect URI

Describe the bug
When the redirect URL is fully set inside Keycloak (for example http://localhost:8080/auth/provider1/callback), the redirect URI on the Keycloak login page is marked as invalid.

To Reproduce
Steps to reproduce the behavior:

  1. Configure your Keycloak with a full redirect URI without any star
  2. Go on a page, for example: http://localhost:8080/v2/test without being connected
  3. The redirect to Keycloak will be done
  4. See error

Expected behavior
Login should work without a star in Keycloak redirect URI.

Version and platform (please complete the following information):

  • Platform [e.g. Linux, Mac, Windows, ...]: Linux
  • Arch [e.g. amd64, ...]: amd64
  • Version [e.g. 22]: 3.0.2


Support Range header

Is your feature request related to a problem? Please describe.
I am using this S3 proxy as an MPEG-DASH server backed by S3 storage. Video files are not split into chunks, so the client fetches the same file with byte ranges. Sadly, this proxy does not handle this header, so the whole file is fetched every time a client requests a video chunk.

Describe the solution you'd like
For the GET action, the proxy could look for the Range header in the HTTP request and pass it to the AWS SDK when forwarding the request to the S3 storage.

[oidc] JWT auth failures respond with HTTP 500 instead of 401

Describe the bug

When JWT authorization validation fails, the HTTP response status is 500 instead of the standard 401.

To Reproduce

Steps to reproduce the behavior:

  1. configure OIDC auth for provider A
  2. get a JWT from provider B
  3. try to access a file using the JWT from provider B
  4. See 500 error response

Expected behavior

Whenever authorization fails, the response status should be set to 401, since it is not a server side error at all.

The same applies to any other concrete JWT validation error case, e.g.:

  • wrong signature
  • access token expired
  • wrong scope

Screenshots

Example for a wrong issuer:

curl -v -H "Authorization: Bearer $TOKEN" 'http://localhost:8080/v1/__REDACTED__'
*   Trying [::1]:8080...
* Connected to localhost (::1) port 8080
> GET /v1/__REDACTED__ HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/8.4.0
> Accept: */*
> Authorization: Bearer __REDACTED__
> 
< HTTP/1.1 500 Internal Server Error
< Cache-Control: no-cache, no-store, no-transform, must-revalidate, private, max-age=0
< Content-Type: text/html; charset=utf-8
< Expires: Thu, 01 Jan 1970 00:00:00 GMT
< Pragma: no-cache
< X-Accel-Expires: 0
< Date: Wed, 22 Nov 2023 16:40:16 GMT
< Content-Length: 225
< 
<!DOCTYPE html>
<html>
  <body>
    <h1>Internal Server Error</h1>
    <p>oidc: id token issued by a different provider, expected "__ISSUER_A__" got "__ISSUER_B__"</p>
  </body>
</html>
* Connection #0 to host localhost left intact

Version and platform (please complete the following information):

  • Docker
  • Version: 4.12


Question: how to better deal with requests to folder prefixes?

Is your feature request related to a problem? Please describe.
Currently we're using Docusaurus to deploy static sites to S3. Some URLs that Docusaurus generates do not contain trailing slashes (e.g. mysite.com/some_section). Though clicking through from a parent to the sub-section page works due to JS/HTML magic, a hard link will not redirect to mysite.com/some_section/, which correctly serves the indexDocument (in this case: index.html).

Describe the solution you'd like
An option to force the proxy to check whether a document is a folder and, if so, recurse into it and serve the indexDocument.

Describe alternatives you've considered
An alternative solution is to just use URL rewriting, but we're using AWS ALBs, which won't work in that scenario; and though other reverse proxies would remedy the situation, I am hesitant to put another layer between the ALB and s3-proxy.
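
For reference, the GET action config shown in the memory-leak report earlier on this page already has an option targeting this case; a minimal sketch:

targets:
  docs-site:
    actions:
      GET:
        enabled: true
        config:
          # Redirect mysite.com/some_section to mysite.com/some_section/
          # when the file isn't found, so the indexDocument can be served
          redirectWithTrailingSlashForNotFoundFile: true
          indexDocument: index.html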

Signed URL for PUT requests

Is your feature request related to a problem? Please describe.
At the moment signed URLs are limited to GET requests; can you tell me if there are plans to support them for PUT requests too?
The fact is that I upload files larger than 100 GB, sometimes 300 GB each, and this puts a very large load on the file system.

Describe the solution you'd like
Uploading of files without overhead on the application and file system.
Here and here are approaches showing how you can achieve this.

How to integrate with an SPA using a bearer token

We have a Single Page App written in Angular that is also an OIDC client. The user hits the target URL, is redirected to the SSO (in our case Keycloak), logs in, and is redirected back to the SPA. From there, the app's menu items pull in data via an Apache proxy running mod_auth_openidc: the app requests a token from KC and adds it in a header that mod_auth_openidc can parse, validate, and use to authorize the request based on the claims in the token.

Is it possible to configure s3-proxy to do something similar? That is, can it be called in a RESTful way and validate a token presented to it by the SPA? This would be done without any browser redirection, as the URL protected by s3-proxy is not meant to be accessed directly.

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.

Open

These updates have all been created already. Click a checkbox below to force a retry/rebase of any.

Ignored or Blocked

These are blocked by an existing closed PR and will not be recreated unless you click a checkbox below.

Detected dependencies

dockerfile
Dockerfile
  • alpine 3.19
Dockerfile.docs
  • squidfunk/mkdocs-material 9.5.15
github-actions
.github/workflows/ci.yml
  • actions/checkout v3
  • dorny/paths-filter v3
  • actions/checkout v3
  • actions/setup-go v5
  • golangci/golangci-lint-action v4
  • actions/checkout v3
  • actions/setup-go v5
  • docker/setup-qemu-action v2
  • actions/checkout v3
  • actions/setup-go v5
  • actions/checkout v3
  • actions/setup-go v5
  • mikepenz/action-junit-report v3
.github/workflows/docs.yml
  • actions/checkout v3
  • dorny/paths-filter v3
  • actions/checkout v3
  • actions/setup-python v5
  • actions/checkout v3
  • actions/setup-python v5
.github/workflows/labeler.yml
  • actions/labeler v5
.github/workflows/size.yml
  • pascalgn/size-label-action v0.5.0
.github/workflows/stale.yml
  • actions/stale v9
gomod
go.mod
  • go 1.22
  • go 1.22.0
  • emperror.dev/errors v0.8.1
  • github.com/Masterminds/sprig/v3 v3.2.3
  • github.com/coreos/go-oidc/v3 v3.10.0
  • github.com/dimiro1/health v0.0.0-20231118160444-e388c68d7d7e@e388c68d7d7e
  • github.com/dustin/go-humanize v1.0.1
  • github.com/fsnotify/fsnotify v1.7.0
  • github.com/go-chi/chi/v5 v5.0.12
  • github.com/go-chi/cors v1.2.1
  • github.com/go-chi/httptracer v0.3.0
  • github.com/go-playground/validator/v10 v10.19.0
  • github.com/go-resty/resty/v2 v2.12.0
  • github.com/gobwas/glob v0.2.3
  • github.com/johannesboyne/gofakes3 v0.0.0-20240217095638-c55a48f17be6@c55a48f17be6
  • github.com/opentracing/opentracing-go v1.2.0
  • github.com/prometheus/client_golang v1.19.0
  • github.com/sirupsen/logrus v1.9.3
  • github.com/spf13/cobra v1.8.0
  • github.com/spf13/viper v1.18.2
  • github.com/stretchr/testify v1.9.0
  • github.com/thoas/go-funk v0.9.3
  • github.com/uber/jaeger-client-go v2.30.0+incompatible
  • github.com/uber/jaeger-lib v2.4.1+incompatible
  • go.uber.org/mock v0.4.0
  • golang.org/x/net v0.22.0
  • golang.org/x/oauth2 v0.18.0
  • golang.org/x/sync v0.6.0
  • gopkg.in/yaml.v3 v3.0.1
  • github.com/imdario/mergo v0.3.16

  • Check this box to trigger a request for Renovate to run again on this repository

GitLab Token Refresh Failing

When using GitLab OIDC, it seems the authorization token doesn't get refreshed after the initial login. I can log in successfully and browse the files, but after a while I get an internal server error stating oidc: token is expired, with not much to go on in the proxy logs. I assume some call to GitLab is failing in the background. Any thoughts on how to debug this?

Here is my config yaml:

# Log configuration
    log:
      # Log level
      level: info
      # Log format
      format: text

    # Authentication Providers
    authProviders:
      oidc:
        gitlab:
          clientID: <redacted>
          clientSecret:
            <redacted>
          state: <redacted>
          issuerUrl: https://<redacted>.net:5443
          redirectUrl: https://<redacted>.io # /auth/oidc/callback will be added automatically
          scopes: # OIDC Scopes (defaults: openid, email, profile)
            - openid
            - email
            - profile
          # groupClaim: groups # path in token
          # cookieSecure: true # Is the cookie generated secure ?
          # cookieName: oidc # Cookie generated name
          emailVerified: false # check email verified field from token
          # loginPath: /auth/provider1 # Override login path dynamically generated from provider key
          callbackPath: /oauth2/callback

    # List targets feature
    # This will generate a webpage with list of targets with links using targetList template
    listTargets:
      # To enable the list targets feature
      enabled: false
      ## Mount point
      mount:
        path:
          - /
        # A specific host can be added for filtering. Otherwise, all hosts will be accepted
        # host: localhost:8080
      ## Resource configuration
      resource:
        # A Path must be declared for a resource filtering
        path: /
        # HTTP Methods authorized (Must be in GET, PUT or DELETE)
        methods:
          - GET
        # Whitelist
        whitelist: true
        # A authentication provider declared in section before, here is the key name
        provider: gitlab
        # OIDC section for access filter
        oidc:
          # NOTE: This list can be empty ([]) for authentication only and no group filter
          authorizationAccesses: [] # Authorization accesses : groups or email or regexp

    # Targets map
    targets:
      api-docs:
        ## Mount point
        mount:
          path:
            - /
        # ## Resources declaration
        # ## WARNING: Think about all path that you want to protect. At the end of the list, you should add a resource filter for /* otherwise, it will be public.
        resources:
          # A Path must be declared for a resource filtering (a wildcard can be added to match every sub path)
          - path: "/*"
            provider: gitlab
            # OIDC section for access filter
            oidc:
              # NOTE: This list can be empty ([]) for authentication only and no group filter
              authorizationAccesses: [] # Authorization accesses : groups or email or regexp
        # Actions
        actions:
          # Action for GET requests on target
          GET:
            # Will allow GET requests
            enabled: true
          PUT:
            # Will allow PUT requests
            enabled: false
            # Configuration for PUT requests
          DELETE:
            # Will allow DELETE requests
            enabled: false
        # Bucket configuration
        bucket:
          name: <redacted>
          prefix:
          region: <redacted>
          s3Endpoint:
          disableSSL: false
          # s3ListMaxKeys: 1000
          credentials:
            accessKey:
              <redacted>
            secretKey:
              <redacted>

Expected behavior
The token should be refreshed and the content should remain accessible.

Version and platform (please complete the following information):

  • Docker image tag 4.1.0 via Helm deployment

PUT and DELETE operations not working

I get 405 (Method Not Allowed) when trying PUT and DELETE file operations. The GET operation works as expected.

To Reproduce

Docker Run:

docker run -d --name s3-proxy -p 0.0.0.0:8080:8080 -p 0.0.0.0:9090:9090 -v $PWD/conf:/proxy/conf -v $PWD/password:/proxy/password oxynozeta/s3-proxy

conf.yaml

log:
  level: debug
  format: text

# Server configurations
server:
  listenAddr: ""
  port: 8080
  compress:
    enabled: false
    level: 5
    types:
      - text/html
      - text/css
      - text/plain
      - text/javascript
      - application/javascript
      - application/x-javascript
      - application/json
      - application/atom+xml
      - application/rss+xml
      - image/svg+xml

# Authentication Providers
authProviders:
  basic:
    provider2:
      realm: "My Basic Auth Realm"

# Targets map
targets:
  drops:
    ## Mount point
    mount:
      path:
        - /
    resources:
    #   # A Path must be declared for a resource filtering (a wildcard can be added to match every sub path)
      - path: /*
        methods:
          - PUT
          - GET
          - DELETE
        provider: provider2
        basic:
          credentials:
            - user: user1
              password:
                path: /proxy/password
    ## Bucket configuration

    bucket:
      name: example.com
      prefix:
      region: us-east-1
      s3Endpoint:
      disableSSL: false
      s3ListMaxKeys: 100

Error(Delete):

curl -vvv -X DELETE  -u user1:test http://example.com:8080/conf/conf.yaml
*   Trying 10.205.196.37...
* TCP_NODELAY set
* Connected to example.com (10.205.196.37) port 8080 (#0)
* Server auth using Basic with user 'user1'
> DELETE /conf/conf.yaml HTTP/1.1
> Host: example.com:8080
> Authorization: Basic dXNlcjE6dGVzdA==
> User-Agent: curl/7.64.1
> Accept: */*
>
< HTTP/1.1 405 Method Not Allowed
< Cache-Control: no-cache, no-store, no-transform, must-revalidate, private, max-age=0
< Expires: Thu, 01 Jan 1970 00:00:00 UTC
< Pragma: no-cache
< X-Accel-Expires: 0
< Date: Tue, 02 Aug 2022 09:41:10 GMT
< Content-Length: 0
<
* Connection #0 to host example.com left intact
* Closing connection 0

Error(PUT):

 curl -vvv  -X PUT -u user1:test  -F file=@/Users/natsaiso/conf/conf.yaml http://example.com:8080/conf/
*   Trying 10.205.196.37...
* TCP_NODELAY set
* Connected to example.com (10.205.196.37) port 8080 (#0)
* Server auth using Basic with user 'user1'
> PUT /conf/ HTTP/1.1
> Host: example.com:8080
> Authorization: Basic dXNlcjE6dGVzdA==
> User-Agent: curl/7.64.1
> Accept: */*
> Content-Length: 1567
> Content-Type: multipart/form-data; boundary=------------------------f2ad7fbb60de7cc7
> Expect: 100-continue
>
< HTTP/1.1 405 Method Not Allowed
< Cache-Control: no-cache, no-store, no-transform, must-revalidate, private, max-age=0
< Expires: Thu, 01 Jan 1970 00:00:00 UTC
< Pragma: no-cache
< X-Accel-Expires: 0
< Date: Tue, 02 Aug 2022 09:43:54 GMT
< Content-Length: 0
< Connection: close
<
* Closing connection 0

Version and platform (please complete the following information):

  • Platform Linux (docker)
  • Arch x86_64
  • Version v4.5.0 (git commit: 53d4768) built on 2022-03-30T19:42:02Z
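
A likely cause, sketched from the action configuration shown in other reports on this page (an assumption, not a confirmed diagnosis): resources only control authentication/authorization, while each HTTP method must also be enabled under actions, and PUT/DELETE appear to default to disabled:

targets:
  drops:
    mount:
      path:
        - /
    actions:
      GET:
        enabled: true
      PUT:
        enabled: true   # without this, PUT answers 405 Method Not Allowed
      DELETE:
        enabled: true   # without this, DELETE answers 405 Method Not Allowed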

s3 client example

Describe the bug
I have tried to get the MinIO client working with s3-proxy, but I always receive an error:

$ ./mc ls proxy/airflow/logs/

mc.exe: <ERROR> Unable to list folder. Get "http://127.0.0.1:8080/?delimiter=%2F&encoding-type=url&fetch-owner=true&list-type=2&prefix=logs%2F": net/http: invalid header field value "AWS4-HMAC-SHA256 Credential=xxx/20220309/\n  \n/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=32330c79527f6f7c1498f56ad01324a5f535d6935784d438eeb01eb8368f0bac" for key Authorization

Configuration

authProviders:
  basic:
    provider1:
      realm: MyBasic

targets:  
  target1:
    mount:
      path:
        - /airflow/
    bucket:
      name: airflow # Bucket Name on remote storage system
      prefix:
      region: eu-west-1
      s3Endpoint: https://somewhere:10447
      disableSSL: false

      credentials:
        accessKey:
          value: xxx   
        secretKey:
          value: xxx 
    resource:
      - path: /airflow/*
        provider: provider1
        basic:
          credentials:
            - user: xxx
              password:
                value: xxx

mc client config

"proxy": {
   "url": "http://127.0.0.1:8080",
    "accessKey": "xxx",
    "secretKey": "xxx",
    "api": "S3v4",
    "path": "auto"
},

Sub bucket

Is your feature request related to a problem? Please describe.
I would like to split a single bucket and expose the sub-buckets as real buckets to the users. The splitting will be done via the authN/username.

Describe the solution you'd like
Divide a single bucket among many users and make it so that each user feels like they got a unique bucket. Is this possible with s3-proxy?


Accept self-signed certificate for S3 endpoint

I'm connecting to the S3 backend, but the certificate being used is self-signed, so I get the error "x509: certificate signed by unknown authority".

Is there a possibility to accept a self-signed certificate for the S3 endpoint?
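
As a workaround sketch while waiting for a dedicated option (assumptions: the image is Alpine-based, per the Dockerfile entry in the dependency list further down this page, and Go reads the system certificate directory on Linux): mount the self-signed CA into the container's trust store:

services:
  s3-proxy:
    image: oxynozeta/s3-proxy
    volumes:
      # Go's TLS stack scans /etc/ssl/certs on Linux, so adding the CA
      # here lets the proxy trust the self-signed S3 endpoint
      - ./my-ca.crt:/etc/ssl/certs/my-ca.crt:ro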

Let the docker image build itself

Is your feature request related to a problem? Please describe.
I would like to build the Docker image without first needing to build the binary on the host.

Describe the solution you'd like
I would like to build the image without dependencies, like:

docker build -t s3-proxy .

Describe alternatives you've considered
I have made a multi-stage Dockerfile that builds the application and image in one step: see #327

Additional context

For example, it's really nice and dependency-free with docker-compose: just give it the URL/branch and it will build itself:

# docker-compose.yaml
services:
  s3-proxy:
    build:
      context: https://github.com/EnigmaCurry/s3-proxy.git#self-buildable

indexDocument not handled when bucket contains > 1k objects

Describe the bug
indexDocument doesn't work when the folder contains a large number of files sorted before index.html

To Reproduce
Steps to reproduce the behavior:

  1. create s3 bucket with hundreds of files - mostly numbered, but with one index.html
  2. create target config to that bucket etc.
  3. the page will display an index of the bucket, paginated up to about page 3-5, but not the actual index.html

Expected behavior
Proxy serves index.html


Version and platform (please complete the following information):

  • Platform Linux
  • Arch: x86_64
  • Version latest docker

Proxy pre-compressed data as-is

Hi,
Thanks for this project!
If you take a file which is gzipped and has Content-Encoding set to gzip in the object's metadata, it seems to be served uncompressed.
Is it possible to proxy those objects as-is?

Thanks!

OIDC scope configuration key name mismatch

Describe the bug
The documentation as well as the Helm chart differ from the source code in regard to the OIDC scope specification.

To Reproduce
Configure an OIDC provider:

authProviders:
  oidc:
    provider1:
      scopes:
        - openid
        - email
        - profile
        - groups

Browse to s3-proxy unauthenticated and you will be redirected to the OIDC provider, but it will only have requested the default scopes of [ openid, email, profile ]; it will not have added groups.

Expected behavior
I would expect all of the scopes I requested to be added to the auth request to the OIDC provider.

Version and platform (please complete the following information):
Running on k8s 1.19 on IBM Cloud, with the latest helm chart and s3-proxy image.

Additional context
It seems to just be a mismatch between the documentation and Helm chart on one side and the code on the other. The docs and Helm chart say to use scopes, but the code itself uses scope. Not sure which direction you prefer to change it.
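
For anyone hitting this on an affected version, a sketch of the workaround implied by the report (the singular key name is taken from the report itself; whether it accepts the same list shape as scopes is an assumption, so verify against your version):

authProviders:
  oidc:
    provider1:
      # Workaround for affected versions: the code reportedly reads "scope"
      scope:
        - openid
        - email
        - profile
        - groups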

Dynamic bucket name configuration. Acquire bucket name from URL-path

Is your feature request related to a problem? Please describe.
I don't want to reconfigure s3-proxy every time someone creates a new bucket in our cloud storage. I don't always know beforehand what our buckets are named, and the people creating the buckets don't have access to reconfigure s3-proxy.

The endpoint, region, credentials, etc, are the same for all buckets.

Describe the solution you'd like
Instead of configuring each bucket in the s3-proxy configuration file in advance, the bucket name would be acquired from the URL path.

Example

# HTTPS call:
GET https://mys3proxy.example.com/bucket1/folder/file.bin

# Result:
bucket: bucket1
key: /folder/file.bin

# HTTPS call:
GET https://mys3proxy.example.com/bucket2/foo/bar.bin

# Result:
bucket: bucket2
key: /foo/bar.bin
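
For contrast, a sketch of what this requires today with static configuration (one target per bucket, using the target/mount/bucket keys shown elsewhere on this page):

targets:
  bucket1:
    mount:
      path:
        - /bucket1/
    bucket:
      name: bucket1
      region: eu-west-1
  bucket2:
    mount:
      path:
        - /bucket2/
    bucket:
      name: bucket2
      region: eu-west-1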

Handling method for storing audit logs somewhere

(question)
I'm happy to have found this project, thank you.

In my environment, I have to retain some users' (admin) activity on the admin page to track abnormal usage.
So I want to save these specific logs somewhere, such as a DB or STDOUT (formatted).

Can I use the log feature for this?

Issues with login redirect

  • Platform: Linux
  • Arch: amd64
  • Version: 4.1.0

Hey there,
First off, this is a great tool, thanks for building it! I'm using it with Google OAuth as a proxy for S3 artifacts for Buildkite builds. I'm having an issue where the OAuth redirect gets stuck after a successful login. It works properly when I hit the s3-proxy URL directly, but if I try to use the links created by Buildkite it gets stuck. If I close the tab and refresh the page I can download the artifacts, though, so the authentication has actually worked.

I've put the logs into debug mode, and I think it's because it's trying to use HTTP whereas we redirect all traffic to HTTPS on our load balancers. It sounds similar to this issue.

This is the requested URL which throws a 307:

https://<my_domain>/auth/google/callback?state=<state>:http://<my_domain>/b1258790-5738-4f46-aa8c-44b04c99df8f/<artifact_name>&code=<code>&scope=email%20profile%20https://www.googleapis.com/auth/userinfo.profile%20openid%20https://www.googleapis.com/auth/userinfo.email&authuser=0&hd=<my_domain>&prompt=none

This is a log from one of the pods:

level=info msg="request complete" client_ip=10.0.20.71 http_method=GET http_proto=HTTP/1.1 http_scheme=http remote_addr=10.0.20.71 req_id=s3-proxy-6bdb5545fb-k5d9n/BYqOqpjgJ4-002249 resp_bytes_length=13756449 resp_elapsed_ms=1332.687785 resp_status=200 uri="http://<my_domain>/b1258790-5738-4f46-aa8c-44b04c99df8f/<artifact_name>" user_agent="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.61 Safari/537.36"

As you can see, part of the redirect uses http://, and the http_scheme in the pod's logs is http too. I'm not 100% sure if this is the actual issue or if it's something I need to raise with Buildkite.

My config:

    log:
      level: error
      format: text

    authProviders:
      oidc:
        google:
          clientID: **************
          clientSecret:
            value: **************
          state: **************
          issuerUrl: https://accounts.google.com
          redirectUrl: https://**************
          emailVerified: true

    targets:
      buildkite-artifacts:
        mount:
          path:
            - /
        resources:
        - path: /*
          methods:
            - GET
          provider: google
          oidc:
            authorizationAccesses:
              - email: "^([^@]+)@<my_domain>\\.<tld>$"
                regexp: true

        bucket:
          name: **************
          region: ap-southeast-2

Document and sanitize input for secret files

Describe the bug
Secret file (for CredentialSecret) input is not sanitized, and there isn't any documentation stating the format of secret files.

To Reproduce
Steps to reproduce the behavior:
Use secret file (credentials.path) in bucket config, e.g.

      bucket:
          name: something
          credentials:
            accessKey:
              path: /proxy/secret-files/metrics-bucket-key
            secretKey:
              path: /proxy/secret-files/metrics-bucket-secret

Expected behavior
I would like to see documentation stating that these files must contain nothing but the secret. I would also expect the input to be sanitized, at least with a trailing \n removed.
Ideally, it would be preferable to put both key and secret in a single file, structured as YAML.

Version and platform (please complete the following information):

  • Platform: linux
  • Arch: amd64
  • Version: v4.4.0

Additional context
Figuring out how this was supposed to be used (helm -> k8s secrets as files, with literally nothing but the secret/key in them) took too long and ended with me reading through code to figure out what usage is supposed to look like.
I'd actually prefer to use env vars from a secret, but secrets are only exposed as Helm values rather than as native k8s secret config.
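
A sketch of one way to produce such files without a trailing newline, assuming Kubernetes Secrets mounted as files (stringData is standard Kubernetes; the file names match the bucket config above, and the values are hypothetical):

apiVersion: v1
kind: Secret
metadata:
  name: metrics-bucket-credentials
# stringData values are stored exactly as written: no trailing newline is
# added, unlike base64-encoding the output of `echo` into the data field
stringData:
  metrics-bucket-key: AKIA_EXAMPLE_KEY        # hypothetical value
  metrics-bucket-secret: example-secret-value # hypothetical value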

Allow application/octet-stream

Is your feature request related to a problem? Please describe.
I have configured s3-proxy for use from GitLab Runner, which uploads with Content-Type: application/octet-stream

Describe the solution you'd like
Add support for PUT with Content-Type: application/octet-stream

Describe alternatives you've considered

GitLab does not support it yet: https://gitlab.com/gitlab-org/gitlab-runner/-/issues/26921

Additional context

Dump of the request:

PUT /gitlab-runner/cache/project/1/job-non_protected?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=1%2F20221118%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20221118T074923Z&X-Amz-Expires=3600&X-Amz-SignedHeaders=host&X-Amz-Signature=b6548e02e85c4f93463f0555238f04b85d9bf97db456cbc90edb825bc4b92520 HTTP/1.1
Host: s3-proxy.host:8080
User-Agent: Go-http-client/1.1
Content-Length: 237
Content-Type: application/octet-stream
Last-Modified: Fri, 18 Nov 2022 07:49:23 GMT

PK........+>rU..............!.cached_fileux.............UT....9wcUT....9wc*..MU..-./*I.+QHI,I.241312.4.......PK..$.!L#.......PK..........+>rU$.!L#.........!...............cached_fileux.............UT....9wcUT....9wcPK..........Z...}.....HTTP/1.1 500 Internal Server Error
Cache-Control: no-cache, no-store, no-transform, must-revalidate, private, max-age=0
Content-Type: text/html; charset=utf-8
Expires: Thu, 01 Jan 1970 00:00:00 UTC
Pragma: no-cache
X-Accel-Expires: 0
Date: Fri, 18 Nov 2022 07:49:23 GMT
Content-Length: 142

<!DOCTYPE html>
<html>
  <body>
    <h1>Internal Server Error</h1>
    <p>request Content-Type isn't multipart/form-data</p>
  </body>
</html>

Proxy config is

# Log configuration
log:
  # Log level
  level: debug
  # Log format
  format: text
  # Log file path
  # filePath:
targets:
  first-bucket:
    ## Mount point
    mount:
      path:
        - /gitlab-runner/
    actions:
      PUT:
        enabled: true
        allowOverride: true
    ## Bucket configuration
    bucket:
      name: gitlab-runner
      s3Endpoint: test.host:8080
      disableSSL: true
      credentials:
        accessKey:
          value: ***
        secretKey:
          value: ***

Status code routing

Hi, is it possible to add features for response handling?

I tried your proxy to provide a static Angular website to an internal VPC only. We need to route requests with a 404 or 403 code back to index.html with code 200. Or is this already possible and I'm just not seeing the right config part for it?

Thanks in advance.

How to operate s3-proxy behind a reverse http proxy?

I'm using an nginx reverse proxy to route HTTPS into s3-proxy (localhost:8080), with the SSL certs served by nginx as well. However, the plain-HTTP port is still open, so using it explicitly renders an insecure connection... I thought using the internalServer port was the solution, but I may have misunderstood?
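
One way to close the plain-HTTP port to everything but nginx, assuming both run on the same host, is to bind the listener to loopback (listenAddr appears in the server config of other reports on this page); the internal server on port 9090 serves metrics/health endpoints, not proxied content:

server:
  # Bind only to loopback so port 8080 is not reachable from outside;
  # nginx on the same host terminates TLS and proxies to 127.0.0.1:8080
  listenAddr: "127.0.0.1"
  port: 8080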

panic: label value "/\xff.js" is not valid UTF-8

Describe the bug
The server panics when trying to add a Prometheus metric label that is not valid UTF-8, for example when the inbound request URL contains invalid characters.

Expected behavior
The server does not panic.

Version and platform (please complete the following information):

  • Platform: Docker
  • Version: 3.1.1

Additional context
Here is the log from the panic:

 panic: label value "/\xff.js" is not valid UTF-8
 
 -> github.com/prometheus/client_golang/prometheus.(*SummaryVec).WithLabelValues
 ->   /home/oxynozeta/dev/golang/pkg/mod/github.com/prometheus/[email protected]/prometheus/summary.go:603

    github.com/oxyno-zeta/s3-proxy/pkg/s3-proxy/metrics.(*prometheusClient).Instrument.func1.1
      /home/oxynozeta/dev/golang/src/github.com/oxyno-zeta/s3-proxy/pkg/s3-proxy/metrics/prometheus.go:44
    net/http.HandlerFunc.ServeHTTP
      /usr/local/go/src/net/http/server.go:2042
    github.com/go-chi/chi/middleware.RequestLogger.func1.1
      /home/oxynozeta/dev/golang/pkg/mod/github.com/go-chi/[email protected]+incompatible/middleware/logger.go:46
    net/http.HandlerFunc.ServeHTTP
      /usr/local/go/src/net/http/server.go:2042
    github.com/oxyno-zeta/s3-proxy/pkg/s3-proxy/server/middlewares.ImproveTracing.func1.1
      /home/oxynozeta/dev/golang/src/github.com/oxyno-zeta/s3-proxy/pkg/s3-proxy/server/middlewares/improve-tracing.go:28
    net/http.HandlerFunc.ServeHTTP
      /usr/local/go/src/net/http/server.go:2042
    github.com/go-chi/httptracer.Tracer.func1.1
      /home/oxynozeta/dev/golang/pkg/mod/github.com/go-chi/[email protected]/httptracer.go:80
    net/http.HandlerFunc.ServeHTTP
      /usr/local/go/src/net/http/server.go:2042
    github.com/go-chi/chi/middleware.Recoverer.func1
      /home/oxynozeta/dev/golang/pkg/mod/github.com/go-chi/[email protected]+incompatible/middleware/recoverer.go:37
    net/http.HandlerFunc.ServeHTTP
      /usr/local/go/src/net/http/server.go:2042
    github.com/go-chi/chi/middleware.RealIP.func1
      /home/oxynozeta/dev/golang/pkg/mod/github.com/go-chi/[email protected]+incompatible/middleware/realip.go:34
    net/http.HandlerFunc.ServeHTTP
      /usr/local/go/src/net/http/server.go:2042
    github.com/go-chi/chi/middleware.RequestID.func1
      /home/oxynozeta/dev/golang/pkg/mod/github.com/go-chi/[email protected]+incompatible/middleware/request_id.go:76
    net/http.HandlerFunc.ServeHTTP
      /usr/local/go/src/net/http/server.go:2042
    github.com/go-chi/chi/middleware.NoCache.func1
      /home/oxynozeta/dev/golang/pkg/mod/github.com/go-chi/[email protected]+incompatible/middleware/nocache.go:54
    net/http.HandlerFunc.ServeHTTP
      /usr/local/go/src/net/http/server.go:2042
    github.com/go-chi/chi/middleware.(*Compressor).Handler.func1
      /home/oxynozeta/dev/golang/pkg/mod/github.com/go-chi/[email protected]+incompatible/middleware/compress.go:213
    net/http.HandlerFunc.ServeHTTP
      /usr/local/go/src/net/http/server.go:2042
    github.com/go-chi/chi.(*Mux).ServeHTTP
      /home/oxynozeta/dev/golang/pkg/mod/github.com/go-chi/[email protected]+incompatible/mux.go:86
    net/http.serverHandler.ServeHTTP
      /usr/local/go/src/net/http/server.go:2843
    net/http.(*conn).serve
      /usr/local/go/src/net/http/server.go:1925
    created by net/http.(*Server).Serve
      /usr/local/go/src/net/http/server.go:2969

Support PASETO authentication

It would be nice to have a "scalable" authentication method which does not require additional services.
Basic authentication does not require an additional service, but it does require specifying each user separately.

The most well-known tokens are probably JSON Web Tokens (JWT), but there are other options as well.

Describe the solution you'd like
I would like to propose adding support for another authentication method. PASETO is a specification for secure stateless tokens (similar to JWT, but "simpler" to use).
With this (or JWT), one could create "secure" tokens on another service (which can have an expiry time and contain arbitrary claims) that can be parsed and verified by s3-proxy. Both services would just need the same shared key configured, or a private key on the other service and the public key on s3-proxy.

Describe alternatives you've considered
I have tried the current authentication options, but as I mentioned they either require managing an external auth service or a "fixed" configuration (for Basic auth).

Additional context
JSON Web Tokens (JWT) may be the better option to implement, since people may be better acquainted with them; however, PASETO seemed easier to implement.

I am not writing this issue so much to request this feature as to ask whether this is something that would be accepted if I were to create a PR for it.

Regarding that, I already have pretty much everything implemented. The only things left are to find where to add some additional tests and to update the documentation, and then I could create a PR.

[Helm] Auto-reload when ConfigMap changed

Is your feature request related to a problem? Please describe.
Currently if the s3-proxy ConfigMap is changed after deploy, the pod does not pick up changes until it is restarted.

Describe the solution you'd like
Either internally poll for/detect changes and reload the config, or use the sidecar pattern
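
A sketch of the common Helm-side workaround (a standard Helm pattern, not necessarily exposed by this chart today): annotate the pod template with a checksum of the ConfigMap so that any config change triggers a rolling restart:

# deployment.yaml (Helm template)
spec:
  template:
    metadata:
      annotations:
        # Changing the ConfigMap changes this hash, which rolls the pods
        checksum/config: '{{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}'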

Action Required: Fix Renovate Configuration

There is an error with this repository's Renovate configuration that needs to be fixed. As a precaution, Renovate will stop PRs until it is resolved.

Error type: undefined. Note: this is a nested preset so please contact the preset author if you are unable to fix it yourself.

OIDC: "no resource declared" message when a resource is declared

Let me start by saying this looks like a really cool project and I can benefit from it greatly. However, I must say I've struggled to get the most simplistic implementation of this working. The documentation is a little terse, and the "Example" page is just the main configuration file with all the options commented out. It'd be more helpful if there were more working examples with those options (I've introduced errors trying to un-comment lines and messing up the YAML).

That being said, I can get OIDC working with the "Target List" just fine. However, navigating to paths in that list seems to ignore the authorization piece. I keep getting this message in the debug logs:

level=debug msg="no resource found in authorization, means that authentication was skipped => skip authorization too"

However, I do have a resource declared in the Target Map.

# Targets map
targets:
  first-bucket:
     mount:
        path:
          - /usr/local/buckets/
        resources:
         - path: /
           methods:
             - GET
           provider: provider1
           oidc:
             authorizationAccesses: # Authorization accesses : groups or email or regexp
               - group: "Roles - My Group"
        actions:
         GET:
           enabled: true
           config:
             redirectWithTrailingSlashForNotFoundFile: true

I apologize for submitting a bug report here, but I don't see another way to communicate with the maintainers here in a discussion. I'm at a loss as to why this isn't working.

BONUS question: can someone enlighten me on what a "Mount" and "Path" are in this and the Target List context? They seem to be required, but if the mounts in the Target List and Target Map are the same, you get an error. I don't see the container creating and mounting any files at the mount point, so I'm not sure what their purpose is, except to change the URL path when accessing the server in the browser.

Use security-credentials endpoint for authentication?

Is it possible to use the ephemeral/short-lived credentials provided by the AWS IAM role security credentials endpoint as a form of authentication for s3-proxy?

It would likely require tracking the expiration of the token and requesting a new one when it expires.

Ref: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html

If it is not possible currently, is it a feature you would consider developing and including in this project?

OIDC not working: ERR_TOO_MANY_REDIRECTS

Describe the bug
I am trying to integrate OIDC into s3-proxy, and what happens after that is: the app redirects me to the authorization server; after authenticating with the authorization server, it redirects me to the redirect URL, and that's the point where the process stops, with this URL in the Chrome URL box (myapp.com/callback?code=fnkfnwj...) and the error ERR_TOO_MANY_REDIRECTS on screen.
To Reproduce
Steps to reproduce the behavior:
1.) Configure an app in the authorization server (I am using FusionAuth here). From there you will get the client ID, issuer URL, and client secret. Also set the redirect URL in the authorization app to the URL where you want the authorization app to redirect you after authentication is done. In my case, I set this URL as the redirect URL in the auth app: https://reports.app.mydomain.com/api/.

2.) Now configure s3-proxy to use OIDC authorization, putting the values you got in step 1 from your auth app into the s3-proxy OIDC config. It looks something like this:

authProviders:
  oidc:
    provider1:
      clientID: fsdfdnfwjfnwjfwkfwkfkfwrfwkfnwkfnwrfnwrj
      clientSecret:
        env: CLIENT_SECRET
      state: mqeklfnrjfnejfnjw
      issuerUrl: https://auth.demo.app.mydomain.com/
      redirectUrl: https://reports.app.mydomain.com/api/ # /auth/oidc/callback will be added automatically
      scopes: # OIDC Scopes (defaults: oidc, email, profile)
        - oidc
        - email
        - profile
      groupClaim: groups # path in token
      cookieSecure: true # Is the cookie generated secure ?
      cookieName: oidc # Cookie generated name
      emailVerified: true # check email verified field from token
      loginPath: / # Override login path dynamically generated from provider key
      callbackPath: /callback # Override callback path dynamically generated from provider key

and the target block looks like

targets:
  - name: api
    mount:
      path:
        - /api/
      # A specific host can be added for filtering. Otherwise, all hosts will be accepted
      # host: reports.app.mydomain.com
    resources:
      - path: /api/*
        # HTTP Methods authorized (Must be in GET, PUT or DELETE)
        methods:
          - GET
          - PUT
          - DELETE
        # A authentication provider declared in section before, here is the key name
        provider: provider1
        # OIDC section for access filter
        oidc:
          # NOTE: This list can be empty ([]) for authentication only and no group filter
          authorizationAccesses: # Authorization accesses : groups or email or regexp
            - email: "[email protected]"
Also my bucket config:

bucket:
  name: api-tests.reports.app.mydomain.com
  prefix:
  region: us-east-1
  s3Endpoint:
  disableSSL: false
  credentials:
    accessKey:
      env: AWS_ACCESS_KEY
    secretKey:
      env: AWS_SECRET_KEY

Expected behavior
Now when I go to https://reports.app.mydomain.com/api/, the client (s3-proxy) should redirect me to the auth server; the auth server, after authenticating me, should redirect me to https://reports.app.mydomain.com/api/callback with the code and all, and the app should let me in to view whatever is behind it.

But everything goes as expected until the very last step. The app does catch me back after I have authenticated with the auth server, but I am shown an error instead. See the following screenshot.
Screenshots
https://imgur.com/52my0wP

Support for a private issuerUrl and public issuerUrl

Skipped the bug report template as this is more of a question than a bug report; let me know if you'd prefer I go through those steps!

I'm running v3.0.3 (git commit: 81b70c4) via the latest version of your helm chart.

I'm trying to use s3-proxy with an in-cluster OIDC provider (dex). The flow I'm imagining is that end users are directed to some "public"-facing hostname for dex, while in-cluster traffic (the token exchange) goes to a different hostname. With one issuerUrl, I either have to send the private token exchange over the public internet, or I have to have some strange setup for end users to be able to port-forward or otherwise access the private interface for dex.

Ideally I think it would be something like:

authProviders:
  oidc:
    provider1:
      publicIssuerUrl: dex.mydomain.com
      privateIssuerUrl: dex.dex.svc.cluster.local

I'm pretty new to setting this type of thing up, so if I'm missing something simple please let me know. Thank you for the project!

Question about config file parameters

Sorry to misuse this feature request to ask a question. I'm trying to understand the target and bucket configuration items. Everything I have tried results in "Not Found" errors. I've tried a large number of configs, and the current one looks like this (it's an edit of the sample with everything except the bare minimum taken out, to see if I can solve this problem):

$ cat /home/ubuntu/s3-proxy/conf/NOPE/config-example.yaml
# Log configuration
log:
  # Log level
  level: info
  # Log format
  format: text
  # Log file path
  # filePath:


# Targets map
targets:
  first-bucket:
    ## Mount point
    mount:
      path:
        - /
    ## Bucket configuration
    bucket:
      name: my-bucket
      prefix:
      region: us-east-2
      s3Endpoint: https://<REDACTED>.<REDACTED>.com
      disableSSL: false
      # s3ListMaxKeys: 1000
      credentials:
        accessKey:
          env: AWS_ACCESS_KEY_ID
        secretKey:
          env: AWS_SECRET_ACCESS_KEY

I’m starting the system like this:

$ sudo docker run -d --name s3-proxy-1 -e AWS_SECRET_ACCESS_KEY='<REDACTED>' -e AWS_ACCESS_KEY_ID='<REDACTED>' -p 8080:8080 -p 9090:9090 -v /home/ubuntu/s3-proxy/conf/NOPE:/proxy/conf oxynozeta/s3-proxy

The requests I’m making look like this from another host in the same network:

$ aws --endpoint-url http://10.0.0.224:8080 --profile <REDACTED> s3 ls my-bucket
An error occurred () when calling the ListObjectsV2 operation:<ENDS HERE>

And I get these errors in the logs:

2022-08-04T14:05:31.636209032Z time="2022-08-04T14:05:31Z" level=info msg="Starting s3-proxy version: v4.5.0 (git commit: 53d4768) built on 2022-03-30T19:42:02Z"
2022-08-04T14:05:31.636480759Z time="2022-08-04T14:05:31Z" level=info msg="Load S3 clients for all targets"
2022-08-04T14:05:31.639789233Z time="2022-08-04T14:05:31Z" level=info msg="Internal server listening on :9090"
2022-08-04T14:05:31.658982636Z time="2022-08-04T14:05:31Z" level=info msg="Server listening on :8080"
2022-08-04T14:05:43.716161002Z time="2022-08-04T14:05:43Z" level=info msg="no resource declared => skip authentication" client_ip="10.0.0.87:58670" http_method=GET http_proto=HTTP/1.1 http_scheme=http remote_addr="10.0.0.87:58670" req_id=83592f43e87d/rAqjygfsOK-000001 uri="http://10.0.0.224:8080/my-bucket?list-type=2&prefix=&delimiter=%2F&encoding-type=url" user_agent="aws-cli/2.7.15 Python/3.9.11 Linux/5.15.0-1015-aws exe/x86_64.ubuntu.22 prompt/off command/s3.ls"
2022-08-04T14:05:43.838108571Z time="2022-08-04T14:05:43Z" level=error msg="Not Found" client_ip="10.0.0.87:58670" error="Not Found" http_method=GET http_proto=HTTP/1.1 http_scheme=http remote_addr="10.0.0.87:58670" req_id=83592f43e87d/rAqjygfsOK-000001 stack="github.com/oxyno-zeta/s3-proxy/pkg/s3-proxy/response-handler.(*handler).NotFoundError,github.com/oxyno-zeta/s3-proxy/pkg/s3-proxy/response-handler/error-handlers.go:161,github.com/oxyno-zeta/s3-proxy/pkg/s3-proxy/bucket.(*requestContext).Get,github.com/oxyno-zeta/s3-proxy/pkg/s3-proxy/bucket/requestContext.go:165,github.com/oxyno-zeta/s3-proxy/pkg/s3-proxy/server.(*Server).generateRouter.func4.1.1,github.com/oxyno-zeta/s3-proxy/pkg/s3-proxy/server/server.go:356,net/http.HandlerFunc.ServeHTTP,net/http/server.go:2084,github.com/go-chi/chi/v5.(*Mux).routeHTTP,github.com/go-chi/chi/[email protected]/mux.go:442,net/http.HandlerFunc.ServeHTTP,net/http/server.go:2084,github.com/oxyno-zeta/s3-proxy/pkg/s3-proxy/authx/authorization.Middleware.func1.1,github.com/oxyno-zeta/s3-proxy/pkg/s3-proxy/authx/authorization/main.go:38,net/http.HandlerFunc.ServeHTTP,net/http/server.go:2084,github.com/oxyno-zeta/s3-proxy/pkg/s3-proxy/authx/authentication.(*service).Middleware.func1.1,github.com/oxyno-zeta/s3-proxy/pkg/s3-proxy/authx/authentication/main.go:44,net/http.HandlerFunc.ServeHTTP,net/http/server.go:2084,github.com/oxyno-zeta/s3-proxy/pkg/s3-proxy/bucket.HTTPMiddleware.func1.1,github.com/oxyno-zeta/s3-proxy/pkg/s3-proxy/bucket/http-middleware.go:37,net/http.HandlerFunc.ServeHTTP,net/http/server.go:2084,github.com/oxyno-zeta/s3-proxy/pkg/s3-proxy/response-handler.HTTPMiddleware.func1.1,github.com/oxyno-zeta/s3-proxy/pkg/s3-proxy/response-handler/http-middleware.go:25,net/http.HandlerFunc.ServeHTTP,net/http/server.go:2084,github.com/go-chi/chi/v5.(*Mux).ServeHTTP,github.com/go-chi/chi/[email protected]/mux.go:71,github.com/go-chi/chi/v5.(*Mux).Mount.func1,github.com/go-chi/chi/[email protected]/mux.go:314,net/http.HandlerFunc.ServeHTTP,net/http/server.go:2084,github.com/go-chi/chi/v5.(*Mux).routeHTTP,github.com/go-chi/chi/[email protected]/mux.go:442,net/http.HandlerFunc.ServeHTTP,net/http/server.go:2084,github.com/go-chi/chi/v5.(*Mux).ServeHTTP,github.com/go-chi/chi/[email protected]/mux.go:71,github.com/oxyno-zeta/s3-proxy/pkg/s3-proxy/server.HostRouter.ServeHTTP,github.com/oxyno-zeta/s3-proxy/pkg/s3-proxy/server/hostrouter.go:65,github.com/go-chi/chi/v5.(*Mux).Mount.func1,github.com/go-chi/chi/[email protected]/mux.go:314,net/http.HandlerFunc.ServeHTTP,net/http/server.go:2084,github.com/go-chi/chi/v5.(*Mux).routeHTTP,github.com/go-chi/chi/[email protected]/mux.go:442,net/http.HandlerFunc.ServeHTTP,net/http/server.go:2084,github.com/go-chi/chi/v5/middleware.Recoverer.func1,github.com/go-chi/chi/[email protected]/middleware/recoverer.go:38,net/http.HandlerFunc.ServeHTTP,net/http/server.go:2084,github.com/oxyno-zeta/s3-proxy/pkg/s3-proxy/metrics.(*prometheusClient).Instrument.func1.1,github.com/oxyno-zeta/s3-proxy/pkg/s3-proxy/metrics/prometheus.go:36,net/http.HandlerFunc.ServeHTTP,net/http/server.go:2084,github.com/oxyno-zeta/s3-proxy/pkg/s3-proxy/log.HTTPAddLoggerToContextMiddleware.func1.1,github.com/oxyno-zeta/s3-proxy/pkg/s3-proxy/log/http-middleware.go:26,net/http.HandlerFunc.ServeHTTP,net/http/server.go:2084,github.com/go-chi/chi/v5/middleware.RequestLogger.func1.1,github.com/go-chi/chi/[email protected]/middleware/logger.go:57" uri="http://10.0.0.224:8080/my-bucket?list-type=2&prefix=&delimiter=%2F&encoding-type=url" user_agent="aws-cli/2.7.15 Python/3.9.11 Linux/5.15.0-1015-aws exe/x86_64.ubuntu.22 prompt/off command/s3.ls"
2022-08-04T14:05:43.844387262Z time="2022-08-04T14:05:43Z" level=error msg="request complete" client_ip="10.0.0.87:58670" http_method=GET http_proto=HTTP/1.1 http_scheme=http remote_addr="10.0.0.87:58670" req_id=83592f43e87d/rAqjygfsOK-000001 resp_bytes_length=83 resp_elapsed_ms=129.027299 resp_status=404 uri="http://10.0.0.224:8080/my-bucket?list-type=2&prefix=&delimiter=%2F&encoding-type=url" user_agent="aws-cli/2.7.15 Python/3.9.11 Linux/5.15.0-1015-aws exe/x86_64.ubuntu.22 prompt/off command/s3.ls"

I can run the ls directly against the bucket on the host where I'm running the proxy, using the credentials from the config, so I know the bucket is reachable from that place with those credentials and that name (i.e. AWS_ACCESS_KEY_ID='<REDACTED>' AWS_SECRET_ACCESS_KEY=<REDACTED> aws s3 ls my-bucket --endpoint-url 'https://<REDACTED>.<REDACTED>.com/' works). I understand this is more than likely a config error on my part, most likely my not understanding the relationship between the target config and the bucket config. I've tried every combination of the values I have without making any progress. I have also tried configuring "normal" S3 buckets (i.e. hosted natively by the AWS S3 service rather than on an alternate endpoint), and those fail in exactly the same way; the only difference is that I comment out s3Endpoint in the config.

Thank you in advance for any insight you can offer, and apologies for the remedial nature of the question.
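
For reference, here is the general shape of the target configuration I am experimenting with (all values are redacted placeholders; the key names follow the documented examples):

targets:
  my-target:
    mount:
      path:
        - /my-bucket/
    bucket:
      name: my-bucket # the real S3 bucket name, not the mount path
      region: <REDACTED>
      # alternate endpoint; commented out when trying buckets hosted natively on AWS S3
      s3Endpoint: https://<REDACTED>.<REDACTED>.com/
      credentials:
        accessKey:
          value: <REDACTED>
        secretKey:
          value: <REDACTED>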

Assume role failing via WebIdentity

Describe the bug
Error when trying to access the STS endpoint (sts.us-east-1.amazonaws.com): it is using the http endpoint instead of https.

Error:
Internal Server Error
WebIdentityErr: failed to retrieve credentials caused by: RequestError: send request failed caused by: Post "http://sts.us-east-1.amazonaws.com/": read tcp 100.64.15.129:48714->67.220.245.46:80: read: connection reset by peer

Version and platform (please complete the following information):

  • Platform Linux
  • Arch amd64
  • Version 4.5

Can't set a context-path for the server config

Thanks for the great work on this!

I'm trying to run this under a specific context path (e.g. https://example.com/data/), but I don't see a way to do so. This causes a problem because the OIDC configuration dynamically generates a 307 redirect to a relative "callback" URL that contains the URL of the host the request came in on (i.e. /auth/provider?rd=https%3A%2F%2Fexample.com%2), but that URL does not include the context path. Assuming the issue is a missing configuration option, how can I fix it?
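
If I read the configuration reference correctly, the OIDC provider block exposes redirectUrl, loginPath and callbackPath, so one thing I tried is pointing them at the context path (an abridged sketch; issuer and client values are placeholders, and I may be misusing these options):

authProviders:
  oidc:
    provider1:
      issuerUrl: https://issuer.example.com/
      clientID: client-id
      clientSecret:
        value: client-secret
      # all three below include the /data context path (assumption on my part)
      redirectUrl: https://example.com/data/
      loginPath: /data/auth/provider1
      callbackPath: /data/auth/provider1/callback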

folder-list page leads to Internal server error

Hi,
I'm trying to disable the folder-list page (index). Since there is no direct way, I'm overriding the template as described here: #237, but this does not work as expected.

Describe the bug
When using an access key that only has the "s3:GetObject" permission, it seems impossible to disable the folder-list page without overriding the internal-server-error template as well. The proxy receives a 403 from S3 when it requests the folder listing (even if the rendered template does not need it), which leads to the internal-server-error page even when the folder-list template is replaced.

To Reproduce
Steps to reproduce the behavior:

  1. Configure the proxy application with an access key with the policy:
{
 "Version": "2012-10-17",
 "Statement": [
  {
   "Effect": "Allow",
   "Action": [
    "s3:GetObject"
   ],
   "Resource": [
    "arn:aws:s3:::bucket-name",
    "arn:aws:s3:::bucket-name/*"
   ]
  }
 ]
}
  2. Change the config to use the 403 template for the folder-list page (as I want to disable access to the folders):
      templates:
        helpers:
          - templates/_helpers.tpl
        folderList:
          path: templates/forbidden-error.tpl
          headers:
            Content-Type: '{{ template "main.headers.contentType" . }}'
          status: "403"
  3. When accessing the folder-list page, you will receive an internal-server-error page instead of the expected access-denied page.

Expected behavior
The folder-list page returns the access-denied template (or there is a way to disable the listing entirely).
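
For completeness, the workaround I am testing also overrides the internal-server-error template with the same forbidden template (a sketch, assuming internalServerError is accepted alongside folderList in the templates block):

templates:
  helpers:
    - templates/_helpers.tpl
  folderList:
    path: templates/forbidden-error.tpl
    headers:
      Content-Type: '{{ template "main.headers.contentType" . }}'
    status: "403"
  internalServerError:
    path: templates/forbidden-error.tpl
    headers:
      Content-Type: '{{ template "main.headers.contentType" . }}'
    status: "403"

Overriding internalServerError globally masks genuine server errors as 403s, so this can only be a stopgap.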

How to set authorizationAccesses

I'd like to restrict access to a bucket to members belonging to certain groups in my OIDC provider. I'm not sure how to go about it with authorizationAccesses. How can I let members of e.g. only group1 and group2 access the bucket elements?

targets:
  target1:
    resources:
      - path: /*
        provider: gitlab
        oidc:
          authorizationAccesses:
            - group: only_one_group?
    bucket:
      ...
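
From the documented examples, authorizationAccesses appears to take a list of matchers that are OR'ed together, so my guess is that listing both groups would admit members of either one (unverified; group names are placeholders):

targets:
  target1:
    resources:
      - path: /*
        provider: gitlab
        oidc:
          authorizationAccesses:
            - group: group1
            - group: group2
    bucket:
      ...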

CORS support

Is your feature request related to a problem? Please describe.
CORS is not supported by the S3-Proxy project.

Describe the solution you'd like
It would be good to be able to configure CORS per target, or just globally; a hypothetical config shape is sketched under Additional context below.

Describe alternatives you've considered

Additional context
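
Purely as an illustration, a per-server CORS section could look something like this (hypothetical keys, not part of the current configuration schema):

server:
  cors:
    enabled: true # hypothetical global toggle
    allowOrigins:
      - https://app.example.com
    allowMethods:
      - GET
      - HEAD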

I can't disable index page

Describe the bug
I am trying to disable the index page by configuring ListTargetsConfiguration.

To Reproduce
I have tried the following three configurations, but they all still show the index page.

log:
  level: debug
  format: text
listTargets:
  enabled: false
  mount:
    path:
      - /
targets:
  first-bucket:
    mount:
      path:
        - /
    bucket:
      name: foo-common.example.com
      prefix:
      region: gra
      s3Endpoint: https://s3.gra.cloud.ovh.net
      disableSSL: false
      s3ListMaxKeys: 1000
      credentials:
        accessKey:
          value: testing
        secretKey:
          value: testing

log:
  level: debug
  format: text
listTargets:
  enabled: false
  mount:
    path:
      - /foo-common.example.com/
targets:
  first-bucket:
    mount:
      path:
        - /foo-common.example.com/
    bucket:
      name: foo-common.example.com
      prefix:
      region: gra
      s3Endpoint: https://s3.gra.cloud.ovh.net
      disableSSL: false
      s3ListMaxKeys: 1000
      credentials:
        accessKey:
          value: testing
        secretKey:
          value: testing

log:
  level: debug
  format: text
listTargets:
  enabled: false
  mount:
    path:
      - /
targets:
  first-bucket:
    mount:
      path:
        - /foo-common.example.com/
    bucket:
      name: foo-common.example.com
      prefix:
      region: gra
      s3Endpoint: https://s3.gra.cloud.ovh.net
      disableSSL: false
      s3ListMaxKeys: 1000
      credentials:
        accessKey:
          value: testing
        secretKey:
          value: testing

Expected behavior
Do not show an index page listing files when I visit https://mys3proxy.example.com.

Version and platform (please complete the following information):

  • Helm chart: 2.7.0
  • Image: oxynozeta/s3-proxy:4.1.0
  • Kubernetes 1.21.1 (created with kind, running in Docker Desktop on Windows using WSL 2)
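
Note that listTargets only controls the page that lists the configured targets, not the per-bucket folder listing. For the folder listing itself, a per-target GET action option along these lines might help (a sketch, assuming a version where actions.GET.config.disableListing exists; I have not verified it against 4.1.0):

targets:
  first-bucket:
    mount:
      path:
        - /
    actions:
      GET:
        enabled: true
        config:
          disableListing: true # assumed option name, not verified
    bucket:
      name: foo-common.example.com
      region: gra
      s3Endpoint: https://s3.gra.cloud.ovh.net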
