s3-beam

Usage | Changes

s3-beam is a Clojure/ClojureScript library designed to help you upload files from the browser to S3 (CORS upload). s3-beam can also upload files from the browser to DigitalOcean Spaces.

[org.martinklepsch/s3-beam "0.6.0-alpha5"] ;; latest release

Usage

To upload files directly to S3 you need to send special request parameters that are based on your AWS credentials, the file name, mime type, date, etc. Since we don't want to store our credentials in the client, these parameters need to be generated on the server side. For this reason this library consists of two parts:

  1. A pluggable route that will send back the required parameters for a given file-name & mime-type
  2. A client-side core.async pipeline setup that will retrieve the special parameters for a given File object, upload it to S3 and report back to you

1. Enable CORS on your S3 bucket

Please follow Amazon's official documentation.

For DigitalOcean Spaces, please follow DigitalOcean's official documentation.
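
For reference, a minimal CORS configuration of the kind those documents describe might look like the following (this is the legacy XML form; newer consoles accept an equivalent JSON form, and the wildcard origin should be narrowed to your app's domain in production):

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <AllowedMethod>POST</AllowedMethod>
        <AllowedMethod>PUT</AllowedMethod>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
</CORSConfiguration>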

2. Plug-in the route to sign uploads

(ns your.server
  (:require [s3-beam.handler :as s3b]
            [compojure.core :refer [GET defroutes]]
            [compojure.route :refer [resources]]))

(def bucket "your-bucket")
(def aws-zone "eu-west-1")
(def access-key "your-aws-access-key")
(def secret-key "your-aws-secret-key")

(defroutes routes
  (resources "/")
  (GET "/sign" {params :params} (s3b/s3-sign bucket aws-zone access-key secret-key)))

If you want to use a route different from /sign, define it in the handler, (GET "/my-cool-route" ...), and then pass it in the options map to s3-pipe in the frontend.

If you are serving your S3 bucket from DigitalOcean Spaces, through CloudFront, or via another CDN/proxy, you can pass upload-url as a fifth parameter to s3-sign so that the ClojureScript client is directed to upload to that URL. You still need to pass the bucket name, as the policy that is created and signed is based on the bucket name.
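
For example, a sketch of such a signing route, assuming a hypothetical CDN endpoint in front of the bucket:

;; Sign as before, but direct browser uploads to a CDN/proxy URL.
;; "https://uploads.example.com" is a hypothetical endpoint, not part of the library.
(GET "/sign" {params :params}
  (s3b/s3-sign bucket aws-zone access-key secret-key
               "https://uploads.example.com"))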

3. Integrate the upload pipeline into your frontend

In your frontend code you can now use s3-beam.client/s3-pipe. s3-pipe's argument is a channel where completed uploads will be reported. The function returns a channel onto which you can put File objects or file maps that should get uploaded. It can also take an extra options map with the previously mentioned :server-url, like so:

(s3/s3-pipe uploaded {:server-url "/my-cool-route"}) ; assuming s3-beam.client is NS aliased as s3

The full options map spec is as follows (a combined example appears after the list):

  • :server-url the signing server URL; defaults to "/sign"
  • :response-parser a function to parse the signing server's response into EDN; defaults to read-string
  • :key-fn a function used to generate the object key for the uploaded file on S3; defaults to nil, which means the passed filename is used as the object key
  • :headers-fn a function used to create the headers for the GET request to the signing server. The returned headers should be a Clojure map of header name Strings to corresponding header value Strings.
  • :progress-events? if set to true, progress events are pushed to the channel during the transfer; defaults to false
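
A sketch combining these options; the values are illustrative, and it assumes :key-fn receives the file name and :headers-fn takes no arguments, per the descriptions above:

;; Illustrative only: the key prefix and bearer token are placeholders.
(s3/s3-pipe uploaded
            {:server-url       "/my-cool-route"
             :response-parser  cljs.reader/read-string
             :key-fn           (fn [file-name] (str "uploads/" file-name))
             :headers-fn       (fn [] {"Authorization" "Bearer <token>"})
             :progress-events? true})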

If you choose to place a file map instead of a File object, your file map should follow this spec (a usage sketch follows the list):

  • :file A File object
  • :identifier (optional) A variable used to uniquely identify this file upload. This will be included in the response channel.
  • :key (optional) The file-name parameter that is sent to the signing server. If a :key key exists in the input-map it will be used instead of the key-fn as an object-key.
  • :metadata (optional) Metadata for the object. See Amazon's API docs for full details on which keys are supported. Keys and values can be strings or keywords. N.B. Keys not on that list will not be accepted. If you want to set arbitrary metadata, it needs to be prefixed with x-amz-meta-*.
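
A minimal sketch of queueing a file map, assuming upload-queue is the channel returned by s3-pipe and js-file is a js/File object from your own code:

;; Requires cljs.core.async's put!. All values below are illustrative.
(put! upload-queue
      {:file       js-file
       :identifier :avatar-upload
       :key        "avatars/user-123.png"
       :metadata   {:content-disposition "attachment"
                    "x-amz-meta-origin"  "webapp"}})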

An example using it within an Om component:

(ns your.client
  (:require [s3-beam.client :as s3]
  ...))

(defcomponent upload-form [app-state owner]
  (init-state [_]
    (let [uploaded (chan 20)]
      {:dropped-queue (chan 20)
       :upload-queue (s3/s3-pipe uploaded)
       :uploaded uploaded
       :uploads []}))
  (did-mount [_]
    (listen-file-drop js/document (om/get-state owner :dropped-queue))
    (go (while true
          (let [{:keys [dropped-queue upload-queue uploaded uploads]} (om/get-state owner)]
            (let [[v ch] (alts! [dropped-queue uploaded])]
              (cond
               (= ch dropped-queue) (put! upload-queue v)
               (= ch uploaded) (om/set-state! owner :uploads (conj uploads v))))))))
  (render-state [this state]
    ; ....
    ))

Return values

The spec for the returned map (in the example above the returned map is v):

  • :type :success
  • :file The File object from the uploaded file
  • :response The upload response from S3 as a map with:
    • :location The S3 URL of the uploaded file
    • :bucket The S3 bucket where the file is located
    • :key The S3 key for the file
    • :etag The etag for the file
  • :xhr The XhrIo object used to POST to S3
  • :identifier A value used to uniquely identify the uploaded file

Or, if an error occurs during upload processing, an error-map will be placed on the response channel:

  • :type :error
  • :identifier A variable used to uniquely identify this file upload. This will be included in the response channel.
  • :error-code The error code from the XHR
  • :error-message The debug message from the error code
  • :http-error-code The HTTP error code

If :progress-events? is set to true, progress events from XhrIo are also forwarded (a consumer sketch follows the list):

  • :type :progress
  • :file The File object from the uploaded file
  • :bytes-sent Bytes uploaded
  • :bytes-total Total file size in bytes
  • :xhr The XhrIo object used to POST to S3
  • :identifier A value used to uniquely identify the uploaded file
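
A sketch of a consumer loop dispatching on these three event types; uploaded is the results channel you passed to s3-pipe:

;; Assumes (:require [cljs.core.async :refer [<!] :refer-macros [go-loop]]).
(go-loop []
  (let [{:keys [type] :as event} (<! uploaded)]
    (case type
      :success  (js/console.log "uploaded to" (get-in event [:response :location]))
      :error    (js/console.error "upload failed:" (:error-message event))
      :progress (js/console.log "sent" (:bytes-sent event) "of" (:bytes-total event) "bytes"))
    (recur)))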

Changes

0.6.0-alpha5

  • Fix compilation issues with shadow-cljs (#47)
  • Upgrade dependencies (#48)

0.6.0-alpha4

  • Add support for DigitalOcean Spaces (#44)

0.6.0-alpha3

  • Add support for progress events (#40)

0.6.0-alpha1

  • Add support for assigning metadata to files when uploading them. See the file-map spec above for more details. #37
  • Tweak keys and parameters for communication between the client and server parts of the library. This is backwards and forwards compatible between clients and servers running 0.5.2 and 0.6.0-alpha1.

0.5.2

  • Allow the user to upload to S3 through a custom URL as an extra parameter to sign-upload
  • Support bucket names with a '.' in them
  • Add asserts that arguments are provided

0.5.1

  • Allow the upload-queue to be passed an input-map instead of a file. This input-map follows the spec:

    • :file A File object
    • :identifier (optional) A variable used to uniquely identify this file upload. This will be included in the response channel.
    • :key (optional) The file-name parameter that is sent to the signing server. If a :key key exists in the input-map it will be used instead of the key-fn as an object-key.
  • Introduce error handling. When an error has been thrown while uploading a file to S3 an error-map will be put onto the channel. The error-map follows the spec:

    • :identifier A variable used to uniquely identify this file upload. This will be included in the response channel.
    • :error-code The error code from the XHR
    • :error-message The debug message from the error code
    • :http-error-code The HTTP error code
  • New options are available in the options map:

    • :response-parser a function to parse the signing server's response into EDN; defaults to read-string.
    • :key-fn a function used to generate the object key for the uploaded file on S3; defaults to nil, which means the passed filename is used as the object key.
    • :headers-fn a function used to create the headers for the GET request to the signing server.
  • Places a map into the upload-channel with:

    • :file The File object from the uploaded file
    • :response The upload response from S3 as a map with:
      • :location The S3 URL of the uploaded file
      • :bucket The S3 bucket where the file is located
      • :key The S3 key for the file
      • :etag The etag for the file
    • :xhr The XhrIo object used to POST to S3
    • :identifier A value used to uniquely identify the uploaded file

0.4.0

  • Support custom ACLs. The sign-upload function that can be used to implement custom signing routes now supports an additional :acl key to upload assets with a different ACL than public-read.

      (sign-upload {:file-name "xyz.html" :mime-type "text/html"}
                   {:bucket bucket
                    :aws-zone aws-zone
                    :aws-access-key access-key
                    :aws-secret-key secret-key
                    :acl "authenticated-read"})
    
  • Changes the arity of the s3-beam.handler/policy function.

0.3.1

  • Correctly look up endpoints given a zone parameter (#10)

0.3.0

  • Allow customization of server-side endpoint (1cb9b27)

     (s3/s3-pipe uploaded {:server-url "/my-cool-route"})
    

0.2.0

  • Allow passing of aws-zone parameter to s3-sign handler function (b880736)

Contributing

Pull requests and issues are welcome. There are a few things I'd like to improve:

  • Testing: currently there are no tests
  • Error handling: what happens when the request fails?

Maintainers

  • Martin Klepsch
  • Daniel Compton

License

Copyright © 2014 Martin Klepsch

Distributed under the Eclipse Public License either version 1.0 or (at your option) any later version.


s3-beam's Issues

Refactor into a callback based API and layer core.async on top?

s3-beam was built around core.async and works fairly well. However, in re-frame apps (and apps built with other frameworks), a callback-based approach may be more natural than creating a pipeline channel. Additionally, it can be a little hard to follow the flow of logic through the pipeline when you are making changes that affect multiple steps.

One possible option would be to convert this library so that the core uses callbacks, with core.async provided as a layer on top. This would also make the core.async options more configurable for people with different needs.

Thoughts?

JavaScript Equivalent

Is it possible to use s3-beam in JavaScript or even TypeScript applications? If not, are you aware of a similar JavaScript project?

us-east-1 zone must be set to "s3" in server side configuration.

Hi @martinklepsch, thanks for the useful tool. I just wanted to share a pain point getting it set up (spoiler: it's just like #3).

In short, on S3, I created a bucket in us-east-1 w/ CORS configured, then configured the server side according to the readme, except I configured my zone to be "s3-us-east-1" (which causes the issue), and then hooked up the client side. Everything worked up until the actual request to S3. My request was blocked each and every time.
Fortunately, I found #3 which hinted to set the zone "s3" at which point the upload was successful.

To be clear, I attempted to use "us-east-1" and "s3-us-east-1", neither of which worked.

Working server side configuration for us-east-1:

(def bucket "xyzabc")
(def aws-zone "s3")
(def access-key "***")
(def secret-key "***")
...
(compojure/GET "/sign" {params :params} (s3b/s3-sign bucket aws-zone access-key secret-key))
...

I admit that I should have checked the closed issues earlier; I honestly thought it was my CORS configuration and definitely went down a rabbit hole. I haven't dug into the cause of this behavior, but I am curious whether this is a valid bug and whether we should mention it in the readme for the time being.

Here is output from the s3cmd to verify bucket location.

$ s3cmd info s3://foo
s3://foo/ (bucket):
   Location:  us-east-1
...

Thanks again!

Way to extend headers and parameters sent to signing endpoint on server

In order to use the /sign endpoint on my server, an Authorization header containing a JWT must be sent with the request. There is no way to add that header to the current GET request.

It would also be great if we had the ability to add parameters sent to the /sign endpoint. For example, if the user uploads a directory structure rather than a single file, I would like to additionally send the relative path of each file to the server.
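
With the :headers-fn option described in the readme above, a sketch of attaching such a header might look like this (get-jwt is a hypothetical function returning the current token):

(s3/s3-pipe uploaded
            {:headers-fn (fn [] {"Authorization" (str "Bearer " (get-jwt))})})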

Provide acl in headers rather than form data?

I was confused for a bit trying to debug why "PutObject" permissions weren't enough to put an object to S3. At first I thought I needed "PutObjectAcl", but after reading the docs, it says "PutObjectAcl" is only needed for adding ACLs to existing objects. I tried it anyway and it worked.

After reading the "PutObject" REST docs, I couldn't see anywhere that passing an ACL as a form parameter is documented; they suggest adding it as a header.

This needs more investigation.

setting "Content-Dispoition": "attachment" ?

Hi,

I've been trying for a few hours now to set the metadata "Content-Disposition": "attachment" in S3. I'm trying to enforce that all images/videos are downloaded when one navigates to them in S3. I can't seem to get this to work with either :headers-fn or :metadata. Do you have any suggestions on how to set this header for S3 objects? Thanks!
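
For reference, per the file-map spec in the readme above, a sketch that attempts this via :metadata; whether S3 honors it depends on the key being included in the signed policy:

;; Illustrative only; js-file and upload-queue come from your own setup.
(put! upload-queue
      {:file     js-file
       :metadata {:content-disposition "attachment"}})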

aws-zone not required when passing custom upload-url

I'm using s3-beam for DigitalOcean Spaces and am therefore using a custom upload-url. However, the handler checks for aws-zone and bombs out if it's not there.

As a work-around, you can set any arbitrary AWS zone (e.g. "us-east-1"), as it's not used in the hash or anything:
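
;; Sketch of the workaround; the Spaces endpoint shown is illustrative.
;; The zone value only satisfies the handler's assert; per this issue it is
;; not used in the signature when a custom upload-url is supplied.
(s3b/s3-sign bucket "us-east-1" access-key secret-key
             "https://your-space.nyc3.digitaloceanspaces.com")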

Thanks!

Cut a new release?

We've been using 0.5.2-alpha2 for a while; can we cut a 0.5.2 release?

Internet Explorer 9 support?

Is there an easy way to add IE9 support to this? Particularly support for file inputs (input type=file) and not just drag and drop. I've looked into a couple of polyfills, but none of them seems to enable that. The most promising seems to be dropfile.js, but it lists on-change support for file inputs as a TODO.

Assert content type is not null

I was constructing a File in the browser and hadn't given it a content type. s3-beam signed the request, but it wasn't valid when it was sent to S3. It would be good to add a client-side assert that the File has a non-null content-type to avoid this (I think).

Readme needs adjustment? (custom signing url)

There's a custom URL in the source (:server-url in opts), but the readme currently lists a custom signing URL as one of the nice-to-haves. Or have I missed something here? The readme should be updated if I haven't missed anything.

Also, how is upload progress support coming along?

Errors when uploading file to S3

Hi Martin, thanks for making s3-beam.

I'm getting some errors when uploading to S3. Some files yield this error:

POST https://my-bucket.s3-eu-west-1.amazonaws.com/ net::ERR_CONNECTION_RESET

And some this error:

XMLHttpRequest cannot load https://my-bucket.s3-eu-west-1.amazonaws.com/. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost:8080' is therefore not allowed access.
client.cljs:37 Uncaught TypeError: Cannot read property 'getElementsByTagName' of null
    (anonymous function) @ client.cljs:37
    goog.events.EventTarget.fireListeners @ eventtarget.js:285
    goog.events.EventTarget.dispatchEventInternal_ @ eventtarget.js:382
    goog.events.EventTarget.dispatchEvent @ eventtarget.js:197
    goog.net.XhrIo.dispatchErrors_ @ xhrio.js:666
    goog.net.XhrIo.onReadyStateChangeHelper_ @ xhrio.js:808
    goog.net.XhrIo.onReadyStateChangeEntryPoint_ @ xhrio.js:748
    goog.net.XhrIo.onReadyStateChange_ @ xhrio.js:732

Here's the S3 configuration I'm using:

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <AllowedMethod>POST</AllowedMethod>
        <AllowedMethod>PUT</AllowedMethod>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
</CORSConfiguration>

Am I missing something?

Errors

Hi @martinklepsch, I'm about to implement something around errors (because I need it for s3-beam in my app). Are you happy with the idea of passing in a chan in the opts map which, if present, will have a map of error-message and error-code put on it? (Something like {:error-message "" :error-code 0}, I'm imagining, at this stage.)

I'll go ahead with that anyway, let me know if you've got another preference.

Use AWS Java SDK for signing URL

We should investigate whether it is possible to use the AWS Java SDK to sign URLs. This would let us support v4 signing format, as well as letting users use the credential chain rather than needing to provide access keys. One thing to watch out for is the dependencies that this might bring along with it.

Switch to using virtual-host style URLs when possible

Currently, s3-beam uses https://s3-<region>.amazonaws.com/<bucket> as the upload URL path (path-style). It is also possible to upload to https://<bucket>.s3-<region>.amazonaws.com (virtual-host style).

AWS is planning to deprecate path-style addressing for buckets created after September 30, 2020. There are also performance and resilience benefits for switching to virtual-host style addressing for existing buckets.

The switch is transparent for many buckets, but is not for buckets with a . in the bucket name (among others). AWS doesn't currently create a valid certificate for these buckets, so you will get TLS errors when you try to upload/download from them.

Using #38 would fix this for us, as it defaults to using virtual-host style addressing, unless the bucket has a . in it.

In the meantime, you can use the upload-url parameter to s3-sign to override the upload URL.
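
A sketch of that override for a bucket in eu-west-1 (bucket name and region are illustrative):

;; Force virtual-host style addressing by passing the upload URL explicitly.
(s3b/s3-sign "my-bucket" "eu-west-1" access-key secret-key
             "https://my-bucket.s3-eu-west-1.amazonaws.com")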
