lepozepo / s3

A simple helper for easily uploading files to Amazon S3 from Meteor. This package will also make Knox available server-side.

License: MIT License

CoffeeScript 62.00% HTML 33.03% JavaScript 4.97%

s3's Introduction

Amazon S3 Uploader

S3 provides a simple way for uploading files to the Amazon S3 service with a progress bar. This is useful for uploading images and files that you want accessible to the public. S3 is built on Knox and AWS-SDK. Both modules are made available on the server after installing this package.

If you want to keep using the older version of this package (pre 0.9.0) check it out using meteor add lepozepo:s3@=3.0.1

If you want to keep using the version of this package that uses server resources to upload files check it out using meteor add lepozepo:s3@=4.1.3

S3 now uploads directly from the client to Amazon. Client files will not touch your server.

Show your support!

Star this repo on GitHub or Atmosphere if you like my code, or shoot me a dollar or two!

DONATE HERE

Moving Forward

In line with Meteor's move towards NPM, I'm moving this package to NPM. Along with this move, I'll resolve issues with importing the uploader and make it easy to use in React and React Native. I've already started migrating but I'm only halfway there; have a look at the repo here

NEW IN 5.2.1

  • AWS Signature V4!! This means more regions can use this package

Installation

$ meteor add lepozepo:s3

How to use

Step 1

Define your Amazon S3 credentials. SERVER SIDE.

S3.config = {
	key: 'amazonKey',
	secret: 'amazonSecret',
	bucket: 'bucketName',
	region: 'eu-west-1' // Only needed if not "us-east-1" or "us-standard"
};

Step 2

Create a file input and progress indicator. CLIENT SIDE.

<template name="s3_tester">
	<input type="file" class="file_bag">
	<button class="upload">Upload</button>

	{{#each files}}
		<p>{{percent_uploaded}}</p>
	{{/each}}
</template>

Step 3

Create a function to upload the files and a helper to see the upload's progress. CLIENT SIDE.

Template.s3_tester.events({
	"click button.upload": function(){
		var files = $("input.file_bag")[0].files

		S3.upload({
				files:files,
				path:"subfolder"
			},function(e,r){
				console.log(r);
		});
	}
})

Template.s3_tester.helpers({
	"files": function(){
		return S3.collection.find();
	}
})

Create your Amazon S3

For all of this to work you need to create an AWS account.

1. Create an S3 bucket in your preferred region.

2. Access Key Id and Secret Key

  1. Navigate to your bucket
  2. On the top right side you'll see your account name. Click it and go to Security Credentials.
  3. Create a new access key under the Access Keys (Access Key ID and Secret Access Key) tab.
  4. Enter this information into your app as defined in "How to Use" "Step 1".
  5. Your region can be found under "Properties" button and "Static Website Hosting" tab.
    • bucketName.s3-website-eu-west-1.amazonaws.com.
    • If your region is "us-east-1" or "us-standard" then you don't need to specify this in the config.

3. Hosting

  1. Upload a blank index.html file (anywhere is ok, I put it in root).
  2. Select the bucket's properties by clicking on the bucket (from All Buckets) then the "Properties" button at the top right.
  3. Click "Static Website Hosting" tab.
  4. Click Enable Website Hosting.
  5. Fill the Index Document input with the path to your index.html without a trailing slash. E.g. afolder/index.html, index.html
  6. Click "Save"

4. CORS

You need to set permissions so that everyone can see what's in there.

  1. Select the bucket's properties and go to the "Permissions" tab.

  2. Click "Edit CORS Configuration" and paste this:

    <?xml version="1.0" encoding="UTF-8"?>
    <CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    	<CORSRule>
    		<AllowedOrigin>*</AllowedOrigin>
    		<AllowedMethod>PUT</AllowedMethod>
    		<AllowedMethod>POST</AllowedMethod>
    		<AllowedMethod>GET</AllowedMethod>
    		<AllowedMethod>HEAD</AllowedMethod>
    		<MaxAgeSeconds>3000</MaxAgeSeconds>
    		<AllowedHeader>*</AllowedHeader>
    	</CORSRule>
    </CORSConfiguration>
  3. Click "Edit bucket policy" and paste this (Replace the bucket name with your own):

    {
    	"Version": "2008-10-17",
    	"Statement": [
    		{
    			"Sid": "AllowPublicRead",
    			"Effect": "Allow",
    			"Principal": {
    				"AWS": "*"
    			},
    			"Action": "s3:GetObject",
    			"Resource": "arn:aws:s3:::YOURBUCKETNAMEHERE/*"
    		}
    	]
    }
  4. Click Save

Note

It might take a couple of hours before you can actually start uploading to S3. Amazon takes some time to make things work.

Enjoy, this took me a long time to figure out and I'm sharing it so that nobody has to go through all that.

API

S3 (CLIENT SIDE)

S3.collection

This is a null Meteor.Collection that exists only on the user's client. After the user leaves the page or refreshes, the collection disappears forever.

S3.upload(ops,callback)

This is the upload function that manages all the dramatic things you need to do for something so essentially simple.

Parameters:

  • ops.file [OPTIONAL]: Must be a File object. You can create this via new File(). Either this option or 'files' must be provided.
  • ops.files [OPTIONAL]: Must be a FileList object. You can get this with jQuery via $("input[type='file']")[0].files.
  • ops.path [DEFAULT: ""]: Must be in this format ("folder/other_folder"). So basically never start with "/" and never end with "/". Defaults to the ROOT folder.
  • ops.unique_name [DEFAULT: true]: If set to true, the uploaded file name will be set to a uuid without changing the file's extension. If set to false, the uploaded file name will be set to the original name of the file.
  • ops.encoding [OPTIONAL: "base64"]: If set to "base64", the file will be uploaded as a base64 string. The uploader will enforce a unique_name if this option is set.
  • ops.expiration [DEFAULT: 1800000 (30 mins)]: Defines how much time the file has before Amazon denies the upload. Must be in milliseconds.
  • ops.uploader [DEFAULT: "default"]: Defines the name of the uploader. Useful for forms that use multiple uploaders.
  • ops.acl [DEFAULT: "public-read"]: Access Control List. Describes who has access to the file. Can only be one of the following options:
    • "private"
    • "public-read"
    • "public-read-write"
    • "authenticated-read"
    • "bucket-owner-read"
    • "bucket-owner-full-control"
    • "log-delivery-write"
    • Support for signed GET is still pending so uploads that require authentication won't be easily reachable
  • ops.bucket [DEFAULT: SERVER SETTINGS]: Overrides the bucket that will be used for the upload.
  • ops.region [DEFAULT: SERVER SETTINGS]: Overrides the region that will be used for the upload. Only accepts the following regions:
    • "us-west-2"
    • "us-west-1"
    • "eu-west-1"
    • "eu-central-1"
    • "ap-southeast-1"
    • "ap-southeast-2"
    • "ap-northeast-1"
    • "sa-east-1"
  • file.upload_name [OPTIONAL]: A function that returns the name with which you want to upload the file. It takes the file object as its only parameter, e.g.
       // The following function simply replicates the default behavior.
       function(f) {
       	var extension = f.type.split("/")[1];
       	return Meteor.uuid() + "." + extension;
       }
  • callback: A function that is run after the upload is complete returning an Error as the first parameter (if there is one), and a Result as the second.
  • Result: The returned value of the callback function if there is no error. It returns an object with these keys:
    • loaded: Integer (bytes)
    • total: Integer (bytes)
    • percent_uploaded: Integer (out of 100)
    • uploader: String (describes which uploader was used to upload the file)
    • url: String (S3 hosted URL)
    • secure_url: String (S3 hosted URL for https)
    • relative_url: String (S3 URL for delete operations, this is what you should save in your DB to control delete)
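
The Result keys above can be handled with a small helper in the upload callback. A hedged sketch follows; the sample values are invented, and the `pickUploadFields` name is made up for illustration:

```javascript
// Sketch: pull the fields worth persisting out of an S3.upload result.
// Per the docs above, relative_url is what you should save to control deletes.
function pickUploadFields(r) {
  return {
    url: r.url,
    secure_url: r.secure_url,
    relative_url: r.relative_url
  };
}

// A sample result shaped like the keys documented above (values invented):
var sampleResult = {
  loaded: 616000,
  total: 616000,
  percent_uploaded: 100,
  uploader: "default",
  url: "http://bucketName.s3.amazonaws.com/subfolder/abc.png",
  secure_url: "https://bucketName.s3.amazonaws.com/subfolder/abc.png",
  relative_url: "/subfolder/abc.png"
};

// In an app you would call this from inside the S3.upload callback and
// insert the returned object into your own persistent collection.
```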

S3.delete(path,callback)

This function permanently destroys a file located in your S3 bucket.

Parameters:

  • path: Must be in this format ("/folder/other_folder/file.extension"). So basically always start with "/" and never end with "/". This is required.
  • callback: A function that is run after the delete operation is complete returning an Error as the first parameter (if there is one), and a Result as the second.
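
Since the path handed to S3.delete must start with "/", a tiny guard can normalize whatever was stored. A sketch; the helper name is made up:

```javascript
// Sketch: ensure the path passed to S3.delete starts with "/",
// as the parameter description above requires.
function toDeletePath(relativeUrl) {
  return relativeUrl.charAt(0) === "/" ? relativeUrl : "/" + relativeUrl;
}

// In an app: S3.delete(toDeletePath(doc.relative_url), function (e, r) { /* ... */ });
```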

S3 (SERVER SIDE)

S3.config(ops)

This is where you define your key, secret, bucket, and other account wide settings.

Parameters:

  • ops.key [REQUIRED]: Your Amazon AWS Key.
  • ops.secret [REQUIRED]: Your Amazon AWS Secret.
  • ops.bucket [REQUIRED]: Your Amazon AWS S3 bucket.
  • ops.denyDelete [DEFAULT: undefined]: If set to true, will block delete calls. This is to enable secure deployment of this package before a more granular permissions system is developed.
  • ops.region [DEFAULT: "us-east-1"]: Your Amazon AWS S3 Region. Defaults to US Standard. Can be any of the following:
    • "us-west-2"
    • "us-west-1"
    • "eu-west-1"
    • "eu-central-1"
    • "ap-southeast-1"
    • "ap-southeast-2"
    • "ap-northeast-1"
    • "sa-east-1"
S3.config = {
	key: 'amazonKey',
	secret: 'amazonSecret',
	bucket: 'bucketName'
};

S3.rules

S3.rules.delete

This is a function that runs every time someone uses the delete function on the client side. Inside the function, this has access to the path and to the this context of a Meteor.method.
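
A sketch of what such a rule might look like, assuming the delete is permitted when the rule returns a truthy value (an assumption; the package docs don't spell out the return contract, and the "uploads/" folder name is made up). The S3 object is stubbed here so the shape can be shown standalone; in an app the package provides it:

```javascript
// Stub so the snippet runs outside Meteor; the real S3 comes from the package.
var S3 = { rules: {} };

// Hypothetical policy: only allow deletes under the "uploads/" folder.
// Assumption: returning a truthy value permits the delete.
S3.rules.delete = function (path) {
  return /^\/?uploads\//.test(path);
};
```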

S3.knox

The current knox client.

S3.aws

The current aws-sdk client.

Developer Notes

http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/frames.html
https://github.com/Differential/meteor-uploader/blob/master/lib/UploaderFile.coffee#L169-L178

http://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-auth-using-authorization-header.html
http://docs.aws.amazon.com/general/latest/gr/sigv4-signed-request-examples.html
https://github.com/CulturalMe/meteor-slingshot/blob/master/services/aws-s3.js

s3's People

Contributors

ajduke, dgrijuela, jpadvo, kristerv, ladoli, lepozepo, luchoh, nicolaslopezj, proehlen, s7dhansh, tscizzle


s3's Issues

add http caching tags to uploaded files

This is a feature request. Would it be possible to flag S3 to add appropriate meta tags/flags when uploading through knox to S3 so that the file is cached when downloaded from S3?

Uncaught RangeError: Maximum call stack size exceeded

After updating I'm experiencing this bug. It seems caused by the context send in events.js

reader.onload = function () {
    fileData.data = new Uint8Array(reader.result);
    Meteor.call("S3upload",fileData,context,callback);
};

Setting context to null makes uploading work fine. Am I the only one experiencing this?

MetaData

I am unable to set the content-type meta data for an image. This is crucial for using AWS as a CDN for image storage.

breaks in 0.8.3

Hello

this package is not working on Meteor 0.8.3

meteor create yyy
cd yyy
mrt add s3

prevents startup of the application

Use $set for S3.collection.update

Just wondering if you would consider using the $set modifier when updating the S3 collection on line 3 of streams.coffee.

If a developer wants to add their own fields to the S3 collection, they will get overwritten each time this update is done. Furthermore, replacing the entire document each time causes conflicts with simple-schema (which someone might want to use to protect their data).

I'm pretty new to Git and Github, so I'm not sure how to commit this change myself (or if I'm even allowed to).

Thank you.

Mike

How to upload image that is created from Canvas?

My use case is:

  • User pick an image
  • They crop the image
  • I generate the cropped image with canvas
  • Then I upload the generated image to S3

In previous version, I manually created the file object, simply by setting the name, size, type, and data properties, such as below:

filedata = {
    name: "Picture",
    type: "image/png",
    size: 1223727,
    data: // the dataURL converted with Uint8Array()
}

And it's working fine by directly calling the method. But now, the new update changes everything and the first parameter requires you to pass the FileList object, which I don't really know how to form from the pieces of information I have.

I tried converting the dataURL to Blob, but it doesn't work and throws an error.

Do you have any suggestions on how this can be achieved with your package?

EDIT
Or can I directly call the internal method, _S3upload, and use it as in the previous version?
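
One possible route (a sketch, not from the package docs): since the API accepts ops.file as a File object created via new File(), the canvas dataURL can be parsed and rebuilt into a File. The parsing helper below is plain JS; the browser-only steps are shown as comments because they need atob/File, and the "cropped.png"/"avatars" names are made up:

```javascript
// Hypothetical helper: split a canvas.toDataURL() string into its MIME type
// and base64 payload.
function parseDataUrl(dataUrl) {
  var match = /^data:([^;]+);base64,(.+)$/.exec(dataUrl);
  if (!match) throw new Error("not a base64 dataURL");
  return { type: match[1], base64: match[2] };
}

// In the browser you could then do (untested sketch):
//   var parsed = parseDataUrl(canvas.toDataURL("image/png"));
//   var bytes = Uint8Array.from(atob(parsed.base64), function (c) { return c.charCodeAt(0); });
//   var file = new File([bytes], "cropped.png", { type: parsed.type });
//   S3.upload({ file: file, path: "avatars" }, function (e, r) { console.log(e, r); });
```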

Library reports success even when the upload fails

I have noticed that the S3.upload() method reports success (percent_uploaded = 100, url, etc.) even when Amazon S3 does not return HTTP 200 (OK), for example due to misconfiguration (a missing S3.config.region, in my case).

In addition to this, I believe the error-handling logic in Meteor.methods._S3upload is faulty, mainly due to the fact that if you throw new Meteor.Error inside the S3.knox.putStream (ClientRequest) context, it brings down the whole application (unhandled exception). Throwing an error from the _S3upload context is fine and results in err being propagated back to the client via the callback.

Error

I am getting this error:
Exception while invoking method '_S3upload' TypeError: Object # has no method 'putStream'
at Meteor.methods._S3upload (packages/lepozepo:s3/server/methods.coffee:20:19)

For now I have only copied your simple tutorial, with the appropriate keys and bucket.

Add hooks

I wonder if it's possible to hook into the package's server-side S3.upload function in order to alter the file before the actual upload. In my case I want to resize huge images before sending them to S3. Sure, I can deny uploading files based on their size in bytes or mime type using the FileList object data, but I'd like to make it more user friendly. Also, I believe it's not safe to rely wholly on client-side code here.
I'm thinking of something like a new S3.config option like onUpload: function(file) { } which will either inspect the file argument passed and return true/false to allow/deny the upload, OR return a changed file object / null.
Does it make any sense?

Meteor Streams: Is it work on multi instances of meteor?

Hi, on your meteor streams repository you have not enabled issues, so I'm writing here.

I want to ask if the Meteor Streams package can work across multiple Meteor instances?
From https://arunoda.github.io/meteor-streams/scaling-support.html:

If you decide to scale your app horizontally, by default Meteor Streams works within a single instance only. As an example, let's say you are going to have 3 meteor instances (A, B, C). Now events emitted from instance A cannot be listened to from B or C.

But simply integrating Meteor Cluster you can remove this limitation. See below for the integration.

But Meteor cluster package is not also maintained.

[email protected] and S3.<methods> have stopped working

I recreated the exact steps from your setup and it worked great. Then I upgraded to [email protected] and it stopped working. The S3.upload method calls the callback with no error, reports the uploaded file, but S3 never received it. Same thing with S3.delete, it reports back success but never removes the file on S3.

Am pretty sure you will point to knox, but still think it is worthwhile logging here.

Read Me Clarification - Bucket naming

In the setup instructions you have:

"Under Static Website Hosting you can Enable website hosting, to do that first upload a blank index.html file and then enable it."

And I'm looking for a little clarification because I'm sure I'm doing something wrong.

I uploaded a blank index.html via the s3 UI to the bucket I created called "website_beta".

Then I went to the bucket properties > static website hosting and entered 'index.html' in the 'Index Document' field and hit the save button but it errors saying: "The specified bucket is not valid."
#1 - Any idea why the bucket is not valid?
#2 - When you say blank index.html file do you mean a skeleton html file or completely empty?

console.log(result) should be console.log(r)

Template.s3_tester.events({
    "click button.upload": function(){
        var files = $("input.file_bag")[0].files
        S3.upload(files,"/subfolder",function(e,r){
            console.log(result);
        });
    }
})

should be:

Template.s3_tester.events({
    "click button.upload": function(){
        var files = $("input.file_bag")[0].files
        S3.upload(files,"/subfolder",function(e,r){
            console.log(r);
        });
    }
})

Missing required key 'UploadId' in params

Hello There,

Im getting this error in the callback of S3.upload function:

Missing required key 'UploadId' in params

but files get uploaded to S3

Seems it's related to CORS in Meteor, any idea?

Thank you

Stopping upload of file over a certain size?

I tried returning r.total_uploaded to check the file size, but it came back undefined. I really need to check the file size and stop the upload before it starts, but I could settle for stopping the upload when it reaches a certain amount.

Any ideas?
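
Browser File objects expose a size property (in bytes), so a check can run before S3.upload is ever called. A sketch with an arbitrary example limit:

```javascript
// Sketch: reject files over a size limit before starting the upload.
// MAX_BYTES is an arbitrary example value.
var MAX_BYTES = 5 * 1024 * 1024; // 5 MB

function tooLarge(file) {
  return file.size > MAX_BYTES;
}

// In the click handler, before calling S3.upload:
//   var files = $("input.file_bag")[0].files;
//   if (tooLarge(files[0])) { alert("File too big"); return; }
```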

Exception in _S3upload. No method 'putStream'

Hi,

I'm having an issue with the S3.upload method call. I use Meteor 1.0 and S3 4.1.3. I get the following exception.

I20141202-14:08:24.107(1)? Exception while invoking method '_S3upload' TypeError: Object # has no method 'putStream'
I20141202-14:08:24.108(1)? at Meteor.methods._S3upload (packages/lepozepo:s3/server/methods.coffee:40:19)
I20141202-14:08:24.108(1)? at maybeAuditArgumentChecks (packages/ddp/livedata_server.js:1599)
I20141202-14:08:24.108(1)? at packages/ddp/livedata_server.js:648
I20141202-14:08:24.108(1)? at _.extend.withValue (packages/meteor/dynamics_nodejs.js:56)
I20141202-14:08:24.109(1)? at packages/ddp/livedata_server.js:647
I20141202-14:08:24.110(1)? at _.extend.withValue (packages/meteor/dynamics_nodejs.js:56)
I20141202-14:08:24.110(1)? at _.extend.protocol_handlers.method (packages/ddp/livedata_server.js:646)
I20141202-14:08:24.110(1)? at packages/ddp/livedata_server.js:546

Progress Bar

Any luck with a progress bar?
I'm looking at using this on a large project but am curious how far you hope to take it.

Issues after MRT update

I am having issues with this S3 package and Knox after running mrt update this morning. Was there anything significant that changed? The source code didn't look much different.

Using the package with different regions / problem uploading

Hey!

I tried using the package but am having difficulty uploading anything. I get no error when invoking S3.upload and it happily returns an URL. However, the file does not get uploaded. It doesn't show up in the S3 console on AWS and the link that the function returns throws a 404 error when used.
I realised that the links generated by Meteor/the package were different than if I upload something by hand. For example:
This file actually exists and is viewable:
https://s3.eu-central-1.amazonaws.com/taskwunder-appt/handshake.png

This URL was generated by Meteor and is not useable:
http://taskwunder-appt.s3.amazonaws.com/avatar/t4q5gth8XX4y8XbqH.png

I have added this to the S3.config.
region: 'eu-central-1'

The URL changes but is still not useable:
http://taskwunder-appt.s3-eu-central-1.amazonaws.com/avatar/83S4aPeGtsTcHJKDD.png

Is the package not useable with eu-central buckets? Or can I specify the region somewhere else?
EDIT: I created a US Standard bucket and everything seems to work. The issue is definitely with the generation of the URLs when using an EU bucket.

Upload callback does not return total_uploaded

The documentation says that the result in the callback should return total_uploaded. However, when I use the example provided and I console log r, I get percent_uploaded, uploading, url, secure_url and relative_url.

S3.upload(files,"/subfolder",function(e,r){
  console.log(r);
});

For now, I have made a workaround in my application where I query S3.collection with the url from the callback in order to get to the total_uploaded.

S3.delete doesn't change LocalCollection

S3.delete is not reactive.
So, I should call the collection's remove function like this.

"click .remove-image":function(event) {
    var that=this;
    S3.delete(this.relative_url, function(err, result) {
      if(err) {
        console.log(err);
      }
      var uploads=S3.collection.find();
      uploads.collection.remove(that._id);
    });
}

Uploading manually in S3

Hi, I want to upload manually to S3 using your package. When I went through your package I tried to call the s3Upload method provided in the package. The parameters needed are file, context.
I am a bit confused about what the file parameter is actually expecting.
I sent the html file object and it gives me this error:
Exception while invoking method 'S3upload' TypeError: First argument needs to be a number, array or string.

S3 won't upload files - meteor 0.9.4

Console:
Object { percent_uploaded: 100, uploading: false, url: "http://mybuk.s3.amazonaws.com/subfolder/497ea3ef-9a9c-481e-b90a-e83012d92ed3.png", secure_url: "https://mybuk.s3.amazonaws.com/subfolder/497ea3ef-9a9c-481e-b90a-e83012d92ed3.png", relative_url: "/subfolder/497ea3ef-9a9c-481e-b90a-e83012d92ed3.png" }

http://mybuk.s3.amazonaws.com/subfolder/497ea3ef-9a9c-481e-b90a-e83012d92ed3.png:
NoSuchKey: The specified key does not exist. Key: subfolder/497ea3ef-9a9c-481e-b90a-e83012d92ed3.png, RequestId: 7242C76C754A864C, HostId: 5jIpKN8LSDJb+984O6C7+XoD6/OGXFDlF1hEOLxaE2DfUXJuPVGzigBAWeDz38UP

S3.delete does not delete file

S3.delete does not seem to delete the file when its relative path is used. The S3.delete function returns result with the value true in the callback.

Why isn't the file being deleted off S3? Thank you!

Original URL: https://my.Bucket.s3-ap-southeast-1.amazonaws.com/demo/89fd92d2-98ce-4af2-8e1f-87f213896027-avatar.jpg
Relative URL: /my.Bucket/demo/89fd92d2-98ce-4af2-8e1f-87f213896027-avatar.jpg

var S3ImagePath = "/my.Bucket/" + ( Meteor.user().profile.imageUrl.split("/").slice(3).join("/") )
console.log(S3ImagePath)
S3.delete( S3ImagePath, function(error, result) {
    if(error)
        console.log('error:', error)
    else
        console.log('result:', result)
});

I have defined S3.config on the server, but somehow I also noticed that using the wrong credentials doesn't throw any errors.

S3.config = {
    key: 'mykey',
    secret: 'mysecret------/eYKWn9K+7v07IL',
    bucket: 'my.Bucket'
};

Error when uploading small file in 4.1.0

The latest version works awesome when uploading large files, good job!

I do get an error now, when uploading a small file (like 100k):

screen shot 2014-10-31 at 12 43 08

The upload does succeed, though.

No files in S3 bucket

Hi,
I followed your steps to setup S3 upload with Meteor. I have created s3.js within server folder and placed

S3.config = {
	key: 'accesskey',
	secret: 'secretkey ',
	bucket: 'my-bucket-name'
};

When I try to upload a file to S3, I do not see any error. The callback function returns

Object { percent_uploaded: 100, uploading: false, url: "http://mybucket.s3.amazonaws.com/gfcNkm53mm2NNxGwY.jpg", secure_url: "https://mybucket.s3.amazonaws.com/gfcNkm53mm2NNxGwY.jpg", relative_url: "gfcNkm53mm2NNxGwY.jpg" }

on console.

But there is no file in my bucket on S3.

Also I added:

BrowserPolicy.content.allowOriginForAll('mybucket.s3.amazonaws.com')

My CORS

I added GET, POST, HEAD, PUT in AllowedMethod

Bucket policy:

{
	"Version": "2008-10-17",
	"Statement": [
		{
			"Sid": "AllowPublicRead",
			"Effect": "Allow",
			"Principal": {
				"AWS": "*"
			},
			"Action": "s3:GetObject",
			"Resource": "arn:aws:s3:::mybucket/*"
		}
	]
}

I created the bucket an hour ago. May I know what the issue is?

Make compatible with mdg:camera

So I would like to use s3 with the mdg:camera package which gives you a base64 encoded data-uri. Converting this to a blob and uploading it to s3 would be great.

do you have any insight on this?

I'll have a look at how to do it and make a PR if it works ;)

Does the uploaded file go through the server?

Sorry for my ignorance, but do file uploads go straight to Amazon or does the data have to pass through the server first? That is, if I'm getting a bunch of uploads, will the Meteor server become a bottleneck? Thanks!

ReferenceError: S3 is not defined

Meteor v0.8.2
S3 v2.1.4
Added config to /server/config.js

S3.config = {
    key: 'amazonKey',
    secret: 'amazonSecret',
    bucket: 'bucketName',
    directory: '/subfolder/' //This is optional, defaults to root
};

Getting this error:

Your application is crashing. Waiting for file change.
ReferenceError: S3 is not defined
 (STDERR)     at app/server/config.js:1:36
 (STDERR)     at app/server/config.js:8:3

S3.knox.putStream

I had this working locally with ease (thank you), and after testing on a server (modulus) all seemed fine, until today! I try to upload a small jpeg and get a 500 error in my browser console:

GET http://[my-app].onmodulus.net/sockjs/info?cb=8ufu8t_rw3

and then the server crashes and restarts, with this in the log:

/mnt/data/1/programs/server/packages/lepozepo_s3.js:58
throw new Meteor.Error("S3.knox.putStream", err);
^
Error: Error: CERT_UNTRUSTED [S3.knox.putStream]
at ClientRequest. (packages/lepozepo:s3/server/methods.coffee:55:14)
at ClientRequest.f (/mnt/data/1/node_modules/once/once.js:17:25)
at ClientRequest.emit (events.js:117:20)
at CleartextStream.socketErrorListener (http.js:1552:9)
at CleartextStream.emit (events.js:95:17)
at SecurePair. (tls.js:1400:19)
at SecurePair.emit (events.js:92:17)
at SecurePair.maybeInitFinished (tls.js:980:10)
at CleartextStream.read as _read
at CleartextStream.Readable.read (_stream_readable.js:341:10)

Add a way to identify 2 or more separate uploaders in the same template

Hi,
I have two file upload sections with respective upload buttons on the same template. The files will be uploaded to separate folders in the S3 bucket. For example, file 1 should be uploaded to 'mybucket/folder1', the second to 'mybucket/folder2'. I don't know how to show progress for both sections separately.

Template.addClient.helpers({
	sopFiles: function(){
		return S3.collection.find();
	},
	templateFiles: function(){
		return S3.collection.find();
	}
});

I know both helpers return the same progress bar.

Can we restrict in S3.collection.find() using current upload progress?
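
The API section above documents an ops.uploader option and an uploader field on each result, so one possible approach is to tag each upload and filter on that field. The filter is shown standalone on plain objects (the "sop"/"template" names come from the question, and the sample records are invented); in an app you would pass a selector like S3.collection.find({ uploader: "sop" }):

```javascript
// Sketch: filter upload records by the documented `uploader` field.
function filesFor(records, uploaderName) {
  return records.filter(function (r) { return r.uploader === uploaderName; });
}

// Sample records shaped like upload results (values invented):
var records = [
  { uploader: "sop", percent_uploaded: 40 },
  { uploader: "template", percent_uploaded: 90 }
];

// In an app:
//   S3.upload({ files: files1, path: "folder1", uploader: "sop" }, cb);
//   S3.upload({ files: files2, path: "folder2", uploader: "template" }, cb);
//   each helper would then query S3.collection with the matching uploader name.
```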

Restrict File types?

How do I restrict file types for certain uploads, such as images only (.png, .jpg, .gif...) or document files (.docx, .txt, .pdf...)?
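
The package itself doesn't document a type filter, but since the FileList is available client-side before S3.upload is called, an extension check can gate the call. A sketch; the allowed list is only an example:

```javascript
// Sketch: allow only certain file extensions before calling S3.upload.
var ALLOWED = ["png", "jpg", "jpeg", "gif"];

function isAllowed(filename) {
  var ext = filename.split(".").pop().toLowerCase();
  return ALLOWED.indexOf(ext) !== -1;
}

// In the click handler:
//   var files = $("input.file_bag")[0].files;
//   if (!isAllowed(files[0].name)) { alert("Images only"); return; }
```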

Uploading different object other than File?

I want to be able to upload other file types such as Blobs to get image uploads working through Canvas. I'm not always going to have users input files, because I want to be able to upload without a user having to attach their file to an input field.
