
S3Proxy


S3Proxy implements the S3 API and proxies requests, enabling several use cases:

  • translation from S3 to Backblaze B2, EMC Atmos, Google Cloud, Microsoft Azure, and OpenStack Swift
  • testing without Amazon by using the local filesystem
  • extension via middlewares
  • embedding into Java applications

Usage with Docker

Docker Hub hosts a Docker image and has instructions on how to run it.

Usage without Docker

Users can download releases from GitHub. Developers can build the project by running mvn package, which produces a binary at target/s3proxy. S3Proxy requires Java 11 or newer to run.

Configure S3Proxy via a properties file. An example using the local file system as the storage backend with anonymous access:

s3proxy.authorization=none
s3proxy.endpoint=http://127.0.0.1:8080
jclouds.provider=filesystem
jclouds.filesystem.basedir=/tmp/s3proxy

First create the filesystem basedir:

mkdir /tmp/s3proxy

Next run S3Proxy. Linux and Mac OS X users can run the executable jar:

chmod +x s3proxy
s3proxy --properties s3proxy.conf

Windows users must explicitly invoke java:

java -jar s3proxy --properties s3proxy.conf

Finally test by creating a bucket then listing all the buckets:

$ curl --request PUT http://localhost:8080/testbucket

$ curl http://localhost:8080/
<?xml version="1.0" ?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>75aa57f09aa0c8caeab4f8c24e99d10f8e7faeebf76c078efc7c6caea54ba06a</ID><DisplayName>[email protected]</DisplayName></Owner><Buckets><Bucket><Name>testbucket</Name><CreationDate>2015-08-05T22:16:24.000Z</CreationDate></Bucket></Buckets></ListAllMyBucketsResult>

Usage with Java

Maven Central hosts S3Proxy artifacts and the wiki has instructions on Java use.
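
A minimal embedding sketch, following the builder pattern described in the wiki; the transient provider and placeholder credentials are illustrative, so check the method names against the S3Proxy version you depend on:

import java.net.URI;

import org.gaul.s3proxy.S3Proxy;
import org.jclouds.ContextBuilder;
import org.jclouds.blobstore.BlobStoreContext;

public final class EmbeddedS3Proxy {
    public static void main(String[] args) throws Exception {
        // Back the proxy with jclouds' in-memory "transient" provider.
        BlobStoreContext context = ContextBuilder
                .newBuilder("transient")
                .credentials("identity", "credential")
                .build(BlobStoreContext.class);

        // Serve that blobstore over the S3 API on localhost:8080.
        S3Proxy s3Proxy = S3Proxy.builder()
                .blobStore(context.getBlobStore())
                .endpoint(URI.create("http://127.0.0.1:8080"))
                .build();
        s3Proxy.start();
    }
}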

Supported storage backends

  • atmos
  • aws-s3 (Amazon-only)
  • azureblob
  • b2
  • filesystem (on-disk storage)
  • google-cloud-storage
  • openstack-swift
  • rackspace-cloudfiles-uk and rackspace-cloudfiles-us
  • s3 (all implementations)
  • transient (in-memory storage)

See the wiki for examples of configurations.
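
As an illustrative sketch, proxying to AWS S3 with separate local credentials might look like the following; the identity and credential values are placeholders, and provider-specific settings are covered in the wiki:

s3proxy.authorization=aws-v2
s3proxy.endpoint=http://127.0.0.1:8080
s3proxy.identity=local-identity
s3proxy.credential=local-credential
jclouds.provider=aws-s3
jclouds.identity=remote-aws-access-key
jclouds.credential=remote-aws-secret-key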

Assigning buckets to backends

S3Proxy can be configured to assign buckets to different backends with the same credentials. The configuration in the properties file is as follows:

s3proxy.bucket-locator.1=bucket
s3proxy.bucket-locator.2=another-bucket

In addition to the explicit names, glob syntax can be used to configure many buckets for a given backend.

A bucket (or a glob) cannot be assigned to multiple backends.
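
For example, a glob pattern could route a whole family of buckets to a third configured backend; the pattern below is illustrative:

s3proxy.bucket-locator.3=log-*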

Middlewares

S3Proxy can modify its behavior via middlewares; see the wiki for the available middlewares and how to configure them.

SSL Support

S3Proxy can listen on HTTPS by setting the secure-endpoint and configuring a keystore. The dedicated wiki page describes how to configure S3Proxy for SSL support with Docker, Kubernetes, or plain Java.
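
A minimal properties sketch, assuming the keystore-related property names documented in the wiki (verify them against your S3Proxy version):

s3proxy.secure-endpoint=https://127.0.0.1:8443
s3proxy.keystore-path=keystore.jks
s3proxy.keystore-password=password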

Limitations

S3Proxy has broad compatibility with the S3 API; however, it does not support:

  • ACLs other than private and public-read
  • BitTorrent hosting
  • bucket logging
  • bucket policies
  • CORS bucket operations like getting or setting the CORS configuration for a bucket. S3Proxy only supports a static configuration (see below).
  • hosting static websites
  • object server-side encryption
  • object tagging
  • object versioning, see #74
  • POST upload policies, see #73
  • requester pays buckets
  • select object content

S3Proxy emulates the following operations:

  • copy multi-part objects, see #76

S3Proxy has basic CORS preflight and actual request/response handling. It can be configured within the properties file (and corresponding ENV variables for Docker):

s3proxy.cors-allow-origins=https://example\.com https://.+\.example\.com https://example\.cloud
s3proxy.cors-allow-methods=GET PUT
s3proxy.cors-allow-headers=Accept Content-Type
s3proxy.cors-allow-credential=true

CORS cannot be configured per bucket. s3proxy.cors-allow-all=true will accept any origin and header. Actual CORS requests are supported for GET, PUT, POST, HEAD and DELETE methods.
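
As an illustration, a CORS preflight request can be exercised with curl; with a matching configuration like the one above, the response should echo the allowed origin and methods (the bucket and blob names are placeholders):

$ curl -i --request OPTIONS http://localhost:8080/testbucket/blob \
    --header 'Origin: https://example.com' \
    --header 'Access-Control-Request-Method: PUT'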

The wiki collects compatibility notes for specific storage backends.

Support

References

  • Apache jclouds provides storage backend support for S3Proxy
  • Ceph s3-tests help maintain and improve compatibility with the S3 API
  • fake-s3, gofakes3, minio, S3 ninja, and s3rver provide functionality similar to S3Proxy when using the filesystem backend
  • GlacierProxy and SwiftProxy provide similar functionality for the Amazon Glacier and OpenStack Swift APIs
  • s3mock mocks the S3 API for Java/Scala projects
  • sbt-s3 runs S3Proxy via the Scala Build Tool
  • swift3 provides an S3 middleware for OpenStack Swift
  • Zenko provides similar multi-cloud functionality

License

Copyright (C) 2014-2021 Andrew Gaul

Licensed under the Apache License, Version 2.0

s3proxy's People

Contributors

ansman, chaithanyagk, cstamas, decard6, dependabot[bot], gaul, jixinchi, johnnyaug, kahing, kinoute, kishorebattula, larshagencognite, liamwhite, massdosage, mmezei, raphink, reimannf, ritazh, ryanfaircloth, shenghu, snpz, srstsavage, srujandeshpande, st-h, steven-sheehy, sullis, thiagodasilva, timuralp, xgourmandin, zvikagart


s3proxy's Issues

Support service paths

Some object stores like Eucalyptus Walrus use service paths, e.g., https://host/services/Walrus/container-name/blob-name. S3Proxy supports these for the S3 back-end via jclouds.s3.service-path; it should support them for the front-end as well.
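
For reference, the back-end side of this might be configured with a sketch like the following (placeholder endpoint and credentials); the front-end has no equivalent setting yet, which is what this issue requests:

jclouds.provider=s3
jclouds.endpoint=https://host
jclouds.identity=remote-identity
jclouds.credential=remote-credential
jclouds.s3.service-path=/services/Walrus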

swift: how to select a region? The first entry is always selected.

main o.j.l.s.i.GetRegionIdMatchingProviderURIOrNull:74 |::] failed to find key for value  https://auth.cloud.ovh.net/v2.0 in {GRA1=https://storage.gra1.cloud.ovh.net/v1/AUTH_xxx, BHS1=https://storage.bhs1.cloud.ovh.net/v1/AUTH_xxx, SBG1=https://storage.sbg1.cloud.ovh.net/v1/AUTH_xxx}; choosing first: GRA1

my config:

s3proxy.endpoint=http://127.0.0.1:8080
s3proxy.authorization=aws-v2
s3proxy.identity=xxx
s3proxy.credential=xxx
jclouds.provider=openstack-swift
jclouds.endpoint=https://auth.cloud.ovh.net/v2.0
jclouds.api=swift
jclouds.regions=SBG1
jclouds.identity=1xx9:xxx
jclouds.credential=xxx

NumberFormatException on open range

A half-open range, like -500 or 9500-, results in a NumberFormatException because both parts of the range are always parsed as long.

stacktrace:

java.lang.NumberFormatException: For input string: ""
    at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65) ~[na:1.7.0_51]
    at java.lang.Long.parseLong(Long.java:453) ~[na:1.7.0_51]
    at java.lang.Long.parseLong(Long.java:483) ~[na:1.7.0_51]
    at org.gaul.s3proxy.S3ProxyHandler.handleGetBlob(S3ProxyHandler.java:499) ~[s3proxy:1.0.0]
    at org.gaul.s3proxy.S3ProxyHandler.handle(S3ProxyHandler.java:152) ~[s3proxy:1.0.0]
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97) ~[s3proxy:1.0.0]
    at org.eclipse.jetty.server.Server.handle(Server.java:485) ~[s3proxy:1.0.0]
    at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:290) ~[s3proxy:1.0.0]
    at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:248) [s3proxy:1.0.0]
    at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540) [s3proxy:1.0.0]
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:606) [s3proxy:1.0.0]
    at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:535) [s3proxy:1.0.0]
    at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
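
A possible fix is to treat an empty side of the range as open; the method below is an illustrative sketch, not the actual S3ProxyHandler code:

class RangeParser {
    // Parse a Range header value such as "bytes=0-499", "bytes=9500-" or
    // "bytes=-500" into [first, last] offsets without throwing
    // NumberFormatException on the empty side of a half-open range.
    static long[] parseRange(String header, long blobSize) {
        String spec = header.substring("bytes=".length());
        int dash = spec.indexOf('-');
        String first = spec.substring(0, dash);
        String last = spec.substring(dash + 1);
        if (first.isEmpty()) {
            // suffix range: the last N bytes of the blob
            long suffix = Long.parseLong(last);
            return new long[] {Math.max(0, blobSize - suffix), blobSize - 1};
        } else if (last.isEmpty()) {
            // open-ended range: from an offset to the end of the blob
            return new long[] {Long.parseLong(first), blobSize - 1};
        } else {
            return new long[] {Long.parseLong(first), Long.parseLong(last)};
        }
    }
}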

Signature doesn't match

Hello,

I'm getting signature issues, even though I've set s3proxy.credential and s3proxy.identity to the same values as in my local ~/.aws/credentials file. Am I misreading the documentation, or should this work?

EDIT: The problem was with my setup.

Support multiple providers/configurations

It would be great to be able to support multiple providers/configurations through a single S3Proxy process. For example, by specifying AWS S3 and GCS, one could transfer data from one provider to another (there are a bunch of other use cases).

One way to do this would be to use the location (but that feels like an incorrect patch, as the location itself may be useful to express).

Another way would be through an extended endpoint path, e.g. http://127.0.0.1/aws and http://127.0.0.1/gcs.

jclouds integration test failures

We have several remaining failures:

  BucketsLiveTest.testBucketLogging:214 » AWSResponse request GET http://localho...
  BucketsLiveTest.testBucketPayer:183 » AWSResponse request GET http://localhost...
  BucketsLiveTest.testUpdateBucketACL:127->checkGrants:137 AccessControlList{owner=org.jclouds.s3.domain.CanonicalUser@52bd459b, grants=[Grant{grantee=CanonicalUserGrantee{displayName='[email protected]', identifier='75aa57f09aa0c8caeab4f8c24e99d10f8e7faeebf76c078efc7c6caea54ba06a'}, permission=FULL_CONTROL}]} expected [4] but found [1]
  S3ClientLiveTest.testCopyCannedAccessPolicyPublic:145 » UnknownHost gaul-blobs...
  S3ClientLiveTest.testCopyIfMatch:442 » AWSResponse request PUT http://localhos...
  S3ClientLiveTest.testCopyIfModifiedSince:389 » AWSResponse request PUT http://...
  S3ClientLiveTest.testCopyIfNoneMatch:464 » AWSResponse request PUT http://loca...
  S3ClientLiveTest.testCopyIfUnmodifiedSince:419 » AWSResponse request PUT http:...
  S3BlobIntegrationLiveTest>BaseBlobIntegrationTest.testPutFileParallel:153 » Timeout
  S3BlobIntegrationLiveTest>BaseBlobIntegrationTest.testSetBlobAccess:668
Expecting:
 <PRIVATE>
to be equal to:
 <PUBLIC_READ>
but was not.
  S3ContainerIntegrationLiveTest>BaseContainerIntegrationTest.testSetContainerAccess:511
Expecting:
 <PRIVATE>
to be equal to:
 <PUBLIC_READ>
but was not.
  S3ClientLiveTest.testMetadataWithCacheControlAndContentDisposition:319->assertCacheControl:328 NullPointer
  S3ClientLiveTest.testPublicWriteOnObject:165 » AWSResponse request PUT http://...
  S3ClientLiveTest.testPutCannedAccessPolicyPublic:125 » UnknownHost gaul-blobst...
  S3ClientLiveTest.testUpdateObjectACL:212->checkGrants:576 AccessControlList{owner=org.jclouds.s3.domain.CanonicalUser@52bd459b, grants=[Grant{grantee=CanonicalUserGrantee{displayName='[email protected]', identifier='75aa57f09aa0c8caeab4f8c24e99d10f8e7faeebf76c078efc7c6caea54ba06a'}, permission=FULL_CONTROL}]} expected [4] but found [1]

Tests run: 120, Failures: 15, Errors: 0, Skipped: 6

native multi-part upload

AWS-S3 limits single-part uploads to 5 GB and provides multi-part uploads to allow larger blobs. This API consists of initiate, uploadPart, complete, abort, and listParts. How can these multiple calls be translated into the single jclouds method BlobStore.putBlob(blob, multipart())? Can S3Proxy provide a dynamic Payload variant that demuxes these calls?

support authorization

S3Proxy ignores the Authorization header in S3 requests. It should validate this against configurable local credentials.
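
For illustration, validating an AWS v2 signature amounts to recomputing the base64-encoded HMAC-SHA1 of the canonical string-to-sign with the locally configured credential and comparing it to the presented value; the sketch below is hypothetical and not S3Proxy's implementation:

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

class V2SignatureValidator {
    static boolean isValid(String stringToSign, String secretKey,
            String presentedSignature) throws Exception {
        // Recompute the signature with the locally configured secret key.
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(secretKey.getBytes(StandardCharsets.UTF_8), "HmacSHA1"));
        String expected = Base64.getEncoder()
                .encodeToString(mac.doFinal(stringToSign.getBytes(StandardCharsets.UTF_8)));
        // Constant-time comparison against the value from the Authorization header.
        return MessageDigest.isEqual(expected.getBytes(StandardCharsets.UTF_8),
                presentedSignature.getBytes(StandardCharsets.UTF_8));
    }
}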

Release S3Proxy 1.5.0

Master has three important features: v4 signing #24, native multi-part upload #2, and native copy object #46. These require pre-release jclouds and we must wait until its next release.

Native multipart copy

Presently S3Proxy emulates multipart copy with a range getBlob request followed by uploadPart. Instead jclouds should offer native support. Note that this will need the same Azure part size workaround as the existing multipart upload code.

S3Proxy incorrectly demangles _$folder_ blob names

jclouds demangles blob names to emulate directories, which is not useful to S3Proxy. @kahing suggests working around this with:

try {
    // Override jclouds' directory suffixes via reflection so that only "/" is
    // treated as a directory marker and _$folder_ names pass through unchanged.
    Field f = BlobStoreConstants.class.getDeclaredField("DIRECTORY_SUFFIXES");
    f.setAccessible(true);
    Field modifiersField = Field.class.getDeclaredField("modifiers");
    modifiersField.setAccessible(true);
    modifiersField.setInt(f, f.getModifiers() & ~Modifier.FINAL);
    f.set(null, ImmutableList.of("/"));
} catch (NoSuchFieldException | IllegalAccessException e) {
    throw propagate(e);
}

error on copy

Copying within the same bucket results in an error on the transient store:

W 08-12 11:23:41.959 qtp1011325276-17 o.e.jetty.server.HttpChannel:372 |::] /bucket1/test/data/kw56pd6b5kr3ti6oo7vd44hbfu
org.jclouds.blobstore.ContainerNotFoundException:  not found: container  not in [bucket1]
    at org.jclouds.blobstore.LocalAsyncBlobStore.cnfe(LocalAsyncBlobStore.java:222) ~[s3proxy:1.0.0]
    at org.jclouds.blobstore.LocalAsyncBlobStore.getBlob(LocalAsyncBlobStore.java:432) ~[s3proxy:1.0.0]
    at org.jclouds.blobstore.internal.BaseAsyncBlobStore.getBlob(BaseAsyncBlobStore.java:244) ~[s3proxy:1.0.0]
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.7.0_51]
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[na:1.7.0_51]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.7.0_51]
    at java.lang.reflect.Method.invoke(Method.java:606) ~[na:1.7.0_51]
    at com.google.inject.internal.DelegatingInvocationHandler.invoke(DelegatingInvocationHandler.java:37) ~[s3proxy:1.0.0]
    at com.sun.proxy.$Proxy40.getBlob(Unknown Source) ~[na:na]
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.7.0_51]
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[na:1.7.0_51]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.7.0_51]
    at java.lang.reflect.Method.invoke(Method.java:606) ~[na:1.7.0_51]
    at com.google.common.reflect.Invokable$MethodInvokable.invokeInternal(Invokable.java:197) ~[s3proxy:1.0.0]
    at com.google.common.reflect.Invokable.invoke(Invokable.java:102) ~[s3proxy:1.0.0]
    at org.jclouds.rest.internal.InvokeAndCallGetOnFutures.apply(InvokeAndCallGetOnFutures.java:67) ~[s3proxy:1.0.0]
    at org.jclouds.rest.internal.InvokeAndCallGetOnFutures.apply(InvokeAndCallGetOnFutures.java:39) ~[s3proxy:1.0.0]
    at org.jclouds.rest.internal.DelegatesToInvocationFunction.handle(DelegatesToInvocationFunction.java:156) ~[s3proxy:1.0.0]
    at org.jclouds.rest.internal.DelegatesToInvocationFunction.invoke(DelegatesToInvocationFunction.java:123) ~[s3proxy:1.0.0]
    at com.sun.proxy.$Proxy41.getBlob(Unknown Source) ~[na:na]
    at org.gaul.s3proxy.S3ProxyHandler.handleCopyBlob(S3ProxyHandler.java:565) ~[s3proxy:1.0.0]
    at org.gaul.s3proxy.S3ProxyHandler.handle(S3ProxyHandler.java:177) ~[s3proxy:1.0.0]
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97) ~[s3proxy:1.0.0]
    at org.eclipse.jetty.server.Server.handle(Server.java:485) ~[s3proxy:1.0.0]
    at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:290) ~[s3proxy:1.0.0]
    at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:248) [s3proxy:1.0.0]
    at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540) [s3proxy:1.0.0]
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:606) [s3proxy:1.0.0]
    at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:535) [s3proxy:1.0.0]
    at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]

listen on HTTPS

S3Proxy only supports listening on the HTTP protocol. It should also support HTTPS to allow use on untrusted networks. How should Jetty be configured to allow this while minimizing the complexity of certificate management for users?

not authorized for swift and openstack-swift for softlayer object store backend

I am attempting to connect S3Proxy to a SoftLayer object storage, which should be equivalent to Swift 2.2.

Pasted below is the error with the swift provider (removed auth string, replaced with ****)

I am using current master for this exercise. This first error is after successfully putting a file and then trying to get the same exact file.

Any help would be appreciated. Should I revert to a previously tagged release?

org.jclouds.rest.AuthorizationException: command: GET https://dal05.objectstorage.softlayer.net/v1/AUTH_****/container_0/ HTTP/1.1 failed with response: HTTP/1.1 401 Unauthorized; content: [<html><h1>Unauthorized</h1><p>This server could not verify that you are authorized to access the document you requested.</p></html>]
    at org.jclouds.openstack.swift.handlers.ParseSwiftErrorFromHttpResponse.handleError(ParseSwiftErrorFromHttpResponse.java:58) ~[s3proxy-original:1.5.0-SNAPSHOT]
    at org.jclouds.http.handlers.DelegatingErrorHandler.handleError(DelegatingErrorHandler.java:65) ~[s3proxy-original:1.5.0-SNAPSHOT]
    at org.jclouds.http.internal.BaseHttpCommandExecutorService.shouldContinue(BaseHttpCommandExecutorService.java:136) ~[s3proxy-original:1.5.0-SNAPSHOT]
    at org.jclouds.http.internal.BaseHttpCommandExecutorService.invoke(BaseHttpCommandExecutorService.java:105) ~[s3proxy-original:1.5.0-SNAPSHOT]
    at org.jclouds.rest.internal.InvokeHttpMethod.invoke(InvokeHttpMethod.java:90) ~[s3proxy-original:1.5.0-SNAPSHOT]
    at org.jclouds.rest.internal.InvokeHttpMethod.apply(InvokeHttpMethod.java:73) ~[s3proxy-original:1.5.0-SNAPSHOT]
    at org.jclouds.rest.internal.InvokeHttpMethod.apply(InvokeHttpMethod.java:44) ~[s3proxy-original:1.5.0-SNAPSHOT]
    at org.jclouds.rest.internal.DelegatesToInvocationFunction.handle(DelegatesToInvocationFunction.java:156) ~[s3proxy-original:1.5.0-SNAPSHOT]
    at org.jclouds.rest.internal.DelegatesToInvocationFunction.invoke(DelegatesToInvocationFunction.java:123) ~[s3proxy-original:1.5.0-SNAPSHOT]
    at com.sun.proxy.$Proxy51.invoke(Unknown Source) ~[na:na]
    at org.gaul.s3proxy.S3ProxyHandler.doHandleAnonymous(S3ProxyHandler.java:561) ~[s3proxy-original:1.5.0-SNAPSHOT]
    at org.gaul.s3proxy.S3ProxyHandler.doHandle(S3ProxyHandler.java:294) ~[s3proxy-original:1.5.0-SNAPSHOT]
    at org.gaul.s3proxy.S3ProxyHandler.handle(S3ProxyHandler.java:237) ~[s3proxy-original:1.5.0-SNAPSHOT]
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97) ~[s3proxy-original:1.5.0-SNAPSHOT]
    at org.eclipse.jetty.server.Server.handle(Server.java:499) ~[s3proxy-original:1.5.0-SNAPSHOT]
    at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310) ~[s3proxy-original:1.5.0-SNAPSHOT]
    at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257) [s3proxy-original:1.5.0-SNAPSHOT]
    at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540) [s3proxy-original:1.5.0-SNAPSHOT]
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635) [s3proxy-original:1.5.0-SNAPSHOT]
    at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555) [s3proxy-original:1.5.0-SNAPSHOT]
    at java.lang.Thread.run(Thread.java:745) [na:1.7.0_79]

For kicks I tried it with the openstack-swift provider and got this on startup:

Exception in thread "main" com.google.common.util.concurrent.UncheckedExecutionException: org.jclouds.http.HttpResponseException: command: POST https://dal05.objectstorage.softlayer.net/auth/v1.0/tokens HTTP/1.1 failed with response: HTTP/1.1 400 Bad Request; content: [<html><h1>Bad Request</h1><p>The server could not comply with the request since it is either malformed or otherwise incorrect.</p></html>]
    at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2201)
    at com.google.common.cache.LocalCache.get(LocalCache.java:3934)
    at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3938)
    at com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4821)
    at com.google.common.cache.LocalCache$LocalLoadingCache.getUnchecked(LocalCache.java:4827)
    at org.jclouds.openstack.keystone.v2_0.config.KeystoneAuthenticationModule$2.get(KeystoneAuthenticationModule.java:252)
    at org.jclouds.openstack.keystone.v2_0.config.KeystoneAuthenticationModule$2.get(KeystoneAuthenticationModule.java:249)
    at org.jclouds.openstack.keystone.v2_0.suppliers.LocationIdToURIFromAccessForTypeAndVersion.get(LocationIdToURIFromAccessForTypeAndVersion.java:94)
    at org.jclouds.openstack.keystone.v2_0.suppliers.LocationIdToURIFromAccessForTypeAndVersion.get(LocationIdToURIFromAccessForTypeAndVersion.java:54)
    at org.jclouds.rest.suppliers.MemoizedRetryOnTimeOutButNotOnAuthorizationExceptionSupplier$SetAndThrowAuthorizationExceptionSupplierBackedLoader.load(MemoizedRetryOnTimeOutButNotOnAuthorizationExceptionSupplier.java:73)
    at org.jclouds.rest.suppliers.MemoizedRetryOnTimeOutButNotOnAuthorizationExceptionSupplier$SetAndThrowAuthorizationExceptionSupplierBackedLoader.load(MemoizedRetryOnTimeOutButNotOnAuthorizationExceptionSupplier.java:57)
    at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3524)
    at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2317)
    at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2280)
    at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2195)
    at com.google.common.cache.LocalCache.get(LocalCache.java:3934)
    at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3938)
    at com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4821)
    at org.jclouds.rest.suppliers.MemoizedRetryOnTimeOutButNotOnAuthorizationExceptionSupplier.get(MemoizedRetryOnTimeOutButNotOnAuthorizationExceptionSupplier.java:119)
    at org.jclouds.suppliers.SupplyKeyMatchingValueOrNull.get(SupplyKeyMatchingValueOrNull.java:52)
    at org.jclouds.rest.suppliers.MemoizedRetryOnTimeOutButNotOnAuthorizationExceptionSupplier$SetAndThrowAuthorizationExceptionSupplierBackedLoader.load(MemoizedRetryOnTimeOutButNotOnAuthorizationExceptionSupplier.java:73)
    at org.jclouds.rest.suppliers.MemoizedRetryOnTimeOutButNotOnAuthorizationExceptionSupplier$SetAndThrowAuthorizationExceptionSupplierBackedLoader.load(MemoizedRetryOnTimeOutButNotOnAuthorizationExceptionSupplier.java:57)
    at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3524)
    at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2317)
    at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2280)
    at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2195)
    at com.google.common.cache.LocalCache.get(LocalCache.java:3934)
    at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3938)
    at com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4821)
    at org.jclouds.rest.suppliers.MemoizedRetryOnTimeOutButNotOnAuthorizationExceptionSupplier.get(MemoizedRetryOnTimeOutButNotOnAuthorizationExceptionSupplier.java:119)
    at org.jclouds.openstack.swift.v1.blobstore.RegionScopedBlobStoreContext.getBlobStore(RegionScopedBlobStoreContext.java:122)
    at org.gaul.s3proxy.Main.main(Main.java:131)

allow proxy-server multi-part upload

Some object stores limit single-part object size to a much smaller size than S3, e.g., Azure's 64 MB limit. S3Proxy should use multi-part uploads in this situation and provide a configuration knob to enable it. Note that this issue discusses proxy-server transfers, whereas #2 discusses client-proxy transfers. Azure specifically needs a fix for JCLOUDS-671 to use MPU with the InputStream payloads that S3Proxy uses.

Object versioning

S3Proxy does not currently support object versioning, triggering many s3-tests failures:

ERROR: s3tests.functional.test_s3.test_versioning_bucket_create_suspend
ERROR: s3tests.functional.test_s3.test_versioning_obj_create_read_remove
ERROR: s3tests.functional.test_s3.test_versioning_obj_create_read_remove_head
ERROR: s3tests.functional.test_s3.test_versioning_obj_suspend_versions
ERROR: s3tests.functional.test_s3.test_versioning_obj_suspend_versions_simple
ERROR: s3tests.functional.test_s3.test_versioning_obj_create_versions_remove_all
ERROR: s3tests.functional.test_s3.test_versioning_obj_create_overwrite_multipart
ERROR: s3tests.functional.test_s3.test_versioning_obj_list_marker
ERROR: s3tests.functional.test_s3.test_versioning_copy_obj_version
ERROR: s3tests.functional.test_s3.test_versioning_multi_object_delete
ERROR: s3tests.functional.test_s3.test_versioning_multi_object_delete_with_marker
ERROR: s3tests.functional.test_s3.test_versioning_multi_object_delete_with_marker_create
ERROR: s3tests.functional.test_s3.test_versioned_object_acl
ERROR: s3tests.functional.test_s3.test_versioned_concurrent_object_create_concurrent_remove
ERROR: s3tests.functional.test_s3.test_versioned_concurrent_object_create_and_remove

Implementing this requires upstream jclouds work tracked by JCLOUDS-895.

support anonymous bucket and object access to public-read assets

Now that S3Proxy supports setBucketACL and setObjectACL, it should support anonymous access to public-read buckets and objects. This requires extending how we do authentication; presently S3Proxy supports only authenticated or anonymous access globally, but this should happen at the object level.

support bucket-in-hostname

S3Proxy supports bucket-in-path URLs, e.g., example.com/bucket-name/blob-name. It should also support bucket-in-hostname URLs, e.g., bucket-name.example.com/blob-name.
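
A sketch of the front-end change, assuming a configured virtual-host base domain; the parameter names are hypothetical:

class VirtualHostParser {
    // Return the bucket encoded in the Host header, or null to fall back to
    // bucket-in-path parsing. virtualHostBase would be a new configuration
    // knob, e.g. "example.com".
    static String bucketFromHost(String host, String virtualHostBase) {
        // strip an optional port
        int colon = host.indexOf(':');
        if (colon != -1) {
            host = host.substring(0, colon);
        }
        if (host.endsWith("." + virtualHostBase)) {
            return host.substring(0, host.length() - virtualHostBase.length() - 1);
        }
        return null;
    }
}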

Multipart cleanup

I noticed that when I upload a file large enough to trigger a multipart upload, some leftover data remains after the upload completes.

For example, I have an empty file called EXAMPLEJZ6e0YupT2h66iePQCc9IEbYbDUy4RTpMeoSMLPRp8Z5o1u8feSRonpvnWsKKG35t I2LB9VDPiCgTy.Gq2VxQLYjrue4Nq.NBdqI-7e9ddb3d-b582-4e9f-b099-a824cedc87b4 that lies in the bucket next to the actual uploaded file.

This "junk file" is listed by ls and can be removed by rm

I used the release 1.4.0 jar with aws-cli/1.4.2 Python/3.4.2 Linux/3.16.0-4-amd64.

POST uploads

S3Proxy does not currently support POST uploads, triggering many s3-tests failures:

ERROR: s3tests.functional.test_s3.test_post_object_anonymous_request
ERROR: s3tests.functional.test_s3.test_post_object_authenticated_request_bad_access_key
ERROR: s3tests.functional.test_s3.test_post_object_set_success_code
ERROR: s3tests.functional.test_s3.test_post_object_set_invalid_success_code
FAIL: s3tests.functional.test_s3.test_post_object_authenticated_request
FAIL: s3tests.functional.test_s3.test_post_object_upload_larger_than_chunk
FAIL: s3tests.functional.test_s3.test_post_object_set_key_from_filename
FAIL: s3tests.functional.test_s3.test_post_object_ignored_header
FAIL: s3tests.functional.test_s3.test_post_object_case_insensitive_condition_fields
FAIL: s3tests.functional.test_s3.test_post_object_escaped_field_values
FAIL: s3tests.functional.test_s3.test_post_object_success_redirect_action
FAIL: s3tests.functional.test_s3.test_post_object_invalid_date_format
FAIL: s3tests.functional.test_s3.test_post_object_no_key_specified
FAIL: s3tests.functional.test_s3.test_post_object_missing_signature
FAIL: s3tests.functional.test_s3.test_post_object_user_specified_header
FAIL: s3tests.functional.test_s3.test_post_object_condition_is_case_sensitive
FAIL: s3tests.functional.test_s3.test_post_object_expires_is_case_sensitive
FAIL: s3tests.functional.test_s3.test_post_object_missing_expires_condition
FAIL: s3tests.functional.test_s3.test_post_object_missing_conditions_list
FAIL: s3tests.functional.test_s3.test_post_object_upload_size_limit_exceeded
FAIL: s3tests.functional.test_s3.test_post_object_missing_content_length_argument
FAIL: s3tests.functional.test_s3.test_post_object_invalid_content_length_argument
FAIL: s3tests.functional.test_s3.test_post_object_upload_size_below_minimum

HTTP 204 code for DELETE requests

Expected the status code to be 200 or 202 for a DELETE:

% curl -i -H'Content-type: text/plain' -XPUT 192.168.42.43:8080/foo/bar -d 'test'
HTTP/1.1 200 OK
Date: Tue, 14 Oct 2014 23:58:35 GMT
ETag: "098f6bcd4621d373cade4e832627b4f6"
Content-Length: 0
Server: Jetty(9.2.z-SNAPSHOT)

% curl -XGET 192.168.42.43:8080/foo/bar
test%
  % curl -i -XDELETE 192.168.42.43:8080/foo/bar
HTTP/1.1 204 No Content
Date: Tue, 14 Oct 2014 23:59:37 GMT
Server: Jetty(9.2.z-SNAPSHOT)

Instead it currently seems to return 204. I'll take a peek at the code.

Blob metadata ignored with filesystem provider

% curl -i -XPUT -H'Content-Type: application/json' 192.168.42.43:8080/foo/bar -d '{"foo": "bar"}'
HTTP/1.1 200 OK
Date: Tue, 14 Oct 2014 23:49:33 GMT
ETag: "94232c5b8fc9272f6f73a1e36eb68fcf"
Content-Length: 0
Server: Jetty(9.2.z-SNAPSHOT)


% curl -i -XGET 192.168.42.43:8080/foo/bar
HTTP/1.1 200 OK
Date: Tue, 14 Oct 2014 23:49:35 GMT
Content-Type: application/unknown
Content-MD5: lCMsW4/JJy9vc6HjbraPzw==
ETag: "94232c5b8fc9272f6f73a1e36eb68fcf"
Last-Modified: Tue, 14 Oct 2014 23:49:33 GMT
Content-Length: 14
Server: Jetty(9.2.z-SNAPSHOT)

{"foo": "bar"}%

Expected the Content-Type not to default to application/unknown. This was on 6792798. I'll take a peek at the code.

Requests with unreadable characters in headers

Many s3-tests fail due to unreadable characters in headers:

ERROR: s3tests.functional.test_headers.test_object_create_bad_expect_unreadable
ERROR: s3tests.functional.test_headers.test_object_create_bad_ua_unreadable
ERROR: s3tests.functional.test_headers.test_bucket_create_bad_expect_unreadable
ERROR: s3tests.functional.test_headers.test_bucket_create_bad_ua_unreadable
ERROR: s3tests.functional.test_s3.test_object_set_get_metadata_empty_to_unreadable_prefix
ERROR: s3tests.functional.test_s3.test_object_set_get_metadata_empty_to_unreadable_suffix
ERROR: s3tests.functional.test_s3.test_object_set_get_metadata_empty_to_unreadable_infix
ERROR: s3tests.functional.test_s3.test_object_set_get_metadata_overwrite_to_unreadable_prefix
ERROR: s3tests.functional.test_s3.test_object_set_get_metadata_overwrite_to_unreadable_suffix
ERROR: s3tests.functional.test_s3.test_object_set_get_metadata_overwrite_to_unreadable_infix
FAIL: s3tests.functional.test_headers.test_object_create_bad_md5_unreadable
FAIL: s3tests.functional.test_headers.test_object_create_bad_contentlength_unreadable
FAIL: s3tests.functional.test_headers.test_object_create_bad_contenttype_unreadable
FAIL: s3tests.functional.test_headers.test_object_create_bad_authorization_unreadable
FAIL: s3tests.functional.test_headers.test_object_create_bad_date_unreadable
FAIL: s3tests.functional.test_headers.test_bucket_create_bad_contentlength_unreadable
FAIL: s3tests.functional.test_headers.test_bucket_create_bad_authorization_unreadable
FAIL: s3tests.functional.test_headers.test_bucket_create_bad_date_unreadable

Jetty immediately returns a 400 error for these requests, which prevents S3Proxy from handling them:

W 07-24 11:08:02.144 S3Proxy-14 o.e.jetty.http.HttpParser:1719 |::] Illegal character 0x4 in state=HEADER_IN_VALUE for buffer HeapByteBuffer@3d6811d8[p=279,l=410,c=16384,r=131]={PUT /gaul-33o3yv7...-meta-meta1: h\x04<<<w\r\nAuthorization:...-57-generic\r\n\r\n>>>\r\n\r\n-generic\r\n\r\n\n...\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00}
W 07-24 11:08:02.144 S3Proxy-14 o.e.jetty.http.HttpParser:1344 |::] badMessage: 400 Illegal character 0x4 for HttpChannelOverHttp@4b0947c6{r=1,c=false,a=IDLE,uri=-}

S3 compatibility test

S3Proxy should use an existing S3 compatibility tool such as https://github.com/ceph/s3-tests instead of a hodge-podge of unit tests, jclouds integration tests, and s3fs-fuse operations. Results from the latest jclouds integration test run:

Failed tests:
  S3ContainerLiveTest>BaseContainerLiveTest.testPublicAccess:75 [type=BLOB, id=null, name=hello, location={scope=PROVIDER, id=s3, description=http://127.0.0.1:8080}, uri=http://127.0.0.1:8080/gaul-blobstore-1183189886598536954/hello, userMetadata={}] expected object to not be null
  BucketsLiveTest.testBucketLogging:209->setupAclForBucketLoggingTarget:258 » IllegalState
  BucketsLiveTest.testBucketPayer:176 expected [UNRECOGNIZED] but found [BUCKET_OWNER]
  BucketsLiveTest.testPublicReadAccessPolicy:151 AccessControlList{owner=org.jclouds.s3.domain.CanonicalUser@52bd459b, grants=[Grant{grantee=CanonicalUserGrantee{displayName='[email protected]', identifier='75aa57f09aa0c8caeab4f8c24e99d10f8e7faeebf76c078efc7c6caea54ba06a'}, permission=FULL_CONTROL}]} expected [true] but found [false]
  BucketsLiveTest.testUpdateBucketACL:116 » IllegalState Your previous request t...
  S3ClientLiveTest.testCopyCannedAccessPolicyPublic:118 » Runtime request: PUT h...
  S3ClientLiveTest.testCopyIfMatch:418 » Runtime request: PUT http://127.0.0.1:8...
  S3ClientLiveTest.testCopyIfModifiedSince:369 » Runtime request: PUT http://127...
  S3ClientLiveTest.testCopyIfNoneMatch:440 » Runtime request: PUT http://127.0.0...
  S3ClientLiveTest.testCopyIfUnmodifiedSince:396 » Runtime request: PUT http://1...
  S3ClientLiveTest.testCopyObject:343 » Runtime request: PUT http://127.0.0.1:80...
  S3ClientLiveTest.testCopyWithMetadata:465 » Runtime request: PUT http://127.0....
  S3ClientLiveTest.testMetadataWithCacheControlAndContentDisposition:298->assertCacheControl:307 NullPointer
  S3ClientLiveTest.testPrivateAclIsDefaultForObject:226 expected [1] but found [0]
  S3ClientLiveTest.testPublicReadOnObject:245->BaseBlobStoreIntegrationTest.assertConsistencyAware:248->BaseBlobStoreIntegrationTest.assertConsistencyAware:235 » Runtime
  S3ClientLiveTest.testPublicWriteOnObject:147->BaseBlobStoreIntegrationTest.assertConsistencyAware:248->BaseBlobStoreIntegrationTest.assertConsistencyAware:235 » Runtime
  S3ClientLiveTest.testPutCannedAccessPolicyPublic:104 » UnknownHost gaul-blobst...
  S3ClientLiveTest.testUpdateObjectACL:180 NullPointer

Tests run: 86, Failures: 18, Errors: 0, Skipped: 5

Add mechanism to model eventual consistency

S3Proxy offers an opportunity to implement an eventual consistency middleware. This would allow a more deterministic and reliable way to uncover applications erroneously relying on strong consistency. This middleware could use two backend blobstores, writing to the first one and reading from the second, periodically syncing updates from the former to the latter.
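
A rough sketch of what such a middleware could look like with two jclouds blobstores; the names and the copy-everything sync are illustrative, and a real middleware would track deltas, handle deletes, and create missing containers:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import org.jclouds.blobstore.BlobStore;
import org.jclouds.blobstore.domain.Blob;
import org.jclouds.blobstore.domain.StorageMetadata;

final class EventuallyConsistentBlobStore {
    private final BlobStore writeStore;   // receives all writes
    private final BlobStore readStore;    // serves all reads, lags behind
    private final ScheduledExecutorService executor =
            Executors.newSingleThreadScheduledExecutor();

    EventuallyConsistentBlobStore(BlobStore writeStore, BlobStore readStore,
            long delaySeconds) {
        this.writeStore = writeStore;
        this.readStore = readStore;
        // Periodically propagate writes so readers only observe them after a delay.
        executor.scheduleWithFixedDelay(this::sync, delaySeconds, delaySeconds,
                TimeUnit.SECONDS);
    }

    String putBlob(String container, Blob blob) {
        return writeStore.putBlob(container, blob);
    }

    Blob getBlob(String container, String name) {
        return readStore.getBlob(container, name);
    }

    private void sync() {
        // Naive full copy; assumes the containers already exist in readStore.
        for (StorageMetadata container : writeStore.list()) {
            for (StorageMetadata blob : writeStore.list(container.getName())) {
                readStore.putBlob(container.getName(),
                        writeStore.getBlob(container.getName(), blob.getName()));
            }
        }
    }
}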

bucket and object canned ACLs

jclouds 1.9.0 will add bucket and object ACL support via JCLOUDS-660 and JCLOUDS-732. S3Proxy can easily add get and set ACL RPCs but needs some additional work to pass an authentication context through for unauthenticated reads.

s3cmd error because s3proxy does not return etag

It looks like s3proxy doesn't return an etag, which makes it difficult to use s3cmd with the proxy. Obviously any other client that uses the etag will also have issues.

This is on a s3cmd get s3://container_0/file.txt based on a swift provider.

I am receiving this error:

Problem: KeyError: 'etag'
S3cmd:   1.6.0+
python:   2.7.10 (default, Oct  6 2015, 11:07:59) 
[GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.0.72)]
environment LANG=en_US.UTF-8

Traceback (most recent call last):
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/s3cmd-1.6.0_-py2.7.egg/EGG-INFO/scripts/s3cmd", line 2813, in <module>
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/s3cmd-1.6.0_-py2.7.egg/EGG-INFO/scripts/s3cmd", line 2721, in main
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/s3cmd-1.6.0_-py2.7.egg/EGG-INFO/scripts/s3cmd", line 549, in cmd_object_get
  File "build/bdist.macosx-10.11-x86_64/egg/S3/S3.py", line 636, in object_get
    response = self.recv_file(request, stream, labels, start_position)
  File "build/bdist.macosx-10.11-x86_64/egg/S3/S3.py", line 1455, in recv_file
    md5_from_s3 = response["headers"]["etag"].strip('"')
KeyError: 'etag'

Exception with partial content (206)

I try to serve .mp4 files, so I put an nginx in front of s3proxy to set the right content-type (video/mp4). Once I do this I can open the following URL in the browser: http://localhost/6015_2018_2035.h264.cutted.mp4

If I don't set the content-type, the video is downloaded instead of being played in the browser (Chrome in my case).

The problem is that when Chrome tries to play the video it sends two connections. The second one responds with a 206 (Partial Content) and the right headers (byte ranges), but at the end I get the following exception:

15:26:13.819 [qtp2032647583-21 - /videos/6015_2018_2035.h264.cutted.mp4] DEBUG org.eclipse.jetty.io.WriteFlusher - update WriteFlusher@18916c5b{WRITING}:IDLE-->WRITING
15:26:13.819 [qtp2032647583-21 - /videos/6015_2018_2035.h264.cutted.mp4] DEBUG org.eclipse.jetty.io.ChannelEndPoint - flushed 32768 SelectChannelEndPoint@4465ed37{/127.0.0.1:48839<->9090,Open,in,out,-,W,1/30000,HttpConnection}{io=0,kio=0,kro=1}
15:26:13.819 [qtp2032647583-21 - /videos/6015_2018_2035.h264.cutted.mp4] DEBUG org.eclipse.jetty.io.WriteFlusher - update WriteFlusher@18916c5b{IDLE}:WRITING-->IDLE
15:26:13.819 [qtp2032647583-21 - /videos/6015_2018_2035.h264.cutted.mp4] DEBUG o.e.jetty.server.HttpConnection - org.eclipse.jetty.server.HttpConnection$SendCallback@56cd6ef[PROCESSING][i=null,cb=Blocker@69df07da{null}] generate: DONE (null,[p=32768,l=32768,c=32768,r=0],false)@COMMITTED
15:26:13.819 [qtp2032647583-21 - /videos/6015_2018_2035.h264.cutted.mp4] DEBUG o.e.jetty.server.HttpConnection - org.eclipse.jetty.server.HttpConnection$SendCallback@56cd6ef[PROCESSING][i=null,cb=Blocker@69df07da{null}] generate: FLUSH (null,[p=0,l=32768,c=32768,r=32768],false)@COMMITTED
15:26:13.819 [qtp2032647583-21 - /videos/6015_2018_2035.h264.cutted.mp4] DEBUG org.eclipse.jetty.io.WriteFlusher - write: WriteFlusher@18916c5b{IDLE} [HeapByteBuffer@3fb28ad9[p=0,l=32768,c=32768,r=32768]={<<<\x1d\x88\x11x6=\xA4\\QR\xF7Z\xD5\xFa\xB2\x90\x94...\x8b}\x1b@\xD7\x8bF\xAb\xCb\x12\xEd\x85\x06t\x0f>>>}]
15:26:13.819 [qtp2032647583-21 - /videos/6015_2018_2035.h264.cutted.mp4] DEBUG org.eclipse.jetty.io.WriteFlusher - update WriteFlusher@18916c5b{WRITING}:IDLE-->WRITING
15:26:13.819 [qtp2032647583-21 - /videos/6015_2018_2035.h264.cutted.mp4] DEBUG org.eclipse.jetty.io.ChannelEndPoint - flushed 32768 SelectChannelEndPoint@4465ed37{/127.0.0.1:48839<->9090,Open,in,out,-,W,0/30000,HttpConnection}{io=0,kio=0,kro=1}
15:26:13.819 [qtp2032647583-21 - /videos/6015_2018_2035.h264.cutted.mp4] DEBUG org.eclipse.jetty.io.WriteFlusher - update WriteFlusher@18916c5b{IDLE}:WRITING-->IDLE
15:26:13.819 [qtp2032647583-21 - /videos/6015_2018_2035.h264.cutted.mp4] DEBUG o.e.jetty.server.HttpConnection - org.eclipse.jetty.server.HttpConnection$SendCallback@56cd6ef[PROCESSING][i=null,cb=Blocker@69df07da{null}] generate: DONE (null,[p=32768,l=32768,c=32768,r=0],false)@COMMITTED
15:26:13.819 [qtp2032647583-21 - /videos/6015_2018_2035.h264.cutted.mp4] DEBUG o.e.jetty.server.HttpConnection - org.eclipse.jetty.server.HttpConnection$SendCallback@56cd6ef[PROCESSING][i=null,cb=Blocker@69df07da{null}] generate: FLUSH (null,[p=0,l=32768,c=32768,r=32768],false)@COMMITTED
15:26:13.819 [qtp2032647583-21 - /videos/6015_2018_2035.h264.cutted.mp4] DEBUG org.eclipse.jetty.io.WriteFlusher - write: WriteFlusher@18916c5b{IDLE} [HeapByteBuffer@3fb28ad9[p=0,l=32768,c=32768,r=32768]={<<<\x17\x88\x8eC\xA4C\xC5\x85\xB3\x88\x1a\xA5\xB4\xDft0\xAb...\xDd\xA5\xFd\xC1\xC9\xEb\x0b\xD7\xB9\x0f\xF3\xD329I>>>}]
15:26:13.819 [qtp2032647583-21 - /videos/6015_2018_2035.h264.cutted.mp4] DEBUG org.eclipse.jetty.io.WriteFlusher - update WriteFlusher@18916c5b{WRITING}:IDLE-->WRITING
15:26:13.819 [qtp2032647583-21 - /videos/6015_2018_2035.h264.cutted.mp4] DEBUG org.eclipse.jetty.io.ChannelEndPoint - flushed 32768 SelectChannelEndPoint@4465ed37{/127.0.0.1:48839<->9090,Open,in,out,-,W,0/30000,HttpConnection}{io=0,kio=0,kro=1}
15:26:13.820 [qtp2032647583-21 - /videos/6015_2018_2035.h264.cutted.mp4] DEBUG org.eclipse.jetty.io.WriteFlusher - update WriteFlusher@18916c5b{IDLE}:WRITING-->IDLE

...
... more than 4000 lines with the same trace
...

15:26:14.851 [qtp2032647583-18 - /videos/6015_2018_2035.h264.cutted.mp4] DEBUG org.eclipse.jetty.io.WriteFlusher - write: WriteFlusher@1703ec5e{IDLE} [HeapByteBuffer@3fb28ad9[p=0,l=32768,c=32768,r=32768]={<<<T\xEe\x07T\xE9e\x8e\xBce\x0f\x18Oa~\xBa}\xB1...\xEa\xFb\x08\xF3e\xE1ydDe?\xC9\x04\xF4\xD6>>>}]
15:26:14.851 [qtp2032647583-18 - /videos/6015_2018_2035.h264.cutted.mp4] DEBUG org.eclipse.jetty.io.WriteFlusher - update WriteFlusher@1703ec5e{WRITING}:IDLE-->WRITING
15:26:14.851 [qtp2032647583-18 - /videos/6015_2018_2035.h264.cutted.mp4] DEBUG org.eclipse.jetty.io.ChannelEndPoint - flushed 32768 SelectChannelEndPoint@4d0efa12{/127.0.0.1:48843<->9090,Open,in,out,-,W,0/30000,HttpConnection}{io=0,kio=0,kro=1}
15:26:14.851 [qtp2032647583-18 - /videos/6015_2018_2035.h264.cutted.mp4] DEBUG org.eclipse.jetty.io.WriteFlusher - update WriteFlusher@1703ec5e{IDLE}:WRITING-->IDLE
15:26:14.851 [qtp2032647583-18 - /videos/6015_2018_2035.h264.cutted.mp4] DEBUG o.e.jetty.server.HttpConnection - org.eclipse.jetty.server.HttpConnection$SendCallback@60b9d7b1[PROCESSING][i=null,cb=Blocker@35eea7f5{null}] generate: DONE (null,[p=32768,l=32768,c=32768,r=0],false)@COMMITTED
15:26:14.851 [qtp2032647583-18 - /videos/6015_2018_2035.h264.cutted.mp4] DEBUG o.e.jetty.server.HttpConnection - org.eclipse.jetty.server.HttpConnection$SendCallback@60b9d7b1[PROCESSING][i=null,cb=Blocker@35eea7f5{null}] generate: FLUSH (null,[p=0,l=32768,c=32768,r=32768],false)@COMMITTED
15:26:14.851 [qtp2032647583-18 - /videos/6015_2018_2035.h264.cutted.mp4] DEBUG org.eclipse.jetty.io.WriteFlusher - write: WriteFlusher@1703ec5e{IDLE} [HeapByteBuffer@3fb28ad9[p=0,l=32768,c=32768,r=32768]={<<<JI\x87\x1a\xF5\xDa8\xD5e\xF0\xEem\xFd�\xC12\xEc...\x9b\xEf\xA9H\xF6\xD6*\xB0~\xD7R]1-\x1b>>>}]
15:26:14.851 [qtp2032647583-18 - /videos/6015_2018_2035.h264.cutted.mp4] DEBUG org.eclipse.jetty.io.WriteFlusher - update WriteFlusher@1703ec5e{WRITING}:IDLE-->WRITING
15:26:14.853 [qtp2032647583-18 - /videos/6015_2018_2035.h264.cutted.mp4] DEBUG org.eclipse.jetty.io.WriteFlusher - write exception
org.eclipse.jetty.io.EofException: null
    at org.eclipse.jetty.io.ChannelEndPoint.flush(ChannelEndPoint.java:192) ~[jetty-io-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.io.WriteFlusher.flush(WriteFlusher.java:408) ~[jetty-io-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.io.WriteFlusher.write(WriteFlusher.java:302) ~[jetty-io-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.io.AbstractEndPoint.write(AbstractEndPoint.java:129) [jetty-io-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.server.HttpConnection$SendCallback.process(HttpConnection.java:690) [jetty-server-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.util.IteratingCallback.processing(IteratingCallback.java:246) [jetty-util-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.util.IteratingCallback.iterate(IteratingCallback.java:208) [jetty-util-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.server.HttpConnection.send(HttpConnection.java:480) [jetty-server-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.server.HttpChannel.sendResponse(HttpChannel.java:768) [jetty-server-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.server.HttpChannel.write(HttpChannel.java:801) [jetty-server-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:147) [jetty-server-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:140) [jetty-server-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:355) [jetty-server-9.2.11.v20150529.jar:9.2.11.v20150529]
    at com.google.common.io.ByteStreams.copy(ByteStreams.java:179) [guava-16.0.1.jar:na]
    at org.gaul.s3proxy.S3ProxyHandler.handleGetBlob(S3ProxyHandler.java:979) [classes/:na]
    at org.gaul.s3proxy.S3ProxyHandler.doHandle(S3ProxyHandler.java:392) [classes/:na]
    at org.gaul.s3proxy.S3ProxyHandler.handle(S3ProxyHandler.java:201) [classes/:na]
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97) [jetty-server-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.server.Server.handle(Server.java:499) [jetty-server-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310) [jetty-server-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257) [jetty-server-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540) [jetty-io-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635) [jetty-util-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555) [jetty-util-9.2.11.v20150529.jar:9.2.11.v20150529]
    at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45-internal]
Caused by: java.io.IOException: Broken pipe
    at sun.nio.ch.FileDispatcherImpl.write0(Native Method) ~[na:1.8.0_45-internal]
    at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47) ~[na:1.8.0_45-internal]
    at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) ~[na:1.8.0_45-internal]
    at sun.nio.ch.IOUtil.write(IOUtil.java:65) ~[na:1.8.0_45-internal]
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471) ~[na:1.8.0_45-internal]
    at org.eclipse.jetty.io.ChannelEndPoint.flush(ChannelEndPoint.java:170) ~[jetty-io-9.2.11.v20150529.jar:9.2.11.v20150529]
    ... 24 common frames omitted
15:26:14.853 [qtp2032647583-18 - /videos/6015_2018_2035.h264.cutted.mp4] DEBUG org.eclipse.jetty.io.WriteFlusher - update WriteFlusher@1703ec5e{IDLE}:WRITING-->IDLE
15:26:14.853 [qtp2032647583-18 - /videos/6015_2018_2035.h264.cutted.mp4] DEBUG o.e.jetty.server.HttpConnection - org.eclipse.jetty.server.HttpConnection$SendCallback@60b9d7b1[PROCESSING][i=null,cb=Blocker@35eea7f5{null}] generate: FLUSH (null,[p=0,l=32768,c=32768,r=32768],true)@COMPLETING
15:26:14.853 [qtp2032647583-18 - /videos/6015_2018_2035.h264.cutted.mp4] DEBUG org.eclipse.jetty.io.WriteFlusher - write: WriteFlusher@1703ec5e{IDLE} [HeapByteBuffer@3fb28ad9[p=0,l=32768,c=32768,r=32768]={<<<JI\x87\x1a\xF5\xDa8\xD5e\xF0\xEem\xFd�\xC12\xEc...\x9b\xEf\xA9H\xF6\xD6*\xB0~\xD7R]1-\x1b>>>}]
15:26:14.853 [qtp2032647583-18 - /videos/6015_2018_2035.h264.cutted.mp4] DEBUG org.eclipse.jetty.io.WriteFlusher - update WriteFlusher@1703ec5e{WRITING}:IDLE-->WRITING
15:26:14.853 [qtp2032647583-18 - /videos/6015_2018_2035.h264.cutted.mp4] DEBUG org.eclipse.jetty.io.WriteFlusher - write exception
org.eclipse.jetty.io.EofException: null
    at org.eclipse.jetty.io.ChannelEndPoint.flush(ChannelEndPoint.java:192) ~[jetty-io-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.io.WriteFlusher.flush(WriteFlusher.java:408) ~[jetty-io-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.io.WriteFlusher.write(WriteFlusher.java:302) ~[jetty-io-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.io.AbstractEndPoint.write(AbstractEndPoint.java:129) [jetty-io-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.server.HttpConnection$SendCallback.process(HttpConnection.java:690) [jetty-server-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.util.IteratingCallback.processing(IteratingCallback.java:246) [jetty-util-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.util.IteratingCallback.iterate(IteratingCallback.java:208) [jetty-util-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.server.HttpConnection.send(HttpConnection.java:480) [jetty-server-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.server.HttpChannel.sendResponse(HttpChannel.java:768) [jetty-server-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.server.HttpChannel.write(HttpChannel.java:801) [jetty-server-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:147) [jetty-server-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:140) [jetty-server-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.server.HttpOutput.close(HttpOutput.java:171) [jetty-server-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.gaul.s3proxy.S3ProxyHandler.handleGetBlob(S3ProxyHandler.java:981) [classes/:na]
    at org.gaul.s3proxy.S3ProxyHandler.doHandle(S3ProxyHandler.java:392) [classes/:na]
    at org.gaul.s3proxy.S3ProxyHandler.handle(S3ProxyHandler.java:201) [classes/:na]
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97) [jetty-server-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.server.Server.handle(Server.java:499) [jetty-server-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310) [jetty-server-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257) [jetty-server-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540) [jetty-io-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635) [jetty-util-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555) [jetty-util-9.2.11.v20150529.jar:9.2.11.v20150529]
    at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45-internal]
Caused by: java.io.IOException: Broken pipe
    at sun.nio.ch.FileDispatcherImpl.write0(Native Method) ~[na:1.8.0_45-internal]
    at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47) ~[na:1.8.0_45-internal]
    at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) ~[na:1.8.0_45-internal]
    at sun.nio.ch.IOUtil.write(IOUtil.java:65) ~[na:1.8.0_45-internal]
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471) ~[na:1.8.0_45-internal]
    at org.eclipse.jetty.io.ChannelEndPoint.flush(ChannelEndPoint.java:170) ~[jetty-io-9.2.11.v20150529.jar:9.2.11.v20150529]
    ... 23 common frames omitted
15:26:14.853 [qtp2032647583-18 - /videos/6015_2018_2035.h264.cutted.mp4] DEBUG org.eclipse.jetty.io.WriteFlusher - update WriteFlusher@1703ec5e{IDLE}:WRITING-->IDLE
15:26:14.854 [qtp2032647583-18 - /videos/6015_2018_2035.h264.cutted.mp4] DEBUG org.eclipse.jetty.server.HttpOutput - 
org.eclipse.jetty.io.EofException: null
    at org.eclipse.jetty.io.ChannelEndPoint.flush(ChannelEndPoint.java:192) ~[jetty-io-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.io.WriteFlusher.flush(WriteFlusher.java:408) ~[jetty-io-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.io.WriteFlusher.write(WriteFlusher.java:302) ~[jetty-io-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.io.AbstractEndPoint.write(AbstractEndPoint.java:129) ~[jetty-io-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.server.HttpConnection$SendCallback.process(HttpConnection.java:690) ~[jetty-server-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.util.IteratingCallback.processing(IteratingCallback.java:246) ~[jetty-util-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.util.IteratingCallback.iterate(IteratingCallback.java:208) ~[jetty-util-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.server.HttpConnection.send(HttpConnection.java:480) [jetty-server-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.server.HttpChannel.sendResponse(HttpChannel.java:768) [jetty-server-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.server.HttpChannel.write(HttpChannel.java:801) [jetty-server-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:147) ~[jetty-server-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:140) ~[jetty-server-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.server.HttpOutput.close(HttpOutput.java:171) ~[jetty-server-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.gaul.s3proxy.S3ProxyHandler.handleGetBlob(S3ProxyHandler.java:981) [classes/:na]
    at org.gaul.s3proxy.S3ProxyHandler.doHandle(S3ProxyHandler.java:392) [classes/:na]
    at org.gaul.s3proxy.S3ProxyHandler.handle(S3ProxyHandler.java:201) [classes/:na]
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97) [jetty-server-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.server.Server.handle(Server.java:499) [jetty-server-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310) [jetty-server-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257) [jetty-server-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540) [jetty-io-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635) [jetty-util-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555) [jetty-util-9.2.11.v20150529.jar:9.2.11.v20150529]
    at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45-internal]
Caused by: java.io.IOException: Broken pipe
    at sun.nio.ch.FileDispatcherImpl.write0(Native Method) ~[na:1.8.0_45-internal]
    at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47) ~[na:1.8.0_45-internal]
    at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) ~[na:1.8.0_45-internal]
    at sun.nio.ch.IOUtil.write(IOUtil.java:65) ~[na:1.8.0_45-internal]
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471) ~[na:1.8.0_45-internal]
    at org.eclipse.jetty.io.ChannelEndPoint.flush(ChannelEndPoint.java:170) ~[jetty-io-9.2.11.v20150529.jar:9.2.11.v20150529]
    ... 23 common frames omitted
15:26:14.854 [qtp2032647583-18 - /videos/6015_2018_2035.h264.cutted.mp4] DEBUG o.eclipse.jetty.io.AbstractEndPoint - onClose SelectChannelEndPoint@4d0efa12{/127.0.0.1:48843<->9090,CLOSED,in,out,-,-,3/30000,HttpConnection}{io=0,kio=0,kro=1}
15:26:14.854 [qtp2032647583-18 - /videos/6015_2018_2035.h264.cutted.mp4] DEBUG org.eclipse.jetty.io.ChannelEndPoint - close SelectChannelEndPoint@4d0efa12{/127.0.0.1:48843<->9090,CLOSED,in,out,-,-,3/30000,HttpConnection}{io=0,kio=0,kro=1}
15:26:14.854 [qtp2032647583-18 - /videos/6015_2018_2035.h264.cutted.mp4] DEBUG org.eclipse.jetty.io.SelectorManager - Destroyed SelectChannelEndPoint@4d0efa12{/127.0.0.1:48843<->9090,CLOSED,ISHUT,OSHUT,-,-,3/30000,HttpConnection}{io=0,kio=-1,kro=-1}
15:26:14.854 [qtp2032647583-18 - /videos/6015_2018_2035.h264.cutted.mp4] DEBUG o.e.jetty.io.AbstractConnection - onClose HttpConnection@711dd5b5{FILLING}
15:26:14.854 [qtp2032647583-18 - /videos/6015_2018_2035.h264.cutted.mp4] DEBUG o.eclipse.jetty.io.AbstractEndPoint - onClose SelectChannelEndPoint@4d0efa12{/127.0.0.1:48843<->9090,CLOSED,ISHUT,OSHUT,-,-,3/30000,HttpConnection}{io=0,kio=-1,kro=-1}
15:26:14.854 [qtp2032647583-18 - /videos/6015_2018_2035.h264.cutted.mp4] DEBUG org.eclipse.jetty.server.HttpChannel - 
org.eclipse.jetty.io.EofException: null
    at org.eclipse.jetty.io.ChannelEndPoint.flush(ChannelEndPoint.java:192) ~[jetty-io-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.io.WriteFlusher.flush(WriteFlusher.java:408) ~[jetty-io-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.io.WriteFlusher.write(WriteFlusher.java:302) ~[jetty-io-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.io.AbstractEndPoint.write(AbstractEndPoint.java:129) ~[jetty-io-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.server.HttpConnection$SendCallback.process(HttpConnection.java:690) ~[jetty-server-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.util.IteratingCallback.processing(IteratingCallback.java:246) ~[jetty-util-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.util.IteratingCallback.iterate(IteratingCallback.java:208) ~[jetty-util-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.server.HttpConnection.send(HttpConnection.java:480) [jetty-server-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.server.HttpChannel.sendResponse(HttpChannel.java:768) ~[jetty-server-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.server.HttpChannel.write(HttpChannel.java:801) ~[jetty-server-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:147) ~[jetty-server-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:140) ~[jetty-server-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:355) ~[jetty-server-9.2.11.v20150529.jar:9.2.11.v20150529]
    at com.google.common.io.ByteStreams.copy(ByteStreams.java:179) ~[guava-16.0.1.jar:na]
    at org.gaul.s3proxy.S3ProxyHandler.handleGetBlob(S3ProxyHandler.java:979) ~[classes/:na]
    at org.gaul.s3proxy.S3ProxyHandler.doHandle(S3ProxyHandler.java:392) ~[classes/:na]
    at org.gaul.s3proxy.S3ProxyHandler.handle(S3ProxyHandler.java:201) ~[classes/:na]
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97) ~[jetty-server-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.server.Server.handle(Server.java:499) ~[jetty-server-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310) ~[jetty-server-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257) [jetty-server-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540) [jetty-io-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635) [jetty-util-9.2.11.v20150529.jar:9.2.11.v20150529]
    at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555) [jetty-util-9.2.11.v20150529.jar:9.2.11.v20150529]
    at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45-internal]
Caused by: java.io.IOException: Broken pipe
    at sun.nio.ch.FileDispatcherImpl.write0(Native Method) ~[na:1.8.0_45-internal]
    at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47) ~[na:1.8.0_45-internal]
    at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) ~[na:1.8.0_45-internal]
    at sun.nio.ch.IOUtil.write(IOUtil.java:65) ~[na:1.8.0_45-internal]
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471) ~[na:1.8.0_45-internal]
    at org.eclipse.jetty.io.ChannelEndPoint.flush(ChannelEndPoint.java:170) ~[jetty-io-9.2.11.v20150529.jar:9.2.11.v20150529]
    ... 24 common frames omitted
15:26:14.854 [qtp2032647583-18 - /videos/6015_2018_2035.h264.cutted.mp4] DEBUG o.e.jetty.server.HttpChannelState - HttpChannelState@50db53b7{s=DISPATCHED i=true a=null} unhandle DISPATCHED
15:26:14.854 [qtp2032647583-18 - /videos/6015_2018_2035.h264.cutted.mp4] DEBUG org.eclipse.jetty.http.HttpParser - close HttpParser{s=END,0 of 0}
15:26:14.854 [qtp2032647583-18 - /videos/6015_2018_2035.h264.cutted.mp4] DEBUG org.eclipse.jetty.http.HttpParser - END --> CLOSED
15:26:14.854 [qtp2032647583-18] DEBUG org.eclipse.jetty.server.HttpChannel - HttpChannelOverHttp@3f99f0a6{r=1,c=false,a=IDLE,uri=-} handle exit, result COMPLETE
15:26:14.854 [qtp2032647583-18] DEBUG org.eclipse.jetty.http.HttpParser - atEOF HttpParser{s=CLOSED,0 of 0}
15:26:14.854 [qtp2032647583-18] DEBUG org.eclipse.jetty.http.HttpParser - parseNext s=CLOSED HeapByteBuffer@40635532[p=0,l=0,c=0,r=0]={<<<>>>}
15:26:14.854 [qtp2032647583-18] DEBUG o.e.jetty.io.AbstractConnection - FILLING-->IDLE HttpConnection@711dd5b5{IDLE}

hang on HEAD request?

user@local ~
% curl -i -XPUT 192.168.42.43:8080/foo
HTTP/1.1 200 OK
Date: Wed, 20 Aug 2014 04:53:31 GMT
Content-Length: 0
Server: Jetty(9.2.z-SNAPSHOT)
user@local ~
% curl -i -XPUT 192.168.42.43:8080/foo/blah -d 'my data'
HTTP/1.1 200 OK
Date: Wed, 20 Aug 2014 04:53:39 GMT
ETag: "1291e1c0aa879147f51f4a279e7c2e55"
Content-Length: 0
Server: Jetty(9.2.z-SNAPSHOT)
user@local ~
% curl -i -XGET 192.168.42.43:8080/foo/blah
HTTP/1.1 200 OK
Date: Wed, 20 Aug 2014 04:53:46 GMT
Content-Type: application/unknown
Content-MD5: EpHhwKqHkUf1H0onnnwuVQ==
Last-Modified: Wed, 20 Aug 2014 04:53:39 GMT
Content-Length: 7
Server: Jetty(9.2.z-SNAPSHOT)

my data
user@local ~
% curl -i -XHEAD 192.168.42.43:8080/foo/blah
HTTP/1.1 200 OK
Date: Wed, 20 Aug 2014 04:53:50 GMT
Content-Type: application/unknown
Content-MD5: EpHhwKqHkUf1H0onnnwuVQ==
Last-Modified: Wed, 20 Aug 2014 04:53:39 GMT
Content-Length: 7
Server: Jetty(9.2.z-SNAPSHOT)

^C

I had to Ctrl-c out of it. Is this expected behavior?
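
One thing worth ruling out: with -XHEAD, curl only swaps the method string, so it still waits for the Content-Length bytes of a body that a HEAD response never sends, and the client can appear to hang even when the server answered correctly. curl's built-in HEAD mode avoids that behaviour:

% curl -I 192.168.42.43:8080/foo/blah

If that returns promptly with the same headers, the hang is on the curl side rather than in S3Proxy.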

Multipart upload results in "Form too large" exception

I tried performing a multipart upload via the Java AWS SDK, but Jetty threw an exception:

20:06:58.167 [qtp792782299-17 - /s3proxy-596354326/stuff?uploadId=EXAMPLEJZ6e0YupT2h66iePQCc9IEbYbDUy4RTpMeoSMLPRp8Z5o1u8feSRonpvnWsKKG35tI2LB9VDPiCgTy.Gq2VxQLYjrue4Nq.NBdqI-dc5b1fe8-31ea-4902-9de1-51187d9c718e&partNumber=5001] WARN  org.eclipse.jetty.server.HttpChannel - /s3proxy-596354326/stuff?uploadId=EXAMPLEJZ6e0YupT2h66iePQCc9IEbYbDUy4RTpMeoSMLPRp8Z5o1u8feSRonpvnWsKKG35tI2LB9VDPiCgTy.Gq2VxQLYjrue4Nq.NBdqI-dc5b1fe8-31ea-4902-9de1-51187d9c718e&partNumber=5001
java.lang.IllegalStateException: Form too large: 10485768 > 200000
    at org.eclipse.jetty.server.Request.extractFormParameters(Request.java:364)

The part is about 10 MB. Am I doing something wrong? The setup for the proxy was copied directly from S3AwsSdkTest.java and the client was set up as follows:

AmazonS3Client client = new AmazonS3Client(awsCreds);
client.setEndpoint(s3Endpoint.toString());
client.setS3ClientOptions(new S3ClientOptions().withPathStyleAccess(true));

I didn't include the signer override part because I didn't know what algorithm to put in there.

As far as I understand, the "Form too large" error means that Jetty enforces a maximum form size and is currently configured with a small one. Do I have to configure it myself? If so, how? And why? It seems surprising that the default setup wouldn't accept large parts, to the point that I suspect I've done something else wrong.
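
The 200000 in the message matches Jetty's built-in default for form parsing outside a servlet context, where the limit is looked up as a Server attribute. A minimal sketch of that knob, assuming one is embedding the Jetty Server directly (whether the packaged s3proxy jar exposes this is not confirmed here):

import org.eclipse.jetty.server.Server;

// Jetty 9.x falls back to this Server attribute (default 200000 bytes) when a
// request is handled outside a servlet context, as S3Proxy's handler is.
Server server = new Server(8080);
server.setAttribute("org.eclipse.jetty.server.Request.maxFormContentSize", 20 * 1024 * 1024);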

native server-side object copy

S3Proxy emulates server-side copy by reading the object and writing it back out within S3Proxy itself. While this is an improvement over client-side copy, S3Proxy should instead implement native support in jclouds via JCLOUDS-651.
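
In jclouds terms the emulation amounts to roughly the following (a simplified sketch, not S3Proxy's actual code; blobStore and the container/key names are placeholders):

import org.jclouds.blobstore.BlobStore;
import org.jclouds.blobstore.domain.Blob;

// Read the source object through jclouds, then write it back out to the
// destination -- the data flows through S3Proxy instead of staying server-side.
Blob source = blobStore.getBlob("source-container", "source-key");
blobStore.putBlob("dest-container", blobStore.blobBuilder("dest-key")
        .payload(source.getPayload())
        .contentLength(source.getMetadata().getContentMetadata().getContentLength())
        .build());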

ACL and http form upload

I read in the s3proxy documentation that it is possible to

set and get canned bucket and object ACLs (private and public-read only).

Where can I find documentation about those ACLs?

I'm using the filesystem provider for testing purposes. I read in the jclouds documentation that:

By default, every item you put into a container is private, if you are interested in giving access to others, you will have to explicitly configure that. Exposing public containers is provider-specific.

The page about the filesystem provider doesn't mention ACLs.

Basically, I'm trying to upload files from an HTML form and download them, all directly against s3proxy, following the official AWS documentation on Creating an HTML Form.

Whatever I attempt, I always get the following error:

<Error>
  <Code>AccessDenied</Code>
  <Message>AWS authentication requires a valid Date or x-amz-date header</Message>
  <RequestId>4442587FB7D0A2F9</RequestId>
</Error>

x-amz-date is (correctly) set, and I have tried multiple date variants, yet the error persists. So I'm trying to make the bucket public to bypass the authentication issue.

AWS signature V4

Amazon allows V2 signatures for old regions and requires V4 signatures for new regions like eu-central-1 (Frankfurt). jclouds plans to move to V4 by default for aws-s3 (JCLOUDS-480), so we must support V4 to allow jclouds applications to use S3Proxy.
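
Until V4 lands, one client-side workaround is pinning the AWS SDK for Java to the legacy V2 signer via a signer override, along these lines (the exact signer name is worth double-checking against S3AwsSdkTest.java; awsCreds is assumed to exist as in the snippets above):

import com.amazonaws.ClientConfiguration;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.S3ClientOptions;

// "S3SignerType" selects the old V2 S3 signer in the AWS SDK for Java v1.
ClientConfiguration config = new ClientConfiguration().withSignerOverride("S3SignerType");
AmazonS3Client client = new AmazonS3Client(awsCreds, config);
client.setEndpoint("http://127.0.0.1:8080");
client.setS3ClientOptions(new S3ClientOptions().withPathStyleAccess(true));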

Improve support for Azure multipart upload

S3 uses a minimum MPU part size of 5 MB while Azure has a maximum part size of 4 MB. Thus well-behaved S3 applications cannot use multipart upload with Azure. S3Proxy should break the larger S3 request into multiple smaller Azure MPU requests. This will require a more sophisticated mapping from S3 part number to Azure block ID.
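
A hypothetical sketch of such a mapping: each S3 part number fans out to a fixed number of Azure sub-blocks, and the block IDs are derived deterministically so they can be committed in order later (class and method names here are illustrative only):

import java.nio.charset.StandardCharsets;
import java.util.Base64;

final class AzureBlockIds {
    // Azure block IDs within one blob must be Base64 strings of equal length,
    // so a fixed-width "part-subBlock" string keeps them unique and ordered.
    static String azureBlockId(int s3PartNumber, int subBlock) {
        String raw = String.format("%05d-%05d", s3PartNumber, subBlock);
        return Base64.getEncoder().encodeToString(raw.getBytes(StandardCharsets.US_ASCII));
    }
}

A 5 MB S3 part would then become two Azure blocks: azureBlockId(n, 0) holding the first 4 MB and azureBlockId(n, 1) holding the remainder.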

Bucket in hostname on filestore provider

According to the S3 API docs, the bucket name should be part of the hostname of the URL (http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGET.html#RESTBucketGET-responses-examples). In other words, the bucket name is part of the domain and not part of the path.

So I added a couple of /etc/hosts entries to test this behaviour of s3proxy with the filesystem provider:

127.0.0.1       s3
127.0.0.1       bucket1.s3

I used a pretty simple config file:

s3proxy.authorization=none
s3proxy.endpoint=http://s3:8080
jclouds.provider=filesystem
jclouds.identity=identity
jclouds.credential=credential
jclouds.filesystem.basedir=/tmp/s3

I created the /tmp/s3 and /tmp/s3/bucket1 directories.

Then I used curl to test the config. I expected curl http://s3:8080/ to give me a list of buckets and curl http://bucket1.s3:8080/ to give me a list of objects in bucket1. However, both of these give me a list of buckets. (It works if I do curl -v http://s3:8080/bucket1/.)

Is this expected behaviour?
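
If the S3Proxy build in use supports it, a virtual-host setting is the usual way to get bucket-in-hostname behaviour; a sketch of the config, treating the exact property name (s3proxy.virtual-host) as something to verify against the wiki:

s3proxy.authorization=none
s3proxy.endpoint=http://s3:8080
# Hypothetical: requests to bucket1.s3, bucket2.s3, ... would then map to those buckets
s3proxy.virtual-host=s3
jclouds.provider=filesystem
jclouds.filesystem.basedir=/tmp/s3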

Conditional get

S3Proxy should support conditional gets with:

  • If-Modified-Since
  • If-Unmodified-Since
  • If-Match
  • If-None-Match

The underlying jclouds library already supports this, but s3-tests lacks test coverage (ceph/s3-tests#72).
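
On the jclouds side this maps onto GetOptions, so a sketch of what S3Proxy would eventually delegate to might look like this (blobStore and the ETag value are assumed):

import org.jclouds.blobstore.BlobStore;
import org.jclouds.blobstore.domain.Blob;
import org.jclouds.blobstore.options.GetOptions;

// If-None-Match semantics: only fetch the blob when its ETag differs from the
// one the client already has; a failed precondition would then need to be
// surfaced to the S3 client as 304 Not Modified.
Blob blob = blobStore.getBlob("testbucket", "blah",
        new GetOptions().ifETagDoesntMatch("\"1291e1c0aa879147f51f4a279e7c2e55\""));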

403s all of a sudden?

I redeployed my docker container and now I'm getting 403 on things that I think used to work:

% cat config/s3proxy.conf
s3proxy.authorization=none
s3proxy.endpoint=http://0.0.0.0:8080
jclouds.provider=filesystem
jclouds.identity=identity
jclouds.credential=credential
jclouds.filesystem.basedir=/data
docker run -t -i -p 8080:8080 s3proxy
I 10-14 23:22:56.659 main org.eclipse.jetty.util.log:188 |::] Logging initialized @1617ms
I 10-14 23:22:56.695 main o.eclipse.jetty.server.Server:327 |::] jetty-9.2.z-SNAPSHOT
I 10-14 23:22:56.719 main o.e.j.server.ServerConnector:266 |::] Started ServerConnector@78c1f32c{HTTP/1.1}{0.0.0.0:8080}
I 10-14 23:22:56.720 main o.eclipse.jetty.server.Server:379 |::] Started @1681ms
% curl -i -XPUT 192.168.42.43:8080/foo
HTTP/1.1 403 Forbidden
Date: Tue, 14 Oct 2014 23:23:06 GMT
Transfer-Encoding: chunked
Server: Jetty(9.2.z-SNAPSHOT)

<?xml version="1.0" encoding="UTF-8"?><Error>
  <Code>AccessDenied</Code>
  <Message>Forbidden</Message>
  <RequestId>4442587FB7D0A2F9</RequestId>
</Error>

Any idea?

commandline port configuration

Allow setting the port (or all configuration) via the command line.
Example use case: setting the port when running on Heroku.
For example, this could be done via environment variables or system properties:

S3PROXY_PORT=8080 java -jar s3proxy --properties s3proxy.conf
java -Ds3proxy.port=8080 -jar s3proxy --properties s3proxy.conf

Multipart copy

Presently S3Proxy cannot copy objects larger than 5 GB.
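
That 5 GB ceiling is the CopyObject limit; above it S3 clients switch to UploadPartCopy, so the request S3Proxy would need to understand looks roughly like this from the AWS SDK for Java (uploadId comes from a prior InitiateMultipartUpload, client is the configured AmazonS3Client, and the bucket/key names are placeholders):

import com.amazonaws.services.s3.model.CopyPartRequest;
import com.amazonaws.services.s3.model.CopyPartResult;

// Copy one byte range (at most 5 GB) of the source object as part 1 of a
// multipart upload on the destination object.
CopyPartRequest copyPart = new CopyPartRequest()
        .withSourceBucketName("source-bucket")
        .withSourceKey("big-object")
        .withDestinationBucketName("dest-bucket")
        .withDestinationKey("big-object-copy")
        .withUploadId(uploadId)
        .withPartNumber(1)
        .withFirstByte(0L)
        .withLastByte(5L * 1024 * 1024 * 1024 - 1);
CopyPartResult result = client.copyPart(copyPart);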
