minio / minio-go

MinIO Go client SDK for S3 compatible object storage

Home Page: https://docs.min.io/docs/golang-client-quickstart-guide.html

License: Apache License 2.0

Languages: Go 99.89%, Makefile 0.11%
Topics: libraries, go, minio, client-sdk, s3, cloud, fileuploader, aws-signature

minio-go's Introduction

MinIO Go Client SDK for Amazon S3 Compatible Cloud Storage

The MinIO Go Client SDK provides straightforward APIs to access any Amazon S3 compatible object storage.

This Quickstart Guide covers how to install the MinIO client SDK, connect to MinIO, and create a sample file uploader. For a complete list of APIs and examples, see the godoc documentation or Go Client API Reference.

These examples presume a working Go development environment and the MinIO mc command line tool.

Download from GitHub

From your project directory:

go get github.com/minio/minio-go/v7

Initialize a MinIO Client Object

The MinIO client requires the following parameters to connect to an Amazon S3 compatible object storage service:

Parameter        Description
endpoint         URL of the object storage service.
minio.Options    All options, such as credentials and a custom transport.

package main

import (
	"log"

	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
	endpoint := "play.min.io"
	accessKeyID := "Q3AM3UQ867SPQQA43P2F"
	secretAccessKey := "zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG"
	useSSL := true

	// Initialize minio client object.
	minioClient, err := minio.New(endpoint, &minio.Options{
		Creds:  credentials.NewStaticV4(accessKeyID, secretAccessKey, ""),
		Secure: useSSL,
	})
	if err != nil {
		log.Fatalln(err)
	}

	log.Printf("%#v\n", minioClient) // minioClient is now set up
}

Example - File Uploader

This sample code connects to an object storage server, creates a bucket, and uploads a file to the bucket. It uses the MinIO play server, a public MinIO cluster located at https://play.min.io.

The play server runs the latest stable version of MinIO and may be used for testing and development. The access credentials shown in this example are open to the public and all data uploaded to play should be considered public and non-protected.

FileUploader.go

This example does the following:

  • Connects to the MinIO play server using the provided credentials.
  • Creates a bucket named testbucket.
  • Uploads a file named testdata from /tmp.
  • Verifies the file was created using mc ls.
// FileUploader.go MinIO example
package main

import (
	"context"
	"log"

	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
	ctx := context.Background()
	endpoint := "play.min.io"
	accessKeyID := "Q3AM3UQ867SPQQA43P2F"
	secretAccessKey := "zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG"
	useSSL := true

	// Initialize minio client object.
	minioClient, err := minio.New(endpoint, &minio.Options{
		Creds:  credentials.NewStaticV4(accessKeyID, secretAccessKey, ""),
		Secure: useSSL,
	})
	if err != nil {
		log.Fatalln(err)
	}

	// Make a new bucket called testbucket.
	bucketName := "testbucket"
	location := "us-east-1"

	err = minioClient.MakeBucket(ctx, bucketName, minio.MakeBucketOptions{Region: location})
	if err != nil {
		// Check to see if we already own this bucket (which happens if you run this twice)
		exists, errBucketExists := minioClient.BucketExists(ctx, bucketName)
		if errBucketExists == nil && exists {
			log.Printf("We already own %s\n", bucketName)
		} else {
			log.Fatalln(err)
		}
	} else {
		log.Printf("Successfully created %s\n", bucketName)
	}

	// Upload the test file
	// Change the value of filePath if the file is in another location
	objectName := "testdata"
	filePath := "/tmp/testdata"
	contentType := "application/octet-stream"

	// Upload the test file with FPutObject
	info, err := minioClient.FPutObject(ctx, bucketName, objectName, filePath, minio.PutObjectOptions{ContentType: contentType})
	if err != nil {
		log.Fatalln(err)
	}

	log.Printf("Successfully uploaded %s of size %d\n", objectName, info.Size)
}

1. Create a test file containing data:

You can do this with dd on Linux or macOS systems:

dd if=/dev/urandom of=/tmp/testdata bs=2048 count=10

or fsutil on Windows (if you create the file somewhere other than /tmp/testdata, update filePath in FileUploader.go to match):

fsutil file createnew "C:\Users\<username>\Desktop\sample.txt" 20480

2. Run FileUploader with the following commands:

go mod init example/FileUploader
go get github.com/minio/minio-go/v7
go get github.com/minio/minio-go/v7/pkg/credentials
go run FileUploader.go

The output resembles the following:

2023/11/01 14:27:55 Successfully created testbucket
2023/11/01 14:27:55 Successfully uploaded testdata of size 20480

3. Verify the uploaded file with mc ls:

mc ls play/testbucket
[2023-11-01 14:27:55 UTC]  20KiB STANDARD testdata

API Reference

The full API Reference is available here.

  • Bucket Operations
  • Bucket Policy Operations
  • Bucket Notification Operations
  • File Object Operations
  • Object Operations
  • Presigned Operations
  • Client Custom Settings

Full Examples

  • Bucket Operations
  • Bucket Policy Operations
  • Bucket Lifecycle Operations
  • Bucket Encryption Operations
  • Bucket Replication Operations
  • Bucket Notification Operations
  • File Object Operations
  • Object Operations
  • Encrypted Object Operations
  • Presigned Operations

Explore Further

Contribute

Contributors Guide

License

This SDK is distributed under the Apache License, Version 2.0, see LICENSE and NOTICE for more information.

minio-go's People

Contributors

abperiasamy, andreaso, awwalker, balamurugana, chavacava, ckemper67, deekoder, dependabot[bot], donatello, dvaldivia, ebozduman, ferhatelmas, fkautz, hackintoshrao, harshavardhana, iternity-dotcom, justinclift, kannappanr, klauspost, krishnasrinivas, krisis, minio-trusted, mkash32, nitisht, nl5887, pierre-emmanuelj, poornas, shtripat, sinhaashish, vadmeste


minio-go's Issues

Bug: ReadAt() returns EOF

Hi, during development on restic/restic#366 I came across strange behaviour: calling ReadAt() with a small buffer (e.g. 512 bytes) at a larger offset (e.g. 32k) in a larger file (e.g. 500KiB) correctly reads the 512 bytes, but also returns an EOF error.

According to the spec for io.ReaderAt, ReadAt may only return io.EOF when the bytes read reach the end of the resource; here I read 512 bytes from the middle of the file, so the EOF is wrong.

I wrote a small program which demonstrates this on play.minio.io https://gist.github.com/7dbbef2dd5e78bbf8723

The output is:

$ go run s3bug.go
len = 513767
n 513767, err <nil>
size 513767, err <nil>
ReadAt(len(buf) 512, offset 32000)
m 512, err EOF

I think this is a bug :)

ping @harshavardhana
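
For reference, a minimal sketch of the same access pattern written against the current v7 API; the bucket and object names are placeholders, and per io.ReaderAt a mid-file read like this should come back with err == nil.

// readat_check.go - minimal sketch against the v7 API; bucket/object names are placeholders.
package main

import (
	"context"
	"log"

	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
	ctx := context.Background()

	client, err := minio.New("play.min.io", &minio.Options{
		Creds:  credentials.NewStaticV4("Q3AM3UQ867SPQQA43P2F", "zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG", ""),
		Secure: true,
	})
	if err != nil {
		log.Fatalln(err)
	}

	// GetObject returns a *minio.Object, which implements io.ReaderAt.
	obj, err := client.GetObject(ctx, "testbucket", "testdata", minio.GetObjectOptions{})
	if err != nil {
		log.Fatalln(err)
	}
	defer obj.Close()

	// Read 512 bytes from the middle of the object. Per io.ReaderAt,
	// err should be nil here because the read does not reach the end
	// of the object; io.EOF is only allowed when it does.
	buf := make([]byte, 512)
	n, err := obj.ReadAt(buf, 32000)
	log.Printf("n=%d err=%v", n, err)
}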

PutObject() returns error `Access Denied` when saving more than 8MiB of data

When more than 8MiB of data is saved to a new object on Amazon S3 with PutObject(), the error Access Denied is returned. This can be reproduced by running the program at https://gist.github.com/f852268fe0862fca81b3

Enabling HTTP tracing shows that an additional listing request is sent to the server:

$ go run s3bug.go 
---------START-HTTP---------
GET /restic-test-ffm?location= HTTP/1.1
Host: s3.amazonaws.com
User-Agent: Minio (linux; amd64) minio-go/0.2.5
Authorization: AWS4-HMAC-SHA256 Credential=AKIAIERLV25C4U7EWZLQ/20160103/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=5096fec5db29d427c98ec7f9ad118bd2bb39286352d509f324cbfe2b6714d32a
X-Amz-Content-Sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
X-Amz-Date: 20160103T172130Z
Accept-Encoding: gzip

HTTP/1.1 200 OK
Transfer-Encoding: chunked
Content-Type: application/xml
Date: Sun, 03 Jan 2016 17:21:32 GMT
Server: AmazonS3
X-Amz-Id-2: TTgqG3a+KHIVrquTQH2XRVYXqC2F0DRV2J0O1MIndTwF2h71vK7fBKHxmZ04n6uP9kq/7PZkUFw=
X-Amz-Request-Id: 5239D3ADA0324178
---------END-HTTP---------
---------START-HTTP---------
GET /?delimiter=&key-marker=&max-uploads=1000&prefix=testfile&upload-id-marker=&uploads= HTTP/1.1
Host: restic-test-ffm.s3.amazonaws.com
User-Agent: Minio (linux; amd64) minio-go/0.2.5
Authorization: AWS4-HMAC-SHA256 Credential=AKIAIERLV25C4U7EWZLQ/20160103/eu-central-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=e8072980b68a79a63b56d671ecf62b19db281e9405c54eeffdce92981784ab82
X-Amz-Content-Sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
X-Amz-Date: 20160103T172131Z
Accept-Encoding: gzip

HTTP/1.1 403 Forbidden
Transfer-Encoding: chunked
Content-Type: application/xml
Date: Sun, 03 Jan 2016 17:21:30 GMT
Server: AmazonS3
X-Amz-Id-2: iD18qDv+4JIaP+oMKzsQDQQEAvJpH8sM83/hYo+/qMOqHC/ZVfZ8tNnH3tuhMd+K5eGjMlOWHCY=
X-Amz-Request-Id: 52D6BFE21A182FBF
---------END-HTTP---------
n 0, err minio.ErrorResponse{XMLName:xml.Name{Space:"", Local:"Error"}, Code:"AccessDenied", Message:"Access Denied", BucketName:"", Key:"", RequestID:"52D6BFE21A182FBF", HostID:"iD18qDv+4JIaP+oMKzsQDQQEAvJpH8sM83/hYo+/qMOqHC/ZVfZ8tNnH3tuhMd+K5eGjMlOWHCY=", AmzBucketRegion:""}

So it would be nice to know:

  • Why is there an additional listing request?
  • Why is this request not authorized?

Other things:

  • The bucket used in the test has the region eu-central-1
  • Listing items in the bucket in general works
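
For reference, a sketch that reproduces the reported call with the current v7 API; the credentials are placeholders, and the bucket and object names are taken from the traces above.

// putobject_repro.go - sketch only; credentials are placeholders, bucket/object names come from the traces above.
package main

import (
	"bytes"
	"context"
	"log"

	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
	ctx := context.Background()

	client, err := minio.New("s3.amazonaws.com", &minio.Options{
		Creds:  credentials.NewStaticV4("YOUR-ACCESS-KEY", "YOUR-SECRET-KEY", ""),
		Secure: true,
	})
	if err != nil {
		log.Fatalln(err)
	}

	// Slightly more than 8 MiB of data, the size at which the report
	// starts seeing Access Denied.
	data := bytes.Repeat([]byte("x"), 9*1024*1024)

	_, err = client.PutObject(ctx, "restic-test-ffm", "testfile",
		bytes.NewReader(data), int64(len(data)),
		minio.PutObjectOptions{ContentType: "application/octet-stream"})
	if err != nil {
		log.Fatalln(err) // the report sees: Access Denied
	}
	log.Println("uploaded")
}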

api: Re-work GetObject() to not fetch new readers for each Read() operation.

Current behavior of *minio.Object:

  • Read(p []byte) causes the underlying request to fetch a new reader for each range. If the p buffer is small, this results in an unnecessary flood of GET requests.

To avoid this we started using io.CopyBuffer, which is a newer API and leads to build failures on older Go versions. The right fix is to keep the underlying reader open until the reads finish.
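
As a caller-side illustration only (the issue itself is about the library internals), copying through io.CopyBuffer with an explicitly sized buffer keeps each Read() large; bucket, object, and destination path are placeholders.

// copybuffer_sketch.go - caller-side sketch; bucket, object, and destination path are placeholders.
package main

import (
	"context"
	"io"
	"log"
	"os"

	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
	ctx := context.Background()

	client, err := minio.New("play.min.io", &minio.Options{
		Creds:  credentials.NewStaticV4("Q3AM3UQ867SPQQA43P2F", "zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG", ""),
		Secure: true,
	})
	if err != nil {
		log.Fatalln(err)
	}

	obj, err := client.GetObject(ctx, "testbucket", "testdata", minio.GetObjectOptions{})
	if err != nil {
		log.Fatalln(err)
	}
	defer obj.Close()

	out, err := os.Create("/tmp/testdata.copy")
	if err != nil {
		log.Fatalln(err)
	}
	defer out.Close()

	// io.CopyBuffer reuses one 1 MiB buffer, so each Read() on the object
	// is large instead of a flood of tiny ranged requests.
	buf := make([]byte, 1<<20)
	n, err := io.CopyBuffer(out, obj, buf)
	if err != nil {
		log.Fatalln(err)
	}
	log.Printf("copied %d bytes", n)
}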

header inconsistencies

minio-go sends Accept-Encoding: gzip. Is this the correct behavior?

minio-go does not print response headers for PUT and DELETE requests.

PutObject does not fail when the specified length does not match the length of the data

func TestSmallObjectTooSmall(t *testing.T) {
    input := bytes.NewBufferString("hello world") // 11 bytes
    key := "small/toosmall"
    // The declared length (10) does not match the data; PutObject should fail.
    if err := client.PutObject(bucket, key, "", 10, input); err == nil {
        t.Error("Should fail when length is too small")
    }
    // The object should not have been created.
    if _, err := client.StatObject(bucket, key); err == nil {
        t.Error("StatObject should fail because the object was not created")
    }
}
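
For comparison, here is the same check written as a sketch against the current v7 signature, which takes a context and an explicit object size; the bucket name is a placeholder and the expectation is that the mismatched size yields a non-nil error.

// size_mismatch_check.go - sketch against the v7 API; testbucket is a placeholder.
package main

import (
	"bytes"
	"context"
	"log"

	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
	ctx := context.Background()

	client, err := minio.New("play.min.io", &minio.Options{
		Creds:  credentials.NewStaticV4("Q3AM3UQ867SPQQA43P2F", "zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG", ""),
		Secure: true,
	})
	if err != nil {
		log.Fatalln(err)
	}

	// "hello world" is 11 bytes, but we declare a size of 10: the upload
	// should fail rather than silently truncate or pad the object.
	input := bytes.NewBufferString("hello world")
	_, err = client.PutObject(ctx, "testbucket", "small/toosmall", input, 10,
		minio.PutObjectOptions{ContentType: "application/octet-stream"})
	if err == nil {
		log.Fatalln("expected an error for the size mismatch, got nil")
	}
	log.Printf("got expected error: %v", err)
}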

Fix for #279 introduced new bug: Only null-bytes are returned.

Hi again,

Unfortunately, the fix for #279 introduced an even more serious bug: instead of the actual content, only zero bytes are returned. I've updated the small program at https://gist.github.com/fd0/7dbbef2dd5e78bbf8723 to reproduce it; sample output:

$ go run s3bug.go

bucket create error: The requested bucket name is not available.
len = 228072
n 228072, err <nil>
size 228072, err <nil>
ReadAt(len(buf2) 512, offset 32000)
m 512, err <nil>
([]uint8) (len=512 cap=196072) {
 00000000  bf d2 3a 26 7e 1e 93 7c  36 ed 68 30 85 8e 87 3b  |..:&~..|6.h0...;|
 00000010  85 02 69 a2 0a 5e e6 a3  67 60 ce 1b a1 c1 4c 74  |..i..^..g`....Lt|
 00000020  bb a8 c0 d0 ca 7a 5f b1  56 31 56 1b 8b 44 e7 06  |.....z_.V1V..D..|
 00000030  4e 64 f0 5a 61 21 cd 8d  05 d9 c1 28 81 ec 8b d6  |Nd.Za!.....(....|
 00000040  82 4d 76 e0 9b f3 2f 85  ee e9 0f dc 33 ca d7 e7  |.Mv.../.....3...|
 00000050  5e 03 69 ed 3d 78 42 d9  d7 1b 75 c0 f8 a0 34 ff  |^.i.=xB...u...4.|
 00000060  44 23 69 1d 76 40 b9 1f  67 fd 1a b0 f5 38 6d 59  |D#[email protected]|
 00000070  4e 37 2c 5c dd 71 29 3a  45 f2 48 e3 80 c6 97 b0  |N7,\.q):E.H.....|
 00000080  32 86 94 ba 78 8e e8 88  bc 9e e9 3e 27 53 17 4e  |2...x......>'S.N|
 00000090  21 04 c5 bb 7e 2c 1a 94  08 9d 98 2e db 9b c6 be  |!...~,..........|
 000000a0  3a f6 64 74 19 59 b6 c3  72 f4 a5 e0 ba 49 75 bc  |:.dt.Y..r....Iu.|
 000000b0  e8 75 75 48 85 33 7f f6  0f b1 71 cf 2f 04 62 06  |.uuH.3....q./.b.|
 000000c0  20 ba 8b de 5c 2a 1b 5f  d2 45 eb af 96 21 5b f4  | ...\*._.E...![.|
 000000d0  a2 7d 99 50 63 42 dd fc  fe e3 14 6e 50 72 ab 8e  |.}.PcB.....nPr..|
 000000e0  ff 8a 42 2e 03 ee 22 72  a3 5e d2 24 6f 40 55 2f  |..B..."r.^.$o@U/|
 000000f0  c9 38 5a 55 c9 22 9c 3a  99 ac 57 11 e4 bf ad a3  |.8ZU.".:..W.....|
 00000100  e8 cf 17 33 cf 01 7f cb  53 ad a1 e5 eb d1 2c 95  |...3....S.....,.|
 00000110  01 0e 62 17 fb b4 10 f8  1f ee 3a f3 be e4 aa a0  |..b.......:.....|
 00000120  9e 31 55 60 1c 07 b8 9e  8b ed f8 53 40 c4 19 98  |.1U`.......S@...|
 00000130  0a 9f d1 a9 f8 df 8f 91  24 54 e8 af 76 27 f1 c6  |........$T..v'..|
 00000140  dc 95 d9 94 88 f7 a0 b4  53 0a 0e 88 5d 22 b0 c0  |........S...]"..|
 00000150  d7 7f 9d 19 1c 24 ba 18  dd d6 31 b3 69 94 95 5f  |.....$....1.i.._|
 00000160  84 9c b8 2d a5 16 2a ba  de b2 98 16 f4 fc 49 69  |...-..*.......Ii|
 00000170  34 33 6e 2d f8 17 80 65  55 79 34 a7 c0 c6 a6 4d  |43n-...eUy4....M|
 00000180  e5 e9 64 15 0f b2 b3 e6  39 d9 9e e3 a4 28 35 6e  |..d.....9....(5n|
 00000190  14 7f 9b c7 34 bc 9e 9b  31 aa 23 2b 6d da 85 f9  |....4...1.#+m...|
 000001a0  8e e4 6d b4 e9 57 9f 10  9f 49 c2 7a a4 1a ff 5c  |..m..W...I.z...\|
 000001b0  54 bd d8 47 e3 ac 5b 22  00 e3 a7 e8 44 35 7f 07  |T..G..["....D5..|
 000001c0  e8 ff 25 9e 77 3e 3f 71  c9 93 07 31 01 c0 5a 7f  |..%.w>?q...1..Z.|
 000001d0  ab d4 bd e9 97 f9 59 a5  ef 4b c9 9a 3a b6 b4 ac  |......Y..K..:...|
 000001e0  e7 34 59 e9 0d 61 0f c6  63 69 7f 50 55 8f 05 5b  |.4Y..a..ci.PU..[|
 000001f0  da 8b 13 ec 02 1e e3 71  61 72 1d 6d 4e 1d ab 2b  |.......qar.mN..+|
}
([]uint8) (len=512 cap=512) {
 00000000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
 00000010  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
 00000020  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
 00000030  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
 00000040  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
 00000050  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
 00000060  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
 00000070  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
 00000080  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
 00000090  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
 000000a0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
 000000b0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
 000000c0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
 000000d0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
 000000e0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
 000000f0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
 00000100  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
 00000110  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
 00000120  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
 00000130  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
 00000140  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
 00000150  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
 00000160  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
 00000170  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
 00000180  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
 00000190  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
 000001a0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
 000001b0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
 000001c0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
 000001d0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
 000001e0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
 000001f0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
}

Absolute paths regression with swift3 on openstack kilo

Summary

I'm currently working with swift3 and auth version 2 on OpenStack Kilo, and I noticed that the mc client was reporting 500s when making any request to the S3 endpoint. The issue appears to be related to the absolute paths used by the minio client.

Analysis

Normally a GET to swift's S3 would go like this:

GET /bucket/object/path HTTP/1.1
Host: 10.0.0.0

Instead, I'm seeing this from mc:

GET https://10.0.0.0/bucket/object/path HTTP/1.1
Host: 10.0.0.0

While this works with many S3 implementations, the swift3 implementation is rather conservative, and the Juno/Kilo releases of Swift don't appear to know how to map those routes.

To show this, I rigged up a simple HAProxy config that strips the scheme and host out of the request line, leaving the rest of the request the same:

global
  log 127.0.0.1 local1 notice
  maxconn 256

defaults
  log global
  mode http
  timeout connect 15000ms
  timeout client 15000ms
  timeout server 25000ms
  option http-server-close
  option httplog

listen https-swift-in
  bind *:443 ssl crt /certs/server-bundle.pem
  reqrep ^(^[^\ :]*)\ https://[^\ /]+(/.*) \1\ \2
  server server1 10.0.0.1:8080 maxconn 32

Note the 'reqrep' line. That did the trick: mc happily listed details and downloaded files from the bucket/container.
I don't know how to test delete with the current mc client yet (I just started using it yesterday ;-)), but that should work too.

Multipart upload sends the last part even if a size mismatch occurred

The end result is fine: the multipart put detects that a size mismatch exists and doesn't complete the session. However, the incomplete last part was still uploaded rather than failing fast.

Example workflow: an 11m upload with the source truncated to 10.5m:

POST Initiate Multipart                             # OK
PUT 5m - 200                                        # OK
PUT 5m - 200                                        # OK
PUT [mismatched size: 500k instead of 1m] - 200     # BAD, should not have been sent
POST Complete Multipart [never sent]                # OK

google storage public share URL for an object?

Hello!
Thanks for minio, it works great. I have one question. When creating a bucket with
gceClient.MakeBucket("user-1234","public-read")

objects that are PUT into it still have no default public URL; I have to enable it in the GCE console. Any idea how to enable that by default?

After looking into some Google migration docs, I tried adding
r.req.Header.Set("x-goog-acl", "public-read")
to the *Request when building the request in the S3 API Go source file, but to no avail =(

Thanks!

Non US standard bucket requests receive 301 error

2015/05/07 09:18:18 Get : 301 response missing Location header
exit status 1

This is true for all S3 requests. To resolve this, we have to fetch the bucket's LocationConstraint and use it when constructing the request; this would also bring in multi-region support, which we don't have right now.
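
A sketch of the direction described above using the current v7 API: look up the bucket's LocationConstraint first, then pin the client to that region; the credentials and bucket name are placeholders.

// bucket_region_sketch.go - sketch only; credentials and bucket name are placeholders.
package main

import (
	"context"
	"log"

	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
	ctx := context.Background()
	creds := credentials.NewStaticV4("YOUR-ACCESS-KEY", "YOUR-SECRET-KEY", "")

	// Ask S3 for the bucket's LocationConstraint first...
	lookup, err := minio.New("s3.amazonaws.com", &minio.Options{Creds: creds, Secure: true})
	if err != nil {
		log.Fatalln(err)
	}
	region, err := lookup.GetBucketLocation(ctx, "my-eu-bucket")
	if err != nil {
		log.Fatalln(err)
	}

	// ...then create a client pinned to that region, so requests are
	// signed for the right region and do not bounce with a 301.
	client, err := minio.New("s3.amazonaws.com", &minio.Options{
		Creds:  creds,
		Secure: true,
		Region: region,
	})
	if err != nil {
		log.Fatalln(err)
	}

	exists, err := client.BucketExists(ctx, "my-eu-bucket")
	if err != nil {
		log.Fatalln(err)
	}
	log.Printf("bucket exists: %v (region %s)", exists, region)
}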

convert minio-go to a stateless library

Convert the APIs to follow this style:

        config := minio.Config{
                AccessKeyID:     "YOUR-ACCESS-KEY-HERE",
                SecretAccessKey: "YOUR-PASSWORD-HERE",
        }

        // Default is Signature Version 4. To enable Signature Version 2 do the following.
        // config.Signature = minio.SignatureV2

        reader, stat, err := minio.GetObject(config, "https://s3.amazonaws.com/bucket/object")
        if err != nil {
                log.Fatalln(err)
        }

postpolicy content-length

Implement a content-length constraint for PostPolicy.
(Placeholder bug for all the client libraries.)
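
For context, a sketch of how a content-length constraint is expressed with the current v7 PostPolicy API; the bucket, object, size range, and expiry are placeholders.

// postpolicy_sketch.go - sketch of a content-length constraint with the v7 API; names, sizes, and expiry are placeholders.
package main

import (
	"context"
	"log"
	"time"

	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
	ctx := context.Background()

	client, err := minio.New("play.min.io", &minio.Options{
		Creds:  credentials.NewStaticV4("Q3AM3UQ867SPQQA43P2F", "zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG", ""),
		Secure: true,
	})
	if err != nil {
		log.Fatalln(err)
	}

	policy := minio.NewPostPolicy()
	policy.SetBucket("testbucket")
	policy.SetKey("testdata")
	policy.SetExpires(time.Now().UTC().Add(10 * time.Minute))
	// Only accept uploads between 1 KiB and 1 MiB.
	policy.SetContentLengthRange(1024, 1024*1024)

	u, formData, err := client.PresignedPostPolicy(ctx, policy)
	if err != nil {
		log.Fatalln(err)
	}
	log.Printf("POST to %s with form data %v", u, formData)
}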

Tests fail against current master branch

I am on c5884ce and get this test output:

ENG000656:minio-go aaronschlesinger$ go test -race ./...
--- FAIL: TestMakeBucketErrorV2 (2.05s)
    api_functional_v2_test.go:65: Error: Anonymous access is forbidden for this operation gx5xrbpfiso4r33klhomnhnouvophm
--- FAIL: TestGetObjectClosedTwiceV2 (0.19s)
    api_functional_v2_test.go:112: Error: Anonymous access is forbidden for this operation 34jrkpn5lgynk92oah2svn4qcfe4fn
--- FAIL: TestRemovePartiallyUploadedV2 (0.10s)
    api_functional_v2_test.go:197: Error: Anonymous access is forbidden for this operation bjz5zvcro11h95phaa3msl4f3dwkm6
--- FAIL: TestResumbalePutObjectV2 (0.15s)
    api_functional_v2_test.go:264: Error: Anonymous access is forbidden for this operation k1154pqzhmzhd5x52hlecczh5hs2oi
--- FAIL: TestResumableFPutObjectV2 (0.10s)
    api_functional_v2_test.go:375: Error: Anonymous access is forbidden for this operation 5vs5xutzqcv4gov2ubju9lpixigoil
--- FAIL: TestMakeBucketRegionsV2 (0.11s)
    api_functional_v2_test.go:451: Error: Anonymous access is forbidden for this operation obtghc94f5il2j5qo5umq26nbp4o4q
--- FAIL: TestResumablePutObjectV2 (0.11s)
    api_functional_v2_test.go:503: Error: Anonymous access is forbidden for this operation 5mmab9zziw5x2bpdmsipgsosm1yh2l
--- FAIL: TestGetObjectReadSeekFunctionalV2 (0.11s)
    api_functional_v2_test.go:567: Error: Anonymous access is forbidden for this operation hyef3cusj02lyeyw0etvjsfy19r4i6
--- FAIL: TestGetObjectReadAtFunctionalV2 (0.13s)
    api_functional_v2_test.go:681: Error: Anonymous access is forbidden for this operation m2wozuulnu9l6s92qs6wklxbgfggtr
--- FAIL: TestFunctionalV2 (0.11s)
    api_functional_v2_test.go:822: Error: Anonymous access is forbidden for this operation f0jibm0azy1gcrad5kqf03fc40j6j5
--- FAIL: TestMakeBucketError (0.12s)
    api_functional_v4_test.go:89: Error: Anonymous access is forbidden for this operation 6ov9dlm6rq3unimnkd25ieg9ulr2wy
--- FAIL: TestMakeBucketRegions (0.11s)
    api_functional_v4_test.go:135: Error: Anonymous access is forbidden for this operation yg5je0sr60qv4mfct1osknds4aqk2w
--- FAIL: TestGetObjectClosedTwice (0.11s)
    api_functional_v4_test.go:187: Error: Anonymous access is forbidden for this operation 2zo2q0gkrh1hodjdeifvali6gtacoa
--- FAIL: TestRemovePartiallyUploaded (0.21s)
    api_functional_v4_test.go:272: Error: Anonymous access is forbidden for this operation r0a9gy36pimbhorvgvmcdvdmt5nw9d
--- FAIL: TestResumbalePutObject (0.11s)
    api_functional_v4_test.go:342: Error: Anonymous access is forbidden for this operation 5rlsuspt1n4v05h6ph4pvaxux3lwya
--- FAIL: TestResumableFPutObject (0.11s)
    api_functional_v4_test.go:452: Error: Anonymous access is forbidden for this operation gn4mdbej9ewz5rzs4obavftv1ezomv
--- FAIL: TestResumablePutObject (0.18s)
    api_functional_v4_test.go:532: Error: Anonymous access is forbidden for this operation scv0rmlyub39kjmkdbjgpibvgbpnkm
--- FAIL: TestGetObjectReadSeekFunctional (0.11s)
    api_functional_v4_test.go:596: Error: Anonymous access is forbidden for this operation txkeaboi3bois3rrl6l3wmzr1bw0pa
--- FAIL: TestGetObjectReadAtFunctional (0.10s)
    api_functional_v4_test.go:710: Error: Anonymous access is forbidden for this operation aibg3gdem9tll06fvxyuc5hn6huymh
--- FAIL: TestFunctional (0.11s)
    api_functional_v4_test.go:851: Error: Anonymous access is forbidden for this operation zq2ftf09vmhkqufy0m05zpa5gzir0n
FAIL
FAIL    github.com/minio/minio-go   4.469s

amazon keys

Wondering how to use this package and pick up the access keys from the environment.
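
With the current v7 API this is covered by the credentials package; below is a sketch that picks the keys up from AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY in the environment (the endpoint is a placeholder).

// env_creds_sketch.go - sketch; reads AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY from the environment.
package main

import (
	"log"

	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
	// NewEnvAWS picks up AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and
	// AWS_SESSION_TOKEN; NewEnvMinio does the same for MINIO_* variables.
	client, err := minio.New("s3.amazonaws.com", &minio.Options{
		Creds:  credentials.NewEnvAWS(),
		Secure: true,
	})
	if err != nil {
		log.Fatalln(err)
	}
	log.Printf("%#v\n", client)
}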

api: Remove bucket on a resource which redirects does not work with Golang http.Client

Redirect upon 'DELETE' is not handled.

$ mc rm s3/12312313444444123 --debug
mc: <DEBUG> GET /12312313444444123?location= HTTP/1.1
Host: s3.amazonaws.com
User-Agent: Minio (darwin; amd64) minio-go/0.2.5 mc/2016-01-18T08:54:38Z
Authorization: AWS4-HMAC-SHA256 Credential=**REDACTED**/20160118/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=**REDACTED**
X-Amz-Content-Sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
X-Amz-Date: 20160118T094927Z
Accept-Encoding: gzip

mc: <DEBUG> HTTP/1.1 200 OK
Transfer-Encoding: chunked
Content-Type: application/xml
Date: Mon, 18 Jan 2016 09:49:28 GMT
Server: AmazonS3
X-Amz-Id-2: 2jnFBsmdWp5pPZwUREtCPrQUj82VSvU78oJurPoMiwr6VL3mWXDMBIetQo060D7tEsgGI3+Dp/8=
X-Amz-Request-Id: 36701F477A161833

mc: <DEBUG> Response Time:  839.439967ms

mc: <DEBUG> DELETE / HTTP/1.1
Host: 12312313444444123.s3.amazonaws.com
User-Agent: Minio (darwin; amd64) minio-go/0.2.5 mc/2016-01-18T08:54:38Z
Authorization: AWS4-HMAC-SHA256 Credential=**REDACTED**/20160118/us-west-2/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=**REDACTED**
X-Amz-Content-Sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
X-Amz-Date: 20160118T094927Z
Accept-Encoding: gzip

mc: <DEBUG> HTTP/1.1 307 Temporary Redirect
Transfer-Encoding: chunked
Content-Type: application/xml
Date: Mon, 18 Jan 2016 09:49:28 GMT
Location: https://12312313444444123.s3-us-west-2.amazonaws.com/
Server: AmazonS3
X-Amz-Id-2: YDyfeIwfwL+qms0WR0mf10B2a7qqmKvCFs+p3zTGh0HIEui8MhLTM0CazcCGI0i8
X-Amz-Request-Id: 6D1DB0CDFD79156D

1c5
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>TemporaryRedirect</Code><Message>Please re-send this request to the specified temporary endpoint. Continue to use the original request endpoint for future requests.</Message><Bucket>12312313444444123</Bucket><Endpoint>12312313444444123.s3-us-west-2.amazonaws.com</Endpoint><RequestId>6D1DB0CDFD79156D</RequestId><HostId>YDyfeIwfwL+qms0WR0mf10B2a7qqmKvCFs+p3zTGh0HIEui8MhLTM0CazcCGI0i8</HostId></Error>
0

mc: <DEBUG> Response Time:  796.252627ms

mc: <ERROR> Unable to remove ‘s3/12312313444444123’. Please re-send this request to the specified temporary endpoint. Continue to use the original request endpoint for future requests.
 (2) rm-main.go:209 main.mainRm(..) Tags: [s3/12312313444444123]
 (1) rm-main.go:141 main.rm(..) Tags: [https://s3.amazonaws.com/12312313444444123]
 (0) pkg/client/s3/s3.go:208 s3.(*s3Client).Remove(..)
 Release-Tag:UNOFFICIAL.2016-01-18T08-54-38Z | Commit:7d52e60f1d05 | Host:command.local | OS:darwin | Arch:amd64 | Lang:go1.5.3 | Mem:3.3MB/9.3MB | Heap:3.3MB/5.8MB

region caching needs to handle cache invalidation

Since the library caches bucket regions, we need to handle the following cases:

  • if a bucket is deleted after its region is cached, we need to invalidate the cache entry when we next get a request for that bucket
  • if a bucket is deleted and recreated after its region is cached, we need to invalidate the cache entry when we get region mismatch errors from Amazon S3

Retry for specific errors in case of multipart uploads.

$ go run putobject.go
2015/12/10 12:20:22 Put https://myuniquebucketnameforevery.s3.amazonaws.com/objectName?partNumber=155&uploadId=bFrAsXMPDHwT4e4NKTVp_nSZjgWJTJK3axYzmYohvtaFh3OABlsD.3K_dvtrFEE3ojC5uxtebcvweNQqsE3Y0e.GUOIEFjfAOdvqPS1D.xEO8B8nUEeQhkl2MkdLDspK: EOF
exit status 1
$ go run putobject.go  
2015/12/11 00:32:33 Put https://anythinglikenothing.s3.amazonaws.com/bigfile.55GB?partNumber=352&uploadId=vOGG8pnSjnXWBYpYyLcv37oj_iJtS3.sdMtPyC_mYW_MnRLjuwMJjVl7Ql2Ww8JvlIgSM2q49BATy4ldsUtVR6_aA_5rkzOtnN0glid1F3YCyglFRhPCfl4pkM_7RkT6: read tcp 10.0.2.3:33323->54.231.160.74:443: read: connection reset by peer
exit status 1
$ go run putobject.go
2015/12/11 01:04:22 We encountered an internal error. Please try again.
exit status 1

This is a non-exhaustive list; there will be more.
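
A caller-side sketch of the kind of retry this asks for, wrapping the upload in a simple backoff loop; the bucket, object, file path, and attempt count are illustrative, not the SDK's built-in behavior.

// retry_sketch.go - illustrative caller-side retry; not the SDK's internal retry logic.
package main

import (
	"context"
	"log"
	"time"

	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
	ctx := context.Background()

	client, err := minio.New("play.min.io", &minio.Options{
		Creds:  credentials.NewStaticV4("Q3AM3UQ867SPQQA43P2F", "zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG", ""),
		Secure: true,
	})
	if err != nil {
		log.Fatalln(err)
	}

	// Retry the upload a few times with a growing delay; transient
	// failures such as connection resets or internal errors from S3
	// often succeed on a later attempt.
	var info minio.UploadInfo
	for attempt := 1; attempt <= 3; attempt++ {
		info, err = client.FPutObject(ctx, "testbucket", "bigfile", "/tmp/bigfile",
			minio.PutObjectOptions{})
		if err == nil {
			break
		}
		log.Printf("attempt %d failed: %v", attempt, err)
		time.Sleep(time.Duration(attempt) * 2 * time.Second)
	}
	if err != nil {
		log.Fatalln(err)
	}
	log.Printf("uploaded %d bytes", info.Size)
}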

implement tracing

Implement the APIs:

TraceOn(traceFilePath)
TraceOff()

This should trace the HTTP calls made, similar to mc --debug, and also log other useful information: whether the region was served from the cache, region map invalidation, and, when resuming putObject/fPutObject, whether it is continuing from where it left off or re-uploading. In the functional tests, tracing should be 'on', and we should regularly monitor the functional test log to make sure it is printing what is expected.
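
For reference, the v7 client exposes TraceOn/TraceOff, taking an io.Writer rather than a file path; a minimal usage sketch (the bucket name is a placeholder):

// trace_sketch.go - minimal tracing sketch with the v7 client; bucket name is a placeholder.
package main

import (
	"context"
	"log"
	"os"

	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
	ctx := context.Background()

	client, err := minio.New("play.min.io", &minio.Options{
		Creds:  credentials.NewStaticV4("Q3AM3UQ867SPQQA43P2F", "zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG", ""),
		Secure: true,
	})
	if err != nil {
		log.Fatalln(err)
	}

	// Dump every HTTP request/response to stderr, similar to mc --debug.
	client.TraceOn(os.Stderr)
	defer client.TraceOff()

	exists, err := client.BucketExists(ctx, "testbucket")
	if err != nil {
		log.Fatalln(err)
	}
	log.Printf("testbucket exists: %v", exists)
}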
