
azure-storage-blob-go's Introduction

Azure Storage Blob SDK for Go (PREVIEW) (Deprecated)


This project will be in Community Support until 13 September 2024. After this date the project and associated client libraries will be retired permanently. For more details on the retirement and alternatives to using this project, visit Retirement notice: The legacy Azure Storage Go client libraries will be retired on 13 September 2024.


If you would like to access our latest Go SDK, please refer to the new preview azblob package here. If you would like more information on Azure's burgeoning effort to coordinate the development of the SDKs across services, of which this change is a part, please refer to this article.

We will continue to respond to issues here, but prefer that you post them on the new repo. Thank you for your patience. We look forward to continuing to work together with you.


The Microsoft Azure Storage SDK for Go allows you to build applications that take advantage of Azure's scalable cloud storage.

This repository contains the open source Blob SDK for Go. The File SDK and Queue SDK are also available.

This library is in preview.

Features

  • Blob Storage
    • Create/List/Delete Containers
    • Create/Read/List/Update/Delete Block Blobs
    • Create/Read/List/Update/Delete Page Blobs
    • Create/Read/List/Update/Delete Append Blobs

Getting Started

  • If you don't already have it, install the Go distribution
  • Get the SDK, with any method you prefer:
    • Go Get: go get github.com/Azure/azure-storage-blob-go/azblob
    • Dep: add github.com/Azure/azure-storage-blob-go to Gopkg.toml:
      [[constraint]]
        version = "0.3.0"
        name = "github.com/Azure/azure-storage-blob-go"
      
    • Module: simply import the SDK and Go will download it for you
  • Use the SDK: import "github.com/Azure/azure-storage-blob-go/azblob"

Version Table

  • If you are looking to use a specific version of the Storage Service, please refer to the following table:
Service Version Corresponding SDK Version Import Path
2016-05-31 0.2.0 github.com/Azure/azure-storage-blob-go/2016-05-31/azblob
2017-07-29 0.2.0 github.com/Azure/azure-storage-blob-go/2017-07-29/azblob
2018-03-28 0.3.0 - 0.5.0 github.com/Azure/azure-storage-blob-go/azblob
2018-11-09 0.6.0 - 0.7.0 github.com/Azure/azure-storage-blob-go/azblob

Note: the directory structure of the SDK has changed dramatically since 0.3.0. The different Service Versions are no longer sub-directories; the latest azblob is directly under the root directory. In the future, each new Service Version will be introduced with a new major semantic version.

SDK Architecture

  • The Azure Storage SDK for Go provides low-level and high-level APIs.
    • ServiceURL, ContainerURL and BlobURL objects provide the low-level API functionality and map one-to-one to the Azure Storage Blob REST APIs
    • The high-level APIs provide convenience abstractions such as uploading a large stream to a block blob (using multiple PutBlock requests).

Code Samples

License

This project is licensed under MIT.

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.

When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.


azure-storage-blob-go's Issues

Issue with an example

Which version of the SDK was used?

Which platform are you using? (ex: Windows, Linux, Debian)

What problem was encountered?

The example posted for configuring a pipeline has an issue: the compiler complains that "return &httpResponse{response: resp}, err" is undefined. Please update the doc with the return statement "return pipeline.NewHTTPResponse(resp), err".

Link to example: https://godoc.org/github.com/Azure/azure-storage-blob-go/azblob#example-NewPipeline

How can we reproduce the problem in the simplest way?

Have you found a mitigation/solution?

syscall and Windows

Hi

It's been a while since I looked at the repo, and I see that mmap_unix.go and mmap_windows.go use various syscall constants which do not exist on Windows.

example output:

go build ./...

azure-storage-blob-go-microsoft/azure-storage-blob-go/2017-07-29/azblob

.\mmap_unix.go:11:17: undefined: syscall.PROT_READ
.\mmap_unix.go:11:36: undefined: syscall.MAP_SHARED
.\mmap_unix.go:13:17: undefined: syscall.PROT_READ
.\mmap_unix.go:13:35: undefined: syscall.PROT_WRITE
.\mmap_unix.go:13:55: undefined: syscall.MAP_SHARED

etc

Am I missing something? (wrong Go version? 3rd party deps etc?)

Any advice appreciated.

Cheers

Ken

GetBlobProperties error doesn't populate service code correctly in case of `404`

Which version of the SDK was used?

  • Using latest - 2018-03-28/azblob

Which platform are you using? (ex: Windows, Linux, Debian)

  • Mac OS X

What problem was encountered?

The GetBlobProperties error doesn't populate the service code correctly when cast to StorageError if the blob doesn't exist.

How can we reproduce the problem in the simplest way?

blobProperties, err := blob.GetProperties(ctx, options)
if storageErr, ok := err.(azblob.StorageError); ok && (storageErr.ServiceCode() == azblob.ServiceCodeContainerNotFound) {
	// <=== This doesn't work, but if I change it as below, it works
}

if storageErr, ok := err.(azblob.StorageError); ok && (storageErr.ServiceCode() == azblob.ServiceCodeContainerNotFound || storageErr.Response().StatusCode == http.StatusNotFound) {
	// ===> This works by checking the Response status code explicitly
}

Have you found a mitigation/solution?

I didn't investigate further, since I wasn't sure whether it is a bug or expected behavior.

Unexpected context deadline exceeded

Which version of the SDK was used?

master 197d1c0
(go get github.com/Azure/azure-storage-blob-go/2018-03-28/azblob)

Which platform are you using? (ex: Windows, Linux, Debian)

macOS

What problem was encountered?

Trying a simple example to download a file. I passed in context.Background() but got the error:
context deadline exceeded

How can we reproduce the problem in the simplest way?

Code below, credentials and blob information changed, but otherwise unchanged. I guess the blob must be big enough to trigger the deadline. In my case it was 723.75 MiB.

	credential := azblob.NewSharedKeyCredential("account", "key")
	p := azblob.NewPipeline(credential, azblob.PipelineOptions{})

	u, _ := url.Parse("https://account.blob.core.windows.net/")

	serviceURL := azblob.NewServiceURL(*u, p)
	containerURL := serviceURL.NewContainerURL("container")
	blobURL := containerURL.NewBlobURL("blob")

	log.Printf("starting")

	ctx := context.Background()
	blob, err := blobURL.Download(ctx, 0, 0, azblob.BlobAccessConditions{}, false)
	if err != nil {
		log.Fatal(err)
	}

	body := blob.Body(azblob.RetryReaderOptions{})
	defer body.Close()

	f, err := os.Create("output")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	_, err = io.Copy(f, body)
	if err != nil {
		log.Fatal(err)
	}

Output:

2018/08/08 15:18:52 starting
2018/08/08 15:19:52 context deadline exceeded

Have you found a mitigation/solution?

Not as yet.

project query

Hi

I've been away from the azure-sdk-for-go project for a while, but just noticed this repo. I'm just wondering how this one differs from the previously used https://github.com/Azure/azure-storage-go/ ? Is it purely for blob storage (and not tables/queues etc.)?

Or does it have other aims?

Merely curious, since blob storage is one of my main uses of Azure + Go.

Cheers

Ken

azblob.NewSharedKeyCredential should not panic

When we create a new Shared Key Credential and pass in an account key that is an invalid base64 string, the library panics. Normally in Go, a panic means something went unexpectedly wrong, which is not the case in this function call; the error can easily be returned instead.

For us as consumers of the package, we have to recover from the panic in order to prevent our program from exiting unexpectedly. Also, the error passed to panic does not indicate that it was caused by an invalid account key.

Would you accept a PR?

@jamesjoshuahill and @tinygrasshopper

Add Content-MD5 header when staging blocks

It's trivial to calculate the md5 sum for the chunk provided (do it here?):

        chunk, err := ioutil.ReadAll(body)
        if err != nil {
                return req, pipeline.NewError(err, "failed to read request body")
        }
        body.Seek(0, io.SeekStart)
        md5sum := md5.Sum(chunk)
        req.Header.Set("Content-MD5", base64.StdEncoding.EncodeToString(md5sum[:]))

Or allow the user to pass the option to include the header when staging the block

[Proposal] High-level APIs should support taking io.Reader as input

The high-level APIs provide convenience abstractions such as uploading a large stream to a block blob (using multiple PutBlock requests).

Which API shall I use? (I know we can use PutBlock(s) and PutBlockList, but these are not high-level APIs.)
So is BlobURL.StartCopy the high-level API? Does it support 4 TB?

Update the description below

Now we have:
UploadFileToBlockBlob, which takes an *os.File
UploadBufferToBlockBlob, which takes a []byte

I propose adding a new function that takes an io.Reader as input.
Note that http.Response.Body is an io.ReadCloser.

install error on mac

Running $ go get github.com/Azure/azure-storage-blob-go/2016-05-31/azblob

gives me this output

# github.com/Azure/azure-storage-blob-go/2016-05-31/azblob
gocode/src/github.com/Azure/azure-storage-blob-go/2016-05-31/azblob/highlevel.go:164:7: undefined: mmf
gocode/src/github.com/Azure/azure-storage-blob-go/2016-05-31/azblob/highlevel.go:166:12: undefined: newMMF

My environment:

$ go env
GOARCH="amd64"
GOBIN=""
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPATH="/Users/mfischer/alpha/gocode"
GORACE=""
GOROOT="/usr/local/Cellar/go/1.9.3/libexec"
GOTOOLDIR="/usr/local/Cellar/go/1.9.3/libexec/pkg/tool/darwin_amd64"
GCCGO="gccgo"
CC="clang"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/mm/q2bf08hn361bhqqq5wy38ssc0000gn/T/go-build111810440=/tmp/go-build -gno-record-gcc-switches -fno-common"
CXX="clang++"
CGO_ENABLED="1"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"

Edit:
oh, I see there are files for Windows/Linux but none for Mac. For now, I'll comment out that code since it looks like it's only used in an example.

Remove unavailable versions from the README.md

Which version of the SDK was used?

I was using the 2016-05-31 version, which has been removed with the latest release. Unfortunately this broke not only my project's build but also dependent projects like azure-eventhubs-go, for which I have already filed an issue. I don't think it's good practice to delete tags, but I'm sure you have your reasons. At least please make it clear in the README by removing the version section altogether, as the references to old versions are now meaningless.

Which platform are you using? (ex: Windows, Linux, Debian)

Mac OS X 10.13.6

What problem was encountered?

README still references versions of the repo that have been removed with the latest release

How can we reproduce the problem in the simplest way?

Follow these two links mentioned in the README
github.com/Azure/azure-storage-blob-go/2016-05-31/azblob
github.com/Azure/azure-storage-blob-go/2017-07-29/azblob

Have you found a mitigation/solution?

N/A

Response body of Putblock response is not closed in case of success.

When using the PutBlock API to upload blocks in parallel with a high number of threads, an error occurs: "dial tcp 13.71.200.240:443: socket: too many open files". The OS limits the number of file descriptors (including pipes and sockets) a process may have open at any given time, and I guess that limit is reached because connections are never released. Looking at the generated code for validateResponse in zz_generated_responder_policy.go, I see that the pipeline response is not closed when the status code indicates success. The response body needs to be closed.

Use ReadCloser for body instead of ReadSeeker

We would like to store a file that a user uploads to blob storage. The Body field of the http.Request struct, where we will access the uploaded file, is an io.ReadCloser, whereas the blob storage method PutBlob requires an io.ReadSeeker. Can you please downgrade the requirement to io.ReadCloser or explain how to work around this?

Third-party blob storage libraries do not have the same limitation because they do not implement retry policies as robust as yours; perhaps there is an alternative implementation that takes a ReadCloser but does not seek back to a successful offset to retry from on failure. Thanks

Website URL 404

Which version of the SDK was used?

n/a

Which platform are you using? (ex: Windows, Linux, Debian)

n/a

What problem was encountered?

Website URL cannot be found by godoc.org's team of gophers.

How can we reproduce the problem in the simplest way?

  1. Go to https://github.com/Azure/azure-storage-blob-go
  2. Go to the website URL (https://godoc.org/github.com/Azure/azure-storage-blob-go/2016-05-31/azblob) next to the repository description ("Microsoft Azure Blob Storage Library for Go")

Have you found a mitigation/solution?

Replace the URL with https://godoc.org/github.com/Azure/azure-storage-blob-go/azblob

Force Travis CI as a branch policy

  • Force gated check ins via Travis CI.
  • Don't run dep ensure in Travis CI - just build directly against whatever's committed in /vendor.
  • Commit the /vendor folder.
  • Add the Travis CI badge to the README.

This should make issues like #20 easier.

Blob service error response should have proper escaping for reserved format characters

Minor issue.
It looks like error response formatting does not properly escape reserved format characters.

Below is error generated by attempt to append a block to deleted append blob via SAS URL.
Notice how URL included into error message has (MISSING) substrings, which are not part of URL.

This is how the Go fmt package handles unescaped format verbs included in the format argument of calls like fmt.Sprintf(format, ...).

Strings containing reserved format characters need to be moved out of format strings or properly escaped.

-> (redacted...)/vendor/github.com/Azure/azure-storage-blob-go/2017-07-29/azblob.NewResponseError, (redacted...)/github.com/Azure/azure-storage-blob-go/2017-07-29/azblob/zz_generated_response_error.go:29
	            	===== RESPONSE ERROR (ServiceCode=BlobNotFound) =====
	            	Description=The specified blob does not exist.
	            	RequestId:09c9f9f6-b01e-00bd-767c-f80688000000
	            	Time:2018-05-31T01:11:00.8695383Z, Details: (none)
	            	   PUT https://(redacted).blob.core.windows.net/deployment/TestBlobStarter.txt?comp=appendblock&se=2018-05-31t16%!A(MISSING)50%!A(MISSING)00z&sig=REDACTED&sp=wl&sr=b&st=2018-05-30t16%!A(MISSING)50%!A(MISSING)00z&sv=2017-04-17&timeout=61
	            	   Content-Length: [272]
	            	   User-Agent: [Azure-Storage/0.1 (go1.9.4; linux)]
	            	   X-Ms-Client-Request-Id: [40ff2160-493a-4aec-697f-828b2c2ede16]
	            	   X-Ms-Version: [2017-07-29]
	            	   --------------------------------------------------------------------------------
	            	   RESPONSE Status: 404 The specified blob does not exist.
	            	   Content-Length: [215]
	            	   Content-Type: [application/xml]
	            	   Date: [Thu, 31 May 2018 01:11:00 GMT]
	            	   Server: [Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0]
	            	   X-Ms-Error-Code: [BlobNotFound]
	            	   X-Ms-Request-Id: [09c9f9f6-b01e-00bd-767c-f80688000000]
	            	   X-Ms-Version: [2017-07-29]

How would I stream upload a 10 GB file with configurable concurrency / chunk sizes?

@derekperkins says in Azure-Samples/azure-sdk-for-go-samples#118:
It's pretty obvious with S3 and GCS, but I can't figure out how to do that with Azure. Even uploading a simple small file isn't very obvious, since the storage package combines all the messaging, table and blob components.

S3 https://godoc.org/github.com/aws/aws-sdk-go-v2/service/s3/s3manager#Uploader.UploadWithContext
GCS https://godoc.org/cloud.google.com/go/storage#Writer

Copying it here to the more relevant repository.

Make it easier to override the HTTPSender in azblob

Which version of the SDK was used?

cf01652

Which platform are you using? (ex: Windows, Linux, Debian)

Linux

What problem was encountered?

It is very hard to override the HTTPSender when using azblob.

It means copying this code and modifying it to create the pipeline. Copying that code means I won't get further updates that the Azure team makes to the pipeline. Also, anonymousCredentialPolicyFactory is private, so I can't replicate that part.

// NewPipeline creates a Pipeline using the specified credentials and options.
func NewPipeline(c Credential, o PipelineOptions) pipeline.Pipeline {
	// Closest to API goes first; closest to the wire goes last
	f := []pipeline.Factory{
		NewTelemetryPolicyFactory(o.Telemetry),
		NewUniqueRequestIDPolicyFactory(),
		NewRetryPolicyFactory(o.Retry),
	}
	if _, ok := c.(*anonymousCredentialPolicyFactory); !ok {
		// For AnonymousCredential, we optimize out the policy factory since it doesn't do anything
		// NOTE: The credential's policy factory must appear close to the wire so it can sign any
		// changes made by other factories (like UniqueRequestIDPolicyFactory)
		f = append(f, c)
	}
	f = append(f,
		pipeline.MethodFactoryMarker(), // indicates at what stage in the pipeline the method factory is invoked
		NewRequestLogPolicyFactory(o.RequestLog))
	return pipeline.NewPipeline(f, pipeline.Options{HTTPSender: nil, Log: o.Log})
}

I'd like to propose this patch as a potential solution, but I thought I'd open an issue to discuss before I send a PR.

diff --git a/vendor/github.com/Azure/azure-storage-blob-go/azblob/zc_pipeline.go b/vendor/github.com/Azure/azure-storage-blob-go/azblob/zc_pipeline.go
index f34cd0a7b..91db40a99 100644
--- a/vendor/github.com/Azure/azure-storage-blob-go/azblob/zc_pipeline.go
+++ b/vendor/github.com/Azure/azure-storage-blob-go/azblob/zc_pipeline.go
@@ -17,6 +17,10 @@ type PipelineOptions struct {
 
 	// Telemetry configures the built-in telemetry policy behavior.
 	Telemetry TelemetryOptions
+
+	// HTTPSender allows the sender to be overridden. If it is nil, then
+	// the pipeline's default client is used to send the HTTP requests.
+	HTTPSender Factory
 }
 
 // NewPipeline creates a Pipeline using the specified credentials and options.
@@ -38,5 +42,5 @@ func NewPipeline(c Credential, o PipelineOptions) pipeline.Pipeline {
 		pipeline.MethodFactoryMarker(), // indicates at what stage in the pipeline the method factory is invoked
 		NewRequestLogPolicyFactory(o.RequestLog))
 
-	return pipeline.NewPipeline(f, pipeline.Options{HTTPSender: nil, Log: o.Log})
+	return pipeline.NewPipeline(f, pipeline.Options{HTTPSender: o.HTTPSender, Log: o.Log})
 }

[Proposal] High-level APIs should support taking io.WriterAt as output

Which version of the SDK was used?

Latest

Which platform are you using? (ex: Windows, Linux, Debian)

Linux

What problem was encountered?

No support for io.WriterAt.

How can we reproduce the problem in the simplest way?

N/A

Have you found a mitigation/solution?

Mitigate by writing directly to a file.

Description

I am writing a plugin for a parallel file system to archive back to Blob storage. The current implementation handles the IO but requires an io.Reader for upload and an io.WriterAt for download. There is currently no support for io.WriterAt. Would it be possible to add it?

Override http.Client or http.Transport in use by SDK?

Is it possible to override either the http.Client or http.Transport in use by the SDK?

This is very important for restricted environments, e.g. on Google App Engine.

Rclone uses this functionality to add http debugging and idle timeouts.

I'll just note that the AWS SDK and the Google SDK both allow overriding it :-)

Thanks

Project fails to compile

Seems like even a basic example fails to compile for me, likely due to the constraint on the azure-pipeline-go library.

vendor/github.com/Azure/azure-storage-blob-go/2016-05-31/azblob/credential_anonymous.go:15:21: undefined: pipeline.FactoryFunc

$ dep status

PROJECT CONSTRAINT VERSION REVISION LATEST PKGS USED
github.com/Azure/azure-pipeline-go 1.1.3 1.1.3 f4da77e 1.1.3 1
github.com/Azure/azure-storage-blob-go ^0.1.2 0.1.2 c91e48a 0.1.2 1

Also tested a build directly against master - same issue.


EDIT: this is because of the 1.1.3 constraint put in the project for azure-pipeline-go. It needs to be 1.1.4; the upstream source is out of sync.

Build error on DragonFlyBSD

% go get github.com/Azure/azure-storage-blob-go/azblob

# github.com/Azure/azure-storage-blob-go/azblob
go/src/github.com/Azure/azure-storage-blob-go/azblob/highlevel.go:144:7: undefined: mmf
go/src/github.com/Azure/azure-storage-blob-go/azblob/highlevel.go:146:12: undefined: newMMF
go/src/github.com/Azure/azure-storage-blob-go/azblob/highlevel.go:272:13: undefined: newMMF

Adding "dragonfly" to the // +build line of azblob/zc_mmf_unix.go fixes it.

OpenBSD and NetBSD build issue

Hi

I'm cross compiling one of my apps that I've just ported to this new SDK (from azure-sdk-for-go). I'm using gox to trigger a bunch of cross platform builds.

Most platforms are fine, but OpenBSD and NetBSD fail.

gox -ldflags "-X main.Version=0.4.0"
Number of parallel builds: 3

--> darwin/386: azurecopy/azurecopycommand
--> darwin/amd64: azurecopy/azurecopycommand
--> linux/amd64: azurecopy/azurecopycommand
--> linux/arm: azurecopy/azurecopycommand
--> linux/386: azurecopy/azurecopycommand
--> freebsd/386: azurecopy/azurecopycommand
--> freebsd/amd64: azurecopy/azurecopycommand
--> openbsd/386: azurecopy/azurecopycommand
--> openbsd/amd64: azurecopy/azurecopycommand
--> windows/386: azurecopy/azurecopycommand
--> windows/amd64: azurecopy/azurecopycommand
--> freebsd/arm: azurecopy/azurecopycommand
--> netbsd/386: azurecopy/azurecopycommand
--> netbsd/amd64: azurecopy/azurecopycommand
--> netbsd/arm: azurecopy/azurecopycommand

5 errors occurred:
--> openbsd/386 error: exit status 2
Stderr: # github.com/azure/azure-storage-blob-go/2016-05-31/azblob
....\github.com\azure\azure-storage-blob-go\2016-05-31\azblob\highlevel.go:165:7: undefined: mmf
....\github.com\azure\azure-storage-blob-go\2016-05-31\azblob\highlevel.go:167:12: undefined: newMMF

--> openbsd/amd64 error: exit status 2
Stderr: # github.com/azure/azure-storage-blob-go/2016-05-31/azblob
....\github.com\azure\azure-storage-blob-go\2016-05-31\azblob\highlevel.go:165:7: undefined: mmf
....\github.com\azure\azure-storage-blob-go\2016-05-31\azblob\highlevel.go:167:12: undefined: newMMF

--> netbsd/386 error: exit status 2
Stderr: # github.com/azure/azure-storage-blob-go/2016-05-31/azblob
....\github.com\azure\azure-storage-blob-go\2016-05-31\azblob\highlevel.go:165:7: undefined: mmf
....\github.com\azure\azure-storage-blob-go\2016-05-31\azblob\highlevel.go:167:12: undefined: newMMF

--> netbsd/amd64 error: exit status 2
Stderr: # github.com/azure/azure-storage-blob-go/2016-05-31/azblob
....\github.com\azure\azure-storage-blob-go\2016-05-31\azblob\highlevel.go:165:7: undefined: mmf
....\github.com\azure\azure-storage-blob-go\2016-05-31\azblob\highlevel.go:167:12: undefined: newMMF

--> netbsd/arm error: exit status 2
Stderr: # github.com/azure/azure-storage-blob-go/2016-05-31/azblob
....\github.com\azure\azure-storage-blob-go\2016-05-31\azblob\highlevel.go:165:7: undefined: mmf
....\github.com\azure\azure-storage-blob-go\2016-05-31\azblob\highlevel.go:167:12: undefined: newMMF

I don't have a minimum reproducible setup yet, but wanted to check if this "highlevel/mmf" error was already known. If not, I'll try and make a minimum repro step.

When using the azure-sdk-for-go, OpenBSD and NetBSD built fine.

Thanks

Ken

UploadStreamToBlockBlob does not return error correctly

UploadStreamToBlockBlob, which calls uploadStream, does not return errors to the user correctly. If the reader returns an error, we expect the call to fail the transfer; instead, the code below simply copies the stream into a buffer regardless of the error type.

n, err := io.ReadFull(reader, buffer)

Once we break out of the upload process here

if err != nil { // The reader is done, no more outgoing buffers

it simply masks the error that occurred and returns success to the caller.

I believe the correct behavior is to fail the transfer and return the error to the user; this used to be the behavior of CreateBlockBlobFromReader in the older azure-sdk-for-go.

Please let me know if you need any more details on this.

The HTTP response is not returned when an error occurs

When an error occurs in the low-level APIs/operations the wrappers of the HTTP responses are not returned. This behavior makes the implementation of scenarios that benefit from knowing the status of the response cumbersome, e.g. identify a 404 to handle when a container does not exist.

Any thoughts?

install error on Solaris

Similar issue to #23 on Solaris. Worked around by creating azblob/zc_mmf_solaris.go with following contents (from some searching/copy/pasting – I don't know Go).

// +build solaris

package azblob

import (
        "os"

        "golang.org/x/sys/unix"
)

type mmf []byte

func newMMF(file *os.File, writable bool, offset int64, length int) (mmf, error) {
        prot, flags := unix.PROT_READ, unix.MAP_SHARED // Assume read-only
        if writable {
                prot, flags = unix.PROT_READ|unix.PROT_WRITE, unix.MAP_SHARED
        }
        }
        addr, err := unix.Mmap(int(file.Fd()), offset, length, prot, flags)
        return mmf(addr), err
}

func (m *mmf) unmap() {
        err := unix.Munmap(*m)
        *m = nil
        if err != nil {
                panic("if we are unable to unmap the memory-mapped file, there is serious concern for memory corruption")
        }
}

use UTC for tests

It appears to me as if some of the tests should be using UTC but are actually using local time.
I will submit a PR with the changes.

NewSharedKeyCredential Panics with invalid key. Should return error

Which version of the SDK was used?

  name = "github.com/Azure/azure-storage-blob-go"
  packages = ["2016-05-31/azblob"]
  pruneopts = "UT"
  revision = "bb46532f68b79e9e1baca8fb19a382ef5d40ed33"
  version = "0.2.0"

Which platform are you using? (ex: Windows, Linux, Debian)

Linux (Ubuntu 18) Go 1.10.1

What problem was encountered?

The following call panics if an invalid key is passed to it. This is not a nice way to handle things. The method should return (creds, err), with the error containing some detail on why it failed, e.g.: "invalid key, must be valid base64 data".

	creds := azblob.NewSharedKeyCredential(accountName, storageAccountKey)

=== RUN   TestLockingEnd2End_WrongStorageKey
--- FAIL: TestLockingEnd2End_WrongStorageKey (0.00s)
panic: illegal base64 data at input byte 4 [recovered]
	panic: illegal base64 data at input byte 4

goroutine 20 [running]:
testing.tRunner.func1(0xc4201441e0)
	/usr/lib/go-1.10/src/testing/testing.go:742 +0x29d
panic(0x6b2a40, 0xc4200acb08)
	/usr/lib/go-1.10/src/runtime/panic.go:502 +0x229
github.com/lawrencegripper/goazurelocking/vendor/github.com/Azure/azure-storage-blob-go/2016-05-31/azblob.NewSharedKeyCredential(0xc42015e11b, 0xf, 0x7175b4, 0x7, 0x0)
	/home/lawrence/go/src/github.com/lawrencegripper/goazurelocking/vendor/github.com/Azure/azure-storage-blob-go/2016-05-31/azblob/credential_shared_key.go:23 +0x11c
github.com/lawrencegripper/goazurelocking.NewLockInstance(0x753760, 0xc4200b2bc0, 0xc42015e113, 0x2d, 0x7175b4, 0x7, 0xc4200ac870, 0xa, 0x37e11d600, 0xc42009ee10, ...)
	/home/lawrence/go/src/github.com/lawrencegripper/goazurelocking/locking.go:194 +0x215
github.com/lawrencegripper/goazurelocking.TestLockingEnd2End_WrongStorageKey(0xc4201441e0)
	/home/lawrence/go/src/github.com/lawrencegripper/goazurelocking/locking_integration_test.go:385 +0x1d1
testing.tRunner(0xc4201441e0, 0x72ca38)
	/usr/lib/go-1.10/src/testing/testing.go:777 +0xd0
created by testing.(*T).Run
	/usr/lib/go-1.10/src/testing/testing.go:824 +0x2e0
FAIL	github.com/lawrencegripper/goazurelocking	0.007s
Error: Tests failed.

How can we reproduce the problem in the simplest way?

Pass an account key of "somekey" to NewSharedKeyCredential.

Have you found a mitigation/solution?

Not yet, but I plan to validate that the key is valid base64 before making the call.
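A sketch of that pre-validation (validateAccountKey is a hypothetical helper, not part of the SDK):

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// validateAccountKey checks that the key decodes as base64 before it
// is handed to azblob.NewSharedKeyCredential, which panics on
// malformed input.
func validateAccountKey(key string) error {
	if _, err := base64.StdEncoding.DecodeString(key); err != nil {
		return fmt.Errorf("invalid account key, must be valid base64 data: %v", err)
	}
	return nil
}

func main() {
	fmt.Println(validateAccountKey("somekey"))      // rejected: not valid base64
	fmt.Println(validateAccountKey("c29tZWtleQ==")) // accepted
}
```

"somekey" fails because its length is not a multiple of four, which is the same condition that currently triggers the panic inside the credential constructor.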

Regression: support for the 2016/2017 API's has been removed

Which version of the SDK was used?

0.3.0

Which platform are you using? (ex: Windows, Linux, Debian)

n/a

What problem was encountered?

Prior to 0.3.0, the Azure Storage SDK for Go allowed the user to decide which version of the API should be used. This option is no longer available: users are required to use the 2018 APIs, which are not available in all regions. Unfortunately this means it's not possible to adopt version 0.3.0 of the Azure Storage SDK for Go.

The Azure SDK for Go has adopted a versioning strategy, based on discussion with the community, where multiple versions of the API are available, since not all versions of the Storage APIs are available in all Azure Regions/Sovereign Clouds (incl. Azure Stack). This allows users to pick the API versions most appropriate for their needs.

How can we reproduce the problem in the simplest way?

Navigate to https://github.com/Azure/azure-storage-blob-go and notice that the following paths are no longer present:

  • /2016-05-31/azblob
  • /2017-07-29/azblob

Whilst shipping only a single API version allows for simpler maintenance of this SDK (and some simpler uses of it), it unfortunately means it's not possible for us to adopt it. For reference, the Azure SDK for Go consulted the community on its API versioning strategy, which is where the support for multiple versions came from; to make day-1 support easier, the Go SDK also offers a 'latest' profile that points to the newest API version, so both use cases are supported.

On a separate note - whilst perhaps I'd missed the GitHub issue for this - was this major removal of functionality discussed with the community/users of this SDK before it was made?

Have you found a mitigation/solution?

Continuing to use the Storage SDK in the Azure SDK for Go - which is deprecated.

Will not build on macOS due to nonexistent mmap_darwin.go

I encountered this whilst testing on Mac. The error manifests as follows:

go get -u github.com/Azure/azure-storage-blob-go/2016-05-31/azblob

# github.com/Azure/azure-storage-blob-go/2016-05-31/azblob
../../go/src/github.com/Azure/azure-storage-blob-go/2016-05-31/azblob/highlevel.go:164:7: undefined: mmf
../../go/src/github.com/Azure/azure-storage-blob-go/2016-05-31/azblob/highlevel.go:166:12: undefined: newMMF

Resolution: requires an mmap_darwin.go in addition to the existing mmap_linux.go and mmap_windows.go.

Latest release of pipeline is not compatible with the latest release of storage-blob

You may want to put out a new release for this project. I just started a new project, ran dep ensure --add github.com/Azure/azure-storage-blob-go/2016-05-31/azblob, copy-pasted the code from the sample, and it failed to build:

vendor/github.com/Azure/azure-storage-blob-go/2016-05-31/azblob/credential_anonymous.go:27:53: undefined: pipeline.Node
vendor/github.com/Azure/azure-storage-blob-go/2016-05-31/azblob/credential_anonymous.go:36:7: undefined: pipeline.Node

The head of master works just fine, so after updating my Gopkg.toml to the following it worked:

[[constraint]]
  branch = "master"
  name = "github.com/Azure/azure-storage-blob-go"

If you push a new tag to this repo, then people won't run into that incompatibility problem between the latest releases.

correct branch

Hi

Just wondering which branch is used for development - is it dev or master? More activity appears to be happening on master.

Cheers

Ken

service principal access support

Which version of the SDK was used?

0.3.0

Which platform are you using? (ex: Windows, Linux, Debian)

linux

What problem was encountered?

Cannot use service principal to access blob storage.

How can we reproduce the problem in the simplest way?

The credential support is limited to shared key, anonymous, and token-based credentials.

Have you found a mitigation/solution?

no

Is there any plan to add service principal support? It would solve a lot of issues, especially cross-project account access.

metadata value with non-ascii characters

blobURL.PutBlob(ctx, strings.NewReader("Some text"), azblob.BlobHTTPHeaders{}, azblob.Metadata{"Foo": "/愛知県/bar"}, azblob.BlobAccessConditions{})

gives this error:
===== RESPONSE ERROR (ServiceCode=AuthenticationFailed) =====
Description=Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.
RequestId:daf8a672-001e-000e-2f4b-a033f3000000
Time:2018-02-07T19:38:09.6740273Z, Details:
AuthenticationErrorDetail: The MAC signature found in the HTTP request '' is not the same as any computed signature. Server used following string to sign: 'PUT

9

x-ms-blob-cache-control:
x-ms-blob-content-disposition:
x-ms-blob-content-encoding:
x-ms-blob-content-language:
x-ms-blob-content-type:
x-ms-blob-type:BlockBlob
x-ms-client-request-id:f18fd538-3780-4f62-5236-777ac244affa
x-ms-date:Wed, 07 Feb 2018 19:38:09 GMT
x-ms-meta-foo:/愛知県/bar
x-ms-version:2016-05-31
/MYACCOUNT/MYCONTAINER/ReadMe.txt
timeout:61.
PUT https://MYACCOUNT.blob.core.cloudapi.de/MYCONTAINER/ReadMe.txt?timeout=61
Authorization: REDACTED
Content-Length: [9]
User-Agent: [Azure-Storage/0.1 (go1.9.3; darwin)]
X-Ms-Blob-Cache-Control: []
X-Ms-Blob-Content-Disposition: []
X-Ms-Blob-Content-Encoding: []
X-Ms-Blob-Content-Language: []
X-Ms-Blob-Content-Type: []
X-Ms-Blob-Type: [BlockBlob]
X-Ms-Client-Request-Id: [f18fd538-3780-4f62-5236-777ac244affa]
X-Ms-Date: [Wed, 07 Feb 2018 19:38:09 GMT]
X-Ms-Meta-Foo: [/愛知県/bar]
X-Ms-Version: [2016-05-31]

RESPONSE Status: 403 Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.
Content-Length: [936]
Content-Type: [application/xml]
Date: [Wed, 07 Feb 2018 19:38:09 GMT]
Server: [Microsoft-HTTPAPI/2.0]
X-Ms-Request-Id: [daf8a672-001e-000e-2f4b-a033f3000000]

exit status 1

Is this because "/愛知県/bar" != "/愛知県/bar"?
Do you see any way to set non-ascii character like "/愛知県/bar" as a metadata value?
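Until the signing issue is addressed, one common workaround is to keep header values ASCII by percent-encoding metadata values before upload and decoding them after download. The helper names below are hypothetical, not part of the SDK:

```go
package main

import (
	"fmt"
	"net/url"
)

// encodeMetadataValue percent-encodes a metadata value so that only
// ASCII bytes reach the x-ms-meta-* header; callers must decode the
// value again on read. (Hypothetical helper, not part of the SDK.)
func encodeMetadataValue(v string) string {
	return url.QueryEscape(v)
}

// decodeMetadataValue reverses encodeMetadataValue.
func decodeMetadataValue(v string) (string, error) {
	return url.QueryUnescape(v)
}

func main() {
	enc := encodeMetadataValue("/愛知県/bar")
	fmt.Println(enc) // all-ASCII form, safe for an HTTP header

	dec, _ := decodeMetadataValue(enc)
	fmt.Println(dec == "/愛知県/bar") // round-trips losslessly
}
```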

API Stability and Versioning

Hi,

There have been a number of breaking high-level API changes recently. Considering this is still v0.1 there is nothing unusual about that, but we (and probably other Azure-based developers) would benefit from having an idea of when you plan to stabilise the high-level API and how you will version/release thereafter.

A note in the README would be most helpful.

Thanks

Michael

Azure/azure-storage-go split

I can understand the need to split the different packages, as announced on the roadmap.
https://azure.microsoft.com/en-us/blog/preview-the-new-azure-storage-sdk-for-go-storage-sdks-roadmap/

What I find VERY inconvenient is to simply remove the entire Go SDK library for storage before the split is even in place.

If I make a parallel to what was done with PHP, the following exist:


Where is the Go File Storage SDK???

Please don't panic on DecodeString

I was surprised to see all of the implementations of NewSharedKeyCredential panic when decoding the account key.

I would expect the error to be returned, rather than a panic.

@JeffreyRichter, there are over 100 panic calls in this library. Is this intended? I haven't found an instance of a panic call which wouldn't be better handled by returning an error. Thoughts?
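Until the panics are replaced by returned errors, callers can shield themselves with a recover-based wrapper. `safeCall` below is a hypothetical generic shim, demonstrated against a stand-in function rather than the real NewSharedKeyCredential:

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// safeCall converts a panic inside f into a returned error - a generic
// workaround for library functions that panic instead of failing.
// (Hypothetical helper, not part of the SDK.)
func safeCall(f func()) (err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("recovered from panic: %v", r)
		}
	}()
	f()
	return nil
}

// mustDecodeKey mimics the SDK's behaviour of panicking on a bad key.
func mustDecodeKey(key string) []byte {
	b, err := base64.StdEncoding.DecodeString(key)
	if err != nil {
		panic(err)
	}
	return b
}

func main() {
	err := safeCall(func() { mustDecodeKey("somekey") })
	fmt.Println(err != nil) // true: the panic became an error
}
```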

Expired Context Causes Hang

If you supply an expired context in a request, it will cause the library to hang indefinitely.

For example

ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()

// do stuff for 10+ seconds

containerURL.Delete(ctx, azblob.ContainerAccessConditions{}) // this or any other call will hang

[Feature Request] Cleanup failed UploadBufferToBlockBlob?

Part of this is a "how does Azure's Storage API work" question - what happens if a file is abandoned (for whatever reason) during a block blob upload and never "cleaned up"? Are the blocks counted against the user in terms of storage? If so, how can we "clear" this abandoned upload?

Google's Cloud Storage will auto-expire upload sessions after 1 week. AWS S3 allows you to abort a multipart upload. Additionally, S3 allows you to place lifecycle rules on failed multipart uploads to reclaim the space that is otherwise inaccessible but billed to your account.

Can the high-level APIs provided by this SDK handle the appropriate post-error action? I know the AWS S3 Go SDK will actually abort a multipart upload if it fails for whatever reason, unless an option is passed in to "keep parts". I think it would be helpful, and would align users with best practices, if the SDK performed the correct action after a failed block blob upload.
