
fsouza / fake-gcs-server


Google Cloud Storage emulator & testing library.

Home Page: https://pkg.go.dev/github.com/fsouza/fake-gcs-server/fakestorage?tab=doc

License: BSD 2-Clause "Simplified" License

Languages: Go 98.84%, Dockerfile 0.14%, Shell 1.02%
Topics: gcp, gcs, emulator, storage, google-cloud-storage, hacktoberfest

fake-gcs-server's Introduction

fake-gcs-server


fake-gcs-server provides an emulator for the Google Cloud Storage API. It can be used as a library in Go projects and/or as a standalone binary/Docker image.

The library is available inside the package github.com/fsouza/fake-gcs-server/fakestorage and can be used from within test suites in Go packages. The emulator is available as a binary that can be built manually, downloaded from the releases page, or pulled from Docker Hub (docker pull fsouza/fake-gcs-server).
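
For illustration, here is a minimal sketch of using the library from a Go program (the Object field layout is an assumption based on the package docs and may differ across versions):

package main

import (
	"context"
	"fmt"
	"io/ioutil"
	"log"

	"github.com/fsouza/fake-gcs-server/fakestorage"
)

func main() {
	// Start an in-process fake server preloaded with a single object.
	server := fakestorage.NewServer([]fakestorage.Object{
		{BucketName: "sample-bucket", Name: "some_file.txt", Content: []byte("inside the file")},
	})
	defer server.Stop()

	// server.Client() returns a *storage.Client already wired to the fake server.
	reader, err := server.Client().Bucket("sample-bucket").Object("some_file.txt").NewReader(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	defer reader.Close()

	data, err := ioutil.ReadAll(reader)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(data)) // inside the file
}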

Using the emulator in Docker

You can stub/mock Google Cloud Storage as a standalone server (like the datastore/pubsub emulators), which is ideal for integration tests and/or tests in other languages. To do that, run fake-gcs-server inside a Docker container:

docker run -d --name fake-gcs-server -p 4443:4443 fsouza/fake-gcs-server

Preload data

In case you want to preload some data into fake-gcs-server, just mount a folder in the container at /data:

docker run -d --name fake-gcs-server -p 4443:4443 -v ${PWD}/examples/data:/data fsouza/fake-gcs-server

Where the content of ${PWD}/examples/data is something like:

.
└── sample-bucket
    └── some_file.txt

To make sure everything works as expected, you can execute these commands:

curl --insecure https://0.0.0.0:4443/storage/v1/b
{"kind":"storage#buckets","items":[{"kind":"storage#bucket","id":"sample-bucket","name":"sample-bucket"}],"prefixes":null}

curl --insecure https://0.0.0.0:4443/storage/v1/b/sample-bucket/o
{"kind":"storage#objects","items":[{"kind":"storage#object","name":"some_file.txt","id":"sample-bucket/some_file.txt","bucket":"sample-bucket","size":"33"}],"prefixes":[]}

This will result in one bucket called sample-bucket containing one object called some_file.txt.

Running with HTTP

fake-gcs-server defaults to HTTPS, but it can also be used with HTTP. The flag -scheme can be used to specify the protocol.
The binding port will be -port (defaults to 4443).
For example, the previous command could be changed to pass -scheme http:

docker run -d --name fake-gcs-server -p 4443:4443 -v ${PWD}/examples/data:/data fsouza/fake-gcs-server -scheme http

And now we can curl it without the --insecure flag and using http:// instead of https://:

curl http://0.0.0.0:4443/storage/v1/b
{"kind":"storage#buckets","items":[{"kind":"storage#bucket","id":"sample-bucket","name":"sample-bucket"}],"prefixes":null}

curl http://0.0.0.0:4443/storage/v1/b/sample-bucket/o
{"kind":"storage#objects","items":[{"kind":"storage#object","name":"some_file.txt","id":"sample-bucket/some_file.txt","bucket":"sample-bucket","size":"33"}],"prefixes":[]}

Running with both HTTPS and HTTP

To start both HTTPS and HTTP servers, pass -scheme both.
HTTPS will bind to -port (defaults to 4443) and HTTP will bind to -port-http (defaults to 8000).
For example, the previous command could be changed to pass -scheme both:

docker run -d --name fake-gcs-server -p 4443:4443 -p 8000:8000 -v ${PWD}/examples/data:/data fsouza/fake-gcs-server -scheme both

Using with signed URLs

It is possible to use fake-gcs-server with signed URLs, although with a few caveats (a client-side sketch follows this list):

  • No validation is made on the query params (signature, expiration ...)
  • You need your client to modify the URL before passing it around (replace storage.googleapis.com with something that points to fake-gcs-server)
  • You need to configure fake-gcs-server to accept this local URL (by setting -public-host)
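
For illustration, a minimal client-side sketch of the last two caveats. The host 127.0.0.1:4443 is an assumption for this sketch; the server would need to be started with -public-host 127.0.0.1:4443 to accept requests addressed that way:

package main

import (
	"fmt"
	"strings"
)

// rewriteSignedURL is a hypothetical helper that swaps the real GCS host for
// a local fake-gcs-server address before the signed URL is passed around.
func rewriteSignedURL(signed string) string {
	return strings.Replace(signed, "https://storage.googleapis.com", "https://127.0.0.1:4443", 1)
}

func main() {
	signed := "https://storage.googleapis.com/sample-bucket/some_file.txt?X-Goog-Signature=abc123"
	fmt.Println(rewriteSignedURL(signed))
}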

Available server flags

fake-gcs-server supports various features that can be configured through flags upon start. Use the -help flag to list all of them alongside their usage instructions:

docker run --rm fsouza/fake-gcs-server -help

Client library examples

For examples using SDKs in multiple languages, check out the examples directory.

Building the image locally

You may use docker build to build the image locally instead of pulling it from Docker Hub:

docker build -t fsouza/fake-gcs-server .

fake-gcs-server's People

Contributors

antonydenyer, chrisrecalis, cuonglm, darthsim, dcaba, dependabot-preview[bot], dependabot[bot], dnerdy, dpoetzsch, fsouza, gaul, gh2k, gracenoah, ironsmile, jwhitlock, kenniaa, le0pard, lelikg, m4gus88, manuteleco, mheffner, rachitsharma2001, raz-amir, renovate-bot, sergseven, snawaz, solidsystem, teone, unafraid, zigius


fake-gcs-server's Issues

Dependabot couldn't find a Dockerfile for this project

Dependabot couldn't find a Dockerfile for this project.

Dependabot requires a Dockerfile to evaluate your project's current Docker dependencies. It had expected to find one at the path: /drone/Dockerfile.

If this isn't a Docker project, or if it is a library, you may wish to disable updates for it from within Dependabot.

You can mention @dependabot in the comments below to contact the Dependabot team.

Error: Could not load the default credentials

I was playing around with the fake-gcs-server Docker image. Based on the Node.js example from this repo, I copy-pasted the sample code and modified it a little:

process.env["NODE_TLS_REJECT_UNAUTHORIZED"] = 0;
process.on("unhandledRejection", err => {throw err});

async function testOperation() {
  
  const {Storage} = require("@google-cloud/storage");

  const storage = new Storage({
    apiEndpoint: "127.0.0.1:4443",
    projectId: "test",
  });

  const bucket = storage.bucket('photos');

  bucket.create()
    .then(function(data) {
      const b = data[0];
      console.log(b.name)
    })

  const file = bucket.file('note.txt');
  const contents = 'Happy working prototype!';

  file.save(contents).then(function() {});

}

testOperation().then(console.log);

When I run it, I got:

process.on("unhandledRejection", err => {throw err});
                                         ^

Error: Could not load the default credentials. Browse to https://cloud.google.com/docs/authentication/getting-started for more information.
    at GoogleAuth.getApplicationDefaultAsync (/Users/Alex/Desktop/projects/contestio/api/node_modules/google-auth-library/build/src/auth/googleauth.js:160:19)
    at processTicksAndRejections (internal/process/task_queues.js:97:5)
    at async GoogleAuth.getClient (/Users/Alex/Desktop/projects/contestio/api/node_modules/google-auth-library/build/src/auth/googleauth.js:502:17)
    at async GoogleAuth.request (/Users/Alex/Desktop/projects/contestio/api/node_modules/google-auth-library/build/src/auth/googleauth.js:555:24)
    at async Upload.makeRequest (/Users/Alex/Desktop/projects/contestio/api/node_modules/gcs-resumable-upload/build/src/index.js:316:21)
    at async Upload.createURIAsync (/Users/Alex/Desktop/projects/contestio/api/node_modules/gcs-resumable-upload/build/src/index.js:142:22)

Python GCS client uses generation=0 when it is None, which is not supported/implemented yet.

The GCS client forms the GET URL with the param generation=0, which is not implemented in the server. So, if it's not implemented, the quick fix may be replacing all
So, if it's not implemented, the quick fix may be replacing all

if generationStr != ""

with

if generationStr != "" && generationStr != "0"

so that it uses Server.GetObject()

Otherwise, get/download object will fail. We can easily reproduce this by running the example code.

Just wondering if GetObjectWithGeneration() will be implemented soon, or whether there is a better fix?
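
For clarity, a runnable sketch of the guard proposed above (a hypothetical helper, not the server's actual handler code):

package main

import (
	"fmt"
	"strconv"
)

// parseGeneration treats an absent generation parameter and the literal "0"
// the same way, so the caller can fall back to a plain Server.GetObject
// instead of a generation-specific lookup.
func parseGeneration(generationStr string) (gen int64, specific bool) {
	if generationStr != "" && generationStr != "0" {
		if g, err := strconv.ParseInt(generationStr, 10, 64); err == nil {
			return g, true
		}
	}
	return 0, false
}

func main() {
	for _, s := range []string{"", "0", "1579347900000000"} {
		gen, specific := parseGeneration(s)
		fmt.Printf("%q -> generation=%d specific=%v\n", s, gen, specific)
	}
}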

DockerHub Image Causes Exception

Pulling fsouza/fake-gcs-server from scratch causes an exception in standard_init_linux.go:

$ docker run --name fake-gcs-server -p 4443:4443 fsouza/fake-gcs-server
Unable to find image 'fsouza/fake-gcs-server:latest' locally
latest: Pulling from fsouza/fake-gcs-server
050382585609: Pull complete
ac2a410f0388: Pull complete
Digest: sha256:a3364067c3887a157cf16b0f91a0c7412393a0c5183cf396c220a2b9e2755dd4
Status: Downloaded newer image for fsouza/fake-gcs-server:latest
standard_init_linux.go:211: exec user process caused "no such file or directory"

However, building the image from the repo and running the same is successful.

$ docker run --name fake-gcs-server -p 4443:4443 fsouza/fake-gcs-server
$ time="2019-07-22T15:17:21Z" level=info msg="couldn't load any objects from \"/data\", starting empty"
$ time="2019-07-22T15:17:21Z" level=info msg="server started at https://[::]:4443"

Checking if a folder exists is not working as expected

When connecting to Google's endpoint using:
blob = storage.Blob(bucket=bucket, name="somefolder/")
fileExists = blob.exists() # true

However, when overriding the storage endpoint etc. as explained in the Python examples and using the exact same code, fileExists is now false. It works as expected when checking for a file, for example name="somefolder/somefile.txt".

Trying to download blob calls original googleapis url

Hi, thanks for all the great work so far! I'm interested in downloading files with the server and am not sure if that is supported right now.

I can follow the steps in the readme to get a blob object. I would like to call blob.download_as_string() (or any similar download function), but when I do, I get:

Unauthorized: 401 GET https://www.googleapis.com/download/storage/v1/b/<bucket>/o<object_name>?alt=media: (u'Request failed with status code', 401, u'Expected one of', 200, 206)

Is this expected? Wasn't sure if blob download is supported atm.

fake-gcs-server eats all memory

I've downloaded the v1.18.1 binary release for Linux and launched the server like this:

$ fake-gcs-server -filesystem-root /home/antoine/t -scheme http

Without even issuing a single request, fake-gcs-server started allocating a ton of memory and eating CPU, making my computer crawl:

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
31933 antoine   20   0 52,306g 0,044t      0 S  24,4 95,5   1:48.97 fake-gcs-server

I had to kill the process.

The dependency on cloud.google.com/go doesn't need to be 0.19.0

In Gopkg.toml, there's a dependency on

  name = "cloud.google.com/go"
  version = "0.19.0"

and this was forcing me to upgrade cloud.google.com/go, but fake-gcs-server doesn't actually depend on anything new in this version, so I would prefer it if a lower version was required, such as 0.18.0. I'm working around this with a fork because I already have a fork, but maybe there's a way to tell dep to override it. Either way, it seems simple enough to lower the required version in this repo. Any objections?

"open /storage/: no such file or directory" - Docker on Windows

I'm trying to run the server, here is my command to start the Docker container:
docker run -d --name fake-gcs-server -p 4443:4443 -v %CD%\bot\tests\data:/data fsouza/fake-gcs-server

It starts with no problems, but using both cURL and Postman gives the same response when hitting https://127.0.0.1:4443/storage/v1/b:

open /storage/: no such file or directory

I'm clearly missing something here, but can't think what...

Dependabot can't parse your go.mod

Dependabot couldn't parse the go.mod found at /go.mod.

The error Dependabot encountered was:

go: cloud.google.com/go/[email protected] requires
	honnef.co/go/[email protected] requires
	github.com/rogpeppe/[email protected] requires
	gopkg.in/[email protected] requires
	gopkg.in/[email protected]: invalid version: git fetch -f origin refs/heads/*:refs/heads/* refs/tags/*:refs/tags/* in /opt/go/gopath/pkg/mod/cache/vcs/9241c28341fcedca6a799ab7a465dd6924dc5d94044cbfabb75778817250adfc: exit status 128:
	fatal: The remote end hung up unexpectedly

View the update logs.

Error - Download file

Hello everyone,

Sorry to bother opening an issue, but I have a question about the implementation, because I'm having an error when trying to download a file.

When downloading the file with the code:

func (video *Video) Download(bucketName string, storagePath string) (Video, error) {
	ctx := context.TODO()

	//client, err := storage.NewClient(ctx)
	transCfg := &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // ignore expired SSL certificates
	}
	httpClient := &http.Client{Transport: transCfg}

	client, err := storage.NewClient(ctx, option.WithEndpoint("https://fake:4443/storage/v1/"), option.WithHTTPClient(httpClient))


	if err != nil {
		video.Status = "error"
		fmt.Println(err.Error())
		return *video, err
	}


	bkt := client.Bucket(bucketName)

	obj := bkt.Object(video.Path)

	r, err := obj.NewReader(ctx)

	if err != nil {
		video.Status = "error"
		fmt.Println(err.Error())
		return *video, err
	}

	defer r.Close()

	body, err := ioutil.ReadAll(r)
	if err != nil {
		video.Status = "error"
		fmt.Println(err.Error())
		return *video, err
	}

	f, err := os.Create(storagePath + "/" + video.Uuid + ".mp4")

	if err != nil {
		video.Status = "error"
		fmt.Println(err.Error())
		return *video, err
	}

	_, err = f.Write(body)

	if err != nil {
		video.Status = "error"
		fmt.Println(err.Error())
		return *video, err
	}

	defer f.Close()

	fmt.Println("Video ", video.Uuid, "has been stored")

	return *video, nil
}

The error comes from NewReader.

Error:
storage: object doesn't exist

If I access the file via URL, I can do it normally.

The repository for my application is this:
https://github.com/WillRy/golang-gcs-test

Note: I have files preloaded through volumes in docker-compose.yaml.

Did I make a mistake?

Thank you very much in advance.

gsutil cp bad request

I need to test some bash scripts which run gsutil but I keep getting errors when trying to create objects using gsutil. The scenario I run is basically:

docker run --name fake-gcs-server -p 4443:4443 fsouza/fake-gcs-server

gsutil -o "Credentials:gs_json_host=0.0.0.0" -o "Credentials:gs_json_port=4443" -o "Boto:https_validate_certificates=False" mb "gs://test"

gsutil -o "Credentials:gs_json_host=0.0.0.0" -o "Credentials:gs_json_port=4443" -o "Boto:https_validate_certificates=False" ls "gs://test"

echo "Hello" | gsutil -o "Credentials:gs_json_host=0.0.0.0" -o "Credentials:gs_json_port=4443" -o "Boto:https_validate_certificates=False" cp - "gs://test/hello.txt"

Copying from ...
ResumableUploadStartOverException: 404 Bad Request

gsutil -o "Credentials:gs_json_host=0.0.0.0" -o "Credentials:gs_json_port=4443" -o "Boto:https_validate_certificates=False" cp ./tmp/funny-memes-81.jpg "gs://test/"

Copying file://./tmp/funny-memes-81.jpg [Content-Type=image/jpeg]...
BadRequestException: 400 Bad Request

How to use the server?

Hey! Nice project, appreciate it!

But I'm wondering: how can I use the server in my Go apps? For example, I can use the pubsub emulator by setting PUBSUB_EMULATOR_HOST to the address where the pubsub emulator is running. This way the client connects directly to the one running on that port. Is there also some kind of mechanism for the Go storage SDK?

In other words: how can I point the client to the emulated server?

I tried to set the options while creating the storage client, but I can only override the endpoint, not the hostname + port.
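
For reference, other reports on this page point the Go client at the emulator by overriding the endpoint and skipping TLS verification. A minimal sketch of that pattern, assuming the default Docker setup from the README (https://127.0.0.1:4443):

package main

import (
	"context"
	"crypto/tls"
	"log"
	"net/http"

	"cloud.google.com/go/storage"
	"google.golang.org/api/option"
)

func main() {
	ctx := context.Background()
	// Skip certificate verification: fake-gcs-server serves a self-signed
	// certificate by default.
	httpClient := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	client, err := storage.NewClient(ctx,
		option.WithEndpoint("https://127.0.0.1:4443/storage/v1/"),
		option.WithHTTPClient(httpClient))
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	// client is now a regular *storage.Client talking to the emulator.
	_ = client.Bucket("sample-bucket")
}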


Content type not preserved

Thank you for your work on this module - it has made testing with GCS a breeze.

It looks to me like the content type of my objects is not getting preserved. Here is a function that tries to set the content type for a new object and then get the same from the object's attrs:

func checkContentType() {
	bucketName := "test-bucket"
	server := fakestorage.NewServer(nil)
	server.CreateBucket(bucketName)
	client := server.Client()
	defer client.Close()

	bucket := client.Bucket(bucketName)
	object := bucket.Object("test-object")
	writer := object.NewWriter(context.Background())
	defer writer.Close()

	// trying to set content type
	writer.ContentType = "text/html"
	
	reader := strings.NewReader("<body>content</body>")
	if _, err := io.Copy(writer, reader); err != nil {
		log.Fatalln(err)
	}
	if err := writer.Close(); err != nil {
		log.Fatalln(err)
	}

	attrs, err := object.Attrs(context.Background())
	if err != nil {
		log.Fatalln(err)
	}

	fmt.Printf("content-type: %s\n", attrs.ContentType)
}

I'd expect that to print content-type: text/html. Instead it prints content-type: .

Let me know if this looks like a bug. I'd be happy to take a pass at a patch.

Deleting a bucket fails

First off, amazing project! Much needed.

When I try to delete a bucket to clean up after my tests, it fails:

	gcsTestBucketName := "test_bucket"
	transCfg := &http.Transport{
		// nolint:gosec // Purposely connecting to a local self-signed certificate endpoint
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // ignore expired SSL certificates
	}
	httpClient := &http.Client{Transport: transCfg}
	client, err := storage.NewClient(ctx, option.WithEndpoint(os.Getenv("GCS_FAKE_SERVER")+"/storage/v1/"), option.WithHTTPClient(httpClient))
	if err != nil {
		t.Fatal(err)
	}

	if err = client.Bucket(gcsTestBucketName).Create(ctx, "test-project", nil); err != nil {
		t.Fatalf("could not create bucket; %v", err)
	}

	defer func() {
		if err = client.Bucket(gcsTestBucketName).Delete(ctx); err != nil {
			t.Errorf("could not delete bucket: %v", err)
		}
	}()

I end up getting a panic:

panic: runtime error: invalid memory address or nil pointer dereference [recovered]
        panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x30 pc=0xc1854e]

goroutine 8 [running]:
testing.tRunner.func1.1(0xd66900, 0x1725600)
        /usr/lib/go/src/testing/testing.go:940 +0x2f5
testing.tRunner.func1(0xc0003ac900)
        /usr/lib/go/src/testing/testing.go:943 +0x3f9
panic(0xd66900, 0x1725600)
        /usr/lib/go/src/runtime/panic.go:969 +0x166
cloud.google.com/go/storage.(*BucketHandle).newDeleteCall(0xc000017cb8, 0xc000079d40, 0xeb5bc4, 0x29)
        /home/user/go/pkg/mod/cloud.google.com/go/[email protected]/bucket.go:107 +0x3e
cloud.google.com/go/storage.(*BucketHandle).Delete(0xc000017cb8, 0x10f9c40, 0xc000079d40, 0x0, 0x0)
        /home/user/go/pkg/mod/cloud.google.com/go/[email protected]/bucket.go:99 +0xed
// ...

CopierFrom Call

Currently, calling CopierFrom(src).Run(ctx) yields a 404 error.
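
For context, a sketch of the call pattern in question; the bucket and object names are hypothetical, and the client setup is assumed to follow the endpoint-override pattern shown in other issues here:

package main

import (
	"context"
	"log"

	"cloud.google.com/go/storage"
)

// copyObject performs the server-side copy that reportedly 404s against
// fake-gcs-server. CopierFrom and Run are from cloud.google.com/go/storage.
func copyObject(ctx context.Context, client *storage.Client) error {
	src := client.Bucket("src-bucket").Object("src-object")
	dst := client.Bucket("dst-bucket").Object("dst-object")
	_, err := dst.CopierFrom(src).Run(ctx)
	return err
}

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx) // wire to the fake server as shown in other issues
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	if err := copyObject(ctx, client); err != nil {
		log.Fatalf("copy failed: %v", err)
	}
}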

Object retrieval fails with query path constructed by storage.NewReader()

storage.NewReader() fails to read a newly created object. The reason is that the GET query path is constructed as "/bucket/object" in storage.NewRangeReader, and fake-gcs-server appears to only support paths like "/storage/v1/b/bucket/o/object".

Steps to recreate the issue are below:

  1. Start a fake-gcs-server with docker-compose.
>$ cat docker-compose.yaml
version: "2.2"

services:
  gcs: 
    hostname: gcs
    image: fsouza/fake-gcs-server:latest
    ports:
      - "4443:4443"
>$ docker-compose up -d gcs
Creating network "fake-gcs-server-bug_default" with the default driver
Creating fake-gcs-server-bug_gcs_1 ... done
  2. Run the Go client to create a bucket, write an object, and then attempt to read the object (which fails).
>$ cat main.go
package main

import (
	"context"
	"crypto/tls"
	"flag"
	"fmt"
	"io/ioutil"
	"log"
	"net/http"

	"cloud.google.com/go/storage"
	"google.golang.org/api/option"
)

var (
	projectName = flag.String("prj.name", "test-project", "")
	bucketName  = flag.String("bkt.name", "test-bucket", "")
	fileName    = flag.String("obj.name", "test-file", "")
)

func main() {
	flag.Parse()
	transCfg := &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // ignore expired SSL certificates
	}
	httpClient := &http.Client{Transport: transCfg}
	ctx := context.Background()
	client, err := storage.NewClient(ctx, option.WithEndpoint("https://0.0.0.0:4443/storage/v1/"), option.WithHTTPClient(httpClient))
	if err != nil {
		log.Fatalf("Failed to create client: %v", err)
	}
	bkt := client.Bucket(*bucketName)
	if _, err := bkt.Attrs(ctx); err != nil {
		log.Printf("Creating bucket %v", *bucketName)
		if err = bkt.Create(ctx, *projectName, nil); err != nil {
			log.Fatalf("Failed to create bucket: %v", err)
		}
	}
	log.Printf("Writing file %v", *fileName)
	err = writeFile(ctx, client, *bucketName, *fileName)
	if err != nil {
		log.Fatalf("Failed to write file: %v", err)
	}

	data, err := downloadFile(ctx, client, *bucketName, *fileName)
	if err != nil {
		log.Fatalf("Failed to read object %v:%v/%v: %v", *projectName, *bucketName, *fileName, err)
	}
	log.Printf("read %v bytes from object %v:%v/%v", len(data), *projectName, *bucketName, *fileName)
}

func writeFile(ctx context.Context, client *storage.Client, bucketName, fileKey string) error {
	writer := client.Bucket(bucketName).Object(fileKey).NewWriter(ctx)
	defer writer.Close()
	_, err := fmt.Fprintf(writer, "This object contains text.\n")
	return err
}

func downloadFile(ctx context.Context, client *storage.Client, bucketName, fileKey string) ([]byte, error) {
	reader, err := client.Bucket(bucketName).Object(fileKey).NewReader(ctx)
	if err != nil {
		return nil, err
	}
	defer reader.Close()
	return ioutil.ReadAll(reader)
}

>$ go mod init test-module
>$ go mod vendor
>$ go run -mod vendor main.go
2020/03/23 10:02:13 Creating bucket test-bucket
2020/03/23 10:02:13 Writing file test-file
2020/03/23 10:02:13 Failed to read object test-project:test-bucket/test-file: storage: object doesn't exist
exit status 1
  3. Manually retrieve the object via cURL.
>$ curl --insecure https://0.0.0.0:4443/storage/v1/b/test-bucket/o/test-file
{"kind":"storage#object","name":"test-file","id":"test-bucket/test-file","bucket":"test-bucket","size":"27","contentType":"text/plain; charset=utf-8","crc32c":"vJADtw==","acl":[{"bucket":"test-bucket","entity":"projectOwner","object":"test-file","role":"OWNER"}],"md5Hash":"PRQTwE6LQBuo7v10WdA/LQ==","timeCreated":"2020-03-23T14:55:32Z","timeDeleted":"0001-01-01T00:00:00Z","updated":"2020-03-23T14:55:32Z","generation":"0"}
  4. Check the server logs.
>$ docker-compose logs
Attaching to fake-gcs-server-bug_gcs_1
gcs_1  | time="2020-03-23T15:01:59Z" level=info msg="couldn't load any objects or buckets from \"/data\", starting empty"
gcs_1  | time="2020-03-23T15:01:59Z" level=info msg="server started at https://[::]:4443"
gcs_1  | time="2020-03-23T15:02:13Z" level=info msg="192.168.224.1 - - [23/Mar/2020:15:02:13 +0000] \"GET /storage/v1/b/test-bucket?alt=json&prettyPrint=false&projection=full HTTP/1.1\" 404 59"
gcs_1  | time="2020-03-23T15:02:13Z" level=info msg="192.168.224.1 - - [23/Mar/2020:15:02:13 +0000] \"POST /storage/v1/b?alt=json&prettyPrint=false&project=test-project HTTP/1.1\" 200 119"
gcs_1  | time="2020-03-23T15:02:13Z" level=info msg="192.168.224.1 - - [23/Mar/2020:15:02:13 +0000] \"POST /upload/storage/v1/b/test-bucket/o?alt=json&prettyPrint=false&projection=full&uploadType=multipart HTTP/1.1\" 200 345"
gcs_1  | time="2020-03-23T15:02:13Z" level=info msg="192.168.224.1 - - [23/Mar/2020:15:02:13 +0000] \"GET /test-bucket/test-file HTTP/1.1\" 404 19"
gcs_1  | time="2020-03-23T15:02:24Z" level=info msg="192.168.224.1 - - [23/Mar/2020:15:02:24 +0000] \"GET /storage/v1/b/test-bucket/o/test-file HTTP/1.1\" 200 425"

From the server logs we can see that the cURL object retrieval works with the path "/storage/v1/b/test-bucket/o/test-file", but the Go client's object retrieval with path "/test-bucket/test-file" fails.

The Go client works with a real GCS server. Changing the URL path to "/storage/v1/b/%s/o/%s" in the storage.NewRangeReader function also works.

invalid character '\x1f' looking for beginning of value when uploading via java apis

Hi, I'm trying to create a bucket in the fake GCS server using the Java APIs (which don't seem to be documented; happy to contribute some docs once I get this working), but I'm getting:

 invalid character '\x1f' looking for beginning of value
[info]   at com.google.cloud.storage.spi.v1.HttpStorageRpc.translate(HttpStorageRpc.java:229)
[info]   at com.google.cloud.storage.spi.v1.HttpStorageRpc.create(HttpStorageRpc.java:272)
[info]   at com.google.cloud.storage.StorageImpl$2.call(StorageImpl.java:121)
[info]   at com.google.cloud.storage.StorageImpl$2.call(StorageImpl.java:118)
[info]   at com.google.api.gax.retrying.DirectRetryingExecutor.submit(DirectRetryingExecutor.java:105)
[info]   at com.google.cloud.RetryHelper.run(RetryHelper.java:76)
[info]   at com.google.cloud.RetryHelper.runWithRetries(RetryHelper.java:50)
[info]   at com.google.cloud.storage.StorageImpl.create(StorageImpl.java:117)

This is the code:

    val transportFactory = new HttpTransportFactory {
      override def create(): HttpTransport =
        new NetHttpTransport.Builder().doNotValidateCertificate().build()
    }

    val storage = StorageOptions.newBuilder
      .setCredentials(NoCredentials.getInstance)
      .setTransportOptions(HttpTransportOptions.newBuilder().setHttpTransportFactory(transportFactory).build())
      .setHost(s"https://$host:$port")
      .setProjectId(projectId)
      .build()
      .getService

    storage.create(BucketInfo.of(bucketName))

Can't really figure out what I'm doing wrong. Does anybody have any idea?

Support the GRPC API?

Certain clients use only the gRPC API, and we currently don't support it.

As reported in #84 and #142, the Java clients seem to use only gRPC.

Open as a question because I don't know yet how much work that would be.

Support http schema?

Hi there,

I have started a fake-gcs-server Docker container to do unit tests.
Then I ran into some problems with a wrong reader host (it should point at fake-gcs-server, e.g. 127.0.0.1:4443).

I don't want to modify my /etc/hosts, so I tried to set the $STORAGE_EMULATOR_HOST env var in my unit test. The GCS library assigns the http scheme [LINK] when the STORAGE_EMULATOR_HOST env is set, but fake-gcs-server's scheme is https... XD

So would you like to add a flag to indicate whether the scheme is https or http?
Or do you have another suggestion?

"prefixes" key is always in the response of Object:list even if it's empty.

Version 1.18 (Docker Hub image)

The responses of /storage/v1/b/bucket/o (Objects: list) on the fake server always have a "prefixes" key, even when it's empty, while the genuine server's responses omit that key.
If "prefixes" is empty, fake-gcs-server returns the "prefixes" key with an empty array.

For example (using Google's public bucket gs://gcp-public-data-landsat/ and the same directory structure on fake-gcs-server):

The fake-gcs-server response is

# wget --no-proxy --no-check-certificate -q -O - "https://localhost:4443/storage/v1/b/gcp-public-data-landsat/o?prefix=LC08%2f01%2f001%2f002%2fLC08_L1GT_001002_20160817_20170322_01_T2%2f" | jq
{
  "kind": "storage#objects",
  "items": [
    // 13 file objects
  ],
  "prefixes": []
}

And here are examples of the genuine GCS responses below.

1. without delimiter option

https://storage.googleapis.com/storage/v1/b/gcp-public-data-landsat/o?prefix=LC08%2f01%2f001%2f002%2fLC08_L1GT_001002_20160817_20170322_01_T2%2f

returns

{
  "kind": "storage#objects",
  "items": [
    // 13 file objects
  ]
}

"prefixes" key is not present.

2. with a delimiter option that matches nothing

https://storage.googleapis.com/storage/v1/b/gcp-public-data-landsat/o?delimiter=nomatch&prefix=LC08%2f01%2f001%2f002%2fLC08_L1GT_001002_20160817_20170322_01_T2%2f

returns the same response as 1.

3. with a delimiter option that matches some files

https://storage.googleapis.com/storage/v1/b/gcp-public-data-landsat/o?delimiter=.TIF&prefix=LC08%2f01%2f001%2f002%2fLC08_L1GT_001002_20160817_20170322_01_T2%2f

returns

{
  "kind": "storage#objects",
  "prefixes": [
    "LC08/01/001/002/LC08_L1GT_001002_20160817_20170322_01_T2/LC08_L1GT_001002_20160817_20170322_01_T2_B1.TIF",
    "LC08/01/001/002/LC08_L1GT_001002_20160817_20170322_01_T2/LC08_L1GT_001002_20160817_20170322_01_T2_B10.TIF",
    "LC08/01/001/002/LC08_L1GT_001002_20160817_20170322_01_T2/LC08_L1GT_001002_20160817_20170322_01_T2_B11.TIF",
    "LC08/01/001/002/LC08_L1GT_001002_20160817_20170322_01_T2/LC08_L1GT_001002_20160817_20170322_01_T2_B2.TIF",
    "LC08/01/001/002/LC08_L1GT_001002_20160817_20170322_01_T2/LC08_L1GT_001002_20160817_20170322_01_T2_B3.TIF",
    "LC08/01/001/002/LC08_L1GT_001002_20160817_20170322_01_T2/LC08_L1GT_001002_20160817_20170322_01_T2_B4.TIF",
    "LC08/01/001/002/LC08_L1GT_001002_20160817_20170322_01_T2/LC08_L1GT_001002_20160817_20170322_01_T2_B5.TIF",
    "LC08/01/001/002/LC08_L1GT_001002_20160817_20170322_01_T2/LC08_L1GT_001002_20160817_20170322_01_T2_B6.TIF",
    "LC08/01/001/002/LC08_L1GT_001002_20160817_20170322_01_T2/LC08_L1GT_001002_20160817_20170322_01_T2_B7.TIF",
    "LC08/01/001/002/LC08_L1GT_001002_20160817_20170322_01_T2/LC08_L1GT_001002_20160817_20170322_01_T2_B8.TIF",
    "LC08/01/001/002/LC08_L1GT_001002_20160817_20170322_01_T2/LC08_L1GT_001002_20160817_20170322_01_T2_B9.TIF",
    "LC08/01/001/002/LC08_L1GT_001002_20160817_20170322_01_T2/LC08_L1GT_001002_20160817_20170322_01_T2_BQA.TIF"
  ],
  "items": [
    // 2 file objects
  ]
}

has "prefix" key with values.

Preloading data volume must have files in directories

I was trying to preload buckets without any data by creating empty directories (which seems like a valid use case?), but the preloaded data is only picked up if a directory has a file in it; empty directories don't cut it.

Issue downloading file!?

Here's a single use case where I'm:

  1. Creating a bucket "test_bucket"
  2. Uploading a file "foo.txt"
  3. Listing the files (I see "foo.txt")
  4. Downloading the file "foo.txt"

I'm able to do steps 1 to 3, but step 4 gives me an error, as if it were trying to go to the cloud.

Here's the code sample:

func TestNewGcsStore(t *testing.T) {
	ctx := context.Background()

	server := fakestorage.NewServer(nil)
	server.CreateBucket("bucket_test")
	url := server.URL() + "/storage/v1/"

	transCfg := &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // ignore expired SSL certificates
	}
	httpClient := &http.Client{Transport: transCfg}
	client, err := storage.NewClient(ctx, option.WithEndpoint(url), option.WithHTTPClient(httpClient))
	if err != nil {
		t.Fatal(err)
	}

	// Get a handle to the bucket created via server.CreateBucket above
	bucket := client.Bucket("bucket_test")

	// Create file
	wc := bucket.Object("foo.txt").NewWriter(ctx)

	if _, err = io.Copy(wc, strings.NewReader("Foo content!")); err != nil {
		t.Fatal(err)
	}

	if err := wc.Close(); err != nil {
		t.Fatal(err)
	}

	// List files
	it := bucket.Objects(ctx, nil)
	for {
		attrs, err := it.Next()
		if err == iterator.Done {
			break
		}
		if err != nil {
			t.Fatal(err)
		}
		t.Log(" -> " + attrs.Name)
	}

	// Download file
	rc, err := bucket.Object("foo.txt").NewReader(ctx)
	if err != nil {
		t.Fatal(err)
	}

	data, err := ioutil.ReadAll(rc)
	defer rc.Close()
	if err != nil {
		t.Fatal(err)
	}

	fileContents := string(data)
	t.Log(fileContents)
}

The output is:

=== RUN   TestNewGcsStore
--- FAIL: TestNewGcsStore (0.01s)
    gcsstore_test.go:98:  -> foo.txt
    gcsstore_test.go:104: Get https://storage.googleapis.com/bucket_test/foo.txt: dial tcp: lookup storage.googleapis.com: no such host

Am I missing something, or is this a bug?

Dependabot can't parse your go.mod

Dependabot couldn't parse the go.mod found at /examples/go/go.mod.

The error Dependabot encountered was:

go: cloud.google.com/go/[email protected] requires
	google.golang.org/[email protected] requires
	golang.org/x/[email protected]: invalid version: git fetch --unshallow -f origin in /opt/go/gopath/pkg/mod/cache/vcs/d7a72cb88b44c6f9a01cd83ccc1abffb1bb34c472a6bfd538fe8f953e0c108c9: exit status 128:
	fatal: unable to access 'https://go.googlesource.com/oauth2/': The requested URL returned error: 502

View the update logs.

Object.NewReader doesn't seem to be hitting the right endpoint

Firstly, thanks for the project!

I'm currently writing some integration tests using the Docker image, and there seems to be something wrong with the mapping for Object().NewReader.

Using the Go example returns an unable to create new reader: storage: object doesn't exist error.

Looking at the Docker logs, it appears to be mapped to the wrong endpoint:

time="2020-05-07T14:14:05Z" level=info msg="- - [07/May/2020:14:14:05 +0000] \"GET /test-bucket/some_file.txt HTTP/1.1\" 404 19"

Note how the endpoint is /test-bucket/some_file.txt, whereas other GET requests have /storage/v1/b/ prefixed:

time="2020-05-07T14:14:05Z" level=info msg="- - [07/May/2020:14:14:05 +0000] \"GET /storage/v1/b/test-bucket/o/dir%2Fb?alt=json&prettyPrint=false&projection=full HTTP/1.1\" 200 412"

Multi-part upload isn't supported in all cases

Using the Java SDK in this fashion

bucketStorage.create(BlobInfo.newBuilder(bucketInfo, fullKey).build(), inputstream);

This appears to send data using an unsupported mechanism and gives this error:

multipart: NextPart: EOF

com.google.cloud.storage.StorageException: 500 Internal Server Error
multipart: NextPart: EOF

	at com.google.cloud.storage.spi.v1.HttpStorageRpc.translate(HttpStorageRpc.java:229)
	at com.google.cloud.storage.spi.v1.HttpStorageRpc.create(HttpStorageRpc.java:310)
	at com.google.cloud.storage.StorageImpl.create(StorageImpl.java:180)

Dependabot can't parse your go.mod

Dependabot couldn't parse the go.mod found at /go.mod.

The error Dependabot encountered was:

go: cloud.google.com/go/[email protected] requires
	cloud.google.com/[email protected] requires
	cloud.google.com/go/[email protected]: reading cloud.google.com/go/bigquery/bigquery/go.mod at revision bigquery/v1.3.0: git fetch -f origin refs/heads/*:refs/heads/* refs/tags/*:refs/tags/* in /opt/go/gopath/pkg/mod/cache/vcs/8c519da240983d9a633ea12fd10c34b4b38691b2c0f6fa9c30a7cdefb6df8bf9: exit status 128:
	fatal: unable to access 'https://code.googlesource.com/gocloud/': The requested URL returned error: 429

View the update logs.

[BUG] Range Reader returning wrong bytes

I am currently using the fakestorage package in my project to unit test my GCS code. It seems like calling NewRangeReader(ctx, offset, length) returns the wrong bytes depending on what I pass in as offset and length. For example, say I have a GCS object that contains the text "Hello There!" and my goal is to read the file in chunks until the entire file is read: I get "Hlotee" if I chunk the reads by 2 bytes starting from 0 to EOF.

I tested my code against a real GCS instance, and I get the correct data back after reading in chunks (2-byte size), so this seems to be related to the fakestorage package.
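
A sketch of the chunked-read loop described above, run against an in-process fake server (bucket and object names are hypothetical; NewRangeReader is the standard cloud.google.com/go/storage API):

package main

import (
	"context"
	"fmt"
	"io/ioutil"
	"log"

	"github.com/fsouza/fake-gcs-server/fakestorage"
)

func main() {
	ctx := context.Background()
	content := []byte("Hello There!")
	server := fakestorage.NewServer([]fakestorage.Object{
		{BucketName: "chunk-bucket", Name: "greeting.txt", Content: content},
	})
	defer server.Stop()

	obj := server.Client().Bucket("chunk-bucket").Object("greeting.txt")
	var out []byte
	const chunk = int64(2)
	for off := int64(0); off < int64(len(content)); off += chunk {
		r, err := obj.NewRangeReader(ctx, off, chunk)
		if err != nil {
			log.Fatal(err)
		}
		data, err := ioutil.ReadAll(r)
		r.Close()
		if err != nil {
			log.Fatal(err)
		}
		out = append(out, data...)
	}
	// Against the real GCS API this prints "Hello There!"; the report above
	// says the fake server returned reordered bytes instead.
	fmt.Println(string(out))
}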

Dependabot can't parse your go.mod

Dependabot couldn't parse the go.mod found at /examples/go/go.mod.

The error Dependabot encountered was:

go: cloud.google.com/go/[email protected] requires
	cloud.google.com/go/[email protected] requires
	cloud.google.com/[email protected]: reading cloud.google.com/go/go.mod at revision v0.52.0: git fetch -f origin refs/heads/*:refs/heads/* refs/tags/*:refs/tags/* in /opt/go/gopath/pkg/mod/cache/vcs/8c519da240983d9a633ea12fd10c34b4b38691b2c0f6fa9c30a7cdefb6df8bf9: exit status 128:
	fatal: unable to access 'https://code.googlesource.com/gocloud/': The requested URL returned error: 429

View the update logs.

Copy file from bucket to bucket is not supported

Hi,
Thanks for the fake GCS server.

When trying to copy a file from one bucket to another, I got this error:

self = <google.cloud.storage._http.Connection object at 0x7fafe923ae80>
method = 'POST'
path = '/b/data-transfer1/o/test_file.txt/copyTo/b/data-transfer2/o/ntbrpeljbr'
query_params = {}, data = None, content_type = None, headers = None
api_base_url = None, api_version = None, expect_json = True
_target_object = <Blob: data-transfer2, ntbrpeljbr, None>
...
raise exceptions.from_http_response(response)
google.api_core.exceptions.MethodNotAllowed: 405 POST https://localhost:33107/storage/v1/b/data-transfer1/o/test_file.txt/copyTo/b/data-transfer2/o/ntbrpeljbr: unknown error

I'm using the Python API google-cloud-storage==1.18.0:
source_bucket.copy_blob(remote_file, to_bucket, target_gcs_path)

Regards, Avihay

Add a dockerized server configuration

https://github.com/teone/gc-fake-storage provides a Dockerfile and related scripts to allow this code to be run as a server, so that it can be used outside of tests of Go code. This Docker image is published at https://hub.docker.com/r/matteoscandolo/gc-fake-storage.

It would be easier to improve and test this use case if the Docker configuration was part of this project instead. It could be easier to coordinate creating a new image when the library changes. Also, contributors could collaborate on language-specific clients, verified by integration tests.

There is no explicit license on https://github.com/teone/gc-fake-storage, which I believe means @teone has the copyright, which makes it incompatible with this project's BSD 2 license, copyright Francisco Souza. I believe the clearest way to merge the code is if @teone makes the initial PR, which makes it clear he's contributing under the license for this project. After the initial merge of files, I'd be happy to make additional changes.

This was first discussed on closed PR #39 (comment)

Dependabot can't resolve your Go dependency files

Dependabot can't resolve your Go dependency files.

As a result, Dependabot couldn't update your dependencies.

The error Dependabot encountered was:

go: honnef.co/go/[email protected]: unrecognized import path "honnef.co/go/tools" (https fetch: Get https://honnef.co/go/tools?go-get=1: EOF)

If you think the above is an error on Dependabot's side please don't hesitate to get in touch - we'll do whatever we can to fix it.

You can mention @dependabot in the comments below to contact the Dependabot team.

Expected headers are not sent for ObjectHandle.NewReader()

The Reader you get from ObjectHandle.NewReader has an Attrs field of type ReaderObjectAttrs.

The storage library fills those values in based on headers sent in the response:

https://github.com/googleapis/google-cloud-go/blob/bd00477b309def1610fbf4cb259329deac795891/storage/reader.go#L255-L268

As far as I can tell, fakestorage is not setting these headers, which means that you get something like the following when you try to use them:

reader attrs: {Size:56 StartOffset:0 ContentType: ContentEncoding: CacheControl: LastModified:0001-01-01 00:00:00 +0000 UTC Generation:-1 Metageneration:0}

This library has been great, but including these headers would make it so that I can safely use the Attrs attribute and be able to test. Thanks!
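
For reference, a minimal sketch that observes the zero-valued attrs, reusing the in-process server pattern from other issues on this page:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/fsouza/fake-gcs-server/fakestorage"
)

func main() {
	server := fakestorage.NewServer([]fakestorage.Object{
		{BucketName: "attrs-bucket", Name: "obj.txt", Content: []byte("some content")},
	})
	defer server.Stop()

	r, err := server.Client().Bucket("attrs-bucket").Object("obj.txt").NewReader(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	defer r.Close()
	// Without the response headers described above, fields such as
	// ContentType, CacheControl, and LastModified stay at their zero values.
	fmt.Printf("reader attrs: %+v\n", r.Attrs)
}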

How to use?

Hi, I am really interested in this initiative to mimic GCS à la fakes3. Some questions, though:

  • how to start the server?
  • what are the dependencies?

Cheers

Could not load the default credentials

Hey, I'm trying to upload a file in Node but I get the following error:

Error: Could not load the default credentials. Browse to https://cloud.google.com/docs/authentication/getting-started for more information.
at GoogleAuth.getApplicationDefaultAsync (.../node_modules/@google-cloud/storage/node_modules/google-auth-library/build/src/auth/googleauth.js:161:19)
at async GoogleAuth.getClient (.../node_modules/@google-cloud/storage/node_modules/google-auth-library/build/src/auth/googleauth.js:503:17)
at async GoogleAuth.request (.../node_modules/@google-cloud/storage/node_modules/google-auth-library/build/src/auth/googleauth.js:544:24)
at async Upload.makeRequest (.../node_modules/gcs-resumable-upload/build/src/index.js:316:21)
at async Upload.createURIAsync (.../node_modules/gcs-resumable-upload/build/src/index.js:142:22)

Here is the code:

const storage = new Storage({
    apiEndpoint: "127.0.0.1:4443",
    projectId: "test",
});

const bucket = storage.bucket('sample-bucket');
const uploadStream = bucket.file(fileName).createWriteStream();
fileStream.pipe(uploadStream);

Interestingly getting the buckets using your node example works just fine.

I also tried supplying empty credentials, but that just complained further:

const storage = new Storage({
    apiEndpoint: "127.0.0.1:4443",
    projectId: "test",
    credentials: {},
});

I might be out of luck just by the nature of how the Google Node storage library works, but any suggestions would be greatly appreciated. Thanks!
