
gorilla / http


Package gorilla/http is an alternative HTTP client implementation for Go.

Home Page: https://gorilla.github.io

License: BSD 3-Clause "New" or "Revised" License

Languages: Go 98.96%, Makefile 1.04%
Topics: gorilla, go, golang, gorilla-web-toolkit, http

http's People

Contributors

andyjeffries, apoorvajagtap, coreydaley, davecheney, elithrar, fiorix, kisielk, nikai3d, sqs


http's Issues

support 301/302 redirects transparently

Client should grow the ability to handle redirects transparently (a sketch of the redirect loop follows the list):

  • Redirect following should be enabled by default in DefaultClient
  • There should be a limit on the number of redirects
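
A minimal sketch of what the redirect loop could look like; the roundTrip callback stands in for a single request/response cycle and, like the other names here, is an assumption for illustration rather than the package's API:

package http

import "errors"

// followRedirects is a sketch of transparent redirect handling: it re-issues
// the request for up to maxRedirects consecutive 301/302 responses.
func followRedirects(url string, maxRedirects int,
    roundTrip func(url string) (code int, location string, err error)) (string, error) {
    for i := 0; i < maxRedirects; i++ {
        code, location, err := roundTrip(url)
        if err != nil {
            return "", err
        }
        if code != 301 && code != 302 {
            return url, nil // not a redirect; the caller reads the response from here
        }
        if location == "" {
            return "", errors.New("redirect response missing Location header")
        }
        url = location // follow the Location header
    }
    return "", errors.New("stopped after too many redirects")
}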

Retry support?

A good HTTP client should support retries.
For example, users could configure their own retry policy:
a maximum number of retries, the status codes to retry on, the interval between attempts, and so on.
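
A rough sketch of what such a policy could look like; RetryPolicy, its fields, and the roundTrip callback are all hypothetical names used only to illustrate the request:

package http

import "time"

// RetryPolicy is a hypothetical configuration type; none of these names exist
// in gorilla/http today.
type RetryPolicy struct {
    MaxRetries       int           // maximum number of additional attempts
    RetryStatusCodes []int         // status codes that should trigger a retry
    Interval         time.Duration // pause between attempts
}

// shouldRetry reports whether the given status code is configured as retryable.
func (p RetryPolicy) shouldRetry(code int) bool {
    for _, c := range p.RetryStatusCodes {
        if c == code {
            return true
        }
    }
    return false
}

// doWithRetry re-issues the request until it succeeds with a non-retryable
// status or MaxRetries is exhausted. roundTrip stands in for a single attempt.
func doWithRetry(p RetryPolicy, roundTrip func() (code int, err error)) (int, error) {
    code, err := roundTrip()
    for attempt := 0; attempt < p.MaxRetries && (err != nil || p.shouldRetry(code)); attempt++ {
        time.Sleep(p.Interval)
        code, err = roundTrip()
    }
    return code, err
}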

Example doesn't work with https://api.github.com

I compiled the curl example and found that it doesn't work for some URLs. I'm wondering if I missed anything (./curl is the example executable):

$ ./curl https://google.com // works
$ ./curl https://api.github.com // doesn't work
2013/10/15 06:39:51 unable to fetch "https://api.github.com": dial tcp 192.30.252.137:80: connection refused
$ ./curl https://github.com // hangs forever
$ curl https://api.github.com // works

Chunked []byte requests produce incorrect request, hang handler

It seems that passing a bytes.Reader causes an invalid request to be written to the wire, which hangs the handler. The test case below demonstrates this issue.

package http

import (
    "bytes"
    "io/ioutil"
    "net/http"
    "net/http/httptest"
    "strings"
    "testing"
)

func TestChunked_gorilla_bytes(t *testing.T) {
    t.Parallel()

    postdata := []byte("FOO_bytes")
    var gotPostdata []byte

    handled := make(chan struct{}, 1)
    mux := http.NewServeMux()
    mux.Handle("/", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        gotPostdata, _ = ioutil.ReadAll(r.Body)
        if !bytes.Equal(postdata, gotPostdata) {
            t.Errorf("want postdata == %q, got %q", postdata, gotPostdata)
        }

        handled <- struct{}{}
    }))

    server := httptest.NewServer(mux)
    defer server.Close()

    err := Post(server.URL, bytes.NewReader(postdata))
    if err != nil {
        t.Error("Post: ", err)
    }

    <-handled
}

func TestChunked_gorilla_string(t *testing.T) {
    t.Parallel()

    postdata := []byte("FOO_string")
    var gotPostdata []byte

    handled := make(chan struct{}, 1)
    mux := http.NewServeMux()
    mux.Handle("/", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        gotPostdata, _ = ioutil.ReadAll(r.Body)
        if !bytes.Equal(postdata, gotPostdata) {
            t.Errorf("want postdata == %q, got %q", postdata, gotPostdata)
        }

        handled <- struct{}{}
    }))

    server := httptest.NewServer(mux)
    defer server.Close()

    err := Post(server.URL, strings.NewReader(string(postdata)))
    if err != nil {
        t.Error("Post: ", err)
    }

    <-handled
}

func TestChunked_stdlib(t *testing.T) {
    t.Parallel()

    postdata := []byte("FOO_stdlib")
    var gotPostdata []byte

    handled := make(chan struct{}, 1)
    mux := http.NewServeMux()
    mux.Handle("/", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        gotPostdata, _ = ioutil.ReadAll(r.Body)
        if !bytes.Equal(postdata, gotPostdata) {
            t.Errorf("want postdata == %q, got %q", postdata, gotPostdata)
        }

        handled <- struct{}{}
    }))

    server := httptest.NewServer(mux)
    defer server.Close()

    resp, err := http.Post(server.URL, "text/plain", bytes.NewReader(postdata))
    if err != nil {
        t.Error("Post: ", err)
    }
    defer resp.Body.Close()

    <-handled
}

Running it produces:

$ go test -test.run=TestChunked -test.v -test.p=8 
=== RUN TestChunked_gorilla_bytes-8
=== RUN TestChunked_gorilla_string-8
=== RUN TestChunked_stdlib-8
--- PASS: TestChunked_gorilla_string-8 (0.00 seconds)
--- PASS: TestChunked_stdlib-8 (0.00 seconds)

... (hanging on TestChunked_gorilla_bytes) ...

^Cexit status 2
FAIL    github.com/sqs/http 0.678s

I tried debugging the use of ChunkedWriter but I couldn't figure out the problem. Any pointers?
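
For reference, the standard library can show what a correctly chunked body for the 9-byte payload above should look like on the wire; this snippet is independent of gorilla/http, and a request that never writes the terminating zero-length chunk would be consistent with the hang described above:

package main

import (
    "bytes"
    "fmt"
    "net/http/httputil"
)

func main() {
    var buf bytes.Buffer
    cw := httputil.NewChunkedWriter(&buf)
    cw.Write([]byte("FOO_bytes")) // one chunk: hex size, CRLF, data, CRLF
    cw.Close()                    // terminating zero-length chunk
    fmt.Printf("%q\n", buf.String()) // "9\r\nFOO_bytes\r\n0\r\n"
}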

Use the exact URI specified by the application when writing the request URI to the network

If the application calls

  client.Get("http://example.com/%2f")

then the package should write

GET /%2f HTTP/1.1

to the network. The standard http package writes

GET // HTTP/1.1 

to the network. This is rarely what an application wants.

The standard http package generates the request URI using the following lines of code:

    u, err := url.Parse(urlStr)
    ...
    req.URL = u
    ...
    ruri := req.URL.RequestURI()

Because url.Parse() decodes escape sequences, the RequestURI() method cannot recover the URI as specified by the application.
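
The decoding is easy to see in isolation with the standard library alone; this snippet only demonstrates the behaviour described above:

package main

import (
    "fmt"
    "net/url"
)

func main() {
    u, _ := url.Parse("http://example.com/%2f")
    fmt.Println(u.Path) // prints "//": the %2f escape has already been decoded
}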

Return body on HTTP error

A lot of REST APIs will spit out an error reason in the body of the response and return a 400 or 406 status code. Would it not make sense to expose this somehow?

if _, err := http.Get(os.Stdout, "http://localhost:8000/"); err != nil {
    // fmt.Println(body)? But where would the error body come from?
    panic(err)
}
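
One hypothetical way to expose it is an error value that carries both the status and the response body; ResponseError below is purely an illustration and not an existing type in this package:

package http

import "fmt"

// ResponseError is a hypothetical error type that keeps the body of an error
// response so callers can inspect what the server said.
type ResponseError struct {
    Code   int
    Reason string
    Body   []byte // body returned alongside the error status
}

func (e *ResponseError) Error() string {
    return fmt.Sprintf("%d %s: %s", e.Code, e.Reason, e.Body)
}

Callers could then recover the body with a type assertion on the returned error.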

expect 100 continue

Provide a knob for enabling the Expect: 100-continue header and the corresponding handling of the request body.

io.Writer for request body

On the server, an application receives data from a peer using an io.Reader and sends data to the peer using an io.Writer. On the client, sending is handled differently: the application provides an io.Reader that is copied to the peer.

I think the server model for sending data is conceptually easier to work with. The application drives the generation of data instead of being called to generate the data.

The break with net/http.Request opens up the possibility of using an io.Writer for sending data to the peer. Here's what the code might look like:

requestBody, err := client.Start("POST", "http://example.com/api/endpoint", headers)
err = json.NewEncoder(requestBody).Encode(someStruct)
status, headers, responseBody, err := requestBody.Finish()

Unfortunately, the Start / Finish method calls are more complicated than Do. This extra complication is very noticeable for GET requests and in cases where the application already has the request body on hand as an io.Reader, []byte or a string.

I cannot think of a way to make this palatable. I am throwing this out in case one of you has some ideas about it.

Conditional GET convenience method

Suggestion from dsal for a Conditional GET helper

type ReaderMaker func() (io.Reader, error)

// ConditionalGet writes the body of url to w if the server provides a 200 response.
// If the response is 304, f is called to provide an io.Reader which will provide the body.
// If f is nil and a 304 is encountered, StatusError{304} will be returned.
func ConditionalGet(w io.Writer, url, etag string, modtime time.Time, f ReaderMaker) (int64, error)
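
A hypothetical usage sketch, assuming etag and modtime were recorded from an earlier response; the cache file path and surrounding code are illustrative only:

// Write the page to os.Stdout on 200, falling back to a locally cached copy
// when the server answers 304 Not Modified.
fromCache := func() (io.Reader, error) {
    return os.Open("/tmp/page.cache") // hypothetical cache location
}
n, err := ConditionalGet(os.Stdout, "http://example.com/", etag, modtime, fromCache)
if err != nil {
    log.Fatal(err)
}
log.Printf("wrote %d bytes", n)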

golang.org/x/net/context support?

Have the authors considered using golang.org/x/net/context as a mechanism for controlling timeouts?

The idea would be that the caller would pass in a context.Context, and the implementation would cancel in-flight requests when the channel returned by ctx.Done() is closed.

For example, Do would add a context.Context as its first argument:

func (c *Client) Do(ctx context.Context, method, url string, headers map[string][]string, body io.Reader) (client.Status, map[string][]string, io.ReadCloser, error)

And usage would look like:

// spend no more than 1 second from request initiation to response return
ctx, cancel := context.WithTimeout(context.Background(), 1 * time.Second) 
defer cancel()
_, err := http.Do(ctx, "POST", "http://www.gorillatoolkit.org/", make(map[string][]string), strings.NewReader("POST body goes here"))
// ...

I'm happy to submit a proposal pull request for this if interested, wanted to float the idea here first.

Reference material on context: https://blog.golang.org/context (see bottom near the httpDo func)

Support http authentication

Authenticators to support, in order of importance / difficulty (a basic-auth sketch follows the list):

  • basic
  • digest
  • client-side TLS
  • NTLM (don't really expect this one)
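
For reference, basic authentication (the first item in the list above) only requires building one request header; the snippet below is independent of this package's API:

package main

import (
    "encoding/base64"
    "fmt"
)

// basicAuthHeader returns the value of the Authorization header for HTTP basic
// authentication: "Basic " followed by base64("user:password").
func basicAuthHeader(user, password string) string {
    credentials := base64.StdEncoding.EncodeToString([]byte(user + ":" + password))
    return "Basic " + credentials
}

func main() {
    fmt.Println("Authorization:", basicAuthHeader("user", "secret")) // Basic dXNlcjpzZWNyZXQ=
}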

Allow application to set the read deadline when reading the response body

If an application can specify the read deadline when reading the response body, then the application can easily detect dropped connections for Twitter streaming endpoints and similar services.

The following snippet of code shows how an application could use the feature. The Twitter streaming endpoints write a blank keep-alive line every 30 seconds.

scanner := bufio.NewScanner(body)
body.SetReadDeadline(time.Now().Add(40 * time.Second))
for scanner.Scan() {
    p := scanner.Bytes()
    if len(p) > 0 {
        processTweet(p)
    }
    body.SetReadDeadline(time.Now().Add(40 * time.Second))
}

DefaultClient does not support cookies

var DefaultClient = Client{
    dialer:          new(dialer),
    FollowRedirects: true,
}

Why not:

var DefaultClient = Client{
    dialer:          new(dialer),
    FollowRedirects: true,
    Jar:             Cookiesjar,
}

Unable to create own client - dialer is not exported and no function to set

In Client struct:

type Client struct {
    dialer Dialer

    // FollowRedirects instructs the client to follow 301/302 redirects when idempotent.
    FollowRedirects bool
}

The dialer field is not exported, it is never set by the library, and there is no method to set it from outside the package. However, it is referenced in the code, which causes the application to crash with a nil pointer dereference.

P.S. Seriously, I see this project is not developed anymore, but you could at least remove it from your website if it contains bugs like this, especially when the code comments encourage people to create their own client while that is not actually possible.

Support connection reuse

The current http.conn and http.dialer implementations do not reuse connections. The connection is closed after each request, rather than being returned to a pool.
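
A minimal sketch of per-host connection pooling, assuming the dialer hands out plain net.Conn values; this illustrates the idea rather than the package's Dialer interface:

package http

import (
    "net"
    "sync"
)

// connPool keeps idle connections keyed by "host:port" so a later request to
// the same host can reuse one instead of dialing again.
type connPool struct {
    mu   sync.Mutex
    idle map[string][]net.Conn
}

// get returns an idle connection for addr, or nil if none is pooled.
func (p *connPool) get(addr string) net.Conn {
    p.mu.Lock()
    defer p.mu.Unlock()
    conns := p.idle[addr]
    if len(conns) == 0 {
        return nil
    }
    c := conns[len(conns)-1]
    p.idle[addr] = conns[:len(conns)-1]
    return c
}

// put parks a connection for later reuse after a request completes cleanly.
func (p *connPool) put(addr string, c net.Conn) {
    p.mu.Lock()
    defer p.mu.Unlock()
    if p.idle == nil {
        p.idle = make(map[string][]net.Conn)
    }
    p.idle[addr] = append(p.idle[addr], c)
}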
