reverse_proxy_plug's Introduction

ReverseProxyPlug


A reverse proxy plug for proxying a request to another URL using HTTPoison or Tesla. Perfect when you need to transparently proxy requests to another service but also need to have full programmatic control over the outgoing requests.

This project grew out of a fork of elixir-reverse-proxy. Advantages over the original include more flexible upstreams, zero-delay chunked transfer encoding support, HTTP/2 support with Cowboy 2, and a focus on being a composable Plug instead of providing a standalone reverse proxy application.

Installation

Add reverse_proxy_plug to your list of dependencies in mix.exs:

def deps do
  [
    {:reverse_proxy_plug, "~> 2.4"}
  ]
end

Then add an HTTP client library, either httpoison or tesla, and configure the adapter to match your choice:

config :reverse_proxy_plug, :http_client, ReverseProxyPlug.HTTPClient.Adapters.HTTPoison
# OR
config :reverse_proxy_plug, :http_client, ReverseProxyPlug.HTTPClient.Adapters.Tesla

You can also set the client on a per-plug basis, which overrides any global config. One of the two must be configured; if neither is set, the plug will attempt to default to the HTTPoison adapter and raise if it is not present.

plug ReverseProxyPlug, client: ReverseProxyPlug.HTTPClient.Adapters.Tesla

Usage

The plug works best when used with Plug.Router.forward/2. Drop this line into your Plug router:

forward("/foo", to: ReverseProxyPlug, upstream: "//example.com/bar")

Now all requests matching /foo will be proxied to the upstream. For example, a request to /foo/baz made over HTTP will result in a request to http://example.com/bar/baz.

You can also specify the scheme or choose a port:

forward("/foo", to: ReverseProxyPlug, upstream: "https://example.com:4200/bar")

The :upstream option should be a well-formed URI parseable by URI.parse/1, or a zero- or one-arity function that returns one. If it is a function, it is evaluated on every request. If the function has arity one, the Conn struct is passed to it, allowing more flexibility for dynamic routing.
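
For example, a one-arity upstream function can pick the upstream per request. This is a sketch; the MyRouting module and the "x-tenant" header are illustrative assumptions, not part of the library:

forward("/foo", to: ReverseProxyPlug, upstream: &MyRouting.upstream_for/1)

defmodule MyRouting do
  # Hypothetical dynamic routing: choose the upstream based on a request header.
  def upstream_for(conn) do
    case Plug.Conn.get_req_header(conn, "x-tenant") do
      ["eu"] -> "https://eu.example.com/bar"
      _ -> "https://example.com/bar"
    end
  end
end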

Modifying the client request body

You can modify various aspects of the client request by modifying the Conn struct. To modify the request body, fetch it using Plug.Conn.read_body/2, make your changes, and store the result under conn.assigns[:raw_body]. ReverseProxyPlug will use that as the request body. If a custom raw body is not present, ReverseProxyPlug will fetch the body from the Conn struct directly.
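
As a rough sketch, a plug placed before ReverseProxyPlug could rewrite the body like this. The module name and the transformation are illustrative, and the sketch assumes the whole body arrives in a single read_body/2 call:

defmodule RewriteBody do
  @behaviour Plug
  import Plug.Conn

  def init(opts), do: opts

  def call(conn, _opts) do
    # Assumes the whole body arrives in one chunk; handle {:more, _, _} if needed.
    {:ok, body, conn} = read_body(conn)

    # Illustrative transformation; store the result for ReverseProxyPlug to pick up.
    assign(conn, :raw_body, String.replace(body, "internal.example", "example.com"))
  end
end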

Configuration options

Custom HTTP methods

Only the standard HTTP methods "GET", "HEAD", "POST", "PUT", "DELETE", "CONNECT", "OPTIONS", "TRACE" and "PATCH" are forwarded by default. You can define additional custom HTTP methods with the :custom_http_methods option.

forward("/foo", to: ReverseProxyPlug, upstream: "//example.com/bar", custom_http_methods: [:XMETHOD])

Preserve host header

A reverse HTTP proxy often has to preserve the original Host request header when making a request to the upstream origin. Presenting a different Host to the upstream server can lead to issues related to cookies, redirects, and incorrect routing that can become a security concern.

Some HTTP proxies send the original Host value in other headers, like Forwarded or X-Forwarded-Host, but those are only useful if the upstream application is coded to read those headers.

By default, ReverseProxyPlug does not preserve the original header nor does it send the original header in any other form. Use :preserve_host_header to make upstream requests with the same Host header as in the original request.

forward("/foo", to: ReverseProxyPlug, upstream: "//example.com", preserve_host_header: true)

Normalize headers for upstream request

By default, all request header names are downcased before the upstream request is made (ReverseProxyPlug.downcase_headers/1).

You can override this behaviour by passing your own normalize_headers/1, which receives the list of headers (a list of {"header", "value"} tuples) and returns them in the desired form. For instance, you may want to drop certain headers from the upstream request, beyond the usual hop-by-hop headers.
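
For instance, a custom normalizer could downcase the headers and then drop an internal header before proxying. A sketch, assuming the option is passed as :normalize_headers; the MyProxyHeaders module and the dropped header name are illustrative:

forward("/foo",
  to: ReverseProxyPlug,
  upstream: "//example.com/bar",
  normalize_headers: &MyProxyHeaders.normalize/1
)

defmodule MyProxyHeaders do
  # Receives and returns a list of {"header", "value"} tuples.
  def normalize(headers) do
    headers
    |> ReverseProxyPlug.downcase_headers()
    |> Enum.reject(fn {name, _value} -> name == "x-internal-token" end)
  end
end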

Response mode

ReverseProxyPlug supports two response modes:

  • :stream (default) - The response from the plug will always be chunk encoded. If the upstream server sends a chunked response, ReverseProxyPlug will pass chunks to the clients as soon as they arrive, resulting in zero delay.

  • :buffer - The plug will wait until the whole response is received from the upstream server, at which point it will send it to the client using Conn.send_resp. This allows for processing the response before sending it back using Conn.register_before_send.

You can choose the response mode by passing a :response_mode option:

forward("/foo", to: ReverseProxyPlug, response_mode: :buffer, upstream: "//example.com/bar")

Client options

You can pass options to the configured HTTP client. Valid options depend on the HTTP client used.

forward("/foo", to: ReverseProxyPlug, upstream: "//example.com", client_options: [timeout: 2000])

Callback for connection errors

By default, ReverseProxyPlug will automatically respond with 502 Bad Gateway in case of a network error. To inspect the client error that caused the response, you can pass an :error_callback option.

plug(ReverseProxyPlug,
  upstream: "example.com",
  error_callback: fn error -> Logger.error("Network error: #{inspect(error)}") end
)

If you wish to handle the response directly, you can provide a function with arity 2 where the connection will be passed as the second argument:

plug(ReverseProxyPlug,
  upstream: "example.com",
  error_callback: fn error, conn ->
    Logger.error("Network error: #{inspect(error)}")
    Plug.Conn.send_resp(conn, :internal_server_error, "something went wrong")
  end
)

You can also provide an MFA (module, function, arguments) tuple; the error will be appended as the last argument:

plug(ReverseProxyPlug,
  upstream: "example.com",
  error_callback: {MyErrorHandler, :handle_proxy_error, ["example.com"]}
)

If the function specified by the MFA tuple accepts two additional arguments, the error and the connection will be inserted as the last two arguments, respectively.
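
For illustration, a handler module matching the MFA example above might look like the following. This is a hedged sketch: the module name, log message and response are assumptions, not part of the library.

defmodule MyErrorHandler do
  require Logger

  # The configured arguments come first; the error and (optionally) the conn
  # are appended by ReverseProxyPlug.
  def handle_proxy_error(upstream, error, conn) do
    Logger.error("Proxy error for #{upstream}: #{inspect(error)}")
    Plug.Conn.send_resp(conn, :bad_gateway, "upstream unavailable")
  end
end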

Callbacks for responses in streaming mode

In order to add special handling for responses with particular statuses instead of passing them on to the client as usual, provide the :status_callbacks option with a map from status code to handler:

plug(ReverseProxyPlug,
  upstream: "example.com",
  status_callbacks: %{404 => &handle_404/2}
)

The handler is called as soon as an HTTPoison.AsyncStatus message with the given status is received, taking the Plug.Conn and the options given to ReverseProxyPlug. It must then consume all the remaining incoming HTTPoison asynchronous response parts, respond to the client and return the Plug.Conn.

:status_callbacks must only be given when :response_mode is :stream, which is the default.
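
A rough sketch of such a handler, assuming the HTTPoison adapter in streaming mode: it drains the remaining asynchronous response parts, then answers the client itself. The response body and timeout are illustrative:

defp handle_404(conn, _opts) do
  drain_upstream_response()
  Plug.Conn.send_resp(conn, 404, "custom not found page")
end

# Consume the remaining HTTPoison async messages for this response.
defp drain_upstream_response do
  receive do
    %HTTPoison.AsyncEnd{} -> :ok
    %HTTPoison.AsyncHeaders{} -> drain_upstream_response()
    %HTTPoison.AsyncChunk{} -> drain_upstream_response()
  after
    5_000 -> :ok
  end
end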

Usage in Phoenix

The default autogenerated Phoenix project assumes that you'll want to parse all request bodies coming to your Phoenix server and puts Plug.Parsers directly in your endpoint.ex. If you're using something like ReverseProxyPlug, this is likely not what you want; in that case you'll want to move Plug.Parsers out of your endpoint and into specific router pipelines or the routes themselves.
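
One possible arrangement is to keep Plug.Parsers in a dedicated router pipeline that only the non-proxied routes go through. The pipeline name and parser options below are illustrative assumptions:

# In router.ex: routes that need parsed bodies pipe through :parsed_api,
# while the proxied scope skips it entirely.
pipeline :parsed_api do
  plug :accepts, ["json"]

  plug Plug.Parsers,
    parsers: [:urlencoded, :multipart, :json],
    pass: ["*/*"],
    json_decoder: Jason
end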

Or you can extract the raw request body with a custom body reader in your endpoint.ex:

plug Plug.Parsers,
  body_reader: {CacheBodyReader, :read_body, []},
  # ...

and store it in the Conn struct with a custom body reader module, cache_body_reader.ex:

defmodule CacheBodyReader do
  @moduledoc """
  Inspired by https://hexdocs.pm/plug/1.6.0/Plug.Parsers.html#module-custom-body-reader
  """

  alias Plug.Conn

  @doc """
  Read the raw body and store it for later use in the connection.
  It ignores the updated connection returned by `Plug.Conn.read_body/2` to not break CSRF.
  """
  @spec read_body(Conn.t(), Plug.opts()) :: {:ok, String.t(), Conn.t()}
  def read_body(%Conn{request_path: "/api/" <> _} = conn, opts) do
    {:ok, body, _conn} = Conn.read_body(conn, opts)
    conn = update_in(conn.assigns[:raw_body], &[body | &1 || []])
    {:ok, body, conn}
  end

  def read_body(conn, opts), do: Conn.read_body(conn, opts)
end

which then allows you to use Phoenix.Router.forward/4 in router.ex:

  scope "/api" do
    pipe_through :api

    forward "/foo", ReverseProxyPlug,
      upstream: &Settings.foo_url/0,
      error_callback: &__MODULE__.log_reverse_proxy_error/1

    def log_reverse_proxy_error(error) do
      Logger.warn("ReverseProxyPlug network error: #{inspect(error)}")
    end
  end

Copyright and License

Copyright (c) 2018 Tallarium Technologies

ReverseProxyPlug is released under the MIT License.

reverse_proxy_plug's Issues

TLS/SSL support?

Hi,
Thanks for the plugin.
I am having problems forwarding to upstream HTTPS sites. Does the plug not support SSL or is it me?

G

how to inject upstream's response?

I am using the lib in my project. One goal is to modify the response based on the client's roles and the content. Any suggestions on the right way to implement this?

Stalling proxied requests

Sometimes buffered and already-parsed requests stall.

This happens because reverse_proxy_plug tries to read an already-read body, and Plug.Conn will wait on the underlying adapter for more data to come.

Looking into the code, reverse_proxy_plug uses raw_body only if the body is empty, which is not optimal in such cases (and contradicts what README.md states).

I've sent PR #131, which addresses this case.

CacheBodyReader improvement

Thanks for publishing reverse_proxy_plug. It was just what I was looking for.

I did have an issue with the sample CacheBodyReader in the README. I believe the second function should be:

 def read_body(conn, opts), do: Conn.read_body(conn, opts)

otherwise, non-API requests will never get the body of the request.

Websocket support?

This is the most awesome package for proxying, but it does not work with websockets: I get error code 426 Upgrade Required when I try to proxy a "ws://" request. I found that the "connection" and "upgrade" headers are removed in the remove_hop_by_hop_headers function. Are you going to support WS?

Include Plug connection in error callbacks

In the current implementation, callers have no control over the response when client errors occur. We currently use this library as part of a strangler pattern while migrating from a legacy system to its replacement. Users are currently shown a blank page with HTTP 502 when the upstream server is unavailable. This isn't a very good user experience, and it would be very useful to be able to control the response on client errors.

I realize this is a breaking change, but I'm not sure how best to address this issue otherwise. The alternative would be to allow functions of arity 1 or 2 and fall back to the current functionality when the arity is 1. This seems like a bit more work, but would maintain backwards compatibility and still expand the functionality of ReverseProxyPlug.

I've got an initial version of this working that doesn't look too bad. I can submit the PR for the time being and maybe we can gather feedback there.

Configurable HTTP Client

Are there any plans to support other HTTP Clients aside from HTTPoison?

The motivation behind this is that currently neither HTTPoison nor Hackney (which is the default pool) has good Telemetry events.

Potential memory leak

I'm new to Elixir, so I'm not sure if this is true, but since atoms are never cleared from memory, I've heard they are a potential attack vector if user input gets converted to atoms. In the code below, the HTTP method gets converted to an atom, and since a user can send an arbitrary method, would this not be an issue?

lib/reverse_proxy_plug.ex

defp prepare_request(conn, options) do
  method = conn.method |> String.downcase() |> String.to_atom()
  ...
end

Performance related question

Thanks for your good work, I really like the plug.

I couldn't find any place where you mention support; could you advise on this?

As for the performance question: do I need to do any extra work to make it scalable like Phoenix, or does it just work out of the box with this one line added to my Router:

  forward("/hls", to: ReverseProxyPlug, upstream: "http://127.0.0.1:8080/hls")

I plan to proxy RTMP HLS video streaming traffic.

Finch Adapter?

Not so much an issue, but I wanted to see if there is any desire for a Finch adapter. I have a simple one working and would be happy to make a PR. I did not see any contribution guidelines, so I figured it would be best to open an issue first.

The latest version of Phoenix ships with Finch by default, so having an adapter means one less dependency users of your library will need if they are coming from Phoenix.

Cheers.

Tesla.Env.__struct__/0 is undefined

Can't compile 1.5.0 anymore. 1.4.0 is ok.

==> reverse_proxy_plug
Compiling 9 files (.ex)

== Compilation error in file lib/reverse_proxy_plug/http_client/adapters/tesla.ex ==
** (CompileError) lib/reverse_proxy_plug/http_client/adapters/tesla.ex:33: Tesla.Env.__struct__/0 is undefined, cannot expand struct Tesla.Env. Make sure the struct name is correct. If the struct name exists and is correct but it still cannot be found, you likely have cyclic module usage in your code
    (stdlib 3.13.2) lists.erl:1358: :lists.mapfoldl/3
    (stdlib 3.13.2) lists.erl:1359: :lists.mapfoldl/3
could not compile dependency :reverse_proxy_plug, "mix compile" failed. You can recompile this dependency with "mix deps.compile reverse_proxy_plug", update it with "mix deps.update reverse_proxy_plug" or clean it with "mix deps.clean reverse_proxy_plug"

New release?

I was having trouble with 1.2.1, and noticed that master was 20 commits ahead. So I decided to give master a try, and apparently one of those 20 commits fixed the issue I was having!

Recycle cookies for upstream request

This plug does not recycle cookies at all when making the upstream request. Most other proxies will simply forward the entire cookie jar to the upstream request.

For example from Go:

More specifically, it looks like this plug recycles the cookie headers, but doesn't pass the correct option to HTTPoison/hackney:
https://hexdocs.pm/httpoison/readme.html#cookies

I'd be happy to submit the PR for this as I've already got a working copy locally.

New release

Hey people! Thanks for your work!

Could you please release a new version? There have been 17 commits since the last tag, including some important fixes (like the port in the host header).

Cheers

Investigate restoring streaming request body through to upstream

This functionality was effectively removed to fix #175. The new version of this functionality should fulfil two requirements:

  • it should be tested to work with each adapter - {:stream, _} as an input value for the upstream body was supported by HTTPoison, but it was unclear (and untested) whether this was supported by the Tesla adapter at all
  • as in #175, we need to forward conn whenever we use Plug.Conn.read_body - but that seems to be at odds with passing a stream to the library. Perhaps Stream.resource, and sending the latest conn to ourselves?

Runtime configuration for the `:upstream` configuration setting

I have been using this reverse proxy in a Phoenix plug, but the plug's init/1 gets called at compile time. However, I need to configure the :upstream at runtime, based on the runtime environment.

I have a working implementation of passing in a function or a string for the :upstream value, and then only using that value in the init/1 if it is a string, or applying the function in the call/2 otherwise.
Would you be open to a pull request?

An alternative, and perhaps even complementary, solution would be to allow setting a value in the connection, similar to the raw_body, with the use of: https://hexdocs.pm/plug/Plug.Conn.html#put_private/3

How to reverse proxy to two different resources on the same domain?

Using Plausible, I need to reverse proxy:

https://plausible.io/docs/proxy/introduction#are-you-concerned-about-missing-data

https://<yourdomain.com>/js/script.js -> https://plausible.io/js/script.js
https://<yourdomain.com>/api/event    -> https://plausible.io/api/event

In my Phoenix app's Router I've added the following:

forward "/js/script.js", ReverseProxyPlug, upstream: "//plausible.io/js/script.js"
forward "/api/event", ReverseProxyPlug, upstream: "//plausible.io/api/event"

but I get this error: ReverseProxyPlug has already been forwarded to. A module can only be forwarded a single time

Can the forwarding be done in a single line?

Issue with Path Params Support in reverse_proxy_plug

I am very excited about your project and was looking forward to integrating reverse_proxy_plug into my work project. During the integration process, I noticed that the plug seems to lack support for path parameters in the upstream URL. Specifically, path variables are not replaced with their corresponding values, leading to incorrect request routing.

Code Example:
forward "/users/:userId", to: ReverseProxyPlug, upstream: "https://example.com/users/:userId"

In this example, I expected :userId to be replaced by the actual user ID from the request, but this substitution does not occur.

Here is the solution to this issue: #174

Issue with duplicate headers in response.

Hi,
I would like to point out an issue I found while proxying, for example, Keycloak.
Keycloak sends the "set-cookie" header multiple times.
In streaming mode, ReverseProxyPlug does not pass on these duplicate headers, because it reduces over the response headers using Conn.put_resp_header, which updates existing headers.
In buffering mode it works, as the headers are simply passed on.

Cannot get proxy to work with Tesla adapter

Hello,

I am trying out this plug and ran into a strange issue. Configuring the tesla_client option was also kind of tricky, and I don't know if I have it set up correctly. The problem I see is that when configured with Tesla (using either Mint or hackney as the tesla_client), the requests just hang and never complete. Using hackney directly instead of Tesla seems to work.

Here's how I have it set up:

mix.exs:

defp deps do
  [
    {:reverse_proxy_plug, "~> 2.1"},
    {:tesla, "~> 1.4"},
    {:mint, "~> 1.4"},
    {:castore, "~> 0.1"}
  ]
end

endpoint.ex:

defmodule MiddleMan.Endpoint do
  use Plug.Router

  plug(Plug.Telemetry, event_prefix: [:middle_man, :plug])
  plug(Plug.Logger)
  plug(:match)

  plug(:dispatch)

  get "/hello" do
    send_resp(conn, 200, "world")
  end

forward("/test",
    to: ReverseProxyPlug,
    client: ReverseProxyPlug.HTTPClient.Adapters.Tesla,
    upstream: "//localhost:8000/",
    client_options: [
      tesla_client: Tesla.client([], {Tesla.Adapter.Mint, []})
    ]
  )
end

When I make a request to http://localhost:4000/test/test.json, I see this in the logs:
15:21:59.484 [info] GET /test/test.json
and the browser keeps loading and loading with no response.

Any clues what I did wrong above? I was unable to figure out how to pass the tesla_client option in config.exs, so I ended up passing the options in the forward call.

Any help would be appreciated.....thanks for your time!

Make request header normalization configurable

Hi there!

Request headers are always downcased when sent upstream here.

  defp normalize_headers(headers) do
    headers
    |> downcase_headers
    |> remove_hop_by_hop_headers
  end

Could the casing be configurable? The upstream we relay requests to through this plug performs case-sensitive matching to consume some headers, dropping the ones which have been downcased.

Thanks!

Error callback can be simplified

It will be a breaking change, but there is no reason for the opts[:error_callback] to pass {:error, error} through from response/3 - we should pattern match in the function head and pass error through to the callback instead.

Buffering responses

Thanks for open-sourcing this Plug.

The README states that:

ReverseProxyPlug supports chunked transfer encoding and by default the responses are not buffered

Is there any way to change this behavior, so the response is not passed back to the client until all chunks are received?

ReverseProxyPlug and longpoll

I am not 100% sure if this is an issue concerning Plug or Phoenix in general, one specific to ReverseProxyPlug, or plain old me misunderstanding your README.

My setup: I want to proxy a CouchDB 3.2.1 in my Phoenix app. If I use the default response mode, clients that want to connect to the CouchDB throw:

GET http://localhost:4000/db/project_1/_changes?timeout=600000&style=all_docs&feed=longpoll&heartbeat=10000&since=11-g1AAAACLe(...)AqSw&limit=50)
net::ERR_INCOMPLETE_CHUNKED_ENCODING 

See also https://docs.couchdb.org/en/3.2.0/api/database/changes.html.

The `host` header is not properly formed when the upstream is on a non-standard port

According to https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Host, the host header should (must?) not include the port number when the port is the default.

i.e:

For http://web.example.com:80, the host should be web.example.com
For http://special.port.com:666, the host should be special.port.com:666

Right now, if the plug is configured with upstream http://special.port.com:666 and no host header is passed in the request, the host header is generated as special.port.com. The expected value is special.port.com:666.

rewriting redirects

Any interest in adding the ability to transparently rewrite the Location header, when it matches the upstream, to the host/port/scheme of wherever the proxy is listening?
