For library usage, see https://pkg.go.dev/github.com/inetaf/tcpproxy/
For CLI usage, see https://github.com/inetaf/tcpproxy/blob/master/cmd/tlsrouter/README.md
Proxy TCP connections based on static rules, HTTP Host headers, and SNI server names (Go package or binary)
Home Page: https://pkg.go.dev/github.com/inetaf/tcpproxy
License: Apache License 2.0
For example: read up to 4KB of data to parse the Host header; if a fallback config exists, we can then forward to it.
For a full replacement for tlsrouter, I need more flexible SNI matching than just string equality (DNS wildcards and regexes).
Adding this stuff to AddSNIRoute feels a bit icky; the functions are going to become unwieldy. How about making the route interface public, with appropriate changes? I think I have a rough plan for how to make that work. @bradfitz, thoughts?
Some things that would need to change in the API:
- A Peeker interface in signatures, instead of bufio.Reader.
- type Route interface { Match(Peeker) Target }. Gives you full power to examine input and decide on a Target.
- type Matcher func(Peeker) bool and func StaticRoute(Matcher, Target) Route enable reuse/recomposition of matching code, and plumb together the simpler cases.
- Proxy.Add* replaced by a single Proxy.AddRoute(addr string, r Route). Maybe make Proxy.AddStaticRoute(addr string, m Matcher, t Target) instead of a standalone StaticRoute constructor.
Tentative, poorly-thought-out usage example that I'm not 100% happy with, but which hopefully gives an idea:
var p tcpproxy.Proxy
// ACME is a complex implementation of the Route interface.
var a tcpproxy.ACME
// Handle *.acme.invalid
p.AddRoute(":443", &a)
// ACMETarget is passthrough for its arguments, so you can inline it elsewhere if you want.
// Registers the target as a candidate for acme challenges.
target := a.ACMETarget(tcpproxy.To("1.2.3.4:443"))
p.AddStaticRoute(":443", tcpproxy.MatchSNI("foo.com"), target)
// Inlined form
p.AddStaticRoute(":443", tcpproxy.MatchSNI("bar.com"), a.ACMETarget(tcpproxy.To("3.4.5.6:443")))
// Third-party matcher
p.AddStaticRoute(":70", retro.MatchGopher("/Plushie"), tcpproxy.To("10.20.30.40:70"))
// Dynamic router
p.AddRoute(":80", backpain.Yxorp()) // because it's a backwards proxy, see?
ubuntu@ip-10-0-1-115:~$ go get go.universe.tf/tlsrouter
package go.universe.tf/tlsrouter: unrecognized import path "go.universe.tf/tlsrouter" (parse https://go.universe.tf/tlsrouter?go-get=1: no go-import meta tags ())
The comment reads "If zero, a default is used":

	// DialTimeout optionally specifies a dial timeout.
	// If zero, a default is used.
	// If negative, the timeout is disabled.
	DialTimeout time.Duration

The source, however, says: if zero, the timeout is disabled:
func (dp *DialProxy) HandleConn(src net.Conn) {
	ctx := context.Background()
	var cancel context.CancelFunc
	if dp.DialTimeout >= 0 {
		ctx, cancel = context.WithTimeout(ctx, dp.dialTimeout())
	}
	dst, err := dp.dialContext()(ctx, "tcp", dp.Addr)
At home and in Go's build system, I have a number of backends that are a pain to route to, because they're buried behind NAT or other firewalls.
In the Go build system, we solve this using an old package I wrote (http://godoc.org/golang.org/x/build/revdial) that lets a backend connect to the server, and then turn the single TCP connection around (after authenticating) and let the server open up and multiplex many TCP connections as needed over that single TCP connection.
That code (first added Sep 2015 in golang/build@1f0d8f2) replaced an earlier "reverse roundtripper" that did the same thing, but only allowed a single HTTP request at a time per connection.
In any case, this is a model I keep returning to and finding super convenient.
To minimize the pain of managing these backends, I'd like some "revdial" or "backpain" mode in tcpproxy that lets backends register themselves.
The revdial package has been in production for a long time and might be a good start, but it doesn't do back pressure, so it wouldn't be good as a general solution. It only works for us because I know we won't have streams starving each other.
@danderson was suggesting re-using Go's existing HTTP/2 server is probably a better idea, even if it's a bit more work.
The http2 code already has unit tests to verify that full duplex CONNECT requests work over it.
We can just do an HTTP/1.x protocol upgrade over HTTPS with auth to the server proxy, which can then Hijack the conn, turn it around, and be an HTTP/2 client to the HTTP/2 server running on the backends. The backend would then handle the incoming CONNECT requests from the server, do the proxying where needed, and let the io.Copy to the http.ResponseWriter handle all the flow control automatically via the http2 package.
The code on the backend (which could be an embeddable Go package plus a binary for non-Go users) would look a lot like this code from Go's build system, but using http2+CONNECT instead of revdial.
Note that it just writes an HTTP/1.x request over HTTPS with a token in it, expects a "101 Switching Protocols", and then switches into becoming a server itself.
The token it sends as auth can also register it as a new Target (https://godoc.org/github.com/google/tcpproxy#Target) implementation on the server side, so we can say "anything matching SNI foo.com should go to backend with token XYZFOOBAR".
@danderson, thoughts?
Hello
I used to import your project into mine:
import (
"github.com/inetaf/tcpproxy"
)
Recently you added a go.mod to your project, but it seems you've used the wrong module name: module inet.af/tcpproxy instead of module github.com/inetaf/tcpproxy.
When I run go get on my project, I get this error:
go get: github.com/inetaf/tcpproxy@none updating to
github.com/inetaf/[email protected]: parsing go.mod:
module declares its path as: inet.af/tcpproxy
but was required as: github.com/inetaf/tcpproxy
Hi,
reading through the code, it seems to me that the teardown of TCP connections is not graceful and might close connections too early.
HandleConn calls Close() on both TCP connections when it returns (using defer), and HandleConn returns as soon as one of the two proxyCopy goroutines exits.
This shuts down the second proxyCopy goroutine, which might still want to transfer data in the opposite direction.
I wrote a little contrived RPC scenario where a TCP client closes its write stream to signal EOF and waits for a reply:
Client:
tcpConn.Write([]byte(`ping`))
tcpConn.CloseWrite()
// Wait for response
response, err := ioutil.ReadAll(tcpConn)
fmt.Println("response:", string(response), "err:", err)
Server:
request, err := ioutil.ReadAll(conn)
time.Sleep(100 * time.Millisecond)
conn.Write([]byte(`pong`))
When I connect the client and server directly this works, but with tcpproxy in the middle the client gets an empty response, because the TCP connections are closed as soon as the client issues CloseWrite. So in this case tcpproxy behaves differently from a direct connection.
Looking at the manpage of socat I can find this section which seems related:
When one of the streams effectively reaches EOF, the closing phase begins. Socat transfers the EOF condition to the other stream, i.e. tries to shutdown only its write stream, giving it a chance to terminate gracefully. For a defined time socat continues to transfer data in the other direction, but then closes all remaining channels and terminates.
So my question would be whether it would be better to propagate the CloseWrite() instead of just closing the TCP connection.
Hi, this is not a fault with the project, but it's something I hit while using this (and almost every alternative) and which surprised me, so maybe it'd be nice to add something about it to the documentation?
tcpproxy, correctly, does its best to take advantage of Go using the splice syscall to implement io.Copy between two net.TCPConn. This, however, leads to 6 fds being created for each proxied connection: the 2 net.TCPConn and 4 pipe fds (a two-fd pipe for each copy direction). This means you hit the (default?) soft ulimit of 1024 fds per process with just ~170 connections.
The "fix" is to raise the ulimit for the process, either using the syscall package or systemd's LimitNOFILE directive.
sir
there is few log in the program to prevention program exceeds 10000-thread limit.
I already have the *net.Listener
in the same process as the TCP Proxy. It would be great if I could proxy directly to that instead of needing to Dial a separate IP address.
// KeepAlivePeriod sets the period between TCP keep alives.
// If zero, a default is used. To disable, use a negative number.
// The keep-alive is used for both the client connection and
KeepAlivePeriod time.Duration
Using the proxy code, I noticed that the current implementation makes it slightly hard to propagate context cancellation through the proxy.Run() function. I've temporarily addressed this by using an error group and calling proxy.Close():
eg.Go(func() error {
	<-ctx.Done()
	proxy.Close()
	return ctx.Err()
})
eg.Go(func() error {
	if err := s.proxy.Run(); err != nil {
		// If context has been cancelled, terminate goroutine and return no error.
		if ctx.Err() != nil {
			return nil
		}
		return err
	}
	return nil
})
However, this created a race, which eventually led me to create a SafeProxy type that wraps the proxy with a mutex:
i.e.
type SafeProxy struct {
	mu    sync.Mutex
	proxy *tcpproxy.Proxy
}
However, if the original Run or Wait functions took a context (i.e. proxy.Run(ctx) or proxy.Wait(ctx)), that would make it easy to gracefully terminate the proxy.
I'm also happy to work on a PR to implement this if there is any interest!
The Taliban apparently seized it sometime in January 2024?
Our bad for using it, even "temporarily", as a joke. Sorry.
We'll change this repo's import path to just be github.com/inetaf/tcpproxy.
func clientHelloServerName(br *bufio.Reader) (sni string) peeks the connection to read the entire client hello packet. If the read is successful, the client hello bytes are passed to Go's tls package to parse the packet and extract the SNI.
The client hello is peeked using a bufio.Reader, which is initialized by (p *Proxy) serveConn using br := bufio.NewReader(c).
The call to bufio.NewReader allocates an internal backing buffer of 4KB. If the client hello is bigger than 4KB, the bufio.Reader.Peek call fails with bufio.ErrBufferFull, and this directly leads to the failure of the SNI matcher.
Specifically, I've been testing with Envoy as a TLS client, which I've seen produce a client hello of 5476 bytes (>4KB).
I've attached a sample tcpdump capture.
big_client_hello.zip
For reference, Go's TLS implementation supports client hellos of up to 64KB:
https://github.com/golang/go/blob/cda1e40b44771f8a01f361672cba721d0f283683/src/crypto/tls/common.go#L65
My personal suggestion is that we increase our bufio.Reader buffer from the default 4KB to 64KB.
If there isn't already documentation for using this as a binary, that is definitely needed: not everyone who could use this program for their own purposes will know how to program, and personally I don't want to read source code to figure out CLI usage.
If there is documentation for this, it's not linked anywhere I can see.
Hi,
Is there a way to limit the number of clients per connection?
Thanks in advance
The Matcher has a context parameter, which is only ever initialized with context.TODO().
When doing some basic ACL work (e.g. allow access to SNI foo from 10.0.0.0/8, deny from all), it would be nice to have the source address available in the matcher function. Since there already is a context, it would be nice to add the source to that context.