norouter / norouter

NoRouter: IP-over-Stdio. The easiest multi-host & multi-cloud networking ever. No root privilege is required.

Home Page: https://norouter.io

License: Apache License 2.0

Languages: Go 91.02%, Shell 4.90%, HTML 2.61%, Makefile 1.35%, SCSS 0.12%
Topics: multi-cloud, usermode-networking, netstack, vpn

norouter's Introduction

(NoRouter banner)

NoRouter (IP-over-Stdio) is the easiest multi-host & multi-cloud networking ever:

  • Works with any container, any VM, and any baremetal machine, anywhere, as long as shell access is available (e.g. docker exec, kubectl exec, ssh)
  • Omnidirectional port forwarding: Local-to-Remote, Remote-to-Local, and Remote-to-Remote
  • No routing configuration is required
  • No root privilege is required (e.g. sudo, docker run --privileged)
  • No public IP is required
  • Provides several network modes
    • Loopback IP mode (e.g. 127.0.42.101, 127.0.42.102, ...)
    • HTTP proxy mode with built-in name resolver
    • SOCKS4a and SOCKS5 proxy mode with built-in name resolver
  • Easily installable with a single binary, available for Linux, macOS, BSDs, and Windows

Web site: https://norouter.io/


What is NoRouter?

NoRouter implements unprivileged networking by using multiple loopback addresses such as 127.0.42.101 and 127.0.42.102. The hosts in the network are connected by forwarding packets over stdio streams such as docker exec, kubectl exec, and ssh.
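This works because, on Linux, the entire 127.0.0.0/8 block is routed to the loopback interface, so an ordinary user can bind addresses like 127.0.42.101 without any setup. A minimal sketch (on macOS/BSD, extra loopback aliases may need to be configured first):

```shell
# Unprivileged bind on a loopback alias; no root, no routing config.
python3 -c 'import socket
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.42.101", 0))          # any 127/8 address works on Linux
print(s.getsockname()[0])'
```

On Linux this prints 127.0.42.101, confirming that the per-host virtual IPs need no privileges at all.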

Unlike traditional port forwarders such as docker run -p, kubectl port-forward, ssh -L, and ssh -R, NoRouter provides mutual interconnectivity across multiple remote hosts.

(overview diagram)

NoRouter is mostly expected to be used in a dev environment for running heterogeneous multi-cloud apps.

For example, an environment composed of:

  • A laptop in the living room, for writing code
  • A baremetal workstation with GPU/FPGA in the office, for running machine-learning workloads
  • ACI (Azure Container Instances) containers, for running other workloads that do not require a complete Kubernetes cluster
  • EKS (Amazon Elastic Kubernetes Service) pods, for workloads that heavily access Amazon S3 buckets
  • GKE (Google Kubernetes Engine) pods, for running gVisor-armored workloads

For production environments, setting up VPNs rather than NoRouter would be the right choice.

Download

The binaries are available at https://github.com/norouter/norouter/releases.

See also Getting Started.

Quick usage

  • Install the norouter binary to all the hosts. Run norouter show-installer to show an installation script.
  • Create a manifest YAML file. Run norouter show-example to show an example manifest.
  • Run norouter <FILE> to start NoRouter with the specified manifest YAML file.
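Put together, a typical session might look like this (the ssh host is a placeholder; any stdio transport from the examples below works the same way):

```shell
# 1. Install norouter on a remote host (assumes ssh access to "some-host")
norouter show-installer | ssh some-host sh
# 2. Generate an example manifest and edit it to match your hosts
norouter show-example > norouter.yaml
# 3. Start NoRouter with the manifest
norouter norouter.yaml
```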

Example 1: Port forwarding across localhost + Docker + Kubernetes + LXD + SSH

Run norouter <FILE> with the following YAML file:

hosts:
# localhost
  local:
    vip: "127.0.42.100"
# Docker & Podman container (docker exec, podman exec)
  docker:
    cmd: "docker exec -i some-container norouter"
    vip: "127.0.42.101"
    ports: ["8080:127.0.0.1:80"]
# Writing /etc/hosts is possible on most Docker and Kubernetes containers
    writeEtcHosts: true
# Kubernetes Pod (kubectl exec)
  kube:
    cmd: "kubectl --context=some-context exec -i some-pod -- norouter"
    vip: "127.0.42.102"
    ports: ["8080:127.0.0.1:80"]
# Writing /etc/hosts is possible on most Docker and Kubernetes containers
    writeEtcHosts: true
# LXD container (lxc exec)
  lxd:
    cmd: "lxc exec some-container -- norouter"
    vip: "127.0.42.103"
    ports: ["8080:127.0.0.1:80"]
# SSH
# If your key has a passphrase, make sure to configure ssh-agent so that NoRouter can log in to the remote host automatically.
  ssh:
    cmd: "ssh some-user@some-ssh-host.example.com -- norouter"
    vip: "127.0.42.104"
    ports: ["8080:127.0.0.1:80"]

In this example, 127.0.42.101:8080 on each host is forwarded to port 80 of the Docker container.

Try:

$ curl http://127.0.42.101:8080
$ docker exec some-container curl http://127.0.42.101:8080
$ kubectl --context=some-context exec some-pod -- curl http://127.0.42.101:8080
$ lxc exec some-container -- curl http://127.0.42.101:8080
$ ssh some-user@some-ssh-host.example.com -- curl http://127.0.42.101:8080

Similarly, 127.0.42.102:8080 is forwarded to port 80 of the Kubernetes Pod, 127.0.42.103:8080 is forwarded to port 80 of the LXD container, and 127.0.42.104:8080 is forwarded to port 80 of some-ssh-host.example.com.

Example 2: Virtual VPN connection into networks created with docker network create

This example shows how to use NoRouter to create an HTTP proxy that works like a VPN router, connecting clients into networks created with docker network create.

This technique also works with remote Docker, rootless Docker, Docker for Mac, and even with Podman. Read docker as podman when using Podman.

First, create a Docker network named "foo", and create an nginx container named "nginx" there:

$ docker network create foo
$ docker run -d --name nginx --hostname nginx --network foo nginx:alpine

Then, create a "bastion" container in the same network, and install NoRouter into it:

$ docker run -d --name bastion --network foo alpine sleep infinity
$ norouter show-installer | docker exec -i bastion sh

Save the following YAML as example2.yaml and launch norouter example2.yaml:

hosts:
  local:
    vip: "127.0.42.100"
    http:
      listen: "127.0.0.1:18080"
    loopback:
      disable: true
  bastion:
    cmd: "docker exec -i bastion /root/bin/norouter"
    vip: "127.0.42.101"
routes:
  - via: bastion
    to: ["0.0.0.0/0", "*"]

The "nginx" container can now be reached from the host as follows:

$ export http_proxy=http://127.0.0.1:18080
$ curl http://nginx

If you are using Podman, try curl http://nginx.dns.podman rather than curl http://nginx.

Example 3: Virtual VPN connection into Kubernetes networks

Example 2 can also be applied to Kubernetes clusters, just by replacing docker exec with kubectl exec.

$ export http_proxy=http://127.0.0.1:18080
$ curl http://nginx.default.svc.cluster.local
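For instance, the Example 2 manifest could be adapted as follows (the kubectl context and pod name are placeholders, mirroring Example 1; the bastion pod needs norouter installed, e.g. via norouter show-installer):

```yaml
hosts:
  local:
    vip: "127.0.42.100"
    http:
      listen: "127.0.0.1:18080"
    loopback:
      disable: true
  bastion:
    cmd: "kubectl --context=some-context exec -i some-pod -- norouter"
    vip: "127.0.42.101"
routes:
  - via: bastion
    to: ["0.0.0.0/0", "*"]
```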

Example 4: Aggregate VPCs of AWS, Azure, and GCP

The following example provides an HTTP proxy that virtually aggregates VPCs of AWS, Azure, and GCP:

hosts:
  local:
    vip: "127.0.42.100"
    http:
      listen: "127.0.0.1:18080"
  aws_bastion:
    cmd: "ssh aws_bastion -- ~/bin/norouter"
    vip: "127.0.42.101"
  azure_bastion:
    cmd: "ssh azure_bastion -- ~/bin/norouter"
    vip: "127.0.42.102"
  gcp_bastion:
    cmd: "ssh gcp_bastion -- ~/bin/norouter"
    vip: "127.0.42.103"
routes:
  - via: aws_bastion
    to:
      - "*.compute.internal"
  - via: azure_bastion
    to:
      - "*.internal.cloudapp.net"
  - via: gcp_bastion
    to:
# Substitute "example-123456" with your own GCP project ID
      - "*.example-123456.internal"

The local host can then access all the remote hosts in these networks:

$ export http_proxy=http://127.0.0.1:18080
$ curl http://ip-XXX-XXX-XX-XXX.ap-northeast-1.compute.internal
$ curl http://some-azure-host.internal.cloudapp.net
$ curl http://some-gcp-host.asia-northeast1-b.c.example-123456.internal

Documentation

Installing NoRouter from source

$ make
$ sudo make install

Contributing to NoRouter


NoRouter is licensed under the terms of Apache License, Version 2.0.

Copyright (C) NoRouter authors.


norouter's Issues

Integration with libp2p and/or bifrost

norouter is really cool, the idea of forwarding IP traffic via a stdin/stdout "exec" session is great.

I see a few ways my project Bifrost and libp2p could improve norouter and vice versa:

  • Bifrost defines a Transport interface and a common Quic transport over a io.Reader/io.Writer or a net.Listener.
  • Norouter can pass the "exec" session read/writer to the Quic constructor to make a Transport.
  • Bifrost can then be dynamically configured to listen on local ports & forward traffic via norouter.
  • Multiple transports can be used simultaneously & Bifrost will balance traffic over them.
  • Other logic like connection management and on-demand connections is then implemented "for free".

There's then a very modular, easy way to later swap norouter out for a more production-grade transport, such as direct UDP connections with k8s DNS resolution, without changing any app code.

Would love to hear your thoughts on this @AkihiroSuda

Provide built-in HTTP proxy for name resolution

Most RPC protocols including REST and gRPC can be called via an HTTP proxy.

Providing a built-in HTTP proxy service on 127.0.0.1:8080 on all the hosts would be beneficial for providing name resolution without privileges.

Optional support for real TUN mode with setuid (setcap)

In addition to the default unprivileged mode (netstack mode #7), we should also support real TUN mode.

Pro: No need to specify port mappings.
Con: The user needs the CAP_NET_ADMIN capability, not just DAC permissions for /dev/net/tun.

The norouter binary will need the setuid bit or a file capability set in order to use real TUN mode.
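For reference, granting a file capability (rather than full setuid root) might look like this; the install path is an assumption, since this mode is only proposed:

```shell
# Grant CAP_NET_ADMIN to the binary via file capabilities
# (path is an assumption; requires root once, at install time)
sudo setcap cap_net_admin+ep /usr/local/bin/norouter
# Inspect the result
getcap /usr/local/bin/norouter
```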

socks: silence "proxy starts" and "proxy ends" logs

$ norouter a.yaml
suda-ws01: INFO[0000] Ready: 127.0.42.100                          
suda-ws01: INFO[0000] Ready: 127.0.42.102                          
suda-ws01: INFO[0000] Ready: 127.0.42.103                          
suda-ws01: INFO[0000] Ready: 127.0.42.101                          
suda-ws01: WARN[0011] stderr[local(127.0.42.100)]: 2020-10-28T09:41:40.477192Z suda-ws01 exe info: "proxy starts" client_addr="127.0.0.1:59240" command="connect" dest_addr="127.0.42.101:80" dest_host="nginx.norouter" protocol="SOCKS5" request_id="02b75cc5-a02f-1a99-9aa7-51648567b6b1" src_addr="127.0.42.100:17548" type="access" 
suda-ws01: WARN[0011] stderr[local(127.0.42.100)]: 2020-10-28T09:41:40.483037Z suda-ws01 exe info: "proxy ends" elapsed=0.005710385 request_id="02b75cc5-a02f-1a99-9aa7-51648567b6b1" 

The extra "proxy starts" and "proxy ends" logs can be removed once cybozu-go/usocksd#8 gets merged.

[routes] msg="failed to call hookRouteOnSYN" error="failed to listen on {'\\x00' \"172.27.0.3\" 'P'}: bind tcp 172.27.0.3:80: port is in use"

Step 1: Run the following script on macOS with NoRouter v0.5.0.

#!/bin/bash
set -eux -o pipefail
L="norouter-test"
NET="norouter-test"

x="$(docker container ls -a -q --filter label=$L)"; if [ -n "$x" ]; then docker container rm -f $x; fi
x="$(docker network ls -q --filter label=$L)"; if [ -n "$x" ]; then docker network rm $x; fi

docker network create --label $L $NET
docker run --label $L -d --name bastion --hostname bastion --network $NET alpine sleep infinity
docker run --label $L -d --name wordpress --hostname wordpress --network $NET wordpress
docker run --label $L -d --name mysql --hostname mysql --network $NET -e MYSQL_ROOT_PASSWORD=******** mysql
docker cp $GOPATH/src/github.com/norouter/norouter/bin/norouter-Linux-x86_64 bastion:/usr/local/bin/norouter
cat << EOF | norouter /dev/stdin
hostTemplate:
  loopback:
    disable: true
hosts:
  local:
    vip: "127.0.42.100"
    http:
      listen: "127.0.0.1:18080"
  bastion:
    cmd: "docker exec -i bastion norouter"
    vip: "127.0.42.101"
routes:
  - via: bastion
    to: ["*.${NET}"]
EOF

Step 2: Launch Firefox (v82), and set the proxy to http://127.0.0.1:18080

Step 3: Open http://wordpress.norouter-test via Firefox

Step 4: Fill in the database configuration

(screenshot of the database configuration form)

Step 5: Click "Submit". It fails with a "See NoRouter agent log" error.

NoRouter logs:

+ norouter /dev/stdin
suda-mbp.local: INFO[0000] Ready: 127.0.42.100                          
suda-mbp.local: INFO[0000] Ready: 127.0.42.101                          
suda-mbp.local: WARN[0053] stderr[bastion(127.0.42.101)]: bastion: time="2020-11-13T07:29:15Z" level=warning msg="failed to call hookRouteOnSYN" error="failed to listen on {'\\x00' \"172.27.0.3\" 'P'}: bind tcp 172.27.0.3:80: port is in use" 
suda-mbp.local: WARN[0053] stderr[local(127.0.42.100)]: suda-mbp.local: time="2020-11-13T16:29:15+09:00" level=warning msg="failed to call do()" error="failed to dial gonet 172.27.0.3:80: connect tcp 172.27.0.3:80: connection was refused" 

support building with go 1.21

When building with Go 1.21:

go build ./cmd/norouter
package github.com/norouter/norouter/cmd/norouter
	imports github.com/norouter/norouter/pkg/agent
	imports github.com/norouter/norouter/pkg/agent/dns
	imports gvisor.dev/gvisor/pkg/tcpip/adapters/gonet
	imports gvisor.dev/gvisor/pkg/tcpip/stack
	imports gvisor.dev/gvisor/pkg/sync/locking
	imports gvisor.dev/gvisor/pkg/gohacks: build constraints exclude all Go files in /home/kirillvr/go/pkg/mod/gvisor.dev/[email protected]/pkg/gohacks

Support automatic installation of `norouter binary` to remote hosts

Currently, the user needs to install the norouter binary to all the remote hosts manually.

This could be automated using a new YAML field .install.shCmd like this:

hosts:
# uses norouter binary that is already installed (as in the current version)
  host1:
    cmd: ["docker", "exec", "-i", "host1", "norouter"]
# NEW: install norouter binary automatically using `norouter show-installer`
  host2:
    install:
      shCmd: ["docker", "exec", "-i", "host2", "sh"]

set password for HTTP proxy and SOCKS proxy

When .[]hosts.http.auth is set to "Basic", the username and password will be autogenerated as ~/.norouter/agent/http/username and ~/.norouter/agent/http/password.SECRET.

For ease of configuration, we might also want to generate ~/.norouter/agent/http/sourceme.SECRET that contains a shell script snippet like this:

# PASSWORD stands for the autogenerated password
http_proxy="norouter:PASSWORD@127.0.0.1:18080"
https_proxy="norouter:PASSWORD@127.0.0.1:18080"
HTTP_PROXY="norouter:PASSWORD@127.0.0.1:18080"
HTTPS_PROXY="norouter:PASSWORD@127.0.0.1:18080"
export http_proxy https_proxy HTTP_PROXY HTTPS_PROXY

Security Policy violation SECURITY.md

This issue was automatically created by Allstar.

Security Policy Violation
Security policy not enabled.
A SECURITY.md file can give users information about what constitutes a vulnerability and how to report one securely so that information about a bug is not publicly visible. Examples of secure reporting methods include using an issue tracker with private issue support, or encrypted email with a published key.

To fix this, add a SECURITY.md file that explains how to handle vulnerabilities found in your repository. Go to https://github.com/norouter/norouter/security/policy to enable.

For more information, see https://docs.github.com/en/code-security/getting-started/adding-a-security-policy-to-your-repository.


This issue will auto resolve when the policy is in compliance.

Issue created by Allstar. See https://github.com/ossf/allstar/ for more information. For questions specific to the repository, please contact the owner or maintainer.

Release: compress binaries

$ du -hs norouter-Linux-x86_64 
 17M	norouter-Linux-x86_64

$ gzip -9 norouter-Linux-x86_64

$ du -hs norouter-Linux-x86_64.gz 
9.1M	norouter-Linux-x86_64.gz

This will break existing installation scripts.

agent process remains after exit

Gave this a shot on a hashicorp/nomad allocation, using the nomad exec command to open a port from the instance to my local machine, and it worked wonderfully. But there is one little snag: the agent process on the instance remains after exiting on the host (with SIGTERM), and I have to manually nomad exec in again and kill it (with SIGKILL) before restarting norouter (if I want to use the same port).
