almirkadric-published / docker-tuntap-osx

A tuntap shim installer for "Docker for Mac"

License: MIT License

Language: Shell
Topics: docker hyperkit tuntap networking routing shim docker-for-mac docker-network

docker-tuntap-osx's Introduction

docker-tuntap-osx

docker-tuntap-osx is a tuntap support shim installer for Docker for Mac.

The Problem

Currently, Docker for Mac has no support for network routing into the Host Virtual Machine that is created using hyperkit. This is because the network interface options used to create the instance do not create a bridge interface between the Physical Machine and the Host Virtual Machine. To make matters worse, the arguments used to create the Host Virtual Machine are hardcoded into the Docker for Mac binary with no means to configure them.

How it works

This installer (docker_tap_install.sh) moves the original hyperkit binary inside the Docker for Mac application aside (renaming it hyperkit.original) and places our shim (./sbin/docker.hyperkit.tuntap.sh) in its stead. This shim then injects the additional arguments required to attach a TunTap interface to the Host Virtual Machine, essentially creating a bridge interface between the guest and the host (much like hvint0 on Docker for Windows).
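For reference, the injected argument is an extra hyperkit device slot; in the ps output quoted in the issues below it shows up roughly as:

-s 2:1,virtio-tap,tap1

(the exact slot number and tap device name depend on the shim version and your setup).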

From there the up script (docker_tap_up.sh) is used to bring the network interface up on both the Physical Machine and the Host Virtual Machine. Unlike the install script, which only needs to be run once, this up script must be run for every restart of the Host Virtual Machine.

Once done, the IP address 10.0.75.2 can be used as a network routing gateway to reach any containers within the Host Virtual Machine:

route add -net <IP RANGE> -netmask <IP MASK> 10.0.75.2
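For example, to route the default docker0 bridge subnet (172.17.0.0/16, as seen in the issue reports below) through the shim gateway, the command would be something like the following; the subnet and mask are assumptions here and should be adjusted to your own container networks:

sudo route add -net 172.17.0.0 -netmask 255.255.0.0 10.0.75.2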

Note: Although as of docker-for-mac version 17.12.0 the following is no longer needed, for prior versions you will need to set up IP forwarding in the iptables definition on the Host Virtual Machine:
(This is not done by the helper scripts as this is not an OSX or tuntap specific issue. You would need to do the same for Docker for Windows, so it should be handled outside the scope of this project.)

docker run --rm --privileged --pid=host debian nsenter -t 1 -m -u -n -i iptables -A FORWARD -i eth1 -j ACCEPT

Note: Although not required for docker-for-mac versions greater than 17.12.0, the above command can be replaced with the following if ever needed; it has been tested to work on docker-for-windows as an alternative. This is kept here in case docker-for-mac changes something in the future and this command becomes a necessity once again.

docker run --rm --privileged --pid=host docker4w/nsenter-dockerd /bin/sh -c 'iptables -A FORWARD -i eth1 -j ACCEPT'

Dependencies

Docker for Mac

TunTap

brew cask install tuntap
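Note: on newer Homebrew versions the brew cask subcommand has been removed (see the install issues further below); the equivalent invocation would be:

brew install --cask tuntap

As reported in those issues, the tuntap cask may no longer be available on recent Homebrew, and the kernel extension is x86_64 only, so it will not load on Apple Silicon machines.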

How to install it

To install it, run the shim installer script. This will automatically check if the currently installed shim is the correct version and make a backup if necessary:

./sbin/docker_tap_install.sh

After this you will need to bring up the network interfaces every time the docker Host Virtual Machine is restarted:

./sbin/docker_tap_up.sh

How to remove it

The uninstall script simply reverts the installer, restoring the original binary and removing the shim:

./sbin/docker_tap_uninstall.sh

Projects using docker-tuntap-osx

License

MIT

References & Credits

  • A big thanks to michaelhenkel and strayerror on the Docker forums for the inspiration and help to make this package
  • The original thread on the Docker Forums

docker-tuntap-osx's People

Contributors

alexpekurovsky, almirkadric, deinspanjer, mamercad, martinlasek, mhumesf, thiagorb, tverlaan


docker-tuntap-osx's Issues

Conflicts with my VPN

After running docker_tap_up.sh, I cannot do docker builds that require my npm registry, which is accessible over VPN in the 10.x range.

Is there something simple I can do regarding it?

# inside a test busybox container
/ # netstat -nr
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
0.0.0.0         192.168.65.1    0.0.0.0         UG        0 0          0 eth0
10.0.0.0        0.0.0.0         255.0.0.0       U         0 0          0 eth1
127.0.0.0       0.0.0.0         255.0.0.0       U         0 0          0 lo
172.17.0.0      0.0.0.0         255.255.0.0     U         0 0          0 docker0
172.18.0.0      0.0.0.0         255.255.0.0     U         0 0          0 br-293cf0386986
192.168.65.0    0.0.0.0         255.255.255.0   U         0 0          0 eth0

Clearly the 10.0.0.0 entry won't let it get to my npm which is at 10.21.x.y.

Can't install plugin on macOS High Sierra 10.13.5

Dear team,

Please help us; when we ran "brew cask install tuntap", we got this error:

brew cask install tuntap
==> Caveats
To install and/or use tuntap you may need to enable their kernel extension in

System Preferences → Security & Privacy → General

For more information refer to vendor documentation or the Apple Technical Note:

https://developer.apple.com/library/content/technotes/tn2459/_index.html

==> Satisfying dependencies
==> Downloading https://downloads.sourceforge.net/tuntaposx/tuntap/20150118/tuntap_20150118.tar.gz
Already downloaded: /Users/tonifirnandes/Library/Caches/Homebrew/Cask/tuntap--20150118.tar.gz
==> Verifying checksum for Cask tuntap
==> Installing Cask tuntap
==> Running installer for tuntap; your password may be necessary.
==> Package installers may write to any location; options such as --appdir are ignored.
installer: Package name is TunTap Installer package
installer: Installing at base path /
installer: The install failed (The Installer encountered an error that caused the installation to fail. Contact the software manufacturer for assistance.)
==> Purging files for version 20150118 of Cask tuntap
Error: Command failed to execute!

==> Failed command:
/usr/bin/sudo -E -- env LOGNAME=tonifirnandes USER=tonifirnandes USERNAME=tonifirnandes /usr/sbin/installer -pkg /usr/local/Caskroom/tuntap/20150118/tuntap_20150118.pkg -target /

==> Standard Output of failed command:
installer: Package name is TunTap Installer package
installer: Installing at base path /
installer: The install failed (The Installer encountered an error that caused the installation to fail. Contact the software manufacturer for assistance.)

==> Standard Error of failed command:

==> Exit status of failed command:
#<Process::Status: pid 5941 exit 1>

Thanks.

Are the arguments used to create the host virtual machine really hardcoded?

~/Library/Containers/com.docker.docker/Data/vms/0/hyperkit.json gives me the following

{"hyperkit":"/Applications/Docker.app/Contents/Resources/bin/com.docker.hyperkit","argv0":"com.docker.hyperkit","state_dir":"vms/0","vpnkit_sock":"vpnkit.eth.sock","vpnkit_uuid":"502f067e-545a-455c-acb2-cb65f7d0b95e","vpnkit_preferred_ipv4":"","uuid":"838c36b8-c656-4d11-b047-ed50463f9faa","disks":[{"path":"/Users/richie/Library/Containers/com.docker.docker/Data/vms/0/Docker.raw","size":15258,"format":"","trim":true}],"iso":["/Applications/Docker.app/Contents/Resources/linuxkit/docker-desktop.iso","vms/0/config.iso","/Applications/Docker.app/Contents/Resources/linuxkit/docker.iso"],"vsock":true,"vsock_dir":"vms/0","vsock_ports":[2376,1525],"vsock_guest_cid":3,"vmnet":false,"9p_sockets":null,"kernel":"","initrd":"","bootrom":"/Applications/Docker.app/Contents/Resources/uefi/UEFI.fd","cpus":12,"memory":16384,"console":2,"pid":36751,"arguments":["-A","-u","-F","vms/0/hyperkit.pid","-c","12","-m","16384M","-s","0:0,hostbridge","-s","31,lpc","-s","1:0,virtio-vpnkit,path=vpnkit.eth.sock,uuid=502f067e-545a-455c-acb2-cb65f7d0b95e","-U","838c36b8-c656-4d11-b047-ed50463f9faa","-s","2:0,ahci-hd,/Users/richie/Library/Containers/com.docker.docker/Data/vms/0/Docker.raw","-s","3,virtio-sock,guest_cid=3,path=vms/0,guest_forwards=2376;1525","-s","4,ahci-cd,/Applications/Docker.app/Contents/Resources/linuxkit/docker-desktop.iso","-s","5,ahci-cd,vms/0/config.iso","-s","6,ahci-cd,/Applications/Docker.app/Contents/Resources/linuxkit/docker.iso","-s","7,virtio-rnd","-l","com1,autopty=vms/0/tty,asl","-f","bootrom,/Applications/Docker.app/Contents/Resources/uefi/UEFI.fd,,"],"cmdline":"/Applications/Docker.app/Contents/Resources/bin/com.docker.hyperkit -A -u -F vms/0/hyperkit.pid -c 12 -m 16384M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-vpnkit,path=vpnkit.eth.sock,uuid=502f067e-545a-455c-acb2-cb65f7d0b95e -U 838c36b8-c656-4d11-b047-ed50463f9faa -s 2:0,ahci-hd,/Users/richie/Library/Containers/com.docker.docker/Data/vms/0/Docker.raw -s 3,virtio-sock,guest_cid=3,path=vms/0,guest_forwards=2376;1525 -s 4,ahci-cd,/Applications/Docker.app/Contents/Resources/linuxkit/docker-desktop.iso -s 5,ahci-cd,vms/0/config.iso -s 6,ahci-cd,/Applications/Docker.app/Contents/Resources/linuxkit/docker.iso -s 7,virtio-rnd -l com1,autopty=vms/0/tty,asl -f bootrom,/Applications/Docker.app/Contents/Resources/uefi/UEFI.fd,,"}

The Hyperkit wrapper is included in Docker Desktop for Mac as a "key component." Its source code shows that the JSON config is used directly to configure com.docker.hyperkit.

https://github.com/moby/hyperkit/blob/ed9ab73104691fb24db340b58e28a7d45e177eea/go/hyperkit.go#L58

https://github.com/moby/hyperkit/blob/ed9ab73104691fb24db340b58e28a7d45e177eea/go/hyperkit.go#L42-L137

As you can see, the location of com.docker.hyperkit and the arguments are clearly specified in this JSON-formatted file. There's no need for injection.

UPDATE: it turns out hyperkit will only "write the state to the JSON file."

Do I need a default route out of my containers?

I can ping 172.17.0.1 (the default gw that my containers are set to) from my host.

I can't ping any of the other containers, and I can't seem to get an HTTP connection either (perhaps ping will never work).

I don't know what this default gateway is. It's related to default0 on Linux, but I understand that on OSX it works in a different manner.

So, do I need to go into my other containers and replace the default route with 10.75.0.2?

Thanks for the help!

Docker Toolbox compatible?

Thanks for your efforts with this lovely project!
This isn't a bug with your project so much as a question; please feel free to redirect me elsewhere if this is the wrong place. Definitely NOT a complaint either; this project looks sweet, just a developer looking for another developer's opinion. :)

My goal is to get this docker-compose file working on my Mac. [1]
It's been surprisingly challenging, I think because of the problem docker-tuntap-osx solves.

TL;DR

Do you think this project could be made to work with Docker Toolbox + the new Xhyve driver?

Back story

The container I need to use is configured to point to localhost, and I'd prefer to avoid meddling with upstream configs if possible.

I noticed the README says "Docker for Mac".
So I did a search in the repo for 'docker toolbox' and no results.

Curious, how much of a lift do you think this would be, knowing Docker for Mac and the Xhyve driver both use Hypervisor.framework?

To recreate my env, try the following.
And no, Docker for Mac and Docker Toolbox don't clobber each other.

# install Docker Tools
brew install docker-compose docker-machine docker
brew install docker-machine-driver-xhyve

# you may need to create a default machine
docker-machine create --driver xhyve default

# clone the React Native project
git clone [email protected]/fbsamples/f8app.git
cd f8app

# spin up the machines
docker-compose up

With Docker running, from your Mac try http://localhost:4040. I get an error.

[1] It's a React Native project Facebook published back in December. Doesn't appear to be maintained but nevertheless useful for learning.

Details:
macOS 10.13.2
homebrew 1.5.0
Docker version 18.01.0-ce, build 03596f5
docker-compose version 1.18.0, build 8dd22a9
docker-machine version 0.13.0, build 9ba6da9

Fatal Error restarting docker daemon on install

so I :

  • by some chance I once got the tap interface after running the install script, and could then use the tap up script, add the route, and ping my containers
  • after some minutes playing with the containers I realized I could not ping them anymore, although the tap interface was still there (but I noticed the IP address of the tap, configured as the gateway, had changed -> and even changing the route to the new IP address did not solve the problem)
  • I uninstalled and reinstalled many times (plus additional steps), with these results:
    • almost every time I reinstall, a fatal error occurs restarting the docker daemon, giving this error message in a dialog box:
2021-04-18T04:57:03Z dockerd time="2021-04-18T04:57:03.299215681Z" level=error msg="Handler for GET /v1.41/services returned error: This node is not a swarm manager. Use \"docker swarm init\" or \"docker swarm join\" to connect this node to swarm and try again."
  • no tap interface ever appears with ifconfig

I understand that there's something about macOS not supporting tap interfaces anymore, and I know the community would have provided a solution if it had been possible.

Honestly, the MacBook sucks for anything technically serious: just good at media and secretary work, no respect for GNU/Linux. This would never have happened on Debian; going back to Debian, keeping the Mac for dumb Slack, YouTube, Google Meet, Google Slides, and invoices.

High Sierra compatibility?

Will this be compatible with macOS High Sierra? Apple seems to be limiting usage of kernel extensions.

The output gives me the message ("The hyperkit executable file was of an unknown type") when I execute the install script twice

➜ docker-tuntap-osx git:(master) sh ./sbin/docker_tap_install.sh
Password:
The hyperkit executable file was of an unknown type

And after I restart the docker application manually and add the route to my container IP, I still cannot ping my docker containers directly.

sudo route add -net 172.16.238.20 -netmask 255.255.0.0 10.0.75.1

Those docker containers are created by docker-compose with static IPs.
The docker-compose.yml is under: https://github.com/wushibin/kafka-learning/blob/master/docker-compose.yml

Thanks in advance.

Installer script still fails to recognize that docker has completed its restart

When the installer script is run, docker is restarted. The script should detect the new PID and proceed to the completion message. However, for some reason it still fails to detect this new PID in some instances, resulting in a failure-to-restart message.

Although this is the final step in the process and everything in fact works just fine, it's a nuisance and should be fixed.
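A minimal sketch of a more tolerant check, purely as an illustration (the variable names are assumptions, not taken from the install script): poll for a new com.docker.hyperkit PID for a while instead of checking once.

# Hypothetical sketch, not the project's code.
oldPid="$(pgrep -f com.docker.hyperkit | head -n 1)"
# (Docker is restarted at this point by the installer.)
timeout=30
while [ "$timeout" -gt 0 ]; do
    newPid="$(pgrep -f com.docker.hyperkit | head -n 1)"
    if [ -n "$newPid" ] && [ "$newPid" != "$oldPid" ]; then
        break
    fi
    sleep 1
    timeout=$((timeout - 1))
done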

Create uninstaller script

Currently the uninstall process needs to be done manually. Ideally we should automate this into a script for simpler use.

Thoughts on automation?

Installed this yesterday, in the absence of docker/for-mac#155 ever being addressed, and discovered it works perfectly, despite me being on Catalina (10.15.7) (so why can't Docker just fix it?!). Thanks so much! I've been wanting to make it easier to switch back and forth between running tests and the application under test on the host machine or inside docker, as it's so much faster to change things running locally, and much easier to debug them too.

However, I'd really like to automate running docker_tap_up.sh. Anyone got any thoughts on how to do it reliably? I can't find any hook that will trivially run a script when docker for mac (re)starts.

My only idea is to write a docker image that will, on start, use a mounted, named pipe to run commands on the host to run docker_tap_up.sh. Then run that image with --restart always (I can justify it running in the background forever - it can listen to network creation events to create the necessary routes on the host).

But that seems quite an arduous solution to write... anyone have any better ideas?
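One rough idea, sketched here only as an illustration (this watcher is not part of docker-tuntap-osx): a small background shell loop that re-runs docker_tap_up.sh whenever a new com.docker.hyperkit process appears.

#!/bin/sh
# Hypothetical watcher sketch: re-run the up script whenever the
# com.docker.hyperkit PID changes (i.e. the Docker VM was restarted).
lastPid=""
while true; do
    pid="$(pgrep -f com.docker.hyperkit | head -n 1)"
    if [ -n "$pid" ] && [ "$pid" != "$lastPid" ]; then
        ./sbin/docker_tap_up.sh    # run from the repo root, or use an absolute path
        lastPid="$pid"
    fi
    sleep 5
done

Wrapped in a launchd agent this could start at login, but it is only a sketch of the polling approach, not a replacement for the named-pipe idea above.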

Fails to install due to Swarm error

New Docker installation:

  • Version: 2.5.0.1 (49550)
  • Engine: 19.03.13

Following error message after running docker_tap_install.sh:

2020-12-09T23:42:43Z dockerd time="2020-12-09T23:42:43.108011945Z" level=error msg="Handler for GET /v1.24/services returned error: This node is not a swarm manager. Use \"docker swarm init\" or \"docker swarm join\" to connect this node to swarm and try again."

Unable to install it on Macbook m1

Not able to install tuntap

brew cask install tuntap

==> Installing Cask tuntap
==> Running installer for tuntap; your password may be necessary.
Package installers may write to any location; options such as --appdir are ignored.
Password:
installer: Package name is TunTap Installer package
installer: Installing at base path /
installer: The install failed. (The Installer encountered an error that caused the installation to fail. Contact the software manufacturer for assistance. An error occurred while running scripts from the package "tuntap_20150118.pkg".)
==> Purging files for version 20150118 of Cask tuntap
Error: Failure while executing; /usr/bin/sudo -E -- /usr/bin/env LOGNAME=kiran_chavala USER=kiran_chavala USERNAME=kiran_chavala /usr/sbin/installer -pkg /opt/homebrew/Caskroom/tuntap/20150118/tuntap_20150118.pkg -target / exited with 1. Here's the output:
installer: Package name is TunTap Installer package
installer: Installing at base path /
installer: The install failed. (The Installer encountered an error that caused the installation to fail. Contact the software manufacturer for assistance. An error occurred while running scripts from the package "tuntap_20150118.pkg".)

Allow scripts to be configurable

Currently the scripts hard code the following:

  • Physical Machine Tap Interface (e.g. /dev/tap1)
  • Host VM Tap Interface (eth1)
  • Physical Machine Gateway IP
  • Host VM Gateway IP

We should allow the user to configure these in some way
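A minimal sketch of what that could look like (the variable names here are illustrative assumptions, not existing options of the scripts): read overrides from the environment and fall back to the current defaults.

# Hypothetical configuration block; variable names are illustrative only.
TAP_DEVICE="${TAP_DEVICE:-tap1}"                 # host tap device (/dev/tap1)
VM_TAP_INTERFACE="${VM_TAP_INTERFACE:-eth1}"     # tap interface inside the Host VM
HOST_GATEWAY_IP="${HOST_GATEWAY_IP:-10.0.75.1}"  # host side gateway address
VM_GATEWAY_IP="${VM_GATEWAY_IP:-10.0.75.2}"      # Host VM side gateway address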

Is there any alternative to tuntap osx?

  1. Apple M1 doesn't support tuntap-osx (not my case)
  2. Apple deprecates tuntap kexts (all cases)
  3. There are some bugs that will not be fixed in tuntap-osx (last updated in 2015); sometimes it causes a reboot of macOS 11 (x64 CPU), and the system removes the tun/tap kexts and plist files of tuntap-osx

So, are there any alternatives to tuntap-osx?

Cannot connect to container after setup

I really don't get why this is not working. I'm simply running a netcat listener in my container:

% docker run --rm --privileged alpine nc -v -l -p 54321
listening on [::]:54321 ...

...and trying to connect from my laptop's shell, which fails:

% nc -v 172.17.0.1 54321
nc: connectx to 172.17.0.1 port 54321 (tcp) failed: Operation timed out

Before doing this test, I had set up a route on my laptop that uses the 10.0.75.2 gateway:

% sudo route -v add -net 172.17.0.1 -netmask 255.255.255.0 10.0.75.2

...which we can see here:

% netstat -nr
Routing tables

Internet:
Destination        Gateway            Flags        Refs      Use   Netif Expire
default            10.219.16.1        UGSc           90        0     en4
default            10.219.5.1         UGScI           3        0     en0
10.0.75/30         link#14            UC              4        0    tap1
10.0.75.2          link#14            UHLWI           0        0    tap1
10.0.75.3          ff:ff:ff:ff:ff:ff  UHLWbI          0       88    tap1
10.219.5/24        link#6             UCS             2        0     en0
10.219.5.1/32      link#6             UCS             1        0     en0
10.219.5.1         0:0:c:7:ac:8       UHLWIir         3       10     en0   1138
10.219.5.3         0:5d:73:dc:ae:7f   UHLWI           0        0     en0    309
10.219.5.45/32     link#6             UCS             1        0     en0
10.219.5.255       ff:ff:ff:ff:ff:ff  UHLWbI          0       88     en0
10.219.16/22       link#5             UCS            12        0     en4
10.219.16.1/32     link#5             UCS             1        0     en4
10.219.16.1        0:0:5e:0:1:a       UHLWIir        18        0     en4   1195
10.219.16.2        a8:2b:b5:58:5:47   UHLWI           0        0     en4   1167
10.219.16.3        a8:2b:b5:57:da:bd  UHLWI           0        0     en4   1199
10.219.16.8        0:26:73:f7:6:57    UHLWI           0        0     en4    899
10.219.16.9        0:26:73:f7:6:3b    UHLWI           0        0     en4    897
10.219.16.10       0:26:73:f7:5:7     UHLWI           0        0     en4    898
10.219.16.49       98:5a:eb:d2:ee:2a  UHLWI           0        0     en4    924
10.219.16.50       38:c9:86:14:1e:22  UHLWI           0        0     en4    762
10.219.16.55       50:65:f3:2d:8c:6c  UHLWI           0        0     en4    795
10.219.18.26       link#5             UHLWI           0        0     en4
10.219.18.33       3c:2c:30:f8:14:4a  UHLWIi          3     1899     en4    436
10.219.18.54/32    link#5             UCS             0        0     en4
10.219.18.56       54:bf:64:12:37:af  UHLWIi          2     4285     en4    768
10.219.19.255      ff:ff:ff:ff:ff:ff  UHLWbI          0       88     en4
127                127.0.0.1          UCS             1   275814     lo0
127.0.0.1          127.0.0.1          UH             15  1571399     lo0
127.255.255.255    127.0.0.1          UHW3I           0   275807     lo0      3
169.254            link#5             UCS             0        0     en4
169.254            link#6             UCSI            0        0     en0
172.17/24          10.0.75.2          UGSc            0        0    tap1
224.0.0/4          link#5             UmCS            2        0     en4
224.0.0/4          link#6             UmCSI           2        0     en0
224.0.0.251        1:0:5e:0:0:fb      UHmLWI          0        0     en4
224.0.0.251        1:0:5e:0:0:fb      UHmLWI          0        0     en0
239.255.255.250    1:0:5e:7f:ff:fa    UHmLWI          0      128     en4
239.255.255.250    1:0:5e:7f:ff:fa    UHmLWI          0      128     en0
255.255.255.255/32 link#5             UCS             0        0     en4
255.255.255.255/32 link#6             UCSI            0        0     en0

We can see the tap1 virtual device is present on the laptop:

% ifconfig
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 16384
	options=1203<RXCSUM,TXCSUM,TXSTATUS,SW_TIMESTAMP>
	inet 127.0.0.1 netmask 0xff000000
	inet6 ::1 prefixlen 128
	inet6 fe80::1%lo0 prefixlen 64 scopeid 0x1
	nd6 options=201<PERFORMNUD,DAD>
gif0: flags=8010<POINTOPOINT,MULTICAST> mtu 1280
stf0: flags=0<> mtu 1280
XHC20: flags=0<> mtu 0
en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
	ether 18:65:90:d2:88:b5
	inet 10.219.5.45 netmask 0xffffff00 broadcast 10.219.5.255
	media: autoselect
	status: active
p2p0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 2304
	ether 0a:65:90:d2:88:b5
	media: autoselect
	status: inactive
awdl0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1484
	ether 5a:db:2b:c4:ca:c7
	inet6 fe80::58db:2bff:fec4:cac7%awdl0 prefixlen 64 scopeid 0x8
	nd6 options=201<PERFORMNUD,DAD>
	media: autoselect
	status: active
en1: flags=8963<UP,BROADCAST,SMART,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500
	options=60<TSO4,TSO6>
	ether 4a:00:08:48:04:20
	media: autoselect <full-duplex>
	status: inactive
en2: flags=8963<UP,BROADCAST,SMART,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500
	options=60<TSO4,TSO6>
	ether 4a:00:08:48:04:21
	media: autoselect <full-duplex>
	status: inactive
bridge0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
	options=63<RXCSUM,TXCSUM,TSO4,TSO6>
	ether 4a:00:08:48:04:20
	Configuration:
		id 0:0:0:0:0:0 priority 0 hellotime 0 fwddelay 0
		maxage 0 holdcnt 0 proto stp maxaddr 100 timeout 1200
		root id 0:0:0:0:0:0 priority 0 ifcost 0 port 0
		ipfilter disabled flags 0x2
	member: en1 flags=3<LEARNING,DISCOVER>
	        ifmaxaddr 0 port 9 priority 0 path cost 0
	member: en2 flags=3<LEARNING,DISCOVER>
	        ifmaxaddr 0 port 10 priority 0 path cost 0
	media: <unknown type>
	status: inactive
utun0: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 2000
	inet6 fe80::9655:8cdf:cde:2188%utun0 prefixlen 64 scopeid 0xc
	nd6 options=201<PERFORMNUD,DAD>
utun1: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 1380
	inet6 fe80::1035:48b6:7222:41ca%utun1 prefixlen 64 scopeid 0xd
	nd6 options=201<PERFORMNUD,DAD>
tap1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
	ether b2:b2:d7:83:2d:ac
	inet 10.0.75.1 netmask 0xfffffffc broadcast 10.0.75.3
	media: autoselect
	status: active
	open (pid 79033)
en4: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
	options=10b<RXCSUM,TXCSUM,VLAN_HWTAGGING,AV>
	ether ac:87:a3:14:a3:a5
	inet 10.219.18.54 netmask 0xfffffc00 broadcast 10.219.19.255
	media: autoselect (1000baseT <full-duplex>)
	status: active

...and we can see the network devices in the container here:

% docker run --rm --net=host --privileged alpine ifconfig
docker0   Link encap:Ethernet  HWaddr 02:42:CC:10:A0:EF
          inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0
          inet6 addr: fe80::42:ccff:fe10:a0ef/64 Scope:Link
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:828 (828.0 B)

eth0      Link encap:Ethernet  HWaddr 02:50:00:00:00:01
          inet addr:192.168.65.3  Bcast:192.168.65.255  Mask:255.255.255.0
          inet6 addr: fe80::50:ff:fe00:1/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:978 errors:0 dropped:0 overruns:0 frame:0
          TX packets:986 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:100738 (98.3 KiB)  TX bytes:80543 (78.6 KiB)

eth1      Link encap:Ethernet  HWaddr 00:A0:98:BC:F5:D7
          inet addr:10.0.75.2  Bcast:10.0.75.3  Mask:255.255.255.252
          inet6 addr: fe80::2a0:98ff:febc:f5d7/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:4616 errors:0 dropped:0 overruns:0 frame:0
          TX packets:15 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1419346 (1.3 MiB)  TX bytes:1138 (1.1 KiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:2 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:140 (140.0 B)  TX bytes:140 (140.0 B)

Hyperkit appears to be running with the tap1 device passed to it:

% ps axj | grep hyperkit
ntownsen         79027 78985 78985      0    1 S      ??    0:02.52 com.docker.vpnkit --ethernet fd:3 --port vpnkit.port.sock --port hyperkit://:62373/./vms/0 --diagnostics fd:4 --pcap fd:5 --vsock-path vms/0/connect --host-names host.docker.internal,docker.for.mac.host.internal,docker.for.mac.localhost --gateway-names gateway.docker.internal,docker.for.mac.gateway.internal,docker.for.mac.http.internal --vm-names docker-for-desktop --listen-backlog 32 --mtu 1500 --allowed-bind-addresses 0.0.0.0 --http /Users/ntownsen/Library/Group Containers/group.com.docker/http_proxy.json --dhcp /Users/ntownsen/Library/Group Containers/group.com.docker/dhcp.json --port-max-idle-time 300 --max-connections 2000 --gateway-ip 192.168.65.1 --host-ip 192.168.65.2 --lowest-ip 192.168.65.3 --highest-ip 192.168.65.254 --log-destination asl --udpv4-forwards 123:127.0.0.1:51032 --gc-compact-interval 1800
ntownsen         79033 79028 78985      0    1 S      ??    3:01.51 /Applications/Docker.app/Contents/Resources/bin/com.docker.hyperkit.original -A -u -F vms/0/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-vpnkit,path=vpnkit.eth.sock,uuid=45abaeb6-e762-4c5c-a923-08cf31152639 -U 0c889a65-63ac-4032-8e26-4d8e4985393a -s 2:0,ahci-hd,/Users/ntownsen/Library/Containers/com.docker.docker/Data/vms/0/Docker.raw -s 2:1,virtio-tap,tap1 -s 3,virtio-sock,guest_cid=3,path=vms/0,guest_forwards=2376;1525 -s 4,ahci-cd,/Applications/Docker.app/Contents/Resources/linuxkit/docker-for-mac.iso -s 5,ahci-cd,vms/0/config.iso -s 6,ahci-cd,/Applications/Docker.app/Contents/Resources/linuxkit/docker.iso -s 7,virtio-rnd -l com1,autopty=vms/0/tty,asl -f bootrom,/Applications/Docker.app/Contents/Resources/uefi/UEFI.fd,,

One thing that seems a little odd to me is that bus 2 has both a "hard disk" on it (ahci-hd) and the tap device that was injected by the shim script (virtio-tap). The script seems to assume that anything on bus 2 is a network device. I'm curious why that is the case.

Interestingly, the laptop cannot connect to the host VM either. I can get a shell with screen:

% screen ~/Library/Containers/com.docker.docker/Data/vms/0/tty

Here's ifconfig from the host VM:

linuxkit-025000000001:~# ifconfig
docker0   Link encap:Ethernet  HWaddr 02:42:CC:10:A0:EF
          inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0
          inet6 addr: fe80::42:ccff:fe10:a0ef/64 Scope:Link
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:828 (828.0 B)

eth0      Link encap:Ethernet  HWaddr 02:50:00:00:00:01
          inet addr:192.168.65.3  Bcast:192.168.65.255  Mask:255.255.255.0
          inet6 addr: fe80::50:ff:fe00:1/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1069 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1077 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:108356 (105.8 KiB)  TX bytes:87745 (85.6 KiB)

eth1      Link encap:Ethernet  HWaddr 00:A0:98:BC:F5:D7
          inet addr:10.0.75.2  Bcast:10.0.75.3  Mask:255.255.255.252
          inet6 addr: fe80::2a0:98ff:febc:f5d7/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:5377 errors:0 dropped:0 overruns:0 frame:0
          TX packets:15 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1651451 (1.5 MiB)  TX bytes:1138 (1.1 KiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:2 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:140 (140.0 B)  TX bytes:140 (140.0 B)

And I run a netcat listener in the host VM:

linuxkit-025000000001:~# nc -v -l -p 54321
listening on [::]:54321 ...

...with the same result when I try to connect from the Mac shell:

% nc -v 10.0.75.2 54321
nc: connectx to 10.0.75.2 port 54321 (tcp) failed: Operation timed out

For completeness, here's the rest of any debug info I can think of giving.

The route table on the host VM:

linuxkit-025000000001:~# netstat -nr
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
0.0.0.0         192.168.65.1    0.0.0.0         UG        0 0          0 eth0
10.0.75.0       0.0.0.0         255.255.255.252 U         0 0          0 eth1
127.0.0.0       0.0.0.0         255.0.0.0       U         0 0          0 lo
172.17.0.0      0.0.0.0         255.255.0.0     U         0 0          0 docker0
192.168.65.0    0.0.0.0         255.255.255.0   U         0 0          0 eth0

And the tuntap devices installed on the Mac:

% ls -l /dev/tap*
crw-rw----  1 root      wheel   36,   0 Jul 15 17:38 /dev/tap0
crw-rw----  1 ntownsen  wheel   36,   1 Jul 18 17:24 /dev/tap1
crw-rw----  1 root      wheel   36,  10 Jul 15 17:38 /dev/tap10
crw-rw----  1 root      wheel   36,  11 Jul 15 17:38 /dev/tap11
crw-rw----  1 root      wheel   36,  12 Jul 15 17:38 /dev/tap12
crw-rw----  1 root      wheel   36,  13 Jul 15 17:38 /dev/tap13
crw-rw----  1 root      wheel   36,  14 Jul 15 17:38 /dev/tap14
crw-rw----  1 root      wheel   36,  15 Jul 15 17:38 /dev/tap15
crw-rw----  1 root      wheel   36,   2 Jul 15 17:38 /dev/tap2
crw-rw----  1 root      wheel   36,   3 Jul 15 17:38 /dev/tap3
crw-rw----  1 root      wheel   36,   4 Jul 15 17:38 /dev/tap4
crw-rw----  1 root      wheel   36,   5 Jul 15 17:38 /dev/tap5
crw-rw----  1 root      wheel   36,   6 Jul 15 17:38 /dev/tap6
crw-rw----  1 root      wheel   36,   7 Jul 15 17:38 /dev/tap7
crw-rw----  1 root      wheel   36,   8 Jul 15 17:38 /dev/tap8
crw-rw----  1 root      wheel   36,   9 Jul 15 17:38 /dev/tap9

% ls -l /dev/tun*
crw-rw----  1 root  wheel   37,   0 Jul 15 17:38 /dev/tun0
crw-rw----  1 root  wheel   37,   1 Jul 15 17:38 /dev/tun1
crw-rw----  1 root  wheel   37,  10 Jul 15 17:38 /dev/tun10
crw-rw----  1 root  wheel   37,  11 Jul 15 17:38 /dev/tun11
crw-rw----  1 root  wheel   37,  12 Jul 15 17:38 /dev/tun12
crw-rw----  1 root  wheel   37,  13 Jul 15 17:38 /dev/tun13
crw-rw----  1 root  wheel   37,  14 Jul 15 17:38 /dev/tun14
crw-rw----  1 root  wheel   37,  15 Jul 15 17:38 /dev/tun15
crw-rw----  1 root  wheel   37,   2 Jul 15 17:38 /dev/tun2
crw-rw----  1 root  wheel   37,   3 Jul 15 17:38 /dev/tun3
crw-rw----  1 root  wheel   37,   4 Jul 15 17:38 /dev/tun4
crw-rw----  1 root  wheel   37,   5 Jul 15 17:38 /dev/tun5
crw-rw----  1 root  wheel   37,   6 Jul 15 17:38 /dev/tun6
crw-rw----  1 root  wheel   37,   7 Jul 15 17:38 /dev/tun7
crw-rw----  1 root  wheel   37,   8 Jul 15 17:38 /dev/tun8
crw-rw----  1 root  wheel   37,   9 Jul 15 17:38 /dev/tun9

Unable to map hosts as expected

Set this up today for K3s and wrote a tutorial here showing how to do it with dynamic IP allocation using MetalLB.

Oddly, I'm not able to map the IP in my subnet with some entries in /etc/hosts.

This works:

192.168.112.151 rancher.local

This does not:

192.168.112.151 rancher.k3d.localhost

I understand the .local Top-Level Domain is reserved by RFC6762 for Multicast DNS. But it's not clear to me why IPs in the subnet created for my tun interface won't allow host name assignments using anything else and still successfully resolve.

IP Forwarding is broken for newest docker version

Docker Version: 17.12.0-ce-mac46

The command

docker run --rm --privileged --pid=host debian nsenter -t 1 -m -u -n -i iptables -A FORWARD -i eth1 -j ACCEPT

would not work anymore and gives the following error:

nsenter: failed to execute iptables: No such file or directory

Errors when restarting docker after installation

> ./sbin/docker_tap_install.sh
Installation complete
Restarting process 'com.docker.hyperkit' [33651]

Failed to restart process 'com.docker.hyperkit'

Then I manually restarted Docker.

> ./sbin/docker_tap_up.sh
Unable to find image 'alpine:latest' locally
docker: Error response from daemon: Get https://registry-1.docker.io/v2/: proxyconnect tcp: dial tcp: lookup docker.for.mac.http.internal on 192.168.65.1:53: read udp 192.168.65.1:39602->192.168.65.1:53: read: connection refused.
See 'docker run --help'.

Ran both scripts, still unable to access the container through IP

Hi @AlmirKadric, first of all, thank you for the work that you've put into this project.
But I am facing some issues, and I'm wondering if you could help me with them.

I am on a MacBook Pro with High Sierra (10.13.3), and after installing tuntap and running both scripts successfully, I am still unable to access my container through the IP that the docker inspect command provides.

Also, here is my docker info:
Docker version 17.12.0-ce, build c97c6d6

I was expecting that after running both of the scripts I would be able to access the container directly using the IP (provided on docker inspect) from a terminal running on the host.

Have I missed something regarding the setup or the use of it?

Also I think it is worth mentioning that I am able to ping the IP 10.0.75.2 after running the scripts.

Let me know if there is any more info that I should provide for you.

Thank you in advance, best

Heucles

"The hyperkit executable file was of an unknown type" when installer is run twice

This is most likely related to #4

I ran into this issue as well trying to run the install script twice. The error "The hyperkit executable file was of an unknown type" seems to be coming from the installer script because it checks the file type of the hyperkit shim using the file command. On my machine the output of the command is

$ file /Applications/Docker.app/Contents/Resources/bin/hyperkit                                                                                                           
/Applications/Docker.app/Contents/Resources/bin/hyperkit: Bourne-Again shell script text executable

and the script checks for https://github.com/AlmirKadric-Published/docker-tuntap-osx/blob/master/sbin/docker_tap_install.sh#L66

if file "$hyperkitPath" | grep -q 'executable, ASCII text$'; then

Create issue template to help users to provide a bit more information for support

Currently when users have trouble and open an issue, they do not provide enough information for support. To help reduce the amount of back and forth required, we should create an issue template as per https://github.com/blog/2111-issue-and-pull-request-templates with the below contents:

Thank you for opening an issue!
Before we can help you please make sure you provide the following information about your issue

Description
------------
<A detailed explanation of what your problem was>

Information for Debugging
---------------------------
 - `ls -l /Applications/Docker.app/Contents/Resources/bin/` to make sure the shim was properly installed and configured
 - `ls -l /dev/tap*` to make sure docker has access to tap interface
 - `ifconfig` to make sure docker connected to the tap interface
 - `netstat -rn` to make sure routes were set to the containers over the tap interface
 - `docker run --rm --privileged --pid=host --net=host alpine ifconfig` to make sure docker created the target interface
 - `docker run --rm --privileged --pid=host debian nsenter -t 1 -m -u -n -i iptables-save` to make sure the docker host allows routing
 - `docker inspect <<Container_id>>` to make sure your container is configured to use all of the above

18.03.1-ce-mac65 - The system does not work anymore

Mac OS: 10.13.5 (17F77)

Docker
Version 18.03.1-ce-mac65
Engine: 18.03.1-ce
Compose: 1.21.1

./docker_tap_install.sh - crashes Docker and it never recovers; you need to quit and start it again; it reports installed afterwards

./docker_tap_install.sh -f
Restarting process 'com.docker.hyperkit' [57982] results in Fatal Error - com.docker.supervisor failed to start - Exit code 1

./docker_tap_up.sh - runs but you can't connect to the machine

The system does not work anymore

18.09.0 build 4d60db4 - The system does not work anymore

Hi, this script is not working;

the docker.hyperkit.tuntap.sh shim always returns the message "Network interface arguments not found"

Info for debug:

ls -l /Applications/Developer/Docker.app/Contents/Resources/bin/

-rwxr-xr-x  1 antwal  admin      1306 21 Dic 12:30 com.docker.hyperkit
-rwxr-xr-x  1 antwal  admin   4885120 29 Nov 10:11 com.docker.hyperkit.original
-rwxr-xr-x  1 antwal  admin  20480528 29 Nov 10:11 com.docker.vpnkit
-rwxr-xr-x  1 antwal  admin  46610704 29 Nov 10:11 docker
-rwxr-xr-x  1 antwal  admin   6573840 29 Nov 10:11 docker-compose
-rwxr-xr-x  1 antwal  admin   1633424 29 Nov 10:11 docker-credential-osxkeychain
-rwxr-xr-x  1 antwal  admin  34028992 29 Nov 10:11 docker-machine
-rwxr-xr-x  1 antwal  admin  54562560 29 Nov 10:11 kubectl
-rwxr-xr-x  1 antwal  admin   9579912 29 Nov 10:11 notary

ls -l /dev/tap*

crw-rw----  1 root    wheel   41,   0 21 Dic 12:28 /dev/tap0
crw-rw----  1 antwal  wheel   41,   1 21 Dic 12:28 /dev/tap1
crw-rw----  1 root    wheel   41,  10 21 Dic 12:28 /dev/tap10
crw-rw----  1 root    wheel   41,  11 21 Dic 12:28 /dev/tap11
crw-rw----  1 root    wheel   41,  12 21 Dic 12:28 /dev/tap12
crw-rw----  1 root    wheel   41,  13 21 Dic 12:28 /dev/tap13
crw-rw----  1 root    wheel   41,  14 21 Dic 12:28 /dev/tap14
crw-rw----  1 root    wheel   41,  15 21 Dic 12:28 /dev/tap15
crw-rw----  1 root    wheel   41,   2 21 Dic 12:28 /dev/tap2
crw-rw----  1 root    wheel   41,   3 21 Dic 12:28 /dev/tap3
crw-rw----  1 root    wheel   41,   4 21 Dic 12:28 /dev/tap4
crw-rw----  1 root    wheel   41,   5 21 Dic 12:28 /dev/tap5
crw-rw----  1 root    wheel   41,   6 21 Dic 12:28 /dev/tap6
crw-rw----  1 root    wheel   41,   7 21 Dic 12:28 /dev/tap7
crw-rw----  1 root    wheel   41,   8 21 Dic 12:28 /dev/tap8
crw-rw----  1 root    wheel   41,   9 21 Dic 12:28 /dev/tap9

ifconfig

lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 16384
	options=1203<RXCSUM,TXCSUM,TXSTATUS,SW_TIMESTAMP>
	inet 127.0.0.1 netmask 0xff000000 
	inet6 ::1 prefixlen 128 
	inet6 fe80::1%lo0 prefixlen 64 scopeid 0x1 
	nd6 options=201<PERFORMNUD,DAD>
gif0: flags=8010<POINTOPOINT,MULTICAST> mtu 1280
stf0: flags=0<> mtu 1280
en0: flags=8963<UP,BROADCAST,SMART,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500
	options=10b<RXCSUM,TXCSUM,VLAN_HWTAGGING,AV>
	ether 0c:4d:e9:c2:8f:b8 
	inet6 fe80::c00:4bc5:a5b:1b31%en0 prefixlen 64 secured scopeid 0x4 
	inet 172.16.88.32 netmask 0xffffff00 broadcast 172.16.88.255
	nd6 options=201<PERFORMNUD,DAD>
	media: autoselect (1000baseT <full-duplex,flow-control,energy-efficient-ethernet>)
	status: active
en1: flags=8823<UP,BROADCAST,SMART,SIMPLEX,MULTICAST> mtu 1500
	ether 88:63:df:b2:61:69 
	nd6 options=201<PERFORMNUD,DAD>
	media: autoselect (<unknown type>)
	status: inactive
en2: flags=963<UP,BROADCAST,SMART,RUNNING,PROMISC,SIMPLEX> mtu 1500
	options=60<TSO4,TSO6>
	ether 32:00:16:2a:20:00 
	media: autoselect <full-duplex>
	status: inactive
en3: flags=963<UP,BROADCAST,SMART,RUNNING,PROMISC,SIMPLEX> mtu 1500
	options=60<TSO4,TSO6>
	ether 32:00:16:2a:20:01 
	media: autoselect <full-duplex>
	status: inactive
p2p0: flags=8802<BROADCAST,SIMPLEX,MULTICAST> mtu 2304
	ether 0a:63:df:b2:61:69 
	media: autoselect
	status: inactive
awdl0: flags=8902<BROADCAST,PROMISC,SIMPLEX,MULTICAST> mtu 1484
	ether 2e:a2:e9:8c:ef:3c 
	nd6 options=201<PERFORMNUD,DAD>
	media: autoselect
	status: inactive
bridge0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
	options=63<RXCSUM,TXCSUM,TSO4,TSO6>
	ether 32:00:16:2a:20:00 
	Configuration:
		id 0:0:0:0:0:0 priority 0 hellotime 0 fwddelay 0
		maxage 0 holdcnt 0 proto stp maxaddr 100 timeout 1200
		root id 0:0:0:0:0:0 priority 0 ifcost 0 port 0
		ipfilter disabled flags 0x2
	member: en2 flags=3<LEARNING,DISCOVER>
	        ifmaxaddr 0 port 6 priority 0 path cost 0
	member: en3 flags=3<LEARNING,DISCOVER>
	        ifmaxaddr 0 port 7 priority 0 path cost 0
	nd6 options=201<PERFORMNUD,DAD>
	media: <unknown type>
	status: inactive
utun0: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 2000
	inet6 fe80::60c9:13be:bedb:3572%utun0 prefixlen 64 scopeid 0xb 
	nd6 options=201<PERFORMNUD,DAD>
bridge1: flags=8822<BROADCAST,SMART,SIMPLEX,MULTICAST> mtu 1500
	options=63<RXCSUM,TXCSUM,TSO4,TSO6>
	ether 0e:4d:e9:2c:40:01 
	Configuration:
		id 0:0:0:0:0:0 priority 0 hellotime 0 fwddelay 0
		maxage 0 holdcnt 0 proto stp maxaddr 100 timeout 1200
		root id 0:0:0:0:0:0 priority 0 ifcost 0 port 0
		ipfilter disabled flags 0x2
	media: <unknown type>
	status: inactive
tap1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
	ether 1a:f0:5e:f1:49:e3 
	inet 10.0.75.1 netmask 0xffffff00 broadcast 10.0.75.255
	media: autoselect
	status: active
	open (pid 77850)

netstat -rn

Routing tables

Internet:
Destination        Gateway            Flags        Refs      Use   Netif Expire
default            172.16.88.1        UGSc           42        8     en0
10.0.75/24         link#12            UC              1        0    tap1
10.0.75.1          1a:f0:5e:f1:49:e3  UHLWIi          1        4     lo0
127                127.0.0.1          UCS             0        0     lo0
127.0.0.1          127.0.0.1          UH              8   983182     lo0
169.254            link#4             UCS             0        0     en0
172.16.88/24       link#4             UCS             4        0     en0
172.16.88.1/32     link#4             UCS             1        0     en0
172.16.88.1        0:c:42:b7:bb:7d    UHLWIir        43        3     en0   1178
172.16.88.32/32    link#4             UCS             0        0     en0
172.16.88.33       link#4             UHLWIi          1     2624     en0
172.16.88.38       2c:54:91:6b:c3:2f  UHLWI           0      452     en0    829
172.16.88.39       0:90:a9:e3:83:62   UHLWIi          1      209     en0   1034
172.16.88.255      ff:ff:ff:ff:ff:ff  UHLWbI          0        7     en0
224.0.0/4          link#4             UmCS            2        0     en0
224.0.0.251        1:0:5e:0:0:fb      UHmLWI          0        0     en0
239.255.255.250    1:0:5e:7f:ff:fa    UHmLWI          0     8045     en0
255.255.255.255/32 link#4             UCS             0        0     en0

Internet6:
Destination                             Gateway                         Flags         Netif Expire
default                                 fe80::%utun0                    UGcI          utun0
::1                                     ::1                             UHL             lo0
fe80::%lo0/64                           fe80::1%lo0                     UcI             lo0
fe80::1%lo0                             link#1                          UHLI            lo0
fe80::%en0/64                           link#4                          UCI             en0
fe80::c00:4bc5:a5b:1b31%en0             c:4d:e9:c2:8f:b8                UHLI            lo0
fe80::%utun0/64                         fe80::60c9:13be:bedb:3572%utun0 UcI           utun0
fe80::60c9:13be:bedb:3572%utun0         link#11                         UHLI            lo0
ff01::%lo0/32                           ::1                             UmCI            lo0
ff01::%en0/32                           link#4                          UmCI            en0
ff01::%en1/32                           link#5                          UmCI            en1
ff01::%utun0/32                         fe80::60c9:13be:bedb:3572%utun0 UmCI          utun0
ff02::%lo0/32                           ::1                             UmCI            lo0
ff02::%en0/32                           link#4                          UmCI            en0
ff02::%en1/32                           link#5                          UmCI            en1
ff02::%utun0/32                         fe80::60c9:13be:bedb:3572%utun0 UmCI          utun0

docker run --rm --privileged --pid=host --net=host alpine ifconfig

br-2b86980ef72b Link encap:Ethernet  HWaddr 02:42:2B:2C:79:C8  
          inet addr:172.20.0.1  Bcast:172.20.255.255  Mask:255.255.0.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

br-3b029b7aaa3f Link encap:Ethernet  HWaddr 02:42:67:92:30:30  
          inet addr:172.21.0.1  Bcast:172.21.255.255  Mask:255.255.0.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

docker0   Link encap:Ethernet  HWaddr 02:42:CB:C4:69:63  
          inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

eth0      Link encap:Ethernet  HWaddr 02:50:00:00:00:01  
          inet addr:10.0.75.2  Bcast:10.255.255.255  Mask:255.0.0.0
          inet6 addr: fe80::50:ff:fe00:1/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1700 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1064 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:2360710 (2.2 MiB)  TX bytes:74439 (72.6 KiB)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:12 errors:0 dropped:0 overruns:0 frame:0
          TX packets:12 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1 
          RX bytes:1092 (1.0 KiB)  TX bytes:1092 (1.0 KiB)

docker run --rm --privileged --pid=host debian nsenter -t 1 -m -u -n -i iptables-save

Unable to find image 'debian:latest' locally
latest: Pulling from library/debian
Digest: sha256:df6ebd5e9c87d0d7381360209f3a05c62981b5c2a3ec94228da4082ba07c4f05
Status: Downloaded newer image for debian:latest
nsenter: failed to execute iptables-save: No such file or directory

docker inspect <<Container_id>>

[
    {
        "Id": "ef7906edd573fa009b56226991db7a620cf513c2cea9aefa7645ba2e863fabaf",
        "Created": "2018-12-21T12:07:42.368132093Z",
        "Path": "nginx",
        "Args": [
            "-g",
            "daemon off;"
        ],
        "State": {
            "Status": "running",
            "Running": true,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 3715,
            "ExitCode": 0,
            "Error": "",
            "StartedAt": "2018-12-21T12:07:42.620622036Z",
            "FinishedAt": "0001-01-01T00:00:00Z"
        },
        "Image": "sha256:63356c558c795f9f4ec4d4197f341ebc31a2f708bacbdc53076a149108ce477b",
        "ResolvConfPath": "/var/lib/docker/containers/ef7906edd573fa009b56226991db7a620cf513c2cea9aefa7645ba2e863fabaf/resolv.conf",
        "HostnamePath": "/var/lib/docker/containers/ef7906edd573fa009b56226991db7a620cf513c2cea9aefa7645ba2e863fabaf/hostname",
        "HostsPath": "/var/lib/docker/containers/ef7906edd573fa009b56226991db7a620cf513c2cea9aefa7645ba2e863fabaf/hosts",
        "LogPath": "/var/lib/docker/containers/ef7906edd573fa009b56226991db7a620cf513c2cea9aefa7645ba2e863fabaf/ef7906edd573fa009b56226991db7a620cf513c2cea9aefa7645ba2e863fabaf-json.log",
        "Name": "/stoic_morse",
        "RestartCount": 0,
        "Driver": "overlay2",
        "Platform": "linux",
        "MountLabel": "",
        "ProcessLabel": "",
        "AppArmorProfile": "",
        "ExecIDs": null,
        "HostConfig": {
            "Binds": null,
            "ContainerIDFile": "",
            "LogConfig": {
                "Type": "json-file",
                "Config": {}
            },
            "NetworkMode": "host",
            "PortBindings": {},
            "RestartPolicy": {
                "Name": "no",
                "MaximumRetryCount": 0
            },
            "AutoRemove": false,
            "VolumeDriver": "",
            "VolumesFrom": null,
            "CapAdd": null,
            "CapDrop": null,
            "Dns": [],
            "DnsOptions": [],
            "DnsSearch": [],
            "ExtraHosts": null,
            "GroupAdd": null,
            "IpcMode": "shareable",
            "Cgroup": "",
            "Links": null,
            "OomScoreAdj": 0,
            "PidMode": "host",
            "Privileged": true,
            "PublishAllPorts": false,
            "ReadonlyRootfs": false,
            "SecurityOpt": [
                "label=disable"
            ],
            "UTSMode": "",
            "UsernsMode": "",
            "ShmSize": 67108864,
            "Runtime": "runc",
            "ConsoleSize": [
                0,
                0
            ],
            "Isolation": "",
            "CpuShares": 0,
            "Memory": 0,
            "NanoCpus": 0,
            "CgroupParent": "",
            "BlkioWeight": 0,
            "BlkioWeightDevice": [],
            "BlkioDeviceReadBps": null,
            "BlkioDeviceWriteBps": null,
            "BlkioDeviceReadIOps": null,
            "BlkioDeviceWriteIOps": null,
            "CpuPeriod": 0,
            "CpuQuota": 0,
            "CpuRealtimePeriod": 0,
            "CpuRealtimeRuntime": 0,
            "CpusetCpus": "",
            "CpusetMems": "",
            "Devices": [],
            "DeviceCgroupRules": null,
            "DiskQuota": 0,
            "KernelMemory": 0,
            "MemoryReservation": 0,
            "MemorySwap": 0,
            "MemorySwappiness": null,
            "OomKillDisable": false,
            "PidsLimit": 0,
            "Ulimits": null,
            "CpuCount": 0,
            "CpuPercent": 0,
            "IOMaximumIOps": 0,
            "IOMaximumBandwidth": 0,
            "MaskedPaths": null,
            "ReadonlyPaths": null
        },
        "GraphDriver": {
            "Data": {
                "LowerDir": "/var/lib/docker/overlay2/496109bc3d3aee0ac602ebcc71bb695b69d01f3148c4342006ffaa73cf76fe70-init/diff:/var/lib/docker/overlay2/407b9b6388a9ef2015fcc6c338ccf45b932acc6fd717b2052faebc0ca9127f13/diff:/var/lib/docker/overlay2/f04e9a3d4a74c5151a6a3348e6f602d3ec3e510ab24a7201e4e0315f2c6bb930/diff:/var/lib/docker/overlay2/9d5a937ae185fb7e1c974c026bd2085f6b99ddfc432d4086082a15b7314302c4/diff:/var/lib/docker/overlay2/0a5fe5083aba00da82fd1aa2eafc4689c8f29b97e3fe96a4477a2d49724cf7af/diff",
                "MergedDir": "/var/lib/docker/overlay2/496109bc3d3aee0ac602ebcc71bb695b69d01f3148c4342006ffaa73cf76fe70/merged",
                "UpperDir": "/var/lib/docker/overlay2/496109bc3d3aee0ac602ebcc71bb695b69d01f3148c4342006ffaa73cf76fe70/diff",
                "WorkDir": "/var/lib/docker/overlay2/496109bc3d3aee0ac602ebcc71bb695b69d01f3148c4342006ffaa73cf76fe70/work"
            },
            "Name": "overlay2"
        },
        "Mounts": [],
        "Config": {
            "Hostname": "linuxkit-025000000001",
            "Domainname": "",
            "User": "",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "ExposedPorts": {
                "80/tcp": {}
            },
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                "NGINX_VERSION=1.15.7"
            ],
            "Cmd": [
                "nginx",
                "-g",
                "daemon off;"
            ],
            "ArgsEscaped": true,
            "Image": "nginx:alpine",
            "Volumes": null,
            "WorkingDir": "",
            "Entrypoint": null,
            "OnBuild": null,
            "Labels": {
                "maintainer": "NGINX Docker Maintainers <[email protected]>"
            },
            "StopSignal": "SIGTERM"
        },
        "NetworkSettings": {
            "Bridge": "",
            "SandboxID": "8890058d65a336a854cc7d4706cd2d223dcb3b9622ff55409dbf805bb2b69c17",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": {},
            "SandboxKey": "/var/run/docker/netns/default",
            "SecondaryIPAddresses": null,
            "SecondaryIPv6Addresses": null,
            "EndpointID": "",
            "Gateway": "",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "",
            "IPPrefixLen": 0,
            "IPv6Gateway": "",
            "MacAddress": "",
            "Networks": {
                "host": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": null,
                    "NetworkID": "87157e5ce097e54abbf39c6aa34999fd2c4e8c961a8c61915786f2f1f681f0a5",
                    "EndpointID": "e6f762194bb65eba2d88993e25dc56e64c984afeaf66fd1ed9718ee667354817",
                    "Gateway": "",
                    "IPAddress": "",
                    "IPPrefixLen": 0,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "",
                    "DriverOpts": null
                }
            }
        }
    }
]

Thank you

Working with Docker on Mac is tricky but your project saved my day,
Really appreciate it .. Thank you

Incompatible architecture: Binary is for x86_64, but needed arch arm64e

Hi Almir Kadric,

I am using a MacBook Pro M2. Unfortunately, when I install tuntap_20150118.pkg, it throws the following error. Please help.

Error Domain=KMErrorDomain Code=71 "Incompatible architecture: Binary is for x86_64, but needed arch arm64e
Incompatible architecture: Binary is for x86_64, but needed arch arm64e
Unsupported Error: one or more extensions are unsupported to load 	Kext net.sf.tuntaposx.tap v1.0 in executable kext bundle net.sf.tuntaposx.tap at /Library/Extensions/tap.kext
Unsupported Error: one or more extensions are unsupported to load 	Kext net.sf.tuntaposx.tap v1.0 in executable kext bundle net.sf.tuntaposx.tap at /Library/Extensions/tap.kext" UserInfo={NSLocalizedDescription=Incompatible architecture: Binary is for x86_64, but needed arch arm64e

Moreover, when I run the following commands, I get the following:

masif@RDCG9VPF2D docker-for-mac-host-bridge % brew tap caskroom/cask
Error: caskroom/cask was moved. Tap homebrew/cask instead.
masif@RDCG9VPF2D docker-for-mac-host-bridge % brew tap homebrew/cask
masif@RDCG9VPF2D docker-for-mac-host-bridge % brew cask install tuntap
Error: `brew cask` is no longer a `brew` command. Use `brew <command> --cask` instead.
masif@RDCG9VPF2D docker-for-mac-host-bridge % brew install tuntap --cask
==> Downloading https://formulae.brew.sh/api/cask.jws.json
Warning: Cask 'tuntap' is unavailable: No Cask with this name exists.
==> Downloading https://formulae.brew.sh/api/formula.jws.json
==> Searching for similarly named casks...
Error: No casks found for tuntap.

Install script does not kill Docker gracefully

When the install script is run, Docker is force-killed by finding its PID. This displays a modal dialog saying Docker quit unexpectedly. Also, it doesn't restart. An alternative is to gracefully quit Docker and restart it ourselves like so:

docker stop $(docker ps -aq) && test -z "$(docker ps -q 2>/dev/null)" && osascript -e 'quit app "Docker"'

open --background -a Docker

I'd like to submit a PR with these changes to the install script if it makes sense.
