
socketplane's Introduction

SocketPlane


Developers don't want to care about VLANs, VXLANs, tunnels or TEPs. People responsible for managing the infrastructure expect it to be performant and reliable. SocketPlane provides a networking abstraction at the socket layer to solve these network problems in a manageable fashion.

SocketPlane Technology Preview

This early release is just a peek at some of the things we are working on and releasing to the community as open source. While we work upstream with the Docker community to bring in native support for network drivers/plugins/extensions, we received a number of requests to try the proposed SocketPlane solution with existing Docker versions. Hence we came up with a temporary wrapper command, socketplane, that is used as a front-end to the docker CLI commands. This enables us to send hooks to the SocketPlane daemon.

In this release we support the following features:

  • Open vSwitch integration
  • ZeroConf multi-host networking for Docker
  • Elastic growth of a Docker/SocketPlane cluster
  • Support for multiple networks
  • Distributed IP Address Management (IPAM)

Overlay networking establishes tunnels between host endpoints, which in our case are Open vSwitch instances. The advantage of this approach is that the user doesn't need to worry about subnets, VLANs or any other Layer 2 usage constraints. This is just one way to deploy container networking that we will be presenting. Open vSwitch matters here for its performance and its de facto standard APIs for advanced networking.
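For illustration only, here is roughly the kind of VXLAN tunnel port that gets programmed on the OVS bridge; the bridge name (docker0-ovs) and peer address are taken from example output later on this page, and SocketPlane normally sets this up for you automatically:

    # Create a VXLAN tunnel port on the OVS bridge towards a peer host
    # (shown only to illustrate what the overlay looks like; SocketPlane
    # programs these ports itself).
    sudo ovs-vsctl add-port docker0-ovs vxlan-10.254.101.22 -- \
        set interface vxlan-10.254.101.22 type=vxlan options:remote_ip=10.254.101.22

    # Inspect the bridge and its tunnel ports
    sudo ovs-vsctl show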

Our 'ZeroConf' technology is based on multicast DNS (mDNS). This allows us to discover other SocketPlane cluster members on the same segment and start peering with them, so the cluster can grow elastically on demand: simply deploy another host and mDNS handles the rest. Since multicast availability is hit and miss in most networks, this feature is aimed at making it easy to deploy Docker and SocketPlane and start getting familiar with the marriage of advanced, yet sane, networking and the exciting Docker use cases. We will be working with the community on other clustering technologies, such as Swarm, that can be used in conjunction to provide more provisioning-oriented clustering solutions.
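If discovery does not seem to be working, a quick sanity check is to confirm that mDNS traffic (UDP port 5353) is actually visible on the cluster-facing interface; using eth1 here is an assumption based on the Vagrant demo setup:

    # Watch for multicast DNS traffic on the interface the agents bind to
    sudo tcpdump -ni eth1 udp port 5353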

Once we've discovered our neighbors, we're able to join an embedded Consul instance, giving us access to an eventually consistent key/value store for network state.
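The embedded Consul agent exposes its standard HTTP API on localhost:8500 (visible in the agent logs further down this page), so the stored state can be inspected directly; the key paths below are the ones that appear in those logs:

    # Network definition and IPAM state for the default network
    curl -s http://localhost:8500/v1/kv/network/default
    curl -s http://localhost:8500/v1/kv/ipam/10.1.0.0/16

    # Cluster members known to the embedded Consul agent
    curl -s http://localhost:8500/v1/catalog/nodes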

We support multiple networks, allowing you to divide your containers into subnets to ease the burden of enforcing firewall policy in the network.

Finally, we've implemented a distributed IP address management (IPAM) solution that enables non-conflicting address assignment throughout a cluster.

Note: As we previously mentioned, this is not an ideal approach, but it allows people to start kicking the tyres as soon as possible. All of the functionality in socketplane.sh will move into our Golang core over time.

See the Getting Started demo here: https://www.youtube.com/watch?v=ukITRl58ntg

See the SocketPlane with a LAMP Stack demo here: https://www.youtube.com/watch?v=5uzUSk3NjD0

See the SocketPlane with Powerstrip demo here: https://www.youtube.com/watch?v=Icl0L8tQybs

Installation

Vagrant

A default Vagrantfile is provided to set up a demo system. By default, three Ubuntu 14.04 VM hosts will be created, each with SocketPlane installed.

You can change the number of systems created as follows:

export SOCKETPLANE_NODES=1
#or
export SOCKETPLANE_NODES=10

To start the demo systems:

git clone https://github.com/socketplane/socketplane
cd socketplane
vagrant up

The VMs are named socketplane-{n}, where n is a number from 1 to SOCKETPLANE_NODES (3 by default).

Once the VMs are started, you can ssh in as follows:

vagrant ssh socketplane-1
vagrant ssh socketplane-2
vagrant ssh socketplane-3

You can start Docker containers in each of the VMs, and they will all be on a default network.

sudo socketplane run -itd ubuntu

You can also see the status of containers on a specific host VM by typing:

sudo socketplane info

If you want to create multiple networks you can do the following:

sudo socketplane network create web 10.2.0.0/16

sudo socketplane run -n web -itd ubuntu

You can list all the created networks with the following command:

sudo socketplane network list

For more options, use the help command:

sudo socketplane help

Non-Vagrant install / deploy

If you are not a Vagrant user, please follow these instructions to install and deploy SocketPlane. While Golang, Docker and OVS can run on many operating systems, we currently run tests and QA against Ubuntu and Fedora.

Note: If you are using VirtualBox, please take care of the following before proceeding with the installation:

  • Clustering over the NAT adapter will not work. The VirtualBox VMs must use either a Host-Only Adapter, an Internal Network, or a Bridged Adapter for clustering to work.

  • The VMs/hosts must have unique hostnames. Make sure that /etc/hosts in each VM is updated with its unique hostname (a minimal sketch follows below).
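A minimal sketch of the unique-hostname step, assuming an Ubuntu or Fedora VM and using node-2 purely as a placeholder name:

    # Give this VM a unique hostname (node-2 is only a placeholder)
    echo "node-2" | sudo tee /etc/hostname
    sudo hostname node-2

    # Make sure the new name resolves locally
    echo "127.0.1.1 node-2" | sudo tee -a /etc/hosts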

First node:

    curl -sSL http://get.socketplane.io/ | sudo BOOTSTRAP=true sh

Subsequent nodes:

    curl -sSL http://get.socketplane.io/ | sudo sh

or, using wget:

First node:

    wget -qO- http://get.socketplane.io/ | sudo BOOTSTRAP=true sh

Subsequent nodes:

    wget -qO- http://get.socketplane.io/ | sudo sh

Warning: BOOTSTRAP=true should be used on the first node only. Without it, the first node won't bootstrap the cluster; if it is used on subsequent nodes, bad things will happen.

Ideally, this should also start the SocketPlane agent container. You can check its status with the sudo docker ps | grep socketplane command. If the agent isn't already running, you can install it using the following command:

sudo socketplane install

Next start an image, for example a bash shell:

sudo socketplane run -i -t ubuntu /bin/bash

You can also see the status of containers on a specific host VM by typing:

sudo socketplane info

If you want to create multiple networks you can do the following:

sudo socketplane network create web 10.2.0.0/16

sudo socketplane run -n web -itd ubuntu

You can list all the created networks with the following command:

sudo socketplane network list

For more options, use the help command:

sudo socketplane help

Useful Agent Commands

The SocketPlane agent runs in its own container, and you might find the following commands useful:

  1. SocketPlane agent troubleshooting/debug logs:

     sudo socketplane agent logs

  2. SocketPlane agent stop:

     sudo socketplane agent stop

  3. SocketPlane agent start:

     sudo socketplane agent start
    

Hacking

See HACKING.md

Contact us

For bugs, please file an issue. For any assistance, questions, or just to say hi, please visit us on IRC: #socketplane at irc.freenode.net.

Stay tuned for some exciting features coming soon from the SocketPlane team.

License

Copyright 2014 SocketPlane, Inc.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

Code is released under the Apache 2.0 license.

socketplane's People

Contributors

aanm, botchagalupe, dave-tucker, dwvisser, inthecloud247, mavenugo, mrjana, nerdalert, wizard-cxy


socketplane's Issues

consul cluster creation

Where can we see any logs regarding the multicast DNS discovery of nodes joining a cluster, as well as the Consul lookups?

I can start a master node just fine:

export BOOTSTRAP=true
socketplane install unattended

But if I try to add another node that I start manually, with no bootstrap set, then it does not seem to join the cluster.

Security group rules description

Hi, it would be nice to have a description of the security group rules that need to be created for things to work on EC2 (and the like).

Right now I got ingress:

UDP 8472
UDP 5353
TCP 6640
UDP 53 (consul DNS ?)
TCP 8400 (consul ?)
TCP 8500 (consul ?)

But I think I am missing some as my cluster bootstrapping does not seem to work.
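For reference, a rough sketch of opening these ports between members of a single EC2 security group; the group ID is a placeholder, and the port list is a best guess combining the ports above with the Consul ports shown in the agent logs further down this page (8300-8302 serf/raft, 8400 RPC, 8500 HTTP, 8600 DNS; serf uses both TCP and UDP):

    SG=sg-0123456789abcdef0   # placeholder security group ID
    for p in 8472 5353 8301 8302 8600; do
      aws ec2 authorize-security-group-ingress \
        --group-id "$SG" --protocol udp --port "$p" --source-group "$SG"
    done
    for p in 6640 8300 8301 8302 8400 8500 8600; do
      aws ec2 authorize-security-group-ingress \
        --group-id "$SG" --protocol tcp --port "$p" --source-group "$SG"
    done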

Inconsistent network connectivity between socketplane/docker instances created in multiple networks.

Some docker instances can not ping other docker instances in other networks. Here is the scenario:

On host1

sudo socketplane network create web 10.2.0.0/16
sudo socketplane run -n web -itd ubuntu:14.04
sudo socketplane run -itd ubuntu:14.04

On host2

sudo socketplane network create database 10.3.0.0/16
sudo socketplane run -n database -itd ubuntu:14.04
sudo socketplane run -itd ubuntu:14.04

The following is a list of ping connectivity between the four docker instances:

Host1

10.1.0.2   (default)
10.2.0.2   (web)

From 10.1.0.2   (default)

ping 8.8.8.8 works
ping 10.2.0.2 works
ping 10.1.0.3 works
ping 10.3.0.2 hangs

From 10.2.0.2   (web)

ping 8.8.8.8 works
ping 10.1.0.2 works
ping 10.1.0.3 works
ping 10.3.0.2 hangs

Host 2

10.1.0.3   (default)
10.3.0.2   (database)

From 10.1.0.3   (default)

ping 8.8.8.8 works
ping 10.1.0.2 works
ping 10.2.0.2 works
ping 10.3.0.2 hangs

From 10.3.0.2   (database)

ping 8.8.8.8 works
ping 10.1.0.2 hangs
ping 10.2.0.2 hangs
ping 10.1.0.3 hangs

Additional data:

host1

vagrant@socketplane-1:~$ sudo socketplane network list
[
    {
        "gateway": "10.3.0.1",
        "id": "database",
        "subnet": "10.3.0.0/16",
        "vlan": 3
    },
    {
        "gateway": "10.1.0.1",
        "id": "default",
        "subnet": "10.1.0.0/16",
        "vlan": 1
    },
    {
        "gateway": "10.2.0.1",
        "id": "web",
        "subnet": "10.2.0.0/16",
        "vlan": 2
    }
]

vagrant@socketplane-1:~$ sudo socketplane info
{
    "5abc4359847557e31588ebb7f09d710c8552b819036a0bc214dd0c1495c26cb5": {
        "connection_details": {
            "gateway": "10.1.0.1",
            "ip": "10.1.0.2",
            "mac": "02:42:0a:01:00:02",
            "name": "ovsed2732b",
            "subnet": "/16"
        },
        "container_id": "5abc4359847557e31588ebb7f09d710c8552b819036a0bc214dd0c1495c26cb5",
        "container_name": "/kickass_lumiere",
        "container_pid": "3151",
        "network": "default",
        "ovs_port_id": "ovsed2732b"
    },
    "e0d9aecffb214866e7cd6d49b6cbc2b76b25fd2182aad664b7decf33fec9a3c3": {
        "connection_details": {
            "gateway": "10.2.0.1",
            "ip": "10.2.0.2",
            "mac": "02:42:0a:02:00:02",
            "name": "ovsa8be396",
            "subnet": "/16"
        },
        "container_id": "e0d9aecffb214866e7cd6d49b6cbc2b76b25fd2182aad664b7decf33fec9a3c3",
        "container_name": "/stoic_torvalds",
        "container_pid": "3024",
        "network": "web",
        "ovs_port_id": "ovsa8be396"
    }
}

vagrant@socketplane-1:~$ ifconfig -a
default   Link encap:Ethernet  HWaddr e2:8c:77:b5:63:46  
          inet addr:10.1.0.1  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::e08c:77ff:feb5:6346/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:98 errors:0 dropped:0 overruns:0 frame:0
          TX packets:73 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:8364 (8.3 KB)  TX bytes:5066 (5.0 KB)

docker0   Link encap:Ethernet  HWaddr 56:84:7a:fe:97:99  
          inet addr:172.17.42.1  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::5484:7aff:fefe:9799/64 Scope:Link
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:6615 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6739 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:268103 (268.1 KB)  TX bytes:10201512 (10.2 MB)

docker0-ovs Link encap:Ethernet  HWaddr 3a:a1:a1:49:d1:47  
          inet6 addr: fe80::3c8e:4eff:fe3b:b927/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:109 errors:0 dropped:0 overruns:0 frame:0
          TX packets:12 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:8298 (8.2 KB)  TX bytes:936 (936.0 B)

eth0      Link encap:Ethernet  HWaddr 08:00:27:c7:e7:dd  
          inet addr:10.0.2.15  Bcast:10.0.2.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fec7:e7dd/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:113418 errors:0 dropped:0 overruns:0 frame:0
          TX packets:56280 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:86609660 (86.6 MB)  TX bytes:3529430 (3.5 MB)

eth1      Link encap:Ethernet  HWaddr 08:00:27:8a:2f:9f  
          inet addr:10.254.101.21  Bcast:10.254.101.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:5013 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6244 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:579357 (579.3 KB)  TX bytes:739424 (739.4 KB)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:6103 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6103 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:2485361 (2.4 MB)  TX bytes:2485361 (2.4 MB)

ovs-system Link encap:Ethernet  HWaddr 1a:ba:66:66:73:af  
          BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

web       Link encap:Ethernet  HWaddr 6a:9b:0b:29:9a:64  
          inet addr:10.2.0.1  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::689b:bff:fe29:9a64/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:55 errors:0 dropped:0 overruns:0 frame:0
          TX packets:44 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:4602 (4.6 KB)  TX bytes:3560 (3.5 KB)

On host 2

vagrant@socketplane-2:~$ sudo socketplane network list
[
    {
        "gateway": "10.3.0.1",
        "id": "database",
        "subnet": "10.3.0.0/16",
        "vlan": 3
    },
    {
        "gateway": "10.1.0.1",
        "id": "default",
        "subnet": "10.1.0.0/16",
        "vlan": 1
    },
    {
        "gateway": "10.2.0.1",
        "id": "web",
        "subnet": "10.2.0.0/16",
        "vlan": 2
    }
]

vagrant@socketplane-2:~$ sudo socketplane info
{
    "496e72d9f5156bb4d65f9e4cd1de83cc61416b5324a9d4f20db97a7a9d312b1c": {
        "connection_details": {
            "gateway": "10.1.0.1",
            "ip": "10.1.0.3",
            "mac": "02:42:0a:01:00:03",
            "name": "ovs4f478d1",
            "subnet": "/16"
        },
        "container_id": "496e72d9f5156bb4d65f9e4cd1de83cc61416b5324a9d4f20db97a7a9d312b1c",
        "container_name": "/hungry_archimedes",
        "container_pid": "3135",
        "network": "default",
        "ovs_port_id": "ovs4f478d1"
    },
    "d45a08f222dd01b6a977d08b3243e8ce53c42a3641b7da08ac002f54ec42c1df": {
        "connection_details": {
            "gateway": "10.3.0.1",
            "ip": "10.3.0.2",
            "mac": "02:42:0a:03:00:02",
            "name": "ovs0762397",
            "subnet": "/16"
        },
        "container_id": "d45a08f222dd01b6a977d08b3243e8ce53c42a3641b7da08ac002f54ec42c1df",
        "container_name": "/tender_brattain",
        "container_pid": "3008",
        "network": "database",
        "ovs_port_id": "ovs0762397"
    }
}

vagrant@socketplane-2:~$ ifconfig -a
database  Link encap:Ethernet  HWaddr 32:95:4a:81:f4:8c  
          inet addr:10.3.0.1  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::3095:4aff:fe81:f48c/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:57 errors:0 dropped:0 overruns:0 frame:0
          TX packets:22 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:5022 (5.0 KB)  TX bytes:1628 (1.6 KB)

docker0   Link encap:Ethernet  HWaddr 56:84:7a:fe:97:99  
          inet addr:172.17.42.1  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::5484:7aff:fefe:9799/64 Scope:Link
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:6454 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6542 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:261639 (261.6 KB)  TX bytes:10190848 (10.1 MB)

docker0-ovs Link encap:Ethernet  HWaddr 82:a3:d5:0b:39:42  
          inet6 addr: fe80::2c3e:5fff:fe3c:da49/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:80 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:6632 (6.6 KB)  TX bytes:648 (648.0 B)

eth0      Link encap:Ethernet  HWaddr 08:00:27:c7:e7:dd  
          inet addr:10.0.2.15  Bcast:10.0.2.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fec7:e7dd/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:112932 errors:0 dropped:0 overruns:0 frame:0
          TX packets:55918 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:86580519 (86.5 MB)  TX bytes:3505140 (3.5 MB)

eth1      Link encap:Ethernet  HWaddr 08:00:27:c1:80:48  
          inet addr:10.254.101.22  Bcast:10.254.101.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:6474 errors:0 dropped:0 overruns:0 frame:0
          TX packets:5238 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:741360 (741.3 KB)  TX bytes:619751 (619.7 KB)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:7891 errors:0 dropped:0 overruns:0 frame:0
          TX packets:7891 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:3346478 (3.3 MB)  TX bytes:3346478 (3.3 MB)

ovs-system Link encap:Ethernet  HWaddr b2:65:a8:94:90:e3  
          BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

User must be able to use install.sh multiple times.

If the user aborts install.sh during the first attempt and then tries to run it again, install.sh fails with the following error:

madhu@madhu-mint:~$ sudo sh install.sh

    + id -u
    + [ 0 != 0 ]
    + command_exists socketplane
    + hash socketplane
    + echo Warning: "socketplane" command appears to already exist.
    Warning: "socketplane" command appears to already exist.
    + echo Please ensure that you do not already have socketplane installed.
    Please ensure that you do not already have socketplane installed.
    + exit 1

We should be able to run install.sh multiple times without any restrictions.
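For reference, a minimal sketch of what an idempotent guard could look like in install.sh, reusing the command_exists helper from the trace above (an illustration, not the project's actual fix):

    # Warn and continue instead of aborting, so install.sh can be re-run safely.
    if command_exists socketplane; then
        echo 'Warning: "socketplane" appears to be installed already; continuing anyway.'
    fi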

Issues with socketplane start <container>

The first time I tried to start a stopped container:
madhu@Ubuntu-vm socketplane (master) $ sudo scripts/socketplane.sh start 8baa8c237ccd
Command line is not complete. Try option "help"

Trying it immediately a second time:

madhu@Ubuntu-vm socketplane (master) $ sudo scripts/socketplane.sh start 8baa8c237ccd
ln: failed to create symbolic link '/var/run/netns/net': File exists

subnet mismatch between default network and Container / ovs-internal-port ip-addresses

When creating the default network, the agent should get the subnet address from the datastore, and only create one when it is not found.

container ip-address :
/ # ifconfig
ovsa2ddae0 Link encap:Ethernet HWaddr 02:42:0A:2A:00:03
inet addr:10.42.0.3 Bcast:0.0.0.0 Mask:255.255.0.0
inet6 addr: fe80::42:aff:fe2a:3/64 Scope:Link
UP BROADCAST RUNNING MTU:1514 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:32 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:1788 (1.7 KiB)

Host :

madhu@madhu-mint ~ $ ifconfig
default Link encap:Ethernet HWaddr 0a:b9:cf:68:19:81
inet addr:10.1.0.1 Bcast:0.0.0.0 Mask:255.255.0.0
inet6 addr: fe80::8b9:cfff:fe68:1981/64 Scope:Link
UP BROADCAST RUNNING MTU:1500 Metric:1
RX packets:32 errors:0 dropped:0 overruns:0 frame:0
TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1788 (1.7 KB) TX bytes:756 (756.0 B)

docker0 Link encap:Ethernet HWaddr 56:84:7a:fe:97:99
inet addr:172.17.42.1 Bcast:0.0.0.0 Mask:255.255.0.0
inet6 addr: fe80::5484:7aff:fefe:9799/64 Scope:Link
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:16050 errors:0 dropped:0 overruns:0 frame:0
TX packets:18137 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:655640 (655.6 KB) TX bytes:40351773 (40.3 MB)

docker0-ovs Link encap:Ethernet HWaddr 56:2a:a8:2b:98:44
inet6 addr: fe80::a8cd:4dff:fe18:3a78/64 Scope:Link
UP BROADCAST RUNNING MTU:1500 Metric:1
RX packets:42 errors:0 dropped:0 overruns:0 frame:0
TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:2544 (2.5 KB) TX bytes:756 (756.0 B)

socketplane start <container> isn't re-attaching the ovs port

vagrant@socketplane-1:$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4f57b0613f45 docker-ut:latest /bin/sh 13 minutes ago Up 13 minutes loving_hawking
69e299a3089c socketplane/socketplane:latest socketplane --bootst 27 minutes ago Up 27 minutes tender_poincare
vagrant@socketplane-1:
$ sudo docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4f57b0613f45 docker-ut:latest /bin/sh 13 minutes ago Up 13 minutes loving_hawking
7f43e14385c1 docker-ut:latest /bin/sh 15 minutes ago Exited (0) 14 minutes ago hungry_perlman
89404d8724c6 docker-ut:latest /bin/sh 17 minutes ago Exited (0) 15 minutes ago stupefied_meitner
69e299a3089c socketplane/socketplane:latest socketplane --bootst 27 minutes ago Up 27 minutes tender_poincare
vagrant@socketplane-1:$ sudo socketplane start 89404d8724c6
89404d8724c6
vagrant@socketplane-1:
$ sudo docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4f57b0613f45 docker-ut:latest /bin/sh 14 minutes ago Up 14 minutes loving_hawking
7f43e14385c1 docker-ut:latest /bin/sh 16 minutes ago Exited (0) 14 minutes ago hungry_perlman
89404d8724c6 docker-ut:latest /bin/sh 18 minutes ago Up 8 seconds stupefied_meitner
69e299a3089c socketplane/socketplane:latest socketplane --bootst 27 minutes ago Up 27 minutes tender_poincare
vagrant@socketplane-1:~$ sudo docker attach 89404d8724c6

/ # ifconfig
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

/ # ifconfig

Fix the regular expression check for -d in socketplane run command

socketplane run without the -d option gets stuck if the image name contains a "d" character.

This is due to an incorrect regular expression used to check for the -d option on the command line:
$(echo "$@" | grep -e '-.?d.?')

Example
sudo socketplane run -it docker-ut /bin/sh
fails to attach to the terminal because the image name (docker-ut) contains "d"
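One possible tightening of the check, as a sketch: only treat -d as present when it appears inside a whitespace-delimited option token (such as -d, -itd or -dit), not anywhere in the argument string. DETACHED below is a hypothetical variable used only for illustration:

    # Matches -d, -itd, -dit, etc. as standalone option tokens, but not a "d"
    # inside an image name such as docker-ut.
    if echo " $* " | grep -qE '[[:space:]]-[A-Za-z]*d[A-Za-z]*[[:space:]]'; then
        DETACHED=true
    fi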

Update README with some common deployment gotchas

A couple of gotchas that I faced during SocketPlane installation in a non-Vagrant environment:

  1. If the User is installing socketplane in a virtual-box environment, clustering over NAT adapter will not work. Hence, the VMs in the cluster must be in the same network using either Host-Only Adapter (or) Internal Network (or) Bridged adapter.
  2. The hosts where the socketplane is installed must have unique hostnames. Make sure that /etc/hosts have the unique hostname updated.
  3. Updated README with the troubleshooting tips (sudo socketplane agent logs / stop / start).

'wget -qO- http://get.socketplane.io/ | sudo sh' doesn't pull an image

The one-liner install silently dies before getting to the "first node y/n?" prompt. The OS is a fresh Ubuntu 14.10 snapshot. If you go back and run "sudo socketplane install" after it fails, it will get past the snippet below.

It looks like it is dying here, but I didn't see where the one-liner magic occurs, so I'm opening a ticket:
log_step "Starting the SocketPlane container"

        if [ -n "$(docker ps | grep socketplane/socketplane | awk '{ print $1 }')" ]; then
            log_fatal "A SocketPlane container is already running"
            return 1
        fi

        flags="--iface=auto"

        if [ "$1" = "unattended" ]; then
            [ -z $BOOTSTRAP ] && log_fatal "BOOTSTRAP not set" && exit 1

            if [ "$BOOTSTRAP" = "true" ] ; then
                flags="$flags --bootstrap=true"
            fi
        else

wget -qO- http://get.socketplane.io/ | sudo sh
-Output: https://gist.github.com/391bcbf650dc4fa3476e

Running "sudo socketplane install" after the failed 'wget xxx | sudo sh'
-Output: https://gist.github.com/0c3d7556f275f363a059

fail yum package silently on Ubuntu OS

Proposing to fail the install of policycoreutils-python silently with a || true. We could add more OS checks, but that would grow the shell script; to keep it trimmed down, I am patching it to be silent if the policycoreutils-python package isn't found. Can add OS checks if preferred. Tested the incoming patch on both Ubuntu and Fedora.
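In other words, something along these lines (a sketch of the proposed change):

    # Ignore the failure on distros where yum or the package is unavailable.
    yum install -y policycoreutils-python || true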

Clustering peer discovery by configuration / provisioning

Socketplane's Tech Preview release provides Clustering support using -

  • Peer discovery through zero-config
  • Distributed Key-Value store using Consul

Zero-config peer discovery is achieved through mDNS, which requires multicast support in the underlying network. There are many scenarios in which multicast is not enabled or supported, hence the need for configuration-based discovery of clustering peers.
There are various options that could provide such a configuration mechanism. This enhancement was opened to provide a simpler configuration option, as requested in #53.

Proposal

  • Simple configuration such as
    • socketplane cluster join {ip / host-name}
    • socketplane cluster leave
    • socketplane cluster status
  • The above configuration will result in joining the Consul cluster (equivalent to the consul join command; roughly sketched below).
  • Using the gossip mechanism, all other cluster peers learn of the new host.
  • As we are planning on using Consul for this enhancement, ecc can be used to listen for host join/leave events in the cluster.
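For illustration, the commands sketched above would map roughly onto the standard Consul CLI operations, assuming they are run against the embedded agent:

    consul join 10.254.101.22   # socketplane cluster join 10.254.101.22
    consul members              # socketplane cluster status
    consul leave                # socketplane cluster leave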

exec: "socketplane": executable file not found in $PATH

Getting this on both Ubuntu and Fedora, definitely a blocker. Sounds like a GOPATH issue, but see the logs below.

cmd: sudo socketplane install

Ubuntu:

 Error response from daemon: Cannot start container a3e9644978c5: exec: "socketplane": executable file not found in $PATH
    FATA[0000] Error: failed to start one or more containers
    brent@ub136:~/socketplane/scripts$ sudo -E  echo $PATH
    /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/bin:/usr/local/go/bin:/home/brent/go//bin

Fedora:

    Is this the first node in the cluster? (y/n)y
    2014/12/27 03:49:23 Error response from daemon:
     Cannot start container a94810fd5b281397e1dfdeddc4abc6b4197ccbf73d8e38aaa242e75827103d66:
      exec: "socketplane": executable file not found in $PATH

IPAM allocates the same address!

Spun up a new environment with vagrant up, grabbed a coffee and proceeded to create containers...
vagrant ssh -c socketplane-1 sudo socketplane run -itd busybox

To my surprise, I couldn't ping 8.8.8.8. Looks like the IPAM allocated the same address to the default gateway AND the first container. Possibly a sync issue when the cluster moves from map storage to ECC.

# In host   

15: default: <BROADCAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UNKNOWN group default
    link/ether ee:c5:c8:e8:46:8d brd ff:ff:ff:ff:ff:ff
    inet 10.1.0.1/16 scope global default
       valid_lft forever preferred_lft forever
    inet6 fe80::ecc5:c8ff:fee8:468d/64 scope link
       valid_lft forever preferred_lft forever  

# In container  

ovs01f2d80 Link encap:Ethernet  HWaddr 02:42:0A:01:00:01
          inet addr:10.1.0.1  Bcast:0.0.0.0  Mask:255.255.0.0
          UP BROADCAST RUNNING  MTU:1440  Metric:1
          RX packets:44 errors:0 dropped:0 overruns:0 frame:0
          TX packets:85 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:3744 (3.6 KiB)  TX bytes:5646 (5.5 KiB   

# Logs  

vagrant@socketplane-1:~$ sudo socketplane agent logs
INFO[0007] Identifying interface to bind ... Use --iface option for static binding
INFO[0015] Identifying interface to bind ... Use --iface option for static binding
INFO[0023] Identifying interface to bind ... Use --iface option for static binding
INFO[0031] Identifying interface to bind ... Use --iface option for static binding
INFO[0039] Identifying interface to bind ... Use --iface option for static binding
INFO[0047] Identifying interface to bind ... Use --iface option for static binding
INFO[0055] Identifying interface to bind ... Use --iface option for static binding
INFO[0063] Identifying interface to bind ... Use --iface option for static binding
INFO[0071] Identifying interface to bind ... Use --iface option for static binding
INFO[0079] Identifying interface to bind ... Use --iface option for static binding
INFO[0087] Identifying interface to bind ... Use --iface option for static binding
INFO[0095] Identifying interface to bind ... Use --iface option for static binding
INFO[0103] Identifying interface to bind ... Use --iface option for static binding
INFO[0111] Identifying interface to bind ... Use --iface option for static binding
INFO[0119] Identifying interface to bind ... Use --iface option for static binding
INFO[0119] Binding to eth1
==> WARNING: Bootstrap mode enabled! Do not enable unless necessary
==> WARNING: It is highly recommended to set GOMAXPROCS higher than 1
==> Starting Consul agent...
==> Starting Consul agent RPC...
==> Consul agent running!
         Node name: 'socketplane-1'
        Datacenter: 'dc1'
            Server: true (bootstrap: true)
       Client Addr: 127.0.0.1 (HTTP: 8500, DNS: 8600, RPC: 8400)
      Cluster Addr: 10.254.101.21 (LAN: 8301, WAN: 8302)
    Gossip encrypt: false, RPC-TLS: false, TLS-Incoming: false  

==> Log data will now stream in as it occurs:   

    2015/01/09 22:55:16 [INFO] serf: EventMemberJoin: socketplane-1 10.254.101.21
    2015/01/09 22:55:16 [INFO] raft: Node at 10.254.101.21:8300 [Follower] entering Follower state
    2015/01/09 22:55:16 [INFO] consul: adding server socketplane-1 (Addr: 10.254.101.21:8300) (DC: dc1)
    2015/01/09 22:55:16 [INFO] serf: EventMemberJoin: socketplane-1.dc1 10.254.101.21
    2015/01/09 22:55:16 [INFO] consul: adding server socketplane-1.dc1 (Addr: 10.254.101.21:8300) (DC: dc1)
    2015/01/09 22:55:16 [ERR] agent: failed to sync remote state: No cluster leader
    2015/01/09 22:55:17 [WARN] raft: Heartbeat timeout reached, starting election
    2015/01/09 22:55:17 [INFO] raft: Node at 10.254.101.21:8300 [Candidate] entering Candidate state
    2015/01/09 22:55:17 [INFO] raft: Election won. Tally: 1
    2015/01/09 22:55:17 [INFO] raft: Node at 10.254.101.21:8300 [Leader] entering Leader state
    2015/01/09 22:55:17 [INFO] consul: cluster leadership acquired
    2015/01/09 22:55:17 [INFO] consul: New leader elected: socketplane-1
    2015/01/09 22:55:17 [INFO] raft: Disabling EnableSingleNode (bootstrap)
    2015/01/09 22:55:17 [INFO] consul: member 'socketplane-1' joined, marking health alive
    2015/01/09 22:55:19 [INFO] agent: Synced service 'consul'
2015/01/09 22:55:21 Status of Get 404 Not Found 404 for http://localhost:8500/v1/kv/network/default
2015/01/09 22:55:21 Updating KV pair for http://localhost:8500/v1/kv/network/default?cas=0 default {"id":"default","subnet":"10.1.0.0/16","gateway":"10.1.0.1","vlan":1} 0
INFO[0124] New Node joined the cluster : 10.254.101.21
2015/01/09 22:55:21 Status of Get 404 Not Found 404 for http://localhost:8500/v1/kv/vlan/vlan
2015/01/09 22:55:21 Updating KV pair for http://localhost:8500/v1/kv/vlan/vlan?cas=0 vlan  0
2015/01/09 22:55:21 Status of Get 404 Not Found 404 for http://localhost:8500/v1/kv/ipam/10.1.0.0/16
2015/01/09 22:55:21 Updating KV pair for http://localhost:8500/v1/kv/ipam/10.1.0.0/16?cas=0 10.1.0.0/16  0
    2015/01/09 22:55:21 [INFO] serf: EventMemberJoin: socketplane-2 10.254.101.22
    2015/01/09 22:55:21 [INFO] consul: member 'socketplane-2' joined, marking health alive
INFO[0124] New Node joined the cluster : 10.254.101.22
2015/01/09 22:55:21 New Bonjour Member : socketplane-2, _docker._cluster, local, 10.254.101.22
INFO[0124] New Member Added : 10.254.101.22
    2015/01/09 22:55:21 [INFO] agent.rpc: Accepted client: 127.0.0.1:46825
    2015/01/09 22:55:21 [INFO] agent: (LAN) joining: [10.254.101.22]
    2015/01/09 22:55:21 [INFO] agent: (LAN) joined: 1 Err: <nil>
Successfully joined cluster by contacting 1 nodes.
2015/01/09 22:57:20 New Bonjour Member : socketplane-3, _docker._cluster, local, 10.254.101.23
INFO[0243] New Member Added : 10.254.101.23
    2015/01/09 22:57:20 [INFO] agent.rpc: Accepted client: 127.0.0.1:46831
    2015/01/09 22:57:20 [INFO] agent: (LAN) joining: [10.254.101.23]
    2015/01/09 22:57:20 [INFO] serf: EventMemberJoin: socketplane-3 10.254.101.23
    2015/01/09 22:57:20 [INFO] agent: (LAN) joined: 1 Err: <nil>
    2015/01/09 22:57:20 [INFO] consul: member 'socketplane-3' joined, marking health alive
Successfully joined cluster by contacting 1 nodes.
INFO[0243] New Node joined the cluster : 10.254.101.23
2015/01/09 23:11:43 Status of Get 200 OK 200 for http://localhost:8500/v1/kv/network/default
2015/01/09 23:11:44 Status of Get 200 OK 200 for http://localhost:8500/v1/kv/ipam/10.1.0.0/16
2015/01/09 23:11:44 Status of Get 200 OK 200 for http://localhost:8500/v1/kv/ipam/10.1.0.0/16
2015/01/09 23:11:44 Updating KV pair for http://localhost:8500/v1/kv/ipam/10.1.0.0/16?cas=10 10.1.0.0/16  10

Creating networks with same name as a deleted network uses wrong (old) CIDR

This may be related to the cleanup around #110 and possibly #72

If a network is created with the name foo2 and a given CIDR, then deleted, and a network is then created with the name foo2 again using a different CIDR, the old CIDR will be used.

vagrant@socketplane-1:~$ sudo socketplane network create foo2 10.3.0.0/16
{
    "gateway": "10.3.0.1",
    "id": "foo2",
    "subnet": "10.3.0.0/16",
    "vlan": 2
}

vagrant@socketplane-1:~$ sudo socketplane network delete foo2
vagrant@socketplane-1:~$ sudo socketplane network create foo2 10.5.0.0/16
{
    "gateway": "10.3.0.1",
    "id": "foo2",
    "subnet": "10.3.0.0/16",
    "vlan": 2
}
vagrant@socketplane-1:~$ sudo socketplane network list
[
    {
        "gateway": "10.1.0.1",
        "id": "default",
        "subnet": "10.1.0.0/16",
        "vlan": 1
    },
    {
        "gateway": "10.3.0.1",
        "id": "foo2",
        "subnet": "10.3.0.0/16",
        "vlan": 2
    }
]

Also confirmed that the foo2 bridge keeps the old CIDR and that containers run in that network receive address assignments from the old CIDR.

socketplane network create command needs to check for an existing network before it does the create.

The socketplane network create command doesn't check for an existing network. If a network is created with an existing CIDR, the command will not fail; however, all instances created in that network will not work.

Here is an example:

vagrant@socketplane-1:~$ sudo socketplane network list 
[
    {
        "gateway": "10.1.0.1",
        "id": "default",
        "subnet": "10.1.0.0/16",
        "vlan": 1
    },
    {
        "gateway": "10.2.0.1",
        "id": "web",
        "subnet": "10.2.0.0/16",
        "vlan": 2
    }
]

If I create another network as follows:

sudo socketplane network create db 10.2.0.0/16

You do not get an error, and you can also create containers in that network; however, those instances will not have network connectivity.

Create Local Network Gateway in all the hosts that are part of the cluster

When a network is created using the socketplane network create command, the network gateway is created only on the host where the command is executed. This enhancement request was opened to make sure the gateway is created on all hosts that are part of the cluster, to make it consistent and functionally more appropriate.

socketplane.sh must install curl if not present

In the VM launched using Vagrant, when we launch a container using "socketplane run" we see:
/usr/bin/socketplane: 404: /usr/bin/socketplane: curl: not found

Due to the missing curl, the script is unable to add connections via the socketplane agent (a sketch of a possible guard follows the output below).

vagrant@socketplane-1:~$ sudo docker attach 4ee

/ # ifconfig
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

vagrant@socketplane-2:~$ sudo ovs-vsctl show
469cecfd-4e8e-464d-884c-2a2c2ae9437c
Manager "ptcp:6640"
is_connected: true
Bridge "docker0-ovs"
Port "vxlan-10.254.101.21"
Interface "vxlan-10.254.101.21"
type: vxlan
options: {remote_ip="10.254.101.21"}
Port "vxlan-10.254.101.23"
Interface "vxlan-10.254.101.23"
type: vxlan
options: {remote_ip="10.254.101.23"}
Port "docker0-ovs"
Interface "docker0-ovs"
type: internal
ovs_version: "2.0.2"
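A minimal sketch of the guard socketplane.sh could add, assuming apt-get on Ubuntu and yum on Fedora, the two platforms tested elsewhere on this page:

    # Make sure curl exists before the script tries to talk to the agent API.
    if ! command -v curl >/dev/null 2>&1; then
        if command -v apt-get >/dev/null 2>&1; then
            sudo apt-get install -y curl
        elif command -v yum >/dev/null 2>&1; then
            sudo yum install -y curl
        fi
    fi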

Downloading socketplane to windows thru git causes "^M" issue

When we do "vagrant up" in windows, get the following error:
==> socketplane-2: /tmp/vagrant-shell: /usr/bin/socketplane: /bin/sh^M: bad interpreter: No such file or directory

Workaround:
Run "dos2unix" on the scripts directory to remove "^M"

One way to fix this would be by adding a .gitattributes like this:

    *     text eol=lf
    *.sh  text eol=lf
    *.rb  text eol=lf
    bin/* text eol=lf

socketplane run options

In the socketplane script, the run command should take --network as option:

run [--network foo] <docker_run_args>

But the function below only takes -n it seems

If no SP container exists, create one

In the case where no SocketPlane container is present (not even a stopped one), when you try to start one, create one by default.

Old:

    $ sudo socketplane agent start
    -----> Starting SocketPlane Agent
    No Socketplane agent containers are started.

New:

    $ sudo socketplane agent start
    -----> Starting SocketPlane Agent
    No Socketplane agent containers are started.
    -----> Starting the SocketPlane container
    Is this the first node in the cluster? (y/n)y

*socketplane install* in install.sh doesn't take effect if Docker was not previously installed on the system

If Docker was not previously installed on the system, the socketplane.sh script installs the Docker service and then executes socketplane install, but that step fails to take effect, most likely due to a timing issue between the Docker installation and the immediately following "socketplane install".
Hence the user has to explicitly run "sudo socketplane install" after running the install.sh script.

Messed up configuration after reboot.

With that very specific (NOT!) subject line, a detailed synopsis:

  • A socketplane VM left idle for a few days suffers a memory leak, leading to a restart of a daemon.
  • Socketplane VM details: a single VM (socketplane0) in VirtualBox. "eth0" is NAT'ed via DHCP to the Internet; "eth1" is a host-only, manually configured interface:

root@u1404-socketplane0:~# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.10.10.1 0.0.0.0 UG 0 0 0 eth0
10.1.0.0 0.0.0.0 255.255.0.0 U 0 0 0 default
10.10.10.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
172.16.1.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0

## After reboot, I had to restart the agent manually, but the config is wrong.

root@u1404-socketplane0:~# socketplane info
No JSON object could be decoded

root@u1404-socketplane0:~# socketplane agent logs
SocketPlane container is not running

root@u1404-socketplane0:~# socketplane network list
No JSON object could be decoded

root@u1404-socketplane0:~# socketplane agent start
-----> Starting SocketPlane Agent
Starting container 64df707dfc9a
All Socketplane agent containers are started.

root@u1404-socketplane0:~# socketplane network list
[
{
"gateway": "10.1.0.1",
"id": "default",
"subnet": "10.1.0.0/16",
"vlan": 1
},
{
"gateway": "169.254.254.1",
"id": "frontend",
"subnet": "169.254.254.0/24",
"vlan": 2
}
]

## After starting a container on "frontend" network:

root@u1404-socketplane0:# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6da75607a980 ubuntu:latest "/bin/bash" 11 minutes ago Up 11 minutes mad_hodgkin
64df707dfc9a socketplane/socketplane:latest "socketplane --iface 9 days ago Up About an hour thirsty_fermi
root@u1404-socketplane0:
#
root@u1404-socketplane0:#
root@u1404-socketplane0:
# socketplane info
{
"6da75607a980b60e8a8701830bb412e3ac9a33e8f7eaec3940a81917cce39037": {
"connection_details": {
"gateway": "169.254.254.1",
"ip": "169.254.254.4",
"mac": "02:42:a9:fe:fe:04",
"name": "ovs0ba8057",
"subnet": "/24"
},
"container_id": "6da75607a980b60e8a8701830bb412e3ac9a33e8f7eaec3940a81917cce39037",
"container_name": "/mad_hodgkin",
"container_pid": "2113",
"network": "frontend",
"ovs_port_id": "ovs0ba8057"
},
"7e25a5e647850dd75d0bcede1928f96b0a28d57615041876386133a90339cab4": {
"connection_details": {
"gateway": "10.1.0.1",
"ip": "10.1.0.2",
"mac": "02:42:0a:01:00:02",
"name": "ovs5b69b67",
"subnet": "/16"
},
"container_id": "7e25a5e647850dd75d0bcede1928f96b0a28d57615041876386133a90339cab4",
"container_name": "/jovial_jones",
"container_pid": "19049",
"network": "default",
"ovs_port_id": "ovs5b69b67"
},
"a65436b3c940791b30e18afdc558e28899664d84707e24becddc571a300adc4c": {
"connection_details": {
"gateway": "169.254.254.1",
"ip": "169.254.254.3",
"mac": "02:42:a9:fe:fe:03",
"name": "ovs30e56d1",
"subnet": "/24"
},
"container_id": "a65436b3c940791b30e18afdc558e28899664d84707e24becddc571a300adc4c",
"container_name": "/sleepy_pare",
"container_pid": "1689",
"network": "frontend",
"ovs_port_id": "ovs30e56d1"
},
"db73d460123aa76d4dcb7cb9132dbebf73d831dbcb22a1516a7d29d78ab98b6d": {
"connection_details": {
"gateway": "169.254.254.1",
"ip": "169.254.254.2",
"mac": "02:42:a9:fe:fe:02",
"name": "ovsb675e71",
"subnet": "/24"
},
"container_id": "db73d460123aa76d4dcb7cb9132dbebf73d831dbcb22a1516a7d29d78ab98b6d",
"container_name": "/hungry_goldstine",
"container_pid": "19283",
"network": "frontend",
"ovs_port_id": "ovsb675e71"
}
}

## However, the network configuration in the host is messed up -- e.g. no IP addr on "frontend" network, hence the container started on that network doesn't have any connectivity:

root@u1404-socketplane0:# ovs-vsctl show
d6f3e161-d121-4156-964d-303967cd752c
Manager "ptcp:6640"
is_connected: true
Bridge "docker0-ovs"
Port "docker0-ovs"
Interface "docker0-ovs"
type: internal
Port default
tag: 1
Interface default
type: internal
Port "ovs0ba8057"
tag: 2
Interface "ovs0ba8057"
type: internal
Port "ovsb675e71"
tag: 2
Interface "ovsb675e71"
type: internal
Port frontend
tag: 2
Interface frontend
type: internal
Port "ovs5b69b67"
tag: 1
Interface "ovs5b69b67"
type: internal
Port "ovs30e56d1"
tag: 2
Interface "ovs30e56d1"
type: internal
ovs_version: "2.0.2"
root@u1404-socketplane0:
# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:48:94:82 brd ff:ff:ff:ff:ff:ff
inet 10.10.10.27/24 brd 10.10.10.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::a00:27ff:fe48:9482/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:f3:47:07 brd ff:ff:ff:ff:ff:ff
inet 172.16.1.30/24 brd 172.16.1.255 scope global eth1
valid_lft forever preferred_lft forever
inet6 fe80::a00:27ff:fef3:4707/64 scope link
valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 56:84:7a:fe:97:99 brd ff:ff:ff:ff:ff:ff
inet 172.17.42.1/16 scope global docker0
valid_lft forever preferred_lft forever
5: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default
link/ether 6e:a9:cb:1c:07:5f brd ff:ff:ff:ff:ff:ff
6: docker0-ovs: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
link/ether 02:5c:a0:bf:c3:41 brd ff:ff:ff:ff:ff:ff
inet6 fe80::9441:31ff:fe72:c225/64 scope link
valid_lft forever preferred_lft forever
7: ovsb675e71: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
link/ether 0e:ae:31:01:b5:61 brd ff:ff:ff:ff:ff:ff
inet6 fe80::cae:31ff:fe01:b561/64 scope link
valid_lft forever preferred_lft forever
8: frontend: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
link/ether 1a:b8:b1:d8:f2:90 brd ff:ff:ff:ff:ff:ff
inet6 fe80::18b8:b1ff:fed8:f290/64 scope link
valid_lft forever preferred_lft forever
9: ovs5b69b67: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
link/ether 36:1b:12:9b:26:17 brd ff:ff:ff:ff:ff:ff
inet6 fe80::341b:12ff:fe9b:2617/64 scope link
valid_lft forever preferred_lft forever
10: default: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
link/ether ba:e8:2f:c6:c1:34 brd ff:ff:ff:ff:ff:ff
inet 10.1.0.1/16 scope global default
valid_lft forever preferred_lft forever
inet6 fe80::b8e8:2fff:fec6:c134/64 scope link
valid_lft forever preferred_lft forever
11: ovs30e56d1: mtu 1514 qdisc noop state DOWN group default
link/ether 02:42:a9:fe:fe:03 brd ff:ff:ff:ff:ff:ff

Install/Deploy SocketPlane using Docker Compose

We can simplify SocketPlane by deploying 3 containers:

The only requirements we have of the host system are then:

  • Docker
  • Docker Compose
  • iptables (which is also a Docker dependency)
  • Linux Kernel > 3.7 with Open vSwitch and VXLAN kernel modules loaded (modprobe openvswitch)

SocketPlane can be driven through the REST API and we can write a command line client in Golang that could be installed by deb/rpm etc... (this would replace the shell script)

An "apt-get update" hangs "[Waiting for headers]" in an a socketplane created 2nd instance

This appears to be an intermittent issue. It works sometimes but fails most times.

Here is the scenario:

On a VM (e.g., socketplane-1) I create a new network.

sudo socketplane network create web 10.2.0.0/16

Then launch a new docker instance on that same host.

sudo socketplane run -n web -itd ubuntu:14.04

Then I attach to the docker instance.

sudo socketplane attach

Then run apt-get:

sudo apt-get update

This works fine.

Then I jump onto a second host (e.g., socketplane-2) and do the following.

Then I launch a new docker instance on the second host.

sudo socketplane run -n web -itd ubuntu:14.04

Then I attach to the docker instance.

sudo socketplane attach

Then run apt-get:

sudo apt-get update

This is the hanging response...

root@e1ec329bdc1b:/# sudo apt-get update
Ign http://archive.ubuntu.com trusty InRelease
Ign http://archive.ubuntu.com trusty-updates InRelease
Ign http://archive.ubuntu.com trusty-security InRelease
Hit http://archive.ubuntu.com trusty Release.gpg
Get:1 http://archive.ubuntu.com trusty-updates Release.gpg [933 B]
Get:2 http://archive.ubuntu.com trusty-security Release.gpg [933 B]
Hit http://archive.ubuntu.com trusty Release
Ign http://archive.ubuntu.com trusty-updates Release                           
Ign http://archive.ubuntu.com trusty-security Release
100% [Waiting for headers]  

I could not recreate this issue running just docker w/o socketplane installed.

Network create and delete Unit-Tests run only in sudo

Since the network create and delete operations depend on netlink APIs, the tests need to be run with sudo privileges. #109 tried to address it, but there are a few discrepancies, such as ecc.Put() being called in NetworkCreate only after all the netlink APIs succeed, which seems reasonable to me. In that case, GetNetwork will fail as expected.

Vagrant up results in an error

The Vagrantfile uses a socketplane-specific box URL on S3. Please switch to a native Ubuntu box or make the box URL configurable, e.g. via ENV variables.

Can't run socketplane on a single host (the revenge of --iface=auto)

A while back we added some logic to interface detection so it would wait until it saw peers before choosing an interface...

In a single host environment it yields the following logs

INFO[0007] Identifying interface to bind ... Use --iface option for static binding
INFO[0014] Identifying interface to bind ... Use --iface option for static binding
INFO[0021] Identifying interface to bind ... Use --iface option for static binding
INFO[0028] Identifying interface to bind ... Use --iface option for static binding
INFO[0035] Identifying interface to bind ... Use --iface option for static binding

No containers can be spawned and the socketplane commands give errors

Add an explicit --mdns-iface option as install param

mDNS discovery automatically identifies the interface to bind to, but there are scenarios in which the user might want to override it. The backend agent already supports the --iface option, but the socketplane.sh script doesn't expose it and in fact overrides --iface with its own value.

This issue is opened to fix the existing bug in socketplane.sh and to change the option name from --iface to --mdns-iface.

"--iface=auto" is hard-code in socketplane.sh

I am trying to use socketplane in my OpenStack VMs, and I find that if I use "--iface=auto", the consul cluster doesn't work well. After I change it to "--iface=eth0" in socketplane.sh, it works in my environment (my OpenStack VMs have only one interface, eth0). So I think it would be better if "--iface" were configurable in the socketplane command.

Socketplane info command recommendations

Here are a couple of recommendations for the socketplane info command:

  1. The output needs to display a container status field.
  2. The socketplane info command should have an output format option. This is very useful for different forms of reporting and automation.

For example, similar to the chef knife command:

-F FORMAT, --format FORMAT
Output format: summary (default), text, json, yaml

sudo socketplane info -F text
