hashicorp / consul-replicate

Consul cross-DC KV replication daemon.

Home Page: https://www.hashicorp.com/

License: Mozilla Public License 2.0

consul-replicate's Introduction

Consul Replicate

This project provides a convenient way to replicate values from one Consul datacenter to another using the consul-replicate daemon.

The daemon consul-replicate integrates with Consul to perform cross-data-center K/V replication. This makes it possible to manage application configuration from a central data center, with low-latency asynchronous replication to other data centers, thus avoiding the need for smart clients that would need to write to all data centers and queue writes to handle network failures.


The documentation in this README corresponds to the master branch of Consul Replicate. It may contain unreleased features or different APIs than the most recently released version.

Please see the Git tag that corresponds to your version of Consul Replicate for the proper documentation.


Installation

  1. Download a pre-compiled, released version from the Consul Replicate releases page.

  2. Extract the binary using unzip or tar.

  3. Move the binary into $PATH.

To compile from source, please see the instructions in the contributing section.

Usage

For the full list of options:

$ consul-replicate -h

Command Line Flags

The CLI interface supports all options in the configuration file and vice versa. Here are a few examples of common integrations on the command line.

Replicate all keys under "global" from the nyc1 data center:

$ consul-replicate \
  -prefix "global@nyc1"

Replicate all keys under "global" from the nyc1 data center, renaming the key to "default" in the replicated stores:

$ consul-replicate \
  -prefix "global@nyc1:default"

Replicate all keys under "global" from the nyc1 data center, but do not poll or watch for changes (just do it one time):

$ consul-replicate \
  -prefix "global@nyc1" \
  -once

Replicate all keys under "global" from the nyc1 data center, but exclude the global/private prefix:

$ consul-replicate \
  -prefix "global@nyc1" \
  -exclude "global/private" \
  -once

Configuration File Format

Configuration files are written in the HashiCorp Configuration Language. By proxy, this means the configuration is also JSON compatible.

# This denotes the start of the configuration section for Consul. All values
# contained in this section pertain to Consul.
consul {
  # This block specifies the basic authentication information to pass with the
  # request. For more information on authentication, please see the Consul
  # documentation.
  auth {
    enabled  = true
    username = "test"
    password = "test"
  }

  # This is the address of the Consul agent. By default, this is
  # 127.0.0.1:8500, which is the default bind and port for a local Consul
  # agent. It is not recommended that you communicate directly with a Consul
  # server, and instead communicate with the local Consul agent. There are many
  # reasons for this, most importantly the Consul agent is able to multiplex
  # connections to the Consul server and reduce the number of open HTTP
  # connections. Additionally, it provides a "well-known" IP address to which
  # clients can connect.
  address = "127.0.0.1:8500"

  # This is the ACL token to use when connecting to Consul. If you did not
  # enable ACLs on your Consul cluster, you do not need to set this option.
  #
  # This option is also available via the environment variable CONSUL_TOKEN.
  token = "abcd1234"

  # This controls the retry behavior when an error is returned from Consul.
  # Consul Replicate is highly fault tolerant, meaning it does not exit in the
  # face of failure. Instead, it uses exponential back-off and retry functions
  # to wait for the cluster to become available, as is customary in distributed
  # systems.
  retry {
    # This enables retries. Retries are enabled by default, so this is
    # redundant.
    enabled = true

    # This specifies the number of attempts to make before giving up. Each
    # attempt adds the exponential backoff sleep time. Setting this to
    # zero will implement an unlimited number of retries.
    attempts = 12

    # This is the base amount of time to sleep between retry attempts. Each
    # retry sleeps for an exponent of 2 longer than this base. For 5 retries,
    # the sleep times would be: 250ms, 500ms, 1s, 2s, then 4s.
    backoff = "250ms"

    # This is the maximum amount of time to sleep between retry attempts.
    # When max_backoff is set to zero, there is no upper limit to the
    # exponential sleep between retry attempts.
    # If max_backoff is set to 10s and backoff is set to 1s, sleep times
    # would be: 1s, 2s, 4s, 8s, 10s, 10s, ...
    max_backoff = "1m"
  }

  # This block configures the SSL options for connecting to the Consul server.
  ssl {
    # This enables SSL. Specifying any option for SSL will also enable it.
    enabled = true

    # This enables SSL peer verification. The default value is "true", which
    # will check the global CA chain to make sure the given certificates are
    # valid. If you are using a self-signed certificate that you have not added
    # to the CA chain, you may want to disable SSL verification. However, please
    # understand this is a potential security vulnerability.
    verify = false

    # This is the path to the certificate to use to authenticate. If just a
    # certificate is provided, it is assumed to contain both the certificate and
    # the key to convert to an X509 certificate. If both the certificate and
    # key are specified, Consul Replicate will automatically combine them into an
    # X509 certificate for you.
    cert = "/path/to/client/cert"
    key  = "/path/to/client/key"

    # This is the path to the certificate authority to use as a CA. This is
    # useful for self-signed certificates or for organizations using their own
    # internal certificate authority.
    ca_cert = "/path/to/ca"

    # This is the path to a directory of PEM-encoded CA cert files. If both
    # `ca_cert` and `ca_path` are specified, `ca_cert` is preferred.
    ca_path = "path/to/certs/"

    # This sets the SNI server name to use for validation.
    server_name = "my-server.com"
  }
}

# This is the list of keys to exclude if they are found in the prefix. This can
# be specified multiple times to exclude multiple keys from replication.
exclude {
  source = "my-key"
}

# This is the signal to listen for to trigger a graceful stop. The default value
# is shown below. Setting this value to the empty string will cause Consul
# Replicate to not listen for any graceful stop signals.
kill_signal = "SIGINT"

# This is the log level. If you find a bug in Consul Replicate, please enable
# debug logs so we can help identify the issue. This is also available as a
# command line flag.
log_level = "warn"

# This is the maximum interval to allow "stale" data. By default, only the
# Consul leader will respond to queries; any requests to a follower will
# forward to the leader. In large clusters with many requests, this is not as
# scalable, so this option allows any follower to respond to a query, so long
# as the last-replicated data is within these bounds. Higher values result in
# less cluster load, but are more likely to have outdated data.
max_stale = "10m"

# This is the path to store a PID file which will contain the process ID of the
# Consul Replicate process. This is useful if you plan to send custom signals
# to the process.
pid_file = "/path/to/pid"

# This is the prefix and datacenter to replicate and the resulting destination.
prefix {
  source      = "global"
  datacenter  = "nyc1"
  destination = "default"
}

# This is the signal to listen for to trigger a reload event. The default value
# is shown below. Setting this value to the empty string will cause Consul
# Replicate to not listen for any reload signals.
reload_signal = "SIGHUP"

# This is the path in Consul to store replication and leader status.
status_dir = "service/consul-replicate/statuses"

# This block defines the configuration for connecting to a syslog server for
# logging.
syslog {
  # This enables syslog logging. Specifying any other option also enables
  # syslog logging.
  enabled = true

  # This is the name of the syslog facility to log to.
  facility = "LOCAL5"
}

# These are the quiescence timers; they define the minimum and maximum amount
# of time to wait for the cluster to reach a consistent state before performing
# a replication. This is useful to enable in systems that have a lot of
# flapping, because it will reduce the number of times a replication occurs.
wait {
  min = "5s"
  max = "10s"
}

Note that not all fields are required. If you are not logging to syslog, you do not need to specify a syslog configuration.

For additional security, tokens may also be read from the environment using the CONSUL_TOKEN environment variable. It is highly recommended that you do not put your tokens in plain-text in a configuration file.
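
For example, here is a minimal sketch (assuming a Bourne-style shell, and reusing the example token and prefix from earlier in this README) that supplies the token via the environment for a single run:

$ CONSUL_TOKEN="abcd1234" consul-replicate -prefix "global@nyc1"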

Instruct Consul Replicate to use a configuration file with the -config flag:

$ consul-replicate -config "/my/config.hcl"

This argument may be specified multiple times to load multiple configuration files. The right-most configuration takes the highest precedence. If the path to a directory is provided (as opposed to the path to a file), all of the files in the given directory will be merged in lexical order, recursively. Please note that symbolic links are not followed.
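
For illustration (the file and directory paths below are hypothetical), the flag can be repeated or pointed at a directory:

$ consul-replicate \
  -config "/etc/consul-replicate/base.hcl" \
  -config "/etc/consul-replicate/override.hcl"

$ consul-replicate -config "/etc/consul-replicate/conf.d/"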

Flags specified on the CLI take precedence over values in a config file!

Debugging

Consul Replicate can print verbose debugging output. To set the log level for Consul Replicate, use the -log-level flag:

$ consul-replicate -log-level info ...
<timestamp> [INFO] (cli) received redis from Watcher
<timestamp> [INFO] (cli) invoking Runner
# ...

You can also specify the level as trace:

$ consul-replicate -log-level trace ...
<timestamp> [DEBUG] (cli) creating Runner
<timestamp> [DEBUG] (cli) creating Consul API client
<timestamp> [DEBUG] (cli) creating Watcher
<timestamp> [DEBUG] (cli) looping for data
<timestamp> [DEBUG] (watcher) starting watch
<timestamp> [DEBUG] (watcher) all pollers have started, waiting for finish
<timestamp> [DEBUG] (redis) starting poll
<timestamp> [DEBUG] (service redis) querying Consul with &{...}
<timestamp> [DEBUG] (service redis) Consul returned 2 services
<timestamp> [DEBUG] (redis) writing data to channel
<timestamp> [DEBUG] (redis) starting poll
<timestamp> [INFO] (cli) received redis from Watcher
<timestamp> [INFO] (cli) invoking Runner
<timestamp> [DEBUG] (service redis) querying Consul with &{...}
# ...

FAQ

Q: Can I use this for master-master replication?
A: Master-master replication is not possible. A leader would never be elected.

Contributing

To build and install Consul Replicate locally, you will need to install the Docker engine:

Clone the repository:

$ git clone https://github.com/hashicorp/consul-replicate.git

To compile the consul-replicate binary for your local machine:

$ make dev

This will compile the consul-replicate binary into bin/consul-replicate as well as your $GOPATH and run the test suite.

If you want to compile a specific binary, set XC_OS and XC_ARCH or run the following to generate all binaries:

$ make bin
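
For example, assuming the Makefile reads these variables from the environment as described above, a single Linux amd64 binary could be requested with:

$ XC_OS="linux" XC_ARCH="amd64" make bin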

If you just want to run the tests:

$ make test

Or to run a specific test in the suite:

$ go test ./... -run SomeTestFunction_name

consul-replicate's People

Contributors

armon, calebalbers, colekowalski, cthain, daveadams, dependabot[bot], hashicorp-tsccr[bot], jeanneryan, jrasell, kahn, kyhavlov, mdeggies, roncodingenthusiast, ryanuber, sethvargo, sitano, slackpad, zitudu

consul-replicate's Issues

Containerization

Oy.

The docs state that the leader doesn't get elected automatically, and that you need to acquire a lock first:

consul lock locks/replicate consul-replicate -prefix ...

The problem I'm having is that I'm trying to run the Consul server and consul-replicate in two different containers, so the binary of one isn't available in the other. I think I'm missing something here, because right now I have two datacenters with three servers each, on which a consul-server and a consul-replicate container are running, and the consul-replicate containers are basically doing nothing. Neither lock nor service directories pop up in the K/V store. So before digging deeper into this:

Am I supposed to put consul-replicate into the consul-server container in order to get this to work, and have the replicators elect a leader after firing the first lock-acquiring command by hand?

Thanks in advance.

<3, @doertedev.

Empty folders in source DC being synced as just values in destination DC

Hello,
I have couple of folders in one dc and i tried to sync with a destination dc,
I see the folders that are not empty are being synced as folders and any folders that are empty are being synced as just keys. when I add something to the empty folder and sync, then it syncs as folder.But the previously created value still remains.

I am using consul-replicate v0.2.0

Thanks

Multi-prefix on JSON configuration file gets overwritten

Multi-prefix via configuration file does not seem to work, as keys are unique in JSON.

The following HCL:

consul = "127.0.0.1:8500"
token = "abcd1234"
retry = "10s"
max_stale = "10m"

auth {
  enabled = true
  username = "test"
  password = "test"
}

ssl {
  enabled = true
  verify = false
}

syslog {
  enabled = true
  facility = "LOCAL5"
}

prefix {
  source = "global@nyc1"
  destination = "local/nyc1"
}

prefix {
  source = "global@nyc2"
  destination = "local/nyc2"
}

Converts to this JSON:

{
    "auth": {
        "enabled": true,
        "password": "test",
        "username": "test"
    },
    "consul": "127.0.0.1:8500",
    "max_stale": "10m",
    "prefix": {
        "destination": "local/nyc2",
        "source": "global@nyc2"
    },
    "retry": "10s",
    "ssl": {
        "enabled": true,
        "verify": false
    },
    "syslog": {
        "enabled": true,
        "facility": "LOCAL5"
    },
    "token": "abcd1234"
}

In the example above, prefix configuration for global@nyc1 gets removed/overwritten since a duplicate key exists. A suggestion is to have prefix also accept an array of objects (i.e. hashes) if more than one prefix is specified. The same should apply for destination, so that it can be a string or an array of strings.

{
    "auth": {
        "enabled": true,
        "password": "test",
        "username": "test"
    },
    "consul": "127.0.0.1:8500",
    "max_stale": "10m",
    "prefix": [
        {
            "destination": "local/nyc1",
            "source": "global@nyc1"
        },
        {
            "destination": [
                "local/nyc2",
                "temp/nyc2"
            ],
            "source": "global@nyc2"
        }
    ],
    "retry": "10s",
    "ssl": {
        "enabled": true,
        "verify": false
    },
    "syslog": {
        "enabled": true,
        "facility": "LOCAL5"
    },
    "token": "abcd1234"
}

Consul keys are created with backslashes on Windows

When running Consul Replicate on Windows, the status keys in Consul are created with backslashes instead of forward slashes, creating garbled key names in Consul which can't be deleted. This appears to be because the status path variable is created using the filepath.Join method, which uses backslashes on Windows. See runner.go:

return filepath.Join(r.config.StatusDir, enc)
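
A minimal, self-contained Go sketch of the kind of change this issue points toward (not necessarily the project's actual fix): Consul KV keys always use forward slashes, so the slash-only path package behaves consistently where the OS-specific path/filepath package does not. The status hash below is just the example value from elsewhere on this page.

package main

import (
	"fmt"
	"path"          // always joins with forward slashes
	"path/filepath" // joins with backslashes on Windows
)

func main() {
	statusDir := "service/consul-replicate/statuses"
	enc := "ba7a6cd8f91c8bac2bdcff42cd08df81" // example status hash

	// On Windows this prints "service\consul-replicate\statuses\<enc>",
	// which Consul then stores as a single garbled key name.
	fmt.Println(filepath.Join(statusDir, enc))

	// This always prints "service/consul-replicate/statuses/<enc>",
	// matching the key layout Consul expects.
	fmt.Println(path.Join(statusDir, enc))
}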

When folder 'xyz' is synced, it also creates a new key 'xyz'

I just tried to sync a folder called 'xyz' from dc2 to dc1.
After the sync finished, it had also created a new key 'xyz' in the KV store on dc1.

Deleting that key and then changing/adding some new keys in dc2 had no impact after restarting the sync; all the changes were synced properly.

So it seems to me that this new key shouldn't be created.

Stepping down, session invalidated

When running consul-replicate I get regular error messages:
[ERR] Stepping down, session invalidated

This causes the consul-replicate service to "flap". Is there any way around this?

consul-replicate fails to start with config file

I just set up some test infrastructure to try out consul-replicate. If I run it from the command line, it works fine:

$ bin/consul-replicate -prefix "vault/sys/token@dc1" -log-level=debug
2016/03/18 18:16:35 [INFO] (runner) creating new runner (once: false)
2016/03/18 18:16:35 [INFO] (runner) creating consul/api client
2016/03/18 18:16:35 [DEBUG] (runner) setting basic auth
2016/03/18 18:16:35 [INFO] (runner) creating Watcher
2016/03/18 18:16:35 [INFO] (runner) starting
2016/03/18 18:16:35 [INFO] (watcher) adding "storeKeyPrefix(vault/sys/token@dc1)"
2016/03/18 18:16:35 [DEBUG] (watcher) "storeKeyPrefix(vault/sys/token@dc1)" starting
2016/03/18 18:16:35 [DEBUG] (view) "storeKeyPrefix(vault/sys/token@dc1)" starting fetch
2016/03/18 18:16:35 [DEBUG] ("storeKeyPrefix(vault/sys/token@dc1)") querying consul with &{Datacenter:dc1 AllowStale:false RequireConsistent:false WaitIndex:0 WaitTime:1m0s Token: Near:}
2016/03/18 18:16:35 [DEBUG] ("storeKeyPrefix(vault/sys/token@dc1)") Consul returned 116 key pairs
2016/03/18 18:16:35 [INFO] (view) "storeKeyPrefix(vault/sys/token@dc1)" received data
2016/03/18 18:16:35 [INFO] (runner) quiescence timers starting
2016/03/18 18:16:35 [DEBUG] (view) "storeKeyPrefix(vault/sys/token@dc1)" starting fetch
2016/03/18 18:16:35 [DEBUG] ("storeKeyPrefix(vault/sys/token@dc1)") querying consul with &{Datacenter:dc1 AllowStale:false RequireConsistent:false WaitIndex:1064306 WaitTime:1m0s Token: Near:}
2016/03/18 18:16:35 [INFO] (runner) quiescence minTimer fired
2016/03/18 18:16:35 [INFO] (runner) running
2016/03/18 18:16:35 [DEBUG] (runner) updated key "vault/sys/token/id/000d33273d96c0a6c1b8658005a75d46c899b491"
....
2016/03/18 18:16:35 [INFO] (runner) replicated 116 updates, 0 deletes

If I try the same thing, using a config file like this (foo.hcl):

consul = "127.0.0.1:8500"
retry = "10s"
max_stale = "10m"

prefix {
  source = "vault/sys/token@dc1"
}

I get nothing - it just sits there at "starting...":

$ bin/consul-replicate -config=./foo.hcl -log-level=debug
2016/03/18 18:18:08 [INFO] (runner) creating new runner (once: false)
2016/03/18 18:18:08 [INFO] (runner) creating consul/api client
2016/03/18 18:18:08 [DEBUG] (runner) setting basic auth
2016/03/18 18:18:08 [INFO] (runner) creating Watcher
2016/03/18 18:18:08 [INFO] (runner) starting
^CReceived interrupt, cleaning up...
2016/03/18 18:18:39 [INFO] (runner) stopping
2016/03/18 18:18:39 [INFO] (watcher) stopping all views

I've tried global and a host of other prefixes, and it fails to sync on all of them if I try to use a config file. It works fine if I run the command directly...

EDIT: here's the version info:

bin/consul-replicate -version
consul-replicate v0.2.0.dev

Not able to compile into binary

Go 1.6.2

GOPATH=/home/ssen/work
GOROOT=/home/ssen/bin/go
CONSUL-REPLICATE=/home/ssen/work/src/consul-replicate

cd /home/ssen/work/src/consul-replicate
make

Then I get a bunch of errors stating it can't find packages from github.com/hashicorp/<INSERT REPO>

Snippet:

==> Generating...
can't load package: package consul-replicate: code in directory /home/ssen/work/src/consul-replicate     expects import "github.com/hashicorp/consul-replicate"
==> Running tests...
can't load package: package consul-replicate: code in directory /home/ssen/work/src/consul-replicate     expects import "github.com/hashicorp/consul-replicate"
import cycle not allowed
package consul-replicate/vendor/github.com/fatih/structs`

Replication is broken

  1. Config.Prefix.DataCenter in release 0.3.0 has no effect
  2. Prefix.Source = "path@datacenter" results in the wrong keys being replicated:

i.e.

key -> key@dckey
a/b -> a/b@dca/b
...

Replicate failed with source=keys

Hello.
With consul-replicate 0.2, if the "source" in the prefix section of the config is a key (not a folder), it is skipped from the sync.
This bug does not exist in version 0.1.

Maybe that is why the string "vault/core/audit" from the config becomes "vault/core/audit/" in the debug log?

Debug logs with simple config

consul = "consul.service.consul:8500"
token = "mytoken"
retry = "10s"
max_stale = "10m"

prefix {
  source = "vault/core/audit@dc_ruweb"
}
/ # consul-replicate -log-level=debug -config=/etc/consul_replicate_tmp.hcl
2016/09/07 23:45:56 [INFO] consul-replicate v0.2.0
2016/09/07 23:45:56 [INFO] (runner) creating new runner (once: false)
2016/09/07 23:45:56 [DEBUG] (runner) final config (tokens suppressed):

{
  "Path": "/etc/consul_replicate_tmp.hcl",
  "Consul": "consul.service.consul:8500",
  "Token": "mytoken",
  "Prefixes": [
    {
      "Source": {
        "Prefix": "vault/core/audit/",
        "DataCenter": "dc_ruweb"
      },
      "SourceRaw": "vault/core/audit@dc_ruweb",
      "Destination": "vault/core/audit/"
    }
  ],
  "Auth": {
    "Enabled": false,
    "Username": "",
    "Password": ""
  },
  "PidFile": "",
  "SSL": {
    "Enabled": false,
    "Verify": true,
    "Cert": "",
    "Key": "",
    "CaCert": ""
  },
  "Syslog": {
    "Enabled": false,
    "Facility": "LOCAL0"
  },
  "MaxStale": 600000000000,
  "Retry": 10000000000,
  "Wait": {
    "min": 150000000,
    "max": 400000000
  },
  "LogLevel": "debug",
  "StatusDir": "service/consul-replicate/statuses"
}

2016/09/07 23:45:56 [INFO] (runner) creating consul/api client
2016/09/07 23:45:56 [DEBUG] (runner) setting address to consul.service.consul:8500
2016/09/07 23:45:56 [DEBUG] (runner) setting token to mytoken
2016/09/07 23:45:56 [DEBUG] (runner) setting basic auth
2016/09/07 23:45:56 [INFO] (runner) creating Watcher
2016/09/07 23:45:56 [INFO] (clients) creating consul/api client
2016/09/07 23:45:56 [DEBUG] (clients) setting consul address to "consul.service.consul:8500"
2016/09/07 23:45:56 [DEBUG] (clients) setting consul token
2016/09/07 23:45:56 [INFO] (runner) starting
2016/09/07 23:45:56 [INFO] (watcher) adding "storeKeyPrefix(vault/core/audit@dc_ruweb)"
2016/09/07 23:45:56 [DEBUG] (watcher) "storeKeyPrefix(vault/core/audit@dc_ruweb)" starting
2016/09/07 23:45:56 [DEBUG] (view) "storeKeyPrefix(vault/core/audit@dc_ruweb)" starting fetch
2016/09/07 23:45:56 [DEBUG] ("storeKeyPrefix(vault/core/audit@dc_ruweb)") querying consul with &{Datacenter:dc_ruweb AllowStale:true RequireConsistent:false WaitIndex:0 WaitTime:1m0s Token: Near:}
2016/09/07 23:45:56 [DEBUG] ("storeKeyPrefix(vault/core/audit@dc_ruweb)") Consul returned 0 key pairs
2016/09/07 23:45:56 [INFO] (view) "storeKeyPrefix(vault/core/audit@dc_ruweb)" received data
2016/09/07 23:45:56 [DEBUG] (view) "storeKeyPrefix(vault/core/audit@dc_ruweb)" starting fetch
2016/09/07 23:45:56 [DEBUG] ("storeKeyPrefix(vault/core/audit@dc_ruweb)") querying consul with &{Datacenter:dc_ruweb AllowStale:true RequireConsistent:false WaitIndex:172436 WaitTime:1m0s Token: Near:}
2016/09/07 23:45:56 [INFO] (runner) quiescence timers starting
2016/09/07 23:45:56 [INFO] (runner) quiescence minTimer fired
2016/09/07 23:45:56 [INFO] (runner) running

consul lock on supervisor such as upstart

I am trying to run consul-replicate on all server nodes for a datacenter. However, I keep getting these warnings:

2015/03/31 14:45:32 [WARN] serf: received old event consul:new-leader from time 21 (current: 715)
2015/03/31 14:45:46 [WARN] serf: received old event consul:new-leader from time 21 (current: 715)
2015/03/31 14:46:02 [WARN] serf: received old event consul:new-leader from time 21 (current: 715)

Additionally, since I am running consul-replicate under consul lock in an upstart script, I end up not getting any logs for consul-replicate under /var/log/upstart/. How can I get around that and get logs?

script
...
    exec consul lock -token <token> locks/replicate consul-replicate -config /etc/consul_replicate
end script

Build broken

Consul-replicate build is broken.

I get this following error.

$ make
go get -d -v ./...
echo | xargs -n1 go get -d
go build -o bin/consul-replicate

./runner.go:457: clientSet.Add undefined (type *dependency.ClientSet has no field or method Add)
make: *** [build] Error 2

It looks like some change in "consul-template/dependency/" has broken this build.

Is there a way to work around it?
I didn't find the godeps in either of the projects "consul-template" and "consul-replicate".

Add Homebrew support.

I see consul and consul-template in Homebrew. It would be cool to have consul-replicate too.

Lock acquisition failing

I am running into the following issue:

Setting up lock at path: locks/replicate/.lock
Attempting lock acquisition
Lock acquisition failed: failed to create session: Unexpected response code: 500 (rpc error: No cluster leader)

The clusters are set up like so: 5 server nodes on dc1 and 5 on dc2. I am running multiple instances of consul-replicate through consul lock on upstart for high availability on dc2, which is trying to replicate a set of KVs from dc1. I verified leadership in both datacenters via the /v1/status/leader API call, and manually through consul info that the agent is indeed the leader.
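
For reference, the leadership checks mentioned above look roughly like this (assuming a local agent on the default HTTP port):

$ curl -s http://127.0.0.1:8500/v1/status/leader
$ consul info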

consul-replicate not replicating all keys

We are looking at possibly using consul-replicate to replicate data between data centers, but in testing we have found that not all keys are replicated, even though consul-replicate seems to think they are. Hopefully it's something with our setup, but maybe there is some other issue?

For testing we have a python script that writes as many keys as it can to a specified prefix within a specified number of seconds (i.e. write as many keys to sc-poc/ prefix as you can for 10 seconds). This script can be seen here: consul_benchmark.py

In the latest run this script was able to write 769 keys in 10 seconds, but only 457 of those keys were replicated.

The following command was used for the replication (debug added once we realized keys were missing):

consul-replicate -prefix "sc-poc/@us-east-1" -log-level debug > /tmp/repl.log 2>&1 &

consul-replicate was started before any keys in the prefix existed and was left running a while (10 - 15 min) after the writing of keys had stopped, so it wasn't like it was delayed.

Here is the output of an ipython session used to try and figure out what happened during the test.

The following shows me connecting to both the local cluster (source) and remote cluster (dest) and getting a count of keys written to the "sc-poc"/ prefix.

In [1]: import consul
In [2]: lc = consul.Consul(host='127.0.0.1')
In [3]: rc = consul.Consul(host='10.0.0.141')
In [4]: lkeys = lc.kv.get("sc-poc/",recurse=True)
In [5]: len(lkeys[1])
Out[5]: 769
In [6]: rkeys = rc.kv.get("sc-poc/",recurse=True)
In [7]: len(rkeys[1])
Out[7]: 457

Now I get a list of keys from both local and remote and generate a list of keys missing on the remote side. 312 keys missing (312 + 457 = 769)

In [8]: lkeylist = []
In [9]: for key in lkeys[1]:
   ...:         lkeylist.append(key['Key'])
   ...:
In [10]: rkeylist = []
In [11]: for key in rkeys[1]:
   ....:        rkeylist.append(key['Key'])
   ....:
In [12]: missing_keys = list(set(lkeylist) - set(rkeylist))
In [13]: len(missing_keys)
Out[13]: 312

Now I take a single missing key and view it both locally and remotely to validate that the key is indeed missing on the remote.

In [14]: sample_key = missing_keys[0]
In [15]: sample_key
Out[15]: u'sc-poc/ip-10-1-0-5/key_238'
In [16]: lc.kv.get(sample_key)
Out[16]:
('72603',
 {u'CreateIndex': 72603,
  u'Flags': 0,
  u'Key': u'sc-poc/ip-10-1-0-5/key_238',
  u'LockIndex': 0,
  u'ModifyIndex': 72603,
  u'Value': '74107ca52f9ee603783d8d40613bc399eae203bd'})
In [17]: rc.kv.get(sample_key)
Out[17]: ('43347', None)

Next I look at the "service/consul-replicate/statuses" key to determine what key consul-replicate thinks is the last key it replicated. I then look for that key on both the local and remote.

In [18]: rc.kv.get("service",recurse=True)
Out[18]:
('43347',
 [{u'CreateIndex': 42850,
   u'Flags': 0,
   u'Key': u'service/consul-replicate/statuses/ba7a6cd8f91c8bac2bdcff42cd08df81',
   u'LockIndex': 0,
   u'ModifyIndex': 43347,
   u'Value': '{\n  "LastReplicated": 73133,\n  "Source": "sc-poc/",\n  "Destination": "sc-poc/"\n}'}])
In [19]: for key in lkeys[1]:
   ....:        if key['CreateIndex'] == 73133:
   ....:                        print(key)
   ....:
{u'LockIndex': 0, u'ModifyIndex': 73133, u'Value': '09565407012308833ea63643c958f185b20dd3a7', u'Flags': 0, u'Key': u'sc-poc/ip-10-1-0-5/key_768', u'CreateIndex': 73133}
In [20]: rc.kv.get('sc-poc/ip-10-1-0-5/key_768')
Out[20]: ('43347', None)

Lastly, looking at the debug output (attached repl.log.txt) for "sc-poc/ip-10-1-0-5/key_238" and "sc-poc/ip-10-1-0-5/key_768", it states that consul-replicate thinks those keys have indeed been replicated.

[ec2-user@ip-10-0-0-141 ~]$ grep sc-poc/ip-10-1-0-5/key_238 /tmp/repl.log
2015/11/19 18:27:05 [DEBUG] (runner) skipping because "sc-poc/ip-10-1-0-5/key_238" is already replicated
2015/11/19 18:27:05 [DEBUG] (runner) skipping because "sc-poc/ip-10-1-0-5/key_238" is already replicated
2015/11/19 18:27:06 [DEBUG] (runner) skipping because "sc-poc/ip-10-1-0-5/key_238" is already replicated
2015/11/19 18:27:06 [DEBUG] (runner) skipping because "sc-poc/ip-10-1-0-5/key_238" is already replicated
2015/11/19 18:27:06 [DEBUG] (runner) skipping because "sc-poc/ip-10-1-0-5/key_238" is already replicated
2015/11/19 18:27:07 [DEBUG] (runner) skipping because "sc-poc/ip-10-1-0-5/key_238" is already replicated
2015/11/19 18:27:07 [DEBUG] (runner) skipping because "sc-poc/ip-10-1-0-5/key_238" is already replicated
2015/11/19 18:27:08 [DEBUG] (runner) skipping because "sc-poc/ip-10-1-0-5/key_238" is already replicated
2015/11/19 18:27:08 [DEBUG] (runner) skipping because "sc-poc/ip-10-1-0-5/key_238" is already replicated
2015/11/19 18:27:08 [DEBUG] (runner) skipping because "sc-poc/ip-10-1-0-5/key_238" is already replicated
2015/11/19 18:27:09 [DEBUG] (runner) skipping because "sc-poc/ip-10-1-0-5/key_238" is already replicated
2015/11/19 18:27:09 [DEBUG] (runner) skipping because "sc-poc/ip-10-1-0-5/key_238" is already replicated
2015/11/19 18:27:10 [DEBUG] (runner) skipping because "sc-poc/ip-10-1-0-5/key_238" is already replicated
2015/11/19 18:27:10 [DEBUG] (runner) skipping because "sc-poc/ip-10-1-0-5/key_238" is already replicated
2015/11/19 18:27:10 [DEBUG] (runner) skipping because "sc-poc/ip-10-1-0-5/key_238" is already replicated
2015/11/19 18:27:10 [DEBUG] (runner) skipping because "sc-poc/ip-10-1-0-5/key_238" is already replicated
2015/11/19 18:27:11 [DEBUG] (runner) skipping because "sc-poc/ip-10-1-0-5/key_238" is already replicated
2015/11/19 18:27:11 [DEBUG] (runner) skipping because "sc-poc/ip-10-1-0-5/key_238" is already replicated
2015/11/19 18:27:12 [DEBUG] (runner) skipping because "sc-poc/ip-10-1-0-5/key_238" is already replicated
2015/11/19 18:27:12 [DEBUG] (runner) skipping because "sc-poc/ip-10-1-0-5/key_238" is already replicated
[ec2-user@ip-10-0-0-141 ~]$ grep sc-poc/ip-10-1-0-5/key_768 /tmp/repl.log
2015/11/19 18:27:12 [DEBUG] (runner) skipping because "sc-poc/ip-10-1-0-5/key_768" is already replicated
[ec2-user@ip-10-0-0-141 ~]$

Given that these keys are not in the remote side, something fishy seems to be going on.

Keys seem to be missing on every run of the test when we write a few hundred keys.

This issue doesn't seem to occur if replication is started after the writing of keys has completed. For example, if we write a bunch of keys and then start consul-replicate, everything seems to work fine.

$ consul-replicate -version
consul-replicate v0.2.0

Bug: replication broken when config read from file path

The bug is here https://github.com/hashicorp/consul-replicate/blob/master/config.go#L360 and at #L369.

In #L360 it passes prefix.Source, which in the case of a file config should not contain the @dc suffix, because that data now sits in a separate field.

Thus, dep.NewKVListQuery never receives the source datacenter that should have been extracted from the file config.

Then, the assignment in #L369 assumes there is no @dc suffix in the source. So both possible configuration options end up broken with a file config.

Any plans to support master-master replication?

Are there any plans to support master-master replication? With the current master-slave setup, if the master dc fails, all writes will be blocked unless we have some fail-over to automatically turn another dc into a master.

Use consul-replicate in non-WAN multi-cluster environments

We would like to use Consul in our different data centers as a KV store, using one cluster as the primary to read and write KV data and replicating that data to all other clusters. Our network configuration does not allow communication between some data centers, so we cannot set up a WAN among all clusters. Would it be possible to enhance consul-replicate to support KV replication from one cluster to another regardless of whether they are part of a WAN pool?

Add ACL support

consul-replicate needs a token capable of reading KVs from the remote DC and writing them to the local DC. Right now, there's no way to enable consul-replicate if ACLs are enabled.

consul-replicate doesn't attempt HTTPS connections when SSL flag specified

I am attempting to replicate, and my Consul servers require SSL. I have redacted certain information like datacenter and hostnames, so please ignore dc1 and server.com.

Based on the GET log lines, it's using HTTP instead of HTTPS even though I have specified the SSL flag.

[root replicate]# consul-replicate -consul="server.com:8543" -ssl -ssl-verify -prefix="vault@dc1" -once -log-level=debug
2015/06/04 18:52:50 INFO creating new runner (once: true)
2015/06/04 18:52:50 INFO creating consul/api client
2015/06/04 18:52:50 DEBUG setting address to server.com:8543
2015/06/04 18:52:50 DEBUG setting basic auth
2015/06/04 18:52:50 INFO creating Watcher
2015/06/04 18:52:50 INFO starting
2015/06/04 18:52:50 INFO adding "storeKeyPrefix(vault@dc1)"
2015/06/04 18:52:50 DEBUG "storeKeyPrefix(vault@dc1)" starting
2015/06/04 18:52:50 DEBUG "storeKeyPrefix(vault@dc1)" starting fetch
2015/06/04 18:52:50 DEBUG querying Consul with &{Datacenter:dc1 AllowStale:false RequireConsistent:false WaitIndex:0 WaitTime:1m0s Token:}
2015/06/04 18:52:50 ERR "storeKeyPrefix(vault@dc1)" Get http://server.com:8543/v1/kv/vault?dc=dc1&recurse=&wait=60000ms: malformed HTTP response "\x15\x03\x01\x00\x02\x02"
2015/06/04 18:52:50 INFO "storeKeyPrefix(vault@dc1)" errored, retrying in 5s
2015/06/04 18:52:50 ERR watcher reported error: Get http://server.com:8543/v1/kv/vault?dc=dc1&recurse=&wait=60000ms: malformed HTTP response "\x15\x03\x01\x00\x02\x02"
2015/06/04 18:52:50 INFO running

consul replicate not replicating keys

What is the correct CLI invocation for consul-replicate from dc-one to dc-two below?
Is there anything missing?

dc-one (physical datacenter-1)
consul-server-2 : dc:dc-one consul IP:192.168.0.153 client-ip:192.168.0.155

http://192.168.0.155:8500/v1/kv/?recurse=

[{"CreateIndex":1287,"ModifyIndex":1287,"LockIndex":0,"Key":"foo","Flags":0,"Value":"YmFy"},{"CreateIndex":1553,"ModifyIndex":1567,"LockIndex":0,"Key":"service/consul-replicate/statuses/4b6a8b56271d06bed31bfa838eb2235e","Flags":0,"Value":"ewogICJMYXN0UmVwbGljYXRlZCI6IDE1NTIsCiAgIlNvdXJjZSI6ICJ3ZWIiLAogICJEZXN0aW5hdGlvbiI6ICJ3ZWIiCn0="},{"CreateIndex":50,"ModifyIndex":1551,"LockIndex":0,"Key":"web/key1","Flags":0,"Value":"bmV3dmFs"},{"CreateIndex":51,"ModifyIndex":1552,"LockIndex":0,"Key":"web/key2","Flags":0,"Value":"bmV3dmFs"}]

dc-two (physical datacenter-2)
consul-server-4: dc: dc-two consul IP : 192.168.0.158 client-ip:localhost

@consul-server-4:~$ consul-replicate -consul=192.168.0.155:8500

@consul-server-4:$ curl http://localhost:8500/v1/kv/?recurse

@consul-server-4:$

Support for multiple prefixes

There should be support for providing an array of prefixes instead of a single prefix root.

This would enable a cluster to replicate multiple K/V prefixes that are located under separate root paths.

consul-replicate never exits even if -once is passed

consul-replicate copies all of the keys we want over from one "datacenter" to another, but then it just hangs forever. In the debug output (found below) the nodes are in the same location as each other, we just have it split into two logical "datacenters" for testing consul-replicate.

edit: I should add that there are no errors in the consul server logs on either side.

consul version: v0.5.0
consul-replicate version: v0.2.0

debug output:

pobrien@ops:~$ sudo -u consul consul-replicate -prefix secrets@stg1 -once -log-level=debug
2015/04/13 22:41:01 [INFO] (runner) creating new runner (once: true)
2015/04/13 22:41:01 [INFO] (runner) creating consul/api client
2015/04/13 22:41:01 [DEBUG] (runner) setting basic auth
2015/04/13 22:41:01 [INFO] (runner) creating Watcher
2015/04/13 22:41:01 [INFO] (runner) starting
2015/04/13 22:41:01 [INFO] (watcher) adding "storeKeyPrefix(secrets@stg1)"
2015/04/13 22:41:01 [DEBUG] (watcher) "storeKeyPrefix(secrets@stg1)" starting
2015/04/13 22:41:01 [DEBUG] (view) "storeKeyPrefix(secrets@stg1)" starting fetch
2015/04/13 22:41:01 [DEBUG] ("storeKeyPrefix(secrets@stg1)") querying Consul with &{Datacenter:stg1 AllowStale:false RequireConsistent:false WaitIndex:0 WaitTime:1m0s Token:}
2015/04/13 22:41:01 [DEBUG] ("storeKeyPrefix(secrets@stg1)") Consul returned 20 key pairs
2015/04/13 22:41:01 [INFO] (view) "storeKeyPrefix(secrets@stg1)" received data from consul
2015/04/13 22:41:01 [INFO] (runner) quiescence timers starting
2015/04/13 22:41:02 [INFO] (runner) quiescence minTimer fired
2015/04/13 22:41:02 [INFO] (runner) running
2015/04/13 22:41:02 [DEBUG] (runner) skipping because "secrets/group1/assets_aws_secret" is already replicated
2015/04/13 22:41:02 [DEBUG] (runner) skipping because "secrets/group2/app1_aws_secret" is already replicated
2015/04/13 22:41:02 [DEBUG] (runner) skipping because "secrets/group2/app1_bing_api_key" is already replicated
2015/04/13 22:41:02 [DEBUG] (runner) skipping because "secrets/group2/app1_box_fat" is already replicated
2015/04/13 22:41:02 [DEBUG] (runner) skipping because "secrets/group2/app1_box_fit" is already replicated
2015/04/13 22:41:02 [DEBUG] (runner) skipping because "secrets/group2/app1_tacks_api_key" is already replicated
2015/04/13 22:41:02 [DEBUG] (runner) skipping because "secrets/group2/app1_mysql_password_04_2015" is already replicated
2015/04/13 22:41:02 [DEBUG] (runner) skipping because "secrets/group2/app1_oauth_app_secret" is already replicated
2015/04/13 22:41:02 [DEBUG] (runner) skipping because "secrets/group2/app1_tmw_password" is already replicated
2015/04/13 22:41:02 [DEBUG] (runner) skipping because "secrets/group2/app1_default_password" is already replicated
2015/04/13 22:41:02 [DEBUG] (runner) skipping because "secrets/group2/app1_la_password" is already replicated
2015/04/13 22:41:02 [DEBUG] (runner) skipping because "secrets/group2/app1_oada_password" is already replicated
2015/04/13 22:41:02 [DEBUG] (runner) skipping because "secrets/group2/app1_o_password" is already replicated
2015/04/13 22:41:02 [DEBUG] (runner) skipping because "secrets/group2/app1_sf_password" is already replicated
2015/04/13 22:41:02 [DEBUG] (runner) skipping because "secrets/group3/mailer_user_and_pass1" is already replicated
2015/04/13 22:41:02 [DEBUG] (runner) skipping because "secrets/group3/mailer_user_and_pass2" is already replicated
2015/04/13 22:41:02 [DEBUG] (runner) skipping because "secrets/shared/assets_auth_key" is already replicated
2015/04/13 22:41:02 [DEBUG] (runner) skipping because "secrets/shared/ticket_password" is already replicated
2015/04/13 22:41:02 [DEBUG] (runner) skipping because "secrets/shared/other_log_api_key" is already replicated
2015/04/13 22:41:02 [DEBUG] (runner) skipping because "secrets/shared/log_key" is already replicated

Recommendation for gathering "replication lag" status?

Hello, I wanted to see if there is a recommended method to show an indication of replication lag or "freshness" for Consul data that is copied with consul-replicate? The intent is to show that the "source" shows a ModifyIndex of e.g. 1000 while the "destination" contains only e.g. 998.

We are experimenting with the following approach, which is quite awkward.

  1. On the "destination", read from Consul the encoded status recorded by consul-replicate, and get the LastReplicated value
  2. On the "source", dump all KV data and find the maximum ModifyIndex
  3. Compare the LastReplicated and ModifyIndex values

Pseudo-commands for #1 and #2:

destination $ consul kv export service/consul-replicate/statuses | jq -r '.[0].value' | base64 -d | jq '.LastReplicated'
source $      curl -s localhost:8500/v1/kv/global?recurse=true | jq -r '. | max_by(.ModifyIndex) | .ModifyIndex'

Ideally, consul-replicate would track an equivalent of the above itself and report the replication lag within a status message, preferably as a discrete (non-base64) value to make for easy monitoring & alerting.

Error if log-level declared in config

I am getting the following error when log-level is declared in the config file instead of being passed on the command line:

Consul Replicate returned errors:
runner: 1 error(s) occurred:

* 1 error(s) decoding:

* '' has invalid keys: log-level

config.json:

{
  "consul": "127.0.0.1:8500",
  "log-level": "info",
  "prefix": [
    {
      "source": "global@default"
    }
  ]
}

Can't install consul-replicate from source

Hi everyone,

if I try to install consul-replicate from source, I get some errors:
ubuntu@consul-server01:/usr/lib/go-1.6/src/github.com/hashicorp/consul-replicate$ make
==> Building github.com/hashicorp/consul-replicate (locally)...
--> darwin/386: github.com/hashicorp/consul-replicate
github.com/hashicorp/consul-replicate/vendor/github.com/hashicorp/vault/api
/usr/lib/go-1.6/src/github.com/hashicorp/consul-replicate/vendor/github.com/hashicorp/vault/api/client.go:260: undefined: http.ErrUseLastResponse
Makefile:67: recipe for target 'bin-local' failed
make: *** [bin-local] Error 2

I 'm using Ubuntu 16.04 and go version 1.6.

Kind regards

Consul-replicate unnecessarily uses a shell to spawn its command

consul-replicate requires that the user running it have a valid shell, because it uses said shell to invoke whatever command it's wrapping. This isn't required, though - exec()ing the command should be sufficient.

Fixing this would allow us to set the shell of our dedicated consul-replicate user to /bin/false and reduce our attack surface.

Directory cleared if DCs share the same directory name

Oy.

Our nodes across all datacenters have their keys set under the same directory name. Yet when running consul-replicate, it comes along, clears the existing directory (which holds data for its own datacenter), and replicates the remote datacenter into the same directory (so only the remote datacenter's data persists).

I know this seems like a rather political question, so I'm proposing:

  • check for keys in $directory that are present
  • check which ones are in remote $dc
  • check if there are overlaps
  • if there are none: pull entries & subdirectories into the existing directory without removing existing entries & subdirs.

<3, @doertedev.

Syslog and log level not honored via config file setting

Running the following yields no output to syslog or the console. However, the keys are synced correctly, I just don't see any logs.

./consul-replicate -config=/opt/consul-replicate/config.json

If I pass the parameters via the CLI, I get the intended results: I see output on the console and it logs to a log file.

./consul-replicate -config=/opt/consul-replicate/config.json -syslog -log-level=debug

config.json

consul = "127.0.0.1:8500"
retry = "10s"
log_level = "debug"

syslog {
  enabled = true
  facility = "LOCAL1"
}

prefix {
  source = "apichecks@phl"
}

prefix {
  source = "deploylist@phl"
}

New release?

It looks like 0.2.0 is about a year old now, and there have been many fixes (looking specifically for issues #25, #40, and #41). Is there a new binary release planned any time soon? Wondering if we can avoid building ourselves from tip.

Thanks!

Error while running make command

I have the below version of the Go language installed on a CentOS 7.1 machine.

Installed:
  golang.x86_64 0:1.6.3-1.el7_2.1

Dependency Installed:
  cpp.x86_64 0:4.8.5-4.el7     gcc.x86_64 0:4.8.5-4.el7     golang-bin.x86_64 0:1.6.3-1.el7_2.1     golang-src.noarch 0:1.6.3-1.el7_2.1     libmpc.x86_64 0:1.0.1-3.el7     mpfr.x86_64 0:3.1.1-4.el7

I am seeing the below error:

$ make
==> Generating...
can't load package: package _/root/consul-replicate: cannot find package "_/root/consul-replicate" in any of:
    /usr/lib/golang/src/_/root/consul-replicate (from $GOROOT)
    ($GOPATH not set)
can't load package: package _/root/consul-replicate/vendor/github.com/fatih/structs: cannot find package "_/root/consul-replicate/vendor/github.com/fatih/structs" in any of:
    /usr/lib/golang/src/_/root/consul-replicate/vendor/github.com/fatih/structs (from $GOROOT)
    ($GOPATH not set)
can't load package: package _/root/consul-replicate/vendor/github.com/hashicorp/consul/api: cannot find package "_/root/consul-replicate/vendor/github.com/hashicorp/consul/api" in any of:
    /usr/lib/golang/src/_/root/consul-replicate/vendor/github.com/hashicorp/consul/api (from $GOROOT)
    ($GOPATH not set)
can't load package: package _/root/consul-replicate/vendor/github.com/hashicorp/consul-template/dependency: cannot find package "_/root/consul-replicate/vendor/github.com/hashicorp/consul-template/dependency" in any of:
    /usr/lib/golang/src/_/root/consul-replicate/vendor/github.com/hashicorp/consul-template/dependency (from $GOROOT)
    ($GOPATH not set)
can't load package: package _/root/consul-replicate/vendor/github.com/hashicorp/consul-template/logging: cannot find package "_/root/consul-replicate/vendor/github.com/hashicorp/consul-template/logging" in any of:
    /usr/lib/golang/src/_/root/consul-replicate/vendor/github.com/hashicorp/consul-template/logging (from $GOROOT)
    ($GOPATH not set)
can't load package: package _/root/consul-replicate/vendor/github.com/hashicorp/consul-template/test: cannot find package "_/root/consul-replicate/vendor/github.com/hashicorp/consul-template/test" in any of:
    /usr/lib/golang/src/_/root/consul-replicate/vendor/github.com/hashicorp/consul-template/test (from $GOROOT)
    ($GOPATH not set)
can't load package: package _/root/consul-replicate/vendor/github.com/hashicorp/consul-template/watch: cannot find package "_/root/consul-replicate/vendor/github.com/hashicorp/consul-template/watch" in any of:
    /usr/lib/golang/src/_/root/consul-replicate/vendor/github.com/hashicorp/consul-template/watch (from $GOROOT)
    ($GOPATH not set)
can't load package: package _/root/consul-replicate/vendor/github.com/hashicorp/errwrap: cannot find package "_/root/consul-replicate/vendor/github.com/hashicorp/errwrap" in any of:
    /usr/lib/golang/src/_/root/consul-replicate/vendor/github.com/hashicorp/errwrap (from $GOROOT)
    ($GOPATH not set)
can't load package: package _/root/consul-replicate/vendor/github.com/hashicorp/go-cleanhttp: cannot find package "_/root/consul-replicate/vendor/github.com/hashicorp/go-cleanhttp" in any of:
    /usr/lib/golang/src/_/root/consul-replicate/vendor/github.com/hashicorp/go-cleanhttp (from $GOROOT)
    ($GOPATH not set)
can't load package: package _/root/consul-replicate/vendor/github.com/hashicorp/go-gatedio: cannot find package "_/root/consul-replicate/vendor/github.com/hashicorp/go-gatedio" in any of:
    /usr/lib/golang/src/_/root/consul-replicate/vendor/github.com/hashicorp/go-gatedio (from $GOROOT)
    ($GOPATH not set)
can't load package: package _/root/consul-replicate/vendor/github.com/hashicorp/go-multierror: cannot find package "_/root/consul-replicate/vendor/github.com/hashicorp/go-multierror" in any of:
    /usr/lib/golang/src/_/root/consul-replicate/vendor/github.com/hashicorp/go-multierror (from $GOROOT)
    ($GOPATH not set)
can't load package: package _/root/consul-replicate/vendor/github.com/hashicorp/go-rootcerts: cannot find package "_/root/consul-replicate/vendor/github.com/hashicorp/go-rootcerts" in any of:
    /usr/lib/golang/src/_/root/consul-replicate/vendor/github.com/hashicorp/go-rootcerts (from $GOROOT)
    ($GOPATH not set)
can't load package: package _/root/consul-replicate/vendor/github.com/hashicorp/go-syslog: cannot find package "_/root/consul-replicate/vendor/github.com/hashicorp/go-syslog" in any of:
    /usr/lib/golang/src/_/root/consul-replicate/vendor/github.com/hashicorp/go-syslog (from $GOROOT)
    ($GOPATH not set)
can't load package: package _/root/consul-replicate/vendor/github.com/hashicorp/hcl: cannot find package "_/root/consul-replicate/vendor/github.com/hashicorp/hcl" in any of:
    /usr/lib/golang/src/_/root/consul-replicate/vendor/github.com/hashicorp/hcl (from $GOROOT)
    ($GOPATH not set)
can't load package: package _/root/consul-replicate/vendor/github.com/hashicorp/hcl/hcl/ast: cannot find package "_/root/consul-replicate/vendor/github.com/hashicorp/hcl/hcl/ast" in any of:
    /usr/lib/golang/src/_/root/consul-replicate/vendor/github.com/hashicorp/hcl/hcl/ast (from $GOROOT)
    ($GOPATH not set)
can't load package: package _/root/consul-replicate/vendor/github.com/hashicorp/hcl/hcl/parser: cannot find package "_/root/consul-replicate/vendor/github.com/hashicorp/hcl/hcl/parser" in any of:
    /usr/lib/golang/src/_/root/consul-replicate/vendor/github.com/hashicorp/hcl/hcl/parser (from $GOROOT)
    ($GOPATH not set)
can't load package: package _/root/consul-replicate/vendor/github.com/hashicorp/hcl/hcl/scanner: cannot find package "_/root/consul-replicate/vendor/github.com/hashicorp/hcl/hcl/scanner" in any of:
    /usr/lib/golang/src/_/root/consul-replicate/vendor/github.com/hashicorp/hcl/hcl/scanner (from $GOROOT)
    ($GOPATH not set)
can't load package: package _/root/consul-replicate/vendor/github.com/hashicorp/hcl/hcl/strconv: cannot find package "_/root/consul-replicate/vendor/github.com/hashicorp/hcl/hcl/strconv" in any of:
    /usr/lib/golang/src/_/root/consul-replicate/vendor/github.com/hashicorp/hcl/hcl/strconv (from $GOROOT)
    ($GOPATH not set)
can't load package: package _/root/consul-replicate/vendor/github.com/hashicorp/hcl/hcl/token: cannot find package "_/root/consul-replicate/vendor/github.com/hashicorp/hcl/hcl/token" in any of:
    /usr/lib/golang/src/_/root/consul-replicate/vendor/github.com/hashicorp/hcl/hcl/token (from $GOROOT)
    ($GOPATH not set)
can't load package: package _/root/consul-replicate/vendor/github.com/hashicorp/hcl/json/parser: cannot find package "_/root/consul-replicate/vendor/github.com/hashicorp/hcl/json/parser" in any of:
    /usr/lib/golang/src/_/root/consul-replicate/vendor/github.com/hashicorp/hcl/json/parser (from $GOROOT)
    ($GOPATH not set)
can't load package: package _/root/consul-replicate/vendor/github.com/hashicorp/hcl/json/scanner: cannot find package "_/root/consul-replicate/vendor/github.com/hashicorp/hcl/json/scanner" in any of:
    /usr/lib/golang/src/_/root/consul-replicate/vendor/github.com/hashicorp/hcl/json/scanner (from $GOROOT)
    ($GOPATH not set)
can't load package: package _/root/consul-replicate/vendor/github.com/hashicorp/hcl/json/token: cannot find package "_/root/consul-replicate/vendor/github.com/hashicorp/hcl/json/token" in any of:
    /usr/lib/golang/src/_/root/consul-replicate/vendor/github.com/hashicorp/hcl/json/token (from $GOROOT)
    ($GOPATH not set)
can't load package: package _/root/consul-replicate/vendor/github.com/hashicorp/logutils: cannot find package "_/root/consul-replicate/vendor/github.com/hashicorp/logutils" in any of:
    /usr/lib/golang/src/_/root/consul-replicate/vendor/github.com/hashicorp/logutils (from $GOROOT)
    ($GOPATH not set)
can't load package: package _/root/consul-replicate/vendor/github.com/hashicorp/serf/coordinate: cannot find package "_/root/consul-replicate/vendor/github.com/hashicorp/serf/coordinate" in any of:
    /usr/lib/golang/src/_/root/consul-replicate/vendor/github.com/hashicorp/serf/coordinate (from $GOROOT)
    ($GOPATH not set)
can't load package: package _/root/consul-replicate/vendor/github.com/hashicorp/vault/api: cannot find package "_/root/consul-replicate/vendor/github.com/hashicorp/vault/api" in any of:
    /usr/lib/golang/src/_/root/consul-replicate/vendor/github.com/hashicorp/vault/api (from $GOROOT)
    ($GOPATH not set)
can't load package: package _/root/consul-replicate/vendor/github.com/mitchellh/go-homedir: cannot find package "_/root/consul-replicate/vendor/github.com/mitchellh/go-homedir" in any of:
    /usr/lib/golang/src/_/root/consul-replicate/vendor/github.com/mitchellh/go-homedir (from $GOROOT)
    ($GOPATH not set)
can't load package: package _/root/consul-replicate/vendor/github.com/mitchellh/mapstructure: cannot find package "_/root/consul-replicate/vendor/github.com/mitchellh/mapstructure" in any of:
    /usr/lib/golang/src/_/root/consul-replicate/vendor/github.com/mitchellh/mapstructure (from $GOROOT)
    ($GOPATH not set)
make: *** [generate] Error 123

Thoughts?

Vault with Consul

Hi,

We have Vault integrated with Consul. We normally write all our key/value pairs under the secret path, as shown below.

vault write secret/elements/test value=mes0sd0cker

How can this be used with consul-replicate? I see the data is stored under /vault/logical in Consul (see attached).

Here is the sample config file for consul-replicate
./consul-replicate -config=/srv/consul/consul_replicate.hcl

# more consul_replicate.hcl
consul = "127.0.0.1:8500"
retry = "10s"
log_level = "debug"
max_stale = "10m"

syslog {
   enabled = true
}

prefix {
    source = "vault/logical/@ndc_ho_b"
}

Keys are being skipped and deleted when using config option

Using codebase from master branch ...

If I run using the config file flag, it skips and/or deletes my keys (I re-sync them using the CLI flags). If I run and pass the flags via the command line, it works fine.

Using config

./consul-replicate -config=/opt/consul-replicate/config.json

Using CLI flags works fine.

./consul-replicate -prefix="deploylist@phl" -consul "ocd.atl.ariasystems.net:8500" -syslog -log-level=debug

Also, when running the app using a config flag, the prefix for apichecks is coming through incorrectly. The keys should be apichecks/worldpay/<KEY>

2016/06/02 14:42:07 [DEBUG] (runner) skipping because "apichecksworldpay/" is already replicated
2016/06/02 14:42:07 [DEBUG] (runner) skipping because "apichecksworldpay/api_desc" is already replicated
2016/06/02 14:42:07 [DEBUG] (runner) skipping because "apichecksworldpay/api_type" is already replicated

My config:

consul = "127.0.0.1:8500"
retry = "10s"
log_level = "debug"
max_stale = "10m"

syslog {
    enabled = true
}

prefix {
    source = "apichecks@phl"
}

prefix {
    source = "deploylist@phl"
}

no datacenter parsed in consul-replicate 0.3.0

using command

$ ./consul-replicate -consul 10.110.68.14:8500 -prefix jx@dc_jx -log-level=debug

or using config file

consul = "10.110.68.14:8500"
prefix{
  source = "jx@dc_jx"
}

with command

$ ./consul-replicate -config conf -log-level=debug

get same log

2017/01/20 09:55:33.135233 [DEBUG] (runner) final config (tokens suppressed):
{
  "Path": "",
  "Consul": "10.110.68.14:8500",
  "Token": "",
  "Prefixes": [
    {
      "Dependency": {},
      "Source": "jx",
      "DataCenter": "",
      "Destination": "jx"
    }
  ],
...

My OS is CentOS 7. Am I missing something important? Many thanks.
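
A quick sanity check before blaming the prefix parser, using the agent address from the config above: confirm the exact datacenter names the cluster reports, since the dump shows "DataCenter": "" even though the prefix names dc_jx.

# Returns a JSON array of datacenter names known to this agent's cluster:
curl -s http://10.110.68.14:8500/v1/catalog/datacenters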

Trying to use client tls certificates always results in 'remote error: bad certificate'

Hello,

I am trying to use consul-replicate against a TLS-enabled Consul; the command looks like this:

CERT=/path/to/crt
KEY=/path/to/key
CONSULHOST=127.0.0.1:8500
consul-replicate -consul $CONSULHOST -ssl -ssl-cert $CERT -ssl-key $KEY -ssl-verify=false -prefix="key@my-dc"

This results in a remote error: bad certificate. However, both curl -k $CONSULHOST --cert $CERT --key $KEY and a consul-template daemon using the same SSL options work fine, so this looks like a bug in consul-replicate to me.
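
One more data point that can narrow this down, using the same variables as above: drive the TLS handshake with that client certificate outside of any Consul tooling. If openssl completes the handshake but consul-replicate still gets "bad certificate", that points at the client certificate not being loaded or presented by consul-replicate.

# Present the client cert/key directly and watch the handshake result:
openssl s_client -connect $CONSULHOST -cert $CERT -key $KEY </dev/null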

Skipping keys because the key is already replicated?

I have two clusters in two different DCs running v0.2

Primary -> PHL (3 nodes)
Secondary -> ATL (3 nodes)

I'm running consul-replicate on a client in ATL.

./consul-replicate -config=/opt/consul-replicate/config.json

consul = "ocd.atl.ariasystems.net:8500"
retry = "10s"
log_level = "debug"
max_stale = "10m"

syslog {
   enabled = true
}

prefix {
    source = "apichecks/@phl"
}

prefix {
    source = "deploylist/@phl"
}

If I delete a key on the ATL side and run consul-replicate again, it tells me the key is already replicated.

consul-replicate[21216]: (runner) skipping because "apichecks/worldpay/url" is already replicated
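
If it helps explain the behavior: consul-replicate records the index it last replicated up to under a status path in the destination KV store (exposed as status_dir / -status-dir, which I believe defaults to service/consul-replicate/statuses), and a key is skipped whenever the source's ModifyIndex has not advanced past that record. Deleting a key only on the destination side does not advance the source index, so nothing is re-copied. Assuming your build uses the same status path, one way to force a full re-compare is to clear that status entry and re-run:

# Clear the stored replication status on the destination side (address taken
# from the config above), then run consul-replicate again:
curl -X DELETE 'http://ocd.atl.ariasystems.net:8500/v1/kv/service/consul-replicate/statuses?recurse'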

consul-replicate Many-Many DC replication

Hello,

I have a KV namespace /shared that is present in all N DCs (say 3 for now): DC1, DC2, DC3.

The objective is to keep /shared in sync: a KV pair may be added in any DC and should be synced to the other DCs, but the sync should be as efficient as possible.

Please address the queries below:

  1. Does this mean (N*N) complexity, i.e. every DC tries to pull replicated data from each of the other (N-1) DCs?

  2. Is there a recommended topology for N DCs that reduces the complexity of syncing the /shared KV namespace to N? (See the sketch at the end of this issue.)

    Example:
    If DC1 adds a KV pair, the other (N-1) DCs should pull it from DC1 (N-1 pulls) and should not also try to pull the same update from each other.
    If DC3 can get the KV from DC1, DC3 should not pull it from DC2, even though the KV is also added under DC2's /shared namespace and DC3 is watching DC2's /shared for replication.

  3. Would Cassandra or another database be better suited for this kind of cross-DC replication, to avoid N*N replication of the /shared namespace?

Case-1:

If a KV pair is added in DC1:

  • DC2's consul-replicate would replicate it into DC2's /shared from DC1
  • DC3's consul-replicate would replicate it into DC3's /shared from DC1

Now that the KV is replicated into DC2's /shared:

  • DC1's consul-replicate would try to replicate it into DC1's /shared from DC2 [already present, no action] >>> why does DC1 try to pull at all?
  • DC3's consul-replicate would try to replicate it into DC3's /shared from DC2 >>> why replicate this again when it was already done via DC1?

Now that the KV is replicated into DC3's /shared:

  • DC1's consul-replicate would try to replicate it into DC1's /shared from DC3 [already present, no action] >>> why does DC1 try to pull at all?
  • DC2's consul-replicate would try to replicate it into DC2's /shared from DC3 >>> why replicate this again when it was already done via DC1?

Case-2:

If K1=V1 is added in DC1:

  • DC2's consul-replicate would replicate it into DC2's /shared

Now that K1=V1 is replicated into DC2's /shared:

  • DC1's consul-replicate would try to pull it back into DC1's /shared

Now suppose K1 is updated to V2 in DC1 in the meantime, and DC1 then receives K1=V1 from DC2:

  1. Will DC1 keep K1=V2 or take K1=V1 (from DC2), based on timestamp?
    Would Cassandra be better suited to decide this based on timestamps?

Case-3:

If DC2 is running consul-replicate pulling from DC1, DC1 has 3 servers, and a KV pair is added in DC1:

  1. Will DC2 pull only one copy of the KV from DC1, or one copy per server in DC1 (i.e. 3 copies of the same KV)?
    I assume the pull is leader-based and DC2 will pull only one copy of the KV from DC1's consul-replicate server leader.

  2. If DC1 has 3 servers and, say, server-1 is the Consul leader, will the consul-replicate server leader in DC1 always be the same as the Consul leader (server-1), or is the consul-replicate leader election among DC1's servers independent of the Consul leader election?

    (The consul-replicate server leader in DC1 is the one that sends the replicated data to DC2 when DC2 pulls.)

Thanks,
Deepak
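
Not an authoritative answer, but one common way to keep this at N pulls instead of N*N is a star topology: designate one DC (say DC1) as the source of truth for /shared, send all writes there, and have every other DC pull only from it. A minimal sketch, assuming DC1 is the designated source and each secondary DC runs its own replicator against its local agent:

# Run in DC2 and in DC3; neither pulls from the other, only from DC1:
consul-replicate -prefix "shared/@dc1"

On the Case-3 questions: consul-replicate is just a client of the Consul HTTP API through the local agent, so a prefix pull is a single query answered by the source DC's server cluster, not one copy per server, and consul-replicate has no leader election of its own; running multiple copies in one DC is typically coordinated externally (for example with consul lock, discussed in a later issue below).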

Release?

Would you all be willing to make a new release?

I'd like the latest -once and -exclude fixes, and I can't seem to build from source.

When I run make, I get this hot mess:

Only replicate changes, don't re-scan the whole KV

Replicating using full scans seems inefficient. According to the watch documentation (https://www.consul.io/docs/agent/watches.html), it sounds like one can create a watch and only replicate changes. In reality, consul-replicate does a full scan of the source and then chooses what to replicate. This ends up clobbering any "extra" keys that have been added to the destination, and it also has to traverse the whole tree.

Can consul-replicate be tweaked to only replicate changes (create/update/delete) instead of traversing the whole source tree? Is there a better way to do this?
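
One clarifying detail from the watch side: both Consul's keyprefix watches and the blocking queries consul-replicate issues fire when something under the prefix changes, but each response carries the full current key set for the prefix rather than a delta, so some diffing against the destination is unavoidable. For reference, a keyprefix watch looks like this (the prefix and handler script are illustrative):

# Invoke a handler whenever anything under global/ changes; the handler
# receives the full key list for the prefix on stdin, not just the changes:
consul watch -type=keyprefix -prefix=global/ /usr/local/bin/handle-kv-change.sh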

Service Replication

Maybe this is a feature request, but I would like to be able to replicate service check data across Consul cluster instances. Being able to export the service definition JSON and import it into a different environment would be valuable.
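
consul-replicate itself only handles the K/V store, but for a rough manual export/import you can go through the agent's service endpoints; a sketch, assuming agent-registered services and illustrative hostnames (note the field names differ slightly between the two endpoints, e.g. "Service" vs "Name", so a small transform is needed before re-registering):

# Dump the service definitions registered with an agent in the source cluster:
curl -s http://dc1-agent:8500/v1/agent/services > services.json
# Each (transformed) entry can then be re-registered on a target agent:
# curl -X PUT --data @one-service.json http://dc2-agent:8500/v1/agent/service/register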

Style: redundant code

All pieces of code like this, which pre-create a zero-length, capacity-one slice right before appending, are redundant; append handles a nil slice by allocating as needed.

if c.Prefixes == nil {
    c.Prefixes = make([]*Prefix, 0, 1)
}
c.Prefixes = append(c.Prefixes, p)

It should simply be:

c.Prefixes = append(c.Prefixes, p)

Replicate not working when we have folders

I have two data centers, dc1 and dc2, with dc2 being the master. If I have a folder common to hold my common KVs, and inside common I have two properties, test1=123 and test2=456, and I try to replicate them across to dc1, three properties are created in dc1 with the following names: "common\test1", "common\test2", and "common" (a KV for the folder is also created). But when you click on these properties they give a 404, and no folder is created either.
If I don't have any folder structure, the KVs sync up fine.

I used the following command to start consul-replicate:

consul-replicate -prefix "common@dc2" -consul "dc1 server ip:port" -log-level debug

In the logs, I saw it actually picks up three properties instead of two.
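
A hedged guess at what is going on: when a folder is created in the Consul UI, an empty key named "common/" is stored alongside the real keys, which would explain the third property, and if the replicated key names really contain "\" rather than "/", the UI will show them as flat keys that 404 when opened. One way to see the exact key names that landed in dc1, bypassing the UI (the agent address is the same placeholder as in the command above):

# List every key name under the "common" prefix as stored in dc1:
curl -s 'http://<dc1 server ip:port>/v1/kv/common?keys'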

Restarting service results in errors

If I restart the service and it gets a new PID, the old one doesn't appear to be invalidated, resulting in an error from the 'consul-replicate' service check.

use of 'consul lock'

The addition of the lock subcommand is interesting, but I'm puzzled by its use here. If I'm reading the documentation on consul lock correctly, when the lock is held on node A and then acquired on node B, consul-replicate will be terminated on node A but will not be restarted. So in order to ensure that it's always running, it will have to be supervised externally. That seems like unnecessary overhead when consul-replicate could just gracefully handle loss and acquisition of locks.
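
For context, consul lock runs its child process only while the lock is held and exits once the lock is lost, so the usual pattern is exactly the external supervision described above: each candidate node keeps re-running the wrapper so that it re-queues for the lock after losing it. A minimal sketch, assuming a lock prefix of locks/consul-replicate and an illustrative prefix flag; a process supervisor such as systemd with Restart=always achieves the same thing:

# Keep contending for the lock; whichever node holds it runs the replicator.
while true; do
  consul lock locks/consul-replicate \
    consul-replicate -prefix "global@nyc1"
  sleep 5
done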
