graylog2 / graylog2-images

Ready-to-run machine images

License: Apache License 2.0

Ruby 10.71% Shell 59.95% Groovy 29.34%
graylog docker dockerfile openstack vagrantfile packer ova aws aws-ec2 ami

graylog2-images's Introduction

graylog2-images

This project allows you to create machine images with a full Graylog stack installed.

Graylog OVA appliance

Pre-Considerations

  • Always run this appliance in a separate network that is isolated from the internet.

Dependencies

  • 64-bit host system with a VirtualBox/VMware/XenServer hypervisor

Download

Detailed documentation can be found here.

Create machine images with Packer

This project creates machine images with a full Graylog stack installed.

Requirements

You need a recent Packer version; get it here. To set your local properties, copy packerrc.sh.example to packerrc.sh and fill in the right values for your environment. Before you run Packer, source packerrc.sh in your terminal.

$ cd packer
$ . packerrc.sh
$ packer build aws.json

This, for example, creates an Amazon AMI for you.
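The exact variables in packerrc.sh depend on the builders you use and on packerrc.sh.example in this repository; the sketch below is only an illustration for the AWS case and uses the standard environment variables Packer's Amazon builders read, not necessarily the names from the example file.

# packerrc.sh -- illustrative sketch only, not the repository's packerrc.sh.example
# Credentials picked up by Packer's amazon-ebs builder
export AWS_ACCESS_KEY_ID="AKIA..."
export AWS_SECRET_ACCESS_KEY="..."
# Region to build in (variable name assumed; adjust to whatever aws.json expects)
export AWS_DEFAULT_REGION="us-east-1"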

Usage

All machine images include our Omnibus package, which comes with the graylog-ctl command. After spinning up the VM, log in as the ubuntu user and run at least:

$ sudo graylog-ctl reconfigure

This will set up your Graylog installation and start all services. You can reach the web interface by pointing your browser to the IP address of the appliance: http://<IP address>:9000

The default login is username admin, password admin. You can change the admin password:

$ sudo graylog-ctl set-admin-password !SeCreTPasSwOrD?
$ sudo graylog-ctl reconfigure

Notable Options

AWS

Parameter   | Value
type        | Choose the AWS storage type; EBS works fine to start with
ami_groups  | 'all' means publicly available
ami_regions | Array of regions to copy the image to after creation

Virtualbox

Parameter         | Value
disk_size         | Set the maximum disk size for the image
modifyvm --memory | RAM size
modifyvm --cpus   | Number of CPUs
modifyvm --natpf1 | Default port forwards
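For reference, the same settings can be tweaked on an already imported VM with VBoxManage (the VM name "graylog" below is just an example, and the VM must be powered off):

$ VBoxManage modifyvm "graylog" --memory 4096 --cpus 2
$ VBoxManage modifyvm "graylog" --natpf1 "graylog-web,tcp,,9000,,9000"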

AMI Images

AWS EC2 Images can be found here


graylog2-images's Issues

Documentation improvements: Production use and security

Thank you for these images, awesome!

A suggestion concerning the documentation:

  • Maybe make clearer whether these images are production-ready or not. I know the quick-setup app isn't and spells that out.
  • Add a section about post-installation security, for example changing the ubuntu user's password in the OVA image.

graylog2-server restart timeout on reconfigure

Recipe: graylog2::graylog2-server
  * service[graylog2-server] action restart

    ================================================================================
    Error executing action `restart` on resource 'service[graylog2-server]'
    ================================================================================

    Mixlib::ShellOut::ShellCommandFailed
    ------------------------------------
    Expected process to exit with [0], but received '1'
    ---- Begin output of /opt/graylog2/embedded/bin/chpst -u root /opt/graylog2/embedded/bin/sv -w 45 restart /opt/graylog2/service/graylog2-server ----
    STDOUT: timeout: run: /opt/graylog2/service/graylog2-server: (pid 1722) 68s
    STDERR: 
    ---- End output of /opt/graylog2/embedded/bin/chpst -u root /opt/graylog2/embedded/bin/sv -w 45 restart /opt/graylog2/service/graylog2-server ----
    Ran /opt/graylog2/embedded/bin/chpst -u root /opt/graylog2/embedded/bin/sv -w 45 restart /opt/graylog2/service/graylog2-server returned 1

    Resource Declaration:
    ---------------------
    # In /opt/graylog2/embedded/cookbooks/runit/definitions/runit_service.rb

    190:     service params[:name] do
    191:       control_cmd = node[:runit][:sv_bin]
    192:       if params[:owner]
    193:         control_cmd = "#{node[:runit][:chpst_bin]} -u #{params[:owner]} #{control_cmd}"
    194:       end
    195:       provider Chef::Provider::Service::Simple

Virtualbox OVA image /etc/issue incorrect

The system banner indicates that the default username/password is admin for the OVA file. It is in fact ubuntu/ubuntu; there is no user on the system called admin, and admin/admin applies to the web interface. Consider clarifying this, or listing both in the /etc/rc.local generator?

(sorry for edits!)

OVA Appliance - Cannot Log In No Graylog Server Available

I did the initial configuration of the admin password and timezone. All services are running, but I cannot see anything on port 12900 and am getting "No Graylog servers running...". I checked the server log /var/log/graylog/server/current and see "ERROR: org.graylog2.bootstrap.CmdLineTool - Invalid Configuration".

No idea why. Running in VMware Workstation 7.1.3 on Windows 7 64-bit with 4 GB allocated to the VM.

maybe add docs about env settings overlap : -port in GRAYLOG_SMTP_SERVER == won't start

I haven't dug through the sources to see how the environment variables are used, but I screwed up some cutting and pasting and had a -port with only one minus in GRAYLOG_SMTP_SERVER. Maybe mongodb or some other process slurped up this setting and started running on port 25. The cluster wouldn't start and there weren't really any log messages that helped me with the investigation.

Overall though docker seems a great way to distribute this and the exposed settings have covered everything I've needed. Nice job.

How can I update Elasticsearch if I built the Graylog2 server from this image?

There are some comments on the Internet that require an "add-apt-repository", but by default Ubuntu Server cannot do this, and I am trying to keep things simple. I tried to download the latest version of ES and replace the original one in "/opt/graylog/elasticsearch", but this doesn't work. I cannot connect to either the Graylog server or ES (I did stop all services, copy the new ES into the directory, and restart them).

Set JAVA_HOME

In order to run commands like /opt/graylog/elasticsearch/bin/plugin -install royrusso/elasticsearch-HQ, we need to set JAVA_HOME to /opt/graylog/embedded/jre.
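A short sequence that should work, using the paths and plugin name from the report above:

$ export JAVA_HOME=/opt/graylog/embedded/jre
$ cd /opt/graylog/elasticsearch/bin
$ ./plugin -install royrusso/elasticsearch-HQ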

Amazon AMI fails upon initial sudo graylog2-ctl reconfigure with dependency on libopts25 (>= 1:5.18)

Just to let you know that there seems to be an issue with the Amazon AMI, as it cannot be started following the documentation at Graylog2 AWS EC2 Images.

Observed behaviour

When using the current Amazon AMI (0.92.3, us-east-1, ami-ca9ff0a2) to launch an instance sudo graylog2-ctl reconfigure fails upon installing/configuring ntpd with the following error message:

[2015-01-02T10:52:48+00:00] ERROR: apt_package[ntp] (ntp::default line 28) had an error: Mixlib::ShellOut::ShellCommandFailed: Expected process to exit with [0], but received '100'
---- Begin output of apt-get -q -y install ntp=1:4.2.6.p5+dfsg-3ubuntu2.14.04.1 ----
STDOUT: Reading package lists...
Building dependency tree...
Reading state information...
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
 ntp : Depends: libopts25 (>= 1:5.18) but it is not installable
STDERR: E: Unable to correct problems, you have held broken packages.
---- End output of apt-get -q -y install ntp=1:4.2.6.p5+dfsg-3ubuntu2.14.04.1 ----
Ran apt-get -q -y install ntp=1:4.2.6.p5+dfsg-3ubuntu2.14.04.1 returned 100

Notes

  • No specific information was supplied when choosing image (i.e. no kernel id etc)

Linux ip-10-157-90-96 3.13.0-34-generic #60-Ubuntu SMP Wed Aug 13 15:45:27 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

  • Manual installation of libopts25 fails as the package cannot be found (see below)
    ubuntu@ip-10-157-90-96:~$ sudo apt-get install libopts25
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Package libopts25 is not available, but is referred to by another package.
    This may mean that the package is missing, has been obsoleted, or
    is only available from another source

    E: Package 'libopts25' has no installation candidate
  • Amazon details

Instance type c3.xlarge
Availability zone us-east-1d
AMI ID graylog2-0.92.3-1419362218 (ami-ca9ff0a2)
Kernel ID aki-919dcaf8

  • This is not really an important issue for me; I can work around it otherwise. I just thought I'd let you know.

Docker: Connection refused: /127.0.0.1:12900

Trying to use the Docker image on CentOS Linux release 7.0.1406, but the web app hangs at startup with a connection refused error.

==> /var/log/graylog/web/current <==
2015-03-04_13:56:15.04150 [error] o.g.r.l.ApiClient - API call failed to execute.
2015-03-04_13:56:15.04154 java.util.concurrent.ExecutionException: java.net.ConnectException: Connection refused: /127.0.0.1:12900 to http://127.0.0.1:12900/system/cluster/node
2015-03-04_13:56:15.04154   at com.ning.http.client.providers.netty.NettyResponseFuture.abort(NettyResponseFuture.java:342) ~[com.ning.async-http-client-1.8.14.jar:na]
2015-03-04_13:56:15.04155   at com.ning.http.client.providers.netty.NettyConnectListener.operationComplete(NettyConnectListener.java:108) ~[com.ning.async-http-client-1.8.14.jar:na]
2015-03-04_13:56:15.04155   at org.jboss.netty.channel.DefaultChannelFuture.notifyListener(DefaultChannelFuture.java:431) ~[io.netty.netty-3.9.3.Final.jar:na]
2015-03-04_13:56:15.04155   at org.jboss.netty.channel.DefaultChannelFuture.notifyListeners(DefaultChannelFuture.java:422) ~[io.netty.netty-3.9.3.Final.jar:na]
2015-03-04_13:56:15.04155   at org.jboss.netty.channel.DefaultChannelFuture.setFailure(DefaultChannelFuture.java:384) ~[io.netty.netty-3.9.3.Final.jar:na]
2015-03-04_13:56:15.04156 Caused by: java.net.ConnectException: Connection refused: /127.0.0.1:12900 to http://127.0.0.1:12900/system/cluster/node
2015-03-04_13:56:15.04158   at com.ning.http.client.providers.netty.NettyConnectListener.operationComplete(NettyConnectListener.java:104) ~[com.ning.async-http-client-1.8.14.jar:na]
2015-03-04_13:56:15.04158   at org.jboss.netty.channel.DefaultChannelFuture.notifyListener(DefaultChannelFuture.java:431) ~[io.netty.netty-3.9.3.Final.jar:na]
2015-03-04_13:56:15.04158   at org.jboss.netty.channel.DefaultChannelFuture.notifyListeners(DefaultChannelFuture.java:422) ~[io.netty.netty-3.9.3.Final.jar:na]
2015-03-04_13:56:15.04158   at org.jboss.netty.channel.DefaultChannelFuture.setFailure(DefaultChannelFuture.java:384) ~[io.netty.netty-3.9.3.Final.jar:na]
2015-03-04_13:56:15.04159   at org.jboss.netty.channel.socket.nio.NioClientBoss.processSelectedKeys(NioClientBoss.java:109) ~[io.netty.netty-3.9.3.Final.jar:na]
2015-03-04_13:56:15.04159 Caused by: java.net.ConnectException: Connection refused: /127.0.0.1:12900
2015-03-04_13:56:15.04159   at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.7.0_71]
2015-03-04_13:56:15.04159   at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739) ~[na:1.7.0_71]
2015-03-04_13:56:15.04159   at org.jboss.netty.channel.socket.nio.NioClientBoss.connect(NioClientBoss.java:152) ~[io.netty.netty-3.9.3.Final.jar:na]
2015-03-04_13:56:15.04160   at org.jboss.netty.channel.socket.nio.NioClientBoss.processSelectedKeys(NioClientBoss.java:105) ~[io.netty.netty-3.9.3.Final.jar:na]
2015-03-04_13:56:15.04161   at org.jboss.netty.channel.socket.nio.NioClientBoss.process(NioClientBoss.java:79) ~[io.netty.netty-3.9.3.Final.jar:na]

I checked the logs inside the container and Elasticsearch and MongoDB are up. This happens both with the image from the Docker Hub Registry and with the locally built image.

AMIs for eu-central-1

Hi there,

would it be possible to also publish pre-built AWS EC2 AMIs for the new eu-central-1 region (Frankfurt)?

Thanks,
Thilo

Restarting services

Re: docker

Out of curiosity, what is the intended way to restart services on this image with the runsv stuff?

I found a workaround of moving directories out of /opt/graylog/sv/ momentarily and then moving them back to trigger a reload.
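Based on commands and paths that appear elsewhere in these issues (graylog-ctl restart, the runit sv binary under /opt/graylog/embedded/bin and the service links under /opt/graylog/service), something like the following should be less invasive than moving directories around; treat it as a sketch, not official guidance:

$ sudo graylog-ctl restart graylog-server
$ sudo /opt/graylog/embedded/bin/sv restart /opt/graylog/service/graylog-server   # or talk to runit directly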

Enable HTTPS on Graylog OVA appliance

NGINX currently listens on port 80 without encryption. Users who want to run Graylog in production will want to encrypt their passwords, especially if they are using the amazingly simple LDAP integration.

  1. The appliance should generate a self-signed cert on first boot.
  2. NGINX should redirect port 80 to port 443
  3. NGINX should listen on 443 using the self-signed cert

This will enable users to use Graylog securely from day 1. In addition, it will make it simpler to transition to a valid signed SSL certificate by following the example of the self-signed certs.
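For step 1, a self-signed certificate could be generated on first boot with plain openssl; a minimal sketch, with output paths chosen to match the nginx config directory that shows up in the chef output further down this page (that choice is an assumption):

$ sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout /opt/graylog/conf/nginx/ca/graylog.key \
    -out /opt/graylog/conf/nginx/ca/graylog.crt \
    -subj "/CN=graylog"

NGINX would then point ssl_certificate and ssl_certificate_key at those files, listen on 443, and redirect port 80 to 443.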

Graylog.conf in OVA 1.0 is missing values compared to the 1.0 conf file

The conf file included in the 1.0 OVA image is missing some values compared to the graylog 1.0 packages. I have not generated a full list of them, but the most obvious missing options are related to "rotation_strategy".

Here are the contents of "/opt/graylog/conf/graylog.conf":
https://gist.github.com/tristanbob/c6e05fd263f9e7e8728b

Here are the contents of that file from the source:
https://github.com/Graylog2/graylog2-server/blob/master/misc/graylog2.conf

Thanks!

Missing steps in Vagrant setup?

I tried to install graylog2 via vagrant on my development machine to give it a first try. After starting/provisioning the machine with vagrant up for the first time and logging into the web interface, I was greeted with the message "No Graylog2 servers available. Cannot log in." So I fired up vagrant ssh to have a look at the problem.

  • First of all, java doesn't seem to get installed so I had to run sudo apt-get update && sudo apt-get install openjdk-7-jre manually.

  • Then I tried to start the server manually with sudo -u graylog2 java -jar graylog2-server.jar --debug according to the docs, but the config file /etc/graylog2.conf was missing, so I copied the example file and tried to start the server again.

  • Settings for password_secret and root_password_sha2 also had to be filled in (the pwgen package is missing as well).

  • Yay, finally some more output when starting the server (unfortunately a stack trace as well). Missing steps again:

    sudo touch /etc/graylog2-server-node-id && sudo chmod -R 777 /etc/graylog2-server-node-id
    sudo mkdir /opt/graylog2/server/spool && sudo chmod -R 777 /opt/graylog2/server/spool
  • Oh, just discovered MongoDB is missing as well … installing that, too (but not doing further setup).

  • Ok, starting again. Still doesn't work: ERROR: Could not successfully connect to Elasticsearch.
    I'm giving up at this point!

It would be great to have all those steps added to the Vagrantfile (e.g. with a default 'admin' password hash as well for root_password_sha2). Usually, I expect all those things to happen automatically when I'm setting up a dev environment, so I can start testing/developing. Having to go through all those manual steps and finding out what exactly causes those errors is really cumbersome and demotivating.
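For reference, the manual steps described above collected into one sequence; this is a transcript of the reporter's workaround, not a recommended procedure, and the example-config path and MongoDB package name are assumptions:

$ sudo apt-get update && sudo apt-get install openjdk-7-jre pwgen mongodb-server
$ sudo cp /path/to/graylog2.conf.example /etc/graylog2.conf    # then set password_secret and root_password_sha2
$ sudo touch /etc/graylog2-server-node-id && sudo chmod -R 777 /etc/graylog2-server-node-id
$ sudo mkdir /opt/graylog2/server/spool && sudo chmod -R 777 /opt/graylog2/server/spool
$ sudo -u graylog2 java -jar graylog2-server.jar --debug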

AWS image gives 502 bad gateway out of the box

Out of the box, the AWS image for 1.0 gives a 502 error with nginx.

Reproduce:

  1. Follow link for ami-f4f4a29c, deploy instance
  2. Run sudo graylog-ctl reconfigure
  3. Edit AWS security group to allow HTTP from my IP
  4. Visit public IP, get 502 error.

nginx error.log shows:
2015/02/19 15:25:19 [error] 2072#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: MYIP, server: , request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:9000/", host: "MYPUBLICIPONAWS"

netstat shows that nothing is actually listening on port 9000.

Am I doing something wrong here? Thanks for any pointers.
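A few checks that usually narrow this down (graylog-ctl status and graylog-ctl tail are the same subcommands used in other issues on this page; the netstat flags are standard):

$ sudo graylog-ctl status                     # are graylog-server and graylog-web both running?
$ sudo graylog-ctl tail graylog-web           # why is the web interface not binding to port 9000?
$ sudo netstat -tlnp | grep -E '9000|12900'   # confirm what is actually listening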

Official Graylog2 image in Docker Hub

I tried nearly every all-in-one Graylog2 image in the Docker registry, but each of them either has something missing or did not work as smoothly as this one.

I was creating one myself, but it would be nice to see Graylog2 published officially under its own namespace, like torch/graylog2 or torch/graylog2-omnibus.

What do you think about this?

Docker container contains nested JREs

Not really a bug, but something I came across when I wanted to update the underlying JVM from v1.7 to v1.8.

When you do a find / -name java you will notice that there are two JREs installed inside the Docker image:

root@e220459c899d:/opt/graylog/plugin# find / -name java
/opt/graylog/embedded/jre/jre/bin/java
/opt/graylog/embedded/jre/bin/java
/opt/graylog/embedded/lib/ruby/gems/2.1.0/gems/chef-12.0.3/spec/data/cookbooks/java
/opt/graylog/embedded/lib/ruby/gems/2.1.0/gems/coderay-1.1.0/lib/coderay/scanners/java

Both java versions are exactly the same:

root@e220459c899d:/opt/graylog/plugin# /opt/graylog/embedded/jre/jre/bin/java -version
java version "1.7.0_71"
Java(TM) SE Runtime Environment (build 1.7.0_71-b14)
Java HotSpot(TM) 64-Bit Server VM (build 24.71-b01, mixed mode)
root@e220459c899d:/opt/graylog/plugin# /opt/graylog/embedded/jre/bin/java -version
java version "1.7.0_71"
Java(TM) SE Runtime Environment (build 1.7.0_71-b14)
Java HotSpot(TM) 64-Bit Server VM (build 24.71-b01, mixed mode)

However, only one of them is actually used:

graylog    486  0.7  3.2 3746688 159808 ?      Ssl  Mar02  23:27 /opt/graylog/embedded/jre/bin/java -Xms256m -Xmx1g -Xss256k -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSIni
graylog    573  1.0  9.3 2606504 456776 ?      Ssl  Mar02  32:05 /opt/graylog/embedded/jre/bin/java -Xms1024m -Xmx1024m -XX:MaxPermSize=256m -XX:ReservedCodeCacheSize=128m -Duser.dir=/opt/graylog/web -

What is strange is that these two versions are actually nested inside each other. My questions:

  • Is this by design?
  • Which version would I have to change if I wanted to update to v1.8?
  • Is this unintentional, or some kind of leftover?

When I look at the phusion/baseimage:0.9.15 image there is no JVM. So maybe that slipped in through the dpkg -i graylog_latest.deb install? (so maybe this is the wrong repo to post the issue).

side note: the reason for wanting to update the JVM is twofold:

  1. approaching end of life support of JVM 1.7
  2. we are using plugins that make use of JVM 8 features

Just wanted to point this out. You can close it if it doesn't bother you. Regards, Ronald

Docker image fails

graylog2-images/docker# docker build -t graylog2 .
graylog2-images/docker# docker run -t -p 9000:9000 -p 12201:12201 -e GRAYLOG2_PASSWORD=admin graylog2

results in

....
Recipe: ntp::apparmor
  * service[apparmor] action restart

    ================================================================================
    Error executing action `restart` on resource 'service[apparmor]'
    ================================================================================

    Errno::ENOENT
    -------------
    No such file or directory - /etc/init.d/apparmor stop

    Resource Declaration:
    ---------------------
    # In /opt/graylog2/embedded/cookbooks/ntp/recipes/apparmor.rb

     20: service 'apparmor' do
     21:   action :nothing
     22: end
     23:

    Compiled Resource:
    ------------------
    # Declared in /opt/graylog2/embedded/cookbooks/ntp/recipes/apparmor.rb:20:in `from_file'

    service("apparmor") do
      action [:nothing]
      supports {:restart=>false, :reload=>false, :status=>false}
      retries 0
      retry_delay 2
      guard_interpreter :default
      service_name "apparmor"
      pattern "apparmor"
      cookbook_name :ntp
      recipe_name "apparmor"
    end

Recipe: ntp::default
  * service[ntp] action restart
    - restart service service[ntp]

Running handlers:
[2014-12-09T15:07:39+00:00] ERROR: Running exception handlers
Running handlers complete
[2014-12-09T15:07:39+00:00] ERROR: Exception handlers complete
[2014-12-09T15:07:39+00:00] FATAL: Stacktrace dumped to /opt/graylog2/embedded/cookbooks/cache/chef-stacktrace.out
Chef Client failed. 99 resources updated in 71.764112455 seconds
[2014-12-09T15:07:39+00:00] ERROR: service[apparmor] (ntp::apparmor line 20) had an error: Errno::ENOENT: No such file or directory - /etc/init.d/apparmor stop
[2014-12-09T15:07:39+00:00] FATAL: Chef::Exceptions::ChildConvergeError: Chef run process exited unsuccessfully (exit code 1)
# docker version
Client version: 1.3.2
Client API version: 1.15
Go version (client): go1.3.3
Git commit (client): 39fa2fa
OS/Arch (client): linux/amd64
Server version: 1.3.2
Server API version: 1.15
Go version (server): go1.3.3
Git commit (server): 39fa2fa

Ubuntu 14.04

AMI image not working

When running sudo graylog2-ctl reconfigure on the latest AMI image I got:


Error executing action `install` on resource 'apt_package[ntp]'
================================================================================

Mixlib::ShellOut::CommandTimeout
--------------------------------
Command timed out after 900s:
Command exceeded allowed execution time, process terminated
---- Begin output of apt-get -q -y install ntp=1:4.2.6.p5+dfsg-3ubuntu2 ----
STDOUT: Reading package lists...
Building dependency tree...
STDERR: 
---- End output of apt-get -q -y install ntp=1:4.2.6.p5+dfsg-3ubuntu2 ----
Ran apt-get -q -y install ntp=1:4.2.6.p5+dfsg-3ubuntu2 returned 

Resource Declaration:
---------------------
# In /opt/graylog2/embedded/cookbooks/ntp/recipes/default.rb

 28:     package ntppkg
 29:   end

Compiled Resource:
------------------
# Declared in /opt/graylog2/embedded/cookbooks/ntp/recipes/default.rb:28:in `block in from_file'

apt_package("ntp") do
  action :install
  retries 0
  retry_delay 2
  default_guard_interpreter :default
  package_name "ntp"
  version "1:4.2.6.p5+dfsg-3ubuntu2"
  timeout 900
  cookbook_name :ntp
  recipe_name "default"
end


Docker image: admin password can't be changed after the first start

Since the web UI doesn't allow changing the admin password, I tried:

  • edited graylog-secrets.json to invalidate the password
  • restarted with docker exec -i -t $(docker ps -q) /usr/bin/graylog-ctl restart
  • initial admin password is still active
  • start with an entirely new allinone image
  • docker exec -i -t $(docker ps -q) /usr/bin/graylog-ctl set-admin-password newpassword
  • restarted with docker exec -i -t $(docker ps -q) /usr/bin/graylog-ctl restart graylog-web
  • initial admin password is still active

Is there a proper way to set the admin password? Should I not be using graylog-ctl to restart, and instead stop/start the container (which would lose log data during the restart period)?

Thank you!
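The README sequence earlier on this page pairs set-admin-password with a reconfigure run rather than a plain restart; a hedged guess at the equivalent inside the container (untested):

$ docker exec -i -t $(docker ps -q) /usr/bin/graylog-ctl set-admin-password newpassword
$ docker exec -i -t $(docker ps -q) /usr/bin/graylog-ctl reconfigure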

User permissions problem

Hi,

I created another user with either read or admin permissions; when I try to access any saved dashboards, I get the error message below. I also tried exposing TCP port 12900 on the localhost's IP 10.64.7.36. Is there anything else I need to configure or change? See the commands I passed to docker to run Graylog and the docker inspect output below. Any help would be greatly appreciated.

Thanks,
Yi

(You caused a org.graylog2.restclient.lib.APIException. API call failed GET http://@172.17.0.4:12900/search/universal/relative?filter=&offset=0&query=75..*%20and%20IMCP&limit=100&range=300&sort=timestamp:desc&range_type=relative returned 403 Forbidden body: {"type":"ApiError","message":"Not authorized"})
DOCKER COMMANDS

docker run -t -p 10.64.7.36:9000:9000 -p 10.64.7.36:12201:12201 -p 10.64.7.36:12900:12900 -p 10.64.7.36:514:514/tcp -p 10.64.7.36:514:514/udp -e GRAYLOG_PASSWORD=RXXXXXXX -v /graylog/data:/var/opt/graylog/data -v /graylog/logs:/var/log/graylog --name graylog graylog2/allinone

DOCKER INSPECT OUTPUT

[{
"AppArmorProfile": "",
"Args": [],
"Config": {
"AttachStderr": true,
"AttachStdin": false,
"AttachStdout": true,
"Cmd": [
"/opt/graylog/embedded/share/docker/my_init"
],
"CpuShares": 0,
"Cpuset": "",
"Domainname": "",
"Entrypoint": null,
"Env": [
"GRAYLOG_PASSWORD=RXXXXXXXX",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"HOME=/root",
"DEBIAN_FRONTEND=noninteractive"
],
"ExposedPorts": {
"12201/tcp": {},
"12201/udp": {},
"12900/tcp": {},
"4001/tcp": {},
"514/tcp": {},
"514/udp": {},
"9000/tcp": {}
},
"Hostname": "c73c70cd6501",
"Image": "graylog2/allinone",
"Memory": 0,
"MemorySwap": 0,
"NetworkDisabled": false,
"OnBuild": null,
"OpenStdin": false,
"PortSpecs": null,
"StdinOnce": false,
"Tty": true,
"User": "",
"Volumes": {
"/opt/graylog/plugin": {},
"/var/log/graylog": {},
"/var/opt/graylog/data": {}
},
"WorkingDir": ""
},
"Created": "2015-03-23T15:21:23.301814234Z",
"Driver": "devicemapper",
"ExecDriver": "native-0.2",
"HostConfig": {
"Binds": [
"/graylog/data:/var/opt/graylog/data",
"/graylog/logs:/var/log/graylog"
],
"CapAdd": null,
"CapDrop": null,
"ContainerIDFile": "",
"Devices": [],
"Dns": null,
"DnsSearch": null,
"ExtraHosts": null,
"Links": null,
"LxcConf": [],
"NetworkMode": "bridge",
"PortBindings": {
"12201/tcp": [
{
"HostIp": "10.64.7.36",
"HostPort": "12201"
}
],
"12900/tcp": [
{
"HostIp": "10.64.7.36",
"HostPort": "12900"
}
],
"514/tcp": [
{
"HostIp": "10.64.7.36",
"HostPort": "514"
}
],
"514/udp": [
{
"HostIp": "10.64.7.36",
"HostPort": "514"
}
],
"9000/tcp": [
{
"HostIp": "10.64.7.36",
"HostPort": "9000"
}
]
},
"Privileged": false,
"PublishAllPorts": false,
"RestartPolicy": {
"MaximumRetryCount": 0,
"Name": ""
},
"SecurityOpt": null,
"VolumesFrom": null
},
"HostnamePath": "/var/lib/docker/containers/c73c70cd6501ff18a706cf580977f274cbc92053a9ac039ef704e8f156cc2650/hostname",
"HostsPath": "/var/lib/docker/containers/c73c70cd6501ff18a706cf580977f274cbc92053a9ac039ef704e8f156cc2650/hosts",
"Id": "c73c70cd6501ff18a706cf580977f274cbc92053a9ac039ef704e8f156cc2650",
"Image": "987a654e973ea76c3bbd3417b44a97e74ee93c4882a828bac37b6a5a398e9c0f",
"MountLabel": "system_u:object_r:svirt_sandbox_file_t:s0:c724,c891",
"Name": "/graylog",
"NetworkSettings": {
"Bridge": "docker0",
"Gateway": "172.17.42.1",
"IPAddress": "172.17.0.4",
"IPPrefixLen": 16,
"MacAddress": "02:42:ac:11:00:04",
"PortMapping": null,
"Ports": {
"12201/tcp": [
{
"HostIp": "10.64.7.36",
"HostPort": "12201"
}
],
"12201/udp": null,
"12900/tcp": [
{
"HostIp": "10.64.7.36",
"HostPort": "12900"
}
],
"4001/tcp": null,
"514/tcp": [
{
"HostIp": "10.64.7.36",
"HostPort": "514"
}
],
"514/udp": [
{
"HostIp": "10.64.7.36",
"HostPort": "514"
}
],
"9000/tcp": [
{
"HostIp": "10.64.7.36",
"HostPort": "9000"
}
]
}
},
"Path": "/opt/graylog/embedded/share/docker/my_init",
"ProcessLabel": "system_u:system_r:svirt_lxc_net_t:s0:c724,c891",
"ResolvConfPath": "/var/lib/docker/containers/c73c70cd6501ff18a706cf580977f274cbc92053a9ac039ef704e8f156cc2650/resolv.conf",
"State": {
"ExitCode": 0,
"FinishedAt": "0001-01-01T00:00:00Z",
"Paused": false,
"Pid": 8779,
"Restarting": false,
"Running": true,
"StartedAt": "2015-03-23T15:21:24.501188118Z"
},
"Volumes": {
"/opt/graylog/plugin": "/var/lib/docker/vfs/dir/9586d426fc5836adfa2b42fbdba92104db44554f4510cf07aa455c5d465e0260",
"/var/log/graylog": "/graylog/logs",
"/var/opt/graylog/data": "/graylog/data"
},
"VolumesRW": {
"/opt/graylog/plugin": true,
"/var/log/graylog": true,
"/var/opt/graylog/data": true
}
}

open file limit is too low

Hi, after
$ docker pull graylog2/allinone
$ docker run -t -p 9000:9000 -p 12201:12201 graylog2/allinone

I get the message:
"open file limit is too low: [50000]. Set it to at least 64000."

Shouldn't this be resolved within the Docker image configuration?
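Until the image handles this itself, the limit can also be raised from the host when starting the container; assuming a Docker version that supports --ulimit, something like:

$ docker run -t --ulimit nofile=64000:64000 -p 9000:9000 -p 12201:12201 graylog2/allinone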

Not able to change webinterface password

Hello

First off, thanks for this awesome project. I just started using it. One issue I have is that I am not able to change the web interface password from admin. I ran sudo graylog2-ctl set-admin-password and then sudo graylog2-ctl reconfigure, but it doesn't work; the password remains the default admin. I also edited /opt/graylog2/conf/graylog2.conf, but when I run the reconfigure command it gets overwritten. Any idea where I am going wrong?

BTW i am using an AMI image of graylog2.

thanks

Automatically adjust Elasticsearch HEAP size

It appears that the heap size for Elasticsearch is supposed to be automatically set to 50% of total memory:

https://github.com/Graylog2/omnibus-graylog2/blob/1.0.0-1/files/graylog-cookbooks/graylog/recipes/elasticsearch.rb#L35

However, on my VM with 16 GB of memory I am not sure if it is using the right HEAP size (unless I am reading my PS output wrong, which could be the case):

ubuntu@graylog:/opt/graylog/elasticsearch/config$ free -h
total used free shared buffers cached
Mem: 15G 7.4G 8.2G 512K 173M 5.3G
-/+ buffers/cache: 2.0G 13G
Swap: 4.0G 0B 4.0G

ubuntu@graylog:/opt/graylog/elasticsearch/config$ ps faux | grep elasticsearch
root 884 0.0 0.0 4212 440 ? Ss Feb19 0:00 _ runsv elasticsearch
root 898 0.0 0.0 4356 580 ? S Feb19 0:00 | _ svlogd -tt /var/log/graylog/elasticsearch
graylog 6137 5.5 3.5 10015160 583852 ? Ssl Feb19 57:16 | _ /opt/graylog/embedded/jre/bin/java -Xms256m -Xmx1g -Xss256k -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -XX:+DisableExplicitGC -server -Djava.net.preferIPv4Stack=true -Des.config=/opt/graylog/conf/elasticsearch.yml -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.port=3333 -Xms128m -Xmx8024m -Xss256k -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:SurvivorRatio=8 -XX:MaxTenuringThreshold=1 -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -Delasticsearch -Des.foreground=yes -Des.path.home=/opt/graylog/elasticsearch -cp :/opt/graylog/elasticsearch/lib/:/opt/graylog/elasticsearch/lib/sigar/:/opt/graylog/elasticsearch/lib/elasticsearch-1.4.1.jar:/opt/graylog/elasticsearch/lib/:/opt/graylog/elasticsearch/lib/sigar/ org.elasticsearch.bootstrap.Elasticsearch

Thanks!
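For what it's worth, when the same JVM flag appears more than once on a command line, the last occurrence wins, so the effective setting in the ps output above should be -Xmx8024m (roughly 50% of 16 GB) rather than -Xmx1g. A quick way to pull the flags out for inspection:

$ ps aux | grep '[e]lasticsearch' | tr ' ' '\n' | grep -E '^-Xm[sx]'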

Issue with etcd

Hi there!
Could you please help me with following issue.

Graylog suddenly stopped writing messages to indices (0 msg/s on 1 node), while the journal is still collecting messages (there are 621,536 unprocessed messages in the journal).

When running sudo graylog-ctl status I see that etcd is always down for 0 or 1 seconds.

When trying to reconfigure I got:

Error executing action run on resource 'ruby_block[add node to server list]'
================================================================================

Errno::ECONNREFUSED
-------------------
Connection refused - connect(2) for "127.0.0.1" port 4001

When running sudo graylog-ctl tail etcd I got the following:

2015-05-14_09:59:52.99557 2015/05/14 12:59:52 etcd: listening for peers on http://localhost:2380
2015-05-14_09:59:52.99562 2015/05/14 12:59:52 etcd: listening for peers on http://localhost:7001
2015-05-14_09:59:52.99565 2015/05/14 12:59:52 etcd: listening for client requests on http://0.0.0.0:2379
2015-05-14_09:59:52.99568 2015/05/14 12:59:52 etcd: listening for client requests on http://0.0.0.0:4001
2015-05-14_09:59:52.99774 2015/05/14 12:59:52 etcdserver: recovered store from snapshot at index 7430743
2015-05-14_09:59:52.99823 2015/05/14 12:59:52 etcdserver: name = default
2015-05-14_09:59:52.99824 2015/05/14 12:59:52 etcdserver: data dir = /var/opt/graylog/data/etcd
2015-05-14_09:59:52.99825 2015/05/14 12:59:52 etcdserver: heartbeat = 100ms
2015-05-14_09:59:52.99826 2015/05/14 12:59:52 etcdserver: election = 1000ms
2015-05-14_09:59:52.99828 2015/05/14 12:59:52 etcdserver: snapshot count = 10000
2015-05-14_09:59:52.99829 2015/05/14 12:59:52 etcdserver: advertise client URLs = http://localhost:2379,http://localhost:4001
2015-05-14_09:59:52.99830 2015/05/14 12:59:52 etcdserver: loaded cluster information from store: default=http://localhost:2380,default=http://localhost:7001
2015-05-14_09:59:53.04518 2015/05/14 12:59:53 etcdserver: read wal error: unexpected EOF

java.net.ConnectException: Connection refused: /127.0.0.1:12900 to http://127.0.0.1:12900/system/cluster/node

I don't know if I have missed anything; I tried to start the Docker container with the two steps below:

docker pull graylog2/allinone
docker run -t -p 9000:9000 -p 12201:12201 graylog2/allinone

docker version: 1.3.4
OS: centos 6 x86_64

 netstat -ltn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address               Foreign Address             State
tcp        0      0 0.0.0.0:80                  0.0.0.0:*                   LISTEN
tcp        0      0 0.0.0.0:22                  0.0.0.0:*                   LISTEN
tcp        0      0 127.0.0.1:25                0.0.0.0:*                   LISTEN
tcp        0      0 :::9000                     :::*                        LISTEN
tcp        0      0 :::12201                    :::*                        LISTEN
tcp        0      0 :::9200                     :::*                        LISTEN
tcp        0      0 :::9300                     :::*                        LISTEN
tcp        0      0 :::5044                     :::*                        LISTEN
tcp        0      0 :::9301                     :::*                        LISTEN
tcp        0      0 :::9302                     :::*                        LISTEN
tcp        0      0 :::22                       :::*                        LISTEN
tcp        0      0 :::9303                     :::*                        LISTEN

Yes, port 12900 is not listening.

I always fail with the exception below:

2015-03-04_05:47:05.38666 java.util.concurrent.ExecutionException: java.net.ConnectException: Connection refused: /127.0.0.1:12900 to http://127.0.0.1:12900/system/cluster/node
2015-03-04_05:47:05.38667       at com.ning.http.client.providers.netty.NettyResponseFuture.abort(NettyResponseFuture.java:342) ~[com.ning.async-http-client-1.8.14.jar:na]
2015-03-04_05:47:05.38668       at com.ning.http.client.providers.netty.NettyConnectListener.operationComplete(NettyConnectListener.java:108) ~[com.ning.async-http-client-1.8.14.jar:na]
2015-03-04_05:47:05.38669       at org.jboss.netty.channel.DefaultChannelFuture.notifyListener(DefaultChannelFuture.java:431) ~[io.netty.netty-3.9.3.Final.jar:na]
2015-03-04_05:47:05.38670       at org.jboss.netty.channel.DefaultChannelFuture.notifyListeners(DefaultChannelFuture.java:422) ~[io.netty.netty-3.9.3.Final.jar:na]
2015-03-04_05:47:05.38671       at org.jboss.netty.channel.DefaultChannelFuture.setFailure(DefaultChannelFuture.java:384) ~[io.netty.netty-3.9.3.Final.jar:na]
2015-03-04_05:47:05.38672 Caused by: java.net.ConnectException: Connection refused: /127.0.0.1:12900 to http://127.0.0.1:12900/system/cluster/node
2015-03-04_05:47:05.38675       at com.ning.http.client.providers.netty.NettyConnectListener.operationComplete(NettyConnectListener.java:104) ~[com.ning.async-http-client-1.8.14.jar:na]
2015-03-04_05:47:05.38676       at org.jboss.netty.channel.DefaultChannelFuture.notifyListener(DefaultChannelFuture.java:431) ~[io.netty.netty-3.9.3.Final.jar:na]
2015-03-04_05:47:05.38677       at org.jboss.netty.channel.DefaultChannelFuture.notifyListeners(DefaultChannelFuture.java:422) ~[io.netty.netty-3.9.3.Final.jar:na]
2015-03-04_05:47:05.38678       at org.jboss.netty.channel.DefaultChannelFuture.setFailure(DefaultChannelFuture.java:384) ~[io.netty.netty-3.9.3.Final.jar:na]
2015-03-04_05:47:05.38678       at org.jboss.netty.channel.socket.nio.NioClientBoss.processSelectedKeys(NioClientBoss.java:109) ~[io.netty.netty-3.9.3.Final.jar:na]
2015-03-04_05:47:05.38679 Caused by: java.net.ConnectException: Connection refused: /127.0.0.1:12900
2015-03-04_05:47:05.38680       at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.7.0_71]
2015-03-04_05:47:05.38680       at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739) ~[na:1.7.0_71]
2015-03-04_05:47:05.38681       at org.jboss.netty.channel.socket.nio.NioClientBoss.connect(NioClientBoss.java:152) ~[io.netty.netty-3.9.3.Final.jar:na]
2015-03-04_05:47:05.38683       at org.jboss.netty.channel.socket.nio.NioClientBoss.processSelectedKeys(NioClientBoss.java:105) ~[io.netty.netty-3.9.3.Final.jar:na]
2015-03-04_05:47:05.38684       at org.jboss.netty.channel.socket.nio.NioClientBoss.process(NioClientBoss.java:79) ~[io.netty.netty-3.9.3.Final.jar:na]

So does this mean the rest_listen_uri is using http://0.0.0.0:12900/? According to some other posts related to this issue, it should be http://127.0.0.1:12900/.
Is there any way I can configure this, or is there an option for the run command? I could not find one.

Thanks,
Sophia
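One way to check what the server is actually configured with, using the config path that appears in the chef output further down this page (hedged, since the container layout may differ):

$ docker exec -i -t $(docker ps -q) grep rest_listen_uri /opt/graylog/conf/graylog.conf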

Docker container cannot restart services after being stopped

Observed behaviour

After starting a container via docker run -d -t -i -P graylog2/allinone the system boots up successfully and is fully functioning (accepting log messages, etc). However, after issuing a docker stop <ContainerId> and then restarting this container via docker start <ContainerId>, the container runs into an endless loop, as the services (such as Elasticsearch and its subordinate services) cannot be started. One of these error messages:

2014-12-31 10:11:34,820 ERROR: org.graylog2.initializers.IndexerSetupService - Could not connect to Elasticsearch at http://172.17.0.84:9200/, is it running?
java.net.ConnectException: No route to host to http://172.17.0.84:9200/_nodes

ERROR: Could not successfully connect to Elasticsearch. Check that your cluster state is not RED and that Elasticsearch is running properly.

Need help?

* Official documentation: http://graylog2.org/resources/documentation
* Community support: http://graylog2.org/resources/community-support
* Commercial support: http://graylog2.org/products

But we also got some specific help pages that might help you in this case:

* http://www.graylog2.org/resources/documentation/setup/elasticsearch

Terminating. :(

Additional notes

Platform

I am running on CentOS 7.0 (3.10.0-123.13.2.el7.x86_64) with docker version:

Client version: 1.3.2
Client API version: 1.15
Go version (client): go1.3.3
Git commit (client): 39fa2fa/1.3.2
OS/Arch (client): linux/amd64
Server version: 1.3.2
Server API version: 1.15
Go version (server): go1.3.3
Git commit (server): 39fa2fa/1.3.2

graylog2 docker image:

REPOSITORY          TAG    IMAGE ID     CREATED    VIRTUAL SIZE
graylog2/allinone   latest 387b6e81d77d 7 days ago 1.153 GB

Docker image: No Graylog servers available. Cannot log in.

Just tried to use the docker image.

The webserver is reachable, but says No Graylog servers available. Cannot log in.

The log seems fine, and ends with "Chef client finished, graylog reconfigured".

Full log:

    +# If mongodb_max_connections is 100, and mongodb_threads_allowed_to_block_multiplier is 5, then 500 threads can block. More than that and an exception will be thrown.
    +# http://api.mongodb.org/java/current/com/mongodb/MongoOptions.html#threadsAllowedToBlockForConnectionMultiplier
    +mongodb_threads_allowed_to_block_multiplier = 5
    +
    +# Drools Rule File (Use to rewrite incoming log messages)
    +# See: http://graylog2.org/resources/documentation/general/rewriting
    +#rules_file = /etc/graylog.drl
    +
    +# Email transport
    +transport_email_enabled = false
    +transport_email_hostname = 
    +transport_email_port = 587
    +transport_email_use_auth = false
    +transport_email_use_tls = true
    +transport_email_use_ssl = true
    +transport_email_auth_username = 
    +transport_email_auth_password = 
    +transport_email_subject_prefix = [graylog]
    +transport_email_from_email = graylog@542d341aac3d
    +
    +# Specify and uncomment this if you want to include links to the stream in your stream alert mails.
    +# This should define the fully qualified base url to your web interface exactly the same way as it is accessed by your users.
    +#
    +transport_email_web_interface_url = http://542d341aac3d
    +
    +# HTTP proxy for outgoing HTTP calls
    +#http_proxy_uri =
    +
    +# Disable the optimization of Elasticsearch indices after index cycling. This may take some load from Elasticsearch
    +# on heavily used systems with large indices, but it will decrease search performance. The default is to optimize
    +# cycled indices.
    +#disable_index_optimization = true
    +
    +# Optimize the index down to <= index_optimization_max_num_segments. A higher number may take some load from Elasticsearch
    +# on heavily used systems with large indices, but it will decrease search performance. The default is 1.
    +#index_optimization_max_num_segments = 1
    +
    +# Disable the index range calculation on all open/available indices and only calculate the range for the latest
    +# index. This may speed up index cycling on systems with large indices but it might lead to wrong search results
    +# in regard to the time range of the messages (i. e. messages within a certain range may not be found). The default
    +# is to calculate the time range on all open/available indices.
    +#disable_index_range_calculation = true
    +
    +# The threshold of the garbage collection runs. If GC runs take longer than this threshold, a system notification
    +# will be generated to warn the administrator about possible problems with the system. Default is 1 second.
    +#gc_warning_threshold = 1s
    +
    +# Connection timeout for a configured LDAP server (e. g. ActiveDirectory) in milliseconds.
    +#ldap_connection_timeout = 2000
    +
    +# https://github.com/bazhenov/groovy-shell-server
    +#groovy_shell_enable = false
    +#groovy_shell_port = 6789
    +
    +# Enable collection of Graylog-related metrics into MongoDB
    +#enable_metrics_collection = false
    +
    +# Disable the use of SIGAR for collecting system stats
    +#disable_sigar = false
    +
    +# TELEMETRY
    +# Enable publishing Telemetry data
    +#telemetry_enabled = false
    +
    +# Base URL of the Telemetry service
    +#telemetry_url = https://telemetry-in.graylog.com/submit/
    +
    +# Authentication token for the Telemetry service
    +#telemetry_token = 
    +
    +# How often the Telemetry data should be reported
    +#telemetry_report_interval = 1m
    +
    +# Number of Telemetry data sets to store locally if the connection to the Telemetry service fails
    +#telemetry_max_queue_size = 10
    +
    +# TTL for Telemetry data in local cache
    +#telemetry_cache_timeout = 1m
    +
    +# Connect timeout for HTTP connections
    +#telemetry_service_connect_timeout =  1s
    +
    +# Write timeout for HTTP connections
    +#telemetry_service_write_timeout = 5s
    +
    +# Read timeout for HTTP connections
    +#telemetry_service_read_timeout = 5s
    - change mode from '' to '0644'
    - change owner from '' to 'graylog'
    - change group from '' to 'graylog'
  * directory[/opt/graylog/sv/graylog-server] action create
    - create new directory /opt/graylog/sv/graylog-server
    - change mode from '' to '0755'
    - change owner from '' to 'root'
    - change group from '' to 'root'
  * directory[/opt/graylog/sv/graylog-server/log] action create
    - create new directory /opt/graylog/sv/graylog-server/log
    - change mode from '' to '0755'
    - change owner from '' to 'root'
    - change group from '' to 'root'
  * directory[/opt/graylog/sv/graylog-server/log/main] action create
    - create new directory /opt/graylog/sv/graylog-server/log/main
    - change mode from '' to '0755'
    - change owner from '' to 'root'
    - change group from '' to 'root'
  * template[/opt/graylog/sv/graylog-server/run] action create
    - create new file /opt/graylog/sv/graylog-server/run
    - update content in file /opt/graylog/sv/graylog-server/run from none to cd9f4a
    --- /opt/graylog/sv/graylog-server/run  2015-03-29 20:55:48.942953000 +0000
    +++ /tmp/chef-rendered-template20150329-20-72blib   2015-03-29 20:55:48.942953000 +0000
    @@ -1 +1,11 @@
    +#!/bin/sh
    +exec 2>&1
    +
    +umask 077
    +export JAVA_HOME=/opt/graylog/embedded/jre
    +export GRAYLOG_SERVER_JAVA_OPTS="-Xms1g -Xmx1g -XX:NewRatio=1 -server -XX:+ResizeTLAB -XX:+UseConcMarkSweepGC -XX:+CMSConcurrentMTEnabled -XX:+CMSClassUnloadingEnabled -XX:+UseParNewGC -XX:-OmitStackTraceInFastThrow"
    +
    +# check if mongodb is up
    +timeout 600 bash -c "until curl -s http://127.0.0.1:27017; do sleep 1; done"
    +exec chpst -P -U graylog -u graylog /opt/graylog/embedded/bin/authbind $JAVA_HOME/bin/java $GRAYLOG_SERVER_JAVA_OPTS -jar /opt/graylog/server/graylog.jar server -f /opt/graylog/conf/graylog.conf
    - change mode from '' to '0755'
    - change owner from '' to 'root'
    - change group from '' to 'root'
  * template[/opt/graylog/sv/graylog-server/log/run] action create
    - create new file /opt/graylog/sv/graylog-server/log/run
    - update content in file /opt/graylog/sv/graylog-server/log/run from none to b6ccf1
    --- /opt/graylog/sv/graylog-server/log/run  2015-03-29 20:55:48.954953001 +0000
    +++ /tmp/chef-rendered-template20150329-20-1w2pfnx  2015-03-29 20:55:48.954953001 +0000
    @@ -1 +1,3 @@
    +#!/bin/sh
    +exec svlogd -tt /var/log/graylog/server
    - change mode from '' to '0755'
    - change owner from '' to 'root'
    - change group from '' to 'root'
  * template[/var/log/graylog/server/config] action create
    - create new file /var/log/graylog/server/config
    - update content in file /var/log/graylog/server/config from none to 623c00
    --- /var/log/graylog/server/config  2015-03-29 20:55:48.966953001 +0000
    +++ /tmp/chef-rendered-template20150329-20-1iowdm6  2015-03-29 20:55:48.966953001 +0000
    @@ -1 +1,7 @@
    +s209715200
    +n30
    +t86400
    +!gzip
    +
    +
    - change owner from '' to 'root'
    - change group from '' to 'root'
  * ruby_block[reload graylog-server svlogd configuration] action nothing (skipped due to action :nothing)
  * file[/opt/graylog/sv/graylog-server/down] action delete (up to date)
  * link[/opt/graylog/init/graylog-server] action create
    - create symlink at /opt/graylog/init/graylog-server to /opt/graylog/embedded/bin/sv
  * link[/opt/graylog/service/graylog-server] action create
    - create symlink at /opt/graylog/service/graylog-server to /opt/graylog/sv/graylog-server
  * ruby_block[supervise_graylog-server_sleep] action run
    - execute the ruby block supervise_graylog-server_sleep
  * service[graylog-server] action nothing (skipped due to action :nothing)
  * execute[/opt/graylog/embedded/bin/graylog-ctl start graylog-server] action run
    - execute /opt/graylog/embedded/bin/graylog-ctl start graylog-server
  * ruby_block[add node to server list] action run
    - execute the ruby block add node to server list
Recipe: graylog::graylog-web
  * directory[/var/log/graylog/web] action create
    - create new directory /var/log/graylog/web
    - change mode from '' to '0700'
    - change owner from '' to 'graylog'
  * template[/opt/graylog/conf/graylog-web-interface.conf] action create
    - create new file /opt/graylog/conf/graylog-web-interface.conf
    - update content in file /opt/graylog/conf/graylog-web-interface.conf from none to 5b47ed
    --- /opt/graylog/conf/graylog-web-interface.conf    2015-03-29 20:55:53.186953000 +0000
    +++ /tmp/chef-rendered-template20150329-20-1nmtp3   2015-03-29 20:55:53.186953000 +0000
    @@ -1 +1,29 @@
    +# graylog-server REST URIs (one or more, comma separated) For example: "http://127.0.0.1:12900/,http://127.0.0.1:12910/"
    +graylog2-server.uris = "http://127.0.0.1:12900/"
    +
    +# Learn how to configure custom logging in the documentation:
    +#    http://support.torch.sh/help/kb/graylog-web-interface/configuring-web-interface-logging
    +
    +# Secret key
    +# ~~~~~
    +# The secret key is used to secure cryptographics functions. Set this to a long and randomly generated string.
    +# If you deploy your application to several instances be sure to use the same key!
    +# Generate for example with: pwgen -s 96
    +application.secret = "9661826b5141161d477b73e44397f2a0b75e700169b30f636e1f863152b73e3276567705e3d5d012e3e9c5a8cd81ea4a75d80b95fafe2691dc5a0bb6a93d4e9e"
    +
    +# Web interface timezone
    +# Graylog stores all timestamps in UTC. To properly display times, set the default timezone of the interface.
    +# If you leave this out, Graylog will pick your system default as the timezone. Usually you will want to configure it explicitly.
    +timezone = Etc/UTC
    +
    +# Message field limit
    +# Your web interface can cause high load in your browser when you have a lot of different message fields. The default
    +# limit of message fields is 100. Set it to 0 if you always want to get all fields. They are for example used in the
    +# search result sidebar or for autocompletion of field names.
    +field_list_limit = 100
    +
    +# Use this to run Graylog with a path prefix
    +
    +# You usually do not want to change this.
    +application.global=lib.Global
    - change mode from '' to '0644'
    - change owner from '' to 'graylog'
    - change group from '' to 'graylog'
  * template[/opt/graylog/conf/web-logger.xml] action create
    - create new file /opt/graylog/conf/web-logger.xml
    - update content in file /opt/graylog/conf/web-logger.xml from none to f80476
    --- /opt/graylog/conf/web-logger.xml    2015-03-29 20:55:53.202953000 +0000
    +++ /tmp/chef-rendered-template20150329-20-6tyc4s   2015-03-29 20:55:53.202953000 +0000
    @@ -1 +1,35 @@
    +<configuration>
    +     <conversionRule conversionWord="coloredLevel" converterClass="play.api.Logger$ColoredLevel" />
    +
    +    <appender name="FILE" class="ch.qos.logback.core.FileAppender">
    +     <file>/var/log/graylog/web/application.log</file>
    +     <encoder>
    +       <pattern>%date - [%level] - from %logger in %thread %n%message%n%xException%n</pattern>
    +     </encoder>
    +   </appender>
    +
    +    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    +        <encoder>
    +           <pattern>%coloredLevel %logger{15} - %message%n%xException{5}</pattern>
    +        </encoder>
    +    </appender>
    +
    +    <root level="ERROR">
    +        <appender-ref ref="STDOUT" />
    +        <appender-ref ref="FILE" />
    +    </root>
    +
    +    <logger name="com.jolbox.bonecp" level="DEBUG">
    +        <appender-ref ref="STDOUT" />
    +    </logger>
    +
    +    <logger name="play" level="INFO">
    +        <appender-ref ref="STDOUT" />
    +    </logger>
    +
    +    <logger name="application" level="INFO">
    +        <appender-ref ref="STDOUT" />
    +    </logger>
    +
    +</configuration>
    - change mode from '' to '0644'
    - change owner from '' to 'graylog'
    - change group from '' to 'graylog'
  * directory[/opt/graylog/sv/graylog-web] action create
    - create new directory /opt/graylog/sv/graylog-web
    - change mode from '' to '0755'
    - change owner from '' to 'root'
    - change group from '' to 'root'
  * directory[/opt/graylog/sv/graylog-web/log] action create
    - create new directory /opt/graylog/sv/graylog-web/log
    - change mode from '' to '0755'
    - change owner from '' to 'root'
    - change group from '' to 'root'
  * directory[/opt/graylog/sv/graylog-web/log/main] action create
    - create new directory /opt/graylog/sv/graylog-web/log/main
    - change mode from '' to '0755'
    - change owner from '' to 'root'
    - change group from '' to 'root'
  * template[/opt/graylog/sv/graylog-web/run] action create
    - create new file /opt/graylog/sv/graylog-web/run
    - update content in file /opt/graylog/sv/graylog-web/run from none to 55cd07
    --- /opt/graylog/sv/graylog-web/run 2015-03-29 20:55:53.218953000 +0000
    +++ /tmp/chef-rendered-template20150329-20-16q8dos  2015-03-29 20:55:53.218953000 +0000
    @@ -1 +1,9 @@
    +#!/bin/sh
    +exec 2>&1
    +
    +umask 077
    +export JAVA_HOME=/opt/graylog/embedded/jre
    +
    +rm -f /var/opt/graylog/web.pid
    +exec chpst -P -U graylog -u graylog /opt/graylog/web/bin/graylog-web-interface -Dconfig.file=/opt/graylog/conf/graylog-web-interface.conf -Dhttp.port=9000 -Dhttp.address=0.0.0.0 -Dpidfile.path=/var/opt/graylog/web.pid -Dlogger.file=/opt/graylog/conf/web-logger.xml
    - change mode from '' to '0755'
    - change owner from '' to 'root'
    - change group from '' to 'root'
  * template[/opt/graylog/sv/graylog-web/log/run] action create
    - create new file /opt/graylog/sv/graylog-web/log/run
    - update content in file /opt/graylog/sv/graylog-web/log/run from none to 591533
    --- /opt/graylog/sv/graylog-web/log/run 2015-03-29 20:55:53.226953001 +0000
    +++ /tmp/chef-rendered-template20150329-20-i6uqqz   2015-03-29 20:55:53.226953001 +0000
    @@ -1 +1,3 @@
    +#!/bin/sh
    +exec svlogd -tt /var/log/graylog/web
    - change mode from '' to '0755'
    - change owner from '' to 'root'
    - change group from '' to 'root'
  * template[/var/log/graylog/web/config] action create
    - create new file /var/log/graylog/web/config
    - update content in file /var/log/graylog/web/config from none to 623c00
    --- /var/log/graylog/web/config 2015-03-29 20:55:53.230953000 +0000
    +++ /tmp/chef-rendered-template20150329-20-12aacbk  2015-03-29 20:55:53.230953000 +0000
    @@ -1 +1,7 @@
    +s209715200
    +n30
    +t86400
    +!gzip
    +
    +
    - change owner from '' to 'root'
    - change group from '' to 'root'
  * ruby_block[reload graylog-web svlogd configuration] action nothing (skipped due to action :nothing)
  * file[/opt/graylog/sv/graylog-web/down] action delete (up to date)
  * link[/opt/graylog/init/graylog-web] action create
    - create symlink at /opt/graylog/init/graylog-web to /opt/graylog/embedded/bin/sv
  * link[/opt/graylog/service/graylog-web] action create
    - create symlink at /opt/graylog/service/graylog-web to /opt/graylog/sv/graylog-web
  * ruby_block[supervise_graylog-web_sleep] action run
    - execute the ruby block supervise_graylog-web_sleep
  * service[graylog-web] action nothing (skipped due to action :nothing)
  * execute[/opt/graylog/embedded/bin/graylog-ctl start graylog-web] action run
    - execute /opt/graylog/embedded/bin/graylog-ctl start graylog-web
Recipe: graylog::nginx
  * directory[/opt/graylog/conf/nginx/ca] action create
    - create new directory /opt/graylog/conf/nginx/ca
    - change mode from '' to '0700'
    - change owner from '' to 'root'
  * directory[/var/log/graylog/nginx] action create
    - create new directory /var/log/graylog/nginx
    - change mode from '' to '0700'
    - change owner from '' to 'root'
  * file[/opt/graylog/conf/nginx/ca/graylog.key] action create
    - create new file /opt/graylog/conf/nginx/ca/graylog.key
    - update content in file /opt/graylog/conf/nginx/ca/graylog.key from none to 63fb58
    --- /opt/graylog/conf/nginx/ca/graylog.key  2015-03-29 20:55:55.426953001 +0000
    +++ /opt/graylog/conf/nginx/ca/.graylog.key20150329-20-1gccn9r  2015-03-29 20:55:55.426953001 +0000
    @@ -1 +1,28 @@
    +-----BEGIN RSA PRIVATE KEY-----
    +MIIEogIBAAKCAQEAvRLGvE4gHQ9Xfw999PV6BDQA2K3ppc7VtnyuKRuRdmmNp2XC
    +YSeQmAVSowF4YsQMlN4z2mTRZippfNiH2lVmbRLAc5t+i9bOIvraj0lil+W7UBGO
    +zik8ht+/gvj9hVoVPLrj6Jn6JYs4Fwvz6eQn0ef5GMIexUMQoP5dAnGYsdKhjjBj
    +5S0iDWCDu7/LJoYGRVQElwzGcAig41u3wSU4wPJaJZnvhxvMhNyIRJgZUb+sKh+0
    +GrGKS5t1Sq8NHseR2KI3o9CiVt8QUwz8pT/lwcxJCcqsEn3boIJHqvEP4OiJOavF
    +kZZlwjOfD6JyIYHykEdynuLIe0u/pOfEmowv7wIDAQABAoIBAEVX8ZN2g8ikq85p
    +/CQvM8T+3aCaiCrLpQ38xFNHTR5EsDNI2vWO8TUQHrKyA1kV1hdzN0lN2I7D11R2
    +hbzJvXsbeYHs8YiQC6JAppAOth5Hn19KUTnDXfOJdE+wyipyU3+me5f/gQLsAHJT
    +a+3IQ+J0VaOC7o4ifqLNJ4eR6hKtMBXYyb0iFLy0r9uNNRVV90M2tjRXgy34zAQZ
    +wMm6K2YHbIsNbNqurkLxSHqacVQxge+bCCHAXzkNFG819TcfHQImVj7D2ZQ2mFHQ
    +gPhLb46BmiJzgaiJiCTTk4afInzADkpUEmy2/TeuqyHeOCQxuY+JG9QBKNym66HI
    +nXsQHyECgYEA5XGRGQ/wmHqx+0Ei3kCQ2GBIufVrf0CI6Q6KSvC40y6bhoVG4CrY
    +aMvWzXAcfSLktGO8wF3JC21qzMPl/sCqIze8m6/mz0HGerjwzEH20HSvbHlaiHvv
    +0Z7jyqqb8Jz3A9c8bj799oqrom+eLvRH9zH5MdnoL0jy+SIhxiKMLcUCgYEA0vUI
    +85rJk0G4qqTSLA0Twv3nMR9VHANc2Q1KXWXB2I3rx/gmf54W5vbt1fgcOSVo8LDn
    +iOHBE+CSlhOP4PEMY8BYP7+o+r6yHnr1bvZ5RL1xol5G8OPuGd893Ikv1TkYYo17
    +LTkEy5gHFV8eRXOWHO0KApmkMNfcX1HHESHXFiMCgYAq3MthZjPpGEq1iFaONHua
    +oGoVqz5YuGKbPycgltXARd2yBKXX7Mke0q2fFUmNKv6UoGk7eom7Q8aG2DXYIH/o
    +Mlperz6sCzqb5H6/ebc0/AdleUorYxPLEia1zqdxDLGsmwHkCoqBCyjDIJzpYqMr
    +D7/gyzdv1e3mErVCgWO0jQKBgEbdNy+V3IbR+fWgvlU741qKLiJrMwzg+EyVUVjE
    +ePSE4CJhcpVGBs15P3W0Dc8IiRLpai2qIFDMDJHLanaWoqHTmBF6EYqBipYAmfe3
    +Zg84UDbJ0qzS9EXOnxo5H09SCaX5fto3ICxAGokMVb/gzxlSax1qfSRHLuj6MJPJ
    +uVXfAoGAH8Gw9MCiG8zPqEMOtHhAUiRl37TuOFKlb6xAjW8QSXZUq+tM+OZSA8FK
    +E0QTRg4vNXi3EVzClPdSWZ0S+p+c51dx6z0V3iOLhaJFKBCubV0gT8L2dx24IZBp
    +Ece4Cd5wZnFdIml+jRIhEtcVW0Knv2sYQxg6pzc08AujD6mtYiA=
    +-----END RSA PRIVATE KEY-----
    - change mode from '' to '0644'
    - change owner from '' to 'root'
    - change group from '' to 'root'
  * file[/opt/graylog/conf/nginx/ca/graylog-ssl.conf] action create
    - create new file /opt/graylog/conf/nginx/ca/graylog-ssl.conf
    - update content in file /opt/graylog/conf/nginx/ca/graylog-ssl.conf from none to b7a628
    --- /opt/graylog/conf/nginx/ca/graylog-ssl.conf 2015-03-29 20:55:55.446953000 +0000
    +++ /opt/graylog/conf/nginx/ca/.graylog-ssl.conf20150329-20-bclyhz  2015-03-29 20:55:55.446953000 +0000
    @@ -1 +1,13 @@
    +  [ req ]
    +  distinguished_name = req_distinguished_name
    +  prompt = no
    +
    +  [ req_distinguished_name ]
    +  C                      = DE
    +  ST                     = Hamburg
    +  L                      = Hamburg
    +  O                      = Graylog
    +  OU                     = Operations
    +  CN                     = 542d341aac3d
    +  emailAddress           = graylog@542d341aac3d
    - change mode from '' to '0644'
    - change owner from '' to 'root'
    - change group from '' to 'root'
  * ruby_block[create crtfile] action run
  Recipe: <Dynamically Defined Resource>
    * file[/opt/graylog/conf/nginx/ca/graylog.crt] action create
      - create new file /opt/graylog/conf/nginx/ca/graylog.crt
      - update content in file /opt/graylog/conf/nginx/ca/graylog.crt from none to 435ba6
      --- /opt/graylog/conf/nginx/ca/graylog.crt    2015-03-29 20:55:55.482953000 +0000
      +++ /opt/graylog/conf/nginx/ca/.graylog.crt20150329-20-1eif1c1    2015-03-29 20:55:55.478953000 +0000
      @@ -1 +1,23 @@
      +-----BEGIN CERTIFICATE-----
      +MIIDpjCCAo4CCQCHBhBY29RHcjANBgkqhkiG9w0BAQUFADCBlDELMAkGA1UEBhMC
      +REUxEDAOBgNVBAgMB0hhbWJ1cmcxEDAOBgNVBAcMB0hhbWJ1cmcxEDAOBgNVBAoM
      +B0dyYXlsb2cxEzARBgNVBAsMCk9wZXJhdGlvbnMxFTATBgNVBAMMDDU0MmQzNDFh
      +YWMzZDEjMCEGCSqGSIb3DQEJARYUZ3JheWxvZ0A1NDJkMzQxYWFjM2QwHhcNMTUw
      +MzI5MjA1NTU1WhcNMjUwMzI2MjA1NTU1WjCBlDELMAkGA1UEBhMCREUxEDAOBgNV
      +BAgMB0hhbWJ1cmcxEDAOBgNVBAcMB0hhbWJ1cmcxEDAOBgNVBAoMB0dyYXlsb2cx
      +EzARBgNVBAsMCk9wZXJhdGlvbnMxFTATBgNVBAMMDDU0MmQzNDFhYWMzZDEjMCEG
      +CSqGSIb3DQEJARYUZ3JheWxvZ0A1NDJkMzQxYWFjM2QwggEiMA0GCSqGSIb3DQEB
      +AQUAA4IBDwAwggEKAoIBAQC9Esa8TiAdD1d/D3309XoENADYremlztW2fK4pG5F2
      +aY2nZcJhJ5CYBVKjAXhixAyU3jPaZNFmKml82IfaVWZtEsBzm36L1s4i+tqPSWKX
      +5btQEY7OKTyG37+C+P2FWhU8uuPomfolizgXC/Pp5CfR5/kYwh7FQxCg/l0CcZix
      +0qGOMGPlLSINYIO7v8smhgZFVASXDMZwCKDjW7fBJTjA8lolme+HG8yE3IhEmBlR
      +v6wqH7QasYpLm3VKrw0ex5HYojej0KJW3xBTDPylP+XBzEkJyqwSfduggkeq8Q/g
      +6Ik5q8WRlmXCM58PonIhgfKQR3Ke4sh7S7+k58SajC/vAgMBAAEwDQYJKoZIhvcN
      +AQEFBQADggEBABgmp1i8YgqrquP7gmJfiQHt/G81nxcIbBTsHoRXFD+wUcE3o/ZZ
      +/U7OBfb+E/Te4lktdiUoCyhvM+RTxmIhIcalr7SzFJ0urQvx3WF20/KHBIHxwH+O
      +4m8ZmsnP7vAZI7MpgGJ9r9CKxzobGqVlwl0tI9I+dDYprOqF+FVOves610Gsdso2
      +r5obxfej3VKw86ONxWvAMPMbU11mBIEywW3fJI2EEI0RHRFYYrz/9UmqgtMyqU2K
      +eaN1Zm7shRlQcpp0g9HQ/AYML0I4eyN25loF5djNxdTA8YGttzvI1DYCPzyuGGhP
      +fEa50X8XsxVEdJf4m2zpmWIKZR3irqRiusY=
      +-----END CERTIFICATE-----
      - change mode from '' to '0644'
      - change owner from '' to 'root'
      - change group from '' to 'root'
    - execute the ruby block create crtfile
Recipe: graylog::nginx
  * template[/opt/graylog/conf/nginx/nginx.conf] action create
    - update content in file /opt/graylog/conf/nginx/nginx.conf from 95363d to 2b5bc7
    --- /opt/graylog/conf/nginx/nginx.conf  2015-03-13 15:56:15.000000000 +0000
    +++ /tmp/chef-rendered-template20150329-20-1pe8aof  2015-03-29 20:55:55.502953001 +0000
    @@ -1,118 +1,55 @@
    -
    -#user  nobody;
     worker_processes  1;
    +daemon off;

    -#error_log  logs/error.log;
    -#error_log  logs/error.log  notice;
    -#error_log  logs/error.log  info;
    -
    -#pid        logs/nginx.pid;
    -
    -
     events {
         worker_connections  1024;
     }

    -
     http {
    -    include       mime.types;
    +    include       /opt/graylog/conf/nginx/mime.types;
         default_type  application/octet-stream;

    -    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    -    #                  '$status $body_bytes_sent "$http_referer" '
    -    #                  '"$http_user_agent" "$http_x_forwarded_for"';
    -
    -    #access_log  logs/access.log  main;
    -
    -    sendfile        on;
    -    #tcp_nopush     on;
    -
    -    #keepalive_timeout  0;
    -    keepalive_timeout  65;
    -
    -    #gzip  on;
    -
         server {
    -        listen       80;
    -        server_name  localhost;
    -
    -        #charset koi8-r;
    -
    -        #access_log  logs/host.access.log  main;
    -
    -        location / {
    -            root   html;
    -            index  index.html index.htm;
    -        }
    -
    -        #error_page  404              /404.html;
    -
    -        # redirect server error pages to the static page /50x.html
    -        #
    -        error_page   500 502 503 504  /50x.html;
    -        location = /50x.html {
    -            root   html;
    -        }
    -
    -        # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    -        #
    -        #location ~ \.php$ {
    -        #    proxy_pass   http://127.0.0.1;
    -        #}
    -
    -        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    -        #
    -        #location ~ \.php$ {
    -        #    root           html;
    -        #    fastcgi_pass   127.0.0.1:9000;
    -        #    fastcgi_index  index.php;
    -        #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
    -        #    include        fastcgi_params;
    -        #}
    -
    -        # deny access to .htaccess files, if Apache's document root
    -        # concurs with nginx's one
    -        #
    -        #location ~ /\.ht {
    -        #    deny  all;
    -        #}
    +      listen 80;
    +      location / {
    +        proxy_pass http://localhost:9000/;
    +        proxy_set_header Host $host;
    +        proxy_set_header X-Real-IP $remote_addr;
    +        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    +        proxy_pass_request_headers on;
    +        proxy_connect_timeout 150;
    +        proxy_send_timeout 100;
    +        proxy_read_timeout 100;
    +        proxy_buffers 4 32k;
    +        client_max_body_size 8m;
    +        client_body_buffer_size 128k;
    +      }
         }
    +    
    +    server {
    +      listen 443;

    +      ssl on;
    +      ssl_certificate /opt/graylog/conf/nginx/ca/graylog.crt;
    +      ssl_certificate_key /opt/graylog/conf/nginx/ca/graylog.key;
    +      ssl_session_timeout 5m;
    +      ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    +      ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA;
    +      ssl_prefer_server_ciphers on;

    -    # another virtual host using mix of IP-, name-, and port-based configuration
    -    #
    -    #server {
    -    #    listen       8000;
    -    #    listen       somename:8080;
    -    #    server_name  somename  alias  another.alias;
    -
    -    #    location / {
    -    #        root   html;
    -    #        index  index.html index.htm;
    -    #    }
    -    #}
    -
    -
    -    # HTTPS server
    -    #
    -    #server {
    -    #    listen       443 ssl;
    -    #    server_name  localhost;
    -
    -    #    ssl_certificate      cert.pem;
    -    #    ssl_certificate_key  cert.key;
    -
    -    #    ssl_session_cache    shared:SSL:1m;
    -    #    ssl_session_timeout  5m;
    -
    -    #    ssl_ciphers  HIGH:!aNULL:!MD5;
    -    #    ssl_prefer_server_ciphers  on;
    -
    -    #    location / {
    -    #        root   html;
    -    #        index  index.html index.htm;
    -    #    }
    -    #}
    -
    +      location / {
    +        proxy_pass http://localhost:9000/;
    +        proxy_set_header Host $host;
    +        proxy_set_header X-Real-IP $remote_addr;
    +        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    +        proxy_pass_request_headers on;
    +        proxy_connect_timeout 150;
    +        proxy_send_timeout 100;
    +        proxy_read_timeout 100;
    +        proxy_buffers 4 32k;
    +        client_max_body_size 8m;
    +        client_body_buffer_size 128k;
    +      }
    +    }
     }
    - change owner from 'root' to 'graylog'
    - change group from 'root' to 'graylog'
  * directory[/opt/graylog/sv/nginx] action create
    - create new directory /opt/graylog/sv/nginx
    - change mode from '' to '0755'
    - change owner from '' to 'root'
    - change group from '' to 'root'
  * directory[/opt/graylog/sv/nginx/log] action create
    - create new directory /opt/graylog/sv/nginx/log
    - change mode from '' to '0755'
    - change owner from '' to 'root'
    - change group from '' to 'root'
  * directory[/opt/graylog/sv/nginx/log/main] action create
    - create new directory /opt/graylog/sv/nginx/log/main
    - change mode from '' to '0755'
    - change owner from '' to 'root'
    - change group from '' to 'root'
  * template[/opt/graylog/sv/nginx/run] action create
    - create new file /opt/graylog/sv/nginx/run
    - update content in file /opt/graylog/sv/nginx/run from none to 4ab11f
    --- /opt/graylog/sv/nginx/run   2015-03-29 20:55:55.558953000 +0000
    +++ /tmp/chef-rendered-template20150329-20-1fkr3a4  2015-03-29 20:55:55.558953000 +0000
    @@ -1 +1,7 @@
    +#!/bin/sh
    +exec 2>&1
    +
    +export LC_ALL=C
    +umask 077
    +exec /opt/graylog/embedded/sbin/nginx -c /opt/graylog/conf/nginx/nginx.conf
    - change mode from '' to '0755'
    - change owner from '' to 'root'
    - change group from '' to 'root'
  * template[/opt/graylog/sv/nginx/log/run] action create
    - create new file /opt/graylog/sv/nginx/log/run
    - update content in file /opt/graylog/sv/nginx/log/run from none to 43211b
    --- /opt/graylog/sv/nginx/log/run   2015-03-29 20:55:55.566953001 +0000
    +++ /tmp/chef-rendered-template20150329-20-196fk8p  2015-03-29 20:55:55.566953001 +0000
    @@ -1 +1,3 @@
    +#!/bin/sh
    +exec svlogd -tt /var/log/graylog/nginx
    - change mode from '' to '0755'
    - change owner from '' to 'root'
    - change group from '' to 'root'
  * template[/var/log/graylog/nginx/config] action create
    - create new file /var/log/graylog/nginx/config
    - update content in file /var/log/graylog/nginx/config from none to 623c00
    --- /var/log/graylog/nginx/config   2015-03-29 20:55:55.574953000 +0000
    +++ /tmp/chef-rendered-template20150329-20-pth6li   2015-03-29 20:55:55.574953000 +0000
    @@ -1 +1,7 @@
    +s209715200
    +n30
    +t86400
    +!gzip
    +
    +
    - change owner from '' to 'root'
    - change group from '' to 'root'
  * ruby_block[reload nginx svlogd configuration] action nothing (skipped due to action :nothing)
  * file[/opt/graylog/sv/nginx/down] action delete (up to date)
  * link[/opt/graylog/init/nginx] action create
    - create symlink at /opt/graylog/init/nginx to /opt/graylog/embedded/bin/sv
  * link[/opt/graylog/service/nginx] action create
    - create symlink at /opt/graylog/service/nginx to /opt/graylog/sv/nginx
  * ruby_block[supervise_nginx_sleep] action run
    - execute the ruby block supervise_nginx_sleep
  * service[nginx] action nothing (skipped due to action :nothing)
  * execute[/opt/graylog/embedded/bin/graylog-ctl start nginx] action run
    - execute /opt/graylog/embedded/bin/graylog-ctl start nginx
Recipe: ntp::default
  * apt_package[ntp] action install (up to date)
  * apt_package[ntpdate] action install (up to date)
  * directory[/var/lib/ntp] action create (up to date)
  * directory[/var/log/ntpstats/] action create (up to date)
  * cookbook_file[/etc/ntp.leapseconds] action create
    - create new file /etc/ntp.leapseconds
    - update content in file /etc/ntp.leapseconds from none to 274665
    --- /etc/ntp.leapseconds    2015-03-29 20:56:02.546953001 +0000
    +++ /etc/.ntp.leapseconds20150329-20-93j4nc 2015-03-29 20:56:02.546953001 +0000
    @@ -1 +1,219 @@
    +#
    +#  In the following text, the symbol '#' introduces
    +#  a comment, which continues from that symbol until 
    +#  the end of the line. A plain comment line has a
    +#  whitespace character following the comment indicator.
    +#  There are also special comment lines defined below. 
    +#  A special comment will always have a non-whitespace 
    +#  character in column 2.
    +#
    +#  A blank line should be ignored.
    +#
    +#  The following table shows the corrections that must
    +#  be applied to compute International Atomic Time (TAI)
    +#  from the Coordinated Universal Time (UTC) values that
    +#  are transmitted by almost all time services.
    +#
    +#  The first column shows an epoch as a number of seconds
    +#  since 1900.0 and the second column shows the number of
    +#  seconds that must be added to UTC to compute TAI for
    +#  any timestamp at or after that epoch. The value on 
    +#  each line is valid from the indicated initial instant
    +#  until the epoch given on the next one or indefinitely 
    +#  into the future if there is no next line.
    +#  (The comment on each line shows the representation of
    +#  the corresponding initial epoch in the usual 
    +#  day-month-year format. The epoch always begins at
    +#  00:00:00 UTC on the indicated day. See Note 5 below.)
    +#  
    +#  Important notes:
    +#
    +#  1. Coordinated Universal Time (UTC) is often referred to
    +#  as Greenwich Mean Time (GMT). The GMT time scale is no
    +#  longer used, and the use of GMT to designate UTC is
    +#  discouraged.
    +#
    +#  2. The UTC time scale is realized by many national 
    +#  laboratories and timing centers. Each laboratory
    +#  identifies its realization with its name: Thus
    +#  UTC(NIST), UTC(USNO), etc. The differences among
    +#  these different realizations are typically on the
    +#  order of a few nanoseconds (i.e., 0.000 000 00x s)
    +#  and can be ignored for many purposes. These differences
    +#  are tabulated in Circular T, which is published monthly
    +#  by the International Bureau of Weights and Measures
    +#  (BIPM). See www.bipm.fr for more information.
    +#
    +#  3. The current defintion of the relationship between UTC 
    +#  and TAI dates from 1 January 1972. A number of different 
    +#  time scales were in use before than epoch, and it can be 
    +#  quite difficult to compute precise timestamps and time 
    +#  intervals in those "prehistoric" days. For more information,
    +#  consult:
    +#
    +#      The Explanatory Supplement to the Astronomical
    +#      Ephemeris.
    +#  or
    +#      Terry Quinn, "The BIPM and the Accurate Measurement
    +#      of Time," Proc. of the IEEE, Vol. 79, pp. 894-905,
    +#      July, 1991.
    +#
    +#  4.  The insertion of leap seconds into UTC is currently the
    +#  responsibility of the International Earth Rotation Service,
    +#  which is located at the Paris Observatory: 
    +#
    +#  Central Bureau of IERS
    +#  61, Avenue de l'Observatoire
    +#  75014 Paris, France.
    +#
    +#  Leap seconds are announced by the IERS in its Bulletin C
    +#
    +#  See hpiers.obspm.fr or www.iers.org for more details.
    +#
    +#  All national laboratories and timing centers use the
    +#  data from the BIPM and the IERS to construct their
    +#  local realizations of UTC.
    +#
    +#  Although the definition also includes the possibility
    +#  of dropping seconds ("negative" leap seconds), this has 
    +#  never been done and is unlikely to be necessary in the 
    +#  foreseeable future.
    +#
    +#  5. If your system keeps time as the number of seconds since
    +#  some epoch (e.g., NTP timestamps), then the algorithm for
    +#  assigning a UTC time stamp to an event that happens during a positive
    +#  leap second is not well defined. The official name of that leap 
    +#  second is 23:59:60, but there is no way of representing that time 
    +#  in these systems. 
    +#  Many systems of this type effectively stop the system clock for 
    +#  one second during the leap second and use a time that is equivalent 
    +#  to 23:59:59 UTC twice. For these systems, the corresponding TAI 
    +#  timestamp would be obtained by advancing to the next entry in the
    +#  following table when the time equivalent to 23:59:59 UTC
    +#  is used for the second time. Thus the leap second which
    +#  occurred on 30 June 1972 at 23:59:59 UTC would have TAI
    +#  timestamps computed as follows:
    +#
    +#  ...
    +#  30 June 1972 23:59:59 (2287785599, first time): TAI= UTC + 10 seconds
    +#  30 June 1972 23:59:60 (2287785599,second time): TAI= UTC + 11 seconds
    +#  1  July 1972 00:00:00 (2287785600)      TAI= UTC + 11 seconds
    +#  ...
    +#
    +#  If your system realizes the leap second by repeating 00:00:00 UTC twice
    +#  (this is possible but not usual), then the advance to the next entry
    +#  in the table must occur the second time that a time equivlent to 
    +#  00:00:00 UTC is used. Thus, using the same example as above:
    +#
    +#  ...
    +#       30 June 1972 23:59:59 (2287785599):        TAI= UTC + 10 seconds
    +#       30 June 1972 23:59:60 (2287785600, first time):    TAI= UTC + 10 seconds
    +#       1  July 1972 00:00:00 (2287785600,second time):    TAI= UTC + 11 seconds
    +#  ...
    +#
    +#  in both cases the use of timestamps based on TAI produces a smooth
    +#  time scale with no discontinuity in the time interval.
    +#
    +#  This complexity would not be needed for negative leap seconds (if they 
    +#  are ever used). The UTC time would skip 23:59:59 and advance from 
    +#  23:59:58 to 00:00:00 in that case.  The TAI offset would decrease by 
    +#  1 second at the same instant.  This is a much easier situation to deal 
    +#  with, since the difficulty of unambiguously representing the epoch 
    +#  during the leap second does not arise.
    +#
    +#  Questions or comments to:
    +#      Jeff Prillaman
    +#      Time Service Department
    +#      US Naval Observatory
    +#      Washington, DC
    +#      [email protected]
    +#
    +#  Last Update of leap second values:  11 Feb 2014
    +#
    +#  The following line shows this last update date in NTP timestamp 
    +#  format. This is the date on which the most recent change to
    +#  the leap second data was added to the file. This line can
    +#  be identified by the unique pair of characters in the first two 
    +#  columns as shown below.
    +#
    +#$  3601065600
    +#
    +#  The data in this file will be updated periodically as new leap 
    +#  seconds are announced. In addition to being entered on the line
    +#  above, the update time (in NTP format) will be added to the basic 
    +#  file name leap-seconds to form the name leap-seconds.<NTP TIME>.
    +#  In addition, the generic name leap-seconds.list will always point to 
    +#  the most recent version of the file.
    +#
    +#  This update procedure will be performed only when a new leap second
    +#  is announced. 
    +#
    +#  The following entry specifies the expiration date of the data
    +#  in this file in units of seconds since 1900.0.  This expiration date 
    +#  will be changed at least twice per year whether or not a new leap 
    +#  second is announced. These semi-annual changes will be made no
    +#  later than 1 June and 1 December of each year to indicate what
    +#  action (if any) is to be taken on 30 June and 31 December, 
    +#  respectively. (These are the customary effective dates for new
    +#  leap seconds.) This expiration date will be identified by a
    +#  unique pair of characters in columns 1 and 2 as shown below.
    +#  In the unlikely event that a leap second is announced with an 
    +#  effective date other than 30 June or 31 December, then this
    +#  file will be edited to include that leap second as soon as it is
    +#  announced or at least one month before the effective date
    +#  (whichever is later). 
    +#  If an announcement by the IERS specifies that no leap second is 
    +#  scheduled, then only the expiration date of the file will 
    +#  be advanced to show that the information in the file is still
    +#  current -- the update time stamp, the data and the name of the file 
    +#  will not change.
    +#
    +#  Updated through IERS Bulletin C 47
    +#  File expires on:  1 Dec 2014
    +#
    +#@ 3626380800
    +#
    +2272060800 10  # 1 Jan 1972
    +2287785600 11  # 1 Jul 1972
    +2303683200 12  # 1 Jan 1973
    +2335219200 13  # 1 Jan 1974
    +2366755200 14  # 1 Jan 1975
    +2398291200 15  # 1 Jan 1976
    +2429913600 16  # 1 Jan 1977
    +2461449600 17  # 1 Jan 1978
    +2492985600 18  # 1 Jan 1979
    +2524521600 19  # 1 Jan 1980
    +2571782400 20  # 1 Jul 1981
    +2603318400 21  # 1 Jul 1982
    +2634854400 22  # 1 Jul 1983
    +2698012800 23  # 1 Jul 1985
    +2776982400 24  # 1 Jan 1988
    +2840140800 25  # 1 Jan 1990
    +2871676800 26  # 1 Jan 1991
    +2918937600 27  # 1 Jul 1992
    +2950473600 28  # 1 Jul 1993
    +2982009600 29  # 1 Jul 1994
    +3029443200 30  # 1 Jan 1996
    +3076704000 31  # 1 Jul 1997
    +3124137600 32  # 1 Jan 1999
    +3345062400 33  # 1 Jan 2006
    +3439756800 34  # 1 Jan 2009
    +3550089600 35  # 1 Jul 2012
    +#
    +#  the following special comment contains the
    +#  hash value of the data in this file computed
    +#  use the secure hash algorithm as specified
    +#  by FIPS 180-1. See the files in ~/sha for
    +#  the details of how this hash value is
    +#  computed. Note that the hash computation
    +#  ignores comments and whitespace characters
    +#  in data lines. It includes the NTP values
    +#  of both the last modification time and the 
    +#  expiration time of the file, but not the
    +#  white space on those lines.
    +#  the hash line is also ignored in the
    +#  computation.
    +#
    +#h 6660fba2 47c392c3 fc7bb657 d338b539 ce357d44
    +#
    - change mode from '' to '0644'
    - change owner from '' to 'root'
    - change group from '' to 'root'
  * template[/etc/ntp.conf] action create
    - update content in file /etc/ntp.conf from 4eb9a0 to 49f120
    --- /etc/ntp.conf   2015-02-06 15:24:35.000000000 +0000
    +++ /tmp/chef-rendered-template20150329-20-1uzp1a8  2015-03-29 20:56:02.606953000 +0000
    @@ -1,56 +1,32 @@
    -# /etc/ntp.conf, configuration for ntpd; see ntp.conf(5) for help
    -
    +# Generated by Chef for 542d341aac3d
    +# Local modifications will be overwritten.
    +tinker panic 0
    +statsdir /var/log/ntpstats/
    +leapfile /etc/ntp.leapseconds
     driftfile /var/lib/ntp/ntp.drift

    -
    -# Enable this if you want statistics to be logged.
    -#statsdir /var/log/ntpstats/
    -
     statistics loopstats peerstats clockstats
     filegen loopstats file loopstats type day enable
     filegen peerstats file peerstats type day enable
     filegen clockstats file clockstats type day enable

    -# Specify one or more NTP servers.

    -# Use servers from the NTP Pool Project. Approved by Ubuntu Technical Board
    -# on 2011-02-08 (LP: #104525). See http://www.pool.ntp.org/join.html for
    -# more information.
    -server 0.ubuntu.pool.ntp.org
    -server 1.ubuntu.pool.ntp.org
    -server 2.ubuntu.pool.ntp.org
    -server 3.ubuntu.pool.ntp.org
    +disable monitor

    -# Use Ubuntu's ntp server as a fallback.
    -server ntp.ubuntu.com

    -# Access control configuration; see /usr/share/doc/ntp-doc/html/accopt.html for
    -# details.  The web page <http://support.ntp.org/bin/view/Support/AccessRestrictions>
    -# might also be helpful.
    -#
    -# Note that "restrict" applies to both servers and clients, so a configuration
    -# that might be intended to block requests from certain clients could also end
    -# up blocking replies from your own upstream servers.
    +server 0.pool.ntp.org iburst
    +restrict 0.pool.ntp.org nomodify notrap noquery
    +server 1.pool.ntp.org iburst
    +restrict 1.pool.ntp.org nomodify notrap noquery
    +server 2.pool.ntp.org iburst
    +restrict 2.pool.ntp.org nomodify notrap noquery
    +server 3.pool.ntp.org iburst
    +restrict 3.pool.ntp.org nomodify notrap noquery

    -# By default, exchange time with everybody, but don't allow configuration.
    -restrict -4 default kod notrap nomodify nopeer noquery
    +restrict default kod notrap nomodify nopeer noquery
    +restrict 127.0.0.1 nomodify
     restrict -6 default kod notrap nomodify nopeer noquery
    +restrict -6 ::1 nomodify

    -# Local users may interrogate the ntp server more closely.
    -restrict 127.0.0.1
    -restrict ::1

    -# Clients from this (example!) subnet have unlimited access, but only if
    -# cryptographically authenticated.
    -#restrict 192.168.123.0 mask 255.255.255.0 notrust
    -
    -
    -# If you want to provide time to your local subnet, change the next line.
    -# (Again, the address is an example only.)
    -#broadcast 192.168.123.255
    -
    -# If you want to listen to time broadcasts on your local subnet, de-comment the
    -# next lines.  Please do this only if you trust everybody on the network!
    -#disable auth
    -#broadcastclient
  * service[ntp] action enable (up to date)
  * service[ntp] action start
    - start service service[ntp]
Recipe: graylog::etcd
  * ruby_block[reload etcd svlogd configuration] action create
    - execute the ruby block reload etcd svlogd configuration
Recipe: graylog::elasticsearch
  * service[elasticsearch] action restart
    - restart service service[elasticsearch]
  * ruby_block[reload elasticsearch svlogd configuration] action create
    - execute the ruby block reload elasticsearch svlogd configuration
Recipe: graylog::mongodb
  * ruby_block[reload mongodb svlogd configuration] action create
    - execute the ruby block reload mongodb svlogd configuration
Recipe: graylog::graylog-server
  * service[graylog-server] action restart
    - restart service service[graylog-server]
  * ruby_block[reload graylog-server svlogd configuration] action create
    - execute the ruby block reload graylog-server svlogd configuration
Recipe: graylog::graylog-web
  * service[graylog-web] action restart
    - restart service service[graylog-web]
  * ruby_block[reload graylog-web svlogd configuration] action create
    - execute the ruby block reload graylog-web svlogd configuration
Recipe: graylog::nginx
  * service[nginx] action restart
    - restart service service[nginx]
  * ruby_block[reload nginx svlogd configuration] action create
    - execute the ruby block reload nginx svlogd configuration
Recipe: ntp::default
  * service[ntp] action restart
    - restart service service[ntp]

Running handlers:
Running handlers complete
Chef Client finished, 107/120 resources updated in 36.558666794 seconds
graylog Reconfigured!

configured inputs do not persist

I was running a docker container using the suggested command line:

docker run -e GRAYLOG_PASSWORD=secret -p 5556:5555 -p 515:514 -p 9000:9000 -p 12202:12201 -v /graylog/data:/var/opt/graylog/data -v /graylog/logs:/var/log/graylog graylog2/allinone

and expected my changes to Graylog to persist from one docker run to the next. However, Graylog settings such as configured inputs are gone after running the container again.

It appears to me that the inputs are stored in MongoDB. Is this correct?
When grepping for the name of my configured input on the docker host:

root@srv1-graylog1:# grep -nre "marfris" /graylog/data/
Binary file /graylog/data/mongodb/graylog.0 matches

Still, there is no input configured when running the container a second time.

Other data, such as collected logs, does persist though.
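
A quick way to confirm whether the input actually made it into MongoDB is to query the embedded instance directly. A minimal sketch, assuming the omnibus package ships a mongo shell under /opt/graylog/embedded/bin and that Graylog keeps inputs in the inputs collection of the graylog2 database (the shell path and database/collection names are assumptions, not confirmed):

$ # container ID is a placeholder; mongo shell path and database name are assumptions
$ docker exec -it <container-id> /opt/graylog/embedded/bin/mongo graylog2 --eval 'db.inputs.find().pretty()'

If the input document is still there after restarting the container, the data volume itself is fine.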

Allow images in other regions?

Would it be possible to also make this image available in other regions?

For example, our current deployment is in the ap-southeast-2 region, so it would make sense to have a graylog2 box there as well; but having any regions outside the US/EU would already be good.
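
Until the official AMIs are published in more regions, one possible workaround is to copy the public image into the target region yourself (this only works if the source image permits copying). A minimal sketch; the source region and AMI ID below are placeholders, not the real values:

$ aws ec2 copy-image --source-region us-east-1 --source-image-id ami-xxxxxxxx \
    --region ap-southeast-2 --name graylog-copy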

console username/pass for openstack img

I know there is normally no need to log in on the console, with one exception: I need to know which IP address the DHCP server assigned to the instance. Yes, I can connect to the DHCP server and query for the newest host, but that isn't always practical. I tried ubuntu/ubuntu and admin/admin. Anything else it could be?
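
If the only goal is to learn the DHCP-assigned address, the instance console log is often enough, because cloud-init usually prints the interface configuration there. A minimal sketch, assuming the OpenStack client tools are available; the instance name is a placeholder:

$ nova console-log graylog-appliance | grep -i 'ci-info'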

sudo graylog2-ctl reconfigure - error

I've deployed the latest official OVA file, changed the IP address to a static one, and everything had been fine.

I've since added a dns-nameservers line to the graylog2 VM via /etc/network/interfaces and also changed the hostname. After a reboot of the appliance and re-running sudo graylog2-ctl reconfigure, it now errors when it reaches

"ruby_block[add node to cluster list] action run". It hangs at this point and never completes. I'm wondering if it's due to the DNS servers being added or the hostname change. I'll add this to the bug tracker.
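
Since the hang started after the hostname change, one simple first check is whether the new hostname still resolves locally on the appliance (this is only a guess at the cause, not a confirmed one):

$ hostname -f
$ getent hosts "$(hostname)"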

-------------------output of error-----------------------------------------------------------------------------------------

* execute[/opt/graylog2/embedded/bin/graylog2-ctl start elasticsearch] action run
    - execute /opt/graylog2/embedded/bin/graylog2-ctl start elasticsearch
  * ruby_block[add node to cluster list] action run

    ================================================================================
    Error executing action `run` on resource 'ruby_block[add node to cluster list]'
    ================================================================================

    NameError
    ---------
    undefined local variable or method `name' for #<Graylog2Registry:0x0000000411cc50>

    Cookbook Trace:
    ---------------
    /opt/graylog2/embedded/cookbooks/graylog2/libraries/registry.rb:94:in `rescue in add_node'
    /opt/graylog2/embedded/cookbooks/graylog2/libraries/registry.rb:91:in `add_node'
    /opt/graylog2/embedded/cookbooks/graylog2/libraries/registry.rb:44:in `add_es_node'
    /opt/graylog2/embedded/cookbooks/graylog2/recipes/elasticsearch.rb:48:in `block (2 levels) in from_file'

    Resource Declaration:
    ---------------------
    # In /opt/graylog2/embedded/cookbooks/graylog2/recipes/elasticsearch.rb

     46: ruby_block "add node to cluster list" do
     47:   block do
     48:     $registry.add_es_node(node['ipaddress'])
     49:   end
     50: end

    Compiled Resource:
    ------------------
    # Declared in /opt/graylog2/embedded/cookbooks/graylog2/recipes/elasticsearch.rb:46:in `from_file'

    ruby_block("add node to cluster list") do
      action "run"
      retries 0
      retry_delay 2
      default_guard_interpreter :default
      block_name "add node to cluster list"
      declared_type :ruby_block
      cookbook_name :graylog2
      recipe_name "elasticsearch"
      block #<Proc:0x0000000404e008@/opt/graylog2/embedded/cookbooks/graylog2/recipes/elasticsearch.rb:47>
    end

Recipe: timezone-ii::debian
  * bash[dpkg-reconfigure tzdata] action run
    - execute "bash"  "/tmp/chef-script20150211-1782-ty5ypx"

Running handlers:
[2015-02-11T11:54:03-05:00] ERROR: Running exception handlers
Running handlers complete
[2015-02-11T11:54:03-05:00] ERROR: Exception handlers complete
[2015-02-11T11:54:03-05:00] FATAL: Stacktrace dumped to /opt/graylog2/embedded/cookbooks/cache/chef-stacktrace.out
Chef Client failed. 5 resources updated in 308.503219999 seconds
[2015-02-11T11:54:03-05:00] ERROR: ruby_block[add node to cluster list] (graylog2::elasticsearch line 46) had an error: NameError: undefined local variable or method `name' for #<Graylog2Registry:0x0000000411cc50>
[2015-02-11T11:54:03-05:00] FATAL: Chef::Exceptions::ChildConvergeError: Chef run process exited unsuccessfully (exit code 1)

AWS EU-Central Image No Graylog Servers running

After running ubuntu@graylog:~$ sudo graylog-ctl reconfigure, there is no process listening on port 12900:

ubuntu@graylog:~$ sudo netstat -tap | grep LISTEN
tcp        0      0 *:57186                 *:*                     LISTEN      3115/java
tcp        0      0 *:3333                  *:*                     LISTEN      3115/java
tcp        0      0 *:27017                 *:*                     LISTEN      3141/mongod
tcp        0      0 localhost:2380          *:*                     LISTEN      3038/etcd
tcp        0      0 graylog:9200            *:*                     LISTEN      3115/java
tcp        0      0 *:http                  *:*                     LISTEN      3156/nginx.conf
tcp        0      0 *:38161                 *:*                     LISTEN      3115/java
tcp        0      0 graylog:9300            *:*                     LISTEN      3115/java
tcp        0      0 *:ssh                   *:*                     LISTEN      1081/sshd
tcp        0      0 localhost:afs3-callback *:*                     LISTEN      3038/etcd
tcp        0      0 *:https                 *:*                     LISTEN      3156/nginx.conf
tcp6       0      0 [::]:9000               [::]:*                  LISTEN      3070/java
tcp6       0      0 [::]:2379               [::]:*                  LISTEN      3038/etcd
tcp6       0      0 [::]:ssh                [::]:*                  LISTEN      1081/sshd
tcp6       0      0 [::]:4001               [::]:*                  LISTEN      3038/etcd
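
To see whether graylog-server is being supervised at all, the bundled ctl tool can report service state; a minimal check, assuming graylog-ctl offers a status subcommand like other omnibus-style packages:

$ sudo graylog-ctl status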

AWS: Newly launched instance in us-west-2 does not successfully run.

I followed the instructions for the AWS AMI image in us-west-2 (also of note: the Launch Instance link doesn't work).

I pulled down the image, launched it on an m3.medium instance, ran the reconfigure command, and tried to log in via http://. I get an error stating that no Graylog servers can be reached, and I am unable to log in.

It appears that the service that is supposed to listen on port 12900 is not running.

ubuntu@graylog:/opt/graylog/conf$ sudo netstat -tap | grep LISTEN
tcp        0      0 *:3333                  *:*                     LISTEN      2739/java
tcp        0      0 localhost:2380          *:*                     LISTEN      2747/etcd
tcp        0      0 graylog:9200            *:*                     LISTEN      2739/java
tcp        0      0 *:http                  *:*                     LISTEN      2863/nginx.conf
tcp        0      0 *:46611                 *:*                     LISTEN      2739/java
tcp        0      0 graylog:9300            *:*                     LISTEN      2739/java
tcp        0      0 *:38324                 *:*                     LISTEN      2739/java
tcp        0      0 *:ssh                   *:*                     LISTEN      1123/sshd
tcp        0      0 localhost:afs3-callback *:*                     LISTEN      2747/etcd
tcp6       0      0 [::]:9000               [::]:*                  LISTEN      2802/java
tcp6       0      0 [::]:2379               [::]:*                  LISTEN      2747/etcd
tcp6       0      0 [::]:ssh                [::]:*                  LISTEN      1123/sshd
tcp6       0      0 [::]:4001               [::]:*                  LISTEN      2747/etcd
ubuntu@graylog:/opt/graylog/conf$
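
Whatever keeps graylog-server from binding port 12900 should show up in its supervised log; a minimal sketch, assuming the server logs under /var/log/graylog/server in the same way the web and nginx services do in the reconfigure output earlier on this page:

$ sudo tail -n 100 /var/log/graylog/server/current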

docker container. http://127.0.0.1:12900 Never connected

Graylog Web Interface is disconnected.

It seems there is an issue connecting to MongoDB. Is MongoDB part of the Docker image?

Running handlers:
Running handlers complete
Chef Client finished, 108/121 resources updated in 24.10659337 seconds
graylog Reconfigured!
2015-05-01_19:45:26.92754 INFO [CmdLineTool] Loaded plugins: []
2015-05-01_19:45:27.02057 INFO [CmdLineTool] Running with JVM arguments: -Xms1g -Xmx1g -XX:NewRatio=1 -XX:+ResizeTLAB -XX:+UseConcMarkSweepGC -XX:+CMSConcurrentMTEnabled -XX:+CMSClassUnloadingEnabled -XX:+UseParNewGC -XX:-OmitStackTraceInFastThrow -Dlog4j.configuration=file:///opt/graylog/conf/log4j.xml
2015-05-01_19:45:27.63617 INFO [SigarService] Failed to load SIGAR. Falling back to JMX implementations.
2015-05-01_19:45:29.04492 INFO [InputBufferImpl] Message journal is enabled.
2015-05-01_19:45:29.40608 INFO [Log] Completed load of log messagejournal-0 with log end offset 0
2015-05-01_19:45:29.44333 INFO [LogManager] Created log for partition [messagejournal,0] in /var/opt/graylog/data/journal with properties {segment.index.bytes -> 1048576, file.delete.delay.ms -> 60000, segment.bytes -> 104857600, flush.ms -> 60000, delete.retention.ms -> 86400000, index.interval.bytes -> 4096, retention.bytes -> 1073741824, cleanup.policy -> delete, segment.ms -> 3600000, max.message.bytes -> 2147483647, flush.messages -> 1000000, min.cleanable.dirty.ratio -> 0.5, retention.ms -> 43200000}.
2015-05-01_19:45:29.44339 INFO [KafkaJournal] Initialized Kafka based journal at /var/opt/graylog/data/journal
2015-05-01_19:45:29.45483 INFO [InputBufferImpl] Initialized InputBufferImpl with ring size <65536> and wait strategy , running 2 parallel message handlers.
2015-05-01_19:45:29.70471 INFO [NodeId] No node ID file found. Generated: 1c2f77b5-b33d-4af1-ba25-521162450be4
2015-05-01_19:45:29.96273 INFO [node] [graylog2-server] version[1.3.7], pid[583], build[3042293/2014-12-16T13:59:32Z]
2015-05-01_19:45:29.96279 INFO [node] [graylog2-server] initializing ...
2015-05-01_19:45:29.97608 INFO [plugins] [graylog2-server] loaded [graylog2-monitor], sites []
2015-05-01_19:45:32.10466 INFO [node] [graylog2-server] initialized
2015-05-01_19:45:32.12607 INFO [ProcessBuffer] Initialized ProcessBuffer with ring size <65536> and wait strategy .
2015-05-01_19:45:33.77873 INFO [KieRepositoryImpl] KieModule was added:MemoryKieModule[ ReleaseId=org.graylog2:dynamic-rules:0]
2015-05-01_19:45:33.84618 INFO [KieRepositoryImpl] Adding KieModule from resource :[ByteArrayResource resource=[B@492fa72a]
2015-05-01_19:45:33.95608 INFO [KieRepositoryImpl] KieModule was added:MemoryKieModule[ ReleaseId=org.graylog2:dynamic-rules:0]
2015-05-01_19:45:34.35663 INFO [RulesEngineProvider] No static rules file loaded.
2015-05-01_19:45:44.41310 INFO [OutputBuffer] Initialized OutputBuffer with ring size <65536> and wait strategy .
2015-05-01_19:45:44.51439 INFO [ProcessBuffer] Initialized ProcessBuffer with ring size <65536> and wait strategy .
2015-05-01_19:45:54.54298 ERROR [CmdLineTool] Guice error (more detail on log level debug): Error injecting constructor, com.mongodb.MongoTimeoutException: Timed out after 10000 ms while waiting to connect. Client view of cluster state is {type=Unknown, servers=[{address=127.0.0.1:27017, type=Unknown, state=Connecting, exception={com.mongodb.MongoException$Network: Exception opening the socket}, caused by {java.net.ConnectException: Connection refused}}]
2015-05-01_19:45:54.54309 ERROR [CmdLineTool] Guice error (more detail on log level debug): Error injecting constructor, com.mongodb.MongoTimeoutException: Timed out after 10000 ms while waiting to connect. Client view of cluster state is {type=Unknown, servers=[{address=127.0.0.1:27017, type=Unknown, state=Connecting, exception={com.mongodb.MongoException$Network: Exception opening the socket}, caused by {java.net.ConnectException: Connection refused}}]
2015-05-01_19:45:54.54415 ERROR [Server]
2015-05-01_19:45:54.54419
2015-05-01_19:45:54.54420 ################################################################################
2015-05-01_19:45:54.54420
2015-05-01_19:45:54.54421 ERROR: Unable to connect to MongoDB. Is it running and the configuration correct?
2015-05-01_19:45:54.54422
2015-05-01_19:45:54.54422 Need help?
2015-05-01_19:45:54.54424
2015-05-01_19:45:54.54424 * Official documentation: https://www.graylog.org/documentation/intro/
2015-05-01_19:45:54.54425 * Community support: https://www.graylog.org/community-support/
2015-05-01_19:45:54.54425 * Commercial support: https://www.graylog.com/support/
2015-05-01_19:45:54.54426
2015-05-01_19:45:54.54426 Terminating. :(
2015-05-01_19:45:54.54427
2015-05-01_19:45:54.54428 ################################################################################
2015-05-01_19:45:54.54428
2015-05-01_19:55:55.46318 INFO [CmdLineTool] Loaded plugins: []
2015-05-01_19:55:55.57381 INFO [CmdLineTool] Running with JVM arguments: -Xms1g -Xmx1g -XX:NewRatio=1 -XX:+ResizeTLAB -XX:+UseConcMarkSweepGC -XX:+CMSConcurrentMTEnabled -XX:+CMSClassUnloadingEnabled -XX:+UseParNewGC -XX:-OmitStackTraceInFastThrow -Dlog4j.configuration=file:///opt/graylog/conf/log4j.xml
2015-05-01_19:55:56.18178 INFO [SigarService] Failed to load SIGAR. Falling back to JMX implementations.
2015-05-01_19:55:57.33499 INFO [InputBufferImpl] Message journal is enabled.
2015-05-01_19:55:57.63310 INFO [LogManager] Loading log 'messagejournal-0'
2015-05-01_19:55:57.68555 INFO [Log] Recovering unflushed segment 0 in log messagejournal-0.
2015-05-01_19:55:57.69890 INFO [Log] Completed load of log messagejournal-0 with log end offset 0
2015-05-01_19:55:57.71440 INFO [KafkaJournal] Initialized Kafka based journal at /var/opt/graylog/data/journal
2015-05-01_19:55:57.72496 INFO [InputBufferImpl] Initialized InputBufferImpl with ring size <65536> and wait strategy , running 2 parallel message handlers.
2015-05-01_19:55:57.92413 INFO [NodeId] Node ID: 1c2f77b5-b33d-4af1-ba25-521162450be4
2015-05-01_19:55:58.08563 INFO [node] [graylog2-server] version[1.3.7], pid[6208], build[3042293/2014-12-16T13:59:32Z]
2015-05-01_19:55:58.08568 INFO [node] [graylog2-server] initializing ...
2015-05-01_19:55:58.09641 INFO [plugins] [graylog2-server] loaded [graylog2-monitor], sites []
2015-05-01_19:56:00.22514 INFO [node] [graylog2-server] initialized
2015-05-01_19:56:00.23953 INFO [ProcessBuffer] Initialized ProcessBuffer with ring size <65536> and wait strategy .
2015-05-01_19:56:01.78416 INFO [KieRepositoryImpl] KieModule was added:MemoryKieModule[ ReleaseId=org.graylog2:dynamic-rules:0]
2015-05-01_19:56:01.85146 INFO [KieRepositoryImpl] Adding KieModule from resource :[ByteArrayResource resource=[B@cfacf0]
2015-05-01_19:56:01.95722 INFO [KieRepositoryImpl] KieModule was added:MemoryKieModule[ ReleaseId=org.graylog2:dynamic-rules:0]
2015-05-01_19:56:02.34225 INFO [RulesEngineProvider] No static rules file loaded.
2015-05-01_19:56:12.37963 INFO [OutputBuffer] Initialized OutputBuffer with ring size <65536> and wait strategy .
2015-05-01_19:56:12.47037 INFO [ProcessBuffer] Initialized ProcessBuffer with ring size <65536> and wait strategy .
2015-05-01_19:56:22.49624 ERROR [CmdLineTool] Guice error (more detail on log level debug): Error injecting constructor, com.mongodb.MongoTimeoutException: Timed out after 10000 ms while waiting to connect. Client view of cluster state is {type=Unknown, servers=[{address=127.0.0.1:27017, type=Unknown, state=Connecting, exception={com.mongodb.MongoException$Network: Exception opening the socket}, caused by {java.net.ConnectException: Connection refused}}]
2015-05-01_19:56:22.49630 ERROR [CmdLineTool] Guice error (more detail on log level debug): Error injecting constructor, com.mongodb.MongoTimeoutException: Timed out after 10000 ms while waiting to connect. Client view of cluster state is {type=Unknown, servers=[{address=127.0.0.1:27017, type=Unknown, state=Connecting, exception={com.mongodb.MongoException$Network: Exception opening the socket}, caused by {java.net.ConnectException: Connection refused}}]
2015-05-01_19:56:22.49698 ERROR [Server]
2015-05-01_19:56:22.49702
2015-05-01_19:56:22.49702 ################################################################################
2015-05-01_19:56:22.49703
2015-05-01_19:56:22.49704 ERROR: Unable to connect to MongoDB. Is it running and the configuration correct?
2015-05-01_19:56:22.49706
2015-05-01_19:56:22.49706 Need help?
2015-05-01_19:56:22.49707
2015-05-01_19:56:22.49707 * Official documentation: https://www.graylog.org/documentation/intro/
2015-05-01_19:56:22.49708 * Community support: https://www.graylog.org/community-support/
2015-05-01_19:56:22.49709 * Commercial support: https://www.graylog.com/support/
2015-05-01_19:56:22.49709
2015-05-01_19:56:22.49710 Terminating. :(
2015-05-01_19:56:22.49710
2015-05-01_19:56:22.49713 ################################################################################
2015-05-01_19:56:22.49714
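
MongoDB is bundled with the all-in-one image (a graylog::mongodb recipe shows up in the reconfigure output earlier on this page), so the timeouts above suggest mongod never came up on 127.0.0.1:27017 rather than being absent. A minimal check from the host; the container ID is a placeholder and the availability of a status subcommand is an assumption:

$ docker exec <container-id> /opt/graylog/embedded/bin/graylog-ctl status
$ docker exec <container-id> netstat -tlnp | grep 27017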

Support cloud-aws Elasticsearch Plugin Within Graylog ES Client

I'm able to install cloud-aws within the Docker container, but the Graylog ES client is unable to use the ec2 discovery method to locate other nodes within the ES cluster.

When providing the below options in graylog-elasticsearch.yml (specified in graylog.conf), the Graylog server errors out:

cluster.name: <clusterName>
discovery.ec2.groups: <securityGroup>
discovery.ec2.tag.role: <instanceRole>
discovery.type: ec2
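
For comparison, a plain unicast host list avoids loading the ec2 discovery class entirely; a minimal sketch for the same file, with the node address purely as a placeholder (this is a fallback, not a fix for the plugin loading itself):

discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["<es-node-ip>:9300"]

With discovery.type: ec2 in place, the server fails as follows: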
2015-05-12_06:06:42.62617 ERROR [CmdLineTool] Guice error (more detail on log level debug): Tried proxying org.graylog2.outputs.OutputRegistry to support a circular dependency, but it is not an interface.
2015-05-12_06:06:42.62631 Exception in thread "main" com.google.inject.CreationException: Guice creation errors:
2015-05-12_06:06:42.79681 
2015-05-12_06:06:42.79682 1) Error in custom provider, org.elasticsearch.common.settings.NoClassSettingsException: Failed to load class setting [discovery.type] with value [ec2]
2015-05-12_06:06:42.79683   while locating org.graylog2.bindings.providers.EsNodeProvider
2015-05-12_06:06:42.79684   at org.graylog2.bindings.ServerBindings.bindSingletons(ServerBindings.java:151)
2015-05-12_06:06:42.79685   while locating org.elasticsearch.node.Node
2015-05-12_06:06:42.79685     for parameter 0 at org.graylog2.indexer.messages.Messages.<init>(Messages.java:63)
2015-05-12_06:06:42.79686   at org.graylog2.indexer.messages.Messages.class(Messages.java:56)
2015-05-12_06:06:42.79687   while locating org.graylog2.indexer.messages.Messages
2015-05-12_06:06:42.79688     for parameter 1 at org.graylog2.outputs.BlockingBatchedESOutput.<init>(BlockingBatchedESOutput.java:76)
2015-05-12_06:06:42.79688   while locating org.graylog2.outputs.BlockingBatchedESOutput
2015-05-12_06:06:42.79689   at org.graylog2.bindings.MessageOutputBindings.configure(MessageOutputBindings.java:48)
2015-05-12_06:06:42.79690   while locating org.graylog2.plugin.outputs.MessageOutput annotated with @org.graylog2.outputs.DefaultMessageOutput()
2015-05-12_06:06:42.79691     for parameter 0 at org.graylog2.outputs.OutputRegistry.<init>(OutputRegistry.java:69)
2015-05-12_06:06:42.79694   at org.graylog2.outputs.OutputRegistry.class(OutputRegistry.java:49)
2015-05-12_06:06:42.79695   while locating org.graylog2.outputs.OutputRegistry
2015-05-12_06:06:42.79696     for parameter 2 at org.graylog2.streams.OutputServiceImpl.<init>(OutputServiceImpl.java:48)
2015-05-12_06:06:42.79697   while locating org.graylog2.streams.OutputServiceImpl
2015-05-12_06:06:42.79698   while locating org.graylog2.streams.OutputService
2015-05-12_06:06:42.79699     for parameter 3 at org.graylog2.streams.StreamServiceImpl.<init>(StreamServiceImpl.java:63)
2015-05-12_06:06:42.79700   while locating org.graylog2.streams.StreamServiceImpl
2015-05-12_06:06:42.79701   while locating org.graylog2.streams.StreamService
2015-05-12_06:06:42.79702     for parameter 1 at org.graylog2.periodical.AlertScannerThread.<init>(AlertScannerThread.java:56)
2015-05-12_06:06:42.79703   while locating org.graylog2.periodical.AlertScannerThread
2015-05-12_06:06:42.79704   while locating org.graylog2.plugin.periodical.Periodical annotated with @com.google.inject.multibindings.Element(setName=,uniqueId=102)
2015-05-12_06:06:42.79705   at org.graylog2.shared.bindings.SharedPeriodicalBindings.configure(SharedPeriodicalBindings.java:30)
2015-05-12_06:06:42.79706   while locating java.util.Set<org.graylog2.plugin.periodical.Periodical>
2015-05-12_06:06:42.79709     for parameter 2 at org.graylog2.shared.initializers.PeriodicalsService.<init>(PeriodicalsService.java:50)
2015-05-12_06:06:42.79710   at org.graylog2.shared.initializers.PeriodicalsService.class(PeriodicalsService.java:40)
2015-05-12_06:06:42.79711   while locating org.graylog2.shared.initializers.PeriodicalsService
2015-05-12_06:06:42.79714   while locating com.google.common.util.concurrent.Service annotated with @com.google.inject.multibindings.Element(setName=,uniqueId=3)
2015-05-12_06:06:42.79715   at org.graylog2.shared.bindings.GenericInitializerBindings.configure(GenericInitializerBindings.java:30)
2015-05-12_06:06:42.79716   while locating java.util.Set<com.google.common.util.concurrent.Service>
2015-05-12_06:06:42.79717     for field at org.graylog2.shared.bindings.providers.ServiceManagerProvider.services(ServiceManagerProvider.java:33)
2015-05-12_06:06:42.79719   while locating org.graylog2.shared.bindings.providers.ServiceManagerProvider
2015-05-12_06:06:42.79720   at org.graylog2.shared.bindings.GenericBindings.configure(GenericBindings.java:70)
2015-05-12_06:06:42.79720   while locating com.google.common.util.concurrent.ServiceManager
2015-05-12_06:06:42.79721 Caused by: org.elasticsearch.common.settings.NoClassSettingsException: Failed to load class setting [discovery.type] with value [ec2]
2015-05-12_06:06:42.79723   at org.elasticsearch.common.settings.ImmutableSettings.loadClass(ImmutableSettings.java:471)
2015-05-12_06:06:42.79724   at org.elasticsearch.common.settings.ImmutableSettings.getAsClass(ImmutableSettings.java:459)
2015-05-12_06:06:42.79725   at org.elasticsearch.discovery.DiscoveryModule.spawnModules(DiscoveryModule.java:53)
2015-05-12_06:06:42.79726   at org.elasticsearch.common.inject.ModulesBuilder.add(ModulesBuilder.java:44)
2015-05-12_06:06:42.79728   at org.elasticsearch.node.internal.InternalNode.<init>(InternalNode.java:171)
2015-05-12_06:06:42.79728   at org.elasticsearch.node.NodeBuilder.build(NodeBuilder.java:159)
2015-05-12_06:06:42.79729   at org.graylog2.bindings.providers.EsNodeProvider.get(EsNodeProvider.java:57)
2015-05-12_06:06:42.79730   at org.graylog2.bindings.providers.EsNodeProvider.get(EsNodeProvider.java:40)
2015-05-12_06:06:42.79731   at com.google.inject.internal.BoundProviderFactory.get(BoundProviderFactory.java:55)
2015-05-12_06:06:42.79731   at com.google.inject.internal.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:46)
2015-05-12_06:06:42.79732   at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1031)
2015-05-12_06:06:42.79734   at com.google.inject.internal.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40)
2015-05-12_06:06:42.79735   at com.google.inject.Scopes$1$1.get(Scopes.java:65)
2015-05-12_06:06:42.79736   at com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:40)
2015-05-12_06:06:42.79736   at com.google.inject.internal.SingleParameterInjector.inject(SingleParameterInjector.java:38)
2015-05-12_06:06:42.79737   at com.google.inject.internal.SingleParameterInjector.getAll(SingleParameterInjector.java:62)
2015-05-12_06:06:42.79738   at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:84)
2015-05-12_06:06:42.79739   at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:254)
2015-05-12_06:06:42.79739   at com.google.inject.internal.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:46)
2015-05-12_06:06:42.79740   at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1031)
2015-05-12_06:06:42.79741   at com.google.inject.internal.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40)
2015-05-12_06:06:42.79742   at com.google.inject.Scopes$1$1.get(Scopes.java:65)
2015-05-12_06:06:42.79743   at com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:40)
2015-05-12_06:06:42.79745   at com.google.inject.internal.SingleParameterInjector.inject(SingleParameterInjector.java:38)
2015-05-12_06:06:42.79746   at com.google.inject.internal.SingleParameterInjector.getAll(SingleParameterInjector.java:62)
2015-05-12_06:06:42.79747   at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:84)
2015-05-12_06:06:42.79748   at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:254)
2015-05-12_06:06:42.79751   at com.google.inject.internal.FactoryProxy.get(FactoryProxy.java:54)
2015-05-12_06:06:42.79752   at com.google.inject.internal.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:46)
2015-05-12_06:06:42.79753   at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1031)
2015-05-12_06:06:42.79754   at com.google.inject.internal.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40)
2015-05-12_06:06:42.79755   at com.google.inject.Scopes$1$1.get(Scopes.java:65)
2015-05-12_06:06:42.79756   at com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:40)
2015-05-12_06:06:42.79757   at com.google.inject.internal.SingleParameterInjector.inject(SingleParameterInjector.java:38)
2015-05-12_06:06:42.79760   at com.google.inject.internal.SingleParameterInjector.getAll(SingleParameterInjector.java:62)
2015-05-12_06:06:42.79761   at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:84)
2015-05-12_06:06:42.79762   at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:254)
2015-05-12_06:06:42.79763   at com.google.inject.internal.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:46)
2015-05-12_06:06:42.79764   at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1031)
2015-05-12_06:06:42.79765   at com.google.inject.internal.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40)
2015-05-12_06:06:42.79766   at com.google.inject.Scopes$1$1.get(Scopes.java:65)
2015-05-12_06:06:42.79767   at com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:40)
2015-05-12_06:06:42.79768   at com.google.inject.internal.SingleParameterInjector.inject(SingleParameterInjector.java:38)
2015-05-12_06:06:42.79769   at com.google.inject.internal.SingleParameterInjector.getAll(SingleParameterInjector.java:62)
2015-05-12_06:06:42.79773   at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:84)
2015-05-12_06:06:42.79774   at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:254)
2015-05-12_06:06:42.79775   at com.google.inject.internal.InjectorImpl$3.get(InjectorImpl.java:737)
2015-05-12_06:06:42.79776   at com.google.inject.internal.SingleParameterInjector.inject(SingleParameterInjector.java:38)
2015-05-12_06:06:42.79777   at com.google.inject.internal.SingleParameterInjector.getAll(SingleParameterInjector.java:62)
2015-05-12_06:06:42.79778   at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:84)
2015-05-12_06:06:42.79779   at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:254)
2015-05-12_06:06:42.79780   at com.google.inject.internal.FactoryProxy.get(FactoryProxy.java:54)
2015-05-12_06:06:42.79781   at com.google.inject.internal.SingleParameterInjector.inject(SingleParameterInjector.java:38)
2015-05-12_06:06:42.79782   at com.google.inject.internal.SingleParameterInjector.getAll(SingleParameterInjector.java:62)
2015-05-12_06:06:42.79783   at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:84)
2015-05-12_06:06:42.79784   at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:254)
2015-05-12_06:06:42.79786   at com.google.inject.internal.FactoryProxy.get(FactoryProxy.java:54)
2015-05-12_06:06:42.79787   at com.google.inject.internal.InjectorImpl$4$1.call(InjectorImpl.java:978)
2015-05-12_06:06:42.79788   at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1031)
2015-05-12_06:06:42.79789   at com.google.inject.internal.InjectorImpl$4.get(InjectorImpl.java:974)
2015-05-12_06:06:42.79790   at com.google.inject.multibindings.Multibinder$RealMultibinder.get(Multibinder.java:326)
2015-05-12_06:06:42.79791   at com.google.inject.multibindings.Multibinder$RealMultibinder.get(Multibinder.java:220)
2015-05-12_06:06:42.79793   at com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:40)
2015-05-12_06:06:42.79794   at com.google.inject.internal.SingleParameterInjector.inject(SingleParameterInjector.java:38)
2015-05-12_06:06:42.79795   at com.google.inject.internal.SingleParameterInjector.getAll(SingleParameterInjector.java:62)
2015-05-12_06:06:42.79796   at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:84)
2015-05-12_06:06:42.79797   at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:254)
2015-05-12_06:06:42.79798   at com.google.inject.internal.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:46)
2015-05-12_06:06:42.79799   at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1031)
2015-05-12_06:06:42.79800   at com.google.inject.internal.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40)
2015-05-12_06:06:42.79800   at com.google.inject.Scopes$1$1.get(Scopes.java:65)
2015-05-12_06:06:42.79801   at com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:40)
2015-05-12_06:06:42.79802   at com.google.inject.internal.FactoryProxy.get(FactoryProxy.java:54)
2015-05-12_06:06:42.79802   at com.google.inject.internal.InjectorImpl$4$1.call(InjectorImpl.java:978)
2015-05-12_06:06:42.79803   at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1031)
2015-05-12_06:06:42.79804   at com.google.inject.internal.InjectorImpl$4.get(InjectorImpl.java:974)
2015-05-12_06:06:42.79804   at com.google.inject.multibindings.Multibinder$RealMultibinder.get(Multibinder.java:326)
2015-05-12_06:06:42.79805   at com.google.inject.multibindings.Multibinder$RealMultibinder.get(Multibinder.java:220)
2015-05-12_06:06:42.79806   at com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:40)
2015-05-12_06:06:42.79807   at com.google.inject.internal.SingleFieldInjector.inject(SingleFieldInjector.java:53)
2015-05-12_06:06:42.79808   at com.google.inject.internal.MembersInjectorImpl.injectMembers(MembersInjectorImpl.java:110)
2015-05-12_06:06:42.79809   at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:94)
2015-05-12_06:06:42.79809   at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:254)
2015-05-12_06:06:42.79810   at com.google.inject.internal.BoundProviderFactory.get(BoundProviderFactory.java:53)
2015-05-12_06:06:42.79811   at com.google.inject.internal.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:46)
2015-05-12_06:06:42.79812   at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1031)
2015-05-12_06:06:42.79813   at com.google.inject.internal.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40)
2015-05-12_06:06:42.79814   at com.google.inject.Scopes$1$1.get(Scopes.java:65)
2015-05-12_06:06:42.79815   at com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:40)
2015-05-12_06:06:42.79816   at com.google.inject.internal.InternalInjectorCreator$1.call(InternalInjectorCreator.java:204)
2015-05-12_06:06:42.79818   at com.google.inject.internal.InternalInjectorCreator$1.call(InternalInjectorCreator.java:198)
2015-05-12_06:06:42.79818   at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1024)
2015-05-12_06:06:42.79819   at com.google.inject.internal.InternalInjectorCreator.loadEagerSingletons(InternalInjectorCreator.java:198)
2015-05-12_06:06:42.79820   at com.google.inject.internal.InternalInjectorCreator.injectDynamically(InternalInjectorCreator.java:179)
2015-05-12_06:06:42.79820   at com.google.inject.internal.InternalInjectorCreator.build(InternalInjectorCreator.java:109)
2015-05-12_06:06:42.79821   at com.google.inject.Guice.createInjector(Guice.java:95)
2015-05-12_06:06:42.79822   at com.google.inject.Guice.createInjector(Guice.java:72)
2015-05-12_06:06:42.79823   at org.graylog2.shared.bindings.Hk2GuiceBridgeJitInjector.create(Hk2GuiceBridgeJitInjector.java:59)
2015-05-12_06:06:42.79824   at org.graylog2.shared.bindings.GuiceInjectorHolder.createInjector(GuiceInjectorHolder.java:32)
2015-05-12_06:06:42.79825   at org.graylog2.bootstrap.CmdLineTool.setupInjector(CmdLineTool.java:353)
2015-05-12_06:06:42.79826   at org.graylog2.bootstrap.CmdLineTool.run(CmdLineTool.java:175)
2015-05-12_06:06:42.79827   at org.graylog2.bootstrap.Main.main(Main.java:58)
2015-05-12_06:06:42.79829 Caused by: java.lang.ClassNotFoundException: org.elasticsearch.discovery.ec2.Ec2DiscoveryModule
2015-05-12_06:06:42.79830   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
2015-05-12_06:06:42.79831   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
2015-05-12_06:06:42.79832   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
2015-05-12_06:06:42.79833   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
2015-05-12_06:06:42.79834   at org.elasticsearch.common.settings.ImmutableSettings.loadClass(ImmutableSettings.java:469)
2015-05-12_06:06:42.79835   ... 101 more
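
The root cause in the trace above is the "Caused by" line: the Elasticsearch settings name a discovery module class (org.elasticsearch.discovery.ec2.Ec2DiscoveryModule) that is not on the classpath of the embedded Elasticsearch client, so class loading fails before the injector can finish wiring the server. The following is a minimal, self-contained Java sketch of that failure mode only — it is not Graylog or Elasticsearch code, and the class name is simply reused from the log for illustration:

// Sketch: loading a class named in a configuration value fails with
// ClassNotFoundException when the jar that provides it is missing from the
// classpath -- the same mechanism shown in the "Caused by" line above.
public class DiscoveryModuleLoadSketch {
    public static void main(String[] args) {
        // Class name as it would appear in a configuration enabling EC2 discovery.
        String discoveryModuleClass = "org.elasticsearch.discovery.ec2.Ec2DiscoveryModule";
        try {
            Class<?> module = Class.forName(discoveryModuleClass);
            System.out.println("Loaded discovery module: " + module.getName());
        } catch (ClassNotFoundException e) {
            // Taken when the plugin jar providing the class is not on the classpath.
            System.err.println("Discovery module not on classpath: " + discoveryModuleClass);
        }
    }
}

In practice this means the EC2 discovery feature was enabled in the Elasticsearch settings without the plugin that supplies that class being available to the Graylog server process; either the plugin needs to be present on the classpath or the EC2 discovery setting removed.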