
logstash's Introduction

Apache Mesos Repository Has Moved

Apache Mesos is now a Top-Level Apache project, and we've moved the codebase. The downloads page explains the essential information, but here's the scoop:

Please check out the source code from Apache's git repository:

git clone https://git-wip-us.apache.org/repos/asf/mesos.git

or if you prefer GitHub, use the GitHub mirror:

git clone git://github.com/apache/mesos.git

For issue tracking and patches we use Apache-maintained infrastructure: the JIRA issue tracker instead of GitHub issues, and Review Board instead of pull requests for patches.

Other information, including documentation and a getting started guide, is available on the Mesos website: http://mesos.apache.org

Thanks!

-- The Mesos developers

logstash's People

Contributors

alwqx, floriangrundig, frankscholten, mwl, philwinder, sadovnikov, smw, suppandi, swemail


logstash's Issues

LogStreamTest#cutsMultibyteUnicodeCharactersInHalf() failure on Ubuntu

Executing ./gradlew -a --info clean build :system-test:test on Ubuntu results in a LogStreamTest#cutsMultibyteUnicodeCharactersInHalf() test failure:

org.junit.ComparisonFailure: expected:<�[��r��]an> but was:<�[]an>

Build log also contains these lines

Executing org.gradle.api.internal.tasks.compile.ApiGroovyCompiler@3707fb9 in compiler daemon.
Compiling with JDK Java compiler API.
/host/sources/mesos-logstash/logstash-executor/src/test/java/org/apache/mesos/logstash/executor/LogStreamTest.java:59: error: unmappable character for encoding ASCII
        String testString = "        Fl??r??an";

Is source code encoding missing?
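If so, a common fix is to force UTF-8 for all Java compilation so the build no longer depends on the host locale (a sketch; the exact placement depends on the project's build scripts):

```groovy
// build.gradle: compile all Java sources as UTF-8 regardless of the
// platform default encoding (e.g. ASCII under a POSIX locale)
tasks.withType(JavaCompile) {
    options.encoding = 'UTF-8'
}
```

Applied in the root build script (or inside `allprojects { ... }`), this covers every subproject's compile tasks.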

Connect logstash to ElasticSearch

Tasks:

  • Deploy a singlenode ES to minimesos (@mwl)
  • Deploy Logstash to minimesos and point it to ElasticSearch (@jhftrifork)
  • Run logger -n localhost:???? "RANDOM GENERATED TOKEN" on a random node
  • Assert that ES has "RANDOM GENERATED TOKEN".
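For reference, the Logstash side of such a test could be a minimal pipeline like the sketch below. The host name and ports are placeholders, and the option names assume a Logstash 1.x `elasticsearch` output:

```
input {
  syslog {
    port => 5514   # placeholder port for the logger(1) invocation above
  }
}
output {
  elasticsearch {
    host     => "elasticsearch.service"   # placeholder single-node ES address
    protocol => "http"
    port     => 9200
  }
}
```

The assertion step would then query ES for the generated token, e.g. via the `_search` API.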

The web UI should provide assistance for configuration validation

The web UI should provide facilities for executing (and capturing the output of) a command such as

logstash --config-test

on each slave, thereby verifying that the configuration file is correctly written and doesn't contain any syntax errors.

OR

The web UI should allow the user to view logstash's own log file, in which configuration problems are typically reported.

Failed to detect a master: Failed to parse data of unknown label 'json.info'

Here is my logstash json for marathon

{
   "id": "/logstash",
   "cpus": 2,
   "mem": 1024.0,
   "instances": 1,
   "container": {
     "type": "DOCKER",
     "docker": {
       "image": "mesos/logstash-scheduler:0.0.6",
       "network": "HOST"
     }
   },
   "env": {
     "JAVA_OPTS": "-Dmesos.logstash.framework.name=logstash_framework -Dmesos.zk=zk://node-1:2181,node-2:2181,node-3:2181/mesos"
   }
 }
I1215 16:28:02.958233 17601 exec.cpp:133] Version: 0.24.1
I1215 16:28:02.961136 17609 exec.cpp:207] Executor registered on slave 20151215-143223-1750233610-5050-1-S2
Warning: '-c' is deprecated, it will be replaced by '--cpu-shares' soon. See usage.
2015-12-15 16:28:08,078:6(0x7fd5d76de700):ZOO_INFO@log_env@712: Client environment:zookeeper.version=zookeeper C client 3.4.5
2015-12-15 16:28:08,078:6(0x7fd5d76de700):ZOO_INFO@log_env@716: Client environment:host.name=OCLD1LX-MESOS-S3
2015-12-15 16:28:08,078:6(0x7fd5d76de700):ZOO_INFO@log_env@723: Client environment:os.name=Linux
2015-12-15 16:28:08,078:6(0x7fd5d76de700):ZOO_INFO@log_env@724: Client environment:os.arch=3.10.0-327.3.1.el7.x86_64
2015-12-15 16:28:08,078:6(0x7fd5d76de700):ZOO_INFO@log_env@725: Client environment:os.version=#1 SMP Wed Dec 9 14:09:15 UTC 2015
2015-12-15 16:28:08,078:6(0x7fd5d76de700):ZOO_INFO@log_env@733: Client environment:user.name=(null)
2015-12-15 16:28:08,078:6(0x7fd5d76de700):ZOO_INFO@log_env@741: Client environment:user.home=/root
2015-12-15 16:28:08,078:6(0x7fd5d76de700):ZOO_INFO@log_env@753: Client environment:user.dir=/
2015-12-15 16:28:08,078:6(0x7fd5d76de700):ZOO_INFO@zookeeper_init@786: Initiating client connection, host=node-1:2181,node-2:2181,node-3:2181 sessionTimeout=20000 watcher=0x7fd601bd2a60 sessionId=0 sessionPasswd=<null> context=0x7fd5cc003270 flags=0
2015-12-15 16:28:08,095:6(0x7fd5d2ed5700):ZOO_INFO@check_events@1703: initiated connection to server [10.114.82.117:2181]
2015-12-15 16:28:08,097:6(0x7fd5d2ed5700):ZOO_INFO@check_events@1750: session establishment complete on server [10.114.82.117:2181], sessionId=0x351a616af790050, negotiated timeout=20000
2015-12-15 16:28:08,314:6(0x7fd5d46d8700):ZOO_INFO@log_env@712: Client environment:zookeeper.version=zookeeper C client 3.4.5
2015-12-15 16:28:08,314:6(0x7fd5d46d8700):ZOO_INFO@log_env@716: Client environment:host.name=OCLD1LX-MESOS-S3
2015-12-15 16:28:08,314:6(0x7fd5d46d8700):ZOO_INFO@log_env@723: Client environment:os.name=Linux
2015-12-15 16:28:08,314:6(0x7fd5d46d8700):ZOO_INFO@log_env@724: Client environment:os.arch=3.10.0-327.3.1.el7.x86_64
2015-12-15 16:28:08,314:6(0x7fd5d46d8700):ZOO_INFO@log_env@725: Client environment:os.version=#1 SMP Wed Dec 9 14:09:15 UTC 2015
I1215 16:28:08.314399    21 sched.cpp:157] Version: 0.22.1
2015-12-15 16:28:08,314:6(0x7fd5d46d8700):ZOO_INFO@log_env@733: Client environment:user.name=(null)
2015-12-15 16:28:08,314:6(0x7fd5d46d8700):ZOO_INFO@log_env@741: Client environment:user.home=/root
2015-12-15 16:28:08,314:6(0x7fd5d46d8700):ZOO_INFO@log_env@753: Client environment:user.dir=/
2015-12-15 16:28:08,314:6(0x7fd5d46d8700):ZOO_INFO@zookeeper_init@786: Initiating client connection, host=node-1:2181,node-2:2181,node-3:2181 sessionTimeout=10000 watcher=0x7fd601bd2a60 sessionId=0 sessionPasswd=<null> context=0x1fe52e0 flags=0
2015-12-15 16:28:08,336:6(0x7fd5d1ed3700):ZOO_INFO@check_events@1703: initiated connection to server [10.115.255.228:2181]
2015-12-15 16:28:08,346:6(0x7fd5d1ed3700):ZOO_INFO@check_events@1750: session establishment complete on server [10.115.255.228:2181], sessionId=0x151a613b0760051, negotiated timeout=10000
I1215 16:28:08.349599    37 group.cpp:313] Group process (group(1)@10.114.49.179:56925) connected to ZooKeeper
I1215 16:28:08.349645    37 group.cpp:790] Syncing group operations: queue size (joins, cancels, datas) = (0, 0, 0)
I1215 16:28:08.349659    37 group.cpp:385] Trying to create path '/mesos' in ZooKeeper
I1215 16:28:08.395557    37 detector.cpp:138] Detected a new leader: (id='25')
I1215 16:28:08.395725    37 group.cpp:659] Trying to get '/mesos/json.info_0000000025' in ZooKeeper
Failed to detect a master: Failed to parse data of unknown label 'json.info'

Use explicit task reconciliation

Currently we're using implicit task reconciliation, which may take too long before the scheduler accepts new offers or UI updates are shown.

Logs output an error when an observed container is stopped by e.g. Marathon

13:40:42.487 [pool-1-thread-1] INFO o.a.m.l.executor.ConfigManager - New Containers Discovered. Reconfiguring...
13:40:42.488 [pool-1-thread-1] INFO o.a.m.l.executor.ConfigManager - Stop streaming of DockerLogPath: Framework busybox:latest (ContainerID be6f41f62113980abe24e0724ea8a0acc64638d18c448ab5d6f8a45a220ee996) - path: /var/log/test.log
13:40:42.488 [pool-1-thread-1] DEBUG o.a.m.l.e.docker.DockerStreamer - Killing logstash process in container be6f41f62113980abe24e0724ea8a0acc64638d18c448ab5d6f8a45a220ee996 - logstash pid 155
13:40:42.503 [pool-1-thread-1] ERROR o.a.m.l.executor.docker.DockerClient - Error executing in container be6f41f62113980abe24e0724ea8a0acc64638d18c448ab5d6f8a45a220ee996: com.spotify.docker.client.DockerRequestException: Request error: POST http://..:2376/v1.12/containers/be6f41f62113980abe24e0724ea8a0acc64638d18c448ab5d6f8a45a220ee996/exec: 500

Implement option for enforcing execution on all slaves

Currently, the scheduler launches a new logstash instance on every slave that it is offered.

If another framework already utilizes a slave heavily, that slave will not be offered to us and will therefore not run logstash.

We should consider providing a mechanism that enforces that the scheduler be offered all slaves in the cluster.
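Whatever offer-level mechanism is chosen, the scheduler would need bookkeeping to know which slaves are still uncovered. A minimal sketch (class and method names are illustrative, not the scheduler's real API):

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch: track which slaves already run a logstash executor,
// so the scheduler can accept offers only from uncovered slaves and can
// report slaves that are never offered to it.
class ClusterCoverage {
    private final Set<String> running = new HashSet<>();

    // Record that a logstash executor was launched on this slave.
    void markRunning(String slaveId) {
        running.add(slaveId);
    }

    // Accept an offer only if its slave does not yet run logstash.
    boolean shouldAccept(String slaveId) {
        return !running.contains(slaveId);
    }

    // Slaves known to the cluster that still lack a logstash executor;
    // a non-empty result could trigger a warning in the UI.
    Set<String> uncovered(Set<String> allSlaves) {
        Set<String> missing = new HashSet<>(allSlaves);
        missing.removeAll(running);
        return missing;
    }
}
```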

Failed to detect a master: Failed to parse data of unknown label 'json.info'

Below is the json file which I am using

{
   "id": "mist-logstash",
   "cpus": 1,
   "mem": 1024.0,
   "instances": 1,
   "container": {
     "type": "DOCKER",
     "docker": {
       "image": "mesos/logstash-scheduler:0.0.6",
       "network": "HOST"
     }
   },
   "env": {
     "JAVA_OPTS": "-Dmesos.logstash.framework.name=logstash -Dmesos.zk=zk://zkhost:2181/mesos"
   }
 }

I am still getting the error below:

2016-01-16 01:12:19,956:5(0x7f80a1ffb700):ZOO_INFO@check_events@1703: initiated connection to server [172.31.4.72:2181]
2016-01-16 01:12:19,959:5(0x7f80a1ffb700):ZOO_INFO@check_events@1750: session establishment complete on server [172.31.4.72:2181], sessionId=0x1516706148feac2, negotiated timeout=10000
I0116 01:12:19.959508    34 group.cpp:313] Group process (group(1)@172.31.4.28:59624) connected to ZooKeeper
I0116 01:12:19.959650    34 group.cpp:790] Syncing group operations: queue size (joins, cancels, datas) = (0, 0, 0)
I0116 01:12:19.959738    34 group.cpp:385] Trying to create path '/mesos' in ZooKeeper
I0116 01:12:19.980105    34 detector.cpp:138] Detected a new leader: (id='85')
I0116 01:12:19.980309    34 group.cpp:659] Trying to get '/mesos/json.info_0000000085' in ZooKeeper
Failed to detect a master: Failed to parse data of unknown label 'json.info'

Mesos version is 0.25.

Executors keep running after removing the framework

I installed mesos/logstash in my DCOS cluster as a Marathon app with options like the ones below

"JAVA_OPTS": "-Xmx256m -Dmesos.logstash.web.port=9092 -Dmesos.logstash.framework.name=marathon-logstashv1 -Dmesos.logstash.logstash.heap.size=256 -Dmesos.logstash.executor.heap.size=512 -Dmesos.zk=zk://master.mesos:2181 -Dmesos.logstash.volumes=/var/log/mesos"

but it never worked; among other issues, the docker containers' log files never got observed.

When we killed the app, the logstash executors kept running, and even after we removed the logstash framework from ZooKeeper, the logstash tasks kept running.

Following the DCOS instructions, I just get TASK_FAILED when launching the framework

Following the DCOS instructions in the readme, I launched the framework. I used the basic logstash-options.json provided by the example:

{
    "logstash": {
        "executor" : {
            "volumes" : "/var/log/mesos"
        }
    }
}

When I run the framework, it deploys, but fails. Here is a full copy of stderr:

mesos-docker-executor: /lib64/libcurl.so.4: no version information available (required by /opt/mesosphere/packages/mesos--8467f0ef9a5aa54a502e0e0ab0f6515d8aecf00f/lib/libmesos-0.24.1.so)
I1214 20:31:52.896688 11492 exec.cpp:133] Version: 0.24.1
I1214 20:31:52.899741 11496 exec.cpp:207] Executor registered on slave 20151214-193035-2030436362-5050-2211-S3
2015-12-14 20:31:58,816:6(0x7f06e2f9a700):ZOO_INFO@log_env@712: Client environment:zookeeper.version=zookeeper C client 3.4.5
2015-12-14 20:31:58,816:6(0x7f06e2f9a700):ZOO_INFO@log_env@716: Client environment:host.name=ip-10-0-1-163.us-west-2.compute.internal
2015-12-14 20:31:58,816:6(0x7f06e2f9a700):ZOO_INFO@log_env@723: Client environment:os.name=Linux
2015-12-14 20:31:58,816:6(0x7f06e2f9a700):ZOO_INFO@log_env@724: Client environment:os.arch=4.1.7-coreos
2015-12-14 20:31:58,816:6(0x7f06e2f9a700):ZOO_INFO@log_env@725: Client environment:os.version=#2 SMP Wed Sep 16 22:54:37 UTC 2015
2015-12-14 20:31:58,817:6(0x7f06e2f9a700):ZOO_INFO@log_env@733: Client environment:user.name=(null)
2015-12-14 20:31:58,817:6(0x7f06e2f9a700):ZOO_INFO@log_env@741: Client environment:user.home=/root
2015-12-14 20:31:58,817:6(0x7f06e2f9a700):ZOO_INFO@log_env@753: Client environment:user.dir=/
2015-12-14 20:31:58,817:6(0x7f06e2f9a700):ZOO_INFO@zookeeper_init@786: Initiating client connection, host=master.mesos:2181 sessionTimeout=20000 watcher=0x7f07394cfa60 sessionId=0 sessionPasswd=<null> context=0x7f06c0000930 flags=0
2015-12-14 20:31:58,819:6(0x7f06de791700):ZOO_INFO@check_events@1703: initiated connection to server [10.0.6.122:2181]
2015-12-14 20:31:58,822:6(0x7f06de791700):ZOO_INFO@check_events@1750: session establishment complete on server [10.0.6.122:2181], sessionId=0x351a1f8ae2403cb, negotiated timeout=20000
2015-12-14 20:31:59,066:6(0x7f06e2799700):ZOO_INFO@log_env@712: Client environment:zookeeper.version=zookeeper C client 3.4.5
2015-12-14 20:31:59,066:6(0x7f06e2799700):ZOO_INFO@log_env@716: Client environment:host.name=ip-10-0-1-163.us-west-2.compute.internal
2015-12-14 20:31:59,066:6(0x7f06e2799700):ZOO_INFO@log_env@723: Client environment:os.name=Linux
2015-12-14 20:31:59,066:6(0x7f06e2799700):ZOO_INFO@log_env@724: Client environment:os.arch=4.1.7-coreos
2015-12-14 20:31:59,066:6(0x7f06e2799700):ZOO_INFO@log_env@725: Client environment:os.version=#2 SMP Wed Sep 16 22:54:37 UTC 2015
I1214 20:31:59.066318    21 sched.cpp:157] Version: 0.22.1
2015-12-14 20:31:59,066:6(0x7f06e2799700):ZOO_INFO@log_env@733: Client environment:user.name=(null)
2015-12-14 20:31:59,066:6(0x7f06e2799700):ZOO_INFO@log_env@741: Client environment:user.home=/root
2015-12-14 20:31:59,066:6(0x7f06e2799700):ZOO_INFO@log_env@753: Client environment:user.dir=/
2015-12-14 20:31:59,066:6(0x7f06e2799700):ZOO_INFO@zookeeper_init@786: Initiating client connection, host=master.mesos:2181 sessionTimeout=10000 watcher=0x7f07394cfa60 sessionId=0 sessionPasswd=<null> context=0x7f06cc001720 flags=0
2015-12-14 20:31:59,070:6(0x7f06dd78f700):ZOO_INFO@check_events@1703: initiated connection to server [10.0.6.123:2181]
2015-12-14 20:31:59,073:6(0x7f06dd78f700):ZOO_INFO@check_events@1750: session establishment complete on server [10.0.6.123:2181], sessionId=0x151a1f8c66a03e9, negotiated timeout=10000
I1214 20:31:59.076843    39 group.cpp:313] Group process (group(1)@10.0.1.163:38057) connected to ZooKeeper
I1214 20:31:59.076889    39 group.cpp:790] Syncing group operations: queue size (joins, cancels, datas) = (0, 0, 0)
I1214 20:31:59.076936    39 group.cpp:385] Trying to create path '/mesos' in ZooKeeper
I1214 20:31:59.103050    39 detector.cpp:138] Detected a new leader: (id='2')
I1214 20:31:59.103186    39 group.cpp:659] Trying to get '/mesos/json.info_0000000002' in ZooKeeper
Failed to detect a master: Failed to parse data of unknown label 'json.info'

and for good measure, stdout:

--container="mesos-20151214-193035-2030436362-5050-2211-S7.34616e7e-c7b5-4ed4-a2cb-b9edc43a4579" --docker="docker" --help="false" --initialize_driver_logging="true" --logbufsecs="0" --logging_level="INFO" --mapped_directory="/mnt/mesos/sandbox" --quiet="false" --sandbox_directory="/var/lib/mesos/slave/slaves/20151214-193035-2030436362-5050-2211-S7/frameworks/20151214-193035-2030436362-5050-2211-0000/executors/logstash.b95cb352-a2a2-11e5-9dd5-0242474a1239/runs/34616e7e-c7b5-4ed4-a2cb-b9edc43a4579" --stop_timeout="0ns"
--container="mesos-20151214-193035-2030436362-5050-2211-S7.34616e7e-c7b5-4ed4-a2cb-b9edc43a4579" --docker="docker" --help="false" --initialize_driver_logging="true" --logbufsecs="0" --logging_level="INFO" --mapped_directory="/mnt/mesos/sandbox" --quiet="false" --sandbox_directory="/var/lib/mesos/slave/slaves/20151214-193035-2030436362-5050-2211-S7/frameworks/20151214-193035-2030436362-5050-2211-0000/executors/logstash.b95cb352-a2a2-11e5-9dd5-0242474a1239/runs/34616e7e-c7b5-4ed4-a2cb-b9edc43a4579" --stop_timeout="0ns"
Registered docker executor on ip-10-0-0-6.us-west-2.compute.internal
Starting task logstash.b95cb352-a2a2-11e5-9dd5-0242474a1239
20:39:09.837 [main] INFO  o.a.m.logstash.scheduler.Application - Starting Application on ip-10-0-0-6.us-west-2.compute.internal with PID 6 (/tmp/logstash-scheduler.jar started by root in /)
20:39:09.840 [main] DEBUG o.a.m.logstash.scheduler.Application - Running with Spring Boot v1.2.5.RELEASE, Spring v4.1.7.RELEASE
20:39:09.884 [main] INFO  o.s.b.c.e.AnnotationConfigEmbeddedWebApplicationContext - Refreshing org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext@2de7eb69: startup date [Mon Dec 14 20:39:09 UTC 2015]; root of context hierarchy
20:39:11.224 [main] INFO  o.s.b.f.s.DefaultListableBeanFactory - Overriding bean definition for bean 'beanNameViewResolver': replacing [Root bean: class [null]; scope=; abstract=false; lazyInit=false; autowireMode=3; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=org.springframework.boot.autoconfigure.web.ErrorMvcAutoConfiguration$WhitelabelErrorViewConfiguration; factoryMethodName=beanNameViewResolver; initMethodName=null; destroyMethodName=(inferred); defined in class path resource [org/springframework/boot/autoconfigure/web/ErrorMvcAutoConfiguration$WhitelabelErrorViewConfiguration.class]] with [Root bean: class [null]; scope=; abstract=false; lazyInit=false; autowireMode=3; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=org.springframework.boot.autoconfigure.web.WebMvcAutoConfiguration$WebMvcAutoConfigurationAdapter; factoryMethodName=beanNameViewResolver; initMethodName=null; destroyMethodName=(inferred); defined in class path resource [org/springframework/boot/autoconfigure/web/WebMvcAutoConfiguration$WebMvcAutoConfigurationAdapter.class]]
20:39:11.633 [main] INFO  o.h.validator.internal.util.Version - HV000001: Hibernate Validator 5.1.3.Final
20:39:12.381 [main] INFO  o.s.b.c.e.j.JettyEmbeddedServletContainerFactory - Server initialized with port: 9092
20:39:12.386 [main] INFO  org.eclipse.jetty.server.Server - jetty-9.2.11.v20150529
20:39:12.620 [main] INFO  / - Initializing Spring embedded WebApplicationContext
20:39:12.621 [main] INFO  o.s.web.context.ContextLoader - Root WebApplicationContext: initialization completed in 2740 ms
20:39:13.866 [main] INFO  o.s.b.c.e.ServletRegistrationBean - Mapping servlet: 'dispatcherServlet' to [/]
20:39:13.870 [main] INFO  o.s.b.c.e.FilterRegistrationBean - Mapping filter: 'characterEncodingFilter' to: [/*]
20:39:13.870 [main] INFO  o.s.b.c.e.FilterRegistrationBean - Mapping filter: 'hiddenHttpMethodFilter' to: [/*]
20:39:14.298 [main] INFO  o.e.j.server.handler.ContextHandler - Started o.s.b.c.e.j.JettyEmbeddedWebAppContext@3648664e{/,null,AVAILABLE}
20:39:14.298 [main] INFO  org.eclipse.jetty.server.Server - Started @5782ms
20:39:14.446 [main] INFO  o.a.m.logstash.config.ConfigManager - Fetched latest config: null
20:39:14.488 [main] DEBUG o.a.m.l.scheduler.LogstashScheduler - Setting webuiUrl to http:\/\/ip-10-0-0-6.us-west-2.compute.internal:9092
20:39:14.491 [main] DEBUG o.a.m.l.scheduler.LogstashScheduler - Setting webuiUrl to http:\/\/ip-10-0-0-6.us-west-2.compute.internal:9092
20:39:14.493 [main] INFO  o.a.m.l.scheduler.LogstashScheduler - Starting Logstash Framework: 
user: "root"
name: "logstash"
failover_timeout: 3.14496E7
checkpoint: true
role: "*"
webui_url: "http:\\/\\/ip-10-0-0-6.us-west-2.compute.internal:9092"

What should I do differently to get this going?

Core dump (outside of DCOS)

Hi,

Is this supported outside of DCOS? When I try to manually run the logstash-scheduler like so:

docker run --net=host -e 'JAVA_OPTS=-Xmx256m -Dmesos.logstash.web.port=9092 -Dmesos.logstash.framework.name=logstash -Dmesos.logstash.logstash.heap.size=512 -Dmesos.logstash.executor.heap.size=256 -Dmesos.zk=zk://127.0.0.1:2181/mesos -Dmesos.logstash.volumes=/var/log/mesos' mesos/logstash-scheduler:0.0.6

...I'm getting a core dump:

15:53:01.543 [main] INFO  o.a.m.logstash.scheduler.Application - Starting Application with PID 9 (/tmp/logstash-scheduler.jar started by root in /)
15:53:01.554 [main] DEBUG o.a.m.logstash.scheduler.Application - Running with Spring Boot v1.2.5.RELEASE, Spring v4.1.7.RELEASE
15:53:02.344 [main] INFO  o.s.b.c.e.AnnotationConfigEmbeddedWebApplicationContext - Refreshing org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext@7f38eabc: startup date [Wed Sep 16 15:53:02 UTC 2015]; root of context hierarchy
15:53:04.984 [main] INFO  o.s.b.f.s.DefaultListableBeanFactory - Overriding bean definition for bean 'beanNameViewResolver': replacing [Root bean: class [null]; scope=; abstract=false; lazyInit=false; autowireMode=3; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=org.springframework.boot.autoconfigure.web.ErrorMvcAutoConfiguration$WhitelabelErrorViewConfiguration; factoryMethodName=beanNameViewResolver; initMethodName=null; destroyMethodName=(inferred); defined in class path resource [org/springframework/boot/autoconfigure/web/ErrorMvcAutoConfiguration$WhitelabelErrorViewConfiguration.class]] with [Root bean: class [null]; scope=; abstract=false; lazyInit=false; autowireMode=3; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=org.springframework.boot.autoconfigure.web.WebMvcAutoConfiguration$WebMvcAutoConfigurationAdapter; factoryMethodName=beanNameViewResolver; initMethodName=null; destroyMethodName=(inferred); defined in class path resource [org/springframework/boot/autoconfigure/web/WebMvcAutoConfiguration$WebMvcAutoConfigurationAdapter.class]]
15:53:05.726 [main] INFO  o.h.validator.internal.util.Version - HV000001: Hibernate Validator 5.1.3.Final
15:53:07.036 [main] INFO  o.s.b.c.e.j.JettyEmbeddedServletContainerFactory - Server initialized with port: 9092
15:53:07.043 [main] INFO  org.eclipse.jetty.server.Server - jetty-9.2.11.v20150529
15:53:07.344 [main] INFO  / - Initializing Spring embedded WebApplicationContext
15:53:07.345 [main] INFO  o.s.web.context.ContextLoader - Root WebApplicationContext: initialization completed in 5144 ms
15:53:09.241 [main] INFO  o.s.b.c.e.ServletRegistrationBean - Mapping servlet: 'dispatcherServlet' to [/]
15:53:09.249 [main] INFO  o.s.b.c.e.FilterRegistrationBean - Mapping filter: 'characterEncodingFilter' to: [/*]
15:53:09.250 [main] INFO  o.s.b.c.e.FilterRegistrationBean - Mapping filter: 'hiddenHttpMethodFilter' to: [/*]
15:53:10.080 [main] INFO  o.e.j.server.handler.ContextHandler - Started o.s.b.c.e.j.JettyEmbeddedWebAppContext@1112dfb5{/,null,AVAILABLE}
15:53:10.084 [main] INFO  org.eclipse.jetty.server.Server - Started @11379ms
WARNING: Logging before InitGoogleLogging() is written to STDERR
F0916 15:53:10.293172    19 process.cpp:889] Name or service not known
*** Check failure stack trace: ***
/tmp/start-scheduler.sh: line 2:     9 Aborted                 (core dumped) java $JAVA_OPTS -Djava.library.path=/usr/local/lib -jar /tmp/logstash-scheduler.jar

Any idea? Thanks.

Enhancement Idea: match on mesos task id

It would be nice to match docker containers on mesos task id in addition to docker image name. This would support the usage where the same docker image is used by different applications.

gradlew build fails

After the Nov 13 update, the build fails with the message:

Could not find method idea() for arguments [build_2fao9yuf8kuoutww4z5toe0rq$_run_closure4@5ed0b4e3] on project ':logstash-commons'.
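This error usually means an `idea { ... }` block is being evaluated in a project where the IDEA plugin is not applied. Assuming the closure itself is wanted, the usual fix (a sketch) is to apply the plugin in the affected project's build.gradle before the block runs:

```groovy
// ':logstash-commons' needs the plugin before an idea { ... } block can run
apply plugin: 'idea'
```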

Warn the user if specifying an inaccessible slave-local file for logging

Currently, the user can specify files on the mesos slave itself to be logged (e.g. /var/log/mesos/mesos-info.log). However, if the user hasn't also specified e.g. /var/log/mesos as a volume (in the logstash-mesos configuration), that file will not be accessible from within the executor, which means we cannot log it.

We should warn the user whenever we encounter host paths in our configurations that are not contained inside a volume.
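The check itself is a path-prefix test; a minimal sketch (class and method names are illustrative, not the project's API):

```java
import java.util.List;

// Illustrative sketch of the proposed warning: a slave-local file path is
// only accessible inside the executor if it lies under one of the
// configured volumes.
class VolumeCheck {
    static boolean isInsideVolume(String hostPath, List<String> volumes) {
        for (String volume : volumes) {
            // Require a path-component boundary so /var/log does not
            // accidentally match /var/logstash.
            String prefix = volume.endsWith("/") ? volume : volume + "/";
            if (hostPath.equals(volume) || hostPath.startsWith(prefix)) {
                return true;
            }
        }
        return false;
    }
}
```

Any configured host path for which this returns false would trigger the warning.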

Scheduler should remove its frameworkId from zookeeper on shutdown

Currently it is not possible to restart a scheduler once it has terminated, because the path /logstash/frameworkId already exists in ZooKeeper and we get a collision.

The scheduler should make sure it erases its frameworkId as soon as it terminates so that it can be restarted.

Tag log events with ID of Mesos slave

Each Logstash instance should tag all the log events that go through it with the id of that Logstash instance (e.g. the id of the Mesos slave it is running on).

Motivations:

  • debuggability and auditing of where logs came from
  • disambiguation of some kinds of log line: e.g. if a Mesos slave complains "out of memory", we'll get an "out of memory" log line but won't necessarily know which slave is complaining
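A sketch of what that tagging could look like in the generated Logstash configuration, assuming the executor substitutes the real slave ID when it templates the file (the field name is illustrative):

```
filter {
  mutate {
    # the executor would fill in the ID of the slave it runs on
    add_field => { "mesos_slave_id" => "20151214-193035-2030436362-5050-2211-S3" }
  }
}
```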

Executor should use a more reliable mechanism to observe files

Currently, if we want to observe a log file, we stream its content into a separate file inside the executor container; that mirrored file is what logstash actually observes. To limit the size of these mirrored files we cut the content off after a (currently fixed) 5 megabytes: on reaching the limit we delete the mirrored file and start again from position 0. With this approach we cannot guarantee that the content written before the reset was processed by logstash, so some lines may never be observed and therefore never monitored.
Replace the current file mirroring and truncation with something more reliable.
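One possible direction (a minimal sketch, not the project's actual streaming code) is to track a byte offset per observed file and read only what was appended since the last poll, falling back to offset 0 when the file shrinks (rotation or truncation):

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.charset.StandardCharsets;

// Illustrative offset-based tailer: remembers how far it has read and
// returns only newly appended bytes, so nothing is mirrored or deleted.
class OffsetTail {
    private final String path;
    private long offset = 0;

    OffsetTail(String path) {
        this.path = path;
    }

    // Returns the content appended since the last call ("" if none).
    String readNew() throws IOException {
        try (RandomAccessFile f = new RandomAccessFile(path, "r")) {
            long len = f.length();
            if (len < offset) {
                offset = 0; // file was rotated or truncated; start over
            }
            if (len == offset) {
                return "";
            }
            f.seek(offset);
            byte[] buf = new byte[(int) (len - offset)];
            f.readFully(buf);
            offset = len;
            return new String(buf, StandardCharsets.UTF_8);
        }
    }
}
```

A real implementation would also need to persist offsets across executor restarts and cope with multibyte characters split across reads.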

DCOS configuration

Provide DCOS configuration to launch the framework from the DCOS cli.

The web UI should show the logstash configuration file that each executor is using

We perform a lot of "configuration magic" based on which docker containers we encounter on each mesos slave.

It should be possible to see exactly how the logstash process is configured within each executor in the cluster.

Implementation note:
The executors already produce their own logstash configuration files, so implementing this is just a matter of attaching the whole configuration string when sending a status update back to the scheduler.
