
logstash-docker's Introduction

This repository is no longer used to generate the official Logstash Docker image from Elastic.

To build Logstash docker images for pre-6.6 releases, switch branches in this repo to the matching release.

logstash-docker's People

Contributors

conky5, dliappis, hkulekci, jarpy, jonahbull, jordansissel, mgreau, robbavey, vberetti, ycombinator



logstash-docker's Issues

entrypoint: logstash -e doesn't work anymore with 6.1.2

I use the Logstash Docker image within a stack, with the following service definition:

    image: docker.elastic.co/logstash/logstash:6.1.2
    environment:
      config.support_escapes: "true"
    logging:
      driver: "json-file"
    networks:
      - logging
    ports:
      - "12201:12201"
    entrypoint: logstash -e 'input { ........
                        output { stdout{ } elasticsearch { hosts => ["http://elasticsearch:9200"] } }'

Unfortunately this causes: ERROR: Settings 'path.config' (-f) and 'config.string' (-e) can't be used simultaneously.

I don't understand why this happens, as I don't specify -f anywhere in the entrypoint. How can I get this to work?
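For context, the 6.x image ships a default pipelines.yml that sets path.config, which appears to be the source of the implicit -f. A hedged workaround, sketched under that assumption, is to mask the bundled pipelines.yml and pass the config string via command instead of entrypoint ("my-pipelines.yml" is a hypothetical local file that omits path.config):

```yaml
services:
  logstash:
    image: docker.elastic.co/logstash/logstash:6.1.2
    # Override command rather than entrypoint so the image's wrapper
    # script still runs; the -e string here is a minimal placeholder.
    command: logstash -e 'input { stdin {} } output { stdout {} }'
    volumes:
      # Replace the default pipelines.yml so path.config is not set.
      - ./my-pipelines.yml:/usr/share/logstash/config/pipelines.yml
```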

Output logs on host machine

300-output.conf:

output {
  elasticsearch { hosts => [ "localhost:9200" ] }
}
Elasticsearch 5.2.2 is running on the host and the Logstash container starts successfully, but no connection can be established:

[2017-03-08T10:21:12,398][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://127.0.0.1:9200/, :path=>"/"}
[2017-03-08T10:21:12,401][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>#<URI::HTTP:0x19b0893 URL:http://127.0.0.1:9200/>, :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::SocketException] Connection refused (Connection refused)"}

Error executing apt-get update on the 5.2.2 image

I get the following error when building a custom image from 5.2.2. I need to install a JDBC driver, and running the Dockerfile below produced the error. The issue is that the user is changed to "logstash". This is easy enough to work around, but very atypical for Docker containers: it should be possible to extend the image with additional drivers easily.

W: chmod 0700 of directory /var/lib/apt/lists/partial failed - SetupAPTPartialDirectory (1: Operation not permitted)
E: Could not open lock file /var/lib/apt/lists/lock - open (13: Permission denied)
E: Unable to lock directory /var/lib/apt/lists/
E: Could not open lock file /var/lib/dpkg/lock - open (13: Permission denied)
E: Unable to lock the administration directory (/var/lib/dpkg/), are you root?

Begin dockerfile

FROM docker.elastic.co/logstash/logstash:5.2.2

RUN apt-get -y update \
 && apt-get install -y wget

End dockerfile

To fix the issue, change the user back to root in the Dockerfile. I think this should be done in the Logstash image itself, though, to keep with convention.
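The workaround described above can be sketched as a derived image (assuming the base image runs as the logstash user by default):

```dockerfile
FROM docker.elastic.co/logstash/logstash:5.2.2

# Switch to root for package installation, which needs apt's lock files.
USER root
RUN apt-get -y update \
 && apt-get install -y wget

# Drop back to the unprivileged user the image normally runs as.
USER logstash
```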

Logstash cannot create log file - Permission denied

I created a log4j2.properties file that saves my logs to a file in a particular folder.

But when I start my container I get the following message: Sending Logstash's logs to /usr/share/logstash/logs which is now configured via log4j2.properties

I changed property path.logs to /usr/share/logstash/logs in file logstash.yml.

I already entered the container and granted every permission I could, but it still fails.

See the complete exception below:

container | Sending Logstash's logs to /usr/share/logstash/logs which is now configured via log4j2.properties
container | 2017-10-05 20:39:11,989 main ERROR Unable to create file my-logstash.log java.io.IOException: Permission denied
container | at java.io.UnixFileSystem.createFileExclusively(Native Method)
container | at java.io.File.createNewFile(File.java:1012)
container | at org.apache.logging.log4j.core.appender.rolling.RollingFileManager$RollingFileManagerFactory.createManager(RollingFileManager.java:421)
container | at org.apache.logging.log4j.core.appender.rolling.RollingFileManager$RollingFileManagerFactory.createManager(RollingFileManager.java:403)
container | at org.apache.logging.log4j.core.appender.AbstractManager.getManager(AbstractManager.java:73)
container | at org.apache.logging.log4j.core.appender.OutputStreamManager.getManager(OutputStreamManager.java:81)
container | at org.apache.logging.log4j.core.appender.rolling.RollingFileManager.getFileManager(RollingFileManager.java:103)
container | at org.apache.logging.log4j.core.appender.RollingFileAppender.createAppender(RollingFileAppender.java:191)
container | at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
container | at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
container | at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
container | at java.lang.reflect.Method.invoke(Method.java:498)
container | at org.apache.logging.log4j.core.config.plugins.util.PluginBuilder.build(PluginBuilder.java:132)
container | at org.apache.logging.log4j.core.config.AbstractConfiguration.createPluginObject(AbstractConfiguration.java:918)
container | at org.apache.logging.log4j.core.config.AbstractConfiguration.createConfiguration(AbstractConfiguration.java:858)
container | at org.apache.logging.log4j.core.config.AbstractConfiguration.createConfiguration(AbstractConfiguration.java:850)
container | at org.apache.logging.log4j.core.config.AbstractConfiguration.doConfigure(AbstractConfiguration.java:479)
container | at org.apache.logging.log4j.core.config.AbstractConfiguration.initialize(AbstractConfiguration.java:219)
container | at org.apache.logging.log4j.core.config.AbstractConfiguration.start(AbstractConfiguration.java:231)
container | at org.apache.logging.log4j.core.LoggerContext.setConfiguration(LoggerContext.java:496)
container | at org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:566)
container | at org.apache.logging.log4j.core.LoggerContext.setConfigLocation(LoggerContext.java:555)
container | at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
container | at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
container | at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
container | at java.lang.reflect.Method.invoke(Method.java:498)
container | at org.jruby.javasupport.JavaMethod.invokeDirectWithExceptionHandling(JavaMethod.java:451)
container | at org.jruby.javasupport.JavaMethod.invokeDirect(JavaMethod.java:312)
container | at org.jruby.java.invokers.InstanceMethodInvoker.call(InstanceMethodInvoker.java:45)
container | at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:326)
container | at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:170)
container | at org.jruby.ast.CallOneArgNode.interpret(CallOneArgNode.java:57)
container | at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:105)
container | at org.jruby.ast.BlockNode.interpret(BlockNode.java:71)
container | at org.jruby.ast.IfNode.interpret(IfNode.java:116)
container | at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:105)
container | at org.jruby.ast.BlockNode.interpret(BlockNode.java:71)
container | at org.jruby.evaluator.ASTInterpreter.INTERPRET_BLOCK(ASTInterpreter.java:112)
container | at org.jruby.runtime.Interpreted19Block.evalBlockBody(Interpreted19Block.java:206)
container | at org.jruby.runtime.Interpreted19Block.yield(Interpreted19Block.java:157)
container | at org.jruby.runtime.Block.yield(Block.java:142)
container | at org.jruby.ext.thread.Mutex.synchronize(Mutex.java:149)
container | at org.jruby.ext.thread.Mutex$INVOKER$i$0$0$synchronize.call(Mutex$INVOKER$i$0$0$synchronize.gen)
container | at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:316)
container | at org.jruby.runtime.callsite.CachingCallSite.callBlock(CachingCallSite.java:145)
container | at org.jruby.runtime.callsite.CachingCallSite.callIter(CachingCallSite.java:154)
container | at org.jruby.ast.CallNoArgBlockNode.interpret(CallNoArgBlockNode.java:64)
container | at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:105)
container | at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74)
container | at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:182)
container | at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:203)
container | at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:326)
container | at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:170)
container | at org.jruby.ast.CallOneArgNode.interpret(CallOneArgNode.java:57)
container | at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:105)
container | at org.jruby.ast.BlockNode.interpret(BlockNode.java:71)
container | at org.jruby.ast.IfNode.interpret(IfNode.java:118)
container | at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:105)
container | at org.jruby.ast.BlockNode.interpret(BlockNode.java:71)
container | at org.jruby.ast.RescueNode.executeBody(RescueNode.java:221)
container | at org.jruby.ast.RescueNode.interpret(RescueNode.java:116)
container | at org.jruby.ast.EnsureNode.interpret(EnsureNode.java:96)
container | at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74)
container | at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:139)
container | at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:187)
container | at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:306)
container | at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:136)
container | at org.jruby.ast.VCallNode.interpret(VCallNode.java:88)
container | at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:105)
container | at org.jruby.ast.BlockNode.interpret(BlockNode.java:71)
container | at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74)
container | at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:204)
container | at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:211)
container | at org.jruby.runtime.callsite.SuperCallSite.cacheAndCall(SuperCallSite.java:366)
container | at org.jruby.runtime.callsite.SuperCallSite.callBlock(SuperCallSite.java:192)
container | at org.jruby.runtime.callsite.SuperCallSite.call(SuperCallSite.java:197)
container | at org.jruby.runtime.callsite.SuperCallSite.callVarargs(SuperCallSite.java:108)
container | at org.jruby.ast.SuperNode.interpret(SuperNode.java:115)
container | at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:105)
container | at org.jruby.ast.BlockNode.interpret(BlockNode.java:71)
container | at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74)
container | at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:182)
container | at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:203)
container | at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:326)
container | at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:170)
container | at org.jruby.ast.CallOneArgNode.interpret(CallOneArgNode.java:57)
container | at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:105)
container | at org.jruby.ast.RescueNode.executeBody(RescueNode.java:221)
container | at org.jruby.ast.RescueNode.interpret(RescueNode.java:116)
container | at org.jruby.ast.BeginNode.interpret(BeginNode.java:83)
container | at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:105)
container | at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74)
container | at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:225)
container | at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:219)
container | at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:346)
container | at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:204)
container | at usr.share.logstash.lib.bootstrap.environment.file(/usr/share/logstash/lib/bootstrap/environment.rb:71)
container | at usr.share.logstash.lib.bootstrap.environment.load(/usr/share/logstash/lib/bootstrap/environment.rb)
container | at org.jruby.Ruby.runScript(Ruby.java:857)
container | at org.jruby.Ruby.runScript(Ruby.java:850)
container | at org.jruby.Ruby.runNormally(Ruby.java:729)
container | at org.jruby.Ruby.runFromMain(Ruby.java:578)
container | at org.jruby.Main.doRunFromMain(Main.java:393)
container | at org.jruby.Main.internalRun(Main.java:288)
container | at org.jruby.Main.run(Main.java:217)
container | at org.jruby.Main.main(Main.java:197)
container |
container | 2017-10-05 20:39:12,055 main ERROR Unable to invoke factory method in class class org.apache.logging.log4j.core.appender.RollingFileAppender for element RollingFile. java.lang.reflect.InvocationTargetException
container | at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
container | at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
container | at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
container | at java.lang.reflect.Method.invoke(Method.java:498)
container | at org.apache.logging.log4j.core.config.plugins.util.PluginBuilder.build(PluginBuilder.java:132)
container | at org.apache.logging.log4j.core.config.AbstractConfiguration.createPluginObject(AbstractConfiguration.java:918)
container | at org.apache.logging.log4j.core.config.AbstractConfiguration.createConfiguration(AbstractConfiguration.java:858)
container | at org.apache.logging.log4j.core.config.AbstractConfiguration.createConfiguration(AbstractConfiguration.java:850)
container | at org.apache.logging.log4j.core.config.AbstractConfiguration.doConfigure(AbstractConfiguration.java:479)
container | at org.apache.logging.log4j.core.config.AbstractConfiguration.initialize(AbstractConfiguration.java:219)
container | at org.apache.logging.log4j.core.config.AbstractConfiguration.start(AbstractConfiguration.java:231)
container | at org.apache.logging.log4j.core.LoggerContext.setConfiguration(LoggerContext.java:496)
container | at org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:566)
container | at org.apache.logging.log4j.core.LoggerContext.setConfigLocation(LoggerContext.java:555)
container | at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
container | at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
container | at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
container | at java.lang.reflect.Method.invoke(Method.java:498)
container | at org.jruby.javasupport.JavaMethod.invokeDirectWithExceptionHandling(JavaMethod.java:451)
container | at org.jruby.javasupport.JavaMethod.invokeDirect(JavaMethod.java:312)
container | at org.jruby.java.invokers.InstanceMethodInvoker.call(InstanceMethodInvoker.java:45)
container | at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:326)
container | at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:170)
container | at org.jruby.ast.CallOneArgNode.interpret(CallOneArgNode.java:57)
container | at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:105)
container | at org.jruby.ast.BlockNode.interpret(BlockNode.java:71)
container | at org.jruby.ast.IfNode.interpret(IfNode.java:116)
container | at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:105)
container | at org.jruby.ast.BlockNode.interpret(BlockNode.java:71)
container | at org.jruby.evaluator.ASTInterpreter.INTERPRET_BLOCK(ASTInterpreter.java:112)
container | at org.jruby.runtime.Interpreted19Block.evalBlockBody(Interpreted19Block.java:206)
container | at org.jruby.runtime.Interpreted19Block.yield(Interpreted19Block.java:157)
container | at org.jruby.runtime.Block.yield(Block.java:142)
container | at org.jruby.ext.thread.Mutex.synchronize(Mutex.java:149)
container | at org.jruby.ext.thread.Mutex$INVOKER$i$0$0$synchronize.call(Mutex$INVOKER$i$0$0$synchronize.gen)
container | at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:316)
container | at org.jruby.runtime.callsite.CachingCallSite.callBlock(CachingCallSite.java:145)
container | at org.jruby.runtime.callsite.CachingCallSite.callIter(CachingCallSite.java:154)
container | at org.jruby.ast.CallNoArgBlockNode.interpret(CallNoArgBlockNode.java:64)
container | at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:105)
container | at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74)
container | at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:182)
container | at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:203)
container | at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:326)
container | at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:170)
container | at org.jruby.ast.CallOneArgNode.interpret(CallOneArgNode.java:57)
container | at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:105)
container | at org.jruby.ast.BlockNode.interpret(BlockNode.java:71)
container | at org.jruby.ast.IfNode.interpret(IfNode.java:118)
container | at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:105)
container | at org.jruby.ast.BlockNode.interpret(BlockNode.java:71)
container | at org.jruby.ast.RescueNode.executeBody(RescueNode.java:221)
container | at org.jruby.ast.RescueNode.interpret(RescueNode.java:116)
container | at org.jruby.ast.EnsureNode.interpret(EnsureNode.java:96)
container | at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74)
container | at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:139)
container | at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:187)
container | at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:306)
container | at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:136)
container | at org.jruby.ast.VCallNode.interpret(VCallNode.java:88)
container | at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:105)
container | at org.jruby.ast.BlockNode.interpret(BlockNode.java:71)
container | at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74)
container | at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:204)
container | at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:211)
container | at org.jruby.runtime.callsite.SuperCallSite.cacheAndCall(SuperCallSite.java:366)
container | at org.jruby.runtime.callsite.SuperCallSite.callBlock(SuperCallSite.java:192)
container | at org.jruby.runtime.callsite.SuperCallSite.call(SuperCallSite.java:197)
container | at org.jruby.runtime.callsite.SuperCallSite.callVarargs(SuperCallSite.java:108)
container | at org.jruby.ast.SuperNode.interpret(SuperNode.java:115)
container | at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:105)
container | at org.jruby.ast.BlockNode.interpret(BlockNode.java:71)
container | at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74)
container | at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:182)
container | at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:203)
container | at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:326)
container | at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:170)
container | at org.jruby.ast.CallOneArgNode.interpret(CallOneArgNode.java:57)
container | at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:105)
container | at org.jruby.ast.RescueNode.executeBody(RescueNode.java:221)
container | at org.jruby.ast.RescueNode.interpret(RescueNode.java:116)
container | at org.jruby.ast.BeginNode.interpret(BeginNode.java:83)
container | at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:105)
container | at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74)
container | at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:225)
container | at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:219)
container | at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:346)
container | at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:204)
container | at usr.share.logstash.lib.bootstrap.environment.file(/usr/share/logstash/lib/bootstrap/environment.rb:71)
container | at usr.share.logstash.lib.bootstrap.environment.load(/usr/share/logstash/lib/bootstrap/environment.rb)
container | at org.jruby.Ruby.runScript(Ruby.java:857)
container | at org.jruby.Ruby.runScript(Ruby.java:850)
container | at org.jruby.Ruby.runNormally(Ruby.java:729)
container | at org.jruby.Ruby.runFromMain(Ruby.java:578)
container | at org.jruby.Main.doRunFromMain(Main.java:393)
container | at org.jruby.Main.internalRun(Main.java:288)
container | at org.jruby.Main.run(Main.java:217)
container | at org.jruby.Main.main(Main.java:197)
container | Caused by: java.lang.IllegalStateException: ManagerFactory [org.apache.logging.log4j.core.appender.rolling.RollingFileManager$RollingFileManagerFactory@3b92466e] unable to create manager for [my-logstash.log] with data [org.apache.logging.log4j.core.appender.rolling.RollingFileManager$FactoryData@7b677e33[pattern=my-logstash-%d{yyyy-MM-dd}.log, append=true, bufferedIO=true, bufferSize=8192, policy=CompositeTriggeringPolicy(policies=[TimeBasedTriggeringPolicy(nextRolloverMillis=0, interval=1, modulate=true)]), strategy=DefaultRolloverStrategy(min=1, max=7), advertiseURI=null, layout=[%d{ISO8601}][%-5p][%-25c{1.}] %marker%.10000m%n]]
container | at org.apache.logging.log4j.core.appender.AbstractManager.getManager(AbstractManager.java:75)
container | at org.apache.logging.log4j.core.appender.OutputStreamManager.getManager(OutputStreamManager.java:81)
container | at org.apache.logging.log4j.core.appender.rolling.RollingFileManager.getFileManager(RollingFileManager.java:103)
container | at org.apache.logging.log4j.core.appender.RollingFileAppender.createAppender(RollingFileAppender.java:191)
container | ... 98 more

Could anyone help me?

Sorry for my English =)
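One workaround that matches the "Permission denied" above, sketched under the assumption that the target directory simply isn't writable by the logstash user, is to pre-create it with the right ownership in a derived image (the image tag is illustrative):

```dockerfile
FROM docker.elastic.co/logstash/logstash:5.6.2

USER root
# Ensure the directory log4j2 writes into is owned by the runtime user.
RUN mkdir -p /usr/share/logstash/logs \
 && chown -R logstash:logstash /usr/share/logstash/logs

USER logstash
```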

issues with latest logstash docker image

Hello,
When trying to run logstash-plugin install --development on the latest Docker image, we get the following errors:

====> Build docker image for test
Sending build context to Docker daemon  124.9kB
Step 1/18 : FROM logstash
latest: Pulling from library/logstash
723254a2c089: Pull complete 
abe15a44e12f: Pull complete 
409a28e3cc3d: Pull complete 
a9511c68044a: Pull complete 
9d1b16e30bc8: Pull complete 
0fc5a09c9242: Pull complete 
d34976006493: Pull complete 
3b70003f0c10: Pull complete 
28c269c66aee: Pull complete 
d262de89628e: Pull complete 
2d6d89279b4d: Pull complete 
5ecd3cb56d42: Pull complete 
e8659ee27256: Pull complete 
7a837fab7c70: Pull complete 
a0b35e7de5d6: Pull complete 
11661c7d961e: Pull complete 
Digest: sha256:c42fbe030992aca1c02751467ae406f7155b77d1413320879d6550c1603ba274
Status: Downloaded newer image for logstash:latest
 ---> e4f449326d36
Step 2/18 : RUN logstash-plugin install --development
 ---> Running in 6bd9c5111185
Installing logstash-devutils, logstash-input-generator, logstash-codec-json, logstash-output-null, benchmark-ips, rspec, logstash-patterns-core, logstash-filter-grok, flores, stud, pry, rspec-wait, childprocess, ftw, logstash-output-elasticsearch, rspec-sequencing, gmetric, gelf, timecop, jdbc-derby, jdbc-mysql, jar-dependencies, logstash-codec-plain, logstash-codec-multiline, logstash-codec-json_lines, addressable, json, gzip, elasticsearch, logstash-filter-kv, logstash-filter-ruby, sinatra, webrick, poseidon, snappy, webmock, logstash-codec-line
Error Bundler::InstallError, retrying 1/10
An error occurred while installing webrick (1.4.2), and Bundler cannot continue.
Make sure that `gem install webrick -v '1.4.2'` succeeds before bundling.
WARNING: can not set Session#timeout=(0) no session context
Error Bundler::InstallError, retrying 2/10
An error occurred while installing webrick (1.4.2), and Bundler cannot continue.
Make sure that `gem install webrick -v '1.4.2'` succeeds before bundling.

Exception: java.lang.ThreadDeath thrown from the UncaughtExceptionHandler in thread "Thread-9"
Error Bundler::InstallError, retrying 3/10
An error occurred while installing webrick (1.4.2), and Bundler cannot continue.
Make sure that `gem install webrick -v '1.4.2'` succeeds before bundling.
WARNING: can not set Session#timeout=(0) no session context
Error Bundler::InstallError, retrying 4/10
An error occurred while installing webrick (1.4.2), and Bundler cannot continue.
Make sure that `gem install webrick -v '1.4.2'` succeeds before bundling.
Error Bundler::Fetcher::CertificateFailureError, retrying 5/10
Could not verify the SSL certificate for https://rubygems.org/quick/Marshal.4.8/rest-client-1.8.0-x86-mswin32.gemspec.rz.
There is a chance you are experiencing a man-in-the-middle attack, but most likely your system doesn't have the CA certificates needed for verification. For information about OpenSSL certificates, see bit.ly/ruby-ssl. To connect without using SSL, edit your Gemfile sources and change 'https' to 'http'.
Error Bundler::Fetcher::CertificateFailureError, retrying 6/10
Could not verify the SSL certificate for https://rubygems.org/quick/Marshal.4.8/rest-client-1.8.0-x86-mswin32.gemspec.rz.
There is a chance you are experiencing a man-in-the-middle attack, but most likely your system doesn't have the CA certificates needed for verification. For information about OpenSSL certificates, see bit.ly/ruby-ssl. To connect without using SSL, edit your Gemfile sources and change 'https' to 'http'.
Error Bundler::Fetcher::CertificateFailureError, retrying 7/10
Could not verify the SSL certificate for https://rubygems.org/quick/Marshal.4.8/rest-client-1.8.0-x86-mswin32.gemspec.rz.
There is a chance you are experiencing a man-in-the-middle attack, but most likely your system doesn't have the CA certificates needed for verification. For information about OpenSSL certificates, see bit.ly/ruby-ssl. To connect without using SSL, edit your Gemfile sources and change 'https' to 'http'.
Error Bundler::Fetcher::CertificateFailureError, retrying 8/10
Could not verify the SSL certificate for https://rubygems.org/quick/Marshal.4.8/rest-client-1.8.0-x86-mswin32.gemspec.rz.
There is a chance you are experiencing a man-in-the-middle attack, but most likely your system doesn't have the CA certificates needed for verification. For information about OpenSSL certificates, see bit.ly/ruby-ssl. To connect without using SSL, edit your Gemfile sources and change 'https' to 'http'.
Error Bundler::Fetcher::CertificateFailureError, retrying 9/10
Could not verify the SSL certificate for https://rubygems.org/quick/Marshal.4.8/rest-client-1.8.0-x86-mswin32.gemspec.rz.
There is a chance you are experiencing a man-in-the-middle attack, but most likely your system doesn't have the CA certificates needed for verification. For information about OpenSSL certificates, see bit.ly/ruby-ssl. To connect without using SSL, edit your Gemfile sources and change 'https' to 'http'.
Error Bundler::Fetcher::CertificateFailureError, retrying 10/10
Could not verify the SSL certificate for https://rubygems.org/quick/Marshal.4.8/rest-client-1.8.0-x86-mswin32.gemspec.rz.
There is a chance you are experiencing a man-in-the-middle attack, but most likely your system doesn't have the CA certificates needed for verification. For information about OpenSSL certificates, see bit.ly/ruby-ssl. To connect without using SSL, edit your Gemfile sources and change 'https' to 'http'.
Too many retries, aborting, caused by Bundler::Fetcher::CertificateFailureError
ERROR: Installation Aborted, message: Could not verify the SSL certificate for https://rubygems.org/quick/Marshal.4.8/rest-client-1.8.0-x86-mswin32.gemspec.rz.
There is a chance you are experiencing a man-in-the-middle attack, but most likely your system doesn't have the CA certificates needed for verification. For information about OpenSSL certificates, see bit.ly/ruby-ssl. To connect without using SSL, edit your Gemfile sources and change 'https' to 'http'.

With regards to the webrick error, I found the following GitHub issue that apparently fixes the issue.

elastic/logstash#8845

Not sure if this is a case of just needing to build a new image, but our Logstash test suite is currently broken due to this error.
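Until the fix referenced above ships in a rebuilt image, one hedged workaround is to pin a versioned image from Elastic's registry rather than building from library/logstash:latest (the tag below is illustrative, not a verified fix):

```dockerfile
# Pinning an explicit release avoids whatever state :latest happens to be in.
FROM docker.elastic.co/logstash/logstash:6.1.3
RUN logstash-plugin install --development
```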

mysql jdbc plugin not working

Hi,

I get following error in logstash docker version 5.5.2

logstash_1       | [2017-09-09T17:12:20,535][ERROR][logstash.agent           ] An exception happened when converging configuration {:exception=>RuntimeError, :message=>"Could not fetch the configuration, message: The following config files contains non-ascii characters but are not UTF-8 encoded [\"/usr/share/logstash/pipeline/mysql-connector-java-5.1.44-bin.jar\"]", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/agent.rb:155:in `converge_state_and_update'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:90:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:359:in `block in execute'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:24:in `block in initialize'"]}

When I run Logstash natively, without Docker, this error does not appear. Is some package missing from the Docker image?
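The error message suggests Logstash is treating the JDBC jar as a pipeline config file because it sits in /usr/share/logstash/pipeline, a directory the image scans for configs. A hedged fix is to mount the driver outside the pipeline directory (the ./drivers path below is an assumption for illustration) and point the jdbc input's jdbc_driver_library at the new location:

```yaml
services:
  logstash:
    image: docker.elastic.co/logstash/logstash:5.5.2
    volumes:
      # Keep only *.conf files in the pipeline directory.
      - ./pipeline:/usr/share/logstash/pipeline
      # Mount the driver somewhere Logstash will not parse as config.
      - ./drivers/mysql-connector-java-5.1.44-bin.jar:/usr/share/logstash/drivers/mysql-connector-java-5.1.44-bin.jar
```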

env2yaml prints passwords to stdout

The env2yaml binary that is used to translate environment variables to a YAML configuration file prints every setting to stdout. This is great for debugging, but does include any passwords that may be set (xpack.monitoring.elasticsearch.password, xpack.monitoring.elasticsearch.ssl.truststore.password, xpack.monitoring.elasticsearch.ssl.keystore.password). Because it's not unusual for container stdout to be ingested into external systems, this could lead to sensitive information leakage.
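A minimal sketch of the kind of redaction the report asks for, in Go since env2yaml is written in Go: mask any value whose setting name looks credential-like before echoing it. The function name and marker list are hypothetical, not env2yaml's actual code.

```go
package main

import (
	"fmt"
	"strings"
)

// maskSensitive returns a placeholder for values whose setting name
// suggests a credential (e.g. xpack.monitoring.elasticsearch.password),
// so secrets never reach stdout in clear text.
func maskSensitive(key, value string) string {
	lower := strings.ToLower(key)
	for _, marker := range []string{"password", "passphrase", "secret"} {
		if strings.Contains(lower, marker) {
			return "[redacted]"
		}
	}
	return value
}

func main() {
	settings := map[string]string{
		"pipeline.workers":                        "4",
		"xpack.monitoring.elasticsearch.password": "s3cret",
	}
	for k, v := range settings {
		fmt.Printf("Setting %q from environment: %s\n", k, maskSensitive(k, v))
	}
}
```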

Use deb package?

Looks like the docker image is based on ubuntu 16.04. Any reason not to use the deb package we publish with Logstash releases?

Logstash fails to start without x-pack

I have a simple docker image that uses the default one and uninstalls x-pack:

ARG VERSION
FROM docker.elastic.co/logstash/logstash:$VERSION

# https://www.elastic.co/guide/en/x-pack/current/installing-xpack.html#xpack-uninstalling
RUN logstash-plugin remove x-pack

Said image is being used in a docker-compose, but the container fails to start with the error message:

[FATAL][logstash.runner          ] An unexpected error occurred! {:error=>#<ArgumentError: Setting "xpack.monitoring.elasticsearch.url" hasn't been registered>, :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/settings.rb:32:in `get_setting'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:64:in `set_value'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:83:in `merge'", "org/jruby/RubyHash.java:1342:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:83:in `merge'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:135:in `validate_all'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:244:in `execute'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/clamp-0.6.5/lib/clamp/command.rb:67:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:209:in `run'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/clamp-0.6.5/lib/clamp/command.rb:132:in `run'", "/usr/share/logstash/lib/bootstrap/environment.rb:71:in `(root)'"]}

The semi-official image from DockerHub seems to be working just fine.

xpack out of the box?

In 71285d3, some X-Pack support was added:
https://github.com/elastic/logstash-docker/blob/master/build/logstash/config/logstash.yml#L2-L4

This causes error spam:

[2017-02-26T23:04:38,679][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>#<URI::HTTP:0x55d22cea URL:http://logstash_system:xxxxxx@elasticsearch:9200/_xpack/monitoring/?system_id=logstash&system_api_version=2&interval=1s>, :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://logstash_system:xxxxxx@elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch"}

This is not the pleasant user experience that this block of code is supposed to provide:

# Provide a minimal configuration, so that simple invocations will provide
# a good experience.

env2yaml config support for cloud.* settings

There's a useful feature in Logstash 6.x which permits users to pass a cloud.id and cloud.auth setting in their settings yaml to automatically negotiate a Cloud URL endpoint for outputting to Elasticsearch. I attempted to use this but it looks like the setting isn't whitelisted in the env2yaml config. Could we get it included to permit passing those settings in through the environment?
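
For context, env2yaml only translates environment variables whose names map onto a known list of Logstash settings. A sketch of what whitelisting the cloud settings might look like (the variable and function names here are illustrative, not the actual env2yaml source):

```go
package main

import (
	"fmt"
	"strings"
)

// validSettings is an illustrative subset; the real env2yaml keeps a much
// longer list of Logstash settings it is willing to translate.
var validSettings = []string{
	"pipeline.workers",
	"xpack.monitoring.enabled",
	"cloud.id",   // would need to be added for Elastic Cloud support
	"cloud.auth", // likewise
}

// normalizeEnvKey turns CLOUD_ID into cloud.id, mirroring the documented
// environment-variable naming convention for these images.
func normalizeEnvKey(name string) string {
	return strings.ToLower(strings.Replace(name, "_", ".", -1))
}

// isValidSetting reports whether an environment variable name corresponds
// to a whitelisted Logstash setting.
func isValidSetting(name string) bool {
	for _, s := range validSettings {
		if s == normalizeEnvKey(name) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(isValidSetting("CLOUD_ID"))   // true once whitelisted
	fmt.Println(isValidSetting("RANDOM_VAR")) // false
}
```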

which: no javac when installing local gem file

I have an issue when installing any local gem file in the 5.4.0 logstash container.

An example is this plugin: https://github.com/lukewaite/logstash-input-cloudwatch-logs. If I build the gem locally, e.g. gem build logstash-input-cloudwatch-logs.gemspec, then copy the resulting gem file into my Docker build context and attempt to install it with:

bin/logstash-plugin install logstash-input-cloudwatch-logs.gem

It will fail with:

which: no javac in (/usr/share/logstash/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin)

It doesn't seem to matter which plugin I install, but the image builds without errors if I base it on 5.3.2.

Here's a fragment of the Dockerfile I'm using:

FROM docker.elastic.co/logstash/logstash:5.4.0

ENV PATH_CONFIG=/usr/share/logstash/pipeline/prod/

COPY *.gem /usr/share/logstash/
RUN cd /usr/share/logstash && ls *.gem | xargs bin/logstash-plugin install

I have also posted to the ES forum, but have not yet had a response:

https://discuss.elastic.co/t/which-no-javac-for-local-plugin-installation/86181

Unable to configure SSL for beats input

I have tried numerous versions of the logstash docker container and have not been able to successfully configure the beats input to use SSL.

Docker: 17.12.0-ce-win47
Logstash container: 6.2.1

Logs:

 logstash_1       | [2018-02-20T18:27:08,421][INFO ][logstash.inputs.beats    ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
 logstash_1       | [2018-02-20T18:27:08,480][DEBUG][io.netty.handler.ssl.CipherSuiteConverter] Cipher suite mapping: TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 => ECDHE-ECDSA-AES256-GCM-SHA384
 logstash_1       | [2018-02-20T18:27:08,516][ERROR][logstash.pipeline        ] Error registering plugin {:pipeline_id=>"main", :plugin=>"<LogStash::Inputs::Beats port=>5044, ssl=>true, ssl_certificate=>\"/etc/pki/tls/certs/logstash-forwarder.crt\", ssl_key=>\"/etc/pki/tls/private/logstash-forwarder.key\", id=>\"2ef1f57167a8005f6aca7168d8b8c4b409e6a55a2e96c3236597cadf72a5a9cc\", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>\"plain_2c167b7f-67b1-469a-a20d-baa39be9dd84\", enable_metric=>true, charset=>\"UTF-8\">, host=>\"0.0.0.0\", ssl_verify_mode=>\"none\", include_codec_tag=>true, ssl_handshake_timeout=>10000, tls_min_version=>1, tls_max_version=>1.2, cipher_suites=>[\"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384\", \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\", \"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\", \"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256\", \"TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384\", \"TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384\", \"TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256\", \"TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256\"], client_inactivity_timeout=>60, executor_threads=>8>", :error=>"Cipher `TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384` is not available", :thread=>"#<Thread:0x546f28e8 run>"}
 logstash_1       | [2018-02-20T18:27:08,531][DEBUG][logstash.inputs.tcp      ] Closing {:plugin=>"LogStash::Inputs::Tcp"}

Investigation suggests the problem is netty's inability to extract and load its shared library. After extracting the library from the jar, I checked the following:

$ ldd /tmp/libnetty_tcnative_linux_x86_64.so
ldd: warning: you do not have execution permission for `/tmp/libnetty_tcnative_linux_x86_64.so'
        linux-vdso.so.1 =>  (0x00007fff110b5000)
        librt.so.1 => /lib64/librt.so.1 (0x00007f7ec67ea000)
        libcrypt.so.1 => /lib64/libcrypt.so.1 (0x00007f7ec65b3000)
        libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f7ec6397000)
        libdl.so.2 => /lib64/libdl.so.2 (0x00007f7ec6193000)
        libc.so.6 => /lib64/libc.so.6 (0x00007f7ec5dd0000)
        libfreebl3.so => /lib64/libfreebl3.so (0x00007f7ec5bcd000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f7ec6dee000)

Other information: logstash-plugins/logstash-input-beats#288

--path.settings doesn't appear to be respected

$ docker run -v /srv/docker/logstash:/logstash --rm docker.elastic.co/logstash/logstash:5.2.2 logstash --path.settings /logstash/logstash.yml
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /logstash/log4j2.properties. Using default config which logs to console
(etc etc etc)

The file I'm referencing exists, is readable by logstash, and so on:

$ docker run -v /srv/docker/logstash:/logstash --rm docker.elastic.co/logstash/logstash:5.2.2 cat /logstash/logstash.yml
pipeline:
  workers: 64
  batch:
    size: 250
path:
  config: '/logstash/logstash.conf'
  queue: '/logstash/queue'
  reload:
    automatic: true
$ docker run -v /srv/docker/logstash:/logstash --rm docker.elastic.co/logstash/logstash:5.2.2 id
uid=1000(logstash) gid=1000(logstash) groups=1000(logstash)

If I instead set LS_SETTINGS_DIR appropriately, everything is happy:

$ docker run -v /srv/docker/logstash:/logstash -e LS_SETTINGS_DIR=/logstash --rm docker.elastic.co/logstash/logstash:5.2.2
Could not find log4j2 configuration at path /logstash/log4j2.properties. Using default config which logs to console
(etc etc etc)

Filebeat cannot connect to logstash in docker

With a minimal Filebeat and Logstash setup, I am getting these errors on the Filebeat side:

filebeat_1  | 2017/09/07 12:20:22.148421 sync.go:85: ERR Failed to publish events caused by: EOF
filebeat_1  | 2017/09/07 12:20:22.148484 single.go:91: INFO Error publishing events (retrying): EOF
filebeat_1  | 2017/09/07 12:20:23.149887 sync.go:85: ERR Failed to publish events caused by: EOF
filebeat_1  | 2017/09/07 12:20:23.149945 single.go:91: INFO Error publishing events (retrying): EOF
filebeat_1  | 2017/09/07 12:20:25.150945 sync.go:85: ERR Failed to publish events caused by: EOF
filebeat_1  | 2017/09/07 12:20:25.150966 single.go:91: INFO Error publishing events (retrying): EOF
filebeat_1  | 2017/09/07 12:20:29.152188 sync.go:85: ERR Failed to publish events caused by: EOF
filebeat_1  | 2017/09/07 12:20:29.152264 single.go:91: INFO Error publishing events (retrying): EOF
...

The full setup is in repo: https://github.com/nazarewk/docker-logstash-filebeat/tree/9856472999c6ea16f059b7e07c9c9102a7af0175

I've already spent about six hours trying to debug this and have no clue what might be wrong.

CONFIG_STRING with '=' gets truncated

When you define a CONFIG_STRING environment variable, the string is truncated at the first '=' character.
The value

environment:
   - "CONFIG_STRING=input { tcp { port => 5000 } } output { elasticsearch { hosts => elasticsearch:9200 } }"

gets truncated to

cat /usr/share/logstash/config/logstash.yml
config.string: input { tcp { port 

I believe this is happening because of env2yaml.go:

	// Merge any valid settings found in the environment.
	foundNewSettings := false
	for _, line := range os.Environ() {
		kv := strings.Split(line, "=")
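
Splitting on every '=' discards everything in the value after the first one. A likely fix (my sketch, not a merged patch) is strings.SplitN with a limit of 2, which splits only at the first '=':

```go
package main

import (
	"fmt"
	"strings"
)

// splitEnvLine splits a "KEY=VALUE" entry from os.Environ() at the FIRST
// '=' only, so values containing '=' (such as a Logstash config string
// with `port => 5000`) survive intact.
func splitEnvLine(line string) (key, value string, ok bool) {
	kv := strings.SplitN(line, "=", 2)
	if len(kv) != 2 {
		return "", "", false
	}
	return kv[0], kv[1], true
}

func main() {
	k, v, _ := splitEnvLine("CONFIG_STRING=input { tcp { port => 5000 } }")
	fmt.Printf("%s: %s\n", k, v)
}
```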

logstash-output-riemann not installed

The logstash-output-riemann plugin is not installed in the logstash:5.2 Docker image. Is there a reason for this? I need this output, so if I want to use this version it seems I have to extend the image with a Dockerfile to install it.

Is this by design?
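
For the time being, extending the image is indeed the way to add a plugin. A minimal Dockerfile sketch (the tag here is illustrative; pick the release you actually run):

```dockerfile
FROM docker.elastic.co/logstash/logstash:5.2.2
RUN logstash-plugin install logstash-output-riemann
```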

Add exposed ports in dockerfile

This would allow using, for example, registrator with custom CNI networking, which registers exposed ports if you pass the -internal flag.

Docker pull fails with LCOW

I am running Docker for Windows Edge (18.03 CE RC3) to enable running in Linux Containers on Windows mode. I was able to pull Elasticsearch and Kibana 6.2.2 and Logstash 5.6.8, but all of the Logstash 6.x.x images fail similarly to this:
PS C:\> docker pull --platform linux docker.elastic.co/logstash/logstash:6.0.1
6.0.1: Pulling from logstash/logstash
85432449fd0f: Pull complete
a3e15d6940fb: Pull complete
fae632482ab1: Pull complete
7c65416b4f7b: Pull complete
d9542679fed1: Pull complete
4493646df9ed: Pull complete
8eb869b55294: Pull complete
4b301d5e876d: Pull complete
d6b839a61292: Pull complete
0474a9f69bfc: Download complete
f39c661e445d: Download complete
failed to register layer: failed sending to tar2vhd for C:\ProgramData\Docker\lcow\25d3d7b6e14a3bf0825964fb2d8666bd0c9def023163892b70b71d9036fb1f99\layer.vhd: opengcs: copyWithTimeout: error reading: 'open \tmp\f1fa24015610f5617c03b2155233dee8cc853bb845af4f5b8e411aba64b11b67-mount\usr\local\bin\docker-entrypoint: The system cannot find the path specified.' after 0 bytes (stdin of tar2vhd for generating C:\ProgramData\Docker\lcow\25d3d7b6e14a3bf0825964fb2d8666bd0c9def023163892b70b71d9036fb1f99\layer.vhd)

Disable monitoring not working?

I use XPACK_MONITORING_ENABLED: "false" in my docker-compose file.

I can see when I peek into my container that it is set:

# env | grep -i MONI
XPACK_MONITORING_ENABLED=false

However, that doesn't appear to change the logstash.yml file the way the docs page (https://www.elastic.co/guide/en/logstash/current/_configuring_logstash_for_docker.html) says it will:

# cat logstash.yml
http.host: "0.0.0.0"
path.config: /usr/share/logstash/pipeline
xpack.monitoring.elasticsearch.url: http://elasticsearch:9200
xpack.monitoring.elasticsearch.username: logstash_system
xpack.monitoring.elasticsearch.password: changeme

So I'm left with tons of monitoring related error messages:

sa-demo-apachelogs-ls-build        | [2017-11-22T17:54:36,204][INFO ][logstash.inputs.metrics  ] Monitoring License OK
sa-demo-apachelogs-ls-build        | [2017-11-22T17:54:36,300][ERROR][logstash.outputs.elasticsearch] Encountered a retryable error. Will Retry with exponential backoff  {:code=>400, :url=>"http://elasticsearch:9200/_xpack/monitoring/_bulk?system_id=logstash&system_api_version=2&interval=1s"}
sa-demo-apachelogs-ls-build        | [2017-11-22T17:54:38,311][ERROR][logstash.outputs.elasticsearch] Encountered a retryable error. Will Retry with exponential backoff  {:code=>400, :url=>"http://elasticsearch:9200/_xpack/monitoring/_bulk?system_id=logstash&system_api_version=2&interval=1s"}
sa-demo-apachelogs-ls-build        | [2017-11-22T17:54:42,393][ERROR][logstash.outputs.elasticsearch] Encountered a retryable error. Will Retry with exponential backoff  {:code=>400, :url=>"http://elasticsearch:9200/_xpack/monitoring/_bulk?system_id=logstash&system_api_version=2&interval=1s"}
sa-demo-apachelogs-ls-build        | [2017-11-22T17:54:50,408][ERROR][logstash.outputs.elasticsearch] Encountered a retryable error. Will Retry with exponential backoff  {:code=>400, :url=>"http://elasticsearch:9200/_xpack/monitoring/_bulk?system_id=logstash&system_api_version=2&interval=1s"}
sa-demo-apachelogs-ls-build        | [2017-11-22T17:55:06,415][ERROR][logstash.outputs.elasticsearch] Encountered a retryable error. Will Retry with exponential backoff  {:code=>400, :url=>"http://elasticsearch:9200/_xpack/monitoring/_bulk?system_id=logstash&system_api_version=2&interval=1s"}

env2yaml not processing files in --path.config

It would be nice if docker-entrypoint or Logstash itself would substitute environment variables in the pipeline config files. This was a feature in the now-deprecated Docker image.

Hack I'm doing on docker-entrypoint:

#!/bin/bash -e

# Map environment variables to entries in logstash.yml.
# Note that this will mutate logstash.yml in place if any such settings are found.
# This may be undesirable, especially if logstash.yml is bind-mounted from the
# host system.
#env2yaml /usr/share/logstash/config/logstash.yml

PATH_CONFIG=/opt/logstash/conf.d #TODO grab from args
while IFS='=' read -r name value ; do
    # replace ${name}, ${name|default}, ${name | default} => value
    find ${PATH_CONFIG} -type f -name "*.conf" -print0 | xargs -0 sed -i.bak -r 's~\$\{'${name}' ?(|\|[^\}]{1,})}~'${value}'~g'
done < <(env)
rm ${PATH_CONFIG}/*.bak
if [[ -z $1 ]] || [[ ${1:0:1} == '-' ]] ; then
  exec logstash "$@"
else
  exec "$@"
fi

Base image different

Why is the base image different for the 3 tools?

ubuntu:16.04 - logstash-docker

docker.elastic.co/kibana/kibana-ubuntu-base:latest - kibana-docker

docker.elastic.co/elasticsearch/elasticsearch-alpine-base:latest - elasticsearch-docker

Unifying these would also save space on host machines. Is this planned for the near future?

How to set LS_HEAP_SIZE?

In docker-compose.yml, I tried

environment:
  - "LS_HEAP_SIZE=3g"

But the Logstash node status still reports a 1g heap:

{
  "version": "5.3.0",
  "http_address": "0.0.0.0:9600",
  "jvm": {
    "pid": 1,
    "version": "1.8.0_121",
    "vm_name": "OpenJDK 64-Bit Server VM",
    "vm_version": "1.8.0_121",
    "vm_vendor": "Oracle Corporation",
    "start_time_in_millis": 1493031633901,
    "mem": {
      "heap_init_in_bytes": 268435456,
      "heap_max_in_bytes": 1037959168,
      "non_heap_init_in_bytes": 2555904,
      "non_heap_max_in_bytes": 0
    },
    "gc_collectors": [
      "ParNew",
      "ConcurrentMarkSweep"
    ]
  }
}
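
For what it's worth, LS_HEAP_SIZE is consumed by the archive/package startup scripts, not by these images; the image's JVM flags come from jvm.options, and extra flags can usually be supplied through the LS_JAVA_OPTS environment variable, which wins because it is appended last. A docker-compose sketch (hedged — verify against your image version):

```yaml
environment:
  - "LS_JAVA_OPTS=-Xms3g -Xmx3g"
```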

Trouble running logstash as a Docker service

Hi guys! Using a Dockerfile like this:

FROM docker.elastic.co/logstash/logstash:5.4.0

building it like this:

docker build -t logstash .

I can run it like this:

docker run -it logstash

and it runs fine. But if I make it a service on a docker swarm:

docker service create --name logstash logstash

Logstash (or rather, the logstash service) keeps restarting. This is the entirety of the output:

Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
16:31:24.769 [main] INFO  logstash.setting.writabledirectory - Creating directory {:setting=>"path.queue", :path=>"/var/lib/logstash/queue"}
16:31:24.804 [LogStash::Runner] INFO  logstash.agent - No persistent UUID file found. Generating new UUID {:uuid=>"f763b768-87fd-4b4c-b758-fa1e63f29454", :path=>"/var/lib/logstash/uuid"}
16:31:25.075 [[main]-pipeline-manager] INFO  logstash.pipeline - Starting pipeline {"id"=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>500}
16:31:25.119 [[main]-pipeline-manager] INFO  logstash.pipeline - Pipeline main started
16:31:25.341 [Api Webserver] INFO  logstash.agent - Successfully started Logstash API endpoint {:port=>9600}
16:31:28.290 [LogStash::Runner] WARN  logstash.agent - stopping pipeline {:id=>"main"}

I can't figure out what I'm doing wrong, or what the difference is between running the image in a swarm and "normally". Any tips? I'm using Docker version 17.03.1-ce-mac12 (17661)

logstash-output-monasca_log_api plugin install fails

Hello,
I'm having problems in building a Docker container with Logstash 5.x and the logstash-output-monasca_log_api plugin version 0.5.3, that can be found here: https://rubygems.org/gems/logstash-output-monasca_log_api

The plugin installation fails as shown below:

Building monasca-log-agent
Step 1/4 : FROM logstash:5
---> 413f93f5f1f2
Step 2/4 : RUN logstash-plugin install logstash-output-monasca_log_api
---> Running in 21f002491274
Validating logstash-output-monasca_log_api
Installing logstash-output-monasca_log_api
Plugin version conflict, aborting
ERROR: Installation Aborted, message: Bundler could not find compatible versions for gem "logstash-codec-json":
In snapshot (Gemfile.lock):
logstash-codec-json (= 3.0.2)

In Gemfile:
logstash-output-udp (>= 0) java depends on
logstash-codec-json (>= 0) java

logstash-output-udp (>= 0) java depends on
logstash-codec-json (>= 0) java

logstash-output-udp (>= 0) java depends on
logstash-codec-json (>= 0) java

logstash-output-udp (>= 0) java depends on
logstash-codec-json (>= 0) java

logstash-output-udp (>= 0) java depends on
logstash-codec-json (>= 0) java

logstash-output-udp (>= 0) java depends on
logstash-codec-json (>= 0) java

logstash-output-udp (>= 0) java depends on
logstash-codec-json (>= 0) java

logstash-output-udp (>= 0) java depends on
logstash-codec-json (>= 0) java

logstash-output-udp (>= 0) java depends on
logstash-codec-json (>= 0) java

logstash-output-monasca_log_api (>= 0) java depends on
logstash-codec-json (~> 0.1.6) java

logstash-output-udp (>= 0) java depends on
logstash-codec-json (>= 0) java
Running bundle update will rebuild your snapshot from scratch, using only
the gems in your Gemfile, which may resolve the conflict.

The problem seems to be that Logstash 5.x locks the logstash-codec-json plugin version to 3.0.2, whereas the logstash-output-monasca_log_api plugin depends on a previous version.

So far, I have not been able to do the suggested bundle update.
The bundle executable is there but when I try to execute it from inside the container, I just get:

sh: 18: bundle: not found

The bundle environment does not seem to be configured properly. Running the executable from Docker CLI or Dockerfile just fails:

starting container process caused "no such file or directory"

Any suggestion on how to deal with this problem?
Thanks

Can't build the project : "No such image: docker.elastic.co/logstash/logstash-x-pack:6.0.1"

Hi,

The last commit, d1a04609d91a9f5fe2e295adefbba271622ba759, seems to prevent me from building the project (on the 6.0 branch). The build gets stuck at step 5 with this error:

Step 5/20 : RUN curl -Lo - https://artifacts.elastic.co/downloads/logstash/logstash-6.0.1.tar.gz | tar zxf - -C /usr/share && mv /usr/share/logstash-6.0.1 /usr/share/logstash && chown --recursive logstash:logstash /usr/share/logstash/ && ln -s /usr/share/logstash /opt/logstash
---> Running in 93508e5f494c
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 311 100 311 0 0 657 0 --:--:-- --:--:-- --:--:-- 657

gzip: stdin: not in gzip format
tar: Child returned status 1
tar: Error is not recoverable: exiting now
The command '/bin/sh -c curl -Lo - https://artifacts.elastic.co/downloads/logstash/logstash-6.0.1.tar.gz | tar zxf - -C /usr/share && mv /usr/share/logstash-6.0.1 /usr/share/logstash && chown --recursive logstash:logstash /usr/share/logstash/ && ln -s /usr/share/logstash /opt/logstash' returned a non-zero code: 2
Error response from daemon: No such image: docker.elastic.co/logstash/logstash-x-pack:6.0.1
make: *** [build] Error 1

I changed the version definition in the JSON file, and it worked again.

SSL issue with Logstash docker

There is an issue with SSL in JRuby 9.1.13.0 that breaks the BigQuery output plugin. Please see the following issue (https://github.com/jruby/jruby/issues/4802) and the stack trace below.
This issue should be solved by JRuby 9.1.14.0, as stated in the link.

[2018-03-08T14:39:28,386][ERROR][logstash.pipeline ] Error registering plugin {:pipeline_id=>"main", :plugin=>"#<LogStash::OutputDelegator:0x71e006d5 @namespaced_metric=#<LogStash::Instrument::NamespacedMetric:0x704bf7ea @Metric=#<LogStash::Instrument::Metric:0x2e1c48ce @collector=#<LogStash::Instrument::Collector:0x679a9dd0 @agent=nil, @metric_store=#<LogStash::Instrument::MetricStore:0x163df958 @store=#<Concurrent::Map:0x00000000000fac entries=3 default_proc=nil>, @structured_lookup_mutex=#Mutex:0x4637ba6b, @fast_lookup=#<Concurrent::Map:0x00000000000fb0 entries=55 default_proc=nil>>>>, @namespace_name=[:stats, :pipelines, :main, :plugins, :outputs, :"06220cb17fd1ec99bacec88f4d0ff1f73395d878e416b02b0f4ceaf493116b95"]>, @Metric=#<LogStash::Instrument::NamespacedMetric:0x3315f3c1 @Metric=#<LogStash::Instrument::Metric:0x2e1c48ce @collector=#<LogStash::Instrument::Collector:0x679a9dd0 @agent=nil, @metric_store=#<LogStash::Instrument::MetricStore:0x163df958 @store=#<Concurrent::Map:0x00000000000fac entries=3 default_proc=nil>, @structured_lookup_mutex=#Mutex:0x4637ba6b, @fast_lookup=#<Concurrent::Map:0x00000000000fb0 entries=55 default_proc=nil>>>>, @namespace_name=[:stats, :pipelines, :main, :plugins, :outputs]>, @out_counter=org.jruby.proxy.org.logstash.instrument.metrics.counter.LongCounter$Proxy2 - name: out value:0, @strategy=#<LogStash::OutputDelegatorStrategies::Single:0x3b61eeb0 @mutex=#Mutex:0x4fcaaa0c, @output=<LogStash::Outputs::GoogleBigQuery project_id=>"31121231313", dataset=>"test", csv_schema=>"id:STRING,name:STRING,location:STRING", key_path=>"/usr/share/logstash/company.p12", service_account=>"[email protected]", temp_directory=>"/tmp/logstash-bq", temp_file_prefix=>"logstash_bq", date_pattern=>"%Y-%m-%dT%H:00", flush_interval_secs=>2, uploader_interval_secs=>60, deleter_interval_secs=>60, id=>"06220cb17fd1ec99bacec88f4d0ff1f73395d878e416b02b0f4ceaf493116b95", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_c9cd2a30-234e-4003-924e-7cf66c0b293f", enable_metric=>true, charset=>"UTF-8">, workers=>1, table_prefix=>"logstash", table_separator=>",", ignore_unknown_values=>false, key_password=>"notasecret">>, @in_counter=org.jruby.proxy.org.logstash.instrument.metrics.counter.LongCounter$Proxy2 - name: in value:0, @id="06220cb17fd1ec99bacec88f4d0ff1f73395d878e416b02b0f4ceaf493116b95", @time_metric=org.jruby.proxy.org.logstash.instrument.metrics.counter.LongCounter$Proxy2 - name: duration_in_millis value:0, @metric_events=#<LogStash::Instrument::NamespacedMetric:0x53ddc3cf @Metric=#<LogStash::Instrument::Metric:0x2e1c48ce @collector=#<LogStash::Instrument::Collector:0x679a9dd0 @agent=nil, @metric_store=#<LogStash::Instrument::MetricStore:0x163df958 @store=#<Concurrent::Map:0x00000000000fac entries=3 default_proc=nil>, @structured_lookup_mutex=#Mutex:0x4637ba6b, @fast_lookup=#<Concurrent::Map:0x00000000000fb0 entries=55 default_proc=nil>>>>, @namespace_name=[:stats, :pipelines, :main, :plugins, :outputs, :"06220cb17fd1ec99bacec88f4d0ff1f73395d878e416b02b0f4ceaf493116b95", :events]>, @output_class=LogStash::Outputs::GoogleBigQuery>", :error=>"certificate verify failed", :thread=>"#<Thread:0x5035009e run>"}
[2018-03-08T14:39:28,432][ERROR][logstash.pipeline ] Pipeline aborted due to error {:pipeline_id=>"main", :exception=>#Faraday::SSLError, :backtrace=>["org/jruby/ext/openssl/SSLSocket.java:228:in connect_nonblock'", "uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/net/http.rb:938:in connect'", "uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/net/http.rb:868:in do_start'", "uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/net/http.rb:857:in start'", "uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/net/http.rb:1409:in request'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/faraday-0.9.2/lib/faraday/adapter/net_http.rb:82:in perform_request'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/faraday-0.9.2/lib/faraday/adapter/net_http.rb:40:in block in call'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/faraday-0.9.2/lib/faraday/adapter/net_http.rb:87:in with_net_http_connection'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/faraday-0.9.2/lib/faraday/adapter/net_http.rb:32:in call'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/faraday-0.9.2/lib/faraday/request/url_encoded.rb:15:in call'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/faraday-0.9.2/lib/faraday/rack_builder.rb:139:in build_response'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/faraday-0.9.2/lib/faraday/connection.rb:377:in run_request'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/faraday-0.9.2/lib/faraday/connection.rb:177:in post'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/signet-0.8.1/lib/signet/oauth_2/client.rb:967:in fetch_access_token'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/signet-0.8.1/lib/signet/oauth_2/client.rb:1005:in fetch_access_token!'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/google-api-client-0.8.7/lib/google/api_client/auth/jwt_asserter.rb:105:in authorize'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-google_bigquery-3.2.3/lib/logstash/outputs/google_bigquery.rb:552:in initialize_google_client'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-google_bigquery-3.2.3/lib/logstash/outputs/google_bigquery.rb:209:in register'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator_strategies/single.rb:10:in register'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator.rb:42:in register'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:341:in register_plugin'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:352:in block in register_plugins'", "org/jruby/RubyArray.java:1734:in each'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:352:in register_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:735:in maybe_setup_out_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:362:in start_workers'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:289:in run'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:249:in block in start'"], :thread=>"#<Thread:0x5035009e run>"}
[2018-03-08T14:39:28,529][ERROR][logstash.agent ] Failed to execute action {:id=>:main, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: LogStash::PipelineAction::Create/pipeline_id:main, action_result: false", :backtrace=>nil}

Multiple Pipelines problem with docker image

Just starting to use the LS 6.0 docker images for the first time and ran into some problems. I followed the instructions on the wiki page, notably:

FROM docker.elastic.co/logstash/logstash:6.0.0-rc2
RUN rm -f /usr/share/logstash/pipeline/logstash.conf
ADD pipeline/ /usr/share/logstash/pipeline/
ADD config/ /usr/share/logstash/config/

And received this error:

ERROR: Pipelines YAML file is empty. Location: /usr/share/logstash/config/pipelines.yml
usage:
  bin/logstash -f CONFIG_PATH [-t] [-r] [] [-w COUNT] [-l LOG]
  bin/logstash --modules MODULE_NAME [-M "MODULE_NAME.var.PLUGIN_TYPE.PLUGIN_NAME.VARIABLE_NAME=VALUE"] [-t] [-w COUNT] [-l LOG]
  bin/logstash -e CONFIG_STR [-t] [--log.level fatal|error|warn|info|debug|trace] [-w COUNT] [-l LOG]
  bin/logstash -i SHELL [--log.level fatal|error|warn|info|debug|trace]
  bin/logstash -V [--log.level fatal|error|warn|info|debug|trace]
  bin/logstash --help

It's even more confusing because the docs page (https://www.elastic.co/guide/en/logstash/6.0/docker.html) doesn't mention pipelines.yml the way it mentions logstash.yml and the linked docs page (https://www.elastic.co/guide/en/logstash/6.0/config-setting-files.html) doesn't mention pipelines.yml. I've opened a separate docs ticket for that second link on the logstash repo.

I think either the Docker docs page should mention that you need to supply both a logstash.yml and a pipelines.yml along with your pipeline conf files, or we should default the Docker image to the old behavior of ignoring pipelines.yml and just loading all the pipelines defined in pipeline/. I'm not sure which way is better; I don't have a strong opinion.

logstash-plugin update fails on redis 4.0.0 dependency due to java version 2.2 requirement

Full debug output attached, but the root of it is
Gem::InstallError: redis requires Ruby version >= 2.2.2.

This dependency is the result of s.add_runtime_dependency(%q<redis>, [">= 0"]) in the logstash-input-redis-3.1.3.gemspec that's being used.

Results can be replicated with a minimal Dockerfile:

FROM docker.elastic.co/logstash/logstash:5.5.2
# uncomment these ENV lines for full debug output
#ENV JARS_DEBUG=true
#ENV JARS_VERBOSE=true
#ENV DEBUG=1
RUN logstash-plugin update

docker.build.txt

Customizing logstash image

Hi! I'd like to ask whether there is any way to customize the Logstash image itself, because I'd like to install some extra tools, say rsyslog or nmap (I know they have no relation to Logstash).

Thanks.
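
Extending the image with a Dockerfile is the usual route. A sketch assuming an Ubuntu-based image (as the pre-6.6 images built from this repo were), with the tag and package names purely illustrative:

```dockerfile
FROM docker.elastic.co/logstash/logstash:5.6.3
# Package installation needs root; switch back afterwards so Logstash
# keeps running as the unprivileged logstash user.
USER root
RUN apt-get update && \
    apt-get install -y --no-install-recommends nmap rsyslog && \
    rm -rf /var/lib/apt/lists/*
USER logstash
```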

Not able to find 5.6.0 image

The following error is reported when I try to pull the 5.6.0 image, whereas 6.0.0-beta2 works fine. Has this image not been published?

docker run --rm -it -v ~/pipeline/:/usr/share/logstash/pipeline/ docker.elastic.co/logstash/logstash:5.6.0
Unable to find image 'docker.elastic.co/logstash/logstash:5.6.0' locally
docker: Error response from daemon: manifest for docker.elastic.co/logstash/logstash:5.6.0 not found.
See 'docker run --help'.

Error: image logstash/logstash/5.0.0-beta1:latest not found.

I am trying to pull the image, but it cannot be found.

[root@docker02 docker]# export ELASTIC_REG=docker.elastic.co/logstash
[root@docker02 docker]# export LOGSTASH_VERSION=5.0.0-beta1
[root@docker02 docker]# export LOGSTASH_IMAGE=$ELASTIC_REG/logstash/$LOGSTASH_VERSION

[root@docker02 docker]# docker run -it -v /my/logstash/configs/:/opt/logstash/conf.d/ $LOGSTASH_IMAGE
Unable to find image 'docker.elastic.co/logstash/logstash/5.0.0-beta1:latest' locally
Pulling repository docker.elastic.co/logstash/logstash/5.0.0-beta1
docker: Error: image logstash/logstash/5.0.0-beta1:latest not found.
See 'docker run --help'.

[root@docker02 docker]# docker pull $LOGSTASH_IMAGE
Using default tag: latest
Pulling repository docker.elastic.co/logstash/logstash/5.0.0-beta1
Error: image logstash/logstash/5.0.0-beta1:latest not found

Where is this image published?

So maybe this is a stupid question, but...

where can I pull this image from? Is it published to a registry somewhere?

New Config Files get deleted

Hello,

I am trying to add some new config files like this in my Dockerfile:

FROM docker.elastic.co/logstash/logstash:5.4.3

COPY logstash.yml /usr/share/logstash/config/logstash.yml
COPY config/* /usr/share/logstash/pipeline/

CMD ["logstash", "-f", "/usr/share/logstash/pipeline"]

In the config directory there is one example config file:

input {
  beats {
    port => 5000
  }
}

filter {
  if "saved file with name" in [message] {
    mutate {
      add_field => { "foo_upload" => "Upload" }
    }
  }
}

output {
  elasticsearch {
    hosts => "elasticsearch:9200"
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

But when I enter the running container there is just the default configuration file in /usr/share/logstash/pipeline.
What am I doing wrong?

Thanks
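A variant worth trying (an assumption on my part, borrowing the `rm -f` pattern that appears in other Dockerfiles for this image) is to delete the stock pipeline file before copying your own, so the default `logstash.conf` cannot shadow the copied configs:

```dockerfile
# Sketch: remove the image's default pipeline config before adding ours.
FROM docker.elastic.co/logstash/logstash:5.4.3

RUN rm -f /usr/share/logstash/pipeline/logstash.conf
COPY logstash.yml /usr/share/logstash/config/logstash.yml
COPY config/ /usr/share/logstash/pipeline/
```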

path.config settings file entry is ignored

I'm finding all the rippers today...

Somewhat the inverse of #23: this time, instead of the command line being ignored, the settings file is ignored. My logstash.yml looks like this:

pipeline:
  workers: 64
  batch:
    size: 250
path:
  config: '/logstash/logstash.conf'
  queue: '/logstash/queue'
config:
  reload:
    automatic: true

I've confirmed that logstash is, indeed, reading this on startup by mangling a key name and noting that logstash fails to start.

However, the contents of /logstash/logstash.conf are not read -- the default "beats" input is started instead:

$ docker run -v /srv/docker/logstash:/logstash -e LS_SETTINGS_DIR=/logstash --rm docker.elastic.co/logstash/logstash:5.2.2
Could not find log4j2 configuration at path /logstash/log4j2.properties. Using default config which logs to console
21:30:30.355 [LogStash::Runner] INFO  logstash.agent - No persistent UUID file found. Generating new UUID {:uuid=>"62331978-ec56-4e93-a17b-c03728d5721f", :path=>"/usr/share/logstash/data/uuid"}
21:30:30.899 [[.monitoring-logstash]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://logstash_system:xxxxxx@localhost:9200/_xpack/monitoring/?system_id=logstash&system_api_version=2&interval=1s]}}
21:30:30.900 [[.monitoring-logstash]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://logstash_system:xxxxxx@localhost:9200/, :path=>"/"}
log4j:WARN No appenders could be found for logger (org.apache.http.client.protocol.RequestAuthCache).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
21:30:30.998 [[.monitoring-logstash]-pipeline-manager] WARN  logstash.outputs.elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>#<URI::HTTP:0x72a718e4 URL:http://logstash_system:xxxxxx@localhost:9200/_xpack/monitoring/?system_id=logstash&system_api_version=2&interval=1s>, :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://logstash_system:xxxxxx@localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
21:30:30.999 [[.monitoring-logstash]-pipeline-manager] INFO  logstash.outputs.elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>[#<URI::HTTP:0x21d220e0 URL:http://localhost:9200>]}
21:30:30.999 [[.monitoring-logstash]-pipeline-manager] INFO  logstash.pipeline - Starting pipeline {"id"=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>2}
21:30:31.001 [[.monitoring-logstash]-pipeline-manager] INFO  logstash.pipeline - Pipeline .monitoring-logstash started
21:30:31.007 [[main]-pipeline-manager] INFO  logstash.pipeline - Starting pipeline {"id"=>"main", "pipeline.workers"=>64, "pipeline.batch.size"=>250, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>16000}
21:30:31.007 [[main]-pipeline-manager] WARN  logstash.pipeline - CAUTION: Recommended inflight events max exceeded! Logstash will run with up to 16000 events in memory in your current configuration. If your message sizes are large this may cause instability with the default heap size. Please consider setting a non-standard heap size, changing the batch size (currently 250), or changing the number of pipeline workers (currently 64)
21:30:31.355 [[main]-pipeline-manager] INFO  logstash.inputs.beats - Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}

The config file I want to use doesn't mention beats at all:

$ docker run -v /srv/docker/logstash:/logstash -e LS_SETTINGS_DIR=/logstash --rm docker.elastic.co/logstash/logstash:5.2.2 grep beats /logstash/logstash.conf

And logstash should be able to read it Just Fine:

$ docker run -v /srv/docker/logstash:/logstash -e LS_SETTINGS_DIR=/logstash --rm docker.elastic.co/logstash/logstash:5.2.2 head -n 4 /logstash/logstash.conf
input {
  lumberjack {
    port            => 5150
    type            => "logs"

Interestingly, when I start logstash in the above configuration and look at my process listings, I find this:

22212 ?        Ssl    2:05 /usr/bin/java -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+DisableExplicitGC -Djava.awt.headless=true -Dfile.encoding=UTF-8 -XX:+HeapDumpOnOutOfMemoryError -Xmx1g -Xms256m -Xss2048k -Djffi.boot.library.path=/usr/share/logstash/vendor/jruby/lib/jni -Xbootclasspath/a:/usr/share/logstash/vendor/jruby/lib/jruby.jar -classpath : -Djruby.home=/usr/share/logstash/vendor/jruby -Djruby.lib=/usr/share/logstash/vendor/jruby/lib -Djruby.script=jruby -Djruby.shell=/bin/sh org.jruby.Main /usr/share/logstash/lib/bootstrap/environment.rb logstash/runner.rb -f /usr/share/logstash/pipeline/

Specifically, the last bit is interesting: it appears that something is appending -f /usr/share/logstash/pipeline/, which (I assume) overrides the path.config value I've set in the settings file. This is supported by the fact that if I explicitly specify -f on the container command line:

$ docker run -v /srv/docker/logstash:/logstash -e LS_SETTINGS_DIR=/logstash --rm docker.elastic.co/logstash/logstash:5.2.2 logstash -f /logstash/logstash.conf

The process listing then shows that -f is set correctly:

10385 ?        Ssl    1:26 /usr/bin/java -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+DisableExplicitGC -Djava.awt.headless=true -Dfile.encoding=UTF-8 -XX:+HeapDumpOnOutOfMemoryError -Xmx1g -Xms256m -Xss2048k -Djffi.boot.library.path=/usr/share/logstash/vendor/jruby/lib/jni -Xbootclasspath/a:/usr/share/logstash/vendor/jruby/lib/jruby.jar -classpath : -Djruby.home=/usr/share/logstash/vendor/jruby -Djruby.lib=/usr/share/logstash/vendor/jruby/lib -Djruby.script=jruby -Djruby.shell=/bin/sh org.jruby.Main /usr/share/logstash/lib/bootstrap/environment.rb logstash/runner.rb -f /logstash/logstash.conf

And, further, that my config is now applied:

Could not find log4j2 configuration at path /logstash/log4j2.properties. Using default config which logs to console
[...]
21:21:20.051 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://logstash-elasticsearch:9200/]}}
21:21:20.052 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://logstash-elasticsearch:9200/, :path=>"/"}
21:21:20.119 [[main]-pipeline-manager] WARN  logstash.outputs.elasticsearch - Restored connection to ES instance {:url=>#<URI::HTTP:0x6c30c698 URL:http://logstash-elasticsearch:9200/>}
[...]
21:21:21.066 [[main]-pipeline-manager] INFO  logstash.inputs.lumberjack - Starting lumberjack input listener {:address=>"0.0.0.0:5150"}

Logstash.outputs.elasticsearch is trying to connect to another elasticsearch

I'm looking at Settings File | Logstash Reference [5.6] | Elastic and I'm unable to figure out which setting to adjust to override elasticsearch:9200.

My docker:

root@app11:/opt/elastic/logstash# grep -v ^# docker-compose.yml 
version: '3'
services:
        logstash:
                image: docker.elastic.co/logstash/logstash:5.6.2
                container_name: logstash11
root@app11:/opt/elastic/logstash#

Please advise.


I created the same topic at Logstash.outputs.elasticsearch is trying to connect to another elasticsearch - Logstash - Discuss the Elastic Stack
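For what it's worth, the elasticsearch:9200 connection attempts usually come from X-Pack monitoring rather than from a pipeline output. A hedged logstash.yml sketch, using only settings that appear elsewhere in these issues (the exact URL is a placeholder):

```yaml
# Point X-Pack monitoring at your own Elasticsearch...
xpack.monitoring.elasticsearch.url: "http://my-es:9200"

# ...or disable monitoring entirely:
# xpack.monitoring.enabled: false
```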

5.2.2 X-Pack config

I've copied the following Docker config from the Elastic website and cannot get the X-Pack monitoring to stop reporting errors.

FROM docker.elastic.co/logstash/logstash:5.2.2
RUN rm -f /usr/share/logstash/pipeline/logstash.conf
ADD pipeline/ /usr/share/logstash/pipeline/
ADD config/ /usr/share/logstash/config/

config/logstash.yml
xpack.monitoring.enabled: false

If I run the 5.2.2-env everything works as expected, but I don't see why following the instructions from the website and adding my own logstash.yml config file does not work. I've exec'd into the container and can see my changes in logstash.yml, but they seem to be ignored; however, introducing a deliberate syntax error stops the container from running, which is expected.

logstash not honouring PATH_CONFIG?!

I seem to have a weird issue where Logstash reports that there is no config in the /usr/share/logstash/pipeline directory, even though PATH_CONFIG has been set to /usr/share/logstash/pipeline/prod/

Here's the output from running the docker container:

Setting from environment 'path.config: /usr/share/logstash/pipeline/prod/'
Setting from environment 'xpack.monitoring.elasticsearch.url: rara'
Sending Logstash's logs to /usr/share/logstash/logs which is now configured via log4j2.properties
Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
No persistent UUID file found. Generating new UUID {:uuid=>"c4ed0f71-e0c3-4ec4-b7c7-d00961e7ab26", :path=>"/usr/share/logstash/data/uuid"}
[2017-05-24T16:27:27,756][ERROR][logstash.agent           ] failed to fetch pipeline configuration {:message=>"No config files found: /usr/share/logstash/pipeline/. Can you make sure this path is a logstash config file?"}

Here's a fragment of the Dockerfile:

FROM docker.elastic.co/logstash/logstash:5.3.2

ENV PATH_CONFIG=/usr/share/logstash/pipeline/prod/
[...]
COPY pipeline/ /usr/share/logstash/pipeline

The pipeline/ directory in the Docker build context has a prod folder, inside which there is a logstash.conf file.

Environment variables not set in logstash config

According to the docs, I should be able to use environment variables in a pipeline configuration file. But when I set environment variables and mount this config file, it doesn't appear to work.

input { stdin { } }
output {
  elasticsearch {
    hosts => "${ES_HOSTS}"
    user => "${ES_USERNAME}"
    password => "${ES_PASSWORD}"
    index => "logs"
  }
  stdout { codec => rubydebug }
}

You can reproduce this by mounting the config file above with a command like this:

docker run --rm -it \
-v /path/to/my-logstash.conf:/usr/share/logstash/pipeline/logstash.conf \
-e ES_HOSTS=http://my.elastic.node:9200 \
-e ES_USERNAME=elastic \
-e ES_PASSWORD=mypwd \
 docker.elastic.co/logstash/logstash:5.3.1

I see the error [2017-04-26T19:03:02,580][ERROR][logstash.agent ] Cannot load an invalid configuration {:reason=>"bad URI(is not URI?): ${ES_HOSTS}"} so the config file is copied to the correct location but the environment variables aren't replaced.

Am I missing something, or should this be working?
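As an aside on what the substitution is supposed to do: the documented `${VAR}` syntax also accepts a `${VAR:default}` fallback. A minimal Python sketch of those semantics (a hypothetical `substitute_env` helper for illustration, not Logstash's actual implementation):

```python
import os
import re

def substitute_env(text, env=None):
    """Replace ${VAR} and ${VAR:default} placeholders, Logstash-style."""
    env = os.environ if env is None else env

    def repl(match):
        # Split on the first colon only, so defaults like URLs survive intact.
        name, colon, default = match.group(1).partition(":")
        if name in env:
            return env[name]
        if colon:  # a ":default" part was present
            return default
        raise KeyError("Cannot evaluate ${%s}: not set and no default" % name)

    return re.sub(r"\$\{([^}]+)\}", repl, text)

config = 'hosts => "${ES_HOSTS:http://localhost:9200}"'
print(substitute_env(config, {"ES_HOSTS": "http://my.elastic.node:9200"}))
# → hosts => "http://my.elastic.node:9200"
```

The `bad URI(is not URI?): ${ES_HOSTS}` error in the report suggests the placeholder reached the output plugin unexpanded, i.e. no substitution happened at all.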
