fabric8io / elasticsearch-cloud-kubernetes
License: Other
I have been trying to use the plugin for ES 2.1.0 but when I start the container, I get the error below:
[2015-12-15 00:30:28,049][WARN ][io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider] [Freakmaster] Exception caught during discovery: access denied ("java.io.FilePermission" "/home/elasticsearch/.kube/config" "read")
java.security.AccessControlException: access denied ("java.io.FilePermission" "/home/elasticsearch/.kube/config" "read")
at java.security.AccessControlContext.checkPermission(AccessControlContext.java:472)
at java.security.AccessController.checkPermission(AccessController.java:884)
at java.lang.SecurityManager.checkPermission(SecurityManager.java:549)
at java.lang.SecurityManager.checkRead(SecurityManager.java:888)
at sun.nio.fs.UnixPath.checkRead(UnixPath.java:795)
What I understand is that the plugin uses the user's home directory to look up the kube config. What I don't understand is that it fails with permission denied even when I run as that same user.
I tried this on my Mac as well as in a Docker image, with the same result. Resetting $HOME to my installation folder has no effect: the plugin always resolves %USER.HOME% and the read is denied.
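For reference, the usual way to grant an ES 2.x plugin an extra `java.io.FilePermission` is a policy-file entry. The sketch below is an assumption for illustration only; whether this plugin ships or honors a `plugin-security.policy` file is not confirmed by this issue:

```
// Hypothetical plugin-security.policy entry (standard Java policy syntax).
// The path and the existence of such a policy file here are assumptions.
grant {
  permission java.io.FilePermission "${user.home}/.kube/config", "read";
};
```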
Just so I (and possibly others) know when this is released.
Hi there, I'm using Elasticsearch 1.4.4 and kubectl 1.5.1 with the 1.0.x version of this plugin.
After installing the plugin I get the ERROR log below.
I'm new to Java, so I'm having trouble interpreting it.
[2017-04-27 08:54:00,561][INFO ][node ] [Captain Britain] version[1.4.4], pid[11], build[c88f77f/2015-02-19T13:05:36Z]
[2017-04-27 08:54:00,561][INFO ][node ] [Captain Britain] initializing ...
[2017-04-27 08:54:00,596][INFO ][plugins ] [Captain Britain] loaded [cloud-kubernetes], sites []
[2017-04-27 08:54:05,185][INFO ][node ] [Captain Britain] initialized
[2017-04-27 08:54:05,186][INFO ][node ] [Captain Britain] starting ...
[2017-04-27 08:54:05,277][INFO ][transport ] [Captain Britain] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/10.32.0.13:9300]}
[2017-04-27 08:54:05,294][INFO ][discovery ] [Captain Britain] elasticsearch/JNHAE0d6RjmcTj8l2nbrvQ
[elasticsearch[Captain Britain][generic][T#1]] WARN org.apache.cxf.phase.PhaseInterceptorChain - Interceptor for {http://localhost:8080}WebClient has thrown exception, unwinding now
org.apache.cxf.interceptor.Fault: Could not send Message.
at org.apache.cxf.interceptor.MessageSenderInterceptor$MessageSenderEndingInterceptor.handleMessage(MessageSenderInterceptor.java:64)
at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:307)
at org.apache.cxf.jaxrs.client.AbstractClient.doRunInterceptorChain(AbstractClient.java:619)
at org.apache.cxf.jaxrs.client.ClientProxyImpl.doChainedInvocation(ClientProxyImpl.java:674)
at org.apache.cxf.jaxrs.client.ClientProxyImpl.invoke(ClientProxyImpl.java:224)
at com.sun.proxy.$Proxy29.getPods(Unknown Source)
at io.fabric8.kubernetes.api.KubernetesHelper.getFilteredPodMap(KubernetesHelper.java:417)
at io.fabric8.kubernetes.api.KubernetesHelper.getSelectedPodMap(KubernetesHelper.java:413)
at io.fabric8.elasticsearch.discovery.k8s.K8sUnicastHostsProvider.buildDynamicNodes(K8sUnicastHostsProvider.java:99)
at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing.sendPings(UnicastZenPing.java:316)
at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing.ping(UnicastZenPing.java:224)
at org.elasticsearch.discovery.zen.ping.ZenPingService.ping(ZenPingService.java:146)
at org.elasticsearch.discovery.zen.ping.ZenPingService.pingAndWait(ZenPingService.java:124)
at org.elasticsearch.discovery.zen.ZenDiscovery.findMaster(ZenDiscovery.java:943)
at org.elasticsearch.discovery.zen.ZenDiscovery.innerJoinCluster(ZenDiscovery.java:337)
at org.elasticsearch.discovery.zen.ZenDiscovery.access$6000(ZenDiscovery.java:80)
at org.elasticsearch.discovery.zen.ZenDiscovery$JoinThreadControl$1.run(ZenDiscovery.java:1320)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: ConnectException invoking http://localhost:8080/api/v1beta2/pods: Connection refused
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.mapException(HTTPConduit.java:1359)
at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.close(HTTPConduit.java:1343)
at org.apache.cxf.transport.AbstractConduit.close(AbstractConduit.java:56)
at org.apache.cxf.transport.http.HTTPConduit.close(HTTPConduit.java:638)
at org.apache.cxf.interceptor.MessageSenderInterceptor$MessageSenderEndingInterceptor.handleMessage(MessageSenderInterceptor.java:62)
... 19 more
Caused by: java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
at sun.net.www.http.HttpClient.(HttpClient.java:211)
at sun.net.www.http.HttpClient.New(HttpClient.java:308)
at sun.net.www.http.HttpClient.New(HttpClient.java:326)
at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:997)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:933)
at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:851)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1301)
at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:468)
at org.apache.cxf.transport.http.URLConnectionHTTPConduit$URLConnectionWrappedOutputStream.getResponseCode(URLConnectionHTTPConduit.java:266)
at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.handleResponseInternal(HTTPConduit.java:1557)
at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.handleResponse(HTTPConduit.java:1527)
at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.close(HTTPConduit.java:1330)
... 22 more
[2017-04-27 08:54:05,378][WARN ][io.fabric8.elasticsearch.discovery.k8s.K8sUnicastHostsProvider] [Captain Britain] Exception caught during discovery javax.ws.rs.ProcessingException : java.net.ConnectException: ConnectException invoking http://localhost:8080/api/v1beta2/pods: Connection refused
javax.ws.rs.ProcessingException: java.net.ConnectException: ConnectException invoking http://localhost:8080/api/v1beta2/pods: Connection refused
at org.apache.cxf.jaxrs.client.AbstractClient.checkClientException(AbstractClient.java:552)
at org.apache.cxf.jaxrs.client.AbstractClient.preProcessResult(AbstractClient.java:534)
at org.apache.cxf.jaxrs.client.ClientProxyImpl.doChainedInvocation(ClientProxyImpl.java:676)
at org.apache.cxf.jaxrs.client.ClientProxyImpl.invoke(ClientProxyImpl.java:224)
at com.sun.proxy.$Proxy29.getPods(Unknown Source)
at io.fabric8.kubernetes.api.KubernetesHelper.getFilteredPodMap(KubernetesHelper.java:417)
at io.fabric8.kubernetes.api.KubernetesHelper.getSelectedPodMap(KubernetesHelper.java:413)
at io.fabric8.elasticsearch.discovery.k8s.K8sUnicastHostsProvider.buildDynamicNodes(K8sUnicastHostsProvider.java:99)
at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing.sendPings(UnicastZenPing.java:316)
at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing.ping(UnicastZenPing.java:224)
at org.elasticsearch.discovery.zen.ping.ZenPingService.ping(ZenPingService.java:146)
at org.elasticsearch.discovery.zen.ping.ZenPingService.pingAndWait(ZenPingService.java:124)
at org.elasticsearch.discovery.zen.ZenDiscovery.findMaster(ZenDiscovery.java:943)
at org.elasticsearch.discovery.zen.ZenDiscovery.innerJoinCluster(ZenDiscovery.java:337)
at org.elasticsearch.discovery.zen.ZenDiscovery.access$6000(ZenDiscovery.java:80)
at org.elasticsearch.discovery.zen.ZenDiscovery$JoinThreadControl$1.run(ZenDiscovery.java:1320)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Any help? Thank you!
Hello, would it be possible to release a version of this plugin compatible with Elasticsearch v5.3.1, released on April 20th? Thank you in advance.
When I try to switch from openjdk:8-jre to ibmjava:8-jre, the Kubernetes cloud plugin fails to start with the following error:
[2017-03-30 05:58:02,353][WARN ][io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider] [Meggan] Exception caught during discovery: An error has occurred.
io.fabric8.kubernetes.client.KubernetesClientException: An error has occurred.
at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:57)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:125)
at io.fabric8.elasticsearch.cloud.kubernetes.KubernetesAPIServiceImpl.endpoints(KubernetesAPIServiceImpl.java:35)
at io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider.readNodes(KubernetesUnicastHostsProvider.java:112)
at io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider.lambda$buildDynamicNodes$0(KubernetesUnicastHostsProvider.java:80)
at io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider$$Lambda$1.00000000DC07A210.run(Unknown Source)
at java.security.AccessController.doPrivileged(AccessController.java:594)
at io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider.buildDynamicNodes(KubernetesUnicastHostsProvider.java:79)
at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing.sendPings(UnicastZenPing.java:335)
at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing.ping(UnicastZenPing.java:240)
at org.elasticsearch.discovery.zen.ping.ZenPingService.ping(ZenPingService.java:106)
at org.elasticsearch.discovery.zen.ping.ZenPingService.pingAndWait(ZenPingService.java:84)
at org.elasticsearch.discovery.zen.ZenDiscovery.findMaster(ZenDiscovery.java:945)
at org.elasticsearch.discovery.zen.ZenDiscovery.innerJoinCluster(ZenDiscovery.java:360)
at org.elasticsearch.discovery.zen.ZenDiscovery.access$4400(ZenDiscovery.java:96)
at org.elasticsearch.discovery.zen.ZenDiscovery$JoinThreadControl$1.run(ZenDiscovery.java:1296)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1153)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.lang.Thread.run(Thread.java:785)
Caused by: javax.net.ssl.SSLException: Received fatal alert: protocol_version
at com.ibm.jsse2.j.a(j.java:35)
at com.ibm.jsse2.j.a(j.java:31)
at com.ibm.jsse2.as.b(as.java:806)
at com.ibm.jsse2.as.a(as.java:102)
at com.ibm.jsse2.as.i(as.java:969)
at com.ibm.jsse2.as.a(as.java:680)
at com.ibm.jsse2.as.startHandshake(as.java:859)
at com.squareup.okhttp.internal.io.RealConnection.connectTls(RealConnection.java:188)
at com.squareup.okhttp.internal.io.RealConnection.connectSocket(RealConnection.java:145)
at com.squareup.okhttp.internal.io.RealConnection.connect(RealConnection.java:108)
at com.squareup.okhttp.internal.http.StreamAllocation.findConnection(StreamAllocation.java:184)
at com.squareup.okhttp.internal.http.StreamAllocation.findHealthyConnection(StreamAllocation.java:126)
at com.squareup.okhttp.internal.http.StreamAllocation.newStream(StreamAllocation.java:95)
at com.squareup.okhttp.internal.http.HttpEngine.connect(HttpEngine.java:281)
at com.squareup.okhttp.internal.http.HttpEngine.sendRequest(HttpEngine.java:224)
at com.squareup.okhttp.Call.getResponse(Call.java:286)
at com.squareup.okhttp.Call$ApplicationInterceptorChain.proceed(Call.java:243)
at io.fabric8.kubernetes.client.utils.HttpClientUtils$3.intercept(HttpClientUtils.java:110)
at com.squareup.okhttp.Call$ApplicationInterceptorChain.proceed(Call.java:232)
at com.squareup.okhttp.Call.getResponseWithInterceptorChain(Call.java:205)
at com.squareup.okhttp.Call.execute(Call.java:80)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:210)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:205)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleGet(BaseOperation.java:510)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:118)
... 17 more
I'm not sure what difference between openjdk:8-jre and ibmjava:8-jre causes this.
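One known difference is that the IBM JDK defaults `SSLContext` instances created with the generic "SSL"/"TLS" algorithm names to an older protocol version, which a TLSv1.2-only API server will reject with `protocol_version`. A hedged workaround sketch (these are documented IBM JSSE2 properties, but whether they resolve this exact setup is an assumption):

```shell
# Force the IBM JDK's JSSE2 provider to use the newest supported TLS version
# for SSLContext("SSL"/"TLS"), and restrict HTTPS clients to TLSv1.2.
export ES_JAVA_OPTS="-Dcom.ibm.jsse2.overrideDefaultTLS=true -Dhttps.protocols=TLSv1.2"
```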
Hi,
although the changes for 5.4.0 appear to have been committed, the release is not yet in place:
$ /usr/share/elasticsearch/bin/elasticsearch-plugin install io.fabric8:elasticsearch-cloud-kubernetes:5.4.0
Downloading io.fabric8:elasticsearch-cloud-kubernetes:5.4.0 from maven central
Exception in thread "main" java.io.FileNotFoundException: https://repo1.maven.org/maven2/io/fabric8/elasticsearch-cloud-kubernetes/5.4.0/elasticsearch-cloud-kubernetes-5.4.0.zip
Thanks
After deploying an app whose container is configured with 2GB of memory, ES can no longer be resized or deployed in k8s.
The following error appears when spinning up the ES cluster in k8s:
2015-04-25T00:05:01.856141447Z Caused by: com.fasterxml.jackson.databind.JsonMappingException: Numeric value (2147483648) out of range of int
2015-04-25T00:05:01.856141447Z at [Source: sun.net.www.protocol.http.HttpURLConnection$HttpInputStream@511e7433; line: 2062, column: 35] (through reference chain: io.fabric8.kubernetes.api.model.PodList["items"]->io.fabric8.kubernetes.api.model.Pod["desiredState"]->io.fabric8.kubernetes.api.model.PodState["manifest"]->io.fabric8.kubernetes.api.model.ContainerManifest["containers"]->io.fabric8.kubernetes.api.model.Container["memory"])
2015-04-25T00:05:01.856141447Z at com.fasterxml.jackson.databind.JsonMappingException.wrapWithPath(JsonMappingException.java:232)
2015-04-25T00:05:01.856141447Z at com.fasterxml.jackson.databind.JsonMappingException.wrapWithPath(JsonMappingException.java:197)
2015-04-25T00:05:01.856141447Z at com.fasterxml.jackson.databind.deser.BeanDeserializerBase.wrapAndThrow(BeanDeserializerBase.java:1415)
2015-04-25T00:05:01.856141447Z at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:244)
2015-04-25T00:05:01.856141447Z at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:118)
2015-04-25T00:05:01.856141447Z at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:232)
2015-04-25T00:05:01.856141447Z at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:206)
2015-04-25T00:05:01.856141447Z at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:25)
2015-04-25T00:05:01.856141447Z at com.fasterxml.jackson.databind.deser.SettableBeanProperty.deserialize(SettableBeanProperty.java:538)
2015-04-25T00:05:01.856141447Z at com.fasterxml.jackson.databind.deser.impl.MethodProperty.deserializeAndSet(MethodProperty.java:99)
2015-04-25T00:05:01.856141447Z at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:242)
2015-04-25T00:05:01.856141447Z at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:118)
2015-04-25T00:05:01.856141447Z at com.fasterxml.jackson.databind.deser.SettableBeanProperty.deserialize(SettableBeanProperty.java:538)
2015-04-25T00:05:01.856141447Z at com.fasterxml.jackson.databind.deser.impl.MethodProperty.deserializeAndSet(MethodProperty.java:99)
2015-04-25T00:05:01.856141447Z at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:242)
2015-04-25T00:05:01.856141447Z at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:118)
2015-04-25T00:05:01.856141447Z at com.fasterxml.jackson.databind.deser.SettableBeanProperty.deserialize(SettableBeanProperty.java:538)
2015-04-25T00:05:01.856141447Z at com.fasterxml.jackson.databind.deser.impl.MethodProperty.deserializeAndSet(MethodProperty.java:99)
2015-04-25T00:05:01.856141447Z at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:242)
2015-04-25T00:05:01.856141447Z at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:118)
2015-04-25T00:05:01.856141447Z at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:232)
2015-04-25T00:05:01.856141447Z at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:206)
2015-04-25T00:05:01.856141447Z at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:25)
2015-04-25T00:05:01.856141447Z at com.fasterxml.jackson.databind.deser.SettableBeanProperty.deserialize(SettableBeanProperty.java:538)
2015-04-25T00:05:01.856141447Z at com.fasterxml.jackson.databind.deser.impl.MethodProperty.deserializeAndSet(MethodProperty.java:99)
2015-04-25T00:05:01.856141447Z at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:242)
2015-04-25T00:05:01.856141447Z at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:118)
2015-04-25T00:05:01.856141447Z at com.fasterxml.jackson.databind.ObjectReader._bind(ObjectReader.java:1232)
2015-04-25T00:05:01.856141447Z at com.fasterxml.jackson.databind.ObjectReader.readValue(ObjectReader.java:676)
2015-04-25T00:05:01.856141447Z at com.fasterxml.jackson.jaxrs.base.ProviderBase.readFrom(ProviderBase.java:800)
2015-04-25T00:05:01.856141447Z at org.apache.cxf.jaxrs.utils.JAXRSUtils.readFromMessageBodyReader(JAXRSUtils.java:1322)
2015-04-25T00:05:01.856141447Z at org.apache.cxf.jaxrs.impl.ResponseImpl.doReadEntity(ResponseImpl.java:369)
2015-04-25T00:05:01.856141447Z ... 15 more
I found a WARN log with a KubernetesClientException in my Elasticsearch log, but the stack trace is not printed:
[2017-01-20T03:07:10,393][WARN ][i.f.e.d.k.KubernetesUnicastHostsProvider] [es-client-2799791230-qm6t0] Exception caught during discovery: io.fabric8.kubernetes.client.KubernetesClientException: An error has occurred.
The log call looks like this in the source code:
logger.warn("Exception caught during discovery: {}", e, e.getMessage());
For SLF4J logging, the exception should be the last parameter:
logger.warn("Exception caught during discovery: {}", e.getMessage(), e);
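To illustrate why the argument order matters, here is a toy re-implementation of the SLF4J substitution rule; this is not the real logger, and the class and method names are invented for illustration:

```java
public class Slf4jConvention {
    // Mimics SLF4J's rule: if the LAST argument is a Throwable, it is pulled
    // out as the exception to log (with its stack trace); the remaining
    // arguments fill the "{}" placeholders in order. Extra arguments that
    // don't match a placeholder are silently dropped.
    static String format(String fmt, Object... args) {
        Throwable t = null;
        int n = args.length;
        if (n > 0 && args[n - 1] instanceof Throwable) {
            t = (Throwable) args[--n];
        }
        StringBuilder sb = new StringBuilder();
        int i = 0, argIdx = 0;
        while (i < fmt.length()) {
            int p = fmt.indexOf("{}", i);
            if (p < 0 || argIdx >= n) {
                sb.append(fmt.substring(i));
                break;
            }
            sb.append(fmt, i, p).append(args[argIdx++]);
            i = p + 2;
        }
        if (t != null) {
            sb.append(" [stack trace of ")
              .append(t.getClass().getSimpleName())
              .append(" would be printed]");
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        RuntimeException e = new RuntimeException("An error has occurred.");
        // Wrong order: the exception itself fills "{}", and what remains is a
        // String, so nothing is treated as the throwable -> no stack trace.
        System.out.println(format("Exception caught during discovery: {}", e, e.getMessage()));
        // Correct order: the message fills "{}", the trailing exception is
        // recognized as the throwable -> the stack trace gets printed.
        System.out.println(format("Exception caught during discovery: {}", e.getMessage(), e));
    }
}
```

With the wrong order, the output is exactly the one-line message seen in the issue, with the exception's `toString()` substituted and no trace attached.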
While spinning up a deployment on GCE Kubernetes 1.4 with three equivalent nodes (master=true, data=true), the nodes fail to synchronize and agree on a master.
[2016-10-01 01:44:14,470][WARN ][http.netty ] [Sally Floyd] Caught exception while handling client http traffic, closing connection [id: 0xbf51eae3, /10.0.1.14:56154 => /10.0.0.10:9200]
java.lang.IllegalArgumentException: invalid version format: INTERNAL:DISCOVERY/ZEN/UNICAST๏พฒ๏ฟ^OPENTARGETS-PRODJACK O'LANTERNDJD5LDRZT0US5XZK1S2SVW 10.0.1.14 10.0.1.14
at org.jboss.netty.handler.codec.http.HttpVersion.<init>(HttpVersion.java:94)
at org.jboss.netty.handler.codec.http.HttpVersion.valueOf(HttpVersion.java:62)
at org.jboss.netty.handler.codec.http.HttpRequestDecoder.createMessage(HttpRequestDecoder.java:75)
at org.jboss.netty.handler.codec.http.HttpMessageDecoder.decode(HttpMessageDecoder.java:191)
at org.jboss.netty.handler.codec.http.HttpMessageDecoder.decode(HttpMessageDecoder.java:102)
at org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:500)
at org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:435)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.elasticsearch.common.netty.OpenChannelsHandler.handleUpstream(OpenChannelsHandler.java:75)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
[2016-10-01 01:44:15,043][WARN ][http.netty ] [Sally Floyd] Caught exception while handling client http traffic, closing connection [id: 0xfe74a744, /10.0.2.9:60868 => /10.0.0.10:9200]
java.lang.IllegalArgumentException: invalid version format: INTERNAL:DISCOVERY/ZEN/UNICAST๏พฒ๏ฟ^OPENTARGETS-PRODANTIPHON THE OVERSEERMGXOUTS4SQK4SSOJ5B84610.0.2.10.0.2.9
at org.jboss.netty.handler.codec.http.HttpVersion.<init>(HttpVersion.java:94)
at org.jboss.netty.handler.codec.http.HttpVersion.valueOf(HttpVersion.java:62)
at org.jboss.netty.handler.codec.http.HttpRequestDecoder.createMessage(HttpRequestDecoder.java:75)
at org.jboss.netty.handler.codec.http.HttpMessageDecoder.decode(HttpMessageDecoder.java:191)
at org.jboss.netty.handler.codec.http.HttpMessageDecoder.decode(HttpMessageDecoder.java:102)
at org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:500)
at org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:435)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.elasticsearch.common.netty.OpenChannelsHandler.handleUpstream(OpenChannelsHandler.java:75)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
@portante per your request, opening this issue to request that the plugin be configurable to include an availability zone.
Can an updated config file be made for ES 2.3.1?
Hello,
Would it be possible to release a version of this plugin compiled for ES v5.1.2? Only v5.1.1 is listed on Maven Central.
Thank you in advance.
I tried using plugin 1.3 with ES 1.7.4, as suggested by the docs, but it does not work.
[2016-01-06 01:26:16,382][INFO ][node ] [Crossbones] version[1.7.4], pid[23], build[0d3159b/2015-12-15T11:25:18Z]
[2016-01-06 01:26:16,382][INFO ][node ] [Crossbones] initializing ...
[2016-01-06 01:26:16,480][INFO ][plugins ] [Crossbones] loaded [cloud-kubernetes], sites []
[2016-01-06 01:26:16,510][INFO ][env ] [Crossbones] using [1] data paths, mounts [[/data (/dev/sda5)]], net usable_space [47gb], net total_space [150gb], types [ext4]
{1.7.4}: Initialization Failed ...
The Elasticsearch docs for discovery.zen.ping.unicast.hosts say:
If a hostname lookup resolves to multiple IP addresses then each IP address will be used for discovery
Given that this is exactly what a headless Kubernetes service provides, what is the advantage of using this plugin?
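For comparison, the plugin-free setup being suggested might look like the sketch below; the service name and namespace are assumptions, not from this repo:

```yaml
# elasticsearch.yml fragment: point unicast discovery at a headless Service's
# DNS name. A headless Service (clusterIP: None) resolves to every ready pod
# IP, so each pod is tried as a unicast host.
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["es-transport.default.svc.cluster.local"]
```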
This way anyone can use this directly in their Elasticsearch deployments.
For instance, in a Dockerfile:
ADD http://central.maven.org/maven2/io/fabric8/elasticsearch-cloud-kubernetes/1.0.0/elasticsearch-cloud-kubernetes-1.0.0.jar /data/plugins/
I am thinking about extending this plugin to support PetSets in its current 1.3 implementation.
I think it would be great if we could support a safe way of scaling down. I reckon it's a good idea to have a hook that runs during shutdown (after SIGTERM).
I am not really experienced with Elasticsearch plugin development, but would a plugin solution like that be possible?
Another solution I can think of would be a small Go PID 1 process that does the same.
Any thoughts?
Is someone else working on this?
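One way such a shutdown hook could be wired up outside the plugin is a Kubernetes preStop lifecycle hook; the sketch below is an illustration only, and the allocation-exclusion approach, ports, and field layout are assumptions rather than an agreed design:

```yaml
# Pod spec fragment: before SIGTERM is delivered to the container, ask the
# cluster to move shards off this node by excluding its name from allocation.
lifecycle:
  preStop:
    exec:
      command:
        - "sh"
        - "-c"
        - >
          curl -s -XPUT localhost:9200/_cluster/settings -d
          '{"transient":{"cluster.routing.allocation.exclude._name":"'"$HOSTNAME"'"}}'
```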
Hi,
I'm currently working on a small test setup based on Elasticsearch 2.3.4 with this plugin, running on Kubernetes 1.3. For some reason the plugin can't connect to the API server, or at least it doesn't feed any endpoints into Elasticsearch. It might have something to do with the fact that we use client certificates to authenticate against the Kubernetes API server, but I can't confirm this because the logging isn't working properly.
I'm using the following logging.yml for Elasticsearch:
# you can override this using by setting a system property, for example -Des.logger.level=DEBUG
es.logger.level: INFO
rootLogger: ${es.logger.level}, console
logger:
# log action execution errors for easier debugging
action: DEBUG
org.apache.http: INFO
# gateway
#gateway: DEBUG
#index.gateway: DEBUG
# peer shard recovery
#indices.recovery: DEBUG
# discovery
io.fabric8: TRACE
appender:
console:
type: console
layout:
type: consolePattern
conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n"
For some reason I only get logging output from the plugin itself, but not from the underlying KubernetesClient library.
Is there something wrong with the way I configure logging in Elasticsearch? I already tried setting es.logger.level to DEBUG, but there is still no output from the Kubernetes client.
Thanks in advance for your help!
regards
christian
I see on the readme that you have:
elasticsearch-plugin install io.fabric8:elasticsearch-cloud-kubernetes:5.5.0
but the package doesn't exist yet in Maven Central:
https://repo1.maven.org/maven2/io/fabric8/elasticsearch-cloud-kubernetes/
any idea when it will be pushed?
Thanks!
Are there any plans for a 5.4.1 release? There are some important bug fixes (including security fixes) in that version...
Thanks
During our testing, we stumbled upon an issue involving this plugin, an ES cluster larger than one node, and readiness probes. I think it is possible for node discovery through the service to turn into a deadlock.
More thorough description with one possible solution that seems to work in our case can be found here:
wozniakjan#1
Bugzilla reporting this issue:
https://bugzilla.redhat.com/show_bug.cgi?id=1459430
Additional details:
plugin version 2.4.4
ES version 2.4
Could you please advise on your recommended resolution, or take a look at the proposed PR and help identify possible issues it may bring?
Are there any plans to upgrade to 5.5.0?
Could it be just an increment of the version numbers, like the commit for 5.4.2 did?
I'm happy to supply that commit if that is all that's required.
I've looked everywhere for the Dockerfile y'all use to build the elasticsearch-k8s image but can't find it. Can somebody add it to this repo or another GH repo?
To use the selector from the specified service instead.
Requesting a version that's installable on ES 2.1.1. I'm not sure whether any 'real' changes are required for this to become a reality.
Hello,
image 5.4.2 would not start; I got a ClassNotFoundException for the plugin class name:
org.elasticsearch.bootstrap.StartupException: ElasticsearchException[Could not find plugin class [io.fabric8.elasticsearch.plugin.discovery.kubernetes.KubernetesDiscoveryPlugin]]; nested: ClassNotFoundException[io.fabric8.elasticsearch.plugin.discovery.kubernetes.KubernetesDiscoveryPlugin];
Then I looked inside the container at /usr/share/elasticsearch/plugins/discovery-kubernetes, comparing 5.4.0 and 5.4.2, and the JAR for the plugin is simply not there in 5.4.2...
Did I overlook something?
I am using another plugin which only supports Elasticsearch 2.3.3; it would be great if the elasticsearch-cloud-kubernetes plugin were updated to support Elasticsearch 2.3.3 as well.
I cannot install this plugin for Elasticsearch 2.4.1:
# bin/plugin install io.fabric8:elasticsearch-cloud-kubernetes:2.4.1 --verbose
-> Installing io.fabric8:elasticsearch-cloud-kubernetes:2.4.1...
Trying https://download.elastic.co/elasticsearch/release/org/elasticsearch/plugin/io.fabric8:elasticsearch-cloud-kubernetes:2.4.1/2.4.1/io.fabric8:elasticsearch-cloud-kubernetes:2.4.1-2.4.1.zip ...
Failed: FileNotFoundException[https://download.elastic.co/elasticsearch/release/org/elasticsearch/plugin/io.fabric8:elasticsearch-cloud-kubernetes:2.4.1/2.4.1/io.fabric8:elasticsearch-cloud-kubernetes:2.4.1-2.4.1.zip]; nested: FileNotFoundException[https://download.elastic.co/elasticsearch/release/org/elasticsearch/plugin/io.fabric8:elasticsearch-cloud-kubernetes:2.4.1/2.4.1/io.fabric8:elasticsearch-cloud-kubernetes:2.4.1-2.4.1.zip];
ERROR: failed to download out of all possible locations..., use --verbose to get detailed information
It seems this 2.4.1 plugin cannot be found. Can anyone please help?
There is a release for this version, but it's not in the Docker Hub registry.
https://hub.docker.com/r/fabric8/elasticsearch-k8s/tags/
https://github.com/fabric8io/elasticsearch-cloud-kubernetes/releases/tag/elasticsearch-cloud-kubernetes-2.3.5
Hi,
I'm running a coreos-kubernetes multi-node Vagrant setup (VirtualBox) for some tests. Everything looks pretty good, except that, one time out of two, the service account used (not the "default" one, but a dedicated "elasticsearch" service account) gets "401 - Unauthorized":
[io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider] [Thundra] Exception caught during discovery: Failure executing: GET at: https://kubernetes.default.svc/api/v1/namespaces/safelogs-mvp/endpoints/es-transport-service. Message: Unauthorized! Configured service account doesn't have access. Service account may have been revoked..
2017-03-14T17:01:43.904735876Z io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at: https://kubernetes.default.svc/api/v1/namespaces/safelogs-mvp/endpoints/es-transport-service. Message: Unauthorized! Configured service account doesn't have access. Service account may have been revoked..
If I repeatedly call this command (which matches my setup):
curl -vvv -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" https://kubernetes.default.svc/api/v1/namespaces/safelogs-mvp/endpoints/es-transport-service
one time I will get the expected JSON response back (200), and the next time the "Unauthorized" message (401).
The result is that my elasticsearch nodes are not able to join the cluster, because they can't reach the service on port 9300.
Some version info:
kubectl version:
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.3", GitCommit:"029c3a408176b55c30846f0faedf56aae5992e9b", GitTreeState:"clean", BuildDate:"2017-02-15T06:40:50Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.3+coreos.0", GitCommit:"8fc95b64d0fe1608d0f6c788eaad2c004f31e7b7", GitTreeState:"clean", BuildDate:"2017-02-15T19:52:15Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Elasticsearch is 2.4.4, so I am using the corresponding plugin in my docker image, which you can find here (nothing fancy at all): https://hub.docker.com/r/madchap/sl_sandbox
Hi
Trying to use the plugin, but need to run 5.3.0. Is there an ETA for a build for 5.3.0?
When will support for ES 5.5.1 be available?
Can an updated config file be made for ES 2.3.2?
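For reference, a minimal elasticsearch.yml sketch of how the 2.x discovery settings are typically wired up; the service and namespace names here are placeholder assumptions:

```yaml
cloud:
  kubernetes:
    service: elasticsearch-discovery   # placeholder: headless service exposing the transport port
    namespace: default                 # placeholder: namespace that holds the endpoints
discovery:
  type: kubernetes
```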
I believe this may have something to do with the new security context, but I'm having a real tough time getting this working with ES 2.0. Initially https://github.com/fabric8io/kubernetes-client was having trouble connecting due to an inability to read /var/run/secrets/kubernetes.io/serviceaccount/*. This used to work fine when ES ran under root; no longer. I resorted to exporting KUBERNETES_AUTH_TOKEN, KUBERNETES_CERTS_CLIENT_DATA, KUBERNETES_AUTH_TRYSERVICEACCOUNT=false and KUBERNETES_AUTH_TRYKUBECONFIG=false to bypass the filesystem permission issues. Now I'm getting nailed by:
[2015-11-17 17:57:59,605][WARN ][io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider] [Robert Bruce Banner] Exception caught during discovery: Error executing: GET at: https://kubernetes.default.svc/api/v1/namespaces/default/endpoints/elasticsearch. Cause: access denied ("java.net.NetPermission" "getProxySelector")
io.fabric8.kubernetes.client.KubernetesClientException: Error executing: GET at: https://kubernetes.default.svc/api/v1/namespaces/default/endpoints/elasticsearch. Cause: access denied ("java.net.NetPermission" "getProxySelector")
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestException(OperationSupport.java:245)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:182)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:173)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleGet(BaseOperation.java:472)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:105)
at io.fabric8.elasticsearch.cloud.kubernetes.KubernetesAPIServiceImpl.endpoints(KubernetesAPIServiceImpl.java:27)
at io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider.buildDynamicNodes(KubernetesUnicastHostsProvider.java:81)
at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing.sendPings(UnicastZenPing.java:316)
at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing$2.doRun(UnicastZenPing.java:230)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.security.AccessControlException: access denied ("java.net.NetPermission" "getProxySelector")
at java.security.AccessControlContext.checkPermission(AccessControlContext.java:472)
at java.security.AccessController.checkPermission(AccessController.java:884)
at java.lang.SecurityManager.checkPermission(SecurityManager.java:549)
at java.net.ProxySelector.getDefault(ProxySelector.java:94)
at com.squareup.okhttp.OkHttpClient.copyWithDefaults(OkHttpClient.java:614)
at com.squareup.okhttp.Call.<init>(Call.java:48)
at com.squareup.okhttp.OkHttpClient.newCall(OkHttpClient.java:595)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:180)
... 11 more
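The access denied ("java.net.NetPermission" "getProxySelector") failure comes from the Java Security Manager that Elasticsearch 2.x runs under. One possible workaround (a sketch, not necessarily how the plugin ships its own policy) is to grant that permission in a custom policy file passed to the JVM with -Djava.security.policy:

```
grant {
  // Allow the OkHttp client used by the fabric8 kubernetes-client
  // to look up the JVM's default proxy selector.
  permission java.net.NetPermission "getProxySelector";
};
```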
I'm working on getting an Elasticsearch cluster running on Kubernetes 1.6 using this plugin. The following error message is preventing the cluster from coming up:
[2017-05-08 19:13:59,473][WARN ][io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider] [engine-elasticsearch-data-0] Exception caught during discovery: Failure executing: GET at: https://kubernetes.default.svc/api/v1/namespaces/default/endpoints/engine-elasticsearch-discovery. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked..
io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at: https://kubernetes.default.svc/api/v1/namespaces/default/endpoints/engine-elasticsearch-discovery. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked..
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:290)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:241)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:212)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:205)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleGet(BaseOperation.java:510)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:118)
at io.fabric8.elasticsearch.cloud.kubernetes.KubernetesAPIServiceImpl.endpoints(KubernetesAPIServiceImpl.java:35)
at io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider.readNodes(KubernetesUnicastHostsProvider.java:112)
at io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider.lambda$buildDynamicNodes$0(KubernetesUnicastHostsProvider.java:80)
at java.security.AccessController.doPrivileged(Native Method)
at io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider.buildDynamicNodes(KubernetesUnicastHostsProvider.java:79)
at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing.sendPings(UnicastZenPing.java:335)
at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing.ping(UnicastZenPing.java:240)
at org.elasticsearch.discovery.zen.ping.ZenPingService.ping(ZenPingService.java:106)
at org.elasticsearch.discovery.zen.ping.ZenPingService.pingAndWait(ZenPingService.java:84)
at org.elasticsearch.discovery.zen.ZenDiscovery.findMaster(ZenDiscovery.java:886)
at org.elasticsearch.discovery.zen.ZenDiscovery.innerJoinCluster(ZenDiscovery.java:350)
at org.elasticsearch.discovery.zen.ZenDiscovery.access$4800(ZenDiscovery.java:91)
at org.elasticsearch.discovery.zen.ZenDiscovery$JoinThreadControl$1.run(ZenDiscovery.java:1237)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
My guess is that Kubernetes 1.6 requires the service account created for this plugin to be bound to a specific role. How can I determine the minimally permissioned role that this plugin can use?
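A minimal RBAC sketch for the question above, assuming the plugin only needs to read the discovery endpoints (the names and namespace are placeholders, and "list"/"watch" may or may not be needed in addition to "get"):

```yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: es-discovery            # placeholder name
  namespace: default
rules:
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get"]                # "list"/"watch" may also be required
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: es-discovery
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: es-discovery
subjects:
- kind: ServiceAccount
  name: elasticsearch           # placeholder service account
  namespace: default
```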
Hello,
I'm trying to get ES 5.4.2 cluster to work on my K8s setup.
Here is the actual configuration:
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.0", GitCommit:"d3ada0119e776222f11ec7945e6d860061339aad", GitTreeState:"clean", BuildDate:"2017-06-30T09:51:01Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.6", GitCommit:"7fa1c1756d8bc963f1a389f4a6937dc71f08ada2", GitTreeState:"clean", BuildDate:"2017-06-16T18:21:54Z", GoVersion:"go1.7.6", Compiler:"gc", Platform:"linux/amd64"}
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: elasticsearch
  namespace: elasticsearch
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: elasticsearch
  namespace: elasticsearch
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: elasticsearch
  namespace: elasticsearch
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: elasticsearch
subjects:
- kind: ServiceAccount
  name: elasticsearch
  namespace: elasticsearch
---
apiVersion: "v1"
kind: "Service"
metadata:
  name: elastic-cluster
  namespace: elasticsearch
spec:
  type: ClusterIP
  clusterIP: None
  ports:
  - port: 9300
    targetPort: 9300
  selector:
    app: elasticsearch
    component: elasticsearch
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: elasticsearch-config
  namespace: elasticsearch
data:
  elasticsearch.yml: |
    ---
    cluster.name: elasticsearch
    network.host: 0.0.0.0
    cloud:
      kubernetes:
        service: elastic-cluster
        namespace: elasticsearch
    discovery:
      zen:
        hosts_provider: kubernetes
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  labels:
    app: elasticsearch
    component: elasticsearch
  name: elasticsearch
  namespace: elasticsearch
spec:
  replicas: 2
  selector:
    matchLabels:
      app: elasticsearch
      component: elasticsearch
  serviceName: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
        component: elasticsearch
      name: elasticsearch
    spec:
      serviceAccount: elasticsearch
      serviceAccountName: elasticsearch
      nodeSelector:
        elasticsearch-node: "true"
      containers:
      - image: quay.io/evilmartians/elasticsearch-k8s:5.4.2
        name: elasticsearch
        ports:
        - containerPort: 9200
          name: "http"
        - containerPort: 9300
          name: "transport"
        volumeMounts:
        - mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
          name: elasticsearch-config-volume
          subPath: elasticsearch.yml
        resources:
          limits:
            cpu: "1"
            memory: 3500Mi
          requests:
            cpu: 500m
            memory: 700Mi
      initContainers:
      - name: sysctl
        image: busybox
        imagePullPolicy: IfNotPresent
        securityContext:
          privileged: true
        command:
        - sysctl
        - -w
        - vm.max_map_count=10485761
      volumes:
      - configMap:
          name: elasticsearch-config
        name: elasticsearch-config-volume
I've used an image I built myself (Dockerfile), as the original fabric8/elasticsearch-k8s:5.4.2 fails to load some Java classes with the configuration above.
I'm getting only this one line of log output from the fabric8 plugin on each node:
elasticsearch-0 elasticsearch [2017-07-04T13:16:00,278][WARN ][i.f.e.d.k.KubernetesUnicastHostsProvider] [yq9KlQs] Exception caught during discovery: io.fabric8.kubernetes.client.KubernetesClientException: An error has occurred.
elasticsearch-1 elasticsearch [2017-07-04T13:16:10,706][WARN ][i.f.e.d.k.KubernetesUnicastHostsProvider] [IlfXMMH] Exception caught during discovery: io.fabric8.kubernetes.client.KubernetesClientException: An error has occurred.
That's all.
The Kubernetes API server audit.log shows that the plugin does not even try to use the ServiceAccount I've created:
AUDIT: id="f06e30d1-ba2d-4ade-9d32-a716a0a37592" ip="127.0.0.1" method="GET" user="<none>" groups="<none>" as="<self>" asgroups="<lookup>" namespace="wazuh" uri="/api/v1/namespaces/wazuh/endpoints/elastic-cluster"
Of course, I can test my SA with curl manually (just to be sure it works fine):
curl --stderr /dev/null --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" https://192.168.128.1/api/v1/namespaces/wazuh/endpoints/elastic-cluster
{
"kind": "Endpoints",
"apiVersion": "v1",
"metadata": {
"name": "elastic-cluster",
"namespace": "wazuh",
"selfLink": "/api/v1/namespaces/wazuh/endpoints/elastic-cluster",
"uid": "c104ab17-5ca2-11e7-810f-2a5a76cb5f8b",
"resourceVersion": "55557470",
"creationTimestamp": "2017-06-29T08:12:55Z"
},....
}
And here is what the correct request looks like in audit.log:
2017-07-04T16:38:22.618122102+03:00 AUDIT: id="58a92f82-6c68-43e2-a2b3-c4d01c9542b5" ip="10.129.24.57" method="GET" user="system:serviceaccount:wazuh:wazuh" groups="\"system:serviceaccounts\",\"system:serviceaccounts:wazuh\",\"system:authenticated\"" as="<self>" asgroups="<lookup>" namespace="wazuh" uri="/api/v1/namespaces/wazuh/endpoints/elastic-cluster"
Please, help me dig further to solve this problem. I'm kinda out of ideas.
Thank you very much!
For some reason nodes cannot join the master.
I have 6 k8s nodes. Each of them has a worker pod, and there are two Elasticsearch clients. One of these worker pods could be a master. For some reason the kopf plugin shows that there is a master and no worker pods, but the worker pods do exist.
The master pod is reachable (when I use telnet on the data port, the master complains that the data is wrong), and the worker nodes have these logs:
[2016-11-07 16:03:28,438][INFO ][io.fabric8.elasticsearch.discovery.kubernetes.KubernetesDiscovery] [es-data-master-1916592259-bopgf] failed to send join request to master [{es-data-master-1916592259-n0kj9}{McCHthHJREiqQdr-llKX6w}{10.1.29.8}{10.1.29.8:9300}{master=true}], reason [RemoteTransportException[[es-data-master-1916592259-n0kj9][10.1.29.8:9300][internal:discovery/zen/join]]; nested: ConnectTransportException[[es-data-master-1916592259-bopgf][10.1.42.9:9300] connect_timeout[30s]]; ]
[2016-11-07 16:03:28,440][INFO ][io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider] [es-data-master-1916592259-bopgf] adding endpoint /10.1.15.6, transport_address 10.1.15.6:9300
[2016-11-07 16:03:28,440][INFO ][io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider] [es-data-master-1916592259-bopgf] adding endpoint /10.1.29.8, transport_address 10.1.29.8:9300
[2016-11-07 16:03:28,440][INFO ][io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider] [es-data-master-1916592259-bopgf] adding endpoint /10.1.53.8, transport_address 10.1.53.8:9300
[2016-11-07 16:03:28,440][INFO ][io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider] [es-data-master-1916592259-bopgf] adding endpoint /10.1.57.8, transport_address 10.1.57.8:9300
[2016-11-07 16:03:29,942][INFO ][io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider] [es-data-master-1916592259-bopgf] adding endpoint /10.1.15.6, transport_address 10.1.15.6:9300
[2016-11-07 16:03:29,942][INFO ][io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider] [es-data-master-1916592259-bopgf] adding endpoint /10.1.29.8, transport_address 10.1.29.8:9300
[2016-11-07 16:03:29,942][INFO ][io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider] [es-data-master-1916592259-bopgf] adding endpoint /10.1.53.8, transport_address 10.1.53.8:9300
[2016-11-07 16:03:29,942][INFO ][io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider] [es-data-master-1916592259-bopgf] adding endpoint /10.1.57.8, transport_address 10.1.57.8:9300
[2016-11-07 16:03:31,445][INFO ][io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider] [es-data-master-1916592259-bopgf] adding endpoint /10.1.15.6, transport_address 10.1.15.6:9300
[2016-11-07 16:03:31,445][INFO ][io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider] [es-data-master-1916592259-bopgf] adding endpoint /10.1.29.8, transport_address 10.1.29.8:9300
[2016-11-07 16:03:31,445][INFO ][io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider] [es-data-master-1916592259-bopgf] adding endpoint /10.1.53.8, transport_address 10.1.53.8:9300
[2016-11-07 16:03:31,445][INFO ][io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider] [es-data-master-1916592259-bopgf] adding endpoint /10.1.57.8, transport_address 10.1.57.8:9300
I use Elasticsearch 2.4.1
What is wrong? And how can I fix that?
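One thing worth checking when endpoints are discovered but joins still time out is the address each node publishes on the transport layer. A hedged elasticsearch.yml sketch (the POD_IP variable is an assumption, injected into the container environment via the Kubernetes downward API):

```yaml
network:
  host: 0.0.0.0
  # POD_IP is assumed to be set via the downward API (fieldRef: status.podIP)
  # so each node advertises its routable pod IP to the others.
  publish_host: ${POD_IP}
```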
Using this plugin with ES 5.0.1 and enjoying it greatly. I was curious to know whether support for ES 5.1.1, which launched last week, is coming up, and if so when?
Thanks,
Yaron.
I know this is not the appropriate place to post this, but is there any chance an ES plugin could be created for Tutum?
https://www.tutum.co/
The service-account has been configured and provided to the pods.
[2015-06-14 18:50:06,164][WARN ][io.fabric8.elasticsearch.discovery.k8s.K8sUnicastHostsProvider] [Conan the Barbarian] Exception caught during discovery javax.ws.rs.ProcessingException : javax.net.ssl.SSLHandshakeException: SSLHandshakeException invoking https://10.100.0.1:443/api/v1beta3/namespaces/default/endpoints/elasticsearch-discovery: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
javax.ws.rs.ProcessingException: javax.net.ssl.SSLHandshakeException: SSLHandshakeException invoking https://10.100.0.1:443/api/v1beta3/namespaces/default/endpoints/elasticsearch-discovery: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at org.apache.cxf.jaxrs.client.AbstractClient.checkClientException(AbstractClient.java:557)
at org.apache.cxf.jaxrs.client.AbstractClient.preProcessResult(AbstractClient.java:539)
at org.apache.cxf.jaxrs.client.ClientProxyImpl.doChainedInvocation(ClientProxyImpl.java:676)
at org.apache.cxf.jaxrs.client.ClientProxyImpl.invoke(ClientProxyImpl.java:224)
at com.sun.proxy.$Proxy29.endpointsForService(Unknown Source)
at io.fabric8.elasticsearch.discovery.k8s.K8sUnicastHostsProvider.getNodesFromKubernetesSelector(K8sUnicastHostsProvider.java:123)
at io.fabric8.elasticsearch.discovery.k8s.K8sUnicastHostsProvider.buildDynamicNodes(K8sUnicastHostsProvider.java:106)
at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing.sendPings(UnicastZenPing.java:313)
at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing$2$1.doRun(UnicastZenPing.java:232)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:36)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: javax.net.ssl.SSLHandshakeException: SSLHandshakeException invoking https://10.100.0.1:443/api/v1beta3/namespaces/default/endpoints/elasticsearch-discovery: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.mapException(HTTPConduit.java:1364)
at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.close(HTTPConduit.java:1348)
at org.apache.cxf.transport.AbstractConduit.close(AbstractConduit.java:56)
at org.apache.cxf.transport.http.HTTPConduit.close(HTTPConduit.java:651)
at org.apache.cxf.interceptor.MessageSenderInterceptor$MessageSenderEndingInterceptor.handleMessage(MessageSenderInterceptor.java:62)
at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:307)
at org.apache.cxf.jaxrs.client.AbstractClient.doRunInterceptorChain(AbstractClient.java:624)
at org.apache.cxf.jaxrs.client.ClientProxyImpl.doChainedInvocation(ClientProxyImpl.java:674)
... 10 more
Caused by: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at sun.security.ssl.Alerts.getSSLException(Alerts.java:192)
at sun.security.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1937)
at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:302)
at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:296)
at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1478)
at sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:212)
at sun.security.ssl.Handshaker.processLoop(Handshaker.java:979)
at sun.security.ssl.Handshaker.process_record(Handshaker.java:914)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1050)
at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1363)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1391)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1375)
at sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:563)
at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1512)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1440)
at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480)
at sun.net.www.protocol.https.HttpsURLConnectionImpl.getResponseCode(HttpsURLConnectionImpl.java:338)
at org.apache.cxf.transport.http.URLConnectionHTTPConduit$URLConnectionWrappedOutputStream.getResponseCode(URLConnectionHTTPConduit.java:275)
at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.handleResponseInternal(HTTPConduit.java:1563)
at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.handleResponse(HTTPConduit.java:1533)
at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.close(HTTPConduit.java:1335)
... 16 more
Caused by: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:387)
at sun.security.validator.PKIXValidator.engineValidate(PKIXValidator.java:292)
at sun.security.validator.Validator.validate(Validator.java:260)
at sun.security.ssl.X509TrustManagerImpl.validate(X509TrustManagerImpl.java:324)
at sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:229)
at sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:124)
at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1460)
... 33 more
Caused by: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at sun.security.provider.certpath.SunCertPathBuilder.build(SunCertPathBuilder.java:145)
at sun.security.provider.certpath.SunCertPathBuilder.engineBuild(SunCertPathBuilder.java:131)
at java.security.cert.CertPathBuilder.build(CertPathBuilder.java:280)
at sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:382)
... 39 more
[2015-06-14 18:50:06,211][INFO ][cluster.service ] [Conan the Barbarian] new_master [Conan the Barbarian][8LwgUksNRAujqNy2TCHuEQ][elasticsearch-master-houon][inet[/10.244.68.7:9300]]{data=false, master=true}, reason: zen-disco-join (elected_as_master)
[2015-06-14 18:50:06,222][INFO ][node ] [Conan the Barbarian] started
[2015-06-14 18:50:06,356][INFO ][gateway ] [Conan the Barbarian] recovered [0] indices into cluster_state
If this image is run in an OpenShift production environment as part of the apiman application, it is required to run as a non-restricted container. So far I am using a service account that belongs to the privileged security context and runs as user ID 0. The image should be able to run as any random UID.
It fails in the startup script when it tries to do a chown.
apiman uses the fabric8/elasticsearch-k8s:1.6.0 image.
When running
elasticsearch-plugin install io.fabric8/elasticsearch-cloud-kubernetes/5.0.0
as suggested in the documentation, I get the following output:
A tool for managing installed elasticsearch plugins
Commands
--------
list - Lists installed elasticsearch plugins
install - Install a plugin
remove - Removes a plugin from elasticsearch
Non-option arguments:
command
Option Description
------ -----------
-h, --help show help
-s, --silent show minimal output
-v, --verbose show verbose output
ERROR: Unknown plugin io.fabric8/elasticsearch-cloud-kubernetes/5.0.0
It seems that elasticsearch is not looking for the plugin correctly...
I can install the plugin correctly by passing the full URL of the zip hosted on Maven Central:
elasticsearch-plugin install http://repo1.maven.org/maven2/io/fabric8/elasticsearch-cloud-kubernetes/5.0.0/elasticsearch-cloud-kubernetes-5.0.0.zip
I guess this might just mean updating the documentation, but I am not really sure whether this is a bug in elasticsearch or something else I overlooked.
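For what it's worth, the URL that works is just the standard Maven repository layout derived from the io.fabric8:elasticsearch-cloud-kubernetes:5.0.0 coordinates; a small shell sketch:

```shell
# Build the Maven Central download URL from the plugin coordinates:
# dots in the groupId become path segments.
group="io.fabric8"
artifact="elasticsearch-cloud-kubernetes"
version="5.0.0"
url="http://repo1.maven.org/maven2/${group//.//}/${artifact}/${version}/${artifact}-${version}.zip"
echo "$url"
# → http://repo1.maven.org/maven2/io/fabric8/elasticsearch-cloud-kubernetes/5.0.0/elasticsearch-cloud-kubernetes-5.0.0.zip
```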
$ docker run --rm --cap-add=IPC_LOCK --privileged elasticsearch:2.0.0 -Des.bootstrap.mlockall=true
[2015-11-19 20:16:32,368][WARN ][bootstrap ] Unable to lock JVM Memory: error=12,reason=Cannot allocate memory
[2015-11-19 20:16:32,370][WARN ][bootstrap ] This can result in part of the JVM being swapped out.
[2015-11-19 20:16:32,370][WARN ][bootstrap ] Increase RLIMIT_MEMLOCK, soft limit: 65536, hard limit: 65536
[2015-11-19 20:16:32,370][WARN ][bootstrap ] These can be adjusted by modifying /etc/security/limits.conf, for example:
# allow user 'elasticsearch' mlockall
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited
Currently, the plugin only supports the default namespace.
Hello, if I use hostPath as volume storage in Kubernetes, I receive the error below in the log. If I use emptyDir storage, there are no problems.
Any ideas? I can attach my deployment file if someone needs it.
[2017-01-20T14:25:17,137][WARN ][org.elasticsearch.index.engine.Engine] failed engine [refresh failed]
org.apache.lucene.index.CorruptIndexException: codec footer mismatch (file truncated?): actual footer=16792064 vs expected footer=-1071082520 (resource=BufferedChecksumIndexInput(MMapIndexInput(path="/data/data/nodes/0/indices/MKbBB-qMTpmU3V14q-pLnw/0/index/_6x.fdx")))
at org.apache.lucene.codecs.CodecUtil.validateFooter(CodecUtil.java:499) ~[lucene-core-6.3.0.jar:6.3.0 a66a44513ee8191e25b477372094bfa846450316 - shalin - 2016-11-02 19:47:11]
at org.apache.lucene.codecs.CodecUtil.checkFooter(CodecUtil.java:411) ~[lucene-core-6.3.0.jar:6.3.0 a66a44513ee8191e25b477372094bfa846450316 - shalin - 2016-11-02 19:47:11]
at org.apache.lucene.codecs.lucene50.Lucene50CompoundFormat.write(Lucene50CompoundFormat.java:103) ~[lucene-core-6.3.0.jar:6.3.0 a66a44513ee8191e25b477372094bfa846450316 - shalin - 2016-11-02 19:47:11]
at org.apache.lucene.index.IndexWriter.createCompoundFile(IndexWriter.java:4893) ~[lucene-core-6.3.0.jar:6.3.0 a66a44513ee8191e25b477372094bfa846450316 - shalin - 2016-11-02 19:47:11]
at org.apache.lucene.index.DocumentsWriterPerThread.sealFlushedSegment(DocumentsWriterPerThread.java:516) ~[lucene-core-6.3.0.jar:6.3.0 a66a44513ee8191e25b477372094bfa846450316 - shalin - 2016-11-02 19:47:11]
at org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:480) ~[lucene-core-6.3.0.jar:6.3.0 a66a44513ee8191e25b477372094bfa846450316 - shalin - 2016-11-02 19:47:11]
at org.apache.lucene.index.DocumentsWriter.doFlush(DocumentsWriter.java:539) ~[lucene-core-6.3.0.jar:6.3.0 a66a44513ee8191e25b477372094bfa846450316 - shalin - 2016-11-02 19:47:11]
at org.apache.lucene.index.DocumentsWriter.flushAllThreads(DocumentsWriter.java:653) ~[lucene-core-6.3.0.jar:6.3.0 a66a44513ee8191e25b477372094bfa846450316 - shalin - 2016-11-02 19:47:11]
at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:438) ~[lucene-core-6.3.0.jar:6.3.0 a66a44513ee8191e25b477372094bfa846450316 - shalin - 2016-11-02 19:47:11]
at org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter(StandardDirectoryReader.java:291) ~[lucene-core-6.3.0.jar:6.3.0 a66a44513ee8191e25b477372094bfa846450316 - shalin - 2016-11-02 19:47:11]
at org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:266) ~[lucene-core-6.3.0.jar:6.3.0 a66a44513ee8191e25b477372094bfa846450316 - shalin - 2016-11-02 19:47:11]
at org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:256) ~[lucene-core-6.3.0.jar:6.3.0 a66a44513ee8191e25b477372094bfa846450316 - shalin - 2016-11-02 19:47:11]
at org.apache.lucene.index.FilterDirectoryReader.doOpenIfChanged(FilterDirectoryReader.java:104) ~[lucene-core-6.3.0.jar:6.3.0 a66a44513ee8191e25b477372094bfa846450316 - shalin - 2016-11-02 19:47:11]
at org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:140) ~[lucene-core-6.3.0.jar:6.3.0 a66a44513ee8191e25b477372094bfa846450316 - shalin - 2016-11-02 19:47:11]
at org.apache.lucene.search.SearcherManager.refreshIfNeeded(SearcherManager.java:156) ~[lucene-core-6.3.0.jar:6.3.0 a66a44513ee8191e25b477372094bfa846450316 - shalin - 2016-11-02 19:47:11]
at org.apache.lucene.search.SearcherManager.refreshIfNeeded(SearcherManager.java:58) ~[lucene-core-6.3.0.jar:6.3.0 a66a44513ee8191e25b477372094bfa846450316 - shalin - 2016-11-02 19:47:11]
at org.apache.lucene.search.ReferenceManager.doMaybeRefresh(ReferenceManager.java:176) ~[lucene-core-6.3.0.jar:6.3.0 a66a44513ee8191e25b477372094bfa846450316 - shalin - 2016-11-02 19:47:11]
at org.apache.lucene.search.ReferenceManager.maybeRefreshBlocking(ReferenceManager.java:253) ~[lucene-core-6.3.0.jar:6.3.0 a66a44513ee8191e25b477372094bfa846450316 - shalin - 2016-11-02 19:47:11]
at org.elasticsearch.index.engine.InternalEngine.refresh(InternalEngine.java:646) [elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.shard.IndexShard.refresh(IndexShard.java:629) [elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.IndexService.maybeRefreshEngine(IndexService.java:689) [elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.IndexService.access$400(IndexService.java:93) [elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.IndexService$AsyncRefreshTask.runInternal(IndexService.java:831) [elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.index.IndexService$BaseAsyncTask.run(IndexService.java:742) [elasticsearch-5.1.1.jar:5.1.1]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:458) [elasticsearch-5.1.1.jar:5.1.1]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_111]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_111]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_111]
[2017-06-29T19:33:05,628][WARN ][o.e.d.z.ZenDiscovery ] [es-master-4168550062-gm6l4] not enough master nodes discovered during pinging (found [[Candidate{node={es-master-4168550062-gm6l4}{H7tVok-ATvWmo5hehqpCqg}{zyrcNp2OQkqGuQ0ZlTS-kQ}{172.17.35.3}{172.17.35.3:9300}, clusterStateVersion=-1}]], but needed [2]), pinging again
[2017-06-29T19:33:10,628][WARN ][o.e.d.z.UnicastZenPing ] [es-master-4168550062-gm6l4] timed out after [5s] resolving host [elasticsearch-discovery]
[2017-06-29T19:33:13,629][WARN ][o.e.d.z.ZenDiscovery ] [es-master-4168550062-gm6l4] not enough master nodes discovered during pinging (found [[Candidate{node={es-master-4168550062-gm6l4}{H7tVok-ATvWmo5hehqpCqg}{zyrcNp2OQkqGuQ0ZlTS-kQ}{172.17.35.3}{172.17.35.3:9300}, clusterStateVersion=-1}]], but needed [2]), pinging again
[2017-06-29T19:33:13,629][WARN ][o.e.d.z.UnicastZenPing ] [es-master-4168550062-gm6l4] failed to resolve host [elasticsearch-discovery]
java.net.UnknownHostException: elasticsearch-discovery
at java.net.InetAddress.getAllByName0(InetAddress.java:1280) ~[?:1.8.0_121]
at java.net.InetAddress.getAllByName(InetAddress.java:1192) ~[?:1.8.0_121]
at java.net.InetAddress.getAllByName(InetAddress.java:1126) ~[?:1.8.0_121]
at org.elasticsearch.transport.TcpTransport.parse(TcpTransport.java:888) ~[elasticsearch-5.4.0.jar:5.4.0]
at org.elasticsearch.transport.TcpTransport.addressesFromString(TcpTransport.java:843) ~[elasticsearch-5.4.0.jar:5.4.0]
at org.elasticsearch.transport.TransportService.addressesFromString(TransportService.java:674) ~[elasticsearch-5.4.0.jar:5.4.0]
at org.elasticsearch.discovery.zen.UnicastZenPing.lambda$null$0(UnicastZenPing.java:213) ~[elasticsearch-5.4.0.jar:5.4.0]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_121]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:569) [elasticsearch-5.4.0.jar:5.4.0]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_121]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_121]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_121]
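The `UnknownHostException: elasticsearch-discovery` above means the discovery service name was not resolvable within the unicast resolve timeout (`timed out after [5s] resolving host`). One workaround is to verify that cluster DNS can resolve the name before starting the node at all. Below is a minimal, hypothetical pre-start check (the function name and retry parameters are illustrative, not part of the plugin):

```python
import socket
import time


def wait_for_dns(hostname, attempts=5, delay=2.0):
    """Retry DNS resolution until the name resolves or attempts run out.

    Returns the first resolved IP address, or None if the name never
    resolved. Intended as a pre-start check so Elasticsearch is only
    launched once cluster DNS (e.g. the 'elasticsearch-discovery'
    headless service) is actually resolvable.
    """
    for attempt in range(1, attempts + 1):
        try:
            return socket.gethostbyname(hostname)
        except socket.gaierror:
            if attempt < attempts:
                time.sleep(delay)
    return None


# 'localhost' resolves immediately; a name like 'elasticsearch-discovery'
# would only resolve inside the cluster where the headless service exists.
wait_for_dns("localhost", attempts=1)
```

If this check fails inside the pod, the problem is the headless service definition or kube-dns, not the plugin itself.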
Not sure if this is the correct repo for this, but the 5.2.0 image does not have the correct version of elasticsearch installed:
uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: java.lang.IllegalArgumentException: Plugin [discovery-kubernetes] is incompatible with Elasticsearch [5.1.1]. Was designed for version [5.2.0]
elasticsearch@b4e01e29e890:/usr/share/elasticsearch$ elasticsearch --version
Version: 5.1.1, Build: 5395e21/2016-12-06T12:36:15.409Z, JVM: 1.8.0_111
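The startup error above is expected behavior: Elasticsearch 5.x refuses to load a plugin unless it was built for the node's exact version, so a `discovery-kubernetes` build for 5.2.0 cannot run on a 5.1.1 node. A small sketch of that exact-match check (the function is illustrative, not the actual Elasticsearch code):

```python
def check_plugin_compatibility(es_version: str, plugin_built_for: str) -> None:
    """Mimic Elasticsearch 5.x's startup rule: a plugin must be built for
    the *exact* Elasticsearch version, or the node refuses to start."""
    if es_version != plugin_built_for:
        raise ValueError(
            f"Plugin [discovery-kubernetes] is incompatible with "
            f"Elasticsearch [{es_version}]. Was designed for version "
            f"[{plugin_built_for}]"
        )


check_plugin_compatibility("5.2.0", "5.2.0")  # exact match: OK
```

So the fix is on the image side: either the image should ship Elasticsearch 5.2.0, or it should bundle the plugin release that matches 5.1.1.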
When using Elasticsearch 2.3.5 with the matching version of the plugin, the Kubernetes API DNS name is sometimes not resolvable on the first discovery attempt, which causes the initial cluster join to fail. Unfortunately the node then neither retries the join nor shuts down; as the log below shows, it simply elects itself master.
When the pod is manually restarted, the DNS name resolves and the node joins the existing cluster.
This is the corresponding log.
[2017-01-27 09:55:00,649][INFO ][node ] [elasticsearch-2] version[2.3.5], pid[1], build[90f439f/2016-07-27T10:36:52Z]
[2017-01-27 09:55:00,650][INFO ][node ] [elasticsearch-2] initializing ...
[2017-01-27 09:55:02,923][INFO ][plugins ] [elasticsearch-2] modules [reindex, lang-expression, lang-groovy], plugins [kopf, cloud-kubernetes], sites [kopf]
[2017-01-27 09:55:03,034][INFO ][env ] [elasticsearch-2] using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/xvdba)]], net usable_space [934gb], net total_space [984.1gb], spins? [possibly], types [ext4]
[2017-01-27 09:55:03,034][INFO ][env ] [elasticsearch-2] heap size [990.7mb], compressed ordinary object pointers [true]
[2017-01-27 09:55:12,144][INFO ][node ] [elasticsearch-2] initialized
[2017-01-27 09:55:12,144][INFO ][node ] [elasticsearch-2] starting ...
[2017-01-27 09:55:12,627][INFO ][transport ] [elasticsearch-2] publish_address {10.10.81.3:9300}, bound_addresses {0.0.0.0:9300}
[2017-01-27 09:55:12,632][INFO ][discovery ] [elasticsearch-2] graylog2/gGarC_0CTri6gLvNI4bUWQ
[2017-01-27 09:55:23,937][WARN ][io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider] [elasticsearch-2] Exception caught during discovery: An error has occurred.
io.fabric8.kubernetes.client.KubernetesClientException: An error has occurred.
at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:57)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:125)
at io.fabric8.elasticsearch.cloud.kubernetes.KubernetesAPIServiceImpl.endpoints(KubernetesAPIServiceImpl.java:35)
at io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider.readNodes(KubernetesUnicastHostsProvider.java:112)
at io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider.lambda$buildDynamicNodes$0(KubernetesUnicastHostsProvider.java:80)
at java.security.AccessController.doPrivileged(Native Method)
at io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider.buildDynamicNodes(KubernetesUnicastHostsProvider.java:79)
at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing.sendPings(UnicastZenPing.java:335)
at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing.ping(UnicastZenPing.java:240)
at org.elasticsearch.discovery.zen.ping.ZenPingService.ping(ZenPingService.java:106)
at org.elasticsearch.discovery.zen.ping.ZenPingService.pingAndWait(ZenPingService.java:84)
at org.elasticsearch.discovery.zen.ZenDiscovery.findMaster(ZenDiscovery.java:886)
at org.elasticsearch.discovery.zen.ZenDiscovery.innerJoinCluster(ZenDiscovery.java:350)
at org.elasticsearch.discovery.zen.ZenDiscovery.access$4800(ZenDiscovery.java:91)
at org.elasticsearch.discovery.zen.ZenDiscovery$JoinThreadControl$1.run(ZenDiscovery.java:1237)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.UnknownHostException: kubernetes.default.svc: Name or service not known
at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
at java.net.InetAddress.getAllByName0(InetAddress.java:1276)
at java.net.InetAddress.getAllByName(InetAddress.java:1192)
at java.net.InetAddress.getAllByName(InetAddress.java:1126)
at com.squareup.okhttp.Dns$1.lookup(Dns.java:39)
at com.squareup.okhttp.internal.http.RouteSelector.resetNextInetSocketAddress(RouteSelector.java:175)
at com.squareup.okhttp.internal.http.RouteSelector.nextProxy(RouteSelector.java:141)
at com.squareup.okhttp.internal.http.RouteSelector.next(RouteSelector.java:83)
at com.squareup.okhttp.internal.http.StreamAllocation.findConnection(StreamAllocation.java:174)
at com.squareup.okhttp.internal.http.StreamAllocation.findHealthyConnection(StreamAllocation.java:126)
at com.squareup.okhttp.internal.http.StreamAllocation.newStream(StreamAllocation.java:95)
at com.squareup.okhttp.internal.http.HttpEngine.connect(HttpEngine.java:281)
at com.squareup.okhttp.internal.http.HttpEngine.sendRequest(HttpEngine.java:224)
at com.squareup.okhttp.Call.getResponse(Call.java:286)
at com.squareup.okhttp.Call$ApplicationInterceptorChain.proceed(Call.java:243)
at io.fabric8.kubernetes.client.utils.HttpClientUtils$3.intercept(HttpClientUtils.java:110)
at com.squareup.okhttp.Call$ApplicationInterceptorChain.proceed(Call.java:232)
at com.squareup.okhttp.Call.getResponseWithInterceptorChain(Call.java:205)
at com.squareup.okhttp.Call.execute(Call.java:80)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:210)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:205)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleGet(BaseOperation.java:510)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:118)
... 16 more
[2017-01-27 09:55:25,526][WARN ][io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider] [elasticsearch-2] Exception caught during discovery: An error has occurred.
io.fabric8.kubernetes.client.KubernetesClientException: An error has occurred.
at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:57)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:125)
at io.fabric8.elasticsearch.cloud.kubernetes.KubernetesAPIServiceImpl.endpoints(KubernetesAPIServiceImpl.java:35)
at io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider.readNodes(KubernetesUnicastHostsProvider.java:112)
at io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider.lambda$buildDynamicNodes$0(KubernetesUnicastHostsProvider.java:80)
at java.security.AccessController.doPrivileged(Native Method)
at io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider.buildDynamicNodes(KubernetesUnicastHostsProvider.java:79)
at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing.sendPings(UnicastZenPing.java:335)
at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing$2.doRun(UnicastZenPing.java:249)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.UnknownHostException: kubernetes.default.svc
at java.net.InetAddress.getAllByName0(InetAddress.java:1280)
at java.net.InetAddress.getAllByName(InetAddress.java:1192)
at java.net.InetAddress.getAllByName(InetAddress.java:1126)
at com.squareup.okhttp.Dns$1.lookup(Dns.java:39)
at com.squareup.okhttp.internal.http.RouteSelector.resetNextInetSocketAddress(RouteSelector.java:175)
at com.squareup.okhttp.internal.http.RouteSelector.nextProxy(RouteSelector.java:141)
at com.squareup.okhttp.internal.http.RouteSelector.next(RouteSelector.java:83)
at com.squareup.okhttp.internal.http.StreamAllocation.findConnection(StreamAllocation.java:174)
at com.squareup.okhttp.internal.http.StreamAllocation.findHealthyConnection(StreamAllocation.java:126)
at com.squareup.okhttp.internal.http.StreamAllocation.newStream(StreamAllocation.java:95)
at com.squareup.okhttp.internal.http.HttpEngine.connect(HttpEngine.java:281)
at com.squareup.okhttp.internal.http.HttpEngine.sendRequest(HttpEngine.java:224)
at com.squareup.okhttp.Call.getResponse(Call.java:286)
at com.squareup.okhttp.Call$ApplicationInterceptorChain.proceed(Call.java:243)
at io.fabric8.kubernetes.client.utils.HttpClientUtils$3.intercept(HttpClientUtils.java:110)
at com.squareup.okhttp.Call$ApplicationInterceptorChain.proceed(Call.java:232)
at com.squareup.okhttp.Call.getResponseWithInterceptorChain(Call.java:205)
at com.squareup.okhttp.Call.execute(Call.java:80)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:210)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:205)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleGet(BaseOperation.java:510)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:118)
... 11 more
[2017-01-27 09:55:27,030][WARN ][io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider] [elasticsearch-2] Exception caught during discovery: An error has occurred.
io.fabric8.kubernetes.client.KubernetesClientException: An error has occurred.
at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:57)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:125)
at io.fabric8.elasticsearch.cloud.kubernetes.KubernetesAPIServiceImpl.endpoints(KubernetesAPIServiceImpl.java:35)
at io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider.readNodes(KubernetesUnicastHostsProvider.java:112)
at io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider.lambda$buildDynamicNodes$0(KubernetesUnicastHostsProvider.java:80)
at java.security.AccessController.doPrivileged(Native Method)
at io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider.buildDynamicNodes(KubernetesUnicastHostsProvider.java:79)
at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing.sendPings(UnicastZenPing.java:335)
at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing$2$1.doRun(UnicastZenPing.java:253)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.UnknownHostException: kubernetes.default.svc
at java.net.InetAddress.getAllByName0(InetAddress.java:1280)
at java.net.InetAddress.getAllByName(InetAddress.java:1192)
at java.net.InetAddress.getAllByName(InetAddress.java:1126)
at com.squareup.okhttp.Dns$1.lookup(Dns.java:39)
at com.squareup.okhttp.internal.http.RouteSelector.resetNextInetSocketAddress(RouteSelector.java:175)
at com.squareup.okhttp.internal.http.RouteSelector.nextProxy(RouteSelector.java:141)
at com.squareup.okhttp.internal.http.RouteSelector.next(RouteSelector.java:83)
at com.squareup.okhttp.internal.http.StreamAllocation.findConnection(StreamAllocation.java:174)
at com.squareup.okhttp.internal.http.StreamAllocation.findHealthyConnection(StreamAllocation.java:126)
at com.squareup.okhttp.internal.http.StreamAllocation.newStream(StreamAllocation.java:95)
at com.squareup.okhttp.internal.http.HttpEngine.connect(HttpEngine.java:281)
at com.squareup.okhttp.internal.http.HttpEngine.sendRequest(HttpEngine.java:224)
at com.squareup.okhttp.Call.getResponse(Call.java:286)
at com.squareup.okhttp.Call$ApplicationInterceptorChain.proceed(Call.java:243)
at io.fabric8.kubernetes.client.utils.HttpClientUtils$3.intercept(HttpClientUtils.java:110)
at com.squareup.okhttp.Call$ApplicationInterceptorChain.proceed(Call.java:232)
at com.squareup.okhttp.Call.getResponseWithInterceptorChain(Call.java:205)
at com.squareup.okhttp.Call.execute(Call.java:80)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:210)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:205)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleGet(BaseOperation.java:510)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:118)
... 11 more
[2017-01-27 09:55:27,135][INFO ][cluster.service ] [elasticsearch-2] new_master {elasticsearch-2}{gGarC_0CTri6gLvNI4bUWQ}{10.10.81.3}{10.10.81.3:9300}{master=true}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2017-01-27 09:55:27,148][INFO ][http ] [elasticsearch-2] publish_address {10.10.81.3:9200}, bound_addresses {0.0.0.0:9200}
[2017-01-27 09:55:27,149][INFO ][node ] [elasticsearch-2] started
[2017-01-27 09:55:27,235][INFO ][gateway ] [elasticsearch-2] recovered [0] indices into cluster_state
Thanks for the plugin, it saves us a lot of time.
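Since a manual pod restart reliably fixes this, one pragmatic mitigation is an entrypoint guard that blocks until `kubernetes.default.svc` resolves and exits non-zero otherwise, letting the kubelet restart the pod automatically instead of leaving a self-elected single-node master. This is a hypothetical sketch, not part of the plugin (the timeout values are arbitrary):

```python
import socket
import time


def api_dns_ready(host="kubernetes.default.svc", timeout=60.0, interval=2.0):
    """Return True once the Kubernetes API DNS name resolves, or False
    when the deadline passes without a successful lookup.

    Run this before exec'ing Elasticsearch; on False, exit non-zero so
    Kubernetes restarts the pod, automating the manual restart that
    currently works around the failed first discovery attempt.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            socket.gethostbyname(host)
            return True
        except socket.gaierror:
            time.sleep(interval)
    return False


# Usage in an entrypoint script (illustrative):
#   if not api_dns_ready():
#       sys.exit(1)   # kubelet restarts the pod and discovery retries
#   ...exec the real Elasticsearch entrypoint...
```

Setting `discovery.zen.minimum_master_nodes` appropriately would also prevent a node that cannot discover any peers from forming a one-node cluster on its own.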