
harisekhon / dockerfiles


50+ DockerHub public images for Docker & Kubernetes - DevOps, CI/CD, GitHub Actions, CircleCI, Jenkins, TeamCity, Alpine, CentOS, Debian, Fedora, Ubuntu, Hadoop, Kafka, ZooKeeper, HBase, Cassandra, Solr, SolrCloud, Presto, Apache Drill, Nifi, Spark, Consul, Riak

Home Page: https://www.linkedin.com/in/HariSekhon

License: MIT License

Languages: Makefile 8.60%, Shell 57.06%, Erlang 4.51%, Dockerfile 29.83%
Topics: hadoop hbase cassandra solr solrcloud kafka consul zookeeper apache-drill dockerhub

dockerfiles's Issues

Unable to create a volume for solr data

I tried to create volumes on /solr/example/cloud/ but the permissions are incorrect.
Could you please add a "chmod" in the Dockerfile to set the right permissions on /solr/example/cloud/ before starting Solr?
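A possible workaround in the meantime (untested, and assuming the image runs Solr as a user named solr) is to pre-create a named volume and fix its ownership before starting the container:

  # Sketch only: the "solr-data" volume name and the solr user/group are assumptions, not from this repo.
  docker volume create solr-data
  docker run --rm -u root --entrypoint chown -v solr-data:/solr/example/cloud harisekhon/solrcloud -R solr:solr /solr/example/cloud
  docker run -d -v solr-data:/solr/example/cloud harisekhon/solrcloud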

make run cassandra

echo "docker run -ti --rm harisekhon/cassandra-dev:3.11"
docker run -ti --rm harisekhon/cassandra-dev:3.11
Unable to find image 'harisekhon/cassandra-dev:3.11' locally
3.11: Pulling from harisekhon/cassandra-dev
cd784148e348: Pull complete
e9fa13bbd229: Pull complete
e7ee5a846f96: Pull complete
936840502b7d: Pull complete
c7aed77144e9: Pull complete
d4bbe4cf2406: Pull complete
8e253e2bfd1c: Pull complete
00b13d39ece5: Pull complete
e5196f978906: Pull complete
Digest: sha256:6623d34a680190f7025daf7e345cb9b722a9217ebfc12a44fa7f77bfe9c6e46c
Status: Downloaded newer image for harisekhon/cassandra-dev:3.11
grep: /cassandra/logs/system.log: No such file or directory
.OpenJDK 64-Bit Server VM warning: Cannot open file /cassandra/bin/../logs/gc.log due to No such file or directory

grep: /cassandra/logs/system.log: No such file or directory
.grep: /cassandra/logs/system.log: No such file or directory
.grep: /cassandra/logs/system.log: No such file or directory
.grep: /cassandra/logs/system.log: No such file or directory
.grep: /cassandra/logs/system.log: No such file or directory
.grep: /cassandra/logs/system.log: No such file or directory
.grep: /cassandra/logs/system.log: No such file or directory
.grep: /cassandra/logs/system.log: No such file or directory
.grep: /cassandra/logs/system.log: No such file or directory
.grep: /cassandra/logs/system.log: No such file or directory
.grep: /cassandra/logs/system.log: No such file or directory
.grep: /cassandra/logs/system.log: No such file or directory
.grep: /cassandra/logs/system.log: No such file or directory
.grep: /cassandra/logs/system.log: No such file or directory
.grep: /cassandra/logs/system.log: No such file or directory
.grep: /cassandra/logs/system.log: No such file or directory
.grep: /cassandra/logs/system.log: No such file or directory
.grep: /cassandra/logs/system.log: No such file or directory
.grep: /cassandra/logs/system.log: No such file or directory
.grep: /cassandra/logs/system.log: No such file or directory


Didn't find CQL startup in cassandra system.log, trying CQL anyway


su cassandra /cassandra/bin/cqlsh
Traceback (most recent call last):
  File "/apache-cassandra-3.11.4/bin/cqlsh.py", line 2443, in <module>
    main(*read_options(sys.argv[1:], os.environ))
  File "/apache-cassandra-3.11.4/bin/cqlsh.py", line 2421, in main
    encoding=options.encoding)
  File "/apache-cassandra-3.11.4/bin/cqlsh.py", line 485, in __init__
    load_balancing_policy=WhiteListRoundRobinPolicy([self.hostname]),
  File "/usr/lib/python2.7/site-packages/cassandra/policies.py", line 426, in __init__
    for endpoint in socket.getaddrinfo(a, None, socket.AF_UNSPEC, socket.SOCK_STREAM)]
socket.gaierror: [Errno -2] Name does not resolve
make: *** [../Makefile.in:193: run] Error 1

Cannot start the container and successfully run hbase shell

Apologies if some of these issues will be obvious with more experience. I have tried several HBase containers (including a few authored by others) and so far no luck, just a variety of issues. Note I am running RHEL7 on VMware, and I started the container this way:

docker run -t -i harisekhon/hbase-dev /bin/bash

Running ./start-hbase.sh gives:
localhost: /hbase/bin/zookeepers.sh: line 52: ssh: command not found

ssh is indeed not in /bin, so I'm not going to make much progress like this. Please advise, thanks.
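One possible workaround until ssh is added to the image (a sketch, not the image's documented startup path) is to start the daemons directly, which bypasses the ssh call in zookeepers.sh:

  # Paths assume HBase lives under /hbase as in the error above.
  /hbase/bin/hbase-daemon.sh start zookeeper
  /hbase/bin/hbase-daemon.sh start master
  /hbase/bin/hbase-daemon.sh start regionserver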

Error starting userland proxy: Bind for 0.0.0.0:50090: unexpected error Permission denied

Hi HariSekhon,

I'm having a problem starting the Hadoop docker-compose with the error below:

λ  docker-compose up -d
Starting hadoop_hadoop_1 ... error

ERROR: for hadoop_hadoop_1  Cannot start service hadoop: driver failed programming external connectivity on endpoint hadoop_hadoop_1 (114a6d530bf9e4f6eb8bd3528f2ac847feb1cf089c76e4733a3c00e99ee97f32): Error starting userland proxy: Bind for 0.0.0.0:50090: unexpected error Permission denied

ERROR: for hadoop  Cannot start service hadoop: driver failed programming external connectivity on endpoint hadoop_hadoop_1 (114a6d530bf9e4f6eb8bd3528f2ac847feb1cf089c76e4733a3c00e99ee97f32): Error starting userland proxy: Bind for 0.0.0.0:50090: unexpected error Permission denied
ERROR: Encountered errors while bringing up the project.

I'd appreciate any advice.
Jason
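For context, on Windows this error often means the host port falls inside a reserved/excluded port range rather than being genuinely in use (an assumption based on the λ prompt suggesting a Windows shell). A hedged way to check, plus a workaround of remapping to a different host port:

  # Show Windows' excluded TCP port ranges (run in an elevated prompt):
  netsh interface ipv4 show excludedportrange protocol=tcp
  # Workaround sketch: publish the container port on a different host port,
  # either in a docker-compose override or directly:
  docker run -d -p 50091:50090 harisekhon/hadoop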

Connectivity issues in hbase-dev 1.3

We use your hbase-dev image in our integration tests and are getting connectivity issues like Caused by: java.net.ConnectException: Connection refused.
The point of using a versioned image instead of latest was that it would not be updated :)

Could you please revert it back to the image that it was before your update? Or point me to a git commit hash that I can build from myself?
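For reproducibility in the meantime, one option (a sketch, not a repo recommendation; the 1.3 tag is taken from the issue title) is to pin the still-working image by digest rather than by tag:

  # Print the digest of the locally cached image that still works:
  docker inspect --format '{{index .RepoDigests 0}}' harisekhon/hbase-dev:1.3
  # Then reference harisekhon/hbase-dev@sha256:<digest> in the integration tests.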

@HariSekhon

Add HBase 2

HBase 2 was released recently so maybe worth adding.

And I take this chance to thank you, excellent job with all your images, you have literally saved me and others tons of time by nicely packaging some of these projects!

hbase 0.98 isn't working

The 0.98 image for HBase isn't working.
This is the command I run to start the container:
docker run -ti -p 2181:2181 -p 8080:8080 -p 8085:8085 -p 9090:9090 -p 9095:9095 -p 16000:16000 -p 16010:16010 -p 16201:16201 -p 16301:16301 harisekhon/hbase:0.98

Then, when it starts and I try to create a namespace with create_namespace 'crawler', I get the following error:

hbase(main):001:0> create_namespace 'crawler'
2017-08-22 19:03:57,379 WARN  [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

ERROR: java.io.IOException
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2247)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:104)
        at org.apache.hadoop.hbase.ipc.FifoRpcScheduler$1.run(FifoRpcScheduler.java:74)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NullPointerException
        at org.apache.hadoop.hbase.master.HMaster.createNamespace(HMaster.java:3524)
        at org.apache.hadoop.hbase.master.HMaster.createNamespace(HMaster.java:3430)
        at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:44958)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2195)
        ... 7 more

Here is some help for this command:
Create namespace; pass namespace name,
and optionally a dictionary of namespace configuration.
Examples:

  hbase> create_namespace 'ns1'
  hbase> create_namespace 'ns1', {'PROPERTY_NAME'=>'PROPERTY_VALUE'}


hbase(main):002:0>

Solr Upgrade

Hari,

Could you push an update to Solr 6.6 on all the Solr projects? (I'm particularly interested in solrcloud but I use Solr all the time and would find the others useful).

Matthew

docker-compose up in hadoop or hadoop-dev folder error

$ docker-compose up
Creating network "hadoop_default" with the default driver
Pulling hadoop (harisekhon/hadoop:latest)...
latest: Pulling from harisekhon/hadoop
d9aaf4d82f24: Already exists
71e193c229b6: Pull complete
34e052ae12c1: Pull complete
7c28f3b3ed5b: Pull complete
b9aeb45a846c: Pull complete
20d3342cd6a7: Pull complete
96ad78d93f88: Pull complete
39f02a9b4821: Pull complete
934c7436ce6e: Pull complete
f4001b22b79b: Pull complete
ae9ff6a67139: Pull complete
Digest: sha256:6c2668f5e59d4b870352cf52f1bcd75945eebe88fb81a1ea3df2464a65951ee6
Status: Downloaded newer image for harisekhon/hadoop:latest
Creating hadoop_hadoop_1 ... done
Attaching to hadoop_hadoop_1
hadoop_1 | /bin/sh: error while loading shared libraries: /lib64/libdl.so.2: invalid ELF header
hadoop_hadoop_1 exited with code 127

Mounting a volume causes FileSystemVersionException

I use docker-compose to start JanusGraph and HBase with the following compose file:
version: "3"
services:

    janusgraph:
        image: janusgraph/janusgraph:0.5.3
        container_name: janusgraph1
        volumes:
            - ./importData:/opt/janusgraph/importData
            - ./remote-objects.yaml:/opt/janusgraph/conf/remote-objects.yaml
            - /opt/janusgraph/lib
            - /opt/janusgraph/ext
        environment:
            janusgraph.storage.backend: hbase
            janusgraph.storage.hostname: xxxxxxxx
            janusgraph.storage.port: 2181
            janusgraph.cache.db-cache: "true"
            janusgraph.cache.db-cache-clean-wait: 20
            janusgraph.cache.db-cache-time: 180000
            janusgraph.cache.db-cache-size: 0.5
            janusgraph.index.search.backend: elasticsearch
            janusgraph.index.search.hostname: xxxxxxxx
            index.search.port: 9200
        ports:
            - "8182:8182"
        depends_on:
            - hbase

    hbase:
        image: harisekhon/hbase:2.1
        container_name: hbase
        ports:
            - "2181:2181"
            - "8080:8080"
            - "8085:8085"
            - "9090:9090"
            - "9095:9095"
            - "16000:16000"
            - "16010:16010"
            - "16020:16020"
            - "16030:16030"
            - "16201:16201"
            - "16301:16301"
        volumes:
            - ./hbase-data/data:/hbase-data/data

Then HBase throws this exception:

2021-07-16 09:28:08,515 INFO  [master/5748bd54b4eb:16000:becomeActiveMaster] master.ActiveMasterManager: Registered as active master=5748bd54b4eb,16000,1626427683630
2021-07-16 09:28:08,709 ERROR [master/5748bd54b4eb:16000:becomeActiveMaster] master.HMaster: Failed to become active master
org.apache.hadoop.hbase.util.FileSystemVersionException: HBase file layout needs to be upgraded. You have version null and I want version 8. Consult http://hbase.apache.org/book.html for further information about upgrading HBase. Is your hbase.rootdir valid? If so, you may need to run 'hbase hbck -fixVersionFile'.
        at org.apache.hadoop.hbase.util.FSUtils.checkVersion(FSUtils.java:446)
        at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:271)
        at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:151)
        at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:122)
        at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:860)
        at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2272)
        at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:581)
        at java.lang.Thread.run(Thread.java:748)

When I comment out the volume mount, it returns to normal. Is there anything wrong with this?
volumes: - ./hbase-data/data:/hbase-data/data
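A hedged guess is that the bind mount hides the hbase.version file HBase expects under its root directory, which is why it reports "version null". Two things that might be worth trying (sketch only; the in-container paths are assumptions):

  # Check that the host path maps onto the directory hbase.rootdir actually points at,
  # and let HBase initialise it from empty once. If the layout is half-initialised,
  # the error message itself suggests:
  docker exec -ti hbase hbase hbck -fixVersionFile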

Docker Image

Hey, this is basically an inquiry. I'm a bit new to Docker and just trying to learn and use it.
What I'm looking for is a Docker image or Dockerfile that combines:

  1. Java
  2. Scala
  3. Eclipse
  4. Maven
  5. Hadoop

It would help if you could share the path to an image that combines as many of these as possible.

Updated hbase:1.4 image causing connection failures

An update to the harisekhon/hbase:1.4 image broke our Alpakka HBase connector integration test some time after March 8th (the last successful HBase integration test build). An earlier cached version of the image (from 2 months ago) seems to work fine.

akka/alpakka#2185

Our test clients are failing with several different error messages, but I think the underlying errors are connection timeouts. We update our hosts file to point hbase to 127.0.0.1; this no longer seems to work locally or on Travis, but it did with the old cached version I had.
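For reference, the hosts-file workaround mentioned above usually amounts to something like this (a sketch of that approach for context, not a fix for the regression; port list abbreviated):

  # Give the container the hostname it registers in ZooKeeper and publish its ports:
  docker run -d --name hbase --hostname hbase -p 2181:2181 -p 16000:16000 -p 16020:16020 harisekhon/hbase:1.4
  # Make that hostname resolve to loopback for the host-side test JVM:
  echo "127.0.0.1 hbase" | sudo tee -a /etc/hosts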

A connection timeout:

[error] Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions:
[error] Wed Mar 11 10:29:06 EDT 2020, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=68437: Call to hbase/127.0.0.1:16020 failed on connection exception: java.net.ConnectException: Connection refused row 'person2,,00000000000000' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=hbase,16020,1583936752151, seqNum=0
[error]     at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:329)
[error]     at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:242)
[error]     at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:58)
[error]     at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:219)
[error]     at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:275)
[error]     at org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:436)
[error]     at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:310)
[error]     at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:196)
[error]     at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:89)
[error]     at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isTableAvailable(ConnectionManager.java:1057)
[error]     at org.apache.hadoop.hbase.client.HBaseAdmin.isTableAvailable(HBaseAdmin.java:1537)
[error]     at akka.stream.alpakka.hbase.impl.HBaseCapabilities.$anonfun$getOrCreateTable$1(HBaseCapabilities.scala:53)
[error]     at akka.stream.alpakka.hbase.impl.HBaseCapabilities.twr(HBaseCapabilities.scala:26)
[error]     at akka.stream.alpakka.hbase.impl.HBaseCapabilities.twr$(HBaseCapabilities.scala:24)
[error]     at akka.stream.alpakka.hbase.impl.HBaseFlowStage$$anon$1.twr(HBaseFlowStage.scala:25)
[error]     at akka.stream.alpakka.hbase.impl.HBaseCapabilities.getOrCreateTable(HBaseCapabilities.scala:51)
[error]     at akka.stream.alpakka.hbase.impl.HBaseCapabilities.getOrCreateTable$(HBaseCapabilities.scala:49)
[error]     at akka.stream.alpakka.hbase.impl.HBaseFlowStage$$anon$1.getOrCreateTable(HBaseFlowStage.scala:25)
[error]     at akka.stream.alpakka.hbase.impl.HBaseFlowStage$$anon$1.table$lzycompute(HBaseFlowStage.scala:31)
[error]     at akka.stream.alpakka.hbase.impl.HBaseFlowStage$$anon$1.akka$stream$alpakka$hbase$impl$HBaseFlowStage$$anon$$table(HBaseFlowStage.scala:31)
[error]     at akka.stream.alpakka.hbase.impl.HBaseFlowStage$$anon$1$$anon$3.$anonfun$onPush$1(HBaseFlowStage.scala:48)
[error]     at scala.collection.Iterator.foreach(Iterator.scala:941)
[error]     at scala.collection.Iterator.foreach$(Iterator.scala:941)
[error]     at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
[error]     at scala.collection.IterableLike.foreach(IterableLike.scala:74)
[error]     at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
[error]     at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
[error]     at akka.stream.alpakka.hbase.impl.HBaseFlowStage$$anon$1$$anon$3.onPush(HBaseFlowStage.scala:46)
[error]     at akka.stream.impl.fusing.GraphInterpreter.processPush(GraphInterpreter.scala:523)
[error]     at akka.stream.impl.fusing.GraphInterpreter.processEvent(GraphInterpreter.scala:480)
[error]     at akka.stream.impl.fusing.GraphInterpreter.execute(GraphInterpreter.scala:376)
[error]     at akka.stream.impl.fusing.GraphInterpreterShell.runBatch(ActorGraphInterpreter.scala:606)
[error]     at akka.stream.impl.fusing.ActorGraphInterpreter$SimpleBoundaryEvent.execute(ActorGraphInterpreter.scala:47)
[error]     at akka.stream.impl.fusing.ActorGraphInterpreter$SimpleBoundaryEvent.execute$(ActorGraphInterpreter.scala:43)
[error]     at akka.stream.impl.fusing.ActorGraphInterpreter$BatchingActorInputBoundary$OnNext.execute(ActorGraphInterpreter.scala:85)
[error]     at akka.stream.impl.fusing.GraphInterpreterShell.processEvent(ActorGraphInterpreter.scala:581)
[error]     at akka.stream.impl.fusing.ActorGraphInterpreter.akka$stream$impl$fusing$ActorGraphInterpreter$$processEvent(ActorGraphInterpreter.scala:749)
[error]     at akka.stream.impl.fusing.ActorGraphInterpreter$$anonfun$receive$1.applyOrElse(ActorGraphInterpreter.scala:764)
[error]     at akka.actor.Actor.aroundReceive(Actor.scala:539)
[error]     at akka.actor.Actor.aroundReceive$(Actor.scala:537)
[error]     at akka.stream.impl.fusing.ActorGraphInterpreter.aroundReceive(ActorGraphInterpreter.scala:671)
[error]     at akka.actor.ActorCell.receiveMessage(ActorCell.scala:612)
[error]     at akka.actor.ActorCell.invoke(ActorCell.scala:581)
[error]     at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:268)
[error]     at akka.dispatch.Mailbox.run(Mailbox.scala:229)
[error]     ... 3 more
[error] Caused by: java.net.SocketTimeoutException: callTimeout=60000, callDuration=68437: Call to hbase/127.0.0.1:16020 failed on connection exception: java.net.ConnectException: Connection refused row 'person2,,00000000000000' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=hbase,16020,1583936752151, seqNum=0
[error]     at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:178)
[error]     at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:80)
[error]     ... 3 more
[error] Caused by: java.net.ConnectException: Call to hbase/127.0.0.1:16020 failed on connection exception: java.net.ConnectException: Connection refused
[error]     at org.apache.hadoop.hbase.ipc.IPCUtil.wrapException(IPCUtil.java:165)
[error]     at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:389)
[error]     at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:94)
[error]     at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:409)
[error]     at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:405)
[error]     at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103)
[error]     at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118)
[error]     at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callMethod(AbstractRpcClient.java:422)
[error]     at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:327)
[error]     at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$200(AbstractRpcClient.java:94)
[error]     at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:571)
[error]     at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:37059)
[error]     at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:405)
[error]     at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:274)
[error]     at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:62)
[error]     at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:219)
[error]     at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:388)
[error]     at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:362)
[error]     at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:142)
[error]     ... 4 more
[error] Caused by: java.net.ConnectException: Connection refused
[error]     at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
[error]     at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
[error]     at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
[error]     at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
[error]     at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
[error]     at org.apache.hadoop.hbase.ipc.BlockingRpcConnection.setupConnection(BlockingRpcConnection.java:256)
[error]     at org.apache.hadoop.hbase.ipc.BlockingRpcConnection.setupIOstreams(BlockingRpcConnection.java:437)
[error]     at org.apache.hadoop.hbase.ipc.BlockingRpcConnection.writeRequest(BlockingRpcConnection.java:540)
[error]     at org.apache.hadoop.hbase.ipc.BlockingRpcConnection.tracedWriteRequest(BlockingRpcConnection.java:520)
[error]     at org.apache.hadoop.hbase.ipc.BlockingRpcConnection.access$200(BlockingRpcConnection.java:85)
[error]     at org.apache.hadoop.hbase.ipc.BlockingRpcConnection$4.run(BlockingRpcConnection.java:724)
[error]     at org.apache.hadoop.hbase.ipc.HBaseRpcControllerImpl.notifyOnCancel(HBaseRpcControllerImpl.java:240)
[error]     at org.apache.hadoop.hbase.ipc.BlockingRpcConnection.sendRequest(BlockingRpcConnection.java:699)
[error]     at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callMethod(AbstractRpcClient.java:420)
[error]     ... 15 more

Another test complains about HADOOP_HOME. This was never necessary before, so it seems odd that it would be now.

--> [docs.scaladsl.HBaseStageSpec: HBase stage must write write entries to a sink] Start of log messages of test that [Failed(org.scalatest.concurrent.Futures$FutureConcept$$anon$1: A timeout occurred waiting for a future to complete. Queried 11 times, sleeping 500000000 nanoseconds between each query.)]
10:27:51.314 INFO  [default-dispatcher-2] akka.event.slf4j.Slf4jLogger          Slf4jLogger started
10:27:51.327 DEBUG [default-dispatcher-2] akka.event.EventStream                logger log1-Slf4jLogger started
10:27:51.329 DEBUG [default-dispatcher-2] akka.event.EventStream                Default Loggers started
10:27:51.492 DEBUG [pool-1-thread-1     ] logcapture                            enabling CapturingAppender
10:27:51.631 DEBUG [pool-1-thread-1     ] org.apache.hadoop.util.Shell          Failed to detect a valid hadoop home directory
java.io.IOException: HADOOP_HOME or hadoop.home.dir are not set.
        at org.apache.hadoop.util.Shell.checkHadoopHome(Shell.java:329)
        at org.apache.hadoop.util.Shell.<clinit>(Shell.java:354)
        at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:80)
        at org.apache.hadoop.conf.Configuration.getBoolean(Configuration.java:1437)
        at org.apache.hadoop.hbase.HBaseConfiguration.checkDefaultsVersion(HBaseConfiguration.java:67)
        at org.apache.hadoop.hbase.HBaseConfiguration.addHbaseResources(HBaseConfiguration.java:81)
        at org.apache.hadoop.hbase.HBaseConfiguration.create(HBaseConfiguration.java:96)
        at docs.scaladsl.HBaseStageSpec.<init>(HBaseStageSpec.scala:102)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at java.lang.Class.newInstance(Class.java:442)
        at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:450)
        at sbt.ForkMain$Run.lambda$runTest$1(ForkMain.java:304)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)

The hbase stdout indicates that a connection attempt is made for each of our tests, but it does not succeed.

hbase_1                        | 2020-03-11 14:35:40,351 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] server.NIOServerCnxnFactory: Accepted socket connection from /192.168.160.1:53018
hbase_1                        | 2020-03-11 14:35:40,357 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] server.ZooKeeperServer: Client attempting to establish new session at /192.168.160.1:53018

Here's the full log of hbase stdout: https://pastebin.com/x0dy7d8J

The hash of the image we're currently using:

harisekhon/hbase                                                                                                                                 1.4                                        0ae79dcd8e6b        6 days ago          243MB

Please let me know if I can provide any additional troubleshooting info or context.

Error in Dockerfiles/rabbitmq-cluster/rabbitmq-cluster?

Hi,

I'm using your compose for rabbitmq and ran into a little bug:

When running non-RAM workers, join_cluster fails with:

joining cluster via seed rabbit_manager
Error: operation join_cluster used with invalid parameter: ["rabbit@rabbit_manager", []]

Everything works fine on RAM nodes.
I think it's caused by an extra space when running join_cluster without $RAM set.
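If that guess is right, the underlying cause is usually a quoting problem: an empty quoted variable still produces an explicit empty argument. A hedged sketch of the difference (the exact line in the actual script may differ):

  # Broken pattern (illustrative): an empty "$RAM" still becomes an explicit argument,
  # which rabbitmqctl reports as: invalid parameter: ["rabbit@rabbit_manager", []]
  rabbitmqctl join_cluster rabbit@rabbit_manager "$RAM"
  # Safer pattern: only expand the option when RAM is set and non-empty.
  rabbitmqctl join_cluster ${RAM:+--ram} rabbit@rabbit_manager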

👍 For your work !

How to map

The interface can be accessed on port 16301 from within the VMware host, but not from an external environment. How can Docker's services be mapped so they are reachable from outside?
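Assuming the question is how to reach the UI from outside the VM, the usual approach is to publish the port on the VM's interfaces and then allow or forward that port at the VM/firewall level, e.g.:

  # Publish 16301 on all interfaces of the Docker host (the VMware guest):
  docker run -d -p 0.0.0.0:16301:16301 harisekhon/hbase
  # Then browse to http://<vm-ip>:16301 from outside; NAT-based VMware network
  # settings may additionally need a port-forwarding rule.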

hbase: docker-compose up does not work

In the hbase dir, running sudo docker-compose up:

ZooKeeper gives:

Got user-level KeeperException when processing sessionid:0x100000d1c840001 type:setData cxid:0x41 zxid:0x22 txntype:-1 reqpath:n/a Error Path:/hbase/meta-region-server Error:KeeperErrorCode = NoNode for /hbase/meta-region-server

HMaster gives:

hbase-master_1 | 2021-11-06 02:29:51,523 WARN [ProcExecTimeout] assignment.AssignmentManager: STUCK Region-In-Transition rit=OPENING, location=hbase_hbase-regionserver_1.hbase_default,16020,1636165658608, table=hbase:namespace, region=8bd53a267bdd63e7157795522f5cb0a4

hbase shell:

hbase(main):001:0> list_namespace
NAMESPACE

ERROR: org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2977)
at org.apache.hadoop.hbase.master.HMaster.getNamespaces(HMaster.java:3273)
at org.apache.hadoop.hbase.master.MasterRpcServices.listNamespaceDescriptors(MasterRpcServices.java:1233)
at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

For usage try 'help "list_namespace"'

docker run harisekhon/pytools dockerhub_search.py harisekhon

Unable to find image 'harisekhon/pytools:latest' locally
latest: Pulling from harisekhon/pytools
d7bfe07ed847: Pull complete
7009757257ba: Pull complete
9a869f37ea40: Pull complete
3c12d5305d29: Pull complete
7c328aa7c63e: Pull complete
1aa9e44ca37a: Pull complete
4f4fb700ef54: Pull complete
Digest: sha256:af996ac6da63066f28fe509a2995fc0136f31abfb4a9b16258fa757fe3015d8b
Status: Downloaded newer image for harisekhon/pytools:latest
Traceback (most recent call last):
  File "/github/pytools/pylib/harisekhon/nagiosplugin/docker_nagiosplugin.py", line 35, in <module>
    import docker
  File "/usr/local/lib/python2.7/dist-packages/docker/__init__.py", line 2, in <module>
    from .api import APIClient
  File "/usr/local/lib/python2.7/dist-packages/docker/api/__init__.py", line 2, in <module>
    from .client import APIClient
  File "/usr/local/lib/python2.7/dist-packages/docker/api/client.py", line 10, in <module>
    from .build import BuildApiMixin
  File "/usr/local/lib/python2.7/dist-packages/docker/api/build.py", line 6, in <module>
    from .. import auth
  File "/usr/local/lib/python2.7/dist-packages/docker/auth.py", line 9, in <module>
    from .utils import config
  File "/usr/local/lib/python2.7/dist-packages/docker/utils/__init__.py", line 3, in <module>
    from .decorators import check_resource, minimum_version, update_headers
  File "/usr/local/lib/python2.7/dist-packages/docker/utils/decorators.py", line 4, in <module>
    from . import utils
  File "/usr/local/lib/python2.7/dist-packages/docker/utils/utils.py", line 13, in <module>
    from .. import tls
  File "/usr/local/lib/python2.7/dist-packages/docker/tls.py", line 5, in <module>
    from .transport import SSLHTTPAdapter
  File "/usr/local/lib/python2.7/dist-packages/docker/transport/__init__.py", line 3, in <module>
    from .ssladapter import SSLHTTPAdapter
  File "/usr/local/lib/python2.7/dist-packages/docker/transport/ssladapter.py", line 23, in <module>
    from backports.ssl_match_hostname import match_hostname
ImportError: No module named ssl_match_hostname
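A possible stop-gap (an assumption, not a documented fix for this image) is to install the missing backport inside the container before running the tool, provided pip and bash are available in the image:

  docker run --rm -ti harisekhon/pytools bash -c 'pip install backports.ssl_match_hostname && dockerhub_search.py harisekhon'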

Error while running apache drill with external zookeeper

I am trying to run the apache-drill image and get this:

docker run harisekhon/apache-drill

Running non-interactively, will not open Apache Drill shell

For Apache Drill shell start this image with 'docker run -t -i' switches

Otherwise you will need to have a separate ZooKeeper container linked (one is available from harisekhon/zookeeper) and specify:

docker run -e ZOOKEEPER_HOST=<host>:2181 supervisord -n

I have a ZooKeeper already running on my localhost, so I try again:

docker run -e ZOOKEEPER_HOST=localhost:2181  supervisord -n harisekhon/apache-drill
Unable to find image 'supervisord:latest' locally
Pulling repository docker.io/library/supervisord
docker: Error: image library/supervisord:latest not found.

I thought maybe the argument order was wrong and I needed to specify the image first:

docker run  harisekhon/apache-drill -e ZOOKEEPER_HOST=localhost:2181  supervisord -n
container_linux.go:247: starting container process caused "exec: \"-e\": executable file not found in $PATH"
docker: Error response from daemon: invalid header field value "oci runtime error: container_linux.go:247: starting container process caused \"exec: \\\"-e\\\": executable file not found in $PATH\"\n".

Any idea what can be done to fix this?
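For what it's worth, docker run treats everything after the image name as the container command, so the -e option has to come before the image. The hint in the image's output presumably intends something like this (values copied from that output; note that on Linux, "localhost" inside a container is the container itself, so point ZOOKEEPER_HOST at a reachable address or use --network host):

  docker run -ti -e ZOOKEEPER_HOST=<host>:2181 harisekhon/apache-drill supervisord -n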

fstat unimplemented unsupported or native support failed to load

NotImplementedError: fstat unimplemented unsupported or native support failed to load; see http://wiki.jruby.org/Native-Libraries
initialize at org/jruby/RubyIO.java:1013
open at org/jruby/RubyIO.java:1154
initialize at uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb/input-method.rb:141
initialize at uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb/context.rb:70
initialize at uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb.rb:426
initialize at /hbase/lib/ruby/irb/hirb.rb:47
start at /hbase/bin/../bin/hirb.rb:181

at /hbase/bin/../bin/hirb.rb:193

SolrCloud: tail error

When started, it keeps saying:

tail: read error: Is a directory
tail: read error: Is a directory
tail: read error: Is a directory

The tail command is run on several paths; a sketch of a possible fix follows the list below:

tail -f /dev/null /solr/example/cloud/node1/logs/archived /solr/example/cloud/node1/logs/solr-8983-console.log /solr/example/cloud/node1/logs/solr.log /solr/example/cloud/node1/logs/solr_gc.log.0.current /solr/example/cloud/node2/logs/archived /solr/example/cloud/node2/logs/solr-7574-console.log /solr/example/cloud/node2/logs/solr.log /solr/example/cloud/node2/logs/solr_gc.log.0.current

  • /solr/example/cloud/node1/logs/archived
  • /solr/example/cloud/node2/logs/archived
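As forward-referenced above, one way the entrypoint could avoid this (an assumption about the fix, not the image's current behaviour) is to follow only regular files rather than the logs directories themselves:

  # Keep the container alive but only tail regular files, skipping the archived/ directories:
  tail -f /dev/null $(find /solr/example/cloud/node*/logs -maxdepth 1 -type f)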

Error in spark Docker-compose.yml? Error loading shared library ld-linux-x86-64.so.2: No such file or directory

I have confirmed this on 2 machines in the spark folder:

docker-compose up
docker exec -ti spark_spark_1 /bin/bash
bash-4.3# bin/spark-shell
log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Using Spark's repl log4j profile: org/apache/spark/log4j-defaults-repl.properties
To adjust logging level use sc.setLogLevel("INFO")
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.6.2
      /_/

Using Scala version 2.10.5 (OpenJDK 64-Bit Server VM, Java 1.8.0_131)
Type in expressions to have them evaluated.
Type :help for more information.
Spark context available as sc.
18/01/26 15:00:39 WARN General: Plugin (Bundle) "org.datanucleus" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/spark-1.6.2-bin-hadoop2.6/lib/datanucleus-core-3.2.10.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/spark/lib/datanucleus-core-3.2.10.jar."
18/01/26 15:00:39 WARN General: Plugin (Bundle) "org.datanucleus.api.jdo" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/spark-1.6.2-bin-hadoop2.6/lib/datanucleus-api-jdo-3.2.6.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/spark/lib/datanucleus-api-jdo-3.2.6.jar."
18/01/26 15:00:39 WARN General: Plugin (Bundle) "org.datanucleus.store.rdbms" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/spark/lib/datanucleus-rdbms-3.2.9.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/spark-1.6.2-bin-hadoop2.6/lib/datanucleus-rdbms-3.2.9.jar."
18/01/26 15:00:39 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
18/01/26 15:00:39 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
18/01/26 15:00:45 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
18/01/26 15:00:45 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException
18/01/26 15:00:47 WARN General: Plugin (Bundle) "org.datanucleus" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/spark-1.6.2-bin-hadoop2.6/lib/datanucleus-core-3.2.10.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/spark/lib/datanucleus-core-3.2.10.jar."
18/01/26 15:00:47 WARN General: Plugin (Bundle) "org.datanucleus.store.rdbms" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/spark/lib/datanucleus-rdbms-3.2.9.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/spark-1.6.2-bin-hadoop2.6/lib/datanucleus-rdbms-3.2.9.jar."
18/01/26 15:00:47 WARN General: Plugin (Bundle) "org.datanucleus.api.jdo" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/spark-1.6.2-bin-hadoop2.6/lib/datanucleus-api-jdo-3.2.6.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/spark/lib/datanucleus-api-jdo-3.2.6.jar."
18/01/26 15:00:47 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
18/01/26 15:00:47 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
SQL context available as sqlContext.

scala>

scala> val lines = sc.textFile("README.md")
java.lang.IllegalArgumentException: java.lang.UnsatisfiedLinkError: /tmp/snappy-1.1.2-62459705-fdb3-414f-8be9-471659319a57-libsnappyjava.so: Error loading shared library ld-linux-x86-64.so.2: No such file or directory (needed by /tmp/snappy-1.1.2-62459705-fdb3-414f-8be9-471659319a57-libsnappyjava.so)
at org.apache.spark.io.SnappyCompressionCodec$.liftedTree1$1(CompressionCodec.scala:171)
at org.apache.spark.io.SnappyCompressionCodec$.org$apache$spark$io$SnappyCompressionCodec$$version$lzycompute(CompressionCodec.scala:168)
at org.apache.spark.io.SnappyCompressionCodec$.org$apache$spark$io$SnappyCompressionCodec$$version(CompressionCodec.scala:168)
at org.apache.spark.io.SnappyCompressionCodec.<init>(CompressionCodec.scala:152)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:72)
at org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:65)
at org.apache.spark.broadcast.TorrentBroadcast.org$apache$spark$broadcast$TorrentBroadcast$$setConf(TorrentBroadcast.scala:73)
at org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:80)
at org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
at org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:63)
at org.apache.spark.SparkContext.broadcast(SparkContext.scala:1326)
at org.apache.spark.SparkContext$$anonfun$hadoopFile$1.apply(SparkContext.scala:1014)
at org.apache.spark.SparkContext$$anonfun$hadoopFile$1.apply(SparkContext.scala:1011)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
at org.apache.spark.SparkContext.withScope(SparkContext.scala:714)
at org.apache.spark.SparkContext.hadoopFile(SparkContext.scala:1011)
at org.apache.spark.SparkContext$$anonfun$textFile$1.apply(SparkContext.scala:832)
at org.apache.spark.SparkContext$$anonfun$textFile$1.apply(SparkContext.scala:830)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
at org.apache.spark.SparkContext.withScope(SparkContext.scala:714)
at org.apache.spark.SparkContext.textFile(SparkContext.scala:830)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:27)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:32)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:34)
at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:36)
at $iwC$$iwC$$iwC$$iwC.<init>(<console>:38)
at $iwC$$iwC$$iwC.<init>(<console>:40)
at $iwC$$iwC.<init>(<console>:42)
at $iwC.<init>(<console>:44)
at <init>(<console>:46)
at .<init>(<console>:50)
at .<clinit>(<console>)
at .<init>(<console>:7)
at .<clinit>(<console>)
at $print(<console>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:657)
at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:665)
at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$loop(SparkILoop.scala:670)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:997)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
at org.apache.spark.repl.Main$.main(Main.scala:31)
at org.apache.spark.repl.Main.main(Main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.UnsatisfiedLinkError: /tmp/snappy-1.1.2-62459705-fdb3-414f-8be9-471659319a57-libsnappyjava.so: Error loading shared library ld-linux-x86-64.so.2: No such file or directory (needed by /tmp/snappy-1.1.2-62459705-fdb3-414f-8be9-471659319a57-libsnappyjava.so)
at java.lang.ClassLoader$NativeLibrary.load(Native Method)
at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1941)
at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1824)
at java.lang.Runtime.load0(Runtime.java:809)
at java.lang.System.load(System.java:1086)
at org.xerial.snappy.SnappyLoader.loadNativeLibrary(SnappyLoader.java:174)
at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:152)
at org.xerial.snappy.Snappy.<clinit>(Snappy.java:47)
at org.apache.spark.io.SnappyCompressionCodec$.liftedTree1$1(CompressionCodec.scala:169)
... 72 more

scala>
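This looks like an Alpine/musl base image missing the glibc loader that the bundled Snappy native library expects (an assumption based on the ld-linux-x86-64.so.2 path). Two hedged workarounds that avoid rebuilding the image:

  # 1) Sidestep the native Snappy codec for the shell session:
  bin/spark-shell --conf spark.io.compression.codec=lz4
  # 2) Or, if the image really is Alpine-based, add the glibc compatibility shim:
  apk add --no-cache libc6-compat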

jython pip issues

pip is fairly crucial for Python in general, and other Docker Hub Jython images do in fact bundle it, but harisekhon/jython:latest does not contain pip. Usually one would use ensurepip to retrieve a recent version of pip, but:

$ jython -m ensurepip --upgrade
Ignoring ensurepip failure: pip 1.6 requires SSL/TLS

Tracing through the Jython code, this message shows up if import ssl raises an exception. Trying the import directly results in a missing encodings module:

$ jython
Jython 2.7.0 (default:9987c746f838, Apr 29 2015, 02:25:11) 
[OpenJDK 64-Bit Server VM (Oracle Corporation)] on java1.8.0_171
Type "help", "copyright", "credits" or "license" for more information.
>>> import ssl
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/fwierzbicki/hg/jython/jython/dist/Lib/ssl.py", line 18, in <module>
  File "/Users/fwierzbicki/hg/jython/jython/dist/Lib/_socket.py", line 2, in <module>
ImportError: No module named encodings

Missing License File

You mention an accompanying LICENSE file in some of the files in the repository, but there is no LICENSE file in the repository itself. I don't really care myself, but I'm not allowed to use anything that doesn't have a specific license file (ty lawyers).

How can I delete all unused Docker containers?

docker rm bb2afb1ec924
docker rm 26b7e3c4e010
docker rm 6932e19f8031
docker rm fb72d73d1675
docker rm 9a658133ced9

This is not a good workflow: each time I check whether Cassandra is available or not, it leaves a container behind.

root@ubuntu20dockers:~/Dockerfiles/nagios-plugins-cassandra# docker ps -a
CONTAINER ID   IMAGE                                 COMMAND                  CREATED             STATUS                         PORTS     NAMES
26b7e3c4e010   harisekhon/nagios-plugins             "find_active_cassand…"   33 seconds ago      Exited (1) 31 seconds ago                beautiful_fermat
6932e19f8031   harisekhon/nagios-plugins             "find_active_cassand…"   2 minutes ago       Exited (3) 2 minutes ago                 quizzical_nightingale
fb72d73d1675   harisekhon/nagios-plugins             "find_active_cassand…"   4 minutes ago       Exited (3) 4 minutes ago                 blissful_perlman
bb2afb1ec924   harisekhon/nagios-plugins             "find_active_cassand…"   4 minutes ago       Exited (0) 4 minutes ago                 sleepy_darwin
9a658133ced9   harisekhon/nagios-plugins             "find_active_cassand…"   10 minutes ago      Exited (3) 10 minutes ago                suspicious_stonebraker
a623dd501d39   harisekhon/nagios-plugins             "check_zaloni_bedroc…"   29 minutes ago      Exited (3) 29 minutes ago                musing_neumann
566ce41b26e3   harisekhon/nagios-plugins:cassandra   "check_zaloni_bedroc…"   31 minutes ago      Exited (4) 31 minutes ago                pedantic_maxwell
812501ccb4b3   harisekhon/nagios-plugins:cassandra   "check_zaloni_bedroc…"   38 minutes ago      Exited (4) 38 minutes ago                wonderful_kepler
d5de82e52415   harisekhon/nagios-plugins:cassandra   "/bin/bash -c 'find …"   43 minutes ago      Exited (0) 43 minutes ago                vigorous_kalam
2d98bcbfeec7   harisekhon/cassandra-dev:latest       "/bin/sh -c /entrypo…"   46 minutes ago      Exited (137) 35 seconds ago              cassandra-dev_cassandra_1
86d472f28229   harisekhon/nagios-plugins             "check_ssl_cert.pl -V"   47 minutes ago      Exited (3) 47 minutes ago                compassionate_chatelet
69f03cedcf49   harisekhon/nagios-plugins             "check_ssl_cert.pl -…"   49 minutes ago      Exited (3) 49 minutes ago                interesting_jackson
878cb6c2a5a8   harisekhon/nagios-plugins             "/list_plugins.sh"       49 minutes ago      Exited (0) 49 minutes ago                sleepy_burnell
a66982209491   harisekhon/pytools                    "dockerhub_search.py…"   49 minutes ago      Exited (4) 49 minutes ago                peaceful_galois
50571f96794a   jasonrivers/nagios:latest             "/usr/local/bin/star…"   About an hour ago   Exited (4) About an hour ago             nagios4

To delete a container:

root@ubuntu20dockers:~/Dockerfiles/nagios-plugins-cassandra# docker rm 26b7e3c4e010
26b7e3c4e010
root@ubuntu20dockers:~/Dockerfiles/nagios-plugins-cassandra# docker rm 6932e19f8031
6932e19f8031
root@ubuntu20dockers:~/Dockerfiles/nagios-plugins-cassandra# docker rm fb72d73d1675
fb72d73d1675
root@ubuntu20dockers:~/Dockerfiles/nagios-plugins-cassandra# docker rm 9a658133ced9
9a658133ced9
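For what it's worth, Docker can remove all stopped containers in one go instead of deleting them by ID one at a time:

  # Remove every stopped container (asks for confirmation):
  docker container prune
  # Or remove only exited containers by ID:
  docker rm $(docker ps -aq -f status=exited)
  # Running throwaway checks with --rm avoids leaving containers behind in the first place:
  docker run --rm harisekhon/nagios-plugins check_ssl_cert.pl -V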

Change the value of hbase.regionserver.thrift.framed for security?

Thanks for providing this useful container.

I have a question about one HBase config: it seems you keep hbase.regionserver.thrift.framed at its default of false.

However, the official documentation recommends setting hbase.regionserver.thrift.framed to true for security: "This is the recommended transport for thrift servers and requires a similar setting on the client side. Changing this to false will select the default transport, vulnerable to DoS when malformed requests are issued due to THRIFT-601."

Cloudera's troubleshooting page also recommends setting hbase.regionserver.thrift.framed and hbase.regionserver.thrift.compact to true.

Shall we change the two settings to true?
Thanks.
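If the maintainer agrees, the change would presumably go into the image's hbase-site.xml. As a sketch (property names from the HBase documentation quoted above; the exact file location inside the image is an assumption), the entries to add inside <configuration> could be generated like this:

  for p in hbase.regionserver.thrift.framed hbase.regionserver.thrift.compact; do
    printf '<property><name>%s</name><value>true</value></property>\n' "$p"
  done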

HBase image shuts down if non-interactive

hbase_1  | HBase Shell; enter 'help<RETURN>' for list of supported commands.
hbase_1  | Type "exit<RETURN>" to leave the HBase Shell
hbase_1  | Version 1.2.2, r3f671c1ead70d249ea4598f1bbcc5151322b3a13, Fri Jul  1 08:28:55 CDT 2016
hbase_1  | 


hbase_1  | stopping hbase....................
hbase_1  | localhost: /hbase/bin/zookeepers.sh: line 52: ssh: command not found
hbase_1  | pkill: invalid option -- 'i'
hbase_1  | 
hbase_1  | Usage:
hbase_1  |  pkill [options] <pattern>
hbase_1  | 
hbase_1  | Options:
hbase_1  |  -<sig>, --signal <sig>    signal to send (either number or name)
hbase_1  |  -e, --echo                display what is killed
hbase_1  |  -c, --count               count of matching processes
hbase_1  |  -f, --full                use full process name to match
hbase_1  |  -g, --pgroup <PGID,...>   match listed process group IDs
hbase_1  |  -G, --group <GID,...>     match real group IDs
hbase_1  |  -n, --newest              select most recently started
hbase_1  |  -o, --oldest              select least recently started
hbase_1  |  -P, --parent <PPID,...>   match only child processes of the given parent
hbase_1  |  -s, --session <SID,...>   match session IDs
hbase_1  |  -t, --terminal <tty,...>  match by controlling terminal
hbase_1  |  -u, --euid <ID,...>       match by effective IDs
hbase_1  |  -U, --uid <ID,...>        match by real IDs
hbase_1  |  -x, --exact               match exactly with the command name
hbase_1  |  -F, --pidfile <file>      read PIDs from file
hbase_1  |  -L, --logpidfile          fail if PID file is not locked
hbase_1  |  --ns <PID>                match the processes that belong to the same
hbase_1  |                            namespace as <pid>
hbase_1  |  --nslist <ns,...>         list which namespaces will be considered for
hbase_1  |                            the --ns option.
hbase_1  |                            Available namespaces: ipc, mnt, net, pid, user, uts
hbase_1  | 
hbase_1  |  -h, --help     display this help and exit
hbase_1  |  -V, --version  output version information and exit
hbase_1  | 
hbase_1  | For more details see pgrep(1).

Support for spark 2.x ?

Perhaps I'm reading it wrong, but it looks like the pre-built images for spark are only for 1.3-1.6...? Spark 2.x would be an improvement.

I'm happy to try to help with this, though I don't know where your pre-builts are configured, nor how to run regression tests.

Add install of package zip

I tried to build the pytools Docker image but I got this error:

make spark-deps
make[4]: Entering directory '/github/pytools'
rm -vf spark-deps.zip
zip spark-deps.zip pylib
make[4]: zip: Command not found

I fixed this issue by adding the zip package to the apt-get install command.
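For reference, the fix described above would look something like this on a Debian/Ubuntu-based image (package manager assumed from the apt-get mention):

  # Add zip alongside the other build dependencies in the Dockerfile's install step:
  apt-get update && apt-get install -y --no-install-recommends zip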

Drill + Zookeeper : Unable to persist configuration

Hello
I am running the docker-compose command in the drill project without problems, but I can't find any way to persist the Drill configuration.
I can modify the storage plugin configuration in the Drill web interface, but on docker-compose down it is all lost.
Is there a way to create a volume or any other persistence solution?
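As a stop-gap while there is no dedicated volume, note that docker-compose down removes the containers (and any configuration stored inside them), whereas stop/start keeps them:

  # Keeps the containers, and therefore the storage plugin config, between sessions:
  docker-compose stop
  docker-compose start
  # By contrast, "docker-compose down" deletes the containers and loses that state.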

Mac M1 Version

Hello,

I've tried to run this image on a Mac M1, but entrypoint_new.sh does not run and it doesn't give an error. The HBase shell just won't come up.

Thank you
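If the published image is amd64-only (an assumption), Apple Silicon hosts sometimes get further by forcing emulation explicitly:

  # Run the amd64 variant under emulation on an M1 Mac:
  docker run --rm -ti --platform linux/amd64 harisekhon/hbase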
