petergrace / opentsdb-docker
Files required to make a trusted opentsdb Docker such that opentsdb can be used for other projects (e.g. scollector)
License: MIT License
Docker version: 1.3.2
OS version: CentOS release 6.5
CMD: docker run -tdi -p 4242:4242 petergrace/opentsdb-docker:latest
A newly created container's port 4242 becomes reachable about one minute after starting, but after a restart the container can no longer be reached.
gnuplot does not seem to be accessible by opentsdb in this container
Hi,
I am using the latest version of the Docker image (Docker version 18.09).
It seems that the Java heap fills up and crashes OpenTSDB (or HBase). This occurs every night, after it has been running for a few hours. I have about 25 scollector (https://bosun.org/scollector/) instances running and sending data to Bosun, which forwards it to OpenTSDB.
It is running on a Linux (Debian) virtual machine with 16 GB of memory.
Here is the stack
2019-01-09 03:17:31,572 FATAL [main-EventThread] regionserver.HRegionServer: ABORTING region server otsdb-host,46227,1546900378726: regionserver:46227-0x1682a7288c00001, quorum=localhost:2181, baseZNode=/hbase regionserver:46227-0x1682a7288c00001 received expired from ZooKeeper, aborting
org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired
at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.connectionEvent(ZooKeeperWatcher.java:692)
at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:624)
at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505)
2019-01-09 03:17:31,572 FATAL [main-EventThread] regionserver.HRegionServer: RegionServer abort: loaded coprocessors are: [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint]
2019-01-09 03:17:31,572 INFO [RS:0;otsdb-host:46227] regionserver.SplitLogWorker: Sending interrupt to stop the worker thread
2019-01-09 03:17:31,573 INFO [RS:0;otsdb-host:46227] regionserver.HRegionServer: Stopping infoServer
Exception in thread "pool-2-thread-1"
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "pool-2-thread-1"
Exception in thread "pool-1-thread-8" java.lang.OutOfMemoryError: Java heap space
Exception in thread "pool-1-thread-2" Exception in thread "pool-1-thread-12" java.lang.OutOfMemoryError: Java heap space
Exception in thread "pool-1-thread-10" java.lang.OutOfMemoryError: Java heap space
java.lang.OutOfMemoryError: Java heap space.
It seems that I still have enough free memory:
someone@is-dev-tool-lx:/docker# free -m
total used free shared buff/cache available
Mem: 16051 8423 7215 16 411 7348
Swap: 16380 1529 14851
I can provide more information if required. I do not know Java at all (nor HBase), so I don't really know what information to provide...
Thank you
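Not a fix, but something that helped me narrow down similar OOM issues: cap the container's memory and, if the image forwards it, the HBase JVM heap. HBASE_HEAPSIZE is a stock HBase environment variable; whether this particular image honors it is an assumption on my part, so check the image's start scripts first.

```shell
# Sketch: limit total container memory via Docker, and (assumption!)
# the embedded HBase heap via the standard HBASE_HEAPSIZE variable.
docker run -tdi -p 4242:4242 \
  --memory=8g \
  -e HBASE_HEAPSIZE=4G \
  petergrace/opentsdb-docker:latest
```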
Hi Peter,
thanks for preparing this Dockerfile. I'd like to reuse it, but unfortunately, your repository lacks a license permitting reuse. Would you mind adding a license?
Thanks,
Lukas
How to create a cluster using the image?
bash-4.4# du -sh *
212.0K CHANGES.txt
4.0K LEGAL
140.0K LICENSE.txt
396.0K NOTICE.txt
4.0K README.txt
208.0K bin
48.0K conf
286.2M docs
672.0K hbase-webapps
86.3M lib
8.0K logs
Hi,
I'm able to get this all up and running, but when I try to write data to it, the metric appears in the TSDB autocomplete fields, yet queries constantly return "No data found", even though the writes produce no error messages.
I bind-mounted a directory from my host system into /data/hbase to see what is getting written, and I do indeed see the tables and such, but there is no data in them.
What am I doing wrong?
Silly question: I can't seem to connect to OpenTSDB. I am able to run the Docker image and reach the web interface at port 4242. I am using the Python client from postdb. Most connections require a host name and port; I was under the impression the host name is "0.0.0.0" and the port is 4242. I want to use this connection to insert and query data.
Has anyone connected to this instance using a client?
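In case it helps anyone searching later: the TSD listens on one port (4242 by default) that accepts both HTTP requests and the plain-text telnet-style `put` protocol, and the host to use is wherever you published the port (e.g. `localhost` with `-p 4242:4242`, not `0.0.0.0`). A minimal sketch without any client library, assuming the container is reachable on `localhost:4242` and that `sys.cpu.user`/`web01` are placeholder names:

```python
import json
import socket
import time
import urllib.request

HOST, PORT = "localhost", 4242  # the published side of -p 4242:4242

def put_line(metric, value, tags, ts=None):
    """Format one datapoint in OpenTSDB's telnet-style `put` syntax."""
    ts = int(ts if ts is not None else time.time())
    tag_str = " ".join(f"{k}={v}" for k, v in sorted(tags.items()))
    return f"put {metric} {ts} {value} {tag_str}\n"

def send_telnet(metric, value, tags):
    # Port 4242 also accepts raw `put` lines over plain TCP.
    with socket.create_connection((HOST, PORT), timeout=5) as s:
        s.sendall(put_line(metric, value, tags).encode())

def send_http(metric, value, tags):
    # Or POST JSON to the HTTP API at /api/put.
    body = json.dumps({"metric": metric, "timestamp": int(time.time()),
                       "value": value, "tags": tags}).encode()
    req = urllib.request.Request(f"http://{HOST}:{PORT}/api/put", data=body,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req, timeout=5)
```

Usage would be e.g. `send_http("sys.cpu.user", 42.5, {"host": "web01"})` from the Docker host.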
Hi PeterGrace:
I want to change the OpenTSDB default port from 4242 to 4949. I changed the port number in all files located in the files directory (including the Dockerfile), then built a new image from the Dockerfile.
I started a container from the new image; the container runs fine, but OpenTSDB doesn't work and I can't connect to ip:4949.
Can you tell me how to change the OpenTSDB default port?
Thanks
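In case it saves someone a rebuild: the listener port is controlled by `tsd.network.port` in opentsdb.conf, but you usually don't need to change it at all, because Docker can publish the container's default 4242 onto host port 4949. A sketch (untested against this image; `my-rebuilt-image` is a placeholder name):

```shell
# Option 1: no rebuild needed -- map host 4949 to the container's 4242.
docker run -tdi -p 4949:4242 petergrace/opentsdb-docker:latest

# Option 2: change the port OpenTSDB itself binds, then publish that port.
# In opentsdb.conf:
#   tsd.network.port = 4949
docker run -tdi -p 4949:4949 my-rebuilt-image
```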
OpenTSDB stops working after a docker-compose down & docker-compose up
. It only starts working again after I delete all the volumes, so that it can recreate clean ones. I think some data may be getting corrupted during the shutdown. Is there a way to check/fix this kind of problem on container start? Having to wipe all the data whenever the container is removed is not good.
opentsdb | 2018-06-20 17:37:19,208 INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] server.NIOServerCnxnFactory: Accepted socket connection from /127.0.0.1:47044
opentsdb | 2018-06-20 17:37:19,209 INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] server.ZooKeeperServer: Client attempting to establish new session at /127.0.0.1:47044
opentsdb | 2018-06-20 17:37:19,213 INFO [SyncThread:0] server.ZooKeeperServer: Established session 0x1641e4572a60019 with negotiated timeout 5000 for client /127.0.0.1:47044
opentsdb | 2018-06-20 17:37:19,222 INFO [ProcessThread(sid:0 cport:2181):] server.PrepRequestProcessor: Processed session termination for sessionid: 0x1641e4572a60019
opentsdb | 2018-06-20 17:37:19,224 INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] server.NIOServerCnxn: Closed socket connection for client /127.0.0.1:47044 which had sessionid 0x1641e4572a60019
opentsdb | 2018-06-20 17:37:19,227 INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] server.NIOServerCnxnFactory: Accepted socket connection from /127.0.0.1:47048
opentsdb | 2018-06-20 17:37:19,227 INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] server.ZooKeeperServer: Client attempting to establish new session at /127.0.0.1:47048
opentsdb | 2018-06-20 17:37:19,230 INFO [SyncThread:0] server.ZooKeeperServer: Established session 0x1641e4572a6001a with negotiated timeout 5000 for client /127.0.0.1:47048
opentsdb | 2018-06-20 17:37:19,245 INFO [ProcessThread(sid:0 cport:2181):] server.PrepRequestProcessor: Processed session termination for sessionid: 0x1641e4572a6001a
opentsdb | 2018-06-20 17:37:19,250 INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] server.NIOServerCnxn: Closed socket connection for client /127.0.0.1:47048 which had sessionid 0x1641e4572a6001a
opentsdb | 2018-06-20 17:37:19,254 INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] server.NIOServerCnxnFactory: Accepted socket connection from /127.0.0.1:47052
opentsdb | 2018-06-20 17:37:19,256 INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] server.ZooKeeperServer: Client attempting to establish new session at /127.0.0.1:47052
opentsdb | 2018-06-20 17:37:19,274 INFO [SyncThread:0] server.ZooKeeperServer: Established session 0x1641e4572a6001b with negotiated timeout 5000 for client /127.0.0.1:47052
opentsdb | 2018-06-20 17:37:19,296 INFO [ProcessThread(sid:0 cport:2181):] server.PrepRequestProcessor: Processed session termination for sessionid: 0x1641e4572a6001b
opentsdb | 2018-06-20 17:37:19,301 INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] server.NIOServerCnxn: Closed socket connection for client /127.0.0.1:47052 which had sessionid 0x1641e4572a6001b
opentsdb | 2018-06-20 17:37:19,305 INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] server.NIOServerCnxnFactory: Accepted socket connection from /127.0.0.1:47056
opentsdb | 2018-06-20 17:37:19,306 INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] server.ZooKeeperServer: Client attempting to establish new session at /127.0.0.1:47056
opentsdb | 2018-06-20 17:37:19,307 INFO [SyncThread:0] server.ZooKeeperServer: Established session 0x1641e4572a6001c with negotiated timeout 5000 for client /127.0.0.1:47056
opentsdb | 2018-06-20 17:37:19,328 INFO [ProcessThread(sid:0 cport:2181):] server.PrepRequestProcessor: Processed session termination for sessionid: 0x1641e4572a6001c
opentsdb | 2018-06-20 17:37:19,329 INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] server.NIOServerCnxn: Closed socket connection for client /127.0.0.1:47056 which had sessionid 0x1641e4572a6001c
opentsdb | Exception in thread "main" java.lang.RuntimeException: Initialization failed
opentsdb | at net.opentsdb.tools.TSDMain.main(TSDMain.java:237)
opentsdb | Caused by: com.stumbleupon.async.DeferredGroupException: At least one of the Deferreds failed, first exception:
opentsdb | at com.stumbleupon.async.DeferredGroup.done(DeferredGroup.java:169)
opentsdb | at com.stumbleupon.async.DeferredGroup.recordCompletion(DeferredGroup.java:142)
opentsdb | at com.stumbleupon.async.DeferredGroup.access$000(DeferredGroup.java:36)
opentsdb | at com.stumbleupon.async.DeferredGroup$1Notify.call(DeferredGroup.java:82)
opentsdb | at com.stumbleupon.async.Deferred.doCall(Deferred.java:1278)
opentsdb | at com.stumbleupon.async.Deferred.runCallbacks(Deferred.java:1257)
opentsdb | at com.stumbleupon.async.Deferred.access$300(Deferred.java:430)
opentsdb | at com.stumbleupon.async.Deferred$Continue.call(Deferred.java:1366)
opentsdb | at com.stumbleupon.async.Deferred.doCall(Deferred.java:1278)
opentsdb | at com.stumbleupon.async.Deferred.runCallbacks(Deferred.java:1257)
opentsdb | at com.stumbleupon.async.Deferred.handleContinuation(Deferred.java:1313)
opentsdb | at com.stumbleupon.async.Deferred.doCall(Deferred.java:1284)
opentsdb | at com.stumbleupon.async.Deferred.runCallbacks(Deferred.java:1257)
opentsdb | at com.stumbleupon.async.Deferred.callback(Deferred.java:1005)
opentsdb | at org.hbase.async.HBaseRpc.callback(HBaseRpc.java:720)
opentsdb | at org.hbase.async.HBaseClient.tooManyAttempts(HBaseClient.java:2558)
opentsdb | at org.hbase.async.HBaseClient.sendRpcToRegion(HBaseClient.java:2420)
opentsdb | at org.hbase.async.HBaseClient$1RetryRpc.call(HBaseClient.java:2444)
opentsdb | at org.hbase.async.HBaseClient$1RetryRpc.call(HBaseClient.java:2427)
opentsdb | at com.stumbleupon.async.Deferred.doCall(Deferred.java:1278)
opentsdb | at com.stumbleupon.async.Deferred.runCallbacks(Deferred.java:1257)
opentsdb | at com.stumbleupon.async.Deferred.callback(Deferred.java:1005)
opentsdb | at org.hbase.async.HBaseClient$ZKClient$ZKCallback.processResult(HBaseClient.java:4394)
opentsdb | at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:561)
opentsdb | at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
opentsdb | Caused by: org.hbase.async.NonRecoverableException: Too many attempts: HBaseRpc(method=getClosestRowBefore, table="hbase:meta", key=[116, 115, 100, 98, 44, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 58, 65, 115, 121, 110, 99, 72, 66, 97, 115, 101, 126, 112, 114, 111, 98, 101, 126, 60, 59, 95, 60, 44, 58], region=RegionInfo(table="hbase:meta", region_name="hbase:meta,,1", stop_key=""), attempt=11, timeout=-1, hasTimedout=false)
opentsdb | at org.hbase.async.HBaseClient.tooManyAttempts(HBaseClient.java:2556)
opentsdb | ... 9 more
$ docker --version
Docker version 18.05.0-ce, build f150324
$ docker-compose --version
docker-compose version 1.21.2, build a133471
And here is the part of the docker-compose.yml that concerns OpenTSDB:
  container_name: opentsdb
  image: petergrace/opentsdb-docker:latest
  restart: always
  networks:
    - dbnet
  ports:
    - 4242:4242
    - 60030:60030
  volumes:
    - type: ${OPENTSDB_FSTYPE:-volume}
      source: "${OPENTSDB_FSSOURCE}"
      target: /data/hbase
    - tsdb_tmp:/tmp
I'm using a bind volume.
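For anyone debugging the corruption theory above: HBase 1.x ships a consistency checker that can be run inside the container before wiping the volumes. Whether it can repair this particular failure mode is unverified; the container name `opentsdb` is taken from the compose file above, and `/opt/hbase` is where this image installs HBase.

```shell
# Inspect HBase's on-disk consistency inside the running container.
docker exec -it opentsdb /opt/hbase/bin/hbase hbck
```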
Executing the image on Docker Cloud, opentsdb and hbase start fine; data can be sent to the opentsdb daemon and is received fine.
But I get the following error when trying to get a plot:
Request failed: Bad Request
Gnuplot stderr: nice: can't execute 'gnuplot': Permission denied
Old version of go-dnsmasq? Still seeing 'You need to specify some search domains'
Hi, regarding #14 and janeczku/docker-alpine-kubernetes#9:
I believe the janeczku/alpine-kubernetes:3.2 image was built one day after (8 days ago) the latest petergrace/opentsdb-docker (9 days ago), which causes the 'You need to specify some search domains' error. Here are their builds:
https://hub.docker.com/r/janeczku/alpine-kubernetes/builds/
My local images:
janeczku/alpine-kubernetes 3.2 4df7a36a6843 8 days ago 10.98 MB
petergrace/opentsdb-docker latest 7dbffc0e8648 9 days ago 731.5 MB
The output from opentsdb container:
[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] done.
[services.d] starting services
Sleeping for 30 seconds to give HBase time to warm up
starting hbase
time="2016-06-15T16:39:29Z" level=fatal msg="You need to specify some search domains"
[cont-finish.d] executing container finish scripts...
[cont-finish.d] done.
[s6-finish] syncing disks.
[services.d] done.
[s6-finish] sending all processes the TERM signal.
stopping hbase
[s6-finish] sending all processes the KILL signal and exiting.
And this is where we can see the mismatched versions of go-dnsmasq:
$ docker run --entrypoint /bin/bash -ti -p 4242:4242 petergrace/opentsdb-docker
bash-4.3# go-dnsmasq --version
go-dnsmasq-minimal version 1.0.5
$ docker run --entrypoint /bin/sh -ti janeczku/alpine-kubernetes:3.2
/ # go-dnsmasq --version
go-dnsmasq version 1.0.6
Is it just a matter of a new docker build/push to docker hub?
Thanks!
Container fails when the host goes into sleep mode
Hi,
Thanks for the very easy-to-use image.
Before deploying my setup, I am doing some tests on my laptop.
Each time I put it in sleep mode, I cannot send any values to the OpenTSDB database after resuming. Here is the error:
opentsdb_1 | org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /hbase/replication/rs/otsdb-host,37183,1590089865286
opentsdb_1 | at org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
opentsdb_1 | at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
opentsdb_1 | at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:1532)
opentsdb_1 | at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getChildren(RecoverableZooKeeper.java:292)
opentsdb_1 | at org.apache.hadoop.hbase.zookeeper.ZKUtil.listChildrenAndWatchForNewChildren(ZKUtil.java:462)
opentsdb_1 | at org.apache.hadoop.hbase.zookeeper.ZKUtil.listChildrenAndWatchThem(ZKUtil.java:490)
opentsdb_1 | at org.apache.hadoop.hbase.zookeeper.ZKUtil.listChildrenBFSAndWatchThem(ZKUtil.java:1490)
opentsdb_1 | at org.apache.hadoop.hbase.zookeeper.ZKUtil.deleteNodeRecursivelyMultiOrSequential(ZKUtil.java:1412)
opentsdb_1 | at org.apache.hadoop.hbase.zookeeper.ZKUtil.deleteNodeRecursively(ZKUtil.java:1294)
opentsdb_1 | at org.apache.hadoop.hbase.replication.ReplicationQueuesZKImpl.removeAllQueues(ReplicationQueuesZKImpl.java:195)
opentsdb_1 | at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager.join(ReplicationSourceManager.java:322)
opentsdb_1 | at org.apache.hadoop.hbase.replication.regionserver.Replication.join(Replication.java:206)
opentsdb_1 | at org.apache.hadoop.hbase.replication.regionserver.Replication.stopReplicationService(Replication.java:198)
opentsdb_1 | at org.apache.hadoop.hbase.regionserver.HRegionServer.stopServiceThreads(HRegionServer.java:2278)
opentsdb_1 | at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1144)
opentsdb_1 | at java.lang.Thread.run(Thread.java:748)
opentsdb_1 | 2020-05-22 06:55:05,395 ERROR [RS:0;otsdb-host:37183] zookeeper.ZooKeeperWatcher: regionserver:37183-0x17238bdd2400001, quorum=localhost:2181, baseZNode=/hbase Received unexpected KeeperException, re-throwing exception
opentsdb_1 | org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /hbase/replication/rs/otsdb-host,37183,1590089865286
opentsdb_1 | at org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
opentsdb_1 | at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
opentsdb_1 | at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:1532)
opentsdb_1 | at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getChildren(RecoverableZooKeeper.java:292)
opentsdb_1 | at org.apache.hadoop.hbase.zookeeper.ZKUtil.listChildrenAndWatchForNewChildren(ZKUtil.java:462)
opentsdb_1 | at org.apache.hadoop.hbase.zookeeper.ZKUtil.listChildrenAndWatchThem(ZKUtil.java:490)
opentsdb_1 | at org.apache.hadoop.hbase.zookeeper.ZKUtil.listChildrenBFSAndWatchThem(ZKUtil.java:1490)
opentsdb_1 | at org.apache.hadoop.hbase.zookeeper.ZKUtil.deleteNodeRecursivelyMultiOrSequential(ZKUtil.java:1412)
opentsdb_1 | at org.apache.hadoop.hbase.zookeeper.ZKUtil.deleteNodeRecursively(ZKUtil.java:1294)
opentsdb_1 | at org.apache.hadoop.hbase.replication.ReplicationQueuesZKImpl.removeAllQueues(ReplicationQueuesZKImpl.java:195)
opentsdb_1 | at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager.join(ReplicationSourceManager.java:322)
opentsdb_1 | at org.apache.hadoop.hbase.replication.regionserver.Replication.join(Replication.java:206)
opentsdb_1 | at org.apache.hadoop.hbase.replication.regionserver.Replication.stopReplicationService(Replication.java:198)
opentsdb_1 | at org.apache.hadoop.hbase.regionserver.HRegionServer.stopServiceThreads(HRegionServer.java:2278)
opentsdb_1 | at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1144)
opentsdb_1 | at java.lang.Thread.run(Thread.java:748)
opentsdb_1 | 2020-05-22 06:55:05,395 INFO [RS:0;otsdb-host:37183] ipc.RpcServer: Stopping server on 37183
opentsdb_1 | 2020-05-22 06:55:05,396 INFO [RpcServer.listener,port=37183] ipc.RpcServer: RpcServer.listener,port=37183: stopping
opentsdb_1 | 2020-05-22 06:55:05,397 INFO [RpcServer.responder] ipc.RpcServer: RpcServer.responder: stopped
opentsdb_1 | 2020-05-22 06:55:05,397 INFO [RpcServer.responder] ipc.RpcServer: RpcServer.responder: stopping
opentsdb_1 | 2020-05-22 06:55:05,401 WARN [RS:0;otsdb-host:37183] regionserver.HRegionServer: Failed deleting my ephemeral node
opentsdb_1 | org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /hbase/rs/otsdb-host,37183,1590089865286
opentsdb_1 | at org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
opentsdb_1 | at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
opentsdb_1 | at org.apache.zookeeper.ZooKeeper.delete(ZooKeeper.java:873)
opentsdb_1 | at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.delete(RecoverableZooKeeper.java:178)
opentsdb_1 | at org.apache.hadoop.hbase.zookeeper.ZKUtil.deleteNode(ZKUtil.java:1250)
opentsdb_1 | at org.apache.hadoop.hbase.zookeeper.ZKUtil.deleteNode(ZKUtil.java:1239)
opentsdb_1 | at org.apache.hadoop.hbase.regionserver.HRegionServer.deleteMyEphemeralNode(HRegionServer.java:1502)
opentsdb_1 | at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1152)
opentsdb_1 | at java.lang.Thread.run(Thread.java:748)
opentsdb_1 | 2020-05-22 06:55:05,402 INFO [RS:0;otsdb-host:37183] regionserver.HRegionServer: stopping server otsdb-host,37183,1590089865286; zookeeper connection closed.
opentsdb_1 | 2020-05-22 06:55:05,402 INFO [RS:0;otsdb-host:37183] regionserver.HRegionServer: RS:0;otsdb-host:37183 exiting
opentsdb_1 | 2020-05-22 06:55:05,413 INFO [Shutdown] mortbay.log: Shutdown hook executing
opentsdb_1 | 2020-05-22 06:55:05,413 INFO [Shutdown] mortbay.log: Shutdown hook complete
opentsdb_1 | 2020-05-22 06:55:05,415 INFO [Thread-6] regionserver.ShutdownHook: Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@5505ae1a
opentsdb_1 | 2020-05-22 06:55:05,418 INFO [Thread-6] regionserver.ShutdownHook: Starting fs shutdown hook thread.
opentsdb_1 | 2020-05-22 06:55:05,420 INFO [Thread-6] regionserver.ShutdownHook: Shutdown hook finished.
opentsdb_1 | stopping hbase
^CGracefully stopping... (press Ctrl+C again to force)
As you can see, I'm using docker-compose. Maybe setting a restart policy could fix my problem, but perhaps you can handle this on the image side.
Kind regards,
Alexis

Chunked content
Hi,
Even though your Dockerfile provides:
- tsd.http.request.enable_chunked = true
- tsd.http.request.max_chunk = 1000000
I'm only able to load a very small number of points at a time (fewer than 50). If I try to increase this number, I always receive this error: Chunked request not supported
Any idea?
Regards,
Laurent
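Not an answer on why chunked requests are refused, but a workaround that may help while they are: split the datapoints into batches whose serialized size stays under tsd.http.request.max_chunk, so each POST to /api/put fits in a single chunk. A sketch with a hypothetical helper, standard library only:

```python
import json

MAX_CHUNK = 1_000_000  # mirror tsd.http.request.max_chunk


def batches_under_limit(points, limit=MAX_CHUNK):
    """Yield lists of datapoints whose JSON encoding stays under `limit` bytes.

    `points` is an iterable of dicts in OpenTSDB /api/put shape, e.g.
    {"metric": ..., "timestamp": ..., "value": ..., "tags": {...}}.
    """
    batch = []
    for p in points:
        candidate = batch + [p]
        if batch and len(json.dumps(candidate).encode()) > limit:
            yield batch          # flush the batch that still fit
            batch = [p]          # start a new one with the current point
        else:
            batch = candidate
    if batch:
        yield batch
```

Each yielded batch can then be POSTed to /api/put as one JSON array, keeping every request under the server's chunk limit.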
update opentsdb.conf
How can I update opentsdb.conf? I need to set these values to true:
tsd.core.meta.enable_realtime_uid
tsd.core.meta.enable_tsuid_tracking
tsd.core.meta.enable_tsuid_incrementing
tsd.core.meta.enable_realtime_ts
tsd.http.query.allow_delete
I am using petergrace/opentsdb-docker.
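The startup logs elsewhere on this page show the container copying a config to /etc/opentsdb/opentsdb.conf ("OpenTSDB config not imported, using defaults"), so one way to override settings, assuming the image picks up a file mounted at that path (unverified), is:

```shell
# opentsdb.conf (host side) would contain the settings listed above:
#   tsd.core.meta.enable_realtime_uid = true
#   tsd.core.meta.enable_tsuid_tracking = true
#   tsd.core.meta.enable_tsuid_incrementing = true
#   tsd.core.meta.enable_realtime_ts = true
#   tsd.http.query.allow_delete = true
docker run -tdi -p 4242:4242 \
  -v "$PWD/opentsdb.conf:/etc/opentsdb/opentsdb.conf" \
  petergrace/opentsdb-docker:latest
```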
Unable to run 'opentsdb' as non-root user
When I run the image 'petergrace/opentsdb-docker' using the command below:
➜ ~ docker run -dp 4242:4242 -u 100 petergrace/opentsdb-docker
OpenTSDB is not accessible at 127.0.0.1:4242,
and below are the docker logs. If I just run it as the root user, everything works fine. Our k8s clusters have a pod security policy and can't run containers as root. Is there any workaround possible? Could you help with this?

➜ ~ docker logs ed83aa74b0ab
OpenTSDB config not imported, using defaults.
cp: can't create '/etc/opentsdb/opentsdb.conf': Permission denied
HBase config not imported, using defaults.
cp: can't create '/opt/hbase/conf/hbase-site.xml': File exists
starting hbase and sleeping 15 seconds for hbase to come online
starting hbase
OpenJDK 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0
OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
log4j:ERROR setFile(null,true) call failed.
java.io.FileNotFoundException: /opt/hbase/bin/../logs/SecurityAuth.audit (No such file or directory)
at java.io.FileOutputStream.open0(Native Method)
at java.io.FileOutputStream.open(FileOutputStream.java:270)
at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
at java.io.FileOutputStream.<init>(FileOutputStream.java:133)
at org.apache.log4j.FileAppender.setFile(FileAppender.java:294)
at org.apache.log4j.RollingFileAppender.setFile(RollingFileAppender.java:207)
at org.apache.log4j.FileAppender.activateOptions(FileAppender.java:165)
at org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:307)
at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:172)
at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:104)
at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:842)
at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:768)
at org.apache.log4j.PropertyConfigurator.parseCatsAndRenderers(PropertyConfigurator.java:672)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:516)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:580)
at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
at org.apache.log4j.Logger.getLogger(Logger.java:104)
at org.apache.commons.logging.impl.Log4JLogger.getLogger(Log4JLogger.java:262)
at org.apache.commons.logging.impl.Log4JLogger.<init>(Log4JLogger.java:108)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.commons.logging.impl.LogFactoryImpl.createLogFromClass(LogFactoryImpl.java:1025)
at org.apache.commons.logging.impl.LogFactoryImpl.discoverLogImplementation(LogFactoryImpl.java:844)
at org.apache.commons.logging.impl.LogFactoryImpl.newInstance(LogFactoryImpl.java:541)
at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:292)
at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:269)
at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:655)
at org.apache.hadoop.hbase.regionserver.HRegionServer.<clinit>(HRegionServer.java:216)
2022-11-03 06:32:30,254 INFO [main] util.VersionInfo: HBase 1.4.4
2022-11-03 06:32:30,257 INFO [main] util.VersionInfo: Source code repository git://apurtell-ltm4.internal.salesforce.com/Users/apurtell/src/hbase revision=fe146eb48c24d56dbcd2f669bb5ff8197e6c918b
2022-11-03 06:32:30,257 INFO [main] util.VersionInfo: Compiled by apurtell on Sun Apr 22 20:42:02 PDT 2018
2022-11-03 06:32:30,257 INFO [main] util.VersionInfo: From source with checksum d61e89b739ba7ddcfb25a30ed5e9cd53
2022-11-03 06:32:31,731 INFO [main] master.HMasterCommandLine: Starting a zookeeper cluster
2022-11-03 06:32:31,868 INFO [main] server.ZooKeeperServer: Server environment:zookeeper.version=3.4.10-39d3a4f269333c922ed3db283be479f9deacaa0f, built on 03/23/2017 10:13 GMT
2022-11-03 06:32:31,868 INFO [main] server.ZooKeeperServer: Server environment:host.name=ed83aa74b0ab
2022-11-03 06:32:31,868 INFO [main] server.ZooKeeperServer: Server environment:java.version=1.8.0_252
2022-11-03 06:32:31,868 INFO [main] server.ZooKeeperServer: Server environment:java.vendor=IcedTea
2022-11-03 06:32:31,869 INFO [main] server.ZooKeeperServer: Server environment:java.home=/usr/lib/jvm/java-1.8-openjdk/jre
2022-11-03 06:32:31,871 INFO [main] server.ZooKeeperServer: lengine-6.1.26.jar:/opt/hbase/bin/../lib/jetty-util-6.1.26.jar:/opt/hbase/bin/../lib/joni-2.1.2.jar:/opt/hbase/bin/../lib/jruby-complete-1.6.8.jar:/opt/hbase/bin/../lib/jsch-0.1.54.jar:/opt/hbase/bin/../lib/jsp-2.1-6.1.14.jar:/opt/hbase/bin/../lib/jsp-api-2.1-6.1.14.jar:/opt/hbase/bin/../lib/junit-4.12.jar:/opt/hbase/bin/../lib/leveldbjni-all-1.8.jar:/opt/hbase/bin/../lib/libthrift-0.9.3.jar:/opt/hbase/bin/../lib/log4j-1.2.17.jar:/opt/hbase/bin/../lib/metrics-core-2.2.0.jar:/opt/hbase/bin/../lib/metrics-core-3.1.2.jar:/opt/hbase/bin/../lib/netty-all-4.1.8.Final.jar:/opt/hbase/bin/../lib/paranamer-2.3.jar:/opt/hbase/bin/../lib/protobuf-java-2.5.0.jar:/opt/hbase/bin/../lib/servlet-api-2.5-6.1.14.jar:/opt/hbase/bin/../lib/slf4j-api-1.7.7.jar:/opt/hbase/bin/../lib/slf4j-log4j12-1.7.10.jar:/opt/hbase/bin/../lib/snappy-java-1.0.5.jar:/opt/hbase/bin/../lib/spymemcached-2.11.6.jar:/opt/hbase/bin/../lib/xmlenc-0.52.jar:/opt/hbase/bin/../lib/xz-1.0.jar:/opt/hbase/bin/../lib/zookeeper-3.4.10.jar:
2022-11-03 06:32:31,871 INFO [main] server.ZooKeeperServer: Server environment:java.library.path=/usr/lib/jvm/java-1.8-openjdk/jre/lib/amd64/server:/usr/lib/jvm/java-1.8-openjdk/jre/lib/amd64:/usr/lib/jvm/java-1.8-openjdk/jre/../lib/amd64:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2022-11-03 06:32:31,871 INFO [main] server.ZooKeeperServer: Server environment:java.io.tmpdir=/tmp
2022-11-03 06:32:31,872 INFO [main] server.ZooKeeperServer: Server environment:java.compiler=<NA>
2022-11-03 06:32:31,872 INFO [main] server.ZooKeeperServer: Server environment:os.name=Linux
2022-11-03 06:32:31,881 INFO [main] server.ZooKeeperServer: Server environment:os.arch=amd64
2022-11-03 06:32:31,881 INFO [main] server.ZooKeeperServer: Server environment:os.version=5.15.57-0-virt
2022-11-03 06:32:31,881 INFO [main] server.ZooKeeperServer: Server environment:user.name=?
2022-11-03 06:32:31,881 INFO [main] server.ZooKeeperServer: Server environment:user.home=?
2022-11-03 06:32:31,881 INFO [main] server.ZooKeeperServer: Server environment:user.dir=/opt/downloads
2022-11-03 06:32:31,939 INFO [main] server.ZooKeeperServer: Created server with tickTime 2000 minSessionTimeout 4000 maxSessionTimeout 40000 datadir /tmp/hbase-?/zookeeper/zookeeper_0/version-2 snapdir /tmp/hbase-?/zookeeper/zookeeper_0/version-2
2022-11-03 06:32:31,940 INFO [main] server.ZooKeeperServer: minSessionTimeout set to -1
2022-11-03 06:32:31,940 INFO [main] server.ZooKeeperServer: maxSessionTimeout set to -1
2022-11-03 06:32:31,994 INFO [main] server.NIOServerCnxnFactory: binding to port 0.0.0.0/0.0.0.0:2181
2022-11-03 06:32:32,392 ERROR [main] server.ZooKeeperServer: ZKShutdownHandler is not registered, so ZooKeeper server won't take any action on ERROR or SHUTDOWN server state changes
2022-11-03 06:32:32,413 INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] server.NIOServerCnxnFactory: Accepted socket connection from /127.0.0.1:43134
2022-11-03 06:32:32,520 INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] server.ServerCnxn: The list of known four letter word commands is : [{1936881266=srvr, 1937006964=stat, 2003003491=wchc, 1685417328=dump, 1668445044=crst, 1936880500=srst, 1701738089=envi, 1668247142=conf, 2003003507=wchs, 2003003504=wchp, 1668247155=cons, 1835955314=mntr, 1769173615=isro, 1920298859=ruok, 1735683435=gtmk, 1937010027=stmk}]
2022-11-03 06:32:32,520 INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] server.ServerCnxn: The list of enabled four letter word commands is : [[wchs, stat, stmk, conf, ruok, mntr, srvr, envi, srst, isro, dump, gtmk, crst, cons]]
2022-11-03 06:32:32,520 INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] server.NIOServerCnxn: Processing stat command from /127.0.0.1:43134
2022-11-03 06:32:32,529 INFO [Thread-2] server.NIOServerCnxn: Stat command output
2022-11-03 06:32:32,532 INFO [Thread-2] server.NIOServerCnxn: Closed socket connection for client /127.0.0.1:43134 (no session established for client)
2022-11-03 06:32:32,533 INFO [main] zookeeper.MiniZooKeeperCluster: Started MiniZooKeeperCluster and ran successful 'stat' on client port=2181
2022-11-03 06:32:32,533 INFO [main] master.HMasterCommandLine: Starting up instance of localHBaseCluster; master=1, regionserversCount=1
2022-11-03 06:32:33,379 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2022-11-03 06:32:34,403 INFO [main] regionserver.RSRpcServices: master/ed83aa74b0ab/172.17.0.2:0 server-side HConnection retries=350
2022-11-03 06:32:34,964 INFO [main] ipc.RpcExecutor: RpcExecutor name using fifo as call queue; numCallQueues=3; maxQueueLength=300; handlerCount=30
2022-11-03 06:32:34,971 INFO [main] ipc.RpcExecutor: RpcExecutor name using fifo as call queue; numCallQueues=2; maxQueueLength=300; handlerCount=20
2022-11-03 06:32:34,971 INFO [main] ipc.RpcExecutor: RpcExecutor name using fifo as call queue; numCallQueues=1; maxQueueLength=300; handlerCount=3
2022-11-03 06:32:35,061 INFO [main] ipc.RpcServer: master/ed83aa74b0ab/172.17.0.2:0: started 10 reader(s) listening on port=41911
2022-11-03 06:32:35,669 INFO [main] metrics.MetricRegistries: Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl
2022-11-03 06:32:36,044 ERROR [main] master.HMasterCommandLine: Master exiting
java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMasterCommandLine$LocalHMaster
java.lang.NullPointerException: invalid null input: name
at com.sun.security.auth.UnixPrincipal.<init>(UnixPrincipal.java:71)
at com.sun.security.auth.module.UnixLoginModule.login(UnixLoginModule.java:133)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at javax.security.auth.login.LoginContext.invoke(LoginContext.java:755)
at javax.security.auth.login.LoginContext.access$000(LoginContext.java:195)
at javax.security.auth.login.LoginContext$4.run(LoginContext.java:682)
at javax.security.auth.login.LoginContext$4.run(LoginContext.java:680)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:680)
at javax.security.auth.login.LoginContext.login(LoginContext.java:587)
at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:815)
at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:777)
at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:650)
at org.apache.hadoop.hbase.security.User$SecureHadoopUser.<init>(User.java:293)
at org.apache.hadoop.hbase.security.User.getCurrent(User.java:191)
at org.apache.hadoop.hbase.security.Superusers.initialize(Superusers.java:59)
at org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:590)
at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:447)
at org.apache.hadoop.hbase.master.HMasterCommandLine$LocalHMaster.<init>(HMasterCommandLine.java:315)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.hbase.util.JVMClusterUtil.createMasterThread(JVMClusterUtil.java:139)
at org.apache.hadoop.hbase.LocalHBaseCluster.addMaster(LocalHBaseCluster.java:227)
at org.apache.hadoop.hbase.LocalHBaseCluster.<init>(LocalHBaseCluster.java:162)
at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:225)
at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:138)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:127)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2810)
at org.apache.hadoop.hbase.util.JVMClusterUtil.createMasterThread(JVMClusterUtil.java:143)
at org.apache.hadoop.hbase.LocalHBaseCluster.addMaster(LocalHBaseCluster.java:227)
at org.apache.hadoop.hbase.LocalHBaseCluster.<init>(LocalHBaseCluster.java:162)
at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:225)
at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:138)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:127)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2810)
Caused by: java.io.IOException: failure to login
at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:841)
at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:777)
at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:650)
at org.apache.hadoop.hbase.security.User$SecureHadoopUser.<init>(User.java:293)
at org.apache.hadoop.hbase.security.User.getCurrent(User.java:191)
at org.apache.hadoop.hbase.security.Superusers.initialize(Superusers.java:59)
at org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:590)
at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:447)
at org.apache.hadoop.hbase.master.HMasterCommandLine$LocalHMaster.<init>(HMasterCommandLine.java:315)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.hbase.util.JVMClusterUtil.createMasterThread(JVMClusterUtil.java:139)
... 7 more
Caused by: javax.security.auth.login.LoginException: java.lang.NullPointerException: invalid null input: name
at com.sun.security.auth.UnixPrincipal.<init>(UnixPrincipal.java:71)
at com.sun.security.auth.module.UnixLoginModule.login(UnixLoginModule.java:133)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at javax.security.auth.login.LoginContext.invoke(LoginContext.java:755)
at javax.security.auth.login.LoginContext.access$000(LoginContext.java:195)
at javax.security.auth.login.LoginContext$4.run(LoginContext.java:682)
at javax.security.auth.login.LoginContext$4.run(LoginContext.java:680)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:680)
at javax.security.auth.login.LoginContext.login(LoginContext.java:587)
at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:815) at
org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:777) at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:650) at org.apache.hadoop.hbase.security.User$SecureHadoopUser.<init>(User.java:293) at org.apache.hadoop.hbase.security.User.getCurrent(User.java:191) at org.apache.hadoop.hbase.security.Superusers.initialize(Superusers.java:59) at org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:590) at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:447) at org.apache.hadoop.hbase.master.HMasterCommandLine$LocalHMaster.<init>(HMasterCommandLine.java:315) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.util.JVMClusterUtil.createMasterThread(JVMClusterUtil.java:139) at org.apache.hadoop.hbase.LocalHBaseCluster.addMaster(LocalHBaseCluster.java:227) at org.apache.hadoop.hbase.LocalHBaseCluster.<init>(LocalHBaseCluster.java:162) at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:225) at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:138) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:127) at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2810) at javax.security.auth.login.LoginContext.invoke(LoginContext.java:856) at javax.security.auth.login.LoginContext.access$000(LoginContext.java:195) at javax.security.auth.login.LoginContext$4.run(LoginContext.java:682) at javax.security.auth.login.LoginContext$4.run(LoginContext.java:680) at java.security.AccessController.doPrivileged(Native 
Method) at javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:680) at javax.security.auth.login.LoginContext.login(LoginContext.java:587) at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:815) ... 20 more stopping hbase /opt/bin/start_hbase.sh: line 1: /var/log/hbase-stop.log: Permission denied
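The "invalid null input: name" NullPointerException above typically means Hadoop's UserGroupInformation could not resolve a username for the current UID, i.e. the container process is running as a UID that has no /etc/passwd entry. A minimal diagnostic sketch (messages and structure are illustrative, not part of the image):

```shell
# Check whether the current UID resolves to a user name; Hadoop's
# UnixLoginModule throws "invalid null input: name" when it does not.
name=$(whoami 2>/dev/null || true)
if [ -z "$name" ]; then
    msg="UID $(id -u) has no passwd entry - HBase/Hadoop login will fail"
else
    msg="running as $name - UGI login should succeed"
fi
echo "$msg"
```

If `whoami` fails inside the container, running with a `--user` that exists in the image's /etc/passwd (or adding an entry) is the usual remedy.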
Tag for 2.3.1
Hello,
Could you please also provide a tag for version 2.3.1? Thanks!
Push 2.3 version on Docker hub
Hi,
It seems like https://hub.docker.com/r/petergrace/opentsdb-docker is still pulling the old OpenTSDB 2.2 version, but the GitHub repo is using 2.3.
Any chance Docker hub can be updated?
Thanks!
Error message when building the Docker image
I want to build the Docker image by using
docker build -t my-opentsdb .
Unfortunately, I get the error message:
The command '/bin/sh -c apk add --virtual builddeps ${BUILD_PACKAGES} && : Install OpenTSDB and scripts && wget --no-check-certificate -O v${TSDB_VERSION}.zip https://github.com/OpenTSDB/opentsdb/archive/v${TSDB_VERSION}.zip && unzip v${TSDB_VERSION}.zip && rm v${TSDB_VERSION}.zip && cd /opt/opentsdb/opentsdb-${TSDB_VERSION} && echo "tsd.http.request.enable_chunked = true" >> src/opentsdb.conf && echo "tsd.http.request.max_chunk = 1000000" >> src/opentsdb.conf && ./build.sh && cp build-aux/install-sh build/build-aux && cd build && make install && cd / && rm -rf /opt/opentsdb/opentsdb-${TSDB_VERSION}' returned a non-zero code: 2
The complete console output is available here: ix.io
Most likely the Maven repository address has changed, since opening the Maven address in a browser returns:
501 HTTPS Required. Use https://repo1.maven.org/maven2/ More information at https://links.sonatype.com/central/501-https-required
Where can this be adjusted?
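One hedged workaround for the 501 error, given that Maven Central now requires HTTPS: rewrite the plain-HTTP URL to HTTPS in whichever build file references it before running ./build.sh. Where the URL lives depends on the OpenTSDB version's build files, so the file name below is illustrative; the substitution itself looks like this:

```shell
# Rewrite a plain-HTTP Maven Central URL to HTTPS. In a real build you
# would run sed -i over the build file(s) that contain the URL, e.g.:
#   sed -i 's|http://repo1.maven.org|https://repo1.maven.org|g' pom.xml
# (pom.xml is illustrative - check where your OpenTSDB version keeps it.)
old='http://repo1.maven.org/maven2/'
new=$(printf '%s' "$old" | sed 's|^http://|https://|')
echo "$new"   # prints https://repo1.maven.org/maven2/
```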
OOM Errors
First of all, thanks for your work. How would I go about increasing the heap memory? I've tried JAVA_OPTS="-XX:PermSize=12g -XX:MaxPermSize=24g", but that doesn't seem to have any effect.
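For what it's worth, -XX:PermSize and -XX:MaxPermSize size the JVM's permanent generation, not the heap; the heap is controlled by -Xms (initial) and -Xmx (maximum). Whether this image's start script actually forwards JAVA_OPTS to the java command line depends on the image version, so treat this as a sketch (flag values are examples):

```shell
# Sketch: pass heap flags into the container via JAVA_OPTS. This assumes
# the image's start script exports JAVA_OPTS to the JVM - verify against
# the start script in your image version before relying on it.
docker run -tdi -p 4242:4242 \
    -e JAVA_OPTS="-Xms2g -Xmx8g" \
    petergrace/opentsdb-docker:latest
```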
gnuplot does not exist in http://dl-3.alpinelinux.org/alpine/edge/testing/
When I build the image on my local computer, I get the following error:
gnuplot(missing) required by : world[gnuplot]
I then checked the repository http://dl-3.alpinelinux.org/alpine/edge/testing/ and did not find gnuplot there, but I did find it in http://dl-3.alpinelinux.org/alpine/edge/community/.
So I changed line 12 of the Dockerfile to --repository http://dl-3.alpinelinux.org/alpine/edge/community/
and it worked!
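The change described above would look roughly like this in the Dockerfile's apk install step (a sketch, not the exact original line 12; the package list is trimmed to gnuplot):

```shell
# Install gnuplot from the edge "community" repository instead of
# "testing", where the package is no longer present.
apk add --no-cache \
    --repository http://dl-3.alpinelinux.org/alpine/edge/community/ \
    gnuplot
```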