harisekhon / dockerfiles

50+ DockerHub public images for Docker & Kubernetes - DevOps, CI/CD, GitHub Actions, CircleCI, Jenkins, TeamCity, Alpine, CentOS, Debian, Fedora, Ubuntu, Hadoop, Kafka, ZooKeeper, HBase, Cassandra, Solr, SolrCloud, Presto, Apache Drill, Nifi, Spark, Consul, Riak

Home Page: https://www.linkedin.com/in/HariSekhon

License: MIT License

Languages: Shell 57.06%, Dockerfile 29.83%, Makefile 8.60%, Erlang 4.51%
Topics: hadoop, hbase, cassandra, solr, solrcloud, kafka, consul, zookeeper, apache-drill, dockerhub

dockerfiles's Introduction

Dockerfiles for DevOps, CI/CD, Big Data & NoSQL

[Status badges: GitHub stats, code quality (Codacy, CodeFactor, SonarQube), OS builds (Alpine, CentOS, Debian, Fedora, Redhat, Rocky, Ubuntu), CI builds (Jenkins, Concourse, GoCD, TeamCity, CircleCI, BuildKite, AppVeyor, Drone, Codefresh, Cirrus CI, Semaphore, Buddy, Shippable, Travis CI, Azure DevOps, GitLab, BitBucket, AWS CodeBuild, GCP Cloud Build, GitHub Actions) and validation checks (ShellCheck, JSON, YAML, XML, Kics, Grype, Semgrep, Trivy)]

git.io/dockerhub

Contains 50+ DockerHub repos with 340+ tags, covering many different versions of standard official open source software - see the Full Inventory further down.

These docker images are tested by hundreds of tools and also used in the full functional test suites of various other GitHub repos.

See also the Kubernetes configs repo.

Overview - this repo contains:

  • Hadoop & Big Data ecosystem technologies (Spark, Kafka, Presto, Drill, Nifi, ZooKeeper)
  • NoSQL datastores (HBase, Cassandra, Riak, SolrCloud)
  • OS & development images (Alpine, CentOS, Debian, Fedora, Ubuntu)
  • DevOps, CI/CD (CircleCI, GitHub Actions, Jenkins, TeamCity etc), open source (RabbitMQ Cluster, Mesos, Consul)
  • My GitHub repos containing hundreds of tools related to these technologies with all dependencies pre-built in the docker images

These images are all available pre-built on My DockerHub - https://hub.docker.com/u/harisekhon/.

  • Quality and Testing - this repo has entire test suites run against it from various GitHub repositories to validate the docker images' functionality, that branches align with their tagged versions, that latest contains the correct version from the master branch, plus syntax checks covering all common build and file formats (Make/JSON/CSV/INI/XML/YAML configurations) etc.

These are reusable tests that anybody can implement, and they can be found in my DevOps Python Tools and DevOps Bash Tools repos, as well as the Advanced Nagios Plugins Collection, which contains hundreds of technology-specific API-level test programs to ensure the docker images are functioning as intended.

Continuous Integration is run on this and adjacent repos, forming a bi-directional validation between these docker images and several other repositories full of hundreds of programs. All of this is intended to keep the quality of this repo as high as possible.

Hari Sekhon

Cloud & Big Data Contractor, United Kingdom

(ex-Cloudera, former Hortonworks Consultant)

My LinkedIn

(you're welcome to connect with me on LinkedIn)

Ready to run Docker images

docker search harisekhon
docker run harisekhon/nagios-plugins

To see more than the 25 DockerHub repos that docker search is limited to (docker issue 23055), I wrote dockerhub_search.py using the DockerHub API, available in my DevOps Python Tools github repo and as a pre-built docker image:

docker run harisekhon/pytools dockerhub_search.py harisekhon

There are lots of tagged versions of official software in my repos to allow development testing across multiple versions, usually more versions than available from the official repos (and new version updates available on request, just raise a GitHub issue).

DockerHub tags are not shown by docker search (docker issue 17238) so I wrote dockerhub_show_tags.py available in my DevOps Python Tools github repo and as a pre-built docker image - eg. to see an organized list of all CentOS tags:

docker run harisekhon/pytools dockerhub_show_tags.py centos
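
Once you know the available tags, you can pin a specific version instead of latest - for example (the exact tag below is only illustrative; check what is actually published with dockerhub_show_tags.py first):

# run a specific tagged ZooKeeper version rather than :latest (tag shown is an assumption)
docker run -ti harisekhon/zookeeper:3.4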

For service technologies like Hadoop, HBase, ZooKeeper etc., for which you'll also want port mappings, each directory in the GitHub project contains both a standard docker-compose configuration and a make run shortcut (which doesn't require docker-compose to be installed) - either way you don't have to remember all the command line switches and port number specifics:

cd zookeeper
docker-compose up

or for technologies with interactive shells like Spark, ZooKeeper, HBase, Drill and Cassandra, where you want to be dropped straight into the interactive shell, use the make run shortcut instead:

cd zookeeper
make run

which is much easier to type and remember than the equivalent bigger commands like:

docker run -ti -p 2181:2181 harisekhon/zookeeper

and avoids having to remember commands like these for more complex services such as Hadoop / HBase:

docker run -ti -p 2181:2181 -p 8080:8080 -p 8085:8085 -p 9090:9090 -p 9095:9095 -p 16000:16000 -p 16010:16010 -p 16201:16201 -p 16301:16301 harisekhon/hbase
docker run -ti -p 8020:8020 -p 8032:8032 -p 8088:8088 -p 9000:9000 -p 10020:10020 -p 19888:19888 -p 50010:50010 -p 50020:50020 -p 50070:50070 -p 50075:50075 -p 50090:50090 harisekhon/hadoop
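
Those port mappings are exactly what each directory's docker-compose configuration wraps up for you. As a rough sketch of the idea only (not the repo's actual file - the docker-compose.yml in each directory is authoritative), the ZooKeeper one boils down to something like:

# hypothetical minimal docker-compose.yml for illustration - use the real one in the zookeeper/ directory
version: "3"
services:
  zookeeper:
    image: harisekhon/zookeeper
    ports:
      - "2181:2181"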

Full Inventory

Official Standard Open Source Technologies

More specific information can be found in the readme page under each respective directory in the Dockerfiles git repo.

Repos suffixed with -dev are the official technologies + development & debugging tools + my github repos with all dependencies pre-built.

My GitHub Repos (with all libs + deps pre-built)

You might like this Dockerfile trick for busting the Docker cache to get the latest repo updates:

# Cache Bust upon new commits
ADD https://api.github.com/repos/HariSekhon/DevOps-Bash-tools/git/refs/heads/master /.git-hashref
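In context the trick looks something like the fragment below (a hedged sketch, not an exact Dockerfile from this repo). The ADD fetches the branch ref from the GitHub API at build time, so whenever master gets a new commit the fetched content changes, the layer cache is invalidated from that point onwards, and the later git clone / pull steps re-run instead of being served stale from cache:

# hypothetical Dockerfile fragment for illustration only
FROM alpine:latest
RUN apk add --no-cache git

# Cache Bust upon new commits - this file's contents change whenever master moves
ADD https://api.github.com/repos/HariSekhon/DevOps-Bash-tools/git/refs/heads/master /.git-hashref

# anything below here re-runs after each new commit to master
RUN git clone https://github.com/HariSekhon/DevOps-Bash-tools /github/bash-tools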
  • Advanced Nagios Plugins Collection - 450+ nagios plugins for every Hadoop distribution and every major NoSQL technology - Hadoop, Redis, Elasticsearch, Solr, HBase, Cassandra & DataStax OpsCenter, MongoDB, MySQL, Kafka, Riak, Memcached, Couchbase, CouchDB, Mesos, Spark, Neo4j, Datameer, H2O, WanDisco, Yarn, HDFS, Impala, Apache Drill, Presto, ZooKeeper, Cloudera, Hortonworks, MapR, IBM BigInsights, Infrastructure - Linux, DNS, Whois, SSL Certs etc
    • DockerHub Nagios Plugins Alpine
    • DockerHub Nagios Plugins Centos DockerHub Nagios Plugins Latest
    • DockerHub Nagios Plugins Debian
    • DockerHub Nagios Plugins Fedora
    • DockerHub Nagios Plugins Ubuntu
    • DockerHub Nagios Plugins Perl
    • DockerHub Nagios Plugins Python
  • harisekhon/tools - DevOps Tools superset of the below images, containing hundreds of programs:
    • DockerHub PyTools - DevOps Python Tools - 80+ DevOps CLI tools for AWS, Log Anonymizer, Spark, Hadoop, HBase, Hive, Impala, Linux, Docker, Spark Data Converters & Validators (Avro/Parquet/JSON/CSV/INI/XML/YAML), Travis CI, Ambari, Blueprints, CloudFormation, Elasticsearch, Solr, Pig etc.
    • DockerHub Bash Tools - DevOps Bash Tools - 750+ DevOps CLI tools for AWS, GCP, Kubernetes, Hadoop, Hive, Impala, Kafka, Docker, LDAP, Git, Code & build linting, package management for Linux / Mac / Python / Perl / Ruby / NodeJS / Golang, and lots more random goodies
    • DockerHub Perl Tools - DevOps Perl Tools - 25+ DevOps CLI Tools - Log Anonymizer, Hadoop HDFS & Hive tools, Solr/SolrCloud CLI, SQL ReCaser (MySQL, PostgreSQL, AWS Redshift, Snowflake, Apache Drill, Hive, Impala, Cassandra CQL, Microsoft SQL Server, Oracle, Couchbase N1QL, Dockerfiles, Pig Latin, Neo4j, InfluxDB), Linux, Nginx stats & HTTP(S) URL watchers for load balanced web farms, Ambari FreeIPA Kerberos, Datameer etc.
    • all of the above repos come with tags for alpine, centos, debian, fedora and ubuntu builds
  • Spotify Tools - Spotify API tools - eg. convert Spotify URIs to Artist - Track form by querying the Spotify API - readme

GitHub repos

My GitHub repos pre-built on major Linux distros, with the CLI programs located at /github/<project>

Available as both harisekhon/github:<distro> and harisekhon/<distro>-github for convenience, and to allow shorter use of :latest by using just harisekhon/github

harisekhon/github:latest is the same as harisekhon/github:ubuntu
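
For example, the following are two ways of running the same Alpine build (a naming-convention sketch):

# equivalent image names for the Alpine build of the github image
docker run -ti harisekhon/github:alpine
docker run -ti harisekhon/alpine-github

# :latest is the Ubuntu build
docker run -ti harisekhon/github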


  • DockerHub GitHub Alpine
  • DockerHub GitHub CentOS
  • DockerHub GitHub Debian
  • DockerHub GitHub Fedora
  • DockerHub GitHub Ubuntu

Base Images

Linux Distros + Development Tools

Available as both harisekhon/<distro>-dev and harisekhon/dev:<distro>

harisekhon/dev:latest is the same as harisekhon/dev:ubuntu


  • DockerHub Alpine Dev - Alpine latest with Java JDK, Perl, Python, Jython, Ruby, Scala, Groovy, GCC, Maven, SBT, Gradle, Make, Expect etc.
  • DockerHub CentOS Dev - CentOS latest with Java JDK, Perl, Python, Jython, Ruby, Scala, Groovy, GCC, Maven, SBT, Gradle, Make, Expect, EPEL etc.
  • DockerHub Debian Dev - Debian latest with Java JDK, Perl, Python, Jython, Ruby, Scala, Groovy, GCC, Maven, SBT, Gradle, Make, Expect etc.
  • DockerHub Fedora Dev - Fedora latest with Java JDK, Perl, Python, Jython, Ruby, Scala, Groovy, GCC, Maven, SBT, Gradle, Make, Expect etc.
  • DockerHub Ubuntu Dev - Ubuntu latest with Java JDK, Perl, Python, Jython, Ruby, Scala, Groovy, GCC, Maven, SBT, Gradle, Make, Expect etc.

Base Images of Java / Scala

All builds use OpenJDK with jre and jdk numbered tags. See this article below for why it might be illegal to bundle Oracle Java (and why no Linux distributions do this either):

https://www.javacodegeeks.com/2016/03/running-java-docker-youre-breaking-law.html
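
To check which JDK a given image bundles, you can run java -version against it - a minimal sketch, in which the repo and tag names are assumptions for illustration rather than guaranteed published names:

# repo and tag are assumptions - list real tags with dockerhub_show_tags.py as shown earlier
docker run --rm --entrypoint java harisekhon/alpine-java:jdk8 -version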


  • Docker Build Alpine Java - Alpine latest with Java 8
  • Docker Build CentOS Java - CentOS latest combinations of Java 7 / 8 and Scala 2.10 / 2.11
  • Docker Build Debian Java - Debian latest with Java 7, 8
  • Docker Build Fedora Java - Fedora latest combinations of Java 7/8 and Scala 2.10/2.11
  • Docker Build Ubuntu Java
    • Ubuntu 14.04 with Java 7
    • Ubuntu latest with Java 8, 9

Build from Source

All images come pre-built on DockerHub, but if you want to build from source for any reason, such as developing improvements, I've made this easy to do:

git clone https://github.com/HariSekhon/Dockerfiles

cd Dockerfiles

To build all Docker images, just run the make command at the top level:

make

To build a specific Docker image, enter its directory and run make:

cd nagios-plugins

make

You can also build a specific version by checking out the git branch for the version and running the build:

cd consul
git checkout consul-0.9
make

or build all versions of a given software project like so:

cd hadoop
make build-versions

See the top level Makefile as well as Makefile.in, which is included by each project, with any project-specific overrides in <project_directory>/Makefile.
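
A hypothetical sketch of how that layering fits together (the variable name below is invented purely for illustration - the real Makefile.in in the repo defines the actual interface):

# <project_directory>/Makefile - hypothetical illustration only, not this repo's real file
# project-specific override (variable name invented for illustration)
REPO := harisekhon/zookeeper
# pull in the shared build / test / push targets
include ../Makefile.in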

Support

Please raise tickets for issues and improvements at https://github.com/HariSekhon/Dockerfiles/issues

Related Repositories

  • HashiCorp Packer templates - automated bare-metal Linux installs and portable virtual machine appliances in OVA format, using HashiCorp Packer, Redhat Kickstart, Debian Preseed and Ubuntu AutoInstaller / Cloud-Init

  • DevOps Bash Tools - 1000+ DevOps Bash Scripts, Advanced .bashrc, .vimrc, .screenrc, .tmux.conf, .gitconfig, CI configs & Utility Code Library - AWS, GCP, Kubernetes, Docker, Kafka, Hadoop, SQL, BigQuery, Hive, Impala, PostgreSQL, MySQL, LDAP, DockerHub, Jenkins, Spotify API & MP3 tools, Git tricks, GitHub API, GitLab API, BitBucket API, Code & build linting, package management for Linux / Mac / Python / Perl / Ruby / NodeJS / Golang, and lots more random goodies

  • SQL Scripts - 100+ SQL Scripts - PostgreSQL, MySQL, AWS Athena, Google BigQuery

  • Jenkins - Advanced Jenkinsfile & Jenkins Groovy Shared Library

  • GitHub-Actions - GitHub Actions master template & GitHub Actions Shared Workflows library

  • Templates - dozens of Code & Config templates - AWS, GCP, Docker, Jenkins, Terraform, Vagrant, Puppet, Python, Bash, Go, Perl, Java, Scala, Groovy, Maven, SBT, Gradle, Make, GitHub Actions Workflows, CircleCI, Jenkinsfile, Makefile, Dockerfile, docker-compose.yml, M4 etc.

  • Kubernetes configs - Kubernetes YAML configs - Best Practices, Tips & Tricks are baked right into the templates for future deployments

  • Terraform - Terraform templates for AWS / GCP / Azure / GitHub management

  • DevOps Python Tools - 80+ DevOps CLI tools for AWS, GCP, Hadoop, HBase, Spark, Log Anonymizer, Ambari Blueprints, AWS CloudFormation, Linux, Docker, Spark Data Converters & Validators (Avro / Parquet / JSON / CSV / INI / XML / YAML), Elasticsearch, Solr, Travis CI, Pig, IPython

  • DevOps Perl Tools - 25+ DevOps CLI tools for Hadoop, HDFS, Hive, Solr/SolrCloud CLI, Log Anonymizer, Nginx stats & HTTP(S) URL watchers for load balanced web farms, Dockerfiles & SQL ReCaser (MySQL, PostgreSQL, AWS Redshift, Snowflake, Apache Drill, Hive, Impala, Cassandra CQL, Microsoft SQL Server, Oracle, Couchbase N1QL, Dockerfiles, Pig Latin, Neo4j, InfluxDB), Ambari FreeIPA Kerberos, Datameer, Linux...

  • The Advanced Nagios Plugins Collection - 450+ programs for Nagios monitoring your Hadoop & NoSQL clusters. Covers every Hadoop vendor's management API and every major NoSQL technology (HBase, Cassandra, MongoDB, Elasticsearch, Solr, Riak, Redis etc.) as well as message queues (Kafka, RabbitMQ), continuous integration (Jenkins, Travis CI) and traditional infrastructure (SSL, Whois, DNS, Linux)

  • Nagios Plugin Kafka - Kafka API pub/sub Nagios Plugin written in Scala with Kerberos support

  • HAProxy Configs - 80+ HAProxy Configs for Hadoop, Big Data, NoSQL, Docker, Elasticsearch, SolrCloud, HBase, Cloudera, Hortonworks, MapR, MySQL, PostgreSQL, Apache Drill, Hive, Presto, Impala, ZooKeeper, OpenTSDB, InfluxDB, Prometheus, Kibana, Graphite, SSH, RabbitMQ, Redis, Riak, Rancher etc.

  • Diagrams-as-Code - Cloud & Open Source architecture diagrams with Python & D2 source code provided - automatically regenerated via GitHub Actions CI/CD - AWS, GCP, Kubernetes, Jenkins, ArgoCD, Traefik, Kong API Gateway, Nginx, Redis, PostgreSQL, Kafka, Spark, web farms, event processing...

Stargazers over time

git.io/dockerhub

dockerfiles's People

Contributors

anmolnagpal, fhalim, harisekhon, iemejia, joshk0, kailes, romuloceccon, webji


dockerfiles's Issues

make run cassandra

echo "docker run -ti --rm harisekhon/cassandra-dev:3.11"
docker run -ti --rm harisekhon/cassandra-dev:3.11
Unable to find image 'harisekhon/cassandra-dev:3.11' locally
3.11: Pulling from harisekhon/cassandra-dev
cd784148e348: Pull complete
e9fa13bbd229: Pull complete
e7ee5a846f96: Pull complete
936840502b7d: Pull complete
c7aed77144e9: Pull complete
d4bbe4cf2406: Pull complete
8e253e2bfd1c: Pull complete
00b13d39ece5: Pull complete
e5196f978906: Pull complete
Digest: sha256:6623d34a680190f7025daf7e345cb9b722a9217ebfc12a44fa7f77bfe9c6e46c
Status: Downloaded newer image for harisekhon/cassandra-dev:3.11
grep: /cassandra/logs/system.log: No such file or directory
.OpenJDK 64-Bit Server VM warning: Cannot open file /cassandra/bin/../logs/gc.log due to No such file or directory

grep: /cassandra/logs/system.log: No such file or directory
.grep: /cassandra/logs/system.log: No such file or directory
.grep: /cassandra/logs/system.log: No such file or directory
.grep: /cassandra/logs/system.log: No such file or directory
.grep: /cassandra/logs/system.log: No such file or directory
.grep: /cassandra/logs/system.log: No such file or directory
.grep: /cassandra/logs/system.log: No such file or directory
.grep: /cassandra/logs/system.log: No such file or directory
.grep: /cassandra/logs/system.log: No such file or directory
.grep: /cassandra/logs/system.log: No such file or directory
.grep: /cassandra/logs/system.log: No such file or directory
.grep: /cassandra/logs/system.log: No such file or directory
.grep: /cassandra/logs/system.log: No such file or directory
.grep: /cassandra/logs/system.log: No such file or directory
.grep: /cassandra/logs/system.log: No such file or directory
.grep: /cassandra/logs/system.log: No such file or directory
.grep: /cassandra/logs/system.log: No such file or directory
.grep: /cassandra/logs/system.log: No such file or directory
.grep: /cassandra/logs/system.log: No such file or directory
.grep: /cassandra/logs/system.log: No such file or directory


Didn't find CQL startup in cassandra system.log, trying CQL anyway


su cassandra /cassandra/bin/cqlsh
Traceback (most recent call last):
  File "/apache-cassandra-3.11.4/bin/cqlsh.py", line 2443, in <module>
    main(*read_options(sys.argv[1:], os.environ))
  File "/apache-cassandra-3.11.4/bin/cqlsh.py", line 2421, in main
    encoding=options.encoding)
  File "/apache-cassandra-3.11.4/bin/cqlsh.py", line 485, in __init__
    load_balancing_policy=WhiteListRoundRobinPolicy([self.hostname]),
  File "/usr/lib/python2.7/site-packages/cassandra/policies.py", line 426, in __init__
    for endpoint in socket.getaddrinfo(a, None, socket.AF_UNSPEC, socket.SOCK_STREAM)]
socket.gaierror: [Errno -2] Name does not resolve
make: *** [../Makefile.in:193: run] Error 1

docker run harisekhon/pytools dockerhub_search.py harisekhon

Unable to find image 'harisekhon/pytools:latest' locally
latest: Pulling from harisekhon/pytools
d7bfe07ed847: Pull complete
7009757257ba: Pull complete
9a869f37ea40: Pull complete
3c12d5305d29: Pull complete
7c328aa7c63e: Pull complete
1aa9e44ca37a: Pull complete
4f4fb700ef54: Pull complete
Digest: sha256:af996ac6da63066f28fe509a2995fc0136f31abfb4a9b16258fa757fe3015d8b
Status: Downloaded newer image for harisekhon/pytools:latest
Traceback (most recent call last):
  File "/github/pytools/pylib/harisekhon/nagiosplugin/docker_nagiosplugin.py", line 35, in <module>
    import docker
  File "/usr/local/lib/python2.7/dist-packages/docker/__init__.py", line 2, in <module>
    from .api import APIClient
  File "/usr/local/lib/python2.7/dist-packages/docker/api/__init__.py", line 2, in <module>
    from .client import APIClient
  File "/usr/local/lib/python2.7/dist-packages/docker/api/client.py", line 10, in <module>
    from .build import BuildApiMixin
  File "/usr/local/lib/python2.7/dist-packages/docker/api/build.py", line 6, in <module>
    from .. import auth
  File "/usr/local/lib/python2.7/dist-packages/docker/auth.py", line 9, in <module>
    from .utils import config
  File "/usr/local/lib/python2.7/dist-packages/docker/utils/__init__.py", line 3, in <module>
    from .decorators import check_resource, minimum_version, update_headers
  File "/usr/local/lib/python2.7/dist-packages/docker/utils/decorators.py", line 4, in <module>
    from . import utils
  File "/usr/local/lib/python2.7/dist-packages/docker/utils/utils.py", line 13, in <module>
    from .. import tls
  File "/usr/local/lib/python2.7/dist-packages/docker/tls.py", line 5, in <module>
    from .transport import SSLHTTPAdapter
  File "/usr/local/lib/python2.7/dist-packages/docker/transport/__init__.py", line 3, in <module>
    from .ssladapter import SSLHTTPAdapter
  File "/usr/local/lib/python2.7/dist-packages/docker/transport/ssladapter.py", line 23, in <module>
    from backports.ssl_match_hostname import match_hostname
ImportError: No module named ssl_match_hostname

Mounting a volume causes FileSystemVersionException

I use docker-compose to start JanusGraph and HBase with the following config:

version: "3"
services:

  janusgraph:
    image: janusgraph/janusgraph:0.5.3
    container_name: janusgraph1
    volumes:
      - ./importData:/opt/janusgraph/importData
      - ./remote-objects.yaml:/opt/janusgraph/conf/remote-objects.yaml
      - /opt/janusgraph/lib
      - /opt/janusgraph/ext
    environment:
      janusgraph.storage.backend: hbase
      janusgraph.storage.hostname: xxxxxxxx
      janusgraph.storage.port: 2181
      janusgraph.cache.db-cache: "true"
      janusgraph.cache.db-cache-clean-wait: 20
      janusgraph.cache.db-cache-time: 180000
      janusgraph.cache.db-cache-size: 0.5
      janusgraph.index.search.backend: elasticsearch
      janusgraph.index.search.hostname: xxxxxxxx
      index.search.port: 9200
    ports:
      - "8182:8182"
    depends_on:
      - hbase

  hbase:
    image: harisekhon/hbase:2.1
    container_name: hbase
    ports:
      - "2181:2181"
      - "8080:8080"
      - "8085:8085"
      - "9090:9090"
      - "9095:9095"
      - "16000:16000"
      - "16010:16010"
      - "16020:16020"
      - "16030:16030"
      - "16201:16201"
      - "16301:16301"
    volumes:
      - ./hbase-data/data:/hbase-data/data

then HBase throws this exception:

2021-07-16 09:28:08,515 INFO [master/5748bd54b4eb:16000:becomeActiveMaster] master.ActiveMasterManager: Registered as active master=5748bd54b4eb,16000,1626427683630
2021-07-16 09:28:08,709 ERROR [master/5748bd54b4eb:16000:becomeActiveMaster] master.HMaster: Failed to become active master
org.apache.hadoop.hbase.util.FileSystemVersionException: HBase file layout needs to be upgraded. You have version null and I want version 8. Consult http://hbase.apache.org/book.html for further information about upgrading HBase. Is your hbase.rootdir valid? If so, you may need to run 'hbase hbck -fixVersionFile'.
    at org.apache.hadoop.hbase.util.FSUtils.checkVersion(FSUtils.java:446)
    at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:271)
    at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:151)
    at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:122)
    at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:860)
    at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2272)
    at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:581)
    at java.lang.Thread.run(Thread.java:748)

When I comment out the volume mount, it returns to normal. Is there anything wrong with this?

volumes:
  - ./hbase-data/data:/hbase-data/data

Missing License File

You mention an accompanying LICENSE file in some of the files in the repository, but there is no LICENSE file in the repository itself. I don't really care myself, but I'm not allowed to use anything that doesn't have a specific license file (ty lawyers).

Mac M1 Version

Hello,

I've tried to run this image on a Mac M1, but the entrypoint_new.sh does not run, and it does not give an error. The HBase shell just won't come up.

Thank you

Cannot start the container and successfully run hbase shell

Apologies if some of these issues will be obvious with more experience. I have tried several HBase containers (including a few authored by others) and so far no luck, with a variety of issues. Note I am running RHEL7 on VMware, and I started the container this way:

docker run -t -i harisekhon/hbase-dev /bin/bash

running ./start-hbase.sh --
localhost: /hbase/bin/zookeepers.sh: line 52: ssh: command not found

ssh is indeed not in /bin, not going to make much progress like this. Please advise, thanks.

SolrCloud: tail error

When started, it keeps saying:

tail: read error: Is a directory
tail: read error: Is a directory
tail: read error: Is a directory

The tail command is run on several directories:

tail -f /dev/null /solr/example/cloud/node1/logs/archived /solr/example/cloud/node1/logs/solr-8983-console.log /solr/example/cloud/node1/logs/solr.log /solr/example/cloud/node1/logs/solr_gc.log.0.current /solr/example/cloud/node2/logs/archived /solr/example/cloud/node2/logs/solr-7574-console.log /solr/example/cloud/node2/logs/solr.log /solr/example/cloud/node2/logs/solr_gc.log.0.current

  • /solr/example/cloud/node1/logs/archived
  • /solr/example/cloud/node2/logs/archived

Add install of package zip

I tried to build the pytools docker image but I had this error:

make spark-deps
make[4]: Entering directory '/github/pytools'
rm -vf spark-deps.zip
zip spark-deps.zip pylib
make[4]: zip: Command not found

I fixed this issue by adding the zip package to the apt-get install command.

Error in spark Docker-compose.yml? Error loading shared library ld-linux-x86-64.so.2: No such file or directory

I have confirmed this on 2 machines, in the spark folder:

docker-compose up
docker exec -ti spark_spark_1 /bin/bash
bash-4.3# bin/spark-shell
log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Using Spark's repl log4j profile: org/apache/spark/log4j-defaults-repl.properties
To adjust logging level use sc.setLogLevel("INFO")
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.6.2
      /_/

Using Scala version 2.10.5 (OpenJDK 64-Bit Server VM, Java 1.8.0_131)
Type in expressions to have them evaluated.
Type :help for more information.
Spark context available as sc.
18/01/26 15:00:39 WARN General: Plugin (Bundle) "org.datanucleus" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/spark-1.6.2-bin-hadoop2.6/lib/datanucleus-core-3.2.10.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/spark/lib/datanucleus-core-3.2.10.jar."
18/01/26 15:00:39 WARN General: Plugin (Bundle) "org.datanucleus.api.jdo" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/spark-1.6.2-bin-hadoop2.6/lib/datanucleus-api-jdo-3.2.6.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/spark/lib/datanucleus-api-jdo-3.2.6.jar."
18/01/26 15:00:39 WARN General: Plugin (Bundle) "org.datanucleus.store.rdbms" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/spark/lib/datanucleus-rdbms-3.2.9.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/spark-1.6.2-bin-hadoop2.6/lib/datanucleus-rdbms-3.2.9.jar."
18/01/26 15:00:39 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
18/01/26 15:00:39 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
18/01/26 15:00:45 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
18/01/26 15:00:45 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException
18/01/26 15:00:47 WARN General: Plugin (Bundle) "org.datanucleus" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/spark-1.6.2-bin-hadoop2.6/lib/datanucleus-core-3.2.10.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/spark/lib/datanucleus-core-3.2.10.jar."
18/01/26 15:00:47 WARN General: Plugin (Bundle) "org.datanucleus.store.rdbms" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/spark/lib/datanucleus-rdbms-3.2.9.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/spark-1.6.2-bin-hadoop2.6/lib/datanucleus-rdbms-3.2.9.jar."
18/01/26 15:00:47 WARN General: Plugin (Bundle) "org.datanucleus.api.jdo" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/spark-1.6.2-bin-hadoop2.6/lib/datanucleus-api-jdo-3.2.6.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/spark/lib/datanucleus-api-jdo-3.2.6.jar."
18/01/26 15:00:47 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
18/01/26 15:00:47 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
SQL context available as sqlContext.

scala>

scala> val lines = sc.textFile("README.md")
java.lang.IllegalArgumentException: java.lang.UnsatisfiedLinkError: /tmp/snappy-1.1.2-62459705-fdb3-414f-8be9-471659319a57-libsnappyjava.so: Error loading shared library ld-linux-x86-64.so.2: No such file or directory (needed by /tmp/snappy-1.1.2-62459705-fdb3-414f-8be9-471659319a57-libsnappyjava.so)
at org.apache.spark.io.SnappyCompressionCodec$.liftedTree1$1(CompressionCodec.scala:171)
at org.apache.spark.io.SnappyCompressionCodec$.org$apache$spark$io$SnappyCompressionCodec$$version$lzycompute(CompressionCodec.scala:168)
at org.apache.spark.io.SnappyCompressionCodec$.org$apache$spark$io$SnappyCompressionCodec$$version(CompressionCodec.scala:168)
at org.apache.spark.io.SnappyCompressionCodec.(CompressionCodec.scala:152)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:72)
at org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:65)
at org.apache.spark.broadcast.TorrentBroadcast.org$apache$spark$broadcast$TorrentBroadcast$$setConf(TorrentBroadcast.scala:73)
at org.apache.spark.broadcast.TorrentBroadcast.(TorrentBroadcast.scala:80)
at org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
at org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:63)
at org.apache.spark.SparkContext.broadcast(SparkContext.scala:1326)
at org.apache.spark.SparkContext$$anonfun$hadoopFile$1.apply(SparkContext.scala:1014)
at org.apache.spark.SparkContext$$anonfun$hadoopFile$1.apply(SparkContext.scala:1011)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
at org.apache.spark.SparkContext.withScope(SparkContext.scala:714)
at org.apache.spark.SparkContext.hadoopFile(SparkContext.scala:1011)
at org.apache.spark.SparkContext$$anonfun$textFile$1.apply(SparkContext.scala:832)
at org.apache.spark.SparkContext$$anonfun$textFile$1.apply(SparkContext.scala:830)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
at org.apache.spark.SparkContext.withScope(SparkContext.scala:714)
at org.apache.spark.SparkContext.textFile(SparkContext.scala:830)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.(:27)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.(:32)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.(:34)
at $iwC$$iwC$$iwC$$iwC$$iwC.(:36)
at $iwC$$iwC$$iwC$$iwC.(:38)
at $iwC$$iwC$$iwC.(:40)
at $iwC$$iwC.(:42)
at $iwC.(:44)
at (:46)
at .(:50)
at .()
at .(:7)
at .()
at $print()
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:657)
at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:665)
at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$loop(SparkILoop.scala:670)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:997)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
at org.apache.spark.repl.Main$.main(Main.scala:31)
at org.apache.spark.repl.Main.main(Main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.UnsatisfiedLinkError: /tmp/snappy-1.1.2-62459705-fdb3-414f-8be9-471659319a57-libsnappyjava.so: Error loading shared library ld-linux-x86-64.so.2: No such file or directory (needed by /tmp/snappy-1.1.2-62459705-fdb3-414f-8be9-471659319a57-libsnappyjava.so)
at java.lang.ClassLoader$NativeLibrary.load(Native Method)
at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1941)
at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1824)
at java.lang.Runtime.load0(Runtime.java:809)
at java.lang.System.load(System.java:1086)
at org.xerial.snappy.SnappyLoader.loadNativeLibrary(SnappyLoader.java:174)
at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:152)
at org.xerial.snappy.Snappy.(Snappy.java:47)
at org.apache.spark.io.SnappyCompressionCodec$.liftedTree1$1(CompressionCodec.scala:169)
... 72 more

scala>

docker-compose up in hadoop or hadoop-dev folder error

$ docker-compose up
Creating network "hadoop_default" with the default driver
Pulling hadoop (harisekhon/hadoop:latest)...
latest: Pulling from harisekhon/hadoop
d9aaf4d82f24: Already exists
71e193c229b6: Pull complete
34e052ae12c1: Pull complete
7c28f3b3ed5b: Pull complete
b9aeb45a846c: Pull complete
20d3342cd6a7: Pull complete
96ad78d93f88: Pull complete
39f02a9b4821: Pull complete
934c7436ce6e: Pull complete
f4001b22b79b: Pull complete
ae9ff6a67139: Pull complete
Digest: sha256:6c2668f5e59d4b870352cf52f1bcd75945eebe88fb81a1ea3df2464a65951ee6
Status: Downloaded newer image for harisekhon/hadoop:latest
Creating hadoop_hadoop_1 ... done
Attaching to hadoop_hadoop_1
hadoop_1 | /bin/sh: error while loading shared libraries: /lib64/libdl.so.2: invalid ELF header
hadoop_hadoop_1 exited with code 127

Connectivity issues in hbase-dev 1.3

We use your hbase-dev image in our integration tests and are getting connectivity issues like Caused by: java.net.ConnectException: Connection refused.
The point of using a versioned image instead of latest was that it would not be updated :)

Could you please revert it back to the image that it was before your update? Or point me to a git commit hash that I can build from myself?

@HariSekhon

Updated hbase:1.4 image causing connection failures

An update to the HariSekhon/hbase:1.4 image broke our Alpakka hbase connector integration test some time after March 8th (last successful hbase integration test build). An earlier cached version of the image (from 2 months ago) seems to work fine.

akka/alpakka#2185

Our test clients are failing with several different error messages, but I think the underlying errors are connection timeouts. We update our hosts file to point hbase to 127.0.0.1, but this no longer seems to work locally or on Travis, though it did with the old cached version I had.

A connection timeout:

[error] Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions:
[error] Wed Mar 11 10:29:06 EDT 2020, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=68437: Call to hbase/127.0.0.1:16020 failed on connection exception: java.net.ConnectException: Connection refused row 'person2,,00000000000000' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=hbase,16020,1583936752151, seqNum=0
[error]     at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:329)
[error]     at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:242)
[error]     at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:58)
[error]     at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:219)
[error]     at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:275)
[error]     at org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:436)
[error]     at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:310)
[error]     at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:196)
[error]     at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:89)
[error]     at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isTableAvailable(ConnectionManager.java:1057)
[error]     at org.apache.hadoop.hbase.client.HBaseAdmin.isTableAvailable(HBaseAdmin.java:1537)
[error]     at akka.stream.alpakka.hbase.impl.HBaseCapabilities.$anonfun$getOrCreateTable$1(HBaseCapabilities.scala:53)
[error]     at akka.stream.alpakka.hbase.impl.HBaseCapabilities.twr(HBaseCapabilities.scala:26)
[error]     at akka.stream.alpakka.hbase.impl.HBaseCapabilities.twr$(HBaseCapabilities.scala:24)
[error]     at akka.stream.alpakka.hbase.impl.HBaseFlowStage$$anon$1.twr(HBaseFlowStage.scala:25)
[error]     at akka.stream.alpakka.hbase.impl.HBaseCapabilities.getOrCreateTable(HBaseCapabilities.scala:51)
[error]     at akka.stream.alpakka.hbase.impl.HBaseCapabilities.getOrCreateTable$(HBaseCapabilities.scala:49)
[error]     at akka.stream.alpakka.hbase.impl.HBaseFlowStage$$anon$1.getOrCreateTable(HBaseFlowStage.scala:25)
[error]     at akka.stream.alpakka.hbase.impl.HBaseFlowStage$$anon$1.table$lzycompute(HBaseFlowStage.scala:31)
[error]     at akka.stream.alpakka.hbase.impl.HBaseFlowStage$$anon$1.akka$stream$alpakka$hbase$impl$HBaseFlowStage$$anon$$table(HBaseFlowStage.scala:31)
[error]     at akka.stream.alpakka.hbase.impl.HBaseFlowStage$$anon$1$$anon$3.$anonfun$onPush$1(HBaseFlowStage.scala:48)
[error]     at scala.collection.Iterator.foreach(Iterator.scala:941)
[error]     at scala.collection.Iterator.foreach$(Iterator.scala:941)
[error]     at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
[error]     at scala.collection.IterableLike.foreach(IterableLike.scala:74)
[error]     at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
[error]     at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
[error]     at akka.stream.alpakka.hbase.impl.HBaseFlowStage$$anon$1$$anon$3.onPush(HBaseFlowStage.scala:46)
[error]     at akka.stream.impl.fusing.GraphInterpreter.processPush(GraphInterpreter.scala:523)
[error]     at akka.stream.impl.fusing.GraphInterpreter.processEvent(GraphInterpreter.scala:480)
[error]     at akka.stream.impl.fusing.GraphInterpreter.execute(GraphInterpreter.scala:376)
[error]     at akka.stream.impl.fusing.GraphInterpreterShell.runBatch(ActorGraphInterpreter.scala:606)
[error]     at akka.stream.impl.fusing.ActorGraphInterpreter$SimpleBoundaryEvent.execute(ActorGraphInterpreter.scala:47)
[error]     at akka.stream.impl.fusing.ActorGraphInterpreter$SimpleBoundaryEvent.execute$(ActorGraphInterpreter.scala:43)
[error]     at akka.stream.impl.fusing.ActorGraphInterpreter$BatchingActorInputBoundary$OnNext.execute(ActorGraphInterpreter.scala:85)
[error]     at akka.stream.impl.fusing.GraphInterpreterShell.processEvent(ActorGraphInterpreter.scala:581)
[error]     at akka.stream.impl.fusing.ActorGraphInterpreter.akka$stream$impl$fusing$ActorGraphInterpreter$$processEvent(ActorGraphInterpreter.scala:749)
[error]     at akka.stream.impl.fusing.ActorGraphInterpreter$$anonfun$receive$1.applyOrElse(ActorGraphInterpreter.scala:764)
[error]     at akka.actor.Actor.aroundReceive(Actor.scala:539)
[error]     at akka.actor.Actor.aroundReceive$(Actor.scala:537)
[error]     at akka.stream.impl.fusing.ActorGraphInterpreter.aroundReceive(ActorGraphInterpreter.scala:671)
[error]     at akka.actor.ActorCell.receiveMessage(ActorCell.scala:612)
[error]     at akka.actor.ActorCell.invoke(ActorCell.scala:581)
[error]     at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:268)
[error]     at akka.dispatch.Mailbox.run(Mailbox.scala:229)
[error]     ... 3 more
[error] Caused by: java.net.SocketTimeoutException: callTimeout=60000, callDuration=68437: Call to hbase/127.0.0.1:16020 failed on connection exception: java.net.ConnectException: Connection refused row 'person2,,00000000000000' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=hbase,16020,1583936752151, seqNum=0
[error]     at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:178)
[error]     at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:80)
[error]     ... 3 more
[error] Caused by: java.net.ConnectException: Call to hbase/127.0.0.1:16020 failed on connection exception: java.net.ConnectException: Connection refused
[error]     at org.apache.hadoop.hbase.ipc.IPCUtil.wrapException(IPCUtil.java:165)
[error]     at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:389)
[error]     at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:94)
[error]     at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:409)
[error]     at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:405)
[error]     at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103)
[error]     at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118)
[error]     at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callMethod(AbstractRpcClient.java:422)
[error]     at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:327)
[error]     at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$200(AbstractRpcClient.java:94)
[error]     at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:571)
[error]     at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:37059)
[error]     at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:405)
[error]     at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:274)
[error]     at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:62)
[error]     at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:219)
[error]     at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:388)
[error]     at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:362)
[error]     at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:142)
[error]     ... 4 more
[error] Caused by: java.net.ConnectException: Connection refused
[error]     at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
[error]     at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
[error]     at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
[error]     at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
[error]     at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
[error]     at org.apache.hadoop.hbase.ipc.BlockingRpcConnection.setupConnection(BlockingRpcConnection.java:256)
[error]     at org.apache.hadoop.hbase.ipc.BlockingRpcConnection.setupIOstreams(BlockingRpcConnection.java:437)
[error]     at org.apache.hadoop.hbase.ipc.BlockingRpcConnection.writeRequest(BlockingRpcConnection.java:540)
[error]     at org.apache.hadoop.hbase.ipc.BlockingRpcConnection.tracedWriteRequest(BlockingRpcConnection.java:520)
[error]     at org.apache.hadoop.hbase.ipc.BlockingRpcConnection.access$200(BlockingRpcConnection.java:85)
[error]     at org.apache.hadoop.hbase.ipc.BlockingRpcConnection$4.run(BlockingRpcConnection.java:724)
[error]     at org.apache.hadoop.hbase.ipc.HBaseRpcControllerImpl.notifyOnCancel(HBaseRpcControllerImpl.java:240)
[error]     at org.apache.hadoop.hbase.ipc.BlockingRpcConnection.sendRequest(BlockingRpcConnection.java:699)
[error]     at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callMethod(AbstractRpcClient.java:420)
[error]     ... 15 more

Another test complaining about HADOOP_HOME. This was never necessary before, so it seems odd that it would be now.

--> [docs.scaladsl.HBaseStageSpec: HBase stage must write write entries to a sink] Start of log messages of test that [Failed(org.scalatest.concurrent.Futures$FutureConcept$$anon$1: A timeout occurred waiting for a future to complete. Queried 11 times, sleeping 500000000 nanoseconds between each query.)]
10:27:51.314 INFO  [default-dispatcher-2] akka.event.slf4j.Slf4jLogger          Slf4jLogger started
10:27:51.327 DEBUG [default-dispatcher-2] akka.event.EventStream                logger log1-Slf4jLogger started
10:27:51.329 DEBUG [default-dispatcher-2] akka.event.EventStream                Default Loggers started
10:27:51.492 DEBUG [pool-1-thread-1     ] logcapture                            enabling CapturingAppender
10:27:51.631 DEBUG [pool-1-thread-1     ] org.apache.hadoop.util.Shell          Failed to detect a valid hadoop home directory
java.io.IOException: HADOOP_HOME or hadoop.home.dir are not set.
        at org.apache.hadoop.util.Shell.checkHadoopHome(Shell.java:329)
        at org.apache.hadoop.util.Shell.<clinit>(Shell.java:354)
        at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:80)
        at org.apache.hadoop.conf.Configuration.getBoolean(Configuration.java:1437)
        at org.apache.hadoop.hbase.HBaseConfiguration.checkDefaultsVersion(HBaseConfiguration.java:67)
        at org.apache.hadoop.hbase.HBaseConfiguration.addHbaseResources(HBaseConfiguration.java:81)
        at org.apache.hadoop.hbase.HBaseConfiguration.create(HBaseConfiguration.java:96)
        at docs.scaladsl.HBaseStageSpec.<init>(HBaseStageSpec.scala:102)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at java.lang.Class.newInstance(Class.java:442)
        at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:450)
        at sbt.ForkMain$Run.lambda$runTest$1(ForkMain.java:304)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)

The hbase stdout indicates a connection attempt is made for each of our tests, but it does not succeed.

hbase_1                        | 2020-03-11 14:35:40,351 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] server.NIOServerCnxnFactory: Accepted socket connection from /192.168.160.1:53018
hbase_1                        | 2020-03-11 14:35:40,357 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] server.ZooKeeperServer: Client attempting to establish new session at /192.168.160.1:53018

Here's the full log of hbase stdout: https://pastebin.com/x0dy7d8J

The hash of the image we're currently using.

harisekhon/hbase                                                                                                                                 1.4                                        0ae79dcd8e6b        6 days ago          243MB

Please let me know if I can provide any additional troubleshooting info or context.

HBase image shuts down if non-interactive

hbase_1  | HBase Shell; enter 'help<RETURN>' for list of supported commands.
hbase_1  | Type "exit<RETURN>" to leave the HBase Shell
hbase_1  | Version 1.2.2, r3f671c1ead70d249ea4598f1bbcc5151322b3a13, Fri Jul  1 08:28:55 CDT 2016
hbase_1  | 


hbase_1  | stopping hbase....................
hbase_1  | localhost: /hbase/bin/zookeepers.sh: line 52: ssh: command not found
hbase_1  | pkill: invalid option -- 'i'
hbase_1  | 
hbase_1  | Usage:
hbase_1  |  pkill [options] <pattern>
hbase_1  | 
hbase_1  | Options:
hbase_1  |  -<sig>, --signal <sig>    signal to send (either number or name)
hbase_1  |  -e, --echo                display what is killed
hbase_1  |  -c, --count               count of matching processes
hbase_1  |  -f, --full                use full process name to match
hbase_1  |  -g, --pgroup <PGID,...>   match listed process group IDs
hbase_1  |  -G, --group <GID,...>     match real group IDs
hbase_1  |  -n, --newest              select most recently started
hbase_1  |  -o, --oldest              select least recently started
hbase_1  |  -P, --parent <PPID,...>   match only child processes of the given parent
hbase_1  |  -s, --session <SID,...>   match session IDs
hbase_1  |  -t, --terminal <tty,...>  match by controlling terminal
hbase_1  |  -u, --euid <ID,...>       match by effective IDs
hbase_1  |  -U, --uid <ID,...>        match by real IDs
hbase_1  |  -x, --exact               match exactly with the command name
hbase_1  |  -F, --pidfile <file>      read PIDs from file
hbase_1  |  -L, --logpidfile          fail if PID file is not locked
hbase_1  |  --ns <PID>                match the processes that belong to the same
hbase_1  |                            namespace as <pid>
hbase_1  |  --nslist <ns,...>         list which namespaces will be considered for
hbase_1  |                            the --ns option.
hbase_1  |                            Available namespaces: ipc, mnt, net, pid, user, uts
hbase_1  | 
hbase_1  |  -h, --help     display this help and exit
hbase_1  |  -V, --version  output version information and exit
hbase_1  | 
hbase_1  | For more details see pgrep(1).

Unable to create a volume for solr data

I tried to create volumes on /solr/example/cloud/ but the permissions are incorrect.
Could you please add a "chmod" in the Dockerfile to set the right permissions on /solr/example/cloud/ before starting Solr?

fstat unimplemented unsupported or native support failed to load

NotImplementedError: fstat unimplemented unsupported or native support failed to load; see http://wiki.jruby.org/Native-Libraries
initialize at org/jruby/RubyIO.java:1013
open at org/jruby/RubyIO.java:1154
initialize at uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb/input-method.rb:141
initialize at uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb/context.rb:70
initialize at uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/irb.rb:426
initialize at /hbase/lib/ruby/irb/hirb.rb:47
start at /hbase/bin/../bin/hirb.rb:181

at /hbase/bin/../bin/hirb.rb:193

Change the value of hbase.regionserver.thrift.framed for security?

Thanks for providing this useful container.

I have a question about one HBase config: it seems you keep hbase.regionserver.thrift.framed at its default value of false.

However, the official documentation recommends setting hbase.regionserver.thrift.framed to true for security: "This is the recommended transport for thrift servers and requires a similar setting on the client side. Changing this to false will select the default transport, vulnerable to DoS when malformed requests are issued due to THRIFT-601."

It is also recommended in Cloudera's troubleshooting page to set hbase.regionserver.thrift.framed and hbase.regionserver.thrift.compact to true.

Shall we change the two settings to true?
Thanks.

Support for Spark 2.x?

Perhaps I'm reading it wrong, but it looks like the pre-built images for spark are only for 1.3-1.6...? Spark 2.x would be an improvement.

I'm happy to try to help with this, though I don't know where your pre-builts are configured, nor how to run regression tests.

Docker Image

Hey, this is basically an inquiry. I'm a bit new to Docker and just trying to learn and use it.
What I'm looking for is a docker image or file which has a combination of:

  1. Java
  2. Scala
  3. Eclipse
  4. Maven
  5. Hadoop

If you could help me out or share the path to an image having as many of these as possible in one, that would be great.

How to map

The interface can be accessed through port 16301 in VMware, but not from an external environment. Is there a mapping that allows Docker's services to be exposed externally?

Error in Dockerfiles/rabbitmq-cluster/rabbitmq-cluster ?

Hi,

I'm using your compose for rabbitmq and ran into a little bug:

when running non-RAM workers, the join_cluster fails with:

joining cluster via seed rabbit_manager
Error: operation join_cluster used with invalid parameter: ["rabbit@rabbit_manager", []]

And everything is working on RAM nodes.
I think it's about an extra space when running the join_cluster without $RAM set.

👍 For your work !

Drill + Zookeeper : Unable to persist configuration

Hello
I am running the docker-compose command fine in the drill project but I can't find any way to persist Drill configurations.
I can modify storage plugin configuration in the Drill web interface, but on docker-compose down it is all lost.
Is there any way to create a volume or any other persistence solution?

Add HBase 2

HBase 2 was released recently so maybe worth adding.

And I take this chance to thank you, excellent job with all your images, you have literally saved me and others tons of time by nicely packaging some of these projects!

Error while running apache drill with external zookeeper

I am trying to run apache drill image and get this

docker run harisekhon/apache-drill

Running non-interactively, will not open Apache Drill shell

For Apache Drill shell start this image with 'docker run -t -i' switches

Otherwise you will need to have a separate ZooKeeper container linked (one is available from harisekhon/zookeeper) and specify:

docker run -e ZOOKEEPER_HOST=<host>:2181 supervisord -n

I have a ZooKeeper already running on my localhost, so I try again:

docker run -e ZOOKEEPER_HOST=localhost:2181  supervisord -n harisekhon/apache-drill
Unable to find image 'supervisord:latest' locally
Pulling repository docker.io/library/supervisord
docker: Error: image library/supervisord:latest not found.

I thought maybe the order was wrong and I needed to specify the image first:

docker run  harisekhon/apache-drill -e ZOOKEEPER_HOST=localhost:2181  supervisord -n
container_linux.go:247: starting container process caused "exec: \"-e\": executable file not found in $PATH"
docker: Error response from daemon: invalid header field value "oci runtime error: container_linux.go:247: starting container process caused \"exec: \\\"-e\\\": executable file not found in $PATH\"\n".

Any idea what can be done to fix this?
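
A guess at the intended invocation, based only on the usage message above and not verified against the image: docker options such as -e must come before the image name, and the command (supervisord -n) after it. Note also that localhost inside a container refers to the container itself, so reaching a ZooKeeper on the Docker host needs the host's address or host networking:

  # options before the image, command after it; --network host (Linux only) lets
  # "localhost" inside the container reach a ZooKeeper running on the host
  docker run --network host -e ZOOKEEPER_HOST=localhost:2181 harisekhon/apache-drill supervisord -n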

Solr Upgrade

Hari,

Could you push an update to Solr 6.6 on all the Solr projects? (I'm particularly interested in solrcloud but I use Solr all the time and would find the others useful).

Matthew

hbase: docker-compose up does not work

In the hbase dir, I run sudo docker-compose up.

ZooKeeper gives:

Got user-level KeeperException when processing sessionid:0x100000d1c840001 type:setData cxid:0x41 zxid:0x22 txntype:-1 reqpath:n/a Error Path:/hbase/meta-region-server Error:KeeperErrorCode = NoNode for /hbase/meta-region-server

HMaster gives:

hbase-master_1 | 2021-11-06 02:29:51,523 WARN [ProcExecTimeout] assignment.AssignmentManager: STUCK Region-In-Transition rit=OPENING, location=hbase_hbase-regionserver_1.hbase_default,16020,1636165658608, table=hbase:namespace, region=8bd53a267bdd63e7157795522f5cb0a4

hbase shell:

hbase(main):001:0> list_namespace
NAMESPACE

ERROR: org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2977)
at org.apache.hadoop.hbase.master.HMaster.getNamespaces(HMaster.java:3273)
at org.apache.hadoop.hbase.master.MasterRpcServices.listNamespaceDescriptors(MasterRpcServices.java:1233)
at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)

For usage try 'help "list_namespace"'
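
The "Master is initializing" and STUCK Region-In-Transition messages usually mean the master cannot reach a live region server, often due to hostname resolution between containers. Some diagnostics worth running, with the service names guessed from the log prefixes above:

  docker-compose ps                                   # are all services still up?
  docker-compose logs --tail=50 hbase-regionserver    # did the region server start and register?
  # does the region server hostname from the log resolve inside the master container?
  docker-compose exec hbase-master getent hosts hbase_hbase-regionserver_1.hbase_default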

Error starting userland proxy: Bind for 0.0.0.0:50090: unexpected error Permission denied

Hi HariSekhon,

I'm having a problem starting the hadoop docker-compose, with the error below:

λ  docker-compose up -d
Starting hadoop_hadoop_1 ... error

ERROR: for hadoop_hadoop_1  Cannot start service hadoop: driver failed programming external connectivity on endpoint hadoop_hadoop_1 (114a6d530bf9e4f6eb8bd3528f2ac847feb1cf089c76e4733a3c00e99ee97f32): Error starting userland proxy: Bind for 0.0.0.0:50090: unexpected error Permission denied

ERROR: for hadoop  Cannot start service hadoop: driver failed programming external connectivity on endpoint hadoop_hadoop_1 (114a6d530bf9e4f6eb8bd3528f2ac847feb1cf089c76e4733a3c00e99ee97f32): Error starting userland proxy: Bind for 0.0.0.0:50090: unexpected error Permission denied
ERROR: Encountered errors while bringing up the project.

I'd appreciate any advice.
Jason
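
A guess at the cause, since the λ prompt suggests Windows: with Docker Desktop / Hyper-V, "Permission denied" on a port bind often means the host port falls inside a Windows reserved port range. A diagnostic plus a possible workaround (the replacement port number is arbitrary):

  # run in an elevated PowerShell: show TCP port ranges reserved by Windows/Hyper-V
  netsh interface ipv4 show excludedportrange protocol=tcp
  # workaround: remap the host side of the port in docker-compose.yml,
  # e.g. change "50090:50090" to something like "55090:50090"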

jython pip issues

pip is fairly crucial for Python in general, and other DockerHub Jython images do in fact bundle it, but harisekhon/jython:latest does not contain pip. Usually one would use ensurepip to retrieve a recent version of pip, though:

$ jython -m ensurepip --upgrade
Ignoring ensurepip failure: pip 1.6 requires SSL/TLS

Tracing through the Jython code, this message shows up if import ssl raises an exception, and trying that import directly results in a missing encodings module:

$ jython
Jython 2.7.0 (default:9987c746f838, Apr 29 2015, 02:25:11) 
[OpenJDK 64-Bit Server VM (Oracle Corporation)] on java1.8.0_171
Type "help", "copyright", "credits" or "license" for more information.
>>> import ssl
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/fwierzbicki/hg/jython/jython/dist/Lib/ssl.py", line 18, in <module>
  File "/Users/fwierzbicki/hg/jython/jython/dist/Lib/_socket.py", line 2, in <module>
ImportError: No module named encodings
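
An untested workaround sketch: newer Jython releases bootstrap pip more reliably, so rebuilding the image against a newer installer may avoid the SSL/encodings failure (the installer version and flags below are assumptions about the standard Jython installer, not this repo's build):

  # hypothetical rebuild step: silently install a newer Jython into /opt/jython,
  # then retry ensurepip from that installation
  java -jar jython-installer-2.7.2.jar -s -d /opt/jython -t standard
  /opt/jython/bin/jython -m ensurepip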

hbase 0.98 isn't working

The 0.98 image for HBase isn't working.
This is the command I run to start the container:
docker run -ti -p 2181:2181 -p 8080:8080 -p 8085:8085 -p 9090:9090 -p 9095:9095 -p 16000:16000 -p 16010:16010 -p 16201:16201 -p 16301:16301 harisekhon/hbase:0.98

Then, once it starts, I try to create a namespace with create_namespace 'crawler' and get the following error:

hbase(main):001:0> create_namespace 'crawler'
2017-08-22 19:03:57,379 WARN  [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

ERROR: java.io.IOException
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2247)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:104)
        at org.apache.hadoop.hbase.ipc.FifoRpcScheduler$1.run(FifoRpcScheduler.java:74)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NullPointerException
        at org.apache.hadoop.hbase.master.HMaster.createNamespace(HMaster.java:3524)
        at org.apache.hadoop.hbase.master.HMaster.createNamespace(HMaster.java:3430)
        at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:44958)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2195)
        ... 7 more

Here is some help for this command:
Create namespace; pass namespace name,
and optionally a dictionary of namespace configuration.
Examples:

  hbase> create_namespace 'ns1'
  hbase> create_namespace 'ns1', {'PROPERTY_NAME'=>'PROPERTY_VALUE'}


hbase(main):002:0>

How can I delete all unused Docker containers?

docker rm bb2afb1ec924
docker rm 26b7e3c4e010
docker rm 6932e19f8031
docker rm fb72d73d1675
docker rm 9a658133ced9

Removing them one by one like this is not a good workflow every time I want to check whether a Cassandra container is available or not; see the sketch after the output below.

root@ubuntu20dockers:~/Dockerfiles/nagios-plugins-cassandra# docker ps -a
CONTAINER ID   IMAGE                                 COMMAND                  CREATED             STATUS                         PORTS     NAMES
26b7e3c4e010   harisekhon/nagios-plugins             "find_active_cassand…"   33 seconds ago      Exited (1) 31 seconds ago                beautiful_fermat
6932e19f8031   harisekhon/nagios-plugins             "find_active_cassand…"   2 minutes ago       Exited (3) 2 minutes ago                 quizzical_nightingale
fb72d73d1675   harisekhon/nagios-plugins             "find_active_cassand…"   4 minutes ago       Exited (3) 4 minutes ago                 blissful_perlman
bb2afb1ec924   harisekhon/nagios-plugins             "find_active_cassand…"   4 minutes ago       Exited (0) 4 minutes ago                 sleepy_darwin
9a658133ced9   harisekhon/nagios-plugins             "find_active_cassand…"   10 minutes ago      Exited (3) 10 minutes ago                suspicious_stonebraker
a623dd501d39   harisekhon/nagios-plugins             "check_zaloni_bedroc…"   29 minutes ago      Exited (3) 29 minutes ago                musing_neumann
566ce41b26e3   harisekhon/nagios-plugins:cassandra   "check_zaloni_bedroc…"   31 minutes ago      Exited (4) 31 minutes ago                pedantic_maxwell
812501ccb4b3   harisekhon/nagios-plugins:cassandra   "check_zaloni_bedroc…"   38 minutes ago      Exited (4) 38 minutes ago                wonderful_kepler
d5de82e52415   harisekhon/nagios-plugins:cassandra   "/bin/bash -c 'find …"   43 minutes ago      Exited (0) 43 minutes ago                vigorous_kalam
2d98bcbfeec7   harisekhon/cassandra-dev:latest       "/bin/sh -c /entrypo…"   46 minutes ago      Exited (137) 35 seconds ago              cassandra-dev_cassandra_1
86d472f28229   harisekhon/nagios-plugins             "check_ssl_cert.pl -V"   47 minutes ago      Exited (3) 47 minutes ago                compassionate_chatelet
69f03cedcf49   harisekhon/nagios-plugins             "check_ssl_cert.pl -…"   49 minutes ago      Exited (3) 49 minutes ago                interesting_jackson
878cb6c2a5a8   harisekhon/nagios-plugins             "/list_plugins.sh"       49 minutes ago      Exited (0) 49 minutes ago                sleepy_burnell
a66982209491   harisekhon/pytools                    "dockerhub_search.py…"   49 minutes ago      Exited (4) 49 minutes ago                peaceful_galois
50571f96794a   jasonrivers/nagios:latest             "/usr/local/bin/star…"   About an hour ago   Exited (4) About an hour ago             nagios4

To delete a container:

root@ubuntu20dockers:~/Dockerfiles/nagios-plugins-cassandra# docker rm 26b7e3c4e010
26b7e3c4e010
root@ubuntu20dockers:~/Dockerfiles/nagios-plugins-cassandra# docker rm 6932e19f8031
6932e19f8031
root@ubuntu20dockers:~/Dockerfiles/nagios-plugins-cassandra# docker rm fb72d73d1675
fb72d73d1675
root@ubuntu20dockers:~/Dockerfiles/nagios-plugins-cassandra# docker rm 9a658133ced9
9a658133ced9
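
These are stock Docker CLI commands for bulk cleanup rather than anything specific to this repo; a short sketch:

  # remove all stopped containers in one go (prompts for confirmation)
  docker container prune
  # or remove only containers that have exited
  docker rm $(docker ps -aq --filter status=exited)
  # running one-off checks with --rm avoids leaving stopped containers behind at all
  docker run --rm harisekhon/nagios-plugins <plugin> <args>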
