

clusterdock is a framework for creating Docker-based container clusters

Home Page: http://clusterdock.rtfd.io

License: Apache License 2.0

Python 97.17% Makefile 2.83%
big-data docker hadoop

clusterdock's Introduction

clusterdock

clusterdock is a Python 3 project that enables users to build, start, and manage Docker container-based clusters. It uses a pluggable system in which new cluster types are defined in folders called topologies, and it is a swell project, if I may say so myself.


"I hate reading, make this quick."

Before doing anything, install a recent version of Docker on your machine and install clusterdock:

$ pip3 install clusterdock

Next, clone a clusterdock topology to your machine. For this example, we'll use the nodebase topology and start a two-node cluster:

$ git clone https://github.com/clusterdock/topology_nodebase.git
$ clusterdock start topology_nodebase
2017-08-03 10:04:18 PM clusterdock.models   INFO     Starting cluster on network (cluster) ...
2017-08-03 10:04:18 PM clusterdock.models   INFO     Starting node node-1.cluster ...
2017-08-03 10:04:19 PM clusterdock.models   INFO     Starting node node-2.cluster ...
2017-08-03 10:04:20 PM clusterdock.models   INFO     Cluster started successfully (total time: 00:00:01.621).

To list cluster nodes:

$ clusterdock ps

For cluster `famous_hyades` on network cluster the node(s) are:
CONTAINER ID     HOST NAME            PORTS              STATUS        CONTAINER NAME          VERSION    IMAGE
a205d88beb       node-2.cluster                          running       nervous_sinoussi        1.3.3      clusterdock/topology_nodebase:centos6.6
6f2825c596       node-1.cluster       8080->80/tcp       running       priceless_franklin      1.3.3      clusterdock/topology_nodebase:centos6.6

To SSH into a node and look around:

$ clusterdock ssh node-1.cluster
[root@node-1 ~]# ls -l / | head
total 64
dr-xr-xr-x   1 root root 4096 May 19 20:48 bin
drwxr-xr-x   5 root root  360 Aug  4 05:04 dev
drwxr-xr-x   1 root root 4096 Aug  4 05:04 etc
drwxr-xr-x   2 root root 4096 Sep 23  2011 home
dr-xr-xr-x   7 root root 4096 Mar  4  2015 lib
dr-xr-xr-x   1 root root 4096 May 19 20:48 lib64
drwx------   2 root root 4096 Mar  4  2015 lost+found
drwxr-xr-x   2 root root 4096 Sep 23  2011 media
drwxr-xr-x   2 root root 4096 Sep 23  2011 mnt
[root@node-1 ~]# exit

To see full usage instructions for the start action, use -h/--help:

$ clusterdock start topology_nodebase -h
usage: clusterdock start [-h] [--node-disks map] [--always-pull]
                         [--namespace ns] [--network nw] [-o sys] [-r url]
                         [--nodes node [node ...]]
                         topology

Start a nodebase cluster

positional arguments:
  topology              A clusterdock topology directory

optional arguments:
  -h, --help            show this help message and exit
  --always-pull         Pull latest images, even if they're available locally
                        (default: False)
  --namespace ns        Namespace to use when looking for images (default:
                        clusterdock)
  --network nw          Docker network to use (default: cluster)
  -o sys, --operating-system sys
                        Operating system to use for cluster nodes (default:
                        centos6.6)
  -r url, --registry url
                        Docker Registry from which to pull images (default:
                        None)

nodebase arguments:
  --node-disks map      Map of node names to block devices (default: None)

Node groups:
  --nodes node [node ...]
                        Nodes of the nodes group (default: ['node-1',
                        'node-2'])

When you're done and want to clean up:

$ clusterdock manage nuke
2017-08-03 10:06:28 PM clusterdock.actions.manage INFO     Stopping and removing clusterdock containers ...
2017-08-03 10:06:30 PM clusterdock.actions.manage INFO     Removed user-defined networks ...

To see full usage instructions for the build action, use -h/--help:

$ clusterdock build topology_nodebase -h
usage: clusterdock build [--network nw] [-o sys] [--repository repo] [-h]
                         topology

Build images for the nodebase topology

positional arguments:
  topology              A clusterdock topology directory

optional arguments:
  --network nw          Docker network to use (default: cluster)
  -o sys, --operating-system sys
                        Operating system to use for cluster nodes (default:
                        None)
  --repository repo     Docker repository to use for committing images
                        (default: docker.io/clusterdock)
  -h, --help            show this help message and exit

clusterdock's People

Contributors

dimaspivak, kirtiv1, smazumdar, srids


clusterdock's Issues

Move cdh topology out of framework

The only topology appropriate for this repository would be the nodebase one since it can act as a quickstart example. The cdh topology should be moved out into its own repository named following the convention of topology followed by a descriptive name (e.g. topology_cdh).

Clusterdock does not work on ZSH

And I'm assuming it doesn't work in other shells either :(

jarcec@ip-10-0-0-164 ~ % clusterdock_ssh node-1.cluster                                                                                                                  (1.8.0_111-b14) (ruby-2.4.0-rc1) [16:00:27]
Password:
clusterdock_ssh:22: = not found
jarcec@ip-10-0-0-164 ~ % bash                                                                                                                                            (1.8.0_111-b14) (ruby-2.4.0-rc1) [16:00:38]
(reverse-i-search)`sou': source ~/streamsets/stf/testframework.sh
bash-3.2$ source  ~/streamsets/clusterdock/clusterdock.sh
bash-3.2$ clusterdock_ssh node-1.cluster
Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
Last login: Fri Jul 29 23:45:16 2016 from 192.168.124.1
[root@node-1 ~]#

Reduce size of Docker image

Right now, the clusterdock/framework:latest image is 184 MB in size. Let's see if we can reduce this a bit...

Pre-create a sample schemaless collection for CDH Solr

To support testing or working with sample CDH Solr documents, a schemaless collection can be pre-created. The Bash steps below create a sample schemaless Solr collection called sample_collection. (Note: the sed command below uncomments the "df" field so that "id" is used as the default field.)
More about this feature is documented in:
https://www.cloudera.com/documentation/enterprise/5-10-x/topics/search_validate_deploy_solr_rest_api.html
https://www.cloudera.com/documentation/enterprise/5-10-x/topics/search_solrctl_managing_solr.html#concept_l3y_txb_mt
https://www.cloudera.com/documentation/enterprise/5-10-x/topics/search_faq.html#faq_search_general_schemalesserror

solrctl instancedir --generate $HOME/sample_collection_solr_configs -schemaless

sed -i 's|<!-- *<str name="df">.*</str> *-->|<str name="df">id</str>|' $HOME/sample_collection_solr_configs/conf/solrconfig.xml

solrctl instancedir --create sample_collection $HOME/sample_collection_solr_configs

solrctl collection --create sample_collection -s 1 -c sample_collection

profile.cfg shouldn't be a cfg file

My Hackathon-time decision to use cfg as the format for the profile is a regrettable one, since it has resulted in needing a verbose format for specifying command line arguments, e.g. from the Apache HBase topology:

arg.hbase-version
arg.hbase-version.help = The label to use when identifying the version of HBase
arg.hbase-version.metavar = ver

Any objections to switching this over to YAML across the board? We then get the readability of cfg, but with the nesting support of JSON (and with support for comments).
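
For illustration, here's a rough sketch (not an existing clusterdock API) of how the HBase argument above could look in YAML, loaded with PyYAML; the key layout is made up to mirror the cfg example:

import yaml  # PyYAML, assumed to be available

profile_yaml = """
args:
  hbase-version:
    help: The label to use when identifying the version of HBase
    metavar: ver
"""

profile = yaml.safe_load(profile_yaml)
print(profile['args']['hbase-version']['metavar'])  # prints: ver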

Convert project to Python 3

Python 2.x is for legacy code, so why not update this project to work with the Python it should have been using from the start? Especially since everything here is intended to run in Docker containers, it can't hurt and will likely only help maintainability moving forward.

clusterdock on Mac OS

I'm getting an error on Mac OS while running clusterdock start topology_nodebase

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/docker/api/client.py", line 222, in _raise_for_status
    response.raise_for_status()
  File "/usr/local/lib/python3.6/site-packages/requests/models.py", line 935, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 502 Server Error: Bad Gateway for url: http+docker://localunixsocket/v1.30/containers/ca6e0f29389edd3f1f61ab86585802dff93de39bb78ac1ef498fbefe7a1d3128/start

During handling of the above exception, another exception occurred:


Traceback (most recent call last):
  File "/usr/local/bin/clusterdock", line 11, in <module>
    load_entry_point('clusterdock==1.3.2', 'console_scripts', 'clusterdock')()
  File "/usr/local/lib/python3.6/site-packages/clusterdock/cli.py", line 175, in main
    action.main(args)
  File "/Users/sany/phdata/topology_nodebase/start.py", line 37, in main
    cluster.start(args.network)
  File "/usr/local/lib/python3.6/site-packages/clusterdock/models.py", line 99, in start
    node.start(self.network)
  File "/usr/local/lib/python3.6/site-packages/clusterdock/models.py", line 327, in start
    client.api.start(container=container_id)
  File "/usr/local/lib/python3.6/site-packages/docker/utils/decorators.py", line 19, in wrapped
    return f(self, resource_id, *args, **kwargs)
  File "/usr/local/lib/python3.6/site-packages/docker/api/container.py", line 1085, in start
    self._raise_for_status(res)
  File "/usr/local/lib/python3.6/site-packages/docker/api/client.py", line 224, in _raise_for_status
    raise create_api_error_from_http_exception(e)
  File "/usr/local/lib/python3.6/site-packages/docker/errors.py", line 31, in create_api_error_from_http_exception
    raise cls(e, response=response, explanation=explanation)
docker.errors.APIError: 502 Server Error: Bad Gateway ("b'Mounts denied: \r\nThe path /etc/localtime\r\nis not shared from OS X and is not known to Docker.\r\nYou can configure shared paths from Docker -> Preferences... -> File Sharing.\r\nSee https://docs.docker.com/docker-for-mac/osxfs/#namespaces for more info.\r\n.'")

Setup CI for framework

Once testing is in place, something like Travis CI would be ideal to make sure tests continue to pass and that deployment of code into the Docker Hub registry is done automatically.

Add entries for created cluster to /etc/hosts

It would be great if clusterdock could update /etc/hosts with the hostnames of the running nodes; otherwise, when one tries to use a SOCKS proxy, they have to use the IP addresses.
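
A minimal sketch of what such an update could look like, assuming the Docker SDK for Python and the default cluster network; the file handling and naming here are illustrative, not clusterdock's actual implementation:

import docker

client = docker.from_env()
network_name = 'cluster'  # clusterdock's default network

# Collect "IP hostname" entries for every container attached to the network.
entries = []
for container in client.containers.list(filters={'network': network_name}):
    networks = container.attrs['NetworkSettings']['Networks']
    ip_address = networks[network_name]['IPAddress']
    hostname = container.attrs['Config']['Hostname']
    entries.append('{} {}.{}'.format(ip_address, hostname, network_name))

# Appending to /etc/hosts typically requires root privileges.
with open('/etc/hosts', 'a') as hosts_file:
    hosts_file.write('\n'.join(entries) + '\n')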

Stuck in the `Updating NameNode references in Hive metastore...` phase

I've just cloned clusterdock from this GitHub repository and ran the following command:

clusterdock_run ./bin/start_cluster cdh

It is stuck in the Updating NameNode references in Hive metastore... phase

[root@node007 framework]# clusterdock_run ./bin/start_cluster cdh
INFO:clusterdock.cluster:Successfully started node-2.cluster (IP address: 192.168.123.3).
INFO:clusterdock.cluster:Successfully started node-1.cluster (IP address: 192.168.123.2).
INFO:clusterdock.cluster:Started cluster in 12.42 seconds.
INFO:clusterdock.topologies.cdh.actions:Changing server_host to node-1.cluster in /etc/cloudera-scm-agent/config.ini...
INFO:clusterdock.topologies.cdh.actions:Restarting CM agents...
cloudera-scm-agent is already stopped
Starting cloudera-scm-agent: [  OK  ]
Stopping cloudera-scm-agent: [  OK  ]
Starting cloudera-scm-agent: [  OK  ]
INFO:clusterdock.topologies.cdh.actions:Waiting for Cloudera Manager server to come online...
INFO:clusterdock.topologies.cdh.actions:Detected Cloudera Manager server after 172.24 seconds.
INFO:clusterdock.topologies.cdh.actions:CM server is now accessible at http://node007.mycompany.local:32769
INFO:clusterdock.topologies.cdh.cm:Detected CM API v13.
INFO:clusterdock.topologies.cdh.cm_utils:Updating database configurations...
INFO:clusterdock.topologies.cdh.cm:Updating NameNode references in Hive metastore...

My Docker version is:

Docker version 1.13.0, build 49bf474

Since clusterdock is running on a CentOS server (CentOS Linux release 7.2.1511) without GUI capabilities, I've created an SSH tunnel from my machine to the server running clusterdock so that I can connect to Cloudera Manager from my local machine's web browser:

ssh -L 32769:10.100.55.7:32769 [email protected]

When I connect to Cloudera Manager, I also see that Hive is stuck waiting for more than a few minutes:

Update Hive Metastore to point to a NameNode's Nameservice name instead of hostname, normally performed after enabling HDFS High Availability. Back up Hive Metastore DB before running this command.

hive/hive.sh ["updatelocation","hdfs://node-1.cluster:8020"] 

See the attached screenshot (clusterdock_1).

I tried to view the full log files, e.g. http://localhost:32769/cmf/process/67/logs?filename=stdout.log, but I get:

HTTP ERROR 502

Problem accessing /cmf/process/67/logs. Reason:

    BAD_GATEWAY

And when I try to view the Role Log via http://localhost:32769/cmf/role/42/logs, I get the following:

HTTP ERROR 500

Problem accessing /cmf/role/42/logs. Reason:

    INTERNAL_SERVER_ERROR

Caused by:

java.lang.NullPointerException
	at org.springframework.web.servlet.view.RedirectView.appendQueryProperties(RedirectView.java:252)
	at org.springframework.web.servlet.view.RedirectView.renderMergedOutputModel(RedirectView.java:225)
	at org.springframework.web.servlet.view.AbstractView.render(AbstractView.java:250)
	at org.springframework.web.servlet.DispatcherServlet.render(DispatcherServlet.java:1047)
	at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:817)
	at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:719)
	at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:669)
	at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:574)
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
	at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
	at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
	at org.mortbay.servlet.UserAgentFilter.doFilter(UserAgentFilter.java:78)
	at org.mortbay.servlet.GzipFilter.doFilter(GzipFilter.java:131)
	at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
	at com.jamonapi.http.JAMonServletFilter.doFilter(JAMonServletFilter.java:48)
	at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
	at com.cloudera.enterprise.JavaMelodyFacade$MonitoringFilter.doFilter(JavaMelodyFacade.java:109)
	at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
	at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:311)
	at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.invoke(FilterSecurityInterceptor.java:116)
	at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.doFilter(FilterSecurityInterceptor.java:83)
	at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:323)
	at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:113)
	at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:323)
	at org.springframework.security.web.session.SessionManagementFilter.doFilter(SessionManagementFilter.java:101)
	at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:323)
	at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:113)
	at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:323)
	at org.springframework.security.web.authentication.rememberme.RememberMeAuthenticationFilter.doFilter(RememberMeAuthenticationFilter.java:146)
	at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:323)
	at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:54)
	at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:323)
	at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:45)
	at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:323)
	at org.springframework.security.web.authentication.AbstractAuthenticationProcessingFilter.doFilter(AbstractAuthenticationProcessingFilter.java:182)
	at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:323)
	at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:105)
	at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:323)
	at org.springframework.security.web.context.SecurityContextPersistenceFilter.doFilter(SecurityContextPersistenceFilter.java:87)
	at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:323)
	at org.springframework.security.web.session.ConcurrentSessionFilter.doFilter(ConcurrentSessionFilter.java:125)
	at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:323)
	at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:173)
	at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:237)
	at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:167)
	at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
	at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:88)
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:76)
	at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
	at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
	at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
	at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
	at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
	at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:767)
	at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
	at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
	at org.mortbay.jetty.handler.StatisticsHandler.handle(StatisticsHandler.java:53)
	at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
	at org.mortbay.jetty.Server.handle(Server.java:326)
	at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
	at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
	at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
	at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
	at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
	at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
	at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)

Powered by Jetty://

Is this expected?

Add support for devices

Docker allows containers started in non-privileged mode to have individual devices passed through. Add support for this so that people could develop topologies that use block devices from the host.
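
A minimal sketch of the underlying pass-through using the devices option of the Docker SDK for Python; the image name and device path are placeholders:

import docker

client = docker.from_env()

# Expose a single host block device inside a non-privileged container.
# The format is host_path:container_path:cgroup_permissions.
container = client.containers.run(
    'clusterdock/topology_nodebase:centos6.6',  # placeholder image
    devices=['/dev/xvdb:/dev/xvdb:rwm'],        # placeholder device
    detach=True,
)
print(container.id)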

Only wait for sshd status if sshd is present

Some topologies use service containers for small tasks. In these scenarios, where sshd isn't even present, it's incorrect to wait for its status to be running before continuing.
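
A hedged sketch of how the wait could be made conditional, probing for the sshd binary with exec_run from the Docker SDK for Python; the helper and node name are hypothetical:

import docker

client = docker.from_env()

def has_sshd(container_name):
    # Return True if an sshd binary is found on the container's PATH.
    container = client.containers.get(container_name)
    exit_code, _ = container.exec_run('which sshd')
    return exit_code == 0

# Hypothetical usage: only poll sshd's status on nodes that actually ship it.
if has_sshd('node-1.cluster'):
    pass  # wait for sshd to report a running status before continuing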

NodeGroup.ssh doesn't work

I mixed up __iter__ and __getitem__, which results in topologies that use NodeGroup.ssh failing with a complaint about the object not being indexable.
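
For context, a standalone illustration of the mix-up (the class and data below are made up): an object that defines only __iter__ can be looped over, but indexing it fails with exactly this kind of "not subscriptable" complaint:

class NodeGroupLike:
    # Defines iteration only; indexing is not supported.
    def __init__(self, nodes):
        self.nodes = nodes

    def __iter__(self):
        return iter(self.nodes)

group = NodeGroupLike(['node-1', 'node-2'])
for node in group:
    print(node)   # works: iteration uses __iter__
group[0]          # TypeError: 'NodeGroupLike' object is not subscriptable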

Enhance the clusterdock `manage` command

  1. Manage clusters by label: e.g., the command clusterdock manage nuke currently nukes everything on the default network; it could be enhanced to nuke only containers carrying a certain Docker label (see the sketch after this list).
  2. List clusters: a command such as clusterdock manage list could list the cluster nodes (with label support).
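
A minimal sketch of label-based selection with the Docker SDK for Python; the label key and value are assumptions, not existing clusterdock labels:

import docker

client = docker.from_env()

# Only act on containers carrying a specific label, e.g. a cluster name.
label_filter = {'label': 'clusterdock.cluster=famous_hyades'}  # assumed label

for container in client.containers.list(all=True, filters=label_filter):
    print(container.name)          # "list" behavior
    # container.remove(force=True) # "nuke" behavior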

Calling clusterdock without arguments results in exception

jarcec@ip-192-168-142-84 ~ % clusterdock                                                                                                                                  (1.8.0_151-b12) (ruby-2.4.0-rc1) [8:37:18]
Traceback (most recent call last):
  File "/usr/local/bin//clusterdock", line 11, in <module>
    load_entry_point('clusterdock==1.3.1', 'console_scripts', 'clusterdock')()
  File "/usr/local/lib/python3.6/site-packages/clusterdock/cli.py", line 181, in main
    action = importlib.import_module('clusterdock.actions.{}'.format(args.action))
  File "/usr/local/Cellar/python3/3.6.3/Frameworks/Python.framework/Versions/3.6/lib/python3.6/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 994, in _gcd_import
  File "<frozen importlib._bootstrap>", line 971, in _find_and_load
  File "<frozen importlib._bootstrap>", line 953, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'clusterdock.actions.None'

Would it perhaps make sense to print help instead?

Distribute containers and volumes across cloud servers?

I want to implement a Cloudera instance that uses Docker containers for nodes, but I want to distribute them across different cloud servers. Could I create and modify them on one server, then distribute them? I just want to know if it's possible. I know that managing the data volumes would be a bit of a headache.

Add `--only-pull` argument to start_cluster

In many use cases, it would be useful to be able to pull the Docker images needed to start a particular topology without then going through the steps needed to actually start it. An easy way to do this would be to add an --only-pull argument to start_cluster, which topologies could choose to implement if this functionality would be useful to them. Then, people automating things like AMI creation could simply invoke ./bin/start_cluster --only-pull ..., which would pull the relevant images and then return.
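
A hedged sketch of how such a flag could be wired into a topology's start script, using argparse and the Docker SDK for Python; the argument handling and image list are illustrative, not clusterdock's actual API:

import argparse
import docker

parser = argparse.ArgumentParser()
parser.add_argument('--only-pull', action='store_true',
                    help='pull topology images and exit without starting a cluster')
args = parser.parse_args()

client = docker.from_env()
images = [('clusterdock/topology_nodebase', 'centos6.6')]  # illustrative image list

for repository, tag in images:
    client.images.pull(repository, tag=tag)

if args.only_pull:
    raise SystemExit(0)  # images are now cached locally; skip cluster startup

# ... otherwise continue with the normal cluster startup ...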

hdfs canary was failing big league

Deployed clusterdock on a CentOS 7 VM. Got literally everything working perfectly, but the HDFS canary was failing, and there was something about the NameNode not being able to find its DataNodes. I suspect it's an incompatibility between CentOS 7 and 6. The NameNode logs said:

Failed to find datanode (scope="" excludedScope="/default"). //excludes default scope???

Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=false)

File /tmp/.cloudera_health_monitoring_canary_files/.canary_file_2017_06_12-18_01_01 could only be replicated to 0 nodes instead of minReplication (=1). There are 2 datanode(s) running and no node(s) are excluded in this operation.

Add unit and functional tests

Proper TDD would have been best, but wasn't conducive to the Hackathon mentality when I started this project. Better late than never.

clusterdock.sh shouldn't use sudo

Everyone I know who uses clusterdock regularly either runs it as the root user or as a user in the docker group that gives them access to the /var/run/docker.sock socket. Either way, having clusterdock.sh make Docker client invocations with sudo is unnecessary and can cause issues with automation. Let's remove it.

Add support for multi-host deployments

Either via explicit use of the overlay network driver or with the Swarm capabilities of Docker 1.12, we should make it easier for users to create topologies that will deploy clusters across multiple hosts.

Add manage_cluster bin script

Along with building a cluster and starting a cluster, a pretty typical workflow would be managing aspects of a running cluster. As an example, the Apache HBase topology might want to define specific behaviors like downloading logs from a cluster or returning details of the running cluster (e.g. Git hashes) by running a simple command. To facilitate this, let's add a manage_cluster script to ./bin and a manage(args) function to the nodebase topology as an example.
