
es2graphite's Issues

Disk Allocation metrics are not collected.

Currently the script does not collect any of the per-node disk allocation information. This information is available at 'http://(FQDN):9200/_cat/allocation?format=json&bytes=b'.

This needs to be done so the following requirements are met:

  • Information is stored in bytes for best usability in Graphite
  • Should be stored in the metric PREFIX.CLUSTER_NAME.NODE_NAME.disk.METRIC

I am currently working on adding this logic to the code and will create a PR when it's done. I just wanted to document the feature in an issue in case anyone else has additional requirements they would like to see added to this option.
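A minimal sketch of what that collection could look like. The `disk.*` field names follow the cat allocation API; the sample values and the helper name `allocation_metrics` are assumptions, not the actual patch:

```python
import json

# Hypothetical sample row from _cat/allocation?format=json&bytes=b;
# the disk.* field names follow the cat API, the values are made up.
SAMPLE_ALLOCATION = json.loads('''[
  {"node": "node-1", "shards": "12", "disk.used": "1073741824",
   "disk.avail": "9663676416", "disk.total": "10737418240",
   "disk.percent": "10"}
]''')

def allocation_metrics(prefix, cluster_name, rows):
    """Map each per-node disk.* field to PREFIX.CLUSTER_NAME.NODE_NAME.disk.METRIC."""
    metrics = {}
    for row in rows:
        node = row['node']
        for field, value in row.items():
            if not field.startswith('disk.'):
                continue
            metric = field.split('.', 1)[1]  # e.g. 'used', 'avail'
            path = '%s.%s.%s.disk.%s' % (prefix, cluster_name, node, metric)
            metrics[path] = int(value)  # bytes=b keeps every value integral
    return metrics
```

Requesting `bytes=b` up front means every size comes back as a plain integer, which matches the "stored in bytes" requirement above.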

ElasticSearch 1.x support

There doesn't seem to be any backwards compatibility in the 1.x API for stats:

'2014-03-24 09:03:01: GET http://rd03:9200/_cluster/nodes/stats?all=true'
Unhandled exception in thread started by <function get_metrics at 0x1b7fb90>
Traceback (most recent call last):
  File "./es2graphite.py", line 129, in get_metrics
    node_stats_data = urllib2.urlopen(node_stats_url).read()
  File "/usr/lib/python2.7/urllib2.py", line 127, in urlopen
    return _opener.open(url, data, timeout)
  File "/usr/lib/python2.7/urllib2.py", line 407, in open
    response = meth(req, response)
  File "/usr/lib/python2.7/urllib2.py", line 520, in http_response
    'http', request, response, code, msg, hdrs)
  File "/usr/lib/python2.7/urllib2.py", line 445, in error
    return self._call_chain(*args)
  File "/usr/lib/python2.7/urllib2.py", line 379, in _call_chain
    result = func(*args)
  File "/usr/lib/python2.7/urllib2.py", line 528, in http_error_default
    raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 404: Not Found
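One way to restore compatibility is to choose the endpoint based on the server's major version, since 1.x moved node stats from `/_cluster/nodes/stats` to `/_nodes/stats`. The helper below is a sketch; the function name and version-string argument are assumptions:

```python
def node_stats_url(host, port, es_version):
    # 0.x exposed node stats at /_cluster/nodes/stats?all=true;
    # Elasticsearch 1.x renamed the endpoint to /_nodes/stats.
    major = int(es_version.split('.')[0])
    if major >= 1:
        return 'http://%s:%d/_nodes/stats' % (host, port)
    return 'http://%s:%d/_cluster/nodes/stats?all=true' % (host, port)
```

The version string itself can be read once at startup from the root endpoint (`GET /` returns `version.number`).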

Index-level process_section fails

When index-level data processing is enabled, you get this traceback:

Traceback (most recent call last):
  File "./es2graphite.py", line 290, in <module>
    get_metrics()
  File "./es2graphite.py", line 240, in get_metrics
    indices_stats_metrics = process_indices_stats(args.prefix, indices_stats)
  File "./es2graphite.py", line 121, in process_indices_stats
    process_section(int(time.time()), metrics, (prefix, CLUSTER_NAME, 'indices'), stats['indices'])
TypeError: process_section() takes exactly 5 arguments (4 given)
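The traceback shows an arity mismatch: `process_section` expects five positional arguments, but the indices code path passes only four. A minimal reproduction of one possible fix; the signature and the `None` placeholder for the missing node argument are assumptions based on the traceback, not the actual source:

```python
import time

def process_section(timestamp, metrics, prefix, node, section):
    # Stub mirroring the five-argument signature implied by the traceback.
    return (timestamp, metrics, prefix, node, section)

# The failing call passed four arguments; supplying a placeholder
# for the missing node argument resolves the TypeError.
result = process_section(int(time.time()), [],
                         ('es', 'CLUSTER', 'indices'), None, {'docs': {}})
```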

socket.error: [Errno 32] Broken pipe

I was getting a broken pipe error, which I believe was caused by socket size limits when sending to Graphite, so I changed send_to_graphite to chunk up the data, which seems to have fixed the issue. I'm not sure this is the best way to handle it (it doesn't work with newer versions of the script since the threading was added).

def chunks(data, size):
    # Yield successive size-sized slices of data.
    for i in xrange(0, len(data), size):
        yield data[i:i + size]

def send_to_graphite(metrics, chunksize=500):
    if args.debug:
        for m, mval in metrics:
            log('%s %s = %s' % (mval[0], m, mval[1]), True)
    else:
        if chunksize:
            chunked_metrics = list(chunks(metrics, chunksize))
        else:
            chunked_metrics = [metrics]  # send everything as a single batch

        log('total %s chunks of %s size' % (len(chunked_metrics), chunksize))
        for c in chunked_metrics:
            log('sending chunk')
            # Graphite's pickle receiver expects a 4-byte length header
            # followed by the pickled list of metric tuples.
            payload = pickle.dumps(c)
            header = struct.pack('!L', len(payload))
            sock = socket.socket()
            sock.connect((args.graphite_host, args.graphite_port))
            sock.sendall('%s%s' % (header, payload))
            sock.close()

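For reference, the batching helper behaves like this (rewritten with Python 3's `range` so it runs standalone):

```python
def chunks(data, size):
    # Yield successive size-sized slices of data.
    for i in range(0, len(data), size):
        yield data[i:i + size]

# Seven metrics with a chunk size of 3 yield batches of 3, 3 and 1,
# so the last batch may be shorter than the requested size.
batches = list(chunks(list(range(7)), 3))
```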
Add Memory Utilization Collection.

Need to add per-node memory utilization metrics to the script. This information is available through the _cat/nodes interface. It is limited to total RAM, percent RAM used, total heap, and percent heap used.

This will be added under the PREFIX.CLUSTER_NAME.NODE_NAME.os.mem metric tree path.

I am actively working on this now and will create a PR when it is ready.
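A sketch of how those four fields could be mapped onto the stated tree. The `_cat/nodes` field names used here (ram.max, ram.percent, heap.max, heap.percent), the sample values, and the helper name are assumptions, not the final implementation:

```python
def mem_metrics(prefix, cluster_name, row):
    # Map the four memory fields from one _cat/nodes?format=json row
    # onto PREFIX.CLUSTER_NAME.NODE_NAME.os.mem.*
    base = '%s.%s.%s.os.mem' % (prefix, cluster_name, row['name'])
    return {
        base + '.ram.max': int(row['ram.max']),
        base + '.ram.percent': int(row['ram.percent']),
        base + '.heap.max': int(row['heap.max']),
        base + '.heap.percent': int(row['heap.percent']),
    }

# Hypothetical sample row with made-up values.
sample = {'name': 'node-1', 'ram.max': '33554432000', 'ram.percent': '72',
          'heap.max': '8589934592', 'heap.percent': '45'}
```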
