brutasse / graphite-api

Graphite-web, without the interface. Just the rendering HTTP API.

Home Page: https://graphite-api.readthedocs.io

License: Apache License 2.0

Languages: Python 98.62%, Shell 1.38%

graphite-api's People

Contributors

arvindch, ashangit, brutasse, dctrwatson, freeseacher, habnabit, iain-buclaw-sociomantic, ibuclaw, jsternberg, juise, kirillk77, lewiseason, liyichao, mbell697, nearbuyjason, neersighted, nonsenz, offlinehacker, olevchyk, patkoscsaba, rwky, ryan-williams, scottmlikens, setaou, smerrill, vincentbernat, winguru


graphite-api's Issues

Handling multiple targets is far from being optimal

Some backends support fetch_multi, but it's not used when there are several targets in the URL. That causes very slow performance with some backends (like InfluxDB, which pays quite a big penalty for each select; for example, for 10 targets there will be 20 queries: 10 lists and 10 selects).

While digging into this issue, I've also found that the current grammar.parseString output is a bit messy (not well documented, and it's hard to understand why it generates such constructions).

As an example, I've dumped the results of a simple query: https://gist.github.com/Civil/c35293bc318e28ab07d1

I think this output is bloated (it contains a lot of duplicated data) and could be reduced considerably. Also, if the parser is rewritten, it will be possible to implement multi-fetch for data in a much cleaner way.
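For illustration, here is a minimal sketch of what batched fetching could look like, assuming a finder that exposes find_nodes() plus graphite-api's fetch_multi(nodes, start, end) contract returning (time_info, {path: values}); the fetch_all helper is hypothetical:

    # Sketch: resolve every target's leaf nodes first, then issue a
    # single fetch_multi instead of one fetch per matched node.
    def fetch_all(finder, queries, start, end):
        nodes = []
        for query in queries:  # one find per target pattern
            nodes.extend(n for n in finder.find_nodes(query) if n.is_leaf)
        # One backend round trip for all series, instead of len(nodes).
        time_info, series = finder.fetch_multi(nodes, start, end)
        return time_info, series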

Trouble with sumSeries

I'm switching from carbon+graphite-web to cyanite+graphite-api and I'm having trouble with our dashboard graphs that use sumSeries. I can graph the individual series fine, but when I try to sum them up I get no data points. It looks like the wildcards are matching series that don't have any data points during the time window, and that somehow breaks the sum. Possibly due to the note on line 162 of functions.py. I'm happy to contribute a fix, but with the different projects involved I want to make sure I'm doing it in the right place.

Here's the raw data for the individual targets (stats.counters.ops.dc-worker-*.cloud_spooler.cloud.subscription.b.rate):

stats.counters.ops.dc-worker-2.cloud_spooler.cloud.subscription.b.rate,1411590830,1411592910,10|0.333333333333,None,None,None,None,None,0.333333333333,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None,None
stats.counters.ops.dc-worker-1.cloud_spooler.cloud.subscription.b.rate,1411589316,1411592916,10|

Here's the raw data for the sumSeries:

sumSeries(stats.counters.ops.dc-worker-*.cloud_spooler.cloud.subscription.b.rate),1411589387,1411592987,10|

Move to Graphite project

At what point will you consider submitting the API to the Graphite project?

What is missing to reach this milestone?

Oops! Graphite HTTP Request Error

I am receiving the "Oops! Graphite HTTP Request Error" message on a fresh install of Grafana. I noticed that the requested URL (http://servername/render) shown in the graph details doesn't exist. Where do I need to look for the /render URL?

Here are the request details:

Url http://server-name/render
Method  POST
Accept    application/json, text/plain, */*
Content-Type    application/x-www-form-urlencoded

Here is my nginx config:

   server {

            listen *:80;
            server_name graphite-server;

            add_header 'Access-Control-Allow-Origin' '*';
            add_header 'Access-Control-Allow-Methods' 'GET, POST, PUT, DELETE, OPTIONS';
            add_header 'Access-Control-Allow-Headers' 'Authorization, Content-Type, Accept';
            add_header 'Access-Control-Allow-Credentials' 'true';

            location / {
                    root /usr/share/grafana-1.5.4;
                    index index.html;
            }
    }

Let me know if you need any other details.
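For what it's worth, nothing in that config proxies /render anywhere: nginx only serves the static Grafana files. graphite-api has to be running separately (e.g. under gunicorn) and proxied explicitly. A sketch of the missing location block, assuming gunicorn listens on 127.0.0.1:8000:

            location /render {
                    # Forward render requests to the graphite-api process.
                    proxy_pass http://127.0.0.1:8000;
                    proxy_set_header Host $host;
                    proxy_set_header X-Real-IP $remote_addr;
            }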

drawAsInfinite through JSON

Is this something that can be fixed in Graphite-API? This function works with the PNG renderer, and fixing it would allow more consistent use of it in Grafana and non-Grafana dashboards. Not a high priority.

worker timeouts?

I'm using the ubuntu package to install graphite-api:

ubuntu$ sudo apt-cache show graphite-api
Package: graphite-api
Priority: extra
Section: default
Installed-Size: 14640
Maintainer: <root@rebuild>
Architecture: amd64
Version: 1.0.1-exoscale1400326939
Depends: libcairo2, libffi6
Filename: pool/trusty/main/g/graphite-api/graphite-api_1.0.1-exoscale1400326939_amd64.deb
Size: 5179504
MD5sum: e92a48207c9523db9dd0b20f7be516a7
SHA1: 36add7a0120a44111dc6e76a366554b6aab9fa42
SHA256: bb77c8078fde131c97144700fc93bf9bbcc0620e02cfbf0e0839dd1520a8a6a4
SHA512: fb7faa749a9d75f416e6d9fe3ee300b1f80a807dcec63950bf147fc43e34aab6880b1e96c58b32fb923dee7f0fd6f3571b95925b7249f5ad6cc4955e4bd11378
Description: Graphite-web, without the interface. Just the rendering HTTP API.
Description-md5: 5636204a3038121a1d57a25fccb3d950
Homepage: https://github.com/brutasse/graphite-api

I'm using the cyanite plugin and have the backend running. It seems to be working great, the logs are very boring, and when curling against it, it is very performant.

The problem is that when I query graphite-api with any request (take /metrics/find?query=* for example), the previously spawned worker seems to time out, causing a new one to be launched on each request, resulting in log output like this:

{"path": "/etc/graphite-api.yaml", "event": "loading configuration"}
{"index_path": "/srv/graphite/index", "event": "reading index data"}
{"index_path": "/srv/graphite/index", "duration": 2.6941299438476562e-05, "total_entries": 0, "event": "search index reloaded"}
2014-08-24 21:13:49 [2122] [CRITICAL] WORKER TIMEOUT (pid:2175)
2014-08-24 21:13:49 [2122] [CRITICAL] WORKER TIMEOUT (pid:2175)
2014-08-24 21:13:49 [2181] [INFO] Booting worker with pid: 2181
{"path": "/etc/graphite-api.yaml", "event": "loading configuration"}
{"index_path": "/srv/graphite/index", "event": "reading index data"}
{"index_path": "/srv/graphite/index", "duration": 2.8133392333984375e-05, "total_entries": 0, "event": "search index reloaded"}
2014-08-24 21:14:20 [2122] [CRITICAL] WORKER TIMEOUT (pid:2181)
2014-08-24 21:14:20 [2210] [INFO] Booting worker with pid: 2210

I thought the issue might be load balancer related, but removing the load balancers from the equation doesn't seem to change the situation.

Any idea how I could go about diagnosing, troubleshooting, or resolving this?

Cache query link support?

As far as I can tell from the code, graphite-api is not able to get the last values of metrics still sitting in the memory of carbon-cache through the cache query link, is it? If true, is there any plan to support this?

Based on the current state of carbon-cache, supporting it would mean adding some sort of "pickle protocol support" to graphite-api. This lack of support is publicly and consciously listed in the differences from graphite-web, but is it also a design goal of graphite-api?
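To make the scope concrete, here is a rough sketch of the exchange as graphite-web's CarbonLink client speaks it, written from memory (length-prefixed pickled dicts over TCP to carbon-cache's cache query port); treat the details as assumptions and verify against your carbon version:

    import pickle
    import socket
    import struct

    def _recv_exactly(conn, num_bytes):
        # Read exactly num_bytes from the socket or fail.
        buf = b''
        while len(buf) < num_bytes:
            chunk = conn.recv(num_bytes - len(buf))
            if not chunk:
                raise EOFError('connection closed mid-response')
            buf += chunk
        return buf

    def carbon_cache_query(host, port, metric):
        # Request: 4-byte big-endian length prefix + pickled dict.
        request = pickle.dumps({'type': 'cache-query', 'metric': metric})
        conn = socket.create_connection((host, port))
        try:
            conn.sendall(struct.pack('!L', len(request)) + request)
            # Response uses the same framing; the dict should carry a
            # 'datapoints' key with (timestamp, value) pairs still
            # buffered in the cache.
            size = struct.unpack('!L', _recv_exactly(conn, 4))[0]
            response = pickle.loads(_recv_exactly(conn, size))
        finally:
            conn.close()
        return response.get('datapoints', [])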

grafana

Can we plug Grafana into graphite-api? I get "http error" all the time, while I can access the API directly.

list index out of range in hitcount()

On requests like `/render?target=hitcount(foo,30minute)` I get

IndexError: list index out of range

Stacktrace (most recent call last):

  File "flask/app.py", line 1817, in wsgi_app
    response = self.full_dispatch_request()
  File "flask/app.py", line 1477, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "newrelic/hooks/framework_flask.py", line 74, in wrapper_Flask_handle_exception
    return wrapped(*args, **kwargs)
  File "flask/app.py", line 1381, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "flask/app.py", line 1475, in full_dispatch_request
    rv = self.dispatch_request()
  File "flask/app.py", line 1461, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "newrelic/hooks/framework_flask.py", line 30, in handler_wrapper
    return wrapped(*args, **kwargs)
  File "graphite_api/app.py", line 370, in render
    series_list = evaluateTarget(context, target)
  File "graphite_api/app.py", line 450, in evaluateTarget
    result = evaluateTokens(requestContext, tokens)
  File "graphite_api/app.py", line 461, in evaluateTokens
    return evaluateTokens(requestContext, tokens.expression)
  File "graphite_api/app.py", line 476, in evaluateTokens
    return func(requestContext, *args, **kwargs)
  File "graphite_api/functions.py", line 2981, in hitcount
    buckets[start_bucket].append(

So for some reason start_bucket is sometimes too big for the size of the buckets list, apparently.
I tried going through the code and doing all the math, but I see no reason for this to happen.
Also, the only relevant code change compared to graphite-web seems to be

-    interval = int(delta.seconds + (delta.days * 86400))
+    interval = to_seconds(delta)

in 86e55af, which I tried undoing, to no avail.
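Until the off-by-one is found, a defensive bounds check would at least avoid the 500; a workaround sketch, not a real fix (here "value" stands for whatever the original append computes):

    # Workaround sketch for hitcount(): skip datapoints whose computed
    # bucket falls outside the allocated list instead of raising
    # IndexError.
    if 0 <= start_bucket < len(buckets):
        buckets[start_bucket].append(value)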

rendering metrics from=-12h fails

I think this is more related to cyanite, but just in case you had the same issue and can help resolve it: when I try to get data for, let's say, ten days, I get this in the cyanite logs:

ERROR [2014-04-25 17:58:47,174] New I/O worker #14 - org.spootnik.cyanite.http - could not process request
clojure.lang.ExceptionInfo: Query execution failed {:values [("collectd.ip_80.load.load.longterm") 60 259200 1397577530 1398441530 14401], :query #<BoundStatement com.datastax.driver.core.BoundStatement@3f9d3bc>, :type :qbits.alia/execute, :exception #<InvalidQueryException com.datastax.driver.core.exceptions.InvalidQueryException: Cannot page queries with both ORDER BY and a IN restriction on the partition key; you must either remove the ORDER BY or the IN and sort client side, or disable paging for this query>}
at clojure.core$ex_info.invoke(core.clj:4403)
at qbits.alia$ex__GT_ex_info.invoke(alia.clj:125)
at qbits.alia$ex__GT_ex_info.invoke(alia.clj:127)
at qbits.alia$execute.doInvoke(alia.clj:251)
at clojure.lang.RestFn.invoke(RestFn.java:457)
at org.spootnik.cyanite.store$fetch.invoke(store.clj:231)
at org.spootnik.cyanite.http$fn__15785.invoke(http.clj:80)
at clojure.lang.MultiFn.invoke(MultiFn.java:227)
at org.spootnik.cyanite.http$wrap_process$fn__15797.invoke(http.clj:102)
at org.spootnik.cyanite.http$wrap_process.invoke(http.clj:98)
at org.spootnik.cyanite.http$start$handler__15808.invoke(http.clj:120)
at aleph.http.netty$start_http_server$fn$reify__15180$stage0_15166__15181.invoke(netty.clj:77)
at aleph.http.netty$start_http_server$fn$reify__15180.run(netty.clj:77)
at lamina.core.pipeline$fn__3632$run__3639.invoke(pipeline.clj:31)
at lamina.core.pipeline$resume_pipeline.invoke(pipeline.clj:61)
at lamina.core.pipeline$start_pipeline.invoke(pipeline.clj:78)
at aleph.http.netty$start_http_server$fn$reify__15180.invoke(netty.clj:77)
at aleph.http.netty$start_http_server$fn__15163.invoke(netty.clj:77)
at lamina.connections$server_generator_$this$reify__14959$stage0_14945__14960.invoke(connections.clj:376)
at lamina.connections$server_generator_$this$reify__14959.run(connections.clj:376)
at lamina.core.pipeline$fn__3632$run__3639.invoke(pipeline.clj:31)
at lamina.core.pipeline$resume_pipeline.invoke(pipeline.clj:61)
at lamina.core.pipeline$start_pipeline.invoke(pipeline.clj:78)
at lamina.connections$server_generator_$this$reify__14959.invoke(connections.clj:376)
at lamina.connections$server_generator_$this__14942.invoke(connections.clj:376)
at lamina.connections$server_generator_$this__14942.invoke(connections.clj:371)
at lamina.trace.instrument$instrument_fn$fn__6340$fn__6374.invoke(instrument.clj:140)
at lamina.trace.instrument$instrument_fn$fn__6340.invoke(instrument.clj:140)
at clojure.lang.AFn.applyToHelper(AFn.java:154)
at clojure.lang.RestFn.applyTo(RestFn.java:132)
at clojure.lang.AFunction$1.doInvoke(AFunction.java:29)
at clojure.lang.RestFn.invoke(RestFn.java:408)
at lamina.connections$server_generator$fn$reify__15006.run(connections.clj:407)
at lamina.core.pipeline$fn__3632$run__3639.invoke(pipeline.clj:31)
at lamina.core.pipeline$resume_pipeline.invoke(pipeline.clj:61)
at lamina.core.pipeline$subscribe$fn__3665.invoke(pipeline.clj:118)
at lamina.core.result.ResultChannel.success_BANG_(result.clj:388)
at lamina.core.result$fn__1315$success_BANG___1318.invoke(result.clj:37)
at lamina.core.queue$dispatch_consumption.invoke(queue.clj:111)
at lamina.core.queue.EventQueue.enqueue(queue.clj:327)
at lamina.core.queue$fn__1946$enqueue__1961.invoke(queue.clj:131)
at lamina.core.graph.node.Node.propagate(node.clj:282)
at lamina.core.graph.core$fn__1875$propagate__1880.invoke(core.clj:34)
at lamina.core.graph.node.Node.propagate(node.clj:282)
at lamina.core.graph.core$fn__1875$propagate__1880.invoke(core.clj:34)
at lamina.core.channel.Channel.enqueue(channel.clj:63)
at lamina.core.utils$fn__1070$enqueue__1071.invoke(utils.clj:74)
at lamina.core$enqueue.invoke(core.clj:107)
at aleph.http.core$collapse_reads$fn__14021.invoke(core.clj:229)
at lamina.core.graph.propagator$bridge$fn__2919.invoke(propagator.clj:194)
at lamina.core.graph.propagator.BridgePropagator.propagate(propagator.clj:61)
at lamina.core.graph.core$fn__1875$propagate__1880.invoke(core.clj:34)
at lamina.core.graph.node.Node.propagate(node.clj:282)
at lamina.core.graph.core$fn__1875$propagate__1880.invoke(core.clj:34)
at lamina.core.channel.SplicedChannel.enqueue(channel.clj:111)
at lamina.core.utils$fn__1070$enqueue__1071.invoke(utils.clj:74)
at lamina.core$enqueue.invoke(core.clj:107)
at aleph.netty.server$server_message_handler$reify__9192.handleUpstream(server.clj:135)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.jboss.netty.handler.codec.http.HttpContentEncoder.messageReceived(HttpContentEncoder.java:81)
at org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:88)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:459)
at org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:536)
at org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:435)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at aleph.netty.core$upstream_traffic_handler$reify__8884.handleUpstream(core.clj:258)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at aleph.netty.core$connection_handler$reify__8877.handleUpstream(core.clj:240)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at aleph.netty.core$upstream_error_handler$reify__8867.handleUpstream(core.clj:199)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:109)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:90)
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at aleph.netty.core$cached_thread_executor$reify__8830$fn__8831.invoke(core.clj:78)
at clojure.lang.AFn.run(AFn.java:22)
at java.lang.Thread.run(Unknown Source)
Caused by: com.datastax.driver.core.exceptions.InvalidQueryException: Cannot page queries with both ORDER BY and a IN restriction on the partition key; you must either remove the ORDER BY or the IN and sort client side, or disable paging for this query
at com.datastax.driver.core.Responses$Error.asException(Responses.java:96)
at com.datastax.driver.core.DefaultResultSetFuture.onSet(DefaultResultSetFuture.java:108)
at com.datastax.driver.core.RequestHandler.setFinalResult(RequestHandler.java:228)
at com.datastax.driver.core.RequestHandler.onSet(RequestHandler.java:354)
at com.datastax.driver.core.Connection$Dispatcher.messageReceived(Connection.java:571)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.jboss.netty.handler.codec.oneone.OneToOneDecoder.handleUpstream(OneToOneDecoder.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:109)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:90)
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
... 1 more
Any suggestions, please?
I should also mention that the VM cyanite/cassandra currently runs on has 2 GB of RAM; could that be the issue?

If multiple parameters are specified, the last one should be chosen, not the first

If for some reason you specify multiple values for a parameter, like hideLegend, right now only the first one is used. It would be saner to use the latest (so it's easier to override a parameter from the URL).

To reproduce, you can do something like:
/render/?target=sumSeries(carbon.carbon-daemons.*.*.metrics_received)&hideLegend=true&hideLegend=false: the legend will be hidden, but it would be saner to show the legend in this case.
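For what it's worth, graphite-api uses Flask, where the query string is exposed as a MultiDict, so picking the last occurrence instead of the first is a small change. A sketch (parameter handling simplified):

    from flask import request

    # request.args.get('hideLegend') returns the FIRST occurrence;
    # getlist() returns all of them, so [-1] picks the last one.
    values = request.args.getlist('hideLegend')
    hide_legend = values[-1] if values else None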

Problem with the Debian init.d script

Hello,

In the init.d script, line 47, the variable $PID_FILE is used, but the pidofproc command fails because the variable declared on line 23 is named $PIDFILE (illustrated below).
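A minimal illustration of the mismatch, with the daemon arguments elided (line numbers refer to the packaged script):

    # Declared near the top of the init script (line 23):
    PIDFILE=/var/run/graphite-api.pid

    # Used later (line 47) under a different name, so pidofproc fails:
    pidofproc -p "$PID_FILE" ...   # $PID_FILE is never set

    # Fix: use the declared name consistently:
    pidofproc -p "$PIDFILE" ...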

Is there any place where we can find the package files to make a PR? Otherwise, can you please update it?

Is the .deb package the recommended way to install graphite-api on a Debian-based server, or should we use the Python package?

Thanks

incorrect parsing of metric names with equals sign in them

In my InfluxdbFinder I added a logging line to find_nodes, like so:

    def find_nodes(self, query):
        logger.debug(msg="find_nodes invocation", findQuery=query)
        ...

Then I do these requests:

~/w/e/graphite-api ❯❯❯ curl "http://localhost:8000/render/?target=alias(service=carbon-tagger,'foo')&format=json"
[]
~/w/e/graphite-api ❯❯❯ curl "http://localhost:8000/render/?target=alias(servicecarbon-tagger,'foo')&format=json"
[]

They result in these logs:

{"msg": "find_nodes invocation", "findQuery": "<FindQuery: alias from Tue Sep  2 20:02:31 2014 until Wed Sep  3 20:02:31 2014>"}
{"msg": "find_nodes invocation", "findQuery": "<FindQuery: servicecarbon-tagger from Tue Sep  2 20:02:35 2014 until Wed Sep  3 20:02:35 2014>"}

Conclusion: if a metric key/pattern contains an equals sign, something in graphite-api breaks.

Note: this used to work fine in graphite-web.

Do not exclude tests in release

Is there any specific reason why you exclude tests from PyPI releases? We are in favour of running automated tests before putting our packages in production, and we have a setup that does so.

templating

In graphite-web/webapp/graphite/render/glyph.py, the method loadTemplate reads the template from the config file. graphite-api doesn't do this.

  • It has been the case since the initial release, yet this is not mentioned anywhere in the changelog (i.e. implying that the behavior is the same as graphite-web's).
  • The docs still mention the template HTTP param and graphTemplates.conf, and glyph.py also refers to the file, but the file is not used and the param does nothing.

structlog doesn't honor debug level

logger = structlog.get_logger()
logger.debug("this will get logged whether")
logger.debug("gunicorn is being run with --log-level debug or not")

astimezone() cannot be applied to a naive datetime

Hello,

Using master, I encounter the following error:

  ...
  File "/usr/share/graphite-api/.venv27/lib/python2.7/site-packages/graphite_api/app.py", line 469, in evaluateTokens
    return fetchData(requestContext, tokens.pathExpression)
  File "/usr/share/graphite-api/.venv27/lib/python2.7/site-packages/graphite_api/render/datalib.py", line 86, in fetchData
    startTime = int(epoch(requestContext['startTime']))
  File "/usr/share/graphite-api/.venv27/lib/python2.7/site-packages/graphite_api/utils.py", line 91, in epoch
    return calendar.timegm(dt.astimezone(pytz.utc).timetuple())
ValueError: astimezone() cannot be applied to a naive datetime

In fact, sometimes the dt variable has no timezone:

dt: datetime.datetime(2014, 8, 27, 14, 11, 46, 425395, tzinfo=<StaticTzInfo 'Universal'>)
dt: datetime.datetime(2014, 7, 28, 0, 0)

I did not have time to find where it comes from; instead I added this "last-chance" quick-and-dirty fix, which I don't like since it hardcodes the TZ (even if, as I'm guessing, it seems to occur with dates that have no time component, so we could consider that the caller means UTC). To be correct it might have to use the TZ configured in the config file; however, I'd prefer to find the source of the TZ-less datetime.

diff --git a/graphite_api/utils.py b/graphite_api/utils.py
index 156d628..8284183 100644
--- a/graphite_api/utils.py
+++ b/graphite_api/utils.py
@@ -84,4 +84,6 @@ def epoch(dt):
     """
     Returns the epoch timestamp of a timezone-aware datetime object.
     """
+    if dt.tzinfo is None:
+        dt = dt.replace(tzinfo=pytz.utc)
     return calendar.timegm(dt.astimezone(pytz.utc).timetuple())

ZeroDivisionError when rendering graph for Seyren

I get the following error when trying to use graphite-api with Seyren:

ERROR:flask.app: Exception on /render [GET]
Traceback (most recent call last):
  File "/nix/store/q5cbw2l8a1hq5xgiwwcnv0hj364jkjck-python2.7-flask-0.10.1/lib/python2.7/site-packages/flask/app.py", line 1817, in wsgi_app
    response = self.full_dispatch_request()
  File "/nix/store/q5cbw2l8a1hq5xgiwwcnv0hj364jkjck-python2.7-flask-0.10.1/lib/python2.7/site-packages/flask/app.py", line 1477, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/nix/store/q5cbw2l8a1hq5xgiwwcnv0hj364jkjck-python2.7-flask-0.10.1/lib/python2.7/site-packages/flask/app.py", line 1381, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "/nix/store/q5cbw2l8a1hq5xgiwwcnv0hj364jkjck-python2.7-flask-0.10.1/lib/python2.7/site-packages/flask/app.py", line 1475, in full_dispatch_request
    rv = self.dispatch_request()
  File "/nix/store/q5cbw2l8a1hq5xgiwwcnv0hj364jkjck-python2.7-flask-0.10.1/lib/python2.7/site-packages/flask/app.py", line 1461, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "/nix/store/jgxj9sq5h1xv3c8lv2zvrm535ada7pbl-python2.7-graphite-api-1.0.1/lib/python2.7/site-packages/graphite_api/app.py", line 442, in render
    image = doImageRender(request_options['graphClass'], graph_options)
  File "/nix/store/jgxj9sq5h1xv3c8lv2zvrm535ada7pbl-python2.7-graphite-api-1.0.1/lib/python2.7/site-packages/graphite_api/app.py", line 555, in doImageRender
    img = graphClass(**graphOptions)
  File "/nix/store/jgxj9sq5h1xv3c8lv2zvrm535ada7pbl-python2.7-graphite-api-1.0.1/lib/python2.7/site-packages/graphite_api/render/glyph.py", line 383, in __init__
    self.drawGraph(**params)
  File "/nix/store/jgxj9sq5h1xv3c8lv2zvrm535ada7pbl-python2.7-graphite-api-1.0.1/lib/python2.7/site-packages/graphite_api/render/glyph.py", line 869, in drawGraph
    self.consolidateDataPoints()
  File "/nix/store/jgxj9sq5h1xv3c8lv2zvrm535ada7pbl-python2.7-graphite-api-1.0.1/lib/python2.7/site-packages/graphite_api/render/glyph.py", line 1248, in consolidateDataPoints
    bestXStep = numberOfPixels / divisor
ZeroDivisionError: float division by zero

diffSeries does not support an int as argument

According to the spec - https://graphite.readthedocs.org/en/latest/functions.html#graphite.render.functions.diffSeries - diffSeries is supposed to be able to take an int as its second argument (actually, as any argument), but graphite_api dies with this:

event=u'Exception on /render [POST]'
Traceback (most recent call last):
  File "/opt/graphite-api/lib/python2.6/site-packages/flask/app.py", line 1817, in wsgi_app
    response = self.full_dispatch_request()
  File "/opt/graphite-api/lib/python2.6/site-packages/flask/app.py", line 1477, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/opt/graphite-api/lib/python2.6/site-packages/flask/app.py", line 1381, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "/opt/graphite-api/lib/python2.6/site-packages/flask/app.py", line 1475, in full_dispatch_request
    rv = self.dispatch_request()
  File "/opt/graphite-api/lib/python2.6/site-packages/flask/app.py", line 1461, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "/opt/graphite-api/lib/python2.6/site-packages/graphite_api/app.py", line 361, in render
    series_list = evaluateTarget(context, target)
  File "/opt/graphite-api/lib/python2.6/site-packages/graphite_api/app.py", line 451, in evaluateTarget
    result = evaluateTokens(requestContext, tokens)
  File "/opt/graphite-api/lib/python2.6/site-packages/graphite_api/app.py", line 461, in evaluateTokens
    return evaluateTokens(requestContext, tokens.expression)
  File "/opt/graphite-api/lib/python2.6/site-packages/graphite_api/app.py", line 477, in evaluateTokens
    return func(requestContext, *args, **kwargs)
  File "/opt/graphite-api/lib/python2.6/site-packages/graphite_api/functions.py", line 277, in diffSeries
    seriesList, start, end, step = normalize(seriesLists)
  File "/opt/graphite-api/lib/python2.6/site-packages/graphite_api/functions.py", line 152, in normalize
    seriesList = reduce(lambda L1, L2: L1+L2, seriesLists)
  File "/opt/graphite-api/lib/python2.6/site-packages/graphite_api/functions.py", line 152, in <lambda>
    seriesList = reduce(lambda L1, L2: L1+L2, seriesLists)
TypeError: can only concatenate list (not "int") to list
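One possible fix (a sketch of the idea, not the upstream implementation) is to turn scalar arguments into constant series before normalize() sees them, e.g. via constantLine():

    # Sketch: coerce int/float arguments into one-element series lists so
    # normalize() only ever concatenates lists of TimeSeries.
    def coerce_scalar_args(requestContext, seriesLists):
        coerced = []
        for arg in seriesLists:
            if isinstance(arg, (int, float)):
                # constantLine() returns a list holding a single flat series.
                coerced.append(constantLine(requestContext, arg))
            else:
                coerced.append(arg)
        return coerced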

Beyond fetch_multi: delegating function calls to the backend

Hello,

I'm writing a finder/backend for a storage engine which already has some functions and aggregations implemented. (I target KairosDB, but it should be the same for others like InfluxDB, etc.)

What about delegating part of the request (or the whole request) to the underlying backend when those functions are implemented in the backend? In my case it would really speed up the query.

It would require enhancing the plugin interface to specify which functions are OK to delegate, and there would need to be some query rewriting, as most of the storage engines out there do not understand wildcards (which, I think, is graphite's interesting point).

Any thoughts about this?
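To make the proposal concrete, here is a rough sketch of what the enhanced plugin interface might look like; every name here is hypothetical:

    class KairosDBFinder(object):
        # Hypothetical attribute: functions this backend can evaluate
        # natively, which the evaluator would consult before falling back
        # to the Python implementations in graphite_api.functions.
        delegated_functions = frozenset(['sumSeries', 'averageSeries'])

        def fetch_expression(self, expression, start_time, end_time):
            # Hypothetical hook: receives an already-parsed expression
            # tree for a delegated function, expands wildcards into
            # explicit series names, and translates it into a native
            # KairosDB aggregation query.
            raise NotImplementedError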

sum(foo) converts nulls into zeros

/render/?target=foo: if foo has nulls, it shows the nulls,
but when you render target=sumSeries(foo) the nulls become zeros.

graphite-web doesn't do this (nulls stay nulls), which is IMHO the correct behavior.

Not sure where this difference comes from, as the sumSeries and normalize functions seem pretty much the same.
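For comparison, graphite-web's safe-sum semantics keep None when every value at a timestamp is None; a sketch of that behavior:

    def safe_sum(values):
        # Sum only the non-None values; if ALL values at this timestamp
        # are None, keep None instead of collapsing to 0.
        safe = [v for v in values if v is not None]
        return sum(safe) if safe else None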

graphite-api should complain when custom finders/readers return incorrect data

If a reader returns an empty list as a series, this exception gets thrown:

{"event": "Exception on /render [GET]", "exception": "Traceback (most recent call last):
  File \"/usr/local/lib/python2.7/dist-packages/flask/app.py\", line 1817, in wsgi_app
    response = self.full_dispatch_request()
  File \"/usr/local/lib/python2.7/dist-packages/flask/app.py\", line 1477, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File \"/usr/local/lib/python2.7/dist-packages/flask/app.py\", line 1381, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File \"/usr/local/lib/python2.7/dist-packages/flask/app.py\", line 1475, in full_dispatch_request
    rv = self.dispatch_request()
  File \"/usr/local/lib/python2.7/dist-packages/flask/app.py\", line 1461, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File \"/usr/local/lib/python2.7/dist-packages/graphite_api/app.py\", line 356, in render
    series_list = evaluateTarget(context, target)
  File \"/usr/local/lib/python2.7/dist-packages/graphite_api/app.py\", line 436, in evaluateTarget
    result = evaluateTokens(requestContext, tokens)
  File \"/usr/local/lib/python2.7/dist-packages/graphite_api/app.py\", line 446, in evaluateTokens
    return evaluateTokens(requestContext, tokens.expression)
  File \"/usr/local/lib/python2.7/dist-packages/graphite_api/app.py\", line 458, in evaluateTokens
    return func(requestContext, *args, **kwargs)
  File \"/usr/local/lib/python2.7/dist-packages/graphite_api/functions.py\", line 190, in sumSeries
    seriesList, start, end, step = normalize(seriesLists)
  File \"/usr/local/lib/python2.7/dist-packages/graphite_api/functions.py\", line 147, in normalize
    step = reduce(lcm, [s.step for s in seriesList])
TypeError: reduce() of empty sequence with no initial value"}

graphite-api should instead raise an exception like: finder <finder-name> returned an empty series for key <metric id>
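Something along these lines in the fetch path would turn the cryptic TypeError into an actionable message; a sketch, assuming the reader returns the usual (time_info, values) tuple:

    def check_reader_result(finder_name, metric, fetched):
        # Fail loudly, naming the misbehaving finder, instead of letting
        # an empty series blow up later inside normalize().
        time_info, values = fetched
        if not values:
            raise ValueError(
                "finder %s returned an empty series for key %s"
                % (finder_name, metric))
        return fetched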

Graphite-api as backend for graphite-web dashboards

I have a problem with graphite-web, which makes find requests with format=pickle:

/metrics/find/?local=1&format=pickle&query=carbon.*

Is there any way to make graphite-web communicate with graphite-api using the treejson format?

I have looked at the graphite-web code and there is only one place for that, remote_storage.py, but only pickle is supported there.

show issue to user

Via Sentry I get to see various kinds of errors,
like

graphite_api.render.glyph in consolidateDataPoints

ZeroDivisionError: float division by zero

or

graphite_api.functions in divideSeries

ValueError: divideSeries second argument must reference exactly 1 series

or

graphite_api.render.attime in parseTimeReference

Exception: Unknown day reference

It would be nice if we also presented these errors to the user.

Problems running on CentOS 6.5

I'm trying to use graphite-api on CentOS 6.5. I'm using the system Python (2.6.6), and I have installed what I think are all the prereqs:

  • libyaml-devel
  • cairo-devel
  • python-devel
  • libffi-devel

I then make a virtualenv for graphite-api.

[smerrill@utility ~]$ virtualenv /usr/share/python/graphite
[smerrill@utility ~]$ /usr/share/python/graphite/bin/pip install graphite-api
[smerrill@utility ~]$ /usr/share/python/graphite/bin/pip install gunicorn

I also have a config file to point graphite-api at the whisper files from an existing Carbon/Whisper setup. /srv/graphite/index is an empty directory, but it does exist. This is my /etc/graphite-api.yaml:

search_index: /srv/graphite/index
finders:
  - graphite_api.finders.whisper.WhisperFinder
functions:
  - graphite_api.functions.SeriesFunctions
  - graphite_api.functions.PieFunctions
whisper:
  directories:
    - /var/lib/carbon/whisper
time_zone: America/New_York

Then when I try to start the API, I get a rather generic error message:

[smerrill@utility ~]$ /usr/share/python/graphite/bin/gunicorn --debug graphite_api.app:app
2014-05-05 18:33:37 [2096] [INFO] Starting gunicorn 18.0
2014-05-05 18:33:37 [2096] [INFO] Listening at: http://127.0.0.1:8000 (2096)
2014-05-05 18:33:37 [2096] [INFO] Using worker: sync
2014-05-05 18:33:37 [2101] [INFO] Booting worker with pid: 2101
{"path": "/etc/graphite-api.yaml", "event": "loading configuration"}
{"index_path": "/srv/graphite/index", "event": "reading index data"}
2014-05-05 18:33:38 [2096] [INFO] Shutting down: Master
2014-05-05 18:33:38 [2096] [INFO] Reason: Worker failed to boot.

I'm not very familiar with Python, so I don't know where to jump in and debug. Any suggestions?

can't get current metrics

curl "http://ip:8000/render?target=collectd.ip.memory.memory-used&format=csv&from=-3h"
collectd.ip.memory.memory-used,2014-04-18 10:01:50,247963648.0
collectd.ip.memory.memory-used,2014-04-18 10:02:00,249802752.0
collectd.ip.memory.memory-used,2014-04-18 10:03:30,225345536.0
collectd.ip.memory.memory-used,2014-04-18 10:03:40,227315712.0
collectd.ip.memory.memory-used,2014-04-18 10:05:30,249901056.0
collectd.ip.memory.memory-used,2014-04-18 10:10:40,244023296.0
collectd.ip.memory.memory-used,2014-04-18 10:10:50,246218752.0
collectd.ip.memory.memory-used,2014-04-18 10:11:00,246341632.0
collectd.ip.memory.memory-used,2014-04-18 10:11:10,224964608.0
collectd.ip.memory.memory-used,2014-04-18 10:11:20,225087488.0
collectd.ip.memory.memory-used,2014-04-18 10:11:30,
collectd.ip.memory.memory-used,2014-04-18 10:11:40,
collectd.ip.memory.memory-used,2014-04-18 11:06:50,
collectd.ip.memory.memory-used,2014-04-18 11:07:00,
collectd.ip.memory.memory-used,2014-04-18 11:07:10,
collectd.ip.memory.memory-used,2014-04-18 11:07:20,
collectd.ip.memory.memory-used,2014-04-18 11:10:20,
collectd.ip.memory.memory-used,2014-04-18 11:10:30,
collectd.ip.memory.memory-used,2014-04-18 11:10:40,
collectd.ip.memory.memory-used,2014-04-18 11:10:50,
collectd.ip.memory.memory-used,2014-04-18 11:11:00,
collectd.ip.memory.memory-used,2014-04-18 11:11:10,
collectd.ip.memory.memory-used,2014-04-18 11:11:20,
collectd.ip.memory.memory-used,2014-04-18 11:11:30,
date
Fri Apr 18 12:13:09 CEST 2014

I use cyanite to store data; here is the cyanite config file:

carbon:
  host: "0.0.0.0"
  port: 2003
  rollups:
    - period: 3600
      rollup: 10
http:
  host: "0.0.0.0"
  port: 8080
logging:
  level: info
  console: true
  files:
    - "/tmp/cyanite.log"
store:
  cluster: 'localhost'
  keyspace: 'metric'

I don't understand why I can't get current metrics; can anyone help, please?

proper error logging in the docker image

While testing my custom finder, I can make gunicorn return "500 Internal Server Error" by making requests to /render and /metrics/find.

I use the image as per https://index.docker.io/u/brutasse/graphite-api/,
and hence run gunicorn -b 0.0.0.0:8000 -w 2 --log-level debug graphite_api.app:app,
but it only logs https://gist.github.com/Dieterbe/10199015 and I don't get anything more:
nothing on the terminal, no logfiles.
I want to see why it errors, whether there are any Python errors and stack dumps, etc.
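As a starting point, gunicorn can be pointed at explicit error and access log files; a sketch (flag availability should be checked against the gunicorn version shipped in the image):

    gunicorn -b 0.0.0.0:8000 -w 2 \
        --log-level debug \
        --error-logfile /var/log/gunicorn-error.log \
        --access-logfile /var/log/gunicorn-access.log \
        graphite_api.app:app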

incorrect rendering area mode

Hi!
In my opinion, rendering with areaMode=all is incorrect: if one area is under another, it is only drawn as a line. The correct behavior is what rrdgraph does; I ought to see one area under the other, as in the attached graph_image.php screenshot.

As far as I can see, the filling mode defaults to cairo.OPERATOR_OVER, which would do exactly what I think is the right behavior: http://cairographics.org/operators/ (see OPERATOR_OVER).

But in some cases the drawing is still wrong (see the attached graphite screenshot).

smartSummarize does not work

I'm using Grafana 1.6.1 with graphite-api version 1.0.1.
The summarize function works fine, but changing to smartSummarize causes an "Internal Server Error".

Can anyone help with this?

Thanks.

500 errors on /metrics/find?query=*

Hi there,

I recently attempted an upgrade from InfluxDB 0.6.4 to 0.8.1 (in between I also tried 0.6.5 and 0.7.0).

After upgrading, I had a problem with InfluxDB snapshots for some reason, which caused me to delete my raft directory and have it rebuild one. I also re-created the database and the database user. However, once this was all done, any query to /metrics/find?query=* returned a 500 error. A query for some specific data (which did return correctly using the InfluxDB browser) simply returned "[]".

Is there anything I can do to debug this and figure out why it's happening, aside from just seeing the 500 error?

Any known issues with more recent versions of InfluxDB?

Thanks!

graphite-api with grafana

What is the purpose of graphite-api? Can it connect to OpenTSDB? Can graphite-api work with the Grafana dashboarding solution?

Gunicorn worker failed to boot

I know gunicorn is working properly because I tested it with another test app. However, when I point it at the graphite-api application, the workers fail to start and I get no indication of why.

# python -V
Python 2.6.6
[root@ip graphite_api]# sudo gunicorn -w2 graphite_api.app:app
2014-03-31 23:29:28 [28346] [INFO] Starting gunicorn 18.0
2014-03-31 23:29:28 [28346] [INFO] Listening at: http://127.0.0.1:8000 (28346)
2014-03-31 23:29:28 [28346] [INFO] Using worker: sync
2014-03-31 23:29:28 [28351] [INFO] Booting worker with pid: 28351
2014-03-31 23:29:28 [28354] [INFO] Booting worker with pid: 28354
Traceback (most recent call last):
  File "/usr/bin/gunicorn", line 9, in <module>
    load_entry_point('gunicorn==18.0', 'console_scripts', 'gunicorn')()
  File "/usr/lib/python2.6/site-packages/gunicorn/app/wsgiapp.py", line 71, in run
    WSGIApplication("%(prog)s [OPTIONS] [APP_MODULE]").run()
  File "/usr/lib/python2.6/site-packages/gunicorn/app/base.py", line 143, in run
    Arbiter(self).run()
  File "/usr/lib/python2.6/site-packages/gunicorn/arbiter.py", line 203, in run
    self.halt(reason=inst.reason, exit_status=inst.exit_status)
  File "/usr/lib/python2.6/site-packages/gunicorn/arbiter.py", line 298, in halt
    self.stop()
  File "/usr/lib/python2.6/site-packages/gunicorn/arbiter.py", line 341, in stop
    self.reap_workers()
  File "/usr/lib/python2.6/site-packages/gunicorn/arbiter.py", line 452, in reap_workers
    raise HaltServer(reason, self.WORKER_BOOT_ERROR)
gunicorn.errors.HaltServer: <HaltServer 'Worker failed to boot.' 3>

Add better logging support

When running graphite-api with an application server (e.g. gunicorn) and hitting a 500 error, the traceback, or at least the error message, should be logged to make troubleshooting easier.
For example, if we forget to specify a unit in the from parameter:

$ curl 'http://127.0.0.1:2005/render?target=carbon.agents.*.errors&from=-600&format=json'
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<title>500 Internal Server Error</title>
<h1>Internal Server Error</h1>
<p>The server encountered an internal error and was unable to complete your request.  Either the server is overloaded or there is an error in the application.</p>

PS: a picky user will also remark that the response is HTML instead of JSON ;)

/metrics is missing

It contains some queries useful for dashboards, like /metrics/index.json, which can be considered part of the Graphite API.

ValueError: divideSeries second argument must reference exactly 1 series

Upon target=divideSeries(sumSeries(stats.*.memcached_stats_archive_read),sumSeries(stats.*.memcached_stats_archive_get))
I'm getting:

graphite_api.functions in divideSeries

ValueError: divideSeries second argument must reference exactly 1 series

This is triggered by:

    def divideSeries(requestContext, dividendSeriesList, divisorSeriesList):
        if len(divisorSeriesList) != 1:
            raise ValueError(
                "divideSeries second argument must reference exactly 1 series")

However, the following works:

target=sumSeries(stats.*.memcached_stats_archive_read)&target=sumSeries(stats.*.memcached_stats_archive_get)

but it says "No Data"

So it looks like the len(divisorSeriesList) check somehow fails when divisorSeriesList is one series that doesn't contain any data, or something like that.

Sample Ceres Configuration

Could you please provide a sample Ceres configuration? I have everything running on the same host: Ceres relays and writers, graphite-api, and Grafana. All I need is a way to point graphite-api at the Ceres processes.

For example, I found this for graphite-api + cyanite:

finders:  
  - cyanite.CyaniteFinder
cyanite:
  url: http://localhost:8077

Finally, does graphite-api require any other dependency to be installed for it to be able to communicate with Ceres?

Question on fetch_multi

Hello,

While implementing fetch_multi for one datasource, I came across the following:

  • fetch_multi returns a (time_info, series) tuple, where series is a dictionary with (possibly) multiple entries, but time_info is only expressed once.
  • fetchData (render/datalib) reorganizes it into a list of TimeSeries(path, start, end, step, values): start/end/step come from time_info, and path/values from the entries of the series dictionary.

Sometimes I have series that do not share the same start, end, and step: they may have no datapoints for a certain interval, or the granularity of the data makes the step differ from one series to another.
Thus in fetch_multi I need to "merge" and align all the data to the same time interval/step, which is both time- and memory-consuming for large series.

I was wondering whether it would be useful to have fetch_multi return a "TimeSeries"-like output, with each series carrying its own time_info (possibly differing between series), and whether graphite-api could cope afterwards with those differing time_info values.

I did not prototype it yet; I will try to do this.

What are your thoughts about it?
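For discussion, a sketch of the alternative return shape (entirely hypothetical; today's contract is a single shared time_info, and _fetch_one is a stand-in for the backend call):

    # Hypothetical fetch_multi variant: one time_info per series, so the
    # backend no longer has to resample everything to a common step.
    # A real backend would batch these calls rather than loop.
    def fetch_multi(self, nodes, start_time, end_time):
        results = []
        for node in nodes:
            # _fetch_one returns ((start, end, step), values) for one path.
            time_info, values = self._fetch_one(node.path, start_time, end_time)
            results.append((node.path, time_info, values))
        return results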

[JSON] Target name should not include the function name

Summary

When requesting data through JSON and applying a function filter, the target name is augmented with the function name.

Steps to reproduce

Use one of the built-in functions within the target parameter.

curl 'http://127.0.0.1:2005/render?target=keepLastValue(carbon.agents.*.errors,2)&from=-600s&format=json'

Expected result

[{"target": "carbon.agents.ip-10-34-148-97-a.errors", "datapoints": [[0.0, 1399374960], [0.0, 1399375020], [0.0, 1399375080], [0.0, 1399375140], [0.0, 1399375200], [0.0, 1399375260, [0.0, 1399375320], [0.0, 1399375380], [0.0, 1399375440], [0.0, 1399375500]]}]

Actual result

[{"target": "keepLastValue(carbon.agents.ip-10-34-148-97-a.errors)", "datapoints": [[0.0, 1399375020], [0.0, 1399375080], [0.0, 1399375140], [0.0, 1399375200], [0.0, 1399375260], [0.0, 1399375320], [0.0, 1399375380], [0.0, 1399375440], [0.0, 1399375500], [0.0, 1399375560]]}]

Error building docker image

Hello,
I managed to build a docker image last week on a CentOS 7 server. I wiped the server and am trying to build the image again, and the following happens:

docker build -t graphite .

Uploading context 182.6 MB
Uploading context
Step 0 : FROM brutasse/graphite-api
Pulling repository brutasse/graphite-api
fa1377097aed: Error pulling image (latest) from brutasse/graphite-api, Error mounting '/dev/mapper/docker-253:1-204140260-511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158' on '/var/lib/docker/devicemapper/mnt/511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158': device or resource busy
2014/09/29 12:37:54 Could not find repository on any of the indexed registries.

The contents of the Dockerfile are:
FROM brutasse/graphite-api

Am I doing something wrong, or has there been a change?

Single finder with multiple endpoints

Is it possible to have a single finder (e.g. CyaniteFinder) with multiple endpoints? Let's say I have three cyanite processes running on separate servers and I would like to communicate with all of them.

finders:
  - cyanite.CyaniteFinder
cyanite:
  url: http://host1:8077, http://host2:8078, http://host3:8079
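If the finder itself supported it, the config could look something like this; note that the urls list key is hypothetical, and whether CyaniteFinder accepts more than one endpoint is up to the plugin:

    finders:
      - cyanite.CyaniteFinder
    cyanite:
      urls:
        - http://host1:8077
        - http://host2:8078
        - http://host3:8079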
