
atlas's Introduction

Atlas

Backend for managing dimensional time series data.

License

Copyright 2014-2024 Netflix, Inc.

Licensed under the Apache License, Version 2.0 (the “License”); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an “AS IS” BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

atlas's People

Contributors

brharrington, copperlight, davecromberge, dmuino, giancarlopro, janstenpickle, jfz, jkschneider, lavanyachennupati, manolama, matschaffer, nadavc, nathfisher, pjfanning, rspieldenner, sbailliez, skandragon, sullis, svachalek, tregoning, yingwuzhao, zimmermatt


atlas's Issues

build started failing with EOFException

https://travis-ci.org/Netflix/atlas/builds/113438027

Exception in thread "Thread-30" Exception in thread "Thread-26" java.io.EOFException
    at java.io.ObjectInputStream$BlockDataInputStream.peekByte(ObjectInputStream.java:2601)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1319)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:371)
    at sbt.React.react(ForkTests.scala:114)
    at sbt.ForkTests$$anonfun$mainTestTask$1$Acceptor$2$.run(ForkTests.scala:74)
    at java.lang.Thread.run(Thread.java:745)
java.io.EOFException
    at java.io.ObjectInputStream$BlockDataInputStream.peekByte(ObjectInputStream.java:2601)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1319)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:371)
    at org.scalatest.tools.Framework$ScalaTestRunner$Skeleton$1$React.react(Framework.scala:953)
    at org.scalatest.tools.Framework$ScalaTestRunner$Skeleton$1.run(Framework.scala:942)
    at java.lang.Thread.run(Thread.java:745)

Probably something changed with Travis that caused the local hostname to stop resolving:

http://stackoverflow.com/questions/33287424/strange-exception-in-sbt-test/33767077#33767077

some labels are not truncated properly

This is using w=281&h=249&layout=iw. A width of 274 to 281 seems to have the same behavior.

[image: bad-wrap]

At w = 282 wrapping is no longer performed:

[image: w-282]

At w <= 273 it will truncate as expected:

[image: w-273]

what is the best way to package the app?

So far, I have looked at:

  • A simple standalone package for use with the examples and for getting started easily. The single jar works well for this.
  • Packaging with other jars and configs. The zero-to-cloud examples use Gradle to add additional jars and build a deb/rpm on the fly (https://github.com/brharrington/zerotocloud).

Current assumption is for non-toy use-cases we'll need to pull in some additional jars and configs.

Have MemoryBlockStore use atlas.webapi.publish.max-age

Noticed while attempting to do 1s resolution (on a very small data set) that MemoryBlockStore implements its own "data too old" check: https://github.com/Netflix/atlas/blob/master/atlas-core/src/main/scala/com/netflix/atlas/core/db/BlockStore.scala#L193

For now I can switch to 5s resolution, but I'm not sure 1s is possible across hosts, since just a little clock skew could cause the data to be rejected.

If possible it'd be good to have this honor the max-age setting from publish.

Also, having it include the timestamps in the message, similar to https://github.com/Netflix/atlas/blob/master/atlas-webapi/src/main/scala/com/netflix/atlas/webapi/PublishApi.scala#L82, would make it clearer just how far skewed things are.
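A minimal sketch of what honoring the setting might look like, assuming the Typesafe Config API that Atlas already uses; the names here are illustrative, not the actual BlockStore code:

import java.util.concurrent.TimeUnit
import com.typesafe.config.ConfigFactory

object MaxAgeCheck {
  private val config = ConfigFactory.load()
  private val maxAgeMillis =
    config.getDuration("atlas.webapi.publish.max-age", TimeUnit.MILLISECONDS)

  def checkAge(timestamp: Long, now: Long): Unit = {
    val cutoff = now - maxAgeMillis
    if (timestamp < cutoff) {
      // Including both timestamps makes it clear how far skewed things are.
      throw new IllegalArgumentException(
        s"data too old: timestamp=$timestamp, cutoff=$cutoff, now=$now")
    }
  }
}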

remove all relative dependencies, need explicit versions

$ cat dependencies.gradle | grep latest
  alertClient    = 'netflix:alert-client:latest.snapshot'
  atlasClient    = 'netflix:atlas-client:latest.snapshot'
  atlasPlugin    = 'netflix:atlas-plugin:latest.snapshot'
  atlasServlets  = 'netflix:atlas-servlets:latest.snapshot'
  baseServer     = 'netflix:base-server:latest.release'
  chronosClient  = 'netflix:chronos-client:latest.snapshot'
  metricExplorer = 'netflix:metrics-explorer:latest.release'
  platform       = 'netflix:platform:latest.release'
  servoCore      = 'com.netflix.servo:servo-core:latest.snapshot'
  spectatorApi   = 'com.netflix.spectator:spectator-api:latest.snapshot'
  spectatorNflx  = 'com.netflix.spectator:spectator-nflx:latest.snapshot'

setup open build

Most Netflix OSS builds use CloudBees. It has been suggested that Travis may be a better option.

use exact keys from query as tags in no data lines

Request from @tregoning:

Use exact keys from query as tags on a no data line. Example:

/api/v1/graph?q=name,sps,:eq,device,foo,:eq,:and,:sum,$device,:legend

This will show the legend as device rather than foo because nothing matches the query. However, in this case we can extract the value from the query expression and make the behavior more intuitive for the user.
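A sketch of the extraction, using a simplified stand-in for the real query model (the actual AST lives in com.netflix.atlas.core.model.Query):

sealed trait Query
case class Equal(k: String, v: String) extends Query
case class And(q1: Query, q2: Query) extends Query

// Collect the exact key/value pairs so a NO DATA line can carry them as tags.
def exactTags(query: Query): Map[String, String] = query match {
  case Equal(k, v) => Map(k -> v)
  case And(q1, q2) => exactTags(q1) ++ exactTags(q2)
}

For the example above this yields Map("name" -> "sps", "device" -> "foo"), so $device in the legend would resolve to foo instead of rendering literally as device.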

Added a time offset legend example

I noticed the examples were missing any sort of reference to $(atlas.offset). Wanted to make sure you knew I added it, in case there is some sort of reference auto-generation that could clobber my change.

improve usability of total statistic in graph legends

The total statistic doesn't seem to be particularly useful and, worse, most users seem to interpret it incorrectly. Currently, the total statistic is the sum of all datapoints shown for a given time series on the graph. Most users I have talked with assume it is the total number of events (assuming a counter) that occurred within the time frame of the chart.

Options:

  1. Just remove total. This was done several years ago and it got re-added when someone complained, though it is quite likely their interpretation of the value was wrong.
  2. Change the value reported as total to try to match the more common user interpretation (see the sketch after this list). This is possible for simple expressions using sum over counters as the aggregate. With something like max or avg it would be an estimate of the total for a single item. However, if combined with non-counter data it wouldn't be meaningful, and neither variant makes much sense with gauges.
  3. Add support for some sort of legend stat control, similar to the current :legend operator, that would be used to specify the stats format. We could then remove total by default and allow users to specify a total variant for counters. This could also help with requests to have some stats while reducing the amount of vertical space taken up by the statistics. Complications:
    • Different statistics per line can make the legend rendering more complex. However, it is likely needed for correctness, as things like total don't really make sense for gauges.
    • It would be up to the user to make sure the stat makes sense for the input line. One advantage of option 2, if it could be done cleanly, is that it would be more automatic based on the input data.
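For option 2, a sketch of the counter-friendly variant, assuming values are reported as rates (events per second) as Atlas counters are:

// Estimate the total number of events for a counter shown on the chart:
// each datapoint is a rate, so events per interval = rate * step.
def totalEvents(rates: Seq[Double], stepSeconds: Long): Double =
  rates.filterNot(_.isNaN).map(_ * stepSeconds).sum

For sum aggregates over counters this matches the common interpretation; for max or avg it is only an estimate for a single item.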

Feature Request: Sort Graph Legend

Summary

Given a query that results in multiple lines, it would be nice if there was an option to sort the legend by a summary statistic, i.e., sort the legend by max, min, avg, etc.

Example

Imagine we want to identify the node with the highest latency. In this example, it would be nice if we could sort the legend by max value and have the first entry in the legend be the node with the highest latency.
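A sketch of what the sorting could look like; Line here is an illustrative stand-in for the real line model:

case class Line(label: String, data: Seq[Double])

// Summary statistic used as the sort key, ignoring NaN values.
def maxValue(line: Line): Double = {
  val vs = line.data.filterNot(_.isNaN)
  if (vs.isEmpty) Double.NegativeInfinity else vs.max
}

// Highest value first, so the worst node leads the legend.
def sortLegend(lines: Seq[Line]): Seq[Line] =
  lines.sortBy(maxValue).reverse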

"Getting Started" does not work as described

$ curl -LO https://github.com/Netflix/atlas/releases/download/v1.4.1/atlas-1.4.1-standalone.jar
$ java -jar atlas-1.4.1-standalone.jar 
[main] INFO com.netflix.atlas.webapi.Main$$anon$1 - starting atlas on port 7101
[main] WARN com.netflix.spectator.api.Spectator - no registry impl found in classpath, using default
[atlas-akka.actor.default-dispatcher-4] INFO akka.event.slf4j.Slf4jLogger - Slf4jLogger started
Exception in thread "main" java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at com.simontuffs.onejar.Boot.run(Boot.java:340)
    at com.simontuffs.onejar.Boot.main(Boot.java:166)
Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at com.netflix.atlas.webapi.ApiSettings$.newDbInstance(ApiSettings.scala:33)
    at com.netflix.atlas.webapi.Main$$anon$1.configure(Main.scala:29)
    at com.netflix.atlas.akka.WebServer.start(WebServer.scala:37)
    at com.netflix.atlas.webapi.Main$.main(Main.scala:33)
    at com.netflix.atlas.webapi.Main.main(Main.scala)
    ... 6 more
Caused by: java.lang.NoClassDefFoundError: java/time/ZoneId
    at com.netflix.atlas.core.db.StaticDatabase.<init>(StaticDatabase.scala:33)
    ... 15 more
Caused by: java.lang.ClassNotFoundException: java.time.ZoneId
    at com.simontuffs.onejar.JarClassLoader.findClass(JarClassLoader.java:713)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
    at com.simontuffs.onejar.JarClassLoader.loadClass(JarClassLoader.java:630)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
    ... 16 more

And

$ sudo netstat -natp
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1145/sshd       
tcp        0    340 165.225.130.161:22      98.176.120.242:17592    ESTABLISHED 1432/sshd: ubuntu [
tcp6       0      0 :::22                   :::*                    LISTEN      1145/sshd       

Platform:

$ uname -a
Linux cf05dc8c-dcce-489c-f315-f2e7d20e294d 3.13.0-36-generic #63-Ubuntu SMP Wed Sep 3 21:30:07 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
$ java -version
java version "1.7.0_65"
OpenJDK Runtime Environment (IcedTea 2.5.3) (7u71-2.5.3-0ubuntu0.14.04.1)
OpenJDK 64-Bit Server VM (build 24.65-b04, mixed mode)

Note: java.time.ZoneId was introduced in Java 8, so the NoClassDefFoundError above is expected on the Java 7 runtime shown here; running with a Java 8 JRE should resolve it.

Provide presentation metadata in the JSON payload

In order to render Atlas data via the JSON API correctly (without having to fully understand the query language), presentation metadata (such as colors, area, number of axes, etc.) would be very useful.
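A hypothetical shape for such per-line metadata; the field names are illustrative, not an existing Atlas API:

case class LineMetadata(
  label: String, // legend text
  color: String, // hex color, e.g. "ff0000"
  style: String, // "line", "area", "stack" or "vspan"
  axis: Int      // index of the Y-axis the line is plotted against
)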

server log is confusing for 404s

It should show the status code 404, but is showing an exception instead:

2015-12-17T09:58:30.009 DEBUG [atlas-akka.actor.default-dispatcher-3] com.netflix.spectator.sandbox.HttpLogEntry: null  2015-12-17T09:58:30 1   1   start:1450375110008;logging:1450375110009;complete:1450375110009;   POST    null    http://localhost:7101/api/v1/publish    0:0:0:0:0:0:0:1 -1  -1  null    java.lang.IllegalStateException unexpected response type: spray.routing.Rejected    140 528 0   -1  Remote-Address:0:0:0:0:0:0:0:1;Content-Length:528;Content-Type:application/json;Accept:*/*;Host:localhost:7101;User-Agent:curl/7.37.1;      0   -1

double check if full GC due to metaspace clears up with u40

If we don't pin the size of metaspace, we occasionally see a full GC with a cause of Metadata_GC_Threshold. Though we can work around this by pinning the size, it looks like there are fixes in 8u40 that might make that unnecessary:

http://openjdk.java.net/jeps/156

We no longer see the full GC for a simple test program:

import com.google.common.collect.ImmutableSet;
import com.google.common.reflect.ClassPath;

// Loads every class on the classpath to churn through metaspace and
// trigger a Metadata GC Threshold event.
public class Test {
  public static void main(String[] args) throws Exception {
    ImmutableSet<ClassPath.ClassInfo> classes = ClassPath
      .from(ClassLoader.getSystemClassLoader())
      .getAllClasses();
    int result = 0;
    for (ClassPath.ClassInfo c : classes) {
      try {
        Class<?> cls = c.load();
        result += cls.getName().length();
      } catch (NoClassDefFoundError e) {
        // ignore classes that fail to load
      }
    }
    System.out.println(result);
  }
}

Running with:

jdk1.8.0_$version/bin/java \
  -verbosegc \
  -XX:+PrintGCDetails \
  -XX:+PrintGCTimeStamps \
  -Xloggc:gc.log \
  -XX:+UseG1GC \
  -classpath guava-18.0.jar:. \
  Test

With u25 we saw two full GCs due to metaspace:

$ strings gc.log.25 | grep -i meta
1.033: [Full GC (Metadata GC Threshold)  34M->5778K(20M), 0.1385757 secs]
   [Eden: 24.0M(48.0M)->0.0B(8192.0K) Survivors: 6144.0K->0.0B Heap: 34.8M(962.0M)->5778.5K(20.0M)], [Metaspace: 21013K->21013K(1067008K)]
1.793: [Full GC (Metadata GC Threshold)  35M->8433K(28M), 0.0633967 secs]
   [Eden: 24.0M(24.0M)->0.0B(12.0M) Survivors: 2048.0K->0.0B Heap: 35.8M(48.0M)->8433.8K(28.0M)], [Metaspace: 35282K->35282K(1077248K)]
 Metaspace       used 49639K, capacity 49806K, committed 50176K, reserved 1091584K

It didn't need full GC in u40:

$ strings gc.log.40 | grep -i meta
1.141: [GC pause (Metadata GC Threshold) (young) (initial-mark), 0.0106612 secs]
1.787: [GC pause (Metadata GC Threshold) (young) (initial-mark), 0.0135211 secs]
 Metaspace       used 51432K, capacity 51534K, committed 51740K, reserved 1091584K

So it does seem to work as advertised. Running a canary to confirm with the actual application.

Information on block size

Hi, I need some information regarding block size.
What is the meaning of block size? What are the drawbacks of using a small number of blocks where each has a big size, instead of a larger number of blocks where each block has a small size (in the context of in-memory storage)?
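Not an authoritative answer, but a simplified sketch of the idea: a block is a fixed-size window of consecutive datapoints for one time series.

class Block(startTime: Long, step: Long, size: Int) {
  private val values = Array.fill(size)(Double.NaN)

  // Store a datapoint if its timestamp falls within this block's window.
  def update(timestamp: Long, value: Double): Unit = {
    val i = ((timestamp - startTime) / step).toInt
    if (i >= 0 && i < size) values(i) = value
  }
}

The tradeoff: a few big blocks mean less per-block overhead, but more wasted space for sparse series and coarser units for expiring old data; many small blocks have the opposite profile.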

build fails on OS X

When I run make build on a fresh checkout under OS X (Mavericks w/Oracle Java 1.8) I get:

atlas$ make
project/sbt clean test checkLicenseHeaders
java.io.IOException: No such file or directory
    at java.io.UnixFileSystem.createFileExclusively(Native Method)
    at java.io.File.createNewFile(File.java:1006)
    at xsbt.boot.Locks$.apply0(Locks.scala:34)
    at xsbt.boot.Locks$.apply(Locks.scala:28)
    at xsbt.boot.Launch.locked(Launch.scala:238)
    at xsbt.boot.Launch.app(Launch.scala:147)
    at xsbt.boot.Launch.app(Launch.scala:145)
    at xsbt.boot.Launch$.run(Launch.scala:102)
    at xsbt.boot.Launch$$anonfun$apply$1.apply(Launch.scala:35)
    at xsbt.boot.Launch$.launch(Launch.scala:117)
    at xsbt.boot.Launch$.apply(Launch.scala:18)
    at xsbt.boot.Boot$.runImpl(Boot.scala:41)
    at xsbt.boot.Boot$.main(Boot.scala:17)
    at xsbt.boot.Boot.main(Boot.scala)
Error during sbt execution: java.io.IOException: No such file or directory
make: *** [build] Error 1

publishing

This has several parts:

  • Bindings for servo (internal atlas-plugin lib)
  • Figure out a story for metrics (independent or leverage plugin for things like local alerting)
  • Configuration setup for running the clusters needs to be cleaned up a bit and documented.

More accurate dead letter count

The line here in UnboundedMeteredMailbox (https://github.com/Netflix/atlas/blob/master/atlas-akka/src/main/scala/com/netflix/atlas/akka/UnboundedMeteredMailbox.scala#L64)

  def cleanUp(owner: ActorRef, deadLetters: MessageQueue): Unit = {
    deadLettersCounter.increment(queue.size)
    queue.clear()
  }

attempts to count dead letters, but it will only record those that were already in the mailbox when the actor stops. Dead letters may arrive later than this point, either because their enqueueing raced with the shutdown or because some actors are still sending messages to the reference after termination. In fact, the latter is usually more important to record: if an actor does not properly watch another, the counter will increase monotonically, which the current implementation does not track (at least as far as I can see).

You can tap directly into the dead letters as described here; maybe that is a better approach:

http://doc.akka.io/docs/akka/2.4.0/scala/event-bus.html#Dead_Letters
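A sketch of that approach, subscribing to the event stream so every dead letter is counted, including ones sent after cleanUp ran; the counter name is illustrative:

import akka.actor.{Actor, ActorSystem, DeadLetter, Props}
import com.netflix.spectator.api.Spectator

class DeadLetterMonitor extends Actor {
  private val counter = Spectator.globalRegistry().counter("akka.deadLetters")
  def receive: Receive = {
    case _: DeadLetter => counter.increment()
  }
}

val system = ActorSystem("atlas")
val monitor = system.actorOf(Props[DeadLetterMonitor], "dead-letter-monitor")
system.eventStream.subscribe(monitor, classOf[DeadLetter])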

better compensation for NaN smoothing on aggregations

In some cases consolidation can lead to overlap of intervals with NaN values across shards that unexpectedly inflates the value of the aggregate. As an example:

[image: 6h_at_1m]

[image: 6h_at_10m]

Looking at a view by node makes the behavior clearer:

[image: success_6h_at_1m]

[image: success_6h_at_10m]

In the case with multiple lines in the result set, like the second pair of stacked views, there isn't much we can do. However, when doing an aggregate across them, it would be better if we could avoid the increase. This is already compensated for locally within a shard, but not across shards.
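A sketch of the mechanism, assuming consolidation uses a NaN-skipping average:

// NaN-skipping average used to consolidate, e.g., 1m data to a 10m step.
def consolidate(values: Seq[Double]): Double = {
  val vs = values.filterNot(_.isNaN)
  if (vs.isEmpty) Double.NaN else vs.sum / vs.size
}

// A node that reports 10.0 for five minutes and then goes away:
val window = Seq(10.0, 10.0, 10.0, 10.0, 10.0,
  Double.NaN, Double.NaN, Double.NaN, Double.NaN, Double.NaN)
consolidate(window) // 10.0, not 5.0: the gap is smoothed away

If a replacement node starts up in the same consolidated window on another shard, both report the full value for that window, and summing the consolidated results inflates the aggregate. Within a shard the aggregation sees the raw values, so it can compensate.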

Add Option to Invert Graph Colors

Is there something like a fgcolor and bgcolor for Atlas graphs? I would like to have black graphs and white text for our jumbotron. This would be a nice feature, and possibly useful for people with some degree of color blindness.

notice for 0 entries suppressed

If using exactly the number of lines for the max legend entries:

/api/v1/graph?q=1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1

It shows a suppression message even though nothing was suppressed:

[screenshot: 2016-01-15 12:18 am]
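Presumably the message should be guarded on the count actually being positive; a sketch with illustrative names:

// Only emit a notice when entries were actually dropped from the legend.
def suppressionNotice(numLines: Int, maxLegendEntries: Int): Option[String] = {
  val suppressed = numLines - maxLegendEntries
  if (suppressed > 0) Some(s"$suppressed lines suppressed") else None
}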

It's not clear to me how to get data in

I presume data collection is orthogonal to Atlas, but Atlas does seem to be a storage engine. The getting-started examples are cool because they deliver data that you can play with... but that doesn't answer my question.

Any clue on how to get data in?

Feature request: KairosDB backend for Atlas

It looks to me like Atlas with KairosDB as a datastore backend would be a winning combination. KairosDB has most of OpenTSDB's features while being more flexible in many respects.
Moreover, Netflix is already known as a large contributor to the Cassandra world, and Cassandra is by design the primary datastore for the KairosDB time series database, with a very efficient design for time series data.

plan for 1.5

Goals for 1.5

  • binding for servo to atlas (probably in servo repo)
  • zero to cloud example setup
    • cloudwatch backend
    • cloudtrail backend
    • mirrored in-memory backend
    • all apps (eureka/edda/etc) publish to memory backend
    • regional aggregation cluster combining memory and cloudwatch backends
    • enable dynamic props (archaius) via dynamo for controlling atlas
    • system metric polling
  • experimental implementation of percentile db

Tentative date end of Q1.

can we modify to fetch latest point even when step size is big?

I've noticed we get data from over an hour ago as the last point when we specify a step size of 1h. I'd like the result to always include the last point. It helps especially with Argus, because we want Atlas and Mantis graphs to line up when they represent the same data. In the graphs below, you can see that when we specify a step of 1 hour and a span of 7 days, we get a last result of over 30k, but when we specify a span of 1 hour and a short step, we get a value of less than 20k. The value of less than 20k is the one that is accurate and correct. I would think we want this to be returned by either query if possible.

/api/v1/graph?q=command,FetchEvidenceCommand,:eq,atlasName,com.netflix.argus.mantis.hystrix.HystrixRpsCountEnum,:eq,:and,name,argus__Hystrix__Gauge,:eq,:and,:sum,(,gaugeName,),:by&s=e-7d&step=1h&e=now-12h

[image: 7d]

[image: 6h_at_1h]

[image: 6h_at_1m]
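One plausible explanation, sketched below: graph start and end times are normalized to step boundaries, so with step=1h the newest partial hour is dropped and the last point can lag by up to an hour.

// Align a time down to the nearest step boundary.
def alignToStep(timeMillis: Long, stepMillis: Long): Long =
  (timeMillis / stepMillis) * stepMillis

// With stepMillis = 3600000 (1h), the aligned end can trail "now" by
// almost a full hour, and the last value covers only complete hours.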

connection closed for GET request with empty chunked body

If a GET request is received with an empty chunked body, the server closes the connection with no response:

$ telnet localhost 7101
Trying ::1...
Connected to localhost.
Escape character is '^]'.
GET /api/v1/tags HTTP/1.1
Host: localhost:7101
Transfer-Encoding: chunked

0
Connection closed by foreign host.

With a fixed-size body, it is ignored:

$ telnet localhost 7101
Trying ::1...
Connected to localhost.
Escape character is '^]'.
GET /api/v1/tags HTTP/1.1
Host: localhost:7101
Content-Length: 1

0
HTTP/1.1 200 OK
Server: atlas/1.3
Date: Mon, 08 Feb 2016 19:27:46 GMT
Content-Type: application/json
Content-Length: 76

["name","nf.app","nf.asg","nf.cluster","nf.node","statistic","type","type2"]

Ideally, the chunked behavior would be consistent with a fixed-length body.

support an option to allow data range used to vary from display

From @mbossenbroek:

Is there a way to specify different durations for the display and the data used in a graph? For example, because this graph has a 24h trend, the first 24 hours are always blank. Is there a way to tell it to fetch prior data (or display a truncated period)?

Sample:

/api/v1/graph?q=name,sps,:eq,:sum,:dup,6h,:trend&s=e-2d

[image: trend]
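A sketch of the requested behavior, with illustrative names: fetch enough extra history to warm up the trailing window, then crop it from what is drawn.

case class TimeRange(start: Long, end: Long)

// Fetch window = display window extended back by the trend duration.
def fetchRange(display: TimeRange, warmupMillis: Long): TimeRange =
  display.copy(start = display.start - warmupMillis)

// For a 6h trend over s=e-2d, fetch 2d + 6h of data and render only the
// last 2d, so there is no blank lead-in.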

Question: Atlas Http Api

Hello @brharrington ,

I have a question about the Atlas HTTP API. Is there an API which supports the following pattern or something similar?

Request -> Takes metric name, tags, start and end time.
Response -> Returns metric name, tags, datapoints as a JSON object

detail long term plan for offset variants

We have considered the list variant deprecated for a while, but it might be more convenient for some use-cases. See the discussion in #62. When we get around to advertising the deprecation and trying to phase it out, we should have a good story around it.

Regular expression match not supported in memory backend

When running the sample data I can match things like http://localhost:7101/api/v1/graph?q=name,DiscoveryStatus.*,:re,:sum just fine, but when running the memory backend on my own data :re seems to only accept whole string matches.

For example, http://localhost:7101/api/v1/graph?s=e-8h&q=prefix,prd.core-testnet-001,:re works, but http://localhost:7101/api/v1/graph?s=e-8h&q=prefix,prd.core-testnet,:re (just trying to match the start of the string) doesn't. I also tried .* and \d+ for the numbers, but no success.

I haven't had a chance to work up a test case with fake data going into the memory backend yet, so I thought I'd open this up to ask if it's even expected to work.

Production Setup

Hello,

Besides memory, is there or will there be any other storage option available in 1.4 or 1.5?
I would like to have the same setup described here: https://camo.githubusercontent.com/57ddedea3c60f776f5634474e70da232b64efee1/687474703a2f2f6e6574666c69782e6769746875622e696f2f61746c61732f696d616765732f77696b692f61746c61735f73746f726167655f61745f6e6574666c69782e737667
Obviously the retentions will be different - we have a smaller volume. We would probably be able to keep 1w of data in memory.
Is EMR in the diagram used to roll up to coarser time units? Are there any other options for that? Ideally I don't want to deal with that at this stage; I think with our volume we should be okay with a simple rollup algorithm running directly on the S3 objects.

I can't find any documentation at all for this kind of setup.

More context on my project:

We are currently using spring-boot and metrics3 directly to collect metrics. metrics3 forwards them to Graphite. We are moving to Amazon ECS, and autoscaling is increasingly being used, so the fact that we have the instanceId (containerId) in the metric names is not a very good match for Whisper.

Now I'm looking to migrate to spring-cloud-netflix with Spectator and Atlas. In order to run Atlas in production, I need to be able to store historical data and make it available.

Thank you
