logbee / keyscore

License: Apache License 2.0

Scala 58.92% HTML 0.63% JavaScript 0.59% TypeScript 37.72% Shell 0.03% CSS 1.50% Groovy 0.62%
akka analytics angular big-data data-analysis data-mining flow ngrx pipeline pipelines pipelining rest-api scala typescript

keyscore's People

Contributors

endallbatan93, jbaumgartl, kkdh, maxkarthan, mlandth, reimarstier, zedharper

keyscore's Issues

Rewrite of the descriptor-api to improve usability for developers

In my opinion, writing descriptors for sources, filters and sinks is a pain. There are multiple reasons why we should rewrite the descriptor-api.

Problem 1: Mix of parameter definition and localization

Mixing the parameter definition and the translation of the name and description pollutes the code and makes it hard to read. Additionally, creating a map of descriptors (to support multiple languages) makes it even worse. In the example below, taken from the AddFieldsFilterLogic, a lot of the code is related to localization rather than to defining parameters:

private val filterName = "io.logbee.keyscore.agent.pipeline.contrib.filter.AddFieldsFilterLogic"
private val bundleName = "io.logbee.keyscore.agent.pipeline.contrib.filter.AddFieldsFilter"
private val filterId = "1a6e5fd0-a21b-4056-8a4a-399e3b4e7610"

override def describe: MetaFilterDescriptor = {
  val descriptorMap = mutable.Map.empty[Locale, FilterDescriptorFragment]
  descriptorMap ++= Map(
    Locale.ENGLISH -> descriptor(Locale.ENGLISH),
    Locale.GERMAN -> descriptor(Locale.GERMAN)
  )
  MetaFilterDescriptor(fromString(filterId), filterName, descriptorMap.toMap)
}

private def descriptor(language: Locale): FilterDescriptorFragment = {
  val translatedText: ResourceBundle = ResourceBundle.getBundle(bundleName, language)
  FilterDescriptorFragment(
    displayName = translatedText.getString("displayName"),
    description = translatedText.getString("description"),
    previousConnection = FilterConnection(true),
    nextConnection = FilterConnection(true),
    parameters = List(
      MapParameterDescriptor("fieldsToAdd", translatedText.getString("fieldsToAddName"), translatedText.getString("fieldsToAddDescription"),
        TextParameterDescriptor("fieldName", translatedText.getString("fieldKeyName"), translatedText.getString("fieldKeyDescription")),
        TextParameterDescriptor("fieldValue", translatedText.getString("fieldValueName"), translatedText.getString("fieldValueDescription"))
      )
    ))
}

Problem 2: Extraction of parameter values from a given configuration

To extract the parameter values from a given configuration, one has to search through the configuration and check each parameter by its name, given as a string. This approach is error-prone and requires a lot of boilerplate code, as shown in the example below, which is also taken from the AddFieldsFilterLogic:

for (parameter <- configuration.parameters) {
  parameter.name match {
    case "fieldsToAdd" =>
      val dataMap = parameter.value.asInstanceOf[Map[String, String]]
      dataToAdd ++= dataMap.map(pair => (pair._1, TextField(pair._1, pair._2)))
    case _ =>
  }
}
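
A typed lookup on the configuration would remove most of this boilerplate. Below is a minimal sketch of what such an accessor could look like; the ParameterRef type and the findValue helper are assumptions for illustration, not part of the current API:

import scala.reflect.ClassTag

// Hypothetical typed configuration access; ParameterRef and findValue are illustrative only.
case class ParameterRef(id: String)

case class Configuration(parameters: Map[ParameterRef, Any]) {

  // Returns the referenced parameter value if it is present and has the expected type.
  def findValue[T](ref: ParameterRef)(implicit tag: ClassTag[T]): Option[T] =
    parameters.get(ref) match {
      case Some(value) if tag.runtimeClass.isInstance(value) => Some(value.asInstanceOf[T])
      case _ => None
    }
}

// Usage: no string matching and no casting at the call site.
val fieldsToAdd = ParameterRef("fieldsToAdd")
val configuration = Configuration(Map(fieldsToAdd -> Map("source" -> "syslog")))
val dataToAdd = configuration.findValue[Map[String, String]](fieldsToAdd).getOrElse(Map.empty)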

Problem 3: Automatic configuration checking

There is no automatic check whether a given configuration suits a certain logic. The developer of a filter has to write such a check on their own, or the filter crashes at runtime due to a missing parameter.
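
A minimal sketch of such a check, assuming the descriptor can expose its set of required parameters (the types below are placeholders, not the current model); the frontier or the agent could run it before a filter is instantiated:

// Hypothetical automatic check: every parameter a descriptor declares as required
// must be present in the configuration before the filter logic is created.
case class ParameterRef(id: String)
case class Configuration(parameters: Map[ParameterRef, Any])

def validate(required: Seq[ParameterRef], configuration: Configuration): Either[Seq[ParameterRef], Configuration] = {
  val missing = required.filterNot(configuration.parameters.contains)
  if (missing.isEmpty) Right(configuration) else Left(missing)
}

// A Left lists the missing parameters, so the configuration can be rejected with a
// meaningful error instead of crashing at runtime inside the filter.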

Suggestions

  • It should be possible to translate categories.
  • Greater variety of ParameterDescriptors:
    • e.g. a separate descriptor for:
      • regular expressions (patterns)
      • numbers with unit and precision information.
      • field names with information about whether they have to be present or absent in a dataset.
    • The UI could use this additional information to render specialized UI controls, which would in turn enhance the user experience.

Proposal

Scala Example

import io.logbee.keyscore.model.FieldNameHint.PresentField
import io.logbee.keyscore.model.PatternType.Grok
import io.logbee.keyscore.model._
import io.logbee.keyscore.model.localization.{Locale, Localization, TextRef}

val EN = Locale("en", "US")
val DE = Locale("de/DE")

val filterDisplayName = TextRef("displayName")
val filterDescription = TextRef("description")
val category = TextRef("aa5de1cd-1122-758f-97fa-228ca8911378")

val parameterARef = ParameterRef("37024d8b-4aec-4b3e-8074-21ef065e5ee2")
val parameterADisplayName = TextRef("parameterADisplayName")
val parameterADescription = TextRef("parameterADescription")

//      val parameterBRef = ParameterRef("ff543cab-15bf-114a-47a1-ce1f065e5513")
val parameterBDisplayName = TextRef("parameterBDisplayName")
val parameterBDescription = TextRef("parameterBDescription")

val parameterCRef = ParameterRef("b7cc9c84-ae6e-4ea3-bbff-f8d62af4caed")
//      val parameterDRef = ParameterRef("5f28c6dd-f88f-4530-afd1-c8b946bc5406")

val descriptor = Descriptor(
  id = "1a6e5fd0-a21b-4056-8a4a-399e3b4e7610",
  describe = FilterDescriptor(
    name = "io.logbee.keyscore.agent.pipeline.contrib.filter.AddFieldsFilterLogic",
    displayName = filterDisplayName,
    description = filterDescription,
    category = category,
    parameters = Seq(
      TextParameterDescriptor(parameterARef, ParameterInfo(parameterADisplayName, parameterADescription), defaultValue = "Hello World", validator = StringValidator("Hello*", PatternType.Glob)),
      BooleanParameterDescriptor(parameterCRef, ParameterInfo(TextRef("parameterDDisplayName"), TextRef("parameterDDescription")), defaultValue = true),
      ConditionalParameterDescriptor(condition = BooleanParameterCondition(parameterCRef, negate = true), parameters = Seq(
        PatternParameterDescriptor("98276284-a309-4f21-a0d8-50ce20e3376a", patternType = Grok),
        ListParameterDescriptor("ff543cab-15bf-114a-47a1-ce1f065e5513",
          ParameterInfo(parameterBDisplayName, parameterBDescription),
          FieldNameParameterDescriptor(hint = PresentField, validator = StringValidator("^_.*", PatternType.RegEx)),
          min = 1, max = Int.MaxValue)
      )),
      ChoiceParameterDescriptor("e84ad685-b7ad-421e-80b4-d12e5ca2b4ff", min = 1, max = 1, choices = Seq(
        Choice("red"),
        Choice("green"),
        Choice("blue")
      ))
    )
  ),
  localization = Localization(Set(EN, DE), Map(
    filterDisplayName -> Map(
      EN -> "Add Fields",
      DE -> "Feld Hinzufuegen"
    ),
    filterDescription -> Map(
      EN -> "Adds the specified fields.",
      DE -> "Fuegt die definierten Felder hinzu."
    ),
    category -> Map(
      EN -> "Source",
      DE -> "Quelle"
    ),
    parameterADisplayName -> Map(
      EN -> "A Parameter",
      DE -> "Ein Parameter"
    ),
    parameterADescription -> Map(
      EN -> "A simple text parameter as example.",
      DE -> "Ein einfacher Textparameter als Beispiel."
    ),
    parameterBDisplayName -> Map(
      EN -> "A Parameter",
      DE -> "Ein Parameter"
    ),
    parameterBDescription -> Map(
      EN -> "A simple text parameter as example.",
      DE -> "Ein einfacher Textparameter als Beispiel."
    )
  ))
)

JSON Output

{
  "jsonClass": "io.logbee.keyscore.model.Descriptor",
  "id": "1a6e5fd0-a21b-4056-8a4a-399e3b4e7610",
  "describe": {
    "jsonClass": "io.logbee.keyscore.model.FilterDescriptor",
    "name": "io.logbee.keyscore.agent.pipeline.contrib.filter.AddFieldsFilterLogic",
    "displayName": {
      "id": "displayName"
    },
    "description": {
      "id": "description"
    },
    "category": {
      "id": "aa5de1cd-1122-758f-97fa-228ca8911378"
    },
    "parameters": [
      {
        "jsonClass": "io.logbee.keyscore.model.TextParameterDescriptor",
        "ref": {
          "id": "37024d8b-4aec-4b3e-8074-21ef065e5ee2"
        },
        "info": {
          "displayName": {
            "id": "parameterADisplayName"
          },
          "description": {
            "id": "parameterADescription"
          }
        },
        "defaultValue": "Hello World",
        "validator": {
          "pattern": "Hello*",
          "patternType": "Glob"
        }
      },
      {
        "jsonClass": "io.logbee.keyscore.model.BooleanParameterDescriptor",
        "ref": {
          "id": "b7cc9c84-ae6e-4ea3-bbff-f8d62af4caed"
        },
        "info": {
          "displayName": {
            "id": "parameterDDisplayName"
          },
          "description": {
            "id": "parameterDDescription"
          }
        },
        "defaultValue": true
      },
      {
        "jsonClass": "io.logbee.keyscore.model.ConditionalParameterDescriptor",
        "ref": {
          "id": ""
        },
        "condition": {
          "jsonClass": "io.logbee.keyscore.model.BooleanParameterCondition",
          "parameter": {
            "id": "b7cc9c84-ae6e-4ea3-bbff-f8d62af4caed"
          },
          "negate": true
        },
        "parameters": [
          {
            "jsonClass": "io.logbee.keyscore.model.PatternParameterDescriptor",
            "ref": {
              "id": "98276284-a309-4f21-a0d8-50ce20e3376a"
            },
            "patternType": "Grok",
            "defaultValue": ""
          },
          {
            "jsonClass": "io.logbee.keyscore.model.ListParameterDescriptor",
            "ref": {
              "id": "ff543cab-15bf-114a-47a1-ce1f065e5513"
            },
            "info": {
              "displayName": {
                "id": "parameterBDisplayName"
              },
              "description": {
                "id": "parameterBDescription"
              }
            },
            "kind": {
              "jsonClass": "io.logbee.keyscore.model.FieldNameParameterDescriptor",
              "ref": {
                "id": ""
              },
              "defaultValue": "",
              "hint": "PresentField",
              "validator": {
                "pattern": "^_.*",
                "patternType": "RegEx"
              }
            },
            "min": 1,
            "max": 2147483647
          }
        ]
      },
      {
        "jsonClass": "io.logbee.keyscore.model.ChoiceParameterDescriptor",
        "ref": {
          "id": "e84ad685-b7ad-421e-80b4-d12e5ca2b4ff"
        },
        "min": 1,
        "max": 1,
        "choices": [
          {
            "name": "red"
          },
          {
            "name": "green"
          },
          {
            "name": "blue"
          }
        ]
      }
    ]
  },
  "localization": {
    "locales": [
      {
        "language": "en",
        "country": "US"
      },
      {
        "language": "de",
        "country": "DE"
      }
    ],
    "mapping": {
      "parameterADisplayName": {
        "translations": {
          "en/US": "A Parameter",
          "de/DE": "Ein Parameter"
        }
      },
      "description": {
        "translations": {
          "en/US": "Adds the specified fields.",
          "de/DE": "Fuegt die definierten Felder hinzu."
        }
      },
      "parameterBDisplayName": {
        "translations": {
          "en/US": "A Parameter",
          "de/DE": "Ein Parameter"
        }
      },
      "aa5de1cd-1122-758f-97fa-228ca8911378": {
        "translations": {
          "en/US": "Source",
          "de/DE": "Quelle"
        }
      },
      "parameterADescription": {
        "translations": {
          "en/US": "A simple text parameter as example.",
          "de/DE": "Ein einfacher Textparameter als Beispiel."
        }
      },
      "displayName": {
        "translations": {
          "en/US": "Add Fields",
          "de/DE": "Feld Hinzufuegen"
        }
      },
      "parameterBDescription": {
        "translations": {
          "en/US": "A simple text parameter as example.",
          "de/DE": "Ein einfacher Textparameter als Beispiel."
        }
      }
    }
  }
}

startContainers task fails when containers are already in use

When the Gradle task "startContainers" is started while one or more instances of those containers are already running, the task fails.
Currently, the user has to stop and remove those containers manually before executing the task again.

Possible Solution:
Stop and remove all specified containers automatically before executing the startContainers task.

Filter throughput time computation

The throughput time is a good metric to evaluate the performance of KEYSCORE. The ValveStage is already prepared to compute the throughput time of a filter in front of a valve and the throughput time from the beginning of the pipeline to a valve.

  • The ValveStage uses the MovingMedian, but this class is currently not implemented (see the sketch after this list).
  • The ValveState already contains the throughput time and the total throughput time.
  • The FilterController has to request the throughput time from the valves in question.
  • The FilterState has to be enhanced to carry the throughput time and the total throughput time.
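
A minimal sketch of what the MovingMedian could look like, assuming a fixed-size window over the most recent throughput samples; the interface the ValveStage actually needs may differ:

// Hypothetical MovingMedian: keeps the last `windowSize` samples and returns
// the median of the current window.
class MovingMedian(windowSize: Int = 10) {

  private var samples = Vector.empty[Long]

  def +=(sample: Long): Unit = {
    samples = (samples :+ sample).takeRight(windowSize)
  }

  def get: Long = {
    if (samples.isEmpty) 0
    else samples.sorted.apply(samples.size / 2)
  }
}

// Usage inside a valve: record the duration of every passing dataset and read the median.
val throughputTime = new MovingMedian()
throughputTime += 42
throughputTime += 37
val medianThroughputTime = throughputTime.get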

When a SeedNode is restarted it creates a new cluster instead of joining the old one

Possible solution taken from https://groups.google.com/forum/#!topic/akka-user/z0y1kvcY97I:

One way is that in your seed node subscribe to cluster membership changes and write the current set of nodes to a file. When you restart the seed node you construct the list of seed nodes from the file, and include your own address as the first element in the seed-nodes list. Then it will first try to join the other nodes, before joining itself.
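
A sketch of the first half of that suggestion, assuming the classic Akka Cluster API; reading the file back on restart and building the seed-nodes list from it is left out:

import java.nio.charset.StandardCharsets
import java.nio.file.{Files, Paths}

import akka.actor.{Actor, ActorLogging, Props}
import akka.cluster.Cluster
import akka.cluster.ClusterEvent.{InitialStateAsEvents, MemberEvent, MemberRemoved, MemberUp}

// Hypothetical listener: writes the current cluster members to a file whenever the
// membership changes, so a restarted seed node can read them back as seed nodes.
class MemberListWriter(file: String) extends Actor with ActorLogging {

  private val cluster = Cluster(context.system)
  private var members = Set.empty[String]

  override def preStart(): Unit =
    cluster.subscribe(self, initialStateMode = InitialStateAsEvents, classOf[MemberEvent])

  override def postStop(): Unit =
    cluster.unsubscribe(self)

  override def receive: Receive = {
    case MemberUp(member) =>
      members += member.address.toString
      persist()
    case MemberRemoved(member, _) =>
      members -= member.address.toString
      persist()
    case _: MemberEvent => // other membership events are not relevant here
  }

  private def persist(): Unit =
    Files.write(Paths.get(file), members.mkString("\n").getBytes(StandardCharsets.UTF_8))
}

object MemberListWriter {
  def props(file: String): Props = Props(new MemberListWriter(file))
}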

Multiple instances of a Pipeline

With the current implementation it is not possible to launch multiple instances of the same
Pipeline(Configuration). But there are use cases where multiple pipelines could run
in parallel to speed up data processing.

Node-Downing with keep-majority strategy

A fundamental problem in distributed systems is that network partitions (split brain scenarios) and machine crashes are indistinguishable for the observer [...] Temporary and permanent failures are indistinguishable because decisions must be made in finite time, and there always exists a temporary failure that lasts longer than the time limit for the decision.
[...]
The Akka cluster has a failure detector that will notice network partitions and machine crashes (but it cannot distinguish the two).
[...]
The failure detector in itself is not enough for making the right decision in all situations. The naive approach is to remove an unreachable node from the cluster membership after a timeout. This works great for crashes and short transient network partitions, but not for long network partitions. Both sides of the network partition will see the other side as unreachable and after a while remove it from its cluster membership. Since this happens on both sides the result is that two separate disconnected clusters have been created.

Python Integration

One of the most used programming languages in data analytics is Python. That's why it is very important for KS to get a Python integration. This integration can be split into two problems.

Problem 1: Running Python Filters

There are a couple of algorithms already written in Python, and a user of keyscore, for whom it doesn't matter in which programming language a filter is written, wants to use these algorithms as filters in her pipelines the same way as any other filter. So KS has to make this transparent to the user: there should be no difference between filters written in different programming languages.

Problem 2: Developing Python Filters

Another user of keyscore, a filter developer, wants to write her algorithm in Python. Therefore KS has to provide an environment in which a Python developer can implement sources, filters and sinks in Python. And she needs a mechanism to plug these custom filters into KS.

Suggestions

I'm going to track these two problems in separate issues:

Enhance ParameterComponent

  • adjust the parameter-map to fit the same style as the parameter list (AddFieldsFilter View)
  • improve the parameter-list and parameter-map with fitting descriptions
  • integrate Feedback when duplicates are added or fields do not exist

Custom icons needed for blocks etc.

These icons are for keyscore-manager:

TypeIcons (field value types)

  1. TextValue
  2. NumberValue
  3. BooleanValue
  4. DecimalValue
  5. TimestampValue
  6. DurationValue

BlockTypeIcons

  1. Filter
  2. Sink
  3. Source
  4. Merge
  5. Branch

StandardBlockIcons

  1. AddFields
  2. CounterWindow
  3. CSVParser
  4. D3Box
  5. DifferentialQuotient
  6. DropMessage
  7. Fingerprint
  8. GrokFilter
  9. JsonExtractor
  10. LoggerFilter
  11. RemoveFields
  12. RetainFields
  13. Kafka
  14. Elastic

Evaluate Apache Avro as an alternative to Protobuf

Apache Avro is a data serialization system. Avro provides:

  • Rich data structures.
  • A compact, fast, binary data format.
  • A container file, to store persistent data.
  • Remote procedure call (RPC).
  • Simple integration with dynamic languages. Code generation is not required to read or write data files nor to use or implement RPC protocols. Code generation as an optional optimization, only worth implementing for statically typed languages.

Scala
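
For the Scala side, a first impression of Avro's generic (schema-at-runtime) API; this is only an evaluation sketch, not a proposed data model:

import java.io.ByteArrayOutputStream

import org.apache.avro.Schema
import org.apache.avro.generic.{GenericData, GenericDatumReader, GenericDatumWriter, GenericRecord}
import org.apache.avro.io.{DecoderFactory, EncoderFactory}

// A record schema roughly comparable to a keyscore text field: a name and a value.
val schema = new Schema.Parser().parse(
  """{
    |  "type": "record",
    |  "name": "TextField",
    |  "fields": [
    |    {"name": "name",  "type": "string"},
    |    {"name": "value", "type": "string"}
    |  ]
    |}""".stripMargin)

// Build a record without generated classes.
val record = new GenericData.Record(schema)
record.put("name", "message")
record.put("value", "Hello World")

// Serialize to Avro's compact binary format.
val out = new ByteArrayOutputStream()
val encoder = EncoderFactory.get().binaryEncoder(out, null)
new GenericDatumWriter[GenericRecord](schema).write(record, encoder)
encoder.flush()

// Deserialize again; note that Avro returns strings as org.apache.avro.util.Utf8.
val decoder = DecoderFactory.get().binaryDecoder(out.toByteArray, null)
val decoded = new GenericDatumReader[GenericRecord](schema).read(null, decoder)
println(decoded.get("value").toString) // Hello World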

Python

Misc

Release v0.2.0

Release

The target of the second release is to stabilize existing features and clean up the code base in preparation for the upcoming features.

Details

  • Protobuf-based data model
  • Rewrite of the descriptor-api [#5]
  • Rewrite of the PipelineConfiguration (aka. linked Filters)
    ...

Checklist:

  • Update RELEASES.adoc
  • Update README.adoc
  • Check documentation
  • Check examples
  • Create git-tag
  • Publish artifacts
  • Increment version

A node should persist the list of nodes it was connected to, to re-join the cluster even when a SeedNode is not reachable

Possible solution taken from https://groups.google.com/forum/#!topic/akka-user/z0y1kvcY97I:

One way is that in your seed node subscribe to cluster membership changes and write the current set of nodes to a file. When you restart the seed node you construct the list of seed nodes from the file, and include your own address as the first element in the seed-nodes list. Then it will first try to join the other nodes, before joining itself.

The original issue #4 was resolved by a workaround. After we introduce mechanisms to persist the state of agents and frontiers, we can rework the issue.

KafkaSource and KafkaSink are not working anymore

KafkaSource and KafkaSink are not working anymore.

KafkaSource stops with: "Message [akka.kafka.KafkaConsumerActor$Internal$Stop$] without sender to Actor[akka://keyscore/system/kafka-consumer-2#2056765281] was not delivered. [3] dead letters encountered."

KafkaSink stops with: "KafkaProducer - Closing the Kafka producer with timeoutMillis = 60000 ms."

I tried combining KafkaSource with StdOutSink and KafkaSink with HttpSource, which produces the exact same errors, while a stream with HttpSource and StdOutSink works just as expected.

Make the Description and the Configuration Collapsible

  • add a collapse button to the description and the configuration component so the user can hide elements he doesn't need

  • after a collapse action, the rest of the UI has to rearrange and use the freed space

  • evaluate whether it's possible to merge these two components

Gradle startContainers fails due to quay exceptions

The task 'gradle startContainers' fails locally with the following exceptions:

  1. UnauthorizedException: "message":"Get https://docker.elastic.co/v2/elasticsearch/elasticsearch-oss/manifests/6.2.4: unauthorized: authentication required"

which is followed by:

  1. InternalServerErrorException: {"message":"Get https://quay.io/v2/logbee/docker-kafka/manifests/latest: error parsing HTTP 429 response body: invalid character 'T' looking for beginning of value: "Too many login attempts. \nPlease reset your Quay password and try again.""}

This seems only to be a problem when running the task locally. It works in Travis.

Configuration Store

The current workflow is that a user creates a PipelineConfiguration within the KS:M.
The configuration is sent to the KS:F and from there to an agent, which materializes the
configuration into several stages and starts them. So there is currently no way
to just store a PipelineConfiguration without starting a pipeline.

KafkaSink Descriptor cannot be translated

The Agent is not able to translate the KafkaSink Descriptor. When registering the KafkaSink extension, the Agent throws a

java.util.MissingResourceException: Can't find resource for bundle java.util.PropertyResourceBundle, key category2

In Localization.scala, on line 37, it tries to get the TextRef(category2) string from the bundle, which is not there.
But I have to admit that I have no idea why it even tries to resolve the category2 key.

Search function for Datasets

  • add the functionality to search through datasets, whether searching for keys, values or everything

  • the UI element for the search should be placed in the datasets-visualizer component

REST endpoint to delete all existing Pipelines at once

Delete all running pipelines and their configurations

Request:

Method: DELETE
URL: /pipeline/configuration/*
Body: <empty>

Response

.Case: Deletion was successful.

  • Code: 200 OK
  • Content: <empty>

.Case: Something went wrong.

  • Code: 500 INTERNAL SERVER ERROR
  • Content: <empty>

Delete all running pipelines but keep their configurations

Request:

Method: DELETE
URL: /pipeline/instances/*
Body: <empty>

Response

.Case: Deletion was successful.

  • Code: 200 OK
  • Content: <empty>

.Case: Something went wrong.

  • Code: 500 INTERNAL SERVER ERROR
  • Content: <empty>
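
A sketch of what the two endpoints could look like, assuming the frontier uses Akka HTTP; the PipelineManager trait and its operations are placeholders, not the existing API:

import akka.http.scaladsl.model.StatusCodes
import akka.http.scaladsl.server.Directives._
import akka.http.scaladsl.server.Route

import scala.concurrent.Future
import scala.util.{Failure, Success}

// Hypothetical operations provided by the pipeline management of the KS:F.
trait PipelineManager {
  def deleteAllConfigurations(): Future[Unit]
  def stopAllInstances(): Future[Unit]
}

// Sketch of the two DELETE endpoints described above.
def pipelineRoutes(manager: PipelineManager): Route =
  pathPrefix("pipeline") {
    concat(
      path("configuration" / "*") {
        delete {
          onComplete(manager.deleteAllConfigurations()) {
            case Success(_) => complete(StatusCodes.OK)
            case Failure(_) => complete(StatusCodes.InternalServerError)
          }
        }
      },
      path("instances" / "*") {
        delete {
          onComplete(manager.stopAllInstances()) {
            case Success(_) => complete(StatusCodes.OK)
            case Failure(_) => complete(StatusCodes.InternalServerError)
          }
        }
      }
    )
  }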

Sidemenu

The Keyscore-Manager navigation should be changed to a sidemenu.

A common header bar for all pages of the web-ui

Problem

Every main component shows some kind of title or similar information, but every component does it in a different way.

Solution

Build a header bar as an Angular dumb component. The other components can use the header bar to show information like:

  • a title
  • a loading bar
  • bread crumbs
  • ...

Note: If a component is very special or does not have similar information to display, it doesn't have to use the common header bar.

Manual node-downing

The KS:F should offer a REST endpoint to remove an unresponsive agent from the cluster.
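
A sketch of such an endpoint, assuming Akka HTTP for the route and the classic Cluster extension for downing; the URL and the parameter name are only assumptions:

import akka.actor.{ActorSystem, Address, AddressFromURIString}
import akka.cluster.Cluster
import akka.http.scaladsl.model.StatusCodes
import akka.http.scaladsl.server.Directives._
import akka.http.scaladsl.server.Route

// Hypothetical endpoint: DELETE /cluster/member?address=akka.tcp://keyscore@host:2552
// marks the given member as down so the cluster can remove it.
def downingRoute(system: ActorSystem): Route =
  path("cluster" / "member") {
    delete {
      parameter("address") { address =>
        val member: Address = AddressFromURIString(address)
        Cluster(system).down(member)
        complete(StatusCodes.OK)
      }
    }
  }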

New DatasetVisualisation Component

  • build a new component which manages to display Input and Output Datasets next to each other
  • with only one dataset switch element to swipe through them
  • table cells have to match
  • the table cell where something changed is highlighted
  • there has to be a mechanism to abbreviate messages that are too long, with "..." or something similar
  • replace the Datatype column with icons displayed near the field name, with a tooltip
  • sort the fields alphabetically
  • add different sorting, for example when clicking on the table head

Show 404 if edit pipeline url is called with wrong UUID

Currently, if the user navigates to /pipelines/pipeline/uuid and the given pipeline does not exist, a new pipeline is created.

Showing a 404 page and redirecting the user to /pipelines/pipeline might be the better approach.

Strange behaviour in the Integration Test

When inserting 2 datasets into the first pipeline, the expected count of datasets extracted from the filter of the second pipeline is 2, but it is actually 3.
The problem does not extend to the actual dataflow in the pipeline.
Only the extraction of datasets from a filter is not as expected.

Release v0.1.0

Release

The goal of the first release is to illustrate the functioning of KS and the interaction of all technologies.

Summary:

  • Distributed Processing:
    • KS:A and KS:F running in distributed environment.
    • KS:F offers REST endpoints to create/configure/delete streams.
    • KS:A runs streams.
  • Simple set of filters:
    • There are a Source and a Sink to read from Kafka and write to Kafka.
    • There are filters to do simple transformations (grok) of log messages.
  • Container images: All parts of KS are provided as container images on quay.io

Checklist:

  • Update RELEASES.adoc
  • Update README.adoc
  • Check documentation
  • Increment version
  • Check examples
  • Create git-tag
  • Publish artifacts

Avoid an „airplane cockpit“-like UI with an LOD-based approach

Problem

More and more features and their components get integrated into the WebUI of keyscore. Some of them assist the user in monitoring the system and provide information about the system‘s health and performance. Other components allow the user to tweak every aspect of the system in detail.

This can lead to a bloated UI where most users won‘t find what they are looking for. We call this an airplane cockpit: the UI offers too many levers and buttons and displays too much information at the same time.

Proposal

To solve the problem described above, the UI could offer the possibility to reduce or increase the level of detail. Think of it as looking at a picture of a lovely forest with many trees, a small lake and great mountains in the background. If you stand far away from the picture, you see that there is a forest, some mountains and so on. If you get a bit closer, you notice that it isn’t just a big forest: there are many different trees and some geese on the small lake. If you get even closer, you will spot the leaves of the trees and the feathers of the geese. So the amount of detail you see depends on your distance to the picture.

With this in mind, we build a UI where the user can increase or reduce the amount of information, just as he/she changes the distance to a picture. To implement this, we define a range, e.g. [1..9]. Then we assign each UI element to one of the levels. A UI element is visible if it belongs to the currently selected level or below.

This approach enables users to decide on their own how detailed and complex the UI should be. An experienced user can set a high LOD to tweak every detail of KEYSCORE. A new user, on the other hand, can set a low LOD and only gets the most important UI elements displayed, to get oriented quickly.

Add the possibility to change the base URL to the keyscore-frontier in the web-ui

The base-url is currently stored in the conf/application.conf file of the keyscore-manager, which gets packed into the Docker image during the build. Therefore it is not easily possible to change the base-url after deployment.

To enhance the keyscore-manager, a modal settings dialog could be implemented where the user can set the base-url to the backend of his choice. These settings could be stored in a cookie.

Enhance Parameter Component

  • adjust the parameter-map to fit the same style as the parameter list (AddFieldsFilter View)
  • improve the parameter-list and parameter-map with fitting descriptions
  • integrate Feedback when duplicates are added or fields do not exist ParameterList
  • integrate Feedback when duplicates are added or fields do not exist ParameterMap

Building a Stream

Currently a stream cannot be built, because in the FilterManager the methods

getSinkStageLogicConstructor,
getSourceStageLogicConstructor,
getFilterStageLogicConstructor

fail: the logicClass.getConstructor(...) call throws a NoSuchMethodException.

logicClass.getConstructors() returns an empty array.
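
A small reflection snippet that can help to narrow this down; getConstructors() only returns public constructors, so comparing it with getDeclaredConstructors() shows whether the logic class has a non-public constructor or a different parameter list than expected (this is only a diagnostic sketch):

// Hypothetical diagnostic: print all constructors of a logic class, public or not,
// to see which parameter types the reflective lookup would have to match.
def dumpConstructors(logicClass: Class[_]): Unit = {
  println(s"Constructors of ${logicClass.getName}:")
  logicClass.getDeclaredConstructors.foreach { constructor =>
    val params = constructor.getParameterTypes.map(_.getName).mkString(", ")
    println(s"  ${java.lang.reflect.Modifier.toString(constructor.getModifiers)} ($params)")
  }
  if (logicClass.getConstructors.isEmpty)
    println("  -> no public constructors; getConstructor(...) can only find public ones")
}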
