googlecloudplatform / terraform-splunk-log-export

Deploy Google Cloud log export to Splunk using Terraform

Home Page: https://cloud.google.com/architecture/deploying-production-ready-log-exports-to-splunk-using-dataflow

License: Apache License 2.0

Language: HCL (100%)
Topics: google-cloud-platform, gcp, dataflow, pubsub, splunk, splunk-hec

terraform-splunk-log-export's Introduction

Terraform templates for Google Cloud log export to Splunk

Terraform scripts for deploying log export to Splunk, per the Google Cloud reference guide:
Deploying production-ready log exports to Splunk using Dataflow.

Resources created include an optional Cloud Monitoring custom dashboard to monitor your log export operations. For more details on custom metrics in the Splunk Dataflow template, see New observability features for your Splunk Dataflow streaming pipelines.

These deployment templates are provided as is, without warranty. See Copyright & License below.

Architecture Diagram

Architecture Diagram of Log Export to Splunk

Terraform Module

Inputs

  • dataflow_job_name (string, required): Dataflow job name. No spaces.
  • log_filter (string, required): Log filter to use when exporting logs.
  • network (string, required): Network to deploy into.
  • project (string, required): Project ID to deploy resources in.
  • region (string, required): Region to deploy regional resources into. This must match the subnet's region if deploying into an existing network (e.g. Shared VPC). See the subnet parameter below.
  • splunk_hec_url (string, required): Splunk HEC URL to write data to. Example: https://[MY_SPLUNK_IP_OR_FQDN]:8088
  • create_network (bool, default: false): Whether a new network needs to be created.
  • dataflow_job_batch_count (number, default: 50): Batch count of messages in a single request to Splunk.
  • dataflow_job_disable_certificate_validation (bool, default: false): Whether to disable SSL certificate validation.
  • dataflow_job_machine_count (number, default: 2): Dataflow job max worker count.
  • dataflow_job_machine_type (string, default: "n1-standard-4"): Dataflow job worker machine type.
  • dataflow_job_parallelism (number, default: 8): Maximum parallel requests to Splunk.
  • dataflow_job_udf_function_name (string, default: ""): Name of the JavaScript function to be called.
  • dataflow_job_udf_gcs_path (string, default: ""): GCS path for the JavaScript file.
  • dataflow_template_version (string, default: "latest"): Dataflow template release version. Override this for version pinning, e.g. '2021-08-02-00_RC00'. Specify the version only, since the template GCS path is deduced automatically: 'gs://dataflow-templates/version/Cloud_PubSub_to_Splunk'.
  • dataflow_worker_service_account (string, default: ""): Name of the Dataflow worker service account to be created and used to execute job operations. In the default case of creating a new service account (use_externally_managed_dataflow_sa=false), this parameter must be 6-30 characters long and match the regular expression [a-z]([-a-z0-9]*[a-z0-9]). If the parameter is empty, the worker service account defaults to the project's Compute Engine default service account. If using an external service account (use_externally_managed_dataflow_sa=true), this parameter must be the full email address of the external service account.
  • deploy_replay_job (bool, default: false): Determines whether the replay pipeline should be deployed.
  • gcs_kms_key_name (string, default: ""): Cloud KMS key resource ID, to be used as the default encryption key for the temporary storage bucket used by the Dataflow job. If set, make sure to pre-authorize the Cloud Storage service agent associated with that bucket to use that key for encrypting and decrypting.
  • primary_subnet_cidr (string, default: "10.128.0.0/20"): The CIDR range of the primary subnet.
  • scoping_project (string, default: ""): Cloud Monitoring scoping project ID to create the dashboard under. This assumes a pre-existing scoping project whose metrics scope contains the project where the Dataflow job is to be deployed. See Cloud Monitoring settings for more details on scoping projects. If this parameter is empty, the scoping project defaults to the value of the project parameter above.
  • splunk_hec_token (string, default: ""): Splunk HEC token. Must be defined if splunk_hec_token_source is of type PLAINTEXT or KMS.
  • splunk_hec_token_kms_encryption_key (string, default: ""): The Cloud KMS key to decrypt the HEC token string. Required if splunk_hec_token_source is of type KMS.
  • splunk_hec_token_secret_id (string, default: ""): ID of the secret for the Splunk HEC token. Required if splunk_hec_token_source is of type SECRET_MANAGER.
  • splunk_hec_token_source (string, default: "PLAINTEXT"): Defines how the HEC token is provided. Possible options: [PLAINTEXT, KMS, SECRET_MANAGER].
  • subnet (string, default: ""): Subnet to deploy into. Required when deploying into an existing network (create_network=false), e.g. Shared VPC.
  • use_externally_managed_dataflow_sa (bool, default: false): Determines whether the worker service account provided by the dataflow_worker_service_account variable should be created by this module (default) or is managed outside of the module. In the latter case, the user is expected to apply and manage the service account IAM permissions over external resources (e.g. Cloud KMS key or Secret version) before running this module.
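
For reference, a minimal terraform.tfvars sketch covering only the required inputs might look like the following; all values are placeholders to adapt to your environment, and create_network is set to true here so that no existing subnet needs to be referenced.

# terraform.tfvars — minimal sketch with placeholder values
project           = "my-project-id"
region            = "us-central1"
create_network    = true
network           = "splunk-export-network"
dataflow_job_name = "splunk-log-export"

# Example: export Admin Activity audit logs only
log_filter = "log_id(\"cloudaudit.googleapis.com/activity\")"

splunk_hec_url   = "https://splunk-hec.example.com:8088"
# Needed because splunk_hec_token_source defaults to PLAINTEXT
splunk_hec_token = "REPLACE_WITH_HEC_TOKEN"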

Outputs

  • dataflow_input_topic
  • dataflow_job_id
  • dataflow_log_export_dashboard
  • dataflow_output_deadletter_subscription
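
If you call this repository as a child module rather than applying it at the root, these outputs can be re-exported from the parent module. A minimal sketch, assuming a hypothetical module label and placeholder input values:

# Hypothetical parent-module wrapper
module "splunk_log_export" {
  source = "github.com/GoogleCloudPlatform/terraform-splunk-log-export"

  project           = "my-project-id"
  region            = "us-central1"
  create_network    = true
  network           = "splunk-export-network"
  dataflow_job_name = "splunk-log-export"
  log_filter        = "log_id(\"cloudaudit.googleapis.com/activity\")"
  splunk_hec_url    = "https://splunk-hec.example.com:8088"
  splunk_hec_token  = "REPLACE_WITH_HEC_TOKEN"
}

# Re-export selected module outputs from the parent
output "dataflow_input_topic" {
  value = module.splunk_log_export.dataflow_input_topic
}

output "dataflow_job_id" {
  value = module.splunk_log_export.dataflow_job_id
}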

Monitoring Dashboard (Batteries Included)

Deployment templates include an optional Cloud Monitoring custom dashboard to monitor your log export operations: Ops Dashboard of Log Export to Splunk

Permissions Required

At a minimum, you must have the following roles before you deploy the resources in this Terraform configuration:

  • Logs Configuration Writer (roles/logging.configWriter) at the project and/or organization level
  • Compute Network Admin (roles/compute.networkAdmin) at the project level
  • Compute Security Admin (roles/compute.securityAdmin) at the project level
  • Dataflow Admin (roles/dataflow.admin) at the project level
  • Pub/Sub Admin (roles/pubsub.admin) at the project level
  • Storage Admin (roles/storage.admin) at the project level

To ensure proper pipeline operation, Terraform creates the necessary IAM bindings at the resource level as part of this deployment to grant access between newly created resources. For example, the log sink writer identity is granted the Pub/Sub Publisher role on the input topic that collects all the logs, and the Dataflow worker service account is granted both the Pub/Sub Subscriber role on the input subscription and the Pub/Sub Publisher role on the deadletter topic. A sketch of what such a binding looks like follows.
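
To illustrate, the input-topic binding for the log sink writer identity looks roughly like the following sketch; the resource names and filter are hypothetical, and the module manages the real bindings itself.

# Illustrative only — the module creates equivalent bindings; names are hypothetical
resource "google_pubsub_topic" "dataflow_input_topic" {
  project = "my-project-id"
  name    = "splunk-dataflow-input"
}

resource "google_logging_project_sink" "log_sink" {
  project                = "my-project-id"
  name                   = "splunk-export-sink"
  destination            = "pubsub.googleapis.com/${google_pubsub_topic.dataflow_input_topic.id}"
  filter                 = "severity>=INFO" # placeholder log filter
  unique_writer_identity = true
}

# Grant the sink's writer identity permission to publish into the input topic
resource "google_pubsub_topic_iam_member" "sink_publisher" {
  project = "my-project-id"
  topic   = google_pubsub_topic.dataflow_input_topic.name
  role    = "roles/pubsub.publisher"
  member  = google_logging_project_sink.log_sink.writer_identity
}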

Dataflow permissions

The Dataflow worker service account is the identity used by the Dataflow worker VMs. This module offers three options for which worker service account to use and how to manage its IAM permissions:

  1. The module uses your project's Compute Engine default service account as the Dataflow worker service account, and manages any required IAM permissions. The module grants that service account necessary IAM roles such as roles/dataflow.worker, and IAM permissions over the Google Cloud resources required by the job, such as Pub/Sub, Cloud Storage, and the secret or KMS key if applicable. This is the default behavior.

  2. The module creates a dedicated service account to be used as the Dataflow worker service account, and manages any required IAM permissions. The module grants that service account necessary IAM roles such as roles/dataflow.worker, and IAM permissions over the Google Cloud resources required by the job, such as Pub/Sub, Cloud Storage, and the secret or KMS key if applicable. To use this option, set dataflow_worker_service_account to the name of this new service account.

  3. The module uses a service account managed outside of the module. The module grants that service account the necessary IAM permissions over the Google Cloud resources created by the module, such as Pub/Sub and Cloud Storage. You must grant this service account the required IAM roles (roles/dataflow.worker) and IAM permissions over external resources, such as any provided secret or KMS key (more below), before running this module. To use this option, set use_externally_managed_dataflow_sa to true and set dataflow_worker_service_account to the email address of this external service account.

For production workloads, as a security best practice, it's recommended to use option 2 or 3, both of which rely on a user-managed worker service account instead of the Compute Engine default service account. This ensures a minimally scoped service account dedicated to this pipeline. A tfvars sketch of the relevant settings follows.
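
As an illustration, the relevant terraform.tfvars settings for options 2 and 3 might look like the following; the service account names are placeholders.

# Option 2: module creates and manages a dedicated worker service account
dataflow_worker_service_account    = "splunk-dataflow-worker"
use_externally_managed_dataflow_sa = false

# Option 3: bring your own, externally managed worker service account
# dataflow_worker_service_account    = "my-dataflow-worker@my-project-id.iam.gserviceaccount.com"
# use_externally_managed_dataflow_sa = true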

For option 3, make sure to grant:

  • The provided Dataflow worker service account the following roles (see the sketch after this list):
    • roles/dataflow.worker
    • roles/secretmanager.secretAccessor on the secret, if the SECRET_MANAGER HEC token source is used
    • roles/cloudkms.cryptoKeyDecrypter on the KMS key, if the KMS HEC token source is used
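
For example, with an externally managed service account these grants could be applied outside the module using standard google provider IAM resources. A minimal sketch, assuming hypothetical project, service account, secret, and key identifiers:

# Hypothetical identifiers; adjust to your environment
locals {
  dataflow_sa_member = "serviceAccount:my-dataflow-worker@my-project-id.iam.gserviceaccount.com"
}

resource "google_project_iam_member" "dataflow_worker_role" {
  project = "my-project-id"
  role    = "roles/dataflow.worker"
  member  = local.dataflow_sa_member
}

# Only needed if splunk_hec_token_source = "SECRET_MANAGER"
resource "google_secret_manager_secret_iam_member" "hec_token_accessor" {
  project   = "my-project-id"
  secret_id = "splunk-hec-token"
  role      = "roles/secretmanager.secretAccessor"
  member    = local.dataflow_sa_member
}

# Only needed if splunk_hec_token_source = "KMS"
resource "google_kms_crypto_key_iam_member" "hec_token_decrypter" {
  crypto_key_id = "projects/my-project-id/locations/us-central1/keyRings/my-keyring/cryptoKeys/my-key"
  role          = "roles/cloudkms.cryptoKeyDecrypter"
  member        = local.dataflow_sa_member
}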

For options 1 and 3, make sure to grant:

  • Your user account, or the service account used to run Terraform, the following role (see the sketch after this list):
    • roles/iam.serviceAccountUser on the Dataflow worker service account, in order to impersonate that service account. See the following note for more details.
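
A hedged sketch of that grant, assuming a hypothetical Terraform caller identity and worker service account:

# Allow the identity running Terraform to attach (actAs) the worker service account
resource "google_service_account_iam_member" "terraform_caller_impersonates_worker" {
  service_account_id = "projects/my-project-id/serviceAccounts/my-dataflow-worker@my-project-id.iam.gserviceaccount.com"
  role               = "roles/iam.serviceAccountUser"
  member             = "user:terraform-operator@example.com"
}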

Note about Dataflow worker service account impersonation: To run this Terraform module, you must have permission to impersonate the Dataflow worker service account in order to attach that service account to the Dataflow worker VMs. In the case of the default Dataflow worker service account (Option 1), ensure you have the iam.serviceAccounts.actAs permission over the Compute Engine default service account in your project. For security purposes, this Terraform configuration does not modify access to your existing Compute Engine default service account, due to the risk of granting broad permissions. On the other hand, if you choose to create and use a user-managed worker service account (Option 2) by setting dataflow_worker_service_account (and keeping use_externally_managed_dataflow_sa = false), this Terraform configuration will add the necessary impersonation permission over the new service account.

See Security and permissions for pipelines to learn more about Dataflow service accounts and their permissions.

Getting Started

Requirements

  • Terraform 0.13+
  • Splunk Dataflow template 2022-04-25-00_RC00 or later

Enabling APIs

Before deploying this Terraform configuration in a Google Cloud Platform project, the following APIs must be enabled:

  • Compute Engine API
  • Dataflow API

For information on enabling Google Cloud Platform APIs, please see Getting Started: Enabling APIs.

Setup working directory

  1. Copy the placeholder vars file sample.tfvars to a new terraform.tfvars file to hold your own settings.
  2. Update the placeholder values in terraform.tfvars to correspond to your GCP environment and desired settings. See the list of input parameters above.
  3. Initialize Terraform working directory and download plugins by running:
$ terraform init

Authenticate with GCP

Note: You can skip this step if this module is inheriting the Terraform Google provider (e.g. from a parent module) with pre-configured credentials.

$ gcloud auth application-default login --project <ENTER_YOUR_PROJECT_ID>

This assumes you are running Terraform on your workstation with your own identity. For other methods to authenticate such as using a Terraform-specific service account, see Google Provider authentication docs.

Deploy log export pipeline

$ terraform plan
$ terraform apply

View log export monitoring dashboard

  1. Retrieve the dashboard ID from the Terraform output:
$ terraform output dataflow_log_export_dashboard

The output is of the form "projects/{project_id_or_number}/dashboards/{dashboard_id}".

Take note of the dashboard_id value.

  2. Visit the newly created Monitoring Dashboard in the Cloud Console by replacing dashboard_id in the following URL: https://console.cloud.google.com/monitoring/dashboards/builder/{dashboard_id}

Deploy replay pipeline when needed

The replay pipeline is not deployed by default; it is used only to move failed messages from the Pub/Sub deadletter subscription back to the input topic, so that they can be redelivered by the main log export pipeline (as depicted in the diagram above). Refer to Handling delivery failures for more detail.

Caution: Make sure to deploy the replay pipeline only after the root cause of the delivery failure has been fixed. Otherwise, the replay pipeline creates an infinite loop in which failed messages are sent back for redelivery only to fail again, wasting resources. For the same reason, make sure to tear down the replay pipeline once all failed messages from the deadletter subscription have been processed or replayed.

  1. To deploy the replay pipeline, set the deploy_replay_job variable to true (see the snippet after this list), then run terraform plan followed by terraform apply.
  2. Once the replay pipeline is no longer needed (i.e. the number of messages in the Pub/Sub deadletter subscription is 0), set the deploy_replay_job variable back to false, then run terraform plan followed by terraform apply again.
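
For instance, the corresponding terraform.tfvars toggle is simply:

# Enable the replay pipeline temporarily; set back to false once the deadletter backlog is drained
deploy_replay_job = true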

Cleanup

To delete resources created by Terraform, run the following then confirm:

$ terraform destroy

Using customer-managed encryption keys (CMEK)

For those who require CMEK, this module accepts CMEK keys for the following services:

  • Cloud Storage: see the gcs_kms_key_name input parameter. You are responsible for granting the Cloud Storage service agent the Cloud KMS CryptoKey Encrypter/Decrypter role (roles/cloudkms.cryptoKeyEncrypterDecrypter) so that it can use the provided Cloud KMS key for encrypting and decrypting objects in the temporary storage bucket. The Cloud KMS key must be available in the location that the temporary bucket is created in (specified in var.region). For more details, see Use customer-managed encryption keys in the Cloud Storage docs. A sketch of that grant is shown below.
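
A minimal sketch of that grant, assuming a hypothetical project and key, using the google provider's Cloud Storage service agent data source:

# Look up the project's Cloud Storage service agent
data "google_storage_project_service_account" "gcs_account" {
  project = "my-project-id"
}

# Allow the service agent to encrypt/decrypt with the CMEK key used for the temporary bucket
resource "google_kms_crypto_key_iam_member" "gcs_cmek" {
  crypto_key_id = "projects/my-project-id/locations/us-central1/keyRings/my-keyring/cryptoKeys/gcs-key"
  role          = "roles/cloudkms.cryptoKeyEncrypterDecrypter"
  member        = "serviceAccount:${data.google_storage_project_service_account.gcs_account.email_address}"
}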

Authors

Copyright & License

Copyright 2021 Google LLC

Terraform templates for Google Cloud Log Export to Splunk are licensed under the Apache license, v2.0. Details can be found in LICENSE file.

terraform-splunk-log-export's People

Contributors

frozen425, ilakhtenkov, mhite, npredey, rarsan


terraform-splunk-log-export's Issues

replaying deadletter with replay udf and includePubsubMessage=true doesn't work

The default configured behavior of the log export pipeline is to force includePubsubMessage to true[1].

When used in combination with the sample Dataflow UDF message replay JavaScript[2], replayed messages from the deadletter subscription won't be handled correctly. As messages re-enter the original/main log subscription, they will be wrapped in the data key again. You end up with strange logs like this in Splunk:

{
  "data": {
    "time": 1677812444044,
    "event": "{\"data\":{\"insertId\":\"sv1hmye6bz6p\",\"logName\":\"projects/redacted-project/logs/[cloudaudit.googleapis.com](http://cloudaudit.googleapis.com/)%2Fdata_access\",\"protoPayload\":{\"@type\":\"[type.googleapis.com/google.cloud.audit.AuditLog\](http://type.googleapis.com/google.cloud.audit.AuditLog%5C)",\"authenticationInfo\":{\"principalEmail\":\"[redacteduser-sfx-scraper@redacted-project.iam.gserviceaccount.com](mailto:redacteduser-sfx-scraper@redacted-project.iam.gserviceaccount.com)\",\"principalSubject\":\"[serviceAccount:redacteduser-sfx-scraper@redacted-project.iam.gserviceaccount.com](mailto:serviceAccount%3Aredacteduser-sfx-scraper@redacted-project.iam.gserviceaccount.com)\",\"serviceAccountKeyName\":\"//[iam.googleapis.com/projects/redacted-project/serviceAccounts/redacteduser-sfx-scraper@redacted-project.iam.gserviceaccount.com/keys/a49e3382174e3775dec6fd4e5b593dbfed8c1ecc\](http://iam.googleapis.com/projects/redacted-project/serviceAccounts/redacteduser-sfx-scraper@redacted-project.iam.gserviceaccount.com/keys/a49e3382174e3775dec6fd4e5b593dbfed8c1ecc%5C)"},\"authorizationInfo\":[{\"granted\":true,\"permission\":\"monitoring.timeSeries.list\",\"resource\":\"50701599922\",\"resourceAttributes\":{}}],\"methodName\":\"google.monitoring.v3.MetricService.ListTimeSeries\",\"request\":{\"@type\":\"[type.googleapis.com/google.monitoring.v3.ListTimeSeriesRequest\](http://type.googleapis.com/google.monitoring.v3.ListTimeSeriesRequest%5C)",\"filter\":\"metric.type = \\\"[kubernetes.io/node/cpu/allocatable_utilization\\\](http://kubernetes.io/node/cpu/allocatable_utilization%5C%5C%5C)"\",\"name\":\"projects/redacted-project\",\"pageSize\":10000},\"requestMetadata\":{\"callerIp\":\"44.230.82.104\",\"callerSuppliedUserAgent\":\"grpc-java-netty/1.51.1,gzip(gfe)\",\"destinationAttributes\":{},\"requestAttributes\":{\"auth\":{},\"time\":\"2023-03-03T03:00:44.050942344Z\"}},\"resourceName\":\"projects/redacted-project\",\"serviceName\":\"[monitoring.googleapis.com](http://monitoring.googleapis.com/)\"},\"receiveTimestamp\":\"2023-03-03T03:00:44.174642381Z\",\"resource\":{\"labels\":{\"method\":\"google.monitoring.v3.MetricService.ListTimeSeries\",\"project_id\":\"redacted-project\",\"service\":\"[monitoring.googleapis.com](http://monitoring.googleapis.com/)\"},\"type\":\"audited_resource\"},\"severity\":\"INFO\",\"timestamp\":\"2023-03-03T03:00:44.044614574Z\"},\"attributes\":{\"[logging.googleapis.com/timestamp\](http://logging.googleapis.com/timestamp%5C)":\"2023-03-03T03:00:44.044614574Z\"},\"messageId\":\"7058887762594632\",\"delivery_attempt\":1}"
  },
  "attributes": {
    "errorMessage": "Splunk write status code: 403",
    "timestamp": "2023-03-03 03:00:34.000000"
  },
  "messageId": "7058953036869866",
  "delivery_attempt": 1
}

[1] - https://github.com/GoogleCloudPlatform/terraform-splunk-log-export/blob/main/main.tf#L85
[2] - https://storage.googleapis.com/splk-public/js/dataflow_udf_messages_replay.js

Issue while running module with service account impersonation

An error arises when applying the module by impersonating a service account.

Error: Error applying IAM policy for service account 'projects/<my-project>/serviceAccounts/dataflow-worker-sa@<my-project>.iam.gserviceaccount.com': Error setting IAM policy for service account 'projects/<my-project>/serviceAccounts/dataflow-worker-sa@<my-project>.iam.gserviceaccount.com': googleapi: Error 400: Principal gcp-tf-sa@<sa-project>.iam.gserviceaccount.com is of type "serviceAccount". The principal should appear as "serviceAccount:gcp-tf-sa@<sa-project>.iam.gserviceaccount.com". See https://cloud.google.com/iam/help/members/types for additional documentation., badRequest

  with module.scc.module.splunk_export.google_service_account_iam_binding.terraform_caller_impersonate_dataflow_worker[0],
  on .terraform/modules/scc.splunk_export/permissions.tf line 95, in resource "google_service_account_iam_binding" "terraform_caller_impersonate_dataflow_worker":
  95: resource "google_service_account_iam_binding" "terraform_caller_impersonate_dataflow_worker" {

Steps to reproduce:
export GOOGLE_OAUTH_ACCESS_TOKEN=$(gcloud auth print-access-token --impersonate-service-account=<sa-name>.iam.gserviceaccount.com)
terraform apply -auto-approve

Terraform generates 600 lines of json in plan each time.

The monitoring dashboard resource relies on dynamic JSON, so Terraform treats it as a resource to be updated on every plan/apply.
This is not consequential, since the dashboard is updated with no net effect, but the plan is 600 lines long and makes it hard to spot real changes.

Support Shared VPC subnetwork

According to template documentation, regarding subnetwork:

Value can be either a complete URL or an abbreviated path. If the subnetwork is located in a Shared VPC network, you must use the complete URL.

Currently, providing the complete URL is not possible because of the module's internal variable-manipulation logic.

The enhancement should be backward compatible.

Dashboard fails to create

Hey there. Wondering if someone could assist with deploying this solution.

I got everything working with the exception of the dashboard, which fails to create, and I am quite stuck.

  • dataflow_log_export_dashboard = (known after apply)
    google_monitoring_dashboard.splunk-export-pipeline-dashboard: Creating...

    │ Error: Error creating Dashboard: googleapi: Error 400: Field mosaicLayout.tiles[2].widget.xyChart.dataSets[0].timeSeriesQuery.timeSeriesQueryLanguage has an invalid value: Could not find a metric named 'custom.googleapis.com/dataflow/outbound-successful-events'.
    │ Field mosaicLayout.tiles[5].widget.xyChart.dataSets[0].timeSeriesQuery.timeSeriesQueryLanguage has an invalid value: Could not find a metric named 'custom.googleapis.com/dataflow/outbound-successful-events'.
    │ Field mosaicLayout.tiles[7].widget.xyChart.dataSets[0].timeSeriesQuery.timeSeriesQueryLanguage has an invalid value: Could not find a metric named 'custom.googleapis.com/dataflow/http-server-error-requests'.
    │ Field mosaicLayout.tiles[8].widget.xyChart.dataSets[0].timeSeriesQuery.timeSeriesQueryLanguage has an invalid value: Could not find a metric named 'custom.googleapis.com/dataflow/http-invalid-requests'.
    │ Field mosaicLayout.tiles[12].widget.xyChart.dataSets[0].timeSeriesQuery.timeSeriesQueryLanguage has an invalid value: Could not find a metric named 'custom.googleapis.com/dataflow/total-failed-messages'.
    │ Field mosaicLayout.tiles[14].widget.scorecard.timeSeriesQuery.timeSeriesQueryLanguage has an invalid value: Could not find a metric named 'custom.googleapis.com/dataflow/outbound-successful-events'.

    │ with google_monitoring_dashboard.splunk-export-pipeline-dashboard,
    │ on monitoring.tf line 22, in resource "google_monitoring_dashboard" "splunk-export-pipeline-dashboard":
    │ 22: resource "google_monitoring_dashboard" "splunk-export-pipeline-dashboard" {

Service account and iam bindings missing on replay pipeline

The replay pipeline in replay.tf does not include the service_account_email parameter, while the Dataflow job in pipeline.tf does use it. I guess the replay pipeline uses a default SA when the parameter is not provided, but it feels inconsistent not to set service_account_email on the replay pipeline as well. If we are going to use a custom SA for the replay pipeline, that SA needs the IAM bindings roles/pubsub.subscriber and roles/pubsub.viewer on the deadletter subscription, and roles/pubsub.publisher on the input topic.

The document https://cloud.google.com/architecture/stream-logs-from-google-cloud-to-splunk/deployment also does not seem to cover the permissions required for the replay pipeline.

terraform destroy never finishes

Is there something about the Splunk Dataflow pipeline design that causes it to never be able to successfully drain?

I've gone through the full build + teardown (destroy) process at least a dozen times and have never seen it destroy successfully without intervening to manually cancel the Dataflow job in the console.

google_dataflow_job.dataflow_job: Still destroying... [id=2023-03-04_10_28_54-13121228337712679470, 6h0m43s elapsed]
google_dataflow_job.dataflow_job: Still destroying... [id=2023-03-04_10_28_54-13121228337712679470, 6h0m53s elapsed]
google_dataflow_job.dataflow_job: Still destroying... [id=2023-03-04_10_28_54-13121228337712679470, 6h1m3s elapsed]
google_dataflow_job.dataflow_job: Still destroying... [id=2023-03-04_10_28_54-13121228337712679470, 6h1m13s elapsed]
google_dataflow_job.dataflow_job: Still destroying... [id=2023-03-04_10_28_54-13121228337712679470, 6h1m23s elapsed]
google_dataflow_job.dataflow_job: Still destroying... [id=2023-03-04_10_28_54-13121228337712679470, 6h1m33s elapsed]
google_dataflow_job.dataflow_job: Still destroying... [id=2023-03-04_10_28_54-13121228337712679470, 6h1m43s elapsed]
google_dataflow_job.dataflow_job: Still destroying... [id=2023-03-04_10_28_54-13121228337712679470, 6h1m53s elapsed]
google_dataflow_job.dataflow_job: Still destroying... [id=2023-03-04_10_28_54-13121228337712679470, 6h2m3s elapsed]
google_dataflow_job.dataflow_job: Still destroying... [id=2023-03-04_10_28_54-13121228337712679470, 6h2m13s elapsed]
google_dataflow_job.dataflow_job: Still destroying... [id=2023-03-04_10_28_54-13121228337712679470, 6h2m23s elapsed]
google_dataflow_job.dataflow_job: Still destroying... [id=2023-03-04_10_28_54-13121228337712679470, 6h2m33s elapsed]
google_dataflow_job.dataflow_job: Still destroying... [id=2023-03-04_10_28_54-13121228337712679470, 6h2m43s elapsed]
google_dataflow_job.dataflow_job: Still destroying... [id=2023-03-04_10_28_54-13121228337712679470, 6h2m53s elapsed]
google_dataflow_job.dataflow_job: Still destroying... [id=2023-03-04_10_28_54-13121228337712679470, 6h3m3s elapsed]
google_dataflow_job.dataflow_job: Still destroying... [id=2023-03-04_10_28_54-13121228337712679470, 6h3m13s elapsed]

Module hardening

As a hardening measure, I propose the following:

  • Add the ability to provide CMEK for the following services:
    • GCS
    • Pub/Sub
    • Dataflow

Not pulling in access transparency logs

Hello!

We're using this Terraform module with GCP's Cloud Audit Logs. We're using google_logging_organization_sink and everything is working great.

We did, however, notice that the Access Transparency logs aren't coming through.

e.g.

log_filter="resource.labels.project_id=\"SOMEPROJECT\" AND log_name:\"projects/SOMEPROJECT/logs/cloudaudit.googleapis.com\"

Has anyone encountered this before? Do we need to provide more IAM permissions to something?

Anything helps, thanks!
