
shipwright-io / build


Shipwright - a framework for building container images on Kubernetes

Home Page: https://shipwright.io

License: Apache License 2.0

Languages: Go 97.05%, Shell 2.17%, Makefile 0.62%, Dockerfile 0.13%, HCL 0.02%
Topics: cicd, containers, kubernetes

build's People

Contributors

adambkaplan, adarsh-jaiss, apoorvajagtap, avni-sharma, baijum, coreydaley, dalbar, dependabot[bot], dewan-ahmed, dheerajodha, dhritishikhar, gabemontero, heavywombat, imjasonh, jkhelil, karanibm6, kevydotvinu, mattcui, mayukhsobo, openshift-ci[bot], openshift-merge-bot[bot], openshift-merge-robot, otaviof, qu1queee, routerhan, saschaschwarze0, sbose78, testwill, xiujuan95, zhangtbj

build's Issues

Change to more official and common git repos for e2e tests

I think we need another PR to refine the Git/source coverage of the current e2e tests:

  • Change all tests to use a more official or common Git repo
  • Change the buildpacks (namespaced) test to try a different buildpack language and Git repo
  • Change the buildah test to another language, such as Go
  • Change the kaniko test to use a different Git repo

Proxied Cluster Support

Today, OpenShift build v1 can obtain certs for the global proxy and ensure they are available to the build process.

Ultimately, build v2 should have some form of this.

It may well make sense for DEVEX to help abstract out the logic used by build v1 for reuse by build v2; https://github.com/gabemontero/obu is a prototype of such an endeavor.

Image Change Triggers for build v2

Today, build v1 can define image change triggers on build configs (including pipeline builds), and the image change trigger controller in the openshift-controller-manager can trigger builds with those ICT settings when image stream tags change.

The image change trigger controller can also trigger a finite set of generic k8s API objects based on the presence of an annotation, where it substitutes the image field on the underlying API object.

The DEVEX team could change the image change trigger controller to include build v2 objects in the list of k8s objects it can handle, and Tekton objects as well.

Alternatively, build v2 could build its own controller to mimic this support specifically (though at first blush that seems more expensive).

leverage tekton SCM trigger support

Today, the https://github.com/tektoncd/triggers project provides GitHub/GitLab-style webhook trigger support for creating Tekton API objects that trigger CI/CD pipelines.

This is akin to the webhook support in OpenShift build v1.

In theory, build v2 could deploy Tekton trigger event sinks in conjunction with its controller and define event triggers that run the same underlying Tekton objects to which build v2 maps.

Add controllers namespace events

Idea:

An alternative to logging is per-namespace k8s events. An event could be constructed in such a way that it clearly states (see the sketch below):

  • to which controller the event belongs
  • a useful debug/info message
  • a resource, e.g. CRD type
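
For illustration, a minimal sketch (not the project's actual code) of how a reconciler could emit such an event. It assumes an EventRecorder wired in from the controller manager via mgr.GetEventRecorderFor, and the reason string "BuildRegistered" is illustrative:

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/tools/record"

	buildv1alpha1 "github.com/redhat-developer/build/pkg/apis/build/v1alpha1"
)

type ReconcileBuild struct {
	// recorder comes from mgr.GetEventRecorderFor("build-controller"),
	// which stamps each event with the emitting controller's identity.
	recorder record.EventRecorder
}

func (r *ReconcileBuild) emitRegistered(b *buildv1alpha1.Build) {
	// The event is stored in the Build's namespace and references the
	// Build itself, so `kubectl describe build <name>` surfaces it.
	r.recorder.Event(b, corev1.EventTypeNormal, "BuildRegistered",
		"Build was validated and registered by the build controller")
}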

Opinions on the above?

parallel / serial execution policies in build v2

Today, build v1 allows these run policies to be specified at the build config level:

  • parallel
  • pure serial
  • serial latest-only: intermediate build requests that have queued up are no-op'ed and only the latest request runs

Tekton has:

  • specification of dependencies between tasks in a pipeline, so one task has to run after another: a form of serialization at that level (see the sketch below)
  • pipeline executions always run in parallel

Build v2 should surface some flavor or combination of these in its API: either leverage the Tekton features as-is, or add build v1-style controls in the build v2 controller.
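
For reference, Tekton's task-level serialization is expressed via runAfter in a Pipeline; a small sketch with illustrative task names:

tasks:
  - name: build-image
    taskRef:
      name: build-task
  - name: push-image
    runAfter:
      - build-image    # push-image only starts after build-image finishes
    taskRef:
      name: push-task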

Packaging the Build API/controllers - using OLM

In this approach, we've packaged our CRDs and service account with an OLM manifest. The operator and CRDs are lifecycled using OLM on OpenShift/Kubernetes.

Sample:
https://github.com/sbose78/buildv2-olm-csv-sample/blob/master/0.0.4/buildv2-operator.v0.0.1.clusterserviceversion.yaml

Prerequisites

Install the OpenShift Pipelines operator (will be added as a dependency).

Create this object to list Build v2 on OperatorHub:

apiVersion: operators.coreos.com/v1
kind: OperatorSource
metadata:
  name: rhd-operators
  namespace: openshift-marketplace
spec:
  authorizationToken: {}
  endpoint: 'https://quay.io/cnr'
  registryNamespace: redhat-developer
  type: appregistry

The Build Operator should then show up on OperatorHub, where an admin can choose upgrade channels and the approval strategy, and see an overview. (Screenshots omitted.)

cancelling running/in flight BuildRun

OpenShift build v1 allows cancelling in-progress builds via the CLI and REST.

The Tekton CLI can also cancel in-progress TaskRuns and PipelineRuns.

Build v2 should tie a cancel function into oc (assuming it already plans a start-build function).

And where possible, if it makes sense to vendor tkn code into oc to facilitate this, by all means.

align git source secret type with k8s/buildv1 types

@sbose78 - at first blush I would suggest changing https://github.com/redhat-developer/build/blob/master/pkg/apis/build/v1alpha1/gitsource.go#L33-L36

// SecretRef holds information about the secret that contains credentials to access the git repo
type SecretRef struct {
	// Name is the name of the secret that contains credentials to access the git repo
	Name string `json:"name"`
}

to be more in line with https://github.com/openshift/api/blob/master/build/v1/types.go#L1253

	// secretSource is a reference to the secret
	SecretSource corev1.LocalObjectReference `json:"secretSource" protobuf:"bytes,1,opt,name=secretSource"`

Was that considered and ruled out for some reason?

@bparees @adambkaplan fyi

Contribution readiness checklist

Checklist of items that need to be done before we are open to contributions.

CC @otaviof

Add more e2e tests to cover the existing functions

Hi all,

We only have four e2e tests now, and some functions lack e2e verification. I think we can add more:

1. Add a new e2e test, or change an existing one, to use a private GitHub repo
We only have e2e test cases that use a public GitHub repo, like:

spec:
  source:
    url: https://github.com/sclorg/nodejs-ex

But we also need a new or existing test to cover a private GitHub repo, like:

spec:
  source:
    url: https://github.ibm.com/sclorg/nodejs-ex
    credentials:
      name: icr-knbuild

This is to make sure the private GitHub secret works fine.
We should also find a way to store the private GitHub key in CI/CD.

2. Add a new e2e test, or change an existing one, to use a different builder image or Dockerfile path
We can change the namespaced buildpacks sample or the Kaniko sample to use a different builder image (with a private registry secret) or a different Dockerfile path.

3. Add a new e2e test, or change an existing one, to verify different Build/BuildRun configurations
Right now, the only special setting in a BuildRun is the service account; we need to cover it in an e2e test as well.

I think after we add these tests, automation will cover most of the existing functions. Then, in the future, we can safely add new features or enhancements on top of these tests.

Please let me know if you have any suggestions.

[Feature Discuss] As a developer, I want to know my built image status

In OpenShift, there is an ImageStream resource that keeps the built image status, such as:

  • Image path
  • Image tag
  • Date
  • Which Git repo or commit it uses
  • Which builder or base image it uses
  • Syncing the real Docker image info with the Docker registry

But in Build v2, we don't provide this kind of resource/information to the end user.

The end user should be able to get this kind of information out of the Build/BuildRun status (#65).

Or... is our Build equivalent to an Image resource?

I think the two teams should discuss together how best to implement this.

FYI @sbose78

Separate build definition from build run

As a developer, I want to define the build separately from invocation of the build so that I can reuse the build definition across executions.

Acceptance Criteria

  • Ensure the Build and BuildRun relationship is documented.
  • Ensure changes to the Build CR are handled reasonably well.
  • Ensure changes to the BuildRun CR are reflected in the actual build execution.
  • Do not trigger a build on Build CR creation.
  • Trigger a build on BuildRun CR creation.
  • Ensure multiple BuildRun CRs can be created per Build CR.


Feel free to suggest an alternative type name to BuildRun as well.
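
For illustration, a minimal sketch of the proposed split, with the API group, field names, and values all illustrative: one reusable Build, referenced by any number of BuildRuns.

apiVersion: build.dev/v1alpha1
kind: Build
metadata:
  name: nodejs-ex-build
spec:
  source:
    url: https://github.com/sclorg/nodejs-ex
  strategy:
    name: buildpacks-v3
    kind: ClusterBuildStrategy
  output:
    image: registry.example.com/example/nodejs-ex:latest
---
apiVersion: build.dev/v1alpha1
kind: BuildRun
metadata:
  name: nodejs-ex-buildrun-1
spec:
  buildRef:
    name: nodejs-ex-build    # reuses the Build definition above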

image pull policy feature

/discussion
/feature

Today, the OpenShift build v1 API allows control over the image pull policy of the builder image.

Do we want to expose a similar policy in build v2?

Currently, the Tekton Step type is a k8s container wrapper, which allows specifying an image pull policy of Always, IfNotPresent, etc.
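
If we do expose it, a build strategy step could simply pass the standard container field through to the generated Tekton Step; a sketch, with the field placement illustrative:

spec:
  buildSteps:
    - name: build-and-push
      image: quay.io/buildah/stable
      imagePullPolicy: IfNotPresent   # Always | IfNotPresent | Never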

Set up a bi-weekly meeting for detailed feature and requirement discussions

Hi @sbose78,
As you know, more team members from our Source-to-Image squad have started working on the Build v2 deployment.
And we have more features and requirements we would like to discuss with you.

How about scheduling a bi-weekly meeting so that we can discuss the detailed implementations together?

We are based in Germany and China, so how about Mondays (or let us know which day or time you prefer):

  • 09:00 AM your time
  • 09:00 PM China time
  • 05:00 PM Germany time

I can send the invitation later. It would make it easy for us to discuss and work closely together. :)

Thanks!

Pruning of BuildRun Objects

The OpenShift build v1 API today has fields that allow, on a per-build-config basis, the maximum number of builds to keep in etcd.

Tekton has open, unimplemented features for this: tektoncd/pipeline#1334 and tektoncd/pipeline#1302, which also reference the upstream alpha feature around TTL for jobs/pods.

Tying build v2 into the upstream Tekton solution seems like the long-term answer.

If that takes too long, do we want to build something akin to what exists in the build v1 controller for build v2?

Refactor e2e test by using some common functions

1. In the current e2e test framework, we run each test one by one, and most of the code can be extracted into a common function:
https://github.com/redhat-developer/build/blob/master/test/e2e/main_test.go#L82-L110

The background behind this issue: right now, in our tenant environment, we only provide the kaniko and buildpacks build strategies (buildah requires privileged permissions, which are not allowed in our environment).

So we need to run just the kaniko and buildpacks e2e tests.

We would like to extract the common test code into a shared function like:

func RunBuildTest(xxx) {
	// Prepare the test data for the given strategy first
	oE := newOperatorEmulation(namespace,
		"example-build-s2i",
		"samples/buildstrategy/xxx/buildstrategy_xxx_cr.yaml",
		"samples/build/build_xxx_cr.yaml",
		"samples/buildrun/buildrun_xxx_cr.yaml",
	)
	err := BuildTestData(oE)
	require.NoError(t, err)
	validateOutputEnvVars(oE.build)

	// Run the e2e test
	createClusterBuildStrategy(t, ctx, f, oE.clusterBuildStrategy)
	validateController(t, ctx, f, oE.build, oE.buildRun)
	deleteClusterBuildStrategy(t, f, oE.clusterBuildStrategy)
}

That way we can refine the e2e test code and disable any single test easily, like:

  • RunBuildTest(kaniko, private_repo)
  • RunBuildTest(buildpacks, private_repo)
  • // RunBuildTest(buildah, private_repo)
  • // RunBuildTest(s2i, private_repo)

And since we pre-install the ClusterBuildStrategies in the environment first, we also need to find a way to skip the createClusterBuildStrategy(t, ctx, f, oE.clusterBuildStrategy) step.

2. We would also like to create a new function that includes the schema validation:
https://github.com/redhat-developer/build/blob/master/test/e2e/main_test.go#L30-L53

Let me know if you have any comments. @sbose78 and @qu1queee

Improve the context used during reconciliation of controllers

Idea:

All of the API calls that take place during the different controller reconciliations use either context.TODO or context.Background. This is not desirable because such a context is never cancelled; see the docs.

I think a context with a deadline (timeout) is better (see the docs), so that we can cancel everything after a specific time is reached and release all associated resources. This avoids the controllers doing unnecessary work.
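
A minimal sketch of the proposal, assuming a 60-second deadline is acceptable for a single reconciliation and using the controller-runtime client (names illustrative):

import (
	"context"
	"time"

	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/reconcile"

	buildv1alpha1 "github.com/redhat-developer/build/pkg/apis/build/v1alpha1"
)

func (r *ReconcileBuildRun) Reconcile(request reconcile.Request) (reconcile.Result, error) {
	// Replace context.TODO/context.Background with a deadline-bound context.
	ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
	defer cancel() // releases the context's resources if we return early

	buildRun := &buildv1alpha1.BuildRun{}
	if err := r.client.Get(ctx, request.NamespacedName, buildRun); err != nil {
		return reconcile.Result{}, client.IgnoreNotFound(err)
	}
	// ... all further API calls reuse ctx and are cancelled at the deadline.
	return reconcile.Result{}, nil
}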

Trial run on pre-prod

Next week we will first try to onboard the latest Build v2 in our dev environment and provide more e2e tests to make sure the existing features work fine.

@zhangtbj , How did it go? Starting this ticket to track the status/feedback of test deployment.

inject source via image feature

Today, OpenShift build v1 allows adding files from specified images (different from the builder image) into the build container for use during the image building process.

The image is loaded at build time and the specified files/directories are copied into the context directory of the build process.

The image-loading container step in the v1 build process can be replicated in a Tekton task step, and the files could be copied into the Tekton workspace for use in later steps that build and push the image.

specify labels to be set on output images

Today in OpenShift build v1, one can specify labels to set on the resulting output image.

Do we want to replicate such a capability in the build v2 API?

In theory, such labels could be propagated as Tekton parameters to the various steps in a task, where the steps that produce/push the image could use the underlying build tool's options to set the labels.

Install well known strategies when the operator starts up

Why?

When the operator is installed, ensure that the cluster is 'build ready' with popular strategies.

What?

We should ship our well-tested / well-known strategies out of the box - presumably, as cluster strategies.

How?

  • When the operator starts up, install the well known strategies in the cluster scope.
  • If they change, ensure they are reconciled in the ClusterBuildStrategy controller/reconciler.

Long-term:

We might need to provide a distinction between upstream and downstream build strategies.
"upstream": strategies with upstream non-enterprise images example heroku/buildpacks:18
"downstream": strategies with RH/IBM/XYZ-supported images.

Add revision support for Git source

Right now, we only support setting the GitHub URL, from here:
https://github.com/redhat-developer/build/blob/master/pkg/apis/build/v1alpha1/gitsource.go#L12

We cannot set a revision; only the master branch is used as the ref:
https://github.com/redhat-developer/build/blob/master/pkg/apis/build/v1alpha1/gitsource.go#L15

But in our experience, users sometimes want to use another branch, commit, or revision, so we need to support that by adding it in generate_taskrun.go:
https://github.com/redhat-developer/build/blob/master/pkg/controller/buildrun/generate_taskrun.go#L174
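
A sketch of what the source spec could look like with such a field (the field name revision is illustrative):

spec:
  source:
    url: https://github.com/sclorg/nodejs-ex
    revision: v1.0.0   # a branch, tag, or commit SHA instead of master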

Hi @sbose78 ,

BTW, can I get write permission in this repo, so that I can modify issue tags and status in the future?

Thanks!

Add ADRs for k8s controllers

Idea:

Without reading the whole controller codebase, I don't see any documentation on the reasoning behind each controller. As part of my exercise to fully understand the controllers' logic and their interaction (e.g. why do we need the BuildRun controller?), I would like to generate a set of ADRs inside the project.

This will help people in the future to go back and understand the reasoning behind the setup. This is definitely not high priority, but a good thing to have. I would, of course, like to take this item.

New Build experience via CLI

Today, oc new-app and oc new-build can be pointed at Git repositories or local source code (i.e. binary builds) to define build v1 build configs.

Binary builds are already on the build v2 roadmap, but is that just an operator-side/API statement?

new-app/new-build are somewhat considered "legacy" devtools at this point, but it is unclear whether a replacement is truly ready.

Should build v2 still integrate with those oc verbs? Or is something else ready to pick up the slack?

Kaniko builds are slow with non-admin user

#88 (comment)

After this privileged problem, I still need to investigate another performance-blocking issue: a build executed in a tenant namespace is much slower than one executed by a cluster admin.
A cluster admin takes only about 40-60 seconds to build.
A tenant user needs 5-10 minutes to build.
It is terrible... :(

Should we remove the Flavor from source to avoid confusion?

Hi @sbose78 ,

We added a GitLab test in this PR: Add env variables for the e2e tests (#87)

And in the test, we just need to input the GitLab URL in spec.source.url, the same as for GitHub:
https://github.com/redhat-developer/build/blob/master/test/data/build_buildah_cr_private_gitlab.yaml#L8

So I thought all repos should follow a similar standard, so that we don't need any additional rendering work.

But we still have a Flavor parameter in the source spec:
https://github.com/redhat-developer/build/blob/master/pkg/apis/build/v1alpha1/gitsource.go#L34

If all repos can use the same style, I think we should remove it now to avoid confusion.

[discuss] Improve CR Status Feedback

Running builds in a cluster using any strategy, users should be able to access the current status of any build request. After the build config object is submitted, we should be able to access information about it, mainly:

  • what stages exist for a single build?
  • what is the current stage of the build?
  • the duration of each stage?
  • success/failure logs for a particular build

After going through the code bases of build v1 and tektoncd/pipeline, what I could figure out as the current build status spec is:

build stages:

  • FetchInputs
  • PullImages
  • Build
  • PostCommit
  • PushImage

IMO we can add an optional "PreBuild" stage before the Build stage, to support any config- or binary-specific changes at build time.

We can also show all the stages (already run, or to run next) and mark the build accordingly.

I am digging more into the Tekton and build strategies and will keep updating here.

Please share your thoughts on this.
@sbose78 @otaviof

Provide an auto-generated service account for multiple users

Hi all,

As you know, by default build v2 tries to find the pipeline or default service account for the generated build TaskRun.

There are some problems right now:

  • We append secrets to the service account. If I write a wrong secret name, the build process reports an error that the secret doesn't exist and the build fails. But appending doesn't let the user remove the wrong secret via the build configuration, which confuses users; the user/admin has to remove the wrong secret from the service account manually.

  • As more and more users run builds in the namespace, the secret list may get longer and longer, and there is no cleanup right now.

  • All users' secrets are stored in one shared service account, which is not secure; if this service account is broken or missing, it affects all builds.

So I would like to propose a new solution, an auto service account:

  • If the end user doesn't set the service account, or sets it to auto:
spec:
  serviceAccount: auto
  • Then, during the build process, we generate a service account with the same name as the Build and append the required secrets to it.

  • Add an owner reference on this service account pointing to the Build.

  • When the Build is deleted, the auto-generated service account is deleted with it.

I think it is better for us because:

  • it avoids the wrong-secret-name bug: if the secret is wrong, deleting the Build and rerunning fixes it

  • it avoids sharing one service account across ALL users: better isolation

  • if the end user sets the service account explicitly, for example to pipeline or default, we don't auto-generate one and keep the original behavior

  • it also avoids potential future service account appending problems or bugs (see the sketch below)
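
A rough sketch of the controller-side generation; secretRefs and the error handling are illustrative:

// Generate a service account named after the Build, carrying the
// required secrets, and owned by the Build so that deleting the Build
// garbage-collects the service account as well.
sa := &corev1.ServiceAccount{
	ObjectMeta: metav1.ObjectMeta{
		Name:      build.Name,
		Namespace: build.Namespace,
	},
	Secrets: secretRefs, // references to the secrets the build requires
}
if err := controllerutil.SetControllerReference(build, sa, r.scheme); err != nil {
	return err
}
if err := r.client.Create(ctx, sa); err != nil {
	return err
}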

Hi @sbose78 ,

Any comment? :)

Remove the BuildStrategyStatus from BuildStrategy struct?

BuildStrategy is a group of BuildSteps, like Tekton Task which includes many Steps.

And I think a BuildStrategy is static: it just stores the build steps and doesn't have any status.

We have the status field now, but there is no good scenario for keeping status on a static BuildStrategy.

In my opinion, we could remove the BuildStrategyStatus from the BuildStrategy struct:

type BuildStrategy struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   BuildStrategySpec   `json:"spec,omitempty"`
	Status BuildStrategyStatus `json:"status,omitempty"`
}

This would also avoid confusion for the end user.

Add Resource limit support in Build definition

Based on the multi-tenant resource limitation problem: #90

If there are ResourceQuota or LimitRange configurations in the namespace, we currently cannot configure resource limits in the build definition.

So we should allow the user to define resource limits in the Build CR like this:

  resources:
    limits:
      cpu: "500m"
      memory: "1Gi"
    requests:
      cpu: "500m"
      memory: "1Gi"

And also allow the end user to define resource limits in the BuildRun CR like this:

  resources:
    limits:
      cpu: "1"
      memory: "1Gi"
    requests:
      cpu: "1"
      memory: "1Gi"

And add related logic and tests for this new feature.

If the user defines resources in both the Build and the BuildRun, the values in the BuildRun overwrite the values in the Build.

Clarify PATH_CONTEXT for Kaniko docker build

The sample build build_kaniko_cr.yaml specifies spec.pathContext with the value ".". This only works by chance, because "." is the default. The code in generate_taskrun.go actually checks spec.source.contextDir.

The sample needs to be corrected. An e2e test case with a custom context directory other than "." is needed, plus a pass through the documentation to check that this is correctly documented.
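
A corrected sample would set spec.source.contextDir; a sketch with an illustrative repository URL and directory:

apiVersion: build.dev/v1alpha1
kind: Build
metadata:
  name: kaniko-custom-context
spec:
  source:
    url: https://github.com/example/repo
    contextDir: docker/   # the field generate_taskrun.go actually reads
  strategy:
    name: kaniko
    kind: ClusterBuildStrategy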

inject source via config map / secrets feature

Today, the OpenShift build v1 API allows injecting config map content under the source tree of the build container for use during the image build.

Do we want to expose a similar capability in build v2?

Tekton already allows mounting config map / secret volumes in tasks/TaskRuns.

Add missing Reason column to the Build controller

Idea:

The Build controller is missing a Reason column that indicates whether a Build CRD instance was properly registered.

Acceptance criteria:

PR the missing feature. The Reason should indicate in a compact way why the Build CRD was not registered or, if registered, display a successful status. This should come together with enhanced unit tests for the Build controller that assert the proper value of Reason.

@sbose78 fyi

Build images using the binary of my application

As a developer, I want to build images using the binary of my application (e.g. app.jar) so that I can make use of artifacts produced in our existing CI process and stored on repositories like Nexus when building images.

Problem:
Users have existing CI processes that often build the application binary and store it in a repository. It's generally recommended to reuse the same app binary in the delivery phases after CI rather than rebuilding it during the image build phase. These binaries might be signed, which prevents the customer from using build strategies that rebuild the application binary.

Why is this important?
To allow users to use Builds as an extension of their CI and reuse existing app binaries when building images via Builds.

Add an owner reference for the newly created BuildRun

Right now, we have to create a Build CR and a BuildRun CR to trigger the real build process.

But in the current logic, no controller sets the OwnerReference on a BuildRun.

This means that when the Build is deleted, the related BuildRuns won't be deleted.

We need to use an admission controller with the BuildRun controller to set the OwnerReference when the end user creates the BuildRun CR manually.

Please refer to:
https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/
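
Whichever component sets it (an admission webhook or the BuildRun controller itself), the core change is small; a sketch using controller-runtime, with names illustrative:

import (
	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
)

// Make the Build the controller-owner of the BuildRun, so that deleting
// the Build garbage-collects its BuildRuns.
if err := controllerutil.SetControllerReference(build, buildRun, r.scheme); err != nil {
	return reconcile.Result{}, err
}
if err := r.client.Update(ctx, buildRun); err != nil {
	return reconcile.Result{}, err
}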

timeouts on build v2

Today in build v1, you can set a timeout for how long a build can run, after which it is cancelled.

Tekton also has this today, with timeouts on TaskRun and PipelineRun.

Build v2 should expose this somehow in its API and translate it to its use of Tekton.
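
The translation could be as simple as copying a user-facing timeout onto the generated TaskRun, whose spec already supports it; a sketch with an illustrative value:

import (
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// In generate_taskrun.go: Tekton cancels the TaskRun once the duration
// is exceeded.
taskRun.Spec.Timeout = &metav1.Duration{Duration: 10 * time.Minute}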

Use the "default" kubernetes service account if "pipeline" sa is not present

Today, our Builds are run using the pipeline Service account by default.

  • If the pipeline service account is absent, we should fall back to the default service account (see the sketch below).
  • In the future, we may support overriding the service account in the BuildRun. However, this should always be optional to avoid hurting the user experience.
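
A minimal sketch of the fallback, assuming the controller-runtime client (names illustrative):

import (
	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/types"
)

// Prefer the "pipeline" service account; fall back to "default" when absent.
sa := &corev1.ServiceAccount{}
err := r.client.Get(ctx, types.NamespacedName{Namespace: ns, Name: "pipeline"}, sa)
if apierrors.IsNotFound(err) {
	err = r.client.Get(ctx, types.NamespacedName{Namespace: ns, Name: "default"}, sa)
}
if err != nil {
	return reconcile.Result{}, err
}
taskRun.Spec.ServiceAccountName = sa.Name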

Cluster scope operator (ClusterRole and ClusterRoleBinding) support

Right now, the build v2 operator is still namespace-scoped.
This means the operator can only handle builds in its own namespace.

But like other operators, such as the Tekton operator, it should be cluster-scoped (ClusterRole and ClusterRoleBinding) so that one operator can handle all namespaces:
https://github.com/tektoncd/operator/blob/master/deploy/role.yaml

If we agree, I can change the operator's Role and RoleBinding to cluster scope.

Without this, my operator reports errors:

E0311 13:39:06.431131       1 reflector.go:123] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:204: Failed to list *v1alpha1.BuildStrategy: buildstrategies.build.dev is forbidden: User "system:serviceaccount:build-operator:build-operator" cannot list resource "buildstrategies" in API group "build.dev" at the cluster scope
E0311 13:39:07.194745       1 reflector.go:123] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:204: Failed to list *v1alpha1.Build: builds.build.dev is forbidden: User "system:serviceaccount:build-operator:build-operator" cannot list resource "builds" in API group "build.dev" at the cluster scope
E0311 13:39:07.435991       1 reflector.go:123] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:204: Failed to list *v1alpha1.BuildStrategy: buildstrategies.build.dev is forbidden: User "system:serviceaccount:build-operator:build-operator" cannot list resource "buildstrategies" in API group "build.dev" at the cluster scope
...

After I fixed and verified this in my environment, the operator works fine for any namespace.
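
The fix boils down to shipping cluster-scoped RBAC; a sketch of the ClusterRole (rule list illustrative), to be paired with a matching ClusterRoleBinding:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: build-operator
rules:
  - apiGroups: ["build.dev"]
    resources: ["builds", "buildruns", "buildstrategies", "clusterbuildstrategies"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]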

mirroring and builds v1

Today, OpenShift build v1 can build registries.conf files with associated auth and certs to ensure they are available to the build process and its invocation of buildah.

Ultimately, build v2 should have some form of this.

It may well make sense for DEVEX to help abstract out the logic used by build v1 for reuse by build v2; https://github.com/gabemontero/obu is a prototype of such an endeavor.

Update by shoubhik

If #537 takes care of this use case, we should document how to use it to accomplish this.

Automatically trigger the first BuildRun

Why?

  • Improve the experience for running the first build execution.

How?

  • On creation of a Build, automatically trigger the creation of a BuildRun.
  • Use an annotation to disable this behaviour.
  • Have the controller add/update an annotation to record the creation of a BuildRun.
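
A rough controller-side sketch; the annotation names and the BuildRun API shape are illustrative:

// On Build creation: trigger one initial BuildRun unless opted out, and
// record the fact in an annotation so it only ever happens once.
if build.Annotations["build.dev/disable-initial-buildrun"] != "true" &&
	build.Annotations["build.dev/initial-buildrun"] == "" {
	br := &buildv1alpha1.BuildRun{
		ObjectMeta: metav1.ObjectMeta{
			GenerateName: build.Name + "-",
			Namespace:    build.Namespace,
		},
		Spec: buildv1alpha1.BuildRunSpec{
			BuildRef: &buildv1alpha1.BuildRef{Name: build.Name},
		},
	}
	if err := r.client.Create(ctx, br); err != nil {
		return reconcile.Result{}, err
	}
	if build.Annotations == nil {
		build.Annotations = map[string]string{}
	}
	build.Annotations["build.dev/initial-buildrun"] = "created"
	if err := r.client.Update(ctx, build); err != nil {
		return reconcile.Result{}, err
	}
}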
