
The Red Hat and IBM Node.js Reference Architecture: the teams' opinion on which components our customers and internal teams should use when building Node.js applications, and guidance for how to be successful in production with those components.

Home Page: https://nodeshift.dev/nodejs-reference-architecture/

License: Apache License 2.0


nodejs-reference-architecture's Introduction

Nodeshift


What is it

Nodeshift is an opinionated command line application and programmable API that you can use to deploy Node.js projects to OpenShift and Kubernetes (Minikube).

Prerequisites

  • Node.js - version 18.x or greater

Install

To install globally: npm install -g nodeshift

Use with npx: npx nodeshift

or to use in an npm script

npm install --save-dev nodeshift

// inside package.json
"scripts": {
  "nodeshift": "nodeshift"
}

$ npm run nodeshift

Core Concepts

Commands & Goals

By default, if you run just nodeshift, it will run the deploy goal, which is a shortcut for running resource, build and apply-resource.

  • login - logs in to the cluster

  • logout - logs out of the cluster

  • resource - parses and creates the application resource files on disk

  • apply-resource - runs the resource goal and then deploys the resources to your running cluster

  • build - archives the code, creates a BuildConfig and ImageStream, and pushes the binary to the cluster

  • deploy - a shortcut for running resource, build and apply-resource

  • undeploy - removes resources that were deployed with the apply-resource command
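The way the deploy goal chains the other goals can be sketched as follows. The three goal functions below are stand-ins for Nodeshift's real implementations, shown only to illustrate the composition and the shape of the combined result:

```javascript
// Sketch only: how the deploy goal chains the other goals. The three goal
// functions are placeholder stubs, not Nodeshift's actual implementations.
async function resource (options) { return [{ kind: 'Service' }]; }
async function build (options) { return { status: 'Complete' }; }
async function applyResource (options) { return [{ kind: 'Service', applied: true }]; }

async function deploy (options = {}) {
  const resources = await resource(options);             // create resource files
  const buildInfo = await build(options);                // archive code and run the build
  const appliedResources = await applyResource(options); // apply resources to the cluster
  return { build: buildInfo, resources, appliedResources };
}
```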

Using Login and Logout

By default, the Nodeshift CLI will look for a kube config in ~/.kube/config. This is usually created when a user does an oc login, but that requires oc to be installed and the extra step of running the oc login command. The Nodeshift CLI allows you to pass a username/password or a valid auth token along with the cluster's API server address to authenticate requests without the need to run oc login first.

While these parameters can be specified for each command, the nodeshift login command helps to simplify that. You can now run nodeshift login with the parameters mentioned above to log in first, then run the usual nodeshift deploy without needing to add the flags.

CLI Usage - Login:

$ nodeshift login --username=developer --password=password --server=https://api.server

or

$ nodeshift login --token=12345 --server=https://api.server

CLI Usage - Logout

$ nodeshift logout

API usage using async/await would look something like this:

const nodeshift = require('nodeshift');

const options = {
  username: 'kubeadmin',
  password: '...',
  server: '...',
  insecure: true
};

(async () => {
  await nodeshift.login(options);
  await nodeshift.deploy();
  await nodeshift.logout();
})();

.nodeshift Directory

The .nodeshift directory contains your resource fragments. These are .yml files that describe your services, deployments, routes, etc. By default, nodeshift will create a Service and DeploymentConfig in memory, if none are provided. If you want to expose your application to the outside world, provide a Route resource fragment or use the expose flag.

For Kubernetes-based deployments, a Service and Deployment will be created by default, if none are provided. The Service is of type LoadBalancer, so no Ingress is needed to expose the application.
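As an illustration, a minimal Route fragment placed at .nodeshift/route.yml could look like the skeleton below; the port value is an assumption here and should match your application:

```yaml
# .nodeshift/route.yml -- illustrative skeleton; Nodeshift enriches it
# with metadata, labels and the service it targets.
spec:
  port:
    targetPort: 8080
```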

Resource Fragments

OpenShift resource fragments are user provided YAML files which describe and enhance your deployed resources. They are enriched with metadata, labels and more by nodeshift.

Each resource gets its own file, which contains a skeleton of a resource description. Nodeshift will enrich it and then combine all the resources into a single openshift.yml and openshift.json (located in ./tmp/nodeshift/resource/).

The resource object's Kind, if not given, will be extracted from the filename.

Enrichers

Enrichers will add things to the resource fragments, like missing metadata and labels. If your project uses git, then annotations with the git branch and commit hash will be added to the metadata.

Default Enrichers will also create a default Service and DeploymentConfig when none are provided.

The default port value is 8080, but that can be overridden with the --deploy.port flag.

You can also override this value by providing a .nodeshift/deployment.yaml resource file.
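For example, a fragment along these lines could set a different port; the shape is assumed from the Deployment fragment shown in the next section, and the port value is illustrative:

```yaml
# .nodeshift/deployment.yaml -- illustrative port override (assumed shape)
spec:
  template:
    spec:
      containers:
        - ports:
            - containerPort: 3000
```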

Resource Fragment Parameters

Some Resource Fragments might need to have a value set at "run time". For example, in the fragment below, we have the ${SSO_AUTH_SERVER_URL} parameter:

    apiVersion: v1
    kind: Deployment
    metadata:
      name: nodejs-rest-http-secured
    spec:
      template:
        spec:
          containers:
            - env:
                - name: SSO_AUTH_SERVER_URL
                  value: "${SSO_AUTH_SERVER_URL}"
                - name: REALM
                  value: master

To set that using nodeshift, use the -d option with a KEY=VALUE, like this:

nodeshift -d SSO_AUTH_SERVER_URL=https://secure-url
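Conceptually, the substitution amounts to replacing ${KEY} placeholders in the fragment text with the supplied KEY=VALUE definitions. A sketch of that idea (not Nodeshift's actual code) might be:

```javascript
// Sketch: substitute ${KEY} placeholders in a resource fragment with
// -d KEY=VALUE definitions. Illustrative, not Nodeshift's internals.
function applyDefinitions (fragmentText, definitions) {
  // definitions is an array like ['SSO_AUTH_SERVER_URL=https://secure-url']
  const values = Object.fromEntries(
    definitions.map(d => {
      const idx = d.indexOf('=');
      return [d.slice(0, idx), d.slice(idx + 1)];
    })
  );
  // Replace known parameters; leave unknown ones untouched
  return fragmentText.replace(/\$\{(\w+)\}/g, (match, key) =>
    key in values ? values[key] : match);
}
```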

Project Archive

A user can specify exactly which files they would like nodeshift to include in the archive it generates by using the files property in package.json.

If a user does not use the files property in the package.json to filter what files they would like to include, then nodeshift by default will include everything except the node_modules, .git and tmp directories.

Nodeshift will also look for additional exclusion rules in a .gitignore file, if there is one. The same applies to a .dockerignore file.

If both ignore files are present, nodeshift will union them together and use that.

API

Along with the command line, there is also a public API. The API mirrors the commands.

API Docs - https://nodeshift.github.io/nodeshift/

  • resource

  • applyResource

  • build

  • deploy

  • undeploy

Options that you can specify on the command line can also be passed as an options object to the API.

All methods are Promise based and will return a JSON object with information about each goal that is run.

For example, if the deploy method was run, it would return something similar:

{
    build: {
        ... // build information
    },
    resources: [
        ... // resources created
    ],
    appliedResources: [
        ... // resources that were applied to the running cluster
    ]
}

Example Usage

const nodeshift = require('nodeshift');

// Deploy an Application
nodeshift.deploy().then((response) => {
    console.log(response);
    console.log('Application Deployed');
}).catch((err) => {
    console.log(err);
});

Please note: currently, once a Route, Service, DeploymentConfig, BuildConfig and ImageStream are created, they are re-used. The only thing that changes from deployment to deployment is the source code. For application resources, you can update them by undeploying and then deploying again. BuildConfigs and ImageStreams can be re-created using the --build.recreate flag.

Using with Kubernetes

Nodeshift can deploy Node.js applications to a Kubernetes Cluster using the --kube flag.

There are 2 options that can be passed: minikube or docker-desktop. Passing just the --kube flag will default to minikube.

Nodeshift expects your code to have a Dockerfile in its root directory. Deploying to Kubernetes is then as easy as running:

npx nodeshift --kube=minikube

Note on Minikube: this connects to Minikube's Docker daemon, creates a new container, and then deploys and exposes that container with a Deployment and Service.


Openshift Rest Client Configuration

Nodeshift uses the Openshift Rest Client under the hood to make all REST calls to the cluster. By default, the rest client will look at your ~/.kube/config file to authenticate you. This file will be created when you do an oc login.

If you don't want to use oc to log in first, you can pass in a username, password, and the server of the cluster to authenticate against. If you are using a cluster with a self-signed certificate (like CodeReady Containers), then you will need to add the insecure flag.

Also note that when accessing the cluster this way, the namespace will default to default. If you need to target another namespace, use the namespace.name flag. Just make sure the user you use has the appropriate permissions.

An example of this might look something like this:

npx nodeshift --username developer --password developer --server https://apiserver_for_cluster --insecure --namespace.name nodejs-examples

You can also pass in a valid auth token using the token flag. If both a token and username/password are specified, the token takes precedence.

npx nodeshift --token 123456789 --server https://apiserver_for_cluster --insecure --namespace.name nodejs-examples

Advanced Options

While nodeshift is very opinionated about deployment parameters, both the CLI and the API accept options that allow you to customize nodeshift's behavior.

version

Outputs the current version of nodeshift

projectLocation

Changes the default location of where to look for your project. Defaults to your current working directory (CWD).

configLocation

This option is passed through to the Openshift Rest Client. Defaults to ~/.kube/config.

token

Auth token to pass into the openshift rest client for logging in with the API Server. Overrides the username/password

username

Username to pass into the openshift rest client for logging in with the API server.

password

Password to pass into the openshift rest client for logging in with the API server.

server

Server address to pass into the openshift rest client for logging in with the API server.

apiServer - Deprecated

Use server instead. API server address to pass into the openshift rest client for logging in.

insecure

Flag to pass into the openshift rest client for logging in with a self-signed cert. Only used with apiServer login. Defaults to false.

forceLogin

Force a login when using the apiServer login. Only used with apiServer login. Defaults to false.

imageTag

Specify the tag of the docker image or image stream to use for the deployed application. Defaults to latest. For docker images, these version tags correspond to the RHSCL tags of the ubi8/nodejs s2i images.

dockerImage

Specify the s2i builder image of Node.js to use for the deployed applications. Defaults to ubi8/nodejs s2i images

imageStream

Specify the image stream from which to get the s2i image of Node.js to use for the deployed application. If not specified defaults to using a docker image instead.

web-app

Flag to automatically set the appropriate docker image for web app deployment. Defaults to false

resourceProfile

Define a subdirectory below .nodeshift/ that indicates where OpenShift resources are stored

outputImageStream

The name of the ImageStream to output to. Defaults to project name from package.json

outputImageStreamTag

The tag of the ImageStream to output to. Defaults to latest

quiet

Suppress INFO and TRACE lines from output logs.

expose

Option to create a default Route, if none is provided. Defaults to false.

removeAll

Option to remove builds, BuildConfigs and ImageStreams. Defaults to false. Only for the undeploy command.

deploy.port

Flag to update the default ports on the resource files. Defaults to 8080

deploy.env

Flag to pass deployment config environment variables as NAME=Value. Can be used multiple times. ex: nodeshift --deploy.env NODE_ENV=development --deploy.env YARN_ENABLED=true

build.recreate

Flag to recreate a BuildConfig or Imagestream. Defaults to false. Choices are "buildConfig", "imageStream", false, true. If true, both are re-created

build.forcePull

Flag to make your BuildConfig always pull a new image from dockerhub. Defaults to false

build.incremental

Flag to perform incremental builds (if applicable), which means it reuses artifacts from previously-built images. Defaults to false.

build.env

Flag to pass build config environment variables as NAME=Value. Can be used multiple times. ex: nodeshift --build.env NODE_ENV=development --build.env YARN_ENABLED=true

build.strategy

Flag to change the build strategy used. Values can be Docker or Source. Defaults to Source

useDeployment

Flag to deploy the application using a Deployment instead of a DeploymentConfig. Defaults to false

knative

EXPERIMENTAL. Flag to deploy an application as a Knative Serving Service. Defaults to false. Since this feature is experimental, it is subject to change without a major version release until it is fully stable.

kube

Flag to deploy an application to a vanilla kubernetes cluster. At the moment only Minikube is supported.

rh-metering

Flag to add some metering labels to a deployment. To change the nodeVersion label, use --rh-metering.nodeVersion flag. Intended for use with Red Hat product images. For more information on metering for Red Hat images, see here

help

Shows the below help

    Usage: nodeshift [--options]

    Commands:
        nodeshift deploy          default command - deploy                   [default]
        nodeshift build           build command
        nodeshift resource        resource command
        nodeshift apply-resource  apply resource command
        nodeshift undeploy        undeploy resources
        nodeshift login           login to the cluster
        nodeshift logout          logout of the cluster

    Options:
        --version                Show version number                         [boolean]
        --projectLocation        change the default location of the project   [string]
        --kube                   Flag to deploy an application to a vanilla kubernetes
                       cluster.  At the moment only Minikube is supported.
                                                                             [boolean]
        --configLocation         change the default location of the config    [string]
        --token                  auth token to pass into the openshift rest client for
                                 logging in.  Overrides the username/password [string]
        --username               username to pass into the openshift rest client for
                                 logging in                                   [string]
        --password               password to pass into the openshift rest client for
                                 logging in                                   [string]
        --apiServer              Deprecated - use the "server" flag instead. server address to pass into the openshift rest client
                                 for logging in                               [string]
        --server                 server address to pass into the openshift rest client
                                 for logging in                               [string]
        --insecure               flag to pass into the openshift rest client for
                                 logging in with a self signed cert.  Only used with
                                 apiServer login                             [boolean]
        --forceLogin             Force a login when using the apiServer login[boolean]
        --imageTag           The tag of the docker image to use for the deployed
                            application.                 [string] [default: "latest"]
        --web-app                flag to automatically set the appropriate docker image
                                 for web app deployment
                                                         [boolean] [default: false]
        --resourceProfile        Define a subdirectory below .nodeshift/ that indicates
                                 where Openshift resources are stored         [string]
        --outputImageStream      The name of the ImageStream to output to.  Defaults
                       to project name from package.json            [string]
        --outputImageStreamTag   The tag of the ImageStream to output to.    [string]
        --quiet                  suppress INFO and TRACE lines from output logs
                                                                            [boolean]
        --expose            flag to create a default Route and expose the default
                   service [boolean] [choices: true, false] [default: false]
        --namespace.displayName  flag to specify the project namespace display name to
                       build/deploy into.  Overwrites any namespace settings
                       in your OpenShift or Kubernetes configuration files
                                                                    [string]
        --namespace.create       flag to create the namespace if it does not exist.
                       Only applicable for the build and deploy command.
                       Must be used with namespace.name            [boolean]
        --namespace.remove       flag to remove the user created namespace.  Only
                       applicable for the undeploy command.  Must be used
                       with namespace.name                         [boolean]
        --namespace.name         flag to specify the project namespace name to
                       build/deploy into.  Overwrites any namespace settings
                       in your OpenShift or Kubernetes configuration files
                                                                    [string]
        --deploy.port        flag to update the default ports on the resource files.
                   Defaults to 8080                          [default: 8080]
        --build.recreate         flag to recreate a buildConfig or Imagestream
                [choices: "buildConfig", "imageStream", false, true] [default: false]
        --build.forcePull        flag to make your BuildConfig always pull a new image
                                from dockerhub or not
                                    [boolean] [choices: true, false] [default: false]
        --build.incremental  flag to perform incremental builds, which means it reuses
                                artifacts from previously-built images
                                    [boolean] [choices: true, false] [default: false]
        --build.strategy         flag to change the build strategy.  Defaults to Source
                                  [choices: "Source", "Docker"]
        --metadata.out           determines what should be done with the response
                                metadata from OpenShift
                [string] [choices: "stdout", "ignore", "<filename>"] [default: "ignore"]
        --useDeployment          flag to deploy the application using a Deployment
                       instead of a DeploymentConfig
                           [boolean] [choices: true, false] [default: false]
        --knative                EXPERIMENTAL. flag to deploy an application
                       as a Knative Serving Service
                           [boolean] [choices: true, false] [default: false]
        --help                   Show help                                   [boolean]
        --cmd                                                      [default: "deploy"]

Contributing

Please read the contributing guide

nodejs-reference-architecture's People

Contributors

aalykiot, bethgriggs, boneskull, cadienvan, danbev, davidsint, domharries, edo9k, freedomben, helio-frota, jimlindeman, joesepi, johncalvinroberts, kalmanodds, lholmquist, ludovic-gasc, mhdawson, mostafatalaat770, mpparsley, richardlau, roastlechon, rolfl, upkarlidder, wtrocki


nodejs-reference-architecture's Issues

Configuration of the CI/CD for automation and releases

As with each reference architecture, it is crucial to start with versioning (draft, 0.1.0 versions) and publish releases using automation. This is better done at the beginning because, without ensuring some specific formats, we can end up with lots of boilerplate work before release.

Examples of workflows that can be used (if we're going to use something like Docusaurus for the website, etc.):
https://github.com/oasis-open/odata-rapid/blob/master/.github/workflows/website.yml

https://github.com/aerogear/graphback/blob/master/.github/workflows/build-website.yml

Express as a framework?

Despite its enormous success over the years, Express isn't keeping up with the demands of today's products and can be considered deprecated or, at least, no longer actively maintained.
The last commit on the master branch was on March 4, 2023, and the last commit on v5, the "will-be-next" branch, is dated Feb 15, 2022.

Modern expressions of Node.js async power such as Fastify are clearly winning this race and, in my (and many others') opinion, should be suggested instead of Express.

The risk here is to start with technical debt.

Research on static file/serving platforms

Hi

I have done some preliminary work to seed discussion on static middlewares in Node.js.

I think we can have 3 categories for those:

Generally, S3 dominates as a store.

There is also the concept of how to reference and store S3 files with auth in a database, but this is very specific and it is easier to google than to reference here.

@mhdawson This might actually require some discussion on a call.

Recommend Roarr over Pino

The current recommended component for logging is Pino

https://github.com/nodeshift/nodejs-reference-architecture/blob/main/docs/operations/logging.md

Roarr is an actively maintained Node.js JSON logger that has several benefits over Pino and other frameworks:

  • Does not block the event cycle (=fast).
  • Does not require initialisation.
  • Produces structured data.
  • Decouples transports.
  • Has a CLI program and web application for reading logs.
  • Works in Node.js and browser.
  • Configurable using environment variables.

I have provided an in-depth write-up about my motivation for creating and maintaining Roarr.

Perhaps the most useful (unique) feature of Roarr is adopt, which allows inheriting log context and describing relationships between logs in async contexts using sequence.

Oauth2 vs Auth Code Workflow

Reading through the documentation the following line from the auth section doesn't make a whole lot of sense:

Do not use OAUTH2 implicit grant, is preferred to use Authorization code workflow.

I'm not sure if I should be reading this as

Do not use OAUTH2 implicit grant, however, it is preferred to use instead of Authorization code workflow.

Or if there is something else missing from that sentence.

Recommend internal npm repo process

This question was raised in a recent talk Michael and I gave internally at IBM:

Has there been any progress on an internal NPM repo that hosts only approved & validated packages and versions so that every BU didn't have to sort this on their own - this would go a long way to help developers get consistency across a company. It would also help with security.

Moving all md files into docs folder.

When using some builders etc., all MD files will need to be moved into some folder. Currently they are in the root, which makes it hard to differentiate them from other files like the license and contributing guides. I'm happy to do that as a requirement to build the Docusaurus prototype website for this engagement.

Build style job is failing with error

Checking if module is tested by community CITGM runs WARN
TypeError: Cannot read property 'cves' of undefined
at /home/runner/.npm/_npx/2123/lib/node_modules/npcheck/src/plugins/audit.js:60:49
at runMicrotasks ()
at processTicksAndRejections (internal/process/task_queues.js:95:5)
at async Promise.all (index 0)
at async auditPlugin (/home/runner/.npm/_npx/2123/lib/node_modules/npcheck/src/plugins/audit.js:69:28)
at async Object.run (/home/runner/.npm/_npx/2123/lib/node_modules/npcheck/src/cli.js:101:24)
Error: Process completed with exit code 1.

Formatting of the docs into the website.

Hi

I have been using Docusaurus in a couple of places and it actually enables some nice syntax highlighting in markdown. What is the intended target platform for publishing this documentation? If we consider Docusaurus, I have some experience setting it up and can help drive it.

Maintenance and traceability of the spec

As with every document or recommendation, it is sometimes really hard to maintain, considering the number of packages that will be included in the recommendations.

Packages can get:

  • Outdated
  • Deprecated
  • Stopped from being maintained.
  • No longer work with the rest of the ecosystem (Node.js version etc.)
  • Go against some values and rules

To keep reference specifications maintainable, we usually try to automate some of these tasks, as recurrently going through the recommendations will be unrealistic and the documentation will become outdated/deprecated over time (been there).

Solution: Utilize already existing ecosystem of bots and list packages in the way that they can be seen in one place and automated.

Approach nr1

Each Node.js package we recommend ends up in the root package.json.
We configure Dependabot, Renovate, nsp and other IBM and Red Hat bots (like license checks)
to run on each PR and also on releases.

Approach nr2

List packages in a custom JSON file that has them categorized etc.; then we can build some scripts for checking some metrics on those packages.

I would think that initially we just need something like this:

{
  "packages": {
    "express": {
      // To express categories
      "category": "REST API",
      // To express flavours
      "labels": ["General Purpose", "Platform", "IOT"],
      // To loosely define what version was checked or recommended
      "version": ">=5.0.0",
      // To include some packages that we recommend that will work only with express
      "subpackages": ["express-session", "keycloak-connect"]
    }
  }
}
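With a file like that in place, automation scripts could iterate over it. As an illustration only (the file format above is itself a proposal), a small helper grouping packages by category might look like:

```javascript
// Sketch: group the recommended packages by category so automated checks
// (download counts, deprecation, license scans) can iterate over them.
function packagesByCategory (spec) {
  const byCategory = {};
  for (const [name, meta] of Object.entries(spec.packages)) {
    // Append the package name to its category's bucket, creating it if needed
    (byCategory[meta.category] ??= []).push(name);
  }
  return byCategory;
}
```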

CC @mhdawson @lholmquist

Document justifications for recommendations

It'd be useful to have a little more of the thinking behind some of the recommendations documented. For instance, the recommendation of pino for logging framework is fine, but I'm productizing an existing set of source that already uses winston for logging. I can change it to pino at the cost of some development effort but I don't have any explanation to give the existing developers as to why pino is a better choice than winston.

A comparison based on numbers of weekly downloads or GitHub stars puts winston well ahead of pino; winston has been around a lot longer; they have the same MIT license; pino is smaller and faster but not as featureful.

The Logging page right now doesn't mention winston; it does say that pino is popular (with some stats that are more than a year old) but doesn't comment on the fact that winston is more popular or attempt to compare pino with winston.

Could we capture a little more of the background thinking behind the recommendations so they have a little more persuasive power to change existing usages?

Monorepository tooling and recommendations Advanced Section

Advanced Tips / Tricks

  • TODO section blurb about hoisting and nohoist for strategies
  • TODO section for yarn resolutions
  • TODO section for individually controlled scripts. Provide use case when you might have scripts that need to be defined differently due to different package behaviors

Move Helmet.js to security

Helmet.js is currently listed under Authentication and Authorization. I just checked their docs, and they provide good security HTTP headers. Should it not move to security? I don't see a security section in functional components. There is one in Development.

Code Coverage section

investigate what folks are using for code coverage / hosting solutions

c8, nyc, codecov, coveralls?

Tooling to capture data about candidate packages

Originally posted by @mhdawson

As part of the discussion around which packages make sense to include in the architecture, it would be good to have overview data to supplement the direct experience of the team.

Informally we've looked at:

  • weekly npm downloads
  • PRs, rate of landing, active collaborators, etc., to assess how easy it will be to contribute to the project

It would be good to have some tooling to more easily gather this data, and it could be re-used to keep an ongoing eye on the packages we have included as suggested.

Re-consider recommendation of rdkafka (prefer kafkajs)

My IBM Garage team prefers kafkajs (https://kafka.js.org) to rdkafka. The developer experience is better with kafkajs, because of its pedigree, and it seems to have more maintenance activity. It is also more portable. kafkajs looks to have surpassed node-rdkafka in terms of npm downloads (although that is an imperfect metric of project value!).

Counter-arguments:
