extractus / article-extractor

Extract the main article from a given URL with Node.js

Home Page: https://extractor-demos.pages.dev/article-extractor

License: MIT License

JavaScript 68.56% HTML 31.44%
nodejs article-parser readability article article-extractor crawler extract scraper

article-extractor's Introduction

@extractus/article-extractor

Extract main article, main image and meta data from URL.

npm version CodeQL CI test Coverage Status

(This library is the renamed continuation of article-parser.)

Demo

Install & Usage

Node.js

npm i @extractus/article-extractor

# pnpm
pnpm i @extractus/article-extractor

# yarn
yarn add @extractus/article-extractor

// ES module
import { extract } from '@extractus/article-extractor'

Deno

import { extract } from 'https://esm.sh/@extractus/article-extractor'

// deno > 1.28
import { extract } from 'npm:@extractus/article-extractor'

Browser

import { extract } from 'https://esm.sh/@extractus/article-extractor'

Please check the examples for reference.

APIs


extract()

Loads and extracts article data. Returns a Promise.

Syntax

extract(String input)
extract(String input, Object parserOptions)
extract(String input, Object parserOptions, Object fetchOptions)

Example:

import { extract } from '@extractus/article-extractor'

const input = 'https://www.cnbc.com/2022/09/21/what-another-major-rate-hike-by-the-federal-reserve-means-to-you.html'

// here we use top-level await, assuming the current platform supports it
try {
  const article = await extract(input)
  console.log(article)
} catch (err) {
  console.error(err)
}

The result, article, can be null or an object with the following structure:

{
  url: String,
  title: String,
  description: String,
  image: String,
  author: String,
  favicon: String,
  content: String,
  published: Date String,
  type: String, // page type
  source: String, // original publisher
  links: Array, // list of alternative links
  ttr: Number, // time to read in seconds, 0 = unknown
}
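Since the result may be null, downstream code should guard for it. A minimal sketch of consuming the object above (the summarize helper and its output format are illustrative, not part of the library):

```javascript
// Hypothetical helper: turn an extraction result into a one-line summary,
// guarding against the null case and the ttr-unknown case (ttr = 0).
const summarize = (article) => {
  if (!article) {
    return 'no article extracted'
  }
  const minutes = article.ttr > 0 ? Math.ceil(article.ttr / 60) : null
  return minutes ? `${article.title} (${minutes} min read)` : article.title
}
```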

Parameters

input required

URL string linking to the article, or the HTML content of that web page.

parserOptions optional

Object with all or several of the following properties:

  • wordsPerMinute: Number, used to estimate time to read. Default 300.
  • descriptionTruncateLen: Number, maximum number of characters for a generated description. Default 210.
  • descriptionLengthThreshold: Number, minimum number of characters required for the description. Default 180.
  • contentLengthThreshold: Number, minimum number of characters required for the content. Default 200.

For example:

import { extract } from '@extractus/article-extractor'

const article = await extract('https://www.cnbc.com/2022/09/21/what-another-major-rate-hike-by-the-federal-reserve-means-to-you.html', {
  descriptionLengthThreshold: 120,
  contentLengthThreshold: 500
})

console.log(article)
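As a rough mental model of what wordsPerMinute controls (an assumption about the formula, not the library's exact implementation):

```javascript
// Assumed model: ttr in seconds, derived from word count and wordsPerMinute.
const estimateTtr = (text, wordsPerMinute = 300) => {
  const words = text.trim().split(/\s+/).filter(Boolean).length
  return Math.round((words / wordsPerMinute) * 60)
}
```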

fetchOptions optional

fetchOptions is an object that can have the following properties:

  • headers: to set request headers
  • proxy: another endpoint to forward the request to
  • agent: an HTTP proxy agent
  • signal: an AbortController signal or AbortSignal timeout to terminate the request

For example, you can use this parameter to set request headers for the fetch, as below:

import { extract } from '@extractus/article-extractor'

const url = 'https://www.cnbc.com/2022/09/21/what-another-major-rate-hike-by-the-federal-reserve-means-to-you.html'
const article = await extract(url, {}, {
  headers: {
    'user-agent': 'Opera/9.60 (Windows NT 6.0; U; en) Presto/2.1.1'
  }
})

console.log(article)

You can also specify a proxy endpoint to load remote content, instead of fetching directly.

For example:

import { extract } from '@extractus/article-extractor'

const url = 'https://www.cnbc.com/2022/09/21/what-another-major-rate-hike-by-the-federal-reserve-means-to-you.html'

await extract(url, {}, {
  headers: {
    'user-agent': 'Opera/9.60 (Windows NT 6.0; U; en) Presto/2.1.1'
  },
  proxy: {
    target: 'https://your-secret-proxy.io/loadXml?url=',
    headers: {
      'Proxy-Authorization': 'Bearer YWxhZGRpbjpvcGVuc2VzYW1l...'
    },
  }
})

Passing requests through a proxy is useful when running @extractus/article-extractor in the browser. See examples/browser-article-parser for a reference example.

For more info about proxy authentication, please refer to HTTP authentication.

For deeper customization, you can consider using a Proxy to replace fetch behavior with your own handlers.

Another way to work with a proxy is to use the agent option instead of proxy, as below:

import { extract } from '@extractus/article-extractor'

import { HttpsProxyAgent } from 'https-proxy-agent'

const proxy = 'http://abc:[email protected]:31113'

const url = 'https://www.cnbc.com/2022/09/21/what-another-major-rate-hike-by-the-federal-reserve-means-to-you.html'

const article = await extract(url, {}, {
  agent: new HttpsProxyAgent(proxy),
})
console.log('Run article-extractor with proxy:', proxy)
console.log(article)

For more info about https-proxy-agent, check its repo.

By default, there is no request timeout. You can use the signal option to cancel the request when needed.

The common way is to use AbortController:

const controller = new AbortController()

// stop after 5 seconds
setTimeout(() => {
  controller.abort()
}, 5000)

const data = await extract(url, null, {
  signal: controller.signal,
})

A newer solution is AbortSignal's timeout() static method:

// stop after 5 seconds
const data = await extract(url, null, {
  signal: AbortSignal.timeout(5000),
})
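Either way, an aborted request surfaces as a rejected promise. A standalone sketch of wrapping an extract-like function so a timeout yields null instead of a throw (the wrapper and its error handling are illustrative, not part of the library):

```javascript
// Wrap a fetcher with AbortSignal.timeout and map an abort to null.
// AbortSignal.timeout() rejects with a DOMException named 'TimeoutError';
// a manual AbortController abort is named 'AbortError'.
const withTimeout = async (fetcher, url, ms) => {
  try {
    return await fetcher(url, null, { signal: AbortSignal.timeout(ms) })
  } catch (err) {
    if (err.name === 'TimeoutError' || err.name === 'AbortError') {
      return null
    }
    throw err
  }
}
```

Usage would then look like: const article = await withTimeout(extract, url, 5000).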

extractFromHtml()

Extracts article data from an HTML string. Returns a Promise, the same as the extract() method above.

Syntax

extractFromHtml(String html)
extractFromHtml(String html, String url)
extractFromHtml(String html, String url, Object parserOptions)

Example:

import { extractFromHtml } from '@extractus/article-extractor'

const url = 'https://www.cnbc.com/2022/09/21/what-another-major-rate-hike-by-the-federal-reserve-means-to-you.html'

const res = await fetch(url)
const html = await res.text()

// you can do whatever you want with this raw HTML here: clean it up, remove ad banners, etc.
// just ensure an HTML string is passed on

const article = await extractFromHtml(html, url)
console.log(article)

Parameters

html required

HTML string which contains the article you want to extract.

url optional

URL string that indicates the source of that HTML content. article-extractor may use this info to handle internal/relative links.
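Resolving relative links against a base URL follows standard WHATWG URL semantics; a sketch of that resolution step (illustrative, not the library's internal code):

```javascript
// Resolve a possibly-relative href against the page's source URL.
const absolutize = (href, baseUrl) => {
  try {
    return new URL(href, baseUrl).href
  } catch {
    return href // leave unresolvable values untouched
  }
}
```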

parserOptions optional

See parserOptions above.


Transformations

Sometimes the default extraction algorithm does not work well. That is when transformations help.

By running functions before and after the main extraction step, we can improve the result as much as possible.

There are two methods for working with transformations:

  • addTransformations(Object transformation | Array transformations)
  • removeTransformations(Array patterns)

First, let's look at the transformation object.

transformation object

In @extractus/article-extractor, transformation is an object with the following properties:

  • patterns: required, a list of regexps to match the URLs
  • pre: optional, a function to process raw HTML
  • post: optional, a function to process extracted article

Basically, a transformation can be interpreted like this:

with the URLs that match these patterns
run the pre function to normalize the HTML content
then extract the main article from the normalized HTML, and if that succeeds
run the post function to normalize the extracted article content

article-extractor extraction process

Here is an example transformation:

{
  patterns: [
    /([\w]+\.)?domain\.tld\/*/,
    /domain\.tld\/articles\/*/
  ],
  pre: (document) => {
    // remove all .advertise-area and its siblings from raw HTML content
    document.querySelectorAll('.advertise-area').forEach((element) => {
      if (element.nodeName === 'DIV') {
        while (element.nextSibling) {
          element.parentNode.removeChild(element.nextSibling)
        }
        element.parentNode.removeChild(element)
      }
    })
    return document
  },
  post: (document) => {
    // with extracted article, replace all h4 tags with h2
    document.querySelectorAll('h4').forEach((element) => {
      const h2Element = document.createElement('h2')
      h2Element.innerHTML = element.innerHTML
      element.parentNode.replaceChild(h2Element, element)
    })
    // change small sized images to original version
    document.querySelectorAll('img').forEach((element) => {
      const src = element.getAttribute('src') || ''
      if (src.includes('domain.tld/pics/150x120/')) {
        const fullSrc = src.replace('/pics/150x120/', '/pics/original/')
        element.setAttribute('src', fullSrc)
      }
    })
    return document
  }
}

addTransformations(Object transformation | Array transformations)

Add a single transformation or a list of transformations. For example:

import { addTransformations } from '@extractus/article-extractor'

addTransformations({
  patterns: [
    /([\w]+\.)?abc\.tld\/*/
  ],
  pre: (document) => {
    // do something with document
    return document
  },
  post: (document) => {
    // do something with document
    return document
  }
})

addTransformations([
  {
    patterns: [
      /([\w]+\.)?def\.tld\/*/
    ],
    pre: (document) => {
      // do something with document
      return document
    },
    post: (document) => {
      // do something with document
      return document
    }
  },
  {
    patterns: [
      /([\w]+\.)?xyz\.tld\/*/
    ],
    pre: (document) => {
      // do something with document
      return document
    },
    post: (document) => {
      // do something with document
      return document
    }
  }
])

Transformations without patterns will be ignored.

removeTransformations(Array patterns)

Removes transformations that match the given patterns.

For example, we can remove all added transformations above:

import { removeTransformations } from '@extractus/article-extractor'

removeTransformations([
  /([\w]+\.)?abc\.tld\/*/,
  /([\w]+\.)?def\.tld\/*/,
  /([\w]+\.)?xyz\.tld\/*/
])

Calling removeTransformations() without a parameter will remove all current transformations.

Priority order

While processing an article, more than one transformation can be applied.

Suppose that we have the following transformations:

[
  {
    patterns: [
      /http(s?):\/\/google\.com\/*/,
      /http(s?):\/\/goo\.gl\/*/
    ],
    pre: function_one,
    post: function_two
  },
  {
    patterns: [
      /http(s?):\/\/goo\.gl\/*/,
      /http(s?):\/\/google\.inc\/*/
    ],
    pre: function_three,
    post: function_four
  }
]

As you can see, an article from goo.gl matches both of them.

In this scenario, @extractus/article-extractor will execute both transformations, one by one:

function_one -> function_three -> extraction -> function_two -> function_four
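That chaining amounts to: filter the transformations whose patterns match the URL, then fold the document through each matching pre (and later each post) in registration order. An illustrative model of that semantics, not the library's internal code:

```javascript
// Apply every matching transformation's `pre` or `post` hook in order.
const applyTransformations = (transformations, url, doc, phase) =>
  transformations
    .filter((t) => t.patterns.some((p) => p.test(url)))
    .reduce((d, t) => (typeof t[phase] === 'function' ? t[phase](d) : d), doc)
```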


sanitize-html's options

@extractus/article-extractor uses sanitize-html to make a clean sweep of HTML content.

Here are the default options.

Depending on the needs of your content system, you might want to keep some HTML tags/attributes while ignoring others.

There are two methods to access and modify these options in @extractus/article-extractor:

  • getSanitizeHtmlOptions()
  • setSanitizeHtmlOptions(Object sanitizeHtmlOptions)

Read sanitize-html docs for more info.
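For example, to keep <iframe> embeds, the default options would need allowedTags and allowedAttributes widened. A sketch of that merge, where defaults stands in for whatever getSanitizeHtmlOptions() returns (the option shape follows sanitize-html's documented format):

```javascript
// Widen sanitize-html options to keep <iframe> embeds.
const widenForIframes = (defaults) => ({
  ...defaults,
  allowedTags: [...(defaults.allowedTags ?? []), 'iframe'],
  allowedAttributes: {
    ...(defaults.allowedAttributes ?? {}),
    iframe: ['src', 'width', 'height', 'allowfullscreen']
  }
})
```

The result would then be passed to setSanitizeHtmlOptions().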


Test

git clone https://github.com/extractus/article-extractor.git
cd article-extractor
pnpm i
pnpm test


Quick evaluation

git clone https://github.com/extractus/article-extractor.git
cd article-extractor
pnpm i
pnpm eval {URL_TO_PARSE_ARTICLE}

License

The MIT License (MIT)

Support the project

If you find value in this open source project, you can support it in the following ways:

Thank you.


article-extractor's People

Contributors

alanhoff, andremacola, daveschumaker, ganeshmani, jfpalacios, larchliu, matita, mphill, ndaidong, papodaca, ranmocy, settingdust, simsim0709, valiant1x, willwashburn, yakimka


article-extractor's Issues

request has been deprecated

The old request library has been deprecated and people are encouraged to use more modern request utilities such as node-fetch or axios.

The issue is a result of the dependency es6-readability (which you also authored), which in turn includes a very old version of jsdom. There is a ticket in the JSDOM project to replace request so this ticket is just here to remind you to make a fix when the JSDOM people get around to fixing their stuff.

Content without markup

Is there a way to get the content without markup (i.e. plain text) or would you recommend another library to do this?

Issue while importing package.

Got the following exception when I import the package:

TypeError: Cannot assign to read only property 'Symbol(Symbol.toStringTag)' of object '#<DataView>'

I think the issue is with sanitize-html

Node Version - v12.18.0

Does anyone else have the same issue?

Expected to return a value at the end of async arrow function consistent-return

Why do I get this error? Can someone please help me?

"Expected to return a value at the end of async arrow function consistent-return."

I use it in an onChange handler since I would like the URL entered in the input to be used.

const handleChange = async (event) => {
  try {
    const article = await extract(event.target.value)
    return article
  } catch (err) {
    console.trace(err)
  }
}

typeError fs.readFileSync is not a function

  144555 | exports.FILES = [/*require.resolve*/(/*! ../lib/utils.js */ "./node_modules/uglify-js/lib/utils.js"), /*require.resolve*/(/*! ../lib/ast.js */ "./node_modules/uglify-js/lib/ast.js"), /*require.resolve*/(/*! ../lib/transform.js */ "./node_modules/uglify-js/lib/transform.js"), /*require.resolve*/(/*! ../lib/parse.js */ "./node_modules/uglify-js/lib/parse.js"), /*require.resolve*/(/*! ../lib/scope.js */ "./node_modules/uglify-js/lib/scope.js"), /*require.resolve*/(/*! ../lib/compress.js */ "./node_modules/uglify-js/lib/compress.js"), /*require.resolve*/(/*! ../lib/output.js */ "./node_modules/uglify-js/lib/output.js"), /*require.resolve*/(/*! ../lib/sourcemap.js */ "./node_modules/uglify-js/lib/sourcemap.js"), /*require.resolve*/(/*! ../lib/mozilla-ast.js */ "./node_modules/uglify-js/lib/mozilla-ast.js"), /*require.resolve*/(/*! ../lib/propmangle.js */ "./node_modules/uglify-js/lib/propmangle.js"), /*require.resolve*/(/*! ../lib/minify.js */ "./node_modules/uglify-js/lib/minify.js"), /*require.resolve*/(/*! ./exports.js */ "./node_modules/uglify-js/tools/exports.js")];
  144556 | new Function("exports", function () {
  144557 |   var code = exports.FILES.map(function (file) {
> 144558 |     return fs.readFileSync(file, "utf8");
         |               ^  144559 |   });
  144560 |   code.push("exports.describe_ast = " + describe_ast.toString());
  144561 |   return code.join("\n\n");

Syntax error

Using Node v6.9.2,

var ArticleParser = require('article-parser');

Getting the following error:

var {configure, FETCH_OPTIONS} = config;
^

SyntaxError: Unexpected token {
at exports.runInThisContext (vm.js:53:16)
at Module._compile (module.js:387:25)
at Object.Module._extensions..js (module.js:422:10)
at Module.load (module.js:357:32)
at Function.Module._load (module.js:314:12)
at Module.require (module.js:367:17)
at require (internal/module.js:20:19)
at Object.
at Module._compile (module.js:413:34)
at Object.Module._extensions..js (module.js:422:10)
at Module.load (module.js:357:32)
at Function.Module._load (module.js:314:12)
at Module.require (module.js:367:17)
at require (internal/module.js:20:19)
at Object.
at Module._compile (module.js:413:34)
at Object.Module._extensions..js (module.js:422:10)
at Module.load (module.js:357:32)
at Function.Module._load (module.js:314:12)
at Module.require (module.js:367:17)
at require (internal/module.js:20:19)
at Object.

Feature Request: Favicon

Hello,
First I want to say that your lib is ✨ great.

What would you think about adding the favicon of the page? It would allow showing a nice icon next to the article title.

Have a good day

Can't parse youtube video inside article

I can't seem to get the video inside an article:

<p class="text-justify">
	<strong>WATCH VIDEO BELOW</strong>
</p>
<p class="text-justify">
	<iframe src="https://www.youtube.com/embed/xxxxxxx" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe>
</p>

It can get the WATCH VIDEO BELOW but can't get the iframe. How can I achieve this?

Types

This package is really nice and has a simple API compared to some alternatives out there. It would be awesome if TypeScript types were included!

Null when extracting article

Hello!

Suddenly I have encountered problems when using article-parser. I get null when using extract, but it works when inputting the URL into the demo of your application at https://ndaidong.github.io/article-parser-demo/. I have the following code:

const url = 'https://news.yahoo.co.jp/articles/5c75721b1d2afe89beaf2a7f81e2bcff961fc56a';
extract(url).then((article) => {
  console.log(article);
  res.json(article);
}).catch((err) => {
  console.log(err);
  res.json(err);
});

(Using express). What can be the issue here? Other sites work. I guess I've parsed too much from Yahoo and got blocked or something?

Extracting from Full HTML does not work

Hi, I understand that I can use extract() also by passing a string containing a full HTML page

extract(String url | String html)

Yet if I do this, I get null as a return. Take the contents of http://mindingourway.com/half-assing-it-with-everything-youve-got/ as an example. Passing the URL works fine, but reading the URL contents into a string and then passing the string to extract() fails.

A little bit of debugging article-parser leads to the finding that the culprit is isHtmlDoc(input). This function only returns true if there are more than 10 <div> tags in the string:

return htmlTags === 1 && headTags === 1 && bodyTags === 1 && divTags > 10;

I don't think that's a good heuristic. ;)

Why not skip isHtmlDoc() altogether and simply check whether the string begins with a <?
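The check the reporter proposes could be sketched like this (illustrative only; whether it suits the library is the maintainers' call):

```javascript
// Treat input as HTML if, ignoring leading whitespace, it starts with '<'.
const looksLikeHtml = (input) => /^\s*</.test(input)
```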

App Crashing with fresh install

I installed the module in my Node app and it crashes instantly.

var ArticleParser = require('article-parser')

function extractURL(url) {
    return ArticleParser.extract(url)
}
/.../node_modules/article-parser/src/main.js:15
14:13:17 worker.1 | var {configure, FETCH_OPTIONS} = config;
14:13:17 worker.1 |     ^
14:13:17 worker.1 | 
14:13:17 worker.1 | SyntaxError: Unexpected token {
14:13:17 worker.1 |     at exports.runInThisContext (vm.js:53:16)
14:13:17 worker.1 |     at Module._compile (module.js:404:25)
14:13:17 worker.1 |     at Object.Module._extensions..js (module.js:432:10)
14:13:17 worker.1 |     at Module.load (module.js:356:32)
14:13:17 worker.1 |     at Function.Module._load (module.js:311:12)
14:13:17 worker.1 |     at Module.require (module.js:366:17)
14:13:17 worker.1 |     at require (module.js:385:17)
14:13:17 worker.1 |     at Object.<anonymous> (/.../node_modules/article-parser/index.js:5:12)
14:13:17 worker.1 |     at Module._compile (module.js:425:26)
14:13:17 worker.1 |     at Object.Module._extensions..js (module.js:432:10)

Any thoughts? Thanks.

when dealing with Chinese characters

When articles contain Chinese, it returns content like this:

'<div> <div><span> </span><span> </span><span> </span><span> </span><div><span> </span><div><span> </span><span> </span><div><span> </span><span> </span>2017&#x5E74;5&#x6708;29&#x65E5; | &#x5206;&#x7C7B;: <a href="http://blog.51.ca/?cat=101" target="_blank">&#x751F;&#x6D3B;</a> (&#x5168;&#x5C40;), <a href="http://blog.51.ca/u-288413/category/%e7%94%9c%e7%82%b9/" target="_blank">&#x751C;&#x70B9;</a>, <a href="http://blog.51.ca/u-288413/category/blogroll/" target="_blank">Blogroll</a> | &#x4F5C;&#x8005;: <a href="http://blog.51.ca/u-288413/author/%e9%99%88%e5%b0%8f%e6%99%93/" target="_blank">&#x9648;&#x5C0F;&#x6653;</a> | <strong>523</strong> &#x6D4F;&#x89C8;<span> </span><div><span> </span><div><p> &#x660E;&#x5929;&#x5C31;&#x662F;&#x7AEF;&#x5348;&#x8282;&#xFF0C; </p><span> </span><p> &#x4FFA;&#x559C;&#x6B22;&#x5403;&#xFF02;&#x67A7;&#x6C34;&#x68D5;&#xFF02;&#xFF0C; </p><span> </span><p> &#x56E0;&#x4E3A;&#x716E;&#x597D;&#x7684;&#x68D5;&#x5B50;&#xFF0C; </p><span> </span><p> &#x8089;&#x8272;&#x91D1;&#x9EC4;&#x900F;&#x660E;&#xFF0C; </p><span> </span><p> &#x8106;&#x53E3;&#x4E0D;&#x817B;&#xFF0C; </p><span> </span><p> &#x5403;&#x65F6;&#x8638;&#x4EE5;&#x767D;&#x7CD6;&#x6216;&#x871C;&#x7CD6;&#xFF0C; </p><span> </span><p> &#x522B;&#x6709;&#x98CE;&#x5473;&#xFF0C;&#x4ECA;&#x5929;&#x505A;&#x4E86;&#x4E00;&#x4E9B;&#x5E94;&#x8282;&#x3002; </p><span> </span><span> </span><p> <img src="http://blog.51.ca/u-288413/files/2017/05/image4.jpg" alt="image.jpg"> </p><span> </span><p> <img src="http://blog.51.ca/u-288413/files/2017/05/image5.jpg" alt="image.jpg"> </p><span> </span><p> <img src="http://blog.51.ca/u-288413/files/2017/05/image6.jpg" alt="image.jpg"> </p><span> </span><span> </span><span> </span></div><span> </span></div><span> </span><span> </span></div><span> </span><span> </span><span> </span><span> </span><span> </span><span> </span><span> </span></div><span> </span><span> </span><span> </span><span> </span></div><span> </span><span> </span></div> </div>'

Is it possible to have the content show the actual Chinese characters?

How to decode these characters?

When the page content isn't English, after using the extract() method, article.content may look like this: &#x7528;&#x2014;&#x2014;&#x6784;&#x5EFA;.
How can I avoid this?

node-gyp: Permission denied

Permission issue occurred while installing salient > hiredis:

ndaidong@bella-mint /var/www/node/article-parser $ npm i

> [email protected] install /var/www/node/article-parser/node_modules/hiredis
> node-gyp rebuild

sh: 1: node-gyp: Permission denied
npm ERR! code ELIFECYCLE
npm ERR! errno 126
npm ERR! [email protected] install: `node-gyp rebuild`
npm ERR! Exit status 126
npm ERR! 
npm ERR! Failed at the [email protected] install script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.

npm ERR! A complete log of this run can be found in:
npm ERR!     /home/ndaidong/.npm/_logs/2017-09-08T06_44_16_444Z-debug.log

v2.2.0

Extraction failed when URL terminates with slash " / " character

I noticed that article-parser fails when parsing URLs ending in /.

For instance, article-parser succeeds with
https://knightcolumbia.org/content/tim-wu-first-amendment-obsolete
but fails with
https://knightcolumbia.org/content/tim-wu-first-amendment-obsolete/

Perhaps this is expected behavior.

htmlRules not being applied to results

Current behavior prevents developers from modifying the htmlRules object used with sanitize-html. Instead, the htmlRules that is applied is a constant (config.article.htmlRules) defined in './config/index.js'. This seems to be because of an earlier issue with parsing Wikipedia articles (see #41). Issue #41 seems to have been resolved by other commits, and this remnant preventing direct access to htmlRules seems to serve no purpose that I can see. I will fork and create a pull request.
