
webmemex / freeze-dry


Snapshots a web page to get it as a static, self-contained HTML document.

Home Page: https://freezedry.webmemex.org

License: The Unlicense

HTML 8.68% CSS 0.35% TypeScript 89.84% JavaScript 1.13%

freeze-dry's Introduction

WebMemex

A web browser that lets you save and organise the pages you visit, to create your personal web of pages and notes, connected by your associations.

This experiment was designed to run as a web app, so effectively you run a browser inside your browser (so you can browse while you browse!).

This project is kept alive as a demo but is not developed further in this form. New forms are being worked on; see webmemex.org for the current status.

(screenshot)

Idea

Described here.

Try out

  1. Open demo.webmemex.org in a modern browser.
  2. Tell us what you think! 📧

Build

Building and running this demo is currently a bit of a hassle. The setup consists of two containers: this app itself, and a pywb-webrecorder instance, which is used to proxy viewed webpages while inserting some extra code into them to detect and report link clicks (because the application should open clicked links in a new iframe).

To run things yourself:

  1. Get Docker and Node/NPM.
  2. Clone and run this pywb-webrecorder fork (make rebuild start, or something).
  3. Clone and run webmemex (make build run). Then visit localhost:8086 in your browser.

Code tour

The whole thing is just an HTML page/app itself, using <iframe>s to show the browsed web pages. React is used for managing the DOM, Redux for managing the application state, PouchDB for persisting data in one's local storage. Some familiarity with Redux and its concepts (reducers, actions, store) may be required to understand the code.

The code is designed to be somewhat modular, and could be seen as three repos in one: the top level (the 'app itself'), the canvas module for the UI, and the storage module for handling persisted data.

Top level app (src)

  • Sets up the app (see main.html, main.jsx): renders the canvas into the page, creates a Redux store (see store.js), connects the modules to the store (reducer.js).
  • Specifies the higher-level behaviour (see actions.js): practically, all the logic that involves both the UI and the storage, and can thus not be put in either module.
  • Its React components (src/components/*) specify how to display the 'documents' from the storage in the items on the canvas: the notes (Note.jsx), webpages (Webpage.jsx) and also the special empty item for navigating (EmptyItem.jsx).

Canvas UI (src/canvas)

  • Implements the user interface: a 2D 'canvas' (not related to the HTML <canvas> element) with any number of items, possibly with edges between them.
  • Handles the placement and positioning of items (<div>s), and enables (multitouch) interactions like dragging and resizing items.
  • Knows nothing of an item's content, except its docId given by the top level app. It simply passes this docId to the configured component (StemItem in this app), so the top level app decides what to draw inside an item.

Storage (src/storage)

  • Keeps a persistent collection of documents (currently just simple webmarks and text notes, e.g. {url: 'https://webmemex.org'}), and a collection of links between them (simple {sourceDocId, targetDocId} pairs).
  • Not to be confused with the Redux store (src/store.js), which manages the application state, and thus also contains the (non-persistent) canvas state.
  • Storage is currently implemented as part of the Redux store, using redux-pouchdb to synchronise its state in Redux with a PouchDB database in the browser's offline storage.

freeze-dry's People

Contributors

gozala, reficul31, treora


freeze-dry's Issues

Resolve redirects

Too many links are nowadays obscured by link shorteners and tracker URLs. For example, on Twitter, a link would point to https://t.co/1PT68A6LEt when the author meant to refer to https://voice.mozilla.org/. Learning the intended link target requires querying the shortener service, thus depending on an external service to still exist and be reachable. Not so nice.

We could therefore consider href values of such links to be resources that belong to the document, and should thus be fetched and stored. It may be tough to decide when a link is an undesired redirect, and when it is a 'legit' redirect that should be retained. One approach is to always resolve all redirects. The original URL would of course be kept as an extra attribute.

A question is still whether we can actually obtain the redirection location. fetch(url, {method: 'HEAD'}) sounds appropriate, but looking at the fetch specification (here), it looks like it might hide all redirection information for security reasons.
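For what it's worth, a fetch that follows redirects (the default) does expose the final URL through Response.url, even though the intermediate Location headers stay hidden. A minimal sketch, assuming a context where CORS permits the request (resolveRedirects is a hypothetical helper, not part of freeze-dry):

async function resolveRedirects(url) {
  // fetch follows redirects by default (redirect: 'follow');
  // response.url is then the URL after all redirects, readable
  // even though the Location headers themselves are not.
  const response = await fetch(url, { method: 'HEAD' })
  return response.url
}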

Could you please publish used deps onto git as well

I would really like to use this with plain ESM and without build steps. Other than #35, I also ran into the issue that some of the dependencies used aren't published in non-compiled form anywhere. @Treora, given that you're the author of some of those deps, would you mind publishing them to GitHub as well?

Thanks

Option to try to erase hidden information

One of the use cases of freeze-dry is to snapshot web pages in order to share them with others. If a page is personalised, e.g. a user snapshots their shopping cart of a web shop, the page may contain private information one would rather not share. If that information is visible, the user can notice it and choose not to share (or could edit the page with other tools). But if the information is hidden in the page, for example when a session ID or anti-CSRF token is stored in a hidden input field, they might accidentally share private information they could not see themselves.

I once heard that this risk of accidentally sharing hidden, sensitive information was one of the reasons for Mozilla’s PageShot experiment to finally not capture the DOM and only output a screenshot (despite the excellent work at capturing the DOM, similar to freeze-dry).

Freeze-dry already removes javascript, which removes one potential source of hidden information. We could also consider adding an option to remove <input type="hidden"> elements. And perhaps data-… attributes? Are there other invisible elements/attributes that are often used for sensitive data, and that we should thus consider filtering out?

Of course such a filtering approach will never guarantee cleanness, but it could probably weed out most of the cases. Interestingly, PageShot got a bit closer to a guarantee by taking the inverse approach: not cloning the whole DOM and filtering things out, but trying to only pick the elements and attribute types that it knows about.

Of course, in many use cases one may also want to remove everything that is invisible simply for reducing the size of the output. Ideally, various types of DOM transformations like these would not be implemented in freeze-dry itself, but could be plugged in. But I’ll park the issue here for the time being.
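For illustration, a first stab at such a pluggable filter might look like this; it is only a sketch of the two ideas above (hidden inputs and data-… attributes), not an existing freeze-dry option:

function eraseHiddenInformation(rootElement) {
  // Remove hidden form fields, which may hold session IDs or anti-CSRF tokens.
  for (const input of rootElement.querySelectorAll('input[type=hidden]')) {
    input.remove()
  }
  // Strip data-… attributes, which scripts sometimes use to stash state.
  for (const element of rootElement.querySelectorAll('*')) {
    for (const name of element.getAttributeNames()) {
      if (name.startsWith('data-')) element.removeAttribute(name)
    }
  }
}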

Use a type system

Use JS classes, Flow, TypeScript, mere JSDoc @types, ...?

Some objects especially worth creating types for, as we pass them around:

  1. parse functions and their return value: [{ token, index, note? }, ...].
  2. attributeInfo (documented here).
  3. link objects (documented here).
  4. resource objects (documented here), probably with subtypes HtmlResource/CssResource.
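To illustrate the lightest-weight option, mere JSDoc @types for items 1 and 3 could look like this (the property names here are guesses for illustration, not the actual shapes):

/**
 * @typedef {Object} Link
 * @property {string} target - the (possibly relative) URL the link points to
 * @property {Element} from - the DOM node the link was found in
 * @property {boolean} [isSubresource] - whether the target should be fetched
 */

/**
 * @param {string} value - an attribute value to parse for URLs
 * @returns {Array<{ token: string, index: number, note?: string }>}
 */
function parseAttribute(value) { /* ... */ }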

Capture dynamically inserted CSS rules

Scripts can modify stylesheets using CSSOM functions like CSSStyleSheet.insertRule(), used by e.g. emotion.js.

The contents of a <style> element appear not to be updated to reflect the new rules, so I suppose we will have to do this ourselves, by going through the rules using the CSSOM. Or we take another approach altogether to preserve styles.

In the wild, I observed the problem with images on NYTimes articles, which become a mess (they use emotion.js).
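A sketch of the CSSOM route, run against the live document; note that reading cssRules can throw for cross-origin stylesheets, which this sketch ignores:

function stylesheetText(sheet) {
  // cssRules reflects the current state, including dynamically inserted rules.
  return Array.from(sheet.cssRules).map(rule => rule.cssText).join('\n')
}

for (const sheet of document.styleSheets) {
  // Overwrite each <style> element's stale text with the serialised rules.
  if (sheet.ownerNode instanceof HTMLStyleElement) {
    sheet.ownerNode.textContent = stylesheetText(sheet)
  }
}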

Only grab necessary subresources

Currently, we inline all resolutions listed in an <img>'s srcset, all <audio> and <video> sources, all stylesheets, etcetera. This makes snapshots huge. The upside is that the snapshot will be as rich as the original, and more likely to work and look as intended in various browsers and screen resolutions. Depending on the application, one or the other factor may be more important, so it would be nice to make it configurable how much we grab. Some preliminary thoughts on this:

  • One reasonable desire is to grab only things that are currently in use (if this can be tested for). This could help a lot with speeding freeze-dry up, as those things may be available from cache.

  • For images with multiple resolutions, we could read element.currentSrc and only grab that one. And/or perhaps get the one with the highest resolution.

  • For audio and video, the sources are usually different file formats; currentSrc seems a reasonable choice again, or some prewired preference to pick a widely supported and/or well compressed format (again a possible trade-off).

  • For stylesheets, we may filter by media queries, whether in a media attribute on a <link> (to omit the whole stylesheet) or in @media at-rules inside stylesheets (to omit the subresources they affect). The next question is then which media queries to filter for: type (screen/print), window size; possibly, again, only take what is currently active.

  • For fonts, we could take only the ones currently used/loaded (how? the status attributes of fonts in document.fonts?). And we could hard-code a preference for some well compressed and/or widely supported file format.
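A few of these values are easy to read from the live DOM; a quick sketch of the currentSrc and document.fonts ideas mentioned above (not implemented, just assumptions about what we would read):

// The source the browser actually chose from the srcset:
const img = document.querySelector('img')
const srcInUse = img.currentSrc

// Fonts the browser has actually loaded, via the FontFaceSet:
const loadedFonts = Array.from(document.fonts)
  .filter(fontFace => fontFace.status === 'loaded')
  .map(fontFace => fontFace.family)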

Fails to fetch images (etc.) outside CSP

We refetch each resource (from cache, if possible) to obtain the content of e.g. images, because it seems there is no way to read it directly (if it came from the same origin, drawing it onto a canvas could help get the data). However, fetching a resource may be restricted more strictly than loading images by the page's content security policy, causing the fetch to fail. I don't know of a way to get around this, except by delegating the fetching to run elsewhere (in the WebMemex, fetching could be done in the extension's background script).

ESM support

Hi @Treora

I would like to use it without build steps in an environment that has ESM support. There are a few things preventing that right now:

  1. ES modules require the .js file extension to work. Given that the source code is authored in ESM, it should be fairly simple to update all relative imports so they have a file extension.
  2. External dependencies. Sadly there is no simple solution for that, but I would like to suggest the following workaround:
  • Replace imports like import documentOuterHTML from 'document-outerhtml' with something like
    import documentOuterHTML from '../../modules/document-outerhtml/index.js'
  • Create simple files like modules/document-outerhtml/index.js that just do module.exports = require('document-outerhtml')

No. 2 is far from ideal, but it would make it fairly easy for anyone to remap that to whatever they need, although it does incur an extra maintenance burden here.

I'll probably do one or possibly both of these things, as I'd like to make use of this code, and I'll be happy to upstream changes to either or both if that sounds reasonable.

Breaks on invalid URLs

A link such as <a href="http://"> somewhere in the document causes freeze-dry to throw an error.
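The throw comes from the URL constructor, which rejects such values; a sketch of a guard that would let freeze-dry skip unparsable links instead of failing (tryParseUrl is a hypothetical helper):

function tryParseUrl(href, baseUrl) {
  try {
    return new URL(href, baseUrl)
  } catch (err) {
    return undefined // e.g. new URL('http://') throws a TypeError
  }
}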

Allow alternative blob serializations

I would like to create a service using freeze-dry that would essentially act as an IPFS archiving proxy. I'm iterating on it here: https://glitch.com/edit/#!/clone

I'm running into a few issues:

  1. Right now freeze-dry attempts to create data URLs for everything, which is not what I would like to do: I want to save data to IPFS and use a URL that corresponds to that. I made changes along those lines in the past (Gozala@161c4fe); would you be willing to accept a pull request that applies them on top of the current master?
  2. Another issue I'm running into is that the server often times out before freeze-dry is finished. I'm starting to suspect that freezeDry fails silently (e.g. when loading https://clone.glitch.me/http://jeditoolkit.com/ I see all requests completing, but the promise never seems to resolve). Either way, I would like to find a way to avoid having to fetch all resources before being able to serve a response. Ideally the API would allow me to update all referenced URLs and give me back the updated markup without waiting to fetch them, letting me serve the HTML and handle referenced resources on demand. The basic idea is that you don't necessarily need to finish creating a bundle. I'm not exactly sure yet what such an API would look like, but the general idea would be something along these lines:
const archive = freezeDry.archive(doc, {...})
// archive maintains a map of the resources that correspond to a document,
// and .fetch either returns the pending or fulfilled request corresponding
// to the resource URL, or starts one if not initiated yet.
const page = await archive.fetch(doc.URL)
// That way the proxy server can serve requests from the archive as it's
// being built up.
const css = await archive.fetch(new URL('style.css', doc.URL))
// ....

// Completes whenever all of the resources are finished
const bundle = await archive.write({
  open: async (metadata) => new Bundler(),
  write: async (resource, bundler) => bundler.write(resource),
  close: async (bundler) => await bundler.writeToFile()
})

Handle 404s and mismatching resource types

For example, we currently happily inline an HTML 404 page as if it were the desired resource, producing e.g. <img src="data:text/html;base64,......">. We could consider alternatives, such as replacing such URLs with about:invalid.
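A sketch of such a check, assuming an async context and a hypothetical blobToDataUrl helper; the idea is to only inline responses that are OK and of the expected type:

async function subresourceUrl(url, expectedType) {
  const response = await fetch(url)
  const contentType = response.headers.get('Content-Type') || ''
  if (!response.ok || !contentType.startsWith(expectedType)) {
    return 'about:invalid' // do not inline an error page
  }
  return await blobToDataUrl(await response.blob()) // hypothetical helper
}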

Inlining corrupt stylesheets can corrupt html

We currently turn a <link rel="stylesheet" href="..."> into a <style> element with the resolved contents of that URL. If resolving results in a 404 html page for example, its content will be inserted into the document and mess it up completely. Things to look at and consider changing:

  1. Sanitise the content; set innerText rather than setting innerHTML?
  2. Put the stylesheet contents as a data URL in the link's href, instead of creating a <style> element; this was the initial approach but it was changed because of a performance problem in Firefox (could that be fixed in Firefox?).
  3. Try sending the proper Accept header when fetching the stylesheet.
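Rough sketches of options 1 and 3 above (assumptions, not current behaviour; stylesheetContent and stylesheetUrl are placeholder variables, and the fetch assumes an async context). Assigning textContent treats the content strictly as text, so a stray 404 page cannot inject elements:

// Option 1: build the <style> element with textContent, never innerHTML.
const styleElement = document.createElement('style')
styleElement.textContent = stylesheetContent

// Option 3: state up front that we only accept CSS.
const response = await fetch(stylesheetUrl, { headers: { Accept: 'text/css' } })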

Do not make within-document links absolute

Relative links pointing to a place within the same document (i.e. only containing a fragment identifier; the href starts with '#') should not be made absolute. This was the case before (see #6 and commit d8eb036), but it probably regressed in the rewrite (v0.2). We probably just have to change makeLinksAbsolute.

Handle encoding of subresources

Freeze-dry messes up if a stylesheet or framed document is encoded in UTF-16, UTF-32, or possibly other encodings. We use FileReader.readAsText to decode these resources, which by default assumes UTF-8 encoding. This assumption is adequate most of the time, but when it isn’t, the resource is effectively unreadable.

I do not know enough about the standards, but I suppose the decoder should look at the HTTP Content-Type header, the file’s byte order mark (BOM), and in-document declarations (@charset in CSS, <meta charset=…> in HTML).

This detection-and-decoding issue seems so generic that it should not have to burden this repo, but I have not yet discovered the right tool. Some options I thought of:

  • The browser’s fetch unfortunately appears not to help with decoding; its Response.text() is spec'd to "return the result of running UTF-8 decode on bytes".
  • XMLHttpRequest.responseText does seem to respect HTTP header and BOM, though I am not sure about in-document declarations. And it feels a little outdated, as I think fetch was supposed to make it obsolete; but perhaps not.
  • Some javascript module? I did not yet find anything that comes close.

Tips welcome.

Note this issue is similar to issue #29, but that one concerns the DOM that the browser has already decoded for us; this issue is about subresources we fetch.
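One building block worth noting: the TextDecoder API accepts an explicit encoding label, so once the charset has been determined (from the header, BOM, or in-document declaration; the detection itself would still be on us), decoding is straightforward. A sketch, assuming an async context and a placeholder resourceUrl:

const response = await fetch(resourceUrl)
const bytes = await response.arrayBuffer()
// 'utf-16le' stands in for whatever label the detection step produced.
const text = new TextDecoder('utf-16le').decode(bytes)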

Provide the option to get resources separately.

Instead of inlining everything, we could offer to return the resources separately. The application can then store them in whichever way it likes, and could provide the URL to replace each original URL with.

Particularly useful for deduplicating the resources, possibly using a content-addressing scheme.

Inline iframe contents

Freeze-dry could be run recursively on iframes. Iframe contents can probably be put as a string in the srcdoc attribute.

Although deprecated, it would be nice to still support <frame>s too; they don't support srcdoc though, so we should try putting contents as a data URL in the src attribute.
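A sketch of the recursive idea for same-origin iframes (freezeDryInner stands in for whatever recursive serialisation we would use; cross-origin frames have a null contentDocument and would need other handling):

for (const iframe of doc.querySelectorAll('iframe')) {
  const innerDoc = iframe.contentDocument
  if (innerDoc) {
    iframe.srcdoc = freezeDryInner(innerDoc) // hypothetical recursive call
    iframe.removeAttribute('src')
  }
}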

bug due to querySelector(All) assumptions

At least one bug is caused by using querySelectorAll and assuming it only returns HTML elements:

const linkElements = Array.from(rootElement.querySelectorAll('a, area'))
linkElements
        .filter(element => element.href.startsWith('javascript:'))

The HTML <a> and <area> elements guarantee that .href is a string (an empty string if the attribute is absent). But SVG’s <a> element does not, making these lines throw an error (discovered in the wild).

Need to check all uses of querySelector(All). Maybe we could…

  • do an instanceof check on the resulting elements;
  • or just check for the existence of the href attribute (or should we avoid interfering with unexpected namespaces?);
  • or we could select the elements in some other way, e.g. rootElement.getElementsByTagNameNS('http://www.w3.org/1999/xhtml','a').

Separately (in the scope of issue #27), we should check whether javascript: URLs should be removed from SVG’s xlinks.
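A sketch of the first option from the list above; note that an instanceof check against the global constructors assumes the elements live in the same realm as the running script:

const linkElements = Array.from(rootElement.querySelectorAll('a, area'))
  .filter(element =>
    element instanceof HTMLAnchorElement
    || element instanceof HTMLAreaElement
  )
// For these elements, .href is guaranteed to be a string.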

Allow getting the result before completion.

A slow refetch of a resource can now slow down the whole procedure. To allow more control, we could accept a timeout as an argument, and return whatever is ready at that moment. But rather, I would let the application request the result when it wants to have it, which it could possibly do multiple times (we could e.g. emit an event when the intermediate result has been updated).

Handle iframes with srcdoc

We currently do not handle the content of an <iframe srcdoc="...">. We inline the iframe's inner document, when available, into the src as a data: URL, regardless of whether it came from the src or from the srcdoc; but a viewer will use the srcdoc instead of the src when it is available, so things break.

We could choose to always remove srcdoc attributes (or always use them), or we could process srcdoc content as a subresource (except we don't even need to fetch it, and its base URL is equal to the parent document's base URL) and keep it in the srcdoc attribute. Or something in between.

Handle charset encoding declaration

The document may have a <meta charset="..."> tag in the <head>, but that declaration becomes stale as we use the parsed document and later stringify it again. I suppose we could/should delete it from the DOM when capturing it.

Vice versa, we may want to add the appropriate <meta charset="..."> tag to the snapshot; but this seems a task for the application invoking freeze-dry, as we do not know in which encoding the application will store the string.

We could thus:

  • Leave the snapshot without a charset declaration, and tell callers to add it themselves. But they won't have the parsed DOM, making this a hassle.
  • Easier, then, is to let the application pass the desired encoding tag as an option to freezeDry(...).
  • Alternatively, we could html-encode all characters so our string only contains plain ASCII, which I presume (rightly or wrongly?) removes the need for declaring the charset.
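A sketch of the second option; the charset option to freezeDry is hypothetical, but its implementation could boil down to inserting the tag into the cloned document:

// Hypothetical API: const html = await freezeDry(document, { charset: 'utf-8' })
const meta = clonedDoc.createElement('meta')
meta.setAttribute('charset', 'utf-8')
clonedDoc.head.insertBefore(meta, clonedDoc.head.firstChild)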

Allow passing custom grabber for frame contents

As explained in src/Readme:

Although we try to clone each Document living inside a frame (recursively), it may be impossible
to access these inner documents because of the browser's single origin policy. If the document
inside a frame cannot be accessed, its current state cannot be captured. ...
When freeze-dry is run from a more privileged environment, such as a browser extension, it could
work around the single origin policy. A future improvement would be to allow providing a custom
function getDocInFrame(element) to enable such workarounds.

Allow freeze-drying a document snippet

I would like to enable calling freezeDry(element), freezeDry(range), and get back a string that serialises the given Element/Range (possibly also DocumentFragment, array of elements, ...). This would be useful to enable extracting e.g. a single comment from a page, freeze-drying a selection for copy/pasting into another document, etcetera.

Most of our DOM transformations are already written to act on a given rootElement, which need not be the whole document. Hence, exposing this possibility in the API should in theory not be that hard. However, some complications will have to be considered:

  • Stylesheets outside the snippet influence its presentation, and will need to be inlined into the snippet; probably into per-element style="..." attributes, as <style scoped> never became anything.
  • Ancestor elements may influence the snippet's meaning/presentation, e.g. if the element/range is within a <b> element. Furthermore, an element may only be valid inside particular parent elements; e.g. a <tr> needs to be inside a <table>. Depending on the use case, it may or may not be desirable to retain such a <b>, and to wrap such a <tr> with a <table>.
  • As we do not return a whole document, we cannot add a content security policy in a <meta> tag; we need to be even more sure that the output is completely clean, if the snippet ought to be usable in any html document.
  • Probably more...

To do: look into how browsers copy selections to the clipboard; at least Chromium seems to make some effort to inline styles and wrap elements in order to keep the selection's presentation intact.

Upstream freeze-dry dependency doesn't work with yarn

We get this error message:

$ capstone [master ≡ +0 ~4 -0 !]> yarn
yarn install v1.10.1
[1/4] Resolving packages...
[2/4] Fetching packages...
error [email protected]: The engine "node" is incompatible with this module. Expected version "6.X.X". Got "8.9.0"
error Found incompatible module
info Visit https://yarnpkg.com/en/docs/cli/install for documentation about this command.

The workaround is to run yarn with --ignore-engines, but it's sort of a pain.

Add a "who's using freeze-dry" section to readme

Found this lib while googling for an end-user tool. Although I'm no JS/TS developer who can benefit from it, I could use a list of tools using this lib. It would also give more visibility of the library itself, apart from the depending software.
Just adding a section with a few known users would suffice, hoping for others using it to add themselves to the list with a simple PR.

Keep <noscript> when appropriate

This was previously issue #134 in webmemex-extension ("Images not in snapshots from Medium.com").

When the page was viewed with javascript disabled, we should keep <noscript> tags, and perhaps convert them into <div> tags, in order to make the snapshot correspond to what was rendered. See the corresponding comment in the source:

// If noscript content was not shown, we do not want it to show in the snapshot either. Also, we
// capture pages after scripts executed (presumably), so noscript content is likely undesired.
// TODO We should know whether noscript content was visible, and if so keep it in the doc.
// TODO Keep noscript content in fetched iframe docs, as scripts have not been executed there?
const noscripts = Array.from(doc.querySelectorAll('noscript'))
noscripts.forEach(element => element.parentNode.removeChild(element))

How to use in browser

I'm having some trouble figuring out how to use this library in the browser. I've taken a look at the way you use it in webmemex-extension, and I'm struggling to figure out a way to use freeze-dry ad-hoc through the browser console.

Any thoughts?

Add provenance metadata

It would be valuable to retain the snapshotted document's URL somewhere, as well as the time of capture, and possibly other metadata. I am not sure whether this should be a task of freeze-dry itself, or of the application invoking it.

My current disposition is towards adding <meta> tags to the snapshot's <head> to add the snapshot's URL and date. The Memento protocol has specified HTTP headers for exactly this purpose, which we could pour into meta tags as such (as discussed on memento-dev):

<meta http-equiv="Memento-Datetime" content="Wed, 30 May 2007 18:47:52 GMT">
<link rel="original" href="https://example.org/">

This feature, as well as the current practice of keeping data-original-... attributes to retain the URLs of subresources, should probably be optional.

Fix charset encoding of framed documents

Like issue #29, but for subdocuments inside frames. As remarked here:

        get blob() { return new Blob([this.string], { type: 'text/html' }) },
        get string() {
            // TODO Add <meta charset> if absent? Or html-encode characters as needed?
            return documentOuterHTML(clonedDoc)
        },

The same applies to crawl-subresources for frames whose inner document we cannot access directly.

It seems new Blob() always UTF-8-encodes given strings (MDN). I suppose we should add <meta charset="utf-8"> to the DOM before running documentOuterHTML. Alternatively, we could change the blob’s MIME type to text/html;charset=utf-8 (something we could not do for the top-level document); might that be ‘cleaner’?

Problem observed in the wild.

CSP in cloned DOM affects live page

On Firefox, after storing a page (at least if done early), scripts on the page can often not reach the web anymore. This appears to be a bug in Firefox, so I filed it there.

SVG support

Needs investigation, fixing, and tests.

Some problems already apparent:

  • an SVG's <a> element's .href property appears to not be a string, while we assume it to be.
  • an SVG can link to subresources, which we currently ignore; we need to handle these in extract-links, as well as when crawling subresources.

Stop adding <base href>, always rewrite relative URLs

Unfortunately, a <base href='...'> is also applied to relative links within a document, e.g. href="#section3". It would be nice to keep those internal links relative. Rewriting relative hrefs rather than using a base element seems the easiest (only?) solution.
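A sketch of that rewrite (not the current makeLinksAbsolute behaviour): resolve every href against the document's URL, except within-document links:

for (const element of doc.querySelectorAll('[href]')) {
  const href = element.getAttribute('href')
  if (href.startsWith('#')) continue // keep within-document links relative
  element.setAttribute('href', new URL(href, doc.URL).href)
}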

Is it possible to use freeze-dry from server?

Hi! Thank you for this awesome library!

I'm building a simple website archival API (currently it just submits URLs to selected archive sites) and I'd love to add freeze-dry to it. I'm relatively new to the JavaScript world though, so I'm a bit lost on how to approach this.

I understand freeze-dry runs in the browser context (?), so something like Playwright will be needed to do this, which is what I've been trialling.

I tried to modify and run the Playwright tests in the 'customisation' branch as a hacky starting point, and I'm currently stuck on this error when running npm run test:

page.evaluate: ReferenceError: freezeDry is not defined

  > 17 |   const html = await page.evaluate('freezeDry(document, { now: new Date(1534615340948) })')
       |                      ^
    18 |   console.log(html)

Deal with <canvas> elements

I just noticed these lines in pagearchive:

if (el.tagName == 'CANVAS') {
  return '<IMG SRC="' + htmlQuote(el.toDataURL('image/png')) + '">';
}
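A present-day equivalent as a sketch; it must run against the live document, since a cloned <canvas> is blank, and toDataURL throws on canvases tainted by cross-origin content:

for (const canvas of document.querySelectorAll('canvas')) {
  const img = document.createElement('img')
  img.src = canvas.toDataURL('image/png')
  img.width = canvas.width
  img.height = canvas.height
  canvas.replaceWith(img)
}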

Make idempotent.

Freeze-drying an already freeze-dried page would ideally not have any effect. Not sure if that's the case now.

@reficul31: it may be nice to add a test for this in the integration tests: one that takes the output (snapshot) and applies freezeDry to it again.

Ongoing development

I'm a little worried to see that the WebMemex projects haven't been worked on in quite some time. I know they're both considered stable, but when it comes to browser-related projects, things have a tendency to change at a pretty fast pace, to where a few years makes a very big difference.

I've been looking at freeze-dry as an alternative to using SingleFile's CLI, since it lets you save assets without converting them to embedded base64 data, which is good for making continuous backups. But this project hasn't been touched in a year and the browser extension in three years, so I'm not sure whether to invest the time in adapting my backup scripting to use freeze-dry instead or not.

Sorry to be a nag, but it would be very helpful to know. Thank you.
