The COCO-Stuff dataset

Holger Caesar, Jasper Uijlings, Vittorio Ferrari

COCO-Stuff example annotations

Welcome to the official homepage of the COCO-Stuff [1] dataset. COCO-Stuff augments all 164K images of the popular COCO [2] dataset with pixel-level stuff annotations. These annotations can be used for scene understanding tasks like semantic segmentation, object detection and image captioning.

Overview

Highlights

  • 164K complex images from COCO [2]
  • Dense pixel-level annotations
  • 80 thing classes, 91 stuff classes and 1 class 'unlabeled'
  • Instance-level annotations for things from COCO [2]
  • Complex spatial context between stuff and things
  • 5 captions per image from COCO [2]

Explore COCO-Stuff

You can now use Scale's Nucleus platform to explore the COCO-Stuff dataset. The platform allows you to search for images with specific annotations or using textual image descriptions. You can also identify potential annotation errors by comparing the predictions of an object detector to the ground-truth of COCO-Stuff. Registration is free! You can learn more at the Scale homepage link above.

COCO-Stuff on Nucleus

Research Paper

COCO-Stuff: Thing and Stuff Classes in Context
H. Caesar, J. Uijlings, V. Ferrari,
In Computer Vision and Pattern Recognition (CVPR), 2018.
[paper][bibtex]

Versions of COCO-Stuff

  • COCO-Stuff dataset: The final version of COCO-Stuff, which is presented on this page. It includes all 164K images from COCO 2017 (train 118K, val 5K, test-dev 20K, test-challenge 20K). It covers 172 classes: 80 thing classes, 91 stuff classes and 1 class 'unlabeled'. This dataset will form the basis of all upcoming challenges.
  • COCO 2017 Stuff Segmentation Challenge: A semantic segmentation challenge on 55K images (train 40K, val 5K, test-dev 5K, test-challenge 5K) of COCO. To focus on stuff, we merged all 80 thing classes into a single class 'other'. The results of the challenge were presented at the Joint COCO and Places Recognition Workshop at ICCV 2017.
  • COCO-Stuff 10K dataset: Our first dataset, annotated by 10 in-house annotators at the University of Edinburgh. It includes 10K images from the training set of COCO. We provide a 9K/1K (train/val) split to make results comparable. The dataset includes 80 thing classes, 91 stuff classes and 1 class 'unlabeled'. It was initially presented with 91 thing classes, but has since been changed to 80 thing classes, as 11 of those classes do not have any segmentation annotations in COCO. This dataset is a subset of all other releases.

Downloads

| Filename | Description | Size |
|---|---|---|
| train2017.zip | COCO 2017 train images (118K images) | 18 GB |
| val2017.zip | COCO 2017 val images (5K images) | 1 GB |
| stuffthingmaps_trainval2017.zip | Stuff+thing PNG-style annotations on COCO 2017 trainval | 659 MB |
| stuff_trainval2017.zip | Stuff-only COCO-style annotations on COCO 2017 trainval | 543 MB |
| annotations_trainval2017.zip | Thing-only COCO-style annotations on COCO 2017 trainval | 241 MB |
| labels.md | Indices, names, previews and descriptions of the classes in COCO-Stuff | <10 KB |
| labels.txt | Machine-readable version of the label list | <10 KB |
| README.md | This readme | <10 KB |

To use this dataset you will need to download the images (18+1 GB!) and annotations of the trainval sets. To download earlier versions of this dataset, please visit the COCO 2017 Stuff Segmentation Challenge or COCO-Stuff 10K.

Caffe-compatible stuff-thing maps

We suggest using the stuffthingmaps, as they provide all stuff and thing labels in a single .png file per image. Note that the .png files are indexed images, which means they store only the label indices and are typically displayed as grayscale images. To be compatible with most Caffe-based semantic segmentation methods, thing+stuff labels cover indices 0-181 and 255 indicates the 'unlabeled' or void class.
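
As an illustration, here is a minimal sketch of reading one of these indexed .png files and inspecting its label indices. The file path is a placeholder that assumes the folder structure from the Setup section below, and Pillow and NumPy are assumed to be installed:

import numpy as np
from PIL import Image

# Load an indexed stuffthingmap; pixel values are label indices,
# not intensities, so do not convert the image to RGB or grayscale.
labelmap = Image.open('dataset/annotations/val2017/000000000139.png')
labels = np.array(labelmap)

# Thing+stuff labels cover indices 0-181; 255 marks 'unlabeled' (void).
present = np.unique(labels)
assert all(v <= 181 or v == 255 for v in present)
print('Label indices in this image:', present)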

Separate stuff and thing downloads

Alternatively you can download separate files for stuff and thing annotations in COCO format, which are compatible with the COCO-Stuff API. Note that the stuff annotations contain a class 'other' with index 183 that covers all non-stuff pixels.
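
As a hedged sketch of loading these COCO-format files with the COCO API (pycocotools; the JSON filename inside stuff_trainval2017.zip is an assumption):

from pycocotools.coco import COCO

# Load the stuff annotations for the val split (filename is an assumption).
coco = COCO('stuff_val2017.json')

# Convert the annotations of the first image into binary masks.
img_id = coco.getImgIds()[0]
for ann in coco.loadAnns(coco.getAnnIds(imgIds=img_id)):
    mask = coco.annToMask(ann)  # HxW array of 0/1
    name = coco.loadCats(ann['category_id'])[0]['name']
    print(name, int(mask.sum()), 'pixels')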

Setup

Use the following instructions to download the COCO-Stuff dataset and set up the folder structure. The instructions are for Ubuntu and require git, wget and unzip. On other operating systems the commands may differ:

# Get this repo
git clone https://github.com/nightrome/cocostuff.git
cd cocostuff

# Download everything
wget --directory-prefix=downloads http://images.cocodataset.org/zips/train2017.zip
wget --directory-prefix=downloads http://images.cocodataset.org/zips/val2017.zip
wget --directory-prefix=downloads http://calvin.inf.ed.ac.uk/wp-content/uploads/data/cocostuffdataset/stuffthingmaps_trainval2017.zip

# Unpack everything
mkdir -p dataset/images
mkdir -p dataset/annotations
unzip downloads/train2017.zip -d dataset/images/
unzip downloads/val2017.zip -d dataset/images/
unzip downloads/stuffthingmaps_trainval2017.zip -d dataset/annotations/
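
A quick sanity check that every downloaded image has a matching annotation map (a minimal sketch; assumes the zips unpack into train2017 and val2017 subfolders as above):

import os

for split in ('train2017', 'val2017'):
    images = {f[:-4] for f in os.listdir(os.path.join('dataset/images', split)) if f.endswith('.jpg')}
    annots = {f[:-4] for f in os.listdir(os.path.join('dataset/annotations', split)) if f.endswith('.png')}
    print(split, ':', len(images), 'images,', len(images - annots), 'without annotations')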

Results

Below we present results on different releases of COCO-Stuff. If you would like to see your results here, please contact the first author.

Results on the val set of COCO-Stuff:

| Method | Source | Class accuracy | Pixel accuracy | Mean IOU | FW IOU |
|---|---|---|---|---|---|
| Deeplab VGG-16 (no CRF) [4] | [1] | 45.1% | 63.6% | 33.2% | 47.6% |

Note that the results on the 10K dataset and on the full dataset are not directly comparable, as different train and val images are used. Furthermore, on the full dataset we train Deeplab for 100K iterations [1], compared to 20K iterations on the 10K dataset [1b].

Results on the val set of the COCO 2017 Stuff Segmentation Challenge:

We show results on the val set of the challenge. Please refer to the official leaderboard for results on the test-dev and test-challenge sets. Note that these results are not comparable to other COCO-Stuff results, as the challenge only includes a single thing class 'other'.

| Method | Source | Class accuracy | Pixel accuracy | Mean IOU | FW IOU |
|---|---|---|---|---|---|
| Inplace-ABN sync [8] | - | - | - | 24.9% | - |

Results on the val set of COCO-Stuff 10K:

| Method | Source | Class accuracy | Pixel accuracy | Mean IOU | FW IOU |
|---|---|---|---|---|---|
| FCN-16s [3] | [1b] | 34.0% | 52.0% | 22.7% | - |
| Deeplab VGG-16 (no CRF) [4] | [1b] | 38.1% | 57.8% | 26.9% | - |
| FCN-8s [3] | [6] | 38.5% | 60.4% | 27.2% | - |
| SCA VGG-16 [7] | [7] | 42.5% | 61.6% | 29.1% | - |
| DAG-RNN + CRF [6] | [6] | 42.8% | 63.0% | 31.2% | - |
| DC + FCN+ [5] | [5] | 44.6% | 65.5% | 33.6% | 50.6% |
| Deeplab ResNet (no CRF) [4] | - | 45.5% | 65.1% | 34.4% | 50.4% |
| CCL ResNet-101 [10] | [10] | 48.8% | 66.3% | 35.7% | - |
| DSSPN ResNet finetune [9] | [9] | 48.1% | 69.4% | 37.3% | - |
| * OHE + DC + FCN+ [5] | [5] | 45.8% | 66.6% | 34.3% | 51.2% |
| * W2V + DC + FCN+ [5] | [5] | 45.1% | 66.1% | 34.7% | 51.0% |
| * DSSPN ResNet universal [9] | [9] | 50.3% | 70.7% | 38.9% | - |

* Results not comparable as they use external data

Labels

Label Names & Indices

To be compatible with COCO, COCO-Stuff has 91 thing classes (1-91), 91 stuff classes (92-182) and 1 class "unlabeled" (0). Note that 11 of the thing classes of COCO do not have any segmentation annotations (blender, desk, door, eye glasses, hair brush, hat, mirror, plate, shoe, street sign, window). The classes desk, door, mirror and window could be either stuff or things and therefore occur in both COCO and COCO-Stuff. To avoid confusion we add the suffix "-stuff" or "-other" to those classes in COCO-Stuff. The full list of classes and their descriptions can be found here.
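
For programmatic use, here is a small sketch that parses labels.txt into an index-to-name mapping (a minimal sketch, assuming each line of labels.txt has the form 'index: name'):

# Parse labels.txt into {index: name}; 0 is 'unlabeled',
# 1-91 are thing classes and 92-182 are stuff classes.
id2label = {}
with open('labels.txt') as f:
    for line in f:
        idx, name = line.strip().split(': ', 1)
        id2label[int(idx)] = name

print(id2label[1])    # 'person'
print(id2label[157])  # 'sky-other'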

Label Hierarchy

This figure shows the label hierarchy of COCO-Stuff including all stuff and thing classes: COCO-Stuff label hierarchy

Semantic Segmentation Models (stuff+things)

PyTorch model

We recommend this third-party re-implementation of Deeplab v2 in PyTorch. Unlike our Caffe model, it supports ResNet and CRFs. The authors provide setup routines and models for COCO-Stuff 164K. Please file any issues or questions on the project's GitHub page.

Caffe model

Here we provide the Caffe-based segmentation model used in the COCO-Stuff paper. However, for users not familiar with Caffe we recommend the PyTorch model above. Before using the semantic segmentation model, please set up the dataset. The commands below download and install Deeplab (incl. Caffe), download or train the model and predictions, and evaluate the performance. The results should be the same as in the table. Due to several issues, we do not provide the Deeplab ResNet101 model, but some code for it can be found in this folder.

# Get and install Deeplab (you may need to change settings)
# We use a special version of Deeplab v2 that supports CuDNN v5, but others may work as well.
git submodule update --init models/deeplab/deeplab-v2
cd models/deeplab/deeplab-v2
cp Makefile.config.example Makefile.config
make all -j8

# Create symbolic links to the images and annotations
cd models/deeplab/cocostuff/data && ln -s ../../../../dataset/images images && ln -s ../../../../dataset/annotations annotations && cd ../../../..

# Option 1: Download the initial model
# wget --directory-prefix=models/deeplab/cocostuff/model/deeplabv2_vgg16 http://calvin.inf.ed.ac.uk/wp-content/uploads/data/cocostuffdataset/deeplabv2_vgg16_init.caffemodel

# Option 2: Download the trained model
# wget --directory-prefix=downloads http://calvin.inf.ed.ac.uk/wp-content/uploads/data/cocostuffdataset/deeplab_cocostuff_trainedmodel.zip
# unzip downloads/deeplab_cocostuff_trainedmodel.zip -d models/deeplab/cocostuff/model/deeplabv2_vgg16/model120kimages/

# Option 3: Run training & test
# cd models/deeplab && ./run_cocostuff_vgg16.sh && cd ../..

# Option 4 (fastest): Download predictions
wget --directory-prefix=downloads http://calvin.inf.ed.ac.uk/wp-content/uploads/data/cocostuffdataset/deeplab_predictions_cocostuff_val2017.zip
unzip downloads/deeplab_predictions_cocostuff_val2017.zip -d models/deeplab/cocostuff/features/deeplabv2_vgg16/model120kimages/val/fc8/

# Evaluate performance
python models/deeplab/evaluate_performance.py

The table below summarizes the files used in these instructions:

| Filename | Description | Size |
|---|---|---|
| deeplabv2_vgg16_init.caffemodel | Deeplab VGG-16 pretrained model (original link) | 152 MB |
| deeplab_cocostuff_trainedmodel.zip | Deeplab VGG-16 trained on COCO-Stuff | 286 MB |
| deeplab_predictions_cocostuff_val2017.zip | Deeplab VGG-16 predictions on COCO-Stuff | 54 MB |

Note that the Deeplab predictions need to be rotated and cropped, as shown in this script.
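
The exact transformation is defined in the linked script; the snippet below is only a hypothetical illustration of that kind of post-processing, where the rotation direction and padding convention are assumptions rather than the script's actual parameters:

import numpy as np

def postprocess_prediction(pred, height, width):
    # Hypothetical sketch: undo an assumed 90-degree rotation of the raw
    # network output and crop assumed bottom/right padding back to the
    # original image size.
    pred = np.rot90(pred, k=-1)
    return pred[:height, :width]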

Annotation Tool

For the Matlab annotation tool used to annotate the initial 10K images, please refer to this repository.

Misc

References

Licensing

COCO-Stuff is a derivative work of the COCO dataset. The authors of COCO do not in any form endorse this work. Different licenses apply:

  • COCO images: Flickr Terms of use
  • COCO annotations: Creative Commons Attribution 4.0 License
  • COCO-Stuff annotations & code: Creative Commons Attribution 4.0 License

Acknowledgements

This work is supported by the ERC Starting Grant VisCul. The annotations were done by the crowdsourcing startup Mighty AI, and financed by Mighty AI and the Common Visual Data Foundation.

Contact

If you have any questions regarding this dataset, please contact us at holger-at-it-caesar.com.

Contributors

nightrome, pfmark


Issues

test sets available?

This might be a stupid question - in the Versions of COCO-Stuff section, it says "it includes all 164K images from COCO 2017 (train 118K, val 5K, test-dev 20K, test-challenge 20K)". However, I only see the train and val sets available for download in the Downloads section. Is either of the test sets available?

COCO stuff 2017 version for downloading

Hi, could you please share the link to the COCO-Stuff annotations for the COCO 2017 Stuff Segmentation Task? That is, the version with train 40K, val 5K, test-dev 5K, test-challenge 5K. I've searched the internet, but I only found the version with train 118K, val 5K, test-dev 20K, test-challenge 20K.
Thanks.

About the stuff categories annotation.

panoptic_semseg_train2017/000000000247.png

I want to know why the grayscale value of the sky is 119, while, as you mentioned in this issue, sky-other in the labels should be 157, or 146 (157 - 11, since some classes have been removed)?

I am confused about how to build the mapping between classes and the grayscale values in the .png files.

Could you provide the visualization palette?

Hi, I am running experiments on COCO-Stuff but find that you do not provide a palette for visualization.

Could you share an array named cocostuff_pallete and a function like _get_cocostuff_pallete, similar to the code below?

from PIL import Image

def get_mask_pallete(npimg, dataset='detail'):
    """Get an image color palette for visualizing masks."""
    # recover boundary
    if dataset == 'pascal_voc':
        npimg[npimg == 21] = 255
    # put colormap
    out_img = Image.fromarray(npimg.squeeze().astype('uint8'))
    if dataset == 'ade20k':
        out_img.putpalette(adepallete)
    elif dataset == 'cityscapes':
        out_img.putpalette(citypallete)  # citypallete not shown in this snippet
    else:
        out_img.putpalette(vocpallete)
    return out_img


def _get_voc_pallete(num_cls):
    # Build the standard PASCAL VOC colormap: each class index is mapped
    # to an RGB triple by bit-interleaving the bits of its label.
    n = num_cls
    pallete = [0] * (n * 3)
    for j in range(0, n):
        lab = j
        pallete[j * 3 + 0] = 0
        pallete[j * 3 + 1] = 0
        pallete[j * 3 + 2] = 0
        i = 0
        while lab > 0:
            pallete[j * 3 + 0] |= (((lab >> 0) & 1) << (7 - i))
            pallete[j * 3 + 1] |= (((lab >> 1) & 1) << (7 - i))
            pallete[j * 3 + 2] |= (((lab >> 2) & 1) << (7 - i))
            i = i + 1
            lab >>= 3
    return pallete


vocpallete = _get_voc_pallete(256)

adepallete = [0,0,0,120,120,120,180,120,120,6,230,230,80,50,50,4,200,3,120,120,80,140,140,140,204,5,255,230,230,230,4,250,7,224,5,255,235,255,7,150,5,61,120,120,70,8,255,51,255,6,82,143,255,140,204,255,4,255,51,7,204,70,3,0,102,200,61,230,250,255,6,51,11,102,255,255,7,71,255,9,224,9,7,230,220,220,220,255,9,92,112,9,255,8,255,214,7,255,224,255,184,6,10,255,71,255,41,10,7,255,255,224,255,8,102,8,255,255,61,6,255,194,7,255,122,8,0,255,20,255,8,41,255,5,153,6,51,255,235,12,255,160,150,20,0,163,255,140,140,140,250,10,15,20,255,0,31,255,0,255,31,0,255,224,0,153,255,0,0,0,255,255,71,0,0,235,255,0,173,255,31,0,255,11,200,200,255,82,0,0,255,245,0,61,255,0,255,112,0,255,133,255,0,0,255,163,0,255,102,0,194,255,0,0,143,255,51,255,0,0,82,255,0,255,41,0,255,173,10,0,255,173,255,0,0,255,153,255,92,0,255,0,255,255,0,245,255,0,102,255,173,0,255,0,20,255,184,184,0,31,255,0,255,61,0,71,255,255,0,204,0,255,194,0,255,82,0,10,255,0,112,255,51,0,255,0,194,255,0,122,255,0,255,163,255,153,0,0,255,10,255,112,0,143,255,0,82,0,255,163,255,0,255,235,0,8,184,170,133,0,255,0,255,92,184,0,255,255,0,31,0,184,255,0,214,255,255,0,112,92,255,0,0,224,255,112,224,255,70,184,160,163,0,255,153,0,255,71,255,0,255,0,163,255,204,0,255,0,143,0,255,235,133,255,0,255,0,235,245,0,255,255,0,122,255,245,0,10,190,212,214,255,0,0,204,255,20,0,255,255,255,0,0,153,255,0,41,255,0,255,204,41,0,255,41,255,0,173,0,255,0,245,255,71,0,255,122,0,255,0,255,184,0,92,255,184,255,0,0,133,255,255,214,0,25,194,194,102,255,0,92,0,255]



Question about confusion matrix indices

I trained a semantic segmentation model using "stuffthingmaps_trainval2017.zip"
(Stuff+thing PNG-style annotations on COCO 2017 trainval )

In this case,
thing+stuff labels cover indices 0-181 and 255 indicates the 'unlabeled' or void class.

I think the line below
https://github.com/nightrome/cocostuff/blob/master/models/deeplab/evaluate_performance.py#L98
confusion[g - 1, d - 1] += c
(this is for the json-format annotations; COCO-style annotations (json files) cover indices 1-182)

should be changed to
confusion[g, d] += c

since g and d can be 0.

This modification does not change the performance on the leaf categories.

However, if I add a superclass metric to evaluate_performance.py based on the cocostuffapi, this modification gives me very different (much higher) values for superclass performance.

Am I missing something?
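
For reference, a minimal sketch of the 0-based accumulation proposed above, assuming PNG-style label maps with indices 0-181 and 255 as void:

import numpy as np

NUM_CLASSES = 182
confusion = np.zeros((NUM_CLASSES, NUM_CLASSES), dtype=np.int64)

def accumulate(confusion, gt, pred):
    # Accumulate ground-truth/prediction pairs; PNG-style labels are
    # already 0-based (0-181), so no off-by-one shift is applied.
    valid = gt != 255  # ignore the void class
    np.add.at(confusion, (gt[valid].astype(np.int64), pred[valid].astype(np.int64)), 1)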

Label values shifted by 1

Hi!
I'm using the annotation values, and it seems like all the annotations are shifted by 1.
For example:
The person class is labeled with value 0 (but should be 1 according to your mapping list).
The skis class is labeled with value 34 (but should be 35 according to your mapping list).
The snow class is labeled with value 158 (but should be 159 according to your mapping list).

Note - I'm reading the files in Python. Maybe it has something to do with the fact that the annotation platform is written in Matlab?

Do you have any idea about the root cause of this mismatch?
Thanks in advance.

Link to the current updated mapping list:
https://github.com/nightrome/cocostuff/blob/master/labels.md
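
For what it's worth, this matches the documented convention: the stuffthingmaps PNGs are 0-based (indices 0-181, with 255 as void), while labels.md lists the COCO-style 1-based indices. A minimal sketch of converting between the two:

import numpy as np

def png_to_coco_ids(labelmap):
    # Map 0-based PNG values (0-181, 255 = void) to the COCO-style
    # indices used in labels.md (1-182, 0 = 'unlabeled').
    out = labelmap.astype(np.int32) + 1
    out[labelmap == 255] = 0
    return out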

Questions about the license

Hi, I am working for a start-up and we are training a segmentation model based on COCO-stuff dataset.

We are not re-distributing any of the COCO images, and we are simply using the images and annotations for training. Below is what I can find regarding licenses, but I am not sure whether using the models commercially would breach any of the licenses below.

COCO-Stuff is a derivative work of the COCO dataset. The authors of COCO do not in any form endorse this work. Different licenses apply:

COCO images: [Flickr Terms of use](http://cocodataset.org/#termsofuse)
COCO annotations: [Creative Commons Attribution 4.0 License](http://cocodataset.org/#termsofuse)
COCO-Stuff annotations & code: [Creative Commons Attribution 4.0 License](https://creativecommons.org/licenses/by/4.0/legalcode)

Do annotations have the instance-wise bounding boxes?

Thank you, authors, for the great work. As I've examined the annotations, it seems like the bounding boxes are provided per category rather than per instance. For example, there are 2 windows in the image but only a single box covering both windows. Am I correct?
If so, do you have instance-wise annotations?
Many thanks beforehand!

data link died

annotations_trainval2017.zip | Thing-only COCO-style annotations on COCO 2017 trainval | 241 MB

This cannot be downloaded:


<Error>
<Code>UserProjectAccountProblem</Code>
<Message>User project billing account not in good standing.</Message>
<Details>
The billing account for project 81941577218 is disabled in state delinquent
</Details>
</Error>

Image titles mismatch

How do I know the correspondence between the training images and annotations, if all the images have a unique title?

Annotation link not working

When I curl it, I get:

<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>301 Moved Permanently</title>
</head><body>
<h1>Moved Permanently</h1>
<p>The document has moved <a href="https://calvin.inf.ed.ac.uk/wp-content/uploads/data/cocostuffdataset/stuffthingmaps_trainval2017.zip">here</a>.</p>
</body></html>

Which is to say it has moved to where it currently is supposed to be located.

Noun Annotations

Hi,

My supervisors and I are currently working on a paper analysing the COCO dataset. As part of this, we need to identify nouns within the COCO captions as "things" or "stuff". In Section 4.1 of your COCO-Stuff paper, you mention that you underwent a similar process, tagging the nouns by hand. I was wondering if you would be able to share this data with us, to save us having to undertake a similar venture. We would of course credit your work through appropriate citations.

Many thanks

Label matching, 182 or 183 labels?

Hi there,

I'm trying to use your model and migrate it to TensorFlow using the caffe-tensorflow repository and your caffemodel and prototxt files. The problem I face now is that the Deeplab VGG-16 model trained on COCO-Stuff that you offer in this repository only outputs 182 different labels, but as I understand it should return 183 (91 for COCO, 91 for stuff and 1 for unlabeled).

Please let me know if I'm missing something,
thank you in advance

What is the range of the labels?

Hi,

Since 11 classes are removed, does it mean that the labels range from 0-170 plus 255? Or does it mean that the labels still range from 0-181, and we need to map them to 0-170 manually?

Undefined label index in COCO stuff

Hi Caesar,

Question for you -- I'm looking at the COCO stuff annotation for 000000351710.png:


I see in the label definitions that 0 represents unlabeled, but I can't find a definition for 255. What does the 255 intensity value represent?

array([[180, 180, 180, 180,  95, 156, 156, 156, 156, 156, 156, 156, 156],
       [ 95,  95, 180,  95,  95,  95, 156, 156, 156, 156, 156, 156, 156],
       [180, 180, 180, 180,  95,  95,  95,  95, 156, 156, 156, 156, 156],
       [ 95,  95,  95,  95,  95,  95,  95,  95, 168, 168, 168, 150, 150],
       [ 95,   0,  95,  95,  95,  95,  95,  95, 168, 168,  95,  95,  95],
       [ 96,   0,   3,   2, 139, 139, 139, 148, 148,   3, 148, 148, 148],
       [141, 141,   3,   3, 255, 255, 255, 255, 148, 255, 255, 255, 148],
       [148, 148, 148, 148, 255, 255, 255, 255, 255, 255, 255, 255, 148],
       [148, 148, 148, 148, 255, 255, 255, 255, 255, 255, 255, 255, 141],
       [148, 148, 148, 148, 148, 255, 255, 148, 148, 255, 255, 141, 141]],
      dtype=uint8)

(cc'ing @liuzhuang13)

Which stuff classes are included in the COCO Stuff challenge

Hello,

Sorry if this has been answered, but I am having a bit of difficulty figuring out which stuff classes (as specifically as possible) were included in the COCO 2017 Stuff challenge. I imagine the original 80 thing classes (from the 91, with some removed) are included, but which of the remaining 171 classes were included in the Stuff challenge?

Thanks

Stuff dataset

I only want a stuff dataset. How can I separate the outdoor stuff classes from this dataset? Is there a dataset that only includes the outdoor stuff classes? Thank you!

How to make a COCO-Stuff dataset from a COCO json file?

Hi,

I am trying to train a Hybrid Task Cascade net (HTC), and the mask branch requires the COCO-Stuff dataset format. I am wondering if it is possible to convert a COCO json file into a COCO-Stuff dataset?

Old version of COCO-stuff 2017

Hello,

I'm reproducing some results from recent papers, such as LDMs and OC-GAN, and I found that most of them conduct experiments on the old version of COCO-Stuff 2017.
In that old version of COCO-Stuff, the number of training/testing images with detection annotations is about 41K/5K. Is it possible to get this version of COCO-Stuff?

Thank you in advance!

Checking labels from the png maps

@nightrome Hello there!

I'm trying to check the class-number-wise labeled maps but couldn't find anything like that.
The image below is an example from PASCAL VOC.

Doesn't cocostuff have this kind of labeled map?
I opened the images of stuffthingmaps_trainval2017 with numpy, and it seems like they only have values of (0, 255), which only represent brightness.
