
vision_transformer's Introduction

Vision Transformer and MLP-Mixer Architectures

In this repository we release models from the papers listed in the Bibtex section below (Vision Transformer, MLP-Mixer, and related follow-up work).

The models were pre-trained on the ImageNet and ImageNet-21k datasets. We provide the code for fine-tuning the released models in JAX/Flax.

The models from this codebase were originally trained in https://github.com/google-research/big_vision/, where you can find more advanced code (e.g. multi-host training), as well as some of the original training scripts (e.g. configs/vit_i21k.py for pre-training a ViT, or configs/transfer.py for transferring a model).

Colab

The Colabs below run with both GPUs and TPUs (8 cores, data parallelism).

The first Colab demonstrates the JAX code for Vision Transformers and MLP-Mixer. It allows you to edit the files from the repository directly in the Colab UI, includes annotated cells that walk you through the code step by step, and lets you interact with the data.

https://colab.research.google.com/github/google-research/vision_transformer/blob/main/vit_jax.ipynb

The second Colab allows you to explore the >50k Vision Transformer and hybrid checkpoints that were used to generate the data for the third paper, "How to train your ViT? ...". The Colab includes code to explore and select checkpoints, and to run inference using the JAX code from this repo as well as the popular timm PyTorch library, which can load these checkpoints directly. Note that a handful of models are also available directly from TF-Hub: sayakpaul/collections/vision_transformer (external contribution by Sayak Paul).

The second Colab also lets you fine-tune the checkpoints on any tfds dataset, or on your own dataset of individual JPEG files (optionally reading directly from Google Drive).

https://colab.research.google.com/github/google-research/vision_transformer/blob/main/vit_jax_augreg.ipynb

Note: as of 6/20/21, Google Colab only supports a single GPU (Nvidia Tesla T4), and TPUs (currently TPUv2-8) are attached indirectly to the Colab VM and communicate over a slow network, which leads to rather poor training speed. You would usually want to set up a dedicated machine if you have a non-trivial amount of data to fine-tune on. For details see the Running on cloud section below.

Installation

Make sure you have Python>=3.10 installed on your machine.

Install JAX and python dependencies by running:

# If using GPU:
pip install -r vit_jax/requirements.txt

# If using TPU:
pip install -r vit_jax/requirements-tpu.txt

For newer versions of JAX, follow the instructions provided in the corresponding repository linked here. Note that the installation instructions differ slightly for CPU, GPU and TPU.
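As a rough, non-authoritative sketch of what a manual JAX upgrade looks like (the exact package extras and wheel index are defined by the JAX installation guide and may change; treat the lines below as assumptions to verify there):

# GPU (CUDA) -- check the JAX installation guide for the extras name matching your CUDA version:
pip install --upgrade "jax[cuda12]"

# TPU:
pip install --upgrade "jax[tpu]" -f https://storage.googleapis.com/jax-releases/libtpu_releases.html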

To install Flaxformer, follow the instructions provided in the corresponding repository linked here.

For more details refer to the section Running on cloud below.

Fine-tuning a model

You can run fine-tuning of the downloaded model on your dataset of interest. All models share the same command line interface.

For example, to fine-tune a ViT-B/16 (pre-trained on imagenet21k) on CIFAR-10 (note how we specify b16,cifar10 as arguments to the config, and how we instruct the code to access the models directly from a GCS bucket instead of first downloading them into a local directory):

python -m vit_jax.main --workdir=/tmp/vit-$(date +%s) \
    --config=$(pwd)/vit_jax/configs/vit.py:b16,cifar10 \
    --config.pretrained_dir='gs://vit_models/imagenet21k'

In order to fine-tune a Mixer-B/16 (pre-trained on imagenet21k) on CIFAR10:

python -m vit_jax.main --workdir=/tmp/vit-$(date +%s) \
    --config=$(pwd)/vit_jax/configs/mixer_base16_cifar10.py \
    --config.pretrained_dir='gs://mixer_models/imagenet21k'

The "How to train your ViT? ..." paper added >50k checkpoints that you can fine-tune with the configs/augreg.py config. When you only specify the model name (the config.name value from configs/model.py), then the best i21k checkpoint by upstream validation accuracy ("recommended" checkpoint, see section 4.5 of the paper) is chosen. To make up your mind which model you want to use, have a look at Figure 3 in the paper. It's also possible to choose a different checkpoint (see Colab vit_jax_augreg.ipynb) and then specify the value from the filename or adapt_filename column, which correspond to the filenames without .npz from the gs://vit_models/augreg directory.

python -m vit_jax.main --workdir=/tmp/vit-$(date +%s) \
    --config=$(pwd)/vit_jax/configs/augreg.py:R_Ti_16 \
    --config.dataset=oxford_iiit_pet \
    --config.base_lr=0.01
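To pick a specific checkpoint instead of the recommended one, the paragraph above says to pass the filename (without .npz) from the gs://vit_models/augreg directory as the config argument. A hedged sketch, reusing one of the pre-trained filenames from the checkpoint table further below:

python -m vit_jax.main --workdir=/tmp/vit-$(date +%s) \
    --config=$(pwd)/vit_jax/configs/augreg.py:B_16-i21k-300ep-lr_0.001-aug_medium1-wd_0.1-do_0.0-sd_0.0 \
    --config.dataset=oxford_iiit_pet \
    --config.base_lr=0.01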

Currently, the code will automatically download the CIFAR-10 and CIFAR-100 datasets. Other public or custom datasets can be easily integrated using the tensorflow_datasets library. Note that you will also need to update vit_jax/input_pipeline.py to specify some parameters for any added dataset (see the sketch below).
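For illustration only, here is a hypothetical preset entry for a custom dataset, modeled on the existing presets in vit_jax/input_pipeline.py (the same format is quoted in an issue further below); the exact fields required by the current code may differ:

# Hypothetical entry for a dataset 'my_dataset' registered with tensorflow_datasets.
'my_dataset': {
    'train': 'train[:98%]',   # tfds split used for fine-tuning
    'test': 'test',           # tfds split used for evaluation
    'resize': 512,            # resize images to this size ...
    'crop': 384,              # ... then crop to the training resolution
    'total_steps': 10_000,    # default number of fine-tuning steps
},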

Note that our code uses all available GPUs/TPUs for fine-tuning.

To see a detailed list of all available flags, run python3 -m vit_jax.train --help.

Notes on memory:

  • Different models require different amounts of memory. Available memory also depends on the accelerator configuration (both type and count). If you encounter an out-of-memory error, you can increase the value of --config.accum_steps=8; alternatively, you can decrease --config.batch=512 (and decrease --config.base_lr accordingly). See the example command after this list.
  • The host keeps a shuffle buffer in memory. If you encounter a host OOM (as opposed to an accelerator OOM), you can decrease the default --config.shuffle_buffer=50000.
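As a concrete sketch, here is the CIFAR-10 fine-tuning command from above with the memory-related overrides spelled out (the particular values are illustrative only; pick what fits your accelerators):

python -m vit_jax.main --workdir=/tmp/vit-$(date +%s) \
    --config=$(pwd)/vit_jax/configs/vit.py:b16,cifar10 \
    --config.pretrained_dir='gs://vit_models/imagenet21k' \
    --config.accum_steps=16 \
    --config.batch=256 --config.base_lr=0.015 \
    --config.shuffle_buffer=25000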

Vision Transformer

by Alexey Dosovitskiy*†, Lucas Beyer*, Alexander Kolesnikov*, Dirk Weissenborn*, Xiaohua Zhai*, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit and Neil Houlsby*†.

(*) equal technical contribution, (†) equal advising.

Figure 1 from paper

Overview of the model: we split an image into fixed-size patches, linearly embed each of them, add position embeddings, and feed the resulting sequence of vectors to a standard Transformer encoder. In order to perform classification, we use the standard approach of adding an extra learnable "classification token" to the sequence.
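For intuition, here is a minimal, self-contained sketch of the patch-embedding step described above (this is not the repository's implementation, which uses a strided convolution; the shapes and the zero-initialized class token are assumptions for illustration):

import jax.numpy as jnp

def embed_patches(images, proj, pos_emb, patch=16):
    """images: [B, H, W, C]; proj: [patch*patch*C, D]; pos_emb: [1, N+1, D]."""
    b, h, w, c = images.shape
    # Split the image into non-overlapping patch x patch tiles and flatten each tile.
    x = images.reshape(b, h // patch, patch, w // patch, patch, c)
    x = x.transpose(0, 1, 3, 2, 4, 5).reshape(b, -1, patch * patch * c)
    x = x @ proj                            # linear patch embedding
    cls = jnp.zeros((b, 1, proj.shape[1]))  # extra learnable "classification token"
    x = jnp.concatenate([cls, x], axis=1)   # prepend it to the sequence
    return x + pos_emb                      # add position embeddings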

Available ViT models

We provide a variety of ViT models in different GCS buckets. The models can be downloaded with e.g.:

wget https://storage.googleapis.com/vit_models/imagenet21k/ViT-B_16.npz

The model filenames (without the .npz extension) correspond to config.model_name in vit_jax/configs/models.py.
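A downloaded checkpoint can be inspected directly with numpy before fine-tuning (a small sketch; the parameter names you will see depend on the model):

import numpy as np

params = np.load('ViT-B_16.npz')
for name in sorted(params.files)[:5]:
    print(name, params[name].shape)  # e.g. embedding kernel, position embeddings, head weights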

We recommend using the following checkpoints, trained with AugReg, which have the best pre-training metrics:

| Model | Pre-trained checkpoint | Size | Fine-tuned checkpoint | Resolution | Img/sec | ImageNet accuracy |
| --- | --- | --- | --- | --- | --- | --- |
| L/16 | gs://vit_models/augreg/L_16-i21k-300ep-lr_0.001-aug_strong1-wd_0.1-do_0.0-sd_0.0.npz | 1243 MiB | gs://vit_models/augreg/L_16-i21k-300ep-lr_0.001-aug_strong1-wd_0.1-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.01-res_384.npz | 384 | 50 | 85.59% |
| B/16 | gs://vit_models/augreg/B_16-i21k-300ep-lr_0.001-aug_medium1-wd_0.1-do_0.0-sd_0.0.npz | 391 MiB | gs://vit_models/augreg/B_16-i21k-300ep-lr_0.001-aug_medium1-wd_0.1-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.03-res_384.npz | 384 | 138 | 85.49% |
| S/16 | gs://vit_models/augreg/S_16-i21k-300ep-lr_0.001-aug_light1-wd_0.03-do_0.0-sd_0.0.npz | 115 MiB | gs://vit_models/augreg/S_16-i21k-300ep-lr_0.001-aug_light1-wd_0.03-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.03-res_384.npz | 384 | 300 | 83.73% |
| R50+L/32 | gs://vit_models/augreg/R50_L_32-i21k-300ep-lr_0.001-aug_medium1-wd_0.1-do_0.1-sd_0.1.npz | 1337 MiB | gs://vit_models/augreg/R50_L_32-i21k-300ep-lr_0.001-aug_medium1-wd_0.1-do_0.1-sd_0.1--imagenet2012-steps_20k-lr_0.01-res_384.npz | 384 | 327 | 85.99% |
| R26+S/32 | gs://vit_models/augreg/R26_S_32-i21k-300ep-lr_0.001-aug_light1-wd_0.1-do_0.0-sd_0.0.npz | 170 MiB | gs://vit_models/augreg/R26_S_32-i21k-300ep-lr_0.001-aug_light1-wd_0.1-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.01-res_384.npz | 384 | 560 | 83.85% |
| Ti/16 | gs://vit_models/augreg/Ti_16-i21k-300ep-lr_0.001-aug_none-wd_0.03-do_0.0-sd_0.0.npz | 37 MiB | gs://vit_models/augreg/Ti_16-i21k-300ep-lr_0.001-aug_none-wd_0.03-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.03-res_384.npz | 384 | 610 | 78.22% |
| B/32 | gs://vit_models/augreg/B_32-i21k-300ep-lr_0.001-aug_light1-wd_0.1-do_0.0-sd_0.0.npz | 398 MiB | gs://vit_models/augreg/B_32-i21k-300ep-lr_0.001-aug_light1-wd_0.1-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.01-res_384.npz | 384 | 955 | 83.59% |
| S/32 | gs://vit_models/augreg/S_32-i21k-300ep-lr_0.001-aug_none-wd_0.1-do_0.0-sd_0.0.npz | 118 MiB | gs://vit_models/augreg/S_32-i21k-300ep-lr_0.001-aug_none-wd_0.1-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.01-res_384.npz | 384 | 2154 | 79.58% |
| R+Ti/16 | gs://vit_models/augreg/R_Ti_16-i21k-300ep-lr_0.001-aug_none-wd_0.03-do_0.0-sd_0.0.npz | 40 MiB | gs://vit_models/augreg/R_Ti_16-i21k-300ep-lr_0.001-aug_none-wd_0.03-do_0.0-sd_0.0--imagenet2012-steps_20k-lr_0.03-res_384.npz | 384 | 2426 | 75.40% |

The results from the original ViT paper (https://arxiv.org/abs/2010.11929) have been replicated using the models from gs://vit_models/imagenet21k:

| model | dataset | dropout=0.0 | dropout=0.1 |
| --- | --- | --- | --- |
| R50+ViT-B_16 | cifar10 | 98.72%, 3.9h (A100), tb.dev | 98.94%, 10.1h (V100), tb.dev |
| R50+ViT-B_16 | cifar100 | 90.88%, 4.1h (A100), tb.dev | 92.30%, 10.1h (V100), tb.dev |
| R50+ViT-B_16 | imagenet2012 | 83.72%, 9.9h (A100), tb.dev | 85.08%, 24.2h (V100), tb.dev |
| ViT-B_16 | cifar10 | 99.02%, 2.2h (A100), tb.dev | 98.76%, 7.8h (V100), tb.dev |
| ViT-B_16 | cifar100 | 92.06%, 2.2h (A100), tb.dev | 91.92%, 7.8h (V100), tb.dev |
| ViT-B_16 | imagenet2012 | 84.53%, 6.5h (A100), tb.dev | 84.12%, 19.3h (V100), tb.dev |
| ViT-B_32 | cifar10 | 98.88%, 0.8h (A100), tb.dev | 98.75%, 1.8h (V100), tb.dev |
| ViT-B_32 | cifar100 | 92.31%, 0.8h (A100), tb.dev | 92.05%, 1.8h (V100), tb.dev |
| ViT-B_32 | imagenet2012 | 81.66%, 3.3h (A100), tb.dev | 81.31%, 4.9h (V100), tb.dev |
| ViT-L_16 | cifar10 | 99.13%, 6.9h (A100), tb.dev | 99.14%, 24.7h (V100), tb.dev |
| ViT-L_16 | cifar100 | 92.91%, 7.1h (A100), tb.dev | 93.22%, 24.4h (V100), tb.dev |
| ViT-L_16 | imagenet2012 | 84.47%, 16.8h (A100), tb.dev | 85.05%, 59.7h (V100), tb.dev |
| ViT-L_32 | cifar10 | 99.06%, 1.9h (A100), tb.dev | 99.09%, 6.1h (V100), tb.dev |
| ViT-L_32 | cifar100 | 93.29%, 1.9h (A100), tb.dev | 93.34%, 6.2h (V100), tb.dev |
| ViT-L_32 | imagenet2012 | 81.89%, 7.5h (A100), tb.dev | 81.13%, 15.0h (V100), tb.dev |

We would also like to emphasize that high-quality results can be achieved with shorter training schedules, and we encourage users of our code to play with hyper-parameters to trade off accuracy and computational budget. Some examples for the CIFAR-10/100 datasets are presented in the table below, followed by a sketch of how such a schedule could be requested.

| upstream | model | dataset | total_steps / warmup_steps | accuracy | wall-clock time | link |
| --- | --- | --- | --- | --- | --- | --- |
| imagenet21k | ViT-B_16 | cifar10 | 500 / 50 | 98.59% | 17m | tensorboard.dev |
| imagenet21k | ViT-B_16 | cifar10 | 1000 / 100 | 98.86% | 39m | tensorboard.dev |
| imagenet21k | ViT-B_16 | cifar100 | 500 / 50 | 89.17% | 17m | tensorboard.dev |
| imagenet21k | ViT-B_16 | cifar100 | 1000 / 100 | 91.15% | 39m | tensorboard.dev |
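Assuming the config exposes the step counts shown in the table as overridable fields (total_steps / warmup_steps), a shorter schedule could be requested roughly as follows; treat this as a sketch and check the flag names against the config files in your checkout:

python -m vit_jax.main --workdir=/tmp/vit-$(date +%s) \
    --config=$(pwd)/vit_jax/configs/vit.py:b16,cifar10 \
    --config.pretrained_dir='gs://vit_models/imagenet21k' \
    --config.total_steps=1000 --config.warmup_steps=100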

MLP-Mixer

by Ilya Tolstikhin*, Neil Houlsby*, Alexander Kolesnikov*, Lucas Beyer*, Xiaohua Zhai, Thomas Unterthiner, Jessica Yung, Andreas Steiner, Daniel Keysers, Jakob Uszkoreit, Mario Lucic, Alexey Dosovitskiy.

(*) equal contribution.

Figure 1 from paper

MLP-Mixer (Mixer for short) consists of per-patch linear embeddings, Mixer layers, and a classifier head. Mixer layers contain one token-mixing MLP and one channel-mixing MLP, each consisting of two fully-connected layers and a GELU nonlinearity. Other components include: skip-connections, dropout, and linear classifier head.
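A minimal sketch of a single Mixer layer as described above (pre-norm, token-mixing then channel-mixing MLP, each wrapped in a skip connection); this is illustrative only, not the repository's Flax implementation, and is written without a batch dimension for brevity:

import jax.numpy as jnp
from jax import nn

def mlp(x, w1, b1, w2, b2):
    # Two fully-connected layers with a GELU nonlinearity in between.
    return nn.gelu(x @ w1 + b1) @ w2 + b2

def layer_norm(x, eps=1e-6):
    return (x - x.mean(-1, keepdims=True)) / jnp.sqrt(x.var(-1, keepdims=True) + eps)

def mixer_layer(x, p):
    # x: [num_patches, channels]; p holds (w1, b1, w2, b2) for each of the two MLPs.
    x = x + mlp(layer_norm(x).T, *p['token_mixing']).T  # token-mixing MLP: acts across patches
    x = x + mlp(layer_norm(x), *p['channel_mixing'])    # channel-mixing MLP: acts across channels
    return x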

For installation follow the same steps as above.

Available Mixer models

We provide the Mixer-B/16 and Mixer-L/16 models pre-trained on the ImageNet and ImageNet-21k datasets. Details can be found in Table 3 of the Mixer paper. All the models can be found at:

https://console.cloud.google.com/storage/mixer_models/

Note that these models are also available directly from TF-Hub: sayakpaul/collections/mlp-mixer (external contribution by Sayak Paul).

Expected Mixer results

We ran the fine-tuning code on a Google Cloud machine with four V100 GPUs and the default adaptation parameters from this repository. Here are the results:

| upstream | model | dataset | accuracy | wall_clock_time | link |
| --- | --- | --- | --- | --- | --- |
| ImageNet | Mixer-B/16 | cifar10 | 96.72% | 3.0h | tensorboard.dev |
| ImageNet | Mixer-L/16 | cifar10 | 96.59% | 3.0h | tensorboard.dev |
| ImageNet-21k | Mixer-B/16 | cifar10 | 96.82% | 9.6h | tensorboard.dev |
| ImageNet-21k | Mixer-L/16 | cifar10 | 98.34% | 10.0h | tensorboard.dev |

LiT models

For details, refer to the Google AI blog post LiT: adding language understanding to image models, or read the CVPR paper "LiT: Zero-Shot Transfer with Locked-image text Tuning" (https://arxiv.org/abs/2111.07991).

We published a Transformer B/16-base model with an ImageNet zero-shot accuracy of 72.1%, and an L/16-large model with an ImageNet zero-shot accuracy of 75.7%. For more details about these models, please refer to the LiT model card.

We provide an in-browser demo with small text encoders for interactive use (the smallest models should even run on a modern cell phone):

https://google-research.github.io/vision_transformer/lit/

And finally a Colab to use the JAX models with both image and text encoders:

https://colab.research.google.com/github/google-research/vision_transformer/blob/main/lit.ipynb

Note that none of the above models support multi-lingual inputs yet, but we're working on publishing such models and will update this repository once they become available.

This repository only contains evaluation code for LiT models. You can find the training code in the big_vision repository:

https://github.com/google-research/big_vision/tree/main/big_vision/configs/proj/image_text

Expected zero-shot results from model_cards/lit.md (note that the zero-shot evaluation is slightly different from the simplified evaluation in the Colab):

| Model | B16B_2 | L16L |
| --- | --- | --- |
| ImageNet zero-shot | 73.9% | 75.7% |
| ImageNet v2 zero-shot | 65.1% | 66.6% |
| CIFAR100 zero-shot | 79.0% | 80.5% |
| Pets37 zero-shot | 83.3% | 83.3% |
| Resisc45 zero-shot | 25.3% | 25.6% |
| MS-COCO Captions image-to-text retrieval | 51.6% | 48.5% |
| MS-COCO Captions text-to-image retrieval | 31.8% | 31.1% |
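Conceptually, once the image and text towers have produced embeddings, the zero-shot classification above reduces to a similarity lookup. A small sketch under the assumption of precomputed, L2-normalized embeddings (this is not the Colab's or the evaluation pipeline's actual code):

import numpy as np

def zero_shot_labels(image_embs, text_embs):
    """image_embs: [N, D], text_embs: [K, D]; rows assumed L2-normalized."""
    similarities = image_embs @ text_embs.T  # cosine similarities
    return similarities.argmax(axis=-1)      # index of the best-matching class prompt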

Running on cloud

While the above Colabs are quite useful for getting started, you would usually want to train on a larger machine with more powerful accelerators.

Create a VM

You can use the following commands to set up a VM with GPUs on Google Cloud:

# Set variables used by all commands below.
# Note that project must have accounting set up.
# For a list of zones with GPUs refer to
# https://cloud.google.com/compute/docs/gpus/gpu-regions-zones
PROJECT=my-awesome-gcp-project  # Project must have billing enabled.
VM_NAME=vit-jax-vm-gpu
ZONE=europe-west4-b

# Below settings have been tested with this repository. You can choose other
# combinations of images & machines; to list the options, refer to the corresponding gcloud commands:
# gcloud compute images list --project ml-images
# gcloud compute machine-types list
# etc.
gcloud compute instances create $VM_NAME \
    --project=$PROJECT --zone=$ZONE \
    --image=c1-deeplearning-tf-2-5-cu110-v20210527-debian-10 \
    --image-project=ml-images --machine-type=n1-standard-96 \
    --scopes=cloud-platform,storage-full --boot-disk-size=256GB \
    --boot-disk-type=pd-ssd --metadata=install-nvidia-driver=True \
    --maintenance-policy=TERMINATE \
    --accelerator=type=nvidia-tesla-v100,count=8

# Connect to VM (after some minutes needed to setup & start the machine).
gcloud compute ssh --project $PROJECT --zone $ZONE $VM_NAME

# Stop the VM after use (only storage is billed for a stopped VM).
gcloud compute instances stop --project $PROJECT --zone $ZONE $VM_NAME

# Delete VM after use (this will also remove all data stored on VM).
gcloud compute instances delete --project $PROJECT --zone $ZONE $VM_NAME

Alternatively, you can use the following similar commands to set up a Cloud VM with TPUs attached (the commands below were copied from the TPU tutorial):

PROJECT=my-awesome-gcp-project  # Project must have billing enabled.
VM_NAME=vit-jax-vm-tpu
ZONE=europe-west4-a

# Required to set up service identity initially.
gcloud beta services identity create --service tpu.googleapis.com

# Create a VM with TPUs directly attached to it.
gcloud alpha compute tpus tpu-vm create $VM_NAME \
    --project=$PROJECT --zone=$ZONE \
    --accelerator-type v3-8 \
    --version tpu-vm-base

# Connect to VM (after some minutes needed to setup & start the machine).
gcloud alpha compute tpus tpu-vm ssh --project $PROJECT --zone $ZONE $VM_NAME

# Stop the VM after use (only storage is billed for a stopped VM).
gcloud alpha compute tpus tpu-vm stop --project $PROJECT --zone $ZONE $VM_NAME

# Delete VM after use (this will also remove all data stored on VM).
gcloud alpha compute tpus tpu-vm delete --project $PROJECT --zone $ZONE $VM_NAME

Setup VM

Then fetch the repository and install the dependencies (including jaxlib with TPU support) as usual:

git clone --depth=1 --branch=master https://github.com/google-research/vision_transformer
cd vision_transformer

# optional: install virtualenv
pip3 install virtualenv
python3 -m virtualenv env
. env/bin/activate

If you're connected to a VM with GPUs attached, install JAX and other dependencies with the following command:

pip install -r vit_jax/requirements.txt

If you're connected to a VM with TPUs attached, install JAX and other dependencies with the following command:

pip install -r vit_jax/requirements-tpu.txt

To install Flaxformer, follow the instructions provided in the corresponding repository linked here.

For both GPUs and TPUs, check that JAX can connect to the attached accelerators with the command:

python -c 'import jax; print(jax.devices())'

Finally, execute one of the commands from the Fine-tuning a model section above.

Bibtex

@article{dosovitskiy2020vit,
  title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
  author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and  Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
  journal={ICLR},
  year={2021}
}

@article{tolstikhin2021mixer,
  title={MLP-Mixer: An all-MLP Architecture for Vision},
  author={Tolstikhin, Ilya and Houlsby, Neil and Kolesnikov, Alexander and Beyer, Lucas and Zhai, Xiaohua and Unterthiner, Thomas and Yung, Jessica and Steiner, Andreas and Keysers, Daniel and Uszkoreit, Jakob and Lucic, Mario and Dosovitskiy, Alexey},
  journal={arXiv preprint arXiv:2105.01601},
  year={2021}
}

@article{steiner2021augreg,
  title={How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers},
  author={Steiner, Andreas and Kolesnikov, Alexander and Zhai, Xiaohua and Wightman, Ross and Uszkoreit, Jakob and Beyer, Lucas},
  journal={arXiv preprint arXiv:2106.10270},
  year={2021}
}

@article{chen2021outperform,
  title={When Vision Transformers Outperform ResNets without Pretraining or Strong Data Augmentations},
  author={Chen, Xiangning and Hsieh, Cho-Jui and Gong, Boqing},
  journal={arXiv preprint arXiv:2106.01548},
  year={2021},
}

@article{zhuang2022gsam,
  title={Surrogate Gap Minimization Improves Sharpness-Aware Training},
  author={Zhuang, Juntang and Gong, Boqing and Yuan, Liangzhe and Cui, Yin and Adam, Hartwig and Dvornek, Nicha and Tatikonda, Sekhar and Duncan, James and Liu, Ting},
  journal={ICLR},
  year={2022},
}

@article{zhai2022lit,
  title={LiT: Zero-Shot Transfer with Locked-image Text Tuning},
  author={Zhai, Xiaohua and Wang, Xiao and Mustafa, Basil and Steiner, Andreas and Keysers, Daniel and Kolesnikov, Alexander and Beyer, Lucas},
  journal={CVPR},
  year={2022}
}

Changelog

In reverse chronological order:

  • 2022-08-18: Added the LiT-B16B_2 model, which was trained for 60k steps (LiT_B16B: 30k) without a linear head on the image side (LiT_B16B: 768) and has better performance.

  • 2022-06-09: Added the ViT and Mixer models trained from scratch using GSAM on ImageNet without strong data augmentations. The resulting ViTs outperform those of similar sizes trained using the AdamW optimizer or the original SAM algorithm, or with strong data augmentations.

  • 2022-04-14: Added models and Colab for LiT models.

  • 2021-07-29: Added ViT-B/8 AugReg models (3 upstream checkpoints and adaptations with resolution=224).

  • 2021-07-02: Added the "When Vision Transformers Outperform ResNets..." paper

  • 2021-07-02: Added SAM (Sharpness-Aware Minimization) optimized ViT and MLP-Mixer checkpoints.

  • 2021-06-20: Added the "How to train your ViT? ..." paper, and a new Colab to explore the >50k pre-trained and fine-tuned checkpoints mentioned in the paper.

  • 2021-06-18: This repository was rewritten to use Flax Linen API and ml_collections.ConfigDict for configuration.

  • 2021-05-19: With publication of the "How to train your ViT? ..." paper, we added more than 50k ViT and hybrid models pre-trained on ImageNet and ImageNet-21k with various degrees of data augmentation and model regularization, and fine-tuned on ImageNet, Pets37, Kitti-distance, CIFAR-100, and Resisc45. Check out vit_jax_augreg.ipynb to navigate this treasure trove of models! For example, you can use that Colab to fetch the filenames of recommended pre-trained and fine-tuned checkpoints from the i21k_300 column of Table 3 in the paper.

  • 2020-12-01: Added the R50+ViT-B/16 hybrid model (ViT-B/16 on top of a Resnet-50 backbone). When pretrained on imagenet21k, this model achieves almost the performance of the L/16 model with less than half the computational finetuning cost. Note that "R50" is somewhat modified for the B/16 variant: The original ResNet-50 has [3,4,6,3] blocks, each reducing the resolution of the image by a factor of two. In combination with the ResNet stem this would result in a reduction of 32x so even with a patch size of (1,1) the ViT-B/16 variant cannot be realized anymore. For this reason we instead use [3,4,9] blocks for the R50+B/16 variant.

  • 2020-11-09: Added the ViT-L/16 model.

  • 2020-10-29: Added ViT-B/16 and ViT-L/16 models pretrained on ImageNet-21k and then fine-tuned on ImageNet at 224x224 resolution (instead of default 384x384). These models have the suffix "-224" in their name. They are expected to achieve 81.2% and 82.7% top-1 accuracies respectively.

Disclaimers

Open source release prepared by Andreas Steiner.

Note: This repository was forked and modified from google-research/big_transfer.

This is not an official Google product.


vision_transformer's Issues

Hyper-parameters of ViT-B/16 training from scratch

First, thanks for sharing the code and the pretrained models.
I have a question related to #2

As you replied,

Note that for the published checkpoints we pretrained on imagenet21k (see README), using 102.4M examples for training.

but imagenet21k only has 14M images, right? May I know what the 102.4M examples you referred to are?

Thanks.

TypeError in 2nd cell in section "Load dataset"


TypeError Traceback (most recent call last)
in ()
3 num_classes = input_pipeline.get_dataset_info(
4 dataset,
----> 5 'train'
6 )[1]['num_classes']
7 # tf.data.Datset for training, infinite repeats.
TypeError: get_dataset_info() takes 1 positional argument but 2 were given

current code:

num_classes = input_pipeline.get_dataset_info(dataset, 'train')['num_classes']
ds_train = input_pipeline.get_data(
    dataset=dataset, mode='train', repeats=None, batch_size=batch_size,
)
ds_test = input_pipeline.get_data(
    dataset=dataset, mode='test', repeats=1, batch_size=batch_size,
)

suggested code:

num_classes = input_pipeline.get_dataset_info(dataset=dataset, split='train')[1]['num_classes']
ds_train = input_pipeline.get_data(
    dataset=dataset, mode='train', repeats=None, batch_size=batch_size,
)
ds_test = input_pipeline.get_data(
    dataset=dataset, mode='test', repeats=1, batch_size=batch_size,
)


AttributeError Traceback (most recent call last)
in ()
8 init_params=params,
9 model_config=models.CONFIGS[model],
---> 10 logger=logger
11 )
2 frames
/content/vision_transformer/vit_jax/checkpoint.py in _flatten_dict(d, parent_key, sep)
29 """Flattens a dictionary, keeping empty leaves."""
30 items = []
---> 31 for k, v in d.items():
32 path = parent_key + sep + k if parent_key else k
33 if isinstance(v, collections.MutableMapping):
AttributeError: 'tuple' object has no attribute 'items'

Suggestions for image generation

I removed the classification head and am trying to use this repo for image generation, but I get really bad results. All images look patchy and have very low quality. I played with the number of heads, number of layers, LR, etc., but it didn't really matter.

What would be the most sensible approach to generating images with the encoder part? I don't use pretraining; could that be the cause of the bad results?

Non-finetuned ViT-B_16 has final layer weights set to 0

Hi,

I am trying to use the non-finetuned ViT-B_16 and it seems the final layer weights are all set to 0:

import numpy as np
params = np.load('imagenet21k_ViT-B_16.npz')
keys, values = zip(*list(params.items()))
>>> values[keys.index('head/kernel')]
array([[0., 0., 0., ..., 0., 0., 0.],
       [0., 0., 0., ..., 0., 0., 0.],
       [0., 0., 0., ..., 0., 0., 0.],
       ...,
       [0., 0., 0., ..., 0., 0., 0.],
       [0., 0., 0., ..., 0., 0., 0.],
       [0., 0., 0., ..., 0., 0., 0.]], dtype=float32)
>>> values[keys.index('head/bias')]
array([0., 0., 0., ..., 0., 0., 0.], dtype=float32)

The other non-finetuned model weights do not seem to have this problem. Could this checkpoint please be updated?

Thank you,
Rohan

Resize size in data augmentation

Hi,

I have been reviewing the paper of BiT and ViT.

It seems that BiT "resizes larger images to $448 \times 448$ and then crops to $384 \times 384$", while in this repo, apparently, it "resizes larger images to $512 \times 512$ and then crops to $384 \times 384$" (though in the default setting an inception crop is used instead). Can I ask how you arrived at the hyperparameter of 512?

    'cifar10': {
        'train': 'train[:98%]',
        'test': 'test',
        'resize': 512,
        'crop': 384,
        'total_steps': 10_000,
    },
    'cifar100': {
        'train': 'train[:98%]',
        'test': 'test',
        'resize': 512,
        'crop': 384,
        'total_steps': 10_000,
    },
    'imagenet2012': {
        'train': 'train[:99%]',
        'test': 'validation',
        'resize': 512,
        'crop': 384,
        'total_steps': 20_000,
    },

Pretrained weights and position embedding at 224x224

I've been fiddling with the models a bit, massaging the weights into my PyTorch impl. I have the base 384x384 models working well, but after generating the params for 224x224, the resulting output has low validation accuracy (it was in the 76s top-1 when I killed it).

Is that expected? Is the position-embedding interpolation from the 24x24 grid only intended as a starting point for transfer learning and not expected to provide good results as is? By comparison, the base 384x384 16x16-patch model validates at 84.2 and the 384x384 32x32-patch model at 81.7.

When I run the command "python3 -m vit_jax.train --name ViT-B_16-cifar10_`date +%F_%H%M%S` --model ViT-B_16 --logdir /tmp/vit_logs --dataset cifar10", I encounter the error "Error executing an HTTP request: libcurl code 6 meaning 'Couldn't resolve host name', error details: Couldn't resolve host 'metadata'".

This is the detailed error:
(VIT) llb@raypc:~/codes/vision_transformer$ python3 -m vit_jax.train --name ViT-B_16-cifar10_date +%F_%H%M%S --model ViT-B_16 --logdir /tmp/vit_logs --dataset cifar10
2020-12-05 09:46:07,527 [INFO] vit_jax.log: Namespace(accum_steps=8, base_lr=0.03, batch=512, batch_eval=512, copy_to=None, dataset='cifar10', decay_type='cosine', eval_every=100, grad_norm_clip=1, logdir='/tmp/vit_logs', mixup_alpha=0, model='ViT-B_16', name='ViT-B_16-cifar10_2020-12-05_094606', optim_dtype='bfloat16', output=None, prefetch=2, progress_every=10, shuffle_buffer=200000, tfds_data_dir=None, tfds_manual_dir=None, total_steps=None, vit_pretrained_dir='.', warmup_steps=500)
2020-12-05 09:46:08,183 [INFO] vit_jax.log: Available devices: [GpuDevice(id=0), GpuDevice(id=1), GpuDevice(id=2), GpuDevice(id=3)]
2020-12-05 09:46:08.185488: W tensorflow/core/platform/cloud/google_auth_provider.cc:184] All attempts to get a Google authentication bearer token failed, returning an empty token. Retrieving token from files failed with "Not found: Could not locate the credentials file.". Retrieving token from GCE failed with "Failed precondition: Error executing an HTTP request: libcurl code 6 meaning 'Couldn't resolve host name', error details: Couldn't resolve host 'metadata'".
How can I solve this problem? Thanks!

Visualization of position embeddings

Thanks for your great work!

Could you please share the code used for the visualization of position embeddings in Figure 7 (center) and Figure 9?

Many thanks!

The problem of transfer to remote-sensing images domain

Thanks for your excellent work! I want to follow your work in remote sensing.
My dataset is similar to DOTA, and the task is multi-label classification.
But when I used ViT to fine-tune on my dataset, I got an F2 score of 72%, while with ResNet-50 I can reach 88%.
The difference is really large. Can you tell me how to make full use of ViT? Should I train it from scratch on remote-sensing images?
Have you ever considered the transfer problem when the source domain and target domain are not similar?

Hyper Params & loss function for ViT-L/16

Hello!
I was trying to pretrain the ViT-L/16 architecture on imagenet-21k, though there are a few aspects I'm unsure of and hope you can clarify.

  1. What were the hyperparameters used for pretraining? The paper mentions that the model was trained for either 30 or 90 epochs, with a weight decay of 0.1 stated at the start but 0.03 in the appendix, hence the uncertainty.

  2. Is the data augmentation and model initialization the same as what's used in the released jax code?

  3. What loss function was used, considering that imagenet-21k and JFT-300M are multi-label datasets?

Thanks!

The keyword argument 'dropout_rate' in class 'Encoder1DBlock' of models.

When I ran models_test.py, there was an error: "TypeError: _ModuleMeta object got multiple values for keyword argument 'dropout_rate'". If I remove the keyword argument 'dropout_rate' (line 132 in vit_jax/models.py), the test passes successfully.

So, should the keyword argument 'dropout_rate' (line 132 in vit_jax/models.py) be removed?
I look forward to your response. Thank you very much!

Can't perform inference

I'm running the following code snippet to perform inference on my local computer. I downloaded imagenet21k_ViT-B_16.npz and placed it in the models/ directory.

import numpy as np
from vit_jax import checkpoint
from vit_jax import models

model = 'ViT-B_16'
VisionTransformer = models.KNOWN_MODELS[model].partial(num_classes=1000)
params = checkpoint.load('models/imagenet21k_ViT-B_16.npz')
params['pre_logits'] = {} 
img = np.zeros((1, 384, 384, 3))
VisionTransformer.call(params, img)

When I run this, I get the following error:

ValueError                                Traceback (most recent call last)
<ipython-input-2-5880fe0789e1> in <module>
     13 img = np.zeros((1, 384, 384, 3))
     14 
---> 15 VisionTransformer.call(params, img)

/usr/local/lib/python3.6/dist-packages/flax/nn/base.py in wrapper(class_, *args, **kwargs)
    218       def wrapper(class_, *args, **kwargs):
    219         super_fn = getattr(super(cls, class_), name)
--> 220         return super_fn(*args, **kwargs)
    221       wrapper.__doc__ = f'''{orig_fn.__doc__}
    222 

/usr/local/lib/python3.6/dist-packages/flax/nn/base.py in wrapper(class_, *args, **kwargs)
    218       def wrapper(class_, *args, **kwargs):
    219         super_fn = getattr(super(cls, class_), name)
--> 220         return super_fn(*args, **kwargs)
    221       wrapper.__doc__ = f'''{orig_fn.__doc__}
    222 

/usr/local/lib/python3.6/dist-packages/flax/nn/base.py in wrapper(class_, *args, **kwargs)
    218       def wrapper(class_, *args, **kwargs):
    219         super_fn = getattr(super(cls, class_), name)
--> 220         return super_fn(*args, **kwargs)
    221       wrapper.__doc__ = f'''{orig_fn.__doc__}
    222 

/usr/local/lib/python3.6/dist-packages/flax/nn/base.py in call(cls, params, name, *args, **kwargs)
    536                          transparent=cls._is_transparent())
    537     with cls._with_instance(frame) as instance:
--> 538       y = instance.apply(*args, **kwargs)
    539       _track_outputs(y)
    540     return y

~/Desktop/PCA_based_defenses/vision_transformer/vit_jax/models.py in apply(self, x, num_classes, train, resnet, patches, hidden_size, transformer, representation_size, classifier)
    254         x = jnp.concatenate([cls, x], axis=1)
    255 
--> 256       x = Encoder(x, train=train, name='Transformer', **transformer)
    257 
    258     if classifier == 'token':

/usr/local/lib/python3.6/dist-packages/flax/nn/base.py in __new__(cls, name, *args, **kwargs)
    275                          transparent=cls._is_transparent())
    276     with cls._with_instance(frame) as instance:
--> 277       y = instance.apply(*args, **apply_kwargs)
    278       _track_outputs(y)
    279     return y

~/Desktop/PCA_based_defenses/vision_transformer/vit_jax/models.py in apply(self, inputs, num_layers, mlp_dim, inputs_positions, dropout_rate, train, **attention_kwargs)
    178         inputs_positions=inputs_positions,
    179         posemb_init=nn.initializers.normal(stddev=0.02),  # from BERT.
--> 180         name='posembed_input')
    181     x = nn.dropout(x, rate=dropout_rate, deterministic=not train)
    182 

/usr/local/lib/python3.6/dist-packages/flax/nn/base.py in __new__(cls, name, *args, **kwargs)
    275                          transparent=cls._is_transparent())
    276     with cls._with_instance(frame) as instance:
--> 277       y = instance.apply(*args, **apply_kwargs)
    278       _track_outputs(y)
    279     return y

~/Desktop/PCA_based_defenses/vision_transformer/vit_jax/models.py in apply(self, inputs, inputs_positions, posemb_init)
     51                               ' but it is: %d' % inputs.ndim)
     52     pos_emb_shape = (1, inputs.shape[1], inputs.shape[2])
---> 53     pe = self.param('pos_embedding', pos_emb_shape, posemb_init)
     54     if inputs_positions is None:
     55       # Normal unpacked case:

/usr/local/lib/python3.6/dist-packages/flax/nn/base.py in param(self, name, shape, initializer)
    565       raise ValueError(
    566           'Existing shape {} differs from requested shape {}'.format(
--> 567               param.shape, shape))
    568     return param
    569 

ValueError: Existing shape (1, 197, 768) differs from requested shape (1, 577, 768)

What am I doing wrong?

Visualizing Attention Maps

Thank you for the release of the code for your paper.

I was curious whether you could also share the code that produced Fig. 6 and Fig. 13 of the paper, i.e. the attention maps.

About accuracy of training from scratch on ImageNet-1k

Hi,

Thanks for your great work! I have several questions about reproducing the ImageNet-1k experiments:

  1. I am wondering what the top-1 accuracy of ViT-B/32 is (without fine-tuning at 384x384 resolution) when training from scratch on ImageNet-1k. In the paper, Table 5 shows that the fine-tuning accuracy at 384x384 resolution is 73.38.

  2. Do you observe an overfitting phenomenon in this setting? Or do you observe a much higher training top-1 accuracy (e.g., >90%) than validation? If so, do you use some additional regularization methods (except for weight decay and dropout) to mitigate it?

  3. Which loss function do you use in this setting, softmax cross-entropy or sigmoid cross-entropy?

  4. Why use MLP as a classifier instead of a single linear layer in pre-training? Does this improve the final accuracy or alleviate over-fitting?

Pretrain weights of ResNet

Thank you again for this great work,

I just noticed that you have achieved great results with ResNet. Is there any plan to release those weights (pretrained on ImageNet-21k)?

ViT-H

Hi,
Will the ViT-Huge models be released?

Edit: pretrained on imagenet21k

TypeError: tuple indices must be integers or slices, not str

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-13-a1516126008a> in <module>()
      1 # Note the datasets are configured in input_pipeline.DATASET_PRESETS
      2 # Have a look in the editor at the right.
----> 3 num_classes = input_pipeline.get_dataset_info(dataset, 'train')['num_classes']
      4 # tf.data.Datset for training, infinite repeats.
      5 ds_train = input_pipeline.get_data(

TypeError: tuple indices must be integers or slices, not str

Further Details

Dear all,
Thank you so much for this great work.
I would like to know some more information regarding this great project.

If I understand correctly, you trained on ImageNet-21k (which has more than 21k classes, each with more than 200 images); in total I think it occupies around 4 TB of memory.

Did you initially train the model with TPUs?
How many TPUs, and for how long? 8 TPUs for 8 hours?

When you fine-tuned, did you use 8 V100 GPUs for 2 hours?

How many people worked on this project, just to get a general idea? Counting the credits it is 20 people in total, for 2 months, correct?

Thanks again for providing this information.
Best regards

ValueError: Unknown split "train". Should be one of []

Hi,

Thank you for such a great work!

I was trying to run your code, but I have encountered the following error:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/path/vit/vit_jax/input_pipeline.py", line 59, in get_dataset_info
    num_examples = data_builder.info.splits[split].num_examples
  File "/env/lib/python3.7/site-packages/tensorflow_datasets/core/splits.py", line 177, in __getitem__
    instruction=key,
  File "/env/lib/python3.7/site-packages/tensorflow_datasets/core/tfrecords_reader.py", line 99, in make_file_instructions
    absolute_instructions = instruction.to_absolute(name2len)
  File "/env/lib/python3.7/site-packages/tensorflow_datasets/core/tfrecords_reader.py", line 557, in to_absolute
    for rel_instr in self._relative_instructions]
  File "/env/lib/python3.7/site-packages/tensorflow_datasets/core/tfrecords_reader.py", line 557, in <listcomp>
    for rel_instr in self._relative_instructions]
  File "/env/lib/python3.7/site-packages/tensorflow_datasets/core/tfrecords_reader.py", line 384, in _rel_to_abs_instr
    split, list(name2len)))
ValueError: Unknown split "train". Should be one of []

I dug in a little bit and it looks like data_builder.info.splits is an empty dict.


Advertised fine-tuning hyper-parameters & results

Hi,
I am trying to reproduce your fine-tuning results for ViT-L/32 on ImageNet (and larger models later).

I notice you publish two tensorboard.dev runs reaching ~81.7 and ~82 accuracies.
(The final train losses are ~0.4 and ~0.8); the hyperparameter differences between the two runs are not visible on tensorboard.dev.
Can you clarify the difference between the two runs?
Is there a reproduction script for these results with the code here, or does it include additional add-ons, e.g. Polyak averaging?

(The reported accuracy in the paper was 80.99)

In my attempt to reproduce with the same hyper-parameters, I reached a lower training loss (~0.16) after 20k steps (I used lr 0.03 according to the tensorboard.dev graph). Unfortunately, my training matches neither the tensorboard.dev curves nor the final result.
The top accuracy I see is 80.98 at an early epoch, after which I observe slight overfitting.

Ideas for image verification

Hi, thanks so much for this great project.
How do you think it is best to use this transformer for image verification?
Say I have pairs of images and I want to predict whether they are from the same family or not, and then visualize the areas that were attended to.
Any ideas?
Eran

Load the pretrained weights to TF2 ViT implementation, thanks

I have a ViT implementation in TF2; no Flax/JAX was used. I want to load the imagenet21k pretrained weights to apply transfer learning. Could anybody guide me on how to convert between the Flax implementation and the TF2 implementation, or how to convert the pretrained weights to a TF2 version? Thank you very much.

img_size=224 in ImageNet21k

It took me quite some time to find out that the state_dicts provided in the imagenet/ directory are for img_size=224.

Object detection using ViT

If possible, please add some hints to README.md on how to approach object detection based on ViT.

Many thanks!

How to deal with variant image sizes, thanks

Transformers can definitely deal with variable input lengths (time steps). If we have images of varying sizes and no resizing is used, that means varying numbers of patches (or pixels from CNN feature maps) will be obtained. How can we handle this? I implemented ViT in TF2/Keras and have no problem processing 1D inputs with shape (batch, time-steps, features), in which the number of time-steps varies. However, when I tried to use the same hybrid ViT model on images of varying sizes (which should be the same as the 1D problem), I couldn't make it run; the main problem is that the shape cannot be obtained at runtime, e.g., I have to use (None, None, 3) as the input image shape.

Has anybody tried something like this? Thanks a lot.

The Importance of Pretrained Model Weight.

I downloaded the official pretrained weights of R50-ViT-B/16. After loading the pretrained weights, I got high accuracy on my dataset. If I drop the pretrained weights, I observe a significant drop in accuracy. This is impressive and explainable, because Transformers need pretraining to work well.

However, my question is:
When I drop part of the pretrained weights, for example the projection 'embedding' layer (between the R50 and the Transformer encoder), I also get a significant drop in accuracy, which is surprising and hard to explain.

Does anyone have the same problem? How can I change part of the pretrained weights but still keep performance comparable to the baseline model?

lambda computation in code

Hi,

As I check the code, it seems you are using the attention module from Flax, which uses scaled dot-product attention instead of the lambda computation. I might be wrong, as you might pass the function in somewhere. Can you point me to the code where you compute Equation 1 and Equation 2 from the paper?

Unexpected memory consumption with mixup

Hello, I have observed unexplainable memory consumption during the training phase with mixup augmentation.
mixup_alpha=0.2; custom dataset; batch_size=512.
RAM consumption keeps increasing as training progresses.

Without mixup augmentation it's OK.

Invalid pretrain weights

imagenet21k/ViT-H_14.npz['head/kernel'] is a tensor with shape (21384, 1280).
However, it seems to be a zero tensor; all values are zero.

Hybrid pretrained models

Thanks for providing the code and pretrained models! I was curious if Hybrid models would also be released; if so, any idea on the timeline? (days? weeks?)

Has anyone reproduced the fine-tuning results?

Thanks to the authors for this amazing work.

I'm currently reproducing the fine-tuning part (pre-training on ImageNet-21k and fine-tuning on ImageNet-1k). However, the best result I can get is 84.1% with ViT-L/16.

I noticed the authors kindly provided a tensorboard for our reference. It shows ViT-L/16 achieving 83.25% as early as step 800, while I can only get around 80% at step 2503 (we run epoch-based training, and there should be 2503 steps per epoch for a batch size of 512). In addition, it took the authors 2 hours to reach step 800, while in our experiments it takes around 11 minutes.

Our setup is 64 V100 32GB GPUs, the total batch size is 512, and no accumulation steps are applied. Everything else follows Table B.1.1 of the original paper, except that we train for 8 epochs, which makes the total number of training steps 20.02k.

Half-Precision Optimizer

Hello,

Thanks for the code.

Looking at the momentum_hp.py file, at line 22, you comment that the optimizer stores state using half-precision. Just to clarify, are you using a half-precision optimizer similar to the Jukebox paper for training?

'pre_logits' in the pretrained weights' dictionary

Hi,
I used the load function in checkpoint.py and it output a dictionary containing the model parameter weights as numpy arrays.
I have wrapped my head around most of the entries; however, I cannot figure out where the pre_logits Dense layer is supposed to fit. Can someone please explain?


Hyper-parameters of ViT-B/16 training from scratch

Thanks for sharing your code. Can you provide the hyper-parameters (e.g. learning rate, weight decay, optimizer type, training epochs) of ViT-B/16 training from scratch on the ImageNet dataset? Many thanks.

RuntimeError: Internal: Failed to load in-memory CUBIN: CUDA_ERROR_OUT_OF_MEMORY: out of memory: while running replica 0 and partition 0 of a replicated computation (other replicas may have failed as well).

I use this command:
python3 -m vit_jax.train --name ViT-B_16-my_dataset_`date +%F_%H%M%S` --model ViT-B_16 --logdir /tmp/vit_logs --dataset my_dataset --batch 8 --batch_eval 8
and I get the following in the console:

File "/home/env/lib/python3.8/site-packages/jax/interpreters/pxla.py", line 954, in execute_replicated
out_bufs = compiled.execute_on_local_devices(list(input_bufs))
RuntimeError: Internal: Failed to load in-memory CUBIN: CUDA_ERROR_OUT_OF_MEMORY: out of memory: while running replica 0 and partition 0 of a replicated computation (other replicas may have failed as well).

CUDA Version: 11.0
python 3.8
tensorflow 2.3.1
jax 0.2.6
jaxlib 0.1.57+cuda110
joblib 0.17.0
