facebookresearch / ov-seg

This is the official PyTorch implementation of the paper Open-Vocabulary Semantic Segmentation with Mask-adapted CLIP.

License: Other

Python 13.69% Shell 0.06% Makefile 0.01% Jupyter Notebook 86.24%

ov-seg's Introduction

[OVSeg] Open-Vocabulary Semantic Segmentation with Mask-adapted CLIP

This is the official PyTorch implementation of our paper:
Open-Vocabulary Semantic Segmentation with Mask-adapted CLIP
Feng Liang, Bichen Wu, Xiaoliang Dai, Kunpeng Li, Yinan Zhao, Hang Zhang, Peizhao Zhang, Peter Vajda, Diana Marculescu
Computer Vision and Pattern Recognition Conference (CVPR), 2023

[arXiv] [Project] [huggingface demo]

Installation

Please see the installation guide.

Data Preparation

Please see the dataset preparation guide.

Getting started

Please see the getting started instructions.

Finetuning CLIP

Please see the open_clip training guide.

LICENSE


The majority of OVSeg is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.


However, portions of the project are available under separate license terms: CLIP and ZSSEG are licensed under the MIT license; MaskFormer is licensed under CC BY-NC; and open_clip is licensed under the license found in its repository.

Citing OVSeg 🙏

If you use OVSeg in your research or wish to refer to the baseline results published in the paper, please use the following BibTeX entry.

@inproceedings{liang2023open,
  title={Open-vocabulary semantic segmentation with mask-adapted clip},
  author={Liang, Feng and Wu, Bichen and Dai, Xiaoliang and Li, Kunpeng and Zhao, Yinan and Zhang, Hang and Zhang, Peizhao and Vajda, Peter and Marculescu, Diana},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={7061--7070},
  year={2023}
}

ov-seg's People

Contributors

bichenwu09, jeff-liangf


ov-seg's Issues

Finetuning CLIP

bash /media/hubu/Data1/202131116023006_bky/ov-seg-main/open_clip_training/src/scripts/coco_gt_171cls_finetune_VitL.sh
2023-10-01,13:03:34 | INFO | Running with a single process. Device cuda:0.
2023-10-01,13:03:34 | INFO | Loading pretrained ViT-L-14 from OpenAI.
2023-10-01,13:03:45 | INFO | Model:
2023-10-01,13:05:35 | INFO | Finished zero-shot imagenet.
2023-10-01,13:05:35 | INFO | Eval Epoch: 0 ade150-zeroshot-val-top1: 0.2742 ade150-zeroshot-val-top5: 0.5465
2023-10-01,13:05:35 | INFO | Start epoch 0
Traceback (most recent call last):
File "/home/hubu/anaconda3/envs/bpy38/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/hubu/anaconda3/envs/bpy38/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/media/hubu/Data1/202131116023006_bky/ov-seg-main/open_clip_training/src/training/main.py", line 320, in
main()
File "/media/hubu/Data1/202131116023006_bky/ov-seg-main/open_clip_training/src/training/main.py", line 268, in main
train_one_epoch(model, data, epoch, optimizer, scaler, scheduler, ema, args, writer)
File "/media/hubu/Data1/202131116023006_bky/ov-seg-main/open_clip_training/src/training/train.py", line 69, in train_one_epoch
for i, batch in enumerate(dataloader):
File "/home/hubu/anaconda3/envs/bpy38/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 521, in next
data = self._next_data()
File "/home/hubu/anaconda3/envs/bpy38/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1203, in _next_data
return self._process_data(data)
File "/home/hubu/anaconda3/envs/bpy38/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1229, in _process_data
data.reraise()
File "/home/hubu/anaconda3/envs/bpy38/lib/python3.8/site-packages/torch/_utils.py", line 434, in reraise
raise exception
FileNotFoundError: Caught FileNotFoundError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/home/hubu/anaconda3/envs/bpy38/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
data = fetcher.fetch(index)
File "/home/hubu/anaconda3/envs/bpy38/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/hubu/anaconda3/envs/bpy38/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 49, in
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/media/hubu/Data1/202131116023006_bky/ov-seg-main/open_clip_training/src/training/data.py", line 47, in getitem
images = self.transforms(Image.open(str(self.images[idx])))
File "/home/hubu/anaconda3/envs/bpy38/lib/python3.8/site-packages/PIL/Image.py", line 3236, in open
fp = builtins.open(filename, "rb")
FileNotFoundError: [Errno 2] No such file or directory: 'coco_gt_171cls/2/000000129379.jpg'

This photo is in the dataset, but the error says it cannot be found, while some other photos are found without any issue.
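A quick way to narrow this down is to verify, before launching training, that every image referenced by the finetuning data list exists relative to the directory you launch from (the failing path coco_gt_171cls/2/000000129379.jpg is relative, so the working directory matters). A minimal sketch; the list file name and its one-path-per-line format are assumptions, so adapt them to the actual format consumed by data.py:

# Sanity-check that every image referenced by the finetuning data list exists.
# Assumptions: a hypothetical list file with one relative image path per line;
# adapt the file name and parsing to the actual format used by data.py.
import os

list_file = "coco_gt_171cls_train_list.txt"   # hypothetical path-list file
missing = []
with open(list_file) as f:
    for line in f:
        rel_path = line.strip()
        if rel_path and not os.path.isfile(rel_path):
            missing.append(rel_path)

print(f"{len(missing)} missing files")
for p in missing[:20]:
    print("missing:", p)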

Release under a license for commercial use?

Question

I am wondering whether it is possible to change the license from CC BY-NC 4.0 to one that allows commercial use (e.g., Apache 2.0)? This project could become a great foundation for solving many classic computer-vision problems.

Details about using SAM

Hi! Thank you for the interesting work!
For the CLIP-with-Segment-Anything demo, do you simply replace MaskFormer with SAM? What kind of prompt do you use with SAM: segment everything in the image and then select the best-scoring masks based on the number of classes, or some other prompt? Thank you for your reply!
best
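The SAM-based demo code is not in this repository, but a rough reconstruction of such a pipeline (an assumption, not the authors' implementation) is to run SAM in segment-everything mode and classify each masked region with CLIP. The checkpoint path, class list, and mean-color background fill below are placeholders:

# Rough reconstruction (not the released demo): SAM proposes class-agnostic masks
# in "segment everything" mode, then each masked region is cropped and classified
# with CLIP against a user-supplied vocabulary.
import numpy as np
import torch
import clip
from PIL import Image
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

device = "cuda" if torch.cuda.is_available() else "cpu"
class_names = ["dog", "cat", "grass"]                     # your open vocabulary

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth").to(device)
mask_generator = SamAutomaticMaskGenerator(sam)           # default "everything" prompts

clip_model, preprocess = clip.load("ViT-L/14", device=device)
text = clip.tokenize([f"a photo of a {c}" for c in class_names]).to(device)
with torch.no_grad():
    text_feat = clip_model.encode_text(text)
    text_feat /= text_feat.norm(dim=-1, keepdim=True)

image = np.array(Image.open("sample.jpg").convert("RGB"))
masks = mask_generator.generate(image)                    # dicts with 'segmentation', 'bbox' (XYWH)

for m in masks:
    seg = m["segmentation"]                               # HxW bool mask
    x, y, w, h = (int(v) for v in m["bbox"])
    region = image.copy()
    region[~seg] = image.reshape(-1, 3).mean(0).astype(np.uint8)  # blank background with mean color
    crop = Image.fromarray(region[y:y + h, x:x + w])
    with torch.no_grad():
        img_feat = clip_model.encode_image(preprocess(crop).unsqueeze(0).to(device))
        img_feat /= img_feat.norm(dim=-1, keepdim=True)
        probs = (100.0 * img_feat @ text_feat.T).softmax(dim=-1)
    print(class_names[int(probs.argmax())], float(probs.max()))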

Download data error!

When running gdown 1cycn5BpUjkSTIysEtxnAUFrgUEnQ5_pW, it shows:

Access denied with the following error:

Too many users have viewed or downloaded this file recently. Please
try accessing the file again later. If the file you are trying to
access is particularly large or is shared with many people, it may
take up to 24 hours to be able to view or download the file. If you
still can't access a file after 24 hours, contact your domain
administrator. 

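This is a Google Drive download-quota error rather than a broken link; it usually clears after some hours, or the file can be downloaded manually in a browser. A small retry sketch using gdown's Python API with the same file id (the one-hour retry interval is arbitrary):

# Retry the Google Drive download with a fixed backoff; the quota error usually
# clears on its own. Keeps the original filename from Drive (output=None).
import time
import gdown

url = "https://drive.google.com/uc?id=1cycn5BpUjkSTIysEtxnAUFrgUEnQ5_pW"

for attempt in range(5):
    try:
        if gdown.download(url, output=None, quiet=False) is not None:
            break
    except Exception as exc:                 # gdown raises on quota/permission errors
        print(f"attempt {attempt} failed: {exc}")
    time.sleep(3600)                         # wait an hour before retrying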

About train on other datasets

Hi, thanks for doing such interesting work. I would like to know how to train on my own dataset; I only have images and masks. Thank you.
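Training goes through detectron2 dataset registration, so a sketch for a folder of images plus per-pixel PNG masks could look like the following. The paths, class names, and dataset key are placeholders, and OVSeg's own loaders (see open_vocab_seg/data/datasets/register_coco_stuff.py) may expect additional metadata fields, so treat this as a starting point:

# Sketch: register a custom semantic-segmentation dataset (images + PNG masks)
# with detectron2 so it can be referenced from DATASETS.TRAIN in the config.
# All names and paths below are placeholders.
from detectron2.data import DatasetCatalog, MetadataCatalog
from detectron2.data.datasets import load_sem_seg

image_root = "datasets/my_dataset/images"   # *.jpg input images
gt_root = "datasets/my_dataset/masks"       # *.png masks, pixel value = class id, 255 = ignore
class_names = ["sky", "building", "road"]   # your label set

DatasetCatalog.register(
    "my_dataset_train_sem_seg",
    lambda: load_sem_seg(gt_root, image_root, gt_ext="png", image_ext="jpg"),
)
MetadataCatalog.get("my_dataset_train_sem_seg").set(
    stuff_classes=class_names,
    ignore_label=255,
    evaluator_type="sem_seg",
)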

ValueError: not enough values to unpack (expected 3, got 2)

I am using the script python train_net.py --num-gpu 4 --config-file configs/ovseg_swinB_vitL_bs32_120k.yaml MODEL.CLIP_ADAPTER.CLIP_MODEL_NAME ViT-L/14 to reproduce the results, but after 14999 iterations, when inference starts on 500 batches, I hit the following error:
Traceback (most recent call last):
File "train_net.py", line 302, in
launch(
File "/home/Anaconda3/envs/ovseg/lib/python3.8/site-packages/detectron2/engine/launch.py", line 67, in launch
mp.spawn(
File "/home/Anaconda3/envs/ovseg/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 230, in spawn
return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
File "/home/Anaconda3/envs/ovseg/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 188, in start_processes
while not context.join():
File "/home/Anaconda3/envs/ovseg/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 150, in join
raise ProcessRaisedException(msg, error_index, failed_process.pid)
torch.multiprocessing.spawn.ProcessRaisedException:

-- Process 3 terminated with the following error:
Traceback (most recent call last):
File "/home/Anaconda3/envs/ovseg/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 59, in _wrap
fn(i, *args)
File "/home/Anaconda3/envs/ovseg/lib/python3.8/site-packages/detectron2/engine/launch.py", line 126, in _distributed_worker
main_func(*args)
File "/root/ov-seg/train_net.py", line 296, in main
return trainer.train()
File "/home/Anaconda3/envs/ovseg/lib/python3.8/site-packages/detectron2/engine/defaults.py", line 484, in train
super().train(self.start_iter, self.max_iter)
File "/home/Anaconda3/envs/ovseg/lib/python3.8/site-packages/detectron2/engine/train_loop.py", line 150, in train
self.after_step()
File "/home/Anaconda3/envs/ovseg/lib/python3.8/site-packages/detectron2/engine/train_loop.py", line 180, in after_step
h.after_step()
File "/home/Anaconda3/envs/ovseg/lib/python3.8/site-packages/detectron2/engine/hooks.py", line 552, in after_step
self._do_eval()
File "/home/Anaconda3/envs/ovseg/lib/python3.8/site-packages/detectron2/engine/hooks.py", line 525, in _do_eval
results = self._func()
File "/home/Anaconda3/envs/ovseg/lib/python3.8/site-packages/detectron2/engine/defaults.py", line 453, in test_and_save_results
self._last_eval_results = self.test(self.cfg, self.model)
File "/home/Anaconda3/envs/ovseg/lib/python3.8/site-packages/detectron2/engine/defaults.py", line 608, in test
results_i = inference_on_dataset(model, data_loader, evaluator)
File "/home/Anaconda3/envs/ovseg/lib/python3.8/site-packages/detectron2/evaluation/evaluator.py", line 158, in inference_on_dataset
outputs = model(inputs)
File "/home/Anaconda3/envs/ovseg/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/home/Anaconda3/envs/ovseg/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 886, in forward
output = self.module(*inputs[0], **kwargs[0])
File "/home/Anaconda3/envs/ovseg/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/root/ov-seg/open_vocab_seg/ovseg_model.py", line 212, in forward
r, regions = self.semantic_inference(
File "/root/ov-seg/open_vocab_seg/ovseg_model.py", line 237, in semantic_inference
clip_cls, regions, valid_flag = self.clip_adapter(
File "/home/Anaconda3/envs/ovseg/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/root/ov-seg/open_vocab_seg/modeling/clip_adapter/adapter.py", line 126, in forward
(regions, unnorm_regions), region_masks, valid_flag = self._preprocess_image(image, mask, normalize=normalize)
ValueError: not enough values to unpack (expected 3, got 2)

FREEZE_AT parameter in config

Hello,

I was looking into the implementation of OVSeg and I cannot see where cfg.BACKBONE.FREEZE_AT is used. I see that frozen_stages=-1 is hard-coded when SwinTransformer is initialized. Shouldn't it be initialized with the cfg.BACKBONE.FREEZE_AT value?
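One hypothetical way to make the value take effect (an assumption about the intended behavior, not a confirmed fix) is to read detectron2's standard key when the backbone is built instead of hard-coding frozen_stages=-1:

# Hypothetical plumbing, not a confirmed fix: read detectron2's default key and
# pass it to the Swin backbone instead of the hard-coded frozen_stages=-1.
from detectron2.config import get_cfg

cfg = get_cfg()
freeze_at = cfg.MODEL.BACKBONE.FREEZE_AT
# swin = SwinTransformer(..., frozen_stages=freeze_at)  # inside the backbone builder
print("FREEZE_AT =", freeze_at)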

The CLIP ViT-B/16 model

Hello, thank you for your amazing work! I want to know where I can get the ViT-B/16 backbone for CLIP and the R101 backbone for MaskFormer. ViT-L/14 is too big and does not suit my application. Thank you~

Why can't 'CLIP_ENSEMBLE_WEIGHT > 0' be used with 'R101c + CLIP-ViT-B/16' in demo.py?

Hi,

I tried to run demo.py with the 'R101c + CLIP-ViT-B/16' model. I modified the config file as follows:

MODEL:
  META_ARCHITECTURE: "OVSegDEMO"
  BACKBONE:
    NAME: "build_resnet_deeplab_backbone"
  RESNETS:
    DEPTH: 101
    STEM_TYPE: "deeplab"
    STEM_OUT_CHANNELS: 128
    STRIDE_IN_1X1: False
    OUT_FEATURES: [ "res2", "res3", "res4", "res5" ]
    # NORM: "SyncBN"
    RES5_MULTI_GRID: [ 1, 2, 4 ]
  WEIGHTS: "detectron2://DeepLab/R-103.pkl"
  PIXEL_MEAN: [123.675, 116.280, 103.530]
  PIXEL_STD: [58.395, 57.120, 57.375]
  SEM_SEG_HEAD:
    NAME: "OpenVocabMaskFormerHead"
    IN_FEATURES: [ "res2", "res3", "res4", "res5" ]
    IGNORE_VALUE: 255
    NUM_CLASSES: 171 # number of categories in training set
    EMBEDDING_DIM: 512
    EMBED_LAYERS: 2
    COMMON_STRIDE: 4 # not used, hard-coded
    LOSS_WEIGHT: 1.0
    CONVS_DIM: 256
    MASK_DIM: 256
    NORM: "GN"
  MASK_FORMER:
    TRANSFORMER_IN_FEATURE: "res5"
    DEEP_SUPERVISION: True
    NO_OBJECT_WEIGHT: 0.1
    DICE_WEIGHT: 1.0
    MASK_WEIGHT: 20.0
    HIDDEN_DIM: 256
    NUM_OBJECT_QUERIES: 100
    NHEADS: 8
    DROPOUT: 0.1
    DIM_FEEDFORWARD: 2048
    ENC_LAYERS: 0
    DEC_LAYERS: 6
    PRE_NORM: False
  CLIP_ADAPTER:
    TEXT_TEMPLATES: "vild"
    CLIP_MODEL_NAME: "ViT-B/16"
    MASK_FILL: "mean"
    MASK_EXPAND_RATIO: 1.0
    MASK_THR: 0.5 # choose the foreground objects
    MASK_MATTING: False # use soft background, default not used
    MASK_PROMPT_DEPTH: 3
    MASK_PROMPT_FWD: True # use mask prompt during forward
    REGION_RESIZED: True # resize to the input of clip, e.g., 224
    CLIP_ENSEMBLE: True # use ensemble of two classification branches
    CLIP_ENSEMBLE_WEIGHT: 0.5
DATASETS:
  TRAIN: ("coco_2017_train_stuff_sem_seg",)
  TEST: ("ade20k_sem_seg_val",)
SOLVER:
  IMS_PER_BATCH: 32
  BASE_LR: 0.00006
  MAX_ITER: 120000
  WARMUP_FACTOR: 1e-6
  WARMUP_ITERS: 1500
  WEIGHT_DECAY: 0.01
  WEIGHT_DECAY_NORM: 0.0
  WEIGHT_DECAY_EMBED: 0.0
  BACKBONE_MULTIPLIER: 1.0
  TEST_IMS_PER_BATCH: 1
  CLIP_GRADIENTS:
    ENABLED: True
    CLIP_TYPE: "full_model"
    CLIP_VALUE: 0.01
    NORM_TYPE: 2.0
INPUT:
  MIN_SIZE_TEST: 512
  MAX_SIZE_TEST: 2048
  CROP:
    ENABLED: True
    TYPE: "absolute"
    SIZE: (512, 512)
    SINGLE_CATEGORY_MAX_AREA: 1.0
  COLOR_AUG_SSD: True
  SIZE_DIVISIBILITY: 512  # used in dataset mapper
  FORMAT: "RGB"
TEST:
  EVAL_PERIOD: 5000
  AUG:
    ENABLED: False
    MIN_SIZES: [256, 384, 512, 640, 768, 896]
    MAX_SIZE: 3584
    FLIP: True
DATALOADER:
  FILTER_EMPTY_ANNOTATIONS: True
  NUM_WORKERS: 4
VERSION: 2

I changed CLIP_ENSEMBLE_WEIGHT to 0.5. The command is:

python demo.py --config-file configs/ovseg_R101c_demo.yaml --class-names 'Oculus' 'Ukulele'  --input ./resources/demo_samples/sample_03.jpeg --output ./pred --opts MODEL.WEIGHTS pretrained_model/ovseg_R101c_vitB16_ft_mpt.pth.pt

But it raises an error (error screenshot attached to the original issue).

Is there anything wrong with my modification of the config file? How can I solve it?

PATH_TO_MASKADAPTED_CLIP

Hi, thanks for your great work, but it seems that I cannot find the released mask-adapted CLIP model weights.

Segmentation Problem

Hi @ALL,
Thank you for your great work.
When I run demo.py as follows:
python demo.py --input custom_images/*.JPEG --output output/ --class-name drum, trifle, person, punching bag, mouse, trap, volleyball, piano, boxer, plunger, harmonica, dog, muzzle, violin

The outputs were not as I expected (output screenshots attached to the original issue).

Did I make a mistake when running the demo.py file, or did I forget the pretrained weights? If that is the problem, could you provide the pretrained weight file? It would be really kind of you.

Best,
Tin

demo.py

Hi, thanks for doing such interesting work; I am very interested in it. I ran the online demo on my own data and found the results very exciting. However, when running the demo.py file, the results are not as good as the online demo, and I cannot get arbitrary semantic segmentation without prompting. Could you share the code behind the online demo? Looking forward to your reply!

Per-pixel feature extraction

Hello, thanks for making the code available. I have a question. Is it possible to obtain per-pixel features (e.g., 512-D or 768-D) instead of N_mask x W x H and N_mask x feat_dim that the encoder provides as output?

On a similar note, is it a correct understanding that the mask-proposal-then-classification architecture does not have an intermediate per-pixel feature representation?
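If an approximate per-pixel feature map is all that is needed, one post-hoc option (a sketch, not something the repository exposes) is to splat the N_mask x feat_dim region embeddings back onto the image, using the N_mask x H x W soft masks as weights:

# Sketch: aggregate per-mask embeddings into an approximate per-pixel feature map
# by mask-weighted averaging. Purely post-hoc; not an output of the model itself.
import torch

def per_pixel_features(mask_probs: torch.Tensor, mask_embeds: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """mask_probs: (N, H, W) in [0, 1]; mask_embeds: (N, D); returns (D, H, W)."""
    weighted = torch.einsum("nhw,nd->dhw", mask_probs, mask_embeds)
    total = mask_probs.sum(dim=0).clamp_min(eps)   # per-pixel sum of mask weights
    return weighted / total

feats = per_pixel_features(torch.rand(100, 64, 64), torch.randn(100, 512))
print(feats.shape)  # torch.Size([512, 64, 64])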

Reproducing the results of baseline w/ original CLIP

Dear author,

Thanks for the great work! Could you tell me how many epochs are needed to train the baseline with the original CLIP? I've trained for 2 epochs (10,000 iters), and the loss seems to have converged already. However, the test results have a gap with yours: I tested the weights you provided and got 29.6 on ADE-150, which is perfect, but my own run only reaches 18.0, while the paper reports 21.8. Could you help me out?

Here are my training results and training logs (screenshots attached to the original issue).

Thanks.

Loss plots

Hello,

I was wondering if you could provide the loss graphs from W&B. I am trying to train OVSeg myself and I am wondering how the graphs should look.

Thank you in advance for your response.

CLIP weights?

Hi @ALL,
Thank you for your great work.
Your work has two contributions: 1) the segmentation model with mask prompting, and 2) the embeddings of your new CLIP model (which takes a masked image and a mask prompt as inputs).

For now, I have only found the weights for the entire segmentation model, so could you provide the new CLIP weights on their own? I think this would be really helpful, because your work helps CLIP learn region-level images (as in RegionCLIP).

Also, do you plan to release a ViT-B/32 version? I have only found the ViT-B/16 and ViT-L/14 versions.
Thank you very much.

Best,
Tin

SAM+Mask_adapted CLIP

Hello, first of all, thanks for sharing this very influential and interesting work. I have tried the online demo and found it very interesting, but I could not find the code for this part. Are you willing to release it?

swin_base_patch4_window12_384_22k.pkl

Thank you for your generous sharing! When I try to run the test, an error jumps out: AssertionError: Override list has odd length: ['MODEL.WEIGHTS']; it must be a list of pairs

I eventually figured out that the problem is related to the file named "swin_base_patch4_window12_384_22k.pkl". However, I cannot find this file. Would you please tell me where to get it?

The training process of ovseg

Thanks for your amazing work!

Is the training of OVSeg a two-stage pipeline? Where is the code for the CLIP adaptation?

Thanks

The issue with the dataset during the training process.

Hello, when I run the Python script datasets/prepare_coco_stuff_sem_seg.py to generate a dataset, it shows 0it [00:00, ?it/s], and when I execute python train_net.py --num-gpu 1 --config-file configs/ovseg_swinB_vitL_bs32_120k.yaml MODEL.CLIP_ADAPTER.MASK_PROMPT_FWD False, I encounter the following error.
relative_position_bias_table (this line is repeated 24 times in the log)
Traceback (most recent call last):
File "train_net.py", line 302, in
launch(
File "/usr/local/lib/python3.8/dist-packages/detectron2/engine/launch.py", line 82, in launch
main_func(*args)
File "train_net.py", line 294, in main
trainer = Trainer(cfg)
File "/usr/local/lib/python3.8/dist-packages/detectron2/engine/defaults.py", line 378, in init
data_loader = self.build_train_loader(cfg)
File "train_net.py", line 106, in build_train_loader
return build_detection_train_loader(cfg, mapper=mapper, dataset=dataset)
File "/usr/local/lib/python3.8/dist-packages/detectron2/config/config.py", line 207, in wrapped
explicit_args = _get_args_from_config(from_config, *args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/detectron2/config/config.py", line 245, in _get_args_from_config
ret = from_config_func(*args, **kwargs)
File "/root/ov-seg/open_vocab_seg/data/build.py", line 164, in _train_loader_from_config
dataset = get_detection_dataset_dicts(
File "/root/ov-seg/open_vocab_seg/data/build.py", line 127, in get_detection_dataset_dicts
dataset_dicts = [
File "/root/ov-seg/open_vocab_seg/data/build.py", line 128, in
wrap_metas(DatasetCatalog.get(dataset_name), dataset_name=dataset_name)
File "/usr/local/lib/python3.8/dist-packages/detectron2/data/catalog.py", line 58, in get
return f()
File "/root/ov-seg/open_vocab_seg/data/datasets/register_coco_stuff.py", line 235, in
lambda x=image_dir, y=gt_dir: load_sem_seg(
File "/usr/local/lib/python3.8/dist-packages/detectron2/data/datasets/coco.py", line 274, in load_sem_seg
assert len(gt_files) > 0, "No annotations found in {}.".format(gt_root)
AssertionError: No annotations found in datasets/coco/stuffthingmaps_detectron2/train2017.
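The 0it [00:00, ?it/s] output means the prepare script found nothing to convert, which leaves stuffthingmaps_detectron2/train2017 empty and triggers the assertion. A quick sanity check on the input and output folders (the raw input path is an assumption based on the dataset preparation guide; adjust it to your layout):

# Check that the raw COCO-Stuff label maps exist before rerunning
# datasets/prepare_coco_stuff_sem_seg.py. The 'raw' path is an assumption.
from pathlib import Path

raw = Path("datasets/coco/stuffthingmaps/train2017")            # unzipped stuffthingmaps_trainval2017.zip
out = Path("datasets/coco/stuffthingmaps_detectron2/train2017") # written by the prepare script

n_raw = len(list(raw.glob("*.png"))) if raw.is_dir() else 0
n_out = len(list(out.glob("*.png"))) if out.is_dir() else 0
print(f"raw label maps:       {n_raw}")   # should be roughly 118k for train2017
print(f"converted label maps: {n_out}")   # 0 here explains both the '0it' and the assertion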

Use which feature to classify for demo.

Thanks for your great work. I see that for the demo config the masks come from OVSeg, but the classification depends entirely on the CLIP classifier (L486: # only clip model predictions are used). From Table 5, I understand that either feature (from OVSeg or from CLIP) can be used for classification. However, if I turn clip_ensemble to False, the predicted picture becomes totally wrong. Does OVSeg only produce mask proposals for the CLIP adapter in the demo? How can I use the OVSeg features alone for mask classification?
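As a schematic of what CLIP_ENSEMBLE_WEIGHT controls, assuming (not verified against the repository) that the two classification branches are blended geometrically: with user-supplied class names the trained MaskFormer classifier has no matching categories, which would be consistent with the demo relying on CLIP predictions alone.

# Schematic only -- see open_vocab_seg/modeling/clip_adapter for the actual logic.
# Assumes a geometric blend of the two branches' class probabilities.
import torch

def ensemble(maskformer_probs: torch.Tensor, clip_probs: torch.Tensor, w: float) -> torch.Tensor:
    """Both inputs: (N_regions, N_classes) probabilities; w plays the role of CLIP_ENSEMBLE_WEIGHT."""
    return maskformer_probs ** (1.0 - w) * clip_probs ** w

scores = ensemble(torch.rand(5, 3).softmax(-1), torch.rand(5, 3).softmax(-1), w=0.7)
print(scores.argmax(dim=-1))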

Customer dataset error: ValueError: not enough values to unpack (expected 3, got 2)

Thanks for your fantastic work.

I encountered an issue when I changed the dataset to my own dataset. This error is not always present but sometimes happens when evaluating the model on the val dataset.

I think the issue is that my dataset includes empty masks, i.e., images that contain no objects, only background in the mask; the whole mask has pixel value 255. But I am not sure.

(error screenshot attached to the original issue)

Please let me know if you have any suggestions about this issue. Thanks!
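If the all-ignore masks are indeed the trigger, one workaround (a sketch, not a repository-provided option) is to drop such samples from the detectron2 dataset dicts before building the dataloader:

# Sketch: filter out samples whose ground-truth mask contains only the ignore
# label (255), one plausible trigger for the unpack error during evaluation.
import numpy as np
from PIL import Image

def has_valid_pixels(dataset_dict: dict, ignore_label: int = 255) -> bool:
    mask = np.array(Image.open(dataset_dict["sem_seg_file_name"]))
    return bool((mask != ignore_label).any())

def filter_empty(dataset_dicts: list, ignore_label: int = 255) -> list:
    kept = [d for d in dataset_dicts if has_valid_pixels(d, ignore_label)]
    print(f"kept {len(kept)} / {len(dataset_dicts)} samples with at least one labeled pixel")
    return kept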
