
tsinghua-mars-lab / neural_map_prior

The official implementation of the CVPR 2023 paper "Neural Map Prior for Autonomous Driving".

Home Page: https://tsinghua-mars-lab.github.io/neural_map_prior/

License: Apache License 2.0

Python 99.93% Shell 0.07%

neural_map_prior's People

Contributors

abbyxxn

neural_map_prior's Issues

GeoDistributedSampler

Hi @abbyxxn, I deeply appreciate your great work! I was trying to fine-tune NMP after training bevformer_30m_60m for 24 epochs on 8x3090 GPUs with the same requirements, but got:

2023-08-12 08:36:16,655 - mmdet - INFO - workflow: [('train', 1)], max: 24 epochs
start before_train_epoch removing map_slice_float_dict content !!!!!!!!!!!!!!!

  • 3 start before_train_epoch removing map_slice_int_dict content !!!!!!!!!!!!!!!
    end before_train_epoch removing map_slice_int_dict content !!!!!!!!!!!!!!!
    start before_train_epoch removing map_slice_float_dict content !!!!!!!!!!!!!!!
  • 3 start before_train_epoch removing map_slice_int_dict content !!!!!!!!!!!!!!!
    end before_train_epoch removing map_slice_int_dict content !!!!!!!!!!!!!!!
    start before_train_epoch removing map_slice_float_dict content !!!!!!!!!!!!!!!
  • 3 start before_train_epoch removing map_slice_int_dict content !!!!!!!!!!!!!!!
    end before_train_epoch removing map_slice_int_dict content !!!!!!!!!!!!!!!
    start before_train_epoch removing map_slice_float_dict content !!!!!!!!!!!!!!!
  • 3 start before_train_epoch removing map_slice_int_dict content !!!!!!!!!!!!!!!
    end before_train_epoch removing map_slice_int_dict content !!!!!!!!!!!!!!!
    start before_train_epoch removing map_slice_float_dict content !!!!!!!!!!!!!!!
  • 3 start before_train_epoch removing map_slice_int_dict content !!!!!!!!!!!!!!!
    end before_train_epoch removing map_slice_int_dict content !!!!!!!!!!!!!!!
    start before_train_epoch removing map_slice_float_dict content !!!!!!!!!!!!!!!
  • 3 start before_train_epoch removing map_slice_int_dict content !!!!!!!!!!!!!!!
    end before_train_epoch removing map_slice_int_dict content !!!!!!!!!!!!!!!
    start before_train_epoch removing map_slice_float_dict content !!!!!!!!!!!!!!!
  • 3 start before_train_epoch removing map_slice_int_dict content !!!!!!!!!!!!!!!
    end before_train_epoch removing map_slice_int_dict content !!!!!!!!!!!!!!!
    GeoDistributedSamplerGeoDistributedSamplerGeoDistributedSampler shuffle shuffle shuffle
    GeoDistributedSamplerGeoDistributedSamplerGeoDistributedSampler shuffle shuffle shuffle
    GeoDistributedSamplerGeoDistributedSamplerGeoDistributedSampler shuffle shuffle shuffle
    GeoDistributedSamplerGeoDistributedSamplerGeoDistributedSampler shuffle shuffle shuffle
    GeoDistributedSamplerGeoDistributedSamplerGeoDistributedSampler shuffle shuffle shuffle
    GeoDistributedSamplerGeoDistributedSamplerGeoDistributedSampler shuffle shuffle shuffle
    GeoDistributedSamplerGeoDistributedSamplerGeoDistributedSampler shuffle shuffle shuffle
    epoch: 0 self.train_epoch_point: -2
    0%| | 0/11 [00:00<?, ?it/s]epoch: 0 self.train_epoch_point: -2
    9%|█████████████▊ | 1/11 [00:02<00:24, 2.48s/it]epoch: 0 self.train_epoch_point: -2
    0%| | 0/8 [00:00<?, ?it/s]epoch: 0 self.train_epoch_point: -2
    18%|███████████████████████████▋ | 2/11 [00:04<00:20, 2.32s/it]epoch: 0 self.train_epoch_point: -2
    0%| | 0/6 [00:00<?, ?it/s]GeoDistributedSamplerGeoDistributedSamplerGeoDistributedSampler shuffle shuffle shuffle
    epoch: 0 self.train_epoch_point: -2
    0%| | 0/3 [00:00<?, ?it/s]epoch: 0 self.train_epoch_point: -2
    100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [01:02<00:00, 20.90s/it]
    100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [01:05<00:00, 21.78s/it]
    100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 8/8 [01:07<00:00, 8.48s/it]
    100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [01:06<00:00, 16.71s/it]
    18%|██████████████████████████▊ | 3/17 [01:26<07:36, 32.60s/it]gpu_id: 2 count: tensor(0.) gpu_city_list ['boston-seaport_map_3_1', 'boston-seaport_map_1_1', 'boston-seaport_map_0_2']
    create empty map for: map_slice_float_dict, {'root_dir': '/localdata_ssd/map_slices/raster_global_map', 'type': 'rasterized', 'prefix': 'map_large_reso_gru_cpu', 'tile_param': {'data_type': 'float32', 'embed_dims': 256, 'num_traversals': 1}, 'batch_size': 8, 'single_gpu': False, 'global_map_tile_size': [4, 4], 'global_map_raster_size': [0.3, 0.3]}
    100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 97.24it/s]
    83%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▌ | 5/6 [01:22<00:16, 16.09s/it]gpu_id: 2 count: tensor(0) gpu_city_list ['boston-seaport_map_3_1', 'boston-seaport_map_1_1', 'boston-seaport_map_0_2']
    create empty map for: map_slice_int_dict, {'root_dir': '/localdata_ssd/map_slices/raster_global_map', 'type': 'rasterized', 'prefix': 'map_large_reso_gru_cpu', 'tile_param': {'data_type': 'int16', 'embed_dims': 1, 'num_traversals': 1}, 'batch_size': 8, 'single_gpu': False, 'global_map_tile_size': [4, 4], 'global_map_raster_size': [0.3, 0.3]}
    100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 6/6 [01:22<00:00, 13.82s/it]
    24%|███████████████████████████████████▊ | 4/17 [01:28<04:26, 20.53s/it]gpu_id: 5 count: tensor(0.) gpu_city_list ['singapore-onenorth_map_0_1', 'singapore-onenorth_map_2_3', 'singapore-onenorth_map_2_2', 'singapore-onenorth_map_1_1', 'singapore-onenorth_map_1_3', 'singapore-onenorth_map_0_2', 'singapore-onenorth_map_3_1', 'singapore-onenorth_map_3_0']
    create empty map for: map_slice_float_dict, {'root_dir': '/localdata_ssd/map_slices/raster_global_map', 'type': 'rasterized', 'prefix': 'map_large_reso_gru_cpu', 'tile_param': {'data_type': 'float32', 'embed_dims': 256, 'num_traversals': 1}, 'batch_size': 8, 'single_gpu': False, 'global_map_tile_size': [4, 4], 'global_map_raster_size': [0.3, 0.3]}
    100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 8/8 [00:00<00:00, 638.75it/s]
    gpu_id: 5 count: tensor(0) gpu_city_list ['singapore-onenorth_map_0_1', 'singapore-onenorth_map_2_3', 'singapore-onenorth_map_2_2', 'singapore-onenorth_map_1_1', 'singapore-onenorth_map_1_3', 'singapore-onenorth_map_0_2', 'singapore-onenorth_map_3_1', 'singapore-onenorth_map_3_0']
    create empty map for: map_slice_int_dict, {'root_dir': '/localdata_ssd/map_slices/raster_global_map', 'type': 'rasterized', 'prefix': 'map_large_reso_gru_cpu', 'tile_param': {'data_type': 'int16', 'embed_dims': 1, 'num_traversals': 1}, 'batch_size': 8, 'single_gpu': False, 'global_map_tile_size': [4, 4], 'global_map_raster_size': [0.3, 0.3]}
    29%|████████████████████████████████████████████▋ | 5/17 [01:30<02:45, 13.77s/it]ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -9) local_rank: 6 (pid: 3624545) of binary: /opt/conda/bin/python
    ERROR:torch.distributed.elastic.agent.server.local_elastic_agent:[default] Worker group failed
    INFO:torch.distributed.elastic.agent.server.api:[default] Worker group FAILED. 3/3 attempts left; will restart worker group

Then torch tried to restart the worker group and got stuck. It looks like something went wrong with GeoDistributedSampler, which does not seem to read the files properly in my case. I tried changing the if shuffle condition in mmdet_dataloader.py and some other configs, but with no luck. My pickled ann_files were generated from data_sampler.py and nusc_city_infos.py, and the root directories were replaced in lane_render.py.

I was wondering whether something is missing or whether I loaded the data in the wrong way. Btw, I also trained the _city_split baseline for 24 epochs, but got "CUDA out of memory" when starting to fine-tune it. Thanks in advance for your help.

I'm only getting half of the results provided in the pretrained model checkpoint.

Hello, when I execute the command "bash tools/dist_train.sh project/configs/neural_map_prior_bevformer_30m_60m.py 8," I'm only getting half of the results provided in the pretrained model checkpoint. I wanted to ask if I need to modify the configuration file as you mentioned to freeze some layers of the backbone and neck in order to reproduce the results. Also, it seems like I found a way to change the sampler without modifying "train.py." Can I have a more in-depth discussion with you on WeChat? My upcoming internship work is also related to map priors, and I would greatly appreciate it.
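For reference, freezing parts of the backbone and neck in an mmdet-style config usually looks like the sketch below. The exact keys in neural_map_prior_bevformer_30m_60m.py may differ, so treat frozen_stages, norm_eval, and the helper function as illustrative assumptions rather than the repo's actual settings.

# Minimal sketch, assuming an mmdet-style ResNet image backbone:
model = dict(
    img_backbone=dict(
        type='ResNet',
        depth=50,
        frozen_stages=4,   # freeze the stem and all four ResNet stages
        norm_eval=True),   # keep BatchNorm statistics fixed during fine-tuning
)

# Alternatively, freeze a module (e.g. the neck) by hand after building the model:
def freeze_module(module):
    for p in module.parameters():
        p.requires_grad = False
    module.eval()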

Question about the gt generation from nuscenes

Thanks for all your work. I have studied the code, and I have some questions about the RasterizedData class's get_lineimg function, which generates the seg_map, inst_mask and direction_mask. Is there any detailed documentation explaining it? Thanks, and I hope for your answers.
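For readers unfamiliar with this kind of ground-truth generation, the sketch below shows a common HDMapNet-style way of rasterizing map polylines into a semantic map, an instance mask and a direction mask. It is not the repo's actual get_lineimg implementation; the canvas size, line thickness, and 10-degree direction bins are illustrative assumptions.

import numpy as np
import cv2

def rasterize_lines(polylines, classes, canvas_hw=(200, 400), thickness=3):
    """Rasterize BEV-frame polylines (lists of (N, 2) pixel coordinates) into
    a semantic map, an instance mask, and a quantized direction mask."""
    h, w = canvas_hw
    seg_map = np.zeros((h, w), dtype=np.uint8)         # semantic class id per pixel
    inst_mask = np.zeros((h, w), dtype=np.uint8)       # instance id per pixel (<= 255 instances)
    direction_mask = np.zeros((h, w), dtype=np.uint8)  # heading bin per pixel

    for inst_id, (line, cls) in enumerate(zip(polylines, classes), start=1):
        pts = np.round(line).astype(np.int32).reshape(-1, 1, 2)
        cv2.polylines(seg_map, [pts], False, int(cls), thickness)
        cv2.polylines(inst_mask, [pts], False, int(inst_id), thickness)
        # Direction: quantize each segment's heading into 36 bins of 10 degrees.
        for p0, p1 in zip(line[:-1], line[1:]):
            angle = np.arctan2(p1[1] - p0[1], p1[0] - p0[0]) % (2 * np.pi)
            bin_id = int(angle / (2 * np.pi) * 36) + 1
            seg = np.round(np.stack([p0, p1])).astype(np.int32).reshape(-1, 1, 2)
            cv2.polylines(direction_mask, [seg], False, bin_id, thickness)
    return seg_map, inst_mask, direction_mask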

KeyError: 'nuScenesMapDataset is not in the dataset registry'

Hello, my installed version is:
mmcv 1.3.14
When I run python test.py ./project/configs/bevformer_30m_60m.py ./ckpts/bevformer_epoch_24.pth --eval iou
I get the error:
KeyError: 'nuScenesMapDataset is not in the dataset registry'
I debugged into envs/npn/lib/python3.8/site-packages/mmcv/utils/registry.py:

obj_cls = registry.get(obj_type)
if obj_cls is None:
    raise KeyError(
        f'{obj_type} is not in the {registry.name} registry')

Here registry.get(obj_type) returns None.
Thanks.
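This kind of KeyError usually means that the module containing the @DATASETS.register_module() decorator was never imported, so the class was never added to the registry. A minimal sketch of the two usual fixes follows; the package path project.neural_map_prior is an assumption, so adjust it to wherever the dataset class actually lives.

# Option 1: import the project package before the dataset is built,
# e.g. near the top of tools/test.py (module path assumed):
# import project.neural_map_prior  # noqa: F401  -- triggers the register_module decorators

# Option 2: let mmcv import it from the config file:
custom_imports = dict(
    imports=['project.neural_map_prior'],  # assumed package path
    allow_failed_imports=False)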

Issues related to data samplers

Hi, I used 4 GPUs for the test, but it resulted in a mismatch between self.num_samples and the number of indices in the sampler. How should I solve this problem?
Traceback (most recent call last):
File "./tools/test.py", line 267, in
main()
File "./tools/test.py", line 243, in main
outputs = multi_gpu_test(model, data_loader, args.tmpdir,
File "/home/hwt/anaconda3/envs/npn/lib/python3.8/site-packages/mmdet/apis/test.py", line 95, in multi_gpu_test
for i, data in enumerate(data_loader):
File "/home/hwt/anaconda3/envs/npn/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 359, in iter
return self._get_iterator()
File "/home/hwt/anaconda3/envs/npn/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 305, in _get_iterator
return _MultiProcessingDataLoaderIter(self)
File "/home/hwt/anaconda3/envs/npn/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 944, in init
self._reset(loader, first_iter=True)
File "/home/hwt/anaconda3/envs/npn/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 975, in _reset
self._try_put_index()
File "/home/hwt/anaconda3/envs/npn/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1209, in _try_put_index
index = self._next_index()
File "/home/hwt/anaconda3/envs/npn/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 512, in _next_index
return next(self._sampler_iter) # may raise StopIteration
File "/home/hwt/anaconda3/envs/npn/lib/python3.8/site-packages/torch/utils/data/sampler.py", line 226, in iter
for idx in self.sampler:
File "/home/hwt/network/neural_map_prior/tools/data_sampler.py", line 40, in iter
assert len(indices) == self.num_samples, (len(indices), self.num_samples)
AssertionError: (752, 1504)
GeoDistributedSamplerGeoDistributedSamplerGeoDistributedSampler no shuffleno shuffleno shuffle
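The numbers in the assertion (752 vs 1504) suggest the sampler's indices were partitioned for a different world size than the one used to compute num_samples, e.g. info files prepared for 8 GPUs but the test launched on 4. A minimal sketch of the invariant a DistributedSampler-style class maintains is below; it is not the repo's actual GeoDistributedSampler, just an illustration of why the assertion fires when the partitioning and the replica count disagree.

import math

class SimpleDistributedSampler:
    """Each replica gets exactly ceil(N / num_replicas) indices; the assertion in
    data_sampler.py enforces the same invariant, so it breaks if the index list
    was prepared for a different number of replicas."""

    def __init__(self, dataset_len, num_replicas, rank):
        self.dataset_len = dataset_len
        self.num_replicas = num_replicas
        self.rank = rank
        self.num_samples = math.ceil(dataset_len / num_replicas)
        self.total_size = self.num_samples * num_replicas

    def __iter__(self):
        indices = list(range(self.dataset_len))
        indices += indices[: self.total_size - len(indices)]  # pad so it divides evenly
        indices = indices[self.rank:self.total_size:self.num_replicas]
        assert len(indices) == self.num_samples, (len(indices), self.num_samples)
        return iter(indices)

    def __len__(self):
        return self.num_samples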

code release

Hi, thanks for your great work! When will you release the code? @abbyxxn
Looking forward to your reply, thanks!

Question about the train data and test data

I have noticed that the model is trained with the surrounding camera images and the global neural map prior as input. How do we obtain the neural map prior for the train data and test data? My guess is that we first run all the scenes through a base model such as BEVFormer to get it, and then use it to train BEVFormer+NMP on the same data. Do I understand this correctly?
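For context, a conceptual sketch of the per-iteration read/fuse/write cycle as suggested by the paper and by the function names visible elsewhere in this repo (take_map_prior, map_slice_float_dict): the prior is not precomputed by a separate model, it is read from and written back to a global memory while BEVFormer+NMP itself runs. All names below are illustrative assumptions, not the repo's exact API.

def train_or_test_step(global_map, images, ego_pose):
    tile_key = global_map.locate_tile(ego_pose)   # which city tile the ego vehicle is in
    prior = global_map.read(tile_key)             # current neural prior features for that tile
    bev_feat = bev_encoder(images)                # online BEV features from the surround cameras
    fused = fuse_prior(prior, bev_feat)           # e.g. GRU-style fusion of prior and observation
    global_map.write(tile_key, fused)             # update the global memory in place
    return decode_map(fused)                      # predict HD map elements from the fused features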

Question about Dataset Preparation

I have run the nuscenes_converter:

python tools/data_converter/nuscenes_converter.py --data-root your/dataset/nuScenes/

and it generated nuscenes_map_infos_train.pkl and nuscenes_map_infos_val.pkl, but I found that they are different from the provided files nuScences_map_trainval_infos_train.pkl and nuScences_map_trainval_infos_val.pkl.
Also, could you provide the script for generating train_city_infos.pkl and val_city_infos.pkl? And the nuScenes dataset structure does not show where the map expansion (v1.3) files should be placed.
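In case it helps, the nuScenes map expansion files are normally unpacked into maps/expansion/ inside the dataset root, i.e. <dataroot>/maps/expansion/<map_name>.json. A quick sanity check with the official devkit (the dataroot path below is an assumption):

from nuscenes.map_expansion.map_api import NuScenesMap

# Raises an error if the expansion json for the city is not found under maps/expansion/.
nusc_map = NuScenesMap(dataroot='data/nuscenes', map_name='boston-seaport')
print('map expansion loaded for boston-seaport')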

'NeuralMapPrior is not in the models registry'

Hi, does anyone know how to solve this problem?

2023-07-07 23:02:23,832 - mmdet - INFO - Set random seed to 0, deterministic: False
Traceback (most recent call last):
File "tools/train.py", line 280, in
main()
File "tools/train.py", line 207, in main
model = build_model(
File "/home/users/hao.yetian/test/neural_map_prior-main/mmdetection3d/mmdet3d/models/builder.py", line 84, in build_model
return build_detector(cfg, train_cfg=train_cfg, test_cfg=test_cfg)
File "/home/users/hao.yetian/test/neural_map_prior-main/mmdetection3d/mmdet3d/models/builder.py", line 57, in build_detector
return DETECTORS.build(
File "/home/users/hao.yetian/miniconda3/envs/npn/lib/python3.8/site-packages/mmcv/utils/registry.py", line 212, in build
return self.build_func(*args, **kwargs, registry=self)
File "/home/users/hao.yetian/miniconda3/envs/npn/lib/python3.8/site-packages/mmcv/cnn/builder.py", line 27, in build_model_from_cfg
return build_from_cfg(cfg, registry, default_args)
File "/home/users/hao.yetian/miniconda3/envs/npn/lib/python3.8/site-packages/mmcv/utils/registry.py", line 44, in build_from_cfg
raise KeyError(
KeyError: 'NeuralMapPrior is not in the models registry'

question about testing model

Both testing and training of the model are conducted on the nuScenes dataset. I want to prepare my own dataset to test the model. What should I modify in the code?
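Not an official answer, but the usual mmdet-style route is to implement a dataset class for your own data, register it, and point the config's data section at it. A minimal sketch follows; the class name, annotation format, and fields are all illustrative assumptions.

from torch.utils.data import Dataset
from mmdet.datasets.builder import DATASETS

@DATASETS.register_module()
class MyMapDataset(Dataset):
    """Hypothetical dataset wrapping your own camera images and map annotations."""

    def __init__(self, ann_file, pipeline=None):
        self.infos = self._load_annotations(ann_file)  # parse your own annotation format
        self.pipeline = pipeline

    def _load_annotations(self, ann_file):
        ...  # return a list of per-sample dicts (image paths, poses, map labels, ...)

    def __len__(self):
        return len(self.infos)

    def __getitem__(self, idx):
        return self.infos[idx]  # a real implementation would apply self.pipeline here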

map_slice_float_dict generation

Hi @abbyxxn, I appreciate your great work! When I tried to fine-tune NMP after training the baseline BEVFormer for 24 epochs on the v1.0-mini dataset, I ran into the following problem:

start before_train_epoch removing map_slice_float_dict content !!!!!!!!!!!!!!!
* 3 start before_train_epoch removing map_slice_int_dict content !!!!!!!!!!!!!!!
end before_train_epoch removing map_slice_int_dict content !!!!!!!!!!!!!!!
epoch: 0 self.train_epoch_point: -2
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:01<00:00, 1.31s/it]
gpu_id: 0 count: tensor(0.) gpu_city_list ['boston-seaport_map_2_1']
create empty map for: map_slice_float_dict, {'root_dir': '/localdata_ssd/map_slices/raster_global_map', 'type': 'rasterized', 'prefix': 'map_large_reso_gru_cpu', 'tile_param': {'data_type': 'float32', 'embed_dims': 256, 'num_traversals': 1}, 'batch_size': 8, 'single_gpu': False, 'global_map_tile_size': [4, 4], 'global_map_raster_size': [0.3, 0.3]}
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 5.76it/s]
gpu_id: 0 count: tensor(0) gpu_city_list ['boston-seaport_map_2_1']
create empty map for: map_slice_int_dict, {'root_dir': '/localdata_ssd/map_slices/raster_global_map', 'type': 'rasterized', 'prefix': 'map_large_reso_gru_cpu', 'tile_param': {'data_type': 'int16', 'embed_dims': 1, 'num_traversals': 1}, 'batch_size': 8, 'single_gpu': False, 'global_map_tile_size': [4, 4], 'global_map_raster_size': [0.3, 0.3]}
Traceback (most recent call last):
File "tools/train.py", line 279, in
main()
File "tools/train.py", line 275, in main
meta=meta)
File "/neural_map_prior/mmdetection3d/mmdet3d/apis/train.py", line 35, in train_model
meta=meta)
File "/usr/local/lib/python3.6/dist-packages/mmdet/apis/train.py", line 170, in train_detector
runner.run(data_loaders, cfg.workflow)
File "/usr/local/lib/python3.6/dist-packages/mmcv/runner/epoch_based_runner.py", line 127, in run
epoch_runner(data_loaders[i], **kwargs)
File "/usr/local/lib/python3.6/dist-packages/mmcv/runner/epoch_based_runner.py", line 50, in train
self.run_iter(data_batch, train_mode=True, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/mmcv/runner/epoch_based_runner.py", line 30, in run_iter
**kwargs)
File "/usr/local/lib/python3.6/dist-packages/mmcv/parallel/distributed.py", line 52, in train_step
output = self.module.train_step(*inputs[0], **kwargs[0])
File "/neural_map_prior/project/neural_map_prior/models/mapers/base_mapper.py", line 122, in train_step
loss, log_vars, num_samples = self(**data_dict)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/mmcv/runner/fp16_utils.py", line 98, in new_func
return old_func(*args, **kwargs)
File "/neural_map_prior/project/neural_map_prior/models/mapers/base_mapper.py", line 90, in forward
return self.forward_train(*args, **kwargs)
File "/neural_map_prior/project/neural_map_prior/models/mapers/original_hdmapnet.py", line 239, in forward_train
preds_dict = self.forward_single(imgs, **kwargs)
File "/neural_map_prior/project/neural_map_prior/models/mapers/original_hdmapnet.py", line 131, in forward_single
self.gm.take_map_prior(prior_bev[ib:ib + 1], token, img_meta, 'train', trans)
File "/neural_map_prior/project/neural_map_prior/models/mapers/map_global_memory.py", line 237, in take_map_prior
global_map_slice = self.map_slice_float_dict[f'map_{city_name}_{map_index[0]}_{map_index[1]}']
KeyError: 'map_singapore-hollandvillage_2_0'

When I tried to inspect the variable self.map_slice_float_dict, I found that this dict has only one key, 'map_boston-seaport_2_1'.
Did I load the data in the wrong way, or is something else going on?

Versions of some dependent libraries during environment configuration

Hello! I would like to know why I get the following error when I configure my environment according to the official requirements. Is it caused by the version of one of my dependent libraries?

Traceback (most recent call last):
  File "./tools/test.py", line 7, in <module>
    import mmcv
  File "/home/hwt/anaconda3/envs/npn/lib/python3.8/site-packages/mmcv/__init__.py", line 5, in <module>
    from .image import *
  File "/home/hwt/anaconda3/envs/npn/lib/python3.8/site-packages/mmcv/image/__init__.py", line 5, in <module>
    from .geometric import (cutout, imcrop, imflip, imflip_, impad,
  File "/home/hwt/anaconda3/envs/npn/lib/python3.8/site-packages/mmcv/image/geometric.py", line 8, in <module>
    from .io import imread_backend
  File "/home/hwt/anaconda3/envs/npn/lib/python3.8/site-packages/mmcv/image/io.py", line 19, in <module>
    from PIL import Image, ImageOps
  File "/home/hwt/anaconda3/envs/npn/lib/python3.8/site-packages/PIL/Image.py", line 68, in <module>
    from ._typing import StrOrBytesPath, TypeGuard
  File "/home/hwt/anaconda3/envs/npn/lib/python3.8/site-packages/PIL/_typing.py", line 10, in <module>
    NumpyArray = npt.NDArray[Any]
AttributeError: module 'numpy.typing' has no attribute 'NDArray'
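For what it's worth, numpy.typing.NDArray was only added in NumPy 1.21 (and numpy.typing itself in 1.20), while newer Pillow versions use it at import time. A quick check is below; the fix would be to upgrade NumPy or downgrade Pillow, whichever fits the rest of the pinned environment.

import numpy as np
import numpy.typing as npt

print(np.__version__)
# NDArray exists only for NumPy >= 1.21; on older NumPy this prints False.
print(hasattr(npt, 'NDArray'))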

NuScenes Boston Split

Hi,
thanks for the great work!
Do you provide the Boston split that you used in Table 8? Can it be enabled by setting nusc_new_split to True in nuscenes_dataset.py and generated with nusc_split.py?

NMP results download issue

Hello,
The file downloaded from the NMP results link is only 11 MB.
My guess is that Google Drive keeps failing to fully package those 6000+ files when zipping them.
Could you upload your ~1 GB zip file directly instead?
Thanks.

neural map prior + MapTR

Is it possible to combine MapTR, as an online HD map construction model, with the idea of neural map prior?
How could it be done?

How to get the min/max_geo_loc

Hello, thanks for the great work!
I have studied the code and have a question about lane_render.py: it obtains the map geo locations with the following code,

train_min_geo_loc = {'singapore-onenorth': np.array([118., 420.]) - bev_radius,
                     'boston-seaport': np.array([298., 328.]) - bev_radius,
                     'singapore-queenstown': np.array([347., 862.]) - bev_radius,
                     'singapore-hollandvillage': np.array([442., 902.]) - bev_radius}
train_max_geo_loc = {'singapore-onenorth': np.array([1232., 1777.]) + bev_radius,
                     'boston-seaport': np.array([2527., 1896.]) + bev_radius,
                     'singapore-queenstown': np.array([2686., 3298.]) + bev_radius,
                     'singapore-hollandvillage': np.array([2490., 2839.]) + bev_radius}
val_min_geo_loc = {'singapore-onenorth': np.array([118., 408.]) - bev_radius,
                   'boston-seaport': np.array([412., 555.]) - bev_radius,
                   'singapore-queenstown': np.array([524., 871.]) - bev_radius,
                   'singapore-hollandvillage': np.array([608., 2007.]) - bev_radius}
val_max_geo_loc = {'singapore-onenorth': np.array([1232., 1732.]) + bev_radius,
                   'boston-seaport': np.array([2367., 1720.]) + bev_radius,
                   'singapore-queenstown': np.array([2044., 3333.]) + bev_radius,
                   'singapore-hollandvillage': np.array([2460., 2836.]) + bev_radius}

I'm not quite sure how these specific values (such as np.array([118., 420.])) were obtained. I tried to locate them in the nuscenes-devkit based on these values, but I couldn't find the corresponding points.

Could you please explain what these values mean? And how were they obtained?

Thanks a lot !
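Not an authoritative answer, but these bounds look like per-city minimum/maximum ego translations over the train/val samples, padded by bev_radius. A sketch of how such values could be recomputed from the info pickles follows; the pickle layout and field names assumed below are illustrative and may differ from this repo's files.

import pickle
from collections import defaultdict
import numpy as np

def city_geo_bounds(info_path, bev_radius=50.0):
    """Estimate per-city min/max ego xy over all samples in an info pickle.

    Assumes each info dict carries a city name and an ego-to-global translation;
    the exact field names in this repo's pickles may differ."""
    with open(info_path, 'rb') as f:
        infos = pickle.load(f)

    xy_per_city = defaultdict(list)
    for info in infos:
        city = info['map_location']                           # assumed field name
        xy = np.asarray(info['ego2global_translation'][:2])   # assumed field name
        xy_per_city[city].append(xy)

    min_loc = {c: np.min(np.stack(v), axis=0) - bev_radius for c, v in xy_per_city.items()}
    max_loc = {c: np.max(np.stack(v), axis=0) + bev_radius for c, v in xy_per_city.items()}
    return min_loc, max_loc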
