pxiangwu / motionnet

CVPR 2020, "MotionNet: Joint Perception and Motion Prediction for Autonomous Driving Based on Bird's Eye View Maps"

cvpr2020 autonomous-driving perception motion-prediction pytorch

motionnet's Issues

Bug in nuscenes_dataloader.py [os.listdir does not guarantee the order of the returned file list on systems other than Ubuntu]

Hi,

I think I found a bug in data/nuscenes_dataloader.py:

            gt_file_paths = [
                os.path.join(seq_dir, f)
                for f in os.listdir(seq_dir)
                if os.path.isfile(os.path.join(seq_dir, f))
            ]
            num_gt_files = len(gt_file_paths)

            assert gt_file_paths == sorted(gt_file_paths), gt_file_paths  # <-- line inserted by me
            gt_dict_list = []
            for f in range(
                num_gt_files
            ):  # process the files, starting from 0.npy to 1.npy, etc

For multi_seq training, gt_file_paths should contain the two files 0.npy and 1.npy, and according to the comment on the for-loop, this list is expected to be ordered (see the assertion line I inserted and marked with a comment).
However, os.listdir does not guarantee any ordering. Quoting the Python documentation:

Python method listdir() returns a list containing the names of the entries in the directory given by path. The list is in arbitrary order. [...]

When I run the above code with the assertion on the cluster, everything is fine. However, on my local machine the assertion fails. So the ordering probably depends on the file system, OS, and os.listdir implementation, and one might simply get lucky and always see it ordered.
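
A minimal fix sketch (my own, not from the repository) is to sort the listing explicitly, e.g. numerically by file stem, assuming the 0.npy, 1.npy, ... naming used above:

    import os

    def list_gt_files(seq_dir):
        # os.listdir gives no ordering guarantee, so sort numerically by
        # the file stem so the sequence is always 0.npy, 1.npy, ...
        files = [f for f in os.listdir(seq_dir)
                 if os.path.isfile(os.path.join(seq_dir, f))]
        files.sort(key=lambda f: int(os.path.splitext(f)[0]))
        return [os.path.join(seq_dir, f) for f in files]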

That said, I started wondering whether this might even act as a kind of augmentation (a reversal in time), or whether it really matters at all: the network is applied to 0.npy and 1.npy independently, and they are only combined for the background consistency loss, where the ordering probably is relevant, since the trans_matrix by definition only works in one direction. Could you shed some light on whether this was intended?

Inference on own LiDAR data

Dear authors,

First of all, I would like to thank you for the provided code.
I understand how to preprocess and train the model, but I am in the dark about how to run inference on my own LiDAR data. We have a Velodyne Ultra Puck 32 LiDAR, as well as an Ouster OS1 and OS2, from which we can receive a byte stream. Are there certain parameters I can tweak in the model to accommodate our LiDAR setup (we are not using 360°, the angle is tilted, ...)?

Thanks in advance!

Some questions about background_temporal_consistency loss

Hi @pxiangwu,
Thanks for open-sourcing the code. I am wondering why curr_pred is flipped along dim=2 before applying F.affine_grid in the function background_temporal_consistency_loss:

# Next, translation
curr_pred = curr_pred.permute(0, 1, 3, 2).contiguous()  # swap x and y axis 
curr_pred = torch.flip(curr_pred, dims=[2]) 

grid = F.affine_grid(grid_trans_matrix_disp, curr_pred.size())
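
For context, here is a minimal, self-contained demonstration (my own, not repository code) of the coordinate convention F.affine_grid/F.grid_sample use: the translation column of theta is expressed in normalized [-1, 1] coordinates, with x along the last (width) dimension, which is presumably why the BEV axes need swapping and flipping first:

    import torch
    import torch.nn.functional as F

    # A normalized x-shift of -0.5 on a width-4 map moves the content one
    # pixel to the right, since pixel spacing is 2 / W = 0.5
    # (align_corners=False, PyTorch >= 1.3 assumed).
    x = torch.zeros(1, 1, 4, 4)
    x[0, 0, 1, 1] = 1.0
    theta = torch.tensor([[[1.0, 0.0, -0.5],
                           [0.0, 1.0, 0.0]]])
    grid = F.affine_grid(theta, x.size(), align_corners=False)
    y = F.grid_sample(x, grid, align_corners=False)
    print(y[0, 0])  # the nonzero entry now sits at row 1, column 2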

Inactive bug in classify_speed_level

In the following snippet I added an assertion with a NOTE comment. Is it correct?
The assertion never throws, which is why I describe this as an inactive bug ;)

def classify_speed_level(
    all_disp_field_gt, total_future_sweeps=20, future_frame_skip=0
):
    """
    Classify each cell into static (possibly background) or moving.
    """
    # First, compute the static and moving cell masks
    all_disp_field_gt_norm = np.linalg.norm(all_disp_field_gt, ord=2, axis=-1)

    # Every future_frame_skip frames, if the movement of grid cells does not exceed this thresh (unit: meters),
    # then they are considered as static. This thresh is set to be the maximum perturbation for 1 second.
    upper_thresh = 0.2
    upper_bound = (future_frame_skip + 1) / 20 * upper_thresh
    selected_future_sweeps = np.arange(
        0, total_future_sweeps + 1, future_frame_skip + 1
    )
    selected_future_sweeps = selected_future_sweeps[1:]

    assert (
        future_frame_skip == 0
    )  # NOTE: the following selection does only work if no frames were skipped
    future_sweeps_disp_field_gt_norm = all_disp_field_gt_norm[
        -len(selected_future_sweeps) :, ...
    ]
    ...
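
To illustrate (my own example, not repository code) why the tail slice only matches the intended sweeps when future_frame_skip == 0:

    import numpy as np

    # With future_frame_skip = 1, the intended sweeps are 2, 4, ..., 20,
    # but a tail slice of length 10 grabs the last 10 consecutive frames
    # (sweeps 11..20) instead of every second frame.
    total_future_sweeps, future_frame_skip = 20, 1
    selected = np.arange(0, total_future_sweeps + 1, future_frame_skip + 1)[1:]
    print(selected)  # [ 2  4  6  8 10 12 14 16 18 20]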

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation

python train_multi_seq.py --data /DataSet/ --batch 8 --nepoch 45 --nworker 4 --use_bg_tc --reg_weight_bg_tc 0.1 --use_fg_tc --reg_weight_fg_tc 2.5 --use_sc --reg_weight_sc 15.0 --log

Namespace(batch=8, data='/DataSet/', log=True, logpath='', nepoch=45, nn_sampling=False, nworker=4, reg_weight_bg_tc=0.1, reg_weight_fg_tc=2.5, reg_weight_sc=15.0, resume='', use_bg_tc=True, use_fg_tc=True, use_sc=True)
device number 1
data root: /DataSet/
Training dataset size: 17065
Epoch 1, learning rate 0.0016
Traceback (most recent call last):
  File "train_multi_seq.py", line 610, in <module>
    main()
  File "train_multi_seq.py", line 183, in main
    = train(model, criterion, trainloader, optimizer, device, epoch)
  File "train_multi_seq.py", line 226, in train
    motion_pred, trans_matrices, pixel_instance_map)
  File "train_multi_seq.py", line 356, in compute_and_bp_loss
    loss.backward()
  File "/home/redrafi/.local/lib/python3.7/site-packages/torch/tensor.py", line 102, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/home/redrafi/.local/lib/python3.7/site-packages/torch/autograd/__init__.py", line 90, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation
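
For reference, a minimal repro of this class of error (my own sketch, unrelated to MotionNet's code): an in-place op overwriting a tensor whose values the backward pass still needs.

    import torch

    # sigmoid's backward needs its *output*, so an in-place add on that
    # output triggers exactly this RuntimeError.
    a = torch.randn(3, requires_grad=True)
    b = a.sigmoid()
    b.add_(1.0)
    b.sum().backward()  # RuntimeError: one of the variables needed ...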

My env details:

Name / Version / Build / Channel

_libgcc_mutex 0.1 main
_openmp_mutex 4.5 1_gnu
blas 1.0 mkl
bzip2 1.0.8 h7f98852_4 conda-forge
ca-certificates 2021.7.5 h06a4308_1
cachetools 4.2.2 pyhd8ed1ab_0 conda-forge
cairo 1.16.0 hf32fb01_1
certifi 2021.5.30 py37h06a4308_0
cffi 1.14.5 py37h261ae71_0
cudatoolkit 9.0 h13b8566_0
cycler 0.10.0 py37_0
dbus 1.13.18 hb2f20db_0
expat 2.4.1 h2531618_2
ffmpeg 4.0 hcdf2ecd_0
fontconfig 2.13.1 h6c09931_0
freeglut 3.2.1 h9c3ff4c_2 conda-forge
freetype 2.10.4 h5ab3b9f_0
glib 2.68.2 h36276a3_0
graphite2 1.3.13 h58526e2_1001 conda-forge
gst-plugins-base 1.14.0 h8213a91_2
gstreamer 1.14.0 h28cd5cc_2
harfbuzz 1.8.8 hffaf4a1_0
hdf5 1.10.2 hc401514_3 conda-forge
icu 58.2 he6710b0_3
intel-openmp 2021.2.0 h06a4308_610
jasper 2.0.14 h07fcdf6_1
joblib 0.17.0 py_0 anaconda
jpeg 9b h024ee3a_2
kiwisolver 1.3.1 py37h2531618_0
lcms2 2.12 h3be6417_0
ld_impl_linux-64 2.35.1 h7274673_9
libblas 3.9.0 9_mkl conda-forge
libcblas 3.9.0 9_mkl conda-forge
libffi 3.3 he6710b0_2
libgcc-ng 9.3.0 h5101ec6_17
libgfortran 3.0.0 1 conda-forge
libgfortran-ng 7.3.0 hdf63c60_0 anaconda
libglu 9.0.0 he1b5a44_1001 conda-forge
libgomp 9.3.0 h5101ec6_17
liblapack 3.9.0 9_mkl conda-forge
libopencv 3.4.2 hb342d67_1
libopus 1.3.1 h7f98852_1 conda-forge
libpng 1.6.37 hbc83047_0
libstdcxx-ng 9.3.0 hd4cf53a_17
libtiff 4.2.0 h85742a9_0
libuuid 1.0.3 h1bed415_2
libvpx 1.7.0 h439df22_0
libwebp-base 1.2.0 h27cfd23_0
libxcb 1.14 h7b6447c_0
libxml2 2.9.12 h03d6c58_0
llvmlite 0.36.0 py37hf484d3e_0 numba
lz4-c 1.9.3 h2531618_0
matplotlib 3.3.4 py37h06a4308_0
matplotlib-base 3.3.4 py37h62a2d02_0
mkl 2021.2.0 h06a4308_296
mkl-service 2.3.0 py37h27cfd23_1
mkl_fft 1.3.0 py37h42c9631_2
mkl_random 1.2.1 py37ha9443f7_2
ncurses 6.2 he6710b0_1
ninja 1.10.2 hff7bd54_1
numba 0.53.1 np1.11py3.7h04863e7_g97fe221b3_0 numba
numpy 1.21.0 py37h038b26d_0 conda-forge
olefile 0.46 py37_0
opencv 3.4.2 py37h6fd60c2_1
openssl 1.1.1k h27cfd23_0
pcre 8.45 h295c915_0
pillow 8.0.0 py37h9a89aac_0 anaconda
pip 21.1.3 py37h06a4308_0
pixman 0.40.0 h36c2ea0_0 conda-forge
py-opencv 3.4.2 py37hb342d67_1
pycparser 2.20 py_2
pyparsing 2.4.7 pyhd3eb1b0_0
pyqt 5.9.2 py37h05f1152_2
pyquaternion 0.9.9 pypi_0 pypi
python 3.7.10 h12debd9_4
python-dateutil 2.8.1 pyhd3eb1b0_0
python_abi 3.7 2_cp37m conda-forge
pytorch 1.1.0 py3.7_cuda9.0.176_cudnn7.5.1_0 pytorch
qt 5.9.7 h5867ecd_1
quaternion 2021.6.9.13.34.11 py37h5e8e339_0 conda-forge
readline 8.1 h27cfd23_0
scikit-learn 0.23.2 py37h0573a6f_0 anaconda
scipy 1.6.2 py37had2a1c9_1
setuptools 52.0.0 py37h06a4308_0
sip 4.19.8 py37hf484d3e_0
six 1.16.0 pyhd3eb1b0_0
sqlite 3.36.0 hc218d9a_0
threadpoolctl 2.1.0 pyh5ca1d4c_0 anaconda
tk 8.6.10 hbc83047_0
torchvision 0.3.0 py37_cu9.0.176_1 pytorch
tornado 6.1 py37h27cfd23_0
tqdm 4.61.1 pyhd8ed1ab_0 conda-forge
tzdata 2021a h52ac0ba_0
wheel 0.36.2 pyhd3eb1b0_0
xorg-fixesproto 5.0 h7f98852_1002 conda-forge
xorg-inputproto 2.3.2 h7f98852_1002 conda-forge
xorg-kbproto 1.0.7 h7f98852_1002 conda-forge
xorg-libx11 1.7.2 h7f98852_0 conda-forge
xorg-libxau 1.0.9 h7f98852_0 conda-forge
xorg-libxext 1.3.4 h7f98852_1 conda-forge
xorg-libxfixes 5.0.3 h7f98852_1004 conda-forge
xorg-libxi 1.7.10 h7f98852_0 conda-forge
xorg-xextproto 7.3.0 h7f98852_1002 conda-forge
xorg-xproto 7.0.31 h7f98852_1007 conda-forge
xz 5.2.5 h7b6447c_0
zlib 1.2.11 h7b6447c_3
zstd 1.4.9 haebb681_0

Can other datasets be used with this model?

Hello!
I want to know whether other datasets can be used with this model.
Is it feasible to convert the annotation files into nuScenes format?
I think rewriting the dataset import may be a solution.

Some questions about the MotionNet post-processing

First, thanks for your work!
There is a part of the post-processing that confuses me:

# We only show the cells having one-hot category vectors
max_prob = np.amax(pixel_cat_map_gt, axis=-1)
filter_mask = max_prob == 1.0
pixel_cat_map = np.argmax(pixel_cat_map_gt, axis=-1) + 1  # category starts from 1 (background), etc
pixel_cat_map = (pixel_cat_map * non_empty_map * filter_mask).astype(np.int)

cat_pred = np.argmax(cat_pred, axis=0) + 1
cat_pred = (cat_pred * non_empty_map * filter_mask).astype(np.int)

The cat_pred tensor output by MotionNet looks like the per-pixel category of the LiDAR BEV map,
but what does the filter_mask mean in this part?
cat_pred = (cat_pred * non_empty_map).astype(np.int) also outputs a normal result.

The filter_mask tensor depends on the pixel_cat_map_gt values, but if I test MotionNet on my own LiDAR data, I have no GT box annotations, so filter_mask cannot be computed.
I am not sure whether my understanding is correct; hoping for a reply!
@pxiangwu
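
For reference, a sketch (my own, assuming filter_mask exists only to restrict the visualization to cells with one-hot GT vectors) of the same post-processing at inference time, where no GT is available; the shapes are dummies:

    import numpy as np

    # Without pixel_cat_map_gt there is no filter_mask, so mask the
    # predictions with the non-empty-cell map alone.
    num_classes, H, W = 5, 256, 256
    cat_pred = np.random.rand(num_classes, H, W)   # dummy network output
    non_empty_map = np.random.rand(H, W) > 0.5     # dummy occupancy mask

    cat_idx = np.argmax(cat_pred, axis=0) + 1      # categories start from 1
    cat_idx = (cat_idx * non_empty_map).astype(np.int64)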

Problems training train_single_seq

Impressive work.

When I tried to train the model from scratch without the spatio-temporal consistency losses, I found that the training dataset could not be imported successfully. It turned out there is a typo in line 171 of MotionNet/data/nuscenes_dataloader.py: 'os.path.isfile(os.path.join(self.dataset_root, d))]' should be 'os.path.isdir(os.path.join(self.dataset_root, d))]'.
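
In context, the corrected listing would look like this sketch (list_sequence_dirs is my own illustrative wrapper, not repository code):

    import os

    # Keep per-sequence subdirectories, not files: isdir instead of isfile.
    def list_sequence_dirs(dataset_root):
        return [os.path.join(dataset_root, d)
                for d in os.listdir(dataset_root)
                if os.path.isdir(os.path.join(dataset_root, d))]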

By the way, it would be very helpful if you could provide a pre-trained single-seq model. In addition, how long does it take to train the model?

The datasets of gen_data.py

Hello!
The trainval dataset on the nuScenes official website is divided into 10 parts. Do I need to download all of them?

Hello! When I run gen_data.py, there is a problem with LidarPointCloud (from nuscenes.utils.data_classes import LidarPointCloud):

Traceback (most recent call last):
  File "E:/MotionNet/MotionNet/data/gen_data.py", line 401, in <module>
    gen_data()
  File "E:/MotionNet/MotionNet/data/gen_data.py", line 83, in gen_data
    LidarPointCloud.from_file_multisweep_bf_sample_data(nusc, curr_sample_data,
AttributeError: type object 'LidarPointCloud' has no attribute 'from_file_multisweep_bf_sample_data'

It seems that the LidarPointCloud class does not have 'from_file_multisweep_bf_sample_data'.

Is the background loss filtering out dynamic objects?

Hi again,

this is a question related to the paper; after skimming the code I am still not quite clear on it.
Your background temporal consistency loss in equation (3) of the paper seems reasonable for static points, but not for dynamic ones, because you specifically write that the alignment transformation T is rigid and therefore cannot account for object motion.
Are you filtering out cells that are dynamic/non-background for this loss?
Also, why did you need a complete second set of N motion maps for the background loss?
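
For concreteness, a masked version of such a loss might look like the following sketch (my own, not the paper's implementation), where static_mask marks background cells and warped_pred_tm1 stands for the previous prediction warped by the rigid transform T:

    import torch

    # Restrict a temporal-consistency loss to static cells, where a rigid
    # ego-motion warp is valid; dynamic cells are masked out.
    def masked_consistency_loss(pred_t, warped_pred_tm1, static_mask):
        # pred_t, warped_pred_tm1: (B, 2, H, W) displacement maps
        # static_mask: (B, 1, H, W), 1.0 for background cells, 0.0 otherwise
        diff = torch.abs(pred_t - warped_pred_tm1) * static_mask
        return diff.sum() / static_mask.sum().clamp(min=1.0)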

On a side note:
In a different issue #4 (comment) you wrote:

The training usually takes less than one day on a single RTX 2080 Ti GPU.

with that GPU only having around 11 GB. However, even on my Tesla V100 16 GB GPU, training with train_multi_seq_MGDA.py ran out of memory at the very beginning. Running the complete training with 2 GPUs worked, though. Do you have an idea what the reason could be?

Thanks again for your answer.

BrokenPipeError during training process

Hi Wu,

Thanks for your excellent work! When I train the network with the provided training script, it always errors out at some batch or epoch. The error information is:
#########################################
Traceback (most recent call last):
  File "train_multi_seq.py", line 611, in <module>
    main()
  File "train_multi_seq.py", line 184, in main
    = train(model, criterion, trainloader, optimizer, device, epoch)
  File "train_multi_seq.py", line 212, in train
    for i, data in enumerate(trainloader, 0):
  File "/home/aiserver/anaconda3/envs/MotionNet/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 582, in __next__
    return self._process_next_batch(batch)
  File "/home/aiserver/anaconda3/envs/MotionNet/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 608, in _process_next_batch
    raise batch.exc_type(batch.exc_msg)
BrokenPipeError: Traceback (most recent call last):
  File "/home/aiserver/anaconda3/envs/MotionNet/lib/python3.7/site-packages/torch/utils/data/_utils/worker.py", line 99, in _worker_loop
    samples = collate_fn([dataset[i] for i in batch_indices])
  File "/home/aiserver/anaconda3/envs/MotionNet/lib/python3.7/site-packages/torch/utils/data/_utils/worker.py", line 99, in <listcomp>
    samples = collate_fn([dataset[i] for i in batch_indices])
  File "/home/aiserver/FangYang/MotionNet/data/nuscenes_dataloader.py", line 58, in __getitem__
    if idx in self.cache:
  File "<string>", line 2, in __contains__
  File "/home/aiserver/anaconda3/envs/MotionNet/lib/python3.7/multiprocessing/managers.py", line 818, in _callmethod
    conn.send((self._id, methodname, args, kwds))
  File "/home/aiserver/anaconda3/envs/MotionNet/lib/python3.7/multiprocessing/connection.py", line 206, in send
    self._send_bytes(_ForkingPickler.dumps(obj))
  File "/home/aiserver/anaconda3/envs/MotionNet/lib/python3.7/multiprocessing/connection.py", line 404, in _send_bytes
    self._send(header + buf)
  File "/home/aiserver/anaconda3/envs/MotionNet/lib/python3.7/multiprocessing/connection.py", line 368, in _send
    n = write(self._handle, buf)
BrokenPipeError: [Errno 32] Broken pipe

#############################################
I have no idea why this happened. Could you help me figure it out? Thank you!
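
One thing worth trying (my suggestion, not from this thread): since the failure occurs while a worker process talks to the shared cache, running the dataloader single-process can rule out the multiprocessing cache as the culprit, e.g.:

python train_multi_seq.py --data /DataSet/ --batch 8 --nepoch 45 --nworker 0 --log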

Training Set size not 17065 for NuScenes after preprocessing

First of all, thank you for releasing your code and your great work.
I have a short question regarding your MotionNet, as I am trying to reproduce your numbers. When I run the pre-processing script over the NuScenes folder everything seems to work fine and the output looks also good, with a rough training dataset size of 19GB. You reported in your data/readme.md a total preprocessed training dataset size of 26,5 GB on your system. Is this difference realistic? Also, when I start the MGDA with ST consistency loss as shown in the readme.md, the first warning I get is "The size of training dataset is not 17065" and shortly after I am told that my Training dataset size is instead 6951. So a lot of numbers do not add up for me here (if you have 17k samples in 26GB and I less than half of samples still in 19GB, and also where are the missing 10k samples).
Maybe you can help me out here or have an idea of what is different?

Could you provide your command line when running the code? Let me check what might cause this inconsistency.

Everything I did was done in a venv with Python 3.6.9 and the required pip dependencies on an Ubuntu 18.04 system.
The command line I ran was taken directly from the readme.md, where I only replaced the directories I used:

python $SRC_DIR/data/gen_data.py --root $INPUT_DATADIR/nuscenes --split train --savepath $INPUT_DATADIR/nuscenes_preprocessed/train

The starting output looks like the following:

======
Loading NuScenes tables for version v1.0-trainval...
23 category,
8 attribute,
4 visibility,
64386 instance,
12 sensor,
10200 calibrated_sensor,
2631083 ego_pose,
68 log,
850 scene,
34149 sample,
2631083 sample_data,
1166187 sample_annotation,
4 map,
Done loading in 36.6 seconds.
======
Reverse indexing ...
Done reverse indexing in 9.8 seconds.
======
Total number of scenes: 850
Split: train, which contains 500 scenes.
Processing scene 411 ...
  >> Finish sample: 0, sequence 0

When I now start a training with MGDA and ST consistency loss as described in the readme.md:

python train_multi_seq_MGDA.py --data $INPUT_DATADIR/nuscenes_preprocessed/train --batch 8 --nepoch 70 --nworker 4 --use_bg_tc --reg_weight_bg_tc 0.1 --use_fg_tc --reg_weight_fg_tc 2.5 --use_sc --reg_weight_sc 15.0 --reg_weight_cls 2.0 --log

I get the following output:

Namespace(batch=8, data='/xxxINPUT_DATADIRxxx(postedited for this issue)/nuscenes_preprocessed/train', log=True, logpath='', nepoch=70, nn_sampling=False, nworker=4, reg_weight_bg_tc=0.1, reg_weight_cls=2.0, reg_weight_fg_tc=2.5, reg_weight_sc=15.0, resume='', use_bg_tc=True, use_fg_tc=True, use_sc=True)
device number 2
data root: /xxxINPUT_DATADIRxxx/nuscenes_preprocessed/train
/xxxSRC_DIRxxx/data/nuscenes_dataloader.py:40: UserWarning: >> The size of training dataset is not 17065.

  warnings.warn(">> The size of training dataset is not 17065.\n")
Training dataset size: 6951
Epoch 1, learning rate 0.002
[1/0]   Disp 0.106501,  Obj_Cls 0.110858,       Motion_Cls 0.057613,    bg_tc 0.8646359,        sc 0.0885072,   fg_tc 0.0126457
.
.
.

So as you can see, there is no real problem with the preprocessing or the start of the training; however, having 10k samples missing compared to the published results makes reproducing them impossible.

Also, after some time the training actually fails, but I cannot tell whether it is related to this issue (I am not a pickle expert):

.
.
.
[1/482] Disp 0.035911,  Obj_Cls 0.068191,       Motion_Cls 0.014474,    bg_tc 0.0069933,        sc 0.0002731,   fg_tc 0.0000397
Traceback (most recent call last):
  File "train_multi_seq_MGDA.py", line 1042, in <module>
    main()
  File "train_multi_seq_MGDA.py", line 269, in main
    models, criterion, trainloader, optimizers, device, epoch
  File "train_multi_seq_MGDA.py", line 321, in train
    for i, data in enumerate(trainloader, 0):
  File "/xxxSRC_DIRxxx/.venv/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 582, in __next__
    return self._process_next_batch(batch)
  File "/xxxSRC_DIRxxx/.venv/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 608, in _process_next_batch
    raise batch.exc_type(batch.exc_msg)
_pickle.UnpicklingError: Traceback (most recent call last):
  File "/xxxSRC_DIRxxx/.venv/lib/python3.6/site-packages/torch/utils/data/_utils/worker.py", line 99, in _worker_loop
    samples = collate_fn([dataset[i] for i in batch_indices])
  File "/xxxSRC_DIRxxx/.venv/lib/python3.6/site-packages/torch/utils/data/_utils/worker.py", line 99, in <listcomp>
    samples = collate_fn([dataset[i] for i in batch_indices])
  File "/xxxSRC_DIRxxx/data/nuscenes_dataloader.py", line 68, in __getitem__
    gt_data_handle = np.load(gt_file_path, allow_pickle=True)
  File "/xxxSRC_DIRxxx/.venv/lib/python3.6/site-packages/numpy/lib/npyio.py", line 440, in load
    pickle_kwargs=pickle_kwargs)
  File "/xxxSRC_DIRxxx/.venv/lib/python3.6/site-packages/numpy/lib/format.py", line 732, in read_array
    array = pickle.load(fp, **pickle_kwargs)
_pickle.UnpicklingError: invalid load key, '\x00'.
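
A diagnostic sketch (my own suggestion, not from the thread): the invalid load key '\x00' points at a corrupted .npy file, so scanning the preprocessed folder for unloadable files might explain both the crash and the reduced dataset size:

    import os
    import numpy as np

    # Walk the preprocessed dataset and collect every .npy file that fails
    # to load; such files would raise exactly this UnpicklingError.
    def find_corrupt_npy(root):
        bad = []
        for dirpath, _, files in os.walk(root):
            for name in files:
                if name.endswith('.npy'):
                    path = os.path.join(dirpath, name)
                    try:
                        np.load(path, allow_pickle=True)
                    except Exception as err:
                        bad.append((path, repr(err)))
        return bad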
