
hasanirtiza / pedestron

679 stars · 18 watchers · 157 forks · 66.34 MB

[Pedestron] Generalizable Pedestrian Detection: The Elephant In The Room. @ CVPR2021

Home Page: https://openaccess.thecvf.com/content/CVPR2021/papers/Hasan_Generalizable_Pedestrian_Detection_The_Elephant_in_the_Room_CVPR_2021_paper.pdf

License: Apache License 2.0

Python 74.47% C++ 1.36% Cuda 2.68% Shell 0.04% MATLAB 3.03% Jupyter Notebook 18.27% Dockerfile 0.02% Cython 0.12%
pedestrian-detection autonomous-driving citypersons caltech eurocity-persons retinanet cascade-rcnn faster-rcnn benchmarking datasets-preparation

pedestron's People

Contributors

abdulhannankhan, chensnathan, dandax123, dongdem, donnyyou, f-fl0, gokulanv, hasanirtiza, hellock, innerlee, jokoe66, libuyu, lindahua, liushuchun, ljpadam, luxiin, lyuwenyu, myownskyw7, oceanpang, patrick-llgc, sovrasov, sty-yyj, thangvubk, tjsongzw, wswday, xvjiarui, yhcao6, youkaichao, zehaos, zhihuagao


pedestron's Issues

Pretrained model for Wider Pedestrian

Hi, thanks for your nice work. Are you planning to add a pretrained model trained purely on Wider Pedestrian? Also, what is the difference between CrowdHuman 1 and CrowdHuman 2 in the pretrained models?

Question about Caltech dataset

Hello, I have a question. When evaluating on the Caltech dataset, you use the new Caltech annotations provided by Shanshan Zhang, which are in .txt format, while the Caltech MATLAB evaluation tool uses the original annotations in .vbb format. How can I use these new annotations for evaluation? I need your help, thank you very much!

Do Citypersons and ECP datasets contain instance masks?

Hi, thank you for your great work! I'm new to pedestrian detection, and I looked through the annotation files of the CityPersons and ECP datasets but didn't find instance mask annotations. So do these datasets contain instance masks, and how do you train Mask R-CNNs on them?

Thanks in advance!

How should the CityPersons results in the Elephant in the Room paper be reproduced?

I notice that this repository trains CityPersons for 240 epochs by default. Private communication with one of the authors on the topic of reproducing CityPersons results stated that:

we used the best checkpoint inside an interval (from epochs 10-20 for table 4 and 7), to report results on the validation set

Is configs/elephant/cityperson/cascade_hrnet.py incorrect in training for 240 epochs if the best checkpoints are taken from epochs 10-20? What setting should we use to train and select a checkpoint that reproduces the CityPersons results in Tables 4 and 7?
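For anyone trying this, below is a minimal sketch of the config edit implied by that checkpoint-selection protocol; the field names follow standard mmdetection 1.x conventions and are an assumption about this fork, not an author-confirmed recipe.

# Hypothetical edit inside configs/elephant/cityperson/cascade_hrnet.py
# (standard mmdetection 1.x field names; verify against the actual file).
total_epochs = 20                     # train only over the interval the authors selected from
checkpoint_config = dict(interval=1)  # keep every epoch so checkpoints 10-20 can be compared
# Each saved checkpoint can then be evaluated with tools/test_city_person.py
# and the best MR^-2 on the validation set reported.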

CPU Inference

Hi,

Thank you for the great work. Does this repository support inference on CPU for available pedestrian/person detection models? From the installation instructions, it looks like it can only run on GPUs. If that is the case, are you planning to add CPU Inference Support in the near future?

Stay Safe,
M Maaz

What is the mean_teacher flag for?

Hi,

I found that training outputs two .pth files, one for the teacher and one for the student. Do you have any reference explaining the difference between them?
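For context, "mean teacher" usually refers to keeping an exponential moving average (EMA) of the student's weights; the sketch below only illustrates that update rule and is not taken from this repository's training hook.

import torch

@torch.no_grad()
def ema_update(teacher, student, alpha=0.999):
    # Illustrative mean-teacher step: teacher parameters track an EMA of the
    # student parameters. `teacher` and `student` are nn.Modules with the same
    # architecture; alpha is the EMA decay (a typical but assumed value).
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(alpha).add_(s_param, alpha=1.0 - alpha)

Which of the two saved files corresponds to the EMA weights is best confirmed against the training hook itself.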

AP value for Crowd Human

Hi, thank you for this great work.

Sorry to bother you again, but I have an issue when calculating the AP value for the CrowdHuman dataset using the epoch_19.pth.stu pretrained model (CrowdHuman 2). The AP value I got is 12.40, which is far from the one reported in the repo (84.1).

Could you share, if possible, the program or procedure used to obtain this value?

Thank you in advance.

Rider effect

Hi @hasanirtiza ,
Sorry to bother you again; I have a question about the WiderPedestrian and CityPersons datasets.

  1. In the WiderPedestrian 2019 dataset, both pedestrian and rider have the same label.
  2. In the CityPersons dataset, the pedestrian label is 1 and the rider label is 2.

When we train on WiderPedestrian 2019 and test on CityPersons, the model detects riders as pedestrians. Does this have any side effect on the CityPersons test?

Thanks
Meixitu

Segmentation fault

Thanks for your great project, but I ran into a problem with it.
I just cloned the project and downloaded some pre-trained models, but an error happened.
I'd appreciate it if you could help me.

Describe the bug
I've tried 3 of the pretrained models you provided, but I got the same error:
'Class names are not saved in the checkpoint's meta data'

Reproduction

  1. What command or script did you run?
     python tools/demo.py configs/elephant/cityperson/cascade_hrnet.py ./models_pretrained/epoch_5.pth.stu demo/ result_demo/
  2. Did you make any modifications to the code or config? Do you understand what you modified?
     No.
  3. What dataset did you use?
     I just cloned your project and ran the demo.

Environment
  • OS: Red Hat
  • GCC version: 4.8.5 20150623 (Red Hat 4.8.5-39)
  • PyTorch version: 1.4.0
  • How you installed PyTorch: pip
  • GPU model: P100
  • CUDA and cuDNN versions: CUDA 9 and cuDNN 7.1.x
  • [optional] Other information that may be related (such as $PATH, $LD_LIBRARY_PATH, $PYTHONPATH, etc.)

Error traceback
If applicable, paste the error traceback here.

['demo/1.png', 'demo/2.png', 'demo/3.png']
The model and loaded state dict do not match exactly

unexpected key in source state_dict: mask_head.0.conv_res.conv.weight, mask_head.0.conv_res.conv.bias, mask_head.1.conv_res.conv.weight, mask_head.1.conv_res.conv.bias, mask_head.2.conv_res.conv.weight, mask_head.2.conv_res.conv.bias

/DATA/app/OCR/Pedestron/Pedestron/tools/../mmdet/apis/inference.py:40: UserWarning: Class names are not saved in the checkpoint's meta data, use COCO classes by default.
  warnings.warn('Class names are not saved in the checkpoint\'s '
[                                                  ] 0/3, elapsed: 0s, ETA:/DATA/app/anaconda3/envs/pytorch886/lib/python3.6/site-packages/torch/nn/functional.py:2539: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
  "See the documentation of nn.Upsample for details.".format(mode))
Segmentation fault

Bug fix

Pre-trained model about issue #22

@hasanirtiza Thanks for your detailed explanation of pre-training and fine-tuning! It really works, and I got a result of 8.53 MR^-2 on CityPersons, but that is still about 1 MR^-2 point behind your best result. I have tried a range of learning rates (from 0.0025 to 0.0005 for 1 image and 1 GPU) for fine-tuning and still only got 8.53 MR^-2 on CityPersons. So I suspect my model pre-trained on Wider Pedestrian and ECP before fine-tuning is not very good. Could you provide the model that you pre-trained on Wider Pedestrian and ECP before the fine-tuning step described in Sec. 6.2 of your paper?

By the way, from my experiments on CityPersons, a model pre-trained only on Wider Pedestrian is much better than the model pre-trained on WIDER Person and Wider Pedestrian that you provided. If convenient, could you provide a model pre-trained only on Wider Pedestrian? I think it would also be very helpful for generic human detection.

Clearer mapping between weights and configs

I may have just missed it, but it's often not clear to me which weight file hosted on Google Drive corresponds to which config script in the repo. Could you provide a table for this?

Thanks,

What about the performance of VGG-16-based Faster R-CNN?

I'm grateful that you open-sourced such a convenient pedestrian detection toolbox. I have previously tried training Adapted Faster R-CNN (VGG-16 based, as described in the CityPersons paper) on the CityPersons dataset with the official mmdetection repo, where I got a log-average miss rate (MR) of ~16%. However, the MR reported in the CityPersons paper is ~15%. Have you trained a VGG-16-based Faster R-CNN, and if so, what were the evaluation results?

RetinaNet accuracy

Hi @hasanirtiza ,
In your pretrained RetinaNet model, the CityPersons accuracy is not that good. Did you use the CrowdHuman and WiderPedestrian datasets to pre-train it?

Thanks
Meixitu

Can it be run on CPU?

Hi, I tried to run Pedestron on CPU but got a CUDA error (even after changing the device to cpu). Is there any way to run it on CPU?
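Not an official answer, but here is a minimal sketch of what a CPU attempt might look like, assuming the bundled mmdet.apis exposes init_detector/inference_detector with a device argument as in upstream mmdetection 1.x; note that the custom CUDA extensions built by setup.py (NMS, RoIAlign, etc.) may still require a GPU, which would explain the error.

# Hypothetical CPU attempt; not guaranteed to work because the compiled CUDA ops
# used at test time may have no CPU fallback in this code base.
from mmdet.apis import init_detector, inference_detector

config = 'configs/elephant/cityperson/cascade_hrnet.py'
checkpoint = './models_pretrained/epoch_5.pth.stu'

model = init_detector(config, checkpoint, device='cpu')  # assumes `device` is supported here
result = inference_detector(model, 'demo/1.png')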

bounding box coordinates

I'm so grateful for the work you have done and all the effort you put into this project, but I still have a small issue:
I want to evaluate Pedestron's results on my own dataset (inference evaluation) using the mAP metric, and for this I need the bounding box coordinates. Is there a way to get them?

Thank you in advance.
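For reference, here is a minimal sketch of how one might read box coordinates from an mmdetection-style result, assuming inference_detector returns one (N, 5) array per class with columns [x1, y1, x2, y2, score] (and a (bbox, segm) tuple for the Mask R-CNN variants).

import numpy as np

def extract_boxes(result, score_thr=0.3):
    # `result` is assumed to be the output of inference_detector: a list with one
    # (N, 5) numpy array per class, or a (bbox_result, segm_result) tuple for
    # Cascade Mask R-CNN style models.
    if isinstance(result, tuple):
        result = result[0]
    boxes = np.vstack(result)          # stack all classes into one array
    keep = boxes[:, 4] >= score_thr    # filter by detection score
    return boxes[keep]                 # rows are [x1, y1, x2, y2, score]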

When I tried to train the cascade_hrnet network, I got an error.

*rpn_loss_inputs, gt_bboxes_ignore=gt_bboxes_ignore)

File "/media/xx/xx/Pedestron/mmdet/models/anchor_heads/rpn_head.py", line 51, in loss
gt_bboxes_ignore=gt_bboxes_ignore)
File "/media/xx/xx/Pedestron/mmdet/core/fp16/decorators.py", line 127, in new_func
return old_func(*args, **kwargs)
File "/media/xx/xx/Pedestron/mmdet/models/anchor_heads/anchor_head.py", line 179, in loss
sampling=self.sampling)
File "/media/xx/xx/Pedestron/mmdet/core/anchor/anchor_target.py", line 63, in anchor_target
unmap_outputs=unmap_outputs)
File "/media/xx/xx/Pedestron/mmdet/core/utils/misc.py", line 24, in multi_apply
return tuple(map(list, zip(*map_results)))
File "/media/xx/xx/Pedestron/mmdet/core/anchor/anchor_target.py", line 108, in anchor_target_single
cfg.allowed_border)
File "/media/xx/xx/Pedestron/mmdet/core/anchor/anchor_target.py", line 176, in anchor_inside_flags
(flat_anchors[:, 2] < img_w + allowed_border) &
RuntimeError: Expected object of scalar type Byte but got scalar type Bool for argument #2 'other'
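This error is typical of running older mmdetection code on newer PyTorch, where comparison operators return Bool tensors while the valid flags are still Byte (uint8), so the bitwise-and mixes dtypes. Below is a hedged sketch of the usual cast-everything-to-bool workaround in anchor_inside_flags (variable names other than those visible in the traceback are assumptions).

# Hypothetical patch inside mmdet/core/anchor/anchor_target.py::anchor_inside_flags,
# casting the flags to bool so the bitwise-and operates on a single dtype.
inside_flags = (
    valid_flags.bool()
    & (flat_anchors[:, 0] >= -allowed_border)
    & (flat_anchors[:, 1] >= -allowed_border)
    & (flat_anchors[:, 2] < img_w + allowed_border)
    & (flat_anchors[:, 3] < img_h + allowed_border)
)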

How to reproduce your best result on CityPersons?

I have tried many times to reproduce your Cascade Mask R-CNN results with the pre-trained model and the config you provided, but got a much higher MR than your benchmark (about 14.74% on the Reasonable subset). Could you share the details of how to reproduce your best result on CityPersons?

Could you share the settings for RetinaNet?

Thank you for sharing your nice work! Could you share how you chose the settings for RetinaNet? I have tried some, but the results were not good, so I'm here again looking for help.

HTC with the HRNET

Since HTC is seen as a powerful detector, is it possible to combine it with HRNet?

RuntimeError: CUDA error: no kernel image is available for execution on the device

I ran this command (same as the example):
$ python3 tools/demo.py configs/elephant/cityperson/cascade_hrnet.py ./models_pretrained/epoch_5.pth.stu ../data/person/ result_demo/ ../data/person/

It gives me this error:

['../data/person/p3.jpg', '../data/person/p8.jpg', '../data/person/p4.jpg', '../data/person/p2.jpg', '../data/person/p1.jpg', '../data/person/p10.jpg', '../data/person/p7.jpg', '../data/person/p6.jpg', '../data/person/p9.jpg', '../data/person/p5.jpg', '../data/person/p11.jpg']
unexpected key in source state_dict: mask_head.0.conv_res.conv.weight, mask_head.0.conv_res.conv.bias, mask_head.1.conv_res.conv.weight, mask_head.1.conv_res.conv.bias, mask_head.2.conv_res.conv.weight, mask_head.2.conv_res.conv.bias

./Pedestron-master/tools/../mmdet/apis/inference.py:39: UserWarning: Class names are not saved in the checkpoint's meta data, use COCO classes by default.
  warnings.warn('Class names are not saved in the checkpoint\'s '
[                                                  ] 0/11, elapsed: 0s, ETA:Traceback (most recent call last):
  File "tools/demo.py", line 67, in <module>
    run_detector_on_dataset()
  File "tools/demo.py", line 63, in run_detector_on_dataset
    detections = mock_detector(model, im, output_dir)
  File "tools/demo.py", line 37, in mock_detector
    results = inference_detector(model, image)
  File "./Pedestron-master/tools/../mmdet/apis/inference.py", line 66, in inference_detector
    return _inference_single(model, imgs, img_transform, device)
  File "./Pedestron-master/tools/../mmdet/apis/inference.py", line 93, in _inference_single
    result = model(return_loss=False, rescale=True, **data)
  File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "./Pedestron-master/tools/../mmdet/core/fp16/decorators.py", line 49, in new_func
    return old_func(*args, **kwargs)
  File "./Pedestron-master/tools/../mmdet/models/detectors/base.py", line 88, in forward
    return self.forward_test(img, img_meta, **kwargs)
  File "./Pedestron-master/tools/../mmdet/models/detectors/base.py", line 79, in forward_test
    return self.simple_test(imgs[0], img_metas[0], **kwargs)
  File "./Pedestron-master/tools/../mmdet/models/detectors/cascade_rcnn.py", line 241, in simple_test
    x = self.extract_feat(img)
  File "./Pedestron-master/tools/../mmdet/models/detectors/cascade_rcnn.py", line 115, in extract_feat
    x = self.backbone(img)
  File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "./Pedestron-master/tools/../mmdet/models/backbones/hrnet.py", line 446, in forward
    x = self.relu(x)
  File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/activation.py", line 94, in forward
    return F.relu(input, inplace=self.inplace)
  File "/usr/local/lib/python3.5/dist-packages/torch/nn/functional.py", line 912, in relu
    result = torch.relu_(input)
RuntimeError: CUDA error: no kernel image is available for execution on the device

My Python version is 3.5.

Pre-trained models

Hi, thanks for the amazing work! The paper and repo have been super helpful.

One question I have is about the available pre-trained models. Did you pre-train them on CrowdHuman or Wider Pedestrian and then fine-tune them for the corresponding target dataset (CityPersons, Caltech, and ECP)? Is there any way you can provide the pre-trained models before the fine-tuning step, for more general-purpose human detection?

Thank you and looking forward to your reply!

MR of Caltech dataset

I used the pretrained model (Cascade Mask R-CNN | Caltech | HRNet | 1.7 | 25.7) you provided,
but the MR I got is R: 17%, HO: 58%, far from the reported R: 1.7% and HO: 25.7%.

Testing for crowd human dataset

Hi, I am trying to use test_crowdhuman.py for the CrowdHuman dataset. However, the model listed in the repo has a '.stu' suffix, so when I run python tools/test_crowdhuman.py configs/elephant/crowdhuman/cascade_hrnet.py ./models_pretrained/crowdhuman/epoch_ 19 20 --out 'result.json', it just gets stuck after printing 'index created'. I tried test_wider.py, which doesn't have any problem; the model for WIDER faces doesn't have '.stu' in its name. I also searched the issues, and you said one can simply rename 'epoch_19.pth.stu' to 'epoch_19.pth' (#17 (comment)). I tried that, but it just gave me this error:
'Traceback (most recent call last):
File "tools/test_crowdhuman.py", line 226, in
main()
File "tools/test_crowdhuman.py", line 188, in main
if 'CLASSES' in checkpoint['meta']:
KeyError: 'meta'
'
demo.py works fine when using 'epoch_19.pth.stu'.

Would appreciate any help. Thanks!
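Not an official fix, but the KeyError suggests the renamed checkpoint has no 'meta' entry at all; below is a hedged sketch of re-saving it with a minimal one (the class tuple is an assumption for a single-class pedestrian model).

import torch
from collections import OrderedDict

src = 'models_pretrained/crowdhuman/epoch_19.pth.stu'   # path as used above
ckpt = torch.load(src, map_location='cpu')

# If the file is a bare state_dict, wrap it in the usual mmcv checkpoint layout first.
if 'state_dict' not in ckpt:
    ckpt = {'state_dict': OrderedDict(ckpt)}
ckpt.setdefault('meta', {})['CLASSES'] = ('person',)     # assumed class name

torch.save(ckpt, 'models_pretrained/crowdhuman/epoch_19.pth')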

AP for cityperson

Sorry to bother you, but I have an issue when calculating the AP value for CityPersons.
What should I do? I have tried: CUDA_VISIBLE_DEVICES=0 python tools/test_city_person.py config checkpoint start checkpoin_end --result.pickle --eval bbox.

testing Caltech dataset and its miss rate

Hello, I've run into a confusing problem. When I use the Cascade Mask R-CNN model pretrained on the Caltech dataset, the miss rate on Caltech is very high (0.99), but when I use the Cascade Mask R-CNN model pretrained on the CrowdHuman dataset, the miss rate is normal (0.46 on HO).

Do you use sync batch normalization internally?

Hi,

I am new to mmdetection.

I found that you set imgs_per_gpu to 1, but the batch size should be greater than one when using batch norm in training mode in PyTorch.

Do you use synchronous batch normalization internally? I can't find the related code.
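One way to answer this without guessing from the config is to build the detector and inspect which normalization layers it actually contains; here is a hedged sketch, assuming the standard mmdetection 1.x builder API used by this repo's training tools.

import torch
from mmcv import Config
from mmdet.models import build_detector

cfg = Config.fromfile('configs/elephant/cityperson/cascade_hrnet.py')
model = build_detector(cfg.model, train_cfg=cfg.train_cfg, test_cfg=cfg.test_cfg)

# Collect the distinct normalization layer types present in the built model.
norm_types = {type(m).__name__ for m in model.modules()
              if isinstance(m, (torch.nn.modules.batchnorm._BatchNorm, torch.nn.GroupNorm))}
print(norm_types)   # e.g. {'BatchNorm2d'} vs {'SyncBatchNorm'} or {'GroupNorm'}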

test_euroCity.py no --out argument in def parse_args():

Hi,

This may have been intentional; however, the parse_args() function in test_euroCity.py appears to be missing parser.add_argument('--out', help='output result file').

I am trying to test the EuroCity Persons dataset with something similar to the command below; however, "--out" produces the following error:

"test_euroCity.py: error: unrecognized arguments: --out"

Can you provide the correct arguments to test the EuroCity dataset using a single GPU? I am using the following at the moment, but not with test_euroCity.py:

./tools/dist_test.sh configs/elephant/eurocity/cascade_hrnet.py ./models_pretrained/epoch_147.pth.stu 1 --out euroCity.pkl --eval bbox
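If the option was simply dropped from that script, here is a hedged sketch of the one-line addition to parse_args() in tools/test_euroCity.py, mirroring the --out option the other test scripts accept.

import argparse

def parse_args():
    parser = argparse.ArgumentParser(description='Test EuroCity Persons')
    # ... existing arguments of tools/test_euroCity.py ...
    parser.add_argument('--out', help='output result file')   # hypothetical missing option
    return parser.parse_args()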

Running Cascade Mask R-CNN with MobileNet backbone

Hi Hasan!

Thank you for publishing your amazing work and helping the CV community!

I've run the demo.py script on Cascade Mask RCNN (HRNet backbone) using the cascade_hrnet.py configuration and the model you uploaded (epoch_5.pth.stu) and it worked great!

I also want to run the model with the MobileNet backbone, so I used the cascade_mobilenet.py configuration with the model named epoch_16.pth.stu.

The demo seems to run without any exceptions, but the detector does not produce any detections (the resulting images are created without detections).

Do you have an idea why?

Many thanks,
Kyanite

How to extract features from Pedestron?

I want to get the features from Pedestron that help the model identify persons, i.e. the features that help identify just the persons accurately.

I think mmlab gets its features from a feature_extractor.

Can you please suggest how those features can be extracted, or where to look to learn more about that?
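One place to look is the detector's extract_feat method, which the Cascade R-CNN code calls before the heads (visible in the tracebacks elsewhere in these issues). Below is a hedged sketch of calling it directly; the input here is a placeholder and a real image should go through the same transform pipeline as tools/demo.py.

import torch
from mmdet.apis import init_detector

model = init_detector('configs/elephant/cityperson/cascade_hrnet.py',
                      './models_pretrained/epoch_5.pth.stu', device='cuda:0')
model.eval()

# Placeholder input; a real image must be preprocessed with the config's test pipeline.
img = torch.randn(1, 3, 1024, 2048, device='cuda:0')
with torch.no_grad():
    feats = model.extract_feat(img)    # tuple of multi-scale backbone/neck feature maps
print([f.shape for f in feats])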

How to finetune?

Thanks for your work!!
I have read the pre-print (Sec. 6.2) about how to achieve the best performance on CityPersons. However, when I use the pre-trained CrowdHuman model (AP 84.2) and fine-tune on CityPersons with lr 0.0025 (1 image, 1 GPU), the result after 5 epochs degrades a lot (just 84.4 MR^-2 on the Reasonable set).
Can you share more details about how to fine-tune? Am I using inappropriate fine-tuning parameters or datasets?
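For what it's worth, here is a heavily hedged sketch of the kind of fine-tuning fragment one might put in a CityPersons config; the field names follow standard mmdetection 1.x conventions, and all values are placeholders rather than the authors' settings.

# Hypothetical fine-tuning fragment; lr, schedule and the checkpoint path are placeholders.
optimizer = dict(type='SGD', lr=0.001, momentum=0.9, weight_decay=0.0001)  # 1 img / 1 GPU
lr_config = dict(policy='step', warmup='linear', warmup_iters=500,
                 warmup_ratio=1.0 / 3, step=[8, 11])
total_epochs = 12
load_from = './models_pretrained/crowdhuman_pretrained.pth'   # illustrative path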

test error

When running the test, the code needs the val_gt.json file, so I changed the path in test_city_person.py to "val_gt_for_mmdetection.json" and got the following problem.

File "tools/test_city_person.py", line 225, in
main()
File "tools/test_city_person.py", line 219, in main
MRs = validate('Pedestron/datasets/CityPersons/val_gt_for_mmdetction.json', args.out)
File "Pedestron/tools/cityPerson/eval_demo.py", line 10, in validate
cocoDt = cocoGt.loadRes(dt_path)
File "Pedestron/tools/cityPerson/coco.py", line 313, in loadRes
if 'caption' in anns[0]:
IndexError: list index out of range

ImportError: cannot import name 'get_dist_info' from 'mmcv.runner.utils'

When I run this command:
python tools/demo.py configs/elephant/cityperson/cascade_hrnet.py ./models_pretrained/epoch_5.pth.stu demo/ result_demo/

I am getting this error:
ImportError: cannot import name 'get_dist_info' from 'mmcv.runner.utils'

I have installed mmdetection as per your instructions and also tried the conda instructions.

I updated mmcv as per suggestions I found online, but nothing is working. My mmcv version is now 0.4.4.
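In newer mmcv releases get_dist_info is no longer importable from mmcv.runner.utils; besides pinning the mmcv version the installation instructions expect, a hedged sketch of a compatibility edit at the failing import site would be:

# Hypothetical compatibility shim at the import that fails:
try:
    from mmcv.runner.utils import get_dist_info   # location expected by this repo
except ImportError:
    from mmcv.runner import get_dist_info         # newer mmcv exports it from mmcv.runner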

setting of training set

The training sets of the CityPersons and Caltech benchmarks currently used in Pedestron are the Reasonable subsets (h >= 50 and vis >= 0.65), right? Do you have any plans to train the detectors on other training subsets? When training occlusion-handling detectors (like JL-TopS, PDOE, MGAN), the subset (h >= 50 and vis >= 0.3) is commonly used.

Inference on Colab - Setup issue

I tried to set up and run inference on a custom dataset from Google Colab, but I am not able to run the setup and I get this error. Can you please help resolve this?

`running develop
running egg_info
writing mmdet.egg-info/PKG-INFO
writing dependency_links to mmdet.egg-info/dependency_links.txt
writing requirements to mmdet.egg-info/requires.txt
writing top-level names to mmdet.egg-info/top_level.txt
/usr/local/lib/python3.6/dist-packages/torch/utils/cpp_extension.py:304: UserWarning: Attempted to use ninja as the BuildExtension backend but we could not find ninja.. Falling back to using the slow distutils backend.
warnings.warn(msg.format('we could not find ninja.'))
writing manifest file 'mmdet.egg-info/SOURCES.txt'
running build_ext
building 'mmdet.ops.roi_align.roi_align_cuda' extension
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/usr/local/lib/python3.6/dist-packages/torch/include -I/usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include -I/usr/local/lib/python3.6/dist-packages/torch/include/TH -I/usr/local/lib/python3.6/dist-packages/torch/include/THC -I/usr/local/cuda/include -I/usr/include/python3.6m -c mmdet/ops/roi_align/src/roi_align_cuda.cpp -o build/temp.linux-x86_64-3.6/mmdet/ops/roi_align/src/roi_align_cuda.o -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=roi_align_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
mmdet/ops/roi_align/src/roi_align_cuda.cpp: In function ‘int roi_align_forward_cuda(at::Tensor, at::Tensor, int, int, float, int, at::Tensor)’:
mmdet/ops/roi_align/src/roi_align_cuda.cpp:20:39: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations]
#define CHECK_CUDA(x) AT_CHECK(x.type().is_cuda(), #x, " must be a CUDAtensor ")
^
mmdet/ops/roi_align/src/roi_align_cuda.cpp:24:3: note: in expansion of macro ‘CHECK_CUDA’
CHECK_CUDA(x);
^~~~~~~~~~
mmdet/ops/roi_align/src/roi_align_cuda.cpp:31:3: note: in expansion of macro ‘CHECK_INPUT’
CHECK_INPUT(features);
^
In file included from /usr/local/lib/python3.6/dist-packages/torch/include/ATen/Tensor.h:11:0,
from /usr/local/lib/python3.6/dist-packages/torch/include/ATen/Context.h:4,
from /usr/local/lib/python3.6/dist-packages/torch/include/ATen/ATen.h:5,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/data.h:3,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/all.h:4,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/extension.h:4,
from mmdet/ops/roi_align/src/roi_align_cuda.cpp:1:
/usr/local/lib/python3.6/dist-packages/torch/include/ATen/core/TensorBody.h:262:30: note: declared here
DeprecatedTypeProperties & type() const {
^~~~
mmdet/ops/roi_align/src/roi_align_cuda.cpp:20:23: error: ‘AT_CHECK’ was not declared in this scope
#define CHECK_CUDA(x) AT_CHECK(x.type().is_cuda(), #x, " must be a CUDAtensor ")
^
mmdet/ops/roi_align/src/roi_align_cuda.cpp:20:23: note: in definition of macro ‘CHECK_CUDA’
#define CHECK_CUDA(x) AT_CHECK(x.type().is_cuda(), #x, " must be a CUDAtensor ")
^~~~~~~~
mmdet/ops/roi_align/src/roi_align_cuda.cpp:31:3: note: in expansion of macro ‘CHECK_INPUT’
CHECK_INPUT(features);
^
mmdet/ops/roi_align/src/roi_align_cuda.cpp:20:23: note: suggested alternative: ‘DCHECK’
#define CHECK_CUDA(x) AT_CHECK(x.type().is_cuda(), #x, " must be a CUDAtensor ")
^
mmdet/ops/roi_align/src/roi_align_cuda.cpp:20:23: note: in definition of macro ‘CHECK_CUDA’
#define CHECK_CUDA(x) AT_CHECK(x.type().is_cuda(), #x, " must be a CUDAtensor ")
^~~~~~~~
mmdet/ops/roi_align/src/roi_align_cuda.cpp:31:3: note: in expansion of macro ‘CHECK_INPUT’
CHECK_INPUT(features);
^
mmdet/ops/roi_align/src/roi_align_cuda.cpp:20:39: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations]
#define CHECK_CUDA(x) AT_CHECK(x.type().is_cuda(), #x, " must be a CUDAtensor ")
^
mmdet/ops/roi_align/src/roi_align_cuda.cpp:24:3: note: in expansion of macro ‘CHECK_CUDA’
CHECK_CUDA(x);
^~~~~~~~~~
mmdet/ops/roi_align/src/roi_align_cuda.cpp:32:3: note: in expansion of macro ‘CHECK_INPUT’
CHECK_INPUT(rois);
^
In file included from /usr/local/lib/python3.6/dist-packages/torch/include/ATen/Tensor.h:11:0,
from /usr/local/lib/python3.6/dist-packages/torch/include/ATen/Context.h:4,
from /usr/local/lib/python3.6/dist-packages/torch/include/ATen/ATen.h:5,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/data.h:3,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/all.h:4,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/extension.h:4,
from mmdet/ops/roi_align/src/roi_align_cuda.cpp:1:
/usr/local/lib/python3.6/dist-packages/torch/include/ATen/core/TensorBody.h:262:30: note: declared here
DeprecatedTypeProperties & type() const {
^~~~
mmdet/ops/roi_align/src/roi_align_cuda.cpp:20:39: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations]
#define CHECK_CUDA(x) AT_CHECK(x.type().is_cuda(), #x, " must be a CUDAtensor ")
^
mmdet/ops/roi_align/src/roi_align_cuda.cpp:24:3: note: in expansion of macro ‘CHECK_CUDA’
CHECK_CUDA(x);
^~~~~~~~~~
mmdet/ops/roi_align/src/roi_align_cuda.cpp:33:3: note: in expansion of macro ‘CHECK_INPUT’
CHECK_INPUT(output);
^
In file included from /usr/local/lib/python3.6/dist-packages/torch/include/ATen/Tensor.h:11:0,
from /usr/local/lib/python3.6/dist-packages/torch/include/ATen/Context.h:4,
from /usr/local/lib/python3.6/dist-packages/torch/include/ATen/ATen.h:5,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/data.h:3,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/all.h:4,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/extension.h:4,
from mmdet/ops/roi_align/src/roi_align_cuda.cpp:1:
/usr/local/lib/python3.6/dist-packages/torch/include/ATen/core/TensorBody.h:262:30: note: declared here
DeprecatedTypeProperties & type() const {
^~~~
mmdet/ops/roi_align/src/roi_align_cuda.cpp: In function ‘int roi_align_backward_cuda(at::Tensor, at::Tensor, int, int, float, int, at::Tensor)’:
mmdet/ops/roi_align/src/roi_align_cuda.cpp:20:39: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations]
#define CHECK_CUDA(x) AT_CHECK(x.type().is_cuda(), #x, " must be a CUDAtensor ")
^
mmdet/ops/roi_align/src/roi_align_cuda.cpp:24:3: note: in expansion of macro ‘CHECK_CUDA’
CHECK_CUDA(x);
^~~~~~~~~~
mmdet/ops/roi_align/src/roi_align_cuda.cpp:59:3: note: in expansion of macro ‘CHECK_INPUT’
CHECK_INPUT(top_grad);
^
In file included from /usr/local/lib/python3.6/dist-packages/torch/include/ATen/Tensor.h:11:0,
from /usr/local/lib/python3.6/dist-packages/torch/include/ATen/Context.h:4,
from /usr/local/lib/python3.6/dist-packages/torch/include/ATen/ATen.h:5,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/data.h:3,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/all.h:4,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/extension.h:4,
from mmdet/ops/roi_align/src/roi_align_cuda.cpp:1:
/usr/local/lib/python3.6/dist-packages/torch/include/ATen/core/TensorBody.h:262:30: note: declared here
DeprecatedTypeProperties & type() const {
^~~~
mmdet/ops/roi_align/src/roi_align_cuda.cpp:20:23: error: ‘AT_CHECK’ was not declared in this scope
#define CHECK_CUDA(x) AT_CHECK(x.type().is_cuda(), #x, " must be a CUDAtensor ")
^
mmdet/ops/roi_align/src/roi_align_cuda.cpp:20:23: note: in definition of macro ‘CHECK_CUDA’
#define CHECK_CUDA(x) AT_CHECK(x.type().is_cuda(), #x, " must be a CUDAtensor ")
^~~~~~~~
mmdet/ops/roi_align/src/roi_align_cuda.cpp:59:3: note: in expansion of macro ‘CHECK_INPUT’
CHECK_INPUT(top_grad);
^
mmdet/ops/roi_align/src/roi_align_cuda.cpp:20:23: note: suggested alternative: ‘DCHECK’
#define CHECK_CUDA(x) AT_CHECK(x.type().is_cuda(), #x, " must be a CUDAtensor ")
^
mmdet/ops/roi_align/src/roi_align_cuda.cpp:20:23: note: in definition of macro ‘CHECK_CUDA’
#define CHECK_CUDA(x) AT_CHECK(x.type().is_cuda(), #x, " must be a CUDAtensor ")
^~~~~~~~
mmdet/ops/roi_align/src/roi_align_cuda.cpp:59:3: note: in expansion of macro ‘CHECK_INPUT’
CHECK_INPUT(top_grad);
^
mmdet/ops/roi_align/src/roi_align_cuda.cpp:20:39: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations]
#define CHECK_CUDA(x) AT_CHECK(x.type().is_cuda(), #x, " must be a CUDAtensor ")
^
mmdet/ops/roi_align/src/roi_align_cuda.cpp:24:3: note: in expansion of macro ‘CHECK_CUDA’
CHECK_CUDA(x);
^~~~~~~~~~
mmdet/ops/roi_align/src/roi_align_cuda.cpp:60:3: note: in expansion of macro ‘CHECK_INPUT’
CHECK_INPUT(rois);
^
In file included from /usr/local/lib/python3.6/dist-packages/torch/include/ATen/Tensor.h:11:0,
from /usr/local/lib/python3.6/dist-packages/torch/include/ATen/Context.h:4,
from /usr/local/lib/python3.6/dist-packages/torch/include/ATen/ATen.h:5,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/data.h:3,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/all.h:4,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/extension.h:4,
from mmdet/ops/roi_align/src/roi_align_cuda.cpp:1:
/usr/local/lib/python3.6/dist-packages/torch/include/ATen/core/TensorBody.h:262:30: note: declared here
DeprecatedTypeProperties & type() const {
^~~~
mmdet/ops/roi_align/src/roi_align_cuda.cpp:20:39: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations]
#define CHECK_CUDA(x) AT_CHECK(x.type().is_cuda(), #x, " must be a CUDAtensor ")
^
mmdet/ops/roi_align/src/roi_align_cuda.cpp:24:3: note: in expansion of macro ‘CHECK_CUDA’
CHECK_CUDA(x);
^~~~~~~~~~
mmdet/ops/roi_align/src/roi_align_cuda.cpp:61:3: note: in expansion of macro ‘CHECK_INPUT’
CHECK_INPUT(bottom_grad);
^
In file included from /usr/local/lib/python3.6/dist-packages/torch/include/ATen/Tensor.h:11:0,
from /usr/local/lib/python3.6/dist-packages/torch/include/ATen/Context.h:4,
from /usr/local/lib/python3.6/dist-packages/torch/include/ATen/ATen.h:5,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/data.h:3,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include/torch/all.h:4,
from /usr/local/lib/python3.6/dist-packages/torch/include/torch/extension.h:4,
from mmdet/ops/roi_align/src/roi_align_cuda.cpp:1:
/usr/local/lib/python3.6/dist-packages/torch/include/ATen/core/TensorBody.h:262:30: note: declared here
DeprecatedTypeProperties & type() const {
^~~~
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1

CalledProcessError Traceback (most recent call last)
in ()
----> 1 get_ipython().run_cell_magic('shell', '', 'cd /content/Pedestron/\npython setup.py develop')

2 frames
/usr/local/lib/python3.6/dist-packages/google/colab/_system_commands.py in check_returncode(self)
136 if self.returncode:
137 raise subprocess.CalledProcessError(
--> 138 returncode=self.returncode, cmd=self.args, output=self.output)
139
140 def repr_pretty(self, p, cycle): # pylint:disable=unused-argument

CalledProcessError: Command 'cd /content/Pedestron/
python setup.py develop' returned non-zero exit status 1.`
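The 'AT_CHECK was not declared in this scope' error is characteristic of building these extensions against PyTorch 1.5+, where the AT_CHECK macro was removed in favour of TORCH_CHECK. Installing a PyTorch version the repo was written for is one option; the sketch below is a blunt source-level workaround (an assumption, not an official fix) that patches the op sources before rebuilding.

# Hypothetical patch script, run from the repo root before `python setup.py develop`.
# Back up the tree first; this rewrites the C++/CUDA sources in place.
import pathlib

for path in pathlib.Path('mmdet/ops').rglob('*.c*'):      # .cpp / .cu / .cc sources
    text = path.read_text()
    if 'AT_CHECK' in text:
        path.write_text(text.replace('AT_CHECK', 'TORCH_CHECK'))
        print('patched', path)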

Wider Pedestrian 2019 dataset

Hi @hasanirtiza ,
Thanks for your nice work.
In your paper, you said the Wider Pedestrian 2019 dataset can be accessed.

But when I visited the website, I couldn't find how to download the dataset. Could you give me a clue?

Thanks
meixitu

Assertion Error : Torch not compiled with CUDA enabled

I have been trying to compile the setup file for the last 3 days but haven't succeeded yet.

I am using Ubuntu 18.04 in VirtualBox on a Windows machine without a GPU. In Ubuntu, I have installed the CPU version of PyTorch. I have also installed the CUDA toolkit so that I have the nvcc compiler, though I haven't installed a GPU driver since there is no GPU in the machine.

Please help me figure this out.

Thanks and Regards

mmdetection version

What version of MMDetection does Pedestron depend on? Thanks.

Cannot successfully execute run.py

Hello,

first of all, thank you for the work done to provide such an amazing collection of code.

Following the installation tutorial, we run into an error when the CUDA-optimized files are compiled by setup.py.

We are using the Google Cloud deep learning VM; here are the dependencies on our machine:

. Debian GNU/Linux 9.11 (stretch)
. Gcc 6.3.0
. Cuda 10.1
. NCCL 2.4.8
. Python 3.7.7
. Torch 1.5 stable cuda 10.1
. Using conda

Error traceback

In file included from /home/nodiz/.conda/envs/open-mmlab/lib/python3.7/site-packages/torch/include/ATen/Tensor.h:11:0,
                 from /home/nodiz/.conda/envs/open-mmlab/lib/python3.7/site-packages/torch/include/ATen/Context.h:4,
                 from /home/nodiz/.conda/envs/open-mmlab/lib/python3.7/site-packages/torch/include/ATen/ATen.h:5,
                 from /home/nodiz/.conda/envs/open-mmlab/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
                 from /home/nodiz/.conda/envs/open-mmlab/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
                 from /home/nodiz/.conda/envs/open-mmlab/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
                 from /home/nodiz/.conda/envs/open-mmlab/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3,
                 from /home/nodiz/.conda/envs/open-mmlab/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
                 from /home/nodiz/.conda/envs/open-mmlab/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3,
                 from /home/nodiz/.conda/envs/open-mmlab/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/all.h:4,
                 from /home/nodiz/.conda/envs/open-mmlab/lib/python3.7/site-packages/torch/include/torch/extension.h:4,
                 from /home/nodiz/Pedestron/mmdet/ops/roi_align/src/roi_align_cuda.cpp:1:
/home/nodiz/.conda/envs/open-mmlab/lib/python3.7/site-packages/torch/include/ATen/core/TensorBody.h:262:30: note: declared here
   DeprecatedTypeProperties & type() const {
                              ^~~~
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
  File "/home/nodiz/.conda/envs/open-mmlab/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1400, in _run_ninja_build
    check=True)
  File "/home/nodiz/.conda/envs/open-mmlab/lib/python3.7/subprocess.py", line 512, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.
 
During handling of the above exception, another exception occurred:
 
Traceback (most recent call last):
  File "setup.py", line 199, in <module>
    zip_safe=False)
  File "/home/nodiz/.conda/envs/open-mmlab/lib/python3.7/site-packages/setuptools/__init__.py", line 144, in setup
    return distutils.core.setup(**attrs)
  File "/home/nodiz/.conda/envs/open-mmlab/lib/python3.7/distutils/core.py", line 148, in setup
    dist.run_commands()
  File "/home/nodiz/.conda/envs/open-mmlab/lib/python3.7/distutils/dist.py", line 966, in run_commands
    self.run_command(cmd)
  File "/home/nodiz/.conda/envs/open-mmlab/lib/python3.7/distutils/dist.py", line 985, in run_command
    cmd_obj.run()
  File "/home/nodiz/.conda/envs/open-mmlab/lib/python3.7/site-packages/setuptools/command/develop.py", line 38, in run
    self.install_for_development()
  File "/home/nodiz/.conda/envs/open-mmlab/lib/python3.7/site-packages/setuptools/command/develop.py", line 140, in install_for_development
    self.run_command('build_ext')
  File "/home/nodiz/.conda/envs/open-mmlab/lib/python3.7/distutils/cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "/home/nodiz/.conda/envs/open-mmlab/lib/python3.7/distutils/dist.py", line 985, in run_command
    cmd_obj.run()
  File "/home/nodiz/.conda/envs/open-mmlab/lib/python3.7/site-packages/setuptools/command/build_ext.py", line 87, in run
    _build_ext.run(self)
  File "/home/nodiz/.conda/envs/open-mmlab/lib/python3.7/distutils/command/build_ext.py", line 340, in run
    self.build_extensions()
  File "/home/nodiz/.conda/envs/open-mmlab/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 580, in build_extensions
    build_ext.build_extensions(self)
  File "/home/nodiz/.conda/envs/open-mmlab/lib/python3.7/distutils/command/build_ext.py", line 449, in build_extensions
    self._build_extensions_serial()
  File "/home/nodiz/.conda/envs/open-mmlab/lib/python3.7/distutils/command/build_ext.py", line 474, in _build_extensions_serial
    self.build_extension(ext)
  File "/home/nodiz/.conda/envs/open-mmlab/lib/python3.7/site-packages/setuptools/command/build_ext.py", line 208, in build_extension
    _build_ext.build_extension(self, ext)
  File "/home/nodiz/.conda/envs/open-mmlab/lib/python3.7/distutils/command/build_ext.py", line 534, in build_extension
    depends=ext.depends)
  File "/home/nodiz/.conda/envs/open-mmlab/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 423, in unix_wrap_ninja_compile
    with_cuda=with_cuda)
  File "/home/nodiz/.conda/envs/open-mmlab/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1140, in _write_ninja_file_and_compile_objects
    error_prefix='Error compiling objects for extension')
  File "/home/nodiz/.conda/envs/open-mmlab/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1413, in _run_ninja_build
    raise RuntimeError(message)
RuntimeError: Error compiling objects for extension

Bug fix

Not yet.

Any suggestions?
