extremenet's Introduction

ExtremeNet: Training and Evaluation Code

Code for bottom-up object detection by grouping extreme and center points:

Bottom-up Object Detection by Grouping Extreme and Center Points,
Xingyi Zhou, Jiacheng Zhuo, Philipp Krähenbühl,
CVPR 2019 (arXiv 1901.08043)

This project is developed upon the CornerNet code and contains code from Deep Extreme Cut (DEXTR). Thanks to the original authors!

Contact: [email protected]. Any questions or discussions are welcome!

Abstract

With the advent of deep learning, object detection drifted from a bottom-up to a top-down recognition problem. State of the art algorithms enumerate a near-exhaustive list of object locations and classify each into: object or not. In this paper, we show that bottom-up approaches still perform competitively. We detect four extreme points (top-most, left-most, bottom-most, right-most) and one center point of objects using a standard keypoint estimation network. We group the five keypoints into a bounding box if they are geometrically aligned. Object detection is then a purely appearance-based keypoint estimation problem, without region classification or implicit feature learning. The proposed method performs on-par with the state-of-the-art region based detection methods, with a bounding box AP of 43.2% on COCO test-dev. In addition, our estimated extreme points directly span a coarse octagonal mask, with a COCO Mask AP of 18.9%, much better than the Mask AP of vanilla bounding boxes. Extreme point guided segmentation further improves this to 34.6% Mask AP.
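
For intuition, here is a minimal sketch of the grouping rule described above (the function name is illustrative, heatmaps are assumed indexed as [y, x], and the 0.1 threshold matches the center_thresh value visible in the config dump further down this page):

    # Four extreme points t, l, b, r as (x, y) pairs form a detection
    # only if the geometric center of the box they span scores highly
    # on the center point heatmap.
    def center_aligned(t, l, b, r, center_heatmap, center_thresh=0.1):
        cx = (l[0] + r[0]) / 2.0  # x-center from left/right extremes
        cy = (t[1] + b[1]) / 2.0  # y-center from top/bottom extremes
        return center_heatmap[int(cy), int(cx)] >= center_thresh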

Installation

The code was tested with Anaconda Python 3.6 and PyTorch v0.4.1. After installing Anaconda:

  1. Clone this repo:

    ExtremeNet_ROOT=/path/to/clone/ExtremeNet
    git clone --recursive https://github.com/xingyizhou/ExtremeNet $ExtremeNet_ROOT
    
  2. Create an Anaconda environment using the provided package list from CornerNet.

    conda create --name CornerNet --file conda_packagelist.txt
    source activate CornerNet
    
  3. Compile the NMS code (originally from Faster R-CNN and Soft-NMS):

    cd $ExtremeNet_ROOT/external
    make
    

Demo

  • Download our pre-trained model and put it in cache/.

  • Optionally, if you want to test instance segmentation with Deep Extreme Cut, download their PASCAL + SBD pre-trained model and put it in cache/.

  • Run the demo

    python demo.py [--demo /path/to/image/or/folder] [--show_mask]
    

    Contents in [] are optional. By default, it runs on the sample images provided in $ExtremeNet_ROOT/images/ (from Detectron). We show the predicted extreme point heatmaps (the four heatmaps combined and overlaid on the input image), the predicted center point heatmap, and the detection and octagon mask results. If set up correctly, the output should resemble the example images shown in the original repository.

    If --show_mask is turned on, the detections are further pipelined through DEXTR for instance segmentation; again, example output images are shown in the original repository.

Data preparation

If you want to reproduce the results in the paper for benchmark evaluation and training, you will need to set up the dataset.

Installing MS COCO APIs

cd $ExtremeNet_ROOT/data
git clone https://github.com/cocodataset/cocoapi.git coco
cd $ExtremeNet_ROOT/data/coco/PythonAPI
make
python setup.py install --user

Downloading MS COCO Data

  • Download the images (2017 Train, 2017 Val, 2017 Test) from the COCO website.

  • Download the annotation files (2017 train/val and test image info) from the COCO website.

  • Place the data (or create symlinks) so that the data folder looks like:

    ${ExtremeNet_ROOT}
    |-- data
    `-- |-- coco
        `-- |-- annotations
            |   |-- instances_train2017.json
            |   |-- instances_val2017.json
            |   |-- image_info_test-dev2017.json
            `-- images
                |-- train2017
                |-- val2017
                |-- test2017
    

Generate extreme point annotation from segmentation:

cd $ExtremeNet_ROOT/tools/
python gen_coco_extreme_points.py

It generates instances_extreme_train2017.json and instances_extreme_val2017.json in data/coco/annotations/.
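
For intuition, a minimal sketch of how extreme points can be read off a COCO segmentation polygon (gen_coco_extreme_points.py implements a more careful version; this helper is illustrative):

    import numpy as np

    def extreme_points(polygon):
        # polygon: flat COCO list [x1, y1, x2, y2, ...]
        pts = np.array(polygon, dtype=np.float32).reshape(-1, 2)
        top    = pts[pts[:, 1].argmin()]  # smallest y (top-most)
        left   = pts[pts[:, 0].argmin()]  # smallest x (left-most)
        bottom = pts[pts[:, 1].argmax()]  # largest y (bottom-most)
        right  = pts[pts[:, 0].argmax()]  # largest x (right-most)
        return top, left, bottom, right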

Benchmark Evaluation

After downloading our pre-trained model and the dataset,

  • Run the following command to evaluate object detection:

    python test.py ExtremeNet [--suffix multi_scale]
    

    The results on the COCO validation set should be 40.3 box AP without --suffix multi_scale and 43.3 box AP with --suffix multi_scale.

  • After obtaining the detection results, run the following commands for instance segmentation:

    python eval_dextr_mask.py results/ExtremeNet/250000/validation/multi_scale/results.json
    

    The results on the COCO validation set should be 34.6 mask AP (the evaluation is slow).

  • You can test with other hyper-parameters by creating a new config file (ExtremeNet-<suffix>.json) in config/; a hypothetical example follows.
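
    A hypothetical sketch of creating such a config, assuming the CornerNet-style schema with a "db" section (test_scales is a real db key, visible in the config dump printed in an issue below; the values and file layout here are illustrative):

        import json

        # Write config/ExtremeNet-my_scales.json with alternative
        # test-time scales; the exact schema is an assumption.
        override = {"db": {"test_scales": [0.6, 1.0, 1.4]}}
        with open('config/ExtremeNet-my_scales.json', 'w') as f:
            json.dump(override, f, indent=2)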

Training

You will need 5x 12GB GPUs to reproduce our training. Our model is fine-tuned on the 10-GPU pre-trained CornerNet model. After downloading the CornerNet model and putting it in cache/, run

python train.py ExtremeNet

You can resume a partially trained model with

python train.py ExtremeNet --iter xxxx

Notes:

  • Training takes about 10 days on our Titan V GPUs. Training for 150000 iterations (about 6 days) gives roughly 0.5 AP lower.
  • Training from scratch for the same number of iterations (250000) may result in about 2 AP lower than fine-tuning from CornerNet, but can reach higher performance (43.9 AP on COCO val with multi-scale testing) if trained for 500000 iterations.
  • Changing the focal loss implementation (the original README links an alternative) can accelerate training, but costs more GPU memory; a sketch of the baseline heatmap focal loss is shown below.
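
A minimal sketch of the baseline heatmap focal loss (the CornerNet-style variant described in both papers; this is not copied from exkp.py, and alpha=2, beta=4 are the commonly used defaults):

    import torch

    def heatmap_focal_loss(pred, gt, alpha=2, beta=4):
        # pred, gt: keypoint heatmaps in (0, 1); ground-truth peaks are 1.
        pos = gt.eq(1).float()
        neg = 1.0 - pos
        neg_weight = torch.pow(1.0 - gt, beta)  # down-weight near-peak pixels
        pos_loss = torch.log(pred) * torch.pow(1.0 - pred, alpha) * pos
        neg_loss = torch.log(1.0 - pred) * torch.pow(pred, alpha) * neg_weight * neg
        num_pos = pos.sum().clamp(min=1.0)
        return -(pos_loss.sum() + neg_loss.sum()) / num_pos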

Citation

If you find this model useful for your research, please use the following BibTeX entry.

@inproceedings{zhou2019bottomup,
  title={Bottom-up Object Detection by Grouping Extreme and Center Points},
  author={Zhou, Xingyi and Zhuo, Jiacheng and Kr{\"a}henb{\"u}hl, Philipp},
  booktitle={CVPR},
  year={2019}
}

Please also consider citing the CornerNet paper (from which this code heavily borrows) and the Deep Extreme Cut paper (if you use the instance segmentation part).

@inproceedings{law2018cornernet,
  title={CornerNet: Detecting Objects as Paired Keypoints},
  author={Law, Hei and Deng, Jia},
  booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
  pages={734--750},
  year={2018}
}

@inproceedings{Man+18,
  title={Deep Extreme Cut: From Extreme Points to Object Segmentation},
  author={K.K. Maninis and S. Caelles and J. Pont-Tuset and L. {Van Gool}},
  booktitle={Computer Vision and Pattern Recognition (CVPR)},
  year={2018}
}


extremenet's Issues

Heatmaps and tags in the layers

  1. HEATMAPS: For figure 4 of the arXiv paper, I think you are using the heatmaps after the sigmoid?
    I mean not self.l_heats in _train(*) or _test(*) of exkp.py, but the heatmaps after _sigmoid( in CTLoss(*) of exkp.py (when training) or _exct_decode(*) of kp_utils.py (when testing). Is that correct?
    For heatmaps before the sigmoid, I am unable to obtain an interpretable visualization, but my heatmaps after the sigmoid look ugly (I need to double-check my visualization).
  2. TAGS in the layers: Compared to CornerNet, counterparts for self.tl_tags and self.dr_tags do not seem to exist. Is this mentioned somewhere in the arXiv paper? I am not proficient enough to figure out the reason for the difference.

Corner pooling

Ever thought about using corner pooling to further improve AP?

CornerNet environment configuration

Can I skip setting up the CornerNet environment and create the environment directly from ExtremeNet's package list? I plan to run it on a TX2, but downloading Anaconda is not supported there.

Train my Own Data

If my annotations have no segmentation, can I still train ExtremeNet and obtain the predicted boxes and octagon masks?

ValueError: not enough values to unpack

Traceback (most recent call last):
  File "demo.py", line 268, in <module>
    image, ex, color_mask)
  File "/t/extremnet/utils/visualize.py", line 41, in vis_octagon
    img = vis_mask(img, mask, col)
  File "/t/extremnet/utils/visualize.py", line 22, in vis_mask
    mask.copy(), cv2.RETR_CCOMP, cv2.CHAIN_APPROX_NONE)
ValueError: not enough values to unpack (expected 3, got 2)
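
This is most likely the OpenCV 3 vs. 4 API change: cv2.findContours returns three values in OpenCV 3.x but only two in 4.x. A hedged, version-agnostic rewrite of the call in utils/visualize.py:

    # Unpack the trailing (contours, hierarchy) pair so the call works
    # under both OpenCV 3.x (3 return values) and 4.x (2 return values).
    ret = cv2.findContours(mask.copy(), cv2.RETR_CCOMP, cv2.CHAIN_APPROX_NONE)
    contours, hierarchy = ret[-2:]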

Train problem

When I trained on my own dataset, I found that all of my objects get negative confidence scores at test time and the left extreme point is not identified very well. Have you ever had a similar problem on your side, or do you know what might have caused it?
Demo result (image omitted)

Insufficient system power

Hello, while training ExtremeNet the server reboots and reports that the power supply is insufficient! I have four 1080 Ti GPUs; how can I modify the code to train with only two of them?
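
A hedged sketch of one way to do this (batch_size and chunk_sizes are real config keys, visible in the training log further down this page; the values here are illustrative):

    # Limit PyTorch to two GPUs before anything touches CUDA, then
    # make batch_size equal the sum of the per-GPU chunk_sizes in
    # config/ExtremeNet.json, e.g.
    #   "batch_size": 8,
    #   "chunk_sizes": [4, 4]
    import os
    os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"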

Errors training my own data; hope to get your help, sincere thanks!

Hello, I want to train my own data, which has only one category, and I encountered the following error during training:

start prefetching data...
shuffling indices...
Traceback (most recent call last):
File "train.py", line 61, in prefetch_data
data, ind = sample_data(db, ind, data_aug=data_aug, debug=debug)
File "/home/wujiacheng/2020/ExtremeNet-text/sample/coco_extreme.py", line 248, in sample_data
return globals()[system_configs.sampling_function](db, k_ind, data_aug, debug)
File "/home/wujiacheng/2020/ExtremeNet-text/sample/coco_extreme.py", line 85, in kp_detection
db_ind = db.db_inds[k_ind]
IndexError: index 0 is out of bounds for axis 0 with size 0
Process Process-1:
Traceback (most recent call last):
File "/home/wujiacheng/anaconda3/lib/python3.6/multiprocessing/process.py", line 249, in _bootstrap
self.run()
File "/home/wujiacheng/anaconda3/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "train.py", line 65, in prefetch_data
raise e
File "train.py", line 61, in prefetch_data
data, ind = sample_data(db, ind, data_aug=data_aug, debug=debug)
File "/home/wujiacheng/2020/ExtremeNet-text/sample/coco_extreme.py", line 248, in sample_data
return globals()[system_configs.sampling_function](db, k_ind, data_aug, debug)
File "/home/wujiacheng/2020/ExtremeNet-text/sample/coco_extreme.py", line 85, in kp_detection
db_ind = db.db_inds[k_ind]
IndexError: index 0 is out of bounds for axis 0 with size 0
start prefetching data...
shuffling indices...
Traceback (most recent call last):
File "train.py", line 61, in prefetch_data
data, ind = sample_data(db, ind, data_aug=data_aug, debug=debug)
File "/home/wujiacheng/2020/ExtremeNet-text/sample/coco_extreme.py", line 248, in sample_data
return globals()[system_configs.sampling_function](db, k_ind, data_aug, debug)
File "/home/wujiacheng/2020/ExtremeNet-text/sample/coco_extreme.py", line 85, in kp_detection
db_ind = db.db_inds[k_ind]
IndexError: index 0 is out of bounds for axis 0 with size 0
Process Process-2:
Traceback (most recent call last):
File "/home/wujiacheng/anaconda3/lib/python3.6/multiprocessing/process.py", line 249, in _bootstrap
self.run()
File "/home/wujiacheng/anaconda3/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "train.py", line 65, in prefetch_data
raise e
File "train.py", line 61, in prefetch_data
data, ind = sample_data(db, ind, data_aug=data_aug, debug=debug)
File "/home/wujiacheng/2020/ExtremeNet-text/sample/coco_extreme.py", line 248, in sample_data
return globals()[system_configs.sampling_function](db, k_ind, data_aug, debug)
File "/home/wujiacheng/2020/ExtremeNet-text/sample/coco_extreme.py", line 85, in kp_detection
db_ind = db.db_inds[k_ind]
IndexError: index 0 is out of bounds for axis 0 with size 0
start prefetching data...
shuffling indices...
Traceback (most recent call last):
File "train.py", line 61, in prefetch_data
data, ind = sample_data(db, ind, data_aug=data_aug, debug=debug)
File "/home/wujiacheng/2020/ExtremeNet-text/sample/coco_extreme.py", line 248, in sample_data
return globals()[system_configs.sampling_function](db, k_ind, data_aug, debug)
File "/home/wujiacheng/2020/ExtremeNet-text/sample/coco_extreme.py", line 85, in kp_detection
db_ind = db.db_inds[k_ind]
IndexError: index 0 is out of bounds for axis 0 with size 0
Process Process-3:
building model...
module_file: models.ExtremeNet
Traceback (most recent call last):
File "/home/wujiacheng/anaconda3/lib/python3.6/multiprocessing/process.py", line 249, in _bootstrap
self.run()
File "/home/wujiacheng/anaconda3/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "train.py", line 65, in prefetch_data
raise e
File "train.py", line 61, in prefetch_data
data, ind = sample_data(db, ind, data_aug=data_aug, debug=debug)
File "/home/wujiacheng/2020/ExtremeNet-text/sample/coco_extreme.py", line 248, in sample_data
return globals()[system_configs.sampling_function](db, k_ind, data_aug, debug)
File "/home/wujiacheng/2020/ExtremeNet-text/sample/coco_extreme.py", line 85, in kp_detection
db_ind = db.db_inds[k_ind]
IndexError: index 0 is out of bounds for axis 0 with size 0
start prefetching data...
shuffling indices...
Traceback (most recent call last):
File "train.py", line 61, in prefetch_data
data, ind = sample_data(db, ind, data_aug=data_aug, debug=debug)
File "/home/wujiacheng/2020/ExtremeNet-text/sample/coco_extreme.py", line 248, in sample_data
return globals()[system_configs.sampling_function](db, k_ind, data_aug, debug)
File "/home/wujiacheng/2020/ExtremeNet-text/sample/coco_extreme.py", line 85, in kp_detection
db_ind = db.db_inds[k_ind]
IndexError: index 0 is out of bounds for axis 0 with size 0
Process Process-4:
Traceback (most recent call last):
File "/home/wujiacheng/anaconda3/lib/python3.6/multiprocessing/process.py", line 249, in _bootstrap
self.run()
File "/home/wujiacheng/anaconda3/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "train.py", line 65, in prefetch_data
raise e
File "train.py", line 61, in prefetch_data
data, ind = sample_data(db, ind, data_aug=data_aug, debug=debug)
File "/home/wujiacheng/2020/ExtremeNet-text/sample/coco_extreme.py", line 248, in sample_data
return globals()[system_configs.sampling_function](db, k_ind, data_aug, debug)
File "/home/wujiacheng/2020/ExtremeNet-text/sample/coco_extreme.py", line 85, in kp_detection
db_ind = db.db_inds[k_ind]
IndexError: index 0 is out of bounds for axis 0 with size 0
total parameters: 198531504
loading from pretrained model
loading from ./cache/ExtremeNet_250000.pkl
setting learning rate to: 0.00025
training start...
0%| | 0/250000 [00:00<?, ?it/s]

Sincerely hope to get your reply!

terminate called without an active exception Aborted (core dumped)

python train.py ExtremeNet
loading all datasets...
using 4 threads
loading from cache file: ./cache/coco_extreme_train2017.pkl
loading annotations into memory...
Done (t=12.73s)
creating index...
index created!
loading from cache file: ./cache/coco_extreme_train2017.pkl
loading annotations into memory...
Done (t=12.93s)
creating index...
index created!
loading from cache file: ./cache/coco_extreme_train2017.pkl
loading annotations into memory...
Done (t=10.87s)
creating index...
index created!
loading from cache file: ./cache/coco_extreme_train2017.pkl
loading annotations into memory...
Done (t=15.55s)
creating index...
index created!
system config...
{'batch_size': 24,
'cache_dir': './cache',
'chunk_sizes': [4, 5, 5, 5, 5],
'config_dir': './config',
'data_dir': './data',
'data_rng': <mtrand.RandomState object at 0x7f87c7ffa480>,
'dataset': 'MSCOCOExtreme',
'decay_rate': 10,
'display': 5,
'learning_rate': 0.00025,
'max_iter': 250000,
'nnet_rng': <mtrand.RandomState object at 0x7f87c7ffa4c8>,
'opt_algo': 'adam',
'prefetch_size': 10,
'pretrain': './cache/CornerNet_500000.pkl',
'result_dir': './results',
'sampling_function': 'kp_detection',
'snapshot': 50000,
'snapshot_name': 'ExtremeNet',
'stepsize': 200000,
'test_split': 'testdev',
'train_split': 'train',
'val_iter': 100,
'val_split': 'val',
'weight_decay': False,
'weight_decay_rate': 1e-05,
'weight_decay_type': 'l2'}
db config...
{'ae_threshold': 0.5,
'aggr_weight': 0.1,
'border': 128,
'categories': 80,
'center_thresh': 0.1,
'data_aug': True,
'gaussian_bump': True,
'gaussian_iou': 0.7,
'gaussian_radius': -1,
'input_size': [511, 511],
'lighting': True,
'max_per_image': 100,
'merge_bbox': False,
'nms_algorithm': 'exp_soft_nms',
'nms_kernel': 3,
'nms_threshold': 0.5,
'output_sizes': [[128, 128]],
'rand_color': True,
'rand_crop': True,
'rand_pushes': False,
'rand_samples': False,
'rand_scale_max': 1.4,
'rand_scale_min': 0.6,
'rand_scale_step': 0.1,
'rand_scales': array([0.6, 0.7, 0.8, 0.9, 1. , 1.1, 1.2, 1.3]),
'scores_thresh': 0.1,
'special_crop': False,
'suppres_ghost': True,
'test_scales': [1],
'top_k': 40,
'weight_exp': 8}
len of db: 118287
start prefetching data...
shuffling indices...
start prefetching data...
start prefetching data...
shuffling indices...
shuffling indices...
building model...
module_file: models.ExtremeNet
start prefetching data...
shuffling indices...
total parameters: 198531504
loading from pretrained model
loading from ./cache/CornerNet_500000.pkl
setting learning rate to: 0.00025
training start...
0%| | 0/250000 [00:00<?, ?it/s]
Traceback (most recent call last):
File "train.py", line 225, in
train(training_dbs, None, args.start_iter, args.debug)
File "train.py", line 159, in train
training_loss = nnet.train(**training)
File "/home/rencong/ExtremeNet/nnet/py_factory.py", line 81, in train
loss = self.network(xs, ys)
File "/home/rencong/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in call
result = self.forward(*input, **kwargs)
File "/home/rencong/ExtremeNet/models/py_utils/data_parallel.py", line 66, in forward
inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids, self.chunk_sizes)
File "/home/rencong/ExtremeNet/models/py_utils/data_parallel.py", line 77, in scatter
return scatter_kwargs(inputs, kwargs, device_ids, dim=self.dim, chunk_sizes=self.chunk_sizes)
File "/home/rencong/ExtremeNet/models/py_utils/scatter_gather.py", line 30, in scatter_kwargs
inputs = scatter(inputs, target_gpus, dim, chunk_sizes) if inputs else []
File "/home/rencong/ExtremeNet/models/py_utils/scatter_gather.py", line 25, in scatter
return scatter_map(inputs)
File "/home/rencong/ExtremeNet/models/py_utils/scatter_gather.py", line 18, in scatter_map
return list(zip(*map(scatter_map, obj)))
File "/home/rencong/ExtremeNet/models/py_utils/scatter_gather.py", line 20, in scatter_map
return list(map(list, zip(*map(scatter_map, obj))))
File "/home/rencong/ExtremeNet/models/py_utils/scatter_gather.py", line 15, in scatter_map
return Scatter.apply(target_gpus, chunk_sizes, dim, obj)
File "/home/rencong/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/_functions.py", line 89, in forward
outputs = comm.scatter(input, target_gpus, chunk_sizes, ctx.dim, streams)
File "/home/rencong/anaconda3/lib/python3.6/site-packages/torch/cuda/comm.py", line 148, in scatter
return tuple(torch._C._scatter(tensor, devices, chunk_sizes, dim, streams))
RuntimeError: CUDA error: invalid device ordinal (exchangeDevice at /opt/conda/conda-bld/pytorch_1550802451070/work/aten/src/ATen/cuda/detail/CUDAGuardImpl.h:28)
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&) + 0x6d (0x7f8821feb69d in /home/rencong/anaconda3/lib/python3.6/site-packages/torch/lib/libc10.so)
frame #1: + 0x4f223c (0x7f881f16d23c in /home/rencong/anaconda3/lib/python3.6/site-packages/torch/lib/libtorch_python.so)
frame #2: + 0x5fc38e (0x7f87fbb9638e in /home/rencong/anaconda3/lib/python3.6/site-packages/torch/lib/libcaffe2.so)
frame #3: + 0x739e55 (0x7f87fbcd3e55 in /home/rencong/anaconda3/lib/python3.6/site-packages/torch/lib/libcaffe2.so)
frame #4: at::TypeDefault::copy(at::Tensor const&, bool, c10::optionalc10::Device) const + 0x74 (0x7f87fbe4f204 in /home/rencong/anaconda3/lib/python3.6/site-packages/torch/lib/libcaffe2.so)
frame #5: at::native::to(at::Tensor const&, at::TensorOptions const&, bool, bool) + 0xc6d (0x7f87fbc327fd in /home/rencong/anaconda3/lib/python3.6/site-packages/torch/lib/libcaffe2.so)
frame #6: at::TypeDefault::to(at::Tensor const&, at::TensorOptions const&, bool, bool) const + 0x2c (0x7f87fbe0bcbc in /home/rencong/anaconda3/lib/python3.6/site-packages/torch/lib/libcaffe2.so)
frame #7: torch::autograd::VariableType::to(at::Tensor const&, at::TensorOptions const&, bool, bool) const + 0x19c (0x7f87fe532e1c in /home/rencong/anaconda3/lib/python3.6/site-packages/torch/lib/libtorch.so.1)
frame #8: torch::cuda::scatter(at::Tensor const&, c10::ArrayRef, c10::optional<std::vector<long, std::allocator > > const&, long, c10::optional<std::vector<c10::optionalat::cuda::CUDAStream, std::allocator<c10::optionalat::cuda::CUDAStream > > > const&) + 0x7a8 (0x7f881f183da8 in /home/rencong/anaconda3/lib/python3.6/site-packages/torch/lib/libtorch_python.so)
frame #9: + 0x5124de (0x7f881f18d4de in /home/rencong/anaconda3/lib/python3.6/site-packages/torch/lib/libtorch_python.so)
frame #10: + 0xfd760 (0x7f881ed78760 in /home/rencong/anaconda3/lib/python3.6/site-packages/torch/lib/libtorch_python.so)

frame #21: THPFunction_apply(_object*, _object*) + 0x6ad (0x7f881ef7482d in /home/rencong/anaconda3/lib/python3.6/site-packages/torch/lib/libtorch_python.so)

terminate called without an active exception
Aborted (core dumped)

user warning

/pytorch/aten/src/ATen/native/IndexingUtils.h:20: UserWarning: indexing with dtype torch.uint8 is now deprecated, please use a dtype torch.bool instead
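
A hedged illustration of the fix this warning asks for (variable names are illustrative):

    # Cast uint8 masks to bool before using them as indices; newer
    # PyTorch deprecates uint8 indexing in favor of bool.
    keep = scores > thresh        # comparisons already yield bool tensors
    boxes = boxes[keep.bool()]    # the explicit cast covers older uint8 masks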

Hi, how to train my own dataset without segmentation?

Hi, thank you for your awesome work!
Now I want to train my own dataset. It is in COCO format but only has 4 categories, and most importantly it has no segmentation data ("segmentation": []); it is for object detection only. I cannot generate extreme points with tools/gen_coco_extreme_points.py successfully. What should I do next?
Thanks a lot!
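
A hypothetical fallback (not from the authors): if only boxes are available, the four box-edge midpoints can stand in for extreme points, at the cost of losing the shape information that makes the octagon mask meaningful:

    def box_to_extreme_points(x1, y1, x2, y2):
        # Hypothetical degenerate annotation: edge midpoints of the box.
        cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
        return (cx, y1), (x1, cy), (cx, y2), (x2, cy)  # top, left, bottom, right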

Train error

When I run train.py ExtremeNet, the following error occurs:
Traceback (most recent call last):
File "train.py", line 213, in
training_dbs = [datasets[dataset](configs["db"], train_split) for _ in range(threads)]
File "train.py", line 213, in
training_dbs = [datasets[dataset](configs["db"], train_split) for _ in range(threads)]
File "/home/gxt/ExtremeNet/db/ead_extreme.py", line 68, in init
self._load_data()
File "/home/gxt/ExtremeNet/db/ead_extreme.py", line 77, in _load_data
self._extract_data()
File "/home/gxt/ExtremeNet/db/ead_extreme.py", line 133, in _extract_data
if len(annotation["extreme_points"]) == 0:
KeyError: 'extreme_points'

Issues when training on my own data

Your work inspired me! However, I met some problems and really need your help!
I have trained on my own dataset, which is formatted just like COCO, but there are issues during training. Could you please help me?

Traceback (most recent call last):
  File "train.py", line 61, in prefetch_data
    data, ind = sample_data(db, ind, data_aug=data_aug, debug=debug)
  File "/data0/svc8/pytorchprojects/ExtremeNet/sample/coco_extreme.py", line 245, in sample_data
    return globals()[system_configs.sampling_function](db, k_ind, data_aug, debug)
  File "/data0/svc8/pytorchprojects/ExtremeNet/sample/coco_extreme.py", line 187, in kp_detection
    t_regrs[b_ind, tag_ind, :] = [fxt - xt, fyt - yt]
IndexError: index 128 is out of bounds for axis 1 with size 128
Process Process-2:
Traceback (most recent call last):
  File "/data0/svc8/anaconda3/envs/ExtremeNet/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/data0/svc8/anaconda3/envs/ExtremeNet/lib/python3.6/multiprocessing/process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "train.py", line 65, in prefetch_data
    raise e
  File "train.py", line 61, in prefetch_data
    data, ind = sample_data(db, ind, data_aug=data_aug, debug=debug)
  File "/data0/svc8/pytorchprojects/ExtremeNet/sample/coco_extreme.py", line 245, in sample_data
    return globals()[system_configs.sampling_function](db, k_ind, data_aug, debug)
  File "/data0/svc8/pytorchprojects/ExtremeNet/sample/coco_extreme.py", line 187, in kp_detection
    t_regrs[b_ind, tag_ind, :] = [fxt - xt, fyt - yt]
IndexError: index 128 is out of bounds for axis 1 with size 128
  0%|                                   | 12/250000 [01:18<453:10:32,  6.53s/it]Exception in thread Thread-1:
Traceback (most recent call last):
  File "/data0/svc8/anaconda3/envs/ExtremeNet/lib/python3.6/threading.py", line 916, in _bootstrap_inner
    self.run()
  File "/data0/svc8/anaconda3/envs/ExtremeNet/lib/python3.6/threading.py", line 864, in run
    self._target(*self._args, **self._kwargs)
  File "train.py", line 69, in pin_memory
    data = data_queue.get()
  File "/data0/svc8/anaconda3/envs/ExtremeNet/lib/python3.6/multiprocessing/queues.py", line 113, in get
    return _ForkingPickler.loads(res)
  File "/data0/svc8/anaconda3/envs/ExtremeNet/lib/python3.6/site-packages/torch/multiprocessing/reductions.py", line 70, in rebuild_storage_fd
    fd = df.detach()
  File "/data0/svc8/anaconda3/envs/ExtremeNet/lib/python3.6/multiprocessing/resource_sharer.py", line 57, in detach
    with _resource_sharer.get_connection(self._id) as conn:
  File "/data0/svc8/anaconda3/envs/ExtremeNet/lib/python3.6/multiprocessing/resource_sharer.py", line 87, in get_connection
    c = Client(address, authkey=process.current_process().authkey)
  File "/data0/svc8/anaconda3/envs/ExtremeNet/lib/python3.6/multiprocessing/connection.py", line 487, in Client
    c = SocketClient(address)
  File "/data0/svc8/anaconda3/envs/ExtremeNet/lib/python3.6/multiprocessing/connection.py", line 614, in SocketClient
    s.connect(address)
FileNotFoundError: [Errno 2] No such file or directory

Generate extreme point annotation from segmentation

@xingyizhou While running python gen_coco_extreme_points.py to generate extreme points, I get this error. I tried looking at the exact column where it shows the error, but it seems right.

Error I get:
Traceback (most recent call last):
File "gen_coco_extreme_points.py", line 80, in
data = json.load(open(ANN_PATH.format(split), 'r'))
File "/usr/lib/python3.6/json/init.py", line 299, in load
parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)
File "/usr/lib/python3.6/json/init.py", line 354, in loads
return _default_decoder.decode(s)
File "/usr/lib/python3.6/json/decoder.py", line 339, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib/python3.6/json/decoder.py", line 355, in raw_decode
obj, end = self.scan_once(s, idx)
json.decoder.JSONDecodeError: Expecting ',' delimiter: line 1 column 54525953 (char 54525952)

Can you pls help me with this?

in random_crop_pts image_height, image_width = image.shape[0:2] AttributeError: 'NoneType' object has no attribute 'shape'

I think the code has an error:

def prefetch_data(db, queue, sample_data, data_aug, debug=False):
    ind = 0
    print("start prefetching data...")
    np.random.seed(os.getpid())
    while True:
        try:
            data, ind = sample_data(db, ind, data_aug=data_aug, debug=debug)
            queue.put(data)
        except Exception as e:
            traceback.print_exc()
            raise e

This function calls sample_data, which takes db as its first argument, but the sample_data implementation is:

def sample_data(db, k_ind, data_aug=True, debug=False):
    print('calling sample data....')
    print('db: {}, k_ind: {}'.format(db, k_ind))
    return globals()[system_configs.sampling_function](db, k_ind, data_aug, debug)

This calls a sampling function; in this case it is random_crop, but random_crop takes an image as its first argument.

Why open-source code with errors?

test error!!!

Thank you for your work; I have a problem when testing COCO data (error screenshot omitted).

Train with my dataset Runtime error: Device index must be -1 or non-negative

Hello @xingyizhou, I am trying to train this network with my own dataset and I keep getting this Device index must be -1 or non-negative error (see below):

Traceback (most recent call last):
File "train.py", line 225, in
train(training_dbs, None, args.start_iter, args.debug)
File "train.py", line 159, in train
training_loss = nnet.train(**training)
File "/content/ExtremeNet/nnet/py_factory.py", line 83, in train
loss = self.network(xs, ys)
File "/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in call
result = self.forward(*input, **kwargs)
File "/content/ExtremeNet/models/py_utils/data_parallel.py", line 66, in forward
inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids, self.chunk_sizes)
File "/content/ExtremeNet/models/py_utils/data_parallel.py", line 77, in scatter
return scatter_kwargs(inputs, kwargs, device_ids, dim=self.dim, chunk_sizes=self.chunk_sizes)
File "/content/ExtremeNet/models/py_utils/scatter_gather.py", line 30, in scatter_kwargs
inputs = scatter(inputs, target_gpus, dim, chunk_sizes) if inputs else []
File "/content/ExtremeNet/models/py_utils/scatter_gather.py", line 25, in scatter
return scatter_map(inputs)
File "/content/ExtremeNet/models/py_utils/scatter_gather.py", line 18, in scatter_map
return list(zip(*map(scatter_map, obj)))
File "/content/ExtremeNet/models/py_utils/scatter_gather.py", line 20, in scatter_map
return list(map(list, zip(*map(scatter_map, obj))))
File "/content/ExtremeNet/models/py_utils/scatter_gather.py", line 15, in scatter_map
return Scatter.apply(target_gpus, chunk_sizes, dim, obj)
File "/usr/local/lib/python3.7/site-packages/torch/nn/parallel/_functions.py", line 89, in forward
outputs = comm.scatter(input, target_gpus, chunk_sizes, ctx.dim, streams)
File "/usr/local/lib/python3.7/site-packages/torch/cuda/comm.py", line 148, in scatter
return tuple(torch._C._scatter(tensor, devices, chunk_sizes, dim, streams))
RuntimeError: Device index must be -1 or non-negative, got -14913 (Device at /pytorch/c10/Device.h:40)
frame #0: std::function<std::string ()>::operator()() const + 0x11 (0x7f4cef83c021 in /usr/local/lib/python3.7/site-packages/torch/lib/libc10.so)
frame #1: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x2a (0x7f4cef83b8ea in /usr/local/lib/python3.7/site-packages/torch/lib/libc10.so)
frame #2: + 0x10ceca (0x7f4d29b74eca in /usr/local/lib/python3.7/site-packages/torch/lib/libtorch_python.so)
frame #3: torch::cuda::scatter(at::Tensor const&, c10::ArrayRef, c10::optional<std::vector<long, std::allocator > > const&, long, c10::optional<std::vector<c10::optionalat::cuda::CUDAStream, std::allocator<c10::optionalat::cuda::CUDAStream > > > const&) + 0x2dc (0x7f4d29f4faac in /usr/local/lib/python3.7/site-packages/torch/lib/libtorch_python.so)
frame #4: + 0x4ed28f (0x7f4d29f5528f in /usr/local/lib/python3.7/site-packages/torch/lib/libtorch_python.so)
frame #5: + 0x11663e (0x7f4d29b7e63e in /usr/local/lib/python3.7/site-packages/torch/lib/libtorch_python.so)

frame #13: THPFunction_apply(_object*, _object*) + 0x5a1 (0x7f4d29d7b961 in /usr/local/lib/python3.7/site-packages/torch/lib/libtorch_python.so)

Do you have some sort of explanation for why this happens?
If I run demo.py I get no errors, so I guess the environment is set up correctly :(

training out of memory

When I tried to train ExtremeNet on my machine, I used 5 GPUs, the same as reported in the paper. There are 8 TITAN GPUs on my machine, so I set device_ids=[0,1,2,3,4] in the DataParallel call in ExtremeNet/nnet/py_factory.py. But when I started training, I got an out-of-memory error as below:
(screenshot omitted)
Then I tried to use 8 TITAN GPUs to train ExtremeNet, setting chunk_sizes=[3,3,3,3,3,3,3,3], which means 3 images per GPU, and training went well.
Why does this happen? It seems that memory runs out when computing the loss, and most of the memory cost is on GPU 0.
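
A hedged sketch of the uneven-split idea (the chunk_sizes keyword mirrors the repository's custom DataParallel from models/py_utils/data_parallel.py, visible in the tracebacks above; the exact constructor signature is assumed):

    # GPU 0 also gathers outputs and computes the loss, so giving it a
    # smaller chunk evens out memory use; sizes must sum to batch_size.
    from models.py_utils.data_parallel import DataParallel

    model = DataParallel(network,
                         device_ids=[0, 1, 2, 3, 4, 5, 6, 7],
                         chunk_sizes=[2, 3, 3, 3, 3, 3, 3, 3])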

$ python demo.py [--demo /path/to/image/or/folder] [--show_mask]

Hi!
I tried running demo.py in the terminal, but hit a bug:

loading parameters: cache/ExtremeNet_250000.pkl
building neural network...
module_file: models.ExtremeNet
total parameters: 198531504
loading parameters...
loading from cache/ExtremeNet_250000.pkl
Running
Traceback (most recent call last):
  File "demo.py", line 150, in <module>
    height, width = image.shape[0:2]
AttributeError: 'NoneType' object has no attribute 'shape'

Please tell me where the bug is? Thanks!
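
A hedged guard for this failure mode (cv2.imread returns None for a missing or unreadable path, and the later .shape access then fails):

    import cv2

    image = cv2.imread(image_path)
    if image is None:  # missing file or unreadable format
        raise FileNotFoundError("cv2.imread could not read: {}".format(image_path))
    height, width = image.shape[0:2]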

demo error

(CornerNet) chase@zlq:~/ExtremeNet$ python demo.py
Traceback (most recent call last):
File "demo.py", line 4, in
import torch
File "/home/chase/anaconda3/envs/CornerNet/lib/python3.6/site-packages/torch/init.py", line 80, in
from torch._C import *
ImportError: libcudart.so.9.0: cannot open shared object file: No such file or directory

storage has wrong size: expected -8367237312445466787 got 1

When I run python demo.py --demo images, I get the following errors:
...
loading parameters: cache/ExtremeNet_250000.pkl
building neural network...
module_file: models.ExtremeNet
total parameters: 198531504
loading parameters...
loading from cache/ExtremeNet_250000.pkl
Traceback (most recent call last):
File "demo.py", line 107, in <module>
nnet.load_pretrained_params(args.model_path)
File "/home/ExtremeNet-master/nnet/py_factory.py", line 109, in load_pretrained_params
params = torch.load(f)
File "/usr/lib/python3.6/site-packages/torch/serialization.py", line 358, in load
return _load(f, map_location, pickle_module)
File "/usr/lib/python3.6/site-packages/torch/serialization.py", line 549, in _load
deserialized_objects[key]._set_from_file(f, offset, f_should_read_directly)
RuntimeError: storage has wrong size: expected -8367237312445466787 got 1

The torch version is 0.4.1 and Python is 3.6.5; what is the problem? @xingyizhou

"false" in file qasciikey.cpp

Hello, when I run demo.py, I get an error: "ASSERT: "false" in file qasciikey.cpp, line 501
Aborted (core dumped)". Do you know what's going on?

What is ghost box?

Thanks for your work, it's really great!

But I am not sure what a ghost box is; could you please show me an example to help me understand it better?

thank you!
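
For reference, the paper describes a ghost box as a large false detection that spans several small objects of the same class, and suppresses it with a simple soft rule. A hedged sketch from my reading of the paper (contains() is a hypothetical helper; the 3x and halving constants are the paper's):

    def suppress_ghosts(boxes, scores, contains):
        # If the boxes contained inside a detection together score more
        # than 3x the detection itself, halve the detection's score.
        for i, big in enumerate(boxes):
            inner = sum(s for b, s in zip(boxes, scores)
                        if b is not big and contains(big, b))
            if inner > 3 * scores[i]:
                scores[i] /= 2.0
        return scores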

Some questions about rand_crop=false

When I train on my own data, I set rand_crop to false. Then, when the trained model is tested on the training set, there are no boxes on the output images, even though the loss at the end of training is very small. Could this be due to rand_crop = false, and would the demo code then need to change?

View data structures of ExtremeNet_250000.pkl

I tried to view the data structure of ExtremeNet_250000.pkl with this code:

    import pickle
    pth = open(r'E:/ExtremeNet_250000.pkl', 'rb')
    pkl = pickle.load(pth)
    print(pkl)

but it just returns an int:

    = RESTART: C:/Users/cwc888888/AppData/Local/Programs/Python/Python37/111.py =
    119547037146038801333356

Could you give me some suggestions?
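
A hedged sketch: the .pkl file is a PyTorch checkpoint, so torch.load is the appropriate reader (the integer printed above matches PyTorch's serialization magic number, which is the first object a plain pickle.load encounters). Assuming the file stores a plain state dict, as the loading code elsewhere on this page suggests:

    import torch

    params = torch.load(r'E:/ExtremeNet_250000.pkl', map_location='cpu')
    for name, tensor in params.items():
        print(name, tuple(tensor.shape))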

RuntimeError: CUDNN_STATUS_EXECUTION_FAILED heeeelp please :(

Hello,

I'm trying to run demo.py, but it gives me this error. The environment is mainly:

  • torch 0.4.0
  • cuda 8.0.61
  • cudnn 7.1.0.2

This is the whole error:

Traceback (most recent call last):
File "demo.py", line 189, in
kernel=nms_kernel, debug=True)
File "demo.py", line 86, in kp_decode
scores_thresh=scores_thresh, center_thresh=center_thresh, debug=debug)
File "/home/DIINF/pcuevas/ExtremeNet/nnet/py_factory.py", line 99, in test
return self.model(*xs, **kwargs)
File "/home/DIINF/pcuevas/anaconda3/envs/CornerNet/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in call
result = self.forward(*input, **kwargs)
File "/home/DIINF/pcuevas/ExtremeNet/nnet/py_factory.py", line 31, in forward
return self.module(*xs, **kwargs)
File "/home/DIINF/pcuevas/anaconda3/envs/CornerNet/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in call
result = self.forward(*input, **kwargs)
File "/home/DIINF/pcuevas/ExtremeNet/models/py_utils/exkp.py", line 267, in forward
return self._test(*xs, **kwargs)
File "/home/DIINF/pcuevas/ExtremeNet/models/py_utils/exkp.py", line 226, in _test
inter = self.pre(image)
File "/home/DIINF/pcuevas/anaconda3/envs/CornerNet/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in call
result = self.forward(*input, **kwargs)
File "/home/DIINF/pcuevas/anaconda3/envs/CornerNet/lib/python3.6/site-packages/torch/nn/modules/container.py", line 91, in forward
input = module(input)
File "/home/DIINF/pcuevas/anaconda3/envs/CornerNet/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in call
result = self.forward(*input, **kwargs)
File "/home/DIINF/pcuevas/ExtremeNet/models/py_utils/utils.py", line 14, in forward
conv = self.conv(x)
File "/home/DIINF/pcuevas/anaconda3/envs/CornerNet/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in call
result = self.forward(*input, **kwargs)
File "/home/DIINF/pcuevas/anaconda3/envs/CornerNet/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 301, in forward
self.padding, self.dilation, self.groups)
RuntimeError: CUDNN_STATUS_EXECUTION_FAILED

Help me please! :( Can anyone help me debug it?

Thanks in advance!

Annotation tools

Hello, thanks for your awesome work; I'm very interested in it.
I want to ask whether there are any annotation tools for picking the four extreme points and computing the center point from them? I want to train on my own data. Thanks a lot.

File "/data2/yanmengkai/ExtremeNet/nnet/py_factory.py", line 117, in load_params self.model.load_state_dict(params)

RuntimeError: Error(s) in loading state_dict for DummyModule:
Unexpected key(s) in state_dict: "module.pre.0.bn.num_batches_tracked", "module.pre.1.bn1.num_batches_tracked", "module.pre.1.bn2.num_batches_tracked", "module.pre.1.skip.1.num_batches_tracked", "module.kps.0.up1.0.bn1.num_batches_tracked", "module.kps.0.up1.0.bn2.num_batches_tracked", "module.kps.0.up1.1.bn1.num_batches_tracked", "module.kps.0.up1.1.bn2.num_batches_tracked", "module.kps.0.low1.0.bn1.num_batches_tracked", "module.kps.0.low1.0.bn2.num_batches_tracked", "module.kps.0.low1.0.skip.1.num_batches_tracked", "module.kps.0.low1.1.bn1.num_batches_tracked", "module.kps.0.low1.1.bn2.num_batches_tracked", "module.kps.0.low2.up1.0.bn1.num_batches_tracked", "module.kps.0.low2.up1.0.bn2.num_batches_tracked", "module.kps.0.low2.up1.1.bn1.num_batches_tracked", "module.kps.0.low2.up1.1.bn2.num_batches_tracked", "module.kps.0.low2.low1.0.bn1.num_batches_tracked", "module.kps.0.low2.low1.0.bn2.num_batches_tracked", "module.kps.0.low2.low1.0.skip.1.num_batches_tracked", "module.kps.0.low2.low1.1.bn1.num_batches_tracked", "module.kps.0.low2.low1.1.bn2.num_batches_tracked", "module.kps.0.low2.low2.up1.0.bn1.num_batches_tracked", "module.kps.0.low2.low2.up1.0.bn2.num_batches_tracked", "module.kps.0.low2.low2.up1.1.bn1.num_batches_tracked", "module.kps.0.low2.low2.up1.1.bn2.num_batches_tracked", "module.kps.0.low2.low2.low1.0.bn1.num_batches_tracked", "module.kps.0.low2.low2.low1.0.bn2.num_batches_tracked", "module.kps.0.low2.low2.low1.0.skip.1.num_batches_tracked", "module.kps.0.low2.low2.low1.1.bn1.num_batches_tracked", "module.kps.0.low2.low2.low1.1.bn2.num_batches_tracked", "module.kps.0.low2.low2.low2.up1.0.bn1.num_batches_tracked", "module.kps.0.low2.low2.low2.up1.0.bn2.num_batches_tracked", "module.kps.0.low2.low2.low2.up1.1.bn1.num_batches_tracked", "module.kps.0.low2.low2.low2.up1.1.bn2.num_batches_tracked", "module.kps.0.low2.low2.low2.low1.0.bn1.num_batches_tracked", "module.kps.0.low2.low2.low2.low1.0.bn2.num_batches_tracked", "module.kps.0.low2.low2.low2.low1.0.skip.1.num_batches_tracked", "module.kps.0.low2.low2.low2.low1.1.bn1.num_batches_tracked", "module.kps.0.low2.low2.low2.low1.1.bn2.num_batches_tracked", "module.kps.0.low2.low2.low2.low2.up1.0.bn1.num_batches_tracked", "module.kps.0.low2.low2.low2.low2.up1.0.bn2.num_batches_tracked", "module.kps.0.low2.low2.low2.low2.up1.1.bn1.num_batches_tracked", "module.kps.0.low2.low2.low2.low2.up1.1.bn2.num_batches_tracked", "module.kps.0.low2.low2.low2.low2.low1.0.bn1.num_batches_tracked", "module.kps.0.low2.low2.low2.low2.low1.0.bn2.num_batches_tracked", "module.kps.0.low2.low2.low2.low2.low1.0.skip.1.num_batches_tracked", "module.kps.0.low2.low2.low2.low2.low1.1.bn1.num_batches_tracked", "module.kps.0.low2.low2.low2.low2.low1.1.bn2.num_batches_tracked", "module.kps.0.low2.low2.low2.low2.low2.0.bn1.num_batches_tracked", "module.kps.0.low2.low2.low2.low2.low2.0.bn2.num_batches_tracked", "module.kps.0.low2.low2.low2.low2.low2.1.bn1.num_batches_tracked", "module.kps.0.low2.low2.low2.low2.low2.1.bn2.num_batches_tracked", "module.kps.0.low2.low2.low2.low2.low2.2.bn1.num_batches_tracked", "module.kps.0.low2.low2.low2.low2.low2.2.bn2.num_batches_tracked", "module.kps.0.low2.low2.low2.low2.low2.3.bn1.num_batches_tracked", "module.kps.0.low2.low2.low2.low2.low2.3.bn2.num_batches_tracked", "module.kps.0.low2.low2.low2.low2.low3.0.bn1.num_batches_tracked", "module.kps.0.low2.low2.low2.low2.low3.0.bn2.num_batches_tracked", "module.kps.0.low2.low2.low2.low2.low3.1.bn1.num_batches_tracked", 
"module.kps.0.low2.low2.low2.low2.low3.1.bn2.num_batches_tracked", "module.kps.0.low2.low2.low2.low2.low3.1.skip.1.num_batches_tracked", "module.kps.0.low2.low2.low2.low3.0.bn1.num_batches_tracked", "module.kps.0.low2.low2.low2.low3.0.bn2.num_batches_tracked", "module.kps.0.low2.low2.low2.low3.1.bn1.num_batches_tracked", "module.kps.0.low2.low2.low2.low3.1.bn2.num_batches_tracked", "module.kps.0.low2.low2.low3.0.bn1.num_batches_tracked", "module.kps.0.low2.low2.low3.0.bn2.num_batches_tracked", "module.kps.0.low2.low2.low3.1.bn1.num_batches_tracked", "module.kps.0.low2.low2.low3.1.bn2.num_batches_tracked", "module.kps.0.low2.low3.0.bn1.num_batches_tracked", "module.kps.0.low2.low3.0.bn2.num_batches_tracked", "module.kps.0.low2.low3.1.bn1.num_batches_tracked", "module.kps.0.low2.low3.1.bn2.num_batches_tracked", "module.kps.0.low2.low3.1.skip.1.num_batches_tracked", "module.kps.0.low3.0.bn1.num_batches_tracked", "module.kps.0.low3.0.bn2.num_batches_tracked", "module.kps.0.low3.1.bn1.num_batches_tracked", "module.kps.0.low3.1.bn2.num_batches_tracked", "module.kps.1.up1.0.bn1.num_batches_tracked", "module.kps.1.up1.0.bn2.num_batches_tracked", "module.kps.1.up1.1.bn1.num_batches_tracked", "module.kps.1.up1.1.bn2.num_batches_tracked", "module.kps.1.low1.0.bn1.num_batches_tracked", "module.kps.1.low1.0.bn2.num_batches_tracked", "module.kps.1.low1.0.skip.1.num_batches_tracked", "module.kps.1.low1.1.bn1.num_batches_tracked", "module.kps.1.low1.1.bn2.num_batches_tracked", "module.kps.1.low2.up1.0.bn1.num_batches_tracked", "module.kps.1.low2.up1.0.bn2.num_batches_tracked", "module.kps.1.low2.up1.1.bn1.num_batches_tracked", "module.kps.1.low2.up1.1.bn2.num_batches_tracked", "module.kps.1.low2.low1.0.bn1.num_batches_tracked", "module.kps.1.low2.low1.0.bn2.num_batches_tracked", "module.kps.1.low2.low1.0.skip.1.num_batches_tracked", "module.kps.1.low2.low1.1.bn1.num_batches_tracked", "module.kps.1.low2.low1.1.bn2.num_batches_tracked", "module.kps.1.low2.low2.up1.0.bn1.num_batches_tracked", "module.kps.1.low2.low2.up1.0.bn2.num_batches_tracked", "module.kps.1.low2.low2.up1.1.bn1.num_batches_tracked", "module.kps.1.low2.low2.up1.1.bn2.num_batches_tracked", "module.kps.1.low2.low2.low1.0.bn1.num_batches_tracked", "module.kps.1.low2.low2.low1.0.bn2.num_batches_tracked", "module.kps.1.low2.low2.low1.0.skip.1.num_batches_tracked", "module.kps.1.low2.low2.low1.1.bn1.num_batches_tracked", "module.kps.1.low2.low2.low1.1.bn2.num_batches_tracked", "module.kps.1.low2.low2.low2.up1.0.bn1.num_batches_tracked", "module.kps.1.low2.low2.low2.up1.0.bn2.num_batches_tracked", "module.kps.1.low2.low2.low2.up1.1.bn1.num_batches_tracked", "module.kps.1.low2.low2.low2.up1.1.bn2.num_batches_tracked", "module.kps.1.low2.low2.low2.low1.0.bn1.num_batches_tracked", "module.kps.1.low2.low2.low2.low1.0.bn2.num_batches_tracked", "module.kps.1.low2.low2.low2.low1.0.skip.1.num_batches_tracked", "module.kps.1.low2.low2.low2.low1.1.bn1.num_batches_tracked", "module.kps.1.low2.low2.low2.low1.1.bn2.num_batches_tracked", "module.kps.1.low2.low2.low2.low2.up1.0.bn1.num_batches_tracked", "module.kps.1.low2.low2.low2.low2.up1.0.bn2.num_batches_tracked", "module.kps.1.low2.low2.low2.low2.up1.1.bn1.num_batches_tracked", "module.kps.1.low2.low2.low2.low2.up1.1.bn2.num_batches_tracked", "module.kps.1.low2.low2.low2.low2.low1.0.bn1.num_batches_tracked", "module.kps.1.low2.low2.low2.low2.low1.0.bn2.num_batches_tracked", "module.kps.1.low2.low2.low2.low2.low1.0.skip.1.num_batches_tracked", 
"module.kps.1.low2.low2.low2.low2.low1.1.bn1.num_batches_tracked", "module.kps.1.low2.low2.low2.low2.low1.1.bn2.num_batches_tracked", "module.kps.1.low2.low2.low2.low2.low2.0.bn1.num_batches_tracked", "module.kps.1.low2.low2.low2.low2.low2.0.bn2.num_batches_tracked", "module.kps.1.low2.low2.low2.low2.low2.1.bn1.num_batches_tracked", "module.kps.1.low2.low2.low2.low2.low2.1.bn2.num_batches_tracked", "module.kps.1.low2.low2.low2.low2.low2.2.bn1.num_batches_tracked", "module.kps.1.low2.low2.low2.low2.low2.2.bn2.num_batches_tracked", "module.kps.1.low2.low2.low2.low2.low2.3.bn1.num_batches_tracked", "module.kps.1.low2.low2.low2.low2.low2.3.bn2.num_batches_tracked", "module.kps.1.low2.low2.low2.low2.low3.0.bn1.num_batches_tracked", "module.kps.1.low2.low2.low2.low2.low3.0.bn2.num_batches_tracked", "module.kps.1.low2.low2.low2.low2.low3.1.bn1.num_batches_tracked", "module.kps.1.low2.low2.low2.low2.low3.1.bn2.num_batches_tracked", "module.kps.1.low2.low2.low2.low2.low3.1.skip.1.num_batches_tracked", "module.kps.1.low2.low2.low2.low3.0.bn1.num_batches_tracked", "module.kps.1.low2.low2.low2.low3.0.bn2.num_batches_tracked", "module.kps.1.low2.low2.low2.low3.1.bn1.num_batches_tracked", "module.kps.1.low2.low2.low2.low3.1.bn2.num_batches_tracked", "module.kps.1.low2.low2.low3.0.bn1.num_batches_tracked", "module.kps.1.low2.low2.low3.0.bn2.num_batches_tracked", "module.kps.1.low2.low2.low3.1.bn1.num_batches_tracked", "module.kps.1.low2.low2.low3.1.bn2.num_batches_tracked", "module.kps.1.low2.low3.0.bn1.num_batches_tracked", "module.kps.1.low2.low3.0.bn2.num_batches_tracked", "module.kps.1.low2.low3.1.bn1.num_batches_tracked", "module.kps.1.low2.low3.1.bn2.num_batches_tracked", "module.kps.1.low2.low3.1.skip.1.num_batches_tracked", "module.kps.1.low3.0.bn1.num_batches_tracked", "module.kps.1.low3.0.bn2.num_batches_tracked", "module.kps.1.low3.1.bn1.num_batches_tracked", "module.kps.1.low3.1.bn2.num_batches_tracked", "module.cnvs.0.bn.num_batches_tracked", "module.cnvs.1.bn.num_batches_tracked", "module.inters.0.bn1.num_batches_tracked", "module.inters.0.bn2.num_batches_tracked", "module.inters_.0.1.num_batches_tracked", "module.cnvs_.0.1.num_batches_tracked".
