Thanks for sharing your work. I created my own tfrecord file, which can be used for training R2CNN_FPN_Tensorflow. However, when I train the RRPN model, the following error is raised:
```
2018-03-25 22:44:13: step1 image_name:0000000110.jpg |
rpn_loc_loss:0.297302305698 | rpn_cla_loss:1.0283062458 | rpn_total_loss:1.3256084919 |
fast_rcnn_loc_loss:0.0 | fast_rcnn_cla_loss:1.43538749218 | fast_rcnn_total_loss:1.43538749218 |
total_loss:3.46225190163 | per_cost_time:3.72758412361s
2018-03-25 22:44:23: step11 image_name:0000000102.jpg |
rpn_loc_loss:0.205864414573 | rpn_cla_loss:0.336254030466 | rpn_total_loss:0.542118430138 |
fast_rcnn_loc_loss:0.0 | fast_rcnn_cla_loss:1.04045414925 | fast_rcnn_total_loss:1.04045414925 |
total_loss:2.28382301331 | per_cost_time:0.68939781189s
2018-03-25 22:44:30: step21 image_name:0000000158.jpg |
rpn_loc_loss:0.27754047513 | rpn_cla_loss:0.293545395136 | rpn_total_loss:0.571085870266 |
fast_rcnn_loc_loss:0.0 | fast_rcnn_cla_loss:1.0313565731 | fast_rcnn_total_loss:1.0313565731 |
total_loss:2.30369186401 | per_cost_time:0.698785066605s
2018-03-25 22:44:37: step31 image_name:0000000169.jpg |
rpn_loc_loss:0.126704588532 | rpn_cla_loss:0.266889303923 | rpn_total_loss:0.393593907356 |
fast_rcnn_loc_loss:0.0 | fast_rcnn_cla_loss:1.21958398819 | fast_rcnn_total_loss:1.21958398819 |
total_loss:2.3144288063 | per_cost_time:0.69150686264s
2018-03-25 22:44:44: step41 image_name:0000000193.jpg |
rpn_loc_loss:0.6847884655 | rpn_cla_loss:0.277387470007 | rpn_total_loss:0.962175965309 |
fast_rcnn_loc_loss:0.0 | fast_rcnn_cla_loss:1.05666255951 | fast_rcnn_total_loss:1.05666255951 |
total_loss:2.72009158134 | per_cost_time:0.70182800293s
2018-03-25 22:44:51: step51 image_name:0000000060.jpg |
rpn_loc_loss:0.375000298023 | rpn_cla_loss:0.243579685688 | rpn_total_loss:0.618579983711 |
fast_rcnn_loc_loss:0.0 | fast_rcnn_cla_loss:1.01494431496 | fast_rcnn_total_loss:1.01494431496 |
total_loss:2.33477902412 | per_cost_time:0.69392490387s
2018-03-25 22:45:00: step61 image_name:0000000013.jpg |
rpn_loc_loss:0.184998586774 | rpn_cla_loss:0.218382105231 | rpn_total_loss:0.403380692005 |
fast_rcnn_loc_loss:0.0 | fast_rcnn_cla_loss:1.16415882111 | fast_rcnn_total_loss:1.16415882111 |
total_loss:2.26879382133 | per_cost_time:0.687323093414s
2018-03-25 22:45:07: step71 image_name:0000000072.jpg |
rpn_loc_loss:0.396233230829 | rpn_cla_loss:0.345920056105 | rpn_total_loss:0.742153286934 |
fast_rcnn_loc_loss:0.0 | fast_rcnn_cla_loss:1.13157987595 | fast_rcnn_total_loss:1.13157987595 |
total_loss:2.57498908043 | per_cost_time:0.748478889465s
2018-03-25 22:45:14: step81 image_name:0000000042.jpg |
rpn_loc_loss:0.635928988457 | rpn_cla_loss:0.273401498795 | rpn_total_loss:0.909330487251 |
fast_rcnn_loc_loss:0.0 | fast_rcnn_cla_loss:1.09614527225 | fast_rcnn_total_loss:1.09614527225 |
total_loss:2.70673298836 | per_cost_time:0.71663403511s
2018-03-25 22:45:21: step91 image_name:0000000162.jpg |
rpn_loc_loss:0.111868612468 | rpn_cla_loss:0.167042255402 | rpn_total_loss:0.27891087532 |
fast_rcnn_loc_loss:0.0 | fast_rcnn_cla_loss:1.0333006382 | fast_rcnn_total_loss:1.0333006382 |
total_loss:2.01346969604 | per_cost_time:0.712218046188s
2018-03-25 22:45:28: step101 image_name:0000000099.jpg |
rpn_loc_loss:0.639244914055 | rpn_cla_loss:0.138194292784 | rpn_total_loss:0.777439236641 |
fast_rcnn_loc_loss:0.0 | fast_rcnn_cla_loss:1.2477004528 | fast_rcnn_total_loss:1.2477004528 |
total_loss:2.72639989853 | per_cost_time:0.690226078033s
2018-03-25 22:45:36: step111 image_name:0000000147.jpg |
rpn_loc_loss:0.2139043957 | rpn_cla_loss:0.126326650381 | rpn_total_loss:0.340231060982 |
fast_rcnn_loc_loss:0.0 | fast_rcnn_cla_loss:1.14501595497 | fast_rcnn_total_loss:1.14501595497 |
total_loss:2.18650889397 | per_cost_time:0.711352109909s
2018-03-25 22:45:43: step121 image_name:0000000174.jpg |
rpn_loc_loss:0.138845905662 | rpn_cla_loss:0.144465446472 | rpn_total_loss:0.283311367035 |
fast_rcnn_loc_loss:0.0 | fast_rcnn_cla_loss:1.48444437981 | fast_rcnn_total_loss:1.48444437981 |
total_loss:2.46901965141 | per_cost_time:0.706383943558s
2018-03-25 22:45:50: step131 image_name:0000000185.jpg |
rpn_loc_loss:0.27352809906 | rpn_cla_loss:0.133923202753 | rpn_total_loss:0.407451301813 |
fast_rcnn_loc_loss:0.0 | fast_rcnn_cla_loss:1.17738771439 | fast_rcnn_total_loss:1.17738771439 |
total_loss:2.28610467911 | per_cost_time:0.718617916107s
2018-03-25 22:45:57: step141 image_name:0000000116.jpg |
rpn_loc_loss:0.170325279236 | rpn_cla_loss:0.0914347320795 | rpn_total_loss:0.261759996414 |
fast_rcnn_loc_loss:0.281328618526 | fast_rcnn_cla_loss:1.30894041061 | fast_rcnn_total_loss:1.59026908875 |
total_loss:2.55329680443 | per_cost_time:0.699913978577s
2018-03-25 22:46:03.734151: W tensorflow/core/framework/op_kernel.cc:1192] Out of range: PaddingFIFOQueue '_2_get_batch/batch/padding_fifo_queue' is closed and has insufficient elements (requested 1, current size 0)
[[Node: get_batch/batch = QueueDequeueManyV2[component_types=[DT_STRING, DT_FLOAT, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/device:CPU:0"](get_batch/batch/padding_fifo_queue, get_batch/batch/n)]]
2018-03-25 22:46:03.734662: W tensorflow/core/framework/op_kernel.cc:1192] Out of range: PaddingFIFOQueue '_2_get_batch/batch/padding_fifo_queue' is closed and has insufficient elements (requested 1, current size 0)
[[Node: get_batch/batch = QueueDequeueManyV2[component_types=[DT_STRING, DT_FLOAT, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/device:CPU:0"](get_batch/batch/padding_fifo_queue, get_batch/batch/n)]]
2018-03-25 22:46:03.734703: W tensorflow/core/framework/op_kernel.cc:1192] Out of range: PaddingFIFOQueue '_2_get_batch/batch/padding_fifo_queue' is closed and has insufficient elements (requested 1, current size 0)
[[Node: get_batch/batch = QueueDequeueManyV2[component_types=[DT_STRING, DT_FLOAT, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/device:CPU:0"](get_batch/batch/padding_fifo_queue, get_batch/batch/n)]]
2018-03-25 22:46:03.734727: W tensorflow/core/framework/op_kernel.cc:1192] Out of range: PaddingFIFOQueue '_2_get_batch/batch/padding_fifo_queue' is closed and has insufficient elements (requested 1, current size 0)
[[Node: get_batch/batch = QueueDequeueManyV2[component_types=[DT_STRING, DT_FLOAT, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/device:CPU:0"](get_batch/batch/padding_fifo_queue, get_batch/batch/n)]]
2018-03-25 22:46:03.734747: W tensorflow/core/framework/op_kernel.cc:1192] Out of range: PaddingFIFOQueue '_2_get_batch/batch/padding_fifo_queue' is closed and has insufficient elements (requested 1, current size 0)
[[Node: get_batch/batch = QueueDequeueManyV2[component_types=[DT_STRING, DT_FLOAT, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/device:CPU:0"](get_batch/batch/padding_fifo_queue, get_batch/batch/n)]]
Traceback (most recent call last):
File "/home/choiyeren/Documents/dockerfiels/RRPN_FPN_Tensorflow/tools/train.py", line 274, in
train()
File "/home/choiyeren/Documents/dockerfiels/RRPN_FPN_Tensorflow/tools/train.py", line 234, in train
fast_rcnn_total_loss, total_loss, train_op])
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 889, in run
run_metadata_ptr)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1120, in _run
feed_dict_tensor, options, run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1317, in _do_run
options, run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1336, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.OutOfRangeError: PaddingFIFOQueue '_2_get_batch/batch/padding_fifo_queue' is closed and has insufficient elements (requested 1, current size 0)
[[Node: get_batch/batch = QueueDequeueManyV2[component_types=[DT_STRING, DT_FLOAT, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/device:CPU:0"](get_batch/batch/padding_fifo_queue, get_batch/batch/n)]]
Caused by op u'get_batch/batch', defined at:
File "/home/choiyeren/Documents/dockerfiels/RRPN_FPN_Tensorflow/tools/train.py", line 274, in
train()
File "/home/choiyeren/Documents/dockerfiels/RRPN_FPN_Tensorflow/tools/train.py", line 37, in train
is_training=True)
File "/home/choiyeren/Documents/dockerfiels/RRPN_FPN_Tensorflow/data/io/read_tfrecord.py", line 90, in next_batch
dynamic_pad=True)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/input.py", line 927, in batch
name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/input.py", line 722, in _batch
dequeued = queue.dequeue_many(batch_size, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/data_flow_ops.py", line 464, in dequeue_many
self._queue_ref, n=n, component_types=self._dtypes, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_data_flow_ops.py", line 2418, in _queue_dequeue_many_v2
component_types=component_types, timeout_ms=timeout_ms, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2956, in create_op
op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1470, in init
self._traceback = self._graph._extract_stack() # pylint: disable=protected-access
OutOfRangeError (see above for traceback): PaddingFIFOQueue '_2_get_batch/batch/padding_fifo_queue' is closed and has insufficient elements (requested 1, current size 0)
[[Node: get_batch/batch = QueueDequeueManyV2[component_types=[DT_STRING, DT_FLOAT, DT_INT32, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/device:CPU:0"](get_batch/batch/padding_fifo_queue, get_batch/batch/n)]]
```
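As far as I understand, this OutOfRangeError means the queue-filling threads stopped enqueuing (for example because a record could not be read, or the tfrecord path/epoch count ran out), so the training thread finds the PaddingFIFOQueue closed and empty. A minimal standalone check along these lines (my own sketch, not part of the repo; `TFRECORD_PATH` is a placeholder for my file) should at least confirm whether every record in the tfrecord parses:

```python
# Standalone sanity check: iterate over the tfrecord outside the training
# graph and make sure every serialized record can be parsed. If a record is
# corrupt, the enqueue threads die, the queue is closed, and the training
# step then fails with exactly this OutOfRangeError.
import tensorflow as tf

TFRECORD_PATH = 'train.tfrecord'  # placeholder: path to my own tfrecord file

count = 0
for serialized in tf.python_io.tf_record_iterator(TFRECORD_PATH):
    example = tf.train.Example()
    example.ParseFromString(serialized)  # raises if the record is malformed
    count += 1

print('parsed %d records without error' % count)
```

If the loop finishes cleanly, the records themselves are readable, and the problem is more likely in how `read_tfrecord.py` decodes them (e.g. feature names or dtypes not matching what I wrote into the file).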
Any suggestions would be appreciated.