When I run the example, I get the following error. Any ideas?
(ocr) salinas@salinas-HP-Pavilion-Aero-Laptop-13-be0xxx:~/Libraries/deep-text-recognition-benchmark-master$ CUDA_VISIBLE_DEVICES=0 python3 demo.py --Transformation TPS --FeatureExtraction ResNet --SequenceModeling BiLSTM --Prediction Attn --image_folder demo_image/ --saved_model models/TPS-ResNet-BiLSTM-Attn.pth
model input parameters 32 100 20 1 512 256 38 25 TPS ResNet BiLSTM Attn
loading pretrained model from models/TPS-ResNet-BiLSTM-Attn.pth
Traceback (most recent call last):
File "demo.py", line 129, in <module>
demo(opt)
File "demo.py", line 64, in demo
preds = model(image, text_for_pred, is_train=False)
File "/home/salinas/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/home/salinas/.local/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 153, in forward
return self.module(*inputs, **kwargs)
File "/home/salinas/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/home/salinas/Libraries/deep-text-recognition-benchmark-master/model.py", line 89, in forward
contextual_feature = self.SequenceModeling(visual_feature)
File "/home/salinas/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/home/salinas/.local/lib/python3.8/site-packages/torch/nn/modules/container.py", line 204, in forward
input = module(input)
File "/home/salinas/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/home/salinas/Libraries/deep-text-recognition-benchmark-master/modules/sequence_modeling.py", line 19, in forward
recurrent = recurrent.view(b*T, h)
RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.
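For context, the RuntimeError comes from the `.view` call in `modules/sequence_modeling.py`: `.view` requires the tensor's memory layout to be compatible with the new shape, and the feature tensor here (after the reordering done earlier in the model) is non-contiguous. Following the error message's own suggestion, swapping `.view(...)` for `.reshape(...)`, or inserting `.contiguous()` before the `.view`, should resolve it. A minimal sketch of the same failure and both fixes (the shapes are illustrative, not the model's actual dimensions):

```python
import torch

# Permuting dims produces a non-contiguous tensor, similar to the
# reordered feature tensor inside the model.
x = torch.randn(2, 3, 4).permute(1, 0, 2)  # shape (3, 2, 4), non-contiguous
assert not x.is_contiguous()

# .view fails on this layout with the same RuntimeError as in the traceback.
try:
    x.view(6, 4)
except RuntimeError:
    pass

# Fix 1: .reshape copies the data if needed, so it always succeeds.
y = x.reshape(6, 4)

# Fix 2: make the tensor contiguous first, then .view is valid.
z = x.contiguous().view(6, 4)

assert torch.equal(y, z)
```

So in `sequence_modeling.py`, changing the failing line to `recurrent = recurrent.reshape(b*T, h)` (or `recurrent.contiguous().view(b*T, h)`) should let the demo run; the two forms are equivalent here, with `.reshape` simply doing the contiguity handling for you.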