clovaai / CRAFT-pytorch
Official implementation of Character Region Awareness for Text Detection (CRAFT)
License: MIT License
Boundary boxes are shuffled. Can you please help me with this? Really appreciated.
When reading the post-processing code, I was confused by these two parts. Your explanation is appreciated.
niter = int(math.sqrt(size * min(w, h) / (w * h)) * 2)
# size / (w * h) is the bounding-box coverage ratio
# 2 * min(w, h) is the amount of padding along the longer edge.
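Not an official answer, but one way to read that formula: size / (w * h) is the fraction of the bounding box covered by text pixels, so the square root behaves like an estimate of the character scale, and niter becomes the dilation amount applied to the component before cv2.minAreaRect is taken (as I read getDetBoxes_core). A worked example with made-up numbers:

import math

size, w, h = 1200, 100, 40  # text pixel count, box width, box height
niter = int(math.sqrt(size * min(w, h) / (w * h)) * 2)  # sqrt(12) * 2 = 6.93 -> 6
# the segmentation map is then dilated with a (1 + niter)-sized rectangular
# kernel so the final box recovers the low-score border of the region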
Hi, my input image has the size [547, 456],
but the size of the output mask is [832, 352].
I want to see an affinity map with the same shape as the source image.
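A sketch of mapping the heatmap back to source resolution, using the ratio convention from test.py (the network output is half the padded canvas; y, image, ratio_w, ratio_h below are the variables from test.py's main, and the final crop is my assumption):

import cv2

score_link = y[0, :, :, 1].cpu().data.numpy()  # affinity map at half the padded canvas
h, w = image.shape[:2]
link_full = cv2.resize(score_link, None, fx=2 * ratio_w, fy=2 * ratio_h)[:h, :w]  # undo resize, drop padding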
In the paper, you create the ground-truth label as a Gaussian heatmap with another application. Can you show me the algorithm to create the Gaussian heatmap? Thanks.
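A common re-implementation of that label (an assumption; the authors' exact generation code is not released here) builds one isotropic 2D Gaussian and perspective-warps it into each character box:

import cv2
import numpy as np

def gaussian_kernel(size=64, sigma=0.35):
    x = np.linspace(-1, 1, size)
    xx, yy = np.meshgrid(x, x)
    return np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2)).astype(np.float32)

def paste_gaussian(heatmap, char_box, kernel):
    # char_box: 4 corner points (TL, TR, BR, BL) of one character
    size = kernel.shape[0]
    src = np.float32([[0, 0], [size, 0], [size, size], [0, size]])
    M = cv2.getPerspectiveTransform(src, np.float32(char_box))
    warped = cv2.warpPerspective(kernel, M, (heatmap.shape[1], heatmap.shape[0]))
    np.maximum(heatmap, warped, out=heatmap)  # overlapping characters keep the stronger value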
Hello,
I can't get bboxes for single-character words. The characters are detected, but because they have no link they don't get bounding boxes. Is there a way to specify that if there is only one character the link threshold doesn't matter?
Thanks for your model. Where can I get the code for training?
1. What is the size of the input image?
2. Did you adjust the lr during training?
3. What data augmentation methods are used in the preprocessing?
4. Training on the SynthText dataset is very slow because of the huge number of images; do you have any advice on how to accelerate training?
5. How long did you train the model, and how many GPUs did you use?
Hi, thanks for your wonderful work! I have some questions about evaluating your model. I could only get precision 85.1, recall 79.4, h-mean 82.2 on the ICDAR 2015 dataset, which is lower than the results reported in the paper.
I am wondering whether you got similar results?
Hi, I would like to try to train the network. Where can I download the training script?
First of all, thanks very much for your model. I have used your pretrained model for text detection. I got the bounding box coordinates as txt files and converted the polygons to rectangles for cropping the text area in the image. The resulting bounding boxes are shuffled and I could not sort them out. When there is curved text in the same line, like in the image below, the order gets shuffled and I need to sort it before passing it to the text extraction model.
Converted the polygons to rectangles for cropping text areas
I have not used POLY mode. For the above image, the model outputs a txt file in which the bounding box coordinates are as follows. I have added the detected text for a better explanation of my problem. In this case the detection order is:
146,36,354,34,354,82,146,84 "Australian"
273,78,434,151,411,201,250,129 "Collection"
146,97,250,97,250,150,146,150 "vine"
77,166,131,126,154,158,99,197 "Old"
242,215,361,241,354,273,235,248 "Valley"
140,247,224,219,234,250,150,277 "Eden"
194,298,306,296,307,324,194,325 "Shiraz"
232,406,363,402,364,421,233,426 "Vintage"
152,402,216,405,215,425,151,422 "2008"
124,470,209,480,207,500,122,490 "South"
227,481,387,472,389,494,228,503 "Australia"
222,562,312,564,311,585,222,583 "Gibson"
198,564,217,564,217,584,198,584 "by"
386,570,421,570,421,600,386,600 "750 ml"
But the expected output is Australian -> old -> vine -> collection -> Eden -> Valley -> shiraz -> 2008 -> vintage -> south -> Australia -> by -> GIBSON -> 750ml.
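A minimal reading-order sort for this layout, written from scratch (assumptions: boxes are the 8-number quads from the result txt; rows are grouped by vertical proximity of box centers, then sorted left to right; strongly curved lines may still need a slanted-baseline model):

import numpy as np

def sort_reading_order(boxes, row_tol=0.7):
    quads = [np.array(b, dtype=float).reshape(4, 2) for b in boxes]
    items = sorted(((q[:, 1].mean(), q[:, 0].min(), q) for q in quads),
                   key=lambda t: (t[0], t[1]))
    rows, current = [], [items[0]]
    for it in items[1:]:
        row_y = np.mean([t[0] for t in current])
        row_h = np.mean([t[2][:, 1].max() - t[2][:, 1].min() for t in current])
        if abs(it[0] - row_y) < row_tol * row_h:  # same row if y-centers are close
            current.append(it)
        else:
            rows.append(current)
            current = [it]
    rows.append(current)
    return [t[2] for row in rows for t in sorted(row, key=lambda t: t[1])]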
Is there some way I can get character-level annotation boxes as output instead of the word-level annotation ones?
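One possible route, sketched under the assumption that you tap the raw region score before the link map is applied (this is not an existing option in the repository):

import cv2
import numpy as np

def char_boxes(textmap, low_text=0.4):
    # connected components of the region score alone: with no affinity "glue",
    # each character blob becomes its own component
    n, labels = cv2.connectedComponents((textmap >= low_text).astype(np.uint8))
    boxes = []
    for k in range(1, n):
        ys, xs = np.where(labels == k)
        pts = np.column_stack((xs, ys)).astype(np.float32)
        boxes.append(cv2.boxPoints(cv2.minAreaRect(pts)))
    return boxes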
In the paper you state
"Moreover, with our polygon representation, the
curved images can be rectified into straight text images,
which are also shown in Fig. 11. We believe this ability for
rectification can further be of use for recognition tasks."
My question is: from a set of polygon points, how can I reconstruct the rectified image?
Can you kindly point me in the right direction? Many thanks in advance.
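For the single-quad case, a small from-scratch sketch (the paper's polygon case stitches several such local warps along the curve; this is not the authors' code):

import cv2
import numpy as np

def rectify_quad(image, quad):
    # quad: 4 corner points in TL, TR, BR, BL order
    quad = np.float32(quad).reshape(4, 2)
    w = int(max(np.linalg.norm(quad[0] - quad[1]), np.linalg.norm(quad[3] - quad[2])))
    h = int(max(np.linalg.norm(quad[1] - quad[2]), np.linalg.norm(quad[0] - quad[3])))
    dst = np.float32([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]])
    M = cv2.getPerspectiveTransform(quad, dst)
    return cv2.warpPerspective(image, M, (w, h))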
Hi, how do I apply OCR after text detection?
I tested my images first on the web demo, but when I test using the repo locally on my machine I get different results. Is there a difference in the parameters, or are you using different weights? I need to know the difference to apply it to my local version.
Website demo result that recognizes line by line [what I need]
Local repo result that recognizes word by word
Please help me understand the meaning of the arguments below:
--text_threshold: text confidence threshold
--low_text: text low-bound score
--link_threshold: link confidence threshold
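Not an official gloss, but as I read getDetBoxes_core in craft_utils.py the three values act at different stages; a paraphrased sketch, not the verbatim code:

import cv2
import numpy as np

def filter_components(textmap, linkmap, text_threshold, link_threshold, low_text):
    text_score = (textmap >= low_text).astype(np.uint8)        # low_text: seed pixels for regions
    link_score = (linkmap >= link_threshold).astype(np.uint8)  # link_threshold: glue between characters
    n, labels = cv2.connectedComponents(np.clip(text_score + link_score, 0, 1))
    # text_threshold: a component survives only if its peak region score is confident enough
    kept = [k for k in range(1, n) if np.max(textmap[labels == k]) >= text_threshold]
    return labels, kept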
@ClovaAIAdmin @YoungminBaek
I would like to use this code in C++. Any ideas?
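One route (an assumption, untested on this model): trace the network with TorchScript in Python and load the saved file from LibTorch in C++ via torch::jit::load.

import torch
from craft import CRAFT

net = CRAFT()
state = torch.load('craft_mlt_25k.pth', map_location='cpu')
net.load_state_dict({k.replace('module.', '', 1): v for k, v in state.items()})
net.eval()
example = torch.randn(1, 3, 768, 768)   # dummy input for tracing
traced = torch.jit.trace(net, example)  # CRAFT returns (y, feature); trace records both
traced.save('craft_traced.pt')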
Hi, thanks for your work. I have some questions about the details of generating the pseudo ground truth.
In Figure 6, I am confused about why we can't just input the whole image to generate the pseudo ground truth (non-text regions padded with 0).
How do we crop the image?
As I understand from Figure 4 of the paper, we should first crop the text regions and then feed them to the network. Is that right? However, in ICDAR2015 the word ground truths are not axis-aligned rectangles. 1) How do we crop them (rotate and then crop)?
Hi, I found that it is not so easy to split the Gaussian distribution map.
Can you provide details of the watershed algorithm?
For example, the binarization method used here?
The details of the initial markers for the cv2.watershed() function in OpenCV?
Or is a different function used?
Line 112 in ce07620
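The generation code is not in this repository, so the following is only a generic marker-based recipe with cv2.watershed (the thresholds and markers are my assumptions, not the authors' values):

import cv2
import numpy as np

def split_characters(region_score):
    # region_score: float map in [0, 1]
    sure_fg = (region_score > 0.6).astype(np.uint8)  # high-confidence character cores become markers
    sure_bg = (region_score > 0.2).astype(np.uint8)  # anything brighter than this may be text
    unknown = cv2.subtract(sure_bg, sure_fg)         # band whose ownership watershed decides
    n, markers = cv2.connectedComponents(sure_fg)
    markers = markers + 1                            # reserve label 0 for the unknown band
    markers[unknown == 1] = 0
    color = cv2.cvtColor((region_score * 255).astype(np.uint8), cv2.COLOR_GRAY2BGR)
    return cv2.watershed(color, markers)             # one label per character region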
When I run test.py on my computer, which does not have CUDA installed, I get the following error even though --cuda=False. I attached the picture, but I also put the error below.
raise RuntimeError('Attempting to deserialize object on a CUDA'
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False.
If you are running on a CPU-only machine, please use torch.load with map_location='cpu' to map your storages to the CPU.
In this case, modify the following line in main() of test.py and the code will run:
net.load_state_dict(copyStateDict(torch.load(args.trained_model)))
Can you add this to the main code for CPU-only users? An example of the modified code:
if args.cuda:
    net.load_state_dict(copyStateDict(torch.load(args.trained_model)))
else:
    net.load_state_dict(copyStateDict(torch.load(args.trained_model, map_location='cpu')))
Thank you for publishing this good STD model.
In the demo app I see that the images are queued and inference runs on them one after another. Is it possible to run inference in parallel rather than in a queue, as shown in the demo? Please explain.
I suppose it is in the getDetBoxes_core function; can you kindly elucidate?
Thanks in advance
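Batching is one way to run them in parallel on a single GPU. A sketch under the assumption that every image is resized to the same canvas; preprocess() is a hypothetical helper that resizes, normalizes, and returns a [1, C, H, W] tensor:

import torch

batch = torch.cat([preprocess(img) for img in images], dim=0)  # [B, C, H, W]
with torch.no_grad():
    y, _ = net(batch)  # one forward pass instead of B queued passes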
I found that my model's Gaussian maps are not as regular, smooth, and full as yours, and it is not good enough for detecting big words. Could you give me some advice?
I see that the provided pretrained model was trained on the IC13 and IC17 datasets.
Do these datasets include image samples with Korean?
I would like to adapt your model to detect quotations written in Korean.
Do you think the pretrained model would work?
I found that it gets stuck when loading the model as a celery task in CPU mode. This happens very frequently when the celery task starts for the first time.
When I used ptrace, I found it is stuck in "futex"; when I used ltrace, I found all the time goes to memcpy/memset/malloc.
Another celery task does recognition with a pytorch model, and that one is OK.
I wonder if there is a futex issue in CPU mode when loading the model?
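Not a verified fix, but two mitigations that often help with fork-related PyTorch hangs in celery workers: start the worker with --pool=solo so the model loads in the main process, or cap the intra-op thread pool before loading (an assumption, not tested against this repo):

import torch
torch.set_num_threads(1)  # assumed mitigation: avoids OpenMP pool state inherited across fork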
I'm trying to export from pth to ONNX format:
import torch
from torch.autograd import Variable
import cv2
import imgproc
from craft import CRAFT
# load net
net = CRAFT() # initialize
net = net.cuda()
net = torch.nn.DataParallel(net)
net.load_state_dict(torch.load('./craft_mlt_25k.pth'))
net.eval()
# load data
image = imgproc.loadImage('./misc/test.jpg')
# resize
img_resized, target_ratio, size_heatmap = imgproc.resize_aspect_ratio(image, 1280, interpolation=cv2.INTER_LINEAR, mag_ratio=1.5)
ratio_h = ratio_w = 1 / target_ratio
# preprocessing
x = imgproc.normalizeMeanVariance(img_resized)
x = torch.from_numpy(x).permute(2, 0, 1) # [h, w, c] to [c, h, w]
x = Variable(x.unsqueeze(0)) # [c, h, w] to [b, c, h, w]
x = x.cuda()
# trace export
torch.onnx.export(net,
                  x,
                  'onnx/craft.onnx',
                  export_params=True,
                  verbose=True)
But then encountered this error:
RuntimeError: tuple appears in op that does not forward tuples (VisitNode at /opt/conda/conda-bld/pytorch_1556653114079/work/torch/csrc/jit/passes/lower_tuples.cpp:117)
Following pytorch/pytorch#5315 and pytorch/pytorch#13397, it turned out that the nn.DataParallel wrapper doesn't support trace export for ONNX.
Is there a workaround for this?
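A workaround that follows from those issues (my own sketch, not repository guidance): skip the DataParallel wrapper entirely and strip the 'module.' prefix from the checkpoint keys before exporting, keeping the rest of the script above unchanged:

net = CRAFT()
state = torch.load('./craft_mlt_25k.pth')
state = {k.replace('module.', '', 1): v for k, v in state.items()}  # undo DataParallel naming
net.load_state_dict(state)
net = net.cuda().eval()
torch.onnx.export(net, x, 'onnx/craft.onnx', export_params=True, verbose=True)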
I have been trying to evaluate your model on ICDAR 2013 and ICDAR 2015, and the F-scores I get are 88.74 and 70.46 respectively, which is a far cry from what is reported in the paper. Could you provide the evaluation script and also clarify the dataset(s) on which the provided pre-trained model was trained?
Thank you.
How should the loss be calculated in the fine-tuning stage? In the weakly-supervised training stage I calculate the loss of each image in the batch, add them up, and divide by the batch size; is that right?
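For reference, the objective in the paper is a pixel-wise L2 on both score maps weighted by the confidence map, so averaging over the batch as you describe is consistent with a plain mean. A minimal sketch of my reading (not the authors' training code):

import torch

def craft_loss(pred_region, pred_affinity, gt_region, gt_affinity, conf_map):
    # conf_map is S_c(p) from the paper: 1 under strong supervision, lower for pseudo-GT
    per_pixel = conf_map * ((pred_region - gt_region) ** 2 +
                            (pred_affinity - gt_affinity) ** 2)
    return per_pixel.mean()  # mean over pixels and batch in one step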
In the paper, the VGG_bn output size is w/32, h/32, 512, but in the code the VGG_bn output 'h_fc7' (in vgg16_bn.py) looks like this:
relu2_2 shape: torch.Size([1, 384, 384, 128])
relu3_2 shape: torch.Size([1, 192, 192, 256])
relu4_3 shape: torch.Size([1, 96, 96, 512])
relu5_3 shape: torch.Size([1, 48, 48, 512])
fc7 shape: torch.Size([1, 48, 48, 1024]),
The input size is 768, 768, 3, and 768 / 48 = 16, so the effective stride is 16, not 32.
I used SynthText to train CRAFT, but the result is very bad.
This is mine.
I don't know what caused this problem. I guess the difference is caused by the loss: I use OHEM, with all positives (gt > 0.1) and negatives (gt < 0.1), pos:neg = 1:3, and the loss divided by the number of negatives and positives.
This is the training loss; its value looks small. Can you give me some guidance?
Lines 58 to 80 in 7afb2bd
The last stage's output shape is (H/32, W/32).
F.interpolate looks like an up-sampling function, and your model has three up-sampling steps.
But the predicted shape is (H/2, W/2).
How can this be?
So I have a question about this:
CRAFT-pytorch/basenet/vgg16_bn.py
Lines 42 to 45 in 7afb2bd
self.slice5: does this layer have any down-sampling?
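As I read basenet/vgg16_bn.py at that commit (paraphrased excerpt, so treat the details as unverified), slice5 has no down-sampling: the fifth max-pool uses stride 1 and fc6 is a dilated convolution, so the deepest feature map is at (H/16, W/16) and three 2x up-samplings land at (H/2, W/2):

self.slice5 = torch.nn.Sequential(
    nn.MaxPool2d(kernel_size=3, stride=1, padding=1),            # stride 1: no down-sampling
    nn.Conv2d(512, 1024, kernel_size=3, padding=6, dilation=6),  # dilated fc6 keeps resolution
    nn.Conv2d(1024, 1024, kernel_size=1),                        # fc7
)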
Hi,
I am using your pre-trained model craft_mlt_25k.pth for text detection. I have modified test.py for my use case and it always processes one image per call. In CPU mode it takes 12 to 14 seconds on average to process a single image (480*640), and on GPU (Google Colab) it takes around 7 seconds.
In particular, the forward pass (y, _ = net(x)) through the CRAFT network takes the longest to return the probability tensor.
Is there any way I can speed it up? Thanks in advance.
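A few generic knobs (assumptions, not repository guidance): make sure the forward pass runs under torch.no_grad(), and shrink the inference canvas, since the cost grows roughly quadratically with it:

import torch

with torch.no_grad():  # skips autograd bookkeeping during inference
    y, _ = net(x)
# and/or at the command line (both flags exist in test.py):
# python test.py --canvas_size 640 --mag_ratio 1 ...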
@YoungminBaek Now that CRAFT has detected the boxes and saved them into a .txt file, can you add a script to extract/crop the detected boxes into images?
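Until a script lands, a small standalone version (assumptions: the res_*.txt format written by file_utils.saveResult, one box per line as eight comma-separated integers; the file names below are hypothetical):

import cv2
import numpy as np

img = cv2.imread('result/res_test.jpg')
with open('result/res_test.txt') as f:
    for i, line in enumerate(f):
        pts = np.array(line.strip().split(','), dtype=np.int32).reshape(4, 2)
        x, y, w, h = cv2.boundingRect(pts)  # axis-aligned crop of the quad
        cv2.imwrite('crop_%d.png' % i, img[y:y + h, x:x + w])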
Is the model uploaded here for text detection the best model that was trained, or does the demo site (https://demo.ocr.clova.ai/) have a better model trained on a larger dataset?
I read the paper and there is no explanation of why you chose VGG16.
Did you try other feature extraction networks, such as ResNet?
If so, why did you choose VGG16?
The issue occurred while running the test.py script with --cuda=False: key names mismatched while loading the pretrained weights with the pytorch-cpu version. Resolved temporarily by adding net = torch.nn.DataParallel(net) in test.py in case args.cuda=False.
Can you please help with recognizing special characters along with words?
Is the model provided the same as https://demo.ocr.clova.ai/ ?
I can't recognize numbers correctly
I am trying to reimplement the model, but some details are not clear to me. Is there any reference for the training code that you can give me?
Hi,
I was trying to run the command
python test.py --trained_model=./craft_mlt_25k.pth --test_folder=./test_data
and got the following error:
RuntimeError: CUDA out of memory. Tried to allocate 100.00 MiB (GPU 0; 3.95 GiB total capacity; 2.88 GiB already allocated; 58.25 MiB free; 42.71 MiB cached)
There is no other process running on my laptop and I have a 4GB NVIDIA 1080 TI GPU.
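Not an official answer, but the activation maps scale with the canvas, so shrinking it is the usual mitigation on a 4 GB card (both flags already exist in test.py):
python test.py --trained_model=./craft_mlt_25k.pth --test_folder=./test_data --canvas_size=640 --mag_ratio=1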
Hello. I have some text right at the border of the image. The bounding box for that text contains coordinates that are out of the image bounds.
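If you just need the boxes inside the frame, clipping after adjustResultCoordinates is a one-liner (a sketch; h and w are the source image height and width):

import numpy as np

boxes = [np.clip(np.float32(b).reshape(-1, 2), 0, (w - 1, h - 1)) for b in boxes]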
I am referencing issue #4.
@hiepph : This issue is a reference to #4 (comment)
When I run this script I get this error:
File "onxx_inference.py", line 27, in <module>
boxes = craft_utils.adjustResultCoordinates(boxes, ratio_w, ratio_h)
File "/home/ubuntu/ajinkya/CRAFT-pytorch/craft_utils.py", line 242, in adjustResultCoordinates
polys[k] *= (ratio_w * ratio_net, ratio_h * ratio_net)
Any idea how to fix it? Also, how do I cv2.imwrite() the output image with boxes using onxx_inference.py?
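For writing the visualization, file_utils.saveResult in the repo does something similar; a minimal standalone version (assuming img is the BGR image read with cv2.imread and boxes comes from adjustResultCoordinates):

import cv2
import numpy as np

for box in boxes:
    pts = np.array(box, dtype=np.int32).reshape(-1, 1, 2)
    cv2.polylines(img, [pts], isClosed=True, color=(0, 0, 255), thickness=2)
cv2.imwrite('result_boxes.jpg', img)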
Datasets like ICDAR2015 have word-level annotations, so CRAFT can be evaluated easily by IoU.
How can we evaluate the performance of CRAFT with region-level annotations, such as labeling the entire text area "Hello World!" with a single rectangle?
I would appreciate any answers.