navervision / mlsd

Official Tensorflow implementation of "M-LSD: Towards Light-weight and Real-time Line Segment Detection" (AAAI 2022 Oral)

License: Apache License 2.0

Python 93.31% CSS 0.19% HTML 6.51%

mlsd's Issues

[Q] Was the reported FPS achieved using GPU?

Running on Windows with the XNNPACK delegate, I get ~300 ms per inference with the M-LSD_512_tiny_fp16.tflite model
(on a relatively old CPU: Intel(R) Core(TM) i7-2600K @ 3.40 GHz, 4 cores, 8 logical processors).

For comparison, running object detection with SSD MobileNet V3 (input size 320x320), I get ~17 ms per inference.
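
For reference, here is a minimal Python sketch (not from this repo) for timing CPU inference with the released TFLite model; the model path and thread count are placeholders, and the first invoke is excluded because it includes one-time setup such as delegate initialization:

    import time
    import numpy as np
    import tensorflow as tf

    MODEL_PATH = "M-LSD_512_tiny_fp16.tflite"  # placeholder path
    NUM_THREADS = 4                            # placeholder thread count

    interpreter = tf.lite.Interpreter(model_path=MODEL_PATH, num_threads=NUM_THREADS)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]

    # Random input with the model's expected shape and dtype.
    dummy = np.random.rand(*inp["shape"]).astype(inp["dtype"])

    # Warm-up: the first invoke includes one-time setup and is not representative.
    interpreter.set_tensor(inp["index"], dummy)
    interpreter.invoke()

    runs = 20
    start = time.perf_counter()
    for _ in range(runs):
        interpreter.set_tensor(inp["index"], dummy)
        interpreter.invoke()
    print("average latency: %.1f ms" % ((time.perf_counter() - start) / runs * 1000))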

Meta files of saved models

Thank you for implementing such an excellent line and box detector. In ckpt_models I can see the checkpoint, .ckpt.index, and .ckpt.data files, but no .ckpt.meta. Would it be possible for you to provide the .ckpt.meta files as well?
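
As a side note, .ckpt.meta graph files are produced by the TF1-style Saver; TF2 object-based checkpoints (this repo's code appears to use tf.train.Checkpoint) normally ship only the index and data files. If the goal is just to see what the checkpoint contains, a minimal sketch, with the checkpoint prefix as a placeholder:

    import tensorflow as tf

    CKPT_PREFIX = "ckpt_models/your_model.ckpt"  # placeholder prefix

    # List every variable stored in the checkpoint together with its shape.
    for name, shape in tf.train.list_variables(CKPT_PREFIX):
        print(name, shape)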

Low FPS

Thank you for providing the source code. I deployed the M-LSD_320_tiny_fp16.tflite model on an i7 Windows PC with 12 CPU cores using the TensorFlow Lite C API, but the inference speed is only 4 fps. The paper reports that the tiny model achieves real-time performance of 30.7-56.8 fps on an iPhone (A14 Bionic chipset) and an Android phone (Snapdragon 865 chipset). I expected it to perform better on an Intel i7 CPU. Could you explain why it performs poorly on a PC? Did you apply any additional optimizations when deploying to the smartphones?

Convert to tensorflowjs

Hi,
Thanks for your great work.

Could you please upload the SavedModel / Keras model so that it can be converted to a TensorFlow.js model? I am working on a web application.

Thank you in advance.
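
In case it helps, once a tf.keras.Model instance can be rebuilt from this repo and restored from the released checkpoint, the conversion itself is straightforward with the tensorflowjs package; the model-restoring helper below is hypothetical and stands in for whatever this repo exposes:

    import tensorflowjs as tfjs

    # Hypothetical helper: rebuild the Keras model and restore the checkpoint weights.
    model = build_and_restore_mlsd_model("ckpt_models/your_model.ckpt")

    # Writes model.json plus binary weight shards that TensorFlow.js can load.
    tfjs.converters.save_keras_model(model, "web_model/")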

Negative numbers in the results

Hi,
Thanks for your great M-LSD work!
I used it to detect lines in my own pictures and saved the endpoint coordinates. A small number of the endpoint coordinates are negative, and the problem only occurs at x_start. Can you give me some information about this? Thanks a lot.
A result file:
1403638127445096960.txt
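
Not an explanation of the root cause, but as a practical workaround the endpoints can be clipped to the image bounds before saving; a minimal numpy sketch, assuming lines is an (N, 4) array of [x_start, y_start, x_end, y_end] in pixels:

    import numpy as np

    def clip_lines(lines, img_w, img_h):
        """Clip [x_start, y_start, x_end, y_end] endpoints to the image bounds."""
        lines = np.asarray(lines, dtype=np.float32).reshape(-1, 4)
        lines[:, 0::2] = np.clip(lines[:, 0::2], 0, img_w - 1)  # x coordinates
        lines[:, 1::2] = np.clip(lines[:, 1::2], 0, img_h - 1)  # y coordinates
        return lines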

What tools should I use to label my own training set

Hello, I want to train on my own dataset, but I don't know which tool to use. It seems that labelImg and labelme are not suitable, so I would like to ask which tool should be used to label a training set for the PyTorch version.
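
For what it's worth, labelme can also annotate two-point "line" shapes, and its JSON output is easy to convert into segment lists; a rough sketch of such a converter, assuming the standard labelme JSON fields:

    import json

    def load_labelme_lines(json_path):
        """Return a list of [x1, y1, x2, y2] segments from one labelme annotation file."""
        with open(json_path) as f:
            ann = json.load(f)
        lines = []
        for shape in ann.get("shapes", []):
            if shape.get("shape_type") == "line" and len(shape["points"]) == 2:
                (x1, y1), (x2, y2) = shape["points"]
                lines.append([x1, y1, x2, y2])
        return lines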

Error when starting the web UI

apt update
apt upgrade
shutdown -r now

then

apt install python3-pip
pip install gradio
git clone https://github.com/AK391/mlsd.git
cd mlsd
pip install -r requirements.txt

This is what the command prints:

Requirement already satisfied: numpy in /usr/local/lib/python3.8/dist-packages (from -r requirements.txt (line 1)) (1.20.3)
Collecting opencv-python
  Downloading opencv_python-4.5.2.54-cp38-cp38-manylinux2014_x86_64.whl (51.0 MB)
     |████████████████████████████████| 51.0 MB 21.0 MB/s
Requirement already satisfied: pillow in /usr/local/lib/python3.8/dist-packages (from -r requirements.txt (line 3)) (8.2.0)
Collecting tensorflow-gpu
  Downloading tensorflow_gpu-2.5.0-cp38-cp38-manylinux2010_x86_64.whl (454.4 MB)
     |████████████████████████████████| 454.4 MB 272 bytes/s
Requirement already satisfied: Flask in /usr/local/lib/python3.8/dist-packages (from -r requirements.txt (line 5)) (2.0.1)
Requirement already satisfied: gradio in /usr/local/lib/python3.8/dist-packages (from -r requirements.txt (line 6)) (2.0.2)
Collecting tensorboard~=2.5
  Downloading tensorboard-2.5.0-py3-none-any.whl (6.0 MB)
     |████████████████████████████████| 6.0 MB 54.7 MB/s
Collecting keras-preprocessing~=1.1.2
  Downloading Keras_Preprocessing-1.1.2-py2.py3-none-any.whl (42 kB)
     |████████████████████████████████| 42 kB 3.6 MB/s
Collecting google-pasta~=0.2
  Downloading google_pasta-0.2.0-py3-none-any.whl (57 kB)
     |████████████████████████████████| 57 kB 15.3 MB/s
Collecting keras-nightly~=2.5.0.dev
  Downloading keras_nightly-2.5.0.dev2021032900-py2.py3-none-any.whl (1.2 MB)
     |████████████████████████████████| 1.2 MB 69.3 MB/s
Collecting opt-einsum~=3.3.0
  Downloading opt_einsum-3.3.0-py3-none-any.whl (65 kB)
     |████████████████████████████████| 65 kB 13.3 MB/s
Collecting gast==0.4.0
  Downloading gast-0.4.0-py3-none-any.whl (9.8 kB)
Collecting astunparse~=1.6.3
  Downloading astunparse-1.6.3-py2.py3-none-any.whl (12 kB)
Collecting wrapt~=1.12.1
  Downloading wrapt-1.12.1.tar.gz (27 kB)
Collecting h5py~=3.1.0
  Downloading h5py-3.1.0-cp38-cp38-manylinux1_x86_64.whl (4.4 MB)
     |████████████████████████████████| 4.4 MB 73.7 MB/s
Collecting six~=1.15.0
  Downloading six-1.15.0-py2.py3-none-any.whl (10 kB)
Collecting absl-py~=0.10
  Downloading absl_py-0.12.0-py3-none-any.whl (129 kB)
     |████████████████████████████████| 129 kB 76.3 MB/s
Collecting wheel~=0.35
  Downloading wheel-0.36.2-py2.py3-none-any.whl (35 kB)
Collecting grpcio~=1.34.0
  Downloading grpcio-1.34.1-cp38-cp38-manylinux2014_x86_64.whl (4.0 MB)
     |████████████████████████████████| 4.0 MB 73.2 MB/s
Collecting flatbuffers~=1.12.0
  Downloading flatbuffers-1.12-py2.py3-none-any.whl (15 kB)
Collecting protobuf>=3.9.2
  Downloading protobuf-3.17.3-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.whl (1.0 MB)
     |████████████████████████████████| 1.0 MB 71.5 MB/s
Collecting typing-extensions~=3.7.4
  Downloading typing_extensions-3.7.4.3-py3-none-any.whl (22 kB)
Collecting termcolor~=1.1.0
  Downloading termcolor-1.1.0.tar.gz (3.9 kB)
Collecting tensorflow-estimator<2.6.0,>=2.5.0rc0
  Downloading tensorflow_estimator-2.5.0-py2.py3-none-any.whl (462 kB)
     |████████████████████████████████| 462 kB 74.7 MB/s
Requirement already satisfied: itsdangerous>=2.0 in /usr/local/lib/python3.8/dist-packages (from Flask->-r requirements.txt (line 5)) (2.0.1)
Requirement already satisfied: Jinja2>=3.0 in /usr/local/lib/python3.8/dist-packages (from Flask->-r requirements.txt (line 5)) (3.0.1)
Requirement already satisfied: Werkzeug>=2.0 in /usr/local/lib/python3.8/dist-packages (from Flask->-r requirements.txt (line 5)) (2.0.1)
Requirement already satisfied: click>=7.1.2 in /usr/local/lib/python3.8/dist-packages (from Flask->-r requirements.txt (line 5)) (8.0.1)
Requirement already satisfied: analytics-python in /usr/local/lib/python3.8/dist-packages (from gradio->-r requirements.txt (line 6)) (1.3.1)
Requirement already satisfied: pandas in /usr/local/lib/python3.8/dist-packages (from gradio->-r requirements.txt (line 6)) (1.2.4)
Requirement already satisfied: paramiko in /usr/local/lib/python3.8/dist-packages (from gradio->-r requirements.txt (line 6)) (2.7.2)
Requirement already satisfied: ffmpy in /usr/local/lib/python3.8/dist-packages (from gradio->-r requirements.txt (line 6)) (0.3.0)
Requirement already satisfied: flask-cachebuster in /usr/local/lib/python3.8/dist-packages (from gradio->-r requirements.txt (line 6)) (1.0.0)
Requirement already satisfied: pycryptodome in /usr/local/lib/python3.8/dist-packages (from gradio->-r requirements.txt (line 6)) (3.10.1)
Requirement already satisfied: scipy in /usr/local/lib/python3.8/dist-packages (from gradio->-r requirements.txt (line 6)) (1.6.3)
Requirement already satisfied: Flask-Login in /usr/local/lib/python3.8/dist-packages (from gradio->-r requirements.txt (line 6)) (0.5.0)
Requirement already satisfied: markdown2 in /usr/local/lib/python3.8/dist-packages (from gradio->-r requirements.txt (line 6)) (2.4.0)
Requirement already satisfied: matplotlib in /usr/local/lib/python3.8/dist-packages (from gradio->-r requirements.txt (line 6)) (3.4.2)
Requirement already satisfied: requests in /usr/lib/python3/dist-packages (from gradio->-r requirements.txt (line 6)) (2.22.0)
Requirement already satisfied: Flask-Cors>=3.0.8 in /usr/local/lib/python3.8/dist-packages (from gradio->-r requirements.txt (line 6)) (3.0.10)
Collecting tensorboard-data-server<0.7.0,>=0.6.0
  Downloading tensorboard_data_server-0.6.1-py3-none-manylinux2010_x86_64.whl (4.9 MB)
     |████████████████████████████████| 4.9 MB 77.6 MB/s
Collecting tensorboard-plugin-wit>=1.6.0
  Downloading tensorboard_plugin_wit-1.8.0-py3-none-any.whl (781 kB)
     |████████████████████████████████| 781 kB 66.2 MB/s
Requirement already satisfied: setuptools>=41.0.0 in /usr/lib/python3/dist-packages (from tensorboard~=2.5->tensorflow-gpu->-r requirements.txt (line 4)) (45.2.0)
Collecting google-auth<2,>=1.6.3
  Downloading google_auth-1.30.2-py2.py3-none-any.whl (146 kB)
     |████████████████████████████████| 146 kB 82.3 MB/s
Collecting google-auth-oauthlib<0.5,>=0.4.1
  Downloading google_auth_oauthlib-0.4.4-py2.py3-none-any.whl (18 kB)
Collecting markdown>=2.6.8
  Downloading Markdown-3.3.4-py3-none-any.whl (97 kB)
     |████████████████████████████████| 97 kB 19.9 MB/s
Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.8/dist-packages (from Jinja2>=3.0->Flask->-r requirements.txt (line 5)) (2.0.1)
Requirement already satisfied: backoff==1.10.0 in /usr/local/lib/python3.8/dist-packages (from analytics-python->gradio->-r requirements.txt (line 6)) (1.10.0)
Requirement already satisfied: python-dateutil>2.1 in /usr/local/lib/python3.8/dist-packages (from analytics-python->gradio->-r requirements.txt (line 6)) (2.8.1)
Requirement already satisfied: monotonic>=1.5 in /usr/local/lib/python3.8/dist-packages (from analytics-python->gradio->-r requirements.txt (line 6)) (1.6)
Requirement already satisfied: pytz>=2017.3 in /usr/local/lib/python3.8/dist-packages (from pandas->gradio->-r requirements.txt (line 6)) (2021.1)
Requirement already satisfied: pynacl>=1.0.1 in /usr/lib/python3/dist-packages (from paramiko->gradio->-r requirements.txt (line 6)) (1.3.0)
Requirement already satisfied: cryptography>=2.5 in /usr/lib/python3/dist-packages (from paramiko->gradio->-r requirements.txt (line 6)) (2.8)
Requirement already satisfied: bcrypt>=3.1.3 in /usr/local/lib/python3.8/dist-packages (from paramiko->gradio->-r requirements.txt (line 6)) (3.2.0)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.8/dist-packages (from matplotlib->gradio->-r requirements.txt (line 6)) (1.3.1)
Requirement already satisfied: pyparsing>=2.2.1 in /usr/local/lib/python3.8/dist-packages (from matplotlib->gradio->-r requirements.txt (line 6)) (2.4.7)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.8/dist-packages (from matplotlib->gradio->-r requirements.txt (line 6)) (0.10.0)
Collecting cachetools<5.0,>=2.0.0
  Downloading cachetools-4.2.2-py3-none-any.whl (11 kB)
Requirement already satisfied: pyasn1-modules>=0.2.1 in /usr/lib/python3/dist-packages (from google-auth<2,>=1.6.3->tensorboard~=2.5->tensorflow-gpu->-r requirements.txt (line 4)) (0.2.1)
Collecting rsa<5,>=3.1.4; python_version >= "3.6"
  Downloading rsa-4.7.2-py3-none-any.whl (34 kB)
Collecting requests-oauthlib>=0.7.0
  Downloading requests_oauthlib-1.3.0-py2.py3-none-any.whl (23 kB)
Requirement already satisfied: cffi>=1.1 in /usr/local/lib/python3.8/dist-packages (from bcrypt>=3.1.3->paramiko->gradio->-r requirements.txt (line 6)) (1.14.5)
Requirement already satisfied: pyasn1>=0.1.3 in /usr/lib/python3/dist-packages (from rsa<5,>=3.1.4; python_version >= "3.6"->google-auth<2,>=1.6.3->tensorboard~=2.5->tensorflow-gpu->-r requirements.txt (line 4)) (0.4.2)
Requirement already satisfied: oauthlib>=3.0.0 in /usr/lib/python3/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard~=2.5->tensorflow-gpu->-r requirements.txt (line 4)) (3.1.0)
Requirement already satisfied: pycparser in /usr/local/lib/python3.8/dist-packages (from cffi>=1.1->bcrypt>=3.1.3->paramiko->gradio->-r requirements.txt (line 6)) (2.20)
Building wheels for collected packages: wrapt, termcolor
  Building wheel for wrapt (setup.py) ... done
  Created wheel for wrapt: filename=wrapt-1.12.1-cp38-cp38-linux_x86_64.whl size=78523 sha256=a505b3132066629135da59f1eb98ce9d8345c4c629073f5e2db1312cdaa1c3f0
  Stored in directory: /root/.cache/pip/wheels/5f/fd/9e/b6cf5890494cb8ef0b5eaff72e5d55a70fb56316007d6dfe73
  Building wheel for termcolor (setup.py) ... done
  Created wheel for termcolor: filename=termcolor-1.1.0-py3-none-any.whl size=4830 sha256=04d8a4a005daff5c461eafe0ed769735eae67519d0be0a1aabbd546ad4ed0c1f
  Stored in directory: /root/.cache/pip/wheels/a0/16/9c/5473df82468f958445479c59e784896fa24f4a5fc024b0f501
Successfully built wrapt termcolor
ERROR: launchpadlib 1.10.13 requires testresources, which is not installed.
ERROR: tensorflow-gpu 2.5.0 has requirement numpy~=1.19.2, but you'll have numpy 1.20.3 which is incompatible.
Installing collected packages: opencv-python, six, grpcio, protobuf, tensorboard-data-server, tensorboard-plugin-wit, absl-py, wheel, cachetools, rsa, google-auth, requests-oauthlib, google-auth-oauthlib, markdown, tensorboard, keras-preprocessing, google-pasta, keras-nightly, opt-einsum, gast, astunparse, wrapt, h5py, flatbuffers, typing-extensions, termcolor, tensorflow-estimator, tensorflow-gpu
  Attempting uninstall: six
    Found existing installation: six 1.14.0
    Not uninstalling six at /usr/lib/python3/dist-packages, outside environment /usr
    Can't uninstall 'six'. No files were found to uninstall.
  Attempting uninstall: wheel
    Found existing installation: wheel 0.34.2
    Not uninstalling wheel at /usr/lib/python3/dist-packages, outside environment /usr
    Can't uninstall 'wheel'. No files were found to uninstall.
  Attempting uninstall: typing-extensions
    Found existing installation: typing-extensions 3.10.0.0
    Uninstalling typing-extensions-3.10.0.0:
      Successfully uninstalled typing-extensions-3.10.0.0
Successfully installed absl-py-0.12.0 astunparse-1.6.3 cachetools-4.2.2 flatbuffers-1.12 gast-0.4.0 google-auth-1.30.2 google-auth-oauthlib-0.4.4 google-pasta-0.2.0 grpcio-1.34.1 h5py-3.1.0 keras-nightly-2.5.0.dev2021032900 keras-preprocessing-1.1.2 markdown-3.3.4 opencv-python-4.5.2.54 opt-einsum-3.3.0 protobuf-3.17.3 requests-oauthlib-1.3.0 rsa-4.7.2 six-1.15.0 tensorboard-2.5.0 tensorboard-data-server-0.6.1 tensorboard-plugin-wit-1.8.0 tensorflow-estimator-2.5.0 tensorflow-gpu-2.5.0 termcolor-1.1.0 typing-extensions-3.7.4.3 wheel-0.36.2 wrapt-1.12.1

Then, when I run this command:
python3 demo_MLSD.py

Traceback (most recent call last):
  File "demo_MLSD.py", line 13, in <module>
    import cv2
  File "/usr/local/lib/python3.8/dist-packages/cv2/__init__.py", line 5, in <module>
    from .cv2 import *
ImportError: libGL.so.1: cannot open shared object file: No such file or directory

I'm using a fresh Ubuntu 20 installation.

Please help me solve this.

Project dependencies may have API risk issues

Hi. In mlsd, inappropriate dependency version constraints can introduce risks.

Below are the dependencies and version constraints that the project is using:

numpy
opencv-python
pillow
tensorflow-gpu=2.3.0
Flask
gradio

The version constraint == introduces a risk of dependency conflicts because the dependency scope is too strict.
The version constraints "no upper bound" and * introduce a risk of missing-API errors, because the latest versions of the dependencies may remove some APIs.

After further analysis, in this project,
The version constraint of dependency numpy can be changed to >=1.8.0,<=1.23.0rc3.
The version constraint of dependency pillow can be changed to ==9.2.0.
The version constraint of dependency pillow can be changed to >=2.0.0,<=9.1.1.
The version constraint of dependency gradio can be changed to >=1.0.0a1,<=1.0.0a4.
The version constraint of dependency gradio can be changed to >=1.3.0,<=2.3.0a0.
The version constraint of dependency gradio can be changed to >=2.3.0,<=2.7.0a102.
The version constraint of dependency gradio can be changed to >=2.7.0,<=2.8.14.
The version constraint of dependency gradio can be changed to >=2.9.0,<=2.9.4.

The above suggestions can reduce dependency conflicts as much as possible while adopting the latest versions as much as possible without causing API errors in the project.

The current project invokes all of the following methods.

Calling methods from numpy:
numpy.linalg.norm
Calling methods from pillow:
PIL.Image.open
Calling methods from gradio:
gradio.inputs.Number
gradio.Interface.launch
All other called methods:
absl.flags.DEFINE_integer
self.decoder2
tensorflow.keras.applications.mobilenet_v2.preprocess_input
square_list.np.array.reshape
Conv_BN_Act
flask.send_from_directory
inter_y.inter_x.np.concatenate.astype
stem_layer.get_weights
cv2.imdecode
cfg.cfg.dilate.cfg.topk.cfg.map_size.x.shape.Decoder
val1.numpy
self.load_tflite
self.get_pts_scores
tensorflow.keras.regularizers.l2
str
self.decoder1
tensorflow.lite.Interpreter.allocate_tensors
numpy.abs
gradio.inputs.Number
new_hough.current_hough.all
tensorflow.cast
absl.flags.DEFINE_string
tensorflow.keras.initializers.Constant
self.Conv_BN_Act.super.__init__
tensorflow.image.resize
tensorflow.lite.TFLiteConverter.from_keras_model
junc_list.append
numpy.frombuffer
tensorflow.lite.Interpreter.get_output_details
gradio.Interface.launch
interpreter.set_tensor
f.write
yx.numpy.numpy
range
tensorflow.keras.initializers.he_normal
len
numpy.transpose
numpy.argsort
block_name.block_dict.append
preprocess
layer_list
tensorflow.keras.applications.mobilenet_v2.MobileNetV2
PIL.Image.open
response.content.BytesIO.Image.open.convert
utils.pred_squares
flask.Flask.route
self.conv_block3
add_block_list.append
front_list.append
numpy.sum
argparse.ArgumentParser
uuid.uuid1
tensorflow.train.Checkpoint
urllib.request.urlretrieve
layer
self.final_act
tensorflow.lite.Interpreter
square_list.append
cv2.circle
tensorflow.train.CheckpointManager
init_worker
img_input.copy
flask.render_template
tensorflow.reshape
numpy.ones
cv2.line
self.conv_block4
numpy.sqrt
self.up_blocks.append
model_graph.decode_image
cv2.imwrite
flask.json.dump
numpy.unique
self.decoder0
Decoder_FPN
time.time
self.get_pts_scores_fast
absl.app.run
int
backbone_type.lower
argparse.ArgumentParser.add_argument
model
tensorflow.zeros
checkpoint.step.numpy
super.call
connect_list.append
numpy.array.append
tensorflow.concat
org_times.append
resized_image.np.expand_dims.astype
tensorflow.math.top_k
segment_list.append
self.BatchNormalization.super.__init__
_regularizer
flask.Flask.run
backbone_outputs.append
cfg.backbone_type.lower
model_graph.pred_tflite
numpy.max
tensorflow.expand_dims
numpy.expand_dims
numpy.mean
numpy.concatenate
tensorflow.where
model_graph.save_output
tensorflow.math.sigmoid
flask.request.files.save
argparse.ArgumentParser.parse_args
numpy.reshape
tensorflow.lite.TFLiteConverter.from_keras_model.convert
layer_name.split
cv2.polylines
cfg.x.Decoder_FPN
tensorflow.keras.layers.MaxPool2D
numpy.argmax
image.copy.copy
cv2.resize
gradio.Interface
square_length.append
tensorflow.gather_nd
BatchNormalization
tensorflow.math.equal
absl.flags.DEFINE_float
utils.pred_lines
cfg.post_name.backbone_type.Backbone
square.reshape
idx.self.up_blocks
check_outside_inside
top_layer
tensorflow.logical_and
lower
tensorflow.__version__.split
post_name.backbone_type.output_layers.extractor.input.Model
model_graph
numpy.roll
interpreter.get_tensor
os.path.join
os.makedirs
NotImplementedError
numpy.asarray
val2.numpy
open
self.conv_block2
interpreter.invoke
print
self.init_resize_image
tensorflow.train.Checkpoint.restore
end_list.append
modules.models.WireFrameModel
os.path.exists
self.Decoder.super.__init__
tensorflow.io.gfile.GFile
block_list.append
numpy.linalg.norm
logger.info
numpy.arctan2
format
requests.get
segments_list.append
absl.flags.DEFINE_boolean
tqdm.tqdm
Decoder
self.Upblock.super.__init__
self.conv
zip
flask.Flask
tensorflow.Variable
numpy.random.rand
tensorflow.ones
topk_values.numpy.numpy
tensorflow.keras.Model.summary
self.conv_block1
enumerate
self.Decoder_FPN.super.__init__
indices.hough.astype
Upblock
numpy.zeros
super
numpy.array
self.act_fn
segment_list.np.array.reshape
tensorflow.keras.layers.Input
numpy.sort
alpha_times.append
tensorflow.lite.Interpreter.get_input_details
merged_segments.append
tensorflow.constant
tensorflow.keras.layers.Conv2D
model_graph.read_image
segments_list.np.array.reshape
corner_info.corner_dict.append
numpy.arccos
model.read_image.copy
io.BytesIO
tensorflow.keras.layers.ReLU
new_model
io.BytesIO.getvalue
self.draw_output
self.bn
tensorflow.keras.Model
Backbone

@developer
Could you please help me check this issue?
May I open a pull request to fix it?
Thank you very much.

int8 quantization

Thanks for your great work! Could you please tell me which dataset this project uses? I want to do int8 quantization to deploy the model on other devices. Thanks.
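
Whichever dataset is used, full-integer post-training quantization only needs a representative set of preprocessed images from the same domain; a minimal sketch with the standard TFLite converter API, where the SavedModel path, input shape, and image source are placeholders:

    import numpy as np
    import tensorflow as tf

    SAVED_MODEL_DIR = "saved_models/mlsd"  # placeholder path

    def representative_dataset():
        # Placeholder: yield a few hundred real images, preprocessed exactly as in the demo.
        for _ in range(100):
            yield [np.random.rand(1, 512, 512, 4).astype(np.float32)]

    converter = tf.lite.TFLiteConverter.from_saved_model(SAVED_MODEL_DIR)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_dataset
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.int8
    converter.inference_output_type = tf.int8

    with open("mlsd_int8.tflite", "wb") as f:
        f.write(converter.convert())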

Center score in post-processing

Hi, I am looking into the post-processing code and most of it seems logical to me, but I don't understand why the centers have a fixed size of [128, 128] at this line. Shouldn't the center position relate to the map size? Otherwise the center score will sometimes be greater than 1.

Parameter documentation

Thanks for your great line and box detection code. Are the parameters of the box detection method documented anywhere? I looked in the paper but did not see how to match the equations to the code.

Flask

Hello, thanks for your sharing. I have a question: does this Flask service run on the phone?

post-processing of box detector

Hi,
I'm very interested in the post-processing algorithm, but I can't understand it from the code alone.
Could you please give me some information about it?

input shape

Would you please explain why the input of your tflite model has 4 channels?
Input shape: [1, 512, 512, 4]
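
I cannot speak for the authors, but the released demo appears to concatenate a constant channel onto the resized RGB image to form the 4-channel input; a hedged sketch of that preprocessing, assuming the constant-ones interpretation:

    import cv2
    import numpy as np

    def preprocess(image_bgr, input_size=512):
        """Resize to the model input and append a 4th channel (assumed to be constant ones)."""
        resized = cv2.resize(image_bgr, (input_size, input_size), interpolation=cv2.INTER_AREA)
        ones = np.ones([input_size, input_size, 1], dtype=np.float32)
        x = np.concatenate([resized.astype(np.float32), ones], axis=-1)  # H x W x 4
        return np.expand_dims(x, axis=0)                                 # 1 x 512 x 512 x 4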

GPU:

When running on the GPU, I get:
start = new_segments[:,:2] # (x1, y1)
IndexError: too many indices for array: array is 1-dimensional, but 2 were indexed
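
I cannot tell without the full script, but this error typically happens when no segments survive the score threshold, so the array being indexed is empty and 1-dimensional; a small defensive guard (a sketch, variable names follow the traceback):

    import numpy as np

    new_segments = np.asarray(new_segments, dtype=np.float32)
    if new_segments.size == 0:
        # Keep the array 2-D so slices like [:, :2] stay valid when nothing is detected.
        new_segments = new_segments.reshape(0, 4)

    start = new_segments[:, :2]  # (x1, y1)
    end = new_segments[:, 2:]    # (x2, y2)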

It runs very slowly on the CPU

With the untrained base model, the tiny model runs at about 400-500 ms per frame on the CPU, but is normal on the GPU at about 5 ms per frame. Why does it run so slowly on the CPU?
