cvg / limap
A toolbox for mapping and localization with line features.
License: BSD 3-Clause "New" or "Revised" License
Dear author,
Thanks for your great work.
When I run the Quickstart in Docker with "python runners/hypersim/triangulation.py", I tried to change the 2D line detector to hawpv3 by adding "--line2d.detector.method hawpv3". However, it failed with "No module named hawp". Other 2D line detectors such as lsd and deeplsd worked fine. Could you please give me some hints about the problem?
Thank you very much.
Hello, I am looking for a way to visualize 2d lines on top of the image.
My code is as follows:
import limap.util.io as limapio
import cv2
from pathlib import Path

name = "global"

def track_is_good(track, bb=50):
    points = track.line.start, track.line.end
    return all((point < bb).all() and (point > -bb).all() for point in points)

def load_tracks():
    _, tracks = limapio.read_lines_from_input(f"/files/static/storage/reconstructions/{name}/outputs/lines/finaltracks")
    assert tracks
    return list(filter(track_is_good, tracks))

image_list = f"/files/static/storage/reconstructions/{name}/outputs/lines/image_list.txt"
with open(image_list) as f:
    lines = f.readlines()[1:]  # skip the header line

image_id_to_path = dict()
for x in lines:
    x = x.split(', ')
    image_id_to_path[int(x[0])] = x[1].strip()

tracks = load_tracks()
for image_id in list(image_id_to_path.keys())[:10]:
    image_path = image_id_to_path[image_id]
    lines_2d = []
    for track in tracks:
        for image_idx, image_id_i in enumerate(track.image_id_list):
            if image_id_i == image_id:
                lines_2d.append(track.line2d_list[image_idx])
    image = cv2.imread(image_path)
    for line in lines_2d:
        start = line.start.astype(int)
        end = line.end.astype(int)
        image = cv2.line(image, start, end, (0, 255, 0), 3)
    file_name = Path(image_path).name
    cv2.imwrite(f"temp/{file_name}", image)
However, when I open an image I get this misalignment:
You can see that the overall result is correct, just scaled by some factor. How do I unscale it?
I didn't find anything like a target_resize_test parameter in the triangulation configs.
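If the misalignment is a uniform scale, one likely cause is that the lines were detected on a resized copy of the image, so the endpoints live in the resized coordinate frame. In that case they can be mapped back with the ratio between the original and processed sizes. This is a generic sketch; the processed size used below is an assumption and depends on the resize setting active at detection time:

```python
import numpy as np

def unscale_endpoints(points, orig_size, proc_size):
    """Map (x, y) endpoints from resized-image coords back to the original image."""
    sx = orig_size[0] / proc_size[0]  # width ratio
    sy = orig_size[1] / proc_size[1]  # height ratio
    return np.asarray(points, dtype=float) * np.array([sx, sy])

# e.g. lines detected at 1024x768 while the original image is 2048x1536
pts = unscale_endpoints([[100.0, 50.0], [200.0, 75.0]], (2048, 1536), (1024, 768))
```

Applying this per endpoint before cv2.line should remove a pure scaling offset.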
Hi! I now have COLMAP results for my own dataset; how can I reconstruct a line map?
Should I run triangulation.py the same way as for Hypersim?
Could you please tell me what the parameters are in each camera HDF5 file under the folder data/ai_001_001/_detail/cam_00?
Hi, I just want to ask: is the default setting below the optimal parameter setting when I run colmap_triangulation?
https://github.com/cvg/limap/blob/main/cfgs/triangulation/default.yaml#L15
Did you have any timeline for releasing the incremental SfM pipeline?
colmap_map
├── database.db
├── images
│ ├── IMG_2077.jpg
│ ├── IMG_2078.jpg
│ ├── IMG_2079.jpg
│ ├── IMG_2081.jpg
│ ├── IMG_2082.jpg
│ ├── IMG_2083.jpg
│ └── IMG_2084.jpg
└── sparse
└── 0
├── cameras.bin
├── images.bin
├── points3D.bin
└── project.ini
I have already reconstructed it using COLMAP 3.8. Am I writing this correctly? I want to use my own image data to run it.
python3 runners/colmap_triangulation.py -c ./cfgs/triangulation/default.yaml -a ./runners/colmap_map/ --output_dir ./test/
*** Aborted at 1713508552 (unix time) try "date -d @1713508552" if you are using GNU date ***
PC: @ 0x0 (unknown)
*** SIGABRT (@0x3e800005093) received by PID 20627 (TID 0x70a1240b3000) from PID 20627; stack trace: ***
@ 0x70a02649a19e (unknown)
@ 0x70a123e42520 (unknown)
@ 0x70a123e969fc pthread_kill
@ 0x70a123e42476 raise
@ 0x70a123e287f3 abort
@ 0x70a122ce370e (unknown)
@ 0x70a0e5e52e5b __libunwind_Unwind_RaiseException
@ 0x70a0e46ae4cb __cxa_throw
@ 0x70a0262ac427 (unknown)
@ 0x70a0263afb17 (unknown)
@ 0x70a0262f2ef8 (unknown)
@ 0x5f593f7ee10e (unknown)
@ 0x5f593f7e4a7b _PyObject_MakeTpCall
@ 0x5f593f7fa15d (unknown)
@ 0x5f593f7ec8a8 _PyObject_GenericGetAttrWithDict
@ 0x5f593f7f1eeb PyObject_GetAttrString
@ 0x70a026349e2b (unknown)
@ 0x70a026322a25 (unknown)
@ 0x70a02634043c (unknown)
@ 0x70a026341a9a PyInit_pycolmap
@ 0x5f593f8f3a7a (unknown)
@ 0x5f593f7eec59 (unknown)
@ 0x5f593f7d95d7 _PyEval_EvalFrameDefault
@ 0x5f593f7ee9fc _PyFunction_Vectorcall
@ 0x5f593f7dccfa _PyEval_EvalFrameDefault
@ 0x5f593f7ee9fc _PyFunction_Vectorcall
@ 0x5f593f7d745c _PyEval_EvalFrameDefault
@ 0x5f593f7ee9fc _PyFunction_Vectorcall
@ 0x5f593f7d726d _PyEval_EvalFrameDefault
@ 0x5f593f7ee9fc _PyFunction_Vectorcall
@ 0x5f593f7d726d _PyEval_EvalFrameDefault
@ 0x5f593f7ee9fc _PyFunction_Vectorcall
Aborted (core dumped)
Running python runners/hypersim/fitnmerge.py --output_dir outputs/quickstart_fitnmerge
in the docker container fails.
Here is the error trace:
Traceback (most recent call last):
File "/limap/runners/hypersim/fitnmerge.py", line 41, in <module>
main()
File "/limap/runners/hypersim/fitnmerge.py", line 38, in main
run_scene_hypersim(cfg, dataset, cfg["scene_id"], cam_id=cfg["cam_id"])
File "/limap/runners/hypersim/fitnmerge.py", line 14, in run_scene_hypersim
linetracks = limap.runners.line_fitnmerge(cfg, imagecols, depths)
File "/limap/limap/runners/line_fitnmerge.py", line 106, in line_fitnmerge
_, neighbors, ranges = _runners.compute_sfminfos(cfg, imagecols)
File "/limap/limap/runners/functions.py", line 126, in compute_sfminfos
_psfm.run_colmap_sfm_with_known_poses(cfg["sfm"], imagecols, output_path=colmap_output_path, skip_exists=cfg["skip_exists"])
File "/limap/limap/pointsfm/colmap_sfm.py", line 195, in run_colmap_sfm_with_known_poses
run_hloc_matches(cfg["hloc"], image_path, Path(db_path), keypoints=keypoints_in_order, neighbors=neighbors, imagecols=imagecols_tmp)
File "/limap/limap/pointsfm/colmap_sfm.py", line 110, in run_hloc_matches
triangulation.estimation_and_geometric_verification(db_path, sfm_pairs)
File "/limap/third-party/Hierarchical-Localization/hloc/triangulation.py", line 109, in estimation_and_geometric_verification
pycolmap.verify_matches(
TypeError: verify_matches(): incompatible function arguments. The following argument types are supported:
1. (database_path: object, pairs_path: object, options: pycolmap.TwoViewGeometryOptions = <pycolmap.TwoViewGeometryOptions object at 0x7f81852bdcb0>) -> None
Invoked with: PosixPath('outputs/quickstart_fitnmerge/colmap_outputs/db.db'), PosixPath('outputs/quickstart_fitnmerge/colmap_outputs/hloc_outputs/pairs-exhaustive.txt'); kwargs: max_num_trials=20000, min_inlier_ratio=0.1
There seems to be a mismatch between the argument types expected by pycolmap and those supplied by the hloc triangulation code. Could this possibly result from the recently made changes in hloc and pycolmap?
Any help is appreciated! Thanks!
Hello Team,
I am not exactly sure if this information is already present in the LiMAP documentation.
Just like the way COLMAP or HLOC take a folder of images and do camera parameter estimation using techniques like SuperPoint + SuperGlue, can I use LiMAP along with GlueStick to do the camera parameter estimation?
I am really interested to see if I can use GlueStick (line features) for doing visual localization, and I am very curious to know if I can do that with LiMAP.
This might be a duplicate of this ticket: cvg/GlueStick#15. If that is the case, we can close one of the tickets.
Thanks.
Hello, first of all, thank you very much for contributing the code!
I followed the example step by step, but when I reached this python code:
python runners/7scenes/localization.py --dataset $dataset -s stairs --skip_exists --use_dense_depth --localization.optimize.loss_func TrivialLoss
The following error occurs:
[LOG] Output dir: tmp/7scenes/stairs
[LOG] Loading dir: tmp/7scenes/stairs
[LOG] weight dir: /home/danson/.limap/models
[2023/12/09 13:50:15 JointLoc INFO] Working on scene "stairs".
[2023/12/09 13:50:15 hloc INFO] Extracting local features with configuration:
{'model': {'max_keypoints': 4096, 'name': 'superpoint', 'nms_radius': 3},
'output': 'feats-superpoint-n4096-r1024',
'preprocessing': {'globs': ['*.color.png'],
'grayscale': True,
'resize_max': 1024}}
[2023/12/09 13:50:15 hloc INFO] Found 3000 images in root datasets/7scenes/stairs.
[2023/12/09 13:50:16 hloc INFO] Skipping the extraction.
[2023/12/09 13:50:17 hloc INFO] Creating the reference model.
[2023/12/09 13:50:21 hloc INFO] Kept 2000 images out of 3000.
[2023/12/09 13:50:22 hloc INFO] Matching local features with configuration:
{'model': {'name': 'superglue', 'sinkhorn_iterations': 5, 'weights': 'outdoor'},
'output': 'matches-superglue'}
[2023/12/09 13:50:22 hloc INFO] Skipping the matching.
[2023/12/09 13:50:22 hloc INFO] Matching local features with configuration:
{'model': {'name': 'superglue', 'sinkhorn_iterations': 5, 'weights': 'outdoor'},
'output': 'matches-superglue'}
[2023/12/09 13:50:22 hloc INFO] Skipping the matching.
[2023/12/09 13:50:23 hloc INFO] Correcting sfm using depth...
100%|████████████████████████████████████████████████████████████████████████████████████████████████| 2000/2000 [00:15<00:00, 131.40it/s]
[2023/12/09 13:50:39 JointLoc INFO] Running Point-only localization...
[2023/12/09 13:50:39 hloc.utils.parsers INFO] Imported 1000 images from query_list_with_intrinsics.txt
[2023/12/09 13:50:39 hloc INFO] Reading the 3D model...
[2023/12/09 13:50:39 hloc INFO] Starting localization...
100%|█████████████████████████████████████████████████████████████████████████████████████████████████| 1000/1000 [00:25<00:00, 38.76it/s]
[2023/12/09 13:51:05 hloc INFO] Localized 1000 / 1000 images.
[2023/12/09 13:51:05 hloc INFO] Writing poses to outputs/localization/7scenes/stairs/results_dense_point.txt...
[2023/12/09 13:51:05 hloc INFO] Writing logs to outputs/localization/7scenes/stairs/results_dense_point.txt_logs.pkl...
[2023/12/09 13:51:06 hloc INFO] Done!
[2023/12/09 13:51:06 JointLoc INFO] Coarse pose saved at outputs/localization/7scenes/stairs/results_dense_point.txt
[2023/12/09 13:51:07 hloc.pipelines.Cambridge.utils INFO] Results for file results_dense_point.txt:
Median errors: 0.047m, 1.252deg
Percentage of test images localized within:
1cm, 1deg : 2.10%
2cm, 2deg : 12.90%
3cm, 3deg : 29.10%
5cm, 5deg : 53.10%
25cm, 2deg : 68.60%
50cm, 5deg : 91.40%
500cm, 10deg : 97.10%
[2023/12/09 13:51:07 JointLoc INFO] Coarse pose read from outputs/localization/7scenes/stairs/results_dense_point.txt
[2023/12/09 13:51:08 JointLoc INFO] Running LIMAP fit&merge
[LOG] Number of images: 2000
[LOG] Output dir: tmp/7scenes/stairs
[LOG] Loading dir: tmp/7scenes/stairs
[LOG] weight dir: /home/danson/.limap/models
[LOG] Start 2D line detection and description (detector = lsd, extractor = sold2, n_images = 2000)...
100%|█████████████████████████████████████████████████████████████████████████████████████████████| 2000/2000 [00:00<00:00, 607034.37it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████| 2000/2000 [02:17<00:00, 14.57it/s]
[LOG] Start 2D line detection and description (detector = lsd, extractor = sold2, n_images = 2000)...
100%|█████████████████████████████████████████████████████████████████████████████████████████████| 2000/2000 [00:00<00:00, 585061.24it/s]
[LOG] Start 2D line detection and description (detector = lsd, extractor = sold2, n_images = 1000)...
100%|█████████████████████████████████████████████████████████████████████████████████████████████| 1000/1000 [00:00<00:00, 574877.19it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████| 2000/2000 [00:00<00:00, 593841.71it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████| 1000/1000 [00:00<00:00, 590830.26it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████| 1000/1000 [00:00<00:00, 299123.09it/s]
[LOG] Starting localization with points+lines...
0%| | 0/1000 [00:00<?, ?it/s]
Traceback (most recent call last):
File "/home/danson/Code_center/limap/runners/7scenes/localization.py", line 164, in <module>
main()
File "/home/danson/Code_center/limap/runners/7scenes/localization.py", line 158, in main
final_poses = _runners.line_localization(
File "/home/danson/Code_center/limap/limap/runners/line_localization.py", line 207, in line_localization
final_pose, ransac_stats = _estimators.pl_estimate_absolute_pose(
File "/home/danson/Code_center/limap/limap/estimators/absolute_pose/__init__.py", line 26, in pl_estimate_absolute_pose
return _pl_estimate_absolute_pose(cfg, l3ds, l3d_ids, l2ds, p3ds, p2ds, camera, campose=campose,
File "/home/danson/Code_center/limap/limap/estimators/absolute_pose/_pl_estimate_absolute_pose.py", line 13, in _pl_estimate_absolute_pose
jointloc_cfg['loss_function'] = getattr(_ceresbase, jointloc_cfg['loss_func'])(*jointloc_cfg['loss_func_args'])
TypeError: init(): incompatible constructor arguments. The following argument types are supported:
1. _limap._ceresbase.TrivialLoss()
Invoked with: 1.0
I need help!
Thanks for your great work!
While running the quickstart
on my machine, I found an issue with importing _limap
with the errors as shown below:
(limap) [TITAN] ➜ limap git:(main) ✗ python runners/hypersim/triangulation.py --output_dir outputs/quickstart_triangulation
Traceback (most recent call last):
File "/home/xn/repo/limap/runners/hypersim/triangulation.py", line 6, in <module>
from loader import read_scene_hypersim
File "/home/xn/repo/limap/runners/hypersim/loader.py", line 7, in <module>
import limap.base as _base
File "/home/xn/repo/limap/limap/__init__.py", line 3, in <module>
from _limap import *
ImportError: /home/xn/anaconda3/envs/limap/bin/../lib/libstdc++.so.6: version `GLIBCXX_3.4.30' not found (required by /home/xn/repo/limap/_limap.cpython-39-x86_64-linux-gnu.so)
After checking the code, I found the issue is caused by the sys.path.append at triangulation.py#L4.
When I import _limap first, at triangulation.py#L3, the problem is solved.
I executed this command: python runners/hypersim/fitnmerge.py --output_dir outputs/quickstart_fitnmerge
Afterwards, the terminal finally displays the prompt:
Computing visual neighbors... (n_neighbors = 100)
[LOG] Start 2D line detection and description (detector = deeplsd, n_images = 98)...
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 98/98 [02:55<00:00, 1.79s/it]
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 98/98 [00:08<00:00, 11.26it/s]
[████████████████████]100%(98/98) [130.666it/s]
[29-11-2023 20:02:10 INFO ] # graph nodes: 18396
[29-11-2023 20:02:10 INFO ] # graph edges: 246244
[29-11-2023 20:02:10 INFO ] # tracks: 1197
[LOG] tracks after iterative remerging: 1162 / 1196
Writing linetracks to outputs/quickstart_fitnmerge/fitnmerge_finaltracks...
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1162/1162 [00:00<00:00, 20287.13it/s]
Writing all linetracks to a single file...
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 651/651 [00:00<00:00, 35245.80it/s]
[Track Report] (N2, N4, N6, N8, N10, N20, N50) = (1094, 651, 501, 398, 336, 156, 5)
average supporting images (>= 3): 9702 / 781 = 12.42
average supporting lines (>= 3): 14031 / 781 = 17.97
average supporting images (>= 4): 9312 / 651 = 14.30
average supporting lines (>= 4): 13579 / 651 = 20.86
/home/danson/Code_center/limap/limap/runners/line_fitnmerge.py(179)line_fitnmerge()
-> VisTrack.vis_reconstruction(imagecols, n_visible_views=cfg["n_visible_views"], width=2)
(Pdb)
Do I need to input anything here? Or should I close the terminal and continue executing: python runners/hypersim/triangulation.py --output_dir outputs/quickstart_triangulation?
Hello, I did line triangulation from COLMAP's estimated poses with my own dataset, but the triangulation doesn't seem to work well, even though there are enough highly similar detected 2D lines when I compare many images.
Although I changed the extractor and matcher in the cfg's default.yaml file (e.g. wireframe + gluestick -> superpoint endpoint + superglue endpoint), it still doesn't produce many 3D lines.
Is there any way to produce more 3D lines by modifying the cfg's default values?
(green lines: lines detected by DeepLSD; blue lines: triangulated lines)
Thanks in advance :)
Hi, after finishing all the requirements successfully, I ran python -c "import limap" and got this error:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/dewran/limap/limap/__init__.py", line 3, in <module>
from _limap import *
ImportError: /lib/x86_64-linux-gnu/libfreeimage.so.3: undefined symbol: TIFFFieldDataType, version LIBTIFF_4.0
I have no idea how to deal with it, could you please help me?
My CUDA version is 11.5 and I have Anaconda3-2023.03 installed.
Thanks for the wonderful work!
Following up on cvg/pixel-perfect-sfm#109 (comment) discussion.
I want to ask if LIMAP can estimate the poses for low-textured objects. I use colmap_triangulation.py
on COLMAP data, and I see that wrong poses are estimated for 360° captures.
For example: For this cube data, I have the following results:
Where the poses/camera locations should be 360 views around the cube. Would you happen to have any tips I can use to get better pose estimation?
P.S. The current cube information in sparse/0
is generated using Pixel-Perfect-SfM.
I really appreciate any help you can provide.
I am a rookie. I would like to ask: if I do not have root permission under Linux (i.e., I cannot use the sudo command), how can I install the dependencies (CMake, COLMAP, PoseLib, HDF5)? I can't install to the /usr/local folder. I am now trying to install them to another folder, but it still reports a lot of errors.
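A common workaround when sudo is unavailable is to install everything into a per-user prefix and point the build tools at it. This is a generic sketch, not from the LIMAP docs; the prefix path and flow are assumptions:

```shell
# Hypothetical non-root install layout; $HOME/.local is an assumed prefix.
PREFIX="$HOME/.local"
mkdir -p "$PREFIX/bin" "$PREFIX/lib" "$PREFIX/include"
# Each CMake-based dependency (PoseLib, COLMAP, HDF5, ...) would then be built as:
#   cmake -B build -DCMAKE_INSTALL_PREFIX="$PREFIX"
#   cmake --build build -j && cmake --install build
export PATH="$PREFIX/bin:$PATH"
export CMAKE_PREFIX_PATH="$PREFIX${CMAKE_PREFIX_PATH:+:$CMAKE_PREFIX_PATH}"
echo "installed prefix: $PREFIX"
```

With CMAKE_PREFIX_PATH set this way, later builds can usually find the packages installed under the user prefix without touching /usr/local.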
Hi, I am quite interested in your work, which is excellent.
I am trying to run your code, but some installation problems occur that I cannot deal with, as I am new to Linux and CUDA.
Running python -m pip install -r requirements.txt produces this error:
ERROR: Directory './third-party/pytlsd' is not installable. Neither 'setup.py' nor 'pyproject.toml' found.
Could you please help me out?
When I run the examples, both of them raise this problem:
Computing visual neighbors... (n_neighbors = 100)
[LOG] Start 2D line detection and description (detector = sold2, n_images = 98)...
Traceback (most recent call last):
File "/home/everbright/Codes/limap/runners/hypersim/fitnmerge.py", line 41, in <module>
main()
File "/home/everbright/Codes/limap/runners/hypersim/fitnmerge.py", line 38, in main
run_scene_hypersim(cfg, dataset, cfg["scene_id"], cam_id=cfg["cam_id"])
File "/home/everbright/Codes/limap/runners/hypersim/fitnmerge.py", line 14, in run_scene_hypersim
linetracks = limap.runners.line_fitnmerge(cfg, imagecols, depths)
File "/home/everbright/Codes/limap/limap/runners/line_fitnmerge.py", line 100, in line_fitnmerge
all_2d_segs, _ = _runners.compute_2d_segs(cfg, imagecols, compute_descinfo=cfg["line2d"]["compute_descinfo"])
File "/home/everbright/Codes/limap/limap/runners/functions.py", line 136, in compute_2d_segs
detector = limap.line2d.get_detector(cfg["line2d"]["detector"], max_num_2d_segs=cfg["line2d"]["max_num_2d_segs"], do_merge_lines=cfg["line2d"]["do_merge_lines"], visualize=cfg["line2d"]["visualize"], weight_path=weight_path)
File "/home/everbright/Codes/limap/limap/line2d/register_detector.py", line 16, in get_detector
return SOLD2Detector(options)
File "/home/everbright/Codes/limap/limap/line2d/SOLD2/sold2.py", line 16, in __init__
self.detector = SOLD2LineDetector(weight_path=self.weight_path)
File "/home/everbright/Codes/limap/limap/line2d/SOLD2/sold2_wrapper.py", line 28, in __init__
self.initialize_line_matcher()
File "/home/everbright/Codes/limap/limap/line2d/SOLD2/sold2_wrapper.py", line 38, in initialize_line_matcher
self.line_matcher = LineMatcher(
File "/home/everbright/Codes/limap/limap/line2d/SOLD2/model/line_matcher.py", line 31, in __init__
checkpoint = torch.load(ckpt_path, map_location=self.device)
File "/home/everbright/anaconda3/envs/LIMAP/lib/python3.9/site-packages/torch/serialization.py", line 713, in load
return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
File "/home/everbright/anaconda3/envs/LIMAP/lib/python3.9/site-packages/torch/serialization.py", line 920, in _legacy_load
magic_number = pickle_module.load(f, **pickle_load_args)
EOFError: Ran out of input
After updating to Ceres 2.2.0 as enabled by b58d4fb the localization pipeline becomes significantly slower (e.g. on 7Scenes stairs 7.5min -> 40min, and for the single-case test script 3s -> 5s).
Hello, I am very interested in your work and would like to build some applications on top of the line map, such as visual localization.
Now that I have the initial pose and some 2D-3D line correspondences, how can I construct optimization problems to optimize the initial pose? (In the case of using only line correspondences instead of point line union)
Looking forward to your reply and wishing you a happy life!
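For the line-only case above, one common formulation (a sketch under assumed conventions, not LIMAP's actual cost function) penalizes the perpendicular distance between the projected endpoints of each matched 3D line and the infinite line through the corresponding detected 2D segment:

```python
# Sketch of a line reprojection residual (not LIMAP's exact cost):
# project both 3D endpoints and measure their signed distance to the
# infinite line through the detected 2D segment.
import numpy as np

def line_residual(R, t, K, X_start, X_end, l2d_start, l2d_end):
    def project(X):
        x = K @ (R @ X + t)   # camera frame, then intrinsics
        return x[:2] / x[2]   # perspective division
    # homogeneous line through the 2D segment, normalized so that
    # dot(l, [x, y, 1]) is the signed point-to-line distance
    p1 = np.append(l2d_start, 1.0)
    p2 = np.append(l2d_end, 1.0)
    l = np.cross(p1, p2)
    l = l / np.linalg.norm(l[:2])
    d1 = l @ np.append(project(X_start), 1.0)
    d2 = l @ np.append(project(X_end), 1.0)
    return np.array([d1, d2])  # both ~0 when the projection lies on the line

r = line_residual(np.eye(3), np.zeros(3), np.eye(3),
                  np.array([0., 0., 1.]), np.array([1., 0.5, 1.]),
                  np.array([0., 0.]), np.array([2., 0.]))
# first endpoint lies on the line (residual 0); second sits 0.5 above it
```

Stacking these residuals over all correspondences and minimizing over a pose parameterization (e.g. with scipy.optimize.least_squares, starting from the initial pose) gives a basic line-only refinement.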
Hello, I would like to run my own data through LIMAP. I'm interested in changing the input of the 'Hybrid Localization with Points and Lines' code to our data instead of using 7Scenes. Could you let me know what's needed for that? It would be really appreciated!
When I run python -c "import limap", I encounter the error "Undefined symbol: TIFFFieldDataType, version LIBTIFF_4.0" on Ubuntu 20.04.
(pytorch) ubuntu@ml-ubuntu20-04-desktop-v1-0-108gb-100m:~/Desktop/limap$ python -c "import limap"
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/ubuntu/Desktop/limap/limap/__init__.py", line 3, in <module>
from _limap import *
ImportError: /lib/x86_64-linux-gnu/libfreeimage.so.3: undefined symbol: TIFFFieldDataType, version LIBTIFF_4.0
Then I tried the solution from this GitHub issue: colmap/colmap#1937 by using conda uninstall libtiff to resolve it. However, I faced another issue that has been bothering me all day.
(pytorch) ubuntu@ml-ubuntu20-04-desktop-v1-0-108gb-100m:~/Desktop/limap$ python -c "import limap"
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/ubuntu/Desktop/limap/limap/__init__.py", line 6, in <module>
from . import point2d
File "/home/ubuntu/Desktop/limap/limap/point2d/__init__.py", line 1, in <module>
from .superpoint import *
File "/home/ubuntu/Desktop/limap/limap/point2d/superpoint/__init__.py", line 1, in <module>
from .main import run_superpoint
File "/home/ubuntu/Desktop/limap/limap/point2d/superpoint/main.py", line 12, in <module>
from hloc import extract_features
File "/home/ubuntu/Desktop/limap/third-party/Hierarchical-Localization/hloc/extract_features.py", line 12, in <module>
import PIL.Image
ModuleNotFoundError: No module named 'PIL.Image'
I am using CUDA 11.8, torch 1.13.1, colmap3.8.
Hello, thank you very much for providing open source code.
I encountered the following error while executing the QuickStart instructions in sequence. I hope you can provide assistance.
I followed the docs and continued executing until an error occurred at this command:
python runners/hypersim/fitnmerge.py --output_dir outputs/quickstart_fitnmerge
The terminal reported the following error. I guess it's because the relevant models were not downloaded (for example, I don't have the weight dir mentioned here: /home/danson/.limap/models). Please help me.
[LOG] Number of images: 98
[LOG] Output dir: outputs/quickstart_fitnmerge
[LOG] Loading dir: outputs/quickstart_fitnmerge
[LOG] weight dir: /home/danson/.limap/models
[SuperPoint] Extracting local features with configuration:
{'model': {'max_keypoints': 4096, 'name': 'superpoint', 'nms_radius': 3},
'output': 'feats-superpoint-n4096-r1024',
'preprocessing': {'grayscale': True, 'resize_max': 1024}}
[2023/11/29 13:14:09 hloc INFO] Found 98 images in root outputs/quickstart_fitnmerge/colmap_outputs/images.
Traceback (most recent call last):
File "/home/danson/Code_center/limap/runners/hypersim/fitnmerge.py", line 41, in <module>
main()
File "/home/danson/Code_center/limap/runners/hypersim/fitnmerge.py", line 38, in main
run_scene_hypersim(cfg, dataset, cfg["scene_id"], cam_id=cfg["cam_id"])
File "/home/danson/Code_center/limap/runners/hypersim/fitnmerge.py", line 14, in run_scene_hypersim
linetracks = limap.runners.line_fitnmerge(cfg, imagecols, depths)
File "/home/danson/Code_center/limap/limap/runners/line_fitnmerge.py", line 106, in line_fitnmerge
_, neighbors, ranges = _runners.compute_sfminfos(cfg, imagecols)
File "/home/danson/Code_center/limap/limap/runners/functions.py", line 126, in compute_sfminfos
_psfm.run_colmap_sfm_with_known_poses(cfg["sfm"], imagecols, output_path=colmap_output_path, skip_exists=cfg["skip_exists"])
File "/home/danson/Code_center/limap/limap/pointsfm/colmap_sfm.py", line 182, in run_colmap_sfm_with_known_poses
run_hloc_matches(cfg["hloc"], image_path, Path(db_path), keypoints=keypoints_in_order, neighbors=neighbors, imagecols=imagecols_tmp)
File "/home/danson/Code_center/limap/limap/pointsfm/colmap_sfm.py", line 76, in run_hloc_matches
feature_path = run_superpoint(feature_conf, image_path, outputs, keypoints=keypoints)
File "/home/danson/Code_center/limap/venv/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/home/danson/Code_center/limap/limap/point2d/superpoint/main.py", line 55, in run_superpoint
model = SuperPoint(conf['model']).eval().to(device)
File "/home/danson/Code_center/limap/limap/point2d/superpoint/superpoint.py", line 144, in __init__
self.load_state_dict(torch.load(str(path)))
File "/home/danson/Code_center/limap/venv/lib/python3.9/site-packages/torch/serialization.py", line 713, in load
return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
File "/home/danson/Code_center/limap/venv/lib/python3.9/site-packages/torch/serialization.py", line 920, in _legacy_load
magic_number = pickle_module.load(f, **pickle_load_args)
EOFError: Ran out of input
Hello, I would like to ask a question:
Does the code project the two endpoints of a 3D line segment into the 2D image using perspective projection, and then connect them?
e.g. x_start = K * (R * X_start_In3DWorld + T) followed by division by depth, and likewise x_end = K * (R * X_end_In3DWorld + T),
where R, T are the corresponding camera rotation matrix and translation vector, and K (the intrinsic_matrix) is the camera intrinsic parameter matrix.
I think your code seems to do this.
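To make the convention above concrete, here is a minimal sketch of projecting a 3D segment's two endpoints with the standard pinhole model x ~ K (R X + t); the intrinsics and points are made-up values, not taken from LIMAP:

```python
import numpy as np

def project_point(K, R, t, X):
    x = K @ (R @ X + t)   # rotate/translate into the camera frame, apply intrinsics
    return x[:2] / x[2]   # perspective division by depth

K = np.array([[500., 0., 320.],
              [0., 500., 240.],
              [0., 0., 1.]])
R, t = np.eye(3), np.zeros(3)
start_2d = project_point(K, R, t, np.array([0., 0., 2.]))
end_2d = project_point(K, R, t, np.array([1., 0., 2.]))
# drawing a segment between start_2d and end_2d gives the image of the 3D line
```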
Hello, thanks for the nice work!
There may be a bug in calculating the "innerseg" between two lines.
According to the comment on the following code, the author intends to unproject the two endpoints of l1 onto l2. However, the actual code computes the projection parameters using l1's direction (l1_dir), which doesn't seem to yield the intended "innerseg".
// innerseg
template <typename LineType>
bool get_innerseg(const LineType& l1, const LineType& l2, LineType& innerseg) {
    // unproject the two endpoints of l1 to l2 and select the inner seg along l2
    // return false if there is no overlap between the unprojection and l2
    auto l1_dir = l1.direction();
    double denom = (l2.end - l2.start).dot(l1_dir);
    double nume_start = (l1.start - l2.start).dot(l1_dir);
    double t1 = nume_start / (denom + EPS);
    double nume_end = (l1.end - l2.start).dot(l1_dir);
    double t2 = nume_end / (denom + EPS);
    if (t1 > t2)
        std::swap(t1, t2);
    if (t1 >= 1.0 || t2 <= 0.0)
        return false;
    innerseg.start = l2.start + (l2.end - l2.start) * std::max(t1, 0.0);
    innerseg.end = l2.start + (l2.end - l2.start) * std::min(t2, 1.0);
    return true;
}
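For reference, here is a hypothetical Python version of the behavior the code comment describes — projecting l1's endpoints onto l2 using l2's own direction and keeping the overlap. It is an illustration of that interpretation, not LIMAP's implementation:

```python
# Hypothetical reference for "project l1's endpoints onto l2 and keep the
# overlap", parameterizing the projection with l2's own direction.
# Assumes l2 is non-degenerate (nonzero length).
import numpy as np

def innerseg(l1_start, l1_end, l2_start, l2_end):
    d = l2_end - l2_start
    denom = d @ d  # squared length of l2
    t1 = (l1_start - l2_start) @ d / denom
    t2 = (l1_end - l2_start) @ d / denom
    t1, t2 = min(t1, t2), max(t1, t2)
    if t1 >= 1.0 or t2 <= 0.0:
        return None  # projected interval does not overlap l2
    return l2_start + d * max(t1, 0.0), l2_start + d * min(t2, 1.0)
```

For instance, projecting the segment (3,1)-(7,1) onto the segment (0,0)-(10,0) yields the inner segment (3,0)-(7,0).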
Hello, I would like to ask whether it is possible to accept stream input? If so, where should I start? Also, can the SuperPoint model be loaded only once?
Thank you for the wonderful work you've done. I'm facing an issue after running the following steps. I believe the problem is linked to DeepLSD (https://github.com/cvg/DeepLSD#usage), but as this model is essential to the project, I suggest including it as part of the LIMAP repository (i.e. packages).
Steps:
python runners/hypersim/fitnmerge.py --output_dir outputs/quickstart_fitnmerge
Logs:
Computing visual neighbors... (n_neighbors = 100)
[LOG] Start 2D line detection and description (detector = deeplsd, n_images = 98)...
Downloading DeepLSD model...
--2023-07-19 15:07:22-- https://www.polybox.ethz.ch/index.php/s/XVb30sUyuJttFys/download
Resolving www.polybox.ethz.ch (www.polybox.ethz.ch)... 129.132.71.243
Connecting to www.polybox.ethz.ch (www.polybox.ethz.ch)|129.132.71.243|:443... connected.
HTTP request sent, awaiting response... 503 Service Unavailable
2023-07-19 15:07:22 ERROR 503: Service Unavailable.
Traceback (most recent call last):
File "runners/hypersim/fitnmerge.py", line 41, in <module>
main()
File "runners/hypersim/fitnmerge.py", line 38, in main
run_scene_hypersim(cfg, dataset, cfg["scene_id"], cam_id=cfg["cam_id"])
File "runners/hypersim/fitnmerge.py", line 14, in run_scene_hypersim
linetracks = limap.runners.line_fitnmerge(cfg, imagecols, depths)
File "/home/amughrabi/projects/limap/limap/runners/line_fitnmerge.py", line 98, in line_fitnmerge
all_2d_segs, _ = _runners.compute_2d_segs(cfg, imagecols, compute_descinfo=cfg["line2d"]["compute_descinfo"])
File "/home/amughrabi/projects/limap/limap/runners/functions.py", line 135, in compute_2d_segs
detector = limap.line2d.get_detector(cfg["line2d"]["detector"], max_num_2d_segs=cfg["line2d"]["max_num_2d_segs"], do_merge_lines=cfg["line2d"]["do_merge_lines"], visualize=cfg["line2d"]["visualize"], weight_path=weight_path)
File "/home/amughrabi/projects/limap/limap/line2d/register_detector.py", line 25, in get_detector
return DeepLSDDetector(options)
File "/home/amughrabi/projects/limap/limap/line2d/DeepLSD/deeplsd.py", line 26, in __init__
self.download_model(ckpt)
File "/home/amughrabi/projects/limap/limap/line2d/DeepLSD/deeplsd.py", line 39, in download_model
subprocess.run(cmd, check=True)
File "/home/amughrabi/anaconda3/envs/limap/lib/python3.7/subprocess.py", line 512, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['wget', 'https://www.polybox.ethz.ch/index.php/s/XVb30sUyuJttFys/download', '-O', '/home/amughrabi/.limap/models/line2d/DeepLSD/deeplsd_md.tar']' returned non-zero exit status 8.
pip install -r requirements.txt
ERROR: Could not find a version that satisfies the requirement open3d==0.16.0
How can I localize from an image, or convert my data into a .npy like runners/tests/localization_test_data_stairs_1.npy?
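The exact schema of localization_test_data_stairs_1.npy is not documented here, but a dictionary payload can generically be stored and restored as a pickle-enabled .npy file; the keys below are placeholders, not the file's actual fields:

```python
# Generic save/load pattern for a dict payload in .npy form.
# The keys are made-up placeholders, not the schema of the LIMAP test file.
import numpy as np

payload = {"query_pose": np.eye(4), "lines_2d": np.zeros((5, 4))}
np.save("/tmp/example_payload.npy", payload, allow_pickle=True)
loaded = np.load("/tmp/example_payload.npy", allow_pickle=True).item()
```

Inspecting the shipped test file with np.load(..., allow_pickle=True) in the same way would reveal the actual keys expected by the test runner.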
We have already set up the whole environment for COLMAP and LIMAP (a GUI window pops up after running colmap gui, and there is no error after running python3 -c "import limap"). But when we tried to run the Quickstart command python runners/hypersim/triangulation.py --output_dir outputs/quickstart_triangulation on an Ubuntu virtual machine in VMware, it failed with the attached error. Is there a way to run it without CUDA, using just CPUs?
Hello,
First of all, thank you for sharing this amazing work. I was trying to create a very simple example based on this tutorial http://b1ueber2y.me/projects/LIMAP/docs/tutorials/line2d.html to detect, describe and match lines. I have solved some errors by taking a look at the code in line_triangulation.py and functions.py, but it is still not working. It ends up with an Aborted error.
Here is the code:
import limap.util.config
import limap.base
import limap.line2d
cfg = limap.util.config.load_config("/data/libraries/limap/cfgs/triangulation/default.yaml") # example config file
view1 = limap.base.CameraView(limap.base.Camera("PINHOLE", 0), "/data/libraries/limap/data/ai_001_001/images/scene_cam_00_final_preview/frame.0000.color.jpg") # initiate an limap.base.CameraView instance for detection. You can specify the height and width to resize into in the limap.base.Camera instance at initialization.
view2 = limap.base.CameraView(limap.base.Camera("PINHOLE", 1), "/data/libraries/limap/data/ai_001_001/images/scene_cam_00_final_preview/frame.0001.color.jpg")
detector = limap.line2d.get_detector(cfg["line2d"]["detector"]) # get a line detector
segs1 = detector.detect(view1) # detection
desc1 = detector.extract(view1, segs1) # description
segs2 = detector.detect(view2) # detection
desc2 = detector.extract(view2, segs2) # description
extractor = limap.line2d.get_detector(cfg) # get a line extractor
matcher = limap.line2d.get_matcher(cfg["line2d"]["matcher"], extractor) # initiate a line matcher
matches = matcher.match_pair(desc1, desc2) # matching
It seems that execution stops when the method segs1 = detector.detect(view1) is called. Any idea on how to solve the problem?
Thank you very much in advance!
Running python -m pip install -Ive . in a conda environment fails with error: subprocess-exited-with-error
× python setup.py develop did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
full command: /data/miniconda3/envs/limap/bin/python -c '
exec(compile('"'"''"'"''"'"'
# This is <pip-setuptools-caller> -- a caller that pip uses to run setup.py
#
# - It imports setuptools before invoking setup.py, to enable projects that directly
# import from `distutils.core` to work with newer packaging standards.
# - It provides a clear error message when setuptools is not installed.
# - It sets `sys.argv[0]` to the underlying `setup.py`, when invoking `setup.py` so
# setuptools doesn'"'"'t think the script is `-c`. This avoids the following warning:
# manifest_maker: standard file '"'"'-c'"'"' not found".
# - It generates a shim setup.py, for handling setup.cfg-only projects.
import os, sys, tokenize
try:
import setuptools
except ImportError as error:
print(
"ERROR: Can not execute `setup.py` since setuptools is not available in "
"the build environment.",
file=sys.stderr,
)
sys.exit(1)
__file__ = %r
sys.argv[0] = __file__
if os.path.exists(__file__):
filename = __file__
with tokenize.open(__file__) as f:
setup_py_code = f.read()
else:
filename = "<auto-generated setuptools caller>"
setup_py_code = "from setuptools import setup; setup()"
exec(compile(setup_py_code, filename, "exec"))
'"'"''"'"''"'"' % ('"'"'/ali-nas/loc/limap/setup.py'"'"',), "<pip-setuptools-caller>", "exec"))' develop --no-deps
cwd: /ali-nas/loc/limap/
error: subprocess-exited-with-error
× python setup.py develop did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
Computing visual neighbors... (n_neighbors = 20)
[LOG] Start 2D line detection and description (detector = deeplsd, extractor = wireframe, n_images = 98)...
100% | | 98/98 [00:32<00:00, 3.05it/s]
Loaded SuperPoint model
100% | |98/98 [00:02<00:00, 32.92it/s]
[LOG] Start matching 2D lines... (extractor = wireframe, matcher = gluestick, n_images = 98, n_neighbors = 20)
Loaded SuperPoint model
0%| | 0/98 [00:00<?, ?it/s]../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [2,0,0], thread: [96,0,0] Assertion idx_dim >= 0 && idx_dim < index_size && "index out of bounds"
failed.
../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [2,0,0], thread: [98,0,0] Assertion idx_dim >= 0 && idx_dim < index_size && "index out of bounds"
failed.
../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [2,0,0], thread: [100,0,0] Assertion idx_dim >= 0 && idx_dim < index_size && "index out of bounds"
failed.
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA
to enable device-side assertions.
0%| | 0/98 [00:00<?, ?it/s]
Wow, great job!!!
How do I build a line map on my own dataset?
I want to use SOLD² for line extraction and matching.
I look forward to your next update!
Wish you a happy life!
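The runners accept config overrides on the command line; the --line2d.detector.method pattern appears elsewhere in this thread, while the extractor and matcher keys below are assumptions, so check the cfgs/ folder for the exact option names:

```shell
# Switch the line detector/extractor/matcher to SOLD2 via config overrides
# (extractor/matcher key names are assumptions -- verify against cfgs/).
python runners/hypersim/triangulation.py \
    --output_dir outputs/quickstart_sold2 \
    --line2d.detector.method sold2 \
    --line2d.extractor.method sold2 \
    --line2d.matcher.method sold2
```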
When I run the Line Mapping Quickstart python runners/hypersim/triangulation.py --output_dir outputs/quickstart_triangulation, the following error occurs:
Computing visual neighbors... (n_neighbors = 20)
[LOG] Start 2D line detection and description (detector = deeplsd, extractor = wireframe, n_images = 98)...
100%|███████████████████████████████████████████| 98/98 [00:21<00:00, 4.46it/s]
Loaded SuperPoint model
100%|███████████████████████████████████████████| 98/98 [00:02<00:00, 43.43it/s]
[LOG] Start matching 2D lines... (extractor = wireframe, matcher = gluestick, n_images = 98, n_neighbors = 20)
Loaded SuperPoint model
Traceback (most recent call last):
  File "/home/rylynn/limap/runners/hypersim/triangulation.py", line 42, in <module>
    main()
  File "/home/rylynn/limap/runners/hypersim/triangulation.py", line 39, in main
    run_scene_hypersim(cfg, dataset, cfg["scene_id"], cam_id=cfg["cam_id"])
  File "/home/rylynn/limap/runners/hypersim/triangulation.py", line 14, in run_scene_hypersim
    linetracks = limap.runners.line_triangulation(cfg, imagecols)
  File "/home/rylynn/limap/limap/runners/line_triangulation.py", line 66, in line_triangulation
    matches_dir = _runners.compute_matches(cfg, descinfo_folder, imagecols.get_img_ids(), neighbors)
  File "/home/rylynn/limap/limap/runners/functions.py", line 207, in compute_matches
    matcher = limap.line2d.get_matcher(cfg["line2d"]["matcher"], extractor, n_neighbors=cfg["n_neighbors"], weight_path=weight_path)
  File "/home/rylynn/limap/limap/line2d/register_matcher.py", line 38, in get_matcher
    return GlueStickMatcher(extractor, options)
  File "/home/rylynn/limap/limap/line2d/GlueStick/matcher.py", line 20, in __init__
    ckpt = torch.load(ckpt, map_location='cpu')['model']
  File "/home/rylynn/anaconda3/lib/python3.9/site-packages/torch/serialization.py", line 705, in load
    with _open_zipfile_reader(opened_file) as opened_zipfile:
  File "/home/rylynn/anaconda3/lib/python3.9/site-packages/torch/serialization.py", line 242, in __init__
    super(_open_zipfile_reader, self).__init__(torch._C.PyTorchFileReader(name_or_buffer))
RuntimeError: PytorchStreamReader failed reading zip archive: failed finding central directory
I have no idea what is going on here. The previous installation steps all completed successfully, so I don't think any files are missing.
When I run Line Mapping on my own dataset, the same error occurs.
Any help would be appreciated...
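"failed finding central directory" almost always means the checkpoint file on disk is a truncated or corrupted download, not a broken installation. Since torch.load reads zip-format checkpoints, a cheap stdlib check can confirm this:

```python
import zipfile

def is_valid_checkpoint(path):
    """Return True if the file is a well-formed zip archive.

    torch.load for zip-format checkpoints requires a valid zip; a download
    that was cut off mid-transfer fails this check.
    """
    return zipfile.is_zipfile(path)
```

Run it on the GlueStick weight file under ~/.limap/models (the exact filename there is an assumption; locate it from the traceback's ckpt path). If the check fails, delete the file and rerun so it is re-downloaded.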
Hello, when I build a line map on a dataset with over 1000 images, I get:
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 938.00 MiB (GPU 0; 5.77 GiB total capacity; 1.07 GiB already allocated; 879.31 MiB free; 3.09 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
So, I would like to ask whether certain parameters can be changed to accommodate GPUs with less memory?
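Besides lowering the number of neighbors or the image resolution in the config (both reduce peak memory; the exact option names should be checked in cfgs/), the allocator hint from the error message itself is worth trying:

```shell
# Cap the size of cached allocator blocks to reduce fragmentation
# (128 MiB is a starting guess; tune per the PyTorch memory-management docs).
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
# then rerun the mapping script, e.g.:
# python runners/hypersim/triangulation.py --output_dir outputs/quickstart_triangulation
```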
First, thank you for the great work. I'm trying to install limap via Docker, but currently there are issues with my build.
When I ran the build with the Dockerfile provided in the repo, I encountered: No matching distribution found for torch==1.12.0
=> CACHED [stage-1 14/16] COPY --from=intermediate /limap /limap 0.0s
=> ERROR [stage-1 15/16] RUN python -m pip install torch==1.12.0 torchvision==0.13.0 --index-url https://download.pytorch.org/whl/cu115 2.7s
------
> [stage-1 15/16] RUN python -m pip install torch==1.12.0 torchvision==0.13.0 --index-url https://download.pytorch.org/whl/cu115:
0.527 Looking in indexes: https://download.pytorch.org/whl/cu115
1.780 ERROR: Could not find a version that satisfies the requirement torch==1.12.0 (from versions: 1.11.0+cu115)
1.780 ERROR: No matching distribution found for torch==1.12.0
------
Dockerfile:121
--------------------
119 | # Copy the repository from the first image
120 | COPY --from=intermediate /limap /limap
121 | >>> RUN python -m pip install torch==1.12.0 torchvision==0.13.0 --index-url https://download.pytorch.org/whl/cu115
122 | # RUN python -m pip install torch==1.12.0+cu116 torchvision==0.13.0+cu116 torchaudio==0.12.0 --extra-index-url https://download.pytorch.org/whl/cu116
123 | # RUN python -m pip install torch==1.11.0 torchvision==0.12.0 --index-url https://download.pytorch.org/whl/cu115
--------------------
ERROR: failed to solve: process "/bin/sh -c python -m pip install torch==1.12.0 torchvision==0.13.0 --index-url https://download.pytorch.org/whl/cu115" did not complete successfully: exit code: 1
Then I changed the torch version to torch==1.11.0 torchvision==0.12.0, similar to the previous commit. This time, there is a build issue with hawp: No module named 'torch'.
=> CACHED [stage-1 15/16] RUN python -m pip install torch==1.11.0 torchvision==0.12.0 --index-url https://download.pytorch.org/whl/cu115 0.0s
=> ERROR [stage-1 16/16] RUN python -m pip install --upgrade pip setuptools && cd limap && python --version && pip --version && python -m pip install -r r 9.8s
------
> [stage-1 16/16] RUN python -m pip install --upgrade pip setuptools && cd limap && python --version && pip --version && python -m pip install -r requirements.txt && python -m pip install -Ive .:
0.508 Requirement already satisfied: pip in /opt/venv/lib/python3.9/site-packages (23.0.1)
0.584 Collecting pip
0.639 Downloading pip-23.3.1-py3-none-any.whl (2.1 MB)
0.888 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.1/2.1 MB 8.5 MB/s eta 0:00:00
0.896 Requirement already satisfied: setuptools in /opt/venv/lib/python3.9/site-packages (58.1.0)
1.075 Collecting setuptools
1.097 Downloading setuptools-68.2.2-py3-none-any.whl (807 kB)
1.179 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 807.9/807.9 kB 9.9 MB/s eta 0:00:00
1.274 Installing collected packages: setuptools, pip
1.274 Attempting uninstall: setuptools
1.274 Found existing installation: setuptools 58.1.0
1.313 Uninstalling setuptools-58.1.0:
1.396 Successfully uninstalled setuptools-58.1.0
1.682 Attempting uninstall: pip
1.683 Found existing installation: pip 23.0.1
1.823 Uninstalling pip-23.0.1:
1.962 Successfully uninstalled pip-23.0.1
2.660 Successfully installed pip-23.3.1 setuptools-68.2.2
2.779 Python 3.9.18
2.943 pip 23.3.1 from /opt/venv/lib/python3.9/site-packages/pip (python 3.9)
3.250 Processing ./third-party/pytlsd
3.254 Installing build dependencies: started
8.338 Installing build dependencies: finished with status 'done'
8.339 Getting requirements to build wheel: started
8.445 Getting requirements to build wheel: finished with status 'done'
8.447 Preparing metadata (pyproject.toml): started
8.564 Preparing metadata (pyproject.toml): finished with status 'done'
8.569 Processing ./third-party/hawp
8.572 Installing build dependencies: started
9.514 Installing build dependencies: finished with status 'done'
9.515 Getting requirements to build wheel: started
9.599 Getting requirements to build wheel: finished with status 'error'
9.603 error: subprocess-exited-with-error
9.603
9.603 × Getting requirements to build wheel did not run successfully.
9.603 │ exit code: 1
9.603 ╰─> [17 lines of output]
9.603 Traceback (most recent call last):
9.603 File "/opt/venv/lib/python3.9/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
9.603 main()
9.603 File "/opt/venv/lib/python3.9/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
9.603 json_out['return_val'] = hook(**hook_input['kwargs'])
9.603 File "/opt/venv/lib/python3.9/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 118, in get_requires_for_build_wheel
9.603 return hook(config_settings)
9.603 File "/tmp/pip-build-env-2apo2evk/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 355, in get_requires_for_build_wheel
9.603 return self._get_build_requires(config_settings, requirements=['wheel'])
9.603 File "/tmp/pip-build-env-2apo2evk/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 325, in _get_build_requires
9.603 self.run_setup()
9.603 File "/tmp/pip-build-env-2apo2evk/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 507, in run_setup
9.603 super(_BuildMetaLegacyBackend, self).run_setup(setup_script=setup_script)
9.603 File "/tmp/pip-build-env-2apo2evk/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 341, in run_setup
9.603 exec(code, locals())
9.603 File "<string>", line 4, in <module>
9.603 ModuleNotFoundError: No module named 'torch'
9.603 [end of output]
9.603
9.603 note: This error originates from a subprocess, and is likely not a problem with pip.
9.604 error: subprocess-exited-with-error
9.604
9.604 × Getting requirements to build wheel did not run successfully.
9.604 │ exit code: 1
9.604 ╰─> See above for output.
9.604
9.604 note: This error originates from a subprocess, and is likely not a problem with pip.
------
Dockerfile:124
--------------------
123 | RUN python -m pip install torch==1.11.0 torchvision==0.12.0 --index-url https://download.pytorch.org/whl/cu115
124 | >>> RUN python -m pip install --upgrade pip setuptools && \
125 | >>> cd limap && \
126 | >>> python --version && \
127 | >>> pip --version && \
128 | >>> python -m pip install -r requirements.txt && \
129 | >>> python -m pip install -Ive .
130 |
--------------------
ERROR: failed to solve: process "/bin/sh -c python -m pip install --upgrade pip setuptools && cd limap && python --version && pip --version && python -m pip install -r requirements.txt && python -m pip install -Ive ." did not complete successfully: exit code: 1
My system configuration is:
Is there a mismatch between torch and hawp, or is it my OS configuration?
Hi,
I would like to ask if it is possible to download pre-computed line maps for standard benchmark datasets like Cambridge or 7 Scenes?
Hello and thanks for your code.
Is it possible to run your code on a dataset consisting of images only, with no extra parameters? I mean, I only have a folder of images from the same camera but nothing else: no camera parameters or precomputed results. Can anything be done to run your code on such an image folder?
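Since the line triangulation needs camera poses, one workable route is to estimate them first with COLMAP and then feed the model to LIMAP. The COLMAP command below is the standard CLI; the LIMAP runner name and flags are assumptions, so check the repo's runners/ folder for the exact invocation:

```shell
# 1. Estimate poses and intrinsics from a bare image folder with COLMAP.
#    (Guarded so the recipe degrades gracefully where COLMAP is not installed.)
command -v colmap >/dev/null \
    && colmap automatic_reconstructor --workspace_path ./workspace --image_path ./images \
    || echo "colmap not installed"
# 2. Run LIMAP's COLMAP-based line mapping on the resulting model
#    (runner script name and flags are assumptions -- see runners/ in the repo):
# python runners/colmap_triangulation.py ...
```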
Ubuntu 22.04.1
Anaconda 4.13.0
CUDA 11.8
Python 3.9
PyTorch 2.0.0+cu118
When I run import limap, the following error occurs:
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/hc/github/limap/limap/__init__.py", line 3, in <module>
    from _limap import *
ImportError: /home/hc/github/limap/_limap.cpython-39-x86_64-linux-gnu.so: undefined symbol: _ZTVN5ceres26QuaternionParameterizationE
I have tried installing it outside of the Anaconda environment, but the same error occurs. Can anyone tell me what the problem is? I would be very grateful!
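The missing symbol can be demangled to see what the extension expects: ceres::QuaternionParameterization was removed from recent Ceres releases along with LocalParameterization, so this error typically means limap was compiled against one Ceres version but loads a different one at run time (check the Ceres changelog for the exact cutoff):

```shell
# Demangle the missing symbol (c++filt ships with binutils).
echo "_ZTVN5ceres26QuaternionParameterizationE" | c++filt
# -> vtable for ceres::QuaternionParameterization

# Then check which Ceres the extension actually links at run time:
# ldd /home/hc/github/limap/_limap.cpython-39-x86_64-linux-gnu.so | grep -i ceres
```

If ldd shows a newer Ceres than the one used at build time, rebuilding limap against the installed Ceres (or installing the older Ceres) should resolve the symbol.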
How does the author get the 'inpainted_depth' data in ETH3D? The depth images provided on the ETH3D official website seem incompatible with the code. As far as I know, the depth images on the official website correspond to the distorted images, but LIMAP requires undistorted JPG images?
I am very interested in your work!
I can now use COLMAP to reconstruct a sparse point cloud of my own dataset. Can you provide a README explaining how to reconstruct a line map from the COLMAP results?
Looking forward to your reply, and wishing you a happy life!
Hi;
Thank you for your contributions.
How were the Barn results for the case of a sequentially moving camera generated?
I have not been able to find any demo files that replicate this result.
Excellent job, I am very interested in your work!
Now I can use your program to rebuild the line map of my own dataset, but I have some questions about the output line map result files.
Can you explain the information in each output file?
I would like to know how to read the 3D line segment coordinates and their corresponding camera IDs, 2D line segment IDs, and 2D line segment coordinates from the output line map.
Looking forward to your reply very much!
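The final tracks can be loaded the same way as in the visualization snippet earlier in this thread, via `_, tracks = limap.util.io.read_lines_from_input(<finaltracks folder>)`. Each track then exposes `track.line.start` / `track.line.end` (the 3D endpoints) plus the parallel lists `track.image_id_list` and `track.line2d_list` pairing each observing image with its 2D segment. A tiny helper, written against only those attributes, gathers the supporting 2D segments for one image:

```python
def segments_in_image(track, image_id):
    """Collect the 2D segments of a line track observed in a given image.

    Relies only on the track attributes image_id_list and line2d_list:
    parallel lists pairing each observing image ID with its 2D segment.
    """
    return [seg for img, seg in zip(track.image_id_list, track.line2d_list)
            if img == image_id]
```

Combined with `track.line.start` / `track.line.end`, this recovers the 3D segment together with its per-image 2D observations.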
Amazing work!
Do you include an example or reproduction of the Structure-from-Motion refinement workflow mentioned in Sec 4.3 of your paper?
When I run python -m pip install -Ive ., the following error occurs:
In file included from /home/rylynn/limap-initial/limap/base/transforms.cc:1:
/home/rylynn/limap-initial/limap/base/transforms.h:4:10: fatal error: colmap/base/pose.h: No such file or directory
4 | #include <colmap/base/pose.h>
| ^~~~~~~~~~~~~~~~~~~~
compilation terminated.
make[2]: *** [limap/CMakeFiles/limap.dir/build.make:104: limap/CMakeFiles/limap.dir/base/transforms.cc.o] Error 1
make[2]: *** Waiting for unfinished jobs....
In file included from /home/rylynn/limap-initial/limap/base/camera.cc:1:
/home/rylynn/limap-initial/limap/base/camera.h:16:10: fatal error: colmap/base/camera.h: No such file or directory
16 | #include <colmap/base/camera.h>
| ^~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
In file included from /home/rylynn/limap-initial/limap/base/camera_view.h:16,
from /home/rylynn/limap-initial/limap/base/linebase.h:13,
from /home/rylynn/limap-initial/limap/base/line_dists.h:10,
from /home/rylynn/limap-initial/limap/base/line_dists.cc:1:
/home/rylynn/limap-initial/limap/base/camera.h:16:10: fatal error: colmap/base/camera.h: No such file or directory
16 | #include <colmap/base/camera.h>
| ^~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
In file included from /home/rylynn/limap-initial/limap/base/camera_view.h:16,
from /home/rylynn/limap-initial/limap/base/camera_view.cc:1:
/home/rylynn/limap-initial/limap/base/camera.h:16:10: fatal error: colmap/base/camera.h: No such file or directory
16 | #include <colmap/base/camera.h>
| ^~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
make[2]: *** [limap/CMakeFiles/limap.dir/build.make:90: limap/CMakeFiles/limap.dir/base/camera.cc.o] Error 1
make[2]: *** [limap/CMakeFiles/limap.dir/build.make:118: limap/CMakeFiles/limap.dir/base/camera_view.cc.o] Error 1
make[2]: *** [limap/CMakeFiles/limap.dir/build.make:188: limap/CMakeFiles/limap.dir/base/line_dists.cc.o] Error 1
In file included from /home/rylynn/limap-initial/limap/base/camera_view.h:16,
from /home/rylynn/limap-initial/limap/base/linebase.h:13,
from /home/rylynn/limap-initial/limap/base/line_linker.h:8,
from /home/rylynn/limap-initial/limap/base/line_linker.cc:1:
/home/rylynn/limap-initial/limap/base/camera.h:16:10: fatal error: colmap/base/camera.h: No such file or directory
16 | #include <colmap/base/camera.h>
| ^~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
make[2]: *** [limap/CMakeFiles/limap.dir/build.make:202: limap/CMakeFiles/limap.dir/base/line_linker.cc.o] Error 1
In file included from /home/rylynn/limap-initial/limap/base/image_collection.h:18,
from /home/rylynn/limap-initial/limap/base/image_collection.cc:1:
/home/rylynn/limap-initial/limap/base/transforms.h:4:10: fatal error: colmap/base/pose.h: No such file or directory
4 | #include <colmap/base/pose.h>
| ^~~~~~~~~~~~~~~~~~~~
compilation terminated.
make[2]: *** [limap/CMakeFiles/limap.dir/build.make:132: limap/CMakeFiles/limap.dir/base/image_collection.cc.o] Error 1
In file included from /home/rylynn/limap-initial/limap/base/camera_view.h:16,
from /home/rylynn/limap-initial/limap/base/linetrack.h:14,
from /home/rylynn/limap-initial/limap/base/linetrack.cc:1:
/home/rylynn/limap-initial/limap/base/camera.h:16:10: fatal error: colmap/base/camera.h: No such file or directory
16 | #include <colmap/base/camera.h>
| ^~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
make[2]: *** [limap/CMakeFiles/limap.dir/build.make:174: limap/CMakeFiles/limap.dir/base/linetrack.cc.o] Error 1
In file included from /home/rylynn/limap-initial/limap/base/camera_view.h:16,
from /home/rylynn/limap-initial/limap/base/linebase.h:13,
from /home/rylynn/limap-initial/limap/base/linebase.cc:1:
/home/rylynn/limap-initial/limap/base/camera.h:16:10: fatal error: colmap/base/camera.h: No such file or directory
16 | #include <colmap/base/camera.h>
| ^~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
make[2]: *** [limap/CMakeFiles/limap.dir/build.make:160: limap/CMakeFiles/limap.dir/base/linebase.cc.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:667: limap/CMakeFiles/limap.dir/all] Error 2
make: *** [Makefile:136: all] Error 2
Traceback (most recent call last):
File "", line 2, in
File "", line 34, in
File "/home/rylynn/limap-initial/setup.py", line 68, in
setup(
File "/home/rylynn/anaconda3/lib/python3.10/site-packages/setuptools/__init__.py", line 87, in setup
return distutils.core.setup(**attrs)
File "/home/rylynn/anaconda3/lib/python3.10/site-packages/setuptools/_distutils/core.py", line 185, in setup
return run_commands(dist)
File "/home/rylynn/anaconda3/lib/python3.10/site-packages/setuptools/_distutils/core.py", line 201, in run_commands
dist.run_commands()
File "/home/rylynn/anaconda3/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 969, in run_commands
self.run_command(cmd)
File "/home/rylynn/anaconda3/lib/python3.10/site-packages/setuptools/dist.py", line 1208, in run_command
super().run_command(command)
File "/home/rylynn/anaconda3/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
cmd_obj.run()
File "/home/rylynn/anaconda3/lib/python3.10/site-packages/setuptools/command/develop.py", line 34, in run
self.install_for_development()
File "/home/rylynn/anaconda3/lib/python3.10/site-packages/setuptools/command/develop.py", line 114, in install_for_development
self.run_command('build_ext')
File "/home/rylynn/anaconda3/lib/python3.10/site-packages/setuptools/_distutils/cmd.py", line 318, in run_command
self.distribution.run_command(command)
File "/home/rylynn/anaconda3/lib/python3.10/site-packages/setuptools/dist.py", line 1208, in run_command
super().run_command(command)
File "/home/rylynn/anaconda3/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
cmd_obj.run()
File "/home/rylynn/anaconda3/lib/python3.10/site-packages/setuptools/command/build_ext.py", line 84, in run
_build_ext.run(self)
File "/home/rylynn/anaconda3/lib/python3.10/site-packages/setuptools/_distutils/command/build_ext.py", line 346, in run
self.build_extensions()
File "/home/rylynn/anaconda3/lib/python3.10/site-packages/setuptools/_distutils/command/build_ext.py", line 468, in build_extensions
self._build_extensions_serial()
File "/home/rylynn/anaconda3/lib/python3.10/site-packages/setuptools/_distutils/command/build_ext.py", line 494, in _build_extensions_serial
self.build_extension(ext)
File "/home/rylynn/limap-initial/setup.py", line 61, in build_extension
subprocess.check_call(
File "/home/rylynn/anaconda3/lib/python3.10/subprocess.py", line 369, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', '--build', '.', '--parallel 8']' returned non-zero exit status 2.
error: subprocess-exited-with-error
× python setup.py develop did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
full command: /home/rylynn/anaconda3/bin/python -c '
exec(compile('"'"''"'"''"'"'
# This is <pip-setuptools-caller> -- a caller that pip uses to run setup.py
#
# - It imports setuptools before invoking setup.py, to enable projects that directly
# import from `distutils.core` to work with newer packaging standards.
# - It provides a clear error message when setuptools is not installed.
# - It sets `sys.argv[0]` to the underlying `setup.py`, when invoking `setup.py` so
# setuptools doesn'"'"'t think the script is `-c`. This avoids the following warning:
# manifest_maker: standard file '"'"'-c'"'"' not found".
# - It generates a shim setup.py, for handling setup.cfg-only projects.
import os, sys, tokenize
try:
import setuptools
except ImportError as error:
print(
"ERROR: Can not execute `setup.py` since setuptools is not available in "
"the build environment.",
file=sys.stderr,
)
sys.exit(1)
__file__ = %r
sys.argv[0] = __file__
if os.path.exists(__file__):
filename = __file__
with tokenize.open(__file__) as f:
setup_py_code = f.read()
else:
filename = "<auto-generated setuptools caller>"
setup_py_code = "from setuptools import setup; setup()"
exec(compile(setup_py_code, filename, "exec"))
'"'"''"'"''"'"' % ('"'"'/home/rylynn/limap-initial/setup.py'"'"',), "<pip-setuptools-caller>", "exec"))' develop --no-deps
cwd: /home/rylynn/limap-initial/
error: subprocess-exited-with-error
× python setup.py develop did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
Does anyone know what is going on here? Any help would be appreciated!
Hi. Very amazing work.
May I ask a simple question? Since I'm not familiar with this field, I'm curious whether we can use a point cloud that was not generated by SfM, such as one captured by other sensors.
Hello, thanks for the wonderful work!
I wonder if there is a possible bug in calculating the score between two lines in the code snippet below.
double multiplier() const { return get_multiplier(score_th); }
double get_multiplier(const double& score_th) {
// exp(- (val / sigma)^2 / 2.0) >= 0.5 <--> val <= 1.1774100 sigma
return sqrt(-log(score_th) * 2.0);
}
double LineLinker2d::compute_score_angle(const Line2d& l1, const Line2d& l2) const {
double angle = compute_angle<Line2d>(l1, l2);
double score = expscore(angle, config.th_angle * config.multiplier());
if (score < config.score_th)
score = 0.0;
return score;
}
Should double score = expscore(angle, config.th_angle * config.multiplier()); be modified to double score = expscore(angle, config.th_angle / config.multiplier()); to ensure that when angle equals config.th_angle, the score precisely equals 0.5 (assuming score_th is set to 0.5)?
Was this your initial intention?
Looking forward to your reply :)
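The arithmetic in the question checks out numerically: with score = exp(-(val/sigma)^2 / 2) (taken from the comment in the snippet; an assumption about expscore's exact form) and multiplier m = sqrt(-2 * log(score_th)), dividing the threshold by m makes the score hit exactly score_th at angle = th_angle, while multiplying does not:

```python
import math

def expscore(val, sigma):
    # Matches the comment in the snippet: exp(-(val/sigma)^2 / 2)
    return math.exp(-(val / sigma) ** 2 / 2.0)

score_th = 0.5
m = math.sqrt(-math.log(score_th) * 2.0)  # the multiplier() from the snippet
th = 10.0                                 # arbitrary angle threshold

# Current code: sigma = th * m -> score at angle == th is ~0.697, not 0.5
score_mult = expscore(th, th * m)
# Proposed fix: sigma = th / m -> score at angle == th is exactly 0.5
score_div = expscore(th, th / m)
print(round(score_mult, 4), round(score_div, 4))  # prints: 0.6972 0.5
```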