
VisSat Satellite Stereo

Note: this repo is not actively maintained; I've been trying to build a new full-stack satellite stereo pipeline in a computer-vision style: SatelliteSfM.

Introduction

This is the Python interface for VISion-based SATellite stereo (VisSat), backed by our adapted COLMAP. You can run both SfM and MVS on a set of satellite images.

Project page: https://kai-46.github.io/VisSat/

Installation

  • Install our adapted COLMAP first.
  • Install GDAL and its Python bindings on your machine according to this page.
  • Use Python 3 instead of Python 2.
  • All Python dependencies can be installed via:
pip3 install -r requirements.txt

Quick Start

  • Download the MVS3DM satellite stereo dataset.
  • The file "aoi_config/MVS3DM_Explorer.json" is a template configuration for the site "Explorer" in the MVS3DM dataset. Basically, you only need to set two fields, "dataset_dir" and "work_dir", to get started for this site.
  • Launch our pipeline with:
python3 stereo_pipeline.py --config_file aoi_config/MVS3DM_Explorer.json
  • If you enable "aggregate_3d", the output point cloud and DSM will be inside "{work_dir}/mvs_results/aggregate_3d/"; alternatively, if "aggregate_2p5d" is adopted, the output will be inside "{work_dir}/mvs_results/aggregate_2p5d/".
  • Our pipeline is written in a modular way; you can run it step by step by choosing which steps to execute in the configuration file.
  • You can navigate inside {work_dir} to get intermediate results.
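For reference, the configuration fields that appear in this document can be sketched as below. This is an illustrative fragment, not a working Explorer configuration: the paths are placeholders and the bounding-box numbers are rounded examples.

```json
{
  "dataset_dir": "/path/to/MVS3DM/Explorer",
  "work_dir": "/path/to/work_dir",
  "bounding_box": {
    "zone_number": 21, "hemisphere": "S",
    "ul_easting": 354052.0, "ul_northing": 6182702.0,
    "width": 713.0, "height": 651.0
  },
  "steps_to_run": {
    "clean_data": true, "crop_image": true, "derive_approx": true,
    "choose_subset": true, "colmap_sfm_perspective": true,
    "inspect_sfm_perspective": false, "reparam_depth": true,
    "colmap_mvs": true, "aggregate_2p5d": false, "aggregate_3d": true
  },
  "alt_min": -30.0, "alt_max": 120.0
}
```

Only one of "aggregate_2p5d" and "aggregate_3d" needs to be enabled; the output directory depends on which one you choose.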

For Hackers

General Program Logic

We use a specific directory structure to help organize the program logic. The base directory is called {work_dir}. To help you understand how the system works, we point out which directories and files to pay attention to at each stage of the program.

SfM stage

You need to enable {"clean_data", "crop_image", "derive_approx", "choose_subset", "colmap_sfm_perspective"} in the configuration. Then note the following files.

  1. (.ntf, .tar) pairs inside {dataset_dir}
  2. (.ntf, .xml) pairs inside {work_dir}/cleaned_data
  3. {work_dir}/aoi.json
  4. .png inside {work_dir}/images, and .json inside {work_dir}/metas
  5. .json inside {work_dir}/approx_camera, especially perspective_enu.json
  6. {work_dir}/colmap/subset_for_sfm/{images, perspective_dict.json}
  7. {work_dir}/colmap/sfm_perspective/init_ba_camera_dict.json

Steps 1-4 transform the (.ntf, .tar) data into more accessible conventional formats. Step 5 approximates the RPC cameras with perspective cameras. Steps 6-7 select a subset of images (by default, all the images), perform bundle adjustment, and write the bundle-adjusted camera parameters to {work_dir}/colmap/sfm_perspective/init_ba_camera_dict.json. For perspective cameras in the .json files mentioned in Steps 5-7, the camera parameters are organized as:

w, h, f_x, f_y, c_x, c_y, s, q_w, q_x, q_y, q_z, t_x, t_y, t_z

where (w, h) is the image size, (f_{x,y}, c_{x,y}, s) are the camera intrinsics, q_{w,x,y,z} is the quaternion representation of the rotation matrix, and t_{x,y,z} is the translation vector.
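The 14-number list above can be unpacked into an intrinsic matrix and a pose. A minimal sketch (`parse_perspective_camera` is a hypothetical helper for illustration, not part of the repo):

```python
import numpy as np

def parse_perspective_camera(params):
    """Unpack [w, h, f_x, f_y, c_x, c_y, s, q_w, q_x, q_y, q_z, t_x, t_y, t_z]."""
    w, h, fx, fy, cx, cy, s, qw, qx, qy, qz, tx, ty, tz = params
    # Intrinsic matrix with skew s
    K = np.array([[fx,  s,   cx],
                  [0.0, fy,  cy],
                  [0.0, 0.0, 1.0]])
    # Rotation matrix from the (q_w, q_x, q_y, q_z) unit quaternion
    R = np.array([
        [1 - 2*(qy**2 + qz**2), 2*(qx*qy - qz*qw),     2*(qx*qz + qy*qw)],
        [2*(qx*qy + qz*qw),     1 - 2*(qx**2 + qz**2), 2*(qy*qz - qx*qw)],
        [2*(qx*qz - qy*qw),     2*(qy*qz + qx*qw),     1 - 2*(qx**2 + qy**2)],
    ])
    t = np.array([tx, ty, tz])
    return (int(w), int(h)), K, R, t
```

The projection of a world point X is then K @ (R @ X + t) up to the usual perspective division.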

Coordinate system

Our perspective cameras use the local ENU coordinate system instead of the global (lat, lon, alt) or (utm east, utm north, alt).

For conversion between (lat, lon, alt) and local ENU, please refer to: coordinate_system.py and latlonalt_enu_converter.py

For conversion between (lat, lon) and (utm east, utm north), please refer to: lib/latlon_utm_converter.py
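For reference, the standard geodetic-to-ENU conversion that such helpers perform can be sketched as below. This is the generic WGS84 formula, not the repo's exact implementation:

```python
import numpy as np

# WGS84 ellipsoid constants
A = 6378137.0            # semi-major axis (m)
E2 = 6.69437999014e-3    # first eccentricity squared

def geodetic_to_ecef(lat, lon, alt):
    """(lat, lon) in degrees, alt in meters -> ECEF coordinates."""
    lat, lon = np.radians(lat), np.radians(lon)
    n = A / np.sqrt(1.0 - E2 * np.sin(lat) ** 2)  # prime vertical radius
    x = (n + alt) * np.cos(lat) * np.cos(lon)
    y = (n + alt) * np.cos(lat) * np.sin(lon)
    z = (n * (1.0 - E2) + alt) * np.sin(lat)
    return np.array([x, y, z])

def geodetic_to_enu(lat, lon, alt, lat0, lon0, alt0):
    """ENU coordinates of (lat, lon, alt) w.r.t. the origin (lat0, lon0, alt0)."""
    d = geodetic_to_ecef(lat, lon, alt) - geodetic_to_ecef(lat0, lon0, alt0)
    lam, phi = np.radians(lon0), np.radians(lat0)
    # Rows are the local east, north, and up unit vectors in ECEF
    rot = np.array([
        [-np.sin(lam),               np.cos(lam),              0.0],
        [-np.sin(phi)*np.cos(lam), -np.sin(phi)*np.sin(lam),  np.cos(phi)],
        [ np.cos(phi)*np.cos(lam),  np.cos(phi)*np.sin(lam),  np.sin(phi)],
    ])
    return rot @ d
```

Using a local ENU frame keeps coordinate magnitudes small, which is numerically friendlier for bundle adjustment than global ECEF or (lat, lon, alt).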

MVS stage

To run MVS after the SfM stage is done, you need to enable {"reparam_depth", "colmap_mvs", "aggregate_3d"} or {"reparam_depth", "colmap_mvs", "aggregate_2p5d"}.

If you enable "aggregate_2p5d", you will be able to see the per-view DSM in {work_dir}/colmap/mvs/dsm.

Cite Our Work

@inproceedings{VisSat-2019,
 title={Leveraging Vision Reconstruction Pipelines for Satellite Imagery},
 author={Zhang, Kai and Sun, Jin and Snavely, Noah},
 booktitle={IEEE International Conference on Computer Vision Workshops},
 year={2019}
}

License

This software uses the 3-clause BSD license.

Contributors

dependabot[bot], kai-46, millerjv, snavely, wdixon


Issues

KeyError / SiftExtraction.use_gpu Error when running stereo_pipeline.py

Thank you for releasing your source code. Unfortunately, running your pipeline on the MVS3DM imagery throws an error.

When executing python3 stereo_pipeline.py --config_file aoi_config/MVS3DM_Explorer.json, I receive the following error message:

  File "VisSatSatelliteStereo/stereo_pipeline.py", line 115, in run
    self.run_colmap_sfm_perspective()
  File "VisSatSatelliteStereo/stereo_pipeline.py", line 346, in run_colmap_sfm_perspective
    colmap_sfm_perspective.run_sfm(work_dir, sfm_dir, init_camera_file, weight)
  File "VisSatSatelliteStereo/colmap_sfm_perspective.py", line 97, in run_sfm
    after_bundle_params = after_bundle_cameras[img_name]
KeyError: '0000_WV03_14NOV15_135121-P1BS-500171606160_05_P005.png'

After some investigation, I found that in colmap_sfm_perspective.py the following call
colmap_sfm_commands.run_point_triangulation(img_dir, db_file, tri_dir, init_template, reproj_err_threshold, reproj_err_threshold, reproj_err_threshold) produces "empty" results (i.e., cameras.txt, images.txt, and points3D.txt are empty).

Running the first command in the corresponding log file (log_sfm_perspective.txt)

Running subprocess: colmap feature_extractor --database_path /mnt/DataSSD/SpaceNet/MVS3D_wd/colmap/sfm_perspective/database.db --image_path /mnt/DataSSD/SpaceNet/MVS3D_wd/colmap/sfm_perspective/images --ImageReader.camera_model PERSPECTIVE --SiftExtraction.max_image_size 10000 --SiftExtraction.estimate_affine_shape 0         --SiftExtraction.domain_size_pooling 1 --SiftExtraction.max_num_features 25000 --SiftExtraction.num_threads 32 --SiftExtraction.use_gpu 1 --SiftExtraction.gpu_index -1

causes

==============================================================================
Feature extraction
==============================================================================

Killed

The same error is thrown when I call the above command (except for the --ImageReader.camera_model PERSPECTIVE) for the original colmap library (https://github.com/colmap/colmap).

Any ideas what could be the cause of this?

I think it would be nice if the pipeline (VisSatSatelliteStereo) threw an appropriate error message rather than something like KeyError: '0000_WV03_14NOV15_135121-P1BS-500171606160_05_P005.png'.
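A sketch of the kind of guard the poster suggests. The names after_bundle_cameras and img_name follow the traceback above, but the helper and its message are illustrative, not the repo's actual code:

```python
def lookup_after_bundle(after_bundle_cameras, img_name):
    # Fail with an actionable message instead of a bare KeyError when the
    # SfM stage produced no bundle-adjusted camera for this image.
    if img_name not in after_bundle_cameras:
        raise RuntimeError(
            'No bundle-adjusted camera for {}; feature extraction or '
            'triangulation likely produced empty results -- check '
            'log_sfm_perspective.txt'.format(img_name)
        )
    return after_bundle_cameras[img_name]
```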

Best regards,
Sebastian

Worldview images

First of all, congratulations on such a great job. My question is the following:
In my case I have downloaded WorldView-2 images, which come with the following metadata files:

  • original image in .tif format
  • file with extension .ATT
  • file with extension .EPH
  • file with extension .GEO
  • file with extension .IMD
  • file with extension .RPB
  • file with extension .TIL
  • file with extension .XML
I would like to know how I can use your tool with this data and what steps I should take.

Thanks for your attention

Running on imagery pairs?

Hi!

I've been able to get this working for the IARPA MVS dataset, but if I attempt to run it on just two images, it fails, apparently unable to find any matches.

Did you have to change any settings to get this to work on only 2 images for your evaluations in the paper?

Here's what my logs look like at the point where it can't match and triangulate:

==============================================================================
Loading database
==============================================================================
Loading cameras... 2 in 0.000s
Loading matches... 1 in 0.000s
Loading images... 2 in 0.006s (connected 2)
Building correspondence graph... in 0.001s (ignored 0)
Elapsed time: 0.000 [minutes]
0001_WV03_15MAY05_140810-P1BS-500497282090_01_P001.png, 6400
0000_WV03_15APR03_140238-P1BS-500497283030_01_P001.png, 6400
==============================================================================
Triangulating image #1
==============================================================================
=> Image has 0 / 6400 points
=> Triangulated 0 points
==============================================================================
Triangulating image #2
==============================================================================
=> Image has 0 / 6400 points
=> Triangulated 0 points
=> Merged observations: 0
=> Completed observations: 0

sqlite3.OperationalError: no such table: images

I have only downloaded some of the PAN and MSI images (not the complete set), as shown in the screenshots below.

[screenshots: ssr_msi, ssr_pan]

Then I changed the paths in MVS3DM_Explorer.json and executed the command below:

python3 stereo_pipeline.py --config_file aoi_config/MVS3DM_Explorer.json

{'dataset_dir': '/home/ujjawal/my_work/object_recon/SatelliteSurfaceReconstruction/satellite_imgs/WV3/PAN', 'work_dir': '/home/ujjawal/my_work/object_recon/SatelliteSurfaceReconstruction/VisSatSatelliteStereo/data2/kz298/explorer', 'bounding_box': {'zone_number': 21, 'hemisphere': 'S', 'ul_easting': 354052.3651180889, 'ul_northing': 6182702.10540914, 'width': 712.9748326897388, 'height': 651.4791190521792}, 'steps_to_run': {'clean_data': True, 'crop_image': True, 'derive_approx': True, 'choose_subset': True, 'colmap_sfm_perspective': True, 'inspect_sfm_perspective': False, 'reparam_depth': True, 'colmap_mvs': True, 'aggregate_2p5d': True, 'aggregate_3d': False}, 'alt_min': -30.0, 'alt_max': 120.0}
step clean_data:	finished in 0.010855866666666667 minutes
step crop_image:	finished in 0.04069456666666667 minutes
step derive_approx:	finished in 0.02496066666666667 minutes
step choose_subset:	finished in 1.0033333333333333e-05 minutes
Traceback (most recent call last):
  File "stereo_pipeline.py", line 511, in <module>
    pipeline.run()
  File "stereo_pipeline.py", line 115, in run
    self.run_colmap_sfm_perspective()
  File "stereo_pipeline.py", line 346, in run_colmap_sfm_perspective
    colmap_sfm_perspective.run_sfm(work_dir, sfm_dir, init_camera_file, weight)
  File "/home/ujjawal/my_work/object_recon/SatelliteSurfaceReconstruction/VisSatSatelliteStereo/colmap_sfm_perspective.py", line 72, in run_sfm
    reproj_err_threshold, reproj_err_threshold, reproj_err_threshold)
  File "/home/ujjawal/my_work/object_recon/SatelliteSurfaceReconstruction/VisSatSatelliteStereo/colmap_sfm_commands.py", line 75, in run_point_triangulation
    create_init_files(db_file, template_file, out_dir)
  File "/home/ujjawal/my_work/object_recon/SatelliteSurfaceReconstruction/VisSatSatelliteStereo/colmap_sfm_utils.py", line 110, in create_init_files
    table_images = db.execute("SELECT * FROM images")
sqlite3.OperationalError: no such table: images

I'm missing {work_dir}/colmap/sfm_perspective/init_ba_camera_dict.json from the SfM stage.

Can you @Kai-46 please have a look into this issue?

Satellite Surface Reconstruction

I've written a library that makes it possible to combine the results of VisSat with several state-of-the-art meshing/texturing libraries to obtain textured surface reconstructions from satellite images.

Since this is potentially useful for other VisSat users as well, I was wondering if you want to add a reference somewhere. Feel free to close this issue.

sqlite3.OperationalError: no such table: images

I have some problems with this project. Everything is the same except the CUDA version, yet this error is reported. Can you help me take a look? Here is my log file, log_sfm_perspective.txt.

[Screenshot from 2022-07-28 21-08-58]

reparam_depth issue

I'm running into an issue running the SfM and MVS portions of the pipeline. The initial steps clean_data, crop_image, derive_approx, choose_subset, and colmap_sfm_perspective all run to completion successfully on the Explorer dataset.

Once the reparam_depth portion of the pipeline initializes, it immediately fails on line 99 of reparam_depth.py:
min_z_values = np.percentile(z_values, 1) - margin

It looks like this is failing because my z_values list is empty, which is because the points3D.txt in my sparse output folder doesn't contain any data. It seems this is supposed to be populated as part of the COLMAP process, but for some reason no data is being written.
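The failure mode above can be guarded against with a check like this. A sketch only: safe_min_z is a hypothetical wrapper mirroring the failing line, not the repo's code, and the default margin is arbitrary:

```python
import numpy as np

def safe_min_z(z_values, margin=10.0):
    # np.percentile raises on empty input, which is exactly what happens
    # when points3D.txt contains no triangulated points; fail with a
    # clearer hint instead.
    z_values = np.asarray(z_values, dtype=np.float64)
    if z_values.size == 0:
        raise RuntimeError(
            'points3D.txt is empty: SfM triangulated no points, so the '
            'depth range cannot be estimated; re-check the SfM stage.'
        )
    return np.percentile(z_values, 1) - margin
```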

Has anyone experienced this problem or is there possibly a step that I'm missing?
