rust-cv / cv

Rust CV mono-repo. Contains pure-Rust dependencies which attempt to encapsulate the capability of OpenCV, OpenMVG, and vSLAM frameworks in a cohesive set of APIs.

Rust 99.94% HTML 0.06%
algorithms computer-vision crates rust-cv

cv's Introduction

Rust Computer Vision

Rust CV is a project to implement computer vision algorithms in Rust.

What is computer vision

Many people are familiar with convolutional neural networks and machine learning in computer vision, but computer vision is much more than that. One of the first areas Rust CV focused on was Multiple-View Geometry (MVG). Today, Rust has enough MVG algorithms to perform relatively simple camera tracking and odometry tasks. Weaknesses remain in the image processing and machine learning domains.

Goals

Here are some of the domains of computer vision that Rust CV intends to pursue, along with examples from each domain (not all of the algorithms below live within the Rust CV organization, and some may already exist elsewhere without our knowledge):

To support computer vision tooling, the following will be implemented:

cv's People

Contributors

astraw codec-abc janroden mpizenberg muqito spectralflame stephanemagnenat that1guy007 vadixidav whhuang xd009642


cv's Issues


BoW matching algorithm

It would be good to have at least a simple BoW matching algorithm. Refer to the Wikipedia page for more information.

This is necessary to make the matching process go quickly; right now a brute-force match is performed. BoW using a well-distributed but fixed selection of bit positions should be faster than the current brute-force matching. To elaborate, choosing the first 8 bits, for instance, would be biased towards certain gradients in the binary feature (the same side of the feature or the same scale, as in an AKAZE feature). Collecting a selection of 8 well-distributed bits from different places in the feature avoids this bias. This can be done quickly on x86 processors with the BMI2 extension using this crate's PEXT abstraction for u64. No x86 processors currently support PEXT on numbers bigger than 64 bits, so multiple PEXT operations will be required if this approach is taken. Another fast approach may be to mask and OR together separate 8-bit pieces.
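The bit-selection idea above can be sketched with a software stand-in for the BMI2 PEXT instruction (on x86 with BMI2 this could be replaced by core::arch::x86_64::_pext_u64). The mask choice and names below are illustrative, not the crate's API:

```rust
// Software fallback for the BMI2 PEXT instruction: gather the bits of
// `value` selected by `mask` into the low bits of the result.
fn pext_u64(value: u64, mask: u64) -> u64 {
    let mut result = 0u64;
    let mut out_bit = 0;
    let mut m = mask;
    while m != 0 {
        let bit = m & m.wrapping_neg(); // isolate the lowest set bit of the mask
        if value & bit != 0 {
            result |= 1 << out_bit;
        }
        out_bit += 1;
        m &= m - 1; // clear the lowest set bit
    }
    result
}

fn main() {
    // Hypothetical 64-bit word of a binary descriptor. The mask picks one
    // bit from each byte, spreading the selection across the word to avoid
    // the bias of taking 8 adjacent bits.
    let descriptor_word: u64 = 0xDEAD_BEEF_CAFE_F00D;
    let spread_mask: u64 = 0x8080_8080_8080_8080;
    let bucket = pext_u64(descriptor_word, spread_mask);
    assert!(bucket < 256); // 8 extracted bits -> a BoW bucket in 0..256
    println!("BoW bucket: {bucket}");
}
```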

Remove blas and lapack dependencies

We have a dependency on BLAS and LAPACK. This stems from our use of argmin with ndarray support in cv-optimize, which pulls in ndarray-linalg, which in turn requires these linear algebra support libraries written in C. This counters our goal of an easy-to-build Rust source tree with no C dependencies. While we do have a Levenberg-Marquardt implementation in the rust-cv organization, it is not a sparse Levenberg-Marquardt, so we are using Nelder-Mead instead for various reasons, including the ability to perform structureless bundle adjustment.

The first attempt to remedy this involved switching to the experimental nalgebra support in argmin, but this was unsuccessful because the implementation produced numerical errors and did not converge.

We can continue to use Nelder-Mead without using argmin, and this may require creating a custom implementation of Nelder-Mead, similar to our implementation of Levenberg-Marquardt.
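A custom Nelder-Mead need not be large. The following is a minimal, dependency-free two-dimensional sketch with textbook coefficients (reflect 1, expand 2, contract 0.5, shrink 0.5); it is illustrative only, not cv-optimize's API:

```rust
// Minimal Nelder-Mead in 2D: keep a 3-point simplex and iteratively
// reflect/expand/contract it toward lower function values.
fn nelder_mead<F: Fn([f64; 2]) -> f64>(f: F, start: [f64; 2], iters: usize) -> [f64; 2] {
    let mut simplex = [
        start,
        [start[0] + 0.1, start[1]],
        [start[0], start[1] + 0.1],
    ];
    for _ in 0..iters {
        simplex.sort_by(|a, b| f(*a).partial_cmp(&f(*b)).unwrap());
        let [best, good, worst] = simplex;
        // Centroid of the two best vertices.
        let c = [(best[0] + good[0]) / 2.0, (best[1] + good[1]) / 2.0];
        let refl = [c[0] + (c[0] - worst[0]), c[1] + (c[1] - worst[1])];
        if f(refl) < f(best) {
            // Reflection improved on the best point: try expanding further.
            let exp = [c[0] + 2.0 * (c[0] - worst[0]), c[1] + 2.0 * (c[1] - worst[1])];
            simplex[2] = if f(exp) < f(refl) { exp } else { refl };
        } else if f(refl) < f(good) {
            simplex[2] = refl;
        } else {
            // Contract toward the centroid; if even that fails, shrink.
            let con = [c[0] + 0.5 * (worst[0] - c[0]), c[1] + 0.5 * (worst[1] - c[1])];
            if f(con) < f(worst) {
                simplex[2] = con;
            } else {
                for v in simplex.iter_mut().skip(1) {
                    v[0] = best[0] + 0.5 * (v[0] - best[0]);
                    v[1] = best[1] + 0.5 * (v[1] - best[1]);
                }
            }
        }
    }
    simplex.sort_by(|a, b| f(*a).partial_cmp(&f(*b)).unwrap());
    simplex[0]
}

fn main() {
    // Minimize the quadratic bowl (x - 1)^2 + (y - 2)^2.
    let f = |[x, y]: [f64; 2]| (x - 1.0).powi(2) + (y - 2.0).powi(2);
    let m = nelder_mead(f, [0.0, 0.0], 300);
    assert!((m[0] - 1.0).abs() < 1e-3 && (m[1] - 2.0).abs() < 1e-3);
    println!("minimum near ({}, {})", m[0], m[1]);
}
```

A real replacement would generalize the dimension and add a convergence test rather than a fixed iteration count.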

Sample consensus doesn't give back the correct model

When solving the two-view pose using the EightPoint algorithm in sample consensus, the incorrect model is returned. The cheirality check that occurred in EssentialMatrix::solve_pose was removed when EightPoint was changed to return 4 separate poses, which are only checked based on cosine distance, not cheirality. Cheirality must be checked in the code for the CameraToCamera transformation model residual for it to work correctly.

iLBA or similar structureless bundle adjust

See this link for details: https://borg.cc.gatech.edu/projects/ilba.html.

Structureless bundle adjustment is a technique that is already used in vslam-sandbox today. Unfortunately, the way it is currently done is not efficient: it runs Nelder-Mead optimization using all the landmark observations as constraints for all the views. By splitting the reconstruction into smaller constraints, such as with iLBA, we might be able to scale better than we do today.

Some common questions

Hi, this is a really wonderful effort in Rust. But as a DL user I mainly focus on a few core functions like:

  • resize
  • cvtColor
  • video open/write
  • image open/write

So I'm just wondering: how does the performance compare with OpenCV? For example, resize can be time-consuming; OpenCV uses SIMD to accelerate it. Is cv able to do the same?

Integrate ndarray-vision

ndarray-vision is where ndarray-based image processing algorithms belong. Currently, the akaze crate does several image processing tasks on its own (fast explicit diffusion, Scharr filters, etc.). We should move these algorithms into ndarray-vision and add it into the mono-repo.

Missing optional serde feature in Akaze

I need to store the result of Akaze for embedded use, so I'll need a serde feature to optionally allow serialization of KeyPoint and the descriptors, using the optional serde feature of the latter. I'll make a PR.
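The usual shape of such an optional feature, sketched here with assumed names (this is not the actual PR):

```toml
# In the akaze crate's Cargo.toml: serde only compiles in when the
# "serde" feature is enabled by the downstream user.
[dependencies]
serde = { version = "1", optional = true, default-features = false, features = ["derive"] }

[features]
serde = ["dep:serde"]
```

Types such as KeyPoint would then gain `#[cfg_attr(feature = "serde", derive(serde::Serialize, serde::Deserialize))]` so that builds without the feature are unaffected.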

RANSAC

The sample-consensus crate provides abstractions for consensus algorithms. Currently, arrsac is provided as a state-of-the-art consensus algorithm. However, it is also useful to have the original RANSAC algorithm available: it is commonly used as a benchmark, and algorithms that are not themselves sample consensus algorithms are often compared while paired with a standard sample consensus algorithm.
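For reference, the classic RANSAC loop is small. Below is a dependency-free sketch on 2D line fitting; it deterministically sweeps sample pairs instead of sampling randomly to stay minimal, and the names are illustrative rather than the sample-consensus API:

```rust
// Fit a line y = a*x + b through two points; None if the sample is degenerate.
fn fit_line(p1: (f64, f64), p2: (f64, f64)) -> Option<(f64, f64)> {
    let dx = p2.0 - p1.0;
    if dx.abs() < 1e-12 {
        return None;
    }
    let a = (p2.1 - p1.1) / dx;
    Some((a, p1.1 - a * p1.0))
}

// Classic RANSAC: propose models from minimal samples, count inliers within
// `thresh`, and keep the model with the most inliers.
fn ransac_line(points: &[(f64, f64)], iters: usize, thresh: f64) -> Option<(f64, f64)> {
    let mut best = None;
    let mut best_inliers = 0;
    for i in 0..iters.min(points.len()) {
        for j in (i + 1)..points.len() {
            if let Some((a, b)) = fit_line(points[i], points[j]) {
                let inliers = points
                    .iter()
                    .filter(|&&(x, y)| (y - (a * x + b)).abs() < thresh)
                    .count();
                if inliers > best_inliers {
                    best_inliers = inliers;
                    best = Some((a, b));
                }
            }
        }
    }
    best
}

fn main() {
    // Points on y = 2x + 1 plus two gross outliers.
    let pts = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0), (1.0, 10.0), (2.0, -4.0)];
    let (a, b) = ransac_line(&pts, 10, 0.1).unwrap();
    assert!((a - 2.0).abs() < 1e-9 && (b - 1.0).abs() < 1e-9);
    println!("model: y = {a}x + {b}");
}
```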

CV tooling for Point Clouds

See #52. This ticket tracks the creation of PLY tools in Rust CV.

@Jafagervik Sorry for the late response. I went ahead and made you an owner in Rust CV. You may need to accept the invitation. Once you do, you can make a new repository. Please use the .github folder (and ensure_no_std folder) in the arrsac repository as an example (it has the most up-to-date and correct CI script for crates that include no_std support).

I would recommend making any kind of data structures and traits related to point clouds separate from anything PLY related. I need to do some more work on the space crate once GATs come out, but that is the common integration point for abstractions surrounding NN searches and spatial data structures. Please feel free to file issues on space if you need any additional traits added there (it will likely need many more than it currently has). PLY loading should be separated into its own crate, and you may want to initially limit it to loading PLY data in a streaming fashion (via an iterator or stream if doing async). I recommend starting with a synchronous (not async) version written with nom that just produces an iterator over the parsed PLY data. I can work on this as well, just ping me if you need anything. Discord is preferred.

We also use PLY in vslam-sandbox (in this repo) and ennona: https://github.com/rust-cv/ennona. Currently we are using poorly maintained crates. It would be nice to make a fresh new crate, ideally one that I can maintain, though feel free to help keep it up to date and maintain it if you like 😄; even just contributing a PLY parser would be greatly appreciated. I would look at how vslam-sandbox exports PLYs and how ennona consumes PLY point cloud data, and use that as the basis for this parser's requirements, so that we can replace the crates we use today. You may also want to look at the other PLY parsers that exist today (ply-rs is the best one: https://crates.io/crates/ply-rs).

Like I said before, let's try to keep everything organized. This should belong in its own repository in Rust CV. This repository is for computer vision primitives that specifically depend on cv-core and implement its traits. Since the PLY crate you are working on is much more broadly applicable than just computer vision, it should live outside of this mono-repo.

Let me know if you need anything. Like I said, I am available via the discord server. Find me as vadix on the server.

Video processing and playback

Just as we can work with images, it would be really useful to work with videos, providing an alternative to OpenCV's VideoCapture. Ideally we would not load the entire video at once, but allow some form of frame-by-frame processing.

Create corresponding tutorial chapter for geometric verification code

@vadixidav created code for a geometric verification example slotted for Chapter 5 of the tutorial, but there is no corresponding tutorial text to describe it. This is a nice example to show a consensus algorithm (ARRSAC), and a nice extension to the feature matching of Chapter 4, demonstrating how features can be used to infer camera motion. It would be nice to have the corresponding markdown description to solidify it as an example in the tutorial book.

Change SVDs to use new nalgebra APIs

nalgebra previously did not sort singular vectors in SVD. However, now it does by default, and it has an option to ask for them unordered as well. See https://docs.rs/nalgebra/0.30.1/nalgebra/base/struct.Matrix.html#method.svd.

In many places in the code, the SVD is being explicitly sorted. Ideally, we would rely on the new behavior where the SVD is already sorted for us, so this old code can be changed and the manual sorting removed. In any case where we were not sorting the singular values, we can explicitly use the unordered version of the API: https://docs.rs/nalgebra/0.30.1/nalgebra/base/struct.Matrix.html#method.svd_unordered.

Template Matching Example

I tried feature matching on a screenshot: the screenshot was the first image and a small cut-out of it was the second one, but this did not work.
My goal was to find whether the cut-out exists in the screenshot via feature matching (of the examples, the feature matching one seemed usable for my purpose); the code was the same as in the example.

Can you please say a bit about whether feature matching, or anything in the current state of the project, could help to find objects in an image / template match?

Accelerate AKAZE with WebGPU

AKAZE can be sped up in several places with GPU routines. Support should be added using WebGPU to get the highest level of cross-platform support for compute acceleration. The three parts that are important are the diffusion step, gradient computation, and feature extraction. Currently, the first two of these are done with ndarray, but GPU should be optionally supported. This will be gated behind an optional feature, but it should be capable of falling back to the CPU implementation if no GPU is available for compute. At this time, it is not necessary to tell AKAZE which GPU device to use to perform the compute.

AC-RANSAC

I have talked briefly with @pmoulon about RANSAC, and his advice for more robust sample consensus (such as while performing SfM) is to utilize AC-RANSAC. This should go on our to-do list as an important sample consensus algorithm.

Camera Calibration

It would be good if we could perform camera calibration in some way. Auto-calibration from multiple images (and perhaps from SfM with optimization) would be wonderful, but equally useful could be a checkerboard-based calibration or calibration based on some other pattern. Currently, cameras must be calibrated outside of our CV software, but it would be ideal to have a Rust solution to this problem.

Issue trying the example tutorial

I'm trying the example tutorial. My code for main.rs is:

use image::{DynamicImage, Rgba, GenericImageView};
use imageproc::drawing;
use rand::Rng;

fn main() {
    let src_image = image::open("res/0000000000.png").expect("failed to open image file");

    let mut rng = rand::thread_rng();

    let mut canvas = drawing::Blend(src_image.to_rgba());
    for _ in 0..50 {
        let x : i32 = rng.gen_range(0, src_image.width() - 1) as i32;
        let y : i32 = rng.gen_range(0, src_image.height() - 1) as i32;
        drawing::draw_cross_mut(&mut canvas, Rgba([0, 255, 255, 128]), x as i32, y as i32);
    }

    let out_img = DynamicImage::ImageRgba8(canvas.0);
    imgshow::imgshow(&out_img);
}

my Cargo.toml file:

[package]
name = "chapter2-first-program"
version = "0.1.0"
authors = ["Walter Perdan <[email protected]>"]
edition = "2018"

# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html

[dependencies]
imgshow = { git = "https://github.com/rust-cv/cv.git" }
image = "0.23.7"
imageproc = "0.21.0"
rand = "0.7.3"

The image is in the res folder. When I run cargo run:

Running `target/debug/chapter2-first-program`
thread 'main' panicked at 'called `Option::unwrap()` on a `None` value', /home/walter/.cargo/registry/src/gith
ub.com-1ecc6299db9ec823/wgpu-native-0.4.3/src/instance.rs:474:72
stack backtrace:
   0: rust_begin_unwind
             at /rustc/18bf6b4f01a6feaf7259ba7cdae58031af1b7b39/library/std/src/panicking.rs:475
   1: core::panicking::panic_fmt
             at /rustc/18bf6b4f01a6feaf7259ba7cdae58031af1b7b39/library/core/src/panicking.rs:85
   2: core::panicking::panic
             at /rustc/18bf6b4f01a6feaf7259ba7cdae58031af1b7b39/library/core/src/panicking.rs:50
   3: core::option::Option<T>::unwrap
             at /rustc/18bf6b4f01a6feaf7259ba7cdae58031af1b7b39/library/core/src/option.rs:370
   4: wgpu_request_adapter
             at /home/walter/.cargo/registry/src/github.com-1ecc6299db9ec823/wgpu-native-0.4.3/src/instance.rs
:474
   5: wgpu::Adapter::request
             at /home/walter/.cargo/registry/src/github.com-1ecc6299db9ec823/wgpu-0.4.0/src/lib.rs:545
   6: <iced_wgpu::window::backend::Backend as iced_native::window::backend::Backend>::new
             at /home/walter/.cargo/registry/src/github.com-1ecc6299db9ec823/iced_wgpu-0.2.3/src/window/backen
d.rs:21
 7: iced_winit::application::Application::run
             at /home/walter/.cargo/registry/src/github.com-1ecc6299db9ec823/iced_winit-0.1.0/src/application.
rs:180
   8: iced::application::Application::run
             at /home/walter/.cargo/registry/src/github.com-1ecc6299db9ec823/iced-0.1.1/src/application.rs:201
   9: imgshow::imgshow
             at /home/walter/.cargo/git/checkouts/cv-f2802c299d0e3a2a/ed1a25e/imgshow/src/lib.rs:12
  10: chapter2_first_program::main
             at ./src/main.rs:18
  11: core::ops::function::FnOnce::call_once

my rustc version:

rustc --version
rustc 1.47.0 (18bf6b4f0 2020-10-07)

The image is in the target/debug/res folder. In fact, if I move it, I receive this IoError:

cargo run
   Finished dev [unoptimized + debuginfo] target(s) in 0.17s
    Running `./chapter2-first-program`
thread 'main' panicked at 'failed to open image file: IoError(Os { code: 2, kind: NotFound, message: "No such file or directory" })', src/main.rs:6:55
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

What is wrong, or what did I do wrong? Sorry, maybe this is trivial for you, but I'm a newbie with Rust... 🙂

Convert remaining vSLAM Sandbox command line options into text entry boxes

The remaining settings (such as the PLY export location, the reconstruction data export location, the export thresholds, the camera calibration, etc.) should all be removed from the command line and added as GUI elements. The camera calibration may need to be packed into a JSON blob for convenient sharing.

Optimize the Akaze feature detector

While working on fixing #63, I'm going through all of Akaze's source code quite deeply, and so as not to lose that understanding, I'm collecting here the low-hanging-fruit optimization opportunities I'm discovering. These are:

  • In create_nonlinear_scale_space, the Lx and Ly images could be computed in parallel, as these only read Lsmooth. This should be useful for the first evolutions, when images are of a significant size.
  • Descriptor could be computed just after angle is calculated, in order to optimize cache locality when accessing the evolutions.
  • In the angle and descriptor computation, the window is iterated x-major, while images are y-major in memory. This is bad for cache locality.
  • Improved filter functions with better loop structures.
  • SIMD in filters.
  • In find_scale_space_extrema, there is an O(n²) loop for testing duplicates. This could be reduced to O(n) using some form of spatial hashing.

A note here: I did not consider micro-optimizations, such as parallelizing loops within simple image computations. If considered, more could be done, but the benefit is not fully clear due to the overhead of spawning threads.
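The spatial hashing idea from the extrema bullet can be sketched as follows: bucket keypoints into grid cells sized to the duplicate distance, then compare each point only against the 3x3 neighborhood of its cell. The names and the distance criterion are illustrative, not Akaze's actual code:

```rust
use std::collections::HashMap;

// O(n)-ish duplicate rejection: each point is checked only against points
// in nearby grid cells, instead of against every other point.
fn dedup_keypoints(points: &[(f32, f32)], min_dist: f32) -> Vec<(f32, f32)> {
    let mut grid: HashMap<(i64, i64), Vec<usize>> = HashMap::new();
    let mut kept = Vec::new();
    for (idx, &(x, y)) in points.iter().enumerate() {
        // Truncation toward zero is fine for this illustration.
        let cell = ((x / min_dist) as i64, (y / min_dist) as i64);
        let mut duplicate = false;
        'search: for dx in -1..=1 {
            for dy in -1..=1 {
                if let Some(bucket) = grid.get(&(cell.0 + dx, cell.1 + dy)) {
                    for &other in bucket {
                        let (ox, oy) = points[other];
                        if (x - ox).hypot(y - oy) < min_dist {
                            duplicate = true;
                            break 'search;
                        }
                    }
                }
            }
        }
        if !duplicate {
            grid.entry(cell).or_default().push(idx);
            kept.push((x, y));
        }
    }
    kept
}

fn main() {
    let pts = [(10.0, 10.0), (10.4, 10.1), (50.0, 50.0)];
    let kept = dedup_keypoints(&pts, 1.0);
    assert_eq!(kept.len(), 2); // the second point duplicates the first
}
```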

PnP

We currently do not have a PnP crate. Such a crate would combine the Levenberg-Marquardt crate from #2 with sample-consensus. The user will be able to choose the estimator and choose the sample consensus algorithm.

Guidance

The definition of the Jacobian for PnP should use the quaternion rather than a rotation matrix or polar coordinates. It is computationally easy to calculate the derivative of the outputs of a quaternion rotation with respect to the quaternion that describes the rotation. This makes applying Levenberg-Marquardt to an Isometry3 from nalgebra, which is what we currently use for pose in cv-core, a computationally easy task. The partial derivative of the output with respect to the translation component is trivial to derive.
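To illustrate why the quaternion parameterization is convenient: the rotated output is a low-degree polynomial in the quaternion components, so its partial derivatives are cheap to evaluate. A dependency-free sketch (illustrative, not cv-core's API) of the rotation and a finite-difference partial:

```rust
// A unit quaternion q = (w, x, y, z) rotates a point via v' = q v q*.
type Quat = [f64; 4];

// Expanded polynomial form of q v q* for a unit quaternion.
fn rotate(q: Quat, v: [f64; 3]) -> [f64; 3] {
    let [w, x, y, z] = q;
    let [vx, vy, vz] = v;
    [
        (1.0 - 2.0 * (y * y + z * z)) * vx + 2.0 * (x * y - w * z) * vy + 2.0 * (x * z + w * y) * vz,
        2.0 * (x * y + w * z) * vx + (1.0 - 2.0 * (x * x + z * z)) * vy + 2.0 * (y * z - w * x) * vz,
        2.0 * (x * z - w * y) * vx + 2.0 * (y * z + w * x) * vy + (1.0 - 2.0 * (x * x + y * y)) * vz,
    ]
}

fn main() {
    // Sanity check: a 90 degree rotation about +z maps (1, 0, 0) to (0, 1, 0).
    let h = std::f64::consts::FRAC_PI_4; // half-angle
    let q: Quat = [h.cos(), 0.0, 0.0, h.sin()];
    let v = rotate(q, [1.0, 0.0, 0.0]);
    assert!(v[0].abs() < 1e-9 && (v[1] - 1.0).abs() < 1e-9 && v[2].abs() < 1e-9);

    // A Jacobian column by finite differences; analytically this is just the
    // derivative of the polynomial above with respect to q.w.
    let eps = 1e-6;
    let mut q2 = q;
    q2[0] += eps;
    let v2 = rotate(q2, [1.0, 0.0, 0.0]);
    println!("d v.x / d q.w ≈ {}", (v2[0] - v[0]) / eps);
}
```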

Figure out why CI test build is breaking

Currently, the CI test build is breaking, as there seems to be a SIGKILL on the process. We might be going over some time limit for building. We should investigate to see if this can be worked around.

Create an indirect vSLAM application

We should create an indirect vSLAM application to test our algorithms and rally support in the community. It might also be useful to some to get 3d reconstructions from video or to take measurements. The application should be relatively minimal and focus on running a SLAM engine and visualizing the point cloud and cameras.

This is currently being done in the vslam-sandbox crate.

Create initial empty vslam-sandbox GUI

This task is to get vslam-sandbox running with a simple GUI. The command-line arguments should be used exactly as they are today, but the GUI should contain info logs as if RUST_LOG=info were used at the command line. The GUI and application should exit once all the processing that is currently performed has finished. The GUI should be made with iced for wgpu compatibility.

Calibrate camera for distortion

Hi,

I'm trying to calibrate a camera against what I believe to be radial distortion. With OpenCV the method used is calibrateCamera (link), which usually requires a chessboard pattern in order to identify the actual points and automatically calculate where they should be. I have manually constructed 6 pairs of these points.

My question is: how do we achieve this using rust-cv? Is it implemented? I've found something similar in the dlt crate, but the output is a 4x3 matrix and I can't really make sense of it. (More details on what I've tried can be found here.)

Any help is deeply appreciated.

Disclaimer: I've been reading for about 2 days and decided to open an issue with you guys; maybe it will also be useful for someone else looking for this in the future.

SIFT

The patents on SIFT have recently expired, so we can now create a Rust implementation of the algorithm. SIFT is a detector and descriptor known for its high robustness. Currently we have AKAZE, but we would want to use SIFT for non-real-time tasks to maximize matching accuracy.

Akaze's implementation seems to have issues with rotational invariance

I am trying to use Akaze for an application that searches correspondences between a template and an image seen by a camera.

As a start, I wrote the common pipeline building on the excellent Rust CV tutorial. I am using Akaze with threshold 0.001, looking for the two nearest neighbors, and using a distance ratio criterion of 0.8 between the best match and the second-best one (best_dist < 0.8 * next_best_dist), as suggested in the Akaze OpenCV tutorial. Besides that, I am using the matching and symmetric_matching functions from the tutorial as-is.

For comparison, I created a similar pipeline in Python using OpenCV (4.7.0.72). When only scaling is involved, both Rust CV's Akaze and Open CV Akaze perform well:

Open CV, before matching: left: 2514 keypoints, right: 820 keypoints:
correspondences_opencv_scale

Rust CV, before matching: left: 5353 keypoints, right: 1480:
correspondences_rustcv_scale

However, when I rotate the small image, Rust CV completely fails while Open CV works like a charm:

Open CV, before matching: left: 2514 keypoints, right: 1207 keypoints:
correspondences_opencv_rot

Rust CV, before matching: left 5353 keypoints, right: 2490
correspondences_rustcv_rot

So it looks to me like something is very broken with the rotational invariance of Rust CV's Akaze implementation.
Was this implementation ever validated against the OpenCV one?

I am also surprised by the different numbers of matches, as I would expect more similar ones between the two implementations.

VITAMIN-E image feature meshing and noise removal

"VITAMIN-E: VIsual Tracking And MappINg with Extremely Dense Feature Points" is a paper that introduces several novel concepts. One of the things it does is create highly accurate meshes in real-time. The image meshing component is achieved by performing Delaunay triangulation (standard technique) with NLTGV minimization proposed by Greene et al (GitHub here).

This ticket is to create a crate that can perform this image meshing algorithm as per the VITAMIN-E paper.

Fisheye camera model

cv-core has a CameraModel trait that should be implemented for fisheye cameras. A new crate should be created called cv-fisheye for this purpose, similar to the existing cv-pinhole.

Add image "adder", queue, save, and export button to vSLAM Sandbox GUI

Currently, vslam-sandbox operates by taking all the images to run at the command line. We should augment this by allowing images to be added through either drag-and-drop or a file chooser in the GUI, replacing the command-line arguments. Adding images will add them to a queue. There will also be a button to queue up a save of the reconstruction state, and a button to export to PLY. The settings for these can still be retrieved from the command line within the scope of this issue.

Integrate point cloud viewer into vSLAM Sandbox

The point cloud viewer being worked on by @Schweeble needs to be integrated into vSLAM Sandbox. This viewer should display the point cloud after every operation that it can. Specifically:

  • After an image is registered
  • After each component of optimization
    • Bundle adjust
    • Filtering
    • Merging

The camera positions should be passed to the point cloud viewer as well.

Poisson surface reconstruction

We should have a crate to perform Poisson surface reconstruction. Of the methods most appropriate for noisy and scattered data (the ball-pivoting algorithm and Poisson surface reconstruction), Poisson surface reconstruction is the better fit for photogrammetric reconstruction (rather than reconstruction of other point cloud data such as LIDAR).

It is not clear if surface reconstruction should have abstractions associated with it. We may only need Poisson surface reconstruction for the time being. Some datatypes may be added to the cv-core crate as necessary.

Build fails with Rust 1.54.0

Building rust-cv with Rust 1.54.0 fails due to this error:

   [...]
   Compiling cv-sfm v0.1.0 (/Users/fast/Projects/Personal/rust/rust-cv/cv-sfm)
error[E0658]: use of unstable library feature 'array_map'
    --> cv-sfm/src/lib.rs:1826:27
     |
1826 |         let poses = views.map(|view| {
     |                           ^^^
     |
     = note: see issue #75243 <https://github.com/rust-lang/rust/issues/75243> for more information

error[E0658]: use of unstable library feature 'array_map'
    --> cv-sfm/src/lib.rs:1844:23
     |
1844 |                 views.map(|view| {
     |                       ^^^
     |
     = note: see issue #75243 <https://github.com/rust-lang/rust/issues/75243> for more information

error[E0658]: use of unstable library feature 'array_map'
  --> cv-sfm/src/export.rs:95:50
   |
95 |             [(1, 1), (1, -1), (-1, -1), (-1, 1)].map(|(up, right)| {
   |                                                  ^^^
   |
   = note: see issue #75243 <https://github.com/rust-lang/rust/issues/75243> for more information

error: aborting due to 3 previous errors

For more information about this error, try `rustc --explain E0658`.
error: could not compile `cv-sfm`

To learn more, run the command again with --verbose.
warning: build failed, waiting for other jobs to finish...
error: build failed

What version of Rust shall I use instead?

VITAMIN-E curvature feature detection

"VITAMIN-E: VIsual Tracking And MappINg with Extremely Dense Feature Points" is a paper that introduces several novel concepts. One of those is a curvature based feature detection scheme. This scheme does not have an associated descriptor, but is purely a detector. It is useful to effectively get a maximally dense blanket of features over all interesting places in the image. This is incredibly useful for SfM. The paper also uses it to perform real-time meshing and dense feature extraction thanks to an overabundance of features, which is another novel accomplishment.

This ticket is for us to implement this image curvature detector.

Cargo add cv doesn't work due to old unsupported attributes.

In a new project, running cargo add cv; cargo run will generate this error:

error[E0554]: `#![feature]` may not be used on the stable release channel
 --> /home/dogunbound/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bitarray-0.2.6/src/lib.rs:2:1
  |
2 | #![feature(min_const_generics)]
  | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ help: remove the attribute
  |
  = help: the feature `min_const_generics` has been stable since `1.51.0` and no longer requires an attribute to enable

I decided to try out master with these settings:

[dependencies.cv]
git = "https://github.com/rust-cv/cv"

And only got these warnings (at least it runs):

[dogunbound@archlinux test-rust]$ cargo run
warning: skipping duplicate package `ensure_no_std` found at `/home/dogunbound/.cargo/git/checkouts/cv-f2802c299d0e3a2a/82a25ee/cv-geom/ensure_no_std`
warning: skipping duplicate package `ensure_no_std` found at `/home/dogunbound/.cargo/git/checkouts/cv-f2802c299d0e3a2a/82a25ee/lambda-twist/ensure_no_std`
warning: skipping duplicate package `ensure_no_std` found at `/home/dogunbound/.cargo/git/checkouts/cv-f2802c299d0e3a2a/82a25ee/akaze/ensure_no_std`
warning: skipping duplicate package `ensure_no_std` found at `/home/dogunbound/.cargo/git/checkouts/cv-f2802c299d0e3a2a/82a25ee/cv-pinhole/ensure_no_std`
warning: skipping duplicate package `ensure_no_std` found at `/home/dogunbound/.cargo/git/checkouts/cv-f2802c299d0e3a2a/82a25ee/cv-core/ensure_no_std`
    Finished dev [unoptimized + debuginfo] target(s) in 0.05s
     Running `target/debug/test-rust`

Add contrast Normalization

I've begun to implement a contrast normalization function, based on the Wikipedia entry linked from the README.

I figure the imageproc crate is the right place to put it.

I will update the README when I am done.
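One simple form of contrast normalization is linear contrast stretching: rescale grayscale values so the darkest pixel maps to 0 and the brightest to 255. A dependency-free sketch (not the proposed imageproc implementation):

```rust
// Linearly rescale grayscale values to span the full 0..=255 range.
fn stretch_contrast(pixels: &[u8]) -> Vec<u8> {
    let min = *pixels.iter().min().unwrap_or(&0) as f32;
    let max = *pixels.iter().max().unwrap_or(&255) as f32;
    if (max - min).abs() < f32::EPSILON {
        return pixels.to_vec(); // flat image: nothing to stretch
    }
    pixels
        .iter()
        .map(|&p| (((p as f32 - min) / (max - min)) * 255.0).round() as u8)
        .collect()
}

fn main() {
    // A low-contrast patch occupying only the 100..=130 range.
    let img = [100u8, 110, 120, 130];
    let out = stretch_contrast(&img);
    assert_eq!(out, vec![0, 85, 170, 255]);
    println!("{out:?}");
}
```

More sophisticated variants (e.g. histogram equalization, or clipping percentiles before stretching) follow the same shape.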

The Akaze extract benchmark is testing image loading performance as well

Currently, the Akaze "extract" benchmark is also measuring image loading, which seems to take a large part of the time. That is not really what we want to measure; rather, we should benchmark only the extraction, using the newly added extract_from_gray_float_image function.

If people agree, I'm happy to do a PR.

Allow JSON blobs to be toggled to text inputs in vSLAM Sandbox

Currently, the reconstruction settings and the camera calibration will be passed as JSON blobs. This is good for sharing settings with other users, but it may be confusing. This task is to allow toggling between JSON and text input mode via a radio button selection. When changing to JSON mode, any text input left empty is omitted from the JSON and takes its default value. Likewise, when changing from JSON mode to text input mode, the text inputs should be empty wherever values are not specified in the JSON. This can be achieved using serde_json::Value, which allows reflection over the JSON.

I see that the calibrate and uncalibrate functions with k1 distortion are not correct.

pub struct CameraIntrinsicsK1Distortion {

As per OpenCV, the transform from a 3D point (or bearing) to a 2D point is not as required by the equations. The X, Y, Z point -> U, V image point mapping should be as follows:

distortion

That is, after first scaling by Z to get (X/Z, Y/Z), the result should be multiplied by (1 + k1*r^2), whereas in the function uncalibrate(&self, projection: UnitVector3) -> Option the opposite is happening. Can someone correct me?

Ref:
https://docs.opencv.org/4.x/d9/d0c/group__calib3d.html#:%7E:text=The%20next%20figures,monotonically%20increasing

Can somebody please confirm whether this is correct?
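For reference, the OpenCV-style forward (world to image) model with a single radial term can be sketched as follows; this is illustrative, not cv-pinhole's actual code:

```rust
// Forward k1 model: normalize by depth, then scale radially by (1 + k1*r^2).
// The inverse (undistort) direction has no closed form and is usually iterated.
fn distort_k1(point: [f64; 3], k1: f64) -> [f64; 2] {
    let [x, y, z] = point;
    let (xn, yn) = (x / z, y / z); // normalized image coordinates
    let r2 = xn * xn + yn * yn;
    let s = 1.0 + k1 * r2; // radial distortion factor
    [xn * s, yn * s]
}

fn main() {
    // A point on the optical axis is unaffected by radial distortion.
    assert_eq!(distort_k1([0.0, 0.0, 2.0], -0.1), [0.0, 0.0]);
    // Off-axis, negative k1 (barrel distortion) pulls points inward.
    let [u, v] = distort_k1([1.0, 1.0, 1.0], -0.1);
    assert!(u < 1.0 && v < 1.0);
    println!("distorted: ({u}, {v})");
}
```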

Create real-time 3d point cloud viewer

To show the output of vSLAM and visual odometry algorithms, we need a crate that can efficiently take VBOs of points to display and render them. A crate should be written to do the rendering and a separate crate should be created for an example viewer, including the controls. A good start would be to load a PLY file using the ply-rs crate or an LAS file using the las crate.

wgpu should be used to target all systems. It is recommended to use iced, which (on master branch) can natively render on the web or desktop using wgpu.

Does not compile with rust 1.64

I just added cv = "0.6.0" to my dependencies and tried running cargo check or cargo run; it doesn't compile.

I'm using Rust 1.64.
