
nvblox's People

Contributors

alexmillane, david-tingdahl-nvidia, davidtingdahl, helenol, hemalshahnv, remostei, swapnesh-wani-nvidia

nvblox's Issues

Question about nvblox_interface target

Hi!

I am using your library but needed a ROS1 interface, so I converted the isaac_ros_nvblox/nvblox_ros package to Noetic. Everything is working well now, but I was stuck for the longest time on an issue where I couldn't use the CUDA implementation to extract the ESDF as a pointcloud. The issue came from linking against the wrong target when linking the nvblox library into my new nvblox_ros package.

Initially, I was linking against the targets nvblox_lib and nvblox_eigen, following the ROS2 CMakeLists.txt, but switching those out in favor of nvblox_interface fixed the issue. The comments here mention that this target is specifically meant for ROS, but also that it has been marked for deletion: https://github.com/nvidia-isaac/nvblox/blob/public/nvblox/CMakeLists.txt#L199.

I was just hoping to get some additional context on the nvblox_interface target: why was it needed originally, and is it now only needed for ROS1, given that it is not used in the isaac_ros_nvblox package? I have been reading up on the INTERFACE keyword, so I guess it has something to do with how the nvblox, stdgpu, and eigen libraries are built, but I still don't understand why using nvblox_lib caused the issue.

I appreciate any details you're willing to provide, thanks!
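
For context on the INTERFACE keyword itself: an INTERFACE library builds no code of its own and only carries usage requirements (include directories, compile definitions, transitive link dependencies) to whatever links against it. A minimal sketch with hypothetical target names, not the actual nvblox CMake code, of why such an aggregate target is convenient for a downstream ROS package:

add_library(mapping_interface INTERFACE)            # builds nothing itself
target_link_libraries(mapping_interface INTERFACE
  mapping_core      # stand-in for the compiled nvblox_lib
  mapping_gpu_hash  # stand-in for the GPU hash / stdgpu pieces
  mapping_eigen)    # stand-in for the bundled Eigen target
target_include_directories(mapping_interface INTERFACE include)

# A ROS1 consumer then links the whole bundle in one step instead of
# reproducing every library, include path and CUDA-related property:
target_link_libraries(my_nvblox_ros_node mapping_interface)

If that is roughly how nvblox_interface is assembled, linking nvblox_lib and nvblox_eigen alone would miss whatever other usage requirements the bundle forwards, which could plausibly explain the ESDF pointcloud problem; the maintainers would have to confirm the exact cause.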

Docker image fails to build

Hello,

We are building a docker image on top of this image: dustynv/ros:humble-ros-base-l4t-r35.1.0 for a Jetson Xavier NX.

When we run:

RUN source ${ROS_ROOT}/install/setup.bash && \
    colcon build 

We get the following error:

 > [ 8/10] RUN source /opt/ros/humble/install/setup.bash && colcon build:
#13 33.64 CMake Error at CMakeLists.txt:135 (set_target_properties):
#13 33.64   set_target_properties called with incorrect number of arguments.
#13 33.64 
#13 33.64 
#13 33.64 ---
#13 33.64 Failed   <<< nvblox [27.8s, exited with code 1]

Line 135 is:

   set_target_properties(nvblox_lib PROPERTIES CUDA_ARCHITECTURES ${CMAKE_CUDA_ARCHITECTURES})

We have narrowed it down to CMAKE_CUDA_ARCHITECTURES not being set.
If we create the image without running colcon build, then run the image on the Jetson and manually run colcon build, it works fine, and we can also see that CMAKE_CUDA_ARCHITECTURES is set.

Building this package on our development computer also works fine; only the docker build fails.

We presume this might be related to what the docs for CMAKE_CUDA_ARCHITECTURES (https://cmake.org/cmake/help/latest/variable/CMAKE_CUDA_ARCHITECTURES.html) state:

Users are encouraged to override this, as the default varies across compilers and compiler versions.

We are building the docker image on an x86_64 host, but targeting an arm64 CPU by declaring the platform via docker-compose.yaml:

version: "3.9"
services:
  nakai-robot:
    privileged: true
    network_mode: "host"
    platform: linux/arm64/v8
    container_name: nakai-robot
    image: nakairobotics/nakai:nakai
    ...
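
One workaround worth trying (an assumption, not a confirmed fix): set CMAKE_CUDA_ARCHITECTURES explicitly instead of relying on detection, which cannot probe a GPU during an emulated cross-build. For a Xavier NX (compute capability 7.2) that would look like:

RUN source ${ROS_ROOT}/install/setup.bash && \
    colcon build --cmake-args -DCMAKE_CUDA_ARCHITECTURES=72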

Question about two functions: ViewCalculator::getBlocksInImageViewRaycast() and checkBlocksInTruncationBand()

Hi, thanks for your great work. I have two questions regarding details of nvblox:

Q1: Voxel block retrieval in the TSDF integration:

const std::vector<Index3D> block_indices =

Could you please check the correctness of my understanding: it seems that the indices of all visible voxel blocks (those traversed by the rays) are stored in block_indices. The TSDF values and weights of the voxels in these blocks are then updated, though the TSDF values of voxels that are far outside the truncation band (the purple one in the figure below) are set to truncation_distance_m.

Q2: Reduce blocks in the color integration:

if (std::abs(voxel.distance) <= truncation_distance_m) {

For the color integration, we are interested in voxels that are near surfaces; in other words, only these voxels should be colored. Could you please explain why the criterion for reducing blocks is
if (std::abs(voxel.distance) <= truncation_distance_m)?

I think the criterion should instead be
if (std::abs(voxel.distance) < truncation_distance_m && std::abs(voxel.distance) > 0)
to keep only voxels that have already been observed and are near surfaces.

Thanks.
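
Regarding Q2, a minimal sketch of the kind of per-voxel gate a color integrator typically applies (hypothetical helper, not the exact nvblox kernel): "observed" is usually expressed through the TSDF weight rather than through distance > 0, since a voxel exactly on the surface has distance 0 but a positive weight, while a never-observed voxel has weight 0.

#include <cmath>

// Stand-in for nvblox's TSDF voxel (signed distance plus integration weight).
struct TsdfVoxel {
  float distance = 0.f;  // signed distance to the surface, in meters
  float weight = 0.f;    // 0 means the voxel was never observed
};

// Hypothetical predicate: color a voxel only if it has been observed at all
// (weight above a small threshold) and lies inside the truncation band.
inline bool shouldColorVoxel(const TsdfVoxel& voxel,
                             float truncation_distance_m,
                             float min_weight = 1e-4f) {
  const bool observed = voxel.weight >= min_weight;
  const bool near_surface =
      std::abs(voxel.distance) <= truncation_distance_m;
  return observed && near_surface;
}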

Compatibility of Nvblox with Jetson Xavier NX running Ubuntu 20.04 (ROS Noetic)

Hi!

Thanks for the great work. I am using the nvblox package on a Jetson Xavier running Ubuntu 20.04 with ROS Noetic. I see that the nvblox package currently supports ROS2 Humble. However, most of my other related work is in ROS Noetic and it would not be possible to migrate to ROS2 at this point. Moreover, my Jetson Xavier has an arm64 architecture running Ubuntu 20.04 (a Tier 3 platform), which is not a supported platform for installing ROS2 Humble (https://docs.ros.org/en/humble/Releases/Release-Humble-Hawksbill.html#supported-platforms). Therefore, I tried ROS2 Foxy and was able to get nvblox running on the Jetson using the docker. I was hoping there is a way to run the nvblox package with ROS Noetic so that I can integrate my existing work with it.

I appreciate any information you are willing to provide. Thanks!

Mesh result is poor compared to voxblox

Hi, thanks for the great work. I have tested it with my own dataset, using depth computed from stereo images.
However, the generated mesh is poor compared to the one from the voxblox_ros project. Is there any suggestion?

error: ‘gflags’ has not been declared

I have installed gflags with:

sudo apt-get install -y libgoogle-glog-dev libgtest-dev libgflags-dev python3-dev libsqlite3-dev
cd /usr/src/googletest && sudo cmake . && sudo cmake --build . --target install

But still found:
/home/yuchy/Dev/nvblox/nvblox/executables/include/nvblox/gflags_param_loading/fuser_params_from_gflags.h: In function ‘void nvblox::get_multi_mapper_params_from_gflags(float*, nvblox::MappingType*, nvblox::EsdfMode*, nvblox::MultiMapper::Params*)’:
/home/yuchy/Dev/nvblox/nvblox/executables/include/nvblox/gflags_param_loading/fuser_params_from_gflags.h:96:8: error: ‘gflags’ has not been declared
96 | if (!gflags::GetCommandLineFlagInfoOrDie("voxel_size").is_default) {
| ^~~~~~
/home/yuchy/Dev/nvblox/nvblox/executables/include/nvblox/gflags_param_loading/fuser_params_from_gflags.h:102:8: error: ‘gflags’ has not been declared
102 | if (!gflags::GetCommandLineFlagInfoOrDie("mapping_type_static_occupancy")
| ^~~~~~
/home/yuchy/Dev/nvblox/nvblox/executables/include/nvblox/gflags_param_loading/fuser_params_from_gflags.h:110:8: error: ‘gflags’ has not been declared
110 | if (!gflags::GetCommandLineFlagInfoOrDie("mapping_type_dynamic").is_default) {
| ^~~~~~
/home/yuchy/Dev/nvblox/nvblox/executables/include/nvblox/gflags_param_loading/fuser_params_from_gflags.h:119:8: error: ‘gflags’ has not been declared
119 | if (!gflags::GetCommandLineFlagInfoOrDie("use_2d_esdf_mode").is_default) {
| ^~~~~~
/home/yuchy/Dev/nvblox/nvblox/executables/include/nvblox/gflags_param_loading/fuser_params_from_gflags.h:124:8: error: ‘gflags’ has not been declared
124 | if (!gflags::GetCommandLineFlagInfoOrDie("esdf_2d_min_height").is_default) {
| ^~~~~~
/home/yuchy/Dev/nvblox/nvblox/executables/include/nvblox/gflags_param_loading/fuser_params_from_gflags.h:129:8: error: ‘gflags’ has not been declared
129 | if (!gflags::GetCommandLineFlagInfoOrDie("esdf_2d_max_height").is_default) {
| ^~~~~~
/home/yuchy/Dev/nvblox/nvblox/executables/include/nvblox/gflags_param_loading/fuser_params_from_gflags.h:134:8: error: ‘gflags’ has not been declared
134 | if (!gflags::GetCommandLineFlagInfoOrDie("esdf_slice_height").is_default) {
| ^~~~~~
/home/yuchy/Dev/nvblox/nvblox/executables/include/nvblox/gflags_param_loading/fuser_params_from_gflags.h:139:8: error: ‘gflags’ has not been declared
139 | if (!gflags::GetCommandLineFlagInfoOrDie(
| ^~~~~~

It seems that nvblox has not found the installed gflags.
Could you help me understand why this error occurred?
My system is Ubuntu 20.04 with CUDA 11.7.

Is nvblox the GPU version of Voxgraph?

Hi! Thanks for your amazing work on nvblox!

My question is: is nvblox the GPU version of Voxgraph? If not, what are the differences compared to Voxgraph?

I'm really looking forward to your reply! Thanks for your time!

Question about querying the esdf for distance and gradient

Hi,

First of all, thanks for making this code publicly available! It's great that the generation of an accurate ESDF works so well.

I have a question about how to most easily query the distance and gradient at a point (x, y, z), or a list of points, when using the 3D ESDF generation. I did not find a corresponding method on the Mapper class for querying the requested point(s).

Would highly appreciate it if you could point me in the right direction on where to look/modify nvblox to query for distance and gradient. Thanks in advance!
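
In case it helps in the meantime: once you have any point-wise distance lookup into the ESDF layer (nvblox ships CPU-side interpolation utilities under src/interpolation/, or you can read the voxel containing the query point), the gradient can be recovered by central differences. A minimal sketch with a placeholder query function, not an existing Mapper method:

#include <functional>

#include <Eigen/Core>

// Placeholder point-wise distance lookup (e.g. an interpolated or
// nearest-voxel read of the ESDF); returns false in unobserved space.
using DistanceQuery = std::function<bool(const Eigen::Vector3f&, float*)>;

// Central-difference gradient of the distance field at point p.
bool esdfGradient(const DistanceQuery& distance_at, const Eigen::Vector3f& p,
                  float step_m, Eigen::Vector3f* gradient) {
  for (int axis = 0; axis < 3; ++axis) {
    Eigen::Vector3f p_plus = p;
    Eigen::Vector3f p_minus = p;
    p_plus[axis] += step_m;
    p_minus[axis] -= step_m;
    float d_plus = 0.f;
    float d_minus = 0.f;
    if (!distance_at(p_plus, &d_plus) || !distance_at(p_minus, &d_minus)) {
      return false;  // a neighboring sample fell in unobserved space
    }
    (*gradient)[axis] = (d_plus - d_minus) / (2.f * step_m);
  }
  return true;
}

A step of one voxel size is a reasonable default for step_m; batching many query points and evaluating them on the GPU would follow the same pattern.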

Compile error about librt.so

Thank you for your contribution!

I hit an error when compiling this project, shown below. Could anyone help fix it?

"No rule to make target '/tmp/build/80754af9/snappy_1649923748780/_build_env/x86_64-conda-linux-gnu/sysroot/usr/lib/librt.so', needed by 'executables/benchmark'. Stop."

Mesh to ESDF

Hey NVBLOX Team!

Is there a way to create ESDFs from an existing triangular mesh? My desired workflow looks as follows:

  1. Take a bunch of messy (self intersecting) meshes
  2. Voxelize them
  3. Mesh the ESDF to create clean, water-tight geometry

The input meshes have a relatively low polygon count, hence using only vertices as a point cloud input will probably not yield meaningful results.

Currently, I'm using OpenVDB for this task but it's relatively slow for higher voxel resolutions.
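
One hedged sketch of the point-cloud route, addressing the low-polygon concern: instead of using only the vertices, sample the triangle surfaces area-uniformly and feed the resulting dense point cloud into whatever TSDF/ESDF integration path you use. This is generic geometry code, not an nvblox API:

#include <algorithm>
#include <random>
#include <vector>

#include <Eigen/Dense>

// Hypothetical triangle soup: one position per vertex, three indices per face.
struct TriMesh {
  std::vector<Eigen::Vector3f> vertices;
  std::vector<Eigen::Vector3i> faces;
};

// Draw num_points samples uniformly over the mesh surface (area-weighted
// triangle selection + uniform barycentric coordinates).
std::vector<Eigen::Vector3f> samplePointsOnMesh(const TriMesh& mesh,
                                                int num_points) {
  if (mesh.faces.empty() || num_points <= 0) {
    return {};
  }
  // Cumulative triangle areas for area-proportional selection.
  std::vector<float> cumulative_area;
  cumulative_area.reserve(mesh.faces.size());
  float total_area = 0.f;
  for (const Eigen::Vector3i& f : mesh.faces) {
    const Eigen::Vector3f& a = mesh.vertices[f[0]];
    const Eigen::Vector3f& b = mesh.vertices[f[1]];
    const Eigen::Vector3f& c = mesh.vertices[f[2]];
    total_area += 0.5f * (b - a).cross(c - a).norm();
    cumulative_area.push_back(total_area);
  }

  std::mt19937 rng(0);
  std::uniform_real_distribution<float> uniform(0.f, 1.f);
  std::vector<Eigen::Vector3f> points;
  points.reserve(num_points);
  for (int i = 0; i < num_points; ++i) {
    // Pick a triangle with probability proportional to its area.
    const float r = uniform(rng) * total_area;
    const size_t face_idx =
        std::lower_bound(cumulative_area.begin(), cumulative_area.end(), r) -
        cumulative_area.begin();
    const Eigen::Vector3i& f = mesh.faces[face_idx];
    // Uniform sample inside the triangle via folded barycentric coordinates.
    float u = uniform(rng);
    float v = uniform(rng);
    if (u + v > 1.f) {
      u = 1.f - u;
      v = 1.f - v;
    }
    points.push_back(mesh.vertices[f[0]] +
                     u * (mesh.vertices[f[1]] - mesh.vertices[f[0]]) +
                     v * (mesh.vertices[f[2]] - mesh.vertices[f[0]]));
  }
  return points;
}

Marching-cubes meshing of the resulting distance field then gives the clean, watertight geometry described above; roughly a few samples per voxel are needed for the integration to be well conditioned.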

Missing `thrust` header file in `esdf_integrator.cu`

I got the following error when building the library, which I can fix by adding #include <thrust/sort.h> and #include <thrust/unique.h> to esdf_integrator.cu.

/tmp/nvblox/nvblox/src/integrators/esdf_integrator.cu(1002): error: qualified name is not allowed

/tmp/nvblox/nvblox/src/integrators/esdf_integrator.cu(1002): error: expected a ";"

/tmp/nvblox/nvblox/src/integrators/esdf_integrator.cu(1011): error: a class or namespace qualified name is required

/tmp/nvblox/nvblox/src/integrators/esdf_integrator.cu(1011): error: global-scope qualifier (leading "::") is not allowed

/tmp/nvblox/nvblox/src/integrators/esdf_integrator.cu(1011): error: expected a ";"

/tmp/nvblox/nvblox/src/integrators/esdf_integrator.cu(1090): error: namespace "thrust" has no member "sort"

/tmp/nvblox/nvblox/src/integrators/esdf_integrator.cu(1095): error: namespace "thrust" has no member "unique"
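
For reference, the fix described above amounts to making the dependencies explicit near the top of esdf_integrator.cu; the cub include is an assumption, based on the similar build failure reported further down where the same lines 1002/1011 point at cub::BlockRadixSort:

// Newer CUDA toolkits no longer include these transitively:
#include <thrust/sort.h>    // thrust::sort
#include <thrust/unique.h>  // thrust::unique
// Assumption, for the errors at lines 1002/1011 (cub::BlockRadixSort):
#include <cub/block/block_radix_sort.cuh>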

fatal error: nvblox/tests/utils.h

/home/yuchy/Dev/thirdparty/nvblox/nvblox/executables/src/benchmark.cpp:27:10: fatal error: nvblox/tests/utils.h: No such file or directory
27 | #include "nvblox/tests/utils.h"
| ^~~~~~~~~~~~~~~~~~~~~~

This error occurred at the end of compilation and seems related to nvblox/nvblox/executables/src/benchmark.cpp, whose includes are:

#include <benchmark/benchmark.h>
#include <gtest/gtest.h>
#include
#include
#include "nvblox/datasets/3dmatch.h"
#include "nvblox/executables/fuser.h"
#include "nvblox/io/image_io.h"
#include "nvblox/sensors/connected_components.h"
#include "nvblox/sensors/npp_image_operations.h"
#include "nvblox/serialization/mesh_serializer.hpp"
#include "nvblox/tests/utils.h"

CTEST failing

Issue Description

Summary

One of the benchmark tests, specifically test_default_stream_utilization, has failed with the following error:

319/319 Test #319: test_default_stream_utilization ....................................................................................***Failed    0.01 sec
99% tests passed, 1 tests failed out of 319
Total Test time (real) =  78.10 sec
 executables/benchmark
...
Aborted (core dumped)

Details

  • Test Result: Failed
  • Failure Message:
    F0122 11:15:11.873811 52346 utils.cpp:97] Check failed: io::readFromPng(kPath, mask)
    *** Check failure stack trace: ***
        @     0x7ff4fb2c11c3  google::LogMessage::Fail()
        @     0x7ff4fb2c625b  google::LogMessage::SendToLog()
        @     0x7ff4fb2c0ebf  google::LogMessage::Flush()
        @     0x7ff4fb2c16ef  google::LogMessageFatal::~LogMessageFatal()
        @     0x7ff4fb459baa  nvblox::test_utils::createMaskImage()
        @     0x560cf7b6cfe7  nvblox::benchmarkRemoveSmallConnectedComponents()
        @     0x7ff4fb3ec72f  benchmark::internal::BenchmarkInstance::Run()
        @     0x7ff4fb3d8f59  (unknown)
        @     0x7ff4fb3d981e  benchmark::internal::RunBenchmark()
        @     0x7ff4fb3f17cc  benchmark::RunSpecifiedBenchmarks()
        @     0x560cf7b6c2ba  main
        @     0x7ff4fa232083  __libc_start_main
        @     0x560cf7b6c8ae  _start
    

Environment

  • Date and Time: 2024-01-22 11:15:03
  • CPU Information:
    Run on (16 X 3700 MHz CPU s)
    CPU Caches:
      L1 Data 32K (x8)
      L1 Instruction 64K (x8)
      L2 Unified 512K (x8)
      L3 Unified 8192K (x2)
    Load Average: 0.49, 0.36, 0.75
    ***WARNING*** CPU scaling is enabled, the benchmark real time measurements may be noisy and will incur extra overhead.
    
  • Benchmark Results:
    ----------------------------------------------------------------------------------------
    Benchmark                                              Time             CPU   Iterations
    ----------------------------------------------------------------------------------------
    benchmarkAll                                        2.27 ms         2.27 ms          257
    benchmarkAll/iterations:100                         2.56 ms         2.56 ms          100
    benchmarkIntegrateDepth                            0.152 ms        0.152 ms         4457
    benchmarkIntegrateColor                            0.284 ms        0.284 ms         2447
    benchmarkUpdateMesh                                 1.14 ms         1.14 ms          594
    benchmarkUpdateEsdf                                0.499 ms        0.499 ms         1349
    benchmarkSerializeMesh                              5.13 ms         5.12 ms          136
    

Steps to Reproduce

  1. Run the test suite with the specified configuration.
  2. Observe the failure in the test_default_stream_utilization test.

Expected Behavior

The test_default_stream_utilization test should pass without errors.

Additional Information

  • Ensure that the file path specified in io::readFromPng(kPath, mask) is correct and the image file exists.
  • Check for any recent code changes that may have introduced this issue.
  • Add logging or debugging statements to gather more information about the failure.

Run nvblox on public LiDAR datasets

Hey authors, thanks for your excellent work! I want to run nvblox on public LiDAR datasets like nuScenes, KITTI, Waymo, etc. Could you please provide any tutorials or suggestions?

Best
Ziliang

Generating ESDF map problem

I am getting "Vector too big to sort. Falling back to thrust." when generating the ESDF map (see the attached map_type screenshot).
May I ask what is causing this? Also, how do I change the map type to dynamic or human detection, and what are the differences among these three map types (including static)?

Underestimation of distance to closest obstacles when there are unobserved grids in an ESDF map

The ESDF value at a given grid cell indicates the Euclidean distance to the nearest observed obstacle. If, say, there is an object that is entirely textureless and therefore unobserved in the depth maps, the cells inside this obstacle will not be allocated, and the cells outside and around it will tend to underestimate the true distance to an actual obstacle.

Is there an easy way to deal with this issue? For example, on top of the Euclidean signed distance to the nearest observed obstacle, is there also a way to obtain the distance to the nearest unobserved cell?

Visualization and the usage of the ESDF map

Thanks for the great work!

I have tested nvblox on my own computer and modified the code to support two features:

  1. mapping using an Ouster-128 LiDAR
  2. implementation of different weight averaging methods.

If you are interested, I can open a pull request. Please also remind users to set, for example, -mesh_frame_subsampling=500 to avoid memory issues on large-scale datasets.

[mesh screenshot: 20220225_building_day_mesh_img]

I also have a question about the ESDF map: could you recommend any tools to visualize it as shown in https://github.com/nvidia-isaac/nvblox/blob/public/docs/images/nvblox_slice.gif?
Thanks.
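
One low-tech option in the meantime (a hedged sketch with a placeholder distance query, not an nvblox tool): sample the ESDF on a horizontal slice, dump "x y z distance" rows, and color the points by the distance scalar in CloudCompare, Open3D, or rviz, which is essentially what the linked nvblox_slice.gif shows.

#include <cstdio>
#include <functional>

// Placeholder point-wise distance lookup into the ESDF (e.g. an interpolated
// or nearest-voxel read); returns false for unobserved space.
using DistanceQuery = std::function<bool(float x, float y, float z, float*)>;

// Dump a horizontal ESDF slice as "x y z distance" rows for external viewers.
void writeEsdfSlice(const DistanceQuery& distance_at, float z_slice_m,
                    float min_xy_m, float max_xy_m, float step_m,
                    const char* path) {
  std::FILE* file = std::fopen(path, "w");
  if (file == nullptr) return;
  for (float x = min_xy_m; x <= max_xy_m; x += step_m) {
    for (float y = min_xy_m; y <= max_xy_m; y += step_m) {
      float distance = 0.f;
      if (distance_at(x, y, z_slice_m, &distance)) {
        std::fprintf(file, "%f %f %f %f\n", x, y, z_slice_m, distance);
      }
    }
  }
  std::fclose(file);
}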

Degenerated faces

When I open mesh.ply in MeshLab, it shows "mesh contains 0 vertices with NAN coords and 82 degenerated faces". What does this mean? And does the mesh share the same coordinate system as the input poses?

Citation

Hi,

I would like to cite this work in an upcoming publication. Do you have a preferred format?

Thank you,
Dev

Build error with CUDA 12.1

[build] [ 5%] Built target ext_eigen
[build] [ 5%] Building CXX object _deps/ext_stdgpu-build/src/stdgpu/CMakeFiles/stdgpu.dir/impl/iterator.cpp.o
[build] [ 6%] Building CXX object _deps/ext_stdgpu-build/src/stdgpu/CMakeFiles/stdgpu.dir/impl/memory.cpp.o
[build] [ 6%] Building CXX object _deps/ext_stdgpu-build/src/stdgpu/CMakeFiles/stdgpu.dir/impl/limits.cpp.o
[build] [ 7%] Building CXX object _deps/ext_stdgpu-build/src/stdgpu/CMakeFiles/stdgpu.dir/cuda/impl/memory.cpp.o
[build] [ 7%] Linking CXX static library libstdgpu.a
[build] [ 7%] Built target stdgpu
[build] [ 8%] Building CUDA object CMakeFiles/nvblox_gpu_hash.dir/src/core/error_check.cu.o
[build] [ 8%] Building CXX object CMakeFiles/nvblox_gpu_hash.dir/src/utils/timing.cpp.o
[build] [ 9%] Building CXX object CMakeFiles/nvblox_gpu_hash.dir/src/utils/nvtx_ranges.cpp.o
[build] [ 10%] Building CUDA object CMakeFiles/nvblox_gpu_hash.dir/src/gpu_hash/gpu_layer_view.cu.o
[build] [ 10%] Building CUDA object CMakeFiles/nvblox_gpu_hash.dir/src/gpu_hash/gpu_set.cu.o
[build] [ 11%] Linking CXX static library libnvblox_gpu_hash.a
[build] [ 11%] Built target nvblox_gpu_hash
[build] [ 11%] Building CUDA object CMakeFiles/nvblox_lib.dir/src/core/warmup.cu.o
[build] [ 12%] Building CUDA object CMakeFiles/nvblox_lib.dir/src/core/error_check.cu.o
[build] [ 12%] Building CUDA object CMakeFiles/nvblox_lib.dir/src/map/blox.cu.o
[build] [ 13%] Building CUDA object CMakeFiles/nvblox_lib.dir/src/map/layer.cu.o
[build] [ 14%] Building CXX object CMakeFiles/nvblox_lib.dir/src/sensors/camera.cpp.o
[build] [ 14%] Building CXX object CMakeFiles/nvblox_lib.dir/src/sensors/color.cpp.o
[build] [ 15%] Building CUDA object CMakeFiles/nvblox_lib.dir/src/sensors/pointcloud.cu.o
[build] [ 15%] Building CUDA object CMakeFiles/nvblox_lib.dir/src/sensors/image.cu.o
[build] [ 15%] Building CXX object CMakeFiles/nvblox_lib.dir/src/geometry/bounding_spheres.cpp.o
[build] [ 16%] Building CXX object CMakeFiles/nvblox_lib.dir/src/mapper/mapper.cpp.o
[build] [ 17%] Building CUDA object CMakeFiles/nvblox_lib.dir/src/integrators/view_calculator.cu.o
[build] [ 18%] Building CXX object CMakeFiles/nvblox_lib.dir/src/geometry/bounding_boxes.cpp.o
[build] [ 19%] Building CUDA object CMakeFiles/nvblox_lib.dir/src/integrators/occupancy_decay_integrator.cu.o
[build] [ 19%] Building CUDA object CMakeFiles/nvblox_lib.dir/src/integrators/projective_occupancy_integrator.cu.o
[build] [ 19%] Building CXX object CMakeFiles/nvblox_lib.dir/src/mapper/multi_mapper.cpp.o
[build] [ 20%] Building CUDA object CMakeFiles/nvblox_lib.dir/src/integrators/esdf_integrator.cu.o
[build] [ 20%] Building CUDA object CMakeFiles/nvblox_lib.dir/src/rays/sphere_tracer.cu.o
[build] [ 22%] Building CUDA object CMakeFiles/nvblox_lib.dir/src/integrators/projective_tsdf_integrator.cu.o
[build] [ 22%] Building CXX object CMakeFiles/nvblox_lib.dir/src/interpolation/interpolation_3d.cpp.o
[build] [ 22%] Building CUDA object CMakeFiles/nvblox_lib.dir/src/integrators/projective_color_integrator.cu.o
[build] [ 23%] Building CXX object CMakeFiles/nvblox_lib.dir/src/io/mesh_io.cpp.o
[build] [ 24%] Building CXX object CMakeFiles/nvblox_lib.dir/src/io/layer_cake_io.cpp.o
[build] [ 24%] Building CXX object CMakeFiles/nvblox_lib.dir/src/io/ply_writer.cpp.o
[build] [ 24%] Building CXX object CMakeFiles/nvblox_lib.dir/src/io/pointcloud_io.cpp.o
[build] [ 25%] Building CUDA object CMakeFiles/nvblox_lib.dir/src/mesh/mesh_integrator_color.cu.o
[build] [ 25%] Building CXX object CMakeFiles/nvblox_lib.dir/src/io/image_io.cpp.o
[build] [ 25%] Building CUDA object CMakeFiles/nvblox_lib.dir/src/mesh/marching_cubes.cu.o
[build] [ 26%] Building CUDA object CMakeFiles/nvblox_lib.dir/src/mesh/mesh_block.cu.o
[build] [ 27%] Building CUDA object CMakeFiles/nvblox_lib.dir/src/mesh/mesh_integrator.cu.o
[build] [ 28%] Building CXX object CMakeFiles/nvblox_lib.dir/src/mesh/mesh.cpp.o
[build] [ 29%] Building CXX object CMakeFiles/nvblox_lib.dir/src/primitives/scene.cpp.o
[build] [ 29%] Building CXX object CMakeFiles/nvblox_lib.dir/src/primitives/primitives.cpp.o
[build] [ 29%] Building CXX object CMakeFiles/nvblox_lib.dir/src/utils/nvtx_ranges.cpp.o
[build] [ 30%] Building CXX object CMakeFiles/nvblox_lib.dir/src/utils/timing.cpp.o
[build] [ 30%] Building CXX object CMakeFiles/nvblox_lib.dir/src/serialization/serializer.cpp.o
[build] [ 31%] Building CXX object CMakeFiles/nvblox_lib.dir/src/serialization/sqlite_database.cpp.o
[build] [ 32%] Building CXX object CMakeFiles/nvblox_lib.dir/src/serialization/layer_type_register.cpp.o
[build] [ 32%] Building CUDA object CMakeFiles/nvblox_lib.dir/src/semantics/image_masker.cu.o
[build] [ 33%] Building CUDA object CMakeFiles/nvblox_lib.dir/src/semantics/image_projector.cu.o
[build] /mnt/data1/nvblox/nvblox/src/integrators/esdf_integrator.cu(1002): error: qualified name is not allowed
[build] typedef cub::BlockRadixSort<uint64_t, kBlockThreads, kItemsPerThread,
[build] ^
[build]
[build] /mnt/data1/nvblox/nvblox/src/integrators/esdf_integrator.cu(1002): error: expected a ";"
[build] typedef cub::BlockRadixSort<uint64_t, kBlockThreads, kItemsPerThread,
[build] ^
[build]
[build] /mnt/data1/nvblox/nvblox/src/integrators/esdf_integrator.cu(1011): error: a class or namespace qualified name is required
[build] typename BlockRadixSortT::TempStorage sort;
[build] ^
[build]
[build] /mnt/data1//nvblox/nvblox/src/integrators/esdf_integrator.cu(1011): error: global-scope qualifier (leading "::") is not allowed
[build] typename BlockRadixSortT::TempStorage sort;
[build] ^
[build]
[build] /mnt/data1//nvblox/nvblox/src/integrators/esdf_integrator.cu(1011): error: expected a ";"
[build] typename BlockRadixSortT::TempStorage sort;
[build] ^
[build]
[build] /mnt/data1//nvblox/nvblox/src/integrators/esdf_integrator.cu(1090): error: namespace "thrust" has no member "sort"
[build] thrust::sort(thrust::device, block_indices->begin(), block_indices->end(),
[build] ^
[build]
[build] /mnt/data1//nvblox/nvblox/src/integrators/esdf_integrator.cu(1095): error: namespace "thrust" has no member "unique"
[build] auto iterator = thrust::unique(thrust::device, block_indices->begin(),
[build] ^
[build]
[build] 7 errors detected in the compilation of "/mnt/data1//nvblox/nvblox/src/integrators/esdf_integrator.cu".

Feature Request: Meshing on CPU and/or Multistep processing on GPU

Meshing is performed on the GPU irrespective of the memory type selected for the mapper.

Please modify the code to perform it on the CPU or the GPU depending on the mapper's memory type.
I am running out of GPU memory on my system quite often.

If possible, could you also add an option to perform the computation in multiple steps in the meshBlocksGPU function, and to disable the computation of normals?
This would allow users to avoid GPU out-of-memory errors.

About the masked map

Hi, thanks for sharing this great project.
I looked at the multi-mapper, which contains two mapper instances: the unmasked and the masked map.
What is the use of the masked map?

'gflags' has not been declared compilation error.

When trying to build using make I get the following error:

[ 83%] Building CXX object experiments/CMakeFiles/fuse_3dmatch.dir/src/fuse_3dmatch.cpp.o
/nvblox/nvblox/experiments/src/fuse_3dmatch.cpp: In function ‘int main(int, char**)’:
/nvblox/nvblox/experiments/src/fuse_3dmatch.cpp:44:3: error: ‘gflags’ has not been declared
   44 |   gflags::ParseCommandLineFlags(&argc, &argv, true);
      |   ^~~~~~
make[2]: *** [experiments/CMakeFiles/fuse_3dmatch.dir/build.make:63: experiments/CMakeFiles/fuse_3dmatch.dir/src/fuse_3dmatch.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:1375: experiments/CMakeFiles/fuse_3dmatch.dir/all] Error 2
make: *** [Makefile:130: all] Error 2

It seems that gflags switched its namespace from google to gflags in version 2.1. On my system I was building in a terminal that had sourced the catkin workspace in which I will be using this library; a different package there seems to use an older version of gflags, which clashes with my system install. Building in a terminal that does not source this workspace works fine.

Caffe, for example, also uses gflags, and this issue of builds failing with older gflags versions is known there too:

BVLC/caffe#2597

They get around it by doing this:

#ifndef GFLAGS_GFLAGS_H_
namespace gflags = google;
#endif  // GFLAGS_GFLAGS_H_

See here

A similar suggestion is made on stackoverflow here

The Caffe fix actually does not work for me: they use GFLAGS_GFLAGS_H_ to detect which version of gflags is installed, but because I also have the new gflags installed, it incorrectly determines the version, since it sees the system GFLAGS_GFLAGS_H_ while still using the older gflags.h header from the catkin workspace include folder. (I think, at least; it really is quite the mess.)

The fix from the stackoverflow link does work for me (also for the non-catkin-sourced environment).

If someone else happens to run into this highly specific issue of things clashing, here is a branch with the fix applied.
