
Large-scale_Point_Cloud_Semantic_Segmentation

Projection Fusion

This project is a point cloud semantic segmentation pipeline based on the SnapNet network, trained on the Semantic3D dataset. The pipeline first renders training images from the point cloud and then trains image semantic segmentation networks on those images. When new point cloud data is processed, RGB images are automatically rendered from the point cloud, labeled by the image semantic segmentation network, and the image segmentation results are back-projected onto the point cloud.
The following GIF shows the point cloud segmentation result produced by the network.



SnapNet & Semantic 3D dataset

SnapNet

SnapNet's performance on the Semantic3D benchmark is second only to SPGraph, making it the best TensorFlow-based network for point cloud segmentation on that benchmark. The SnapNet authors later released a robotics version, SnapNet++, for object detection.


Semantic 3D dataset

The dataset provides training and test data as compressed ASCII text files with the fields {x, y, z, intensity, r, g, b}. The ground truth is provided as single-column ASCII files in which the row index of each class label corresponds to a point in the point cloud file. 7zip is used for compression and is available for Windows and Linux.
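For example, a scene and its ground truth can be loaded with NumPy. This is a minimal sketch; the file names below are a Semantic3D training scene used only for illustration:

import numpy as np

# Point cloud: one point per row with columns x y z intensity r g b
points = np.loadtxt("bildstein_station1_xyz_intensity_rgb.txt")
# Ground truth: one class label per row, aligned with the rows of the point file
labels = np.loadtxt("bildstein_station1_xyz_intensity_rgb.labels", dtype=int)

print(points.shape, labels.shape)   # (N, 7) and (N,)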


Place the downloaded point cloud files and the corresponding label files in the train folder.

Operating Environment

C++: 

  • Cython 0.29.1
  • PCL 1.8
  • OpenMP
  • NanoFlann: nanoflann.hpp should be included in the include directory
  • Eigen: Eigen should also be included in the include directory

Python:

  • TensorFlow
  • TQDM, SciPy, NumPy: installable with pip (see the command below)
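For reference, the Python dependencies can be installed with pip. This is a suggested command, not taken from the original README; versions are not pinned, and this code base may require an older TensorFlow 1.x release:

pip install tensorflow tqdm scipy numpy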

Building

cd pointcloud_tools
python setup.py install --home="."


Configuration file

{
    "train_input_dir":"path_to_directory_TRAIN",
    "test_input_dir":"path_to_directory_TEST",
    "train_results_root_dir":"where_to_put_training_products",
    "test_results_root_dir":"where_to_put_test_products",
    "images_dir":"images",

    "training" : true,

    "imsize":224,
    "voxel_size":0.1,

    "train_cam_number":10,
    "train_create_mesh" : true,
    "train_create_views" : true,
    "train_create_images" : true,

    "test_cam_number":10,
    "test_create_mesh" : true,
    "test_create_views" : true,
    "test_create_images" : true,

    "vgg_weight_init":"path_to_vgg_weights.npy",
    "batch_size" : 24,
    "learning_rate" : 1e-4,
    "epoch_nbr" : 100,
    "label_nbr" : 10,
    "input_ch" : 3,

    "train_rgb" : true,
    "train_composite" : true,
    "train_fusion" : true,

    "saver_directory_rgb" : "path_to_rgb_model_directory",
    "saver_directory_composite" : "path_to_composite_model_directory",
    "saver_directory_fusion" : "path_to_fusion_model_directory",
    "output_directory":"path_to_output_product_directory"
}
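The configuration must be valid JSON, since the training and testing scripts read it at startup. Below is a minimal sketch of loading it, assuming Python's standard json module; the scripts' actual parsing code may differ:

import json

# Read the pipeline configuration (paths, image size, voxel size, ...)
with open("config.json") as f:
    config = json.load(f)

print(config["train_input_dir"], config["imsize"], config["voxel_size"])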


Processing training data

  • The point cloud decimation
  • view and image generation
    The images generated by the image generation script, together with the corresponding camera location files, are placed in the './' folder. When the training script is run, the train_save folder is created; it contains the label point cloud and the RGB point cloud, as well as the 2D images rendered from these point clouds, which are used for training.
python3 sem3d_gen_images.py --config config.json 


Train the models (rgb, composite and fusion) from scratch

Three files are generated during training to save the training results: 'rgb_save', 'composite_save', and 'fusion_save'.

python3 sem3d_train_tf.py --config config.json


Semantic segmentation on decimated clouds

  • semantic predictions on images
  • back-projection onto the decimated clouds
    This script generates labeled point cloud data (PLY files).
python3 sem3d_test_backproj.py --config config.json
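To inspect the resulting labeled PLY, a point cloud viewer such as CloudCompare works, or the file can be opened in Python. This is a sketch assuming the plyfile package and a hypothetical output file name; the per-vertex field names depend on what the script writes:

from plyfile import PlyData

# Read the labeled point cloud written by the back-projection step
ply = PlyData.read("results/labeled_cloud.ply")   # hypothetical path
vertex = ply["vertex"].data
print(vertex.dtype.names)   # per-point fields (e.g. x, y, z and a label field)
print(len(vertex))          # number of points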


Assign a label to each original point

  • generate files in the Semantic 3D format
  • assign a label to each point of the original point cloud
    This script labels the point in each row of the original point cloud file with its predicted class ID.
python3 sem3d_test_to_sem3D_labels.py --config config.json
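Because the generated label file lines up row-by-row with the original point cloud file, the two can be combined directly. A sketch with hypothetical file names; the real paths come from the configured output directory:

import numpy as np

points = np.loadtxt("original_cloud.txt")                  # x y z intensity r g b
labels = np.loadtxt("original_cloud.labels", dtype=int)    # one predicted label per row

assert len(points) == len(labels)
labeled = np.column_stack([points[:, :3], labels])          # x y z label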


Using the network for inference on new data

Preprocessing point cloud data

First, use the provided las.py script to convert your data into a format the model can read. Modify the point cloud path inside the script before running it.

python las.py
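For orientation, the sketch below shows what such a LAS-to-text conversion might look like, assuming laspy 2.x and an input file whose point format stores RGB; the input and output paths are hypothetical, and the repository's las.py may do this differently:

import numpy as np
import laspy

las = laspy.read("cloud.las")                                  # hypothetical input path
xyz = np.vstack([np.asarray(las.x), np.asarray(las.y), np.asarray(las.z)]).T
intensity = np.asarray(las.intensity)
# LAS stores 16-bit colors; shift down to 8-bit like the Semantic3D files
rgb = np.vstack([np.asarray(las.red), np.asarray(las.green), np.asarray(las.blue)]).T >> 8

data = np.column_stack([xyz, intensity, rgb])
np.savetxt("cloud.txt", data, fmt="%.3f %.3f %.3f %d %d %d %d")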


Inference

Set "training" : false in the config.json file.
Add the names of the new data to the sem3d_gen_images.py, sem3d_test_backproj.py, and sem3d_test_to_sem3D_labels.py scripts.
Run the script to generate images from the point cloud.

python3 sem3d_gen_images.py --config config.json 

Run the script to predict and output the PLY file.

python3 sem3d_test_backproj.py --config config.json

Run the script to label each point in the original point cloud.

python3 sem3d_test_to_sem3D_labels.py --config config.json
