
drethage / fully-convolutional-point-network

Fully-Convolutional Point Networks for Large-Scale Point Clouds

License: MIT License

Languages: Python 71.53%, Shell 1.61%, C++ 11.46%, CUDA 15.40%
Topics: 3d, captioning, computer-vision, deep-learning, deep-neural-networks, meshes, point-cloud, point-clouds, semantic-segmentation


fully-convolutional-point-network's Issues

InvalidArgumentError: No OpKernel was registered to support Op 'QueryBallPoint' with these attrs

Extracted 248 samples from 12 items in train set
Extracted 837 samples from 21 items in val set

2019-03-06 16:24:42.091695: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA

Parameters in model: 10076067

Layers in model: (Name - Shape - # weights)
abstraction/points_to_15cm/simplified_pointnet/1x1_conv_1/weights:0 - (1, 1, 4, 64) - 256
abstraction/points_to_15cm/simplified_pointnet/1x1_conv_2/weights:0 - (1, 1, 64, 96) - 6144
abstraction/points_to_15cm/skip_15cm/1x1x1_conv/weights:0 - (1, 1, 1, 96, 256) - 24576
abstraction/points_to_15cm/skip_45cm/3x3x3_conv/weights:0 - (3, 3, 3, 96, 256) - 663552
abstraction/15cm_to_30cm/3d_convolution/2x2x2_conv/weights:0 - (2, 2, 2, 96, 128) - 98304
abstraction/15cm_to_30cm/3d_convolution/1x1_conv_1/weights:0 - (1, 1, 1, 128, 192) - 24576
abstraction/15cm_to_30cm/skip_30cm/1x1x1_conv/weights:0 - (1, 1, 1, 192, 256) - 49152
abstraction/15cm_to_30cm/skip_90cm/3x3x3_conv/weights:0 - (3, 3, 3, 192, 256) - 1327104
abstraction/30cm_to_60cm/3d_convolution/2x2x2_conv/weights:0 - (2, 2, 2, 192, 256) - 393216
abstraction/30cm_to_60cm/3d_convolution/1x1x1_conv_1/weights:0 - (1, 1, 1, 256, 512) - 131072
abstraction/30cm_to_60cm/skip_60cm/1x1x1_conv_1/weights:0 - (1, 1, 1, 512, 256) - 131072
abstraction/30cm_to_60cm/skip_180cm/3x3x3_conv/weights:0 - (3, 3, 3, 512, 256) - 3538944
spatial_pool/skip_spatial_pool/1x1x1_conv/weights:0 - (1, 1, 1, 512, 256) - 131072
upsampling/60cm_to_30cm/2x2x2_deconv/weights:0 - (2, 2, 2, 256, 768) - 1572864
upsampling/60cm_to_30cm/1x1x1_conv_1/weights:0 - (1, 1, 1, 256, 256) - 65536
upsampling/60cm_to_30cm/1x1x1_conv_2/weights:0 - (1, 1, 1, 256, 192) - 49152
upsampling/30cm_to_15cm/2x2x2_deconv/weights:0 - (2, 2, 2, 192, 704) - 1081344
upsampling/30cm_to_15cm/1x1x1_conv_1/weights:0 - (1, 1, 1, 192, 128) - 24576
upsampling/30cm_to_15cm/1x1x1_conv_3/weights:0 - (1, 1, 1, 384, 128) - 49152
upsampling/15cm_to_5cm/final_deconv/weights:0 - (3, 3, 3, 64, 384) - 663552
upsampling/15cm_to_5cm/final_conv/weights:0 - (3, 3, 3, 64, 22) - 38016
Traceback (most recent call last):
File "/cluster/home/haliang/.local/lib64/python3.6/site-packages/tensorflow/python/client/session.py", line 1334, in _do_call
return fn(*args)
File "/cluster/home/haliang/.local/lib64/python3.6/site-packages/tensorflow/python/client/session.py", line 1317, in _run_fn
self._extend_graph()
File "/cluster/home/haliang/.local/lib64/python3.6/site-packages/tensorflow/python/client/session.py", line 1352, in _extend_graph
tf_session.ExtendSession(self._session)
tensorflow.python.framework.errors_impl.InvalidArgumentError: No OpKernel was registered to support Op 'QueryBallPoint' with these attrs. Registered devices: [CPU,XLA_CPU], Registered kernels:
device='GPU'

 [[{{node abstraction/points_to_15cm/simplified_pointnet/QueryBallPoint}} = QueryBallPoint[nsample=64, radius=0.129903808, _device="/device:GPU:0"](fifo_queue_DequeueMany, Reshape_1)]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "main.py", line 31, in
main()
File "main.py", line 26, in main
training.train(cla.config)
File "/cluster/home/haliang/fully-convolutional-point-network/training.py", line 245, in train
sess.run([init_g, init_l], {is_training_pl: True})
File "/cluster/home/haliang/.local/lib64/python3.6/site-packages/tensorflow/python/client/session.py", line 929, in run
run_metadata_ptr)
File "/cluster/home/haliang/.local/lib64/python3.6/site-packages/tensorflow/python/client/session.py", line 1152, in _run
feed_dict_tensor, options, run_metadata)
File "/cluster/home/haliang/.local/lib64/python3.6/site-packages/tensorflow/python/client/session.py", line 1328, in _do_run
run_metadata)
File "/cluster/home/haliang/.local/lib64/python3.6/site-packages/tensorflow/python/client/session.py", line 1348, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: No OpKernel was registered to support Op 'QueryBallPoint' with these attrs. Registered devices: [CPU,XLA_CPU], Registered kernels:
device='GPU'

 [[node abstraction/points_to_15cm/simplified_pointnet/QueryBallPoint (defined at <string>:55)  = QueryBallPoint[nsample=64, radius=0.129903808, _device="/device:GPU:0"](fifo_queue_DequeueMany, Reshape_1)]]

Caused by op 'abstraction/points_to_15cm/simplified_pointnet/QueryBallPoint', defined at:
File "main.py", line 31, in
main()
File "main.py", line 26, in main
training.train(cla.config)
File "/cluster/home/haliang/fully-convolutional-point-network/training.py", line 209, in train
queue_batch_placeholders['input_features_pl'], is_training_pl, dataset.get_num_learnable_classes(), batch_normalization_decay)
File "/cluster/home/haliang/fully-convolutional-point-network/fcpn.py", line 217, in build_model
grouped_points_xyz_and_features = self.radius_search_and_group(pointnet_locations, self.get_pointnet_radius(self._config['model']['pointnet']['spacing']), self._config['model']['pointnet']['neighbors'], points_xyz, points_features)
File "/cluster/home/haliang/fully-convolutional-point-network/fcpn.py", line 174, in radius_search_and_group
point_indices, _ = tf_grouping.query_ball_point(radius, num_neighbors, points_xyz, centroids_xyz)
File "tf_grouping/tf_grouping.py", line 22, in query_ball_point
return grouping_module.query_ball_point(xyz1, xyz2, radius, nsample)
File "", line 55, in query_ball_point
File "/cluster/home/haliang/.local/lib64/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/cluster/home/haliang/.local/lib64/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 488, in new_func
return func(*args, **kwargs)
File "/cluster/home/haliang/.local/lib64/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3274, in create_op
op_def=op_def)
File "/cluster/home/haliang/.local/lib64/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1770, in init
self._traceback = tf_stack.extract_stack()

InvalidArgumentError (see above for traceback): No OpKernel was registered to support Op 'QueryBallPoint' with these attrs. Registered devices: [CPU,XLA_CPU], Registered kernels:
device='GPU'

 [[node abstraction/points_to_15cm/simplified_pointnet/QueryBallPoint (defined at <string>:55)  = QueryBallPoint[nsample=64, radius=0.129903808, _device="/device:GPU:0"](fifo_queue_DequeueMany, Reshape_1)]]
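
Since the error lists only [CPU, XLA_CPU] as registered devices, while the custom QueryBallPoint op from tf_grouping is only registered with a GPU kernel, the first thing to check is whether TensorFlow can see a GPU at all. A minimal sketch, assuming TensorFlow 1.x as in the traceback above:

import tensorflow as tf
from tensorflow.python.client import device_lib

# True only if a CUDA-enabled GPU is visible to this TensorFlow build
print('GPU available:', tf.test.is_gpu_available(cuda_only=True))

# List every device TensorFlow has registered; a working setup should
# include at least one entry with device_type == 'GPU'
for device in device_lib.list_local_devices():
    print(device.device_type, device.name)

If no GPU device is listed, the likely cause is a CPU-only TensorFlow build or a cluster job scheduled without a GPU, rather than the tf_grouping code itself; the error message shows the op has no CPU kernel to fall back on.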

Spatial pool weights. Some clarifications needed.

Hi. I'm trying to understand this function:

def get_spatial_pool_weighting(sphere_radius, top_level_centroid_locations):

I'm not clear about what it does.

Below I go through the algorithm step by step.

def get_spatial_pool_weighting(sphere_radius, top_level_centroid_locations):
        """ Compute a spatial weighting for every cell.
            Args:
            sphere_radius: float, the weight of neighboring cells will be greatest on this sphere's surface
            top_level_centroid_locations: tf.tensor, locations of cells in metric space
            Returns: tf.tensor
        """
        top_level_centroid_locations_repeated = tf.tile(tf.expand_dims(top_level_centroid_locations, axis=1), [1, top_level_centroid_locations.get_shape()[1].value, 1, 1]) #row-wise repeated sample locations

Since top_level_centroid_locations has shape (batch_size, num_points, 3), I expect top_level_centroid_locations_repeated to have shape (batch_size, num_points, num_points, 3).

        difference = tf.subtract(top_level_centroid_locations_repeated, tf.transpose(top_level_centroid_locations_repeated, [0, 2, 1, 3]))
        # Euclidean distance from every centroid to every other centroid
        distance = tf.norm(difference, axis=3, ord=2, keepdims=True)

The distance shape should then be (batch_size, num_points, num_points, 1). Is that correct?

        # Clipped distance in [sphere_radius - 1, sphere_radius + 1] range
        clipped_distance = tf.clip_by_value(distance, sphere_radius - 1, sphere_radius + 1)
        # Neighboring voxels weighting based on (cos(3(x-1.5)) + 1) / 2, max weighting on voxels sphere_radius away from a given voxel
        cos_distance_to_sphere_surface = (tf.cos(3 * (clipped_distance - sphere_radius)) + 1) / 2
        # Normalized weighting
        return cos_distance_to_sphere_surface / tf.reduce_sum(cos_distance_to_sphere_surface, axis=2, keepdims=True)

The output should still have shape (batch_size, num_points, num_points, 1).

At the line

skip_spatial_pool = features * spatial_pooling_weights

features should have shape (batch_size, num_points, num_points, 512), so features * spatial_pooling_weights should produce an output of shape (batch_size, num_points, num_points, 512) thanks to broadcasting. Is that right?
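
To sanity-check these shapes, here is a small NumPy sketch (my own re-implementation of the steps above, not code from the repository) that reproduces the tiling, distance, and weighting steps and prints the resulting shapes:

import numpy as np

batch_size, num_points, sphere_radius = 2, 5, 1.5
locations = np.random.rand(batch_size, num_points, 3)  # (B, N, 3)

# Row-wise repeated sample locations: (B, N, N, 3)
repeated = np.tile(locations[:, np.newaxis, :, :], (1, num_points, 1, 1))

# Pairwise differences and Euclidean distances: (B, N, N, 1)
difference = repeated - np.transpose(repeated, (0, 2, 1, 3))
distance = np.linalg.norm(difference, axis=3, keepdims=True)

# Cosine weighting peaked at sphere_radius, normalized over axis 2
clipped = np.clip(distance, sphere_radius - 1, sphere_radius + 1)
weights = (np.cos(3 * (clipped - sphere_radius)) + 1) / 2
weights = weights / weights.sum(axis=2, keepdims=True)

print(distance.shape)  # (2, 5, 5, 1)
print(weights.shape)   # (2, 5, 5, 1)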

Thank you in advance.

alternate dataset

@drethage thanks for open-sourcing this wonderful work. I have a few queries:
Q1: Have you trained the architecture on other available datasets, such as SemanticKITTI or other 3D datasets?
Q2: If not, can we follow the same training pipeline? If you have, could you please share the pre-trained models?
Q3: Can we use the currently available pre-trained model to test on a custom dataset with a lower point cloud density?

Thanks in advance

Insufficient memory

Hi, whenever I run data.prepare to extract samples from the scenes, it fails with an insufficient-memory error. When I reduce the number of scenes in the train_split set, it starts to work.
I'd like to know how much RAM you use for training.
I use 1 core with 14000 MB of memory, but that seems far from sufficient.

OutOfRangeError (see above for traceback): Read less bytes than requested

Hi, I am trying to run inference with the pre-trained model you provided, using one of the .ply files from the ScanNet dataset, but it keeps giving the same error.

The command I use for inference:

python main.py --mode predict --config sessions/session_0/config.json --colors datasets/scannet/metadata/colors.txt --input datasets/scene0000_00_vh_clean_2.labels.ply

The error logs are:

Loaded configuration from: sessions/session_0/config.json
Size: 8.424978, 8.743941, 3.025378, # Points: 81369
Model Receptive Field Size: 9.000000, 9.000000, 3.600000
2019-12-18 15:59:51.956596: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-12-18 15:59:52.045512: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:998] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-12-18 15:59:52.046023: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x563cab617650 executing computations on platform CUDA. Devices:
2019-12-18 15:59:52.046042: I tensorflow/compiler/xla/service/service.cc:158]   StreamExecutor device (0): GeForce GTX 1060 with Max-Q Design, Compute Capability 6.1
2019-12-18 15:59:52.065024: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2808000000 Hz
2019-12-18 15:59:52.066188: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x563cab67fdb0 executing computations on platform Host. Devices:
2019-12-18 15:59:52.066248: I tensorflow/compiler/xla/service/service.cc:158]   StreamExecutor device (0): <undefined>, <undefined>
2019-12-18 15:59:52.067183: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties: 
name: GeForce GTX 1060 with Max-Q Design major: 6 minor: 1 memoryClockRate(GHz): 1.3415
pciBusID: 0000:01:00.0
totalMemory: 5.94GiB freeMemory: 5.24GiB
2019-12-18 15:59:52.067247: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
2019-12-18 15:59:52.069467: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-12-18 15:59:52.069515: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990]      0 
2019-12-18 15:59:52.069535: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0:   N 
2019-12-18 15:59:52.070138: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 5067 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1060 with Max-Q Design, pci bus id: 0000:01:00.0, compute capability: 6.1)

WARNING: The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:
  * https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
  * https://github.com/tensorflow/addons
If you depend on functionality not listed there, please file an issue.

WARNING:tensorflow:From /home/x23/miniconda3/envs/pcs2/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
WARNING:tensorflow:From util/util.py:960: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version.
Instructions for updating:
Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.
WARNING:tensorflow:From /home/x23/miniconda3/envs/pcs2/lib/python2.7/site-packages/tensorflow/python/training/saver.py:1266: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file APIs to check for files with this prefix.
2019-12-18 15:59:54.401315: W tensorflow/core/framework/op_kernel.cc:1401] OP_REQUIRES failed at save_restore_v2_ops.cc:184 : Out of range: Read less bytes than requested
Traceback (most recent call last):
  File "main.py", line 31, in <module>
    main()
  File "main.py", line 24, in main
    inference.predict(cla.config, cla.input, cla.device, cla.colors)
  File "/home/x23/workspace_3D_pc/fully-convolutional-point-network/inference.py", line 186, in predict
    sess, placeholders, pred_op, pointnet_locations, constant_features = setup_model(model, receptive_field_size, points.shape[0], config['model']['pointnet']['spacing'], config['dataset']['num_learnable_classes'], checkpoint_path, device)
  File "/home/x23/workspace_3D_pc/fully-convolutional-point-network/inference.py", line 87, in setup_model
    tf.train.Saver().restore(sess, checkpoint_path)
  File "/home/x23/miniconda3/envs/pcs2/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 1276, in restore
    {self.saver_def.filename_tensor_name: save_path})
  File "/home/x23/miniconda3/envs/pcs2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 929, in run
    run_metadata_ptr)
  File "/home/x23/miniconda3/envs/pcs2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1152, in _run
    feed_dict_tensor, options, run_metadata)
  File "/home/x23/miniconda3/envs/pcs2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1328, in _do_run
    run_metadata)
  File "/home/x23/miniconda3/envs/pcs2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1348, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.OutOfRangeError: Read less bytes than requested
	 [[node save/RestoreV2 (defined at /home/x23/workspace_3D_pc/fully-convolutional-point-network/inference.py:87) ]]
	 [[node save/RestoreV2 (defined at /home/x23/workspace_3D_pc/fully-convolutional-point-network/inference.py:87) ]]

Caused by op u'save/RestoreV2', defined at:
  File "main.py", line 31, in <module>
    main()
  File "main.py", line 24, in main
    inference.predict(cla.config, cla.input, cla.device, cla.colors)
  File "/home/x23/workspace_3D_pc/fully-convolutional-point-network/inference.py", line 186, in predict
    sess, placeholders, pred_op, pointnet_locations, constant_features = setup_model(model, receptive_field_size, points.shape[0], config['model']['pointnet']['spacing'], config['dataset']['num_learnable_classes'], checkpoint_path, device)
  File "/home/x23/workspace_3D_pc/fully-convolutional-point-network/inference.py", line 87, in setup_model
    tf.train.Saver().restore(sess, checkpoint_path)
  File "/home/x23/miniconda3/envs/pcs2/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 832, in __init__
    self.build()
  File "/home/x23/miniconda3/envs/pcs2/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 844, in build
    self._build(self._filename, build_save=True, build_restore=True)
  File "/home/x23/miniconda3/envs/pcs2/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 881, in _build
    build_save=build_save, build_restore=build_restore)
  File "/home/x23/miniconda3/envs/pcs2/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 513, in _build_internal
    restore_sequentially, reshape)
  File "/home/x23/miniconda3/envs/pcs2/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 332, in _AddRestoreOps
    restore_sequentially)
  File "/home/x23/miniconda3/envs/pcs2/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 580, in bulk_restore
    return io_ops.restore_v2(filename_tensor, names, slices, dtypes)
  File "/home/x23/miniconda3/envs/pcs2/lib/python2.7/site-packages/tensorflow/python/ops/gen_io_ops.py", line 1572, in restore_v2
    name=name)
  File "/home/x23/miniconda3/envs/pcs2/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 788, in _apply_op_helper
    op_def=op_def)
  File "/home/x23/miniconda3/envs/pcs2/lib/python2.7/site-packages/tensorflow/python/util/deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "/home/x23/miniconda3/envs/pcs2/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 3300, in create_op
    op_def=op_def)
  File "/home/x23/miniconda3/envs/pcs2/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1801, in __init__
    self._traceback = tf_stack.extract_stack()

OutOfRangeError (see above for traceback): Read less bytes than requested
	 [[node save/RestoreV2 (defined at /home/x23/workspace_3D_pc/fully-convolutional-point-network/inference.py:87) ]]
	 [[node save/RestoreV2 (defined at /home/x23/workspace_3D_pc/fully-convolutional-point-network/inference.py:87) ]]
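
From what I understand, "Read less bytes than requested" on save/RestoreV2 usually indicates a truncated or corrupted checkpoint file (for example an incomplete download of the pre-trained model) rather than a problem with the input .ply. A minimal sketch, assuming TensorFlow 1.x and using a hypothetical checkpoint prefix (substitute the prefix your session/config actually points to), to check whether the checkpoint can be read at all:

import tensorflow as tf

# Hypothetical checkpoint prefix, without the .index / .data-00000-of-00001
# suffixes; replace with the prefix referenced by your session
checkpoint_path = 'sessions/session_0/model.ckpt'

# Listing the stored variables forces a read of the checkpoint data files
reader = tf.train.NewCheckpointReader(checkpoint_path)
for name, shape in sorted(reader.get_variable_to_shape_map().items()):
    print(name, shape)

If this fails with the same OutOfRangeError, re-downloading the pre-trained checkpoint and comparing file sizes against the release would be the next step.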

code

Hello, could you send the code to this email: [email protected]? I am especially hoping to use your research now, thank you.

Code

Your paper looks great. I was wondering when you are planning to release the code?
Cheers!
