
srama2512 / poni

77 stars · 3 watching · 10 forks · 9.79 MB

PONI: Potential Functions for ObjectGoal Navigation with Interaction-free Learning. CVPR 2022 (Oral).

Home Page: https://vision.cs.utexas.edu/projects/poni/

License: MIT License

Shell 2.47% Jupyter Notebook 0.54% Python 97.00%
cvpr2022 objectnav pytorch-implementation scene-understanding visual-navigation

poni's People

Contributors

srama2512

poni's Issues

Errors when running scripts/create_poni_dataset.py --split "train"

Hello, thank you for your great work.
When I run the command scripts/create_poni_dataset.py --split "train" to generate the 400k training samples on Gibson, I hit the following problem and have no idea what causes it:

multiprocessing.pool.RemoteTraceback:                                                                                                                                 
"""                                                                                                                                                                   
Traceback (most recent call last):                                                                                                                                    
  File "/home/wdw/anaconda3/envs/poni/lib/python3.8/multiprocessing/pool.py", line 125, in worker                                                                     
    result = (True, func(*args, **kwds))                                                                                                                              
  File "scripts/create_poni_dataset.py", line 50, in precompute_dataset_for_map                                                                                       
    input, label = dataset.get_item_by_name(name)                                                                                                                     
  File "/home/wdw/Projects/PONI/poni/dataset.py", line 153, in get_item_by_name                                                                                       
    return self[idx]                                                                                                                                                  
  File "/home/wdw/Projects/PONI/poni/dataset.py", line 137, in __getitem__                                                                                            
    input, label = self.create_spath_based_input_output_pairs(                                                                                                        
  File "/home/wdw/Projects/PONI/poni/dataset.py", line 286, in create_spath_based_input_output_pairs                                                                  
    ) = self.transform_input_output_pairs(                                                                                                                            
  File "/home/wdw/Projects/PONI/poni/dataset.py", line 308, in transform_input_output_pairs                                                                           
    max_dist = out_fmm_dists[out_fmm_dists != math.inf].max() + 1                                                                                                     
IndexError: index 69 is out of bounds for dimension 0 with size 17                                                                                                    
"""                                                                                                                                                                   
                                                                                                                                                                      
The above exception was the direct cause of the following exception:                                                                                                  
                                                                                                                                                                      
Traceback (most recent call last):                                                                                                                                    
  File "scripts/create_poni_dataset.py", line 134, in <module>                                                                                                        
    precompute_dataset(args)                                                                                                                                          
  File "scripts/create_poni_dataset.py", line 119, in precompute_dataset                                                                                              
    for _ in pool.imap_unordered(precompute_dataset_for_map, inputs):                                                                                                 
  File "/home/wdw/anaconda3/envs/poni/lib/python3.8/multiprocessing/pool.py", line 868, in next                                                                       
    raise value                                                                                                                                                       
IndexError: index 69 is out of bounds for dimension 0 with size 17
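A minimal debugging sketch, assuming the crash comes from taking the max over the finite entries of out_fmm_dists: if a sample yields no finite distances (or the boolean mask does not match the tensor), the reduction fails, so guarding the selection makes the failure explicit. The tensor below is a toy stand-in, not PONI's data.

import math

import torch

# Toy stand-in for out_fmm_dists; all-inf simulates a floor with no
# reachable cells (an assumed failure mode, not a confirmed diagnosis).
out_fmm_dists = torch.full((17, 32, 32), math.inf)
finite = out_fmm_dists[out_fmm_dists != math.inf]
if finite.numel() == 0:
    print("No finite FMM distances for this sample; consider skipping it.")
else:
    max_dist = finite.max().item() + 1
    print(max_dist)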

Questions about Nav settings; Do you use Chaplot's OGN?

Hello,
I know that someone else has already opened an issue here (#2); I understand that releasing the full code can take a lot of time and energy.

If you are using the infrastructure of Chaplot's OGN (https://github.com/devendrachaplot/Object-Goal-Navigation), I wonder if you could share the settings for plugging PONI into OGN.

  1. Do you use the infrastructure of OGN?
  2. If so, for the "input" of PONI, do you use the "local map" or the "full map"? If you use the "full map" as the input, how do you map the argmax of the potential (the selected goal point) back to the local map, since the local map is used for FMM planning? (See the sketch after this list.)
  3. If you use OGN, what sizes did you use for the local and full maps?
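A minimal sketch, not PONI's confirmed implementation, of one way to map a goal selected on the full map back into the local map used for FMM planning. The lmb bounds variable is an assumption borrowed from the OGN code's naming; the map sizes are toy values.

import numpy as np

full_pot = np.random.rand(480, 480)   # toy full-map potential
lmb = (120, 360, 120, 360)            # assumed local map bounds (y1, y2, x1, x2) in full-map coordinates
gy, gx = np.unravel_index(np.argmax(full_pot), full_pot.shape)
# Shift into local-map coordinates and clamp to the local map extent.
local_goal = (
    int(np.clip(gy - lmb[0], 0, lmb[1] - lmb[0] - 1)),
    int(np.clip(gx - lmb[2], 0, lmb[3] - lmb[2] - 1)),
)
print(local_goal)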

Thanks!

Convert the semantic annotations to habitat format

ACTIVE_DATASET="gibson" python scripts/create_semantic_maps.py
Number of available scenes: 0
===========> Extracting scene boundaries
0it [00:00, ?it/s]
===========> Extracting point-clouds
0it [00:00, ?it/s]
===========> Extracting semantic maps
0it [00:00, ?it/s]

After generating the semantic annotations for Gibson from the 3DSceneGraph dataset, I obtained the .ids and .scn files for each scene and put all of them in the /gibson_semantic folder. Then I tried to use create_semantic_maps.py to convert them to habitat format, but I got the above messages. Could you please take a look? Thanks!
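A quick sanity-check sketch for the "Number of available scenes: 0" symptom: list what the script can actually see on disk. SCENES_DIR is an assumption; point it at the folder create_semantic_maps.py is configured to scan for the active dataset.

import glob
import os

SCENES_DIR = "data/scene_datasets/gibson_semantic"  # assumed location
for ext in ("glb", "navmesh", "ids", "scn"):
    files = sorted(glob.glob(os.path.join(SCENES_DIR, f"*.{ext}")))
    print(f"{ext}: {len(files)} files found")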

Error when running scripts/create_semantic_maps.py

I tried to create semantic maps using Matterport3D, but I get errors for both the basis-compressed and the non-basis-compressed versions.

The command is this:
ACTIVE_DATASET="mp3d" python scripts/create_semantic_maps.py

The error log for the basis-compressed version is this:

failed to load image!
Traceback (most recent call last):
  File "/home/dachii/anaconda3/envs/poni2/lib/python3.8/site-packages/trimesh/exchange/gltf.py", line 1149, in _parse_materials
    images[i] = PIL.Image.open(util.wrap_as_stream(blob))
  File "/home/dachii/anaconda3/envs/poni2/lib/python3.8/site-packages/PIL/Image.py", line 3147, in open
    raise UnidentifiedImageError(
PIL.UnidentifiedImageError: cannot identify image file <_io.BytesIO object at 0x7f78d22dd540>
(the same "failed to load image!" traceback repeats for every remaining texture in the scene)
I0820 08:20:27.053874 125483 Simulator.cpp:54] Deconstructing Simulator
  0%|                                                                                                   | 0/1 [00:43<?, ?it/s]I0820 08:20:27.053992 125483 Simulator.cpp:54] Deconstructing Simulator

multiprocessing.pool.RemoteTraceback: 
"""
Traceback (most recent call last):
  File "/home/dachii/anaconda3/envs/poni2/lib/python3.8/multiprocessing/pool.py", line 125, in worker
    result = (True, func(*args, **kwds))
  File "/home/dachii/workspace/Project/PONI/scripts/create_semantic_maps.py", line 262, in _aux_fn
    return inputs[0](*inputs[1:])
  File "/home/dachii/workspace/Project/PONI/scripts/create_semantic_maps.py", line 236, in extract_scene_point_clouds
    per_floor_wall_pc = extract_wall_point_clouds(
  File "/home/dachii/workspace/Project/PONI/scripts/create_semantic_maps.py", line 309, in extract_wall_point_clouds
    scene = trimesh.load(glb_path)
  File "/home/dachii/anaconda3/envs/poni2/lib/python3.8/site-packages/trimesh/exchange/load.py", line 127, in load
    loaded = load_mesh(file_obj,
  File "/home/dachii/anaconda3/envs/poni2/lib/python3.8/site-packages/trimesh/constants.py", line 153, in timed
    result = method(*args, **kwargs)
  File "/home/dachii/anaconda3/envs/poni2/lib/python3.8/site-packages/trimesh/exchange/load.py", line 209, in load_mesh
    results = mesh_loaders[file_type](file_obj,
  File "/home/dachii/anaconda3/envs/poni2/lib/python3.8/site-packages/trimesh/exchange/gltf.py", line 372, in load_glb
    kwargs = _read_buffers(header=header,
  File "/home/dachii/anaconda3/envs/poni2/lib/python3.8/site-packages/trimesh/exchange/gltf.py", line 1333, in _read_buffers
    materials = _parse_materials(
  File "/home/dachii/anaconda3/envs/poni2/lib/python3.8/site-packages/trimesh/exchange/gltf.py", line 1176, in _parse_materials
    idx = header["textures"][v["index"]]["source"]
KeyError: 'source'
"""

The error log for the non-basis-compressed version is this:

multiprocessing.pool.RemoteTraceback: 
"""
Traceback (most recent call last):
  File "/home/dachii/anaconda3/envs/poni2/lib/python3.8/multiprocessing/pool.py", line 125, in worker
    result = (True, func(*args, **kwds))
  File "/home/dachii/workspace/Project/PONI/scripts/create_semantic_maps.py", line 262, in _aux_fn
    return inputs[0](*inputs[1:])
  File "/home/dachii/workspace/Project/PONI/scripts/create_semantic_maps.py", line 236, in extract_scene_point_clouds
    per_floor_wall_pc = extract_wall_point_clouds(
  File "/home/dachii/workspace/Project/PONI/scripts/create_semantic_maps.py", line 309, in extract_wall_point_clouds
    scene = trimesh.load(glb_path)
  File "/home/dachii/anaconda3/envs/poni2/lib/python3.8/site-packages/trimesh/exchange/load.py", line 127, in load
    loaded = load_mesh(file_obj,
  File "/home/dachii/anaconda3/envs/poni2/lib/python3.8/site-packages/trimesh/constants.py", line 153, in timed
    result = method(*args, **kwargs)
  File "/home/dachii/anaconda3/envs/poni2/lib/python3.8/site-packages/trimesh/exchange/load.py", line 209, in load_mesh
    results = mesh_loaders[file_type](file_obj,
  File "/home/dachii/anaconda3/envs/poni2/lib/python3.8/site-packages/trimesh/exchange/gltf.py", line 372, in load_glb
    kwargs = _read_buffers(header=header,
  File "/home/dachii/anaconda3/envs/poni2/lib/python3.8/site-packages/trimesh/exchange/gltf.py", line 1333, in _read_buffers
    materials = _parse_materials(
  File "/home/dachii/anaconda3/envs/poni2/lib/python3.8/site-packages/trimesh/exchange/gltf.py", line 1140, in _parse_materials
    blob = _uri_to_bytes(uri=img['uri'], resolver=resolver)
  File "/home/dachii/anaconda3/envs/poni2/lib/python3.8/site-packages/trimesh/exchange/gltf.py", line 403, in _uri_to_bytes
    return resolver[uri]
TypeError: 'NoneType' object is not subscriptable
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "scripts/create_semantic_maps.py", line 736, in <module>
    _ = list(tqdm.tqdm(pool.imap(_aux_fn, inputs), total=len(inputs)))
  File "/home/dachii/anaconda3/envs/poni2/lib/python3.8/site-packages/tqdm/std.py", line 1195, in __iter__
    for obj in iterable:
  File "/home/dachii/anaconda3/envs/poni2/lib/python3.8/multiprocessing/pool.py", line 868, in next
    raise value
TypeError: 'NoneType' object is not subscriptable

How do I solve this?

===my environment===
Ubuntu20.04
GPU: RTX3080
CUDA: 11.3.1
Pytorch: 1.10
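A hedged diagnostic sketch: loading each GLB directly with trimesh shows which scene files trigger the material-parsing failures above (the errors suggest texture references the loader cannot resolve). GLB_ROOT is an assumption; adjust it to your MP3D layout.

import glob
import os

import trimesh

GLB_ROOT = "data/scene_datasets/mp3d_uncompressed"  # assumed location
for glb_path in sorted(glob.glob(os.path.join(GLB_ROOT, "**", "*.glb"), recursive=True)):
    try:
        scene = trimesh.load(glb_path)
        print(f"OK   {glb_path} ({type(scene).__name__})")
    except Exception as e:  # e.g. KeyError: 'source', TypeError on missing texture URIs
        print(f"FAIL {glb_path}: {type(e).__name__}: {e}")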

dataset sampling

How does the partial map encountered during training compare with the partial maps in your pre-generated dataset? How can you guarantee that the dataset samples all locations of the complete map for training?

Creating semantic map datasets of MP3D

Thank you for your wonderful work!
When I run the command ACTIVE_DATASET="mp3d" python scripts/create_poni_dataset.py --split "train", it falls into an infinite loop.
Here is my procedure:
1. I downloaded the basis-compressed version of the MP3D scenes and the raw version of MP3D.
2. I converted the *.obj files in raw MP3D into .glb files with trimesh.
3. I copied the .house, .navmesh, and .ply files from the basis-compressed MP3D into the raw MP3D version and obtained the final folder.
4. I ran:
ACTIVE_DATASET="mp3d" python scripts/create_semantic_maps.py
ACTIVE_DATASET="mp3d" python scripts/precompute_fmm_dists.py
ACTIVE_DATASET="mp3d" python scripts/create_poni_dataset.py --split "train"
ACTIVE_DATASET="mp3d" python scripts/create_poni_dataset.py --split "val"
But the "train" command falls into an infinite loop.
Is my procedure for creating the MP3D semantic map datasets correct?

TensorFlow 1.13.1

The requirement on GitHub is Python 3.8.5 and TensorFlow 1.13.1, but installing TensorFlow 1.13.1 requires Python 3.7 or below:
'''
UnsatisfiableError: The following specifications were found
to be incompatible with the existing python installation in your environment:

Specifications:

  • tensorflow==1.13.1 -> python[version='>=2.7,<2.8.0a0|>=3.7,<3.8.0a0|>=3.6,<3.7.0a0']

Your python: python=3.8.5

If python is on the left-most side of the chain, that's the version you've asked for.
When python appears to the right, that indicates that the thing on the left is somehow
not available for the python version you are constrained to. Note that conda will not
change your python version to a different minor version unless you explicitly specify
that.
'''

Installation issue in the "Install dependencies" part: could not build wheels for torch-scatter

Hello. I am trying to follow the installation instructions in the README.
I got through them successfully until the "Install dependencies" part.
I am using Ubuntu 24.04 LTS.
When I copy the "Install dependencies" commands and run them, I get the following logs (simplified):

(poni) anil@anil-PC:~/Desktop/Projects/PONI/dependencies/habitat-lab$ cd $PONI_ROOT/dependencies/habitat-lab
pip install -r requirements.txt
python setup.py develop --all
cd $PONI_ROOT/dependencies/habitat-sim
pip install -r requirements.txt
python setup.py install --headless --with-cuda
python -m pip install detectron2 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cu102/torch1.9/index.html
pip install torch-scatter -f https://data.pyg.org/whl/torch-1.9.0+cu102.html
cd $PONI_ROOT/dependencies/astar_pycpp && make
Installed /home/anil/Desktop/Projects/PONI/dependencies/habitat-lab
Processing dependencies for habitat==0.2.1
Searching for tensorflow==1.13.1
Reading https://pypi.org/simple/tensorflow/
No local packages or working download links found for tensorflow==1.13.1
error: Could not find suitable distribution for Requirement.parse('tensorflow==1.13.1')
setup.py:419: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
assert StrictVersion(
setup.py:421: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
) >= StrictVersion("3.6"), "Must use python3.6 or newer"
running install
/home/anil/anaconda3/envs/poni/lib/python3.8/site-packages/setuptools/_distutils/cmd.py:66: SetuptoolsDeprecationWarning: setup.py install is deprecated.
/home/anil/anaconda3/envs/poni/lib/python3.8/site-packages/setuptools/_distutils/cmd.py:66: EasyInstallDeprecationWarning: easy_install command is deprecated.
running bdist_egg
running egg_info
writing habitat_sim.egg-info/PKG-INFO
writing dependency_links to habitat_sim.egg-info/dependency_links.txt
writing requirements to habitat_sim.egg-info/requires.txt
writing top-level names to habitat_sim.egg-info/top_level.txt
reading manifest file 'habitat_sim.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
adding license file 'LICENSE'
writing manifest file 'habitat_sim.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_py
copying examples/settings.py -> build/lib.linux-x86_64-cpython-38/examples
copying habitat_sim/sensors/noise_models/__init__.py -> build/lib.linux-x86_64-cpython-38/habitat_sim/sensors/noise_models
/home/anil/anaconda3/envs/poni/lib/python3.8/site-packages/setuptools/command/build_py.py:207: _Warning: Package 'habitat_sim.sensors.noise_models.data' is absent from the packages configuration.
check.warn(importable)
copying habitat_sim/sensors/noise_models/data/redwood-depth-dist-model.npy -> build/lib.linux-x86_64-cpython-38/habitat_sim/sensors/noise_models/data
running build_ext
Traceback (most recent call last):
  File "setup.py", line 239, in run
    subprocess.check_output([osp.join(CMAKE_BIN_DIR, "cmake"), "--version"])
  File "/home/anil/anaconda3/envs/poni/lib/python3.8/subprocess.py", line 415, in check_output
    return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
  File "/home/anil/anaconda3/envs/poni/lib/python3.8/subprocess.py", line 493, in run
    with Popen(*popenargs, **kwargs) as process:
  File "/home/anil/anaconda3/envs/poni/lib/python3.8/subprocess.py", line 858, in __init__
    self._execute_child(args, executable, preexec_fn, close_fds,
  File "/home/anil/anaconda3/envs/poni/lib/python3.8/subprocess.py", line 1720, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'cmake'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "setup.py", line 440, in <module>
    setup(
  File "/home/anil/anaconda3/envs/poni/lib/python3.8/site-packages/setuptools/__init__.py", line 104, in setup
    return distutils.core.setup(**attrs)
  File "/home/anil/anaconda3/envs/poni/lib/python3.8/site-packages/setuptools/_distutils/core.py", line 184, in setup
    return run_commands(dist)
  File "/home/anil/anaconda3/envs/poni/lib/python3.8/site-packages/setuptools/_distutils/core.py", line 200, in run_commands
    dist.run_commands()
  File "/home/anil/anaconda3/envs/poni/lib/python3.8/site-packages/setuptools/_distutils/dist.py", line 969, in run_commands
    self.run_command(cmd)
  File "/home/anil/anaconda3/envs/poni/lib/python3.8/site-packages/setuptools/dist.py", line 967, in run_command
    super().run_command(command)
  File "/home/anil/anaconda3/envs/poni/lib/python3.8/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
    cmd_obj.run()
  File "/home/anil/anaconda3/envs/poni/lib/python3.8/site-packages/setuptools/command/install.py", line 87, in run
    self.do_egg_install()
  File "/home/anil/anaconda3/envs/poni/lib/python3.8/site-packages/setuptools/command/install.py", line 139, in do_egg_install
    self.run_command('bdist_egg')
  File "/home/anil/anaconda3/envs/poni/lib/python3.8/site-packages/setuptools/_distutils/cmd.py", line 316, in run_command
    self.distribution.run_command(command)
  File "/home/anil/anaconda3/envs/poni/lib/python3.8/site-packages/setuptools/dist.py", line 967, in run_command
    super().run_command(command)
  File "/home/anil/anaconda3/envs/poni/lib/python3.8/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
    cmd_obj.run()
  File "/home/anil/anaconda3/envs/poni/lib/python3.8/site-packages/setuptools/command/bdist_egg.py", line 167, in run
    cmd = self.call_command('install_lib', warn_dir=0)
  File "/home/anil/anaconda3/envs/poni/lib/python3.8/site-packages/setuptools/command/bdist_egg.py", line 153, in call_command
    self.run_command(cmdname)
  File "/home/anil/anaconda3/envs/poni/lib/python3.8/site-packages/setuptools/_distutils/cmd.py", line 316, in run_command
    self.distribution.run_command(command)
  File "/home/anil/anaconda3/envs/poni/lib/python3.8/site-packages/setuptools/dist.py", line 967, in run_command
    super().run_command(command)
  File "/home/anil/anaconda3/envs/poni/lib/python3.8/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
    cmd_obj.run()
  File "/home/anil/anaconda3/envs/poni/lib/python3.8/site-packages/setuptools/command/install_lib.py", line 11, in run
    self.build()
  File "/home/anil/anaconda3/envs/poni/lib/python3.8/site-packages/setuptools/_distutils/command/install_lib.py", line 110, in build
    self.run_command('build_ext')
  File "/home/anil/anaconda3/envs/poni/lib/python3.8/site-packages/setuptools/_distutils/cmd.py", line 316, in run_command
    self.distribution.run_command(command)
  File "/home/anil/anaconda3/envs/poni/lib/python3.8/site-packages/setuptools/dist.py", line 967, in run_command
    super().run_command(command)
  File "/home/anil/anaconda3/envs/poni/lib/python3.8/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
    cmd_obj.run()
  File "setup.py", line 241, in run
    raise RuntimeError(
RuntimeError: CMake must be installed to build the following extensions: habitat_sim._ext.habitat_sim_bindings
Looking in links: https://dl.fbaipublicfiles.com/detectron2/wheels/cu102/torch1.9/index.html
Looking in links: https://data.pyg.org/whl/torch-1.9.0+cu102.html
Collecting torch-scatter
Using cached torch_scatter-2.1.2.tar.gz (108 kB)
Preparing metadata (setup.py) ... done
Building wheels for collected packages: torch-scatter
Building wheel for torch-scatter (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py bdist_wheel did not run successfully.
│ exit code: 1
╰─> [40 lines of output]
running bdist_wheel
running build
running build_py
creating build
creating build/lib.linux-x86_64-cpython-38
creating build/lib.linux-x86_64-cpython-38/torch_scatter
copying torch_scatter/placeholder.py -> build/lib.linux-x86_64-cpython-38/torch_scatter
copying torch_scatter/testing.py -> build/lib.linux-x86_64-cpython-38/torch_scatter
copying torch_scatter/segment_coo.py -> build/lib.linux-x86_64-cpython-38/torch_scatter
copying torch_scatter/segment_csr.py -> build/lib.linux-x86_64-cpython-38/torch_scatter
copying torch_scatter/utils.py -> build/lib.linux-x86_64-cpython-38/torch_scatter
copying torch_scatter/scatter.py -> build/lib.linux-x86_64-cpython-38/torch_scatter
copying torch_scatter/__init__.py -> build/lib.linux-x86_64-cpython-38/torch_scatter
creating build/lib.linux-x86_64-cpython-38/torch_scatter/composite
copying torch_scatter/composite/softmax.py -> build/lib.linux-x86_64-cpython-38/torch_scatter/composite
copying torch_scatter/composite/std.py -> build/lib.linux-x86_64-cpython-38/torch_scatter/composite
copying torch_scatter/composite/logsumexp.py -> build/lib.linux-x86_64-cpython-38/torch_scatter/composite
copying torch_scatter/composite/__init__.py -> build/lib.linux-x86_64-cpython-38/torch_scatter/composite
running egg_info
writing torch_scatter.egg-info/PKG-INFO
writing dependency_links to torch_scatter.egg-info/dependency_links.txt
writing requirements to torch_scatter.egg-info/requires.txt
writing top-level names to torch_scatter.egg-info/top_level.txt
reading manifest file 'torch_scatter.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no previously-included files matching '*' found under directory 'test'
adding license file 'LICENSE'
writing manifest file 'torch_scatter.egg-info/SOURCES.txt'
running build_ext
building 'torch_scatter._segment_csr_cpu' extension
creating build/temp.linux-x86_64-cpython-38
creating build/temp.linux-x86_64-cpython-38/csrc
creating build/temp.linux-x86_64-cpython-38/csrc/cpu
gcc -pthread -B /home/anil/anaconda3/envs/poni/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -DWITH_PYTHON -Icsrc -I/home/anil/anaconda3/envs/poni/lib/python3.8/site-packages/torch/include -I/home/anil/anaconda3/envs/poni/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/anil/anaconda3/envs/poni/lib/python3.8/site-packages/torch/include/TH -I/home/anil/anaconda3/envs/poni/lib/python3.8/site-packages/torch/include/THC -I/home/anil/anaconda3/envs/poni/include/python3.8 -c csrc/cpu/segment_csr_cpu.cpp -o build/temp.linux-x86_64-cpython-38/csrc/cpu/segment_csr_cpu.o -O3 -Wno-sign-compare -DAT_PARALLEL_OPENMP -fopenmp -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DTORCH_EXTENSION_NAME=_segment_csr_cpu -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
cc1plus: warning: command-line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
csrc/cpu/segment_csr_cpu.cpp:6:10: fatal error: ATen/OpMathType.h: No such file or directory
6 | #include <ATen/OpMathType.h>
| ^~~~~~~~~~~~~~~~~~~
compilation terminated.
error: command '/usr/bin/gcc' failed with exit code 1
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for torch-scatter
Running setup.py clean for torch-scatter
Failed to build torch-scatter
ERROR: Could not build wheels for torch-scatter, which is required to install pyproject.toml-based projects
g++ astar.cpp -o astar.so -O3 -Wall -shared -fpic
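A hedged diagnosis sketch, not a confirmed fix: pip fell back to building torch-scatter 2.1.2 from source, and the missing ATen/OpMathType.h header suggests that source build expects a newer torch than the 1.9/cu102 wheels the README targets. Printing the installed versions makes the mismatch visible, since the prebuilt wheel index URL has to match both torch.__version__ and torch.version.cuda.

import torch

# The -f index used above (torch-1.9.0+cu102) only provides prebuilt
# torch-scatter wheels if these two values match it.
print(torch.__version__)   # expected to start with "1.9.0" for that index
print(torch.version.cuda)  # expected "10.2" for that index (None on CPU builds)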

Install dependencies

I executed the steps one by one and got an error when I ran "python setup.py install --headless --with-cuda".
The error is: "[Errno 2] No such file or directory: 'build/compile_commands.json'" @srama2512


Question about object categories

Nice work!

The potential function network πpf uses the goal object category (o) to predict the object potential functions. What form of object category is input to the network? A one-hot vector, or something else?
Only the names of the object categories are given in the literature [6]; the specific encoding used in the dataset is not described.

Please give me some advice.
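For illustration only, and not confirmed as PONI's encoding: a common way to feed a goal category into a network is a one-hot vector over the category vocabulary, which can then be tiled or embedded. NUM_CATEGORIES is a placeholder.

import torch

NUM_CATEGORIES = 15  # placeholder size, not the paper's actual vocabulary

def encode_goal_category(cat_idx: int) -> torch.Tensor:
    # One-hot vector with a 1 at the goal category's index.
    onehot = torch.zeros(NUM_CATEGORIES)
    onehot[cat_idx] = 1.0
    return onehot

print(encode_goal_category(3))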

How to obtain .scn files for the Matterport dataset

I run the command

ACTIVE_DATASET="mp3d" python scripts/create_semantic_maps.py

But the errors tell me that the ".scn" files are missing.

Number of available scenes: 67
===========> Extracting scene boundaries
  0%|                                                                                                                                                                                     | 0/62 [00:00<?, ?it/s]WARNING: Logging before InitGoogleLogging() is written to STDERR
E0921 07:11:30.619292  1356 SemanticScene.h:155] ::loadSemanticSceneDescriptor : File data/scene_datasets/mp3d_uncompressed/v1/scans/759xd9YjKW5/matterport_mesh/da8b0a1818094c3590080b6fbcc5d161/759xd9YjKW5.scn does not exist.  Aborting load.
WARNING: Logging before InitGoogleLogging() is written to STDERR
E0921 07:11:30.619701  1357 SemanticScene.h:155] ::loadSemanticSceneDescriptor : File data/scene_datasets/mp3d_uncompressed/v1/scans/B6ByNegPMKs/matterport_mesh/85cef4a4c3c244479c56e56d9a723ad2/B6ByNegPMKs.scn does not exist.  Aborting load.
WARNING: Logging before InitGoogleLogging() is written to STDERR
E0921 07:11:30.629235  1362 SemanticScene.h:155] ::loadSemanticSceneDescriptor : File data/scene_datasets/mp3d_uncompressed/v1/scans/5q7pvUzZiYa/matterport_mesh/d7a2911178dd48e89d6a23afb09cbc11/5q7pvUzZiYa.scn does not exist.  Aborting load.
WARNING: Logging before InitGoogleLogging() is written to STDERR
E0921 07:11:30.630138  1360 SemanticScene.h:155] ::loadSemanticSceneDescriptor : File data/scene_datasets/mp3d_uncompressed/v1/scans/2azQ1b91cZZ/matterport_mesh/7812e14df5e746388ff6cfe8b043950a/2azQ1b91cZZ.scn does not exist.  Aborting load.
WARNING: Logging before InitGoogleLogging() is written to STDERR
E0921 07:11:30.631971  1358 SemanticScene.h:155] ::loadSemanticSceneDescriptor : File data/scene_datasets/mp3d_uncompressed/v1/scans/8WUmhLawc2A/matterport_mesh/caef338e1683434ba3a471ead89008cc/8WUmhLawc2A.scn does not exist.  Aborting load.
WARNING: Logging before InitGoogleLogging() is written to STDERR
E0921 07:11:30.632398  1355 SemanticScene.h:155] ::loadSemanticSceneDescriptor : File data/scene_datasets/mp3d_uncompressed/v1/scans/1LXtFkjw3qL/matterport_mesh/b94039b4eb8947bdb9ff5719d9173eae/1LXtFkjw3qL.scn does not exist.  Aborting load.
WARNING: Logging before InitGoogleLogging() is written to STDERR
E0921 07:11:30.633939  1361 SemanticScene.h:155] ::loadSemanticSceneDescriptor : File data/scene_datasets/mp3d_uncompressed/v1/scans/7y3sRwLe3Va/matterport_mesh/f04f5cfeb7f34d03a05e6a62fe5bd572/7y3sRwLe3Va.scn does not exist.  Aborting load.
WARNING: Logging before InitGoogleLogging() is written to STDERR
E0921 07:11:30.636287  1359 SemanticScene.h:155] ::loadSemanticSceneDescriptor : File data/scene_datasets/mp3d_uncompressed/v1/scans/1pXnuDYAj8r/matterport_mesh/2e84c97e728d46babd3270f4e1a0ae3a/1pXnuDYAj8r.scn does not exist.  Aborting load.

I have downloaded the semantic data following the instructions:

python download_mp.py --type matterport_mesh -o $PONI_ROOT/data/scene_datasets/mp3d_uncompressed

How can I get the ".scn" files?

goal category

Could I ask a simple question? Where is the goal category defined? I know there are 6 goal categories in the Gibson dataset.

plan_path() --- Run into infinite loop! when creating the PONI dataset

$ ACTIVE_DATASET="gibson" python create_poni_dataset.py --split "train"
Maps 63
0%| | 0/63 [00:00<?, ?it/s]====> Pre-computing for map Benevolence_1
====> Pre-computing for map Benevolence_0
====> Pre-computing for map Forkland_0
====> Pre-computing for map Benevolence_2
====> Pre-computing for map Coffeen_0
====> Pre-computing for map Beechwood_0
====> Pre-computing for map Coffeen_3
====> Pre-computing for map Hiteman_2
====> Pre-computing for map Coffeen_1
====> Pre-computing for map Allensville_0
====> Pre-computing for map Hiteman_1
====> Pre-computing for map Hiteman_0
====> Pre-computing for map Hanson_0
====> Pre-computing for map Forkland_2
====> Pre-computing for map Beechwood_1
====> Pre-computing for map Forkland_1
====> Pre-computing for map Coffeen_2
====> Pre-computing for map Hanson_2
====> Pre-computing for map Hanson_1
====> Pre-computing for map Klickitat_0
====> Pre-computing for map Cosmos_0
====> Pre-computing for map Cosmos_1
====> Pre-computing for map Klickitat_2
====> Pre-computing for map Lakeville_0
====> Pre-computing for map Klickitat_1
plan_path() --- Run into infinite loop!
plan_path() --- Run into infinite loop!
plan_path() --- Run into infinite loop!
plan_path() --- Run into infinite loop!
0%|

How is the RL policy used during inference?

I studied your code and found that the RL-Policy class is only used at inference time; during training, your pipeline is a supervised learning framework that predicts the long-term goal position. Am I correct? Thanks!

Conversion to GLB

Hi! Thanks for your Great work!

I have followed the guidance below for GLB conversion:
https://github.com/srama2512/PONI/blob/main/DATASETS.md:

  • Conversion to GLB can be performed using existing tools like assimp.
    assimp export <OBJ FILE> <GLB PATH>

When I use the command python download_mp.py --type matterport_mesh -o $PONI_ROOT/data/scene_datasets/mp3d_uncompressed, I get files like scans/1LXtFkjw3qL/matterport_mesh.zip.
Then, I unzip this file and get something like:

../scans/1LXtFkjw3qL/matterport_mesh/b94039b4eb8947bdb9ff5719d9173eae$ ls 
b94039b4eb8947bdb9ff5719d9173eae_000.jpg  b94039b4eb8947bdb9ff5719d9173eae_020.jpg  b94039b4eb8947bdb9ff5719d9173eae_083.jpg
......
b94039b4eb8947bdb9ff5719d9173eae.mtl
b94039b4eb8947bdb9ff5719d9173eae.obj

Then, I execute the GLB conversion command:

assimp  export  b94039b4eb8947bdb9ff5719d9173eae.obj  1LXtFkjw3qL.glb 

When I use the command python scripts/create_semantic_maps.py, the scene_boundaries at _ = list(tqdm.tqdm(pool.imap(get_scene_boundaries, inputs), total=len(inputs))) are successfully generated.
But I got an error in _ = list(tqdm.tqdm(pool.imap(_aux_fn, inputs), total=len(inputs))):

def extract_wall_point_clouds(...):
    ......
    scene = trimesh.load(glb_path)

Traceback (most recent call last):
  File "/home/zlin/anaconda3/envs/habitat/lib/python3.8/site-packages/trimesh/exchange/load.py", line 129, in load
    loaded = load_mesh(file_obj,
  File "/home/zlin/anaconda3/envs/habitat/lib/python3.8/site-packages/trimesh/exchange/load.py", line 212, in load_mesh
    results = loader(file_obj,
  File "/home/zlin/anaconda3/envs/habitat/lib/python3.8/site-packages/trimesh/exchange/gltf.py", line 401, in load_glb
    kwargs = _read_buffers(header=header,
  File "/home/zlin/anaconda3/envs/habitat/lib/python3.8/site-packages/trimesh/exchange/gltf.py", line 1360, in _read_buffers
    materials = _parse_materials(
  File "/home/zlin/anaconda3/envs/habitat/lib/python3.8/site-packages/trimesh/exchange/gltf.py", line 1217, in _parse_materials
    images = _parse_textures(header, views, resolver)
  File "/home/zlin/anaconda3/envs/habitat/lib/python3.8/site-packages/trimesh/exchange/gltf.py", line 1185, in _parse_textures
    blob = _uri_to_bytes(uri=img['uri'], resolver=resolver)
  File "/home/zlin/anaconda3/envs/habitat/lib/python3.8/site-packages/trimesh/exchange/gltf.py", line 433, in _uri_to_bytes
    return resolver[uri]
TypeError: 'NoneType' object is not subscriptable

So, I would like to ask whether the above GLB conversion procedure is correct.
Also, what version of trimesh are you using?
I hope these questions can be answered; thanks again for your work.
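A hedged alternative sketch, not a confirmed fix: the TypeError above comes from texture URIs that the GLB references but the loader cannot resolve, so one workaround is to do the OBJ-to-GLB conversion with trimesh itself, which embeds the texture images in the binary GLB. Paths follow the example above.

import trimesh

# Load the textured OBJ (its .mtl and .jpg files must sit next to it) and
# re-export as GLB; trimesh embeds the textures into the binary file.
scene = trimesh.load("b94039b4eb8947bdb9ff5719d9173eae.obj")
scene.export("1LXtFkjw3qL.glb")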

Create semantic maps

p1 = ply_data["vertex"][vids[0]]
p1 = [p1[0], p1[2], -p1[1]]
p2 = ply_data["vertex"][vids[1]]
p2 = [p2[0], p2[2], -p2[1]]
p3 = ply_data["vertex"][vids[2]]
p3 = [p3[0], p3[2], -p3[1]]
obj_id_to_faces[obj_id].append([p1, p2, p3])

Could you please explain why you change the vertex coordinate order in the semantic.ply file to [x, z, -y] for all objects?
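A hedged reading, not a statement from the authors: the [x, z, -y] shuffle is consistent with a change of basis from a z-up mesh frame to Habitat's y-up frame, i.e. a -90 degree rotation about the x-axis, as the toy check below shows.

import numpy as np

# Rotation by -90 degrees about the x-axis maps (x, y, z) -> (x, z, -y).
R = np.array([
    [1,  0, 0],
    [0,  0, 1],
    [0, -1, 0],
])
p = np.array([1.0, 2.0, 3.0])  # a vertex (x, y, z) in the source frame
print(R @ p)                   # -> [ 1.  3. -2.], i.e. [x, z, -y]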

Problems with eval_poni.sh for Gibson

I prepared the datasets and pretrained models according to the README and put the Gibson eval_poni.sh into the folder /home/wdw/Projects/PONI/experiments/gibson_eval, but when I run ./eval_poni.sh:
Auto GPU config:
Number of processes: 5
Number of processes on GPU 0: 2
Number of processes per GPU: 3
Dumping at /home/wdw/Projects/PONI/experiments/gibson_eval/gibson_objectnav/models/seed_100/
Namespace(add_agent2loc_distance=False, add_agent2loc_distance_v2=False, agent='sem_exp', alpha=0.99, area_weight_coef=0.5, auto_gpu_config=1, camera_height=0.88, cat_pred_threshold=5.0, clip_param=0.2, collision_threshold=0.2, cuda=True, dist_weight_coef=0.3, du_scale=1, dump_location='/home/wdw/Projects/PONI/experiments/gibson_eval/gibson_objectnav', entropy_coef=0.001, env_frame_height=480, env_frame_width=640, eps=1e-05, eval=1, exp_name='seed_100', exp_pred_threshold=1.0, floor_thr=50, frame_height=120, frame_width=160, gamma=0.99, global_downscaling=1, global_hidden_size=256, hfov=79.0, intrinsic_rew_coeff=0.02, load='0', log_interval=10, lr=2.5e-05, lr_schedule=[], main_model='simple_cnn', map_pred_threshold=1.0, map_resolution=5, map_size_cm=2400, mask_nearest_locations=True, mask_size=1.0, max_d=100.0, max_depth=5.0, max_episode_length=500, max_grad_norm=0.5, min_d=1.5, min_depth=0.5, no_cuda=False, num_eval_episodes=200, num_global_steps=20, num_goals=1, num_local_steps=1, num_mini_batch=2, num_pf_maps=0, num_processes=5, num_processes_on_first_gpu=2, num_processes_per_gpu=3, num_sem_categories=16, num_train_episodes=10000, num_training_frames=10000000, object_cat_offset=1, pf_masking_opt='unexplored', pf_model_path='/home/wdw/Projects/PONI/pretrained_models/gibson_models/poni_seed_345.ckpt', ppo_epoch=4, print_images=0, reward_coeff=0.1, save_interval=1, seed=100, seg_interval=3, sem_gpu_id=-1, sem_pred_prob_thr=0.9, sem_pred_weights='../pretrained_models/maskrcnn_gibson.pth', sim_gpu_id=1, split='val', success_dist=1.0, success_distance=0.1, task_config='tasks/objectnav_gibson.yaml', tau=0.95, total_num_scenes=5, turn_angle=30, use_egocentric_transform=False, use_gae=False, use_gt_segmentation=False, use_nearest_frontier=False, use_recurrent_global=0, value_loss_coef=0.5, version='v1.1', vision_range=100, visualize=0)
Scenes per thread:
Thread 0: ['Collierville.glb']
Thread 1: ['Corozal.glb']
Thread 2: ['Darden.glb']
Thread 3: ['Markleeville.glb']
Thread 4: ['Wiconisco.glb']
2022-12-09 00:07:54,338 Initializing dataset PointNav-v1
2022-12-09 00:07:54,339 initializing sim Sim-v0
2022-12-09 00:07:54,423 Initializing dataset PointNav-v1
2022-12-09 00:07:54,425 initializing sim Sim-v0
WARNING: Logging before InitGoogleLogging() is written to STDERR
E1209 00:07:54.425750 19103 StageAttributesManager.cpp:86] ::registerObjectFinalize : Render asset template handle : ../data/scene_datasets/gibson_semantic//Wiconisco.glb specified in stage template with handle : ../data/scene_datasets/gibson_semantic//Wiconisco.glb does not correspond to any existing file or primitive render asset. Aborting.
WARNING: Logging before InitGoogleLogging() is written to STDERR
E1209 00:07:54.494887 19102 StageAttributesManager.cpp:86] ::registerObjectFinalize : Render asset template handle : ../data/scene_datasets/gibson_semantic//Markleeville.glb specified in stage template with handle : ../data/scene_datasets/gibson_semantic//Markleeville.glb does not correspond to any existing file or primitive render asset. Aborting.
2022-12-09 00:07:54,533 Initializing dataset PointNav-v1
2022-12-09 00:07:54,534 initializing sim Sim-v0
2022-12-09 00:07:54,554 Initializing dataset PointNav-v1
2022-12-09 00:07:54,555 initializing sim Sim-v0
WARNING: Logging before InitGoogleLogging() is written to STDERR
E1209 00:07:54.598381 19099 StageAttributesManager.cpp:86] ::registerObjectFinalize : Render asset template handle : ../data/scene_datasets/gibson_semantic//Collierville.glb specified in stage template with handle : ../data/scene_datasets/gibson_semantic//Collierville.glb does not correspond to any existing file or primitive render asset. Aborting.
WARNING: Logging before InitGoogleLogging() is written to STDERR
E1209 00:07:54.624519 19101 StageAttributesManager.cpp:86] ::registerObjectFinalize : Render asset template handle : ../data/scene_datasets/gibson_semantic//Darden.glb specified in stage template with handle : ../data/scene_datasets/gibson_semantic//Darden.glb does not correspond to any existing file or primitive render asset. Aborting.
2022-12-09 00:07:54,729 Initializing dataset PointNav-v1
2022-12-09 00:07:54,730 initializing sim Sim-v0
WARNING: Logging before InitGoogleLogging() is written to STDERR
E1209 00:07:54.785553 19100 StageAttributesManager.cpp:86] ::registerObjectFinalize : Render asset template handle : ../data/scene_datasets/gibson_semantic//Corozal.glb specified in stage template with handle : ../data/scene_datasets/gibson_semantic//Corozal.glb does not correspond to any existing file or primitive render asset. Aborting.
Traceback (most recent call last):
  File "eval_poni.py", line 949, in <module>
    main()
  File "eval_poni.py", line 102, in main
    envs = make_vec_envs(args, workers_ignore_signals=not args.eval)
  File "/home/wdw/Projects/PONI/semexp/envs/__init__.py", line 7, in make_vec_envs
    envs = construct_envs(args, workers_ignore_signals=workers_ignore_signals, **kwargs)
  File "/home/wdw/Projects/PONI/semexp/envs/habitat/__init__.py", line 147, in construct_envs
    envs = VectorEnv(
  File "/home/wdw/Projects/PONI/semexp/envs/habitat/utils/vector_env.py", line 139, in __init__
    self.observation_spaces = [read_fn() for read_fn in self._connection_read_fns]
  File "/home/wdw/Projects/PONI/semexp/envs/habitat/utils/vector_env.py", line 139, in <listcomp>
    self.observation_spaces = [read_fn() for read_fn in self._connection_read_fns]
  File "/home/wdw/anaconda3/envs/poni/lib/python3.8/multiprocessing/connection.py", line 250, in recv
    buf = self._recv_bytes()
  File "/home/wdw/anaconda3/envs/poni/lib/python3.8/multiprocessing/connection.py", line 414, in _recv_bytes
    buf = self._recv(4)
  File "/home/wdw/anaconda3/envs/poni/lib/python3.8/multiprocessing/connection.py", line 379, in _recv
    chunk = read(handle, remaining)
ConnectionResetError: [Errno 104] Connection reset by peer
Exception ignored in: <function VectorEnv.__del__ at 0x7f6a786bd430>
Traceback (most recent call last):
  File "/home/wdw/Projects/PONI/semexp/envs/habitat/utils/vector_env.py", line 596, in __del__
    self.close()
  File "/home/wdw/Projects/PONI/semexp/envs/habitat/utils/vector_env.py", line 431, in close
    write_fn((CLOSE_COMMAND, None))
  File "/home/wdw/anaconda3/envs/poni/lib/python3.8/multiprocessing/connection.py", line 206, in send
    self._send_bytes(_ForkingPickler.dumps(obj))
  File "/home/wdw/anaconda3/envs/poni/lib/python3.8/multiprocessing/connection.py", line 411, in _send_bytes
    self._send(header + buf)
  File "/home/wdw/anaconda3/envs/poni/lib/python3.8/multiprocessing/connection.py", line 368, in _send
    n = write(self._handle, buf)
BrokenPipeError: [Errno 32] Broken pipe

It seems like a data path problem, but I can't figure it out.
Looking forward to your reply, thanks!
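A small sanity-check sketch for the path problem the log suggests: verify that the scene GLBs exist at the path the config resolves to, relative to the directory eval_poni.sh runs from. The path and scene names are copied from the log above.

import os

SCENES_DIR = "../data/scene_datasets/gibson_semantic/"  # path from the error log
for scene in ["Collierville", "Corozal", "Darden", "Markleeville", "Wiconisco"]:
    glb = os.path.join(SCENES_DIR, scene + ".glb")
    print(glb, "exists" if os.path.isfile(glb) else "MISSING")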

Loss only on frontier?

Hi, still me.
I have a question about the MSE loss calculation. In the paper you mention that the loss is computed only on the frontier pixels, but I checked this code:
mask = labels["loss_masks"][:, 2:]
# Evaluate predictions only on mask = 1 regions
mask_sum = mask.sum(dim=3).sum(dim=2) + 1e-16  # (b, 1)

By visualization, the mask covers the whole floor map, not just the frontier pixels, so you compute the loss on the entire floor map instead of on the frontier pixels. Am I correct?

Besides, why is there no mask applied to the area_pf loss?

area_pf_loss = self.area_loss_fn(area_preds, area_gts).mean()
loss = loss + area_pf_loss
losses["area_pf_loss"] = area_pf_loss.item()

Some questions about eval_poni.sh

Thank you for a great job!
When I try to run the evaluation code, I encounter the following problem. I did not use conda; I installed the relevant dependencies directly in Docker. I guess this is caused by a package version. Could you help me look at this problem?
Finally, would you consider posting a Dockerfile?
from habitat.config import Config
ImportError: cannot import name 'Config' from 'habitat.config' (/root/PONI/dependencies/habitat-lab/habitat-lab/habitat/config/__init__.py)
Does habitat.config here refer to the config in habitat-lab under dependencies, or to the habitat package in PONI/semexp/envs/habitat?


tools/gen_gibson_semantics.sh: line 28: tools/../build/utils/datatool/datatool: No such file or directory

Hello, @srama2512
Thanks for your excellent work!
I want to convert the semantic annotations to habitat format following the instructions in habitat-sim, but I encountered this problem: tools/gen_gibson_semantics.sh: line 24: tools/../build/utils/datatool/datatool: No such file or directory

I found that there is no utils directory in build, but there is a utils/datatool directory in src.
How can I resolve this problem? Thank you very much!

The calculation of area_pf

Could you please explain more about "we first group the unexplored free-space cells into connected components C = {c1, · · · , cn} using OpenCV. A component c is associated with frontier f only if at least one pixel in c is an 8-connected neighbor [58] of some pixel in f", and point me to where the area_pf computation described in that sentence is implemented? Thanks in advance!
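A hedged sketch of the grouping the quoted sentence describes, not the authors' implementation: label the unexplored free-space cells into 8-connected components with OpenCV, then link a component to a frontier if any of its pixels falls inside the frontier's one-pixel (8-connected) dilation; a linked component's area would then contribute to that frontier's area potential.

import cv2
import numpy as np

unexplored_free = np.zeros((64, 64), np.uint8)
unexplored_free[10:30, 10:30] = 1                 # toy unexplored free-space blob
frontier = np.zeros_like(unexplored_free)
frontier[9, 10:30] = 1                            # toy frontier just above it

num, labels = cv2.connectedComponents(unexplored_free, connectivity=8)
# One-pixel 8-connected dilation of the frontier; any overlap with a
# component means some component pixel is an 8-neighbor of a frontier pixel.
dilated = cv2.dilate(frontier, np.ones((3, 3), np.uint8))
for comp in range(1, num):
    linked = bool(np.any((labels == comp) & (dilated > 0)))
    print(f"component {comp}: area={int((labels == comp).sum())}, linked_to_frontier={linked}")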

Training Time

Dear Authors,

How long does your method take to train?

Visualization of MP3D

Is there a way to save videos for MP3D evaluations, like "--print_images 1" does for the Gibson dataset?

Add instructions for parallelizing create_poni_dataset.py

The existing instructions only support generating the data on a single machine.

ACTIVE_DATASET="gibson" python scripts/create_poni_dataset.py --split "train"

TODO: Add instructions to demonstrate parallelization across machines
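A hypothetical sharding sketch, not an existing feature of create_poni_dataset.py: one simple way to parallelize across machines is to split the map list by index, with NUM_MACHINES and MACHINE_ID as made-up environment variables; each machine then generates data only for its slice.

import os

maps = sorted(["Allensville_0", "Beechwood_0", "Benevolence_0", "Coffeen_0"])  # toy map list
num_machines = int(os.environ.get("NUM_MACHINES", "1"))
machine_id = int(os.environ.get("MACHINE_ID", "0"))
my_maps = maps[machine_id::num_machines]  # every num_machines-th map, offset by machine_id
print(my_maps)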
