facebookresearch / habitat-lab

A modular high-level library to train embodied AI agents across a variety of tasks and environments.

Home Page: https://aihabitat.org/

License: MIT License

Python 99.30% Dockerfile 0.06% Shell 0.32% HTML 0.32%
ai computer-vision robotics simulator sim2real deep-learning deep-reinforcement-learning reinforcement-learning research python

habitat-lab's Introduction


Habitat-Lab

Habitat-Lab is a modular high-level library for end-to-end development in embodied AI. It is designed to train agents to perform a wide variety of embodied AI tasks in indoor environments, as well as develop agents that can interact with humans in performing these tasks.

Towards this goal, Habitat-Lab is designed to support the following features:

  • Flexible task definitions: allowing users to train agents in a wide variety of single and multi-agent tasks (e.g. navigation, rearrangement, instruction following, question answering, human following), as well as define novel tasks.
  • Diverse embodied agents: configuring and instantiating a diverse set of embodied agents, including commercial robots and humanoids, specifying their sensors and capabilities.
  • Training and evaluating agents: providing algorithms for single and multi-agent training (via imitation or reinforcement learning, or no learning at all as in SensePlanAct pipelines), as well as tools to benchmark their performance on the defined tasks using standard metrics.
  • Human-in-the-loop interaction: providing a framework for humans to interact with the simulator, enabling the collection of embodied data or interaction with trained agents.

Habitat-Lab uses Habitat-Sim as the core simulator. For documentation, see the Documentation section below.

Habitat Demo



Citing Habitat

If you use the Habitat platform in your research, please cite the Habitat 1.0, Habitat 2.0, and Habitat 3.0 papers:

@misc{puig2023habitat3,
      title  = {Habitat 3.0: A Co-Habitat for Humans, Avatars and Robots},
      author = {Xavi Puig and Eric Undersander and Andrew Szot and Mikael Dallaire Cote and Ruslan Partsey and Jimmy Yang and Ruta Desai and Alexander William Clegg and Michal Hlavac and Tiffany Min and Theo Gervet and Vladimír Vondruš and Vincent-Pierre Berges and John Turner and Oleksandr Maksymets and Zsolt Kira and Mrinal Kalakrishnan and Jitendra Malik and Devendra Singh Chaplot and Unnat Jain and Dhruv Batra and Akshara Rai and Roozbeh Mottaghi},
      year   = {2023},
      archivePrefix = {arXiv},
}

@inproceedings{szot2021habitat,
  title     =     {Habitat 2.0: Training Home Assistants to Rearrange their Habitat},
  author    =     {Andrew Szot and Alex Clegg and Eric Undersander and Erik Wijmans and Yili Zhao and John Turner and Noah Maestre and Mustafa Mukadam and Devendra Chaplot and Oleksandr Maksymets and Aaron Gokaslan and Vladimir Vondrus and Sameer Dharur and Franziska Meier and Wojciech Galuba and Angel Chang and Zsolt Kira and Vladlen Koltun and Jitendra Malik and Manolis Savva and Dhruv Batra},
  booktitle =     {Advances in Neural Information Processing Systems (NeurIPS)},
  year      =     {2021}
}

@inproceedings{habitat19iccv,
  title     =     {Habitat: {A} {P}latform for {E}mbodied {AI} {R}esearch},
  author    =     {Manolis Savva and Abhishek Kadian and Oleksandr Maksymets and Yili Zhao and Erik Wijmans and Bhavana Jain and Julian Straub and Jia Liu and Vladlen Koltun and Jitendra Malik and Devi Parikh and Dhruv Batra},
  booktitle =     {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year      =     {2019}
}

Installation

  1. Preparing conda env

    Assuming you have conda installed, let's prepare a conda env:

    # We require python>=3.9 and cmake>=3.14
    conda create -n habitat python=3.9 cmake=3.14.0
    conda activate habitat
  2. conda install habitat-sim

    • To install habitat-sim with bullet physics
      conda install habitat-sim withbullet -c conda-forge -c aihabitat
      
      Note, for newer features added after the most recent release, you may need to install aihabitat-nightly. See Habitat-Sim's installation instructions for more details.
  3. pip install the habitat-lab stable version.

    git clone --branch stable https://github.com/facebookresearch/habitat-lab.git
    cd habitat-lab
    pip install -e habitat-lab  # install habitat_lab
  4. Install habitat-baselines.

    The command above installs only the core of Habitat-Lab. To include habitat_baselines along with all of its additional requirements, use the command below after installing habitat-lab (an optional sanity check follows the commands):

    pip install -e habitat-baselines  # install habitat_baselines
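
As a quick, optional sanity check that both packages import, you can run the following from the repository root (this assumes the habitat conda environment is active; the check itself is not part of the official instructions):

    python -c "import habitat, habitat_baselines; print('habitat-lab imports OK')"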

Testing

  1. Let's download some 3D assets using Habitat-Sim's python data download utility:

    • Download (testing) 3D scenes:

      python -m habitat_sim.utils.datasets_download --uids habitat_test_scenes --data-path data/

      Note that these testing scenes do not provide semantic annotations.

    • Download point-goal navigation episodes for the test scenes:

      python -m habitat_sim.utils.datasets_download --uids habitat_test_pointnav_dataset --data-path data/
  2. Non-interactive testing: test the Pick task by running the example pick task script

    python examples/example.py

    which uses habitat-lab/habitat/config/benchmark/rearrange/skills/pick.yaml for the task and agent configuration. The script roughly does this:

    import gym
    import habitat.gym
    
    # Load embodied AI task (RearrangePick) and a pre-specified virtual robot
    env = gym.make("HabitatRenderPick-v0")
    observations = env.reset()
    
    terminal = False
    
    # Step through environment with random actions
    while not terminal:
        observations, reward, terminal, info = env.step(env.action_space.sample())

    To modify some of the environment's configuration, you can also use the habitat.gym.make_gym_from_config method, which creates a Habitat environment from a config:

    config = habitat.get_config(
      "benchmark/rearrange/skills/pick.yaml",
      overrides=["habitat.environment.max_episode_steps=20"]
    )
    env = habitat.gym.make_gym_from_config(config)

    If you want to know more about what the different configuration key overrides do, you can use this reference.

    See examples/register_new_sensors_and_measures.py for an example of how to extend habitat-lab from outside the source code; a minimal sketch of the idea follows.
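
    For orientation, this is roughly what such an extension can look like, using Habitat's registry to define a custom measure. The class name and uuid below are made up for illustration; see the example file above for how to wire a measure into a task config.

    from habitat.core.embodied_task import Measure
    from habitat.core.registry import registry


    @registry.register_measure
    class MyEpisodeLength(Measure):
        # Counts the number of steps taken in the episode (illustrative only).
        cls_uuid = "my_episode_length"

        def _get_uuid(self, *args, **kwargs):
            return self.cls_uuid

        def reset_metric(self, *args, **kwargs):
            self._metric = 0

        def update_metric(self, *args, **kwargs):
            self._metric += 1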

  3. Interactive testing: use your keyboard and mouse to control a Fetch robot in a ReplicaCAD environment:

    # Pygame for interactive visualization, pybullet for inverse kinematics
    pip install pygame==2.0.1 pybullet==3.0.4
    
    # Interactive play script
    python examples/interactive_play.py --never-end

    Use the I/J/K/L keys to move the robot base forward/left/backward/right, W/A/S/D to move the arm end-effector forward/left/backward/right, and E/Q to move the arm up/down. The arm can be difficult to control via end-effector control. More details are in the documentation. Try to move the base and the arm to touch the red bowl on the table. Have fun!

    Note: Interactive testing currently fails on Ubuntu 20.04 with the error: X Error of failed request: BadAccess (attempt to access private resource denied). We are working on fixing this and will update the instructions once we have a fix. The script works without errors on macOS.

Debugging an environment issue

Our vectorized environments are very fast, but they are not very verbose. When using VectorEnv, some errors may be silenced, resulting in the process hanging or in multiprocessing errors that are hard to interpret. We recommend setting the environment variable HABITAT_ENV_DEBUG to 1 when debugging (export HABITAT_ENV_DEBUG=1), as this will use the slower but more verbose ThreadedVectorEnv class. Do not forget to unset HABITAT_ENV_DEBUG (unset HABITAT_ENV_DEBUG) when you are done debugging, since VectorEnv is much faster than ThreadedVectorEnv.
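
For example, in a bash shell (the script name is a placeholder for whatever you are running):

    # Use the slower but more verbose ThreadedVectorEnv while debugging
    export HABITAT_ENV_DEBUG=1
    python <your training or evaluation script>

    # Switch back to the fast VectorEnv when done
    unset HABITAT_ENV_DEBUG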

Documentation

Browse the online Habitat-Lab documentation and the extensive tutorial on how to train your agents with Habitat. For Habitat 2.0, use this quickstart guide.

Docker Setup

We provide Docker containers for Habitat, updated approximately once per year for the Habitat Challenge. This works on machines with an NVIDIA GPU and requires users to install nvidia-docker. To set up the Habitat stack using Docker, follow the steps below:

  1. Pull the habitat docker image: docker pull fairembodied/habitat-challenge:testing_2022_habitat_base_docker

  2. Start an interactive bash session inside the habitat docker: docker run --runtime=nvidia -it fairembodied/habitat-challenge:testing_2022_habitat_base_docker

  3. Activate the habitat conda environment: conda init; source ~/.bashrc; source activate habitat

  4. Run the testing scripts as above: cd habitat-lab; python examples/example.py. This should print output like:

    Agent acting inside environment.
    Episode finished after 200 steps.

Questions?

Can't find the answer to your question? Look through the common issues or ask the developers and the community on our Discussions forum.

Datasets

Common task and episode datasets used with Habitat-Lab.

Baselines

Habitat-Lab includes reinforcement learning (via PPO) baselines. For running PPO training on sample data and more details, refer to habitat_baselines/README.md.
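
For reference, a typical PPO training invocation looks like the following; treat this as a sketch, since the exact config name and options may differ across versions, and defer to habitat_baselines/README.md:

    python -u -m habitat_baselines.run \
      --config-name=pointnav/ppo_pointnav_example.yaml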

ROS-X-Habitat

ROS-X-Habitat (https://github.com/ericchen321/ros_x_habitat) is a framework that bridges the AI Habitat platform (Habitat Lab + Habitat Sim) with other robotics resources via ROS. Compared with Habitat-PyRobot, ROS-X-Habitat places emphasis on 1) leveraging Habitat Sim v2's physics-based simulation capability and 2) allowing roboticists to access simulation assets from ROS. The work has also been made public as a paper.

Note that ROS-X-Habitat was developed and is maintained by the Lab for Computational Intelligence at UBC; it is not officially supported by the Habitat Lab team. Please refer to the framework's repository for docs and discussions.

License

Habitat-Lab is MIT licensed. See the LICENSE file for details.

The trained models and the task datasets are considered data derived from the corresponding scene datasets.

habitat-lab's People

Contributors

0mdc, abhiskk, aclegg3, aszot, danielgordon10, dhruvbatra, erikwijmans, eundersander, facebook-github-bot, henrysamer, jacobkrantz, jasonjiazhizhang, jimmytyyang, jturner65, laikhtewari, mathfac, matsuren, mosra, mukulkhanna, nakuramino, naokiyokoyama, ram81, rpartsey, skylion007, srama2512, thibautlavril, vauduong, vincentpierre, xavierpuigf, ykarmesh


habitat-lab's Issues

Logging from child processes in `VectorEnv`

Author @abhiskk: Currently there is no way to log from a child process to the main process when using VectorEnv for training. It would be good to organize logging properly: either log into the main process or write a separate log file for each child process.
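
A minimal sketch of the "separate file per child process" option (the helper name and layout are made up; this is not an existing Habitat API):

import logging
import os


def get_worker_logger(log_dir="logs"):
    # One log file per child process, named by PID.
    os.makedirs(log_dir, exist_ok=True)
    logger = logging.getLogger(f"habitat.worker.{os.getpid()}")
    if not logger.handlers:
        handler = logging.FileHandler(
            os.path.join(log_dir, f"worker_{os.getpid()}.log")
        )
        handler.setFormatter(
            logging.Formatter("%(asctime)s %(levelname)s %(message)s")
        )
        logger.addHandler(handler)
        logger.setLevel(logging.INFO)
    return logger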

How to initialize environment outside of main repo?

Steps to reproduce

  1. I cloned examples/example.py outside of the repository (i.e. in a different project repo) and attempted to run it.

Observed Results

I got No such file or directory: 'data/datasets/pointnav/gibson/v1/train/train.json.gz' because the data path in the config is a relative path. When I changed DATA_PATH in configs/tasks/pointnav_gibson.yaml I got

Expected Results

I expected the script to run as normal.

How do I make an environment from a config from outside the main repo? We'd like to integrate Habitat into our existing project as an environment and are having a great deal of trouble just initializing an environment from outside habitat-api. Thank you so much for the help!
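
With the current habitat-lab API, one way to avoid the relative-path issue is to override the dataset path with an absolute path. This is only a sketch: the config name and the habitat.dataset.data_path key follow current habitat-lab conventions and may differ from the habitat-api version this issue was filed against, and the data path itself is a placeholder.

import habitat

config = habitat.get_config(
    "benchmark/nav/pointnav/pointnav_gibson.yaml",
    overrides=[
        # Absolute path, so the script no longer depends on the working directory.
        "habitat.dataset.data_path=/absolute/path/to/data/datasets/pointnav/gibson/v1/{split}/{split}.json.gz",
    ],
)
env = habitat.Env(config=config)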

Tasks other than PointNav

Will alternate tasks (Object Goal, Area Goal etc) be released? If so, is there an approximate timeline?

SLAM baselines

Thanks for your hard work!
A couple of questions regarding the SLAM baseline:

  • When will it be released?
  • If I wanted to make my own custom SLAM for Habitat, where should I edit?

goal distance is computed inconsistently for PointGoalSensor and SPL measure

For the navigation task, the computation of goal distance is inconsistent between the PointGoal sensor here and the SPL measure here. The former appears to be a 2D distance while the latter appears to be a 3D distance.

As a result, in some cases the agent is within desired distance to the goal according to the PointGoal sensor, but not according to the SPL code.
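
A small numeric illustration of the discrepancy (the numbers are made up; only the 2D-vs-3D distinction matters):

import numpy as np

agent_pos = np.array([1.0, 0.5, 2.0])   # x, y (height), z
goal_pos = np.array([1.0, 0.0, 2.2])

# 2D distance on the ground plane (ignores the vertical axis)
dist_2d = np.linalg.norm(agent_pos[[0, 2]] - goal_pos[[0, 2]])   # 0.2

# Full 3D distance
dist_3d = np.linalg.norm(agent_pos - goal_pos)                   # ~0.54

# With a 0.3 m success radius, the agent is "at the goal" in 2D but not in 3D.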

Unclear config usage

Author @abhiskk:
When using the default config, config.SIMULATOR contains information about the depth sensor, but the depth sensor is not attached to the agent unless you explicitly attach it. This can be confusing: if you print out the config there is information about the depth sensor, yet you don't get any depth observations from the environment.

Not of immediate concern, but we should try to make the config usage cleaner.

cc: @mathfac

Relevant Code

import habitat
from habitat.config.default import cfg
from habitat.tasks.nav.nav_task import NavigationEpisode

config = cfg()
config.SIMULATOR.AGENT_0.SENSORS = ["RGB_SENSOR", "DEPTH_SENSOR"]  # specify sensors
env = habitat.Env(config=config)
env.episodes = [
    NavigationEpisode(
        episode_id="0",
        scene_id=config.SIMULATOR.SCENE,
        start_position=None,
        start_rotation=None,
        goals=[],
    )
]

observations = env.reset()

Sensor configuration

Firstly, Thank you for your great work!

Questions

  1. In configs/tasks/pointnav_gibson.yaml line 5 and configs/tasks/pointnav_mp3d.yaml line 5, SENSORS for SIMULATOR is set to ['RGB_SENSOR']. However, the observations returned by the simulator still contain 'rgb', 'depth' and 'goal'. How can we correctly specify which sensors appear in the simulator observations?
  2. Are all the instances in the gibson/mp3d train/val/test sets reachable within 500 steps? (Here one instance means the data for one episode_id; an example image was attached to the original issue. I am not sure whether the word instance is suitable to describe that data example.)

Rename LookLeft and LookRight to RotateLeft and RotateRight

Author @danielgordon10:
https://github.com/fairinternal/habitat-sim/blob/439d586b8c643fd42f53a31a4b15a65af3e39bcf/src/esp/scene/ObjectControls.cpp#L46

Rotate is a better description of what's actually happening since the entire agent body is being rotated along with the camera.
Based on my understanding of previous conversations, we would not want to change lookUp and lookDown to rotate.

This change will have to be synced with one on habitat-api to similarly rename things.

From @dhruvbatra

-- "look right/left" --> "turn right/left"? Because after "looking" semantics of "forward" don't necessarily change, but after "turning" they do.

Create multiagent environment

Expected Results

My goal is to create a scene where there are other agents moving around, and an ego view perceives these movements. I would also like to keep all the images from the ego view, along with the trajectory and angles of the ego camera. Is the platform compatible with this kind of task? Is there any tutorial to look at?

Thanks!

Incorrect render() method in VectorEnv class.

https://github.com/facebookresearch/habitat-api/blob/8c5329ce9f777395b45c2daac135377e629ec148/habitat/core/vector_env.py#L313-L331
In this function you send the tuple

(args, {"mode": "rgb_array", **kwargs})

However, it does not contain the command index that you want the worker to execute. Furthermore, your environments do not have an rgb_array rendering mode; they use rgb instead. This makes render() incorrect.
I think you have to replace the line with the write_fn call with:

write_fn((RENDER_COMMAND, (args, {"mode": "rgb", **kwargs})))

Some objects of some scenes in the MatterPort3D dataset are void and not accessible

I tried to iterate over the scenes in the MatterPort3D dataset and obtain all the object information. I did something like the following:

scene = env.sim.semantic_annotations()
for obj in scene.objects:
    print(obj)

It turned out that some objects are void and cannot be accessed, so I encountered the following error output:

<habitat_sim_bindings.SemanticObject object at 0x7f2bd02260d8>
<habitat_sim_bindings.SemanticObject object at 0x7f2bd0226110>
Segmentation fault (core dumped)

The scene id for this error is ur6pFq6Qu1A, which is from the train split. There are also some other problematic scenes in all splits (train, val, test), which also need to be checked.

Thanks,
Xin
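
A defensive workaround sketch (not an official fix, and it may or may not avoid the crash above): skip entries with missing category information instead of touching them blindly.

scene = env.sim.semantic_annotations()
for obj in scene.objects:
    # Skip void/incomplete semantic objects before accessing their fields.
    if obj is None or obj.category is None:
        continue
    print(obj.id, obj.category.name(), obj.aabb.center)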

Move all interfaces from List[float] to np.array(float)

Motivation

While we designed agent position and state as List[float] to support a more general API and stay JSON friendly, np.array(float) is more efficient for communication with HSIM (Habitat Simulator) and is more native for PyTorch usage.

Steps for implementation:

  1. Find List[float] in the core interfaces, check whether it is convenient to switch to np.array(float), and do the switch. Switch in the core implementation classes as well.
  2. Update the dataset reading and writing code to convert between List[float] and np.array(float), since np.array isn't JSON serializable (see the sketch below).
  3. Run mypy to check that the APIs are consistent, and run pytest.
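
A sketch of the conversion concern in step 2: np.ndarray is not JSON serializable, so dataset I/O has to convert at the boundary, for example:

import json

import numpy as np

position = np.array([1.0, 0.5, 2.0])

serialized = json.dumps({"position": position.tolist()})   # write: ndarray -> list
restored = np.asarray(json.loads(serialized)["position"])  # read: list -> ndarray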

SUNCG pointnav dataset

Hello, in the "Architecture of Habitat-API" diagram you refer to the SUNCG pointgoal dataset. Is this available yet or is it on the roadmap?

Thank you.

Clarification

Can anyone please help me understand the difference between 'pointgoal' in the observation and self._env.current_episode.goals?
observation['pointgoal'] is in R^2 while the episode goals are in R^3; what does each of them represent?

Library: habitat_sim_bindings.so -- Supported Python Version: Unknown

I have Python 2.7, Python 3.5 and Python 3.6 installed on my GPU laptop, running Ubuntu 16.04.
I tried to install the API with Python 3.6 and ran ./build.sh as indicated in the sim README.
I tested the sim and it works fine.
Then I tested examples/example.py and the following error occurred:

Traceback (most recent call last):
  File "/habitat-sim/habitat_sim/bindings/__init__.py", line 10, in <module>
    from habitat_sim._ext.habitat_sim_bindings import Simulator as SimulatorBackend
ModuleNotFoundError: No module named 'habitat_sim._ext'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/habitat-sim/habitat_sim/bindings/dev_bindings.py", line 72, in <module>
    import habitat_sim_bindings
ImportError: /habitat-sim/build/esp/bindings/habitat_sim_bindings.so: undefined symbol: _Py_ZeroStruct

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "examples/example.py", line 9, in <module>
    import demo_runner as dr
  File "/habitat-sim/examples/demo_runner.py", line 15, in <module>
    import habitat_sim
  File "/habitat-sim/habitat_sim/__init__.py", line 12, in <module>
    from .simulator import *
  File "/habitat-sim/habitat_sim/simulator.py", line 7, in <module>
    from habitat_sim.bindings import *
  File "/habitat-sim/habitat_sim/bindings/__init__.py", line 18, in <module>
    from .dev_bindings import *
  File "/habitat-sim/habitat_sim/bindings/dev_bindings.py", line 84, in <module>
    raise ImportError(msg)
ImportError:
Could not import habitat sim bindings
Please follow the building instructions in the README
Found the following libraries
Library: habitat_sim_bindings.so -- Supported Python Version: Unknown
Your python interpreter is version 36
Please re-build habitat sim with this version of python

I have been trying to install the API for 2 days. Any advice will be appreciated! Thanks!

Habitat-Sim change causes failing test

Steps to reproduce

python test/test_mp3d_eqa.py

Observed Results

                logger.info(
                    "diff position: {} diff rotation: {} "
                    "cur_state.position: {} shortest_path.position: {} "
                    "cur_state.rotation: {} shortest_path.rotation: {} action: {}"
                    "".format(
                        cur_state.position - point.position,
>                       cur_state.rotation - point.rotation,
                        cur_state.position,
                        point.position,
                        cur_state.rotation,
                        point.rotation,
                        point.action,
                    )
                )
E               TypeError: Binary operation involving quaternion and neither float nor quaternion.

test/test_mp3d_eqa.py:196: TypeError

Shortest path returns points whose position and rotation are List[float] and List[float], but cur_state returns points that are np.array and quaternion.

top-down map visualization

author @dhruvbatra:
Creating an issue so we don't forget.

We should include top-down map visualization.
v0: Static: showing just the occupied cells in the environment. Right now @abhiskk has a hacky solution, but in future this should be pulled in from habitat-sim. I created an issue there.
https://github.com/fairinternal/habitat-sim/issues/105

v1: Dynamic: showing agent progress in the navigation episode. This can use a lot of code from #131. @mathfac has a preliminary version of this visualization. We should add a flag to indicate the target.

v2: 3D top down map: like this
https://sites.google.com/view/scene-memory-transformer

Add metadata information about all episodes to dataset class

Author @danielgordon10:
These are things that are shared between multiple (probably many) episodes.

Examples:

  • Vocabulary for a question-answering dataset
  • Semantic classes for a scene
  • Objects present in a scene

It would be good to have these things precomputed so you don't have to iterate through a dataset to find the information, and they would then be nicely linked with the episodes.

Overhead camera view with an actual agent

author @dhruvbatra: [I believe this feature should be implemented in habitat-api not habitat-sim but please correct me if I'm mistaken]

Feature request: we should add an overhead camera sensor so we can see the agent move around like a player in a game.

See for example:
https://raw.githubusercontent.com/StanfordVL/GibsonEnv/master/misc/husky_camera.png
https://raw.githubusercontent.com/Unity-Technologies/obstacle-tower-env/master/banner.png

This could be an additional "sensor" that is attached to the agent, but the observations from this sensor are not used by the agent (the agent only uses ego-centric camera).

Feature: Adding Surface Normals Sensor

Author @danielgordon10:
I have code to turn depth into surface normals, but ideally it should be something provided by habitat-api directly.

Here is my surface normals code. One nice thing about it is that I can batch it and run it on the GPU, though you should at least be able to do the second part on the habitat-sim side if necessary.

import numpy as np
import torch
import torch.nn.functional as F

surfnorm_kernel = None


def depth_to_surface_normals(depth, surfnorm_scalar=256):
    # depth is a torch tensor in N x C x H x W order.
    global surfnorm_kernel
    if surfnorm_kernel is None:
        # Sobel-style filters for the x and y gradients; the z channel is
        # left at zero here and filled with a constant 1 below.
        surfnorm_kernel = torch.from_numpy(
            np.array(
                [
                    [[1, 2, 1], [0, 0, 0], [-1, -2, -1]],
                    [[1, 0, -1], [2, 0, -2], [1, 0, -1]],
                    [[0, 0, 0], [0, 0, 0], [0, 0, 0]],
                ]
            )
        )[:, np.newaxis, ...].to(dtype=torch.float32, device=depth.device)
    with torch.no_grad():
        surface_normals = F.conv2d(
            depth, surfnorm_scalar * surfnorm_kernel, padding=1
        )
        surface_normals[:, 2, ...] = 1
        surface_normals = surface_normals / surface_normals.norm(
            dim=1, keepdim=True
        )
    return surface_normals
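
For example, a shape-only check with a made-up depth batch:

depth = torch.rand(2, 1, 128, 160)          # N x 1 x H x W depth batch
normals = depth_to_surface_normals(depth)   # -> shape (2, 3, 128, 160), unit-norm along dim 1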

[Feature Request] Compositional configuration

Author @erikwijmans:
We should support passing a list of configuration files along with a list of override options into the get_config method. This allows the configurations to be compositional (i.e. break up the agent config, task config, and dataset config into separate files), which better reflects the rest of the API. A list of override options allows for small changes without needing to create a new .yaml file and would play nicely with SLURM, as SLURM saves the batch file internally.

What I would like to see is something that allows this:

# args.config_files == ['tasks/pointnav.yaml',
#    'datasets/mp3d.pointnav.yaml',
#    'agents/rgbd-agent.yaml']
# args.extra_config_opts == ['ENVIRONMENT.MAX_EPISODE_STEPS', '1000']

cfg = habitat.get_config(args.config_files, args.extra_config_opts)

The order of the config files would matter, and it makes sense that it does: tasks/pointnav.yaml can define a default agent and dataset, and you can then easily overwrite those with other configs for a specific dataset and a specific agent.

Choosing what scene is loaded in Simulator

Hi, line 100 mentions that config_env.SIMULATOR.SCENE = dataset.episodes[0].scene_id. Does this mean that the agent is always evaluated on just the first scene_id present in val.json?
Also, the train.json file is empty, so how are the different scenes loaded during training?

[ORBSLAM2 Agent bug] estimatedGoalPos outside map bounds

Thank you for the great work and releasing the simulator and baselines.

I am trying to run the SLAM baseline on the MP3D val split, which runs for the first 19 episodes, and crashes on the 20th.

Steps to reproduce

  1. create point_nav_mp3d_val.yaml from point_nav_mp3d.yaml provided, with:
    SENSORS: ['RGB_SENSOR', 'DEPTH_SENSOR'] and SPLIT: val
  2. python habitat_baselines/agents/slam_agents.py --task-config configs/tasks/pointnav_mp3d_val.yaml

Observed Results

  • When evaluating episode 20 (scene: 2azQ1b91cZZ.glb, start_position: [16.921436309814453, 0.12711000442504883, 9.845909118652344], goal_position: [4.39768648147583, 0.12711000442504883, -7.979780673980713]), the script crashes with
    IndexError: index 418 is out of bounds for dimension 3 with size 400
    (caused by self.estimatedGoalPos2D = tensor([[201., 418.]]), which is then used to set ones in the goal_map of size [1, 1, 400, 400]).

Line numbers in the stacktrace are off due to minor local edits that do not change the functionality (e.g. debug code added to isolate the problem).

Traceback (most recent call last):
  File "habitat_baselines/agents/slam_agents.py", line 635, in <module>
    main()
  File "habitat_baselines/agents/slam_agents.py", line 629, in main
    metrics = benchmark.evaluate(agent)
  File "...habitat-api/habitat/core/benchmark.py", line 64, in evaluate
    action = agent.act(observations)
  File "habitat_baselines/agents/slam_agents.py", line 340, in act
    self.planned2Dpath, self.planned_waypoints = self.plan_path()
  File "habitat_baselines/agents/slam_agents.py", line 472, in plan_path
    ] = 1.0
IndexError: index 418 is out of bounds for dimension 3 with size 400

Expected:

  • handle value out of bounds / not get an estimated position outside the bounds?

Relevant Code

habitat-api/habitat_baselines/agents/slam_agents.py -- plan_path()

Any hints how to fix this?
Many thanks in advance.
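
One possible guard, offered only as a sketch rather than the maintainers' fix: clamp the estimated goal position into the map bounds before it is used as an index, e.g.:

import torch

map_size = 400  # size of the goal map used in plan_path()
estimated_goal = torch.tensor([[201.0, 418.0]])
# Clamp into [0, map_size - 1] so the goal map indexing cannot go out of bounds.
estimated_goal = estimated_goal.clamp(0, map_size - 1)   # -> tensor([[201., 399.]])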
