Comments (4)
Hi, I will look at these RGB anomalies, thanks for the note! Let me try to address the navigation config issue here:
The tutorial does not cover defining tasks with multiple agents. This is something we will probably add in a separate tutorial; I will think about the easiest way to explain it.
The only thing you need to support humans when defining a task (even PointNav) is to define the agents, actions, and measurements. See for example social_nav, which is very similar to what you want to do. The social nav config pulls in hssd_human_spot_social_nav.yaml, which defines a Spot agent, a human agent (lines 1-12), and the actions (up to line 20).
For PointNav, you could do something very similar. The difficulty is in following the Hydra configs...
If you look at pointnav, it calls /benchmark/nav/pointnav: pointnav_gibson, which is here. You can see how it defines simulator:agents:main_agent. The same happens for ObjectNav. What you need to do is modify the config (in the first case, pointnav_gibson.yaml) so that instead of main_agent it defines agent_0 and agent_1. Then you can make agent_1 the humanoid, just like in hssd_human_spot_social_nav.yaml.
Let me know if that helps, or if you run into other blockers!
from habitat-lab.
Hi, @xavierpuigf. Thank you for your reply. I had previously tried a method similar to what you mentioned, and I have now tried again. Below are my test code and config:
The test code is as follows:
import os

import git
from matplotlib import pyplot as plt

import habitat

repo = git.Repo(".", search_parent_directories=True)
dir_path = repo.working_tree_dir
data_path = os.path.join(dir_path, "data")
os.chdir(dir_path)

if __name__ == "__main__":
    config = habitat.get_config(
        config_path=os.path.join(
            dir_path,
            "habitat-lab/habitat/config/benchmark/nav/pointnav/pointnav_gibson_test.yaml",
        ),
    )
    # Close any env left over from a previous run (e.g. when re-running in a notebook).
    try:
        env.close()
    except NameError:
        pass
    env = habitat.Env(config=config)
The config file (pointnav_gibson_test.yaml) is as follows:
# @package _global_

defaults:
  - pointnav_base_wsy
  - /habitat/dataset/pointnav: gibson
  - /habitat/simulator/[email protected]_0: spot
  - /habitat/simulator/[email protected]_0.sim_sensors.rgb_sensor: rgb_sensor
  - /habitat/simulator/[email protected]_1: human
  - /habitat/simulator/[email protected]_1: rgbd_head_agent
  # - /habitat/task/[email protected]_0_base_velocity: base_velocity_non_cylinder
  # - /habitat/task/[email protected]_1_base_velocity: base_velocity
  # - /habitat/task/[email protected]_1_rearrange_stop: rearrange_stop
  # - /habitat/task/[email protected]_1_pddl_apply_action: pddl_apply_action
  # - /habitat/task/[email protected]_1_oracle_nav_action: oracle_nav_action
  # - /habitat/task/[email protected]_1_oracle_nav_randcoord_action: oracle_nav_action
  - _self_

habitat:
  environment:
    max_episode_steps: 500
  simulator:
    agents_order:
      - agent_0
      - agent_1
Visualization code is as follows:
obs = env.reset()
valid_key_list = ["rgb", "head_rgb", "head_depth"]
print(obs.keys())

# Collect the sensor keys we want to display, then index subplots by
# their position in that filtered list (indexing by the position in
# obs.keys() breaks when invalid keys are interleaved).
valid_names = [name for name in obs.keys() if name in valid_key_list]
_, ax = plt.subplots(1, len(valid_names))
for ind, name in enumerate(valid_names):
    ax[ind].imshow(obs[name])
    ax[ind].set_axis_off()
    ax[ind].set_title(name)
As shown in the visualization results, even though I added the Spot agent and the humanoid agent, their sensors show the same location, and after performing multiple env.reset() calls I did not see the added agent models. So I wonder whether there is an issue with this method? If I use RearrangeSim, this problem does not occur.
Additionally, I encounter problems when trying to add actions in the config above. The error message is too long, so I put it in the attached file error.txt.
My purpose is to add some moving models to train the main_agent's dynamic-obstacle avoidance capabilities. Maybe it is not necessary to define a task with multiple agents? In fact, I only want the main agent to be trained; the rest can just serve as dynamic obstacles, but I hope they can walk freely in the environment, e.g. moving with OracleNavActionConfig. Could you please tell me: if I define a task with multiple agents, will all added agents be involved in the task training?
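As a framework-free illustration of this "dynamic obstacle" idea, the sketch below implements a toy walker that repeatedly samples a random goal and steps toward it. This is not habitat's OracleNavAction and all names here are hypothetical; it just shows the sample-goal/step-toward-it pattern that an oracle-nav-driven obstacle follows.

```python
import math
import random

class WaypointWalker:
    """Toy 2D walker: heads straight for a goal point, and on arrival
    samples a new random goal inside the bounds (illustrative only)."""

    def __init__(self, pos, bounds=(0.0, 10.0), speed=0.25, seed=0):
        self.pos = list(pos)
        self.bounds = bounds
        self.speed = speed
        self._rng = random.Random(seed)
        self.goal = self._sample_goal()

    def _sample_goal(self):
        lo, hi = self.bounds
        return [self._rng.uniform(lo, hi), self._rng.uniform(lo, hi)]

    def step(self):
        dx = self.goal[0] - self.pos[0]
        dy = self.goal[1] - self.pos[1]
        dist = math.hypot(dx, dy)
        if dist <= self.speed:
            # Goal reached: snap to it and pick a new goal.
            self.pos = list(self.goal)
            self.goal = self._sample_goal()
        else:
            # Take one fixed-size step toward the goal.
            self.pos[0] += self.speed * dx / dist
            self.pos[1] += self.speed * dy / dist
        return tuple(self.pos)

walker = WaypointWalker((0.0, 0.0), seed=1)
trajectory = [walker.step() for _ in range(50)]
```

In habitat, oracle nav additionally plans around obstacles using the navmesh; this toy version only captures the goal-resampling behavior.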
Thank you for your patient explanation. If there is anything wrong with my approach, please point it out. Thank you.
That makes sense! I will try to help; some of the blockers you are hitting here will be useful as we update the simulator to be more flexible and easier to use.
I think the easiest thing would be to look at the social navigation task and configs, since it is pretty much the same task you are trying to do here, and defines humans that walk around the scene.
The main issue you have here is that your simulator does not have an agent_manager. The agent manager is what deals with multi-agent support. It is also what gives you per-agent observations (note that once you use that manager, the observation keys will be prefixed with the agent name, e.g. agent_0_rgb, agent_0_head_depth). So your issue is not so much about humans but more about multi-agent support.
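The key prefixing described above can be handled with a small helper when you want each agent's observations grouped together. This is a sketch, not a habitat API; it only assumes keys follow the agent_&lt;i&gt;_&lt;sensor&gt; convention mentioned here:

```python
def split_by_agent(observations):
    """Group flat multi-agent observation keys like 'agent_0_head_rgb'
    into per-agent dicts: {'agent_0': {'head_rgb': ...}, ...}."""
    per_agent = {}
    for key, value in observations.items():
        parts = key.split("_", 2)  # 'agent_0_head_rgb' -> ['agent', '0', 'head_rgb']
        if len(parts) == 3 and parts[0] == "agent" and parts[1].isdigit():
            agent = f"agent_{parts[1]}"
            per_agent.setdefault(agent, {})[parts[2]] = value
    return per_agent

obs = {"agent_0_rgb": "img0", "agent_0_head_depth": "d0", "agent_1_head_rgb": "img1"}
print(split_by_agent(obs))
# {'agent_0': {'rgb': 'img0', 'head_depth': 'd0'}, 'agent_1': {'head_rgb': 'img1'}}
```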
You can see in rearrange_sim where the agent manager is defined: https://github.com/facebookresearch/habitat-lab/blob/main/habitat-lab/habitat/tasks/rearrange/rearrange_sim.py#L118
What I would recommend is one of the following:
- Add an agent manager to your sim
- Make your sim a subclass of rearrange_sim, so that it inherits the agent manager
- Start with the social navigation task and modify it to use the dataset you want to work with
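The second option can be sketched in plain Python to show the inheritance pattern. The class names and methods below are simplified stand-ins, not habitat's actual RearrangeSim API; the point is only that the base class owns the agent-manager setup, so a subclass gets it for free:

```python
class AgentManager:
    """Stand-in for habitat's multi-agent manager: one slot per agent."""
    def __init__(self, agent_names):
        self.agents = {name: None for name in agent_names}

class RearrangeLikeSim:
    """Stand-in for rearrange_sim: builds the agent manager on reconfigure."""
    def reconfigure(self, config):
        self.agents_mgr = AgentManager(config["agents_order"])

class PointNavMultiAgentSim(RearrangeLikeSim):
    """A PointNav-style sim that inherits the agent manager setup."""
    def reconfigure(self, config):
        super().reconfigure(config)  # base class creates self.agents_mgr
        # ...PointNav-specific setup would go here...

sim = PointNavMultiAgentSim()
sim.reconfigure({"agents_order": ["agent_0", "agent_1"]})
print(list(sim.agents_mgr.agents))  # ['agent_0', 'agent_1']
```

In the real code, RearrangeSim builds its ArticulatedAgentManager from the agents listed in the simulator config (see the rearrange_sim.py link above), so a subclass inherits that behavior the same way.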
I hope this is useful! Happy to keep helping on this thread.
Thanks a lot! I think these suggestions will be very helpful. I will let you know if there are any developments!