
vikashplus / robohive

428 stars / 11 watchers / 79 forks / 66.72 MB

A unified framework for robot learning

Home Page: https://sites.google.com/view/robohive

License: Apache License 2.0

Python 96.67% Shell 0.62% Jupyter Notebook 1.77% Cap'n Proto 0.94%
benchmarks environments python reinforcement-learning robotics simulation tasks imitation-learning mujoco mujoco-environments

robohive's People

Contributors

0wu, andrearosasco, aravindr93, divye02, dosssman, gaoyuezhou, girifb, jasonma2016, jdvakil, palanc, raghavauppuluri13, shahrutav, sriramsk1999, vikashplus, vittorio-caggiano, vittorione94, vmoens


robohive's Issues

dependencies missing

The following dependencies are missing from a clean install:

  • transforms3d
  • pydantic
  • mjrl
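Until the package requirements are updated, a possible workaround (assuming a pip-based install) is to add them manually, e.g. pip install transforms3d pydantic, plus mjrl from its GitHub repository (https://github.com/aravindr93/mjrl).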

Modify Adroit left hand Mujoco model

Hello, I am trying to convert the Adroit right hand model into a left hand. The appended files show my current progress on the relocate scenario. I have adjusted the following parts:

@DAPG_assets_left.xml
1) mesh scale="-0.001 0.001 0.001" for "wrist", "palm", "lfmetacarpal", "knuckle", "F3", "F2", "F1", "TH3_z", "TH2_z", "TH1_z"

@DAPG_Adroit_left.xml
2) body pos x to -x for "ffknuckle", "mfknuckle", "rfknuckle", "thbase"
3) body quat="7.35709949e-04 -3.82683311e-01 3.04741039e-04 9.23879240e-01" for "thbase"
4) joint axis inversion for "WRJ1", "FFJ3", "MFJ3", "RFJ3", "LFJ4", "LFJ3", "THJ4", "THJ3"
5) body pos="0.05 0 0.044" for "lfmetacarpal"
6) joint pos="-0.033 0 0" for "LFJ4"

@DAPG_relocate_left.xml
7) body pos="-0.033 -0.7 0.2" for "forearm"

For "lfmetacarpal" body pos x adjustment, it seems not following rule in 2). Thus by trial-and-error I found out 5) visually looks ok... Consequently 6) and 7) are needed for compensation... Actually I don't think this way is resonable.. could you correct it for me?

Also, are there other parts/files that need modification as well? For example, geom poses or site poses?

Thank you for the help in advance.

DAPG_Adroit_left.xml.txt
DAPG_assets_left.xml.txt
DAPG_relocate_left.xml.txt
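A note on the mirroring itself: reflecting a right-hand model into a left-hand one across the x = 0 plane follows a fixed rule, so the per-body adjustments above should not need trial and error. The sketch below is a minimal illustration of that rule (plain NumPy, not RoboHive code); geom and site positions/quaternions inside each body would follow the same transformation.

import numpy as np

def mirror_x(pos, quat, axis):
    """Mirror a MuJoCo body pose and joint axis across the x = 0 plane.

    pos  = (x, y, z)    body position        -> (-x,  y,  z)
    quat = (w, x, y, z) body quaternion      -> ( w,  x, -y, -z)
    axis = (x, y, z)    joint axis direction -> (-x,  y,  z)

    Whether a mirrored joint axis should additionally be negated (to keep the
    original joint ranges and sign conventions) depends on the model, so treat
    this as a starting point rather than a complete recipe.
    """
    pos_m  = np.array([-pos[0], pos[1], pos[2]])
    quat_m = np.array([quat[0], quat[1], -quat[2], -quat[3]])
    axis_m = np.array([-axis[0], axis[1], axis[2]])
    return pos_m, quat_m, axis_m

# Example: feed the right-hand model's body pos/quat and joint axis to obtain
# the left-hand values, e.g. mirror_x(pos, quat, axis) for "thbase".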

About mjrl_envs

Has the mjrl_envs package been renamed? I came here from someone else's link and could not find a package with that name.

Replaying data from Roboset FK1-v4(human) dataset with FK1_RelaxFixed-v4 environment.

Hello,

I'm trying to replay the Roboset FK1-v4 (human) dataset and I'm facing problems with the new v4 kitchen environment. I'm able to replay the training data using the kitchen_relax-v1 environment from Relay Policy Learning, but I am unable to replay it using the FK1_RelaxFixed-v4 environment. The arm moves seemingly at random instead of following the trajectory from the data. Here are the code snippets with the functioning kitchen_relax-v1 replay code and the non-functioning FK1_RelaxFixed-v4 one. Thank you very much!

Apart from that, do you happen to have an estimated date for when the multi-task suites other than the kitchen will be released? Thanks!

FK1_RelaxFixed-v4, not working:

import torch
import h5py
import numpy as np
import gym
import time
from tqdm import tqdm
import robohive

torch.cuda.empty_cache()

trace = '/SNIP/datasets/human_demos_playdata/FK1_RelaxFixed_v2d-v4_60_20230506-111653_trace.h5'
with h5py.File(trace, 'r') as file:
    h = dict()
    # kettle to top left, bottom stove, right slider, left cupboard
    for key in file['Trial60'].keys():
        if key == 'env_infos':
            h['qpos'] = file['Trial60/env_infos/state/qpos'][()]
            h['qvel'] = file['Trial60/env_infos/state/qvel'][()]
            continue
        h[key] = file['Trial60'][key][()]
    
    print('loaded 60')

actions = h['actions']
qpos = h['qpos'][0]
qvel = h['qvel'][0]

speedup = 1

env_name = 'FK1_RelaxFixed-v4'
# env_name = 'kitchen-v2'
env = gym.make(env_name)

env.reset()
init_qpos = qpos.copy()
init_qvel = qvel.copy()
env.sim.data.qpos[:] = init_qpos
env.sim.data.qvel[:] = init_qvel
env.sim.forward()

# pick scaling for actions
act_mid = np.zeros(env.sim.model.nu)
act_amp = 2 * np.ones(env.sim.model.nu)

env.mj_render()

obs = env.get_obs()
for i in tqdm(range(actions.shape[0] - 1)):
    ctrl = actions[i]

    # act = ctrl
    act = act_mid + ctrl * act_amp
    next_obs, reward, done, env_info = env.step(act)

    # if i % render_skip == 0:
    env.mj_render()
    time.sleep(env.dt / speedup)

    obs = next_obs
    if done:
        break

env.close()

kitchen_relax-v1, working:

import torch
import h5py
import numpy as np
import gym
import time
from tqdm import tqdm
import adept_envs.franka
torch.cuda.empty_cache()

trace = '/SNIP/datasets/human_demos_playdata/FK1_RelaxFixed_v2d-v4_60_20230506-111653_trace.h5'
with h5py.File(trace, 'r') as file:
    h = dict()
    # put kettle on top left, bottom stove, right slider, left cupboard
    for key in file['Trial60'].keys():
        if key == 'env_infos':
            h['qpos'] = file['Trial60/env_infos/state/qpos'][()]
            h['qvel'] = file['Trial60/env_infos/state/qvel'][()]
            continue
        h[key] = file['Trial60'][key][()]
    
    print('loaded 60')

actions = h['actions']
qpos = h['qpos'][0]
qvel = h['qvel'][0]

speedup = 1

env = gym.make('kitchen_relax-v1')

env.reset()
init_qpos = qpos.copy()
init_qvel = qvel.copy()
env.sim.data.qpos[:init_qpos.shape[0]] = init_qpos
env.sim.data.qvel[:init_qvel.shape[0]] = init_qvel
env.sim.forward()

env.mj_render()

print(f'act_mid: {env.act_mid}, {env.act_mid.shape}\nact_amp: {env.act_amp}, {env.act_amp.shape}\nskip: {env.skip}\nframe_skip: {env.frame_skip}\nmodel.opt.timestep: {env.model.opt.timestep}\n')

for i in tqdm(range(actions.shape[0] - 1)):
    act = actions[i]

    observation, reward, done, info = env.step(act)
    env.mj_render()
    time.sleep((env.model.opt.timestep * env.frame_skip) / speedup)
    if done:
        break

env.close()
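One way to separate a data problem from an action-scaling problem is to replay the recorded states rather than the actions. The sketch below assumes the trace's env_infos qpos/qvel arrays cover the environment's full state (as the first snippet already assumes for the initial frame) and reuses h and speedup from above; if this tracks the demonstration, the mismatch is most likely in how the recorded actions map into the v4 action space.

import time
import gym
import robohive

# State playback: drive the sim directly from the recorded qpos/qvel instead of
# stepping with actions. `h` and `speedup` are as defined in the snippets above.
env = gym.make('FK1_RelaxFixed-v4')
env.reset()
for t in range(h['qpos'].shape[0]):
    env.sim.data.qpos[:] = h['qpos'][t]
    env.sim.data.qvel[:] = h['qvel'][t]
    env.sim.forward()
    env.mj_render()
    time.sleep(env.dt / speedup)
env.close()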

set_env_state function not working

On calling set_env_state
I am getting an error:

Traceback (most recent call last):
  File "test.py", line 67, in <module>
    env.set_env_state(state)
  File "/home/rutavms/research/robohive/latest/mj_envs/mj_envs/envs/env_base.py", line 487, in set_env_state
    self.set_state(qp, qv, act)
AttributeError: 'FrankaAppliance' object has no attribute 'set_state'

I believe the set_env_state() function should be changed to:

def set_env_state(self, state_dict):
    """
    Set full state of the environment
    Default implementation provided. Override if env has custom state
    """
    qp = state_dict['qpos']
    qv = state_dict['qvel']
    act = state_dict['act']
    self.sim.set_state(qpos=qp, qvel=qv, act=act)
    self.sim_obsd.set_state(qpos=qp, qvel=qv, act=act)
    if self.sim.model.nmocap > 0:
        self.sim.model.mocap_pos[:] = state_dict['mocap_pos']
        self.sim.model.mocap_quat[:] = state_dict['mocap_quat']
        self.sim_obsd.model.mocap_pos[:] = state_dict['mocap_pos']
        self.sim_obsd.model.mocap_quat[:] = state_dict['mocap_quat']
    if self.sim.model.nsite > 0:
        self.sim.model.site_pos[:] = state_dict['site_pos']
        self.sim.model.site_quat[:] = state_dict['site_quat']
        self.sim_obsd.model.site_pos[:] = state_dict['site_pos']
        self.sim_obsd.model.site_quat[:] = state_dict['site_quat']
    self.sim.model.body_pos[:] = state_dict['body_pos']
    self.sim.model.body_quat[:] = state_dict['body_quat']
    self.sim.forward()
    self.sim_obsd.model.body_pos[:] = state_dict['body_pos']
    self.sim_obsd.model.body_quat[:] = state_dict['body_quat']
    self.sim_obsd.forward()

Creating a bright colored LED-like object?

Hello everyone.
Thanks a lot for your answer on Twitter @vikashplus. Opening an issue as you suggested.

I am currently working with the Franka arms environment for pick and place.
I am trying to create an object looking like a moderately bright LED.
For example, the blue status LED at the base of the Franka robotics Arm, like the screenshot below.

So far I have played with the light from the docs but it seems to only support general room lighting.

Another thing I tried was the lightButton from the furniture_sim repository, but the "light effect" has neither reflection nor shininess-like properties.

Is there perhaps another way to achieve the desired effect?
Or, more generally, an object that emits light itself?

Thanks a lot for your continued assistance.
Looking forward to hearing from you again.
Best regards.

[screenshot: blue status LED at the base of the Franka arm]

PS: Also asked this at the Mujoco repository: google-deepmind/mujoco#1626
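For what it's worth, MJCF materials support an emission attribute, which makes a geom render self-lit independently of the scene lights; the minimal standalone sketch below (plain mujoco bindings; names and values are illustrative, not taken from the RoboHive or furniture_sim models) shows the idea. Note that emission brightens the geom itself but does not cast light onto surrounding objects; attaching a small light to the same body would be needed for that.

import mujoco

# A hypothetical MJCF snippet with an emissive "LED" sphere. emission > 0 adds
# self-illumination; specular/shininess control the highlight on the geom.
LED_XML = """
<mujoco>
  <asset>
    <material name="led_blue" rgba="0.2 0.4 1.0 1.0" emission="3" specular="0.9" shininess="0.9"/>
  </asset>
  <worldbody>
    <light pos="0 0 1"/>
    <geom name="led" type="sphere" size="0.005" material="led_blue"/>
  </worldbody>
</mujoco>
"""

model = mujoco.MjModel.from_xml_string(LED_XML)
data = mujoco.MjData(model)
mujoco.mj_forward(model, data)  # render with mujoco.viewer or an offscreen Renderer to inspect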

Task Space Mapping for Franka Kitchen environments

Hello,

I am trying to set up a task-space training pipeline for the Franka Kitchen environments.

My understanding is that the input to RoboHive environments is in joint-space. In the teleop script the input to the rpFrankaRobotiqData-v0 environment is simply the normalized qpos.

This does not work for the Franka Kitchen tasks. I have a trained torchRL agent which predicts the actions, qpos and qvel. I found that the env.robot._act_mode is "vel" for the Franka Kitchen environments, so I expected that the action is the normalized qvel i.e. action = env.robot.normalize_actions(qvel).

This does not work and the robot does not move as expected. What am I doing wrong?


PS - This seems like a bug in the normalize_actions function. Shouldn't it be:

                        act_rng = (actuator['vel_range'][1]-actuator['vel_range'][0])/2.0

instead of

                        act_rng = (actuator['vel_range'][1]-actuator['pos_range'][0])/2.0
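For context, the normalization in question is a symmetric linear map between an actuator's control range and [-1, 1]; the sketch below illustrates why mixing vel_range[1] with pos_range[0] gives a wrong half-range whenever the two ranges differ (the range keys mirror the actuator dictionary quoted above and are otherwise an assumption about its exact structure).

# Minimal sketch of symmetric (de)normalization against a single range.
def to_normalized(raw, rng):
    mid = (rng[1] + rng[0]) / 2.0
    amp = (rng[1] - rng[0]) / 2.0   # half-range; both endpoints must come from the same range
    return (raw - mid) / amp

def to_raw(act, rng):
    mid = (rng[1] + rng[0]) / 2.0
    amp = (rng[1] - rng[0]) / 2.0
    return mid + act * amp

# e.g. to_normalized(desired_qvel, actuator['vel_range']) when _act_mode == "vel"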

Multi-robot environment support

Hello @vikashplus

Thanks a lot to you and the team for putting together this collection of robotics learning environments.

I understand from the paper that you plan to eventually support multi-robot / multi-agent environments, but is there any reference, documentation, or resource you could share about what would be required to get a multi-robot version of FrankaReach-v0, for example?

Thank you for your time.

MuJoCo installation issues on MacBook

note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for mujoco
Running setup.py clean for mujoco
Failed to build mujoco
ERROR: Could not build wheels for mujoco, which is required to install pyproject.toml-based projects

Issues from mujoco_py/mujoco

I installed with pip install robohive on Windows 11 running an Ubuntu 20.04 (64-bit) VM, and I have run into a lot of problems on my side. When I run the demo, something always goes wrong at this step:

[error screenshot]

Since many of the issues are related to mujoco_py, I got a lot of them solved by installing mujoco-py before installing robohive, following answers from the mujoco_py repo such as openai/mujoco-py#284. However, now I seem to be stuck here:

[error screenshot]

which, according to openai/mujoco-py#737 and google-deepmind/mujoco#849, can only be solved by building MuJoCo from source with modifications. I am still waiting for clear instructions from them.

So I am wondering whether I am on the right track, since this may take a significant amount of time.

I would appreciate any advice.

All files inside the simhive folder are empty

Hello dear Vikash,

I hope you and everyone in your family are doing well. I wanted to use robohive for a Reinforcement Learning project in robotics scenarios. Since I might have to create custom environments, I decided to download robohive from the GitHub repository and install it with both the mujoco-py and mujoco backends. I am working on Ubuntu 20.04.6 LTS through WSL2 on Windows 11. In this working environment I have made a venv virtual environment with Python 3.8. I have the MuJoCo 200 simulator installed, and the activate file of my environment has the necessary paths for the bin folder and the license for the MuJoCo sim. After installing robohive, I attempted to run the demo command: python -m robohive.utils.examine_env -e FrankaReachRandom-v0.

However I received the following error:

Traceback (most recent call last):
  File "robohive/utils/examine_env.py", line 110, in <module>
    main()
  File "/home/cocp5/robohiveRL02/lib/python3.8/site-packages/click/core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
  File "/home/cocp5/robohiveRL02/lib/python3.8/site-packages/click/core.py", line 1055, in main
    rv = self.invoke(ctx)
  File "/home/cocp5/robohiveRL02/lib/python3.8/site-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/cocp5/robohiveRL02/lib/python3.8/site-packages/click/core.py", line 760, in invoke
    return __callback(*args, **kwargs)
  File "robohive/utils/examine_env.py", line 55, in main
    env = gym.make(env_name) if env_args==None else gym.make(env_name, **(eval(env_args)))
  File "/home/cocp5/robohiveRL02/lib/python3.8/site-packages/gym/envs/registration.py", line 156, in make
    return registry.make(id, **kwargs)
  File "/home/cocp5/robohiveRL02/lib/python3.8/site-packages/gym/envs/registration.py", line 101, in make
    env = spec.make(**kwargs)
  File "/home/cocp5/robohiveRL02/lib/python3.8/site-packages/gym/envs/registration.py", line 73, in make
    env = cls(**_kwargs)
  File "/home/cocp5/robohive/robohive/envs/arms/reach_base_v0.py", line 41, in __init__
    super().__init__(model_path=model_path, obsd_model_path=obsd_model_path, seed=seed)
  File "/home/cocp5/robohive/robohive/envs/env_base.py", line 57, in __init__
    self.sim = SimScene.get_sim(model_path)
  File "/home/cocp5/robohive/robohive/physics/sim_scene.py", line 56, in get_sim
    return SimScene.create(model_handle=model_handle, backend=SimBackend.MUJOCO_PY)
  File "/home/cocp5/robohive/robohive/physics/sim_scene.py", line 43, in create
    return mjpy_sim_scene.MjPySimScene(*args, **kwargs)
  File "/home/cocp5/robohive/robohive/physics/sim_scene.py", line 73, in __init__
    self.sim = self._load_simulation(model_handle)
  File "/home/cocp5/robohive/robohive/physics/mjpy_sim_scene.py", line 53, in _load_simulation
    model = mujoco_py.load_model_from_path(model_handle)
  File "cymj.pyx", line 175, in mujoco_py.cymj.load_model_from_path
Exception: Failed to load XML file: /home/cocp5/robohive/robohive/envs/arms/franka/assets/franka_reach_v0.xml. mj_loadXML error: b"XML Error: Include error: 'XML parse error at line 0, column 0:\nFailed to open file\n'\nElement 'include', line 13, column 5"

Then I noticed that the simhive folder was empty.

What went wrong, and how can I fix it?

Thank you very much in advance for all your valuable help!

Kind regards,

Christos Peridis
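One likely cause, for what it's worth: RoboHive resolves its simhive assets through git submodules, so a clone made without --recursive leaves those folders empty and the included XML files cannot be found; running git submodule update --init --recursive inside the checkout (or re-cloning with git clone --recursive) should populate them. This is a hedged suggestion based on the repository's setup instructions, assuming a source checkout rather than a PyPI install.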

Using and registering RoboHive environments to the Ray API

Hello dear Dr. Vikash,

I hope you and everyone in your family are doing well! For conducting Reinforcement Learning experiments I have been using the Ray API, and more specifically the algorithms implemented in RLlib and the Tune library for hyperparameter tuning. In the past I integrated and registered the CT-graph benchmark in the Ray API as a custom gym environment. You can see how I did this in the example code here.

I have developed the following code, which works on the same principles as the code I developed for integrating the CT-graph benchmark:

import robohive
import gym
import ray

ray.init()

resources = ray.cluster_resources()
print(resources)

def env_creator(env_config={}):
    env = gym.make('FrankaPickPlaceFixed-v0')
    env.reset()
    return env

from ray.tune.registry import register_env
register_env("RoboHive_Pick_Place_0", env_creator)

sac_config = {
    "env": "FrankaPickPlaceFixed-v0",  # Specify your environment class here
    "framework": "torch",
    "num_workers": 4,
    "num_gpus": 1,
    "monitor": True,
    # Add more SAC-specific config here
}

from ray import tune

analysis = tune.run(
    "SAC",
    config=sac_config,
    stop={"training_iteration": 100},  # Specify stopping criteria
    checkpoint_at_end=True,
)

However, the above code throws the following error:

(RolloutWorker pid=318903) ray::RolloutWorker.__init__() (pid=318903, ip=158.125.234.46, repr=<ray.rllib.evaluation.rollout_worker.RolloutWorker object at 0x7f878b6db6a0>)
(RolloutWorker pid=318903) File "/home/lunet/cocp5/roboHive200/lib/python3.10/site-packages/ray/rllib/env/utils.py", line 54, in gym_env_creator
(RolloutWorker pid=318903) return gym.make(env_descriptor, **env_context)
(RolloutWorker pid=318903) File "/home/lunet/cocp5/roboHive200/lib/python3.10/site-packages/gym/envs/registration.py", line 156, in make
(RolloutWorker pid=318903) return registry.make(id, **kwargs)
(RolloutWorker pid=318903) File "/home/lunet/cocp5/roboHive200/lib/python3.10/site-packages/gym/envs/registration.py", line 100, in make
(RolloutWorker pid=318903) spec = self.spec(path)
(RolloutWorker pid=318903) File "/home/lunet/cocp5/roboHive200/lib/python3.10/site-packages/gym/envs/registration.py", line 142, in spec
(RolloutWorker pid=318903) raise error.UnregisteredEnv('No registered env with id: {}'.format(id))
(RolloutWorker pid=318903) gym.error.UnregisteredEnv: No registered env with id: FrankaPickPlaceFixed-v0

Why is the system unable to detect the registered RoboHive environment? Is it something hidden in the layers of the API that I have not yet understood? Which tutorials or other material would you suggest I study in order to better understand the structure of the RoboHive API, how it works, and how I could modify it?
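Two details visible in the snippet above may explain the UnregisteredEnv error: the SAC config points at the raw gym id ("FrankaPickPlaceFixed-v0") rather than the name registered with Tune, and each RLlib rollout worker runs in a fresh process where robohive has not been imported, so the RoboHive ids are not in that process's gym registry. A hedged sketch of the usual workaround (same names as above):

from ray.tune.registry import register_env

def env_creator(env_config=None):
    # Import inside the creator so every Ray worker process registers the
    # RoboHive environments before gym.make is called.
    import robohive  # noqa: F401
    import gym
    return gym.make('FrankaPickPlaceFixed-v0')

register_env("RoboHive_Pick_Place_0", env_creator)

sac_config = {
    "env": "RoboHive_Pick_Place_0",   # the Tune-registered name, not the raw gym id
    "framework": "torch",
    "num_workers": 4,
    "num_gpus": 1,
}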

Thank you very much in advance for the valuable help!!!

Kind regards,

Christos Peridis

Rendered depth is upside down

Hello,

Thanks for the great work on RoboHive. I am observing a strange issue: the depth rendered by RoboHive is flipped upside down. The issue exists for all 4 cams (wrist, left, right, top), as well as in other environments used directly through RoboHive or through torchRL's RoboHiveEnv.

Minimal example:

import gym
import robohive
import matplotlib.pyplot as plt
import numpy as np

env = gym.make("rpFrankaRobotiqData-v0")
env.reset()

extero_dict = env.get_exteroception()

rgb = extero_dict["rgb:left_cam:240x424:2d"]
dep = extero_dict["d:left_cam:240x424:2d"].squeeze()

plt.imsave("rgb.png", rgb)
plt.imsave("depth.png", dep)  ### Vertically flipped ###
plt.imsave("depth_flipped.png", np.flipud(dep))  ### Works ###

The corresponding images:

  1. rgb.png
  2. depth.png
  3. depth_flipped.png

Running 'python utils/visualize_env.py --env_name hammer-v0' causes: File "cymj.pyx", line 175, in mujoco_py.cymj.load_model_from_path

LOGS:
Exception: Failed to load XML file: /home/pi/workspace/tanwenxuan/Project/mj_envs/mj_envs/envs/hand_manipulation_suite/assets/DAPG_hammer.xml. mj_loadXML error: b"Error: could not open STL file '/home/pi/workspace/tanwenxuan/Project/mj_envs/mj_envs/envs/hand_manipulation_suite/assets/../../../sims/Adroit/resources/meshes/../../../robohive.stl'\nObject name = robohive, id = 11, line = 69, column = 9"

R3M Franka Kitchen Environments / Demonstration Data

Hey @vikashplus! This is an awesome effort, and I'd love to help out if I can! I'm looking for a clean way to integrate the R3M Franka Kitchen and Adroit HMS evaluations into the Voltron Evaluation repository (https://github.com/siddk/voltron-evaluation), and I think that using Robohive as a dependency instead of the R3M Evaluation branch (+ corresponding forked submodules) makes the most sense.

From the docs though, it's not clear which environments/dataset is the right one to use; for the evaluations we ran in the Voltron work (https://arxiv.org/abs/2302.12766) we followed the same setup as R3M, evaluating 3 camera angles at 5, 10, and 25 demonstrations for each of the 5 tasks (with 50 heldout environments for validation). Should I be using the v3 or v4 environments? And how should I specify the three camera angles (left_cap2, right_cap2, default)?

Thanks so much - and if there's anything I can help out with to incorporate the changes to setup this eval, please let me know!

Request for support for xArm Series arm

Hello, many thanks for this solid work!

I would like to hear about your plans for supporting new robot arms, especially the xArm series made by uFactory.

Awaiting your reply and thanks in advance!

Deformable object simulation support in RoboHive

Thanks for the authors' hard work in building a comprehensive simulation environment based on MuJoCo. From the demo figures, it looks like deformable object simulation is already implemented, but I couldn't find the corresponding environment. Would you happen to know when the deformable object simulation code will be released? Many thanks.

Visual interface black

When I run python utils/visualize_env.py --env_name hammer-v0, it runs, but there is nothing in the loaded graphical interface. The interface is all black, and I don't know why.

mujoco_py issue when creating a conda environment. Same code runs fine in a Python venv environment.

Hello dear Dr. Vikash,

How are you doing? I hope you are doing well. I had to conduct some research during the past week, so I postponed working on the RoboHive API; I am starting again today. Previously, I had successfully made a Python 3 venv virtual environment where I installed RoboHive.

On my system I had downloaded and extracted the mujoco200 simulator under the ".mujoco" folder, along with its key and license. To make the mujoco200 simulator visible to the virtual environment, I appended the following to the activate file in the bin folder of my venv virtual environment:

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/.mujoco/mujoco200/bin
export MUJOCO_PY_MJKEY_PATH=~/.mujoco/mjkey.txt
export MUJOCO_PY_MUJOCO_PATH=~/.mujoco/mujoco200

After appending the above to the activate file of my venv environment, I activated it and installed the RoboHive API from the cloned GitHub repository with the following command:

pip install -e ".[mujoco]"

Then I ran the example command: python robohive/utils/examine_env.py -e FrankaReachRandom-v0

Everything ran fine and I was able to see the simulation.

However, when I attempted to follow the same steps to install the RoboHive API in a conda environment, it did not work as expected. The installation seemed to proceed well; unfortunately, when executing the example code I received the following error:

Compiling /home/lunet/cocp5/anaconda3/envs/robohiveRL04/lib/python3.8/site-packages/mujoco_py/cymj.pyx because it changed.
[1/1] Cythonizing /home/lunet/cocp5/anaconda3/envs/robohiveRL04/lib/python3.8/site-packages/mujoco_py/cymj.pyx
Traceback (most recent call last):
  File "/home/lunet/cocp5/anaconda3/envs/robohiveRL04/lib/python3.8/site-packages/robohive/utils/import_utils.py", line 3, in mujoco_py_isavailable
    import mujoco_py
  File "/home/lunet/cocp5/anaconda3/envs/robohiveRL04/lib/python3.8/site-packages/mujoco_py/__init__.py", line 15, in <module>
    from mujoco_py.builder import cymj, ignore_mujoco_warnings, functions, MujocoException
  File "/home/lunet/cocp5/anaconda3/envs/robohiveRL04/lib/python3.8/site-packages/mujoco_py/builder.py", line 499, in <module>
    cymj = load_cython_ext(mujoco_path)
  File "/home/lunet/cocp5/anaconda3/envs/robohiveRL04/lib/python3.8/site-packages/mujoco_py/builder.py", line 106, in load_cython_ext
    mod = load_dynamic_ext('cymj', cext_so_path)
  File "/home/lunet/cocp5/anaconda3/envs/robohiveRL04/lib/python3.8/site-packages/mujoco_py/builder.py", line 125, in load_dynamic_ext
    return loader.load_module()
ImportError: /home/lunet/cocp5/anaconda3/envs/robohiveRL04/bin/../lib/libstdc++.so.6: version `GLIBCXX_3.4.30' not found (required by /usr/lib/x86_64-linux-gnu/libLLVM-15.so.1)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "robohive/utils/examine_env.py", line 110, in <module>
    main()
  File "/home/lunet/cocp5/anaconda3/envs/robohiveRL04/lib/python3.8/site-packages/click/core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
  File "/home/lunet/cocp5/anaconda3/envs/robohiveRL04/lib/python3.8/site-packages/click/core.py", line 1055, in main
    rv = self.invoke(ctx)
  File "/home/lunet/cocp5/anaconda3/envs/robohiveRL04/lib/python3.8/site-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/lunet/cocp5/anaconda3/envs/robohiveRL04/lib/python3.8/site-packages/click/core.py", line 760, in invoke
    return __callback(*args, **kwargs)
  File "robohive/utils/examine_env.py", line 55, in main
    env = gym.make(env_name) if env_args==None else gym.make(env_name, **(eval(env_args)))
  File "/home/lunet/cocp5/anaconda3/envs/robohiveRL04/lib/python3.8/site-packages/gym/envs/registration.py", line 156, in make
    return registry.make(id, **kwargs)
  File "/home/lunet/cocp5/anaconda3/envs/robohiveRL04/lib/python3.8/site-packages/gym/envs/registration.py", line 101, in make
    env = spec.make(**kwargs)
  File "/home/lunet/cocp5/anaconda3/envs/robohiveRL04/lib/python3.8/site-packages/gym/envs/registration.py", line 73, in make
    env = cls(**_kwargs)
  File "/home/lunet/cocp5/anaconda3/envs/robohiveRL04/lib/python3.8/site-packages/robohive/envs/arms/reach_base_v0.py", line 41, in __init__
    super().__init__(model_path=model_path, obsd_model_path=obsd_model_path, seed=seed)
  File "/home/lunet/cocp5/anaconda3/envs/robohiveRL04/lib/python3.8/site-packages/robohive/envs/env_base.py", line 57, in __init__
    self.sim = SimScene.get_sim(model_path)
  File "/home/lunet/cocp5/anaconda3/envs/robohiveRL04/lib/python3.8/site-packages/robohive/physics/sim_scene.py", line 56, in get_sim
    return SimScene.create(model_handle=model_handle, backend=SimBackend.MUJOCO_PY)
  File "/home/lunet/cocp5/anaconda3/envs/robohiveRL04/lib/python3.8/site-packages/robohive/physics/sim_scene.py", line 42, in create
    from robohive.physics import mjpy_sim_scene # type: ignore
  File "/home/lunet/cocp5/anaconda3/envs/robohiveRL04/lib/python3.8/site-packages/robohive/physics/mjpy_sim_scene.py", line 15, in <module>
    import_utils.mujoco_py_isavailable()
  File "/home/lunet/cocp5/anaconda3/envs/robohiveRL04/lib/python3.8/site-packages/robohive/utils/import_utils.py", line 11, in mujoco_py_isavailable
    raise ModuleNotFoundError(f"{e}. {help}")
ModuleNotFoundError: /home/lunet/cocp5/anaconda3/envs/robohiveRL04/bin/../lib/libstdc++.so.6: version `GLIBCXX_3.4.30' not found (required by /usr/lib/x86_64-linux-gnu/libLLVM-15.so.1).
Options:
(1) follow setup instructions here: https://github.com/openai/mujoco-py/
(2) install mujoco_py via pip (pip install mujoco_py)
(3) install free_mujoco_py via pip (pip install free-mujoco-py)

I tried to create the conda environment and install RoboHive with all the proposed pip install commands, both the ones for PyPI and the ones from the GitHub repository. Despite my efforts, however, the same issue was always present.

As a final attempt I created an activate.sh file inside the bin folder of the conda environment I was working on, and appended at the end the same export commands I had added to the activate file of the venv environment. I also created a deactivate.sh to unset these environment variables when deactivating the environment. It did not work either...

What is the real cause of this error and how can I overcome this?

It is important to note that the same issue was present both in my WSL2 Ubuntu 20.04.06 LTS system and in a native Linux Ubuntu 22.04 LTS workstation system.

I also wanted to let you know that I am working on a document describing my WSL2 setup for working with the RoboHive API, and I plan to send it to you as soon as possible!

Thank you very much in advance for your valuable help!!!

Kind regards,

Christos Peridis

What's the best way to train/deploy a policy in RoboHive?

What's the best way to evaluate a policy in RoboHive that was trained outside of RoboHive?

A policy with state inputs

class StatePolicy:
    def __init__(self, env):
        self.env=env
    def get_action(self, obs_state):
        action = self.policy(obs_state)
        return action, action

A policy with custom visual inputs

class VisualPolicy:
    def __init__(self,env, user_encoder):
        self.env=env
        self.user_encoder=user_encoder
    def get_action(self, obs_state):
        prop_feat = self.env.get_proprio()
        img_feat = self.env.get_extero()
        feature = user_encoder(prop_feat, img_feat)
        action = self.policy(feature)
        return action, action

by @0wu

Cannot find installation of real FFmpeg during offline rendering

Details

  • Offline rendering goes well but crashes at the end with FFMPEG error
  • Likely an issue with the conda env
  • Likely an issue on macOS; I remember it working successfully on Linux.

Code

(mjrl-env) ~/Libraries/mj_envs/mj_envs$ python utils/visualize_env.py -e FrankaReachFixed-v0 -r offscreen
RS:> Registering Biomechanics Envs
RS:> Registering Hand Envs
Episode 0: rendering offline Creating offscreen glfw
1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, Traceback (most recent call last):
  File "utils/visualize_env.py", line 64, in <module>
    main()
  File "/Users/vikashplus/.conda/envs/mjrl-env/lib/python3.7/site-packages/click/core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "/Users/vikashplus/.conda/envs/mjrl-env/lib/python3.7/site-packages/click/core.py", line 782, in main
    rv = self.invoke(ctx)
  File "/Users/vikashplus/.conda/envs/mjrl-env/lib/python3.7/site-packages/click/core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/Users/vikashplus/.conda/envs/mjrl-env/lib/python3.7/site-packages/click/core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "utils/visualize_env.py", line 61, in main
    filename=filename)
  File "/Users/vikashplus/Libraries/mj_envs/mj_envs/envs/env_base.py", line 419, in visualize_policy_offscreen
    skvideo.io.vwrite( file_name, np.asarray(arrs))
  File "/Users/vikashplus/.conda/envs/mjrl-env/lib/python3.7/site-packages/skvideo/io/io.py", line 60, in vwrite
    assert _HAS_FFMPEG, "Cannot find installation of real FFmpeg (which comes with ffprobe)."
AssertionError: Cannot find installation of real FFmpeg (which comes with ffprobe).
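For reference, the assertion is raised by scikit-video, which only checks whether an ffmpeg binary (with ffprobe) is discoverable on the PATH; installing one into the active environment (for example from conda-forge, or via Homebrew on macOS) is usually enough to satisfy it. This is a general skvideo requirement rather than anything RoboHive-specific.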

Resetting robot during env setup

I found this line in env_base.MujocoEnv._setup

observation, _reward, done, _info = self.step(np.zeros(self.sim.model.nu))
# Question: Should we replace above with following? Its specially helpful for hardware as it forces a env reset before continuing, without which the hardware will make a big jump from its position to the position asked by step.
# observation = self.reset()

Since on the real robot the requested position causes an overcurrent, I commented out that line and uncommented observation = self.reset().

I wonder, since it wasn't uncommented already, whether doing so can cause any issues?

Control console print verbosity

Environments seem to be very verbose, always printing "closing", "resetting", etc. Is there an option to disable that? Better: shouldn't we enable that only for debugging? In torchrl we have a VERBOSE environment variable that is used for this.

BUG: set_env_state throws an error

python -c "import gym; import mj_envs; env=gym.make('kitchen_light_off-v3'); env.set_env_state(env.get_env_state());"

This throws an error:

AttributeError: 'mujoco_py.cymj.PyMjModel' object has no attribute 'mocap_pos'

Roadmap?

This looks like an amazing framework for robot learning, with many considerations about the various things we need to worry/care about. I would like my team to use it, but it seems like there are still parts that are WIP, so it's tricky to commit to without knowing how the development team plans to proceed. Would you be willing to put up a roadmap on the wiki?

For example, right now we use PerAct, which uses voxels (via point clouds) and motion planning, so point cloud and motion planning support would not only allow us to reimplement PerAct in RoboHive, but it would also open up a lot of other robot learning algorithms. In a similar vein, is MuJoCo 3 somewhere on the roadmap (allowing non-convex geometries and deformable objects)?

Are quadruped robots available now?

Hi, thank you for the great work! From the white paper, there are quadruped robots in the MuJoCo environment. Are they available now? I am looking for the Spot Mini or MIT Cheetah model in MuJoCo. Thank you!
[image: quadruped robots from the white paper]

Elevate obs_keys and weighted_reward_keys to _setup() signature

https://github.com/vikashplus/mj_envs/blob/6952fd4a1782cd8ed683f160ec6615310cfafb71/mj_envs/envs/biomechanics/reach_v0.py#L45-L46

  • Currently obs_keys and weighted_reward_keys are getting hardcoded to the defaults
  • Kwargs can't override them
  • I think if we elevate these arguments to the _setup() signature, and provide defaults there, we can get around this limitation
  • Updated code will look something like this. What do you think?
    def _setup(self,
                target_reach_range:dict,
                obs_keys=DEFAULT_OBS_KEYS,
                weighted_reward_keys=DEFAULT_RWD_KEYS_AND_WEIGHTS,
                **kwargs,
        ):

        self.target_reach_range = target_reach_range

        super()._setup(obs_keys=obs_keys,
                       weighted_reward_keys=weighted_reward_keys,
                       sites=self.target_reach_range.keys(),
                       **kwargs)

Dataset Description for FK1-v4(human).

Hi,

Thank you for this really great repo. This is the easiest to use and most stable of all the environments I have worked with so far. I really appreciate what you have been doing here!!

I just want to ask if there is any description for observations of the dataset linked here: https://drive.google.com/drive/folders/1a-q6TpskJD3J7G2FcJBzYRb7UxG1N_K4.

I am trying to use the FK1-v4 (human) dataset for my model. I downloaded the dataset from the Google Drive linked in the wiki above, and found that there are some mismatches between the dataset in the original repo (https://github.com/google-research/relay-policy-learning) and the one in the Google Drive.

The observation has 60 dimensions in the original dataset, while the one in the Google Drive has 75 dimensions. I found the description for 59 of the dimensions in Gymnasium (https://robotics.farama.org/envs/franka_kitchen/franka_kitchen/), and the 60th is the time step; I assume these are the first 60 of the 75 dimensions in FK1-v4 (human). What are the descriptions for the remaining 15 dimensions?

I have browsed for a while and couldn't find any useful information.

Any help would be greatly appreciated!

Best Regards
Juyan

Not able to visualize the environment on AWS EC2

I am able to run utils/visualize_env.py in the terminal and it returns the score for 10 episodes, but it does not show the actual visualization.

Any suggestions?

I am using xvfb-run bash to get the output described above; otherwise, it returns a libGLEW error.
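One hedged workaround on a headless EC2 instance is to sidestep the interactive window entirely and use the offscreen rendering path (the -r offscreen option shown in the FFmpeg issue above) to save a video of the rollout, since onscreen GLFW windows under xvfb also depend on a working GLEW setup.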
