
calvin_env's Introduction

calvin_env

Installation

git clone --recursive https://github.com/mees/calvin_env.git
cd calvin_env/tacto
pip install -e .
cd ..
pip install -e .
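
If both editable installs succeed, a quick sanity check (a minimal sketch, assuming the default package names) is to import the two packages from a fresh interpreter:

# Sanity check after installation: both the tacto submodule and calvin_env
# itself should import without errors from a fresh Python interpreter.
import tacto        # tactile sensor simulation installed from the submodule
import calvin_env   # the environment package installed with "pip install -e ."

print("tacto and calvin_env imported successfully")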

calvin_env's People

Contributors

lukashermann, mees


calvin_env's Issues

Questions about VR device

The platform I am using does not have VR support, so I am reading everything from OpenVR via SteamVR.

I would really appreciate it if you could help me out here.

What is the purpose of `self.gripper_position_offset` and `self.gripper_orientation_offset`?

Are they the initial position and orientation of the robot gripper in the world?

I am trying to set up VR control on my platform. However, I found that a rotation of the VR device does not match the corresponding rotation in the game.
Are these variables supposed to help with this situation?

Difference between `use_egl=True` and `use_egl=False`

Hi, I am playing with calvin_env and found that evaluation with `use_egl=True` and `use_egl=False` differs substantially: with EGL (i.e. GPU rendering) the evaluation gives reasonable accuracies, but when I set `use_egl=False`, the results become much worse. The table in the rendering also looks slightly different. I am wondering whether something is missing or whether this is expected. Thanks!

How to render in a high-resolution mode?

Hi, I am using CALVIN for my research and I want to show some visualizations in a higher resolution than the default 200x200. I tried to change the camera size, but it did not work. I assume there must be a solution, since the visualization demo shown on the home page looks much sharper.
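
For readers with the same question, here is a minimal sketch of one way to raise the render resolution: override the camera size in the hydra config before the environment is instantiated. The config name, the override keys, and the constructor argument are assumptions about this repository's layout; verify them against conf/ and conf/cameras/ before relying on them.

# Sketch: compose the hydra config with larger camera dimensions and
# instantiate the environment from it. Config name and keys are assumptions;
# the camera overrides may live under "cameras.*" or "env.cameras.*".
import hydra

with hydra.initialize(config_path="conf"):
    cfg = hydra.compose(
        config_name="config_data_collection",   # assumed top-level config
        overrides=[
            "cameras.static.width=640",          # assumed key; default is 200
            "cameras.static.height=640",         # assumed key; default is 200
        ],
    )
    env = hydra.utils.instantiate(cfg.env, show_gui=False)  # show_gui assumed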

Major concern about evaluation

Hi there!
I've found that rolling out ground-truth trajectories (labelled by the language annotator) from the dataset is not always evaluated as successful by Tasks.get_task_info. This seems quite concerning. Perhaps I've done something wrong on my end?
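
For context, a minimal sketch of the replay-and-check loop described above, assuming env and tasks are instantiated from the same config the dataset was recorded with; the reset(robot_obs=..., scene_obs=...) and get_task_info(start_info, end_info) signatures should be double-checked against play_table_env.py and tasks.py in this repo.

# Sketch: replay the ground-truth actions of one annotated episode and ask
# the task oracle which tasks it considers completed. episode_paths is a
# hypothetical, already-sorted list of episode_XXXXXXX.npz files spanning
# the annotated window.
import numpy as np

first = np.load(episode_paths[0], allow_pickle=True)
env.reset(robot_obs=first["robot_obs"], scene_obs=first["scene_obs"])
start_info = env.get_info()

for path in episode_paths:
    step = np.load(path, allow_pickle=True)
    obs, _, _, info = env.step(step["actions"])   # absolute 7-DoF action

print("tasks judged successful:", tasks.get_task_info(start_info, info))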


"Rich" package missing from requirements

After running the installation as per the instructions, play_table_env.py throws an error because it imports from the "rich" package, which is not part of requirements.txt. This can be fixed manually via pip install rich, but it would be good to include "rich" in the requirements.txt file.

(Note that I installed the environment without the "calvin_models" package, i.e. only "calvin_env".)

When open_drawer and close_drawer are performed consecutively, only open_drawer is flagged successful. Is this expected behaviour?

The env flags open_drawer as a successful task but does not recognize close_drawer as successful when [open_drawer, close_drawer] are performed in that exact order. However, if I manually set

self.start_info["scene_info"]["doors"]["base__drawer"]["current_state"] = 0.2

once open_drawer is flagged successful, then both tasks are flagged as successful. I was wondering whether this is expected behaviour. Ideally, I want to run a long-horizon task involving the same env objects, for example [open_drawer, turn_on_lightbulb, close_drawer, turn_off_lightbulb].

You can run the CALVIN_Eval.ipynb in the attached zip file to reproduce this. Note that I have also provided a trajectory (a small .npy file) with which you can perform [open_drawer, close_drawer]. The trajectory looks as follows:

Trajectory
CALVIN_Eval.zip
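
For reference, the standard CALVIN evaluation loop sidesteps this by re-capturing the start_info baseline before every subtask, so the drawer's post-open state becomes the reference for close_drawer. A minimal sketch of that loop follows; policy and max_steps are placeholders, and the get_task_info_for_set call should be checked against tasks.py in this repo.

# Sketch: take a fresh start_info at the beginning of each subtask so the
# oracle compares against the state the previous subtask left behind.
obs = env.reset()
for subtask in ["open_drawer", "close_drawer"]:
    start_info = env.get_info()                 # fresh baseline for this subtask
    for _ in range(max_steps):                  # max_steps: your own step budget
        action = policy(obs, subtask)           # placeholder for your controller
        obs, _, _, current_info = env.step(action)
        if subtask in tasks.get_task_info_for_set(start_info, current_info, {subtask}):
            break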

Relative actions do not seem to work. Unable to spot the issue.

I cannot seem to make relative actions work. For the same EE trajectory, everything works as expected when absolute actions are given to the environment, but fails when relative actions are given. I have tested this thoroughly and suspect a bug. Despite some debugging, I could not pinpoint the problem in robot.py. Here is an example (note that the same trajectory data was used in both cases):

Absolute actions vs. relative actions (animated comparison)

Please use my carefully prepared Google Colab notebook to reproduce the problem. The notebook needs to be in the root folder of this repository, and it saves two GIFs of the robot in action. No additional installations are required. However, you will need my tiny 241 KB .npy file with some trajectories (extracted from the CALVIN dataset). You can get the file from here, or you can get both the notebook and the data from the attached zip file.

I am confident that there is no mistake in the way I am feeding the actions to the env. Please let me know if I can help reproduce the issue. If this behaviour is due to a mistake on my end, an explanation would be very helpful. Thanks!

CALVIN_Eval.zip
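
As a reference point for debugging, here is a hedged sketch of how a relative action is commonly derived from two consecutive absolute EE poses in CALVIN-style data. The 0.02 m position and 0.05 rad orientation limits are assumptions; verify them against the robot config under conf/ before comparing with your own conversion.

# Sketch: convert an absolute target pose into a relative action.
# Assumed layout: [x, y, z, euler_x, euler_y, euler_z, gripper].
# The clipping limits are assumptions -- check the robot config in conf/.
import numpy as np

def angle_diff(a, b):
    """Smallest signed per-axis difference between two Euler angle vectors."""
    return (b - a + np.pi) % (2 * np.pi) - np.pi

def to_relative_action(abs_action, robot_obs, max_pos=0.02, max_orn=0.05):
    rel_pos = np.clip(abs_action[:3] - robot_obs[:3], -max_pos, max_pos) / max_pos
    rel_orn = np.clip(angle_diff(robot_obs[3:6], abs_action[3:6]), -max_orn, max_orn) / max_orn
    return np.concatenate([rel_pos, rel_orn, abs_action[-1:]])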
