
mees / calvin

Stars: 267 · Watchers: 6 · Forks: 42 · Size: 1.66 MB

CALVIN - A benchmark for Language-Conditioned Policy Learning for Long-Horizon Robot Manipulation Tasks

Home Page: http://calvin.cs.uni-freiburg.de

License: MIT License

Python 95.33% Shell 0.89% Jupyter Notebook 3.78%
natural-language-processing robotics deep-learning grounding vision-language manipulation computer-vision pytorch vision vision-and-language

calvin's People

Contributors

aroefer, erickrosete, lgtm-com[bot], lukashermann, mees

calvin's Issues

I get the following error when trying to clone calvin_env

1. clone from calvin_env
git clone git@github.com:mees/calvin_env.git
Cloning into 'calvin_env'...
git@github.com: Permission denied (publickey).
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.

  2. directly following the instructions
    git clone --recurse-submodules https://github.com/mees/calvin.git
    Cloning into 'calvin'...
    remote: Enumerating objects: 1360, done.
    remote: Counting objects: 100% (1360/1360), done.
    remote: Compressing objects: 100% (726/726), done.
    remote: Total 1360 (delta 815), reused 1108 (delta 570), pack-reused 0
    Receiving objects: 100% (1360/1360), 1.56 MiB | 6.71 MiB/s, done.
    Resolving deltas: 100% (815/815), done.
    Submodule 'calvin_env' (git@github.com:mees/calvin_env.git) registered for path 'calvin_env'
    Cloning into '/mnt/research/calvin/calvin_env'...
    git@github.com: Permission denied (publickey).
    fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.

ModuleNotFoundError: No module named 'calvin_agent.datasets.play_data_module'

Hi,

I am trying to run this: python evaluation/evaluate_policy.py --dataset_path /home/systemtec/calvin/dataset/task_D_D --train_folder /home/systemtec/calvin/D_D_static_rgb_baseline --checkpoint /home/systemtec/calvin/D_D_static_rgb_baseline/mcil_baseline.ckpt --debug

ModuleNotFoundError: No module named 'calvin_agent.datasets.play_data_module'

There is no play_data_module.py in the directory /calvin_models/calvin_agent/datasets.

Error during downloading the D dataset

Hello Oier,
I am trying to download the D dataset but keep getting the error below. There is no error when downloading other distributions such as the debug dataset.

(calvin_venv) user@user:~/Pictures/Gaurav/calvin/dataset$ sh download_data.sh D

Downloading task_D_D ...
--2023-09-11 13:50:17-- http://calvin.cs.uni-freiburg.de/dataset/task_D_D.zip
Connecting to 172.31.2.4:8080... connected.
Proxy request sent, awaiting response... 403 Forbidden
2023-09-11 13:50:17 ERROR 403: Forbidden.

unzip: cannot find or open task_D_D.zip, task_D_D.zip.zip or task_D_D.zip.ZIP.
saved folder: task_D_D

Physics of the robot arm is odd

I am using the SlideEnv provided in the tutorial Jupyter notebook for evaluating a trained RL policy. In some of my rollouts, I noticed in the static camera RGB image that there is a weird piece sticking out of the robot's arm. In the video I attached, that additional piece seems to be manipulating the sliding drawer. I also notice this thing protruding from the arm in the picture of Env B from the paper. Is this supposed to be part of the environment? Thanks!

Link to video: https://drive.google.com/file/d/13P954eWY-stc-Ty4xEhZ2D9HDs6lu_Wb/view?usp=sharing


The proportion of the recorded robot interaction data with language instructions

Hi,

Thanks for your excellent benchmark!

I have a question regarding the proportion of the recorded robot interaction data with language instructions.

The CALVIN paper says that "we annotate only 1% of the recorded robot interaction data with language instructions."

After downloading the dataset "task_D_D", "ep_start_end_ids.npy" under the training folder records 512046 unique episodes (saved as .npz files). Under the "training/lang_annotations" folder, "auto_lang_ann.npy" records 303794 episodes, of which 192607 are unique. It thus seems that 192607 episodes out of all 512046 episodes in the training set are annotated with language instructions. The proportion is 192607/512046 ≈ 37.6%, quite different from the 1% in the paper.

If my analysis is correct, why is the proportion of the recorded robot interaction data with language instructions in the dataset different from that in the paper?

Looking forward to your reply.
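
For reference, the counting can be reproduced with a short script. This is a minimal sketch; the file layout and dictionary keys are assumptions based on the descriptions above, so verify them on your copy of task_D_D:

import numpy as np

# (start, end) frame ids per recorded episode
eps = np.load("training/ep_start_end_ids.npy")
# language annotations; a pickled dict stored inside a 0-d numpy array
ann = np.load("training/lang_annotations/auto_lang_ann.npy", allow_pickle=True).item()

total_frames = int(sum(end - start + 1 for start, end in eps))

annotated = set()
for start, end in ann["info"]["indx"]:  # annotated (start, end) windows
    annotated.update(range(start, end + 1))

print(f"{len(annotated)} / {total_frames} = {len(annotated) / total_frames:.1%}")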

Feature Request: Only download language data

Hi,

Is there a way to only download the trajectories that are labeled with language data? I'd like to download the ABC->D dataset, but my use case requires only the trajectories with language data. This would be helpful as the disk space and download time would be only ~1% of what it is now.

Thanks!
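
There is no official partial download, but once the full dataset is on disk, a script along these lines could extract a language-only subset. A hedged sketch: the frame file naming (episode_XXXXXXX.npz) and annotation keys are assumptions from the dataset layout discussed in other issues:

import pathlib
import shutil

import numpy as np

src = pathlib.Path("task_ABC_D/training")
dst = pathlib.Path("task_ABC_D_lang_only/training")
dst.mkdir(parents=True, exist_ok=True)

ann = np.load(src / "lang_annotations" / "auto_lang_ann.npy", allow_pickle=True).item()
for start, end in ann["info"]["indx"]:
    for i in range(start, end + 1):
        name = f"episode_{i:07d}.npz"  # assumed frame file naming
        shutil.copy2(src / name, dst / name)

Note this only saves disk space after the fact; it does not avoid the initial download.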

Viewing data from the datasets easily

As it stands, the documentation only explains how to generate video from the dataset. It seems like one could use data_visualization.py to visualize the dataset somehow; however, when trying to run the file from the proper location relative to conf, it just segfaults (N=1). It would be nice if there were documentation on an easy way of just looking at a dataset interactively.

As an initial contribution towards the kind of visualization tool I have in mind, I submit PR #21 as a starting point for a discussion.

Request for Guidance on Secondary Development Based on the CALVIN Benchmark

Hello,

I am currently working on a project that involves the use of the CALVIN benchmark. I am interested in performing secondary development on this dataset to generate more tasks. However, I am not quite sure about the best approach to take.

Could you kindly provide some documentation or examples that could guide me through the process? Any assistance on how to manipulate the dataset to generate more tasks would be greatly appreciated.

Thank you in advance for your help.

Language annotations from the automatic annotation tool

Hi,

I visualized episodes with language instructions by sampling 4 images ("rgb_static") from each episode in the ABC training set (task_ABC_D), and some language instructions ("task" and "ann") seem to be wrong, as shown below:

35_744905_744969_grasp the blue block and rotate it right

The language instruction of the above episode is "grasp the blue block and rotate it right". You can check the example, whose ['info']['indx'] is (744905, 744969). Such cases are not rare in the ABC training set, but do not appear in the D training and validation sets.

I further used the automatic annotation tool to re-annotate the episodes as described in #24 and got the same task information ("rotate_blue_block_right" in the above example) as in the downloaded "auto_lang_ann.npy".

Is there any way to make the language annotations more accurate?

Looking forward to your reply.

Best regards,
Yuying

Illegal instruction, Global seed set to 42

Hi @mees
I am trying to run the training code but am facing the problem below.
Please guide me on how to correct it.
Thank you!

(calvin_venv) gyadav@papenburg:/vol/isy-rl/Gaurav_Yadav/calvin/calvin_models/calvin_agent$ python training.py datamodule.root_data_dir=/vol/isy-rl/Gaurav_Yadav/calvin/dataset/task_D_D datamodule/datasets=vision_lang_shm
Global seed set to 42
Illegal instruction

Major concern about evaluation

Hi there!
I've found that rolling out ground-truth trajectories (labelled by the language annotator) from the dataset is not always evaluated as successful by Tasks.get_task_info. This seems quite concerning. Perhaps I've done something wrong on my end?

To reproduce, I have forked the repo with minimal changes here: #33
The only difference is on line 47 of calvin_models/calvin_agent/evaluation/evaluate_policy_singlestep.py, where instead of rolling out the model I roll out the dataset actions.

The exact commands I ran from beginning to end:

# set up environment
git clone git@github.com:ezhang7423/calvin.git --recursive
cd calvin
conda create --name calvin python=3.8
conda activate calvin
pip install setuptools==57.5.0 torchmetrics==0.6.0
./install.sh

# get pretrained weights and fix the config.yaml
cp ./D_D_static_rgb_baseline/.hydra/config.yaml ./tmp.yaml
wget http://calvin.cs.uni-freiburg.de/model_weights/D_D_static_rgb_baseline.zip
unzip D_D_static_rgb_baseline.zip
mv ./tmp.yaml ./D_D_static_rgb_baseline/.hydra/config.yaml

# get data
cd dataset
./download_data.sh D
cd ../

# run the evaluation
python calvin_models/calvin_agent/evaluation/evaluate_policy_singlestep.py --dataset_path $DATA_GRAND_CENTRAL/task_D_D/ --train_folder ./D_D_static_rgb_baseline/ --checkpoint D_D_static_rgb_baseline/mcil_baseline.ckpt
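
The check being exercised is roughly the following (a hedged sketch; the names follow the evaluation script, but treat the exact signatures as assumptions):

def rollout_ground_truth(env, task_oracle, episode_actions, task_name):
    """Replay recorded dataset actions and ask the task oracle for success."""
    start_info = env.get_info()
    current_info = start_info
    for action in episode_actions:  # ground-truth actions from the episode
        obs, _, _, current_info = env.step(action)
    completed = task_oracle.get_task_info(start_info, current_info)
    return task_name in completed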

Missing default config: ../datamodule/observation_space/lang_rgb_static.yaml

Hi,

thanks for the great work! I had a very minor issue regarding a missing default config: there is a reference to a non-existent hydra config file in calvin/calvin_models/conf/datamodule/default.yaml:

defaults:
  - observation_space: lang_rgb_static

However, in calvin/calvin_models/conf/datamodule/observation_space there is no config file called lang_rgb_static.yaml.

Not sure if I missed something there.
Thanks!

some inconsistencies in the dataset

My student @emrecanacikgoz and I are experimenting with the calvin dataset and noticed a couple of minor inconsistencies that we wanted to bring to your attention (in case somebody relies on exact indices for alignment etc):

  • scene_info.npy for D/training indicates calvin_scene_A but it should be calvin_scene_D.
  • ABCD/training includes several frames from scene_D that are also in the validation set (seems like an off-by-one error): 37682, 53818, 244284, 420498.

(these could be errors we introduced in the download process as well).

Hardware and time used for training?

Hi, I thought I'd ask: for your MCIL baseline, could you share which hardware setup you used for training and roughly how long one training run took on that setup?

Is it similar to the HULC case?

Thank you very much!

Scene ID in Dataset

Hi! Thanks for creating this environment!

One question: in the provided datasets, is there a way to figure out which of the four scenes a data sample came from without manually looking at the scene rendering (e.g., in the large ABCD dataset)? I didn't see this info in the data structures. Or if you have a list of which sequences were collected in which scenes, that would also work!
I want to re-render some of the states in the dataset and want to figure out which scene I should load in the simulator!

Thanks,
Karl
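
As a possible starting point: the "some inconsistencies" issue above suggests scene_info.npy encodes which scene a frame range belongs to. If that holds, a lookup could be as simple as this sketch (the structure is assumed, not confirmed):

import numpy as np

scene_info = np.load("task_ABCD_D/training/scene_info.npy", allow_pickle=True).item()

def scene_of(frame_idx: int):
    for scene, (start, end) in scene_info.items():
        if start <= frame_idx <= end:
            return scene
    return None

print(scene_of(420498))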

Benchmark Question

Hello there!

I'm intrigued by the results showcased on your website, specifically the one related to Task Train D -> Test D. Upon reviewing the methods used, I noticed that there is a technique referred to as "baseline + delta actions".

Would you mind elaborating on what the term "delta actions" refers to in this particular context? Is the delta action learned by some learning methods like residual policy learning or something else? I would appreciate more information to further my understanding of the process.

Thank you!

Resetting env to state from dataset

Hi,

I'm trying to generate skill IDs / language annotations for the unlabeled frames in the dataset. I was thinking of using the reset_from_storage method in the environment class to reset to a state from the dataset and then using the task checker to check for task success. However, the reset function requires a serialized version of the env/robot state, which is not provided. Is there a way I could reset the env from offline data, or is there another way for me to get skill annotations for the entire dataset?

Thanks!
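
One possible workaround: the evaluation code appears to reset the environment from dataset frames by passing robot_obs and scene_obs keyword arguments to reset, which avoids the serialized-state requirement. A hedged sketch (the keyword interface is assumed from the evaluation utilities):

import numpy as np

def reset_env_to_frame(env, frame_path):
    """Restore a dataset state from an .npz frame file (sketch)."""
    frame = np.load(frame_path)
    return env.reset(robot_obs=frame["robot_obs"], scene_obs=frame["scene_obs"])

Resetting to an episode's first frame, then to its last frame, and comparing the env.get_info() snapshots with the task checker would then yield skill labels.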

May I have a small part of the data to get started?

Dear authors,

Thanks for this great work. It inspires me a lot.

However, the dataset size is too overwhelming for my desktop to get started.

May I have a small fraction of the data / a demo / a performance test to just run the pre-trained model?

How to render in a high-resolution mode?

Hi, I am using CALVIN for my research and I want to show some visualizations in higher resolution than the default 200x200. I tried to change the camera size, but it didn't work. I think there must be a solution, as the visualization demo shown on the home page looks good.
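
The default 200x200 comes from the camera entries in the calvin_env hydra config, so overriding their width/height before instantiating the environment should give larger renders. A hedged sketch, assuming the notebook-style config object (the exact keys are assumptions, not confirmed API):

import hydra

def make_high_res_env(cfg, width=640, height=640):
    # bump every camera's render resolution before instantiation
    for cam_cfg in cfg.env.cameras.values():
        cam_cfg.width = width
        cam_cfg.height = height
    return hydra.utils.instantiate(cfg.env)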

D_D_static_rgb_baseline epochs

I downloaded the pretrained D_D_static_rgb_baseline model checkpoint provided on the main page, but found it was trained for only 23 epochs. Is this checkpoint fully trained (i.e., has the loss already converged)?

Stuck at beginning of training

Hi!

Running the baseline training gets stuck at the very beginning. Do you have any clue why that might be? Is it normal for iterations to take 23.91s/it? There is no error.

The only difference from your requirements is the PyTorch version, as only the nightly release seems to work with CUDA and PyTorch Lightning 1.4.9 on our machine.

root_data_dir: /media/dennisushi/DREVO-P1/DATA/calvin/task_D_D/task_A_A
...
slurm: false
...
[2022-03-17 11:16:03,707][__main__][INFO] - * CUDA:
	- GPU:
		- GeForce RTX 3080
		- GeForce RTX 3080
	- available:         True
	- version:           11.1
* Packages:
	- numpy:             1.21.2
	- pyTorch_debug:     False
	- pyTorch_version:   1.12.0.dev20220224+cu111
	- pytorch-lightning: 1.4.9
	- tqdm:              4.63.0
...
...
[2022-03-17 11:16:22,988][calvin_agent.models.play_lmp][INFO] - Finished validation epoch 0
Global seed set to 42                                                                                                
Epoch 0:   0%|                                                                   | 0/19063 [00:00<00:04, 3979.42it/s][2022-03-17 11:16:23,004][calvin_agent.models.play_lmp][INFO] - Start training epoch 0
Epoch 0:   0%|                                          | 3/19063 [01:35<126:35:39, 23.91s/it, loss=42.1, v_num=6-02]

Can I train the model with one laptop GPU?

Hi, can I train the model with a laptop GPU? When I run the script it outputs the error:
[1] 26075 segmentation fault (core dumped)

I have an RTX 3060 (laptop version) with 6 GB VRAM.

Errors with EGL

Thanks for this work. When I followed the readme and ran python training.py datamodule.root_data_dir=/path/to/dataset/, it reported Segmentation fault (core dumped) when loading the EGL plugin in calvin_env.

I use an Ubuntu 16.04 server with an Nvidia 2080 Ti card; the driver is nvidia-container-runtime 3.5.0-1 and CUDA is 11.2. I have searched the Internet for a while and tried things such as installing mesa via sudo apt-get install libglfw3-dev libgles2-mesa-dev, but it still did not work.

I would like to ask how to enable EGL with my hardware setup, and what the function of EGL is (displaying the robot?).

By the way, how long does training the baseline take on the three datasets provided, i.e., task_D.zip, task_ABC_D.zip, task_ABCD_D.zip?

Thanks very much.
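
Not a confirmed fix, but EGL questions come up in several issues here: EGL provides OpenGL contexts without an X server, which is how the simulator renders on a headless GPU. Two environment variables are worth trying before importing the env (the names are common PyOpenGL/driver conventions, not CALVIN-specific guarantees):

import os

os.environ["PYOPENGL_PLATFORM"] = "egl"   # render through EGL instead of GLX
os.environ["EGL_VISIBLE_DEVICES"] = "0"   # pick the GPU used for rendering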

I am trying to follow the instructions to run the code; however, after resolving some errors, there are some errors I cannot fix

The commands I executed:
$ cd $CALVIN_ROOT/calvin_models/calvin_agent
$ python training.py

The error I got after fixing some others:
  File "/home/nikepupu/anaconda3/envs/calvin/lib/python3.7/site-packages/hydra/_internal/utils.py", line 573, in _locate
    raise ImportError(f"Error loading module '{path}'") from e
ImportError: Error loading module 'lfp.utils.transforms.NormalizeVector'

I investigated on GitHub and found that this module is referenced from here, which is missing:

python /home/hermannl/repos/learning_from_play/lfp/training.py

In addition, I needed to modify utils/transforms.py to get to this point.

I modified NormalizeVector to this (my changes are the isinstance guards in __init__):

class NormalizeVector(object):
    """Normalize a tensor vector with mean and standard deviation."""

    def __init__(self, mean=0.0, std=1.0):
        # added: accept scalar mean/std by wrapping them in a list
        if isinstance(mean, float):
            mean = [mean]
        if isinstance(std, float):
            std = [std]
        print("success")
        self.std = torch.Tensor(std)
        self.std[self.std == 0.0] = 1.0
        self.mean = torch.Tensor(mean)

    def __call__(self, tensor: torch.Tensor) -> torch.Tensor:
        assert isinstance(tensor, torch.Tensor)
        return (tensor - self.mean) / self.std

    def __repr__(self):
        return self.__class__.__name__ + "(mean={0}, std={1})".format(self.mean, self.std)

I also modified AddDepthNoise to this (my change is the __repr__ implementation):

class AddDepthNoise(object):
    """Add multiplicative gamma noise to depth image.
    This is adapted from the DexNet 2.0 code.
    Their code: https://github.com/BerkeleyAutomation/gqcnn/blob/master/gqcnn/training/tf/trainer_tf.py"""

    def __init__(self, shape=1000.0, rate=1000.0):
        self.shape = torch.tensor(shape)
        self.rate = torch.tensor(rate)
        self.dist = torch.distributions.gamma.Gamma(torch.tensor(shape), torch.tensor(rate))

    def __call__(self, tensor: torch.Tensor) -> torch.Tensor:
        assert isinstance(tensor, torch.Tensor)
        multiplicative_noise = self.dist.sample()
        return multiplicative_noise * tensor

    def __repr__(self):
        # return self.__class__.__name__ + f"{self.shape=},{self.rate=},{self.dist=}"
        return self.__class__.__name__ + "(self.shape={0}, self.rate={1}, self.dist={2})".format(self.shape, self.rate, self.dist)

Description of the data in the dataset.

Hello!

I am trying to use the dataset from CALVIN. It would be nice to have a more detailed description of the data format used. I created the dataset object with the calvin.util.visualize_annotations.load_data function. The objects in the dataset have a robot_obs property (vectors of 8 numbers), as well as 'state_info.robot_obs' (vectors of 15 numbers) and 'state_info.scene_obs' (vectors of 24 numbers). As far as I was able to understand from the source, the 'robot_obs' vectors of 8 are just preprocessed from 'state_info.robot_obs'. But what do those numbers actually mean? I suppose the first 3 are the coordinates of the gripper (x, y, z), but what are the remaining 5 (probably rotations etc.)? And for 'state_info.robot_obs', what are those vectors of 15? And the 'state_info.scene_obs' vectors of 24?
Could you please give a description somewhere in the README?
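
For what it's worth, the dataset section of the CALVIN README describes these vectors; here is a hedged unpacking sketch based on that description (the file name is hypothetical; verify the layout against the README):

import numpy as np

frame = np.load("training/episode_0360571.npz")  # hypothetical frame file

robot_obs = frame["robot_obs"]             # 15 values
tcp_pos        = robot_obs[0:3]            # gripper (TCP) position x, y, z
tcp_orn        = robot_obs[3:6]            # TCP orientation (euler angles)
gripper_width  = robot_obs[6]              # gripper opening width
joint_states   = robot_obs[7:14]           # 7 arm joint angles
gripper_action = robot_obs[14]             # gripper open/close action

scene_obs = frame["scene_obs"]             # 24 values
# sliding door (1), drawer (1), button (1), switch (1), lightbulb (1),
# green light (1), then pose (xyz + euler) for each of the three blocks (18)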

How to control panda in joint space ?

Hi, thanks for sharing this interesting work with us!

I am using the SlideEnv provided in the tutorial Jupyter notebook. I would like to know how to control the Panda robot arm directly in joint space.
I see that the Robot.apply_action(self, action) function seems to use inverse kinematics to translate the gripper pose into joint angles, and only after that controls the robot arm.

I would like the RL agent to output joint angles and a gripper action to control the Panda's motion directly; is there a pre-defined interface for this?
Thanks in advance for your help!
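
If no built-in joint-space interface exists, one hedged workaround is to command the joints through PyBullet directly, bypassing the IK inside Robot.apply_action. The attribute names on env.robot below are assumptions, not confirmed API:

import pybullet as p

def apply_joint_action(env, joint_targets):
    """Position-control the 7 arm joints directly (sketch)."""
    p.setJointMotorControlArray(
        bodyUniqueId=env.robot.robot_uid,            # assumed attribute
        jointIndices=list(env.robot.arm_joint_ids),  # assumed attribute
        controlMode=p.POSITION_CONTROL,
        targetPositions=list(joint_targets),
        physicsClientId=env.cid,                     # assumed attribute
    )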

Error in visualizing dataset

Thank you for supporting this great benchmark!

I tried to run the provided visualization code but it failed due to several bugs. Here is a quick fix:

  • In calvin/calvin_models/calvin_agent/utils/visualize_annotations.py:84 change .avi to .mp4 and *"XVID" to *"mp4v"
  • In calvin/calvin_models/calvin_agent/utils/visualize_annotations.py:61 change dataset[start][1][0] to dataset[start]["rgb_obs"]["rgb_static"]
  • In calvin/calvin_models/conf/lang_ann.yaml:14 comment out override datamodule/observation_space: state_only

Language-Only Demonstrations Performance?

From my read of the paper (and the follow-up HULC), it seems that policies are trained on a combination of the play data and the annotated (language, demonstration) pairs to obtain the reported performance?

Is there a "simple" language-conditioned policy baseline that just trains on language and RGB state pairs and is then evaluated? How does this perform?

Requirements: wheel and CMake 3.18.4 for Multicore-TSNE

I installed CALVIN using a venv and ran into two problems:

  1. Typical pip problem: Packages require bdist_wheel but don't depend on wheel.
  2. MulticoreTSNE does not compile with CMake 3.20.3. As suggested by this thread, I installed CMake 3.18.4 using pip, which fixed the issue. Maybe this dependency can be added as well.

scene_A has red-pink coordinates switched, scene_C has red-blue coordinates switched

This is a bug in the scene_obs section of the coordinates: object coordinates are mixed up in scenes A and C, which finally explains why we had these color mix-up errors on confusion matrices ;)

To reproduce:

  1. just visualize the data, or
  2. track the std of object coordinates for the red/pink/blue annotated frame ranges, or
  3. track the distance of tcp to said object for the red/pink/blue annotated frames

Headless Machines Support

Hi,

The code is exceptionally well-written!

I'm training MCIL on a headless machine and it doesn't seem to report errors so far.

However, I'd still like to ask whether running on a headless machine is supported.

TypeError: Error instantiating 'calvin_agent.utils.transforms.NormalizeVector' : new(): data must be a sequence (got float)

Hello Oier, thank you for publishing your work. I am interested in working in this area. I am trying to run your repository on my GPU desktop, and I am getting an error when running the command
$ python training.py datamodule.root_data_dir=/media/robita/second_ssd/Gaurav/calvin/dataset/ datamodule/datasets=vision_lang_shm

Please let me know what is the mistake. What should I do to remove this error?

Extracting /media/robita/second_ssd/Gaurav/calvin/dataset/training/50steps.tar.xz to /media/robita/second_ssd/Gaurav/calvin/dataset/training
Downloading http://www2.informatik.uni-freiburg.de/~meeso/50steps.tar.xz to /media/robita/second_ssd/Gaurav/calvin/dataset/validation/50steps.tar.xz
100%|██████████| 12960424/12960424 [00:01<00:00, 11767561.95it/s]
100%|██████████| 2/2 [00:00<00:00, 14339.50it/s]
Extracting /media/robita/second_ssd/Gaurav/calvin/dataset/validation/50steps.tar.xz to /media/robita/second_ssd/Gaurav/calvin/dataset/validation
[2023-08-09 04:11:04,627][calvin_agent.datasets.utils.shared_memory_utils][INFO] - Loading train language episodes into shared memory. (progress bar shows only worker process 0).
[2023-08-09 04:11:04,813][calvin_agent.datasets.utils.shared_memory_utils][INFO] - Loading val language episodes into shared memory. (progress bar shows only worker process 0).
100%|██████████| 2/2 [00:00<00:00, 7681.88it/s]
Error executing job with overrides: ['datamodule.root_data_dir=/media/robita/second_ssd/Gaurav/calvin/dataset/', 'datamodule/datasets=vision_lang_shm']
Traceback (most recent call last):
  File "/home/robita/anaconda3/envs/calvin_venv/lib/python3.8/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 62, in _call_target
    return target(*args, **kwargs)
  File "/media/robita/second_ssd/Gaurav/calvin/calvin_models/calvin_agent/utils/transforms.py", line 22, in __init__
    self.std = torch.Tensor(std)
TypeError: new(): data must be a sequence (got float)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "training.py", line 68, in train
    trainer.fit(model, datamodule=datamodule, ckpt_path=chk)  # type: ignore
  File "/home/robita/anaconda3/envs/calvin_venv/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 603, in fit
    call._call_and_handle_interrupt(
  File "/home/robita/anaconda3/envs/calvin_venv/lib/python3.8/site-packages/pytorch_lightning/trainer/call.py", line 38, in _call_and_handle_interrupt
    return trainer_fn(*args, **kwargs)
  File "/home/robita/anaconda3/envs/calvin_venv/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 645, in _fit_impl
    self._run(model, ckpt_path=self.ckpt_path)
  File "/home/robita/anaconda3/envs/calvin_venv/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1037, in _run
    self._call_setup_hook()  # allow user to setup lightning_module in accelerator environment
  File "/home/robita/anaconda3/envs/calvin_venv/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1284, in _call_setup_hook
    self._call_lightning_datamodule_hook("setup", stage=fn)
  File "/home/robita/anaconda3/envs/calvin_venv/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1361, in _call_lightning_datamodule_hook
    return fn(*args, **kwargs)
  File "/media/robita/second_ssd/Gaurav/calvin/calvin_models/calvin_agent/datasets/calvin_data_module.py", line 79, in setup
    self.train_transforms = {
  File "/media/robita/second_ssd/Gaurav/calvin/calvin_models/calvin_agent/datasets/calvin_data_module.py", line 80, in <dictcomp>
    cam: [hydra.utils.instantiate(transform) for transform in transforms.train[cam]] for cam in transforms.train
  File "/media/robita/second_ssd/Gaurav/calvin/calvin_models/calvin_agent/datasets/calvin_data_module.py", line 80, in <listcomp>
    cam: [hydra.utils.instantiate(transform) for transform in transforms.train[cam]] for cam in transforms.train
  File "/home/robita/anaconda3/envs/calvin_venv/lib/python3.8/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 180, in instantiate
    return instantiate_node(config, *args, recursive=_recursive_, convert=_convert_)
  File "/home/robita/anaconda3/envs/calvin_venv/lib/python3.8/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 249, in instantiate_node
    return _call_target(target, *args, **kwargs)
  File "/home/robita/anaconda3/envs/calvin_venv/lib/python3.8/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 64, in _call_target
    raise type(e)(
  File "/home/robita/anaconda3/envs/calvin_venv/lib/python3.8/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 62, in _call_target
    return target(*args, **kwargs)
  File "/media/robita/second_ssd/Gaurav/calvin/calvin_models/calvin_agent/utils/transforms.py", line 22, in __init__
    self.std = torch.Tensor(std)
TypeError: Error instantiating 'calvin_agent.utils.transforms.NormalizeVector' : new(): data must be a sequence (got float)
Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.

Support for PyTorch 2.0?

Hi, thanks a lot for this work.

I was wondering if the hard constraints ("==") to pytorch 1.13.1 and pytorch lightning 1.8.6 are necessary (link).

In particular, PyTorch 2.0 is fully backwards compatible with 1.13, and given the performance gains it would be nice to support it by dropping the hard constraint. I can open a quick PR if that makes it easier for you.

"InvalidGitRepositoryError" while running jupyter notebook

Hi, I'd like to run the RL_with_CALVIN.ipynb file in my locally established environment, but I hit the following issue when running the line "env = hydra.utils.instantiate(cfg.env)":
InvalidGitRepositoryError: Error instantiating 'calvin_env.envs.play_table_env.PlayTableSimEnv'
I don't know how to solve it, so any pointers would be appreciated.

PS: there also seem to be some script issues when opening RL_with_CALVIN.ipynb.
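
A hedged diagnosis: this exception comes from GitPython, which raises InvalidGitRepositoryError when asked for repository metadata outside a git checkout, e.g. if the code was downloaded as a zip instead of cloned. A quick way to test:

from git import InvalidGitRepositoryError, Repo

try:
    repo = Repo(".", search_parent_directories=True)
    print(repo.head.commit.hexsha)  # the commit hash the env presumably logs
except InvalidGitRepositoryError:
    print("not inside a git checkout; try cloning with --recurse-submodules")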

ModuleNotFoundError: No module named 'calvin_agent.datasets.play_data_module'

When I run

python calvin_agent/evaluation/evaluate_policy.py \
    --dataset_path $CALVIN_ROOT/dataset/calvin_debug_dataset \
    --train_folder /home/pi/workspace/tanwenxuan/Project/agi-perception-roboton/resources/model/hulc-mtlc/D_D_static_rgb_baseline \
    --checkpoints mcil_baseline.ckpt

I downloaded the debug dataset to $CALVIN_ROOT/dataset/calvin_debug_dataset
and downloaded the D_D_static_rgb_baseline model.

# ----- Load ckpt from  /home/pi/workspace/tanwenxuan/Project/agi-perception-roboton/resources/model/hulc-mtlc/D_D_static_rgb_baseline/mcil_baseline.ckpt
Iter with ckpt: /home/pi/workspace/tanwenxuan/Project/agi-perception-roboton/resources/model/hulc-mtlc/D_D_static_rgb_baseline/mcil_baseline.ckpt
Traceback (most recent call last):
  File "/home/pi/anaconda3/envs/hulc_venv/lib/python3.8/site-packages/hydra/_internal/utils.py", line 585, in _locate
    import_module(mod)
  File "/home/pi/anaconda3/envs/hulc_venv/lib/python3.8/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
  File "<frozen importlib._bootstrap>", line 991, in _find_and_load
  File "<frozen importlib._bootstrap>", line 973, in _find_and_load_unlocked
Traceback (most recent call last):
  File "calvin_agent/evaluation/evaluate_policy.py", line 252, in <module>
    main()
  File "calvin_agent/evaluation/evaluate_policy.py", line 241, in main
    model, env, _ = get_default_model_and_env(
  File "/home/pi/workspace/tanwenxuan/Project/agi-perception-roboton/third_party/calvin/calvin_models/calvin_agent/evaluation/utils.py", line 35, in get_default_model_and_env
    data_module = hydra.utils.instantiate(cfg.datamodule, num_workers=0)
  File "/home/pi/anaconda3/envs/hulc_venv/lib/python3.8/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 180, in instantiate
    return instantiate_node(config, *args, recursive=_recursive_, convert=_convert_)
  File "/home/pi/anaconda3/envs/hulc_venv/lib/python3.8/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 240, in instantiate_node
    _target_ = _resolve_target(node.get(_Keys.TARGET))
  File "/home/pi/anaconda3/envs/hulc_venv/lib/python3.8/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 104, in _resolve_target
    return _locate(target)
  File "/home/pi/anaconda3/envs/hulc_venv/lib/python3.8/site-packages/hydra/_internal/utils.py", line 587, in _locate
    raise ImportError(
ImportError: Encountered error: `No module named 'calvin_agent.datasets.play_data_module'` when loading module 'calvin_agent.datasets.play_data_module.PlayDataModule'

failed to EGL with glad.

Is there any option to train without EGL? I'm using NVIDIA GPUs, but it seems that I don't have libEGL.

Feature request: single task datasets

Hi there! I think it would be really nice if there were a script and dataset for a selection of individual tasks in CALVIN, so that one could test a method on just a single task. I've started working on this already (a first sketch follows below); does it sound like a useful feature?
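
A hedged sketch of the selection step: collect the annotated windows for one task from auto_lang_ann.npy (the keys are assumptions from the dataset layout discussed in other issues):

import numpy as np

ann = np.load("training/lang_annotations/auto_lang_ann.npy", allow_pickle=True).item()

task = "rotate_blue_block_right"
windows = [idx for idx, t in zip(ann["info"]["indx"], ann["language"]["task"])
           if t == task]
print(f"{len(windows)} annotated windows for {task}")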

failed to EGL with glad.

error: failed to EGL with glad.
Does this error occur because I didn't install EGL properly?
When I enter "ldconfig -p | grep libEGL" in the terminal, I get the following output.
libEGL_nvidia.so.0 (libc6,x86-64) => /lib/x86_64-linux-gnu/libEGL_nvidia.so.0
libEGL_nvidia.so.0 (libc6) => /lib/i386-linux-gnu/libEGL_nvidia.so.0
libEGL_mesa.so.0 (libc6,x86-64) => /lib/x86_64-linux-gnu/libEGL_mesa.so.0
libEGL.so.1 (libc6,x86-64) => /lib/x86_64-linux-gnu/libEGL.so.1
libEGL.so (libc6,x86-64) => /lib/x86_64-linux-gnu/libEGL.so
Can you please guide me on what to do next? Thank you very much.

Error in automatic annotation

Thank you for supporting this great benchmark.

I'm trying to generate language annotations for unlabeled steps (frames).

I used automatic_lang_annotator_mp.py and adjusted the eps hyperparameter to 0.5.

I encountered an IndexError:

[screenshot of the IndexError traceback, 2023-02-01]

(Running the same code with eps=0.01 worked fine for me.)

I have two questions:

  1. Should I decrease the epsilon to fix the error?
  2. How big should epsilon be to have an annotation on every frame?

Error related to the hydra module

I followed the instructions to create a virtual environment and download the dataset. However, when training the baseline model, I encountered an error related to the hydra module.
[screenshot of the hydra error]
(Although the path to my dataset may seem confusing, rest assured that it is complete.)

I'm not sure if this is an error in the code or a problem with the environment configuration.

There are two things in my environment that I would like to mention.

The first issue I encountered was that the pyopengl module failed to install due to a network problem when running the install.sh script. However, after all other modules were installed automatically, I executed a separate command in the command line to successfully install the pyopengl module.

The second thing is: I made a mistake in training the baseline model at the very beginning because I didn't have an EGL_options.o file. So, I manually went to the calvin/calvin_env/egl_check directory and ran build.sh to generate EGL_options.o.

I am a deep learning beginner, so thank you in advance for your guidance!
