
jhu-lcsr / good_robot

101 stars · 15 forks · 29 issues · 87.18 MB

"Good Robot! Now Watch This!": Repurposing Reinforcement Learning for Task-to-Task Transfer; and “Good Robot!”: Efficient Reinforcement Learning for Multi-Step Visual Tasks with Sim to Real Transfer

Home Page: https://openreview.net/forum?id=Pxs5XwId51n

License: BSD 2-Clause "Simplified" License

Python 27.41% Shell 0.52% CMake 0.02% C++ 0.17% C 0.10% Jupyter Notebook 71.78%
robotics reinforcement-learning deep-learning deep-reinforcement-learning deep-q-network multi-step-learning multi-step-dqn grasping manipulation computer-vision

good_robot's People

Contributors

adit98, ahundt, andyzeng, aneeshchauhan, benjamindkilleen, esteng, hkwon214, hongtaowu67, instigatorofawe, nickgreene, zooeyhe



good_robot's Issues

Why is the generated real robot's color heightmap not good?

Hello, first of all, thanks for sharing your amazing work. I want to know why the generated real robot's color heightmap is not good. My color heightmaps are often deformed and have a lot of black points. Is this caused by inaccurate camera calibration? I have tried VPG's calibration method (workspace 0.448 x 0.448 with resolution 0.02, and 0.672 x 0.672 with resolution 0.03) and the ROS easy_handeye method. The second method can't produce the same R, t matrix as the first, so I can't use it.

Thanks in advance ;)
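As background on where black points can come from, here is a minimal NumPy sketch of top-down heightmap reprojection (hypothetical function and parameter names of mine, not the repo's actual code): cells that no valid depth point lands in keep their initialized zero value and render black, and a bad extrinsic shifts points into the wrong cells.

```python
import numpy as np

# Sketch (hypothetical, not the repo's pipeline): RGB-D points are binned
# into a grid over the workspace. Any cell no valid point lands in stays
# zero -- i.e. a black pixel in the color heightmap. Miscalibration both
# deforms shapes and leaves more cells empty.
def fill_heightmap(points_xy, colors, workspace=0.448, resolution=0.002):
    size = int(round(workspace / resolution))           # e.g. 224 cells per side
    color_map = np.zeros((size, size, 3), dtype=np.uint8)
    ix = (points_xy[:, 0] / resolution).astype(int)
    iy = (points_xy[:, 1] / resolution).astype(int)
    keep = (ix >= 0) & (ix < size) & (iy >= 0) & (iy < size)
    color_map[iy[keep], ix[keep]] = colors[keep]
    return color_map

pts = np.array([[0.10, 0.10], [0.20, 0.30]])
cols = np.array([[255, 0, 0], [0, 255, 0]], dtype=np.uint8)
hm = fill_heightmap(pts, cols)
# only 2 of the 224*224 cells are filled; every other cell renders black
print((hm.sum(axis=2) > 0).sum())  # 2
```

Under this reading, many black speckles usually trace back to missing or noisy depth readings and calibration error rather than the color image itself.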

Detect Changes

Hi,
I was looking at your code and found this part: Detect Changes.
I need help understanding what the values 0.3 and 0.01, etc. mean, and how the value of 300 was obtained for change_threshold.
Could anyone help clarify this part, please? I'd appreciate your cooperation.

```python
if 'prev_color_img' in locals():
    # Detect changes
    depth_diff = abs(depth_heightmap - prev_depth_heightmap)
    depth_diff[np.isnan(depth_diff)] = 0
    depth_diff[depth_diff > 0.3] = 0
    depth_diff[depth_diff < 0.01] = 0
    depth_diff[depth_diff > 0] = 1
    change_threshold = 300
    change_value = np.sum(depth_diff)
    change_detected = change_value > change_threshold or prev_grasp_success
    print('Change detected: %r (value: %d)' % (change_detected, change_value))
```

Sincerely
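One possible reading of the constants in the snippet above (my interpretation as a sketch, not an authoritative answer from the maintainers): 0.01 and 0.3 look like depth-difference bounds in meters, and 300 a minimum count of changed heightmap pixels.

```python
import numpy as np

# Interpretation (an assumption, not confirmed by the repo docs):
#   diffs under 0.01 m are treated as sensor noise,
#   diffs over 0.3 m as outliers/artifacts,
#   what survives is binarized, and change_threshold = 300 means
#   "at least 300 heightmap pixels must have changed".
def detect_change(depth_heightmap, prev_depth_heightmap, change_threshold=300):
    depth_diff = np.abs(depth_heightmap - prev_depth_heightmap)
    depth_diff[np.isnan(depth_diff)] = 0
    depth_diff[depth_diff > 0.3] = 0    # discard implausibly large jumps
    depth_diff[depth_diff < 0.01] = 0   # discard sub-centimeter noise
    depth_diff[depth_diff > 0] = 1      # binarize remaining changed pixels
    change_value = np.sum(depth_diff)
    return change_value > change_threshold, change_value

# Synthetic 224x224 heightmaps: a 20x20 patch rises by 5 cm (400 changed pixels)
prev = np.zeros((224, 224))
curr = prev.copy()
curr[:20, :20] += 0.05
detected, value = detect_change(curr, prev)
print(bool(detected), value)  # True 400.0
```

If the heightmap resolution is 0.002 m/pixel, 300 pixels corresponds to roughly 12 cm² of changed surface, i.e. a small object being moved; that conversion is also my assumption.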

RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 7.79 GiB total capacity; 6.03 GiB already allocated; 10.94 MiB free; 6.17 GiB reserved in total by PyTorch)

I tried to follow this repository. However, when I entered

```shell
export CUDA_VISIBLE_DEVICES="0" && python3 main.py --is_sim --obj_mesh_dir objects/blocks --num_obj 4 --push_rewards --experience_replay --explore_rate_decay --check_row --tcp_port 19997 --place --future_reward_discount 0.65 --max_train_actions 20000 --random_actions --common_sense --trial_reward
```

I got the following error:

```
Traceback (most recent call last):
  File "main.py", line 1755, in <module>
    one_train_test_run(args)
  File "main.py", line 1563, in one_train_test_run
    training_base_directory, best_dict = main(args)
  File "main.py", line 1141, in main
    trainer.backprop(prev_color_heightmap, prev_valid_depth_heightmap, prev_primitive_action, prev_best_pix_ind, label_value, goal_condition=prev_goal_condition)
  File "/home/khan/cop_ws/src/good_robot/trainer.py", line 717, in backprop
    push_predictions, grasp_predictions, place_predictions, state_feat, output_prob = self.forward(color_heightmap, depth_heightmap, is_volatile=False, specific_rotation=best_pix_ind[0], goal_condition=goal_condition)
  File "/home/khan/cop_ws/src/good_robot/trainer.py", line 445, in forward
    output_prob, state_feat = self.model.forward(input_color_data, input_depth_data, is_volatile, specific_rotation, goal_condition=goal_condition)
  File "/home/khan/cop_ws/src/good_robot/models.py", line 246, in forward
    interm_push_feat, interm_grasp_feat, interm_place_feat, tiled_goal_condition = self.layers_forward(rotate_theta, input_color_data, input_depth_data, goal_condition, tiled_goal_condition)
  File "/home/khan/cop_ws/src/good_robot/models.py", line 301, in layers_forward
    interm_place_depth_feat = self.place_depth_trunk.features(rotate_depth)
  File "/home/khan/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/khan/.local/lib/python3.8/site-packages/torch/nn/modules/container.py", line 119, in forward
    input = module(input)
  File "/home/khan/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/khan/.local/lib/python3.8/site-packages/torch/nn/modules/container.py", line 119, in forward
    input = module(input)
  File "/home/khan/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/khan/anaconda3/envs/raisim_env/lib/python3.8/site-packages/torchvision/models/densenet.py", line 33, in forward
    new_features = super(_DenseLayer, self).forward(x)
  File "/home/khan/.local/lib/python3.8/site-packages/torch/nn/modules/container.py", line 119, in forward
    input = module(input)
  File "/home/khan/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/khan/.local/lib/python3.8/site-packages/torch/nn/modules/batchnorm.py", line 135, in forward
    return F.batch_norm(
  File "/home/khan/.local/lib/python3.8/site-packages/torch/nn/functional.py", line 2149, in batch_norm
    return torch.batch_norm(
RuntimeError: CUDA out of memory. Tried to allocate 2.00 MiB (GPU 0; 7.79 GiB total capacity; 6.09 GiB already allocated; 28.69 MiB free; 6.26 GiB reserved in total by PyTorch)
```

After reading some blogs, I found some discussion which pointed out that this is due to the batch size, and that it can be fixed by reducing the batch size. To do this, I looked at the trainer.py file and found the following section of code:

```python
# Construct minibatch of size 1 (b,c,h,w)
input_color_image.shape = (input_color_image.shape[0], input_color_image.shape[1], input_color_image.shape[2], 1)
input_depth_image.shape = (input_depth_image.shape[0], input_depth_image.shape[1], input_depth_image.shape[2], 1)
input_color_data = torch.from_numpy(input_color_image.astype(np.float32)).permute(3, 2, 0, 1)
input_depth_data = torch.from_numpy(input_depth_image.astype(np.float32)).permute(3, 2, 0, 1)
if self.flops:
    # sorry for the super random code here, but this is where we will check the
    # floating point operations (flops) counts and parameters counts for now...
    print('input_color_data trainer: ' + str(input_color_data.size()))
    class Wrapper(object):
        custom_params = {'input_color_data': input_color_data, 'input_depth_data': input_depth_data, 'goal_condition': goal_condition}
    def input_constructor(shape):
        return Wrapper.custom_params
    flops, params = get_model_complexity_info(self.model, color_heightmap.shape, as_strings=True, print_per_layer_stat=True, input_constructor=input_constructor)
    print('flops: ' + flops + ' params: ' + params)
    exit(0)
# Pass input data through model
output_prob, state_feat = self.model.forward(input_color_data, input_depth_data, is_volatile, specific_rotation, goal_condition=goal_condition)
```

I think I need to change something here to reduce the batch size. Can anyone help, please?
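Note that the minibatch constructed above is already size 1, so there is no batch dimension left to shrink. Two generic PyTorch memory mitigations instead, sketched under my own assumptions rather than as a fix from this repo:

```python
import torch

# Stand-in model and input; not the repo's DenseNet trunks.
device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = torch.nn.Linear(512, 512).to(device)
x = torch.randn(1, 512, device=device)

# 1. Inference-only forward passes under no_grad store no activation graph,
#    which is often the largest share of the "already allocated" memory
#    reported in the error message.
with torch.no_grad():
    y = model(x)

# 2. Between iterations, cached-but-unused blocks can be returned to the
#    driver, reducing the "reserved in total by PyTorch" fragmentation.
if torch.cuda.is_available():
    torch.cuda.empty_cache()

print(tuple(y.shape))  # (1, 512)
```

Given the `is_volatile` flag visible in the traceback, the inference-only forward path looks like a natural place to apply `torch.no_grad()`, but whether the repo already does this, or whether a smaller model/heightmap is needed for an 8 GB GPU, is something to check with the maintainers.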
