grasping

This project contains the code used for generating multi-modal grasps in V-REP, as described in the paper "An Integrated Simulator and Data Set that Combines Grasping and Vision for Deep Learning" (TBA).

Requirements

Initialize paths

  • In lib/python_config.py change the variable project_dir to point to where this project was downloaded (a short sketch of this edit follows this list). This file is used for controlling basic parameters within all Python files
  • In lib/lua_config.py change the different directory lists to point to where the project was downloaded to. This file is used for controlling parameters within the different V-REP simulations
  • Open scenes/get_initial_poses.ttt in V-REP. Modify the threaded script by double-clicking the blue page icon beside 'GraspCollection', and change the variable local file_dir to point to where the lua_config.lua file is found
  • Open scenes/scene_collect_grasps.ttt in V-REP. Modify the threaded script by double-clicking the blue page icon beside 'GraspCollection', and change the variable local file_dir to point to where the lua_config.lua file is found
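
For reference, here is a minimal sketch of the kind of edit expected in lib/python_config.py. The variable project_dir comes from the step above; the example path and the derived mesh_dir name are placeholders used only for illustration:

import os

# lib/python_config.py (excerpt; illustrative sketch only)
# Point this at wherever the repository was downloaded.
project_dir = '/path/to/grasping'

# Other paths in the config are typically built from project_dir;
# 'mesh_dir' is a hypothetical name used here for illustration.
mesh_dir = os.path.join(project_dir, 'data', 'meshes', 'object_files')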

Download meshes

  • Download object meshes as either the Wavefront .obj file format, or .stl file format, and place them in data/meshes/object_files. To obtain the mesh files used in the work, you can download them from the Waterloo G3DB project. In this work, we only used a subset of these meshes, and the list can be found within the /lib folder.
  • Note 1: In this work, the meshes were labeled according to the convention 'XXX_yyyyyy', where 'XXX' is the object class (e.g. 42, 25) and 'yyyyyy' is the object name (e.g. 'wineglass', 'mug'), as illustrated in the sketch after this list. Example: '42_wineglass'.
  • Note 2: The simulation works best with simple meshes; for complex meshes, you may need to manually reduce the number of triangles or overall complexity before running them in the simulation. Some of the meshes linked above are complex, and the more complex a mesh is, the less stable the simulation will be.
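
To illustrate the naming convention in Note 1, here is a small hypothetical helper (not part of the project code) that splits a mesh filename into its object class and object name:

import os

def split_mesh_name(filename):
    """Split a mesh filename such as '42_wineglass.obj' into its
    object class ('42') and object name ('wineglass')."""
    stem = os.path.splitext(os.path.basename(filename))[0]
    object_class, object_name = stem.split('_', 1)
    return object_class, object_name

print(split_mesh_name('42_wineglass.obj'))  # -> ('42', 'wineglass')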

Step 1: Prepare all meshes / mesh properties for simulation

  • First, we need to preprocess all the meshes we'll use, to identify their relevant properties and fix any meshes that are not watertight. This process will create a file called 'mesh_object_properties.txt' in the data/ folder, containing information about each mesh, including mass, center of mass, and inertia (a sketch for reading this file follows the commands below).
$: cd initialize
$: python prepare_mesh.py
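
The exact column layout of mesh_object_properties.txt is defined by prepare_mesh.py, so treat the following as a rough sketch for inspecting the output; it only assumes one mesh per line with comma-separated fields:

rows = []
with open('../data/mesh_object_properties.txt') as f:
    for line in f:
        # Assumed: one mesh per line with comma-separated fields
        # (mass, center of mass, inertia, ...); check prepare_mesh.py
        # for the actual delimiter and column ordering.
        rows.append(line.strip().split(','))
print('loaded properties for %d meshes' % len(rows))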
  • Open V-REP and run scenes/get_initial_poses.ttt. This will create a file data/initial_poses.txt that contains all information on the starting pose of the object and gripper, and will be used for generating potential grasp candidates.
  • Run initialize/prepare_candidates.py. This will read the pose information collected by V-REP and generate a list of grasp candidates for each object to be tested in the simulator. Note that these candidates will be saved under collect/candidates. If GNU Parallel is unavailable, see the sequential alternative sketched after the command below.
$: cd initialize
$: cat ../data/initial_poses.txt | parallel python prepare_candidates.py
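
If GNU Parallel is not installed, an equivalent sequential loop (a sketch; it simply feeds each line of initial_poses.txt to the script one at a time) is:

$: while IFS= read -r pose; do python prepare_candidates.py "$pose"; done < ../data/initial_poses.txt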
  • Once the candidates have been generated, you can either run each of them manually through the simulation, or create a "commands" file that will aid in autonomously running them through the simulator. If you plan on running in headless mode (and the following line works for you), you can skip the subsequent aside.
$: cd initialize
$: python prepare_commands.py

Aside: Dissecting the commands, and running with / without headless mode

On Line 72 of prepare_commands.py, we have the following code:

commands[i] = \
            'ulimit -n 4096; export DISPLAY=:1; vrep.sh -h -q -s -g%s -g%s -g%s %s '\
            %(sub_cmd[0], sub_cmd[1], sub_cmd[2], config_simulation_path)

Each element serves the following purpose:

  • ulimit -n 4096 : Raises the maximum number of file descriptors the shell (and the processes it launches) can have open. Note that you may not need this, but we found it useful when running many simulations at once.
  • export DISPLAY=:1 : For running these V-REP simulations in headless mode, an X server is needed for handling the vision information captured through the cameras. Here we point the simulator at a specific display; depending on how your system is set up, you may need to change the display number accordingly.
  • vrep.sh -h -q -s -g%s -g%s -g%s %s : V-REP has a number of command-line options available (see here). -h specifies headless mode (i.e. without the GUI), -q tells the program to quit once the simulation has ended, -s starts the simulation, and the -g flags are arguments passed into the simulation script. The first -g specifies a particular set of generated grasp candidates (e.g. 40_carafe_final-11-Mar-2016-17-09-17.txt), while the second and third -g's specify the range of grasps to run (e.g. lines 1 --> 1500 of the specified file). The final element of the command specifies where the V-REP scene to run can be found.
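
Putting these pieces together, a fully expanded entry in the commands file looks like the non-headless example shown in Step 2 below, e.g.:

ulimit -n 4096; export DISPLAY=:1; vrep.sh -h -q -s -g40_carafe_final-11-Mar-2016-17-09-17.txt -g1 -g1501 /path/to/grasping/collect/scene_collect_grasps.ttt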

Notice that the -h flag assumes you will be running the simulations in headless mode, which requires an X server for handling vision information. If you are not running in headless mode, you can replace the above line with the following:

commands[i] = \
            'ulimit -n 4096; vrep.sh -q -s -g%s -g%s -g%s %s '\
            %(sub_cmd[0], sub_cmd[1], sub_cmd[2], config_simulation_path)

Step 2: Run the generated grasps through the simulator

  • Launch the simulations using the generated command files. Note that there are some simulator variables you may be interested in changing (e.g. the camera near/far clipping planes, or which contacts are considered part of the gripper), which can be found inside the 'config.lua' file. The simulation will save successful grasps to the data/collected folder.
  • Assuming you are running on Linux and using GNU Parallel, you can launch the simulations with:
$: cd collect
$: screen
$: cat commands/mainXXX.txt | parallel -j N_JOBS 

where XXX identifies a specific commands file to be run on a compute node, and N_JOBS is a number (e.g. 8) specifying how many jobs to run in parallel. If no number is specified, GNU Parallel will use the maximum number of cores available. If you are running on Windows, you can collect grasps sequentially by launching the simulation manually and changing which file is used within the lib/lua_config.lua file
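
For example, assuming one of the generated files is named main0.txt (the actual names depend on prepare_commands.py) and you want eight parallel jobs:

$: cat commands/main0.txt | parallel -j 8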

  • (optional) To get a sense of what the simulation is doing, you can also run the commands without headless mode. Adjust the following command accordingly, with the object and grasp range you wish to use:
$: vrep.sh -q -s -g40_carafe_final-11-Mar-2016-17-09-17.txt -g1 -g1501 /path/to/grasping/collect/scene_collect_grasps.ttt

where /path/to/grasping/collect/scene_collect_grasps.ttt represents the config_simulation_path variable in lib/python_config.py

  • Once simulations are done, decode the collected data
$: cd collect

For decoding data sequentially (class by class):

$: python decode_grasp_data.py

For decoding data in parallel (multiple classes at once):

$: ls ../data/collected/ > files.txt
$: cat files.txt | parallel python decode_grasp_data.py
  • Run split_train_test.py to move one file from each object class to a test directory
$: cd collect
$: python split_train_test.py
  • Create a list of the items you want in the train set by looking at the top N classes (in the generated bar plot of grasp statistics), and save these in the file collect/train_items.txt (an illustrative example of this file appears after the command below). Then run postprocess_split_data.py to generate the final grasping dataset:
$: cd collect
$: python postprocess_split_data.py
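
The exact format of collect/train_items.txt is determined by how postprocess_split_data.py parses it; assuming one object identifier per line, it could look something like the following (illustrative only, using object names mentioned above):

40_carafe
42_wineglass
25_mug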

Miscellaneous resources

  • lib/sample_load.py is a standalone script that shows how data can be loaded and grasps can be plotted
  • lib/sample_clean_list.txt is a list of items in the train/test/validation set that have been checked over for correctness. This represents only a small sample of the overall grasps, and the dataset may still contain other improper/incorrect grasps that act as noise.
  • lib/sample_mesh_list.txt is a list of meshes that were used in this project
