robofit / arcor

Augmented reality-based human-robot interaction.

License: GNU Lesser General Public License v2.1

CMake 1.43% Python 96.71% QMake 0.18% Shell 0.10% C++ 1.58%
ros hri robot augmented-reality arcor

arcor's Introduction

ARCOR (formerly known as ARTable) - the main repository

ARCOR is our vision of a near-future workspace in which humans and robots can collaborate safely and effectively. Our main focus is on human-robot interaction, and especially on robot programming - making it feasible for any ordinarily skilled worker. The interaction is based mainly on interactive spatial augmented reality: a combination of projection and a touch-sensitive surface. However, more modalities are integrated or currently under development.

Repositories / packages

This repository holds the main components of the system, which are not specific to any particular setup (combination and type of components) or robot:

  • art_brain - central node which communicates with the robot, manages program execution, holds the current system state, etc.
  • art_bringup - launches the system.
  • art_calibration - AR-marker-based mutual calibration of cameras.
  • art_collision_env - manages detected as well as artificial objects within the workspace.
  • art_db - permanent storage for object types, programs, etc.
  • art_instructions - for each supported instruction there is a definition in yaml and the respective classes for art_brain and art_projected_gui. Those classes are loaded on startup based on the /art/instructions parameter.
  • art_led - RGB LED strip interface.
  • art_projected_gui - shows the system state, allows setting program parameters, etc.
  • art_projector - calibrates the projector with respect to the Kinect and displays the scene generated by art_projected_gui.
  • art_simple_tracker - not a real tracker: it "tracks" objects based on already assigned IDs and performs position/orientation filtering across multiple detectors.
  • art_sound - a sound interface: plays sounds for selected system events (e.g. errors).
  • art_table_pointing - uses Kinect skeleton tracking to compute where the user points on the table.
  • art_touch_driver - reads data from the touch foil (a HID device) and publishes it as ROS messages.

Additional repositories:

For each integrated robot, there are two repositories: one with custom packages providing high-level functions compatible with the arcor ROS API, and one with the implementation of the art_brain plugin (the -interface one):

Currently supported setups (see links for further information):

Any supported setup may be used with any supported robot (or even without one).

Functionality

The system has two main modes: setting program parameters and program execution.

The video below briefly introduces the system and shows how we did its first UX testing:

arcor video

Currently, the robot program has to be created beforehand (e.g. using a script like this). Then, program parameters can easily be set using the projected interface. To make setting parameters as simple as possible, the system is built around complex instructions with a high level of abstraction (for the supported instructions, see instructions.yaml).

API

All topics, parameters, and services can be found in the /art namespace.

TBD

Installation

TBD

Contributing

  • Follow PyStyleGuide or CppStyleGuide
    • for Python, you may use pre-commit hook to automatically format your code according to PEP8 (just copy the file into .git/hooks).
  • Use catkin_lint to check for common problems (catkin_lint -W2 your_package_name)
  • Use roslint to run static analysis of your code.
  • Ideally, create and use unit tests.
  • Feel free to open pull requests!

Publications

  • MATERNA Zdeněk, KAPINUS Michal, BERAN Vítězslav, SMRŽ Pavel and ZEMČÍK Pavel. Interactive Spatial Augmented Reality in Collaborative Robot Programming: User Experience Evaluation. In: Robot and Human Interactive Communication (RO-MAN). Nanjing: Institute of Electrical and Electronics Engineers, 2018 (to be published).
  • MATERNA Zdeněk, KAPINUS Michal, BERAN Vítězslav and SMRŽ Pavel. Using Persona, Scenario, and Use Case to Develop a Human-Robot Augmented Reality Collaborative Workspace. In: HRI 2017. Vienna: Association for Computing Machinery, 2017, pp. 1-2. ISBN 978-1-4503-4885-0.
  • MATERNA Zdeněk, KAPINUS Michal, ŠPANĚL Michal, BERAN Vítězslav and SMRŽ Pavel. Simplified Industrial Robot Programming: Effects of Errors on Multimodal Interaction in WoZ Experiment. In: Robot and Human Interactive Communication (RO-MAN). New York City: Institute of Electrical and Electronics Engineers, 2016, pp. 1-6. ISBN 978-1-5090-3929-6.

arcor's People

Contributors

artable-dev, jvida, kapim, vovaekb, xbambusekd, zdenekm


arcor's Issues

User tracking (with Kinect2 or unorganized pointcloud)

System-wide parameters

Some commonly used parameters should be defined for all nodes, in one place.

  • table/workspace width, height (/art/table/width, /art/table/height).
  • what else?
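A sketch of how nodes might consume such shared parameters, assuming they are mirrored into a plain mapping (the 1.5 × 0.7 m defaults below are made-up placeholders, not values from the repository):

```python
def get_table_size(params, default=(1.5, 0.7)):
    """Return the (width, height) of the table in metres.

    params is any mapping of parameter names to values (e.g. a snapshot
    of the ROS parameter server); missing entries fall back to default.
    """
    return (params.get("/art/table/width", default[0]),
            params.get("/art/table/height", default[1]))
```

Against a live parameter server, the same lookup would simply be `rospy.get_param("/art/table/width", default)` in each node.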

Continuous integration (testing)

Currently, Travis can't check much, as there is not much code to compile. It would be good to:

  • write unit tests (at least for the main parts of the system)
  • write ros tests
  • run static analysis -> fail on error
  • launch file testing

http://wiki.ros.org/unittest
https://github.com/hcrlab/wiki/blob/master/software_engineering/unit_testing.md
https://github.com/hcrlab/wiki/blob/master/software_engineering/continuous_integration.md
http://wiki.ros.org/roslint
http://wiki.ros.org/roslaunch (roslaunch_add_file_check)

program_helper: place pose without frame_id

For the place_pose instruction, the frame_id is apparently not stored, and the program helper then returns a PoseStamped with an empty frame_id. It might be a good idea to check on save whether frame_id is filled in and, if it is not, refuse to store the pose.
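A minimal sketch of the proposed check, assuming a PoseStamped-like object with a header.frame_id field (the function name validate_place_pose is illustrative, not from the repository):

```python
def validate_place_pose(pose_stamped):
    """Refuse to store a place pose whose frame_id is empty.

    pose_stamped is expected to be a geometry_msgs/PoseStamped (or any
    object with a header.frame_id attribute).
    """
    if not pose_stamped.header.frame_id:
        raise ValueError("place_pose has an empty frame_id; not storing it")
    return pose_stamped
```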

art_simple_tracker - weighted averaging

  • object detectors should publish detections in camera frame_id
  • tracker should do weighted averaging based on distance from camera to object (closer = better detection)
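One way the weighting could look - a sketch using inverse camera distance as the weight (the exact weighting function is an open design choice, not something the issue fixes):

```python
def fuse_detections(detections):
    """Fuse object positions reported by several detectors.

    detections is a list of ((x, y, z), camera_distance) tuples; each
    position is weighted by the inverse of its camera distance, so
    closer (presumably better) detections contribute more.
    """
    weights = [1.0 / max(dist, 1e-6) for _, dist in detections]
    total = sum(weights)
    return tuple(
        sum(w * pos[i] for (pos, _), w in zip(detections, weights)) / total
        for i in range(3)
    )
```

Orientation would need proper quaternion averaging (or the Kalman filtering mentioned in the picking-from-feeder issue) rather than this per-axis mean.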

art_touch_driver: attempt to assign to self.slot which is None

Traceback (most recent call last):                                                                                 
  File "/home/dev/artable_ws/src/artable-repos/artable/art_touch_driver/src/node.py", line 275, in <module>                                                                                                                            
    node.process()                                                                                                                                                                                                                     
  File "/home/dev/artable_ws/src/artable-repos/artable/art_touch_driver/src/node.py", line 179, in process                                                                                                                             
    self.slot.x = event.value                                                                                                                                                                                                          
AttributeError: 'NoneType' object has no attribute 'x'
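The crash suggests self.slot can be None when an event arrives for a slot that has already been released. A possible guard, sketched as a standalone helper (apply_abs_x is a hypothetical name, not a function from the driver):

```python
def apply_abs_x(slot, value):
    """Update the x coordinate of a touch slot, tolerating stale events.

    slot may be None if the event refers to a slot that has already been
    released (the situation behind the AttributeError above); such
    events are silently dropped.
    """
    if slot is None:
        return False
    slot.x = value
    return True
```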

AR Tripod (Projector + Kinect2 + NUC)

  • prepare HW
  • separate repository (ar-table-tripod)?
  • package with projector node (extract it from art_projected_gui)
  • node to transform + filter pointcloud??

Picking from feeder

  • art_projected_gui
    • selection of object type
    • button to activate the arm's interactive mode -> move the arm automatically in front of the user and then switch interactive mode on
    • visualize feeder / direction of feeder somehow?
  • art_brain
    • open gripper in learning phase
    • check/test state_pick_from_feeder
  • object detection + tracking
    • detectors should publish detections in camera frame_id with RPY 0/90/180/270 degrees
    • tracker should do weighted averaging (or better - kalman filtering?) based on distance from camera to object (closer = better detection)
    • use AR code bundles on both ends of the aluminium profiles?
    • object detector using PR2 forearm cameras
  • art_pr2_grasping:
    • prepare to pregrasp pose (pose in ProgramItem with -0.2m in x-axis of end eff)
    • wait for detection, get object ID
    • add object to collision map
    • open gripper
    • move to pregrasp pose based on detected pose
    • cartesian movement to grasp pose
    • close gripper, check its state, attach collision object
    • move upward
    • move out of feeder
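The pregrasp step above (a pose offset by -0.2 m along the end effector's x axis) can be sketched with plain quaternion algebra; this illustrates the geometry only and is not code from art_pr2_grasping:

```python
def pregrasp_position(position, quaternion, offset=-0.2):
    """Offset a grasp position along the end effector's local x axis.

    position is (x, y, z); quaternion is (qx, qy, qz, qw) giving the end
    effector orientation; offset is the shift in metres (negative x, as
    in the issue). Returns the pregrasp (x, y, z).
    """
    qx, qy, qz, qw = quaternion
    # Rotate the unit x axis (1, 0, 0) by the quaternion.
    ax = 1.0 - 2.0 * (qy * qy + qz * qz)
    ay = 2.0 * (qx * qy + qw * qz)
    az = 2.0 * (qx * qz - qw * qy)
    return tuple(p + offset * a for p, a in zip(position, (ax, ay, az)))
```

In practice this would be done with tf2 transforms on the ProgramItem pose rather than by hand.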

art_brain, art_projected_gui: Use ProgramArray instead of Program

Using ProgramArray.msg instead of Program.msg (Program.msg will serve as a container for logically related instructions (a program block) - e.g. the assembly of one side of the trolley)

  • Rewrite DB #42
  • Write helper class to deal with Program/ProgramBlock/ProgramItem
  • Integrate it in art_brain
  • update interface_state_manager
  • Integrate it in art_projected_gui

Check if place pose for the object is reachable

UI should ask the brain - the brain asks art_pr2_grasping about the left/right arm - the brain replies.

What if robot takes object from somewhere using left arm but right arm is required to place the object?

UI should indicate invalid (unreachable) place pose.

Program stop, pause, stepping

  • stop (there is already service, red button in art_projected_gui)
  • pause (there is already service)
  • stepping - need to think about it

art_projected_gui: SceneLabel

Display the label stored in the ProgramItem message. Allow resizing/repositioning -> labels will be hand-coded into templates

Integrate touch foil

art_touch_driver

  • read single-touch data (python-evdev?) -> publish it as PoseStamped (/art/interface/touchtable/single)
  • calibration (homography from arbitrary number of points) - ask art_projected_gui to show those points (service)
  • latched bool topics: /art/interface/touchtable/calibrated, /art/interface/touchtable/calibrating
  • normal (not latched) topic: /art/interface/touchtable/touch_detected (std_msgs/Empty)
  • parameter /art/interface/touchtable/active_area -> array of x,y coordinates defining touchable area on the table
  • empty service /art/interface/touchtable/calibrate (called by art_brain)
  • save calibration on the parameter server
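Once the homography from the calibration points is known, mapping a raw foil coordinate to table coordinates is a projective transform. A sketch of the application step (the 3x3 matrix h would come out of the calibration, e.g. estimated with OpenCV's findHomography from the displayed point correspondences):

```python
def apply_homography(h, x, y):
    """Map a raw touch-foil coordinate (x, y) to table coordinates.

    h is a 3x3 homography given as nested lists (row-major), estimated
    beforehand from the projected calibration points.
    """
    xs = h[0][0] * x + h[0][1] * y + h[0][2]
    ys = h[1][0] * x + h[1][1] * y + h[1][2]
    w = h[2][0] * x + h[2][1] * y + h[2][2]
    return xs / w, ys / w
```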

art_projected_gui

  • service to show calibration points: /art/interface/projected_gui/touch_calibration (PointStamped[])
  • read /art/interface/touchtable/touch_detected and display calibration points (one by one)
  • new cursor type
  • make pointing/touch cursors work together
  • show active area

art_brain

  • wait until art_projected_gui is ready, check if touchtable requires calibration (/art/interface/touchtable/calibrated) and if so, ask it to calibrate by calling service /art/interface/touchtable/calibrate

Other stuff

  • test calibration with a new table
  • make a cool video ;-)

art_projected_gui bug

[INFO] [WallTime: 1504615938.726930] Notification: This program item seems to be done
Traceback (most recent call last):
  File "/home/dev/artable_ws/src/artable/art_projected_gui/src/art_projected_gui/items/touch_table_item.py", line 128, in ps_cb_evt
    self.cb(msg)
  File "/home/dev/artable_ws/src/artable/art_projected_gui/src/art_projected_gui/items/touch_table_item.py", line 202, in touch_cb
    self.delete_id(msg.id)
  File "/home/dev/artable_ws/src/artable/art_projected_gui/src/art_projected_gui/items/touch_table_item.py", line 179, in delete_id
    self.touch_points[id].end_of_touch()
  File "/home/dev/artable_ws/src/artable/art_projected_gui/src/art_projected_gui/items/touch_table_item.py", line 35, in end_of_touch
    self.pointed_item.cursor_release()
  File "/home/dev/artable_ws/src/artable/art_projected_gui/src/art_projected_gui/items/item.py", line 160, in cursor_release
    self.cursor_click()
  File "/home/dev/artable_ws/src/artable/art_projected_gui/src/art_projected_gui/items/button_item.py", line 53, in cursor_click
    self.clicked(self)
  File "/home/dev/artable_ws/src/artable/art_projected_gui/src/art_projected_gui/items/list_item.py", line 76, in item_clicked_cb
    self.item_selected_cb()
  File "/home/dev/artable_ws/src/artable/art_projected_gui/src/art_projected_gui/items/program_item.py", line 433, in item_selected_cb
    self.item_switched_cb(self.block_id, self.item_id)
  File "/home/dev/artable_ws/src/artable/art_projected_gui/src/art_projected_gui/gui/ui_core_ros.py", line 822, in active_item_switched
    self.learning_vis(block_id, item_id, read_only)
  File "/home/dev/artable_ws/src/artable/art_projected_gui/src/art_projected_gui/gui/ui_core_ros.py", line 756, in learning_vis
    self.select_object_type(self.ph.get_object(block_id, item_id)[0][0])
  File "/home/dev/artable_ws/src/artable/art_projected_gui/src/art_projected_gui/gui/ui_core.py", line 266, in select_object_type
    if it.object_type.name == obj_type_name:
AttributeError: 'NoneType' object has no attribute 'name'
Traceback (most recent call last):
  File "/home/dev/artable_ws/src/artable/art_projected_gui/src/art_projected_gui/gui/ui_core_ros.py", line 1105, in object_cb_evt
    obj.set_orientation(conversions.q2a(inst.pose.orientation))
  File "/home/dev/artable_ws/src/artable/art_projected_gui/src/art_projected_gui/items/object_item.py", line 125, in set_orientation
    self.lx = self.m2pix(self.inflate + self.object_type.bbox.dimensions[2])
AttributeError: 'NoneType' object has no a

art_projected_gui: Use Object's BB

  • To show detected objects
  • When setting place pose (instead of circle)
  • Make sure it does not interfere with object detection (as the current circle does)

art_projected_gui crash

  File "/home/ikapinus/catkin_ws/src/ar-table-itable/art_projected_gui/src/gui/ui_core_ros.py", line 258, in interface_state_evt
    self.add_place(translate("UICoreRos", "OBJECT PLACE POSE"),  it.place_pose, obj.object_type, obj_id,  fixed=True)
AttributeError: 'NoneType' object has no attribute 'object_type'

art_db: Rewrite

  • Rewrite art_db to use http://wiki.ros.org/mongodb_store as permanent storage - we use arrays in our messages a lot, and it's not possible to save them with the current approach.
  • Store only object type (name, bounding box / 3D model, additional info)
  • Update program storage (to enable #44)
