This program aims to format collected segmentation images using OpenCV.
The program can read collected images and convert them into an ML-friendly format.
For first-time users, please make sure to set up the environment correctly by following all steps and links below:
1. Setup AMBF
2. Setup surgical_robotics_challenge
   Once the repository is cloned and set up properly, you should be able to navigate to `~/surgical_robotics_challenge` and run `./run_environment.sh` to launch the scene.
3. Setup dVRK
   Once the repository is cloned and set up properly, you should be able to run `roscd dvrk_config`, which will take you from whichever folder you are in to `~/catkin_ws/src/cisst-saw/sawIntuitiveResearchKit/share`.
4. Setup ROS video recorder
   In the `config.yaml` file, change `output_dir` to the following, so that the recorded dataset is numbered in a way compatible with later processing:

   ```yaml
   output_dir: "~/annotation_reformat/recordings/recxx" # with xx being whichever recording you are working on
   ```
   Change `rostopic` so that the recorder subscribes to the AMBF topics for the left ECM and its corresponding annotation video:

   ```yaml
   rostopic:
     cam1: "/ambf/env/cameras/cameraL/ImageData"  # left ECM video
     cam2: "/ambf/env/cameras/cameraL2/ImageData" # left annotation video
   ```
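A quick way to verify the edited config is to parse it and compare against the expected topics. A minimal sketch, assuming PyYAML is available and that `config.yaml` uses a top-level `rostopic` mapping as shown above (the function name `check_rostopics` is illustrative, not part of the recorder):

```python
import yaml  # PyYAML, assumed to be available alongside the ROS tooling

# Expected AMBF topics from the config snippet above.
EXPECTED_TOPICS = {
    "cam1": "/ambf/env/cameras/cameraL/ImageData",   # left ECM video
    "cam2": "/ambf/env/cameras/cameraL2/ImageData",  # left annotation video
}

def check_rostopics(config_path: str) -> bool:
    """Return True if config.yaml subscribes to exactly the expected AMBF topics."""
    with open(config_path) as f:
        cfg = yaml.safe_load(f)
    return cfg.get("rostopic", {}) == EXPECTED_TOPICS
```

Running this before a long recording session catches a mistyped topic name early, instead of discovering an empty video afterwards.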
Move the recorder repository into your catkin workspace and rebuild it:

```bash
mv dvrk_record_video ~/catkin_ws/src
catkin build --summary
```

Then clone this repository:

```bash
git clone https://github.com/ruiruihuangannie/annotation_reformat
```
Feature | Description |
---|---|
`convert_ambf_standard.py` | Python script that converts AMBF annotation videos to match those from the public open dataset |
`image.py` | Python script that defines a customized image class |
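The contents of `image.py` are not shown here; purely as an illustration, a customized image class for this pipeline might look like the following sketch. The class name, fields, and method are assumptions (NumPy is assumed available), and the real `image.py` may differ:

```python
import numpy as np

class AnnotationImage:
    """Hypothetical sketch of a customized image wrapper; the real image.py may differ."""

    def __init__(self, pixels: np.ndarray, frame_id: int):
        self.pixels = pixels      # H x W x 3 BGR array, as OpenCV would load it
        self.frame_id = frame_id  # index of this frame within the recording

    def to_binary(self) -> np.ndarray:
        """Annotation #1 style: any non-black pixel becomes white (255), rest stays 0."""
        mask = (self.pixels != 0).any(axis=2)
        return (mask * 255).astype(np.uint8)
```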
Run each of the following in its own terminal. First, start the ROS core:

```bash
source ~/catkin_ws/devel/setup.bash
roscore
```

Launch the dVRK console:

```bash
source ~/catkin_ws/devel/setup.bash
roscd dvrk_config
rosrun dvrk_robot dvrk_console_json -j jhu-daVinci/console-MTML-MTMR.json
```

Launch the simulation scene:

```bash
source ~/ambf/build/devel/setup.bash
cd ~/surgical_robotics_challenge/
./run_environment.sh
```
Note: It is recommended to change the ECM angle to increase the variety of training data.
Start teleoperation for the left MTM:

```bash
source ~/ambf/build/devel/setup.bash
cd ~/surgical_robotics_challenge/scripts/surgical_robotics_challenge/teleoperation
python3 mtm_multi_psm_control.py --mtm MTML -c mtml --one 1 --two 0
```

Start teleoperation for the right MTM:

```bash
source ~/ambf/build/devel/setup.bash
cd ~/surgical_robotics_challenge/scripts/surgical_robotics_challenge/teleoperation
python3 mtm_multi_psm_control.py --mtm MTMR -c mtmr --one 0 --two 1
```

Finally, start the video recorder:

```bash
source ~/catkin_ws/devel/setup.bash
cd ~/catkin_ws/src/dvrk_record_video/scripts
python3 node_dvrk_record_video.py
```
When data collection begins, the following `[Info]` message should be displayed in the terminal. When finished recording, press `ctrl+c` to exit the recorder.
When successfully launched, 5 separate application windows should appear:

Application | Example |
---|---|
dVRK console | ![]() |
AMBF simulator | ![]() |
MTM GUI (L/R) | ![]() |
dvrk recorder | ![]() |
Note: At this point, the AMBF simulator should be projected onto the MTM console screen. If it is not, potential causes include:
- AMBF, dVRK, the surgical robotics challenge, or the video recorder was not installed properly
- The environment was not sourced properly
- Homing failed when launching the dVRK console; in that case, run `qlacommand -c close-relays` before launching again.
Navigate to the folder that now contains the recorded images and videos, which should contain:
- one .mkv video
- multiple .png segmentation images
- one .txt timestamp file
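A recording folder can be sanity-checked before processing; a minimal sketch assuming the layout above (the function name `check_recording` is illustrative):

```python
from pathlib import Path

def check_recording(folder: str) -> dict:
    """Count the expected artifacts in a recording folder and flag whether it is complete."""
    p = Path(folder).expanduser()
    counts = {
        "mkv": len(list(p.glob("*.mkv"))),  # exactly one video expected
        "png": len(list(p.glob("*.png"))),  # one or more segmentation images
        "txt": len(list(p.glob("*.txt"))),  # exactly one timestamp file
    }
    counts["ok"] = counts["mkv"] == 1 and counts["txt"] == 1 and counts["png"] > 0
    return counts
```

Running this over every `recxx` folder before conversion avoids processing an incomplete recording.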
The raw image data will then be processed with the following goals in mind:
- only 1 out of every 10 annotation frames is kept, so that each selected frame differs from the previous one
- annotation #1: black and white (PSM arms/grippers, needle, thread = white; everything else black)
- annotation #2: black + 3 colors (PSM arms/grippers = white, needle = red, thread = green; everything else black)
- annotation #3: black + 4 colors (PSM arms = white, PSM grippers = blue, needle = red, thread = green; everything else black)
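The three annotation styles are essentially per-class recolorings of the AMBF annotation frame. The idea can be sketched with NumPy arrays in OpenCV's BGR convention; the raw AMBF class-to-color palette below is an assumption for illustration, not the actual palette used by `convert_ambf_standard.py`:

```python
import numpy as np

# Assumed AMBF palette (BGR); the real simulator palette may differ.
AMBF_CLASSES = {
    "psm_arm": (0, 0, 200),
    "gripper": (0, 200, 0),
    "needle":  (200, 0, 0),
    "thread":  (0, 200, 200),
}

# Annotation #3 targets: arms white, grippers blue, needle red, thread green (BGR).
TARGETS = {
    "psm_arm": (255, 255, 255),
    "gripper": (255, 0, 0),
    "needle":  (0, 0, 255),
    "thread":  (0, 255, 0),
}

def recolor(annotation: np.ndarray) -> np.ndarray:
    """Map each assumed AMBF class color to its annotation-#3 color; everything else stays black."""
    out = np.zeros_like(annotation)
    for name, src in AMBF_CLASSES.items():
        mask = np.all(annotation == np.array(src, dtype=annotation.dtype), axis=2)
        out[mask] = TARGETS[name]
    return out
```

Annotations #1 and #2 follow the same pattern with fewer target colors, and the 1-in-10 frame selection is simply a stride over the frame list (e.g. `frames[::10]`).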
```bash
cd ~/annotation_reformat/                         # navigate to the folder that contains this repo
python3 convert_ambf_standard.py -i ~/data/rec01  # point -i at the folder that contains the images
```
If applicable, repeat the above steps for all recordings.
In each folder of processed images, every raw image should correspond to 4 segmented images, for example:
Raw Image | ![]() |
---|---|
AMBF annotation | ![]() |
annotation #1 | ![]() |
annotation #2 | ![]() |
annotation #3 | ![]() |
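The raw-to-annotation pairing can be checked by filename; a sketch assuming a suffix naming convention such as `frame001_ambf.png`, `frame001_bw.png`, etc. (the actual suffixes produced by `convert_ambf_standard.py` may differ):

```python
from pathlib import Path

# Assumed suffixes for the AMBF annotation and the three converted styles.
SUFFIXES = ("_ambf", "_bw", "_3color", "_4color")

def missing_annotations(folder: str) -> list:
    """Return raw-image stems that lack any of the four expected annotation files."""
    p = Path(folder)
    stems = {f.stem for f in p.glob("*.png")}
    raw = {s for s in stems if not s.endswith(SUFFIXES)}
    return sorted(s for s in raw if any(s + suf not in stems for suf in SUFFIXES))
```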
This program provides a simple way to streamline the formatting of segmentation images collected from AMBF using OpenCV. I hope this is helpful!