uzh-rpg / sim2real_drone_racing

A Framework for Zero-Shot Sim2Real Drone Racing

Home Page: http://rpg.ifi.uzh.ch/research_drone_racing.html

License: MIT License

CMake 1.47% C++ 77.36% Python 21.06% Shell 0.11%
sim2real robotics deep-learning neural-network

sim2real_drone_racing's Introduction

Deep Drone Racing: From Simulation to Reality with Domain Randomization

This repo contains the implementation of a zero-shot sim2real method for drone racing.


For more information visit the project page: http://rpg.ifi.uzh.ch/research_drone_racing.html.

Citing

If you use this code in an academic context, please cite the following publication:

Paper: Deep Drone Racing: From Simulation to Reality with Domain Randomization

Video: YouTube

@article{loquercio2019deep,
  title={Deep Drone Racing: From Simulation to Reality with Domain Randomization},
  doi={10.1109/TRO.2019.2942989},
  author={Loquercio, Antonio and Kaufmann, Elia and Ranftl, Ren{\'e} and Dosovitskiy, Alexey and Koltun, Vladlen and Scaramuzza, Davide},
  journal={IEEE Transactions on Robotics},
  year={2019}
}

Installation

Requirements

The code was tested with Ubuntu 18.04 and ROS Melodic. Other OS and ROS versions may work but are not supported.

Step-by-Step Procedure

Use the following commands to create a new catkin workspace and a virtual environment with all the required dependencies.

export ROS_VERSION=melodic
mkdir drone_racing_ws
cd drone_racing_ws
export CATKIN_WS=./catkin_ddr
mkdir -p $CATKIN_WS/src
cd $CATKIN_WS
catkin init
catkin config --extend /opt/ros/$ROS_VERSION
catkin config --merge-devel
catkin config --cmake-args -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_FLAGS=-fdiagnostics-color
cd src

git clone https://github.com/uzh-rpg/sim2real_drone_racing.git
cd sim2real_drone_racing
cd ..
vcs-import < sim2real_drone_racing/dependencies.yaml
touch octomap/octovis/CATKIN_IGNORE

# Build and re-source the workspace
catkin build
cd ../..            # back to drone_racing_ws, so $CATKIN_WS resolves correctly
. $CATKIN_WS/devel/setup.bash

# Create your learning environment (at the workspace root)
virtualenv -p python2.7 ./droneflow
source ./droneflow/bin/activate

# If you have a GPU, uncomment the next line and comment out the CPU
# install below. You will need to have CUDA 10.0 installed for it to work.
#pip install tensorflow-gpu==1.13.1
pip install tensorflow==1.13.1

# Install required Python dependencies
cd $CATKIN_WS/src/sim2real_drone_racing
pip install -r python_dependencies.txt

Let's Race

Once you have installed the dependencies, you can fly in simulation with our pre-trained checkpoint. A GPU is not required for execution. Note, however, that if the network cannot run at at least 10 Hz, the drone will not fly successfully.
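As a quick sanity check of the 10 Hz requirement, you can time the network's forward pass before flying. The sketch below is illustrative only: `mock_inference` and the sample count are hypothetical stand-ins, not part of this repository; substitute your actual model call.

```python
import time

def mock_inference(image):
    """Hypothetical stand-in for the network forward pass; replace with your model call."""
    time.sleep(0.005)  # pretend the forward pass takes 5 ms
    return (0.0, 0.0, 0.0)

n = 20
start = time.time()
for _ in range(n):
    mock_inference(None)
rate_hz = n / (time.time() - start)
print("inference rate: %.1f Hz" % rate_hz)
assert rate_hz >= 10.0, "network too slow for reliable flight"
```

If the measured rate falls below 10 Hz, consider the GPU build of TensorFlow or reducing other load on the machine.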

Open a terminal and type:

cd drone_racing_ws
. ./catkin_ddr/devel/setup.bash
. ./droneflow/bin/activate
export CUDA_VISIBLE_DEVICES=''
roslaunch deep_drone_racing_learning net_controller_launch.launch

Open another terminal and type:

cd drone_racing_ws
. ./catkin_ddr/devel/setup.bash
. ./droneflow/bin/activate
roslaunch test_racing test_racing.launch

Train your own Sim2Real model

You can use the following commands to generate data in simulation and train your model on it. The trained checkpoint can then be used to control a physical platform on a race track.

Generate data

cd drone_racing_ws
. ./catkin_ddr/devel/setup.bash
. ./droneflow/bin/activate
roscd drone_racing/resources/scripts
python collect_data.py

You can change parameters (e.g. the number of iterations per background or gate texture) in the above script; the defaults should work well. Optionally, you can use the data we have already collected, available at this link.
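Conceptually, the script sweeps the randomization settings and logs labelled images for each run. The sketch below illustrates the loop structure only; the constants and dict fields are hypothetical, not the script's actual variables.

```python
import itertools

# Hypothetical counts; collect_data.py exposes similar parameters.
NUM_BACKGROUNDS = 3
NUM_GATE_TEXTURES = 2
RUNS_PER_SETTING = 2

samples = []
for bg, tex in itertools.product(range(NUM_BACKGROUNDS), range(NUM_GATE_TEXTURES)):
    for run in range(RUNS_PER_SETTING):
        # In the real script: randomize the scene, fly the expert policy,
        # and log (image, label) pairs to TRAIN_DIR.
        samples.append({"background": bg, "texture": tex, "run": run})

print(len(samples))  # 3 backgrounds x 2 textures x 2 runs = 12
```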

Train the Network

roscd deep_drone_racing_learner/src/ddr_learner

Modify the file train_model.sh to add the path to the validation data collected in the real world, which you can download from this link. Then run the following command to train the model.

./train_model.sh

Test the Network

Edit the following file to point to the checkpoint you just trained:

rosed deep_drone_racing_learning net_controller_launch.launch

The trained network can now be tested in an environment that was never observed at training time.

Open a terminal and run:

cd drone_racing_ws
. ./catkin_ddr/devel/setup.bash
. ./droneflow/bin/activate
export CUDA_VISIBLE_DEVICES=''
roslaunch deep_drone_racing_learning net_controller_launch.launch

Open another terminal and run:

cd drone_racing_ws
. ./catkin_ddr/devel/setup.bash
. ./droneflow/bin/activate
roslaunch test_racing test_racing.launch

sim2real_drone_racing's People

Contributors

foehnx, kelia


sim2real_drone_racing's Issues

Can't find the generated data

Hello there,

I am trying to use your code to generate data for training a CNN of my own. However, I can't seem to do it. Every time I run the collect_data.py script (in the correct directory), the process seems to run, but I can't find the generated data anywhere. The TRAIN_DIR path leads to a directory that remains empty.

In the command line, apart from the error messages caused by the joystick force feedback failing to open, there were these two error messages, and I thought perhaps they were the reason I can't generate data properly:

[ERROR] [1591312435.843784, 2911.803000]: Spawn service failed. Exiting.

This leads to the following red message:
[hummingbird/spawn_hummingbird-2] process has died [pid 7162, exit code 1, cmd /opt/ros/melodic/lib/gazebo_ros/spawn_model -param robot_description -urdf -x 0.0 -y 22.0 -z 0.1 -model hummingbird __name:=spawn_hummingbird __log:=/home/fechec/.ros/log/381c0d20-a41b-11ea-972f-080027ba532d/hummingbird-spawn_hummingbird-2.log].
log file: /home/fechec/.ros/log/381c0d20-a41b-11ea-972f-080027ba532d/hummingbird-spawn_hummingbird-2*.log

By looking up the log file, the error seems to come only from the spawn service. Here is what one iteration over the background pictures looks like:

[rospy.client][INFO] 2020-06-04 19:13:52,541: init_node, name[/hummingbird/spawn_hummingbird], pid[7162]
[xmlrpc][INFO] 2020-06-04 19:13:52,546: XML-RPC server binding to 0.0.0.0:0
[xmlrpc][INFO] 2020-06-04 19:13:52,546: Started XML-RPC server [http://fechec-VirtualBox:46397/]
[rospy.impl.masterslave][INFO] 2020-06-04 19:13:52,546: _ready: http://fechec-VirtualBox:46397/
[xmlrpc][INFO] 2020-06-04 19:13:52,547: xml rpc node: starting XML-RPC server
[rospy.init][INFO] 2020-06-04 19:13:52,550: ROS Slave URI: [http://fechec-VirtualBox:46397/]
[rospy.registration][INFO] 2020-06-04 19:13:52,550: Registering with master node http://localhost:11311
[rospy.init][INFO] 2020-06-04 19:13:52,651: registered with master
[rospy.rosout][INFO] 2020-06-04 19:13:52,651: initializing /rosout core topic
[rospy.rosout][INFO] 2020-06-04 19:13:52,663: connected to core topic /rosout
[rospy.simtime][INFO] 2020-06-04 19:13:52,667: initializing /clock core topic
[rospy.simtime][INFO] 2020-06-04 19:13:52,684: connected to core topic /clock
[rosout][INFO] 2020-06-04 19:13:52,698: Loading model XML from ros parameter robot_description
[rosout][INFO] 2020-06-04 19:13:52,780: Waiting for service /gazebo/spawn_urdf_model
[rospy.internal][INFO] 2020-06-04 19:13:52,937: topic[/rosout] adding connection to [/rosout], count 0
[rospy.internal][INFO] 2020-06-04 19:13:55,492: topic[/clock] adding connection to [http://fechec-VirtualBox:42541/], count 0
[rosout][INFO] 2020-06-04 19:13:55,514: Calling service /gazebo/spawn_urdf_model
[rosout][INFO] 2020-06-04 19:13:55,842: Spawn status: SpawnModel: Entity pushed to spawn queue, but spawn service timed out waiting for entity to appear in simulation under the name hummingbird
[rosout][ERROR] 2020-06-04 19:13:55,843: Spawn service failed. Exiting.
[rospy.core][INFO] 2020-06-04 19:13:55,845: signal_shutdown [atexit]
[rospy.internal][INFO] 2020-06-04 19:13:55,847: topic[/rosout] removing connection to /rosout
[rospy.internal][INFO] 2020-06-04 19:13:55,850: topic[/clock] removing connection to http://fechec-VirtualBox:42541/
[rospy.impl.masterslave][INFO] 2020-06-04 19:13:55,850: atexit
[rospy.internal][WARNING] 2020-06-04 19:13:55,853: Unknown error initiating TCP/IP socket to fechec-VirtualBox:35421 (http://fechec-VirtualBox:42541/): Traceback (most recent call last):
File "/opt/ros/melodic/lib/python2.7/dist-packages/rospy/impl/tcpros_base.py", line 563, in connect
self.local_endpoint = self.socket.getsockname()
AttributeError: 'NoneType' object has no attribute 'getsockname'

If you can help me solve this issue, I would greatly appreciate it. I have been looking around for quite a while without finding the problem.

Thank you very much!

Feng

P.S. I'm a big fan of your work! It's really cool :)

rpg_common not available

Hello guys.
Really amazing work on the paper. I wanted to check the code for some random environments but faced the following issues:

  1. The syntax for git clone doesn't work; it needs some keys or something, or an edit to the conventional method.
  2. Cannot find the repo rpg_common. Has it been replaced by something else?

Installing Problem

I have a problem in the installation step.
Every time I try to run:

catkin build

my machine stops working after ~2 minutes while building the rotors_gazebo part.
It seems that this part is very heavy. Has anybody encountered this problem as well?
Maybe my machine is not powerful enough?

Thanks a lot in advance!

How to generate a global trajectory?

If I modify the position of a door frame, I want a new global trajectory. In this case, how to generate a new global trajectory? I.e. How to generate global_trajectory.txt?

Training Epochs

How many epochs does the task need to train to completion?
I trained for 1000 epochs with the data I collected, but the results were poor. I also trained for 500 epochs with your dataset, and the results were also very poor.

No Image received

Environment:

  • Ubuntu 18.04
  • ROS Melodic
  • OpenCV 3.4.11
  • python 2.7
  • TensorFlow 1.13.1

Problem:
I have followed the Step-by-Step Procedure and built the environment successfully. However, when I start racing, I get the errors shown below (screenshot omitted).

I noticed three Error information:

(1)

Traceback (most recent call last):
File "/home/tyZhang/Documents/tyZhang/RPG/drone_racing_ws/catkin_ddr/src/sim2real_drone_racing/learning/deep_drone_racing_learning_node/nodes/deep_drone_racing_node.py", line 31, in
run_network()
File "/home/tyZhang/Documents/tyZhang/RPG/drone_racing_ws/catkin_ddr/src/sim2real_drone_racing/learning/deep_drone_racing_learning_node/nodes/deep_drone_racing_node.py", line 19, in run_network
network.run(sess)
File "/home/tyZhang/Documents/tyZhang/RPG/drone_racing_ws/catkin_ddr/src/sim2real_drone_racing/learning/deep_drone_racing_learning_node/src/Network/Network.py", line 82, in run
cv_input_image = bridge.imgmsg_to_cv2(data_camera)
File "/opt/ros/melodic/lib/python2.7/dist-packages/cv_bridge/core.py", line 163, in imgmsg_to_cv2
dtype, n_channels = self.encoding_to_dtype_with_channels(img_msg.encoding)
File "/opt/ros/melodic/lib/python2.7/dist-packages/cv_bridge/core.py", line 99, in encoding_to_dtype_with_channels
return self.cvtype2_to_dtype_with_channels(self.encoding_to_cvtype2(encoding))
File "/opt/ros/melodic/lib/python2.7/dist-packages/cv_bridge/core.py", line 91, in encoding_to_cvtype2
from cv_bridge.boost.cv_bridge_boost import getCvType
ImportError: libopencv_core.so.3.2: cannot open shared object file: No such file or directory

(2)

ERROR: cannot launch node of type [joy/joy_node]: joy
ROS path [0]=/opt/ros/melodic/share/ros
.....

(3)

process[hummingbird/drone_racing_node-10]: started with pid [25991]
/home/tyZhang/Documents/tyZhang/RPG/drone_racing_ws/catkin_ddr/devel/lib/drone_racing/drone_racing_node: error while loading shared libraries: libopencv_imgcodecs.so.3.2: cannot open shared object file: No such file or directory
[hummingbird/drone_racing_node-10] process has died [pid 25991, exit code 127, cmd /home/tyZhang/Documents/tyZhang/RPG/drone_racing_ws/catkin_ddr/devel/lib/drone_racing/drone_racing_node image_rgb:=/hummingbird/rgb_camera/camera_1/image_raw camera_info:=/hummingbird/vi_sensor_1/camera_depth/depth/camera_info state_estimate:=odometry_sensor1/odometry __name:=drone_racing_node __log:=/home/tyZhang/.ros/log/0e72b08c-eb83-11ea-a3da-3497f6dc6705/hummingbird-drone_racing_node-10.log].

What should I do next? Thanks.

It seems that no DAgger strategy is used when collecting the data and training, as described in the paper.

In the paper, you first let the expert policy fly 40 s to collect data and train the network for 10 epochs on the accumulated data. Then, in the next run, you use the trained network to navigate and use the expert policy to label those situations as augmented data; if the distance from the global trajectory is higher than a margin, you switch back to the expert policy. If the network needs fewer than 50 interventions from the expert to complete the track, you increase the margin by 0.5 m.

I have two questions here:

  1. I've followed the instructions posted in this repository to collect data and train the network, and I've also looked through some parts of the code. It seems that instead of the DAgger strategy, simply the normal data-collection and training pipeline is used here, in which no augmented data from partially-trained network executions is collected.
  2. In the DAgger strategy, why do you increase the margin if the trained network is doing well? Shouldn't we instead be stricter and decrease the margin, in order to make the trained network eventually perform well enough?
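For reference, the margin-based expert/student switching described above can be sketched as a toy loop. The deviation values and helper names here are made up for illustration; this is not the repository's implementation.

```python
def count_interventions(deviations, margin):
    """The expert takes over whenever the student strays beyond the margin."""
    return sum(1 for d in deviations if d > margin)

margin = 0.5                                 # metres from the global trajectory
lap_deviations = [0.1, 0.7, 0.3, 1.2, 0.4]   # hypothetical per-step distances
helps = count_interventions(lap_deviations, margin)
if helps < 50:      # fewer than 50 expert takeovers -> relax the margin by 0.5 m
    margin += 0.5
print(helps, margin)  # 2 interventions, margin relaxed to 1.0
```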

The data links in Readme

Hi guys, Thank you for your amazing work.

It seems the two links (collected real-world/simulation data) in the Readme are not available. Could you please check this? Thanks.

Build failed

Hi all,

OS: Ubuntu 18.04
ROS: Melodic
Python: 2.7

I'm getting the error below when I run catkin build:

________________________________________________________________________________________________________
Errors     << drone_racing:make /home/raghad/drone_racing_ws/catkin_ddr/logs/drone_racing/build.make.001.log
In file included from /opt/ros/melodic/include/ros/serialization.h:37:0,
                 from /opt/ros/melodic/include/ros/publisher.h:34,
                 from /opt/ros/melodic/include/ros/node_handle.h:32,
                 from /opt/ros/melodic/include/ros/ros.h:45,
                 from /home/raghad/drone_racing_ws/catkin_ddr/src/sim2real_drone_racing/drone_racing/drone_racing/include/drone_racing/drone_racing.h:5,
                 from /home/raghad/drone_racing_ws/catkin_ddr/src/sim2real_drone_racing/drone_racing/drone_racing/src/drone_racing.cpp:1:
/opt/ros/melodic/include/ros/message_traits.h: In instantiation of ‘static const char* ros::message_traits::MD5Sum<M>::value(const M&) [with M = double]’:
/opt/ros/melodic/include/ros/message_traits.h:254:102:   required from ‘const char* ros::message_traits::md5sum(const M&) [with M = double]’
/opt/ros/melodic/include/ros/publisher.h:116:38:   required from ‘void ros::Publisher::publish(const M&) const [with M = double]’
/home/raghad/drone_racing_ws/catkin_ddr/src/sim2real_drone_racing/drone_racing/drone_racing/src/drone_racing.cpp:388:48:   required from here
/opt/ros/melodic/include/ros/message_traits.h:125:14: error: request for member ‘__getMD5Sum’ in ‘m’, which is of non-class type ‘const double’
     return m.__getMD5Sum().c_str();
            ~~^~~~~~~~~~~
/opt/ros/melodic/include/ros/message_traits.h: In instantiation of ‘static const char* ros::message_traits::DataType<M>::value(const M&) [with M = double]’:
/opt/ros/melodic/include/ros/message_traits.h:263:104:   required from ‘const char* ros::message_traits::datatype(const M&) [with M = double]’
/opt/ros/melodic/include/ros/publisher.h:118:11:   required from ‘void ros::Publisher::publish(const M&) const [with M = double]’
/home/raghad/drone_racing_ws/catkin_ddr/src/sim2real_drone_racing/drone_racing/drone_racing/src/drone_racing.cpp:388:48:   required from here
/opt/ros/melodic/include/ros/message_traits.h:142:14: error: request for member ‘__getDataType’ in ‘m’, which is of non-class type ‘const double’
     return m.__getDataType().c_str();
            ~~^~~~~~~~~~~~~
make[2]: *** [CMakeFiles/drone_racing_base.dir/src/drone_racing.cpp.o] Error 1
make[1]: *** [CMakeFiles/drone_racing_base.dir/all] Error 2
make: *** [all] Error 2
cd /home/raghad/drone_racing_ws/catkin_ddr/build/drone_racing; catkin build --get-env drone_racing | catkin env -si  /usr/bin/make --jobserver-fds=6,7 -j; cd -
........................................................................................................
Failed     << drone_racing:make                                 [ Exited with code 2 ]                  
Failed    <<< drone_racing                                      [ 7.5 seconds ]                         
[build] Summary: 47 of 48 packages succeeded.                                                           
[build]   Ignored:   2 packages were skipped or are blacklisted.                                        
[build]   Warnings:  None.                                                                              
[build]   Abandoned: None.                                                                              
[build]   Failed:    1 packages failed.                                                                 
[build] Runtime: 20.9 seconds total.     

I followed the steps exactly, so I'm not sure where this is coming from.

Found a bug in the global trajectory generation, and fixed it.

If we set 'load_existing_trajectory' in main.yaml to false, we can generate a new trajectory based on the replaced gate positions instead of reading from a trajectory file that was generated for a different gate layout.

In order to do that, I made several minor changes in the code and found a typo (bug) in the code:

  1. In global_trajectory.cpp, GlobalTrajectory::generateGlobalTrajectory(), inside
    if (!load_existing_trajectory_):
    ...(some code)
    saveGlobalTrajectory();
    loadGlobalTrajectory();  // added line

  2. In global_trajectory.cpp, GlobalTrajectory::saveGlobalTrajectory(),
    delete this line:
    outfile_trajectory.open(filename_trajectory, std::ios_base::app);
    and use this line instead:
    outfile_trajectory.open(filename_trajectory, std::ofstream::out | std::ofstream::trunc);
    (in order to clear the content saved in the previous file)

  3. This is a typo (bug) that exists in the code and would break the global trajectory generation.
    In the rpg_quadrotor_control repository, in minimum_snap_trajectories.cpp,
    Eigen::VectorXd generateFVector(),
    delete this line:
    for (int k = 0; k < num_polynoms; k++)
    and use this line instead:
    for (int k = 0; k < num_polynoms - 1; k++)

Within the for loop there is a reference to way_points_1D(k + 1), and num_polynoms equals the number of waypoints, which is 14. The previous version would access way_points_1D(14), which is out of bounds; however, this does not raise any error at compile time, which makes the bug hard to find.
This is because in PolynomialTrajectory generateMinimumSnapRingTrajectory(), this line:
Eigen::VectorXd way_points_d = Eigen::VectorXd::Zero(num_waypoints);
defines way_points_d as an Eigen::VectorXd, which is later passed to the function
Eigen::VectorXd generateFVector().
When you access an element that is out of bounds, such as way_points_d(14), Eigen does not raise an error; instead it returns an arbitrary value that can be extremely large, eventually breaking the global trajectory generation.
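The off-by-one above can be reproduced in a few lines. Here is an illustrative NumPy version (not the Eigen code itself); NumPy raises an IndexError where Eigen in a release build silently reads past the buffer.

```python
import numpy as np

num_waypoints = 14
way_points_1D = np.zeros(num_waypoints)
num_polynoms = num_waypoints  # as in the bug report: polynoms == waypoints

caught = False
try:
    for k in range(num_polynoms):      # buggy bound: k reaches 13 ...
        _ = way_points_1D[k + 1]       # ... so this reads index 14
except IndexError:
    caught = True                      # NumPy catches it; Eigen would not

for k in range(num_polynoms - 1):      # fixed bound
    _ = way_points_1D[k + 1]           # highest index touched is 13
print("buggy loop out of bounds:", caught)
```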

Workspace Build Problem

I have a problem in the installation step.
When I use "catkin build" to build the workspace, the "drone_racing" package always fails to build.
The error log is as follows:
(screenshot: 2023-03-29 15:51:16)

Does the compilation failure of this package affect the following parts, and how can I solve this problem?

No pretrained checkpoint file provided.

There is no pretrained checkpoint file provided.

So, I got the error:
NotFoundError: /home/control02/drone_racing_ws/catkin_ddr/src/sim2real_drone_racing/learning/deep_drone_racing_learner/src/ddr_learner/results/best_model; No such file or directory

roslaunch failed

Thank you for your fantastic work.

I'm on Ubuntu 18.04 with ROS Melodic, but I got some problems while launching the net controller within the virtualenv 'droneflow'. Could you please give me some advice? Thanks in advance.

(droneflow) ubuntu@dbfb2a9cb529:~/catkin_wss/drone_racing_ws$ roslaunch deep_drone_racing_learning  net_controller_launch.launch
... logging to /home/ubuntu/.ros/log/9d640d72-67ff-11ed-bfdc-0242ac110002/roslaunch-dbfb2a9cb529-167807.log
Checking log directory for disk usage. This may take a while.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.

started roslaunch server http://dbfb2a9cb529:39213/

SUMMARY
========

PARAMETERS
 * /rosdistro: melodic
 * /rosversion: 1.14.13

NODES
  /
    deep_drone_racing_learning (deep_drone_racing_learning/deep_drone_racing_node.py)

auto-starting new master
process[master]: started with pid [167817]
ROS_MASTER_URI=http://localhost:11311

setting /run_id to 9d640d72-67ff-11ed-bfdc-0242ac110002
process[rosout-1]: started with pid [167828]
started core service [/rosout]
process[deep_drone_racing_learning-2]: started with pid [167831]
Traceback (most recent call last):
  File "/home/ubuntu/catkin_wss/drone_racing_ws/catkin_ddr/src/sim2real_drone_racing/learning/deep_drone_racing_learning_node/nodes/deep_drone_racing_node.py", line 4, in <module>
    from Network import Network
  File "/home/ubuntu/catkin_wss/drone_racing_ws/catkin_ddr/src/sim2real_drone_racing/learning/deep_drone_racing_learning_node/src/Network/Network.py", line 13, in <module>
    from ddr_learner.models.base_learner import TrajectoryLearner
  File "/home/ubuntu/catkin_wss/drone_racing_ws/catkin_ddr/src/sim2real_drone_racing/learning/deep_drone_racing_learner/src/ddr_learner/models/base_learner.py", line 10, in <module>
    from keras.utils.generic_utils import Progbar
  File "/home/ubuntu/catkin_wss/drone_racing_ws/droneflow/local/lib/python2.7/site-packages/keras/__init__.py", line 20, in <module>
    from keras import distribute
  File "/home/ubuntu/catkin_wss/drone_racing_ws/droneflow/local/lib/python2.7/site-packages/keras/distribute/__init__.py", line 18, in <module>
    from keras.distribute import sidecar_evaluator
  File "/home/ubuntu/catkin_wss/drone_racing_ws/droneflow/local/lib/python2.7/site-packages/keras/distribute/sidecar_evaluator.py", line 195
    f"{_CHECKPOINT_TIMEOUT_SEC} seconds. "
    ^
SyntaxError: invalid syntax
[deep_drone_racing_learning-2] process has died [pid 167831, exit code 1, cmd /home/ubuntu/catkin_wss/drone_racing_ws/catkin_ddr/src/sim2real_drone_racing/learning/deep_drone_racing_learning_node/nodes/deep_drone_racing_node.py --f=0.5 --ckpt_file=/home/ubuntu/catkin_wss/drone_racing_ws/catkin_ddr/src/sim2real_drone_racing/learning/deep_drone_racing_learner/src/ddr_learner/results/best_model/navigation_model cnn_predictions:=/cnn_out/traj state_change:=/hummingbird/state_change camera:=/hummingbird/rgb_camera/camera_1/image_raw state_estimate:=/hummingbird/state_estimate __name:=deep_drone_racing_learning __log:=/home/ubuntu/.ros/log/9d640d72-67ff-11ed-bfdc-0242ac110002/deep_drone_racing_learning-2.log].
log file: /home/ubuntu/.ros/log/9d640d72-67ff-11ed-bfdc-0242ac110002/deep_drone_racing_learning-2*.log
^C[rosout-1] killing on exit
[master] killing on exit
shutting down processing monitor...
... shutting down processing monitor complete
done
