Comments (4)
Hello,
We split the images from the training set into train/val and used images from folders 003, 008, and 010 as the test set. Images from folders 011-014 are special: they were recorded by the original authors to test the performance of DNNs in extreme scenarios like left and right turns (see the info.txt file in each of those folders for more information).
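The folder-to-split assignment above can be sketched as a small helper. The folder names come from the thread; the function name and "special" label are illustrative, not from the redtail scripts.

```python
# Illustrative sketch of the split described above; assign_split and the
# "special"/"trainval" labels are hypothetical names, not redtail's own code.
TEST_FOLDERS = {"003", "008", "010"}            # held-out test sessions
SPECIAL_FOLDERS = {"011", "012", "013", "014"}  # extreme-scenario recordings

def assign_split(folder_name):
    """Map a recording-session folder to its dataset split."""
    if folder_name in TEST_FOLDERS:
        return "test"
    if folder_name in SPECIAL_FOLDERS:
        return "special"
    return "trainval"  # everything else is later split into train/val
```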
As for the accuracy: as mentioned in our paper/slides, accuracy alone is not a very good indicator of how a DNN will perform for navigation purposes in the real world. For example, the ResNet-18 CE model (i.e. ResNet-18 with standard cross-entropy loss) achieved around 92% accuracy, the highest among all the models we trained, but did not do very well on the test trail - its autonomy score was around 88% (see slide 21 from our GTC talk).
Unfortunately, we could not find a good, reproducible, and generic way to measure the autonomy score; it is all pretty subjective.
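For context, one common formulation of such a score comes from NVIDIA's PilotNet work, which charges each human intervention a fixed time penalty. The sketch below implements that formulation; it is not necessarily how the redtail authors scored their trail runs (they note above that their measurement was subjective), and the 6-second default is the PilotNet value, used here only as an assumption.

```python
def autonomy_score(num_interventions, elapsed_seconds, penalty_seconds=6.0):
    """PilotNet-style autonomy: percent of time the vehicle drove itself.

    Each intervention is charged a fixed penalty (6 s in the PilotNet
    report). This is an illustrative metric, not redtail's exact method.
    """
    if elapsed_seconds <= 0:
        raise ValueError("elapsed_seconds must be positive")
    score = (1.0 - num_interventions * penalty_seconds / elapsed_seconds) * 100.0
    return max(score, 0.0)  # clip: many interventions can't go below 0%
```

For example, 10 interventions over a 10-minute run yields a 90% autonomy score under this definition.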
from redtail.
Thanks for answering my question!
So you mean the images from 003, 008, and 010 were used as the test set to get accuracy. However, in the wiki you set 003, 008, and 010 as the validation set. I am confused about this. Are 003, 008, and 010 the test set or the validation set? If they are the validation set, what dataset is the test set?
By the way, I also want to train TrailNet from scratch, but it's difficult for me to create a dataset. So, could you provide the lateral offset dataset?
Thank you for your time!
We started with 003, 008, and 010 as our test set and an 85%/15% train/val split of the training set. In an ideal world, you train and fine-tune your model (hyperparameter search, etc.) on the train/val split only, then run the "final" model on the test set exactly once. In the real world you keep changing and improving the models, so your test set eventually becomes a validation set - that's why we decided to call it validation in our scripts.
For every model we did hyperparameter tuning only on the train/val split and tested on the test set once, but we had a lot of models.
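An 85%/15% train/val split like the one described above can be done with a seeded shuffle; this is a generic sketch, not the redtail scripts' actual implementation, and the function name is made up.

```python
import random

def split_train_val(filenames, val_fraction=0.15, seed=0):
    """Shuffle a file list and split it into (train, val) lists.

    A fixed seed keeps the split reproducible across runs, which matters
    when you tune hyperparameters against the same validation set.
    """
    rng = random.Random(seed)
    files = list(filenames)
    rng.shuffle(files)
    n_val = int(len(files) * val_fraction)
    return files[n_val:], files[:n_val]
```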
As for the lateral translation dataset: we haven't released the data, mostly due to the internal review process, which is more involved for data than for code. However, we released complete instructions and scripts which should allow you to collect your own dataset and train the translation head of the model. There is nothing special about our dataset - you can collect similar data in your nearest park or forest.
You can collect data with a rig similar to this one: https://github.com/NVIDIA-Jetson/redtail/wiki/Datasets which uses GoPro 4/5 cameras. Please note: for better results and to be less camera-dependent at runtime, you need to calibrate your rig cameras and undistort the collected footage before training (see the link). This way your robot's camera does not need to be the same model as the dataset-collection camera.
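In practice, calibration and undistortion are done with standard tools such as OpenCV's `cv2.calibrateCamera` and `cv2.undistort` against checkerboard images. To show what those tools estimate, here is a minimal, dependency-light sketch of the radial part of the Brown-Conrady lens model (the dominant distortion for wide-angle cameras like GoPros); undistortion inverts this mapping, usually iteratively. The function name and coefficients are illustrative.

```python
import numpy as np

def apply_radial_distortion(points, k1, k2):
    """Apply the radial term of the Brown-Conrady lens model.

    `points` are normalized image coordinates, shape (N, 2); k1 and k2
    are the radial coefficients a calibration tool would estimate.
    A distorted point is the ideal point scaled by 1 + k1*r^2 + k2*r^4.
    """
    pts = np.asarray(points, dtype=float)
    r2 = np.sum(pts ** 2, axis=1, keepdims=True)  # squared radius per point
    factor = 1.0 + k1 * r2 + k2 * r2 ** 2
    return pts * factor
```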
Related Issues (20)
- camera image not received for gscam kind camera in trailnet_debug_gscam.launch/trailnet_debug_zed_gscam.launch
- stereoDNN resnet18 test fails HOT 2
- gscam problem with no video shown, even not shown in rqt-image window, not even gst-launch on host machine HOT 1
- tx2 onboard connection with px4 flushed by apm not connected even with mavros HOT 11
- gscam donot stream on nano,but do stream on tx2
- lateral offset dataset
- Failed with error code 1 in /tmp/pip-build-U_9LbZ/numpy/
- Extended Redtail implementation for Arducopter HOT 7
- Training Resnet18_2D
- Error Loading TensorRT plan for ResNet-18_2D HOT 1
- Testing in Simulator
- Regarding tensorRT of TrailNet_SResNet-18 HOT 1
- Takeoff Mode Initiated Mid-air when starting px4_controller node in Navigate Mode HOT 2
- Unable to complete build_redtail_image.sh script HOT 3
- clarification on difference between visionworks and redtail? HOT 1
- Dataset used for training HOT 2
- The segmentation effect in your video HOT 1
- Jetpack 4.4 caffe_ros node Assertion failed HOT 3
- 2.4 GHz range cut due to Jetson RF noise HOT 6
- Could not find a configuration file for package "OpenCV"