
Optimal Trajectory Generation in the Duckietown Environment

Introduction

The goal of the project is to implement a trajectory generation scheme. The approach reduces the search space by optimizing specific cost functions that yield polynomial trajectories, coupled with a feedback linearization control, to navigate the Duckietown environment, a simulator providing customizable city-like scenarios with lanes and intersections. In this setting, the vehicle is controlled using information from the monocular camera mounted on top of it, which recognizes the center lane and thereby defines the local Frenet frame. The simulations run in the Duckietown simulator (based on OpenAI Gym) using Python.
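As a rough illustration of the kind of polynomial trajectories such an optimization yields (this is not the project's planner; the boundary values, duration, and function name below are invented for the example), a quintic in the lateral Frenet coordinate can be computed directly from start and goal boundary conditions:

import numpy as np

def quintic_coeffs(d0, dd0, ddd0, dT, ddT, dddT, T):
    # Coefficients of d(t) = c0 + c1*t + ... + c5*t^5 matching position,
    # velocity and acceleration at t = 0 and t = T.
    A = np.array([
        [1, 0, 0,    0,      0,        0],
        [0, 1, 0,    0,      0,        0],
        [0, 0, 2,    0,      0,        0],
        [1, T, T**2, T**3,   T**4,     T**5],
        [0, 1, 2*T,  3*T**2, 4*T**3,   5*T**4],
        [0, 0, 2,    6*T,    12*T**2,  20*T**3],
    ], dtype=float)
    b = np.array([d0, dd0, ddd0, dT, ddT, dddT], dtype=float)
    return np.linalg.solve(A, b)

# Example: return from a 0.3 m lateral offset to the lane center in 2 s.
c = quintic_coeffs(0.3, 0.0, 0.0, 0.0, 0.0, 0.0, T=2.0)
t = np.linspace(0.0, 2.0, 50)
d = np.polyval(c[::-1], t)  # lateral offset sampled along the maneuver

Planners of this family typically evaluate a jerk- and time-based cost over many such candidates and follow the cheapest collision-free one.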


Installation

conda create -n gym-duckietown python=3.8.5
conda activate gym-duckietown
pip install -r requirements.txt

Python Simulator Environment

We simulated an environment whose trajectory is similar to the Duckietown circuit and tested the planner, the controller and the unicycle in three cases: without obstacles, with fixed obstacles and with moving obstacles. Since the camera is not available in this setting, we assumed a sensor that detects obstacles whenever they lie within the visual range.
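A minimal sketch of such an idealized sensor, assuming planar robot and obstacle positions are known (the function name and the 1.5 m range are illustrative, not taken from the repository):

import numpy as np

def visible_obstacles(robot_xy, obstacles_xy, visual_range=1.5):
    # Idealized sensor: report only the obstacles whose Euclidean distance
    # from the robot is below the visual range.
    robot_xy = np.asarray(robot_xy, dtype=float)
    obstacles_xy = np.asarray(obstacles_xy, dtype=float).reshape(-1, 2)
    dists = np.linalg.norm(obstacles_xy - robot_xy, axis=1)
    return obstacles_xy[dists < visual_range]

# Example: only the obstacle at (1.0, 0.5) falls inside the 1.5 m range.
print(visible_obstacles([0.0, 0.0], [[1.0, 0.5], [3.0, 0.0]]))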

Run

Run without obstacles

python tester.py -t planner_full -p --config ./config/config_circle2D.json

Run with fixed obstacles

python tester.py -t planner_obstacles -p --config ./config/config_circle2D.json

Run with moving obstacles

python tester.py -t planner_moving_obstacles -p --config ./config/config_circle2D.json

Duckietown

No prior information about the environment is available: everything is learned through a monocular camera placed on the robot. In our work we extract this information using two filters, CentralFilter() and LateralFilter() (implementation: see here), for the central yellow line and the lateral white lines respectively.
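The project's filters are the ones linked above; purely as an illustration of this kind of color filtering, the sketch below thresholds a BGR camera frame in HSV space for yellow and white regions (the HSV bounds are guesses, not the tuned values used in the repository):

import cv2
import numpy as np

def yellow_and_white_masks(bgr_image):
    # Binary masks for the yellow center line and the white side lines,
    # obtained with simple HSV thresholds (illustrative values only).
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    yellow = cv2.inRange(hsv, np.array([20, 80, 80]), np.array([35, 255, 255]))
    white = cv2.inRange(hsv, np.array([0, 0, 180]), np.array([180, 40, 255]))
    return yellow, white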

The filtered images are then passed to a Segmentator() (implementation: see here), which finds the boundaries of the regions detected by each filter.
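Again only as an illustration of the idea (the actual Segmentator is linked above), region boundaries can be extracted from a binary mask with a contour search:

import cv2

def region_boundaries(mask):
    # Outer boundaries of the connected regions in a binary mask
    # (OpenCV 4.x signature of findContours).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return contours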

After that, everything is passed to a SemanticMapper() (implementation: see here), which classifies the objects detected by each filter. For each object, the area, the center with respect to the robot's position and an inertia tensor are computed; these quantities are compared against thresholds to assign a type. In this way we determine whether an object detected, for example, by the yellow filter is a yellow line, a duck or unknown. The mapper returns the list of objects together with polynomial fits of the yellow and white lines.
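The sketch below shows, in simplified form, the kind of quantities involved: contour moments give the area, centroid and second-moment (inertia) matrix, and made-up thresholds turn them into a label. The thresholds, labels and elongation measure are invented for the example; the real ones are those of the repository's SemanticMapper.

import cv2
import numpy as np

def describe_blob(contour):
    # Area, centroid and 2x2 matrix of second central moments of a region.
    m = cv2.moments(contour)
    area = m["m00"]
    cx, cy = (m["m10"] / area, m["m01"] / area) if area > 0 else (0.0, 0.0)
    inertia = np.array([[m["mu20"], m["mu11"]],
                        [m["mu11"], m["mu02"]]])
    return area, (cx, cy), inertia

def classify_yellow_blob(area, inertia, line_max_area=500.0):
    # Toy rule: small, elongated blobs are line dashes; big blobs are ducks.
    elongation = np.linalg.det(inertia) / (np.trace(inertia) ** 2 + 1e-9)
    if area < line_max_area and elongation < 0.05:
        return "yellow_line"
    if area >= line_max_area:
        return "duck"
    return "unknown"

# The line points themselves are summarized by a polynomial fit, e.g.
# line_coeffs = np.polyfit(xs, ys, deg=2)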

The objects are then given as input to the ObstacleTracker() (implementation: see here), which establishes, based on their type, whether they are obstacles or simply lane lines.
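A minimal illustration of this last step, with invented type labels standing in for the repository's actual ones:

def split_obstacles_and_lines(typed_objects):
    # Separate detected objects into obstacles and lane lines by type label.
    obstacle_types = {"duck", "cone", "robot", "unknown"}
    obstacles = [o for o in typed_objects if o["type"] in obstacle_types]
    lines = [o for o in typed_objects if o["type"] not in obstacle_types]
    return obstacles, lines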

Run Duckietown

We analyzed two scenarios: an environment without obstacles and an environment with obstacles.

Run without obstacles

The Duckietown environment is loop_empty.

python tester.py -t dt_mapper_semantic

Run with obstacles

The Duckietown environment is a modified version of loop_obstacles.

python tester.py -t dt_obstacles

