
USF Control of Mobile Robot Labs (Fall 2023)

This repository is an extension of the project framework FAIRIS-Lite, enabling users to implement navigational control logic for robots in the Webots simulation environment. It includes the Python controller files, reports, and videos that I created for the various lab tasks required by the Fall 2023 Control of Mobile Robots elective course at USF.

The controller files are used to simulate robot motion and sensor readings in the Webots development environment. The free, open-source Webots simulator can be found here.

The controller files are the main deliverables for each lab and can be found in WebotsSim/controllers.

Each lab is explained in more detail in the following sections:

Lab 1 - Kinematics

Objective

The objective for this lab is to use kinematics to move a 4-wheeled differential-drive robot through a set of predefined waypoints. This is done in the Webots simulator, which executes Python code as instructions for the robot to follow.

The waypoints through the maze are displayed as follows:

Figure 1: Maze file with predefined waypoints the robot must navigate through

Accomplishing this task required several functions that make the robot perform straight-line, curved-line, and rotation motions. More information and the calculations for this lab can be found in this report.
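
As an illustration of what those motion functions involve, here is a minimal sketch of the underlying differential-drive wheel-speed math. The wheel radius, axle length, and function names are placeholder assumptions, not values from the actual lab controller.

```python
import math

# Assumed robot parameters (placeholders, not the actual robot's values)
WHEEL_RADIUS = 0.043   # meters
AXLE_LENGTH = 0.265    # meters (distance between left and right wheel pairs)

def wheel_speeds_straight(v):
    """Both sides spin at the same rate for straight-line motion."""
    phi = v / WHEEL_RADIUS              # wheel angular rate in rad/s
    return phi, phi

def wheel_speeds_curve(v, radius):
    """Inner/outer wheel rates for an arc of the given turning radius (left turn)."""
    omega = v / radius                  # body angular velocity
    v_left = omega * (radius - AXLE_LENGTH / 2)
    v_right = omega * (radius + AXLE_LENGTH / 2)
    return v_left / WHEEL_RADIUS, v_right / WHEEL_RADIUS

def wheel_speeds_rotate(omega):
    """Wheels spin in opposite directions to rotate in place."""
    v = omega * (AXLE_LENGTH / 2)
    return -v / WHEEL_RADIUS, v / WHEEL_RADIUS
```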

Here is the corresponding Python controller for lab 1: Lab 1 Controller

The video below shows Lab 1 in action as the robot follows the waypoints described in Figure 1.

lab1-maze-navigation.mov

Lab 2 - PID and Wall Following

Objective

This lab requires the use of proportional gain to control the speed of the robot based on its distance from the wall. It also combines proportional gain with LiDAR scans to detect, avoid, and follow walls.

Figure 2: A flowchart describing the process of proportional gain

Proportional gain control works by first finding the distance error, e(t), calculated by subtracting the target distance, r(t), from the current LiDAR distance sensor reading, y(t). From the distance error you obtain the control signal, u(t), a velocity directly proportional to the error: choose a constant Kp and multiply it by the error, u(t) = Kp * e(t). Passing this control signal through a saturation function keeps the speed within the robot's limits by clamping it at the extremes.
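
A minimal sketch of this control law is shown below; the MAX_SPEED limit and Kp value are placeholder assumptions rather than the constants used in the lab controller.

```python
MAX_SPEED = 6.28   # assumed wheel-speed limit in rad/s (placeholder)
KP = 1.0           # proportional gain constant (tuned per lab)

def saturate(u, limit=MAX_SPEED):
    """Clamp the control signal to the robot's velocity limits."""
    return max(-limit, min(limit, u))

def proportional_control(y, r, kp=KP):
    """Velocity command proportional to the distance error e(t) = y(t) - r(t)."""
    e = y - r               # distance error: current reading minus target
    u = kp * e              # control signal
    return saturate(u)      # keep the speed within the robot's limits
```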

The second part of this lab required implementing wall-following in combination with proportional gain control. Figure 3 below visualizes how wall-following works: proportional gain slows the robot down as it gets closer to a wall, and the wall-following algorithm steers the robot back toward the center line, essentially avoiding the walls.

Figure 3: Visualization of the wall-following algorithm
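
One simple way to realize the steering correction (a sketch under assumptions, not the lab's exact controller) is to apply a proportional term to the difference between the left and right LiDAR readings:

```python
def wall_follow_speeds(left_dist, right_dist, forward_speed,
                       kp_steer=0.5, max_speed=6.28):
    """Steer back toward the center line using the left/right LiDAR difference.

    kp_steer and max_speed are placeholder constants, not the lab's values.
    """
    def clamp(v):
        return max(-max_speed, min(max_speed, v))

    error = left_dist - right_dist        # > 0: robot has drifted toward the right wall
    correction = kp_steer * error         # proportional steering term
    left_speed = clamp(forward_speed - correction)    # slow the left wheel to turn left
    right_speed = clamp(forward_speed + correction)
    return left_speed, right_speed
```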

A more in-depth overview of Lab 2 can be found in this report.

Here are the corresponding Python controllers for Lab 2: Task 1, Task 2.

The video below shows Lab 2 Task 2 in action with the robot navigating using proportional gain and wall-following:

PID-Wall-Following-Lab-2.mov

Lab 3 - Bug0 Algorithm

Objective

This lab implements bug0, a search algorithm that navigates the robot to a specified goal. It uses the onboard camera to identify the goal and the LiDAR scanner for wall avoidance. If the camera detects the goal, the robot performs straight-line motion towards it. If it is blocked by an obstacle, it performs wall avoidance until the goal is visible again.

Figure 4: Visualization of the path followed by the robot using bug0

In Figure 4, the blue shapes represent obstacles and the orange line is the path the robot would follow. From q-start to q-H1 the robot detects the goal, which in this case is a tall structure visible from any distance. At q-H1, also known as a "hit point", the robot's onboard cameras lose sight of the goal because it is blocked by an obstacle. In this situation, the robot performs obstacle avoidance until the goal is visible again. At q-L1, the "leave point", the robot's cameras detect the goal again and it initiates straight-line motion to q-goal.

The following stages describe the bug0 algorithm:

  1. Head towards the goal (if not blocked by an obstacle).
  2. If an obstacle is encountered, ALWAYS turn left or ALWAYS turn right (one fixed choice).
  3. Follow the obstacle until the robot can perform straight-line motion to the goal at a leave point.
  4. Repeat the above stages until the goal is reached (see the sketch after this list).
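
A minimal sketch of this loop is shown below. The helper callables (goal_visible, front_blocked, at_goal, drive_toward_goal, follow_wall_right) are hypothetical stand-ins for the controller's camera and LiDAR routines; only robot.step() is the real Webots API.

```python
def bug0(robot, timestep, goal_visible, front_blocked, at_goal,
         drive_toward_goal, follow_wall_right):
    """Bug0 loop: head for the goal whenever it is visible and unobstructed,
    otherwise follow the obstacle with a fixed turn direction (right here).

    All the callables are assumed stand-ins for the controller's camera and
    LiDAR routines; `robot` is the Webots Robot instance.
    """
    while robot.step(timestep) != -1:        # advance the Webots simulation
        if at_goal():
            break                            # q-goal reached
        if goal_visible() and not front_blocked():
            drive_toward_goal()              # straight-line motion toward the goal
        else:
            follow_wall_right()              # wall-following / obstacle avoidance
```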

A more in-depth overview explaining the programming and calculations for lab 3 can be found in this report.

Here are the Python controllers for both tasks of Lab 3: Task 1, Task 2.

The video below shows the Lab 3 Task 2 controller in action. The robot follows the bug0 algorithm, set to take right turns when it encounters an obstacle:

lab3-task2-bug0.mov

Lab 4 - Localization

Objective

The objective for this lab is to perform trilateration calculations to estimate coordinates and cell location of the robot. The robot uses its onboard cameras and sensors to scan its surroundings, compute its pose, and navigate the maze while updating information for each new cell. The robot uses these localization methods to mark cells as visited and find unvisited cells. Additionally, it performs wall-based localization using probability and a sensor model.

As seen in Figure 5 below, trilateration can mathematically calculate the robot's x and y position. The center (x, y) coordinates of each circle and their distances from the robot (the radii) are passed as inputs to the trilateration function. The position of the robot is computed at the intersection of the three circles:

Figure 5: Visual representation of how trilateration works to find robot position
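
A minimal sketch of a trilateration solver using the standard linearization (subtracting the first circle's equation from the other two and solving the resulting 2x2 linear system); the function name and the use of NumPy are assumptions:

```python
import numpy as np

def trilaterate(p1, r1, p2, r2, p3, r3):
    """Return the (x, y) intersection of three circles.

    p1..p3 are the circle centers (landmark coordinates) and r1..r3 the
    measured distances (radii) from the robot to each landmark.
    """
    x1, y1 = p1; x2, y2 = p2; x3, y3 = p3
    # Subtracting circle 1's equation from circles 2 and 3 gives two linear equations:
    A = np.array([[2 * (x2 - x1), 2 * (y2 - y1)],
                  [2 * (x3 - x1), 2 * (y3 - y1)]])
    b = np.array([r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2,
                  r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2])
    x, y = np.linalg.solve(A, b)
    return x, y

# Example: landmarks at (0, 0), (4, 0), (0, 4), each measured ~2.83 m away:
# print(trilaterate((0, 0), 2.83, (4, 0), 2.83, (0, 4), 2.83))  # ~ (2.0, 2.0)
```

With three non-collinear circle centers the linear system has a unique solution, which is the intersection point illustrated in Figure 5.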

Upon finding the cell the robot is currently in, the program marks that cell as visited if it is empty. The goal of this lab is to mark all cells as visited. To do this more efficiently, if the current cell is already visited, the program finds the next empty cell in the map. Figure 6 below shows the algorithm that moves the robot to the next empty cell:

Figure 6: Visual representation of how the robot finds and moves to the next empty cell

As seen in Figure 6, the robot is currently at cell 9. Before the robot moves, the program finds the next empty cell by iterating through the array of cells starting at index 0. The target is locked to cell 6, as it is the next empty cell. The robot then compares its position to the target and navigates to cell 6 step by step (a sketch of this logic follows the list):

  1. If the target row < the current row, the robot rotates to a heading of 90 deg from east and moves one cell (next = current - 4).
  2. If the target row == the current row and the target cell > the current cell, the robot rotates to a heading of 0 deg from east and moves one cell (next = current + 1).
  3. If the target cell == the current cell, the robot marks the current cell as visited.
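
A minimal sketch of this next-cell logic, assuming a 4x4 maze with cells numbered 1-16 row by row (so the previous row is current - 4); the function names and the generalization to all four headings are assumptions:

```python
GRID_WIDTH = 4   # assumed 4x4 maze with cells numbered 1..16 row by row

def next_empty_cell(visited):
    """Return the lowest-numbered unvisited cell, or None if all are visited.

    `visited` is a list of booleans indexed from 0 (cell 1 is visited[0]).
    """
    for index, seen in enumerate(visited):   # iterate starting at index 0
        if not seen:
            return index + 1
    return None

def step_toward(current, target):
    """Return (heading_deg_from_east, next_cell) for one step toward the target.

    Follows the lab's convention: 90 deg from east moves to the previous row
    (current - 4), 0 deg from east moves to the next cell in the row
    (current + 1); the 270/180 cases are the symmetric generalization.
    """
    cur_row = (current - 1) // GRID_WIDTH
    tgt_row = (target - 1) // GRID_WIDTH
    if tgt_row < cur_row:
        return 90, current - GRID_WIDTH    # one cell toward the lower-numbered row
    if tgt_row > cur_row:
        return 270, current + GRID_WIDTH   # generalization: the opposite direction
    if target > current:
        return 0, current + 1              # one cell forward within the row
    if target < current:
        return 180, current - 1            # generalization: the opposite direction
    return None, current                   # already at the target cell
```

For the Figure 6 walk-through, step_toward(9, 6) returns (90, 5), step_toward(5, 6) then returns (0, 6), and the cell is marked visited once current equals target.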

Here is the Python controller for this task: Task 1.

A video of the robot performing trilateration calculations to determine pose and mark all cells in the map as visited is shown below:

lab4-task1-localization-find-next-cell.mov

The second part of this lab involves using Bayes' theorem to determine move and stay probabilities for the robot. Using a given sensor model, the robot compares its calculated sensor readings to the actual environment. The motion and sensor models are provided in Figure 7 below:

Figure 7: Motion and sensor model for probability calculations

Based on the motion model in Figure 7, 0.8 is the probability that a forward move command actually moves the robot and 0.2 is the probability that it ends up staying in place. In the same figure, the sensor model gives the probability of a wall being present for the current LiDAR reading. In the sensor model, s is the given value and z is the calculated value; 0 means "no wall" and 1 means "wall". Using this information, the robot can calculate move and stay probabilities while taking into account the current and next cell configurations.
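
A minimal sketch of the Bayes update this describes. The motion-model prior (0.8/0.2) comes from the text, but the sensor-model probabilities p(z | s) and the wall-configuration layout below are placeholder assumptions standing in for the table in Figure 7:

```python
# Motion model from Figure 7: a forward command moves the robot with
# probability 0.8 and leaves it in place with probability 0.2.
P_MOVE, P_STAY = 0.8, 0.2

# Placeholder sensor model p(z | s): probability of measuring z (0 = "no wall",
# 1 = "wall") given the true wall state s. The actual values come from Figure 7.
P_Z_GIVEN_S = {
    (0, 0): 0.7, (1, 0): 0.3,   # assumed values, not the lab's table
    (0, 1): 0.1, (1, 1): 0.9,
}

def move_stay_probabilities(z_readings, walls_if_stayed, walls_if_moved):
    """Posterior probability that the robot moved vs. stayed, given LiDAR readings.

    The three arguments are lists of 0/1 values, one per sensed direction
    (hypothetical layout, e.g. [left, front, right, back]).
    """
    likelihood_stay = 1.0
    likelihood_move = 1.0
    for z, s_stay, s_move in zip(z_readings, walls_if_stayed, walls_if_moved):
        likelihood_stay *= P_Z_GIVEN_S[(z, s_stay)]
        likelihood_move *= P_Z_GIVEN_S[(z, s_move)]
    # Bayes' theorem with the motion model as the prior, then normalize.
    p_move = P_MOVE * likelihood_move
    p_stay = P_STAY * likelihood_stay
    total = p_move + p_stay
    return p_move / total, p_stay / total
```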

A more in-depth overview for both lab 4 task 1 and task 2 can be found in this report.

Here is the Python controller for this task: Task 2.

A video of the robot analyzing wall-configurations and calculating move and stay probabilities is shown below:

motion-sensor-model-lab4-task2.mov

Lab 5 - Mapping

Objective

The objective for this lab is to create an occupancy grid that is used to generate a map of the robot's surroundings. The robot uses its onboard LiDAR scanner to detect free and occupied spaces and updates the map accordingly.

To create the occupancy grid, the controller uses a 4D NumPy array, a matrix of matrices. Each cell on the map contains sub-cells that record whether that space is empty or occupied. For this lab, each cell contains a 3x3 matrix of sub-cells. Figure 8 below visualizes this:

Figure 8: A map consisting of 4x4 cells, each containing a 3x3 grid of sub-cells
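
A minimal sketch of that 4D structure, assuming a 4x4 map of cells as in Figure 8 and NumPy's zeros initializer:

```python
import numpy as np

MAP_ROWS, MAP_COLS = 4, 4        # 4x4 maze cells (Figure 8)
SUB_ROWS, SUB_COLS = 3, 3        # each cell holds a 3x3 grid of sub-cells

# One log-odds value per sub-cell; 0.0 means "unknown" before any LiDAR update.
occupancy_grid = np.zeros((MAP_ROWS, MAP_COLS, SUB_ROWS, SUB_COLS))

# Example: the top-left sub-cell of the map cell at row 2, column 1:
# occupancy_grid[2, 1, 0, 0]
```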

The information that each sub-cell holds is stored as log odds. The occupied value is assigned a probability of 0.6 and the empty value 0.3 (these values are adjustable). The log odds computed from the LiDAR scanner readings are then written to the occupancy grid map. An example of this is provided in Figure 9 below:

Figure 9: The robot's current cell occupancy values

As seen in Figure 9, the cell the robot is currently in has occupancy values that match its surroundings. For instance, the log-odds value 0.41 (log(0.6/0.4)) corresponds to the top and left walls, i.e., occupied space, while the log-odds value -0.85 (log(0.3/0.7)) corresponds to the absence of walls directly below the robot and in its immediate bottom and right surroundings, i.e., empty space. The occupancy grid is a key component in generating a map of the robot's surroundings; a sketch of the log-odds bookkeeping follows, and the final result is presented in Figure 10.
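
A minimal sketch of that bookkeeping, reusing the occupancy_grid array from the earlier sketch; the helper names and the additive update are assumptions, while the 0.6/0.3 probabilities come from the text:

```python
import math

P_OCCUPIED = 0.6   # probability assigned to a LiDAR hit (adjustable)
P_EMPTY = 0.3      # probability assigned to free space (adjustable)

def log_odds(p):
    """Convert a probability to log odds: log(p / (1 - p))."""
    return math.log(p / (1.0 - p))

L_OCC = log_odds(P_OCCUPIED)    # ~ +0.41, the "wall" value seen in Figure 9
L_FREE = log_odds(P_EMPTY)      # ~ -0.85, the "empty" value seen in Figure 9

def update_sub_cell(occupancy_grid, cell_row, cell_col, sub_row, sub_col, hit):
    """Add the occupied/empty log-odds value to one sub-cell of the 4D grid."""
    occupancy_grid[cell_row, cell_col, sub_row, sub_col] += L_OCC if hit else L_FREE
```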

Figure 10: Generated maps for two example mazes (maze-1 and maze-2) based on the occupancy grid printed to the console

The "W" in the generated map represents a wall, --> represents that the cell is visited and the direction the robot is facing, and any empty space means that there is no wall present. A more in-depth overview for this lab can be found in this report.

Here is the Python controller for this task: Task 1

The video below shows the creation of the occupancy grid and map generation in real time as the robot navigates through the maze:

lab5-real-time-map-generation-SLAM.mov
