icaropires / objectlevel_fusion

This repository was initially developed while writing a bachelor's thesis. It contributes to the fusion of data from multiple perception sensors to get the best information from each one. It is implemented as ROS 2 C++ packages and includes some Python experiments interacting with CARLA, including some plotted results.

License: Apache License 2.0

ros2 ros2-foxy carla-simulator self-driving autonomous-vehicles data-fusion multisensor perception

objectlevel_fusion's Introduction

Object-level Fusion

CI

Summary: This repository was initially developed while writing a bachelor's thesis. It contributes to the fusion of data from multiple perception sensors to get the best information from each one.

Object-level fusion performs fusion at a higher level of abstraction and, for this reason, contributes to modularity and reuse; lower-level approaches, by contrast, tend to require reimplementation for each sensor setup. This work implements a software solution that addresses part of that reimplementation problem. It is composed of ROS 2 packages and implements the object-list preprocessing from the fusion layer of an object-level fusion architecture. This preprocessing comprises the spatial and temporal alignments plus the object association. Finally, the preprocessing was validated with an experiment using the CARLA self-driving simulator, with the number of failed associations in some test-case scenarios as the main metric (check experiment).

Bachelor's thesis document

The document version that was reviewed and approved by the thesis committee can be found at:

Architecture Layers

Checked boxes mean the item is implemented in this repository:

  • Sensor Layer
    • ...
  • Fusion Layer
    • [x] Spatial Alignment
    • [x] Temporal Alignment
    • [x] Object Association*
    • [ ] State and Covariance Fusion
    • [ ] Existence Fusion
    • [ ] Classification Fusion
  • Application Layer
    • ...

*implemented a simpler version

Requirements

Dockerized execution

  • Linux
  • Docker
  • Docker Compose

Local execution

Using

Much of the usage is facilitated by the run script, which under the hood just calls docker-compose. Feel free to customize your execution by calling docker-compose directly if you're more experienced.

Executing

Execute the instructions of one of the following subsections, register your sensors, and then publish your object lists 😄

Executing (Easy, dockerized way)

Execute:

$ ./run

# Or in background:
$ ./run -d

When the application is up, it waits for messages of type object_model_msgs/msg/ObjectModel on the topic objectlevel_fusion/fusion_layer/fusion/submit and returns the list of global objects being tracked on the topic objectlevel_fusion/fusion_layer/fusion/get.

Executing in a ROS 2 workspace

Clone this project in your ROS workspace and follow the ROS 2 procedures: ref.

Registering/removing sensors and publishing object lists

Check some examples in examples (bash and python available).

Running (unit) tests

With the application up, tests can be run with:

./run tests

Development flow

After editing the source code, if the application is up, first bring it down (calling ./run down if running in background, otherwise just CTRL+C in the terminal where it's running), then:

./run compile

then bring the application up again (./run up). Now the modified version should be running.

Initializing a shell

To run a shell in the container where the application is running, just execute (with the application up):

./run shell

Other commands from run

To see a list and descriptions, execute:

./run help

How to contribute

  • Creating issues (questions, bugs, feature requests, etc);
  • Modifying the repository: pull requests. Just make sure to describe your changes and that everything is working.

objectlevel_fusion's People

Contributors

icaropires


objectlevel_fusion's Issues

Implement sensor registration

  • A user must be able to add new sensors whose data will be fused
  • Data from non-registered sensors must be ignored
  • The registration must be implemented as a ROS 2 service (inspiration)
  • There must be an option to unregister the sensors
  • Information that must be provided about the sensors:
    • Delta x, when compared to the vehicle
    • Delta y, when compared to the vehicle
    • Rotation, when compared to the vehicle
    • List of attributes from Object Model that the sensor is able to provide
      • Adapt EKF to consider such list
    • Sensor accuracy and precision information
      • Pass the measurement noise matrix
      • For now, the measurement noise matrix will follow the current CTRA implementation, with a state of size = 6
      • The order of attributes will be {x, y, v, a, yaw, yaw_rate}, to be more similar to the object model definition
  • Adapt python mock publisher script to register the sensor
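The delta x, delta y, and rotation above define a 2D rigid transform from the sensor frame to the vehicle frame. A minimal sketch of how such registration data could be used for spatial alignment (plain Python; the class and function names are illustrative, not the repository's API):

```python
import math

class RegisteredSensor:
    """Hypothetical container for the registration data listed above."""
    def __init__(self, name, delta_x, delta_y, rotation):
        self.name = name
        self.delta_x = delta_x    # sensor origin x, in the vehicle frame
        self.delta_y = delta_y    # sensor origin y, in the vehicle frame
        self.rotation = rotation  # sensor yaw relative to the vehicle, radians

def to_vehicle_frame(sensor, x, y):
    """Spatially align one (x, y) measurement: rotate by the mounting
    angle, then translate by the mounting offset."""
    cos_r, sin_r = math.cos(sensor.rotation), math.sin(sensor.rotation)
    vx = cos_r * x - sin_r * y + sensor.delta_x
    vy = sin_r * x + cos_r * y + sensor.delta_y
    return vx, vy

# A radar mounted 2 m ahead of the vehicle origin, facing forward:
radar = RegisteredSensor("front_radar", delta_x=2.0, delta_y=0.0, rotation=0.0)
print(to_vehicle_frame(radar, 10.0, 0.0))  # → (12.0, 0.0)
```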

Integrate a Linear Algebra library

It will be very convenient to have linear algebra operations available, such as matrix generation, multiplication, transposition, etc.

  • Find out which linear algebra library is usually used for this kind of application
  • Integrate that library
  • Multiply a couple of matrices to check that it works

Add base CI

  • The application must be built in the docker environment
  • The application must be executed in the docker environment
  • The tests must be run, and the pipeline must break if they fail

Setup general environment

  • Write correct build configurations in cmake and package.xml
  • Set up Google Test in the cmake and package.xml files
  • Set up Google Mock in the cmake and package.xml files
  • Write docker files to ease running the application and its checks, independently of the environment

Rewrite CTRA using the state definition from object model

  • Rewrite the CTRA model from the EKF temporal alignment using the state from the object model
  • Adapt all vectors of length 6 to length 8 and all 6x6 matrices to 8x8
  • Adapt the experiment file at experiment/temporal_alignment_EKF.ipynb
  • Adapt the unit tests
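For reference, the CTRA (constant turn rate and acceleration) model mentioned above can be sketched with the 6-element state {x, y, v, a, yaw, yaw_rate}. This is a plain-Python prediction step using the standard CTRA equations, for illustration only, not the repository's EKF implementation:

```python
import math

def ctra_predict(state, dt):
    """One CTRA prediction step over the state {x, y, v, a, yaw, yaw_rate}."""
    x, y, v, a, yaw, w = state
    if abs(w) < 1e-6:
        # Negligible turn rate: straight motion with constant acceleration.
        ds = v * dt + 0.5 * a * dt * dt
        x += ds * math.cos(yaw)
        y += ds * math.sin(yaw)
    else:
        # Full CTRA update: integrate along the turning arc.
        yaw_n = yaw + w * dt
        x += ((v * w + a * w * dt) * math.sin(yaw_n)
              + a * math.cos(yaw_n)
              - v * w * math.sin(yaw) - a * math.cos(yaw)) / (w * w)
        y += ((-v * w - a * w * dt) * math.cos(yaw_n)
              + a * math.sin(yaw_n)
              + v * w * math.cos(yaw) - a * math.sin(yaw)) / (w * w)
        yaw = yaw_n
    return [x, y, v + a * dt, a, yaw, w]

print(ctra_predict([0.0, 0.0, 10.0, 2.0, 0.0, 0.0], 1.0))
# → [11.0, 0.0, 12.0, 2.0, 0.0, 0.0]
```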

Setup linter

  • Set up linter checking in the cmake and package.xml files with the ament_lint_auto package
  • Add the linter checking to the CI

Create simpler association function

Create a very simple function capable of performing object association (it may only be useful in the best cases).

This function will be used for demonstrations until a more robust one is implemented.
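One possible shape for such a simple function is greedy nearest-neighbour matching on positions with a gating threshold (a sketch with illustrative names, not the repository's implementation):

```python
import math

def simple_associate(global_objs, sensor_objs, max_dist=2.0):
    """Greedy nearest-neighbour association on (x, y) positions with a
    gating threshold: each global object takes the closest still-unused
    sensor object within max_dist, or stays unassociated."""
    pairs, used = [], set()
    for i, (gx, gy) in enumerate(global_objs):
        best_j, best_d = None, max_dist
        for j, (sx, sy) in enumerate(sensor_objs):
            if j in used:
                continue
            d = math.hypot(gx - sx, gy - sy)
            if d < best_d:
                best_j, best_d = j, d
        if best_j is not None:
            used.add(best_j)
            pairs.append((i, best_j))
    return pairs

print(simple_associate([(0, 0), (10, 0)], [(9.5, 0.2), (0.3, 0.1)]))
# → [(0, 1), (1, 0)]
```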

Implement object association

Steps

  • Feature Selection

    • Transform the feature from the object coordinate system to the vehicle coordinate system (use the transformation matrix)
      • Choose the feature according to Table 3.1 (see dissertation) for some features (which?)
      • In some cases, special "reduced features" with just one dimension are used
    • Identify the feature constellation
      • Determines the transformation applied to the two objects' positions
      • Options
        • Common corner feature
          • Don't need geometrical association
        • Common side feature
          • Check the consistency of the side
            • If the check succeeds, geometrical association is not needed
            • If it fails (the result is not small enough), use reduced features
        • Features lie on common side
          • Use reduced feature
        • Features unrelated
          • Heuristic approach used
            • Two-dimensional state vector association test of all four corner features
              • If a common corner feature is found (i.e., the association result of one or more common features lies below a certain threshold)
                • the minimum common-feature result is used for complete state vector association
              • else, reduced features are used
                • If the result doesn't meet the threshold, the association has failed
  • State Vector Association (track-to-track association)

    • Omit x or y if using a reduced feature
    • Don't consider attributes that the sensor is not capable of measuring
    • Implement the extended Mahalanobis Distance with Attribute Information
      • Basically, a Mahalanobis distance that considers the objects' existence probabilities and their classification vectors
    • Check the result of the state vector association against the threshold before putting it in the association matrix
  • Geometrical Association

    • Applied when reduced features are used
    • Essentially, returns true if the objects overlap when projecting the features into one dimension, else returns false
  • Calculate the association matrix

    • Association matrix must have N rows representing the N global fusion-level objects
    • Association matrix must have M columns representing the M sensor-level objects
    • Calculate the first cost matrix out of the association matrix
    • Calculate the second cost matrix M x M
    • Calculate the complete cost matrix (concatenation of the other two)
    • Implement the auction algorithm loop
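The state vector association and geometrical association steps above can be sketched in plain Python. This is a deliberately simplified illustration: a plain Mahalanobis distance without the existence-probability and classification terms of the extended version, and a one-dimensional overlap test for the reduced-feature case; all names are illustrative:

```python
import math

def mahalanobis_2d(a, b, cov):
    """Mahalanobis distance between two 2-D feature positions, given the
    combined 2x2 covariance [[sxx, sxy], [sxy, syy]] (simplified: no
    existence-probability or classification terms)."""
    dx, dy = a[0] - b[0], a[1] - b[1]
    sxx, sxy = cov[0]
    _, syy = cov[1]
    det = sxx * syy - sxy * sxy          # invert the 2x2 matrix by hand
    inv = [[syy / det, -sxy / det], [-sxy / det, sxx / det]]
    return math.sqrt(dx * (inv[0][0] * dx + inv[0][1] * dy)
                     + dy * (inv[1][0] * dx + inv[1][1] * dy))

def intervals_overlap(a_min, a_max, b_min, b_max):
    """Geometrical association with reduced features: project both
    objects onto one dimension and report whether they overlap."""
    return a_min <= b_max and b_min <= a_max

# With a unit covariance, the result reduces to the Euclidean distance:
print(mahalanobis_2d((0, 0), (3, 4), [[1, 0], [0, 1]]))  # → 5.0
print(intervals_overlap(0.0, 2.0, 1.5, 3.0))             # → True
```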

Aeberhard illustration

Aeberhard description of object association

Add object model message types

Add message types according to the object model specification. Types to create:

  • ObjectModel
  • Classification
  • Dimension
  • ExistenceProbability
  • ShapeFeatures
  • Track
  • section 2.2.1 from PhD thesis

Transform measurement noise matrix to 8x8

This issue doesn't need to be done if #15 is completed.

Currently, in sensor registration and temporal alignment, the matrix R is 6x6, because the CTRA model from the EKF has a state of size 6.

To complete this issue, it is necessary to be able to transform a covariance matrix over the attributes {x, y, Vx, Vy, Ax, Ay, yaw, yaw_rate} into one over {x, y, V, A, yaw, yaw_rate}, and vice versa.
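One common way to perform such a transform is to propagate the covariance through the Jacobian of the state mapping, R8 = J · R6 · Jᵀ. Below is a plain-Python sketch for the 6-to-8 direction, assuming Vx = v·cos(yaw), Vy = v·sin(yaw), Ax = a·cos(yaw), Ay = a·sin(yaw), a simplification that ignores the centripetal part of the acceleration; it is not the repository's code:

```python
import math

def jacobian_6_to_8(v, a, yaw):
    """Jacobian of the mapping {x, y, v, a, yaw, yaw_rate} ->
    {x, y, Vx, Vy, Ax, Ay, yaw, yaw_rate}, under the assumption
    Vx = v*cos(yaw), Vy = v*sin(yaw), Ax = a*cos(yaw), Ay = a*sin(yaw)."""
    c, s = math.cos(yaw), math.sin(yaw)
    return [
        [1, 0, 0, 0, 0, 0],       # x
        [0, 1, 0, 0, 0, 0],       # y
        [0, 0, c, 0, -v * s, 0],  # Vx
        [0, 0, s, 0,  v * c, 0],  # Vy
        [0, 0, 0, c, -a * s, 0],  # Ax
        [0, 0, 0, s,  a * c, 0],  # Ay
        [0, 0, 0, 0, 1, 0],       # yaw
        [0, 0, 0, 0, 0, 1],       # yaw_rate
    ]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transform_R(R6, v, a, yaw):
    """R8 = J * R6 * J^T: propagate the 6x6 measurement noise matrix
    to the 8-dimensional object-model state."""
    J = jacobian_6_to_8(v, a, yaw)
    Jt = [list(row) for row in zip(*J)]
    return matmul(matmul(J, R6), Jt)

R6 = [[1.0 if i == j else 0.0 for j in range(6)] for i in range(6)]
R8 = transform_R(R6, v=10.0, a=2.0, yaw=0.0)
print(len(R8), len(R8[0]))  # → 8 8
```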

Create size limits to strings in ros services/messages

It's good practice to limit the size of the strings that can be sent through ROS services and messages. This issue is about adding those limits.

  • Limit the string sizes that can be sent in all srvs and msgs
  • Adapt the checks in the code to enforce the same limits

Reference on how to limit them: ros2 documentation
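ROS 2 interface definitions support bounded strings via the `string<=N` syntax. A hypothetical service fragment for illustration (the field names are not the repository's actual srv):

```
# RegisterSensor.srv (hypothetical): bound the sensor name to 32 characters
string<=32 name
float32 x
float32 y
float32 angle
---
bool success
```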

Document code using Doxygen

  • Prepare structure for doxygen documentation
  • Document each class/function/method/file following doxygen structure
