
A modular end-to-end tracking framework for research and development

Home Page: https://trackinglaboratory.github.io/tracklab/

License: MIT License

Languages: Python 99.65%, Shell 0.09%, Jupyter Notebook 0.25%
Topics: deep-learning, machine-learning, python, pytorch, reidentification, tracking, tracking-algorithm, hydra

tracklab's Introduction

TrackLab

TrackLab is an easy-to-use modular framework for multi-object pose/segmentation/bbox tracking that supports many tracking datasets and evaluation metrics.

News

  • [2024.02.05] Public release

Upcoming

  • Public release of the codebase
  • Add support for more datasets (DanceTrack, MOTChallenge, SportsMOT, ...)
  • Add many more SOTA tracking methods and object detectors
  • Improve documentation and add more tutorials

๐Ÿค How You Can Help

The TrackLab library is in its early stages, and we're eager to evolve it into a robust, mature tracking framework that can benefit the wider community. If you're interested in contributing, feel free to open a pull request or reach out to us!

Introduction

Welcome to the official repository of TrackLab, a modular framework for multi-object tracking. TrackLab is designed for research purposes and supports many types of detectors (bounding box, pose, segmentation), datasets, and evaluation metrics. Every component of TrackLab, such as the detector, tracker, or re-identifier, is configurable via standard YAML files (using the Hydra configuration framework), and the framework is designed to be easily extended to support new methods.

TrackLab is composed of multiple modules:

  1. A detector (YOLOv8, ...)
  2. A re-identification model (BPBReID, ...)
  3. A tracker (DeepSORT, StrongSORT, OC-SORT, ...)

Here's what makes TrackLab different from other existing tracking frameworks:

  • Fully modular framework to quickly integrate any detection/reid/tracking method or develop your own
  • It allows supervised training of the ReID model on the tracking training set
  • It provides a fully configurable visualization tool with the possibility to display any dev/debug information
  • It supports both online and offline tracking methods (unlike MMTracking, AlphaPose, LightTrack and other libraries, which only support online tracking)
  • It supports many tracking-related tasks:
    • multi-object (bbox) tracking
    • multi-person pose tracking
    • multi-person pose estimation
    • person re-identification

Documentation

You can find the documentation at https://trackinglaboratory.github.io/tracklab/ or in the docs/ folder. After installing, you can run make html inside this folder to build an HTML version of the documentation.

Installation guide¹

Clone the repository

```bash
git clone https://github.com/TrackingLaboratory/tracklab.git
cd tracklab
```

Manage the environment

Create and activate a new environment

```bash
conda create -n tracklab pip python=3.10 pytorch==1.13.1 torchvision==0.14.1 pytorch-cuda=11.7 -c pytorch -c nvidia -y
conda activate tracklab
```

You might need to change the torch installation depending on your hardware. Please check the PyTorch website to find the right version for your setup.

Install the dependencies

Get into your repo and install the requirements with:

```bash
pip install -e .
mim install mmcv==2.0.1
```

You might need to redo this if you update the repository and some dependencies have changed.

External dependencies

  • Get the SoccerNet Tracking dataset here, rename the root folder to "SoccerNetMOT", and put it under the global dataset directory (specified under the data_dir config as explained below). Otherwise, you can modify the dataset_path config in soccernet_mot.yaml with your custom SoccerNet dataset directory.
  • Download the pretrained model weights here and put the "pretrained_models" directory under the main project directory (i.e. "/path/to/tracklab/pretrained_models").

Setup

You will need to set up some variables before running the code:

  1. In configs/config.yaml (see the example excerpt after this list):
    • data_dir: the directory where you will store the different datasets (must be an absolute path!)
    • All the parameters under the "Machine configuration" header
  2. In the corresponding modules (tracklab/configs/modules/.../....yaml):
    • The batch_size
    • You might want to change the model hyperparameters
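For illustration, a minimal sketch of the relevant part of configs/config.yaml. The data_dir key is the one named above; the actual keys under the "Machine configuration" header are not reproduced here, so treat the placeholder comment as an assumption:

```yaml
# Hypothetical excerpt of configs/config.yaml; check the real file for the
# exact keys under the "Machine configuration" header.
data_dir: /absolute/path/to/datasets   # must be an absolute path

# Machine configuration
# (set the parameters under this header according to your hardware)
```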

To launch TrackLab with the default configuration defined in configs/config.yaml, simply run:

```bash
tracklab
```

This command will create a directory called outputs which will have a ${experiment_name}/yyyy-mm-dd/hh-mm-ss/ structure. All the output files (logs, models, visualization, ...) from a run will be put inside this directory.
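For example, a run launched on 2024-02-05 at 14:30:00 would produce a layout roughly like the following; whether logs, models, and visualization end up as files or subdirectories depends on your configuration, so take the inner entries as illustrative:

```
outputs/
└── ${experiment_name}/
    └── 2024-02-05/
        └── 14-30-00/
            ├── logs/
            ├── models/
            └── visualization/
```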

If you want to override some configuration parameters, e.g. to use another detection module or dataset, you can do so by modifying the corresponding parameters directly in the .yaml files under configs/.

All parameters are also configurable from the command line (more info on Hydra's override grammar [here](https://hydra.cc/docs/advanced/override_grammar/basic/)):

```bash
tracklab 'data_dir=${project_dir}/data' 'model_dir=${project_dir}/models' modules/reid=bpbreid pipeline=[bbox_detector,reid,track]
```

${project_dir} is a variable that resolves to the root of the project you're running the code in. When using it in a command, make sure to wrap the argument in single quotes ('), as the shell would otherwise expand ${...} as an environment variable before Hydra sees it.
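To make the quoting rule concrete (behavior shown for a bash-style shell):

```bash
tracklab "data_dir=${project_dir}/data"   # wrong: the shell expands ${project_dir}, usually to an empty string
tracklab 'data_dir=${project_dir}/data'   # right: Hydra receives ${project_dir} literally and resolves it itself
```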

To find all the (many) configuration options you have, use:

```bash
tracklab --help
```

The first section contains the configuration groups, while the second section shows all the possible options you can modify.

Framework overview

Hydra Configuration

TODO Describe TrackLab + Hydra configuration system
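Until that description lands, here is a hedged sketch of how the pieces referenced elsewhere in this README (configs/config.yaml, the defaults list with state: save/load, the modules/reid=bpbreid override, the pipeline list) fit together; the group names and layout are assumptions, not the verified config tree:

```yaml
# Hypothetical sketch of configs/config.yaml (group names are assumptions):
defaults:
  - modules/bbox_detector: yolov8   # which detector config to compose in
  - modules/reid: bpbreid           # which re-identification config to compose in
  - state: save                     # save or load the tracker state
  # ...

data_dir: /absolute/path/to/datasets
pipeline: [bbox_detector, reid, track]
```

Each entry in the defaults list picks one option from a config group under configs/, and any of these choices can be overridden from the command line, as shown in the Setup section above.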

Architecture Overview

Here is an overview of the important TrackLab classes:

  • TrackingDataset: Abstract class to subclass when adding a new dataset. The TrackingDataset contains one TrackingSet for each split of the dataset (train, val, test, etc).
  • TrackingSet: A tracking set contains three Pandas dataframes:
    1. video_metadatas: contains one row of information per video (e.g. fps, width, height, etc).
    2. image_metadatas: contains one row of information per image (e.g. frame_id, video_id, etc).
    3. detections_gt: contains one row of information per ground truth detection (e.g. frame_id, video_id, bbox_ltwh, track_id, etc).
  • TrackerState: Core class that contains all the information about the current state of the tracker. All modules in the tracking pipeline update the tracker_state sequentially. The tracker_state contains one key dataframe:
    1. detections_pred: contains one row of information per predicted detection (e.g. frame_id, video_id, bbox_ltwh, track_id, reid embedding, etc).
  • TrackingEngine: This class is responsible for executing the entire tracking pipeline on the dataset. It loops over all videos of the dataset and calls all modules defined in the pipeline sequentially. The exact execution order (e.g. online/offline/...) is defined by the TrackingEngine subclass.
    • Example: OfflineTrackingEngine. The offline tracking engine performs tracking one module after another to speed up inference by leveraging large batch sizes and maximum GPU utilization. For instance, YOLOv8 is first applied to an entire video by batching multiple images, then the re-identification model is applied to all detections in the video, etc.
  • Pipeline: Defines the order in which modules are executed by the TrackingEngine. If a tracker_state is loaded from disk, modules that should not be executed again must be removed.
    • Example: [bbox_detector, reid, track]
  • VideoLevelModule: Abstract class to subclass when adding a new tracking module that operates on all frames simultaneously. Can be used to implement offline tracking strategies, tracklet-level voting mechanisms, etc.
    • Example: VotingTrackletJerseyNumber. To perform majority voting within each tracklet and compute a consistent tracklet level attribute (an attribute can be, for instance, the result of a detection level classification task).
  • ImageLevelModule: Abstract class to subclass when adding a new tracking module that operates on a single frame. Can be used to implement online tracking strategies, pose/segmentation/bbox detectors, etc.
    • Example 1: YOLOv8. To perform object detection on each image with YOLOv8. Creates a new row (i.e. detection) within detections_pred.
    • Example 2: StrongSORT. To perform online tracking with StrongSORT. Creates a new "track_id" column for each detection within detections_pred.
  • DetectionLevelModule: Abstract class to subclass when adding a new tracking module that operates on a single detection. Can be used to implement pose estimation for top-down strategies, re-identification, attribute recognition, etc. (a hedged sketch of such a module is shown after this list).
    • Example 1: EasyOCR. To perform jersey number recognition on each detection with EasyOCR. Creates a new "jersey_number" column within detections_pred.
    • Example 2: BPBReID. To perform person re-identification on each detection with BPBReID. Creates a new "embedding" column within detections_pred.
  • Callback: Implement this class to add a callback that is triggered at a specific point during the tracking process, e.g. when dataset/video/module processing starts/ends.
    • Example: VisualizationEngine. Implements "on_video_loop_end" to save each video tracking results as a .mp4 or a list of .jpg.
  • Evaluator: Implement this class to add a new evaluation metric, such as MOTA, HOTA, or any other (non-tracking related) metrics.
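As a concrete illustration of the module abstractions above, here is a hedged sketch of a custom DetectionLevelModule. The import path, attribute names, and method signatures are assumptions based on the descriptions in this list, not the verified TrackLab API; check the documentation before copying it.

```python
# Hypothetical DetectionLevelModule that computes a per-detection attribute and
# adds it as a new column to detections_pred. All TrackLab-facing names here
# (import path, input_columns/output_columns, preprocess/process) are assumptions.
import pandas as pd

from tracklab.pipeline import DetectionLevelModule  # assumed import path


class DominantColorModule(DetectionLevelModule):  # hypothetical example module
    input_columns = ["bbox_ltwh"]        # what the module reads (assumed convention)
    output_columns = ["dominant_color"]  # the column it adds to detections_pred

    def preprocess(self, image, detection: pd.Series, metadata: pd.Series):
        # Crop the person out of the full frame using its left/top/width/height bbox.
        left, top, width, height = detection.bbox_ltwh.astype(int)
        return {"crop": image[top : top + height, left : left + width]}

    def process(self, batch, detections: pd.DataFrame, metadatas: pd.DataFrame):
        # Return one output row per input detection; the engine merges the
        # returned columns back into detections_pred.
        colors = [crop.reshape(-1, 3).mean(axis=0) for crop in batch["crop"]]
        return pd.DataFrame({"dominant_color": colors}, index=detections.index)
```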

Execution Flow Overview

Here is an overview of what happens when you run TrackLab.

tracklab/main.py is the main entry point and receives the complete Hydra configuration as input. It is usually called through the root main.py file, via python main.py. Within tracklab/main.py, all modules are first instantiated. Training any tracking module (e.g. the re-identification model) on the tracking training set is supported by calling the "train" method of the corresponding module.

Tracking is then performed on the validation or test set (depending on the configuration) via the TrackingEngine.run() function. For each video in the evaluated set, the TrackingEngine calls the "run" method of each module (e.g. detector, re-identifier, tracker, ...) sequentially. The TrackingEngine is responsible for batching the input data (e.g. images, detections, ...) before calling the "run" method of each module with the correct input data. After a module has been called with a batch of input data, the TrackingEngine updates the TrackerState object with the module outputs. At the end of the tracking process, the TrackerState object contains the tracking results of each video.

Visualizations (e.g. .mp4 result videos) are generated during the TrackingEngine.run() call, after a video has been tracked and before the next video is processed. Finally, evaluation is performed via the evaluator.run() function once the TrackingEngine.run() call is completed, i.e. after all videos have been processed. The pseudocode below restates this flow.
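The following is illustrative pseudocode only: every helper name is a placeholder, and this restates the flow described above rather than the actual TrackLab source.

```python
# Pseudocode sketch of TrackLab's execution flow (placeholder names throughout).
def main(cfg):
    modules = instantiate_modules(cfg)               # detector, reid, tracker, ...
    for module in modules:
        if needs_training(cfg, module):
            module.train()                           # e.g. fit the reid model on the train set

    tracker_state = TrackerState()
    for video in evaluated_set(cfg):                 # val or test set, per the configuration
        for module in cfg.pipeline:                  # e.g. [bbox_detector, reid, track]
            for batch in batch_inputs(video, module):
                outputs = module.run(batch)          # engine feeds the right input data
                tracker_state.update(outputs)        # new rows/columns in detections_pred
        render_visualizations(video, tracker_state)  # .mp4 / .jpg, before the next video

    evaluator.run(tracker_state)                     # e.g. MOTA, HOTA
```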

Tutorials

Dump and load the tracker state to save computation time

When developing a new module, it is often useful to dump the tracker state to disk to save computation time and avoid running the other modules several times. Here is how to do it:

  1. First, save the tracker state by using the corresponding configuration in the config.yaml file:

```yaml
defaults:
    - state: save
    ...
```

  2. Run TrackLab. The tracker state will be saved in the experiment folder as a .pcklz file.
  3. Modify "tracklab/configs/state/load.yaml" to specify the path to the tracker state file that has just been created (the load_file: "..." config).
  4. Change config.yaml to load the tracker state by using the corresponding configuration:

```yaml
defaults:
    - state: load
    ...
```

  5. In config.yaml, remove from the pipeline all modules that should not be executed again. For instance, if you want to reuse the detections and reid embeddings from the saved tracker state, remove the "bbox_detector" and "reid" modules from the pipeline. Use pipeline: [] if no module should be run again (see the example after this list).
  6. Run TrackLab again.
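As an illustration of step 5, if the detections and reid embeddings come from the saved tracker state and only the tracker should be re-run, config.yaml would end up looking roughly like this (a sketch, assuming the [bbox_detector, reid, track] pipeline used earlier in this README):

```yaml
defaults:
    - state: load
    ...

pipeline: [track]   # bbox_detector and reid removed; their outputs are read from the saved state
```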

Footnotes

  1. Tested on conda 22.11.1, Python 3.10.8, pip 22.3.1, g++ 11.3.0 and gcc 11.3.0 ↩

tracklab's People

Contributors

aghasemzadeh, bstandaert, matty22396, silviogiancola, victorjoos, vlsomers


tracklab's Issues

configs/modules/pose_bottomup/yolov8_pose.yaml missing

Hello! I was trying to learn how to use a custom detection module and was having problems with the importing part. The tutorial for adding yolov8_pose says there should be a config example in configs/modules/pose_bottomup/yolov8_pose.yaml, but it's not there.

pose_bottomup results not showing in video nor in state file

Thanks a lot for the great codebase!

I have been having trouble obtaining any pose estimation results, like the ones shown in the Readme file (https://github.com/TrackingLaboratory/tracklab/blob/main/docs/assets/gifs/PoseTrack21_008827.gif). There also seems to be no entry related to pose estimation in the state saving file when testing the pipeline on SoccerNet samples. What is the recommended setup to generate pose estimation results?

I have:
1) changed the draw_keypoints and draw_skeleton flags in the config file,
2) added a pose_bottomup module to the pipeline (logging clearly shows: yolov8->yolov8->...),
3) lowered the min_confidence.
However, despite all this, I do not have any pose estimation results appearing in the video, nor in the .pklz state file.

Also, OpenPifPaf hangs on my machine (TitanXP, 64 GB RAM) when performing pose estimation.

Thanks a lot for your help ☺

Using custom configs and modules located outside the tracklab and sn-gamestate directories

Hello! I've been trying to learn how to use tracklab over the past few days, and so far I've managed to add my own object detection module. To do that, I needed to add a custom config yaml at sn_gamestate/configs/custom_config.yaml, a module config yaml at tracklab/configs/modules/bbox_detector/fake_yolov8.yaml, and finally the actual module at tracklab/wrappers/detect_multiple/fake_yolov8.py. This is currently working as desired by running tracklab -cn custom_config.

The issue is that all those files had to be added to the cloned sn-gamestate and tracklab directories. What would I have to change if I wanted to keep those configs in a custom directory outside of tracklab and sn_gamestate?
