Weighted distance aggregation multi person multi camera tracker

License: MIT License



WDA tracker

The WDA (weighted distance aggregation) tracker is an offline multi-camera tracking approach.

It is published as part of The MTA Dataset for Multi Target Multi Camera Pedestrian Tracking by Weighted Distance Aggregation (https://github.com/schuar-iosb/mta-dataset).

This repository is structured in two parts:

The first part creates single camera tracks from an input video. It can be started via run_tracker.py.

The second part clusters single camera tracks into multi camera tracks and performs a subsequent evaluation. It can be started via run_multi_cam_clustering.py.

Getting started

Setting up an artefacts folder
Download work_dirs.zip (https://drive.google.com/uc?export=download&id=1SMlrtuGsgZ-DMZlIlUThgEpXQI9MOWrL) and unzip it in your base repository folder, or create a symlink to an existing location, e.g. ln -s /media/philipp/philippkoehl_ssd/work_dirs work_dirs. The folder contains the re-id and detector models; output result files will also be stored in this folder.

Install python requirements

Create a new conda environment:

conda create -n wda python=3.7.7 -y

Activate the conda environment:

conda activate wda

Install all needed python packages into the environment:

pip install -r requirements.txt

Go into detectors/mmdetection and build mmdetection (https://github.com/open-mmlab/mmdetection/blob/master/docs/install.md):

pip install -r requirements/build.txt
pip install "git+https://github.com/open-mmlab/cocoapi.git#subdirectory=pycocotools"
pip install -v -e .  # or "python setup.py develop"

Download the MTA Dataset

Go to https://github.com/schuar-iosb/mta-dataset and follow the instructions to obtain the MTA Dataset. It is also possible to use the smaller extracted version MTA ext short at first. Unzip the dataset somewhere.

Configure the single camera tracker

E.g. in configs/tracker_configs/frcnn50_new_abd_test.py and configs/tracker_configs/frcnn50_new_abd_train.py, set data -> source -> base_folder to your MTA dataset location.

E.g. for the test set:

...
"data" : {
        "selection_interval" : [0,10000],

        "source" : {
            "base_folder" : "/media/philipp/philippkoehl_ssd/MTA_ext_short/test",
            "cam_ids" : [0,1,2,3,4,5]
        }
    },
...
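The selection_interval presumably restricts tracking to a range of frames. A minimal sketch of how such a filter could be applied; select_frames is a purely illustrative name, not part of the repository's API, and the interval is assumed to be half-open:

```python
# Illustrative sketch only: "select_frames" is a hypothetical helper showing
# how a [start, end) frame interval from the config could be applied.
def select_frames(frame_nos, selection_interval):
    """Keep only frame numbers inside the half-open interval [start, end)."""
    start, end = selection_interval
    return [f for f in frame_nos if start <= f < end]

cfg = {
    "data": {
        "selection_interval": [0, 10000],
        "source": {
            "base_folder": "/media/philipp/philippkoehl_ssd/MTA_ext_short/test",
            "cam_ids": [0, 1, 2, 3, 4, 5],
        },
    },
}

frames = select_frames([0, 5000, 10000, 15000], cfg["data"]["selection_interval"])
print(frames)  # → [0, 5000]
```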

Run the single camera tracking

Run the single camera tracking to generate single camera tracks.

For the train set:

python run_tracker.py --config configs/tracker_configs/frcnn50_new_abd_train.py

And for the test set:

python run_tracker.py --config configs/tracker_configs/frcnn50_new_abd_test.py

Configure the multi camera clustering

Adjust the following paths to the single camera tracker results in the multi camera clustering config.

E.g. in configs/clustering_configs/mta_es_abd_non_clean.py

...
"work_dirs" : "/media/philipp/philippkoehl_ssd/work_dirs"
,"train_track_results_folder" : "/home/philipp/Documents/repos/wda_tracker/work_dirs/tracker/config_runs/frcnn50_new_abd_train/tracker_results"
,"test_track_results_folder" : "/home/philipp/Documents/repos/wda_tracker/work_dirs/tracker/config_runs/frcnn50_new_abd_test/tracker_results"
,"train_dataset_folder" : "/media/philipp/philippkoehl_ssd/MTA_ext_short/train"
,"test_dataset_folder" : "/media/philipp/philippkoehl_ssd/MTA_ext_short/test"
...

Run the multi camera clustering

Run the following command to start the clustering of the single camera tracks specified in the config file. This outputs multi camera tracks and performs a subsequent evaluation using several tracking metrics.

python run_multi_cam_clustering.py \
    --config configs/clustering_configs/mta_es_abd_non_clean.py

Files for caching and results will be created in the clustering/config_runs/mta_es_abd_non_clean folder. If you change the code, you may need to delete some of these cached files.
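When stale caches cause problems after code changes, the intermediate files can be deleted selectively. A hedged sketch; which folders to clear, and the exact run directory, depend on your setup:

```shell
# Remove selected cache folders for the mta_es_abd_non_clean config run.
# Adjust RUN_DIR to your work_dirs location before using this.
RUN_DIR="work_dirs/clustering/config_runs/mta_es_abd_non_clean"
for cache in pickled_appearance_features person_id_tracks multicam_distances_and_indices; do
    rm -rf "${RUN_DIR:?}/${cache}"
done
```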

log.txt                        #Contains some log info
cam_homographies               #Contains calculated homographies between cameras                         
multicam_clustering_results    #Contains the multi camera clustering tracks
multicam_distances_and_indices #Contains calculated distances between single camera tracks 
multi_cam_evaluation           #Contains multi camera evaluation results
velocity_stats                 #Contains the average velocity of all persons 
overlapping_area_hulls         #Contains the calculated overlapping areas between all cameras 
person_id_tracks               #Contains pickled person id to tracks dictionary 
pickled_appearance_features    #Contains the pickled appearance feature for all frames
single_cam_evaluation          #Contains single camera evaluation results
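For instance, the files under person_id_tracks are pickled dictionaries. A sketch of reading one back, assuming a person-id-to-track-list mapping; the keys and track fields shown are illustrative assumptions, and the actual schema in the repository may differ:

```python
import pickle

# Hypothetical round-trip mirroring the pickled dictionaries stored under
# person_id_tracks/; keys and track fields are illustrative assumptions.
person_id_to_tracks = {
    7: [{"cam_id": 0, "frame_no": 12, "bbox": (100, 150, 40, 80)}],
    9: [{"cam_id": 3, "frame_no": 12, "bbox": (220, 90, 35, 75)}],
}

with open("person_id_tracks_example.pkl", "wb") as handle:
    pickle.dump(person_id_to_tracks, handle)

with open("person_id_tracks_example.pkl", "rb") as handle:
    loaded = pickle.load(handle)

print(sorted(loaded))  # → [7, 9]
```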

Other contained scripts

There are also some scripts in the utilities folder, e.g. for visualizing multi camera tracks.

Tracking results

  • Config files: frcnn50_new_abd_test.py, frcnn50_new_abd_train.py, mta_es_abd_non_clean.py
    • Person detection: Faster R-CNN ResNet 50
    • Person re-identification: ABD-NET ResNet 50
    • DeepSort Tracker
    • All distances with the weights in mta_es_abd_non_clean.py
  • Dataset
    • MTA ext short

Results:

| IDF1 | IDP | IDR | Rcll | Prcn | GT | MT | PT | ML | FP | FN | IDs | FM | MOTA | MOTP |
|------|-----|-----|------|------|----|----|----|----|-------|--------|------|------|------|------|
| 0.40 | 0.44 | 0.37 | 0.83 | 0.98 | 188 | 123 | 64 | 1 | 14723 | 124852 | 1846 | 7140 | 0.80 | 0.18 |
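As a rough sanity check, MOTA can be recomputed from the error counts above via MOTA = 1 - (FN + FP + IDs) / num_gt. The total number of ground-truth detections is not listed in the table, so it is estimated here from the reported recall, and the result only matches up to rounding:

```python
# Recompute MOTA from the table's error counts. num_gt (total ground-truth
# boxes) is not reported directly, so estimate it from recall:
# Rcll = 1 - FN / num_gt  =>  num_gt = FN / (1 - Rcll).
fp, fn, id_switches, recall = 14723, 124852, 1846, 0.83

num_gt = fn / (1 - recall)          # roughly 734k ground-truth detections
mota = 1 - (fn + fp + id_switches) / num_gt

print(f"{mota:.2f}")  # close to the reported 0.80; rounding in Rcll shifts it slightly
```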

Development info

If you use PyCharm for development, it is necessary to add the following folders as source roots. This adds these paths to the python path.

['feature_extractors/reid_strong_baseline',
 'feature_extractors/ABD_Net',
 'detectors/mmdetection',
 'evaluation/py_motmetrics']
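Outside of PyCharm, the same effect can be had by extending sys.path before importing the project modules. A sketch, assuming the snippet lives in a file at the repository root:

```python
import os
import sys

# Add the vendored sub-repositories to the import path, mirroring the
# PyCharm "source root" setup described above.
REPO_ROOT = os.path.dirname(os.path.abspath(__file__))
SOURCE_ROOTS = [
    "feature_extractors/reid_strong_baseline",
    "feature_extractors/ABD_Net",
    "detectors/mmdetection",
    "evaluation/py_motmetrics",
]

for root in SOURCE_ROOTS:
    path = os.path.join(REPO_ROOT, root)
    if path not in sys.path:
        sys.path.insert(0, path)
```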

Contained repositories

This repository contains a person detector called mmdetection (https://github.com/open-mmlab/mmdetection).

It also contains two person re-identification approaches strong reid baseline (https://github.com/michuanhaohao/reid-strong-baseline) and ABD-Net (https://github.com/TAMU-VITA/ABD-Net).

For multi person single camera tracking it contains DeepSort from (https://github.com/ZQPei/deep_sort_pytorch) which is originally from (https://github.com/nwojke/deep_sort).

The IOU-Tracker (https://github.com/bochinski/iou-tracker) is also contained but not integrated into the system.

Parts of (https://github.com/ZwEin27/Hierarchical-Clustering) are used for clustering.

For evaluation, py-motmetrics is included (https://github.com/cheind/py-motmetrics).

An approach for generating distinct colors is used (https://github.com/taketwo/glasbey).

Some scripts from the JTA-Dataset (https://github.com/fabbrimatteo/JTA-Dataset) are also contained.

Citation

If you use it, please cite our work. The affiliated paper was published at the CVPR 2020 VUHCS Workshop (https://vuhcs.github.io/).

@InProceedings{Kohl_2020_CVPR_Workshops,
    author = {Kohl, Philipp and Specker, Andreas and Schumann, Arne and Beyerer, Jurgen},
    title = {The MTA Dataset for Multi-Target Multi-Camera Pedestrian Tracking by Weighted Distance Aggregation},
    booktitle = {The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month = {June},
    year = {2020}
}

wda_tracker's People

Contributors

koehlp, dependabot[bot]


wda_tracker's Issues

How to visualize the tracks?

Hi,

Thanks for the open repository. it is a great project!
I have completed "Run the single camera tracking" and "Run the multi camera clustering" and obtained the relevant result files.
What do I need to do if I want to visualize the tracks?

Thanks

How to visualize multi camera tracks?

Hi Philipp,

Thank you very much for the excellent network.

I have some problems with visualizing multi-camera tracks. After the command "python run_multi_cam_clustering.py --config configs/clustering_configs/mta_es_abd_non_clean.py", I ran the file in utilities named "draw_multi_camera_tracks.py".

But there is an error that it can't find the file at "work_dirs/evaluation/multi_cam_trackwise_evaluation/eval_results.csv".
Below is the error message:

Traceback (most recent call last):
File "draw_multi_camera_tracks.py", line 346, in
trv.run_visualization()
File "draw_multi_camera_tracks.py", line 284, in run_visualization
track_evaluation_results = self.read_track_evaluation_results()
File "draw_multi_camera_tracks.py", line 74, in read_track_evaluation_results
track_evaluation_results = pd.read_csv(self.track_evaluation_results_path)
File "/home/xb/anaconda3/envs/wda/lib/python3.7/site-packages/pandas/io/parsers.py", line 676, in parser_f
return _read(filepath_or_buffer, kwds)
File "/home/xb/anaconda3/envs/wda/lib/python3.7/site-packages/pandas/io/parsers.py", line 448, in _read
parser = TextFileReader(fp_or_buf, **kwds)
File "/home/xb/anaconda3/envs/wda/lib/python3.7/site-packages/pandas/io/parsers.py", line 880, in init
self._make_engine(self.engine)
File "/home/xb/anaconda3/envs/wda/lib/python3.7/site-packages/pandas/io/parsers.py", line 1114, in _make_engine
self._engine = CParserWrapper(self.f, **self.options)
File "/home/xb/anaconda3/envs/wda/lib/python3.7/site-packages/pandas/io/parsers.py", line 1891, in init
self._reader = parsers.TextReader(src, **kwds)
File "pandas/_libs/parsers.pyx", line 374, in pandas._libs.parsers.TextReader.cinit
File "pandas/_libs/parsers.pyx", line 674, in pandas._libs.parsers.TextReader._setup_parser_source
FileNotFoundError: [Errno 2] File /home/xb/wda_tracker/wda_tracker-master/work_dirs/evaluation/multi_cam_trackwise_evaluation/eval_results.csv does not exist: '/home/xb/wda_tracker/wda_tracker-master/work_dirs/evaluation/multi_cam_trackwise_evaluation/eval_results.csv'

Should I run some file to create "eval_results.csv" before running "draw_multi_camera_tracks.py"?
Or did I run the wrong file to visualize multi-camera tracks?

Look forward to your reply, thank you!

The problem of package version

Hi,
I ran into some problems when installing the python requirements.
I can't find the corresponding versions of mkl-fft==1.0.15 and mkl-random==1.1.0, so I installed them with version 1.2.0, but they are incompatible with numpy 1.18.1. I then upgraded numpy to 1.19.5, but running run_tracker.py produced a further error.
Could you please tell me how to fix this problem?

Running "multicam_trackwise_evaluation.py" gives me the below error. Kindly help me fix it.

Traceback (most recent call last):
File "multicam_trackwise_evaluation.py", line 383, in
result = Multicam_trackwise_evaluation(dataset_folder="/home/mca/Downloads/wda_tracker-master/MTA_ext_short/test"
File "multicam_trackwise_evaluation.py", line 195, in evaluate
track_eval_res_df = self.get_track_eval_res_df(summary)
File "multicam_trackwise_evaluation.py", line 275, in get_track_eval_res_df
idx_hids = id_global_assignment["idx_hids"]
KeyError: 'idx_hids'

Could you provide the tracking results?

Hi,

I made some modifications to the original code, and when I run the evaluation I get a KeyError in metrics.py.

I think there may be some problem; could you provide your single camera tracking results? I know you have provided the multi camera results in other issues. I replaced my files with them but still got errors.

Thanks.

Test on my own dataset

Hi,

Thanks for the open repository. it is a great project!
I want to use the WDA model to test on my own dataset. How can I modify the project to do this?

Thanks

EOFError: Ran out of input

Hi Philipp,

First of all, thanks for the excellent work!

I have encountered an "EOFError: Ran out of input" when running the command "run_multi_cam_clustering.py --config configs/clustering_configs/mta_es_abd_non_clean.py". This error was raised when calculating the pickled person_id_tracks. Below is the error message:

`Did not find pickled person_id_tracks. Calculating them now.
36%|███▋ | 423/1165 [00:29<02:17, 5.40it/s]Ran out of input
multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
File "/home/lxfhfut/anaconda3/envs/wda/lib/python3.7/multiprocessing/pool.py", line 121, in worker
result = (True, func(*args, **kwds))
File "/home/lxfhfut/Dropbox/PyCharm/wda_tracker/clustering/multi_cam_clustering.py", line 1413, in cluster_from_weights_task
, dataset_type=dataset_type
File "/home/lxfhfut/Dropbox/PyCharm/wda_tracker/clustering/multi_cam_clustering.py", line 1113, in cluster_from_weights
,dataset_type=dataset_type)
File "/home/lxfhfut/Dropbox/PyCharm/wda_tracker/clustering/multi_cam_clustering.py", line 818, in cluster_tracks_via_hierarchical
self.get_all_tracks_with_feature_mean(track_results_folder,dataset_type)
File "/home/lxfhfut/Dropbox/PyCharm/wda_tracker/clustering/multi_cam_clustering.py", line 239, in get_all_tracks_with_feature_mean
, dataset_type=dataset_type)
File "/home/lxfhfut/Dropbox/PyCharm/wda_tracker/clustering/multi_cam_clustering.py", line 280, in calculate_track_feature_mean
feature_dict = pickle.load(handle)
EOFError: Ran out of input
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/home/lxfhfut/Dropbox/PyCharm/wda_tracker/run_multi_cam_clustering.py", line 108, in
run_clustering.run()
File "/home/lxfhfut/Dropbox/PyCharm/wda_tracker/run_multi_cam_clustering.py", line 84, in run
, n_split_parts=self.cfg.cluster_from_weights.split_count
File "/home/lxfhfut/Dropbox/PyCharm/wda_tracker/clustering/multi_cam_clustering.py", line 1377, in splitted_clustering_from_weights
eval_results = get_async_tracking_results(eval_results)
File "/home/lxfhfut/Dropbox/PyCharm/wda_tracker/clustering/multi_cam_clustering.py", line 1277, in get_async_tracking_results
result = result.get()
File "/home/lxfhfut/anaconda3/envs/wda/lib/python3.7/multiprocessing/pool.py", line 657, in get
raise self._value
EOFError: Ran out of input
`
I checked the '*/work_dirs/clustering/config_runs/mta_es_abd_non_clean/person_id_tracks' folder, nothing was generated.
Would you kindly help with this issue? Please let me know if you need further information to resolve the error. Thank you!

hard to run..

What I have to say is: this code is hard to run, especially the detector module, when running the tracker via "run_tracker.py".

Inference on my dataset using WDA model.

Hi @koehlp
Thanks a lot for the open repository. This is a great help!
I am using your trained models on a different sample dataset for inference. I got the SCT and now I want to do the clustering of the tracks across cameras. When I configure the configs/clustering_configs/mta_es_abd_non_clean.py, do I need the "train_data" paths? Can I just perform a simple inference (as I don't have the gt values for my dataset)? If so, can you please help me as to what needs to be done?
