
phil-bergmann / tracking_wo_bnw

809 stars, 23 watchers, 195 forks, 6.68 MB

Implementation of "Tracking without bells and whistles" and the multi-object tracker "Tracktor"

Home Page: https://arxiv.org/abs/1903.05625

License: GNU General Public License v3.0

Python 83.70% Jupyter Notebook 1.95% MATLAB 10.89% C++ 3.24% Objective-C 0.09% C 0.13%

tracking_wo_bnw's Introduction

Tracking without bells and whistles

This repository provides the implementation of our paper Tracking without bells and whistles (Philipp Bergmann, Tim Meinhardt, Laura Leal-Taixé) [https://arxiv.org/abs/1903.05625]. This branch includes an updated version of Tracktor for PyTorch 1.3 with an improved object detector. The original results of the paper were produced with the iccv_19 branch.

In addition to our supplementary document, we provide an illustrative web-video-collection. The collection includes exemplary Tracktor++ tracking results and multiple video examples to accompany our analysis of state-of-the-art tracking methods.

Visualization of Tracktor

Installation

  1. Clone and enter this repository:

    git clone https://github.com/phil-bergmann/tracking_wo_bnw
    cd tracking_wo_bnw
    
  2. Install packages for Python 3.7 in a virtualenv:

    1. pip3 install -r requirements.txt
    2. Install PyTorch 1.7 and torchvision 0.8 from here.
    3. Install Tracktor: pip3 install -e .
  3. MOTChallenge data:

    1. Download 2DMOT2015, MOT16, MOT17Det, MOT17, MOT20Det and MOT20 and place them in the data folder.
    2. Unzip all the data in the data directory.
  4. Download model files (MOT17 object detector, MOT20 object detector, and re-identification network) and MOTChallenge result files:

    1. Download zip file from here.
    2. Extract in output directory.
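
After installation, a quick sanity check of the environment can save debugging later. A minimal sketch; the expected versions follow the steps above:

    import torch
    import torchvision

    # Versions should match the installation steps above (PyTorch 1.7, torchvision 0.8).
    print(torch.__version__, torchvision.__version__)
    # Tracktor runs on the GPU; this should print True on a correctly set up machine.
    print(torch.cuda.is_available())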

Evaluate Tracktor

In order to configure, organize, log and reproduce our computational experiments, we structured our code with the Sacred framework. For a detailed explanation of the Sacred interface please read its documentation.

  1. Tracktor can be configured by changing the corresponding experiments/cfgs/tracktor.yaml config file. The default configuration runs Tracktor++ with the FPN object detector as described in the paper. Individual values can also be overridden on the command line (see the example after this list).

  2. Run Tracktor++ (the default configuration) by executing:

    python experiments/scripts/test_tracktor.py
    
  3. The results are logged in the corresponding output directory.
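
Because the scripts are Sacred experiments, individual config values can also be overridden on the command line with Sacred's "with UPDATE" syntax (visible in the script's own usage message). For example, to run with the provided detector instead of the public detections; the exact key path is an assumption based on the layout of experiments/cfgs/tracktor.yaml:

    python experiments/scripts/test_tracktor.py with tracktor.tracker.public_detections=False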

For reproducibility, we provide the new result metrics of this updated code base on the MOT17 challenge. Note that these surpass the original Tracktor results, owing to the newly trained object detector. This version of Tracktor does not differ conceptually from the original ICCV 2019 version (see the iccv_19 branch). On the official MOTChallenge webpage the results are denoted as the Tracktor++v2 tracker. The train and test results are:

********************* MOT17 TRAIN Results *********************
IDF1  IDP  IDR| Rcll  Prcn   GT  MT   PT   ML|    FP     FN   IDs    FM|  MOTA  MOTP MOTAL
65.2 83.8 53.3| 63.1  99.2 1638 550  714  374|  1732 124291   903  1258|  62.3  89.6  62.6

********************* MOT17 TEST Results *********************
IDF1  IDP  IDR| Rcll  Prcn   GT  MT   PT   ML|    FP     FN   IDs    FM|  MOTA  MOTP MOTAL
55.1 73.6 44.1| 58.3  97.4 2355 498 1026  831|  8866 235449  1987  3763|  56.3  78.8  56.7

We complement the results presented in the paper with MOT20 train and test sequence results. To this end, we run the same tracking pipeline as for MOT17 but apply an object detector model trained on the MOT20 training sequences. The corresponding model file is the same as the one used for this work.

********************* MOT20 TRAIN Results *********************
IDF1  IDP  IDR| Rcll  Prcn   GT   MT   PT  ML|    FP     FN  IDs   FM| MOTA
60.7 73.4 51.7| 68.5  97.4 2212  892 1064 259| 20860 357227 2664 6504| 66.4

********************* MOT20 TEST Results *********************
IDF1  IDP  IDR| Rcll  Prcn   GT   MT   PT  ML|    FP     FN  IDs   FM| MOTA
52.6 73.7 41.0| 54.3  97.6 1242  365  546 331|  6930 236680 1648 4374| 52.6

Train and test object detector (Faster R-CNN with FPN)

For the object detector, we followed the new native torchvision implementations of Faster R-CNN with FPN which are pre-trained on COCO. The provided object detection model was trained and tested with the experiments/scripts/faster_rcnn_fpn_training.ipynb Jupyter notebook. The object detection results on the MOT17Det train and test sets are:
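
The core of such a setup is the standard torchvision fine-tuning pattern: load the COCO-pretrained model and swap in a two-class box predictor (background + pedestrian). A minimal sketch of that pattern, not the notebook's exact code:

    import torchvision
    from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

    # Faster R-CNN with a ResNet-50 FPN backbone, pre-trained on COCO.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)

    # Replace the 91-class COCO head with a two-class head (background + pedestrian)
    # before fine-tuning on the MOT17Det training sequences.
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)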

********************* MOT17Det TRAIN Results ***********
Average Precision: 0.9090
Rcll  Prcn|  FAR     GT     TP     FP     FN| MODA  MODP
97.9  93.8| 0.81  66393  64989   4330   1404| 91.4  87.4

********************* MOT17Det TEST Results ***********
Average Precision: 0.8150
Rcll  Prcn|  FAR     GT     TP     FP     FN| MODA  MODP
86.5  88.3| 2.23 114564  99132  13184  15432| 75.0  78.3

Training the re-identification model

  1. The training config file is located at experiments/cfgs/reid.yaml.

  2. Create the reID training data: python experiments/evaluation_tools/reid_mot_to_coco_gt.py --dataset MOT17 --data_root data

  3. Start training by executing python experiments/scripts/train_reid.py.
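
For orientation, the reID model is a Siamese network trained with a triplet loss, as described in the paper. A self-contained sketch of that objective on dummy data; the embedding net, crop size, and margin here are placeholder assumptions, not the repository's actual architecture:

    import torch
    import torch.nn as nn

    # Placeholder embedding network; the real architecture lives in src/tracktor.
    embed = nn.Sequential(nn.Flatten(), nn.Linear(3 * 128 * 64, 128))

    # Dummy person crops: anchor and positive share an identity, negative does not.
    anchor = torch.rand(8, 3, 128, 64)
    positive = torch.rand(8, 3, 128, 64)
    negative = torch.rand(8, 3, 128, 64)

    criterion = nn.TripletMarginLoss(margin=0.3)  # margin value is an assumption
    loss = criterion(embed(anchor), embed(positive), embed(negative))
    loss.backward()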

Publication

If you use this software in your research, please cite our publication:

  @InProceedings{tracktor_2019_ICCV,
  author = {Bergmann, Philipp and Meinhardt, Tim and Leal{-}Taix{\'{e}}, Laura},
  title = {Tracking without bells and whistles},
  booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
  month = {October},
  year = {2019}}

tracking_wo_bnw's People

Contributors

alebeck, dependabot[bot], guillembraso, phil-bergmann, timmeinhardt


tracking_wo_bnw's Issues

RuntimeError

RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.

I think there is a problem with the torch and torchvision versions.

Can anyone help me?
Thanks in advance.
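
A minimal sketch of the CPU-safe loading the error message itself suggests; the checkpoint path here is hypothetical:

    import torch

    # Remap CUDA-trained weights onto CPU storage on a CPU-only machine.
    checkpoint = torch.load('output/faster_rcnn_fpn.model', map_location=torch.device('cpu'))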

How to train the detector and Siamese network?

Hi, if I want to train the object detector, do I need to convert my dataset format from MOT17 to VOC, or is that unnecessary?
How about training the re-identification Siamese network: what format should it be, MOT17 or VOC?

Detection and ReID weights for the CVPR19 challenge

Hello,

thanks for your great work.

For the MOT Challenge CVPR2019, your Tracktor also gets nice results. Could you please also share the weights of the detection and reid networks of TracktorCV?

Thank you very much!

Which pytorch version should be used?

If I use PyTorch 1.3, an error occurs when compiling FasterRCNN: ImportError("torch.utils.ffi is deprecated. Please use cpp extensions instead."), but loading the FasterRCNN weights works.
On the other hand, if I use PyTorch 0.4, FasterRCNN compiles, but the weights fail to load.

So which PyTorch version should be used?

Fail to reproduce MOTA on MOT17

Hi,

Thank you for your wonderful work! I am trying to reproduce your Tracktor++ results on the MOT17 dataset. Using your trained weights, the 61.6 MOTA on mot17_train_FRCNN17 described in the paper was reproduced. However, when I trained the FPN and Siamese network myself, I only got 49.8 MOTA on mot17_train_FRCNN17.

I noticed that the downloaded config.yaml file and the config file produced by training differ. Here is part of the diff. It seems these parameters are saved in the model.


< FEAT_STRIDE: &id012
---
> FEAT_STRIDE: &id010
20c19
< FPN_ANCHOR_SCALES: &id013
---
> FPN_ANCHOR_SCALES: &id011
27c26
< FPN_FEAT_STRIDES: &id014
---
> FPN_FEAT_STRIDES: &id012
37c36
< MOBILENET: &id015 !!python/object/new:easydict.EasyDict
---
> MOBILENET: &id013 !!python/object/new:easydict.EasyDict
48c47
< PIXEL_MEANS: &id016 !!python/object/apply:numpy.core.multiarray._reconstruct
---
> PIXEL_MEANS: &id014 !!python/object/apply:numpy.core.multiarray._reconstruct


I evaluated the downloaded model and the model I trained myself; the mAPs are roughly the same on the val set (0.79x). However, the PIXEL_MEANS and PIXEL_STDVS are different.

trained by myself:
'PIXEL_MEANS': array([[[123.675, 116.28 , 103.53 ]]]),
'PIXEL_STDVS': array([[[58.395, 57.12 , 57.375]]]),

downloaded from you (also the same as the Pascal VOC pre-trained file):
'PIXEL_MEANS': array([[[102.9801, 115.9465, 122.7717]]]),
'PIXEL_STDVS': array([[[1., 1., 1.]]]),

Do you have any ideas?

setup.py not found

When I run pip3 install -e src/frcnn (or fpn), there's an error:
ERROR: File "setup.py" not found. Directory cannot be installed in editable mode: /media/ray/Data/MOT/tracking_wo_bnw/src/frcnn

Is there a setup.py file in fpn/ or frcnn/?

Questions about the bounding box regression part

Thanks for your excellent work! After reading your code I have some questions about the bounding box regression part:

  1. If we use public detections on the MOT datasets, does bbox regression still work? I mean, bounding box regression needs trained weights in order to refine the coordinates at test time, and FPN and Faster R-CNN have already trained this through their loss. If we use public detections, how can we get the weights to do bbox regression?
  2. In your code, I didn't find separate weights for bounding box regression. It seems like you directly get the features through the network and feed them to the formulas.

Thank you very much!
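
For context on both questions: Tracktor reuses the detector's own trained regression head rather than separate regression weights. The track boxes from frame t-1 are fed as proposals in place of the RPN output, and the head's regression deltas give the refined boxes at frame t. A rough sketch of this idea against torchvision's Faster R-CNN internals (dummy image and box values; illustrative only, not the repository's code):

    import torch
    import torchvision
    from torchvision.models.detection.transform import resize_boxes

    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True).eval()

    image = torch.rand(3, 480, 640)                      # frame t (dummy)
    prev_boxes = torch.tensor([[50., 60., 120., 240.]])  # track box from frame t-1

    with torch.no_grad():
        images, _ = model.transform([image])
        features = model.backbone(images.tensors)
        # The old track boxes act as proposals instead of the RPN output.
        proposals = [resize_boxes(prev_boxes, (480, 640), images.image_sizes[0])]
        feats = model.roi_heads.box_roi_pool(features, proposals, images.image_sizes)
        class_logits, box_deltas = model.roi_heads.box_predictor(
            model.roi_heads.box_head(feats))
        # Decode the person-class deltas into refined boxes at frame t.
        boxes = model.roi_heads.box_coder.decode(box_deltas, proposals)[:, 1]
        boxes = resize_boxes(boxes, images.image_sizes[0], (480, 640))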

Val mean AP shows nan during training the detector

I was trying to train the detector, but the test-set mean AP on the MOT17 dataset shows a nan value:

[VAL]: im_detect: 5316/5316 0.107s 0.001s
[TRAIN]: Mean AP = 0.5192
/tracking_wo_bnw/src/fpn/fpn/datasets/mot.py:420: RuntimeWarning: invalid value encountered in true_divide
rec = tp / float(npos)

[VAL]: Mean AP = nan

The problem with regression

hello, when I read your detector training code (https://colab.research.google.com/drive/1_arNo-81SnqfbdtAhb3TBSU5H0JXQ0_1), I found that its dataset is constructed using the current image and its ground truth:

    if int(row[0]) == file_index and int(row[6]) == 1 and int(row[7]) == 1 and visibility >= self._vis_threshold:
        bb = {}
        bb['bb_left'] = int(row[2])
        bb['bb_top'] = int(row[3])
        bb['bb_width'] = int(row[4])
        bb['bb_height'] = int(row[5])
        bb['visibility'] = float(row[8])

So I want to know why it can predict the future target position; namely, the regression of the object detector aligns already existing track bounding boxes of frame t-1 to the object's new position at frame t. How is this different from the original Faster R-CNN training?
Thank you very much.

About Reid network

Nice work!
I find that the reid network referred to in your paper was trained on the MOT dataset.
Have you ever compared your reid network with other SOTA reid networks trained on larger datasets?

Can't download weight file

Hello!
I'm trying to download the weight files and I get this message:
"ERROR 400: Bad Request"

Could you please check whether the weight file is correctly uploaded? Thank you!

Problems with evaluating on MOT16

Thanks for sharing your great work! I wonder what the input should be to evaluate Tracktor on MOT16: the MOT16 labels or MOT16-det-dpm-raw?

host setup details (CUDNN_STATUS_EXECUTION_FAILED)

Hello,

I'm trying to test the tracker with python experiments/scripts/test_tracktor.py but I always end up with this error

ERROR - test_tracktor - Failed after 0:00:14!
Traceback (most recent calls WITHOUT Sacred internals):
  File "experiments/scripts/test_tracktor.py", line 120, in my_main
    tracker.step(frame)
  File "/home/user/va-workspace/tracking_wo_bnw/src/tracktor/tracker.py", line 257, in step
    self.obj_detect.load_image(blob['data'][0], blob['im_info'][0])
  File "/home/user/va-workspace/tracking_wo_bnw/src/tracktor/fpn.py", line 63, in load_image
    c1 = self.RCNN_layer0(self.im_data)
  File "/home/user/miniconda3/envs/tracking_wo_bnw/lib/python3.6/site-packages/torch/nn/modules/module.py", line 357, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/user/miniconda3/envs/tracking_wo_bnw/lib/python3.6/site-packages/torch/nn/modules/container.py", line 67, in forward
    input = module(input)
  File "/home/user/miniconda3/envs/tracking_wo_bnw/lib/python3.6/site-packages/torch/nn/modules/module.py", line 357, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/user/miniconda3/envs/tracking_wo_bnw/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 282, in forward
    self.padding, self.dilation, self.groups)
  File "/home/user/miniconda3/envs/tracking_wo_bnw/lib/python3.6/site-packages/torch/nn/functional.py", line 90, in conv2d
    return f(input, weight, bias)
RuntimeError: CUDNN_STATUS_EXECUTION_FAILED

I believe this might be related to the fact that I'm running the latest NVIDIA driver (440.36), which supports CUDA 10.2, on an NVIDIA RTX 2070 SUPER.
I have tried multiple operating systems: Ubuntu 16.04 (conda env with Python 3.6) and Fedora 31 (conda env).

Could you please detail your setup so I can try to match it?

Problems about MOTDT

Thanks for your great work! May I ask whether you have run the MOTDT code provided by its author? I found it claims to be a reimplementation, and its performance is quite a bit lower than what the paper and MOT Challenge show.

public detections and private detections?

I am confused about the concepts of public and private detections. My understanding after reading the code is that the public detections are generated by 'fpn' while the private detections are from 'faster rcnn'. Is this correct? Thanks.

test on my own dataset

Hi,
I want to train on my own dataset, which is in MOT17 format, but the target is cars, not persons. So I think I need to retrain the object detector and the re-identification Siamese network, is that right?

I have not trained the object detector yet.
When I train the re-identification Siamese network, I get this error message:
File "experiments/scripts/train_siamese.py", line 86, in my_main
max_epochs = 25000 // len(db_train.dataset) + 1 if 25000%len(db_train.dataset) else 25000 // len(db_train.dataset)
ZeroDivisionError: integer division or modulo by zero

Which files need to be changed to rename "MOT17" to "Car"?

Problem with requirements.txt

Line 83 of the requirements.txt file throws an error when running the command pip3 install -r requirements.txt.

Dataset issue

I am trying to use my own dataset, but it asks for gt and det files. According to the paper, the algorithm uses Faster R-CNN for object detection, so why does it ask for ground-truth and detection files? Is it compulsory to have these two files in the dataset?

Thanks in advance.

I got this error when I run "pip3 install -r requirements.txt", any idea?

ERROR: Command errored out with exit status 1:
command: /usr/bin/python3 -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-fnb16wle/lapsolver/setup.py'"'"'; __file__='"'"'/tmp/pip-install-fnb16wle/lapsolver/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-install-fnb16wle/lapsolver/pip-egg-info
cwd: /tmp/pip-install-fnb16wle/lapsolver/
Complete output (27 lines):
zip_safe flag not set; analyzing archive contents...

Installed /tmp/pip-install-fnb16wle/lapsolver/.eggs/pytest_runner-5.2-py3.5.egg
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2237, in resolve
    return functools.reduce(getattr, self.attrs, module)
AttributeError: module 'setuptools.dist' has no attribute 'check_specifier'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/tmp/pip-install-fnb16wle/lapsolver/setup.py", line 88, in <module>
    keywords='hungarian munkres kuhn linear-sum-assignment bipartite-graph lap'
  File "/usr/lib/python3.5/distutils/core.py", line 108, in setup
    _setup_distribution = dist = klass(attrs)
  File "/usr/lib/python3/dist-packages/setuptools/dist.py", line 272, in __init__
    _Distribution.__init__(self,attrs)
  File "/usr/lib/python3.5/distutils/dist.py", line 281, in __init__
    self.finalize_options()
  File "/usr/lib/python3/dist-packages/setuptools/dist.py", line 327, in finalize_options
    ep.load()(self, ep.name, value)
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2229, in load
    return self.resolve()
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2239, in resolve
    raise ImportError(str(exc))
ImportError: module 'setuptools.dist' has no attribute 'check_specifier'
----------------------------------------

ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.

The evaluation results are different

I have run test_tracktor.py twice on the same device and evaluated the results using the MOTChallenge devkit, but the evaluation results differ: there is a gap of 4% MOTA on the MOT17 train set. I am a beginner in multi-object tracking and would like to know why this happens.
Thank you.

I cannot download the zip file from google drive

Every time I try to download the zip file from Google Drive, it gets stuck at 20 MB and then fails with a 'Network error'.

Can you check it or provide another link?

By the way, do you think it is fair to use a more powerful detector, like a private MOT16 detector, to make trackers on MOT17 better? It seems very easy to push MOT17 performance to 60% MOTA with your method, because many private MOT16 detectors can boost trackers to 70% MOTA. Why did you choose FPN & FRCNN and not such a private detector? If you used a private detector to refine your Tracktor, would you get a better MOTA?

How to use multiple GPUs with this code

Thanks for your progress! It's a new direction for the MOT problem. I have run your code successfully, but it is slow on the full MOT17 dataset. Have you tried running this code on multiple GPUs?

An intuitive definition of "oracle trackers"

It seems the authors borrowed the term "oracle" from its original definition in classical antiquity. However, it is pretty confusing when the term first appears in the title of the paper's Section 4.2.

Could you provide an intuitive explanation of what oracle trackers are, perhaps focusing on their purpose? It sounds to me like an ablation study in a sense, but aimed more at foreseeing potential research directions. Correct me if this is a wrong understanding. Thanks in advance.

running the default configuration fails

I followed the described installation steps in the readme file.
Running python experiments/scripts/test_tracktor.py fails with the following error:

python experiments/scripts/test_tracktor.py
WARNING - test_tracktor - No observers have been added to this run
INFO - test_tracktor - Running command 'main'
INFO - test_tracktor - Started
ERROR - test_tracktor - Failed after 0:00:00!
Usage:
test_tracktor.py [(with UPDATE...)] [options]
test_tracktor.py help [COMMAND]
test_tracktor.py (-h | --help)
test_tracktor.py COMMAND [(with UPDATE...)] [options]
sacred.utils.MissingConfigError: main is missing value(s): ['reid']

Is it necessary to add an additional reid config?

Question about data

Hello,

I have a question regarding the given dataset.

In step 4 it asks to download the MOTChallenge datasets. There is a dataset called MOT16Labels. Although the name contains 16, the folder actually contains the detections (DPM, FRCNN, SDP) from MOT17. Besides, according to the MOTChallenge website, MOT16 only offers DPM detections.

I would like to ask what the difference is between the MOT16Labels and MOT17Labels datasets. Is this a mistake?

Thank you in advance

Readme link not working

Hi,

Your readme section:

Download object detector and re-identification Siamese network weights and MOTChallenge result files for ICCV 2019:

Download zip file from here.

Extract in output directory.

The link that contains the zip file is not working... Could you correct that please?

Yours sincerely

How about the runtime speed

Thanks for your kind work. Here are some questions:

  • 1: What is the runtime speed?
  • 2: If I use it on a mobile device, is it convenient to port?
  • 3: I already have a detector (YOLOv3); do I need to retrain a new network for regression and classification?

about classification

I want to be sure whether the classification head belongs to Faster R-CNN or you trained it on your own. Thanks.

Evaluation FPN detector

Hello,
I have tested the provided object detector using the following command given in the Readme:
python experiments/scripts/test_fpn.py voc_init_iccv19 --cuda --net res101 --dataset mot_2017_train --imdbval_name mot_2017_train --checkepoch 27

The weight I test is the weight you provide in your google drive, which is "res101/mot_2017_train/voc_init_iccv19/fpn_1_27.pth".
I haven't done any changes to the weight and the test code.

However, I can't get the same result as reported in your paper. The result I got from the test code is:
AP: 0.7896991147961206 Prec: 0.8078499487666508 Rec: 0.9737321705601494 TP: 64649.0 FP: 15377.0
The result varies a lot and I am not sure if I have done anything wrong during the implementation. Could you please help? Thanks!

Executing faster RCNN for demo

Hello,

Thank you for sharing your code. I was wondering if there is a way to visualize FasterRCNN's efficacy on MOT using the code and weights that you have provided.

Your Readme.md file mentions we have to use PyTorch 0.3 and CUDA 9. However, the entire FasterRCNN repo is built on torchvision, which requires torch>1.0, contradicting the Readme instructions. Is there a way to visualize FasterRCNN results from this repository? Could you please help me figure out how?

Cheers,

Setting up the environment according to the requirements reports "RuntimeError: CUDNN_STATUS_EXECUTION_FAILED"

ERROR - test_tracktor - Failed after 0:00:07!
Traceback (most recent calls WITHOUT Sacred internals):
File "experiments/scripts/test_tracktor.py", line 123, in my_main
tracker.step(frame)
File "/home/jyw/Code/tracking_wo_bnw-master/src/tracktor/tracker.py", line 312, in step
self.obj_detect.load_image(blob['data'][0], blob['im_info'][0])
File "/home/jyw/Code/tracking_wo_bnw-master/src/tracktor/fpn.py", line 63, in load_image
c1 = self.RCNN_layer0(self.im_data)
File "/home/jyw/anaconda3/envs/tracktor/lib/python3.6/site-packages/torch/nn/modules/module.py", line 357, in call
result = self.forward(*input, **kwargs)
File "/home/jyw/anaconda3/envs/tracktor/lib/python3.6/site-packages/torch/nn/modules/container.py", line 67, in forward
input = module(input)
File "/home/jyw/anaconda3/envs/tracktor/lib/python3.6/site-packages/torch/nn/modules/module.py", line 357, in call
result = self.forward(*input, **kwargs)
File "/home/jyw/anaconda3/envs/tracktor/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 282, in forward
self.padding, self.dilation, self.groups)
File "/home/jyw/anaconda3/envs/tracktor/lib/python3.6/site-packages/torch/nn/functional.py", line 90, in conv2d
return f(input, weight, bias)
RuntimeError: CUDNN_STATUS_EXECUTION_FAILED

Which TensorFlow version is required?

The requirements file says the required TensorFlow version is 1.1.0, but it seems that TensorFlow 1.1.0 is not compatible with CUDA 9.0. I am confused about the configuration.
Thanks.

Error when re-training reidentification model

Hi,

I am trying to retrain the re-identification model using the command you gave: python experiments/scripts/train_reid.py

However, I encounter an issue in the dataloader:

File "experiments/scripts/train_reid.py", line 47, in my_main db_train = Datasets(reid['db_train'], reid['dataloader']) File "/home/Algorithms/MultiObjectTracking/tracking_wo_bnw/experiments/scripts/tracktor/datasets/factory.py", line 63, in __init__ self._data = _sets[dataset](*args) File "/home/Algorithms/MultiObjectTracking/tracking_wo_bnw/experiments/scripts/tracktor/datasets/factory.py", line 31, in <lambda> _sets[name] = (lambda *args, split=split: MOT_Siamese_Wrapper(split, *args)) File "/home/Algorithms/MultiObjectTracking/tracking_wo_bnw/experiments/scripts/tracktor/datasets/mot_siamese_wrapper.py", line 19, in __init__ self._dataloader = MOT_Siamese(None, split=split, **dataloader) File "/home/Algorithms/MultiObjectTracking/tracking_wo_bnw/experiments/scripts/tracktor/datasets/mot_siamese.py", line 26, in __init__ super().__init__(seq_name, vis_threshold=vis_threshold) File "/home/Algorithms/MultiObjectTracking/tracking_wo_bnw/experiments/scripts/tracktor/datasets/mot_sequence.py", line 50, in __init__ 'Image set does not exist: {}'.format(seq_name) AssertionError: Image set does not exist: None
I think it is because the seq_name that initially gets passed in the MOT_Siamese_Wrapper class is None, which the MOT17_Sequence class has no way of handling. Could you advise on how to amend this issue? Thank you.

ModuleNotFoundError: No module named 'frcnn.nms._ext.nms._nms'

when I run the Tracktor by executing:
python experiments/scripts/test_tracktor.py
I got an error as follows:

Traceback (most recent call last):
  File "experiments/scripts/test_tracktor.py", line 16, in <module>
    from tracktor.datasets.factory import Datasets
  File "tracktor2/tracking_wo_bnw/src/tracktor/datasets/factory.py", line 2, in <module>
    from .mot_wrapper import MOT17_Wrapper, MOT19CVPR_Wrapper, MOT17LOWFPS_Wrapper
  File "/home/ymy/tracktor2/tracking_wo_bnw/src/tracktor/datasets/mot_wrapper.py", line 4, in <module>
    from .mot_sequence import MOT17_Sequence, MOT19CVPR_Sequence, MOT17LOWFPS_Sequence
  File "tracktor2/tracking_wo_bnw/src/tracktor/datasets/mot_sequence.py", line 12, in <module>
    from frcnn.model import test
  File "tracktor2/tracking_wo_bnw/src/frcnn/frcnn/model/test.py", line 22, in <module>
    from .nms_wrapper import nms
  File "tracktor2/tracking_wo_bnw/src/frcnn/frcnn/model/nms_wrapper.py", line 11, in <module>
    from ..nms.pth_nms import pth_nms
  File "tracktor2/tracking_wo_bnw/src/frcnn/frcnn/nms/pth_nms.py", line 2, in <module>
    from ._ext import nms
  File "tracktor2/tracking_wo_bnw/src/frcnn/frcnn/nms/_ext/nms/__init__.py", line 3, in <module>
    from ._nms import lib as _lib, ffi as _ffi
ModuleNotFoundError: No module named 'frcnn.nms._ext.nms._nms'

The _nms.py file does not exist under the path "tracktor2/tracking_wo_bnw/src/frcnn/frcnn/nms/_ext/nms/_nms.py".

AttributeError: 'FPN' object has no attribute 'detect'

Thanks for this wonderful code base - it was refreshingly easy to set up due to the instructions. I'm running into one issue: I'm trying to run the provided detector (by setting public_detections: False), but when using the FPN backbone, I get the following error:

File "experiments/scripts/test_tracktor.py", line 123, in my_main
  tracker.step(frame)
File "src/tracktor/tracker.py", line 323, in step
  _, scores, bbox_pred, rois = self.obj_detect.detect()

I believe this is because detect is not defined in src/tracktor/fpn.py the way it is for src/tracktor/frcnn.py. Is this not supported right now, or am I just looking in the wrong place?

Thanks!

Use own detections or detections provided by MOT for the detection part "offline"

Hi, thanks for publishing this.

How can one use offline detections instead of running the whole pipeline? I'm using this as a baseline for my master's thesis, and my other code runs on CUDA 10, so it would be great if I could use offline detections and focus on other functionality.

About the results in papers

Hello, I just want to know which boxes you record for the results in the paper: the boxes provided by MOTChallenge or those from the FPN? In your code, "self.results[t.id][self.im_index] = np.concatenate([t.pos[0].cpu().numpy(), np.array([t.score])])", the boxes come from the FPN. Which boxes were used for the paper? Thanks very much.
