MOTChallengeEvalKit's People

Contributors

dendorferpatrick, songheony

MOTChallengeEvalKit's Issues

How to customize this code to evaluate MOT metrics on custom datasets?

Dear all,
I'm trying to evaluate my custom MOT algorithm on my own datasets.
I've read most of the related papers on motchallenge.net, and I was wondering whether I can prepare my datasets in the same ground-truth format as MOT (e.g. MOT16) and then apply this repository. So far I haven't found the right way to do this.
Any recommendations are appreciated.
Thanks!
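For reference, here is a minimal sketch of writing annotations in the MOT ground-truth layout (<frame>, <id>, <bb_left>, <bb_top>, <bb_width>, <bb_height>, <conf>, <x>, <y>, <z>, as documented in this repository's MOT README). The sequence name and the boxes list are hypothetical; as I understand the 2D format, conf acts as an active flag and x, y, z are set to -1.

import os

# Hypothetical annotations: (frame, track_id, bb_left, bb_top, bb_width, bb_height)
boxes = [
    (1, 1, 100.0, 200.0, 50.0, 120.0),
    (2, 1, 104.0, 201.0, 50.0, 121.0),
]

# Mirror the MOT16-style tree: <sequence>/gt/gt.txt next to an img1/ folder of frames.
os.makedirs("MySeq-01/gt", exist_ok=True)
with open("MySeq-01/gt/gt.txt", "w") as f:
    for frame, tid, left, top, w, h in boxes:
        # conf = 1 marks the box as active for evaluation; x, y, z are unused in 2D.
        f.write(f"{frame},{tid},{left},{top},{w},{h},1,-1,-1,-1\n")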

Converting MOT20Det to YOLOv8 format

Hello,

I'm raising an issue to ask how I might convert the MOT20Det dataset annotations to YOLOv8 format.
I am having trouble understanding what the individual numbers mean. For example:

1,1,199,813,140,268,1,1,0.83643

This line does not seem to follow the <frame>, <id>, <bb_left>, <bb_top>, <bb_width>, <bb_height>, <conf>, <x>, <y>, <z> format written on the website; it has only 9 values instead of 10.

So how should I interpret it?
And if I want to have one txt file per image, how should I go about doing that?
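In case it helps, here is a conversion sketch under explicit assumptions: the 9 values are frame, id, bb_left, bb_top, bb_width, bb_height, an active flag, a class id, and a visibility ratio (which would make the example above a fully visible pedestrian box); the image size is known per sequence; and every kept box maps to YOLO class 0. The paths, image size, and class mapping are hypothetical.

import os
from collections import defaultdict

GT_FILE = "MOT20-01/gt/gt.txt"  # hypothetical path
OUT_DIR = "labels"              # hypothetical output folder
IMG_W, IMG_H = 1920, 1080       # read the real size from the sequence's seqinfo.ini

per_frame = defaultdict(list)
with open(GT_FILE) as f:
    for line in f:
        vals = line.strip().split(",")
        frame = int(vals[0])
        left, top, w, h = map(float, vals[2:6])
        flag, cls = int(float(vals[6])), int(float(vals[7]))
        # Assumption: keep only active pedestrian boxes (flag 1, class 1).
        if flag != 1 or cls != 1:
            continue
        # YOLO labels: class x_center y_center width height, normalized to [0, 1].
        xc = (left + w / 2) / IMG_W
        yc = (top + h / 2) / IMG_H
        per_frame[frame].append(f"0 {xc:.6f} {yc:.6f} {w / IMG_W:.6f} {h / IMG_H:.6f}")

# One label file per image, named after the frame number.
os.makedirs(OUT_DIR, exist_ok=True)
for frame, rows in per_frame.items():
    with open(os.path.join(OUT_DIR, f"{frame:06d}.txt"), "w") as out:
        out.write("\n".join(rows) + "\n")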

custom data annotation?

I want to annotate my custom data in a format similar to the MOT data format.
Are there any open-source/free annotation tools?

Thanks

MOT20 data annotation

What software was used for the official annotation of the MOT20 dataset, and how was the visibility rate mentioned in the annotations obtained?

Segmentation GT masks for MOT-17 OR Tracking GT annotations for MOTS

Hello dear authors,
Thank you for your effort in creating the dataset.

I am trying to use the MOT17 data for my project, and I was wondering whether you have segmentation masks for the frames along with the tracking/detection annotations.

Also, I see that MOTS has segmentation masks, but I'm not sure whether it also has tracking and detection annotations.

Could you please clarify both points?
Thank you in advance.

MinCostMatching may return incorrect result with infinite edges

Note: This bug did not seem to change the results for any tracker on the MOT17 train set.

There is a small bug that can cause MinCostMatching() to return the wrong solution when there are infinite-cost edges.
Minimum working example:

MinCostMatching([inf 4 2; 1 inf inf; 5 inf inf])

Expected result: (* could be 0 or 1)

   0   0   1
   1   0   0
   0   *   0

Actual result:

   0   1   0
   1   0   0
   0   0   1

From what I've seen, this bug only occurs when the optimal solution is not a perfect matching (i.e. in the example above, the largest matching contains 2 edges, not 3). It seems it can be fixed either by avoiding infinite-cost edges or by modifying this line:

alldist[i][j] = distMatrix[j*nOfRows + i];

to be:

alldist[i][j] = min(INF, distMatrix[j*nOfRows + i]);

Alternatively, one could change the != INF comparison in the following line to < INF:

if (Lmate[i] < nOfColumns && alldist[i][Lmate[i]] != INF)

This issue does not affect clearMOTMex() since it is aware of INF when it constructs the cost matrix.

It could affect IDmeasures.m, but perhaps it has no impact in practice because a perfect matching always exists.

If MinCostMatching() is used instead of Hungarian() in preprocessResult.m, however, this bug becomes important to consider.
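For illustration only (this is not the kit's C++ code): the failure mode and the sentinel workaround can be reproduced with SciPy's assignment solver, which treats infinite entries as forbidden edges and raises when no full assignment exists (SciPy 1.4+).

import numpy as np
from scipy.optimize import linear_sum_assignment

INF = float("inf")
cost = np.array([[INF, 4.0, 2.0],
                 [1.0, INF, INF],
                 [5.0, INF, INF]])

# No perfect finite-cost matching exists (rows 1 and 2 both need column 0),
# so SciPy rejects the raw matrix as infeasible.
try:
    linear_sum_assignment(cost)
except ValueError as err:
    print(err)

# Capping costs at a large finite sentinel (the same idea as the min(INF, ...)
# fix above) recovers the expected matching: row 0 -> col 2, row 1 -> col 0,
# and row 2 absorbed by a sentinel edge.
rows, cols = linear_sum_assignment(np.minimum(cost, 1e9))
print([(int(r), int(c)) for r, c in zip(rows, cols)])  # [(0, 2), (1, 0), (2, 1)]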

3DZeF or 3DZeF20

In evalZf3D.py, line 84 sets gt_dir = "data/3DZeF", but I'm pretty sure this should be "data/3DZeF20", right?

Hungarian.m returns incorrect solution in presence of infinite edges

There seems to be a bug in Hungarian.m, which is used by matlab_devkit/utils/preprocessResult.m. This was also noticed by @JonathonLuiten. The bug seems to occur when there are infinite edges in the bipartite graph (which is the case in the preprocess script).

Here is a minimal working example. While a perfect matching (size 3) exists, the function does not find it and returns a matching of size 2:

Hungarian([0 inf 10; inf 1 inf; 1 inf inf])

Expected result:

ans =
   0   0   1
   0   1   0
   1   0   0

Actual result:

ans =
   1   0   0
   0   1   0
   0   0   0

The problem may arise from the min_cover_line() function. It seems to identify the "deficiency" as 1, which leads to additional vertices being added to ensure that a perfect matching exists, and the solver is applied to the matrix:

P_cond =
     0   Inf    10    10
   Inf     1   Inf    10
     1   Inf   Inf    10
    10    10    10    10

obtaining (correctly) the matching

M =
   1   0   0   0
   0   1   0   0
   0   0   0   1
   0   0   1   0

On the other hand, MinCostMatching.mex returns the correct solution. Maybe this could be fixed by simply replacing Hungarian() with MinCostMatching().

If I try using large finite values instead, the correct solution is obtained:

Hungarian([0 1e6 10; 1e6 1 1e6; 1 1e6 1e6])
ans =
   0   0   1
   0   1   0
   1   0   0

(Tested on Octave 5.2.0)
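For what it's worth, the expected result can also be cross-checked in Python with SciPy's solver (1.4+, which accepts infinite entries as forbidden edges):

import numpy as np
from scipy.optimize import linear_sum_assignment

cost = np.array([[0.0, np.inf, 10.0],
                 [np.inf, 1.0, np.inf],
                 [1.0, np.inf, np.inf]])

# The only feasible perfect matching is row 0 -> col 2, row 1 -> col 1, row 2 -> col 0.
rows, cols = linear_sum_assignment(cost)
print([(int(r), int(c)) for r, c in zip(rows, cols)])  # [(0, 2), (1, 1), (2, 0)]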

MOT20\train\MOT20-01\gt\gt.txt data format not consistent

https://github.com/dendorferpatrick/MOTChallengeEvalKit/blob/master/MOT/README.md states

Data Format:
The file format should be the same as the ground truth file, which is a CSV text-file containing one object instance per line. Each line must contain 10 values:
<frame>, <id>, <bb_left>, <bb_top>, <bb_width>, <bb_height>, <conf>, <x>, <y>, <z>

The data at https://motchallenge.net/data/MOT20/ in MOT20\train\MOT20-01\gt\gt.txt does not contain 10 values; each line contains only 9.
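A quick way to confirm this (a sketch; adjust the path to your local copy):

with open("MOT20/train/MOT20-01/gt/gt.txt") as f:
    field_counts = {len(line.strip().split(",")) for line in f}
print(field_counts)  # prints {9} rather than the documented 10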

MATLAB evalutation failed

Hi, I am running Ubuntu 18.04, Python 3.6.9, MATLAB R2020a, and the latest version of this Git repository.
This is the output I get when running $ python3 MOT/evalMOT.py:

Evaluating Benchmark: MOT16
Found 2 ground truth files and 2 test files.
['res/MOT16res/MOT16-02.txt', 'res/MOT16res/MOT16-09.txt']
Check prediction files
res/MOT16res/MOT16-02.txt
res/MOT16res/MOT16-09.txt
Files are ok!
Evaluating on 2 cpu cores
multiprocessing.pool.RemoteTraceback: 
"""
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/matlab/engine/__init__.py", line 45, in <module>
    pythonengine = importlib.import_module("matlabengineforpython"+_PYTHONVERSION)
  File "/usr/lib/python3.6/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 994, in _gcd_import
  File "<frozen importlib._bootstrap>", line 971, in _find_and_load
  File "<frozen importlib._bootstrap>", line 953, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'matlabengineforpython3_6'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/matlab/engine/__init__.py", line 61, in <module>
    pythonengine = importlib.import_module("matlabengineforpython"+_PYTHONVERSION)
  File "/usr/lib/python3.6/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 994, in _gcd_import
  File "<frozen importlib._bootstrap>", line 971, in _find_and_load
  File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 658, in _load_unlocked
  File "<frozen importlib._bootstrap>", line 571, in module_from_spec
  File "<frozen importlib._bootstrap_external>", line 922, in create_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
ImportError: /usr/local/MATLAB/R2020a/extern/engines/python/dist/matlab/engine/glnxa64/../../../../../../../bin/glnxa64/libssl.so.1.1: symbol EVP_idea_cbc version OPENSSL_1_1_0 not defined in file libcrypto.so.1.1 with link time reference

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.6/multiprocessing/pool.py", line 119, in worker
    result = (True, func(*args, **kwds))
  File "/home/luca/MOTChallengeEvalKit/Evaluator.py", line 233, in run_metrics
    metricObject.compute_metrics_per_sequence(**args)
  File "/home/luca/MOTChallengeEvalKit/MOT/MOT_metrics.py", line 132, in compute_metrics_per_sequence
    import matlab.engine
  File "/usr/local/lib/python3.6/dist-packages/matlab/engine/__init__.py", line 64, in <module>
    'MathWorks Technical Support for assistance: %s' % e)
OSError: Please reinstall MATLAB Engine for Python or contact MathWorks Technical Support for assistance: /usr/local/MATLAB/R2020a/extern/engines/python/dist/matlab/engine/glnxa64/../../../../../../../bin/glnxa64/libssl.so.1.1: symbol EVP_idea_cbc version OPENSSL_1_1_0 not defined in file libcrypto.so.1.1 with link time reference
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "MOT/evalMOT.py", line 62, in eval
    self.results = [p.get() for p in processes]
  File "MOT/evalMOT.py", line 62, in <listcomp>
    self.results = [p.get() for p in processes]
  File "/usr/lib/python3.6/multiprocessing/pool.py", line 644, in get
    raise self._value
OSError: Please reinstall MATLAB Engine for Python or contact MathWorks Technical Support for assistance: /usr/local/MATLAB/R2020a/extern/engines/python/dist/matlab/engine/glnxa64/../../../../../../../bin/glnxa64/libssl.so.1.1: symbol EVP_idea_cbc version OPENSSL_1_1_0 not defined in file libcrypto.so.1.1 with link time reference

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/luca/MOTChallengeEvalKit/Evaluator.py", line 91, in run
    results = self.eval()
  File "MOT/evalMOT.py", line 71, in eval
    raise Exception("<exc> MATLAB evalutation failed <!exc>")
Exception: <exc> MATLAB evalutation failed <!exc>

<br> Evaluation failed! <br>
Error Message Error:  MATLAB evalutation failed Error:  MATLAB evalutation failed 
ERROR Error:  MATLAB evalutation failed Error:  MATLAB evalutation failed 
Evaluation Finished
Your Results
Traceback (most recent call last):
  File "MOT/evalMOT.py", line 86, in <module>
    eval_mode = eval_mode)
  File "/home/luca/MOTChallengeEvalKit/Evaluator.py", line 161, in run
    print(self.render_summary())
  File "/home/luca/MOTChallengeEvalKit/Evaluator.py", line 219, in render_summary
    output = self.summary.to_string(
AttributeError: 'NoneType' object has no attribute 'to_string'

NOTE: I tried importing matlab.engine on my own in a separate session, and it does not give any error:

luca@luca:~$ python3
Python 3.6.9 (default, Oct  8 2020, 12:12:24) 
[GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys, os, math
>>> import matlab.engine
>>> eng = matlab.engine.start_matlab()
>>> 

Attached is one of the res files (for MOT16, sequence 09):
MOT16-09.txt

open source

Will you open-source the code for the article "Quo Vadis: Is Trajectory Forecasting the Key Towards Long-Term Multi-Object Tracking?"

Pose keypoints annotations for MOTSynth

After downloading the MOT-style annotations from your official website, I found there were 12 columns in every row, and I couldn't find a description for these columns. From my understanding, they include 'frame', 'object_id', 'bbox_left', 'bbox_top', 'bbox_width', 'bbox_height', 'confidence', 'x', 'y', 'z'. I'm not sure what the other two columns of ones are. My understanding may not be correct; if so, please let me know.

Therefore, my first question is: what are the definitions of the 7th and 8th columns? Since I didn't find the pose information, my second question is whether pose keypoint data is provided in MOTSynth.

Thanks in advance!

MOT20Det Evaluation Clarification

Hello, I am evaluating a model using MOT20Det. However, when it comes to submitting results, the ground-truth examples mostly contradict each other, and I am unsure what values are expected in the confidence field. I've seen the following:

The 2019 paper shows confidence values higher than 19 and requests 9 values in total.
The repository shifts to confidence values from -1 to 4.5, stating that it requests 10 values in total, while only showing 7 values in its example.
The gt and det files show confidence values of 0 to 1, with a total of 9 and 10 values respectively.

If possible, I am requesting clarification on which of these formats is valid, how many values the format requires, and how the confidence values should be represented: 0 to 1, 0 to 100, or something else entirely? Much appreciated.
