
qdtrack's People

Contributors

fyu, oceanpang, siyuanliii, tobiasfshr, xialipku


qdtrack's Issues

Problem while converting BDD tracking dataset. KeyError: "video_name"

(qdtrack) user1@king-MS-7B48:~/quasi/qdtrack$ python -m bdd100k.label.to_coco -m track -i bdd100k/labels/box_track_20/train -o data/bdd/labels/box_track_20/box_track_train_cocofmt.json
[2021-07-13 01:25:32,075 to_coco.py:305 main] Mode: track
remove-ignore: False
ignore-as-class: False
[2021-07-13 01:25:32,075 to_coco.py:307 main] Loading annotations...
[2021-07-13 01:30:06,229 to_coco.py:318 main] Converting annotations...
0%| | 0/1400 [00:00<?, ?it/s]
Traceback (most recent call last):
File "/home/iiitd/anaconda3/envs/qdtrack/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"main", mod_spec)
File "/home/iiitd/anaconda3/envs/qdtrack/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/iiitd/anaconda3/envs/qdtrack/lib/python3.7/site-packages/bdd100k-1.0.0-py3.7.egg/bdd100k/label/to_coco.py", line 337, in
main()
File "/home/iiitd/anaconda3/envs/qdtrack/lib/python3.7/site-packages/bdd100k-1.0.0-py3.7.egg/bdd100k/label/to_coco.py", line 327, in main
labels, args.ignore_as_class, args.remove_ignore
File "/home/iiitd/anaconda3/envs/qdtrack/lib/python3.7/site-packages/bdd100k-1.0.0-py3.7.egg/bdd100k/label/to_coco.py", line 227, in bdd100k2coco_track
video = dict(id=video_id, name=video_anns[0]["video_name"])
KeyError: 'video_name'

The directory structure for the BDD dataset:

Images--|--100k - train, test, val folders, each containing images
        |--track - train, test, val folders, each containing one folder per sequence of images

Labels--|--det_20 - train and val .json files
        |--box_track_20 - train and val folders containing a .json file for each sequence of images
There is also an issue (#53) while converting the detection dataset. Please let me know if there is any more information required in order to help.
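A hedged workaround sketch (my assumption, not the maintainers' fix): the converter reads a per-frame 'video_name' key that the downloaded label files apparently do not carry. The fake-data example in a later issue on this page builds frames with explicit 'video_name' and 'index' keys, and the same keys could be injected before conversion:

    # Hypothetical pre-processing: inject the per-frame keys the converter
    # expects, deriving video_name from the label file name. Based on the
    # fake-data example further down this page, not on the official toolkit.
    import json
    import os

    def add_video_keys(label_path):
        with open(label_path) as f:
            frames = json.load(f)
        video_name = os.path.splitext(os.path.basename(label_path))[0]
        for idx, frame in enumerate(frames):
            frame.setdefault('video_name', video_name)
            frame.setdefault('index', idx)
        return frames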

Problem with test on a single GPU

Hey there,

Thanks for the nice work and sharing the code!

I encountered some problems while running the code, and maybe somebody else has already solved it before?

I am running the test code on a single GPU using the TAO dataset. My command is:
python tools/test.py configs/tao/ft_qdtrack_frcnn_r50_fpn_24e_tao.py checkpoint_files/qdtrack_tao.pth --out ./outputs/tao_test.pkl --show-dir ./outputs

The complete output is:

loading annotations into memory...
Done (t=1.42s)
creating index...
index created!
Use load_from_local loader
[ ] 0/36375, elapsed: 0s, ETA:/local/home/anaconda3/envs/imut/lib/python3.9/site-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at /opt/conda/conda-bld/pytorch_1623448238472/work/c10/core/TensorImpl.h:1156.)
return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
/local/home/anaconda3/envs/imut/lib/python3.9/site-packages/mmdet-2.10.0-py3.9.egg/mmdet/models/dense_heads/rpn_head.py:179: UserWarning: In rpn_proposal or test_cfg, nms_thr has been moved to a dict named nms as iou_threshold, max_num has been renamed as max_per_img, name of original arguments and the way to specify iou_threshold of NMS will be deprecated.
warnings.warn(
[ ] 1/36375, 2.1 task/s, elapsed: 0s, ETA: 17524s
Traceback (most recent call last):
File "/pub/scratch/IMUTracking/qdtrack/tools/test.py", line 163, in <module>
main()
File "/pub/scratch/IMUTracking/qdtrack/tools/test.py", line 135, in main
outputs = single_gpu_test(model, data_loader, args.show, args.show_dir,
File "/pub/scratch/IMUTracking/qdtrack/qdtrack/apis/test.py", line 24, in single_gpu_test
result = model(return_loss=False, rescale=True, **data)
File "/local/home/anaconda3/envs/imut/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/local/home/anaconda3/envs/imut/lib/python3.9/site-packages/mmcv/parallel/data_parallel.py", line 42, in forward
return super().forward(*inputs, **kwargs)
File "/local/home/anaconda3/envs/imut/lib/python3.9/site-packages/torch/nn/parallel/data_parallel.py", line 166, in forward
return self.module(*inputs[0], **kwargs[0])
File "/local/home/anaconda3/envs/imut/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/local/home/anaconda3/envs/imut/lib/python3.9/site-packages/mmcv/runner/fp16_utils.py", line 84, in new_func
return old_func(*args, **kwargs)
File "/local/home/anaconda3/envs/imut/lib/python3.9/site-packages/mmdet-2.10.0-py3.9.egg/mmdet/models/detectors/base.py", line 183, in forward
return self.forward_test(img, img_metas, **kwargs)
File "/local/home/anaconda3/envs/imut/lib/python3.9/site-packages/mmdet-2.10.0-py3.9.egg/mmdet/models/detectors/base.py", line 160, in forward_test
return self.simple_test(imgs[0], img_metas[0], **kwargs)
File "/pub/scratch/IMUTracking/qdtrack/qdtrack/models/mot/quasi_dense.py", line 94, in simple_test
bboxes, labels, ids = self.tracker.match(
File "/pub/scratch/IMUTracking/qdtrack/qdtrack/models/trackers/tao_tracker.py", line 158, in match
sims = cal_similarity(
TypeError: cal_similarity() got an unexpected keyword argument 'transpose'

Issue with bddtrack2cocovid.py

Thank you for your paper and this repo! I am trying to create the coco-style .json files via your code (bddtrack2cocovid.py) and I am running into this issue:
[image]

I have downloaded the BDD dataset as suggested in your repo (https://github.com/SysCV/qdtrack/blob/master/docs/GET_STARTED.md).
I have downloaded 'Images' and 'Detection 2020 Labels' into 'detection', and 'MOT 2020 Data' and 'MOT 2020 Labels' into 'tracking'.
I really don't see where my mistake is.
Could you please help me solve it? Thank you!

about train

When I train the net (epoch 1, iteration 200/171305), the result is as follows:
lr: 7.992e-03, loss_rpn_cls: nan, loss_rpn_bbox: nan, loss_cls: nan, acc: 81.8194, loss_bbox: nan, loss_track: nan, loss_track_aux: nan, loss: nan
Why does this happen?

How to get bdd100k data?

We cannot register at bdd-data.berkeley.edu. Can you give us another way to download the data? Thank you.

How to track visualization on BDD

I tried python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [--out ${RESULT_FILE}] [--eval ${EVAL_METRICS}] [--show] [--show-dir ${SHOW_DIR}] [--cfg-options],
but I did not get any track visualization results on BDD.

Track visualization

I want to visualize the predicted tracklets of the TAO model.
Is there a related code for this?

RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!

Thank you for your paper and this repo!
I would like to test your pretrained model on the BDD100k dataset.
Therefore I followed the instructions (https://github.com/SysCV/qdtrack/blob/master/docs/GET_STARTED.md) - downloaded BDD100k, converted annotations as described and stored everything as your folder structure suggests.

I used 'single-gpu testing' in the chapter 'Test a Model' and executed the following command in the terminal:
python tools/test.py ${QDTrack}/configs/qdtrack-frcnn_r50_fpn_12e_bdd100k.py ${QDTrack}/pretrained_models/qdtrack-frcnn_r50_fpn_12e_bdd100k-13328aed.pth --out testrun_01.pkl --eval track --show-dir ${QDTrack}/data/results

${QDTrack} indicates the path to qdtrack on my machine.

I get the following error:
[image]

Could you please help me solve this issue? Thanks a lot!

test pretrained checkpoint

Dear authors,

I am trying to test the pretrained bdd100k model using

CUDA_VISIBLE_DEVICES=0 python tools/test.py configs/bdd100k/qdtrack-frcnn_r50_fpn_12e_bdd100k.py \
qdtrack-frcnn_r50_fpn_12e_bdd100k-13328aed.pth \
--eval bbox --show

However, it shows an error as below. Could you help or give any suggestion? Thank you very much!

loading annotations into memory...
Done (t=4.83s)
creating index...
index created!
/usr/local/lib/python3.6/dist-packages/mmdet/core/anchor/builder.py:16: UserWarning: ``build_anchor_generator`` would be deprecated soon, please use ``build_prior_generator``
  '``build_anchor_generator`` would be deprecated soon, please use '
Use load_from_local loader
completed: 0, elapsed: 0s
Traceback (most recent call last):
  File "tools/test.py", line 163, in <module>
    main()
  File "tools/test.py", line 159, in main
    print(dataset.evaluate(outputs, **eval_kwargs))
  File "/home/chenwy/qdtrack/qdtrack/datasets/coco_video_dataset.py", line 309, in evaluate
    **bbox_kwargs)
  File "/usr/local/lib/python3.6/dist-packages/mmdet/datasets/coco.py", line 413, in evaluate
    result_files, tmp_dir = self.format_results(results, jsonfile_prefix)
  File "/usr/local/lib/python3.6/dist-packages/mmdet/datasets/coco.py", line 358, in format_results
    result_files = self.results2json(results, jsonfile_prefix)
  File "/usr/local/lib/python3.6/dist-packages/mmdet/datasets/coco.py", line 289, in results2json
    if isinstance(results[0], list):
IndexError: list index out of range

no images saved running test.py

I am running test.py to test your pretrained model on the bdd100k dataset.
I would like to save the painted images from the evaluation, e.g. including bounding boxes and track IDs, into my directory.
After running:
python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} --out testrun.pkl --eval track --show --show-dir ~/01_models/qdtrack/data/results

my folder ~/01_models/qdtrack/data/results is empty.

arguments from test.py:

parser.add_argument('--show', action='store_true', help='show results')
parser.add_argument('--show-dir', help='directory where painted images will be saved')

What am I doing wrong? Could you please help me?

Test Error

After training for two epochs, training is interrupted by this error:
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cpu!
"subprocess.CalledProcessError: Command '['/root/anaconda3/envs/qdtrack/bin/python', '-u', './tools/train.py', '--local_rank=1', 'configs/qdtrack-frcnn_r50_fpn_12e_bdd100k.py', '--launcher', 'pytorch', '--gpu-ids', '0', '1', '--resume-from', 'work_dirs/qdtrack-frcnn_r50_fpn_12e_bdd100k/epoch_4.pth']' returned non-zero exit status 1."
When running test.py, the error appears again.
Without changing the train and test code, the project does not seem able to run the test step.
The output also says "Setting OMP_NUM_THREADS environment variable for each process to be 1 in default", but I cannot find OMP_NUM_THREADS.
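As an aside (a fact about the tooling, not a fix for the device mismatch): OMP_NUM_THREADS is not a config key or a file; it is an environment variable that the distributed launcher sets for each worker process. A minimal sketch of setting it manually:

    # OMP_NUM_THREADS is an environment variable; set it before any
    # OpenMP-backed library creates its thread pool
    import os
    os.environ['OMP_NUM_THREADS'] = '1'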

MOT17 Exploding Training Gradients

Hi there,
Awesome results, you guys.

When I train QDTrack on the MOT17 dataset, I notice that the training classification losses are very large, and eventually they become NaN. Any ideas as to why this is happening?

I use PyTorch 1.7.1 with CUDA 11.0.

Quasi Dense Faster RCNN: 'NoneType' object has no attribute 'rpn' when using pretrained model with TAO

I am trying to test your pretrained model from the TAO dataset and this is the error I am getting:

100%|██████████| 1/1 [00:00<00:00, 1535.81it/s]
Traceback (most recent call last):
  File "/home/muhammadmehdi/PycharmProjects/memex/SysCV-qdtrack/mmcv/mmcv/utils/registry.py", line 52, in build_from_cfg
    return obj_cls(**args)
  File "/home/muhammadmehdi/PycharmProjects/memex/SysCV-qdtrack/qdtrack/models/mot/quasi_dense.py", line 13, in __init__
    super().__init__(*args, **kwargs)
  File "/home/muhammadmehdi/PycharmProjects/memex/SysCV-qdtrack/mmdetection/mmdet/models/detectors/two_stage.py", line 39, in __init__
    rpn_head_.update(train_cfg=rpn_train_cfg, test_cfg=test_cfg.rpn)
AttributeError: 'NoneType' object has no attribute 'rpn'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/muhammadmehdi/PycharmProjects/memex/run_offline.py", line 56, in <module>
    root_dir, {})
  File "/home/muhammadmehdi/PycharmProjects/memex/algorithm.py", line 579, in run_auto
    model = build_model(cfg.model, train_cfg=None, test_cfg=None)
  File "/home/muhammadmehdi/PycharmProjects/memex/SysCV-qdtrack/qdtrack/models/builder.py", line 15, in build_model
    return build(cfg, MODELS, dict(train_cfg=train_cfg, test_cfg=test_cfg))
  File "/home/muhammadmehdi/PycharmProjects/memex/SysCV-qdtrack/mmcv/mmcv/cnn/builder.py", line 27, in build_model_from_cfg
    return build_from_cfg(cfg, registry, default_args)
  File "/home/muhammadmehdi/PycharmProjects/memex/SysCV-qdtrack/mmcv/mmcv/utils/registry.py", line 55, in build_from_cfg
    raise type(e)(f'{obj_cls.__name__}: {e}')
AttributeError: QuasiDenseFasterRCNN: 'NoneType' object has no attribute 'rpn'

The code:

cfg = Config.fromfile('configs/tao/qdtrack_frcnn_r101_fpn_24e_lvis.py')

# set cudnn_benchmark
torch.backends.cudnn.benchmark = True
cfg.model.pretrained = None
cfg.data.test.test_mode = True

# build the model and load checkpoint
model = build_model(cfg.model, train_cfg=None, test_cfg=None)
checkpoint = load_checkpoint(model, 'weights/qdtrack_tao.pth', map_location='cuda')
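A hedged sketch of a possible workaround (not an official fix): a later issue on this page builds the same model but passes the config's test_cfg instead of None, which avoids the TwoStageDetector code path that dereferences test_cfg.rpn:

    # Hedged workaround: give TwoStageDetector a real test_cfg so that
    # test_cfg.rpn exists (mirrors the call used in a later issue here)
    model = build_model(cfg.model, train_cfg=None, test_cfg=cfg.get('test_cfg'))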

Your code doesn't work. I get error: TypeError: an integer is required (got type bytes)

Your code doesn't work; I get the following error:

Traceback (most recent call last):
  File "/home/muhammadmehdi/PycharmProjects/memex/qdtrack_test.py", line 3, in <module>
    from qdtrack.apis import multi_gpu_test, single_gpu_test
  File "/home/muhammadmehdi/PycharmProjects/memex/qdtrack/qdtrack/apis/__init__.py", line 1, in <module>
    from .inference import inference_model, init_model
  File "/home/muhammadmehdi/PycharmProjects/memex/qdtrack/qdtrack/apis/inference.py", line 11, in <module>
    from mmdet.datasets import replace_ImageToTensor
  File "/home/muhammadmehdi/PycharmProjects/memex/venv/lib/python3.8/site-packages/mmdet/datasets/__init__.py", line 11, in <module>
    from .utils import (NumClassCheckHook, get_loading_pipeline,
  File "/home/muhammadmehdi/PycharmProjects/memex/venv/lib/python3.8/site-packages/mmdet/datasets/utils.py", line 9, in <module>
    from mmdet.models.dense_heads import GARPNHead, RPNHead
  File "/home/muhammadmehdi/PycharmProjects/memex/venv/lib/python3.8/site-packages/mmdet/models/__init__.py", line 6, in <module>
    from .dense_heads import *  # noqa: F401,F403
  File "/home/muhammadmehdi/PycharmProjects/memex/venv/lib/python3.8/site-packages/mmdet/models/dense_heads/__init__.py", line 4, in <module>
    from .autoassign_head import AutoAssignHead
  File "/home/muhammadmehdi/PycharmProjects/memex/venv/lib/python3.8/site-packages/mmdet/models/dense_heads/autoassign_head.py", line 12, in <module>
    from mmdet.models.dense_heads.paa_head import levels_to_images
  File "/home/muhammadmehdi/PycharmProjects/memex/venv/lib/python3.8/site-packages/mmdet/models/dense_heads/paa_head.py", line 12, in <module>
    import sklearn.mixture as skm
  File "/home/muhammadmehdi/PycharmProjects/memex/venv/lib/python3.8/site-packages/sklearn/__init__.py", line 64, in <module>
    from .base import clone
  File "/home/muhammadmehdi/PycharmProjects/memex/venv/lib/python3.8/site-packages/sklearn/base.py", line 13, in <module>
    from .utils.fixes import signature
  File "/home/muhammadmehdi/PycharmProjects/memex/venv/lib/python3.8/site-packages/sklearn/utils/__init__.py", line 13, in <module>
    from .validation import (as_float_array,
  File "/home/muhammadmehdi/PycharmProjects/memex/venv/lib/python3.8/site-packages/sklearn/utils/validation.py", line 27, in <module>
    from ..utils._joblib import Memory
  File "/home/muhammadmehdi/PycharmProjects/memex/venv/lib/python3.8/site-packages/sklearn/utils/_joblib.py", line 18, in <module>
    from ..externals.joblib import __all__   # noqa
  File "/home/muhammadmehdi/PycharmProjects/memex/venv/lib/python3.8/site-packages/sklearn/externals/joblib/__init__.py", line 119, in <module>
    from .parallel import Parallel
  File "/home/muhammadmehdi/PycharmProjects/memex/venv/lib/python3.8/site-packages/sklearn/externals/joblib/parallel.py", line 32, in <module>
    from .externals.cloudpickle import dumps, loads
  File "/home/muhammadmehdi/PycharmProjects/memex/venv/lib/python3.8/site-packages/sklearn/externals/joblib/externals/cloudpickle/__init__.py", line 3, in <module>
    from .cloudpickle import *
  File "/home/muhammadmehdi/PycharmProjects/memex/venv/lib/python3.8/site-packages/sklearn/externals/joblib/externals/cloudpickle/cloudpickle.py", line 151, in <module>
    _cell_set_template_code = _make_cell_set_template_code()
  File "/home/muhammadmehdi/PycharmProjects/memex/venv/lib/python3.8/site-packages/sklearn/externals/joblib/externals/cloudpickle/cloudpickle.py", line 132, in _make_cell_set_template_code
    return types.CodeType(
TypeError: an integer is required (got type bytes)

The code that causes this:

import torch
from mmcv import Config
from qdtrack.apis import multi_gpu_test, single_gpu_test
from qdtrack.models import build_model
from qdtrack.datasets import build_dataloader

cfg = Config.fromfile('configs/qdtrack-frcnn_r50_fpn_12e_bdd100k.py')
torch.backends.cudnn.benchmark = True
cfg.model.pretrained = None
cfg.data.test.test_mode = True
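For context, a hedged guess rather than a confirmed diagnosis: the traceback ends inside the joblib copy vendored in an old scikit-learn, whose bundled cloudpickle predates the types.CodeType signature change in Python 3.8. Upgrading scikit-learn (recent versions no longer vendor joblib) usually resolves this:

    pip install --upgrade scikit-learn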

How to do batch inference?

Hi. I managed to make your code work for inference with 1 image per GPU (so, a batch size of 1) on our custom dataset. Here is the code I am using:

        # get the labels (use a context manager so the file is closed)
        tao_labels = []
        with open('data/tao/annotations/tao_classes.txt') as tao_file:
            for tao_name in tao_file:
                tao_labels.append(tao_name.strip('\n'))

        # create fake bdd100k data
        fake_data = []
        for i in range(len(img_files)):
            fake_label_dict = {'id': '00114122', 'category': 'car',
                               'attributes': {'Occluded': True, 'Truncated': False, 'Crowd': False},
                               'box2d': {'x1': 0, 'x2': 0, 'y1': 0, 'y2': 0}}
            fake_data.append({'name': img_files[i], 'labels': [fake_label_dict],
                              'video_name': 'imgs_fed_to_detector', 'index': i})

        # convert the fake bdd100k file to a coco file
        coco = bdd100k2coco_track([fake_data], True, True)
        with open(os.path.join(root_dir, session_name, 'fake_coco.json'), "w+") as f:
            json.dump(coco, f)
        cfg = Config.fromfile('configs/tao/qdtrack_frcnn_r101_fpn_12e_tao_ft.py')

        # set cudnn_benchmark
        torch.backends.cudnn.benchmark = True
        cfg.model.pretrained = None
        cfg.data.test.test_mode = True

        # build the model and load checkpoint
        model = build_model(cfg.model, train_cfg=None, test_cfg=cfg.get('test_cfg'))
        checkpoint = load_checkpoint(model, 'weights/qdtrack_tao_20210812_221438-b6bd07e2.pth', map_location='cuda')
        model = fuse_conv_bn(model)
        model.CLASSES = checkpoint['meta']['CLASSES']

        # provide your fake coco file to model
        model = MMDataParallel(model, device_ids=[0])
        cfg.data.test['ann_file'] = os.path.join(root_dir, session_name, 'fake_coco.json')
        cfg.data.test['img_prefix'] = os.path.join(root_dir, session_name)
        
        # where batch_size can be changed, currently I am keeping it to 1
        dataset = build_dataset(cfg.data.test)
        data_loader = build_dataloader(
            dataset,
            samples_per_gpu=batch_size,
            workers_per_gpu=cfg.data.workers_per_gpu,
            dist=False,
            shuffle=False)
        model.eval()

        tracks = []
        for i, data in enumerate(data_loader):
            with torch.no_grad():
                result = model(return_loss=False, rescale=True, **data)

            # get all detections
            tao_dets_of_all_categories = result['bbox_results']
            category_id = 0
            for tao_dets_of_specific_category in tao_dets_of_all_categories:
                for det_of_specific_category in tao_dets_of_specific_category:
                    x1, y1, x2, y2, conf = det_of_specific_category
                    x1, y1, x2, y2, conf = int(x1), int(y1), int(x2), int(y2), round(float(conf), 2)
                category_id += 1

            # get all tracks
            tao_tracks_of_all_categories = result['track_results']
            category_id = 0
            for tao_track_of_specific_category in tao_tracks_of_all_categories:
                for track_of_specific_category in tao_track_of_specific_category:
                    track_id, x1, y1, x2, y2, conf = track_of_specific_category
                    track_id, x1, y1, x2, y2 = int(track_id), int(x1), int(y1), int(x2), int(y2)
                    conf = round(float(conf), 2)
                    tracks.append([track_id, x1, y1, x2, y2, conf, tao_labels[category_id], i])
                category_id += 1

I tried to make it work with a batch size of 2, but I noticed there is no way to know which detection/track inside the result value belongs to which image in the batch passed to the model as input.

Here is my understanding of what result = model(return_loss=False, rescale=True, **data) represents. Result is a dictionary that contains two keys: bbox_results and track_results. For both keys, the value is a list whose size is 482 (the exact number of classes in the TAO dataset). So, as seen above in the code, I get this list and iterate through each member. Each member, for bbox_results, is a numpy array with shape num_of_detections x 5, where num_of_detections is for the whole batch (?) and 5 because each detection is represented by the bbox coordinates and a confidence value. For track_results, this is num_of_tracks x 6, because there is one extra value for the track id.

If my above understanding of the structure of results is correct, then it seems to me there is no way to assign a track or detection to a specific image in a batch. Is there a way to do that in the code sample I posted above? Thanks.
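A hedged sketch of one way to at least tie results back to frames, assuming the usual mmdet 2.x test conventions (not verified against this repo): each batch carries its image metadata, so filenames can be read from img_metas rather than inferred from the result dict:

    # Hypothetical: under mmdet 2.x conventions, data['img_metas'] is a list
    # of DataContainers; .data[0] holds one meta dict per image in the batch
    for i, data in enumerate(data_loader):
        metas = data['img_metas'][0].data[0]
        filenames = [m['filename'] for m in metas]

This only reveals which frames a batch covered; it does not split merged per-batch arrays back into per-image results.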

Your BDD100K instructions are unclear

This is what you are saying:


On the official download page, the required data and annotations are

detection set images: Images
detection set annotations: Detection 2020 Labels
tracking set images: MOT 2020 Data
tracking set annotations: MOT 2020 Labels

But there is no 'Images' or 'MOT 2020 Data' option on the official BDD website.

Computing Overall Loss

Thanks for the great work in this repo and your paper.

I am trying to reproduce the computation of the overall loss value.

In your paper you define the overall loss as follows:
[image]

  • with the following detection loss:
    [image]

When I run the function forward_train in the qdtrack/models/mot/qdtrack.py file, I get a dictionary of different losses with the following keys: 'loss_rpn_bbox', 'loss_rpn_cls', 'loss_cls', 'loss_bbox', 'loss_track', 'loss_track_aux'.

I understand that 'loss_track_aux' equals the auxiliary loss and that 'loss_track' equals the embedding loss in the overall loss equation.

How do I compute the detection loss correctly? What should I do mathematically with 'loss_rpn_bbox', 'loss_rpn_cls', 'loss_cls' and 'loss_bbox' in order to compute the correct detection loss, and therefore the overall loss value as printed during training?

I would also be very happy if you would point me in the right direction where I can find the computation in the code. I really tried to find the correct lines, but unfortunately I couldn't.

Thank you very much for your help!
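Not an authoritative answer, but a sketch of how mmdet-style detectors reduce the loss dict (mirroring mmdet's BaseDetector._parse_losses; I am assuming qdtrack inherits this behavior): the printed loss is the plain sum of every entry whose key contains 'loss', so the detection loss would be loss_rpn_cls + loss_rpn_bbox + loss_cls + loss_bbox, while 'acc' is only logged:

    # Sketch mirroring mmdet's BaseDetector._parse_losses (assumed, not
    # copied from this repo): entries without 'loss' in the key, such as
    # 'acc', are logged but never summed into the total
    from collections import OrderedDict
    import torch

    def parse_losses(losses):
        log_vars = OrderedDict()
        for name, value in losses.items():
            if isinstance(value, torch.Tensor):
                log_vars[name] = value.mean()
            elif isinstance(value, list):
                log_vars[name] = sum(v.mean() for v in value)
        total = sum(v for k, v in log_vars.items() if 'loss' in k)
        return total, log_vars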

Tao error while run "setup.py develop"

While following the installation steps, I reached the last line, "python setup.py develop", and got this error:

Processing dependencies for qdtrack==0.1.0+b905d2a
Searching for tao
Reading https://pypi.python.org/simple/tao/
No local packages or working download links found for tao
error: Could not find suitable distribution for Requirement.parse('tao')

Any clue?

from ..evaluation import xyxy2xywh

Hi there.

The aforementioned import statement fails in qdtrack/qdtrack/core/to_bdd100k/transforms.py

I have instead implemented the following function:

    def xyxy2xywh(self, bbox):
        # convert [x1, y1, x2, y2] corners to [x, y, w, h]
        _bbox = bbox.tolist()
        return [
            _bbox[0],
            _bbox[1],
            _bbox[2] - _bbox[0],
            _bbox[3] - _bbox[1],
        ]
Is this reasonable or are there customized lines of code that you intended to have?

KeyError: 'QDTrack is not in the model registry'

Your code no longer works. I tried the following:

cfg = Config.fromfile('configs/tao/qdtrack_frcnn_r101_fpn_24e_lvis.py')

# set cudnn_benchmark
torch.backends.cudnn.benchmark = True
cfg.model.pretrained = None
cfg.data.test.test_mode = True

# build the model and load checkpoint
model = build_model(cfg.model, train_cfg=None, test_cfg=cfg.get('test_cfg'))

The error:

Traceback (most recent call last):
  File "/home/muhammadmehdi/PycharmProjects/memex/run_offline.py", line 109, in <module>
    auto_estimated_ellipsoids = run_auto(auto_img_files, auto_ptcld_files, auto_pose_files, auto_int_files, root_dir)
  File "/home/muhammadmehdi/PycharmProjects/memex/algorithm.py", line 275, in run_auto
    model = build_model(cfg.model, train_cfg=None, test_cfg=cfg.get('test_cfg'))
  File "/home/muhammadmehdi/PycharmProjects/memex/SysCV-qdtrack/qdtrack/models/builder.py", line 15, in build_model
    return build(cfg, MODELS, dict(train_cfg=train_cfg, test_cfg=test_cfg))
  File "/home/muhammadmehdi/PycharmProjects/memex/SysCV-qdtrack/mmcv/mmcv/cnn/builder.py", line 27, in build_model_from_cfg
    return build_from_cfg(cfg, registry, default_args)
  File "/home/muhammadmehdi/PycharmProjects/memex/SysCV-qdtrack/mmcv/mmcv/utils/registry.py", line 45, in build_from_cfg
    f'{obj_type} is not in the {registry.name} registry')
KeyError: 'QDTrack is not in the model registry'

Process finished with exit code 1
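A hedged guess, not a confirmed diagnosis: "X is not in the model registry" errors in mmcv usually mean either that the modules carrying the @MODELS.register_module() decorators were never imported, or that the installed mmcv/mmdet versions register into a different registry than the one build_model consults. Forcing the import is the cheap first check:

    # Hypothetical first check: importing the package runs the
    # register_module decorators that add QDTrack to the registry
    import qdtrack.models  # noqa: F401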

Training loss/Acc diagram

Thanks for the great work!

I am trying to retrain QDTrack on BDD100K; however, it is converging really slowly (at least for the first epochs). Therefore I wanted to ask whether it is possible to share your diagrams of training loss and accuracy.

Thanks in advance!

Unclear which links to pick from BDD website for dataset prep

The Readme indicates Detection and Tracking sets, but the site shows 11 options, including:
Images, MOT 2020 Labels, MOT 2020 Data, Detection 2020 Labels.

Also, clicking MOT 2020 Data shows many different options. Should they all be downloaded?

Pre-trained network for tao doesn't provide any tracking ids or labels

for i, data in enumerate(data_loader):
    with torch.no_grad():
        result = model(return_loss=False, rescale=True, **data)
        trks = result['track_result']

The result above has two keys: bbox_result and track_result.

When using the bdd100k, the bbox_result key provides a dictionary where each key is the track id and the value for that track id is another dictionary with two keys (bbox and label).

However, when I use the TAO pretrained network, bbox_result and track_result simply give me lists. bbox_result now gives a list of numpy arrays, where each numpy array has size (n, 6). How do I infer the track id and label from each of these numpy arrays? The first value of each row seems like an integer, but it has a really high value that keeps going up and down, so I guess it is the label? I don't see any other integer value in this numpy array.
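For comparison (a hedged reading based on the batch-inference issue earlier on this page, not on documentation): there the TAO results are unpacked with the list position as the category label and each row of a track array as (track_id, x1, y1, x2, y2, score), where the track id is a globally increasing integer; key names may differ between versions:

    # Hedged sketch following the unpacking used in the batch-inference
    # issue above; the list index is the category, and the first column of
    # each (n, 6) track array is the (large, globally increasing) track id
    for label, rows in enumerate(result['track_results']):
        for track_id, x1, y1, x2, y2, score in rows:
            print(int(track_id), label, (x1, y1, x2, y2), float(score))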

about MOT17: loss_track degrades to zero after 50 iterations

Thanks for your great work!
I'm now trying to run qdtrack on MOT17. I find the detection part went well during training and reached a reasonable mAP score. But the loss of the quasi-dense embedding part degraded quickly to zero within 100 iterations, and I obtained very low MOTA, MOTP, IDF1, etc. after training.
Note that I modified nothing except the code related to the dataset, which I have checked carefully and thus believe is not the cause.
Should I modify the settings of the quasi-dense embedding head to make it work? Do you have any suggestions? Thank you very much!

Result file

Hi,

How do I match corresponding images with tracking results?
Is the frame name saved somewhere in the pickle file?

Is the format of the output bounding box in a result file [top left, top right]?

Thanks!

Where is tao-classes.txt?

Your repository seems incomplete; please upload tao_classes.txt, as your pre-trained model requires it.

about kitti train

Can I get the convert_datasets code for KITTI to complete the training and evaluation?

[Error]: Missing Image during train

I tried to retrain the model, so I downloaded the dataset from the official BDD100K website.

However, I met a FileNotFoundError: [Errno 2] No such file or directory: 'data/bdd/images/track/train/021c0ade-877a81e9/021c0ade-877a81e9-0000014.jpg'.

Does it mean that extra data is used? Could you please share the extra data for training?

Could you please help me with it? Waiting for your reply. Thank you very much.

Evaluation results on TAO-val

Hello,

When I train the model with your code for TAO (i.e., pretrain on LVIS and fine-tune on TAO-train), I get the following final results on TAO-val, which are lower than the scores reported in the original paper.

             mAP0.5   mAP0.75   mAP[0.5:0.95]
reproduced   13.8     5.5       6.5
original     16.1     5.0       7.0

Are there any issues that I have to consider for getting the original score?

Thanks,

Inconsistent Results on BDD100K Tracking Validation Set

Hi there.

I ran the pre-trained BDD100K model on the tracking validation set, and the resulting MOTA and IDF1 scores are lower than what QDTrack claims: MOTA 54.5 and IDF1 66.7 vs. your MOTA 63.5 and IDF1 71.5.

Kindly verify if this is the case for you or if there are any missing settings.

I followed the instructions and ran this command:
sh ./tools/dist_test.sh ./configs/bdd100k/qdtrack-frcnn_r50_fpn_12e_bdd100k.py ./ckpts/mmdet/qdtrack_frcnn_r50_fpn_12e_bdd100k_13328aed.pth 2 --out exp.pkl --eval track

Can you explain the concept of backdrop in your paper

Hi. The paper is very interesting, but I cannot understand backdrops. It seems to me that you don't drop unmatched tracks at all and keep them for the future, just in case an object ends up matching with them, so that new detections can then be matched with the previously unmatched tracks.

Are these backdrops kept in memory for the entire inference period, or do you eliminate an unmatched track completely after a few frames? I am asking because, if you do keep unmatched tracks, then realistically, when I look at an object, turn away for a while, and look at that same object again, QDTrack should be able to match this new object to the previous track for that object.

But that never happens in my experiments, and the same object is assigned an entirely new track.

about eval

I want to know whether the N task/s figure in the results is the FPS reported in the paper?

Why don't you take motion priors into account?

I tried your network on some data we collected in a parking lot. There were two similar-looking cars that were separated from each other by several meters, but your network confused them as being the same car (the background for both cars was quite similar as well). If you use motion priors (like the Kalman tracker of the original SORT), you can easily handle situations like these.
