
conditional-lane-detection's Introduction

CondLaneNet: a Top-to-down Lane Detection Framework Based on Conditional Convolution

This is the official implementation of the paper "CondLaneNet: a Top-to-down Lane Detection Framework Based on Conditional Convolution" (link: https://arxiv.org/abs/2105.05003). We achieve state-of-the-art performance on multiple lane detection benchmarks. Our paper has been accepted by ICCV 2021.

Architecture

Installation

This implementation is based on mmdetection (v2.0.0). Please refer to install.md for installation.

Datasets

We conducted experiments on CurveLanes, CULane, and TuSimple. Please refer to dataset.md for dataset preparation.

Models

For your convenience, we provide the following trained models on the CurveLanes, CULane, and TuSimple datasets:

Model Speed F1 Link
curvelanes_small 154FPS 85.09 download
curvelanes_medium 109FPS 85.92 download
curvelanes_large 48FPS 86.10 download
culane_small 220FPS 78.14 download
culane_medium 152FPS 78.74 download
culane_large 58FPS 79.48 download
tusimple_small 220FPS 97.01 download
tusimple_medium 152FPS 96.98 download
tusimple_large 58FPS 97.24 download

Testing

CurveLanes

1 Edit the "data_root" in the config file to your CurveLanes dataset path. For example, for the small version, open "configs/condlanenet/curvelanes/curvelanes_small_test.py" and set "data_root" to "[your-data-path]/curvelanes".

2 Run the test script:

cd [project-root]
python tools/condlanenet/curvelanes/test_curvelanes.py configs/condlanenet/curvelanes/curvelanes_small_test.py [model-path] --evaluate

If "--evaluate" is added, the evaluation results will be printed. If you want to save the visualization results, you can add "--show" and add "--show_dst" to specify the save path.

CULane

1 Edit the "data_root" in the config file to your CULane dataset path. For example, for the small version, open "configs/condlanenet/culane/culane_small_test.py" and set "data_root" to "[your-data-path]/culane".

2 Run the test script:

cd [project-root]
python tools/condlanenet/culane/test_culane.py configs/condlanenet/culane/culane_small_test.py [model-path]
  • You can add "--show" and use "--show_dst" to specify the save path for visualization results.
  • You can add "--results_dst" to specify the path for saving the results.

3 We use the official evaluation tools of SCNN to evaluate the results.

TuSimple

1 Edit the "data_root" in the config file to your TuSimple dataset path. For example, for the small version, open "configs/condlanenet/tusimple/tusimple_small_test.py" and set "data_root" to "[your-data-path]/tuSimple".

2 Run the test script:

cd [project-root]
python tools/condlanenet/tusimple/test_tusimple.py configs/condlanenet/tusimple/tusimple_small_test.py [model-path]
  • You can add "--show" and use "--show_dst" to specify the save path for visualization results.
  • You can add "--results_dst" to specify the path for saving the results.

3 We use the official evaluation tools of TuSimple to evaluate the results.
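
For reference, here is a minimal sketch of what that evaluation can look like in Python, assuming the LaneEval utility from the official TuSimple benchmark repository (https://github.com/TuSimple/tusimple-benchmark); the import and file names below are placeholders that depend on where you place the benchmark code.

    # Hedged sketch: score a TuSimple-format prediction file against the ground truth
    # using the benchmark's LaneEval class (lane.py from the TuSimple benchmark repo).
    from lane import LaneEval

    # pred.json: one JSON object per line with "raw_file", "lanes" and "run_time";
    # test_label.json: the ground-truth label file shipped with the TuSimple test set.
    print(LaneEval.bench_one_submit('pred.json', 'test_label.json'))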

Speed Test

cd [project-root]
python tools/condlanenet/speed_test.py configs/condlanenet/culane/culane_small_test.py [model-path]

Training

For example, to train on CULane using 4 GPUs:

cd [project-root]
CUDA_VISIBLE_DEVICES=0,1,2,3 PORT=29001 tools/dist_train.sh configs/condlanenet/culane/culane_small_train.py 4 --no-validate 

Results

CurveLanes

Model F1 Speed GFLOPS
Small(ResNet-18) 85.09 154FPS 10.3
Medium(ResNet-34) 85.92 109FPS 19.7
Large(ResNet-101) 86.10 48FPS 44.9

CULane

Model F1 Speed GFLOPS
Small(ResNet-18) 78.14 220FPS 10.2
Medium(ResNet-34) 78.74 152FPS 19.6
Large(ResNet-101) 79.48 58FPS 44.8

TuSimple

Model F1 Speed GFLOPS
Small(ResNet-18) 97.01 220FPS 10.2
Medium(ResNet-34) 96.98 152FPS 19.6
Large(ResNet-101) 97.24 58FPS 44.8

Visualization results


conditional-lane-detection's People

Contributors

alibaba-oss, hustllz


conditional-lane-detection's Issues

Documentation

Is there anyone who understands the structure of the code?

I would appreciate some documentation on which file is used for what...

Thank you.

How to use the official evaluation tools of SCNN to evaluate the results.

I want to test the model's results on the CULane dataset and obtain quantitative metrics such as the F1 score.
You mentioned in your README that you use the official evaluation tools of SCNN to evaluate the results.
I am not very familiar with the official tool; is there a beginner-friendly tutorial available?

Thank you very much! My email is [email protected]; looking forward to your reply!

Need to update the training part of the README file

Running tools/dist_train.sh fails with the error "RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss.".

While running tools/train.py works well.

Has tools/dist_train.sh been deprecated?

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation

Hi all,

When training the model, I get the following error with PyTorch's anomaly detection turned on:

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [4, 64, 10, 25]], which is output 0 of ReluBackward0, is at version 1; expected version 0 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!

The traceback is the following:

[W python_anomaly_mode.cpp:104] Warning: Error detected in ReluBackward0. Traceback of forward call that caused the error:
  File "tools/train.py", line 159, in <module>
    main()
  File "tools/train.py", line 155, in main
    meta=meta)
  File "/home/<user>/Code/conditional-lane-detection/mmdet/apis/train.py", line 167, in train_detector
    runner.run(data_loaders, cfg.workflow, cfg.total_epochs)
  File "/home/<user>/.conda/envs/conditional-lane-detection/lib/python3.7/site-packages/mmcv/runner/runner.py", line 383, in run
    epoch_runner(data_loaders[i], **kwargs)
  File "/home/<user>/.conda/envs/conditional-lane-detection/lib/python3.7/site-packages/mmcv/runner/runner.py", line 282, in train
    self.model, data_batch, train_mode=True, **kwargs)
  File "/home/<user>/Code/conditional-lane-detection/mmdet/apis/train.py", line 74, in batch_processor
    losses = model(**data)
  File "/home/<user>/.conda/envs/conditional-lane-detection/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/<user>/.conda/envs/conditional-lane-detection/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 166, in forward
    return self.module(*inputs[0], **kwargs[0])
  File "/home/<user>/.conda/envs/conditional-lane-detection/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/<user>/Code/conditional-lane-detection/mmdet/models/detectors/condlanenet.py", line 327, in forward
    return self.forward_train(img, img_metas, **kwargs)
  File "/home/<user>/Code/conditional-lane-detection/mmdet/models/detectors/condlanenet.py", line 344, in forward_train
    output, memory = self.neck(output)
  File "/home/<user>/.conda/envs/conditional-lane-detection/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/<user>/Code/conditional-lane-detection/mmdet/core/fp16/decorators.py", line 49, in new_func
    return old_func(*args, **kwargs)
  File "/home/<user>/Code/conditional-lane-detection/mmdet/models/necks/trans_fpn.py", line 257, in forward
    trans_feat = self.trans_head(src[self.trans_idx])
  File "/home/<user>/.conda/envs/conditional-lane-detection/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/<user>/Code/conditional-lane-detection/mmdet/models/necks/trans_fpn.py", line 153, in forward
    src = layer(src, pos.to(src.device))
  File "/home/<user>/.conda/envs/conditional-lane-detection/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/<user>/Code/conditional-lane-detection/mmdet/models/necks/trans_fpn.py", line 105, in forward
    x = self.pre_conv(x)
  File "/home/<user>/.conda/envs/conditional-lane-detection/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/<user>/.conda/envs/conditional-lane-detection/lib/python3.7/site-packages/mmcv/cnn/bricks/conv_module.py", line 181, in forward
    x = self.activate(x)
  File "/home/<user>/.conda/envs/conditional-lane-detection/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/<user>/.conda/envs/conditional-lane-detection/lib/python3.7/site-packages/torch/nn/modules/activation.py", line 98, in forward
    return F.relu(input, inplace=self.inplace)
  File "/home/<user>/.conda/envs/conditional-lane-detection/lib/python3.7/site-packages/torch/nn/functional.py", line 1299, in relu
    result = torch.relu(input)
 (function _print_stack)

Does anybody know how to fix this?

Inference speed on a V100

python tools/condlanenet/speed_test.py configs/condlanenet/curvelanes/curvelanes_medium_test.py weights/curvelanes_medium.pth

output:

Elapsed time in all model infernece: 11.598999

Something seems off; the paper reports an FPS of over one hundred. Also, after increasing the batch size, I get an error:

heat_nms = heat_nms.squeeze(0).permute(1, 2, 0).detach().cpu().numpy()
RuntimeError: number of dims don't match in permute

condlanenet.py

Hi, I am getting an issue in self.neck(output) in forward_train of condlanenet.py.

MMCV version

Hey! I tried to test the tusimple_large model. I ran this command:

python tools/condlanenet/tusimple/test_tusimple.py configs/condlanenet/tusimple/tusimple_large_test.py D:/conditional-lane-detection-master/tusimple_large.pth

and it gives an error like this:

File "tools/condlanenet/tusimple/test_tusimple.py", line 16, in
from mmdet.datasets import build_dataloader, build_dataset
File "d:\conditional-lane-detection-master\mmdetection\mmdet_init_.py", line 25, in
f'MMCV=={mmcv.version} is used but incompatible. '
AssertionError: MMCV==0.5.6 is used but incompatible. Please install mmcv>=1.3.2, <=1.4.0.

What should I do?

ValueError: need at least one array to concatenate

Traceback (most recent call last):
File "tools/train.py", line 158, in
main()
File "tools/train.py", line 154, in main
meta=meta)
File "/home/disk/wangnian/anaconda3/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/mmdet-2.0.0+c57b242-py3.7-linux-x86_64.egg/mmdet/apis/train.py", line 165, in train_detector
runner.run(data_loaders, cfg.workflow, cfg.total_epochs)
File "/home/disk/wangnian/anaconda3/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 127, in run
epoch_runner(data_loaders[i], **kwargs)
File "/home/disk/wangnian/anaconda3/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 47, in train
for i, data_batch in enumerate(self.data_loader):
File "/home/disk/wangnian/anaconda3/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 359, in iter
return self._get_iterator()
File "/home/disk/wangnian/anaconda3/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 305, in _get_iterator
return _MultiProcessingDataLoaderIter(self)
File "/home/disk/wangnian/anaconda3/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 944, in init
self._reset(loader, first_iter=True)
File "/home/disk/wangnian/anaconda3/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 975, in _reset
self._try_put_index()
File "/home/disk/wangnian/anaconda3/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1209, in _try_put_index
index = self._next_index()
File "/home/disk/wangnian/anaconda3/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 512, in _next_index
return next(self._sampler_iter) # may raise StopIteration
File "/home/disk/wangnian/anaconda3/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/torch/utils/data/sampler.py", line 226, in iter
for idx in self.sampler:
File "/home/disk/wangnian/anaconda3/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/mmdet-2.0.0+c57b242-py3.7-linux-x86_64.egg/mmdet/datasets/samplers/group_sampler.py", line 36, in iter
indices = np.concatenate(indices)
File "<array_function internals>", line 6, in concatenate
ValueError: need at least one array to concatenate

I was training on a server with python tools/train.py configs/condlanenet/culane/culane_small_train.py. Could you please tell me what causes this error?

Setup.py develop

Hello,

After following all the instructions about the setup, I got to the last command:

python setup.py develop

and I got:

Illegal instruction (core dumped)

Does anyone know the reason for this?

Thank you!

Could you please tell me how to correct this error? Thank you in advance!

~/projects/condlanenet/conditional-lane-detection$ python tools/condlanenet/culane/test_culane.py
Traceback (most recent call last):
File "tools/condlanenet/culane/test_culane.py", line 21, in
from mmdet.models.detectors.condlanenet import CondLanePostProcessor
File "/home/yjx/projects/condlanenet/conditional-lane-detection/mmdet/models/init.py", line 1, in
from .backbones import * # noqa: F401,F403
File "/home/yjx/projects/condlanenet/conditional-lane-detection/mmdet/models/backbones/init.py", line 1, in
from .hrnet import HRNet
File "/home/yjx/projects/condlanenet/conditional-lane-detection/mmdet/models/backbones/hrnet.py", line 7, in
from mmdet.utils import get_root_logger
ImportError: cannot import name 'get_root_logger' from 'mmdet.utils' (/home/yjx/projects/condlanenet/conditional-lane-detection/mmdet/utils/init.py)

CULane evaluation, help

Help me! Who can share the code for evaluating the CULane dataset? Thank you very much.

train --no-validate

Even though I try --validate with one GPU, it still returns a non-zero exit status 1.

conditional-lane-detection$ CUDA_VISIBLE_DEVICES=0 PORT=29001 /mnt/data/pycharmmm/conditional-lane-detection/tools/dist_train.sh /mnt/data/pycharmmm/conditional-lane-detection/configs/condlanenet/culane/culane_small_train.py 1 --validate

KeyError: 'gt_points' when running image_demo.py

I tried to test my own image with tusimple_small.pth. I ran this command: python3 demo/image_demo.py ../linesDetected.jpg configs/condlanenet/tusimple/tusimple_small_test.py ../Downloads/tusimple_small.pth.

When I run the command I get this error:

/home/batuhanbeytekin/conditional-lane-detection/mmdet/models/necks/trans_fpn.py:44: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
dim_t = self.temperature**(2 * (dim_t // 2) / self.num_pos_feats)
/home/batuhanbeytekin/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/torch/functional.py:445: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at /opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/TensorShape.cpp:2157.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
The model and loaded state dict do not match exactly

unexpected key in source state_dict: bbox_head.reg_branch.0.conv.weight, bbox_head.reg_branch.0.bn.weight, bbox_head.reg_branch.0.bn.bias, bbox_head.reg_branch.0.bn.running_mean, bbox_head.reg_branch.0.bn.running_var, bbox_head.reg_branch.0.bn.num_batches_tracked, bbox_head.reg_branch.1.conv.weight, bbox_head.reg_branch.1.bn.weight, bbox_head.reg_branch.1.bn.bias, bbox_head.reg_branch.1.bn.running_mean, bbox_head.reg_branch.1.bn.running_var, bbox_head.reg_branch.1.bn.num_batches_tracked, bbox_head.reg_branch.2.conv.weight, bbox_head.reg_branch.2.bn.weight, bbox_head.reg_branch.2.bn.bias, bbox_head.reg_branch.2.bn.running_mean, bbox_head.reg_branch.2.bn.running_var, bbox_head.reg_branch.2.bn.num_batches_tracked

Traceback (most recent call last):
File "demo/image_demo.py", line 26, in
main()
File "demo/image_demo.py", line 20, in main
result = inference_detector(model, args.img)
File "/home/batuhanbeytekin/conditional-lane-detection/mmdet/apis/inference.py", line 82, in inference_detector
data = test_pipeline(data)
File "/home/batuhanbeytekin/conditional-lane-detection/mmdet/datasets/pipelines/compose.py", line 29, in call
data = t(data)
File "/home/batuhanbeytekin/conditional-lane-detection/mmdet/datasets/pipelines/lane_formating.py", line 420, in call
valid = self.target(results)
File "/home/batuhanbeytekin/conditional-lane-detection/mmdet/datasets/pipelines/lane_formating.py", line 332, in target
gt_points = results['gt_points']
KeyError: 'gt_points'

Does anyone know how to solve this problem?

'CurvelanesDataset is not in the dataset registry'

Exception has occurred: KeyError
'CurvelanesDataset is not in the dataset registry'
File "/home/rx/ADAS/Lane/conditional-lane-detection/mmdetection/mmdet/datasets/builder.py", line 59, in build_dataset
dataset = build_from_cfg(cfg, DATASETS, default_args)
File "/home/rx/ADAS/Lane/conditional-lane-detection/tools/condlanenet/curvelanes/test_curvelanes.py", line 218, in main
dataset = build_dataset(cfg.data.test)
File "/home/rx/ADAS/Lane/conditional-lane-detection/tools/condlanenet/curvelanes/test_curvelanes.py", line 248, in
main()
I get this error; could you please help me figure out the cause?

How to run inference

How do I run inference with the trained models? I used the demo code, and it reports the error:

RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by (1) passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`; (2) making sure all `forward` function outputs participate in calculating loss. If you already have done the above two steps, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable).

How can I solve this problem?

How to use the mmdet API for inference

Hi, I'm trying to use the tusimple_small model to detect lanes in a simple image.
Is it possible to use the mmdet API to do it? Otherwise, how can I run inference using an image (array) as the input rather than a .json file?

Thanks,
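
For reference, a minimal sketch of single-image inference with the stock mmdet 2.0 high-level API (the config, checkpoint, and image paths are placeholders). Note that, as the KeyError: 'gt_points' issue above suggests, the lane-formatting test pipeline may still expect annotation keys, so treat this as a starting point rather than a verified recipe.

    # Hedged sketch using mmdet's generic inference helpers; the lane-specific
    # test pipeline may need adjustment before it accepts a bare image.
    from mmdet.apis import init_detector, inference_detector

    config_file = 'configs/condlanenet/tusimple/tusimple_small_test.py'
    checkpoint_file = 'tusimple_small.pth'

    model = init_detector(config_file, checkpoint_file, device='cuda:0')
    result = inference_detector(model, 'your_image.jpg')  # a BGR numpy array also works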

The model and loaded state dict do not match exactly.

Hi, I tried to use the given pretrained model for testing; however, I got the following results:

python tools/condlanenet/culane/test_culane.py configs/condlanenet/culane/culane_small_test.py history/official/culane_small.pth
The model and loaded state dict do not match exactly

unexpected key in source state_dict: bbox_head.reg_branch.0.conv.weight, bbox_head.reg_branch.0.bn.weight, bbox_head.reg_branch.0.bn.bias, bbox_head.reg_branch.0.bn.running_mean, bbox_head.reg_branch.0.bn.running_var, bbox_head.reg_branch.0.bn.num_batches_tracked, bbox_head.reg_branch.1.conv.weight, bbox_head.reg_branch.1.bn.weight, bbox_head.reg_branch.1.bn.bias, bbox_head.reg_branch.1.bn.running_mean, bbox_head.reg_branch.1.bn.running_var, bbox_head.reg_branch.1.bn.num_batches_tracked, bbox_head.reg_branch.2.conv.weight, bbox_head.reg_branch.2.bn.weight, bbox_head.reg_branch.2.bn.bias, bbox_head.reg_branch.2.bn.running_mean, bbox_head.reg_branch.2.bn.running_var, bbox_head.reg_branch.2.bn.num_batches_tracked

completed: 0, elapsed: 0s%
Is there something wrong with the pretrained models or the code?

export onnx

I tried this model on my own video, hoping it could somehow improve detection performance on corner cases like Y-branching, but it failed to work as I expected when using the pretrained CurveLanes model.

One vital drawback of this model is that it is perhaps extremely hard to deploy on embedded devices, because there are conditional weights/biases and conditional branching that depend on the input image on the fly.

So what is the suggested way to export it to ONNX with dynamic weights and structure? Considering the fixed nature of ONNX inference frameworks like ONNX Runtime, that seems really hard to do!

Trained culane-small but could not get the performance described in the paper.

I trained the culane_small model using the CULane train split (88.9K) for 16 epochs, but the F1-score on the CULane test split (34.7K) is only 77.55.
The F1-score reported in the paper is 78.14, and the gap is 0.69.
I think this gap is not small, as the F1-score gap between the small model and the medium model is only 0.6.
So, could you kindly tell us some details about how to train the model reported in the paper?
By the way, I used a batch size of 32 and did not change any hyperparameters in the code.

Thank you very much~

Does the mmdet directory need to be deleted?

Hi, after I set up the environment following the documentation: if I keep the mmdet directory, I get "cannot import name 'deform_pool_ext'";
if I remove the mmdet directory, I get: 'CurvelanesDataset is not in the dataset registry'.

Is there an error in the DynamicMaskHead locations?

In DynamicMaskHead (mmdet/models/dense_heads/condlanenet_head.py, https://github.com/aliyun/conditional-lane-detection/blob/master/mmdet/models/dense_heads/condlanenet_head.py):

    locations[:0, :, :] /= H
    locations[:1, :, :] /= W

should be

    locations[:, 0, :, :] /= H
    locations[:, 1, :, :] /= W

Running inference

Hi, great work! I am trying to run inference on some of my own images, but the output is incorrect. Here are the steps I followed:

  1. Loaded the small model trained on curvelanes dataset
  2. Resized the image to (800,320) [I want the output size to be (800,320) only].
  3. Got the predictions
  4. Supplied the predictions to the appropriate post-processor with downscale = 8.
  5. Supplied the output of the post-processor to the adjust_results function with the following parameters:
    crop_shape = (320,800)
    img_shape = (320,800)
    crop_offset = (0,0)
    ori_shape = (320,800)
  6. The output I am getting is this -
    https://drive.google.com/file/d/1BwTaZ1I5VVWAEoQKjx2q6-Wg8UFzv6fT/view?usp=sharing

It would be great if you can tell me where I am going wrong.

Thank You

questions about test_curvelanes.py

Thank you for this amazing work. It has really inspired me a lot.
I have a question about test_curvelanes.py (https://github.com/aliyun/conditional-lane-detection/blob/master/tools/condlanenet/curvelanes/test_curvelanes.py#L230).
I followed your instructions in the README, but I did not find eval_width, eval_height, and lane_width in curvelanes_small_test.py.
So, do you use the default values as in https://github.com/aliyun/conditional-lane-detection/blob/master/tools/condlanenet/curvelanes/test_curvelanes.py#L40 to test CurveLanes?
@hustllz @alibaba-oss

row_loss not going down

I am training culane_small with only two images to overfit. It's a crop-row image, but the row_loss is not going down; it stays around 22. Is this normal? How can I tune it to reduce the loss? Also, after training, if I run culane_test, the test outputs have no lines drawn on them. Here is a ground-truth image:

0_d03caef42cc4688a2ee02217f26c46e 00000 jpg gt

results do not agree with the author

I tested the model provided by the authors and the results were a bit different from what the authors described.
Do you have the same experience?
model: large.
result:
[{"name":"Accuracy","value":0.9636818107904509,"order":"desc"},{"name":"FP","value":0.021141864366163457,"order":"asc"},{"name":"FN","value":0.03633477114785525,"order":"asc"},{"name":"F1","value":0.9712022686934042,"order":"asc"}]

TuSimple Test: The model and loaded state dict do not match exactly.

I tried to use the pretrained model 'tusimple_large' to test and I'm running into this error.

Command used to test

python tools/condlanenet/tusimple/test_tusimple.py configs/condlanenet/tusimple/tusimple_large_test.py '/content/gdrive/MyDrive/LaneDetection/tusimple_large.pth' --show --show_dst '/content/showPath' --result_dst '/content/result'

Error:

The model and loaded state dict do not match exactly

size mismatch for bbox_head.mlp.layers.0.weight: copying a param with shape torch.Size([64, 200, 1]) from checkpoint, the shape in current model is torch.Size([64, 100, 1]).
[ ] 0/2782, elapsed: 0s, ETA:Traceback (most recent call last):
File "tools/condlanenet/tusimple/test_tusimple.py", line 236, in
main()
File "tools/condlanenet/tusimple/test_tusimple.py", line 232, in main
crop_bbox=cfg.crop_bbox)
File "tools/condlanenet/tusimple/test_tusimple.py", line 141, in single_gpu_test
return_loss=False, rescale=False, thr=hm_thr, **data)
File "/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 159, in forward
return self.module(*inputs[0], **kwargs[0])
File "/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/content/gdrive/My Drive/LaneDetection/conditional-lane-detection/mmdet/models/detectors/condlanenet.py", line 329, in forward
return self.forward_test(img, img_metas, **kwargs)
File "/content/gdrive/My Drive/LaneDetection/conditional-lane-detection/mmdet/models/detectors/condlanenet.py", line 369, in forward_test
kwargs['thr'])
File "/content/gdrive/My Drive/LaneDetection/conditional-lane-detection/mmdet/models/dense_heads/condlanenet_head.py", line 444, in forward_test
masks = self.mask_head(mask_branch, mask_params, num_ins)
File "/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/content/gdrive/My Drive/LaneDetection/conditional-lane-detection/mmdet/models/dense_heads/condlanenet_head.py", line 119, in forward
x = torch.cat([locations, x], dim=1)
RuntimeError: Sizes of tensors must match except in dimension 2. Got 80 and 40 (The offending index is 0)

Please help me figure out what is wrong.

Thanks in advance.

Frozen ResNet Blocks and BatchNorms

Hi all,

Does somebody know why the first ResNet block and almost all BatchNorms are frozen? I trained the model once without freezing and the performance was a bit worse, so what is the intuition behind the frozen layers?

mmdet/models/backbones/resnet.py, lines 546 and 606:

    def _freeze_stages(self):
        if self.frozen_stages >= 0:
            if self.deep_stem:
                self.stem.eval()
                for param in self.stem.parameters():
                    param.requires_grad = False
            else:
                self.norm1.eval()
                for m in [self.conv1, self.norm1]:
                    for param in m.parameters():
                        param.requires_grad = False

        for i in range(1, self.frozen_stages + 1):
            m = getattr(self, f'layer{i}')
            m.eval()
            for param in m.parameters():
                param.requires_grad = False

    def train(self, mode=True):
        super(ResNet, self).train(mode)
        self._freeze_stages()
        if mode and self.norm_eval:
            for m in self.modules():
                # trick: eval have effect on BatchNorm only
                if isinstance(m, _BatchNorm):
                    m.eval()

export to onnx

Hi, thank you for your great work! I have run it on my own dataset (a different view from CULane), and it predicts very well.
However, I have a problem exporting the .pth model to ONNX.

I have tried:
python tools/pytorch2onnx.py configs/condlanenet/culane/culane_small_test.py ./culane_small.pth --out ./condlanenet.onnx --shape 320 800

and it did save an ONNX file (44.7 MB) to my path,
but when I open the ONNX file, I don't understand the output (screenshots attached).

Can anyone here help me out? Thanks!
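
A minimal sketch for inspecting the exported graph with onnxruntime (an assumed third-party tool, not part of this repo); the file name matches the export command above, and the input shape follows the --shape 320 800 argument.

    # Hedged sketch: list the ONNX model's outputs and run a dummy forward pass
    # to see their shapes. Output names depend on how tools/pytorch2onnx.py
    # wrote the graph.
    import numpy as np
    import onnxruntime as ort

    sess = ort.InferenceSession('condlanenet.onnx')
    print([(o.name, o.shape) for o in sess.get_outputs()])

    dummy = np.random.randn(1, 3, 320, 800).astype(np.float32)
    outputs = sess.run(None, {sess.get_inputs()[0].name: dummy})
    print([o.shape for o in outputs])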

RIM Module

Hi, I read the paper of this work; the RIM module for handling fork-lane cases is awesome. However, I can't find this module in the open-source code in this repo. Could you help me? I want to know the reason, thank you!

TypeError: mask must be numpy array type

  main()
File "tools/condlanenet/curvelanes/test_curvelanes.py", line 248, in main
  mask_size=cfg.mask_size)
File "tools/condlanenet/curvelanes/test_curvelanes.py", line 133, in single_gpu_test
  for i, data in enumerate(data_loader):
File "/home/ubantu/anaconda3/envs/squee/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 345, in __next__
  data = self._next_data()
File "/home/ubantu/anaconda3/envs/squee/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 387, in _next_data
  data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
File "/home/ubantu/anaconda3/envs/squee/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
  data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/ubantu/anaconda3/envs/squee/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
  data = [self.dataset[idx] for idx in possibly_batched_index]

File "/home/ubantu/anaconda3/envs/squee/lib/python3.6/site-packages/albumentations/core/composition.py", line 166, in __call__
  self._check_args(**data)
File "/home/ubantu/anaconda3/envs/squee/lib/python3.6/site-packages/albumentations/core/composition.py", line 237, in _check_args
  raise TypeError("{} must be numpy array type".format(data_name))
TypeError: mask must be numpy array type


pytorch==1.5.0
torchvision==0.6.0
mmdet==2.0.0+2.13.0
mmcv==1.3.7


Evaluation on CULane

Hello!
Could you please tell me how to calculate the F1 score on the CULane dataset?
Thank you!

add --validate then Error

File "/mnt/lustre/wangjinsheng/project/lane-detection/conditional-lane-detection/mmdet/models/detectors/condlanenet.py", line 369, in forward_test
kwargs['thr'])
KeyError: 'thr'

If I manually assign a value to kwargs['thr'], it can run. Where is 'thr' passed in?

TypeError: 'DataContainer' object is not subscriptable

A question for the experts: I am using the recommended environment configuration:

MMCV: 0.5.6
MMDetection: 2.0.0+73c2043

Training works fine, but testing reports the following error. I checked the dataset and it is fine; where else could the problem be?

(openmm) [wangjinsheng@HOST-10-198-32-69 tools]$ sh test_slurm.sh 
[                                                  ] 0/2782, elapsed: 0s, ETA:[                                                  ] 0/2782, elapsed: 0s, ETA:Traceback (most recent call last):
Traceback (most recent call last):
  File "test.py", line 149, in <module>
  File "test.py", line 149, in <module>
    main()
  File "test.py", line 127, in main
    args.show_score_thr)
  File "/mnt/lustre/wangjinsheng/project/lane-detection/conditional-lane-detection/mmdet/apis/test.py", line 28, in single_gpu_test
    main()
  File "test.py", line 127, in main
    args.show_score_thr)
  File "/mnt/lustre/wangjinsheng/project/lane-detection/conditional-lane-detection/mmdet/apis/test.py", line 28, in single_gpu_test
    img_tensor = data['img'][0]
TypeError: 'DataContainer' object is not subscriptable
    img_tensor = data['img'][0]
TypeError: 'DataContainer' object is not subscriptable

Problem with the Docker environment

@hustllz
I am using the provided Docker environment. After creating a container and entering the environment, I tried to run the test code and found that mmcv and cv2 were missing from the Docker environment, so I installed mmcv and opencv-python (environment screenshot attached).
However, when testing inside the container, the following problem appears:
ModuleNotFoundError: No module named 'mmcv._ext'
How should I solve this? Thanks.

RNN head

Hello, the paper proposes the LSTM block in the proposal head, but the config files don't use this head for training. Why? If we want to train our model with the LSTM block, can we use it directly? Is there anything to be aware of?
