
Online HD Map Construction Challenge For Autonomous Driving

CVPR 2023 Workshop on End-to-End Autonomous Driving

Introduction

Constructing HD maps is a central component of autonomous driving. However, traditional mapping pipelines require a vast amount of human effort to annotate and maintain the map, which limits their scalability. The online HD map construction task instead aims to dynamically construct a local semantic map from onboard sensor observations. Compared to lane detection, the constructed HD map provides richer semantic information across multiple categories. A vectorized polyline representation is adopted to handle complicated and even irregular road structures.

Task

The goal of the Online HD Map Construction Task is to construct a local HD map from onboard sensor observations (surround-view camera images). A local HD map can be described by a set of map elements of different categories, e.g. road divider, road boundary and pedestrian crossing. Each map element can be vectorized into a polyline, which consists of an ordered set of points. Here is an example from a top-down view.
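The vectorized representation described above can be sketched as a minimal data structure. Note that the field names below are illustrative, not the challenge's actual schema (see data.md for the real format):

```python
import numpy as np

# Hypothetical vectorized map element: a category label plus an ordered
# polyline of 2D vertices in the ego/BEV frame (shapes are illustrative).
divider = {
    "category": "divider",           # e.g. divider / boundary / ped_crossing
    "pts": np.array([[0.0, 1.5],
                     [5.0, 1.6],
                     [10.0, 1.4]]),  # (N, 2) ordered polyline vertices
}

# A local HD map is then simply a set of such elements.
local_map = [divider]
print(divider["pts"].shape)  # (3, 2)
```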

We use Chamfer Distance based Average Precision ( $\mathrm{AP}_\mathrm{CD}$ ) as the metric to evaluate construction quality, as introduced in HDMapNet and VectorMapNet. For more details of the evaluation metrics, please see metrics.md.
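The core of the metric is the symmetric Chamfer distance between a predicted and a ground-truth polyline. Below is a simplified sketch of that distance; the official evaluation (see metrics.md) additionally resamples the lines and applies per-threshold matching on top of it:

```python
import numpy as np

def chamfer_distance(pred, gt):
    """Symmetric Chamfer distance between two (N, 2) point sets.

    A simplified illustration of the idea behind AP_CD, not the
    official evaluation code.
    """
    # Pairwise Euclidean distances, shape (N_pred, N_gt).
    d = np.linalg.norm(pred[:, None, :] - gt[None, :, :], axis=-1)
    # Average nearest-neighbor distance in both directions.
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

pred = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
gt   = np.array([[0.0, 0.5], [1.0, 0.5], [2.0, 0.5]])
print(chamfer_distance(pred, gt))  # 0.5
```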

News

  • [2023/01]
  • [2023/05]
    • Note❗❗❗ You must include a correct email address and the other required information to validate your submissions in the challenge.
    • Due to EvalAI's memory limitations, we restrict the maximum submission file size to 250MB.
    • [2023/05/23] We noticed that several submissions are stuck in the running status on EvalAI. This is caused by EvalAI's memory limitations. We suggest reducing your submission size by filtering predictions with a score threshold or by using fewer points to represent a line (this causes little performance drop, since we explicitly up-sample in the evaluation code).
    • [2023/05/24] Fixed a bug where the stdout file may print a wrong mAP result (the result table is correct). This does not affect the leaderboard, since the bug only affected log printing.
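The point-reduction tip above can be sketched as a uniform arc-length resampling of each polyline. This is an illustrative snippet, not the official tooling:

```python
import numpy as np

def resample_polyline(pts, num_points):
    """Uniformly resample an (N, 2) polyline to num_points along its arc length."""
    pts = np.asarray(pts, dtype=float)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)   # segment lengths
    s = np.concatenate([[0.0], np.cumsum(seg)])          # cumulative arc length
    t = np.linspace(0.0, s[-1], num_points)              # target arc-length positions
    x = np.interp(t, s, pts[:, 0])
    y = np.interp(t, s, pts[:, 1])
    return np.stack([x, y], axis=1)

line = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0], [4.0, 0.0]])
print(resample_polyline(line, 3))  # [[0. 0.] [2. 0.] [4. 0.]]
```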

Data

Our dataset is built on top of the Argoverse2 dataset. To download the dataset and check more details, please see data.md.

Get Started

Please refer to get_started.md.

Challenge submission and Leaderboard

Please submit at EvalAI. For details of the submission file format, please see metrics.md.

| Method | $\mathrm{mAP}$ | $\mathrm{AP}_{pc}$ | $\mathrm{AP}_{div}$ | $\mathrm{AP}_{bound}$ |
| --- | --- | --- | --- | --- |
| VectorMapNet (baseline) | 42.79 | 37.22 | 50.47 | 40.68 |

Rules

  • During inference, the input modality of the model should be camera only.
  • No future frame is allowed during inference.
  • To check for compliance, we will ask participants to provide technical reports to the challenge committee, and award winners will be asked to give a public talk about their work.

Citation

The evaluation metrics of this challenge follow HDMapNet. We provide VectorMapNet as the baseline. Please cite:

@article{li2021hdmapnet,
    title={HDMapNet: An Online HD Map Construction and Evaluation Framework},
    author={Qi Li and Yue Wang and Yilun Wang and Hang Zhao},
    journal={arXiv preprint arXiv:2107.06307},
    year={2021}
}

Our dataset is built on top of the Argoverse 2 dataset. Please also cite:

@INPROCEEDINGS {Argoverse2,
  author = {Benjamin Wilson and William Qi and Tanmay Agarwal and John Lambert and Jagjeet Singh and Siddhesh Khandelwal and Bowen Pan and Ratnesh Kumar and Andrew Hartnett and Jhony Kaesemodel Pontes and Deva Ramanan and Peter Carr and James Hays},
  title = {Argoverse 2: Next Generation Datasets for Self-driving Perception and Forecasting},
  booktitle = {Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks (NeurIPS Datasets and Benchmarks 2021)},
  year = {2021}
}

License

Before participating in our challenge, you should register on the website and agree to the terms of use of the Argoverse 2 dataset. All code in this project is released under the GNU General Public License v3.0.


Issues

[Doc] Specify the mmsegmentation version

I propose specifying the mmsegmentation version as a dependency of mmdetection3d. Since MMDetection3D v0.17.3 is the suggested version, the latest mmsegmentation does not work with it.

I am using mmsegmentation==0.14.1

cd mmdetection3d-0.17.3
pip install mmsegmentation==0.14.1
pip install -v -e .

Different evaluation results

Hello, your work is awesome.
But when I evaluate your model, I get results different from yours.

Your results:
[image: reported result table]

Mine:
| category | num_preds | num_gts | AP@0.5 | AP@1.0 | AP@1.5 | AP |
| --- | --- | --- | --- | --- | --- | --- |
| ped_crossing | 54856 | 9517 | 0.0585 | 0.3891 | 0.649 | 0.3655 |
| divider | 56501 | 23546 | 0.3467 | 0.538 | 0.6338 | 0.5062 |
| boundary | 56607 | 14498 | 0.1417 | 0.4599 | 0.6657 | 0.4225 |

mAP = 0.4314
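For what it's worth, the reported figures are internally consistent: each per-category AP is the mean over the three Chamfer thresholds, and the mAP is the mean of the three per-category APs:

```python
# Per-category APs reported above.
aps = {"ped_crossing": 0.3655, "divider": 0.5062, "boundary": 0.4225}

# mAP is the mean over categories.
mAP = sum(aps.values()) / len(aps)
print(round(mAP, 4))  # 0.4314
```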

In addition, I evaluated on the validation set.

How to visualize vectors on camera image from Nuscenes dataset?

Hi Tsinghua-MARS-Lab,

Thank you for your work. Currently, I want to visualize vectors on camera images from the nuScenes dataset. This is my pseudo-code:

Assume that I obtained vector points from a sample vector line of shape (N, 2).

        import copy

        import cv2
        import imageio
        import numpy as np
        for cam_type, img_info in img_infos.items():
            img_filename = img_info['data_path']
            img = imageio.imread(img_filename)

            cam2lidar_rt = np.eye(4)
            cam2lidar_rt[:3, :3] = img_info['sensor2lidar_rotation']
            cam2lidar_rt[:3, -1] = img_info['sensor2lidar_translation']
            lidar2cam_rt = np.linalg.inv(cam2lidar_rt)

            lidar2ego_rt = np.eye(4)
            lidar2ego_rt[:3, :3] = img_metas['lidar2ego_rots']
            lidar2ego_rt[:3, -1] = img_metas['lidar2ego_trans']
            ego2lidar_rt = np.linalg.inv(lidar2ego_rt)

            ego2cam_rt = lidar2cam_rt @ ego2lidar_rt

            intrinsic = img_info['cam_intrinsic']

            viewpad = np.eye(4)
            viewpad[:intrinsic.shape[0],
                    :intrinsic.shape[1]] = intrinsic

            lidar2img = (viewpad @ lidar2cam_rt)
            ego2img = (viewpad @ ego2cam_rt)

            for vector in vectors_gt:
                pts = np.array(vector['pts'])
                # squeeze to N,2
                pts = pts.squeeze()
                if pts.shape[1] == 2:
                    # pad a z = 0 column; note np.randn does not exist
                    # (and np.random.randn would add random noise, not zeros)
                    zeros = np.zeros((pts.shape[0], 1))
                    pts = np.concatenate([pts, zeros], axis=1)
                pts_ego_4d = np.concatenate([pts, np.ones([len(pts), 1])], axis=-1)

                ego2img_rt = copy.deepcopy(ego2img).reshape(4, 4)
                uv = pts_ego_4d @ ego2img_rt.T

                uv = remove_nan_values(uv)  # user-defined helper to drop NaN rows (not shown)
                depth = uv[:, 2]

                uv = uv[:, :2] / uv[:, 2].reshape(-1, 1)

                h, w, c = img.shape

                is_valid_x = np.logical_and(0 <= uv[:, 0], uv[:, 0] < w - 1)
                is_valid_y = np.logical_and(0 <= uv[:, 1], uv[:, 1] < h - 1)
                is_valid_z = depth > 0
                is_valid_points = np.logical_and.reduce([is_valid_x, is_valid_y, is_valid_z])
                if is_valid_points.sum() == 0:
                    print('no valid points')

                # Do not filter uv here: the loop below indexes is_valid_points
                # per point, so line and is_valid_points must stay aligned.
                line = np.round(uv).astype(int)
                test_img = img.copy()
                for i in range(len(line) - 1):

                    if (not is_valid_points[i]) or (not is_valid_points[i + 1]):
                        continue
                        
                    x1 = line[i][0]
                    y1 = line[i][1]
                    x2 = line[i + 1][0]
                    y2 = line[i + 1][1]

                    # Use anti-aliasing (AA) for curves
                    test_img = cv2.line(test_img, pt1=(x1, y1), pt2=(x2, y2), color=(255,0,0), thickness=2, lineType=cv2.LINE_AA)

My script prints "no valid points". Thank you.
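One way to narrow down a "no valid points" failure is to sanity-check the viewpad projection on a synthetic point with known camera parameters. The intrinsics below are illustrative, not taken from the dataset:

```python
import numpy as np

# Illustrative pinhole intrinsics: fx = fy = 500, principal point (320, 240).
intrinsic = np.array([[500.0,   0.0, 320.0],
                      [  0.0, 500.0, 240.0],
                      [  0.0,   0.0,   1.0]])
viewpad = np.eye(4)
viewpad[:3, :3] = intrinsic

# A point 5 m in front of the camera and 1 m to the right
# (camera frame: x right, y down, z forward).
pt_cam = np.array([1.0, 0.0, 5.0, 1.0])
uv = viewpad @ pt_cam
uv = uv[:2] / uv[2]
print(uv)  # [420. 240.]  i.e. u = 500 * 1/5 + 320

# If this works but real points all fail the depth > 0 check, the problem is
# likely the ego->camera transform or the assumed z = 0 height of the map
# points, not the projection itself.
```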

submission file example

Hi,

I used the released ckpt to write a validation submission JSON file. Then I added JSON keys according to the submission format. Finally, I submitted this JSON file to the val phase.

An error occurred as below:

Traceback (most recent call last):
File "/code/scripts/workers/submission_worker.py", line 500, in run_submission
submission_metadata=submission_serializer.data,
File "/tmp/tmpdaalq2fx/compute/challenge_data/challenge_1954/main.py", line 67, in evaluate
assert send_email(results['meta']), 'validation failed in send_email()'
AssertionError: validation failed in send_email()

I wonder what is wrong with my submission JSON file. Could you provide a toy submission file?

Best regards,
Fuzhi

Will you re-open the challenge submission?

Thanks for hosting the challenge leaderboard on EvalAI. However, we find that we can no longer submit results to the leaderboard. Did you cancel the challenge? Will you re-open submissions? We hope for your response.

ImageNet Dataset Version

Hi all,
Can we use an ImageNet-22K pretrained backbone, or can we only use an ImageNet-1K pretrained backbone?

Thanks in advance!
