
Comments (8) on cvpr2023-3d-occupancy-prediction

BB88Lee commented on May 27, 2024

"Hi, the reason for evaluating the first category is to assess the performance on general objects. It will be taken into account in the mAP."

Hi, what are the "general objects" you are referring to?

I still don't think it makes sense to count this category in the mIoU, for the following reasons, which I hope you will consider:

  1. The first category is just a catch-all that lumps noise points and some rare categories (animal, ego-vehicle points, vehicle.emergency, pedestrian.wheelchair, ...) together, which is not a realistic class definition.
  2. The total number of points in this category is an order of magnitude smaller than that of the least frequent of the other 16 categories, so it is not statistically sound to count it in the mIoU (a small numeric sketch follows this list).
  3. This category is not counted in the mIoU of the nuScenes semantic and panoptic segmentation benchmarks, and no published method so far counts it in its mIoU either.
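
To make point 2 concrete, here is a purely illustrative sketch of how a single rare, poorly learned class drags a 17-class mean down relative to a 16-class mean. The per-class IoU values are invented for illustration and are not taken from any model or benchmark.

```python
# Invented per-class IoU values, for illustration only.
ious_regular = [0.55] * 16   # pretend every regular class scores 0.55
iou_others   = 0.03          # a rare "others" class that is barely learned

miou_17 = (iou_others + sum(ious_regular)) / 17   # ~0.519
miou_16 = sum(ious_regular) / 16                  # 0.550

print(f"mIoU with class 0:    {miou_17:.3f}")
print(f"mIoU without class 0: {miou_16:.3f}")
```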

tongwwt commented on May 27, 2024

Thanks for your suggestion. Other works such as OpenOccupancy (https://arxiv.org/abs/2303.03991) choose to ignore this category during evaluation, following the definition used in the panoptic segmentation task. We keep the first category in the mIoU because of its importance to the task, which does not depend on the number of points. Class imbalance is also very common in datasets; for example, in the occupancy data of the SemanticKITTI dataset, the IoU on some categories is very close to zero during evaluation because those categories have so few points.

Different mIoU evaluation scores can be designed for different tasks, but we still want to evaluate the performance on all categories for the current challenge. You can drop the first category in your future work.

BB88Lee commented on May 27, 2024

"We use the well-known IOU metric, which is defined as TP / (TP + FP + FN). The IOU score is calculated separately for each class, and then the mean is computed across classes. Note that lidar segmentation index 0 is ignored in the calculation."

See here:
https://www.nuscenes.org/lidar-segmentation?externalData=all&mapData=all&modalities=Any
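
For reference, here is a minimal sketch of that metric as quoted above, not the challenge's actual evaluation code; the helper names are mine, and it assumes a per-class confusion matrix with ground-truth rows and prediction columns. Passing `ignore_index=0` reproduces the nuScenes convention, while `ignore_index=None` keeps category 0 in the mean as the challenge currently does.

```python
import numpy as np

def per_class_iou(confusion: np.ndarray) -> np.ndarray:
    """IoU = TP / (TP + FP + FN) per class, from a CxC confusion matrix
    (rows = ground truth, columns = prediction)."""
    tp = np.diag(confusion).astype(np.float64)
    fp = confusion.sum(axis=0) - tp      # predicted as class c but wrong
    fn = confusion.sum(axis=1) - tp      # class c in GT but missed
    denom = tp + fp + fn
    # Classes that never appear in GT or predictions get NaN instead of 0/0.
    return np.divide(tp, denom, out=np.full_like(tp, np.nan), where=denom > 0)

def miou(confusion: np.ndarray, ignore_index: int | None = None) -> float:
    """Mean IoU across classes, optionally dropping one class (e.g. index 0)."""
    ious = per_class_iou(confusion)
    if ignore_index is not None:
        ious = np.delete(ious, ignore_index)
    return float(np.nanmean(ious))
```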

OrangeSodahub commented on May 27, 2024

Yes, I agree with you.

waveleaf27 commented on May 27, 2024

Hi, the reason for evaluating the first category is to assess the performance on general objects. It will be taken into account in the mAP.

tongwwt commented on May 27, 2024

The occupancy prediction task differs from traditional tasks such as LiDAR segmentation. We describe the driving scene as occupancy, and all occupied objects in 3D space should be predicted well for autonomous driving.
The first category includes all unknown objects in the driving scene, such as unknown obstacles on the road, and these objects cannot be ignored in the occupancy task.

  1. We have removed almost all invalid noise points to ensure that this category is meaningful.
  2. Although the number of points is small, this category cannot be ignored in our task.
  3. Our occupancy task differs from the semantic and panoptic segmentation tasks, and we predict all occupied regions in 3D space.

BB88Lee commented on May 27, 2024

"The occupancy prediction task differs from traditional tasks such as LiDAR segmentation. We describe the driving scene as occupancy, and all occupied objects in 3D space should be predicted well for autonomous driving. The first category includes all unknown objects in the driving scene, such as unknown obstacles on the road, and these objects cannot be ignored in the occupancy task.

  1. We have removed almost all invalid noise points to ensure that this category is meaningful.
  2. Although the number of points is small, this category cannot be ignored in our task.
  3. Our occupancy task differs from the semantic and panoptic segmentation tasks, and we predict all occupied regions in 3D space."

I fully understand your concerns: your position is that no occupied region can be ignored, and I agree. But I think a more reasonable way may be to add a separate IoU score specifically for occupancy detection, occupied or not (as in https://arxiv.org/abs/2303.03991), instead of counting these rare points in the mIoU.
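
For what it's worth, a minimal sketch of such a complementary score: a class-agnostic "occupied vs. free" IoU that could be reported alongside the semantic mIoU. The `free_label` value is an assumption about how empty voxels are labelled, not something fixed in this thread.

```python
import numpy as np

def geometric_iou(pred: np.ndarray, gt: np.ndarray, free_label: int = 17) -> float:
    """Class-agnostic occupancy IoU over voxel grids of semantic labels.
    Every voxel whose label differs from `free_label` (assumed to mark free
    space) counts as occupied, so rare classes still contribute to this
    score without skewing the per-class mIoU."""
    pred_occ = pred != free_label
    gt_occ = gt != free_label
    union = np.logical_or(pred_occ, gt_occ).sum()
    if union == 0:
        return float("nan")
    inter = np.logical_and(pred_occ, gt_occ).sum()
    return float(inter / union)
```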

BB88Lee commented on May 27, 2024

Thanks for the reply, it's clear. I'm gonna close this issue.
