
SemanticKITTI Validation Set Results #26

Open
alextanned opened this issue Jan 18, 2023 · 0 comments

Hello, thank you for publishing this repository, your work is very insightful.

Regarding Table 1 in the paper, which compares methods on the SemanticKITTI validation set: may I ask how you obtained the mIoU results for the other methods? I know that you evaluate on your own dataset created by create_fov_dataset.py, but do you:

  1. Use the published pretrained checkpoints of those methods (e.g. SalsaNext, Cylinder3D, etc.) and evaluate them directly, even though most of them were trained on the full 360° point cloud? Or,
  2. Use the published model architectures and training settings of those methods and retrain them yourselves on the limited-FOV dataset?

If neither of the above is correct, I would much appreciate any insight into how you conducted the comparison. Thanks again for your great work!
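For context (this is not from the repository, just the standard SemanticKITTI protocol as I understand it): mIoU is usually computed from a confusion matrix over the evaluated classes, ignoring the unlabeled class. A minimal NumPy sketch, with a hypothetical `mean_iou` helper and integer label arrays assumed:

```python
import numpy as np

def mean_iou(pred, gt, num_classes, ignore_index=0):
    """Per-class IoU averaged over the classes present in the ground truth."""
    pred = np.asarray(pred).ravel()
    gt = np.asarray(gt).ravel()
    mask = gt != ignore_index                # drop unlabeled points
    pred, gt = pred[mask], gt[mask]
    # conf[i, j] = number of points with ground-truth class i predicted as j
    conf = np.bincount(gt * num_classes + pred,
                       minlength=num_classes ** 2).reshape(num_classes, num_classes)
    tp = np.diag(conf).astype(float)
    union = conf.sum(axis=0) + conf.sum(axis=1) - tp
    present = conf.sum(axis=1) > 0           # classes that appear in the ground truth
    return float(np.mean(tp[present] / np.maximum(union[present], 1)))

# e.g. one point of class 2 misclassified as class 1:
score = mean_iou(pred=[1, 1, 2, 2], gt=[1, 2, 2, 2], num_classes=3)
```

Whether the numbers in Table 1 come from re-evaluating released checkpoints or from retraining would of course change how this metric should be interpreted, which is why I am asking.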
