This repository has been archived by the owner on Sep 16, 2024. It is now read-only.

Training match the paper performance #3

Open
kspeng opened this issue Oct 6, 2019 · 4 comments


@kspeng

kspeng commented Oct 6, 2019

Hi,

I am trying to follow your instructions to match the results of the paper using the NYU dataset, but the mIoU and RMSE still do not match. They stabilize after 300 iterations and plateau at 25% and 0.7. Is there anything I missed in the instructions? Thanks!

Best
Kuo

@OscarMind

> Hi, I am trying to follow your instructions to match the results of the paper using the NYU dataset, but the mIoU and RMSE still do not match. They stabilize after 300 iterations and plateau at 25% and 0.7. Is there anything I missed in the instructions? Thanks!

Have you achieved multi-task training with this code?

@kspeng
Author

kspeng commented Oct 31, 2019

> Hi, I am trying to follow your instructions to match the results of the paper using the NYU dataset, but the mIoU and RMSE still do not match. They stabilize after 300 iterations and plateau at 25% and 0.7. Is there anything I missed in the instructions? Thanks!

> Have you achieved multi-task training with this code?

Running the multi-task training itself works, no doubt.
However, I can't fully reproduce the training procedure described in the paper to reach the claimed performance.
Interestingly, the provided pretrained model does reach the performance claimed in the paper.

@Tomas-Lee

Me too. I also trained example/multitask/train.py, i.e. MobileNet-v2 + MTLWRefineNet, without any changes.
After 1000 epochs I get mIoU = 0.27 and RMSE = 0.72.
That does not reach the paper-claimed performance.
Does anyone know why?

@DrSleep
Owner

DrSleep commented Jul 20, 2020

See the following script for multi-task training with MobileNet-v2 and Multi-Task Light-Weight RefineNet: https://github.com/DrSleep/DenseTorch/blob/dev/examples/scripts/segm-depth-normals-mbv2-rflw.sh
Without changes, it should reach ~37% mean IoU, 0.60 RMSE, and 24 mean angular error. To achieve the higher numbers described in the paper, you have to use the raw NYUD dataset with knowledge distillation on missing segmentation labels -- this is not provided by this repository.
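The distillation step mentioned above is not part of this repository. One common way to exploit a teacher on frames with missing segmentation labels is to apply cross-entropy where ground truth exists and a KL term against the teacher's soft predictions elsewhere. A hedged PyTorch sketch, assuming an `ignore_index` convention for unlabelled pixels and a distillation temperature (both assumptions, not the paper's exact recipe):

```python
import torch
import torch.nn.functional as F

def segm_loss_with_distillation(student_logits, teacher_logits, labels,
                                ignore_index=255, temperature=1.0):
    """Cross-entropy on labelled pixels; KL to the teacher on unlabelled ones.

    student_logits, teacher_logits: (N, C, H, W); labels: (N, H, W),
    with `ignore_index` marking pixels that have no ground-truth label.
    """
    # Supervised term: ignore_index pixels are excluded automatically.
    ce = F.cross_entropy(student_logits, labels, ignore_index=ignore_index)

    # Distillation term on the unlabelled pixels only.
    missing = labels == ignore_index
    if missing.any():
        t = temperature
        log_p = F.log_softmax(student_logits / t, dim=1)
        q = F.softmax(teacher_logits / t, dim=1)
        kl = F.kl_div(log_p, q, reduction="none").sum(dim=1)  # (N, H, W)
        distill = kl[missing].mean() * (t * t)
    else:
        distill = student_logits.new_zeros(())
    return ce + distill
```

When every pixel is labelled, the loss reduces to plain cross-entropy; the weighting between the two terms and the temperature would need tuning to approach the paper's setup.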
