I'm trying to reproduce the training process described in readme.md on the kangaroo dataset. I changed the init function of backend.py to read parameters directly from yolov2.weights (because the download link for the pretrained model is inaccessible, and proxysite.com doesn't work either), and set the backend layers to be non-trainable. When training the model on both the kangaroo and raccoon datasets, the logs show that the average recall drops to zero after about 10 epochs. However, when I set the backend layers to trainable, the average recall only drops to around 30%.
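For reference, here is a minimal sketch of what I mean by the two changes. It is not the repo's actual loader: yolov2.weights is a Darknet binary (a small int32 header followed by raw float32 parameters), and the header size may differ between Darknet versions; the `feature_extractor` name is just a placeholder for the backend sub-model.

```python
import numpy as np

def load_darknet_floats(path="yolov2.weights"):
    """Read the raw parameters from a Darknet weight file.

    Assumes a 4 x int32 header (major, minor, revision, seen); the rest of
    the file is one flat float32 array that still has to be sliced and
    reshaped per layer before it can be assigned to the Keras model.
    """
    with open(path, "rb") as f:
        header = np.fromfile(f, dtype=np.int32, count=4)
        weights = np.fromfile(f, dtype=np.float32)
    return header, weights

def freeze_backend(feature_extractor):
    """Mark every backend layer as non-trainable (fine-tune only the head)."""
    for layer in feature_extractor.layers:
        layer.trainable = False
    # Re-compile the full model after flipping these flags, otherwise
    # Keras keeps updating the frozen weights.
```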
So, question 1: when should I train from scratch, and when should I fine-tune?
Question 2: when training from scratch, how much training data is enough? The model has more than 50M parameters; are roughly 100 training images (kangaroo and raccoon combined) sufficient? Will such a small number of images lead to overfitting? @experiencor Thanks so much!
@leadcain84 Thanks for your comment. Actually, when I visit the link in a browser, it displays "Sorry something goes wrong", so I can't download the file.
I downloaded yolov2.weights from the official YOLO website, so I don't think the weights themselves are the problem.