First of all, thank you for providing the complete code. When I train the VGG16 model, I get the result below, which I think must be wrong. But why?

Epoch 1/300
90/90 [======] - 34s - loss: 7.9056 - acc: 0.5050 - val_loss: 8.3008 - val_acc: 0.4850
Epoch 00000: val_loss improved from inf to 8.30082, saving model to ./checkpoints/VGG16/VGG16.h5
Epoch 2/300
90/90 [======] - 24s - loss: 7.9785 - acc: 0.5050 - val_loss: 8.3008 - val_acc: 0.4850
Epoch 00001: val_loss did not improve
Epoch 3/300
90/90 [======] - 23s - loss: 7.9785 - acc: 0.5050 - val_loss: 8.3008 - val_acc: 0.4850
Epoch 00002: val_loss did not improve
Epoch 4/300
90/90 [======] - 22s - loss: 7.9785 - acc: 0.5050 - val_loss: 8.3008 - val_acc: 0.4850
Epoch 00003: val_loss did not improve
Epoch 5/300
90/90 [======] - 22s - loss: 7.9785 - acc: 0.5050 - val_loss: 8.3008 - val_acc: 0.4850
Epoch 00004: val_loss did not improve
Epoch 6/300
90/90 [======] - 21s - loss: 7.9785 - acc: 0.5050 - val_loss: 8.3008 - val_acc: 0.4850
Epoch 00005: val_loss did not improve
Epoch 7/300
90/90 [======] - 21s - loss: 7.9785 - acc: 0.5050 - val_loss: 8.3008 - val_acc: 0.4850
Epoch 00006: val_loss did not improve
Epoch 8/300
90/90 [======] - 21s - loss: 7.9785 - acc: 0.5050 - val_loss: 8.3008 - val_acc: 0.4850
Epoch 00007: val_loss did not improve
Epoch 9/300
90/90 [======] - 21s - loss: 7.9785 - acc: 0.5050 - val_loss: 8.3008 - val_acc: 0.4850
Epoch 00008: val_loss did not improve
Epoch 10/300
90/90 [======] - 21s - loss: 7.9785 - acc: 0.5050 - val_loss: 8.3008 - val_acc: 0.4850
Epoch 00009: val_loss did not improve
Epoch 11/300
90/90 [======] - 21s - loss: 7.9785 - acc: 0.5050 - val_loss: 8.3008 - val_acc: 0.4850
Epoch 00010: val_loss did not improve
Epoch 12/300
90/90 [======] - 20s - loss: 7.9785 - acc: 0.5050 - val_loss: 8.3008 - val_acc: 0.4850
Epoch 00011: val_loss did not improve
Epoch 00011: early stopping
Done
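One clue in the reported numbers themselves: if the model's sigmoid outputs have fully saturated (hard 0/1 predictions), a crossentropy implementation that clips probabilities to an epsilon of 1e-7 (Keras' default `backend.epsilon()`) contributes about -ln(1e-7) ≈ 16.118 per wrongly classified sample and roughly 0 per correct one. A back-of-envelope check (a sketch assuming binary crossentropy with that clipping, not taken from the thread) reproduces the stuck loss values from the stuck accuracies almost exactly:

```python
import math

# Keras' default clipping epsilon for crossentropy (backend.epsilon()).
EPS = 1e-7
# Loss contributed by one saturated, fully wrong prediction: -ln(eps).
max_term = -math.log(EPS)

# Fraction of wrong predictions = 1 - accuracy, from the training log above.
train_loss = (1 - 0.5050) * max_term
val_loss = (1 - 0.4850) * max_term

print(round(train_loss, 4))  # 7.9785 -- matches the reported training loss
print(round(val_loss, 4))    # 8.3008 -- matches the reported val_loss
```

The near-exact match suggests the network is emitting saturated predictions from the very first epochs (e.g. a learning rate too high for VGG-style nets without batch norm), so accuracy sits at chance level and the loss never moves.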
Try ResNet
What causes this? VGG, AlexNet and the others all sit at around 0.5 accuracy; only the ResNet family trains normally. Are the other models broken, or is the whole setup wrong?