Keras code for our paper "DFNet: Discriminative feature extraction and integration network for salient object detection"
Our paper can be found at: ScienceDirect & arXiv.
Our newer salient object detection model, accepted by Pattern Recognition, can be found at: ScienceDirect & arXiv & GitHub
You can download the pre-computed saliency maps for the DUTS-TE, ECSSD, DUT-OMRON, PASCAL-S, HKU-IS, SOD, and THUR15K datasets from: Google Drive & Baidu (extraction code: e2g7).
To evaluate the performance of salient object detection models using the pre-computed saliency maps, you can use the code provided at: GitHub. It supports:
1- Quantitative comparison
2- Qualitative comparison
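As a minimal sketch of what the quantitative comparison measures, the two standard saliency metrics are MAE and F-measure. The snippet below is an illustrative NumPy implementation (not the linked evaluation code); `beta2=0.3` follows the usual convention in saliency evaluation, and the fixed `threshold` is a simplification:

```python
import numpy as np

def mae(pred, gt):
    # Mean Absolute Error between a saliency map and ground truth, both in [0, 1]
    return np.mean(np.abs(pred - gt))

def f_measure(pred, gt, beta2=0.3, threshold=0.5, eps=1e-8):
    # F-measure at a fixed binarization threshold (real benchmarks
    # typically sweep thresholds or use an adaptive one)
    binary = (pred >= threshold).astype(np.float64)
    tp = np.sum(binary * gt)
    precision = tp / (np.sum(binary) + eps)
    recall = tp / (np.sum(gt) + eps)
    return (1 + beta2) * precision * recall / (beta2 * precision + recall + eps)

gt = np.array([[1.0, 1.0], [0.0, 0.0]])
pred = np.array([[0.9, 0.8], [0.1, 0.2]])
print(mae(pred, gt))        # 0.15
print(f_measure(pred, gt))  # close to 1.0 for this confident prediction
```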
Compared to the Cross-entropy Loss, our Sharpening Loss guides the network to output saliency maps with higher certainty and sharper salient objects, which are much closer to the ground truth.
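To illustrate the intuition (this is a toy sketch, not the paper's exact formulation; the weight `lam` is a placeholder, consult the paper for the precise loss), a loss built from an F-measure term plus an MAE term penalizes a blurry, uncertain map much more than a confident one:

```python
import numpy as np

def sharpening_style_loss(pred, gt, beta2=0.3, lam=1.0, eps=1e-8):
    # Illustrative loss: (1 - soft F-measure) + lam * MAE.
    # lam is a placeholder weight, not the value used in the paper.
    tp = np.sum(pred * gt)
    precision = tp / (np.sum(pred) + eps)
    recall = tp / (np.sum(gt) + eps)
    f = (1 + beta2) * precision * recall / (beta2 * precision + recall + eps)
    return (1.0 - f) + lam * np.mean(np.abs(pred - gt))

gt = np.array([1.0, 1.0, 0.0, 0.0])
confident = np.array([0.95, 0.9, 0.05, 0.1])  # sharp, near-binary prediction
blurry = np.array([0.6, 0.6, 0.4, 0.4])       # uncertain, blurry prediction
print(sharpening_style_loss(confident, gt) < sharpening_style_loss(blurry, gt))  # True
```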
If you want to train the model with the VGG16 backbone, run:

```
python main.py --batch_size=8 --Backbone_model="VGG16"
```
You can also use any of the following three options as the `Backbone_model`: "ResNet50", "NASNetMobile", or "NASNetLarge".
In addition to `batch_size` and `Backbone_model`, you can set these training configurations: `learning_rate`, `epochs`, `train_set_directory`, `save_directory`, `use_multiprocessing`, and `show_ModelSummary`.
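For example, several of these options can be combined in one command (the values and paths below are purely illustrative, not recommended settings):

```shell
python main.py --batch_size=8 --Backbone_model="ResNet50" \
    --learning_rate=1e-4 --epochs=20 \
    --train_set_directory="path/to/train_set" \
    --save_directory="path/to/checkpoints"
```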
@article{noori2020dfnet,
title={DFNet: Discriminative feature extraction and integration network for salient object detection},
author={Noori, Mehrdad and Mohammadi, Sina and Majelan, Sina Ghofrani and Bahri, Ali and Havaei, Mohammad},
journal={Engineering Applications of Artificial Intelligence},
volume={89},
pages={103419},
year={2020},
publisher={Elsevier}
}