
Pytorch implementation of paper: [Tell Me Where to Look: Guided Attention Inference Network](http://openaccess.thecvf.com/content_cvpr_2018/papers/Li_Tell_Me_Where_CVPR_2018_paper.pdf)

# GAIN Framework

The GAIN framework allows the model to focus on specific areas of an object by guiding its attention maps (Grad-CAM).
The flow of the framework is summarized as follows:

  * First, register forward and backward hooks on the last convolution layer (or block) to capture its features and gradients.
  * The model generates the attention maps (Grad-CAM) from the forward features and the backward gradients, then normalizes them using a threshold and a sigmoid function.
  * The attention map now covers most of the important information about the object. We want to tell the model that those areas matter for the task. This is done by applying the attention map to the original image: the resulting `masked_image` should contain little useful information, so when we feed it back into the model we expect its prediction score to be as low as possible. That is the idea of attention mining in the paper.
  * The losses are computed in `GAINCriterionCallback`. A sketch of the whole flow is given after this list.
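
The steps above can be pictured with a minimal PyTorch sketch. The class and argument names (`GAIN`, `grad_layer`, `sigma`, `omega`) are assumptions chosen for illustration and do not necessarily match this repository's code; only the overall flow follows the list above.

```python
import torch
import torch.nn.functional as F


class GAIN(torch.nn.Module):
    """Sketch of the GAIN flow: hooks -> Grad-CAM -> soft mask -> attention mining."""

    def __init__(self, model, grad_layer, sigma=0.5, omega=10.0):
        super().__init__()
        self.model = model
        self.sigma = sigma   # threshold applied to the normalized attention map
        self.omega = omega   # sharpness of the sigmoid around the threshold
        self._fmaps = None
        self._grads = None
        # Step 1: register forward/backward hooks on the chosen conv layer.
        layer = dict(model.named_modules())[grad_layer]
        layer.register_forward_hook(self._save_fmaps)
        layer.register_full_backward_hook(self._save_grads)

    def _save_fmaps(self, module, inputs, output):
        self._fmaps = output

    def _save_grads(self, module, grad_input, grad_output):
        self._grads = grad_output[0]

    def forward(self, images, one_hot_labels):
        # Classification pass; the forward hook stores the feature maps.
        logits = self.model(images)

        # Backprop the score of the labelled classes to fill the backward hook.
        score = (logits * one_hot_labels).sum()
        self.model.zero_grad()
        score.backward(retain_graph=True)

        # Step 2: Grad-CAM from the feature maps and their gradients.
        weights = F.adaptive_avg_pool2d(self._grads, 1)               # (N, C, 1, 1)
        cam = F.relu((weights * self._fmaps).sum(dim=1, keepdim=True))
        cam = F.interpolate(cam, size=images.shape[-2:],
                            mode='bilinear', align_corners=False)
        cam_min = cam.amin(dim=(2, 3), keepdim=True)
        cam_max = cam.amax(dim=(2, 3), keepdim=True)
        cam = (cam - cam_min) / (cam_max - cam_min + 1e-8)

        # Step 3: soft mask via thresholded sigmoid, then erase the attended region.
        mask = torch.sigmoid(self.omega * (cam - self.sigma))
        masked_images = images - images * mask

        # Step 4: attention mining -- the masked image should score low.
        masked_logits = self.model(masked_images)
        return logits, masked_logits, cam
```

In the paper, the total loss combines the ordinary classification loss on `logits` with an attention-mining term that pushes the masked image's score for the labelled class toward zero; in this repository that combination is handled by `GAINCriterionCallback`.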

In this implementation, resnet50 is selected as the base model for the GAIN framework. You can change the backbone and its gradient layer as needed (see the sketch below).
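
A minimal sketch of swapping the backbone, assuming the `GAIN` wrapper from the sketch above and torchvision layer names; the actual configuration in this repository may differ.

```python
import torchvision

# resnet50's last residual block ('layer4') serves as the gradient layer.
backbone = torchvision.models.resnet50(pretrained=True)
gain = GAIN(model=backbone, grad_layer='layer4')

# Any other backbone works as long as you point at its last conv block,
# e.g. the 'features' module of a VGG-style network:
# backbone = torchvision.models.vgg16(pretrained=True)
# gain = GAIN(model=backbone, grad_layer='features')
```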

## Train GAIN

```bash
bash bin/train_gain.sh
```
