Fix ArgMaxLayer::Reshape for any num of bottom axes
timmeinhardt committed Nov 6, 2015
1 parent 0ec116e commit 987b3d8
Showing 2 changed files with 10 additions and 8 deletions.
14 changes: 7 additions & 7 deletions include/caffe/common_layers.hpp
@@ -53,8 +53,8 @@ class ArgMaxLayer : public Layer<Dtype> {
    *   -# @f$ (N \times C \times H \times W) @f$
    *      the inputs @f$ x @f$
    * @param top output Blob vector (length 1)
-   *   -# @f$ (N \times 1 \times K \times 1) @f$ or, if out_max_val
-   *      @f$ (N \times 2 \times K \times 1) @f$ unless axis set than e.g.
+   *   -# @f$ (N \times 1 \times K) @f$ or, if out_max_val
+   *      @f$ (N \times 2 \times K) @f$ unless axis set than e.g.
    *      @f$ (N \times K \times H \times W) @f$ if axis == 1
    *      the computed outputs @f$
    *       y_n = \arg\max\limits_i x_{ni}
@@ -81,13 +81,13 @@ class ArgMaxLayer : public Layer<Dtype> {
  * each channel in the data (i.e. axis 1), it subtracts the mean and divides
  * by the variance, where both statistics are computed across both spatial
  * dimensions and across the different examples in the batch.
- * 
+ *
  * By default, during training time, the network is computing global mean/
  * variance statistics via a running average, which is then used at test
  * time to allow deterministic outputs for each input. You can manually
  * toggle whether the network is accumulating or using the statistics via the
  * use_global_stats option. IMPORTANT: for this feature to work, you MUST
- * set the learning rate to zero for all three parameter blobs, i.e., 
+ * set the learning rate to zero for all three parameter blobs, i.e.,
  * param {lr_mult: 0} three times in the layer definition.
  *
  * Note that the original paper also included a per-channel learned bias and
@@ -96,10 +96,10 @@ class ArgMaxLayer : public Layer<Dtype> {
  * followed by a Convolution layer with output the same size as the current.
  * This produces a channel-specific value that can be added or multiplied by
  * the BatchNorm layer's output.
- * 
+ *
  * [1] S. Ioffe and C. Szegedy, "Batch Normalization: Accelerating Deep Network
- *     Training by Reducing Internal Covariate Shift." arXiv preprint
- *     arXiv:1502.03167 (2015).
+ *     Training by Reducing Internal Covariate Shift." arXiv preprint
+ *     arXiv:1502.03167 (2015).
  *
  * TODO(dox): thorough documentation for Forward, Backward, and proto params.
  */
4 changes: 3 additions & 1 deletion src/caffe/layers/argmax_layer.cpp
@@ -32,7 +32,9 @@ void ArgMaxLayer<Dtype>::LayerSetUp(const vector<Blob<Dtype>*>& bottom,
 template <typename Dtype>
 void ArgMaxLayer<Dtype>::Reshape(const vector<Blob<Dtype>*>& bottom,
     const vector<Blob<Dtype>*>& top) {
-  std::vector<int> shape(bottom[0]->num_axes(), 1);
+  int num_top_axes = bottom[0]->num_axes();
+  if ( num_top_axes < 3 ) num_top_axes = 3;
+  std::vector<int> shape(num_top_axes, 1);
   if (has_axis_) {
     // Produces max_ind or max_val per axis
     shape = bottom[0]->shape();
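The one-line clamp is the heart of the fix. When no axis is set, the remainder of Reshape (outside this hunk) produces the (N x 1 x K) / (N x 2 x K) top shapes from the docstring, so the shape vector needs at least three entries even when the bottom blob has fewer axes. A minimal standalone sketch of that code path, with a hypothetical helper name (not Caffe code):

// Hypothetical sketch of the no-axis code path after this commit.
#include <iostream>
#include <vector>

// Computes the top shape for a bottom blob with `bottom_axes` axes and
// `num` items on axis 0, mirroring the patched logic.
std::vector<int> ArgMaxTopShape(int bottom_axes, int num,
                                int top_k, bool out_max_val) {
  int num_top_axes = bottom_axes;
  if (num_top_axes < 3) num_top_axes = 3;  // the fix: pad to at least 3 axes
  std::vector<int> shape(num_top_axes, 1);
  shape[0] = num;                          // batch size
  shape[2] = top_k;                        // K indices per item (max_ind)
  if (out_max_val) shape[1] = 2;           // indices and values (max_val)
  return shape;
}

int main() {
  // A 2-axis bottom (e.g. an N x C InnerProduct output): before this commit
  // the shape vector was sized by bottom[0]->num_axes() alone, so it had only
  // two entries and writing shape[2] indexed past its end. Now the top is
  // padded to (N x 2 x K).
  for (int d : ArgMaxTopShape(2, 32, 5, true)) std::cout << d << " ";
  std::cout << std::endl;  // prints: 32 2 5
  return 0;
}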
