Commit
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into dockerreadme
yi.wu committed Mar 22, 2017
2 parents 5d0f9d3 + 4db0471 commit 3569466
Showing 23 changed files with 505 additions and 229 deletions.
1 change: 0 additions & 1 deletion .dockerignore

This file was deleted.

15 changes: 15 additions & 0 deletions .dockerignore
@@ -0,0 +1,15 @@
*.DS_Store
build/
*.user
.vscode
.idea
.project
.cproject
.pydevproject
Makefile
.test_env/
third_party/
*~
bazel-*

!build/*.deb
7 changes: 1 addition & 6 deletions Dockerfile
@@ -3,20 +3,17 @@
FROM nvidia/cuda:7.5-cudnn5-devel-ubuntu14.04
MAINTAINER PaddlePaddle Authors <[email protected]>

ARG DEBIAN_FRONTEND=noninteractive
ARG UBUNTU_MIRROR
RUN /bin/bash -c 'if [[ -n ${UBUNTU_MIRROR} ]]; then sed -i 's#http://archive.ubuntu.com/ubuntu#${UBUNTU_MIRROR}#g' /etc/apt/sources.list; fi'

# ENV variables
ARG BUILD_WOBOQ
ARG BUILD_AND_INSTALL
ARG WITH_GPU
ARG WITH_AVX
ARG WITH_DOC
ARG WITH_STYLE_CHECK

ENV BUILD_WOBOQ=${BUILD_WOBOQ:-OFF}
ENV BUILD_AND_INSTALL=${BUILD_AND_INSTALL:-OFF}
ENV WITH_GPU=${WITH_GPU:-OFF}
ENV WITH_AVX=${WITH_AVX:-ON}
ENV WITH_DOC=${WITH_DOC:-OFF}
@@ -31,7 +28,7 @@ RUN apt-get update && \
apt-get install -y wget unzip tar xz-utils bzip2 gzip coreutils && \
apt-get install -y curl sed grep graphviz libjpeg-dev zlib1g-dev && \
apt-get install -y python-numpy python-matplotlib gcc g++ gfortran && \
apt-get install -y automake locales clang-format-3.8 && \
apt-get install -y automake locales clang-format-3.8 swig && \
apt-get clean -y

# git credential to skip password typing
@@ -51,8 +48,6 @@ RUN curl -sSL https://cmake.org/files/v3.4/cmake-3.4.1.tar.gz | tar -xz && \
cd cmake-3.4.1 && ./bootstrap && make -j `nproc` && make install && \
cd .. && rm -rf cmake-3.4.1

RUN apt-get install -y swig

VOLUME ["/usr/share/nginx/html/data", "/usr/share/nginx/html/paddle"]

# Configure OpenSSH server. c.f. https://docs.docker.com/engine/examples/running_ssh_service
48 changes: 24 additions & 24 deletions README.md
@@ -2,8 +2,8 @@


[![Build Status](https://travis-ci.org/PaddlePaddle/Paddle.svg?branch=develop)](https://travis-ci.org/PaddlePaddle/Paddle)
[![Documentation Status](https://img.shields.io/badge/docs-latest-brightgreen.svg?style=flat)](http://www.paddlepaddle.org/)
[![Documentation Status](https://img.shields.io/badge/中文文档-最新-brightgreen.svg)](http://www.paddlepaddle.org/cn/index.html)
[![Documentation Status](https://img.shields.io/badge/docs-latest-brightgreen.svg?style=flat)](http://www.paddlepaddle.org/develop/doc/)
[![Documentation Status](https://img.shields.io/badge/中文文档-最新-brightgreen.svg)](http://www.paddlepaddle.org/doc_cn/)
[![Coverage Status](https://coveralls.io/repos/github/PaddlePaddle/Paddle/badge.svg?branch=develop)](https://coveralls.io/github/PaddlePaddle/Paddle?branch=develop)
[![Release](https://img.shields.io/github/release/PaddlePaddle/Paddle.svg)](https://github.com/PaddlePaddle/Paddle/releases)
[![License](https://img.shields.io/badge/license-Apache%202-blue.svg)](LICENSE)
@@ -59,36 +59,36 @@ Please refer to our [release announcement](https://github.com/PaddlePaddl
the capability of PaddlePaddle to make a huge impact for your product.

## Installation
Check out the [Install Guide](http://paddlepaddle.org/doc/build/) to install from
pre-built packages (**docker image**, **deb package**) or
directly build on **Linux** and **Mac OS X** from the source code.

It is recommended to check out the
[Docker installation guide](http://www.paddlepaddle.org/develop/doc/getstarted/build_and_install/docker_install_en.html)
before looking into the
[build from source guide](http://www.paddlepaddle.org/develop/doc/getstarted/build_and_install/build_from_source_en.html)

## Documentation
Both [English Docs](http://paddlepaddle.org/doc/) and [Chinese Docs](http://paddlepaddle.org/doc_cn/) are provided for our users and developers.

- [Quick Start](http://paddlepaddle.org/doc/demo/quick_start/index_en) <br>
You can follow the quick start tutorial to learn how to use PaddlePaddle
step-by-step.
We provide [English](http://www.paddlepaddle.org/develop/doc/) and
[Chinese](http://www.paddlepaddle.org/doc_cn/) documentation.

- [Deep Learning 101](http://book.paddlepaddle.org/index.en.html)

You might want to start with this online interactive book, which runs in Jupyter Notebook.

- [Distributed Training](http://www.paddlepaddle.org/develop/doc/howto/usage/cluster/cluster_train_en.html)

You can run distributed training jobs on MPI clusters.

- [Distributed Training on Kubernetes](http://www.paddlepaddle.org/develop/doc/howto/usage/k8s/k8s_en.html)

- [Example and Demo](http://paddlepaddle.org/doc/demo/) <br>
We provide five demos, including: image classification, sentiment analysis,
sequence to sequence model, recommendation, semantic role labeling.
You can also run distributed training jobs on Kubernetes clusters.

- [Distributed Training](http://paddlepaddle.org/doc/cluster) <br>
This system supports training deep learning models on multiple machines
with data parallelism.
- [Python API](http://www.paddlepaddle.org/develop/doc/api/index_en.html)

- [Python API](http://paddlepaddle.org/doc/ui/) <br>
PaddlePaddle supports using either Python interface or C++ to build your
system. We also use SWIG to wrap C++ source code to create a user friendly
interface for Python. You can also use SWIG to create interface for your
favorite programming language.
Our new API enables much shorter programs.

- [How to Contribute](http://paddlepaddle.org/doc/build/contribute_to_paddle.html) <br>
We sincerely appreciate your interest and contributions. If you would like to
contribute, please read the contribution guide.
- [How to Contribute](http://www.paddlepaddle.org/develop/doc/howto/dev/contribute_to_paddle_en.html)

- [Source Code Documents](http://paddlepaddle.org/doc/source/) <br>
We appreciate your contributions!

## Ask Questions

2 changes: 1 addition & 1 deletion cmake/FindSphinx.cmake
@@ -72,7 +72,7 @@ function( Sphinx_add_target target_name builder conf cache source destination )
${source}
${destination}
COMMENT "Generating sphinx documentation: ${builder}"
COMMAND cd ${destination} && ln -s ./index_*.html index.html
COMMAND cd ${destination} && ln -sf ./index_*.html index.html
)

set_property(
9 changes: 0 additions & 9 deletions paddle/gserver/dataproviders/DataProvider.h
@@ -164,15 +164,6 @@ class DataBatch {
argu.value = value;
data_.push_back(argu);
}
/**
* @brief Append user defined data
* @param[in] ptr user defined data
*/
void appendUserDefinedPtr(UserDefinedVectorPtr ptr) {
Argument argu;
argu.udp = ptr;
data_.push_back(argu);
}

/*
* @brief Append argument
53 changes: 53 additions & 0 deletions paddle/gserver/layers/CostLayer.cpp
@@ -192,6 +192,59 @@ void SumOfSquaresCostLayer::backwardImp(Matrix& output,
outputG.sumOfSquaresBp(output, *label.value);
}

//
// class SmoothL1CostLayer
//

REGISTER_LAYER(smooth_l1, SmoothL1CostLayer);

bool SmoothL1CostLayer::init(const LayerMap& layerMap,
const ParameterMap& parameterMap) {
return CostLayer::init(layerMap, parameterMap);
}

void SmoothL1CostLayer::forwardImp(Matrix& output,
Argument& label,
Matrix& target) {
MatrixPtr targetCpu, outputCpu, labelCpu;
if (useGpu_) {
targetCpu =
Matrix::create(target.getHeight(), target.getWidth(), false, false);
outputCpu =
Matrix::create(output.getHeight(), output.getWidth(), false, false);
labelCpu = Matrix::create(
label.value->getHeight(), label.value->getWidth(), false, false);
targetCpu->copyFrom(target);
outputCpu->copyFrom(output);
labelCpu->copyFrom(*label.value);
targetCpu->smoothL1(*outputCpu, *(labelCpu));
target.copyFrom(*targetCpu);
} else {
target.smoothL1(output, *label.value);
}
}

void SmoothL1CostLayer::backwardImp(Matrix& output,
Argument& label,
Matrix& outputG) {
MatrixPtr outputGCpu, outputCpu, labelCpu;
if (useGpu_) {
outputGCpu =
Matrix::create(outputG.getHeight(), outputG.getWidth(), false, false);
outputCpu =
Matrix::create(output.getHeight(), output.getWidth(), false, false);
labelCpu = Matrix::create(
label.value->getHeight(), label.value->getWidth(), false, false);
outputGCpu->copyFrom(outputG);
outputCpu->copyFrom(output);
labelCpu->copyFrom(*label.value);
outputGCpu->smoothL1Bp(*outputCpu, *labelCpu);
outputG.copyFrom(*outputGCpu);
} else {
outputG.smoothL1Bp(output, *label.value);
}
}

//
// class RankingCost
//
23 changes: 23 additions & 0 deletions paddle/gserver/layers/CostLayer.h
@@ -159,6 +159,29 @@ class SumOfSquaresCostLayer : public CostLayer {
Matrix& outputGrad) override;
};

/**
 * This cost layer computes the smooth L1 loss for real-valued regression
 * tasks.
 * \f[
 * L = \begin{cases}
 * 0.5 \, (output - label)^2, & \text{if } |output - label| < 1 \\
 * |output - label| - 0.5, & \text{otherwise}
 * \end{cases}
 * \f]
 */
class SmoothL1CostLayer : public CostLayer {
public:
explicit SmoothL1CostLayer(const LayerConfig& config) : CostLayer(config) {}

bool init(const LayerMap& layerMap,
const ParameterMap& parameterMap) override;

void forwardImp(Matrix& output, Argument& label, Matrix& cost) override;

void backwardImp(Matrix& outputValue,
Argument& label,
Matrix& outputGrad) override;
};

/**
* A cost layer for the learning to rank (LTR) task. This layer contains at least
* three inputs.
9 changes: 4 additions & 5 deletions paddle/gserver/layers/SequencePoolLayer.cpp
@@ -56,17 +56,16 @@ void SequencePoolLayer::forward(PassType passType) {
CHECK_EQ(newBatchSize_, starts->getSize() - 1);

resetOutput(newBatchSize_, dim);
if (type_) {
CHECK(input.subSequenceStartPositions)
<< "when trans_type = seq, input must hasSubseq";
}

/* If type_ = kNonSeq, both seq has or not has sub-seq degrade to a non-seq,
* thus, in this case, output_ has no sequenceStartPositions.
* If type_ = kSeq, seq has sub-seq degrades to a seq, thus, only in this
* case, we should compute the new sequenceStartPositions.
*/
if (type_) {
output_.degradeSequence(input, useGpu_);
CHECK(input.subSequenceStartPositions)
<< "when trans_type = seq, input must hasSubseq";
output_.degradeSequence(input);
}
}

14 changes: 14 additions & 0 deletions paddle/gserver/tests/test_LayerGrad.cpp
@@ -1602,6 +1602,20 @@ TEST(Layer, PadLayer) {
}
}

TEST(Layer, smooth_l1) {
TestConfig config;
config.layerConfig.set_type("smooth_l1");

config.inputDefs.push_back({INPUT_DATA, "layer_0", 1, 0});
config.inputDefs.push_back({INPUT_DATA_TARGET, "layer_1", 1, 0});
config.layerConfig.add_inputs();
config.layerConfig.add_inputs();

for (auto useGpu : {false, true}) {
testLayerGrad(config, "smooth_l1", 100, false, useGpu, false, 2.0);
}
}

int main(int argc, char** argv) {
testing::InitGoogleTest(&argc, argv);
initMain(argc, argv);
49 changes: 49 additions & 0 deletions paddle/math/Matrix.cpp
@@ -3590,6 +3590,55 @@ void CpuMatrix::sumOfSquaresBp(Matrix& output, Matrix& label) {
}
}

void CpuMatrix::smoothL1(Matrix& output, Matrix& label) {
CHECK(output.useGpu_ == false && label.useGpu_ == false)
<< "Matrix type are not equal";

size_t numSamples = getHeight();
size_t dim = output.getWidth();
CHECK_EQ(label.getHeight(), numSamples);
CHECK_EQ(output.getHeight(), numSamples);
CHECK_EQ(label.getWidth(), dim);
CHECK_EQ(getWidth(), (size_t)1);
real* out = output.getData();
real* cost = getData();
real* lbl = label.getData();

for (size_t i = 0; i < numSamples; ++i, out += dim, cost += dim, lbl += dim) {
for (size_t j = 0; j < dim; ++j) {
cost[j] = std::fabs(out[j] - lbl[j]);
if (cost[j] < 1.0)
cost[j] = 0.5 * cost[j] * cost[j];
else
cost[j] = cost[j] - 0.5;
}
}
}

void CpuMatrix::smoothL1Bp(Matrix& output, Matrix& label) {
CHECK(output.useGpu_ == false && label.useGpu_ == false)
<< "Matrix type are not equal";

size_t numSamples = getHeight();
size_t dim = output.getWidth();
CHECK_EQ(label.getHeight(), numSamples);
CHECK_EQ(output.getHeight(), numSamples);
CHECK_EQ(label.getWidth(), dim);
CHECK_EQ(getWidth(), (size_t)1);
real* out = output.getData();
real* cost = getData();
real* lbl = label.getData();

// f'(x) = x if |x| < 1
// = sign(x) otherwise
for (size_t i = 0; i < numSamples; ++i, out += dim, cost += dim, lbl += dim) {
for (size_t j = 0; j < dim; ++j) {
cost[j] = out[j] - lbl[j];
if (std::fabs(cost[j]) >= 1) cost[j] = (0 < cost[j]) - (cost[j] < 0);
}
}
}

void CpuMatrix::tanh(Matrix& output) {
CHECK(isContiguous());
CHECK(output.isContiguous());
11 changes: 11 additions & 0 deletions paddle/math/Matrix.h
@@ -783,6 +783,14 @@ class Matrix : public BaseMatrix {
LOG(FATAL) << "Not implemented";
}

virtual void smoothL1(Matrix& output, Matrix& label) {
LOG(FATAL) << "Not implemented";
}

virtual void smoothL1Bp(Matrix& outputV, Matrix& label) {
LOG(FATAL) << "Not implemented";
}

virtual void tanh(Matrix& output) { LOG(FATAL) << "Not implemented"; }

virtual void tanhDerivative(Matrix& output) {
@@ -1720,6 +1728,9 @@ class CpuMatrix : public Matrix {
/// gradient of sumOfSquares.
void sumOfSquaresBp(Matrix& outputV, Matrix& label);

void smoothL1(Matrix& output, Matrix& label);
void smoothL1Bp(Matrix& output, Matrix& label);

void tanh(Matrix& output);
void tanhDerivative(Matrix& output);
