Commit

Merge branch 'master' of https://github.com/Microsoft/CNTK into dongyu/UCIFastReaderFix
Dong Yu committed Feb 16, 2016
2 parents 8e560b1 + 11f11b1 commit dcc49b9
Showing 33 changed files with 2,009 additions and 224 deletions.
53 changes: 34 additions & 19 deletions Documentation/CNTK-TechReport/lyx/CNTKBook_CNTK_Chapter.lyx
@@ -1771,8 +1771,8 @@ numMiniBatch4LRSearch
: the number of minibatches used to search the minibatch size when in adaptive
minibatch size mode.
Default value is 500.
-It's typically set to 10-20% of the total minibatches in an epoch. This is
-shared with the search for learning rate in SearchBeforeEpoch mode.
+It's typically set to 10-20% of the total minibatches in an epoch.
+This is shared with the search for learning rate in SearchBeforeEpoch mode.

\end_layout

@@ -1792,8 +1792,9 @@ autoAdjustMinibatch
\end_inset

: enable or disable whether minibatch size is adaptively adjusted.
-Default value is false. Adaptive minibatch sizing will begin on epochs starting
-after user minibatch sizes explicitly specified are complete.
+Default value is false.
+Adaptive minibatch sizing will begin on epochs starting after user minibatch
+sizes explicitly specified are complete.
For example, if the user specified minibatchSize=256:1024, then 256 and 1024
are used in the first 2 epochs and adaptive minibatch sizing is used afterwards.

@@ -1814,8 +1815,8 @@ minibatchSizeTuningFrequency

\end_inset

-: The number of epochs to skip, on a periodic basis, before dynamically adjusting
-the minibatch size.
+: The number of epochs to skip, on a periodic basis, before dynamically
+adjusting the minibatch size.
Default value is 1.

\end_layout
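Taken together, these settings live in the SGD section of the configuration.
A minimal sketch, assuming the autoAdjust sub-block layout used elsewhere in
this chapter (all values are illustrative):

SGD = [
    minibatchSize = 256:1024              # explicit sizes for the first two epochs
    autoAdjust = [
        autoAdjustMinibatch = true        # adaptive sizing takes over afterwards
        numMiniBatch4LRSearch = 500       # minibatches sampled during each search
        minibatchSizeTuningFrequency = 1  # epochs to skip between adjustments
    ]
]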
@@ -4775,6 +4776,22 @@ printValues
Default is true.
\end_layout

+\begin_layout Itemize
+printMetadata
+\begin_inset Index idx
+status open
+
+\begin_layout Plain Layout
+printMetadata
+\end_layout
+
+\end_inset
+
+– determines whether to print the metadata (node name, dimensions, etc.)
+associated with a node.
+Default is true.
+\end_layout
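A minimal sketch of a dump section exercising the new flag; the action name and
model path are assumptions for illustration. Note that, per the CNTK.cpp change
below, at least one of printValues and printMetadata must remain true:

dump = [
    action = "dumpnode"
    modelPath = "$ModelDir$/yourExp.dnn"
    printValues = true
    printMetadata = false   # print values only, without node names/dimensions
]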

\begin_layout Subsection
WriteWordAndClass Command
\begin_inset Index idx
@@ -5509,8 +5526,8 @@ traceLevel=0 # larger values mean more output

The default value is 0 and specifies minimal output.
The higher the number the more output can be expected.
-Currently 0 (limited output), 1 (medium output) and 2 (verbose output) are
-the only values supported.
+Currently 0 (limited output), 1 (medium output) and 2 (verbose output)
+are the only values supported.
\end_layout

\begin_layout Subsection
@@ -5831,8 +5848,7 @@ status open

\begin_layout Plain Layout

-cntk configFile=yourExp.cntk mnistTrain=[reader=[file="mynewfile.txt"]]
-
+cntk configFile=yourExp.cntk mnistTrain=[reader=[file="mynewfile.txt"]]
\end_layout

\end_inset
@@ -5891,8 +5907,8 @@ cntk configFile=yourExp1.cntk configFile=yourExp2.cntk
\end_layout

\begin_layout Standard
-If yourExp2.cntk only contains the string "mnistTrain=[reader=[file=mynewfile.tx
-t]]", then both of these commands would be equivalent to:
+If yourExp2.cntk only contains the string "mnistTrain=[reader=[file=mynewfile.txt]
+]", then both of these commands would be equivalent to:
\end_layout

\begin_layout Standard
@@ -5902,8 +5918,7 @@ status open

\begin_layout Plain Layout

-cntk configFile=yourExp1.cntk mnistTrain=[reader=[file="mynewfile.txt"]]
-
+cntk configFile=yourExp1.cntk mnistTrain=[reader=[file="mynewfile.txt"]]
\end_layout

\end_inset
@@ -5926,8 +5941,8 @@ status open

\begin_layout Plain Layout

-cntk configFile=yourExp1.cntk+yourExp2.cntk var1=value configFile=yourExp3.conf
-ig
+cntk configFile=yourExp1.cntk+yourExp2.cntk var1=value configFile=yourExp3.config

\end_layout

\end_inset
@@ -6007,9 +6022,9 @@ included
Including a configuration file is equivalent to pasting the contents of
that file at the location of the include statement.
Include statements are resolved recursively (using a depth-first search),
-meaning that if yourExpA.cntk includes yourExpB.cntk, and yourExpB.cntk
-includes yourExpC.cntk, then the full chain will be resolved, and yourExpC.conf
-ig will effectively be included in yourExpA.cntk.
+meaning that if yourExpA.cntk includes yourExpB.cntk, and yourExpB.cntk includes
+yourExpC.cntk, then the full chain will be resolved, and yourExpC.config
+will effectively be included in yourExpA.cntk.
If a configuration file is included multiple times (eg, 'A' includes 'B'
and 'C', and 'B' also includes 'C'), then it will effectively only be included
the first time it is encountered.
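As a sketch of the chain described here (file names are illustrative, and the
include directive is written in the form introduced earlier in this section;
treat the exact spelling as an assumption):

# yourExpA.cntk
include "yourExpB.cntk"   # pulls in B, which in turn pulls in C (depth-first)

# yourExpB.cntk
include "yourExpC.cntk"
include "yourExpC.cntk"   # duplicate; C is only included the first time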
2 changes: 1 addition & 1 deletion Examples/Image/Miscellaneous/ImageNet/ResNet/Macros.ndl
@@ -6,7 +6,7 @@ ConvBNLayerW(W, inp, outMap, kW, kH, hStride, vStride, bValue, scValue, expAvg)
isd = Parameter(outMap, 1, init = fixedValue, value = 0, needGradient = false)

c = Convolution(W, inp, kW, kH, outMap, hStride, vStride, zeroPadding = true, imageLayout = "cudnn")
-y = BatchNormalization(c, sc, b, m, isd, eval = false, spatial = true, expAvgFactor = expAvg, imageLayout = "cudnn")
+y = BatchNormalization(c, sc, b, m, isd, eval = false, spatial = true, expAvgFactor = expAvg, epsilon = 0.000000001, imageLayout = "cudnn")
}

ConvBNLayer(inp, outMap, inWCount, kW, kH, hStride, vStride, wScale, bValue, scValue, expAvg)
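For context, a hedged NDL sketch of invoking the macro after this change; W and
features are assumed to be defined elsewhere, and the numeric arguments are
illustrative only:

# epsilon is now pinned to 1e-9 inside ConvBNLayerW rather than left at the node default
c1 = ConvBNLayerW(W, features, 64, 3, 3, 1, 1, 0, 1, 0.997)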
35 changes: 22 additions & 13 deletions Makefile
@@ -58,8 +58,10 @@ CXX = mpic++

SOURCEDIR:= Source
INCLUDEPATH:= $(addprefix $(SOURCEDIR)/, Common/Include Math CNTK ActionsLib ComputationNetworkLib SGDLib SequenceTrainingLib CNTK/BrainScript Readers/ReaderLib)
-CPPFLAGS:= -D_POSIX_SOURCE -D_XOPEN_SOURCE=600 -D__USE_XOPEN2K
-CXXFLAGS:= -msse3 -std=c++0x -std=c++11 -fopenmp -fpermissive -fPIC -Werror -fcheck-new
+# COMMON_FLAGS include settings that are passed both to NVCC and C++ compilers.
+COMMON_FLAGS:= -D_POSIX_SOURCE -D_XOPEN_SOURCE=600 -D__USE_XOPEN2K -std=c++11
+CPPFLAGS:=
+CXXFLAGS:= -msse3 -std=c++0x -fopenmp -fpermissive -fPIC -Werror -fcheck-new
LIBPATH:=
LIBS:=
LDFLAGS:=
@@ -78,7 +80,7 @@ SRC:=
all : buildall
# Set up basic nvcc options and add CUDA targets from above
-CUFLAGS = -std=c++11 -D_POSIX_SOURCE -D_XOPEN_SOURCE=600 -D__USE_XOPEN2K -m 64
+CUFLAGS = -m 64
ifdef CUDA_PATH
ifndef GDK_PATH
@@ -110,26 +112,33 @@ ifdef CUDA_PATH
INCLUDEPATH += $(CUDNN_PATH)/cuda/include
LIBPATH += $(CUDNN_PATH)/cuda/lib64
LIBS += -lcudnn
-CPPFLAGS +=-DUSE_CUDNN
+COMMON_FLAGS +=-DUSE_CUDNN
endif
else
DEVICE = cpu
-CPPFLAGS +=-DCPUONLY
+COMMON_FLAGS +=-DCPUONLY
endif
ifeq ("$(MATHLIB)","acml")
INCLUDEPATH += $(ACML_PATH)/include
LIBPATH += $(ACML_PATH)/lib
LIBS += -lacml_mp -liomp5 -lm -lpthread
-CPPFLAGS += -DUSE_ACML
+COMMON_FLAGS += -DUSE_ACML
endif
ifeq ("$(MATHLIB)","mkl")
INCLUDEPATH += $(MKL_PATH)/mkl/include
LIBPATH += $(MKL_PATH)/compiler/lib/intel64 $(MKL_PATH)/mkl/lib/intel64 $(MKL_PATH)/compiler/lib/mic $(MKL_PATH)/mkl/lib/mic
LIBS += -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -lm -liomp5 -lpthread
-CPPFLAGS += -DUSE_MKL
+COMMON_FLAGS += -DUSE_MKL
endif
ifeq ("$(MATHLIB)","openblas")
INCLUDEPATH += $(OPENBLAS_PATH)/include
LIBPATH += $(OPENBLAS_PATH)/lib
LIBS += -lopenblas -lm -lpthread
CPPFLAGS += -DUSE_OPENBLAS
endif
@@ -161,7 +170,7 @@ ifeq ("$(BUILDTYPE)","debug")
CXXFLAGS += -g
LDFLAGS += -rdynamic
-CPPFLAGS += -D_DEBUG
+COMMON_FLAGS += -D_DEBUG
CUFLAGS += -O0 -g -use_fast_math -lineinfo $(GENCODE_FLAGS)
endif
@@ -174,7 +183,7 @@ ifeq ("$(BUILDTYPE)","release")
CXXFLAGS += -g -O4
LDFLAGS += -rdynamic
-CPPFLAGS += -DNDEBUG
+COMMON_FLAGS += -DNDEBUG
CUFLAGS += -O3 -g -use_fast_math -lineinfo $(GENCODE_FLAGS)
endif
@@ -245,7 +254,7 @@ MATH_SRC +=\
$(SOURCEDIR)/Math/GPUSparseMatrix.cu \
$(SOURCEDIR)/Math/GPUWatcher.cu \
$(SOURCEDIR)/Math/MatrixQuantizerGPU.cu \
-$(SOURCEDIR)/Math/CuDnnConvolutionEngine.cpp \
+$(SOURCEDIR)/Math/CuDnnConvolutionEngine.cu \
$(SOURCEDIR)/Math/GPUDataTransferer.cpp \
else
@@ -469,7 +478,7 @@ endif
INCLUDEPATH += $(SOURCEDIR)/1BitSGD
-CPPFLAGS += -DQUANTIZED_GRADIENT_AGGREGATION
+COMMON_FLAGS += -DQUANTIZED_GRADIENT_AGGREGATION
endif
########################################
@@ -549,13 +558,13 @@ $(OBJDIR)/%.o : %.cu Makefile
@echo $(SEPARATOR)
@echo creating $@ for $(ARCH) with build type $(BUILDTYPE)
@mkdir -p $(dir $@)
$(NVCC) -c $< -o $@ $(CUFLAGS) $(INCLUDEPATH:%=-I%) -Xcompiler "-fPIC -Werror"
$(NVCC) -c $< -o $@ $(COMMON_FLAGS) $(CUFLAGS) $(INCLUDEPATH:%=-I%) -Xcompiler "-fPIC -Werror"
$(OBJDIR)/%.o : %.cpp Makefile
@echo $(SEPARATOR)
@echo creating $@ for $(ARCH) with build type $(BUILDTYPE)
@mkdir -p $(dir $@)
$(CXX) -c $< -o $@ $(CPPFLAGS) $(CXXFLAGS) $(INCLUDEPATH:%=-I%) -MD -MP -MF ${@:.o=.d}
$(CXX) -c $< -o $@ $(COMMON_FLAGS) $(CPPFLAGS) $(CXXFLAGS) $(INCLUDEPATH:%=-I%) -MD -MP -MF ${@:.o=.d}
.PHONY: force clean buildall all
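Reduced to a self-contained sketch (the rule targets and the -DNDEBUG example
are illustrative; the variable names are the Makefile's own): COMMON_FLAGS
carries everything both compilers must see, while CPPFLAGS and CUFLAGS stay
specific to the C++ compiler and NVCC respectively.

# sketch: one shared definition feeds both compile rules
COMMON_FLAGS := -DNDEBUG
CPPFLAGS :=
CUFLAGS := -m 64

%.o : %.cpp
	$(CXX) -c $< -o $@ $(COMMON_FLAGS) $(CPPFLAGS) $(CXXFLAGS)

%.o : %.cu
	$(NVCC) -c $< -o $@ $(COMMON_FLAGS) $(CUFLAGS)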
2 changes: 1 addition & 1 deletion Source/CNTK/BrainScript/ExperimentalNetworkBuilder.cpp
@@ -66,7 +66,7 @@ wstring computationNodes = // TODO: use actual TypeName() here? would first need
L"ColumnwiseCrossProduct = KhatriRaoProduct // deprecated \n" // TODO: should it be deprecated? It is described as easier to understand in the CNTKBook.
L"ClassificationError = ErrorPrediction \n"
L"Delay = PastValue \n" // TODO: should it allow negative offsets and an if test here?
L"BatchNormalization(input, scale, bias, runMean, runInvStdDev, eval, spatial, expAvgFactor, imageLayout='CHW', tag='') = new ComputationNode [ operation = 'BatchNormalization' ; inputs = (input : scale : bias : runMean : runInvStdDev) /*plus the function args*/ ]\n"
L"BatchNormalization(input, scale, bias, runMean, runInvStdDev, eval, spatial, expAvgFactor = 1.0, epsilon = 0.00001, useCntkEngine = true, imageLayout='CHW', tag='') = new ComputationNode [ operation = 'BatchNormalization' ; inputs = (input : scale : bias : runMean : runInvStdDev) /*plus the function args*/ ]\n"
// standard nodes. We use macros to define these strings.
#define UnaryStandardNode(Op, a) L## #Op L"(" L## #a L", tag='') = new ComputationNode [ operation = '" L## #Op L"' ; inputs = " L## #a L" /*plus the function args*/ ]\n"
#define BinaryStandardNode(Op, a, b) L## #Op L"(" L## #a L", " L## #b L", tag='') = new ComputationNode [ operation = '" L## #Op L"' ; inputs = (" L## #a L" : " L## #b L") /*plus the function args*/ ]\n"
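A hedged BrainScript sketch of the widened definition; the input node names are
placeholders and the optional values shown are simply the defaults from the
signature above:

bn = BatchNormalization(conv, scale, bias, runMean, runInvStdDev,
                        eval = false, spatial = true,
                        expAvgFactor = 1.0, epsilon = 0.00001,
                        useCntkEngine = true, imageLayout = 'CHW')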
7 changes: 6 additions & 1 deletion Source/CNTK/CNTK.cpp
@@ -101,10 +101,15 @@ void DumpNodeInfo(const ConfigParameters& config)
wstring defOutFilePath = modelPath + L"." + nodeName + L".txt";
wstring outputFile = config(L"outputFile", defOutFilePath);
bool printValues = config(L"printValues", true);
+bool printMetadata = config(L"printMetadata", true);
+if (!printValues && !printMetadata)
+{
+InvalidArgument("printValues and printMetadata: Since both are set to false, there will be nothing to dump");
+}

ComputationNetwork net(-1); // always use CPU
net.Load<ElemType>(modelPath);
-net.DumpNodeInfoToFile(nodeName, printValues, outputFile, nodeNameRegexStr);
+net.DumpNodeInfoToFile(nodeName, printValues, printMetadata, outputFile, nodeNameRegexStr);
}

size_t GetMaxEpochs(const ConfigParameters& configParams)
4 changes: 2 additions & 2 deletions Source/CNTK/ModelEditLanguage.cpp
@@ -265,7 +265,7 @@ void MELScript<ElemType>::CallFunction(const std::string& p_name, const ConfigPa
{
NetNdl<ElemType>* netNdl = &found->second;
ProcessNDLScript(netNdl, ndlPassAll, true);
found->second.cn->DumpAllNodesToFile(includeData, fileName);
found->second.cn->DumpAllNodesToFile(includeData, true, fileName);
}
}
else if (EqualInsensitive(name, "DumpNode"))
@@ -281,7 +281,7 @@ void MELScript<ElemType>::CallFunction(const std::string& p_name, const ConfigPa
NetNdl<ElemType>* netNdl;
vector<ComputationNodeBasePtr> nodes = FindSymbols(params[0], netNdl);
ProcessNDLScript(netNdl, ndlPassAll);
netNdl->cn->DumpNodeInfoToFile(nodes, includeData, fileName);
netNdl->cn->DumpNodeInfoToFile(nodes, includeData, true, fileName);
}
else if (EqualInsensitive(name, "CopyNode", "Copy"))
{
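A hedged MEL sketch of the affected command; the model and node names are
hypothetical and the DumpNode signature is inferred from the handler above.
With this change, node metadata is always printed from MEL dumps, and only the
inclusion of values (includeData) remains configurable:

m1 = LoadModel("yourExp.dnn", format = "cntk")
DumpNode(m1.W0, "W0.dump.txt", includeData = true)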
2 changes: 1 addition & 1 deletion Source/CNTK/NDLUtil.h
@@ -94,7 +94,7 @@ class NDLUtil
// if requested then dump the nodes
// Note: This happens on the invalidated network.
if (dumpFileName != L"")
m_net->DumpAllNodesToFile(false, dumpFileName);
m_net->DumpAllNodesToFile(false, true, dumpFileName);
}
SynchronousNodeEvaluator<ElemType> ndlEvaluator(m_net);
NDLNode<ElemType>* lastNode = script->Evaluate(ndlEvaluator, L"", ndlPass, skipThrough);
11 changes: 10 additions & 1 deletion Source/CNTK/SynchronousExecutionEngine.cpp
@@ -400,9 +400,18 @@ void SynchronousNodeEvaluator<ElemType>::Evaluate(NDLNode<ElemType>* node, const
bool eval = node->GetOptionalParameter("eval", "false");
bool spatial = node->GetOptionalParameter("spatial", "false");
double expAvgFactor = node->GetOptionalParameter("expAvgFactor", "1.0");
+double epsilon = node->GetOptionalParameter("epsilon", "0.00001");
+std::wstring bnEngineS = node->GetOptionalParameter("engine", "cntk");
+bool useCntkEngine;
+if (EqualCI(bnEngineS, L"cntk"))
+useCntkEngine = true;
+else if (EqualCI(bnEngineS, L"cudnn"))
+useCntkEngine = false;
+else
+InvalidArgument("Unsupported batch normalization engine, choose either \"cntk\"(default) or \"cudnn\".");
ImageLayoutKind imageLayoutKind = ImageLayoutKindFrom(node->GetOptionalParameter("imageLayout", "CHW"));

-nodePtr = builder.BatchNormalization(nullptr, nullptr, nullptr, nullptr, nullptr, eval, spatial, expAvgFactor, imageLayoutKind, name);
+nodePtr = builder.BatchNormalization(nullptr, nullptr, nullptr, nullptr, nullptr, eval, spatial, expAvgFactor, epsilon, useCntkEngine, imageLayoutKind, name);
}
}
else
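A hedged NDL sketch using the optional parameters parsed above; the node names
are placeholders, and "engine" accepts "cntk" (the default) or "cudnn":

y = BatchNormalization(c, sc, b, m, isd, eval = false, spatial = true,
                       expAvgFactor = 1.0, epsilon = 0.00001,
                       engine = "cudnn", imageLayout = "cudnn")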
14 changes: 8 additions & 6 deletions Source/ComputationNetworkLib/ComputationNetwork.h
@@ -609,7 +609,7 @@ class ComputationNetwork : public ScriptableObjects::Object, public ScriptableOb
// if node name is not found, dump all nodes
// otherwise dump just that node
// This function is called from MEL, i.e. must be prepared to operate on an uncompiled network (only m_nameToNodeMap is valid).
void DumpNodeInfoToFile(const std::wstring& nodeName, const bool printValues, const std::wstring outputFile, const std::wstring& nodeNameInRegEx = L"")
void DumpNodeInfoToFile(const std::wstring& nodeName, const bool printValues, const bool printMetadata, const std::wstring outputFile, const std::wstring& nodeNameInRegEx = L"")
{
if (nodeNameInRegEx.empty())
{
@@ -619,13 +619,13 @@ class ComputationNetwork : public ScriptableObjects::Object, public ScriptableOb
FileOptions::fileOptionsText | FileOptions::fileOptionsWrite);

const ComputationNodeBasePtr& nodePtr = GetNodeFromName(nodeName);
nodePtr->DumpNodeInfo(printValues, fstream);
nodePtr->DumpNodeInfo(printValues, printMetadata, fstream);
}
else // node name is not found, dump all nodes
{
fprintf(stderr, "Warning: node name %ls does not exist in the network. dumping all nodes.\n",
nodeName.c_str());
-DumpAllNodesToFile(printValues, outputFile);
+DumpAllNodesToFile(printValues, printMetadata, outputFile);
}
}
else
@@ -647,12 +647,13 @@ class ComputationNetwork : public ScriptableObjects::Object, public ScriptableOb
fprintf(stderr, "\t%ls\n", x.c_str());
}
fprintf(stderr, "DumpNodeInfo: dumping node info (%s printing values) to %ls\n", printValues ? "with" : "without", outputFile.c_str());
-DumpNodeInfoToFile(NodeList, printValues, outputFile);
+DumpNodeInfoToFile(NodeList, printValues, printMetadata, outputFile);
}
}

// dump all nodes in the network to file
void DumpAllNodesToFile(const bool printValues,
+const bool printMetadata,
const std::wstring outputFile)
{
File fstream(outputFile,
Expand All @@ -661,12 +662,13 @@ class ComputationNetwork : public ScriptableObjects::Object, public ScriptableOb
for (auto nodeIter = m_nameToNodeMap.begin(); nodeIter != m_nameToNodeMap.end(); nodeIter++)
{
ComputationNodeBasePtr nodePtr = nodeIter->second;
nodePtr->DumpNodeInfo(printValues, fstream);
nodePtr->DumpNodeInfo(printValues, printMetadata, fstream);
}
}

void DumpNodeInfoToFile(const vector<ComputationNodeBasePtr>& nodes,
const bool printValues,
+const bool printMetadata,
const std::wstring outputFile)
{
File fstream(outputFile,
Expand All @@ -675,7 +677,7 @@ class ComputationNetwork : public ScriptableObjects::Object, public ScriptableOb
for (auto nodeIter = nodes.begin(); nodeIter != nodes.end(); nodeIter++)
{
ComputationNodeBasePtr nodePtr = *nodeIter;
nodePtr->DumpNodeInfo(printValues, fstream);
nodePtr->DumpNodeInfo(printValues, printMetadata, fstream);
}
}

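A minimal C++ sketch of the updated dump API; the model path, node name, and
float element type are illustrative. Recall that DumpNodeInfo in CNTK.cpp above
rejects the case where both flags are false:

// dump only metadata (no values) for node "W0" of a model loaded on the CPU
ComputationNetwork net(-1);
net.Load<float>(L"yourExp.dnn");
net.DumpNodeInfoToFile(L"W0", /*printValues=*/false, /*printMetadata=*/true,
                       L"W0.dump.txt");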
4 changes: 2 additions & 2 deletions Source/ComputationNetworkLib/ComputationNetworkBuilder.cpp
@@ -603,9 +603,9 @@ shared_ptr<ComputationNode<ElemType>> ComputationNetworkBuilder<ElemType>::Looku
template <class ElemType>
shared_ptr<ComputationNode<ElemType>> ComputationNetworkBuilder<ElemType>::BatchNormalization(const ComputationNodePtr input,
const ComputationNodePtr scale, const ComputationNodePtr bias, const ComputationNodePtr runMean, const ComputationNodePtr runInvStdDev,
-bool eval, bool spatial, double expAvgFactor, ImageLayoutKind imageLayoutKind, const std::wstring nodeName)
+bool eval, bool spatial, double expAvgFactor, double epsilon, bool useCntkEngine, ImageLayoutKind imageLayoutKind, const std::wstring nodeName)
{
-return net.AddNodeToNetAndAttachInputs(New<BatchNormalizationNode<ElemType>>(net.GetDeviceId(), nodeName, eval, spatial, expAvgFactor, imageLayoutKind),
+return net.AddNodeToNetAndAttachInputs(New<BatchNormalizationNode<ElemType>>(net.GetDeviceId(), nodeName, eval, spatial, expAvgFactor, epsilon, useCntkEngine, imageLayoutKind),
input, scale, bias, runMean, runInvStdDev);
}

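And a corresponding sketch of the widened builder call; the five input nodes
are passed as nullptr exactly as SynchronousExecutionEngine does above, and the
node name is illustrative:

// creates a BatchNormalization node with explicit epsilon and engine selection
auto bn = builder.BatchNormalization(nullptr, nullptr, nullptr, nullptr, nullptr,
                                     /*eval=*/false, /*spatial=*/true,
                                     /*expAvgFactor=*/1.0, /*epsilon=*/0.00001,
                                     /*useCntkEngine=*/true,
                                     ImageLayoutKind::CHW, L"bn1");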