
Commit

added the SVD test to the Solution and fixed its baseline files to match the changed log messages
frankseide committed Dec 1, 2015
1 parent 24a0e80 commit 4932a5c
Showing 7 changed files with 35 additions and 23 deletions.
12 changes: 12 additions & 0 deletions CNTK.sln
@@ -456,6 +456,17 @@ Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "QuickE2E", "QuickE2E", "{2A
Tests\Image\QuickE2E\testcases.yml = Tests\Image\QuickE2E\testcases.yml
EndProjectSection
EndProject
Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "SVD", "SVD", "{669B6203-9675-4950-B526-7CD72D55E5E1}"
ProjectSection(SolutionItems) = preProject
Tests\Speech\SVD\baseline.cpu.txt = Tests\Speech\SVD\baseline.cpu.txt
Tests\Speech\SVD\baseline.gpu.txt = Tests\Speech\SVD\baseline.gpu.txt
Tests\Speech\SVD\baseline.windows.cpu.txt = Tests\Speech\SVD\baseline.windows.cpu.txt
Tests\Speech\SVD\baseline.windows.gpu.txt = Tests\Speech\SVD\baseline.windows.gpu.txt
Tests\Speech\SVD\cntk.config = Tests\Speech\SVD\cntk.config
Tests\Speech\SVD\run-test = Tests\Speech\SVD\run-test
Tests\Speech\SVD\testcases.yml = Tests\Speech\SVD\testcases.yml
EndProjectSection
EndProject
Global
GlobalSection(SolutionConfigurationPlatforms) = preSolution
Debug|Mixed Platforms = Debug|Mixed Platforms
@@ -746,5 +757,6 @@ Global
{8071EF60-30F7-4A77-81AA-ADCA0E18B1E3} = {D45DF403-6781-444E-B654-A96868C5BE68}
{76F9323D-34A1-43A5-A594-C4798931FF21} = {8071EF60-30F7-4A77-81AA-ADCA0E18B1E3}
{2A884EB5-037C-481E-8170-BCDC8B3EDD93} = {8071EF60-30F7-4A77-81AA-ADCA0E18B1E3}
+{669B6203-9675-4950-B526-7CD72D55E5E1} = {C47CDAA5-6D6C-429E-BC89-7CA0F868FDC8}
EndGlobalSection
EndGlobal
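
The new entries above only register the SVD test assets in a solution folder; nothing new is compiled. As a rough sanity check, a small Python helper along the following lines (hypothetical, not part of this commit or of the CNTK tooling) could confirm that every path listed in a SolutionItems section exists in the working tree when run from the repository root:

# Hypothetical helper (not part of this commit): list SolutionItems paths in
# CNTK.sln that do not exist in the working tree. Run from the repository root.
import re
from pathlib import Path

def missing_solution_items(sln_path="CNTK.sln"):
    text = Path(sln_path).read_text(encoding="utf-8", errors="replace")
    missing = []
    # SolutionItems entries look like:
    #   Tests\Speech\SVD\cntk.config = Tests\Speech\SVD\cntk.config
    for match in re.finditer(r"^\s*([^=\r\n]+?)\s*=\s*\S", text, flags=re.M):
        rel = match.group(1)
        if "\\" not in rel:        # skip Project lines, GUID mappings, configurations
            continue
        if not Path(rel.replace("\\", "/")).exists():
            missing.append(rel)
    return missing

if __name__ == "__main__":
    for path in missing_solution_items():
        print("missing:", path)

Any renamed or forgotten baseline, config, or run-test file would then show up as a missing path.
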
@@ -20,7 +20,7 @@ testCases:
- Finished Epoch[{{integer}} of {{integer}}]
- TrainLossPerSample = {{float,tolerance=0.01%}}
- EvalErrPerSample = {{float,tolerance=0.01%}}
-- AvgLearningRatePerSample = {{float,tolerance=0.01%}}
+- AvgLearningRatePerSample = {{float,tolerance=0.001%}}

Per-minibatch training results must match for each MPI Rank:
patterns:
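
The testcases.yml change above tightens the check on the learning-rate line from a 0.01% to a 0.001% relative tolerance. As an illustration of what a {{float,tolerance=...%}} pattern means, a relative-tolerance comparison could be written as below (a minimal Python sketch, not the actual CNTK test driver):

# Minimal sketch of a relative-tolerance float comparison, in the spirit of
# patterns such as {{float,tolerance=0.001%}}. Illustrative only; this is not
# the CNTK test-driver implementation.
def floats_match(baseline, actual, tolerance_percent):
    """Return True if `actual` is within tolerance_percent of `baseline`."""
    allowed = abs(baseline) * tolerance_percent / 100.0
    return abs(actual - baseline) <= allowed

# Example against the AvgLearningRatePerSample value 0.015625 from the baselines:
assert floats_match(0.015625, 0.015625000156, 0.001)     # inside 0.001%
assert not floats_match(0.015625, 0.0157, 0.001)         # outside 0.001%
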
10 changes: 5 additions & 5 deletions Tests/Speech/SVD/baseline.cpu.txt
@@ -846,7 +846,7 @@ Starting minibatch loop.
Epoch[ 1 of 3]-Minibatch[ 291- 300 of 320]: SamplesSeen = 640; TrainLossPerSample = 2.21602783; EvalErr[0]PerSample = 0.62812500; TotalTime = 0.61986s; TotalTimePerSample = 0.96853ms; SamplesPerSecond = 1032
Epoch[ 1 of 3]-Minibatch[ 301- 310 of 320]: SamplesSeen = 640; TrainLossPerSample = 2.29106445; EvalErr[0]PerSample = 0.60625000; TotalTime = 0.62023s; TotalTimePerSample = 0.96911ms; SamplesPerSecond = 1031
Epoch[ 1 of 3]-Minibatch[ 311- 320 of 320]: SamplesSeen = 640; TrainLossPerSample = 2.20531006; EvalErr[0]PerSample = 0.57500000; TotalTime = 0.61718s; TotalTimePerSample = 0.96434ms; SamplesPerSecond = 1036
-Finished Epoch[ 1 of 3]: [Training Set] TrainLossPerSample = 3.017344; EvalErrPerSample = 0.73061526; Ave LearnRatePerSample = 0.015625; EpochTime=20.286321
+Finished Epoch[ 1 of 3]: [Training Set] TrainLossPerSample = 3.017344; EvalErrPerSample = 0.73061526; AvgLearningRatePerSample = 0.015625; EpochTime=20.286321
Starting Epoch 2: learning rate per sample = 0.001953 effective momentum = 0.656119
minibatchiterator: epoch 1: frames [20480..40960] (first utterance at frame 20480), data subset 0 of 1, with 1 datapasses

@@ -859,14 +859,14 @@ Starting minibatch loop.
Epoch[ 2 of 3]-Minibatch[ 51- 60 of 80]: SamplesSeen = 2560; TrainLossPerSample = 2.01557617; EvalErr[0]PerSample = 0.54414063; TotalTime = 0.84531s; TotalTimePerSample = 0.33020ms; SamplesPerSecond = 3028
Epoch[ 2 of 3]-Minibatch[ 61- 70 of 80]: SamplesSeen = 2560; TrainLossPerSample = 1.94065170; EvalErr[0]PerSample = 0.52500000; TotalTime = 0.84554s; TotalTimePerSample = 0.33029ms; SamplesPerSecond = 3027
Epoch[ 2 of 3]-Minibatch[ 71- 80 of 80]: SamplesSeen = 2560; TrainLossPerSample = 1.94852905; EvalErr[0]PerSample = 0.54023438; TotalTime = 0.83581s; TotalTimePerSample = 0.32649ms; SamplesPerSecond = 3062
-Finished Epoch[ 2 of 3]: [Training Set] TrainLossPerSample = 1.9919552; EvalErrPerSample = 0.54179686; Ave LearnRatePerSample = 0.001953125; EpochTime=6.772167
+Finished Epoch[ 2 of 3]: [Training Set] TrainLossPerSample = 1.9919552; EvalErrPerSample = 0.54179686; AvgLearningRatePerSample = 0.001953125; EpochTime=6.772167
Starting Epoch 3: learning rate per sample = 0.000098 effective momentum = 0.656119
minibatchiterator: epoch 2: frames [40960..61440] (first utterance at frame 40960), data subset 0 of 1, with 1 datapasses

Starting minibatch loop.
Epoch[ 3 of 3]-Minibatch[ 1- 10 of 20]: SamplesSeen = 10240; TrainLossPerSample = 1.91941833; EvalErr[0]PerSample = 0.52890625; TotalTime = 1.74051s; TotalTimePerSample = 0.16997ms; SamplesPerSecond = 5883
Epoch[ 3 of 3]-Minibatch[ 11- 20 of 20]: SamplesSeen = 10240; TrainLossPerSample = 1.91062393; EvalErr[0]PerSample = 0.52783203; TotalTime = 1.68678s; TotalTimePerSample = 0.16472ms; SamplesPerSecond = 6070
-Finished Epoch[ 3 of 3]: [Training Set] TrainLossPerSample = 1.9150212; EvalErrPerSample = 0.52836913; Ave LearnRatePerSample = 9.765625146e-05; EpochTime=3.475749
+Finished Epoch[ 3 of 3]: [Training Set] TrainLossPerSample = 1.9150212; EvalErrPerSample = 0.52836913; AvgLearningRatePerSample = 9.765625146e-05; EpochTime=3.475749
CNTKCommandTrainEnd: speechTrain


@@ -2055,13 +2055,13 @@ requiredata: determined feature kind as 33-dimensional 'USER' with frame shift 1
Starting minibatch loop.
Epoch[ 1 of 2]-Minibatch[ 1- 10 of 20]: SamplesSeen = 10240; TrainLossPerSample = 1.90709000; EvalErr[0]PerSample = 0.52988281; TotalTime = 1.52806s; TotalTimePerSample = 0.14922ms; SamplesPerSecond = 6701
Epoch[ 1 of 2]-Minibatch[ 11- 20 of 20]: SamplesSeen = 10240; TrainLossPerSample = 1.86627007; EvalErr[0]PerSample = 0.51650391; TotalTime = 1.48349s; TotalTimePerSample = 0.14487ms; SamplesPerSecond = 6902
-Finished Epoch[ 1 of 2]: [Training Set] TrainLossPerSample = 1.88668; EvalErrPerSample = 0.52319336; Ave LearnRatePerSample = 9.765625146e-05; EpochTime=3.969891
+Finished Epoch[ 1 of 2]: [Training Set] TrainLossPerSample = 1.88668; EvalErrPerSample = 0.52319336; AvgLearningRatePerSample = 9.765625146e-05; EpochTime=3.969891
Starting Epoch 2: learning rate per sample = 0.000098 effective momentum = 0.656119
minibatchiterator: epoch 1: frames [20480..40960] (first utterance at frame 20480), data subset 0 of 1, with 1 datapasses

Starting minibatch loop.
Epoch[ 2 of 2]-Minibatch[ 1- 10 of 20]: SamplesSeen = 10240; TrainLossPerSample = 1.84089890; EvalErr[0]PerSample = 0.51132813; TotalTime = 1.52045s; TotalTimePerSample = 0.14848ms; SamplesPerSecond = 6734
Epoch[ 2 of 2]-Minibatch[ 11- 20 of 20]: SamplesSeen = 10240; TrainLossPerSample = 1.85902176; EvalErr[0]PerSample = 0.51396484; TotalTime = 1.48188s; TotalTimePerSample = 0.14471ms; SamplesPerSecond = 6910
-Finished Epoch[ 2 of 2]: [Training Set] TrainLossPerSample = 1.8499603; EvalErrPerSample = 0.5126465; Ave LearnRatePerSample = 9.765625146e-05; EpochTime=3.049746
+Finished Epoch[ 2 of 2]: [Training Set] TrainLossPerSample = 1.8499603; EvalErrPerSample = 0.5126465; AvgLearningRatePerSample = 9.765625146e-05; EpochTime=3.049746
CNTKCommandTrainEnd: SVDTrain
COMPLETED
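
In the baselines themselves the only edit is the renamed summary field: "Ave LearnRatePerSample" becomes "AvgLearningRatePerSample", matching the changed trainer log messages; the metric values are untouched. For illustration, the new "Finished Epoch" summary line could be parsed with something like the following (a hypothetical Python sketch, not code from the repository):

# Hypothetical parser for the "Finished Epoch" summary lines shown in the
# baselines above; field names follow the new format (AvgLearningRatePerSample).
import re

EPOCH_RE = re.compile(
    r"Finished Epoch\[\s*(?P<epoch>\d+) of (?P<total>\d+)\]: \[Training Set\] "
    r"TrainLossPerSample = (?P<loss>[0-9.eE+-]+); "
    r"EvalErrPerSample = (?P<err>[0-9.eE+-]+); "
    r"AvgLearningRatePerSample = (?P<lr>[0-9.eE+-]+); "
    r"EpochTime=(?P<time>[0-9.eE+-]+)"
)

def parse_finished_epoch(line):
    m = EPOCH_RE.search(line)
    if not m:
        return None
    return {k: int(v) if k in ("epoch", "total") else float(v)
            for k, v in m.groupdict().items()}

line = ("Finished Epoch[ 1 of 3]: [Training Set] TrainLossPerSample = 3.017344; "
        "EvalErrPerSample = 0.73061526; AvgLearningRatePerSample = 0.015625; "
        "EpochTime=20.286321")
print(parse_finished_epoch(line))
# {'epoch': 1, 'total': 3, 'loss': 3.017344, 'err': 0.73061526, 'lr': 0.015625, 'time': 20.286321}
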
10 changes: 5 additions & 5 deletions Tests/Speech/SVD/baseline.gpu.txt
@@ -849,7 +849,7 @@ WARNING: The same matrix with dim [1, 1] has been transferred between different
Epoch[ 1 of 3]-Minibatch[ 291- 300 of 320]: SamplesSeen = 640; TrainLossPerSample = 2.17750854; EvalErr[0]PerSample = 0.62187500; TotalTime = 0.08166s; TotalTimePerSample = 0.12760ms; SamplesPerSecond = 7837
Epoch[ 1 of 3]-Minibatch[ 301- 310 of 320]: SamplesSeen = 640; TrainLossPerSample = 2.26263428; EvalErr[0]PerSample = 0.59687500; TotalTime = 0.08178s; TotalTimePerSample = 0.12778ms; SamplesPerSecond = 7826
Epoch[ 1 of 3]-Minibatch[ 311- 320 of 320]: SamplesSeen = 640; TrainLossPerSample = 2.15072632; EvalErr[0]PerSample = 0.56250000; TotalTime = 0.08027s; TotalTimePerSample = 0.12542ms; SamplesPerSecond = 7973
-Finished Epoch[ 1 of 3]: [Training Set] TrainLossPerSample = 2.9799573; EvalErrPerSample = 0.72216797; Ave LearnRatePerSample = 0.015625; EpochTime=2.652128
+Finished Epoch[ 1 of 3]: [Training Set] TrainLossPerSample = 2.9799573; EvalErrPerSample = 0.72216797; AvgLearningRatePerSample = 0.015625; EpochTime=2.652128
Starting Epoch 2: learning rate per sample = 0.001953 effective momentum = 0.656119
minibatchiterator: epoch 1: frames [20480..40960] (first utterance at frame 20480), data subset 0 of 1, with 1 datapasses

@@ -862,14 +862,14 @@ Starting minibatch loop.
Epoch[ 2 of 3]-Minibatch[ 51- 60 of 80]: SamplesSeen = 2560; TrainLossPerSample = 1.97115784; EvalErr[0]PerSample = 0.54140625; TotalTime = 0.11745s; TotalTimePerSample = 0.04588ms; SamplesPerSecond = 21797
Epoch[ 2 of 3]-Minibatch[ 61- 70 of 80]: SamplesSeen = 2560; TrainLossPerSample = 1.89518127; EvalErr[0]PerSample = 0.52031250; TotalTime = 0.11746s; TotalTimePerSample = 0.04588ms; SamplesPerSecond = 21794
Epoch[ 2 of 3]-Minibatch[ 71- 80 of 80]: SamplesSeen = 2560; TrainLossPerSample = 1.90450592; EvalErr[0]PerSample = 0.53164062; TotalTime = 0.11189s; TotalTimePerSample = 0.04371ms; SamplesPerSecond = 22879
-Finished Epoch[ 2 of 3]: [Training Set] TrainLossPerSample = 1.949242; EvalErrPerSample = 0.53417969; Ave LearnRatePerSample = 0.001953125; EpochTime=0.943885
+Finished Epoch[ 2 of 3]: [Training Set] TrainLossPerSample = 1.949242; EvalErrPerSample = 0.53417969; AvgLearningRatePerSample = 0.001953125; EpochTime=0.943885
Starting Epoch 3: learning rate per sample = 0.000098 effective momentum = 0.656119
minibatchiterator: epoch 2: frames [40960..61440] (first utterance at frame 40960), data subset 0 of 1, with 1 datapasses

Starting minibatch loop.
Epoch[ 3 of 3]-Minibatch[ 1- 10 of 20]: SamplesSeen = 10240; TrainLossPerSample = 1.87359848; EvalErr[0]PerSample = 0.51933594; TotalTime = 0.27164s; TotalTimePerSample = 0.02653ms; SamplesPerSecond = 37696
Epoch[ 3 of 3]-Minibatch[ 11- 20 of 20]: SamplesSeen = 10240; TrainLossPerSample = 1.86656265; EvalErr[0]PerSample = 0.51748047; TotalTime = 0.24487s; TotalTimePerSample = 0.02391ms; SamplesPerSecond = 41818
-Finished Epoch[ 3 of 3]: [Training Set] TrainLossPerSample = 1.8700806; EvalErrPerSample = 0.51840824; Ave LearnRatePerSample = 9.765625146e-05; EpochTime=0.546135
+Finished Epoch[ 3 of 3]: [Training Set] TrainLossPerSample = 1.8700806; EvalErrPerSample = 0.51840824; AvgLearningRatePerSample = 9.765625146e-05; EpochTime=0.546135
CNTKCommandTrainEnd: speechTrain


@@ -2058,13 +2058,13 @@ requiredata: determined feature kind as 33-dimensional 'USER' with frame shift 1
Starting minibatch loop.
Epoch[ 1 of 2]-Minibatch[ 1- 10 of 20]: SamplesSeen = 10240; TrainLossPerSample = 1.86152668; EvalErr[0]PerSample = 0.51777344; TotalTime = 0.26419s; TotalTimePerSample = 0.02580ms; SamplesPerSecond = 38759
Epoch[ 1 of 2]-Minibatch[ 11- 20 of 20]: SamplesSeen = 10240; TrainLossPerSample = 1.81946163; EvalErr[0]PerSample = 0.51054687; TotalTime = 0.23779s; TotalTimePerSample = 0.02322ms; SamplesPerSecond = 43064
-Finished Epoch[ 1 of 2]: [Training Set] TrainLossPerSample = 1.8404942; EvalErrPerSample = 0.51416016; Ave LearnRatePerSample = 9.765625146e-05; EpochTime=1.390486
+Finished Epoch[ 1 of 2]: [Training Set] TrainLossPerSample = 1.8404942; EvalErrPerSample = 0.51416016; AvgLearningRatePerSample = 9.765625146e-05; EpochTime=1.390486
Starting Epoch 2: learning rate per sample = 0.000098 effective momentum = 0.656119
minibatchiterator: epoch 1: frames [20480..40960] (first utterance at frame 20480), data subset 0 of 1, with 1 datapasses

Starting minibatch loop.
Epoch[ 2 of 2]-Minibatch[ 1- 10 of 20]: SamplesSeen = 10240; TrainLossPerSample = 1.80154209; EvalErr[0]PerSample = 0.50097656; TotalTime = 0.26115s; TotalTimePerSample = 0.02550ms; SamplesPerSecond = 39210
Epoch[ 2 of 2]-Minibatch[ 11- 20 of 20]: SamplesSeen = 10240; TrainLossPerSample = 1.81663570; EvalErr[0]PerSample = 0.50869141; TotalTime = 0.23803s; TotalTimePerSample = 0.02325ms; SamplesPerSecond = 43019
-Finished Epoch[ 2 of 2]: [Training Set] TrainLossPerSample = 1.8090889; EvalErrPerSample = 0.504834; Ave LearnRatePerSample = 9.765625146e-05; EpochTime=0.528531
+Finished Epoch[ 2 of 2]: [Training Set] TrainLossPerSample = 1.8090889; EvalErrPerSample = 0.504834; AvgLearningRatePerSample = 9.765625146e-05; EpochTime=0.528531
CNTKCommandTrainEnd: SVDTrain
COMPLETED
10 changes: 5 additions & 5 deletions Tests/Speech/SVD/baseline.windows.cpu.txt
@@ -855,7 +855,7 @@ Starting minibatch loop.
Epoch[ 1 of 3]-Minibatch[ 291- 300 of 320]: SamplesSeen = 640; TrainLossPerSample = 2.15880737; EvalErr[0]PerSample = 0.58281250; TotalTime = 2.17109s; TotalTimePerSample = 3.39233ms; SamplesPerSecond = 294
Epoch[ 1 of 3]-Minibatch[ 301- 310 of 320]: SamplesSeen = 640; TrainLossPerSample = 2.22708130; EvalErr[0]PerSample = 0.59218750; TotalTime = 2.44488s; TotalTimePerSample = 3.82012ms; SamplesPerSecond = 261
Epoch[ 1 of 3]-Minibatch[ 311- 320 of 320]: SamplesSeen = 640; TrainLossPerSample = 2.25599976; EvalErr[0]PerSample = 0.60625000; TotalTime = 2.36123s; TotalTimePerSample = 3.68942ms; SamplesPerSecond = 271
-Finished Epoch[ 1 of 3]: [Training Set] TrainLossPerSample = 3.0070155; EvalErrPerSample = 0.72827148; Ave LearnRatePerSample = 0.015625; EpochTime=66.903391
+Finished Epoch[ 1 of 3]: [Training Set] TrainLossPerSample = 3.0070155; EvalErrPerSample = 0.72827148; AvgLearningRatePerSample = 0.015625; EpochTime=66.903391
Starting Epoch 2: learning rate per sample = 0.001953 effective momentum = 0.656119
minibatchiterator: epoch 1: frames [20480..40960] (first utterance at frame 20480), data subset 0 of 1, with 1 datapasses

@@ -868,14 +868,14 @@ Starting minibatch loop.
Epoch[ 2 of 3]-Minibatch[ 51- 60 of 80]: SamplesSeen = 2560; TrainLossPerSample = 1.91355438; EvalErr[0]PerSample = 0.53984375; TotalTime = 2.36608s; TotalTimePerSample = 0.92425ms; SamplesPerSecond = 1081
Epoch[ 2 of 3]-Minibatch[ 61- 70 of 80]: SamplesSeen = 2560; TrainLossPerSample = 1.91760941; EvalErr[0]PerSample = 0.53125000; TotalTime = 2.52977s; TotalTimePerSample = 0.98819ms; SamplesPerSecond = 1011
Epoch[ 2 of 3]-Minibatch[ 71- 80 of 80]: SamplesSeen = 2560; TrainLossPerSample = 1.87678528; EvalErr[0]PerSample = 0.52890625; TotalTime = 2.78605s; TotalTimePerSample = 1.08830ms; SamplesPerSecond = 918
-Finished Epoch[ 2 of 3]: [Training Set] TrainLossPerSample = 1.9557171; EvalErrPerSample = 0.53979492; Ave LearnRatePerSample = 0.001953125; EpochTime=20.583875
+Finished Epoch[ 2 of 3]: [Training Set] TrainLossPerSample = 1.9557171; EvalErrPerSample = 0.53979492; AvgLearningRatePerSample = 0.001953125; EpochTime=20.583875
Starting Epoch 3: learning rate per sample = 0.000098 effective momentum = 0.656119
minibatchiterator: epoch 2: frames [40960..61440] (first utterance at frame 40960), data subset 0 of 1, with 1 datapasses

Starting minibatch loop.
Epoch[ 3 of 3]-Minibatch[ 1- 10 of 20]: SamplesSeen = 10240; TrainLossPerSample = 1.88589649; EvalErr[0]PerSample = 0.52529297; TotalTime = 4.18417s; TotalTimePerSample = 0.40861ms; SamplesPerSecond = 2447
Epoch[ 3 of 3]-Minibatch[ 11- 20 of 20]: SamplesSeen = 10240; TrainLossPerSample = 1.89380131; EvalErr[0]PerSample = 0.51816406; TotalTime = 3.71472s; TotalTimePerSample = 0.36277ms; SamplesPerSecond = 2756
-Finished Epoch[ 3 of 3]: [Training Set] TrainLossPerSample = 1.8898489; EvalErrPerSample = 0.52172852; Ave LearnRatePerSample = 9.765625146e-005; EpochTime=8.004147
+Finished Epoch[ 3 of 3]: [Training Set] TrainLossPerSample = 1.8898489; EvalErrPerSample = 0.52172852; AvgLearningRatePerSample = 9.765625146e-005; EpochTime=8.004147
CNTKCommandTrainEnd: speechTrain


@@ -2064,13 +2064,13 @@ requiredata: determined feature kind as 33-dimensional 'USER' with frame shift 1
Starting minibatch loop.
Epoch[ 1 of 2]-Minibatch[ 1- 10 of 20]: SamplesSeen = 10240; TrainLossPerSample = 1.89938011; EvalErr[0]PerSample = 0.51777344; TotalTime = 3.23396s; TotalTimePerSample = 0.31582ms; SamplesPerSecond = 3166
Epoch[ 1 of 2]-Minibatch[ 11- 20 of 20]: SamplesSeen = 10240; TrainLossPerSample = 1.81444931; EvalErr[0]PerSample = 0.50478516; TotalTime = 3.28329s; TotalTimePerSample = 0.32063ms; SamplesPerSecond = 3118
-Finished Epoch[ 1 of 2]: [Training Set] TrainLossPerSample = 1.8569148; EvalErrPerSample = 0.51127928; Ave LearnRatePerSample = 9.765625146e-005; EpochTime=9.316316
+Finished Epoch[ 1 of 2]: [Training Set] TrainLossPerSample = 1.8569148; EvalErrPerSample = 0.51127928; AvgLearningRatePerSample = 9.765625146e-005; EpochTime=9.316316
Starting Epoch 2: learning rate per sample = 0.000098 effective momentum = 0.656119
minibatchiterator: epoch 1: frames [20480..40960] (first utterance at frame 20480), data subset 0 of 1, with 1 datapasses

Starting minibatch loop.
Epoch[ 2 of 2]-Minibatch[ 1- 10 of 20]: SamplesSeen = 10240; TrainLossPerSample = 1.83472824; EvalErr[0]PerSample = 0.50781250; TotalTime = 3.75234s; TotalTimePerSample = 0.36644ms; SamplesPerSecond = 2728
Epoch[ 2 of 2]-Minibatch[ 11- 20 of 20]: SamplesSeen = 10240; TrainLossPerSample = 1.80246696; EvalErr[0]PerSample = 0.50654297; TotalTime = 3.29585s; TotalTimePerSample = 0.32186ms; SamplesPerSecond = 3106
-Finished Epoch[ 2 of 2]: [Training Set] TrainLossPerSample = 1.8185977; EvalErrPerSample = 0.50717777; Ave LearnRatePerSample = 9.765625146e-005; EpochTime=7.151551
+Finished Epoch[ 2 of 2]: [Training Set] TrainLossPerSample = 1.8185977; EvalErrPerSample = 0.50717777; AvgLearningRatePerSample = 9.765625146e-005; EpochTime=7.151551
CNTKCommandTrainEnd: SVDTrain
COMPLETED
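
One platform difference visible in these baselines: the Windows logs print exponents with three digits (9.765625146e-005) while the Linux logs use two (9.765625146e-05). A check that compares parsed numbers rather than raw text, as sketched earlier, is unaffected:

# Both exponent spellings parse to the same value, so a numeric tolerance check
# treats the Windows and Linux learning-rate lines as equal.
assert float("9.765625146e-05") == float("9.765625146e-005")
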