Training routine tested
Anurag Ranjan committed Jan 31, 2017
1 parent e86f5f1 commit a00e18c
Showing 15 changed files with 13 additions and 11 deletions.
6 changes: 3 additions & 3 deletions README.md
@@ -50,11 +50,11 @@ flow = computeFlow(im)
```

## Training
-Training sequentially is faster than training end-to-end since you need to learn small number of parameters at each level. To train a level `N`, we need the trained models at levels `1` to `N-1`.
+Training sequentially is faster than training end-to-end, since only a small number of parameters needs to be learned at each level. To train level `N`, we need the trained models at levels `1` to `N-1`, and we also initialize the level-`N` model with the pretrained model from level `N-1`.

-E.g. To train level 3, we need trained models at `L1` and `L2`.
+E.g., to train level 3, we need the trained models at `L1` and `L2` and initialize the level-3 model from `modelL2_3.t7` via `-retrain`.
```bash
-th main.lua -fineWidth 128 -fineHeight 96 -level 3 -netType volcon -cache checkpoint -data FLYING_CHAIRS_DIR -L1 models/modelL1_3 -L2 modelsL2_3
+th main.lua -fineWidth 128 -fineHeight 96 -level 3 -netType volcon -cache checkpoint -data FLYING_CHAIRS_DIR -L1 models/modelL1_3.t7 -L2 models/modelL2_3.t7 -retrain models/modelL2_3.t7
```
## Timing Benchmarks
Our timing benchmark is set up on the Flying Chairs dataset. To test it, you need to download
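The level-3 example in the README hunk above extends to the next level in the same way: pass every lower-level model and warm-start from the level below via `-retrain`. A hypothetical level-4 invocation is sketched below; the 256x192 fine resolution is an assumption for illustration, not something stated in this commit.
```bash
# Hypothetical level-4 run following the same pattern: supply -L1..-L3 and
# initialize from the trained level-3 model via -retrain.
# The 256x192 fine resolution is an assumption for illustration only.
th main.lua -fineWidth 256 -fineHeight 192 -level 4 -netType volcon \
   -cache checkpoint -data FLYING_CHAIRS_DIR \
   -L1 models/modelL1_4.t7 -L2 models/modelL2_4.t7 -L3 models/modelL3_4.t7 \
   -retrain models/modelL3_4.t7
```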
Empty file.
Empty file.
Empty file.
Empty file.
Empty file.
Empty file.
Binary file added checkpoint/TueJan3116:50:362017/model_1.t7
Binary file added checkpoint/TueJan3116:50:362017/optimState_1.t7
Empty file.
2 changes: 2 additions & 0 deletions checkpoint/TueJan3116:50:362017/train.log
@@ -0,0 +1,2 @@
avg loss (train set) % top1 accuracy (train set)
2.1416e+00 0.0000e+00
Binary file added checkpoint/testCache.t7
Binary file added checkpoint/trainCache.t7
8 changes: 4 additions & 4 deletions donkey.lua
@@ -62,10 +62,10 @@ local modelL1path, modelL2path, modelL3path, modelL4path
local down1, down2, down3, down4, up2, up3, up4
local warpmodel2, warpmodel3, warpmodel4

-modelL1path = paths.concat('models', opt.L1)
-modelL2path = paths.concat('models', opt.L2)
-modelL3path = paths.concat('models', opt.L3)
-modelL4path = paths.concat('models', opt.L4)
+modelL1path = opt.L1
+modelL2path = opt.L2
+modelL3path = opt.L3
+modelL4path = opt.L4

if opt.level > 1 then
-- Load modelL1
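With `paths.concat('models', ...)` dropped, the `-L*` and `-retrain` options are taken as full paths, relative or absolute, instead of bare file names under `models/`. A minimal sketch of what this allows, with purely illustrative paths:
```bash
# Previously trained models no longer have to live under models/; any relative
# or absolute path now works. The /scratch/spynet directory is illustrative only.
th main.lua -fineWidth 128 -fineHeight 96 -level 3 -netType volcon \
   -cache checkpoint -data FLYING_CHAIRS_DIR \
   -L1 /scratch/spynet/modelL1_3.t7 -L2 /scratch/spynet/modelL2_3.t7 \
   -retrain /scratch/spynet/modelL2_3.t7
```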
8 changes: 4 additions & 4 deletions opts.lua
@@ -39,10 +39,10 @@ function M.parse(arg)
cmd:option('-weightDecay', 5e-4, 'weight decay')
cmd:option('-optimizer', 'adam', 'adam or sgd')
---------- Model options ----------------------------------
-cmd:option('-L1', 'modelL1_4.t7', 'Trained Level 1 model')
-cmd:option('-L2', 'modelL2_4.t7', 'Trained Level 2 model')
-cmd:option('-L3', 'modelL3_4.t7', 'Trained Level 3 model')
-cmd:option('-L4', 'modelL4_4.t7', 'Trained Level 4 model')
+cmd:option('-L1', 'models/modelL1_4.t7', 'Trained Level 1 model')
+cmd:option('-L2', 'models/modelL2_4.t7', 'Trained Level 2 model')
+cmd:option('-L3', 'models/modelL3_4.t7', 'Trained Level 3 model')
+cmd:option('-L4', 'models/modelL4_4.t7', 'Trained Level 4 model')

cmd:option('-netType', 'volcon', 'Lua network file')
cmd:option('-retrain', 'none', 'provide path to model to retrain with')
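The defaults now carry the `models/` prefix themselves, matching the donkey.lua change above: explicit `-L*` values are used as-is, and omitting them falls back to the `modelL*_4.t7` files under `models/`, resolved relative to the directory `main.lua` is launched from. A hedged sketch of relying on a default (the 64x48 fine resolution for level 2 is an assumption):
```bash
# Relying on the new default for -L1 (models/modelL1_4.t7) when training
# level 2; equivalent to passing -L1 models/modelL1_4.t7 explicitly.
# The 64x48 fine resolution for level 2 is an assumption for illustration.
th main.lua -fineWidth 64 -fineHeight 48 -level 2 -netType volcon \
   -cache checkpoint -data FLYING_CHAIRS_DIR -retrain models/modelL1_4.t7
```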
