Commit: Added a comment

mseeger committed Dec 8, 2022
1 parent b8b80ce commit 0fb6a11
Showing 2 changed files with 11 additions and 2 deletions.
1 change: 1 addition & 0 deletions chapter_hyperparameter_optimization/hyperband-intro.md
@@ -41,6 +41,7 @@ epochs for training the neural network, but it could also be the training
subset size or the number of cross-validation folds.

## Successive Halving
:label:`sec_mf_hpo_sh`

One of the simplest ways to adapt random search to the multi-fidelity setting is
*successive halving* :cite:`jamieson-aistats16,karnin-icml13`. The basic
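A minimal, self-contained sketch of synchronous successive halving may help here. This is our own illustration, not the chapter's implementation: `objective` is a hypothetical stand-in for training a network from scratch for `r` epochs and returning its validation error, and the defaults (`n=8`, `r_min=1`, `eta=2`, `r_max=8`) are assumptions.

```{.python .input}
import random

def objective(config, r):
    # Hypothetical stand-in: train for r epochs from scratch and return a
    # validation error. We simulate a value that shrinks as r grows.
    return config["lr"] * random.random() / r

def successive_halving(n=8, r_min=1, eta=2, r_max=8):
    # Draw n configurations at random and evaluate all of them at rung r_min.
    configs = [{"lr": random.uniform(1e-4, 1e-1)} for _ in range(n)]
    r = r_min
    while True:
        # Evaluate every surviving configuration with r epochs of training.
        errors = [objective(config, r) for config in configs]
        if r >= r_max or len(configs) == 1:
            break
        # Keep the best 1/eta fraction and give them eta times more epochs.
        ranked = sorted(range(len(configs)), key=lambda i: errors[i])
        configs = [configs[i] for i in ranked[: max(1, len(configs) // eta)]]
        r *= eta
    best = min(range(len(configs)), key=lambda i: errors[i])
    return configs[best]

print(successive_halving())
```

Each rung retrains the surviving configurations from scratch; avoiding this overhead is one motivation for the asynchronous, early-stopping variant discussed below.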
12 changes: 10 additions & 2 deletions chapter_hyperparameter_optimization/sh-async.md
Expand Up @@ -153,7 +153,7 @@ scheduler = ASHA(
resource_attr=resource_attr,
grace_period=min_number_of_epochs,
reduction_factor=eta,
)
```

Here, `metric` and `resource_attr` specify the key names used with the `report`
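As a hedged sketch of the training-script side of this contract (our illustration, not the chapter's actual script), a Syne Tune training script constructs a `Reporter` and calls it once per epoch. We assume here that `metric` is `"validation_error"` and `resource_attr` is `"epoch"`; the validation error below is a placeholder.

```{.python .input}
from argparse import ArgumentParser
from syne_tune import Reporter

if __name__ == "__main__":
    parser = ArgumentParser()
    parser.add_argument("--learning_rate", type=float, default=0.1)
    parser.add_argument("--max_epochs", type=int, default=10)
    args, _ = parser.parse_known_args()
    report = Reporter()
    for epoch in range(1, args.max_epochs + 1):
        # ... train for one epoch, then compute the validation error ...
        validation_error = 1.0 / (epoch * args.learning_rate + 1.0)  # placeholder
        # One report per epoch: ASHA reads these keys (via `metric` and
        # `resource_attr`) to decide whether to stop the trial at a rung.
        report(epoch=epoch, validation_error=validation_error)
```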
@@ -177,7 +177,15 @@ tuner = Tuner(
tuner.run()
```

Note that we are running a variant of ASHA in which underperforming trials
are stopped early. This differs from our implementation in
:numref:`sec_mf_hpo_sh`, where each training job is started with a fixed
`max_epochs`. In the latter case, a well-performing trial that reaches the
full 10 epochs first needs to train for 1, then 2, then 4, then 8 epochs,
each time starting from scratch. This kind of pause-and-resume scheduling
can be implemented efficiently by checkpointing the training state after
each epoch, but we avoid that extra complexity here. After the experiment
has finished, we can retrieve and plot results.
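The rung levels behind this "1, then 2, then 4, then 8" schedule follow directly from `grace_period` and `reduction_factor`. A small check, assuming `min_number_of_epochs=1`, `eta=2`, and `max_epochs=10` as suggested by the text above:

```{.python .input}
# Rung levels: start at grace_period and multiply by reduction_factor
# until max_epochs is reached (assumed values, consistent with the text).
min_number_of_epochs, eta, max_epochs = 1, 2, 10
rung_levels = []
resource = min_number_of_epochs
while resource < max_epochs:
    rung_levels.append(resource)
    resource *= eta
print(rung_levels)  # [1, 2, 4, 8]
```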

```{.python .input n=59}
e = load_experiment(tuner.name)
```
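The lines collapsed from this diff presumably do the plotting. As a hedged usage example, the `ExperimentResult` returned by Syne Tune's `load_experiment` exposes a `plot` method:

```{.python .input}
# Hedged usage example (not necessarily the collapsed code): plot the best
# metric value found so far as a function of wall-clock time.
e.plot()
```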
