Commit

removed again
astokholm committed Nov 8, 2022
1 parent 478ce3c commit df3cce2
Showing 2 changed files with 5 additions and 6 deletions.
3 changes: 2 additions & 1 deletion introduction.ipynb
@@ -33,7 +33,8 @@
"# -- Built-in modules -- #\n",
"import os\n",
"os.environ['AI4ARCTIC_DATA'] = '' # Fill in directory for data location.\n",
"os.environ['AI4ARCTIC_ENV'] = '' # Fill in directory for environment with Ai4Arctic get-started package.\n",
"os.environ['AI4ARCTIC_ENV'] = '' # Fill in directory for environment with Ai4Arctic get-started package. \n",
"\n",
"\n",
"# -- Third-part modules -- #\n",
"import xarray as xr\n",
8 changes: 3 additions & 5 deletions quickstart.ipynb
@@ -7,7 +7,7 @@
"tags": []
},
"source": [
"# AutoICE quick start guide\n",
"# Quick start guide\n",
"This notebook serves as an example of how to train a simple model using pytorch and the ready-to-train AI4Arctic challenge dataset. Initially, a dictionary, 'train_options', is set up with relevant options for both the example U-Net Convolutional Neural Network model and the dataloader. Note that the weights of the U-Net will be initialised at random and therefore not deterministic - results will vary for every training run. Two lists (dataset.json and testset.json) include the names of the scenes relevant to training and testing, where the former can be altered if desired. Training data is loaded in parallel using the build-in torch Dataset and Dataloader classes, and works by randomly sampling a scene and performing a random crop to extract a patch. Each batch will then be compiled of X number of these patches with the patch size in the 'train_options'. An obstacle is different grid resolution sizes, which is overcome by upsampling low resolution variables, e.g. AMSR2, ERA5, to match the SAR pixels. A number of batches will be prepared in parallel and stored until use, depending on the number of workers (processes) spawned (this can be changed in 'num_workers' in 'train_options'). The model is trained on a fixed number of steps according to the number of batches in an epoch, defined by the 'epoch_len' parameter, and will run for a total number of epochs depending on the 'epochs' parameter. After each epoch, the model is evaluated. In this example, a random number of scenes are sampled among the training scenes (and removed from the list of training scenes) to act as a validation set used for the evaluation. The model is evaluated with the metrics, and if the current validation attempt is superior to the previous, then the model parameters are stored in the 'best_model' file in the directory.\n",
"\n",
"The models are scored on the three sea ice parameters; Sea Ice Concentration (SIC), Stage of Development (SOD) and the Floe size (FLOE) with the $R²$ metric for the SIC, and the weighted F1 metric for the SOD and FLOE. The 3 scores are combined into a single metric by taking the weighted average with SIC and SOD being weighted with 2 and the FLOE with 1.\n",
@@ -32,10 +32,8 @@
"import sys\n",
"\n",
"# -- Environmental variables -- #\n",
"os.environ['AI4ARCTIC_DATA'] = ''\n",
"os.environ['AI4ARCTIC_ENV'] = ''\n",
"#os.environ['AI4ARCTIC_DATA'] = # Fill in directory for data location.\n",
"#os.environ['AI4ARCTIC_ENV'] = # Fill in directory for environment with Ai4Arctic get-started package."
"os.environ['AI4ARCTIC_DATA'] = '' # Fill in directory for data location.\n",
"os.environ['AI4ARCTIC_ENV'] = '' # Fill in directory for environment with Ai4Arctic get-started package. \n"
]
},
{
Expand Down
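The combined challenge score described in the quickstart notebook's introduction (an $R^2$ score for SIC and weighted F1 scores for SOD and FLOE, averaged with weights 2, 2 and 1) could be sketched as follows. This is an illustrative snippet, not part of the commit; the function name and dictionary layout are hypothetical, and scikit-learn's `r2_score` and `f1_score` stand in for whatever metric implementations the repository actually uses.

```python
# Illustrative sketch of the combined score: R^2 for SIC, weighted F1 for
# SOD and FLOE, averaged with weights 2, 2 and 1. Names are hypothetical.
from sklearn.metrics import f1_score, r2_score


def combined_score(y_true, y_pred):
    """y_true / y_pred: dicts mapping 'SIC', 'SOD', 'FLOE' to flat label arrays."""
    sic = r2_score(y_true['SIC'], y_pred['SIC'])
    sod = f1_score(y_true['SOD'], y_pred['SOD'], average='weighted')
    floe = f1_score(y_true['FLOE'], y_pred['FLOE'], average='weighted')
    return (2 * sic + 2 * sod + 1 * floe) / (2 + 2 + 1)
```

A perfect prediction scores 1.0 on all three metrics and therefore 1.0 combined.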
