diff --git a/docs/api/controllers.rst b/docs/api/controllers.rst
index 9af1001..4679486 100644
--- a/docs/api/controllers.rst
+++ b/docs/api/controllers.rst
@@ -1,7 +1,7 @@
 .. _api-controllers:
 
 controllers
------------
+===========
 
 .. automodule:: mloop.controllers
    :members:
diff --git a/docs/api/index.rst b/docs/api/index.rst
index 3d2ff16..b8d6915 100644
--- a/docs/api/index.rst
+++ b/docs/api/index.rst
@@ -1,5 +1,6 @@
 .. _sec-api:
 
+==========
 M-LOOP API
 ==========
 
diff --git a/docs/api/interfaces.rst b/docs/api/interfaces.rst
index 80eb1e9..9d443c8 100644
--- a/docs/api/interfaces.rst
+++ b/docs/api/interfaces.rst
@@ -1,5 +1,5 @@
 interfaces
-----------
+==========
 
 .. automodule:: mloop.interfaces
    :members:
diff --git a/docs/api/launchers.rst b/docs/api/launchers.rst
index 7d3c105..3e9454c 100644
--- a/docs/api/launchers.rst
+++ b/docs/api/launchers.rst
@@ -1,5 +1,5 @@
 launchers
---------- 
+=========
 
 .. automodule:: mloop.launchers
    :members:
diff --git a/docs/api/learners.rst b/docs/api/learners.rst
index 642105a..7385be9 100644
--- a/docs/api/learners.rst
+++ b/docs/api/learners.rst
@@ -1,7 +1,7 @@
 .. _api-learners:
 
 learners
---------
+========
 
 .. automodule:: mloop.learners
    :members:
diff --git a/docs/api/mloop.rst b/docs/api/mloop.rst
index a0127dd..affcb8f 100644
--- a/docs/api/mloop.rst
+++ b/docs/api/mloop.rst
@@ -1,4 +1,4 @@
 mloop
------
+=====
 
 .. automodule:: mloop
diff --git a/docs/api/t_esting.rst b/docs/api/t_esting.rst
index 9bb25ae..1209b5a 100644
--- a/docs/api/t_esting.rst
+++ b/docs/api/t_esting.rst
@@ -1,5 +1,5 @@
 testing
--------
+=======
 
 .. automodule:: mloop.testing
    :members:
diff --git a/docs/api/utilities.rst b/docs/api/utilities.rst
index 1f22fb5..8e63990 100644
--- a/docs/api/utilities.rst
+++ b/docs/api/utilities.rst
@@ -1,5 +1,5 @@
 utilities
----------
+=========
 
 .. automodule:: mloop.utilities
    :members:
diff --git a/docs/api/visualizations.rst b/docs/api/visualizations.rst
index f602372..91d7209 100644
--- a/docs/api/visualizations.rst
+++ b/docs/api/visualizations.rst
@@ -1,5 +1,5 @@
 visualizations
---------------
+==============
 
 .. automodule:: mloop.visualizations
    :members:
diff --git a/docs/tutorials.rst b/docs/tutorials.rst
index 5c2469e..4fdefb8 100644
--- a/docs/tutorials.rst
+++ b/docs/tutorials.rst
@@ -68,15 +68,19 @@ You can add comments to your file using #, everything past # will be ignored. Ex
     num_params = 2 #number of parameters
     min_boundary = [-1,-1] #minimum boundary
     max_boundary = [1,1] #maximum boundary
+    first_params = [0.5,0.5] #first parameters to try
+    trust_region = 0.4 #maximum % move distance from best params
 
     #Halting conditions
     max_num_runs = 1000 #maximum number of runs
     max_num_runs_without_better_params = 50 #maximum number of runs without finding better parameters
     target_cost = 0.01 #optimization halts when a cost below this target is found
+
+    #Learner options
+    cost_has_noise = True #whether the costs are corrupted by noise or not
 
-    #Learner specific options
-    first_params = [0.5,0.5] #first parameters to try
-    trust_region = 0.4 #maximum % move distance from best params
+    #Timing options
+    no_delay = True #if True, use the training algorithm instead of waiting for the learner
 
     #File format options
     interface_file_type = 'txt' #file types of *exp_input.mat* and *exp_output.mat*
@@ -86,7 +90,7 @@ You can add comments to your file using #, everything past # will be ignored. Ex
     #Visualizations
     visualizations = True
 
-We will now explain the options in each of their groups. In almost all cases you will only need to the parameters settings and halting conditions, but we have also describe a few of the most commonly used extra options.
+We will now explain the options in each of their groups. In almost all cases you will only need to set the parameter settings and halting conditions, but we have also described a few of the most commonly used extra options.
 
 Parameter settings
 ~~~~~~~~~~~~~~~~~~
@@ -99,6 +103,10 @@ The number of parameters and their limits is defined with three keywords::
 
 num_params defines the number of parameters, min_boundary defines the minimum value each of the parameters can take and max_boundary defines the maximum value each parameter can take. Here there are two value which each must be between -1 and 1.
 
+first_params defines the first parameters the learner will try. You only need to set this if you have a safe set of parameters you want the experiment to start with. Just delete this keyword if any set of parameters within the boundaries will work.
+
+trust_region defines the maximum change allowed in the parameters from the best parameters found so far. In the current example the region size is 2 by 2, so with a trust region of 40% the parameters may change by at most 0.8 between runs; since the first parameters are [0.5,0.5], the parameters for the second run must lie within [0.5 +/- 0.8, 0.5 +/- 0.8] (and still within the boundaries). You only need this option if your experiment produces bad results when the parameters are changed significantly between runs. Simply delete this keyword if your experiment works with any set of parameters within the boundaries.
+
 Halting conditions
 ~~~~~~~~~~~~~~~~~~
 
@@ -119,19 +127,23 @@ If you do not want one of the halting conditions, simply delete it from your fil
 
     max_num_runs_without_better_params = 10
 
-Learner specific options
-~~~~~~~~~~~~~~~~~~~~~~~~
+Learner options
+~~~~~~~~~~~~~~~
 
-There are many learner specific options (and different learner algorithms) described in :ref:`sec-examples`. Here we consider just a couple of the most commonly used ones. M-LOOP has been designed to find an optimum quickly with no custom configuration as long as the experiment is able to provide a cost for every parameter it provides.
+There are many learner-specific options (and different learner algorithms) described in :ref:`sec-examples`. Here we just present a common one::
 
-However if your experiment will fail to work if there are sudden and significant changes to your parameters you may need to set the following options::
+    cost_has_noise = True
+
+If the cost you provide has noise in it, meaning the cost you calculate would fluctuate if you repeated the experiment with the same parameters, set this flag to True. If the costs you provide have no noise, set this flag to False. M-LOOP will automatically determine whether or not the costs have noise, so if you are unsure, just delete this keyword and the default value of True will be used.
 
-    first_parameters = [0.5,0.5]
-    trust_region = 0.4
+Timing options
+~~~~~~~~~~~~~~
 
-first_parameters defines the first parameters the learner will try. trust_region defines the maximum change allowed in the parameters from the best parameters found so far. In the current example the region size is 2 by 2, with a trust region of 40% thus the maximum allowed change for the second run will be [0 +/- 0.8, 0 +/- 0.8].
+M-LOOP learns how the experiment works by fitting the parameters and costs with a Gaussian process. This learning process can take some time. If M-LOOP is asked for new parameters before it has had time to generate a new prediction, it will use the training algorithm to provide the next set of parameters to test. This allows an experiment to be run while the learner is still thinking. By default the training algorithm is differential evolution, which is also used to generate the first set of experiments that train M-LOOP. If you would prefer that M-LOOP wait for the learner to produce its best prediction before running another experiment, you can change this behavior with the option::
 
-If you experiment reliably produces costs for any parameter set you will not need these settings and you can just delete them.
+    no_delay = True
+
+Set no_delay to True to ensure there are no pauses between experiments, and set it to False to give M-LOOP time to come up with its most informed choice. Sometimes doing fewer, more intelligent experiments will find an optimum quicker than many quick, unintelligent ones. If you are unsure, just delete this keyword and it will default to True.
 
 File format options
 ~~~~~~~~~~~~~~~~~~~
diff --git a/examples/tutorial_config.txt b/examples/tutorial_config.txt
index cc8216a..cd07d29 100644
--- a/examples/tutorial_config.txt
+++ b/examples/tutorial_config.txt
@@ -8,15 +8,19 @@ interface_type = 'file'
 num_params = 2 #number of parameters
 min_boundary = [-1,-1] #minimum boundary
 max_boundary = [1,1] #maximum boundary
+first_params = [0.5,0.5] #first parameters to try
+trust_region = 0.4 #maximum % move distance from best params
 
 #Halting conditions
 max_num_runs = 1000 #maximum number of runs
 max_num_runs_without_better_params = 50 #maximum number of runs without finding better parameters
 target_cost = 0.01 #optimization halts when a cost below this target is found
 
-#Learner specific options
-first_params = [0.5,0.5] #first parameters to try
-trust_region = 0.4 #maximum % move distance from best params
+#Learner options
+cost_has_noise = True #whether the costs are corrupted by noise or not
+
+#Timing options
+no_delay = True #if True, use the training algorithm instead of waiting for the learner
 
 #File format options
 interface_file_type = 'txt' #file types of *exp_input.mat* and *exp_output.mat*
diff --git a/mloop/__init__.py b/mloop/__init__.py
index 06df418..9e53155 100644
--- a/mloop/__init__.py
+++ b/mloop/__init__.py
@@ -12,5 +12,5 @@
 import os
 
-__version__= "2.1.0"
+__version__= "2.1.1"
 
 __all__ = ['controllers','interfaces','launchers','learners','testing','utilities','visualizations','cmd']
\ No newline at end of file
diff --git a/setup.py b/setup.py
index 01f5b48..c6b6017 100644
--- a/setup.py
+++ b/setup.py
@@ -39,7 +39,7 @@ def main():
           license = 'MIT',
           keywords = 'automated machine learning optimization optimisation science experiment quantum',
           url = 'https://github.com/michaelhush/M-LOOP/',
-          download_url = 'https://github.com/michaelhush/M-LOOP/tarball/v2.1.0',
+          download_url = 'https://github.com/michaelhush/M-LOOP/tarball/v2.1.1',
           classifiers = ['Development Status :: 2 - Pre-Alpha',
                          'Intended Audience :: Science/Research',
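
For reference, the options documented above map directly onto keyword arguments of M-LOOP's Python API. The following is a minimal sketch and not part of the patch: it assumes M-LOOP 2.x, where an experiment is wrapped by subclassing mloop.interfaces.Interface and the configuration keywords are passed through mloop.controllers.create_controller, and it uses a hypothetical quadratic cost in place of a real experiment::

    # Minimal sketch, not part of the patch above; assumes M-LOOP 2.x, where
    # the configuration-file keywords can be passed to create_controller.
    import mloop.interfaces as mli
    import mloop.controllers as mlc

    class QuadraticInterface(mli.Interface):
        # Hypothetical stand-in for a real experiment: the cost is the
        # squared distance of the parameters from the origin.
        def get_next_cost_dict(self, params_dict):
            params = params_dict['params']
            cost = float(sum(p**2 for p in params))
            # 'uncer' is the uncertainty of the cost; 'bad' flags a failed run.
            return {'cost': cost, 'uncer': 0.0, 'bad': False}

    interface = QuadraticInterface()
    controller = mlc.create_controller(
        interface,
        num_params=2,                          # parameter settings
        min_boundary=[-1, -1],
        max_boundary=[1, 1],
        first_params=[0.5, 0.5],
        trust_region=0.4,
        max_num_runs=1000,                     # halting conditions
        max_num_runs_without_better_params=50,
        target_cost=0.01,
        cost_has_noise=True,                   # learner option
        no_delay=True,                         # timing option
    )
    controller.optimize()
    print('Best parameters found:', controller.best_params)

The file interface configured in tutorial_config.txt exercises exactly the same options; the Python form is shown here only because it makes the grouping of the keywords explicit.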