Merge branch 'hpc-improv' of github.com:Edward-RSE/python into hpc-improv
Showing 32 changed files with 1,701 additions and 1,087 deletions.
MakeMacro
---------

To facilitate the generation of data for MacroAtoms, several routines exist, including

* MakeMacro.py
* RedoPhot.py
* MacroCombine.py

These routines use data contained in a (local) Chianti repository and from TopBase to construct atomic data files that can be used to generate the atomic data
needed for MacroAtoms. Chianti is the primary source of data, but unfortunately for Python it does not include photoionization x-sections, and so these are obtained from TopBase.

**It should be emphasized at the outset that, while these routines generate data that can be read and used within Python, they do not guarantee that the Macro atom models are physically sensible. In particular, it is easy to generate models that are overly complex, or ones that do not include all of the sublevels of a particular ion.**

Mechanics
=========

All of the programs can be run from the command line, and all of them accept a -h flag to obtain help information.

The programs must be run in an environment in which astropy, matplotlib and ChiantiPy are installed.
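
A quick way to verify that such an environment is available is a short Python check. This is only a sketch; XUVTOP is the environment variable ChiantiPy conventionally uses to locate the Chianti database::

    import os

    import astropy
    import matplotlib
    import ChiantiPy.core  # confirms that ChiantiPy imports cleanly

    print("astropy", astropy.__version__)
    print("matplotlib", matplotlib.__version__)
    print("XUVTOP =", os.environ.get("XUVTOP", "not set"))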

MakeMacro.py
============

MakeMacro.py is the main program, and needs to be run first. It retrieves data from a local installation of the Chianti database and from an external version of TopBase using wget.

If, for example, one wishes to make C a macro atom, one needs to run MakeMacro as follows::

    MakeMacro.py c_1 20
    MakeMacro.py c_2 20
    MakeMacro.py c_3 20
    MakeMacro.py c_4 20
    MakeMacro.py c_5 20
    MakeMacro.py c_6 20 True

These commands create all of the files needed to make carbon a macro atom. The number 20 indicates that one wants to create 20 levels for each of the ions; it need not be the same for every ion. This number should be at least as large as the number of levels one ultimately wants to use with Python, which can be smaller. The extra command line option True for c_6 causes a bare C ion to be created as well.
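
For several ions it can be convenient to script these calls rather than type them one by one. The following is a minimal sketch that simply drives the commands shown above; it assumes only that MakeMacro.py is executable and on the PATH::

    import subprocess

    for ion in range(1, 7):
        args = ["MakeMacro.py", f"c_{ion}", "20"]
        if ion == 6:
            args.append("True")  # also create the bare (fully stripped) ion for the last stage
        subprocess.run(args, check=True)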

The files that will be created include:

* c_2_levels.dat - A set of levels for C II taken from Chianti
* c_2_lines.dat - A set of lines for C II taken from Chianti
* c_2_phot.dat - A set of photoionization x-sections taken from TopBase, matched to the level files. The photoionization x-sections will have been extended to a high energy (currently 100 keV) by the routines contained in RedoPhot.py
* c_2_upsilon.dat - A set of collisional x-sections taken from Chianti

There will also be a figure

* c_2_phot.png - which shows how the photoionization x-sections have been extended to higher energies.

RedoPhot.py
===========

RedoPhot.py is the program that contains the routines used to extend x-sections to higher energies. It is normally called as a subroutine from MakeMacro.py, although it can be run from the command line for testing.

The underlying issue is that the TopBase x-sections, which are simple tables of photon energy and x-section, do not extend to very high or consistent energies. The routine assumes one can use the logarithmic slope of the x-sections at the high-energy end of what TopBase provides to extend the x-sections.
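
The sketch below illustrates this kind of power-law extension. The function name and grid are illustrative, and the energies are assumed to be in the same units as the input table; the actual implementation in RedoPhot.py may differ in detail::

    import numpy as np

    def extend_xsection(energy, sigma, e_max=1.0e5):
        """Extend a tabulated x-section to e_max using the logarithmic slope of
        the last two tabulated points, i.e. assuming sigma ~ E**alpha at high energy."""
        alpha = (np.log(sigma[-1]) - np.log(sigma[-2])) / (
            np.log(energy[-1]) - np.log(energy[-2])
        )
        e_extra = np.geomspace(energy[-1], e_max, 20)[1:]           # grid points above the table
        sigma_extra = sigma[-1] * (e_extra / energy[-1]) ** alpha   # power-law continuation
        return np.concatenate([energy, e_extra]), np.concatenate([sigma, sigma_extra])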

When called from MakeMacro.py, the routine rewrites the same file; when called from the command line it can write the extended x-sections to a different file.

MacroCombine.py
===============

MacroCombine.py is a routine that is intended to allow the user to selectively combine levels generated by MakeMacro.py into level files that are physically reasonable but more compact than the models generated by MakeMacro.

The underlying assumption made in MacroCombine is that levels that are combined have relative populations proportional to the statistical weight g of each original, uncombined level.
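
One plausible reading of that assumption, in sketch form (the names below are illustrative, and the real bookkeeping in MacroCombine.py is more involved)::

    def combine_levels(levels):
        """levels: list of (energy, g) pairs that are to be merged into one super-level.

        The merged level gets the summed statistical weight and a g-weighted mean
        energy; each original level is assumed to carry a fraction g_i / g_total of
        the merged population."""
        g_total = sum(g for _, g in levels)
        energy = sum(e * g for e, g in levels) / g_total
        fractions = [g / g_total for _, g in levels]
        return energy, g_total, fractions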

Typically one would run MacroCombine.py as follows for C II::

    MacroCombine.py -guess c_2_levels.dat c_2_lines.dat c_2_phot.dat

This would produce the following files:

* c_2_lev_final.dat - A compressed level file based on the assumption that levels with very similar excitation energies should be combined
* c_2_lines_final.dat - A set of lines for the compressed files
* c_2_phot_final.dat - A set of x-sections, matched to the other files

* c_2_levels_guess.dat - A file which has two extra columns, xlev and G, compared to a typical level file. The xlev column shows which of the original levels have been combined.

The last file needs to be inspected carefully. If the user wishes to change the choices that were made, they should edit this file to reflect which levels should be combined.

When this is done, one reruns the routine::

    MacroCombine.py c_2_levels_guess.dat c_2_lines.dat c_2_phot.dat

without the -guess option, to produce a final set of files.

**Note that at present the routine does not handle the collisional x-sections, though this should be straightforward to add.**

Regression
----------

Primarily to verify that changes made to Python do not inadvertently cause unexpected changes in models, several routines exist to run a fixed set of (relatively fast) models that are
**nominally** contained in Examples/regress.

Developers are encouraged to use these routines before they merge anything into one of the major branches of Python.

The routines involved are

* regression.py
* regression_check.py
* regression_plot.py

The primary routine is regression.py. All of the routines can be run with a -h switch to obtain information on the full set of command line options.

Setup
#####

Typically one should set up a directory, e.g. Regression, in which to run the routines. In the examples below, py87f is the version of Python being used at the time the directory was set up.

Python should be compiled with mpicc before running the regression program.

Basic Procedure
###############

The basic procedure is to run::

    regression.py py87f

This will create a directory py87f_231108, where 231108 is the current date. The pf files from the regression directory, as well as various ancillary files, will be copied into this directory, and all of the models contained therein will be run sequentially. In the absence of command line switches the models will be run with a default number of processors (currently 3). Assuming this is the first time the program is run, no comparison to previous runs will be made.

The models have been selected to test a variety of types of models and/or to highlight areas of concern. As a result, the models that are run are likely to change occasionally.

Once changes have been made to Python, one reruns the program, e.g.::

    regression.py py

This will create a directory py_231108 (assuming this is the same day) and repeat the previous procedure.

**If the program is run on the same day with the same version of Python, the older models will be overwritten. Typically one can avoid this by using py one time and py with the version number a second time, but there is also a command line switch to specify the name of the run-time directory.**

Assuming all of the various models run to completion, regression.py will call subroutines in regression_check.py and regression_plot.py to make comparisons between the models just run and those from the previous run. The plots (one for each model) will be contained in a directory called Xcompare.

Interpretation of the results
#############################

The models that are run are simple ones, and to allow one to proceed quickly, none of them is run to convergence. The outputs compare the spectra that were produced in the two runs of the program, both by looking to see how many lines in the ionization and detailed spectra have changed, and by generating plots that show comparisons of the spectra.
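
The sketch below illustrates the kind of comparison involved. The file layout, column choice and tolerance are assumptions made for illustration rather than the actual interface of regression_check.py::

    import numpy as np
    import matplotlib.pyplot as plt

    def compare_spectra(old_file, new_file, column=1, rtol=1e-6):
        """Count how many rows of a spectrum changed between two runs and overplot them."""
        old = np.loadtxt(old_file)
        new = np.loadtxt(new_file)
        changed = int(np.sum(~np.isclose(old[:, column], new[:, column], rtol=rtol)))
        plt.plot(old[:, 0], old[:, column], label="previous run")
        plt.plot(new[:, 0], new[:, column], "--", label="current run")
        plt.legend()
        plt.savefig("spec_compare.png")
        return changed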

Many times the results will be identical, but if a change between two versions of the program results in a different sequence of random numbers, then the spectra will change simply as a result of random noise, which is not a concern. There is no easy way to quantify whether changes are due to this effect or to something else, and so one simply has to gauge the results, through experience, by inspecting the plots that are produced.

Comments and additions
######################

Although regression.py generally produces a comparison between the set of models being run and the last set of models that were run, one can use regression_check.py to compare any two sets of runs::

    regression_check.py run1 run2

where one gives the names of the two directories to be compared.

While the regression procedure described here is generally set up to run on the models that are contained in the Examples/regress directory, regression.py has switches that allow one to run tests on models that are in any input directory. This can be useful if one wishes to test different models in order to solve specific problems, or simply to run a set of models sequentially.