
Commit 26ef349

Merge branch 'hpc-improv' of github.com:Edward-RSE/python into hpc-improv

Edward-RSE committed Jan 10, 2024
2 parents 694dc7e + e4cf621
Showing 32 changed files with 1,701 additions and 1,087 deletions.
12 changes: 12 additions & 0 deletions bin/Setup_Py_Dir
@@ -6,6 +6,7 @@
top=$PYTHON
xdata=xdata
xmod=xmod
+zdata=zdata

if [ -d "data" ]; then
rm data
@@ -15,6 +16,11 @@ if [ -d "xmod" ]; then
rm xmod
fi


if [ -d "zdata" ]; then
rm zdata
fi

if [ -d "$top/$xdata" ]; then
ln -s -f $top/$xdata data
fi
@@ -25,3 +31,9 @@ fi



if [ -d "$top/$zdata" ]; then
ln -s -f $top/$zdata zdata
fi



6 changes: 5 additions & 1 deletion docs/sphinx/source/atomic.rst
@@ -15,12 +15,16 @@ The purpose of documentation is as follows:
* to explain where the data currently used in Python comes from, and to explain how the raw data
is translated into a format that Python accepts

-The routines used to translate raw data format (as well as much of the raw data)
+The routines used to translate raw data format for two-level atoms (as well as much of the raw data)
are contained in a separate github `repository <https://github.com/agnwinds/data-gen>`_.
These routines are very "rough-and-ready" and not extensively documented, so users
should beware. On the other hand, they are not exceptionally complicated, so in most cases
it should be fairly clear from the code what the various routines do.

+The routines used to generate data for MacroAtoms are described in :doc:`Generating Macro Atom data <./py_progs/MakeMacro>`



The "masterfile" that determines what data will be read into Python is determined by the
line in the parameter file, which will read something like::
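
    Atomic_data                   data/h10_hetop_lohe1_standard80.dat

(The masterfile shown here, purely as an illustration, is the one used by the m16_agn.pf example later in this commit.)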

@@ -3,7 +3,9 @@ Bound-bound electron collision strengths

Source
======
-We use the Chianti atomic database, specifically the \*.scups files. These "contain the effective electron collision strengths scaled according to the rules formulated by `Burgess & Tully 1992, A&A, 254, 436 <http://articles.adsabs.harvard.edu/full/1992A%26A...254..436B>`_
+We use the Chianti atomic database, specifically the \*.scups files. These "contain the effective electron collision strengths
+scaled according to the rules formulated by
+`Burgess & Tully 1992, A&A, 254, 436 <https://ui.adsabs.harvard.edu/abs/1992A%26A...254..436B/abstract>`_
The values in the file are functional fits to :math:`\Upsilon(T)`, which we refer to as :math:`\Omega` in our calculation of the collisional de-excitation rate coefficient.
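
For orientation, the standard relation between the effective collision strength and the collisional de-excitation rate coefficient (the usual definition, not something read from the scups files themselves) is

.. math::

    q_{ul} = \frac{8.629 \times 10^{-6}}{g_u \sqrt{T}} \, \Upsilon(T) \quad \mathrm{cm^{3}\ s^{-1}}

where :math:`g_u` is the statistical weight of the upper level and :math:`T` is the electron temperature in K.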


@@ -87,4 +89,24 @@ There is also a member in the line structure (coll_index) which points to the re
Comments
========

This data has been generated to match the Verner line list. If we want to use the Kurucz line list, then a new set of data should be made.

There are currently 4 types of transitions that are read from the Chianti data

- 1 - Allowed transitions
- 2 - Forbidden transitions
- 3 - Intercombination transitions
- 4 - Allowed transitions with small oscillator strengths

which correspond to the transition types identified by Burgess & Tully

There are additional transition types in the Chianti database

- 5 - Dielectronic capture transitions
- 6 - Proton transitions


The latter are not currently used in **Python**.

Discussion of how Chianti handles transitions can be found in
`The CHIANTI upsilon files (ups and scups) <http://www.chiantidatabase.org/tech_reports/13_scups/chianti_report_13.pdf>`_
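
This selection is implemented in MakeMacro.py's get_collisions (see the diff later in this commit), which simply drops rows with a transition type of 5 or more from the table of scups data. A minimal illustration of that filtering, with made-up rows rather than real Chianti data::

    from astropy.table import Table

    # Toy scups-like table; only the Burgess & Tully type matters here
    xtab = Table(rows=[(1, 2, 1), (1, 3, 2), (1, 4, 5), (1, 5, 6)],
                 names=("ll", "ul", "ttype"))

    # Keep types 1-4; drop dielectronic capture (5) and proton (6) transitions
    xtab = xtab[xtab["ttype"] < 5]
    print(xtab)  # the rows with ttype 1 and 2 remain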

90 changes: 90 additions & 0 deletions docs/sphinx/source/py_progs/MakeMacro.rst
@@ -0,0 +1,90 @@
MakeMacro
---------

To facilitate the generation of data for MacroAtoms, several routines exist, including

* Makemacro.py
* RedoPhot.py
* MacroCombine.py

These routines use data contained in a (local) Chianti repository and from TopBase to construct the atomic data files
needed for MacroAtoms. Chianti is the primary source of data but, unfortunately for Python, it does not include photoionization x-sections, and so these are obtained from TopBase.

**It should be emphasised at the outset that, while these routines generate data that can be read and used within Python, they do not guarantee that the Macro atom models are physically sensible. In particular, it is easy to generate models that are overly complex, or ones that do not include all of the sublevels of a particular ion.**


Mechanics
=========

All of the programs can be run from the command line, and all can be run with a -h flag to obtain help information.

The programs must be run in an environment in which astropy, matplotlib and ChiantiPy are installed.
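
For example, assuming the pip-packaged versions are acceptable, the Python-side dependencies can be installed with::

    pip install astropy matplotlib ChiantiPy

ChiantiPy also requires a local copy of the Chianti atomic database, with the XUVTOP environment variable pointing to it.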

MakeMacro.py
============

MakeMacro.py is the main program, and needs to be run first. It retrieves data from a local installation of the Chianti database and from an external version of TopBase using wget.

If, for example, one wishes to make C a macro atom, one needs to run MakeMacro as follows::

MakeMacro.py c_1 20
MakeMacro.py c_2 20
MakeMacro.py c_3 20
MakeMacro.py c_4 20
MakeMacro.py c_5 20
MakeMacro.py c_6 20 True

These commands create all of the files needed to make carbon a macro atom. The number 20 indicates that one wants to create 20 levels for each of the ions; the number need not be the same for each ion. It should be at least as large as the number of levels one ultimately wants to use with Python, since the set of levels can be made smaller later (see MacroCombine.py below). The extra command line option True for c_6 causes the bare C ion to be created as well.

The files that will be created include:

* c_2_levels.dat - A set of levels for C II taken from Chianti
* c_2_lines.dat - A set of line files for C II taken from Chianti
* c_2_phot.dat - A set of photoionization x-sections taken from TopBase, matched to the level files. The photoionization x-sections will have been extended to a high energy (currently 100 keV) by the routines contained in RedoPhot.py
* c_2_upsilon.dat - A set of collisional x-sections taken from Chianti

There will also be a figure

* c_2_phot.png which shows how the photoionization x-sections have been extended to higher energies.

RedoPhot.py
===========

RedoPhot.py is the program that contains routines to extend x-sections to a higher energy. It is normally called as a subroutine from MakeMacro.py, although it can be called from the command line for testing.

The underlying issue is that the TopBase x-sections, which are simple tables of photon energy and x-sections, do not extend to very high or consistent energies. The routine assumes one can use the logarithmic slope of the x-sections at the high end of what TopBase provides to extend the x-sections.
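
A minimal sketch of this kind of extension (hypothetical names, and a slope estimated from the last two tabulated points; the actual implementation in RedoPhot.py may differ in detail)::

    import numpy as np

    def extend_xsection(energy, xsec, e_max=1.0e5, n_extra=20):
        """Extend a tabulated x-section to e_max (eV) using the
        logarithmic slope at the end of the table."""
        energy = np.asarray(energy, dtype=float)
        xsec = np.asarray(xsec, dtype=float)
        # d(log sigma)/d(log E) from the last two tabulated points
        slope = (np.log(xsec[-1]) - np.log(xsec[-2])) / \
                (np.log(energy[-1]) - np.log(energy[-2]))
        # logarithmically spaced energies from the end of the table to e_max
        e_new = np.logspace(np.log10(energy[-1]), np.log10(e_max), n_extra + 1)[1:]
        x_new = xsec[-1] * (e_new / energy[-1]) ** slope
        return np.append(energy, e_new), np.append(xsec, x_new)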

When called from MakeMacro.py the routine rewrites the same file; when called from the command line it can write the extended x-sections to a different file.

MacroCombine.py
===============

MacroCombine.py is a routine that is intended to allow the user to selectively combine levels generated by MakeMacro.py into level files that are physically reasonable but more compact than the models generated by MakeMacro.

The underlying assumption made in MacroCombine is that levels that are combined have relative populations that are proportional to the statistical weight g of the original uncombined levels.
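
As an illustration, a minimal sketch of the bookkeeping that assumption implies (hypothetical names, not MacroCombine's actual internals): the combined level is given the summed statistical weight and the g-weighted mean energy::

    import numpy as np

    def combine_levels(g, energy):
        """Combine sublevels into one super-level, assuming populations
        proportional to the statistical weights g."""
        g = np.asarray(g, dtype=float)
        energy = np.asarray(energy, dtype=float)
        g_tot = g.sum()
        e_mean = (g * energy).sum() / g_tot  # g-weighted mean energy
        return g_tot, e_mean

    # e.g. two sublevels with g = 2 and 4 at nearly equal excitation energies
    print(combine_levels([2, 4], [6.49, 6.50]))  # -> (6.0, 6.4966...)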

Typically one would run MacroCombine.py as follows for C II::

MacroCombine.py -guess c_2_levels.dat c_2_lines.dat c_2_phot.dat

This would produce the following files:

* c_2_lev_final.dat - A compressed level file based on the assumption that levels with very similar excitation energy should be combined
* c_2_lines_final.dat - A set of lines for the compressed files.
* c_2_phot_final.dat - A set of x-sections, matched to the other files

* c_2_levels_guess.dat - A file which has two extra columns, xlev and G, compared to a typical level file. The xlev column shows which of the original levels have been combined.

The last file needs to be inspected carefully. If the user wishes to change the choices that were made, he or she should edit this file to reflect which levels should be combined.

When this is done, one reruns the routine::

MacroCombine.py c_2_levels_guess.dat c_2_lines.dat c_2_phot.dat

without the -guess option, to produce a final set of files.

**Note that at present the routine does not handle the collisional x-sections, though this should be straightforward to add**




90 changes: 90 additions & 0 deletions docs/sphinx/source/py_progs/regression.rst
@@ -0,0 +1,90 @@
Regression
----------

Primarily to verify that changes made to Python do not inadvertently cause unexpected changes
in models, several routines exist to run a fixed set of (relatively fast) models that are
**nominally** contained in Examples/regress.

Developers are encouraged to use these routines before they merge anything into one of the
major branches of Python.

The routines involved are

* regression.py
* regression_check.py
* regression_plot.py

The primary routine is regression.py. All of the routines can be run with a -h switch
to obtain information on the full set of command line options.

Setup
#####

Typically one should set up a directory, e.g. Regression, in which to run the routines. In what follows,
suppose that py87f is the version of Python being run when the directory is set up.

Python should be compiled with mpicc before running the regression program.

Basic Procedure
###############

The basic procedure is to run::

    regression.py py87f

This will create a directory py87f_231108, where 231108 is the current date. The pf files from
the regression directory, as well as various ancillary files, will be copied into this directory,
and all of the models contained therein will be run sequentially. In the absence of command line
switches, the routines will be run with a default number of processors (currently 3).
Assuming this is the first time the program is run, no comparison to previous runs will be made.

The models have been selected
to test a variety of types of models and/or to highlight areas of concern. As a result, the models that are run are likely
to change occasionally.

Once changes have been made to Python, one reruns the program, e.g.::

regression.py py

This will create a directory py_231108 (assuming this is the same day) and repeat the previous
procedure.

**If the program is run on the same day with the same version of Python, the older models
will be overwritten. Typically one can avoid this by using py one time and py with the version number
a second time. But there is also a command line switch to specify the name of the run-time directory.**

Assuming all of the various models run to completion, regression.py will call subroutines in regression_check.py
and regression_plot.py to make comparisons between the models just run and the previous set. The plots (one for each model)
will be contained in a directory called Xcompare.
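
Putting the pieces together, a typical session might look like this (directory names assume both runs take place
on 8 November 2023)::

    regression.py py87f                            # baseline run
    # ... make changes to the code, recompile with mpicc ...
    regression.py py                               # second run; compared with the baseline
    regression_check.py py87f_231108 py_231108     # explicit comparison of any two runs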


Interpretation of the results
#############################

The models that are run are simple models, and to allow one to proceed quickly, none of the models is run to convergence.
The outputs compare the spectra that were produced in the two runs of the program, both by looking to see how many lines in
the ionization and detailed spectra have changed, and by generating plots that show comparisons of the spectra.

Many times the results will be identical, but if a change between two versions of the program results in a different
sequence of random numbers, then the spectra will change simply as a result of random noise, which is not a concern.
There is no easy way to quantify whether changes are due to this effect or to something else, and
so one simply has to gauge the results, through experience, by inspecting the plots that are produced.


Comments and additions
######################

Although regression.py generally produces a comparison between the set of models just run and the previous set, one can use
regression_check.py to compare any two sets of runs::

    regression_check.py run1 run2

where one gives the names of the two directories to be compared.


While the regression procedure described here is generally set up to run on the models that are contained in the Examples/regress directory,
regression.py has switches that allow one to run tests on models that are in any input directory. This can be useful if one wishes to test
different models in order to solve specific problems, or to run a set of models sequentially.




22 changes: 11 additions & 11 deletions examples/extended/m16_agn.pf
@@ -5,7 +5,7 @@ Central_object.mass(msol) 1e9
Central_object.radius(cm) 8.85667e+14

### Parameters for the Disk (if there is one)
-Disk.type(none,flat,vertically.extended) flat
+Disk.type(none,flat,vertically.extended,rmin>central.obj.rad) flat
Disk.radiation(yes,no) yes
Disk.rad_type_to_make_wind(bb,models) bb
Disk.temperature.profile(standard,readin) standard
@@ -14,13 +14,12 @@ Disk.radmax(cm) 1e+17

### Parameters for Boundary Layer or the compact object in an X-ray Binary or AGN
Central_object.radiation(yes,no) yes
-Central_object.rad_type_to_make_wind(bb,models,power,cloudy,brems) power
+Central_object.rad_type_to_make_wind(bb,models,power,cloudy,brems,mono) power
Central_object.luminosity(ergs/s) 1e+45
Central_object.power_law_index -0.9
-Central_object.geometry_for_source(sphere,lamp_post) sphere
+Central_object.geometry_for_source(sphere,lamp_post,bubble) sphere

### Parameters describing the various winds or coronae in the system
-Wind.radiation(yes,no) yes
Wind.number_of_components 1
Wind.type(SV,star,hydro,corona,kwd,homologous,shell,imported) sv
Wind.coord_system(spherical,cylindrical,polar,cyl_var) cylindrical
@@ -31,8 +30,9 @@ Wind.dim.in.z_or_theta.direction 200
Photons_per_cycle 2e7
Ionization_cycles 20.0
Spectrum_cycles 5.0
-Wind.ionization(on.the.spot,ML93,LTE_tr,LTE_te,fixed,matrix_bb,matrix_pow) matrix_pow
-Line_transfer(pure_abs,pure_scat,sing_scat,escape_prob,thermal_trapping,macro_atoms,macro_atoms_thermal_trapping) macro_atoms_thermal_trapping
+Wind.ionization(on.the.spot,ML93,LTE_tr,LTE_te,fixed,matrix_bb,matrix_pow,matrix_est) matrix_pow
+Line_transfer(pure_abs,pure_scat,sing_scat,escape_prob,thermal_trapping,macro_atoms_escape_prob,macro_atoms_thermal_trapping) macro_atoms_thermal_trapping
+Matom_transition_mode(mc_jumps,matrix) matrix
Atomic_data data/h10_hetop_lohe1_standard80.dat
Surface.reflection.or.absorption(reflect,absorb,thermalized.rerad) absorb
Wind_heating.extra_processes(none,adiabatic,nonthermal,both) adiabatic
@@ -56,11 +56,11 @@ Wind.filling_factor(1=smooth,<1=clumped) 0.01

### Parameters defining the spectra seen by observers

-Disk.rad_type_in_final_spectrum(bb,models,uniform) bb
-Central_object.rad_type_in_final_spectrum(bb,models,power,cloudy,brems) power
+Disk.rad_type_in_final_spectrum(bb,models,uniform,mono) bb
+Central_object.rad_type_in_final_spectrum(bb,models,power,cloudy,brems,mono) power

-### The minimum and maximum wavelengths in the final spectra
-Spectrum.nwave 10000
+### The minimum and maximum wavelengths in the final spectra and the number of wavelength bins
+Spectrum.nwave 10000
Spectrum.wavemin(Angstroms) 500.0
Spectrum.wavemax(Angstroms) 7500.0

@@ -93,4 +93,4 @@ Spectrum.type(flambda,fnu,basic) flambda
Reverb.type(none,photon,wind,matom) none

### Other parameters
-Photon_sampling.approach(T_star,cv,yso,AGN,tde_bb,min_max_freq,user_bands,cloudy_test,wide,logarithmic) agn
+Photon_sampling.approach(T_star,cv,yso,AGN,tde_bb,min_max_freq,user_bands,cloudy_test,wide,logarithmic) agn
2 changes: 1 addition & 1 deletion py_progs/MacroCombine.py
@@ -129,7 +129,7 @@ def write_phot(outfile,xtab,xsec):
i=0
while i<len(xtab):
one=xtab[i]
-string='PhotMacS %3d %3d %3d %3d %10.6e %3d' % (one['z'],one['ion'],one['ll'],one['ul'],one['ethresh'],one['nlines'])
+string='PhotMacS %3d %3d %3d %3d %10.6f %3d' % (one['z'],one['ion'],one['ll'],one['ul'],one['ethresh'],one['nlines'])
print(string)
f.write('%s\n' %string)
one_x=xsec[i]
Expand Down
18 changes: 16 additions & 2 deletions py_progs/MakeMacro.py
@@ -62,6 +62,7 @@
import numpy as np
from astropy.table import Table, join
from astropy.io import ascii
+import RedoPhot


# The levels format looks like
@@ -378,6 +379,13 @@ def get_lines(ion="h_4", nlevels=10):

# Check for lines that have a Wavelength of 0 and if this happens fix the
# wavelength using el and eu, but issue a warning when you do this
+# Also ensure that the f values are not zero. Note that this
+# value may be higher than we want, as the f values for forbidden lines
+# may be even lower than this. However, if one decides to change this,
+# one needs to be sure that the output formats for the various routines capture the full range, as the
+# f value is compared to the value for collisions.
+
+f_min=0.000001

for one in xxtab:
if one["Wave"] == 0:
@@ -386,8 +394,12 @@
"Line with ll %d and ul %d is missing wavelength. Correcting to %.3f using el and eu"
% (one["ll"], one["ul"], one["Wave"])
)
if one["f"]<f_min:
one['f']=f_min
print("Line with ll %d and ul %d is missing f. Changing to %.6f so line has way out"
% (one["ll"], one["ul"],one["f"]))

xxtab["Wave"].format = "10.3f"
xxtab["Wave"].format = "10.6f"
xxxtab = xxtab["Dtype", "z", "ion", "Wave", "f", "gl", "gu", "el", "eu", "ll", "ul"]
return xxxtab

@@ -831,6 +843,7 @@ def get_collisions(ion="h_1", nlev=20):
)

print(xtab)
+xtab=xtab[xtab['ttype']<5]

npossible = 0
for one in xtab:
@@ -858,7 +871,7 @@
xout = open(ion + "_upsilon.dat", "w")
for one in xxtab:
# print(one)
xstring = "CSTREN Line %3d %3d %10.3f %9.6f %2d %2d %10.6f %10.6f " % (
xstring = "CSTREN Line %3d %3d %10.6f %9.6f %2d %2d %10.6f %10.6f " % (
one["z"],
one["ion"],
one["Wave"],
@@ -998,6 +1011,7 @@ def doit(atom="h_1", nlev=10, next_ion = False):
get_phot(atom)
make_phot(atom)
write_phot(atom)
+RedoPhot.redo_one('%s_phot.dat' % atom, atom)

xcol = get_collisions(atom, nlev)
return
Expand Down
