Commit

mkdocs update

profLewis committed Oct 8, 2020
1 parent 3f6378f commit b3efd29
Showing 36 changed files with 527,232 additions and 528,698 deletions.
1 change: 1 addition & 0 deletions config/mkdocs.yml
@@ -12,6 +12,7 @@ nav:
- 016_Python_for.md
- 017_Functions.md
plugins:
- git-revision-date-localized
- exclude:
glob:
- "*blank.tif"
12 changes: 12 additions & 0 deletions docs/002_Unix.md
@@ -30,6 +30,8 @@ The code cells in this notebook take Python commands by default, but we can run

```python
!pwd


```

/nfs/cfs/home3/Uucfa6/ucfalew/geog0111/notebooks
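
The same shell-escape mechanism works for any Unix command; as a minimal extra sketch (mirroring the `ls -lh` call used below):

```python
# use the ! escape to run a shell command from a code cell
!ls -lh
```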
@@ -134,6 +136,16 @@ ls -lh README.md
-rw-r--r-- 1 ucfalew ucfa 3.3K Oct 5 09:38 README.md



    100 = 1 x 2^2 + 0 x 2^1 + 0 x 2^0 = 4
    110 = 1 x 2^2 + 1 x 2^1 + 0 x 2^0 = 6

Here, the file size is `3.3K` (3.3 kBytes), and the file is owned by the user `ucfalew`. The field `-rw-r--r--` provides information on file permissions. Ignoring the first `-`, it is in 3 sets of 3 bits:

rw- r-- r--
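
As a quick check of the binary arithmetic above, a minimal sketch in Python (`int(bits, 2)` interprets a string of bits as a base-2 number):

```python
# interpret each permission bit pattern as a base-2 number
for bits in ['100', '110', '111']:
    print(bits, '->', int(bits, 2))
```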
26 changes: 25 additions & 1 deletion docs/004_Accounts.md
@@ -29,6 +29,7 @@ sites = ['https://n5eil01u.ecs.nsidc.org',\

l = Cylog(sites)
test = l.login()

```

### Test
@@ -38,10 +39,33 @@ You can run a test on your login to NASA Earthdata using the information you have

```python
from geog0111.modis import test_login
do_test=False
do_test=True
assert test_login(do_test)
```

--> keeping existing file /shared/groups/jrole001/geog0111/work/e4ftl01.cr.usgs.gov.store
--> parsing URLs from html file 1 items
--> discovered 1 files with pattern MOLA in https://e4ftl01.cr.usgs.gov/
--> keeping existing file /shared/groups/jrole001/geog0111/work/e4ftl01.cr.usgs.gov/MOLA.store
--> parsing URLs from html file 1 items
--> discovered 1 files with pattern MYD11_L2.006 in https://e4ftl01.cr.usgs.gov/MOLA
--> keeping existing file /shared/groups/jrole001/geog0111/work/e4ftl01.cr.usgs.gov/MOLA/MYD11_L2.006.store
--> parsing URLs from html file 1 items
--> discovered 1 files with pattern 2002.07.04 in https://e4ftl01.cr.usgs.gov/MOLA/MYD11_L2.006
--> keeping existing file /shared/groups/jrole001/geog0111/work/e4ftl01.cr.usgs.gov/MOLA/MYD11_L2.006/2002.07.04.store
--> parsing URLs from html file 1 items
--> discovered 1 files with pattern MYD11_L2*0325*.hdf in https://e4ftl01.cr.usgs.gov/MOLA/MYD11_L2.006/2002.07.04
--> trying https://e4ftl01.cr.usgs.gov/MOLA/MYD11_L2.006/2002.07.04/MYD11_L2.A2002185.0325.006.2015142192613.hdf
--> trying get
--> trying https://e4ftl01.cr.usgs.gov/MOLA/MYD11_L2.006/2002.07.04/MYD11_L2.A2002185.0325.006.2015142192613.hdf
--> code 401
--> trying another
--> getting login
--> logging in to https://e4ftl01.cr.usgs.gov/
--> data read from https://e4ftl01.cr.usgs.gov/
--> code 200


## Reset password

If you are interested, you can see the help page for `Cylog`. It shows, for instance, how to over-ride the current entry (e.g. if you have changed your password) by using `force=True`.
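
A minimal sketch of that over-ride, assuming `Cylog` is importable from `geog0111.cylog` and accepts `force` as a keyword, as its help describes:

```python
from geog0111.cylog import Cylog   # import path assumed

sites = ['https://n5eil01u.ecs.nsidc.org']   # as in the snippet above
l = Cylog(sites, force=True)   # force=True over-rides the stored entry
test = l.login()               # prompts for fresh credentials
```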
43 changes: 19 additions & 24 deletions docs/022_Read_write_files.md
@@ -113,7 +113,6 @@ assert data_url == data_file
print('files are the same')
```

--> deleting existing file /nfs/cfs/home3/Uucfa6/ucfalew/geog0111/notebooks/work/www.json.org/json-en.html.store
--> trying https://www.json.org/json-en.html


@@ -146,21 +145,18 @@ modis = Modis(**kwargs)
url = modis.get_url(year="2020",month="01",day="01")[0]
```

--> retrieving SDS MCD15A3H from database
--> found SDS names in database
--> ['FparExtra_QC', 'FparLai_QC', 'FparStdDev_500m', 'Fpar_500m', 'LaiStdDev_500m', 'Lai_500m']
--> product MCD15A3H -> code MOTA
--> getting database from command line
--> keeping existing file /nfs/cfs/home3/Uucfa6/ucfalew/geog0111/notebooks/work/e4ftl01.cr.usgs.gov.store
--> keeping existing file /Users/plewis/Documents/GitHub/geog0111/notebooks/work/e4ftl01.cr.usgs.gov.store
--> parsing URLs from html file 1 items
--> discovered 1 files with pattern MOTA in https://e4ftl01.cr.usgs.gov/
--> keeping existing file /nfs/cfs/home3/Uucfa6/ucfalew/geog0111/notebooks/work/e4ftl01.cr.usgs.gov/MOTA.store
--> keeping existing file /Users/plewis/Documents/GitHub/geog0111/notebooks/work/e4ftl01.cr.usgs.gov/MOTA.store
--> parsing URLs from html file 1 items
--> discovered 1 files with pattern MCD15A3H.006 in https://e4ftl01.cr.usgs.gov/MOTA
--> keeping existing file /nfs/cfs/home3/Uucfa6/ucfalew/geog0111/notebooks/work/e4ftl01.cr.usgs.gov/MOTA/MCD15A3H.006.store
--> keeping existing file /Users/plewis/Documents/GitHub/geog0111/notebooks/work/e4ftl01.cr.usgs.gov/MOTA/MCD15A3H.006.store
--> parsing URLs from html file 1 items
--> discovered 1 files with pattern 2020.01.01 in https://e4ftl01.cr.usgs.gov/MOTA/MCD15A3H.006
--> keeping existing file /nfs/cfs/home3/Uucfa6/ucfalew/geog0111/notebooks/work/e4ftl01.cr.usgs.gov/MOTA/MCD15A3H.006/2020.01.01.store
--> keeping existing file /Users/plewis/Documents/GitHub/geog0111/notebooks/work/e4ftl01.cr.usgs.gov/MOTA/MCD15A3H.006/2020.01.01.store
--> parsing URLs from html file 1 items
--> discovered 1 files with pattern MCD15A3H*.h08v06*.hdf in https://e4ftl01.cr.usgs.gov/MOTA/MCD15A3H.006/2020.01.01

Expand All @@ -176,16 +172,14 @@ print(f'data for {url} cached in {url.local()}')
print(f'dataset is {len(b)} bytes')
```

--> retrieving data https://e4ftl01.cr.usgs.gov/MOTA/MCD15A3H.006/2020.01.01/MCD15A3H.A2020001.h08v06.006.2020006032951.hdf from database
--> local file /nfs/cfs/home3/Uucfa6/ucfalew/geog0111/notebooks/work/e4ftl01.cr.usgs.gov/MOTA/MCD15A3H.006/2020.01.01/MCD15A3H.A2020001.h08v06.006.2020006032951.hdf.store exists
--> updated cache database in /shared/groups/jrole001/geog0111/work/database.db


data for https://e4ftl01.cr.usgs.gov/MOTA/MCD15A3H.006/2020.01.01/MCD15A3H.A2020001.h08v06.006.2020006032951.hdf cached in /nfs/cfs/home3/Uucfa6/ucfalew/geog0111/notebooks/work/e4ftl01.cr.usgs.gov/MOTA/MCD15A3H.006/2020.01.01/MCD15A3H.A2020001.h08v06.006.2020006032951.hdf.store
data for https://e4ftl01.cr.usgs.gov/MOTA/MCD15A3H.006/2020.01.01/MCD15A3H.A2020001.h08v06.006.2020006032951.hdf cached in /Users/plewis/Documents/GitHub/geog0111/notebooks/work/e4ftl01.cr.usgs.gov/MOTA/MCD15A3H.006/2020.01.01/MCD15A3H.A2020001.h08v06.006.2020006032951.hdf.store
dataset is 9067184 bytes


--> local file /nfs/cfs/home3/Uucfa6/ucfalew/geog0111/notebooks/work/e4ftl01.cr.usgs.gov/MOTA/MCD15A3H.006/2020.01.01/MCD15A3H.A2020001.h08v06.006.2020006032951.hdf.store exists
--> retrieving data https://e4ftl01.cr.usgs.gov/MOTA/MCD15A3H.006/2020.01.01/MCD15A3H.A2020001.h08v06.006.2020006032951.hdf from database
--> local file /Users/plewis/Documents/GitHub/geog0111/notebooks/work/e4ftl01.cr.usgs.gov/MOTA/MCD15A3H.006/2020.01.01/MCD15A3H.A2020001.h08v06.006.2020006032951.hdf.store exists
--> updated cache database in /Users/plewis/Documents/GitHub/geog0111/notebooks/work/database.db
--> local file /Users/plewis/Documents/GitHub/geog0111/notebooks/work/e4ftl01.cr.usgs.gov/MOTA/MCD15A3H.006/2020.01.01/MCD15A3H.A2020001.h08v06.006.2020006032951.hdf.store exists


We could explicitly write the data to a file, but since we are using a cache, there is no real point. This means that we can just use the URL to access the dataset. If we do need to specify the filename explicitly for any other codes, we can use `url.local()`.
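
For completeness, a minimal sketch of such an explicit write, assuming `b` holds the bytes read above (the output filename is illustrative):

```python
from pathlib import Path

# write the cached bytes out explicitly -- normally unnecessary,
# since the cache already holds the data
ofile = Path('work/my_copy_MCD15A3H.hdf')   # hypothetical output name
ofile.parent.mkdir(parents=True, exist_ok=True)
ofile.write_bytes(b)
print(f'wrote {ofile.stat().st_size} bytes to {ofile}')
```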
@@ -228,7 +222,8 @@ returns a list of sub-dataset information. Each item in the list is a tuple of t

We read the dataset with:

gdal.Open(filename).ReadAsArray()
gsub = gdal.Open(filename)
data = gsub.ReadAsArray()

In the illustration below, we will examine only the first sub-dataset `g.GetSubDatasets()[0]`.

@@ -246,22 +241,22 @@ kwargs = {
modis = Modis(**kwargs)
url = modis.get_url(year="2020",month="01",day="01")[0]

# open
g = gdal.Open(str(url.local()))
# set True to force download of the local file
filename = url.local(True).as_posix()

# open the local file associated with the dataset
g = gdal.Open(filename)
if g:
# get the first SDS only for illustration
filename,name = g.GetSubDatasets()[0]
print(f'dataset info is: {name}')
# read the dataset
data = gdal.Open(filename).ReadAsArray()
print(f'dataset read is shape {data.shape} and type {type(data)}')
gsub = gdal.Open(filename)
if gsub:
data = gsub.ReadAsArray()
print(f'dataset read is shape {data.shape} and type {type(data)}')
```

dataset info is: [2400x2400] Fpar_500m MOD_Grid_MCD15A3H (8-bit unsigned integer)
dataset read is shape (2400, 2400) and type <class 'numpy.ndarray'>


#### Exercise 3

name = '[2400x2400] Fpar_500m MOD_Grid_MCD15A3H (8-bit unsigned integer)'
14 changes: 0 additions & 14 deletions docs/022_Read_write_files_answers.md
@@ -78,9 +78,6 @@ hdf_urls = modis.get_url(year="2020",month="01",day="*")
print(get_locals(hdf_urls))
```

[PosixPath('/nfs/cfs/home3/Uucfa6/ucfalew/geog0111/notebooks/work/e4ftl01.cr.usgs.gov/MOTA/MCD15A3H.006/2020.01.01/MCD15A3H.A2020001.h08v06.006.2020006032951.hdf.store'), PosixPath('/nfs/cfs/home3/Uucfa6/ucfalew/geog0111/notebooks/work/e4ftl01.cr.usgs.gov/MOTA/MCD15A3H.006/2020.01.05/MCD15A3H.A2020005.h08v06.006.2020010210940.hdf.store'), PosixPath('/nfs/cfs/home3/Uucfa6/ucfalew/geog0111/notebooks/work/e4ftl01.cr.usgs.gov/MOTA/MCD15A3H.006/2020.01.09/MCD15A3H.A2020009.h08v06.006.2020014204616.hdf.store'), PosixPath('/nfs/cfs/home3/Uucfa6/ucfalew/geog0111/notebooks/work/e4ftl01.cr.usgs.gov/MOTA/MCD15A3H.006/2020.01.13/MCD15A3H.A2020013.h08v06.006.2020018030252.hdf.store'), PosixPath('/nfs/cfs/home3/Uucfa6/ucfalew/geog0111/notebooks/work/e4ftl01.cr.usgs.gov/MOTA/MCD15A3H.006/2020.01.17/MCD15A3H.A2020017.h08v06.006.2020022034013.hdf.store'), PosixPath('/nfs/cfs/home3/Uucfa6/ucfalew/geog0111/notebooks/work/e4ftl01.cr.usgs.gov/MOTA/MCD15A3H.006/2020.01.21/MCD15A3H.A2020021.h08v06.006.2020026032135.hdf.store'), PosixPath('/nfs/cfs/home3/Uucfa6/ucfalew/geog0111/notebooks/work/e4ftl01.cr.usgs.gov/MOTA/MCD15A3H.006/2020.01.25/MCD15A3H.A2020025.h08v06.006.2020030025757.hdf.store'), PosixPath('/nfs/cfs/home3/Uucfa6/ucfalew/geog0111/notebooks/work/e4ftl01.cr.usgs.gov/MOTA/MCD15A3H.006/2020.01.29/MCD15A3H.A2020029.h08v06.006.2020034165001.hdf.store')]


#### Exercise 3

name = '[2400x2400] Fpar_500m MOD_Grid_MCD15A3H (8-bit unsigned integer)'
@@ -104,9 +101,6 @@ sds_name = name.split()[1]
print(sds_name)
```

Fpar_500m



```python
# Write a function called get_data that reads an HDF (MODIS) filename,
@@ -159,11 +153,3 @@ for k,v in hdf_dict.items():
# do some neat formatting on k
print(f'{k:<20s}: {v.shape}')
```

Fpar_500m : (2400, 2400)
Lai_500m : (2400, 2400)
FparLai_QC : (2400, 2400)
FparExtra_QC : (2400, 2400)
FparStdDev_500m : (2400, 2400)
LaiStdDev_500m : (2400, 2400)

6 changes: 3 additions & 3 deletions docs/024_Image_display.md
@@ -52,8 +52,8 @@ kwargs = {

modis = Modis(**kwargs)
# specify day of year (DOY) and year
data_MCD15A3H = modis.get_data(2019,1+4*10)

data_MCD15A3H = modis.get_data(2019,41)
# loop over dictionary items
for k,v in data_MCD15A3H.items():
if k in modis.sds:
@@ -108,7 +108,7 @@ fig.colorbar(im, ax=axs[0])



<matplotlib.colorbar.Colorbar at 0x7fd01f4cead0>
<matplotlib.colorbar.Colorbar at 0x7f390b53de90>



@@ -430,7 +430,7 @@ plt.legend(handles=patches,



<matplotlib.legend.Legend at 0x7fd0292f6c10>
<matplotlib.legend.Legend at 0x7f3915363190>



8 changes: 8 additions & 0 deletions docs/031_Numpy.md
@@ -507,10 +507,15 @@ Below are some representative arithmetic operations that you can use on arrays.


```python
import numpy as np

# initialise some numbers
b = np.arange(4)
print(f'{b}^2 = {b**2}\n')

b = np.arange(4)
print(f'e^{b} = {np.exp(b)}\n')

a = np.array([20, 30, 40, 50])
print(f"assuming in radians,\n10*sin({a}) = {10 * np.sin(a)}")

@@ -524,6 +529,8 @@ print(f"Find the std dev of an array: a.std(): {a.std() : >5.2f}")

[0 1 2 3]^2 = [0 1 4 9]

e^[0 1 2 3] = [ 1. 2.71828183 7.3890561 20.08553692]

assuming in radians,
10*sin([20 30 40 50]) = [ 9.12945251 -9.88031624 7.4511316 -2.62374854]

@@ -587,3 +594,4 @@ Remember:
|`np.median(a)` | median of values in array `a` |`axis=N` : value taken over axis `N` |
|`np.sqrt(a)` | square root of values in array `a` ||
|`np.sin(a)` | sine of values in array `a`, assumed `a` values in radians etc.||
|`np.exp(a)` | exponential of values in array `a`||
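
A short sketch exercising a few of these entries, with values chosen so the results are easy to verify by hand:

```python
import numpy as np

a = np.array([[1., 4., 9.], [16., 25., 36.]])
print(np.median(a))          # over all values -> 12.5
print(np.median(a, axis=0))  # down each column -> [ 8.5 14.5 22.5]
print(np.sqrt(a))            # element-wise square root
print(np.exp(np.zeros(3)))   # e^0 -> [1. 1. 1.]
```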
25 changes: 24 additions & 1 deletion docs/032_More_numpy.md
@@ -499,7 +499,7 @@ fig.colorbar(im, ax=axs)



<matplotlib.colorbar.Colorbar at 0x7f3260dcad10>
<matplotlib.colorbar.Colorbar at 0x7f1e3a320f50>



@@ -873,3 +873,26 @@ To get some feedback on how you are doing, you should complete and submit the fo
### Summary

In this section, we have expanded our understanding of `numpy` processing to include topics such as finding the array index where some pattern occurs (e.g. `argmin()`, `argsort()` or `where()`) and how to generate and use masks for selecting data. We should now have a good grasp of the role of axes, slicing, and reconciling multi-dimensional grids for efficient processing.

Remember:


| Function | description | keywords |
|---|---|---|
| `np.loadtxt(f)` | Load data from file `f` into numpy array ||
| `x[start:stop:step]` | 1-D slice of array `x` ||
| `slice(start, stop, step)` | slice object to apply, e.g. `x[slice(start, stop, step)]` ||
| `np.argmin(x)` | return 1-D index of minimum value in array (/axis) | `axis=N` : value taken over axis `N` |
| `np.argmax(x)` | return 1-D index of maximum value in array (/axis) | `axis=N` : value taken over axis `N` |
| `x > 1` | logical operator resulting in boolean array, e.g. to use for masks ||
| `np.logical_not(x)` | element-wise not over array ||
| `np.logical_or(x,y)` | element-wise `x` or `y` over arrays ||
| `np.logical_and(x,y)` | element-wise `x` and `y` over arrays ||
| `np.where(x)` | indices where `x` is `True` ||
| `x.flatten()` | convert ND array `x` into a 1-D copy ||
| `x.ravel()` | convert ND array `x` into a 1-D array (a view where possible) ||
| `x.reshape(shape)` | apply shape `shape` to array `x` ||
| `np.unravel_index(indices, shape)` | unravel 1-D indices `indices` to ND indices defined by `shape` ||
| `np.newaxis` | add a new axis of size `(1,)` to an array, for reconciling multiple dimensions in broadcasting ||
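
A short sketch exercising several of these entries on a small array:

```python
import numpy as np

x = np.array([[3, 1, 4], [1, 5, 9]])
print(np.argmin(x))            # 1 : flattened index of the first minimum
print(np.unravel_index(np.argmin(x), x.shape))   # (0, 1) : as a 2-D index
mask = np.logical_and(x > 3, x < 9)              # element-wise and
print(mask)                    # [[False False  True] [False  True False]]
print(np.where(x == 1))        # indices where the condition holds
print(x.ravel()[::2])          # 1-D view, sliced with a step -> [3 4 5]
```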


48 changes: 48 additions & 0 deletions docs/040_GDAL_mosaicing_and_masking.md
@@ -536,3 +536,51 @@ We have started to do some fuller geospatial processing now. We have seen how to
We have also seen some utility functions to aid our use of these data: `Modis.get_files` to get the SDS or filenames for a particular configuration, and `Modis.get_modis` to get a `gdal` VRT file with mosaiced tiles and vector masking.

You should make sure that you are able to use one or more of these methods to obtain a numpy array with a MODIS dataset for a particular place and time.

Remember:


Modis library:

    from geog0111.modis import Modis

    kwargs = {
        'tile'      :    ['h17v03'],
        'product'   :    'MCD15A3H',
        'sds'       :    'Lai_500m',
    }
    modis = Modis(**kwargs)


| function|comment|example|
|---|---|---|
| `modis.get_data(year,doy)` | Dictionary of 2D data arrays by SDS key for MODIS product for year `year` and day of year `doy` | `idict = modis.get_data(2019,41)`|
|`modis.get_files(year,doy)`| Filename and SDS list of MODIS product for year `year` and day of year `doy` | `files, sds = modis.get_files(2019,41)`|
|`modis.get_modis(year,doy,warp_args=warp_args)` | Dictionary of 2D/3D data arrays by SDS key for MODIS product for year `year` and day of year `doy`, warped by `warp_args` (see `gdal.Warp()`). Note that `doy` can be a list of `doys` or a wildcard. If > 1 band, the dataset is 3D and the key `bandnames` is included ||
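
A short usage sketch tying the table to the `kwargs` pattern above (day of year 41 is illustrative):

```python
from geog0111.modis import Modis

kwargs = {
    'tile'    : ['h17v03'],
    'product' : 'MCD15A3H',
    'sds'     : 'Lai_500m',
}
modis = Modis(**kwargs)

# dictionary of 2-D data arrays, keyed by SDS name
idict = modis.get_data(2019, 41)
for k, v in idict.items():
    print(k, getattr(v, 'shape', v))
```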
`gdal`:


|function|comment|example and keywords|
|---|---|---|
|`g = gdal.Open(filename)` | Open geospatial file `filename` and return `gdal` object `g` (`None` if the file is not opened correctly)||
|`g.GetSubDatasets()` | Get list of sub-datasets from `gdal` object `g`||
|`g.ReadAsArray(c0,r0,nc,nr)` | Read dataset from `gdal` object `g` into array, from column `c0` for `nc` columns and row `r0` for `nr` rows. Set as `None` for defaults, or omit.||
|`gdal.BuildVRT(ofile, sds)` | create `gdal` VRT (wrapper) file called `ofile` for SDS/file `sds` | `files,sds = modis.get_files(year,doy)`|
||| `separate=True` for separate bands |
||| `ofile = f"work/stitch_full_{year}_{doy:03d}.vrt"`|
|||`stitch_vrt = gdal.BuildVRT(ofile, sds[0])`|
|`gdal.Info(f)` | Print information about geospatial file `f` ||
|`gdal.Warp(ofile,ifile)` | Warp `ifile` to `ofile` with keyword parameters | Keywords: |
|||`format = 'MEM'` or `format = 'GTiff'` : output format|
|||`options=['COMPRESS=LZW']` : compression option for GTiff etc.|
|||`dstNodata=255` : no data value |
|||`cropToCutline = True` : whether to crop to cutline or bounds |
|||`cutlineDSName = 'data/TM_WORLD_BORDERS-0.3.shp'` : vector dataset for cutline|
|||`cutlineWhere = "FIPS='UK'"` : identifier information for cutline |
|`g.FlushCache()` | flush open `gdal` object `g` (force write to file) ||
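
Putting the `gdal` entries together, a sketch of mosaicing tiles into a VRT and warping the result to a country boundary. The tile list, output name and shapefile path follow earlier sections but are illustrative here:

```python
from osgeo import gdal
from geog0111.modis import Modis

modis = Modis(product='MCD15A3H', tile=['h17v03', 'h18v03'], sds='Lai_500m')
files, sds = modis.get_files(2019, 41)

# mosaic the first SDS over all tiles into a VRT wrapper file
ofile = 'work/stitch_2019_041.vrt'
stitch_vrt = gdal.BuildVRT(ofile, sds[0])
stitch_vrt.FlushCache()   # force write to file

# warp the mosaic, cropped to the UK boundary, into memory
g = gdal.Warp('', ofile,
              format='MEM', dstNodata=255,
              cutlineDSName='data/TM_WORLD_BORDERS-0.3.shp',
              cutlineWhere="FIPS='UK'", cropToCutline=True)
if g:
    data = g.ReadAsArray()
    print(f'masked dataset shape: {data.shape}')
```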



