first sweep (#689)
Co-authored-by: Dom Heinzeller <[email protected]>
ashley314 and climbfuji authored Nov 15, 2023
1 parent 97b9f57 commit 7ea48a2
Showing 7 changed files with 33 additions and 31 deletions.
2 changes: 1 addition & 1 deletion docs/FAQ/FAQ.rst
@@ -170,7 +170,7 @@ This is often accompanied by failure of the python tests in ``ioda``. A likely

Conda installs its own packages like ``hdf5``, ``NetCDF``, and ``openssl`` that can conflict with libraries installed via the `spack-stack <https://github.com/jcsda/spack-stack.git>`_. This applies in particular to the IODA Python API, which is now enabled by default in ``ioda``.

- These conflicts are not easily addressed since the dependencies are built into ``conda`` through `rpaths <https://en.wikipedia.org/wiki/Rpath>`_. At this time we recommend that you avoid using conda if possible when building and running JEDI applications, and use alternative methods described in the `spack-stack documentation <https://spack-stack.readthedocs.io/en/1.5.0/MaintainersSection.html#testing-adding-packages-outside-of-spack>`_ instead.
+ These conflicts are not easily addressed since the dependencies are built into ``conda`` through `rpaths <https://en.wikipedia.org/wiki/Rpath>`_. At this time we recommend that you avoid using conda if possible when building and running JEDI applications, and use alternative methods described in the `spack-stack documentation <https://spack-stack.readthedocs.io/en/1.5.1/MaintainersSection.html#testing-adding-packages-outside-of-spack>`_ instead.
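A quick first check for this class of problem is to confirm whether a conda environment is active before building. This is an illustrative sketch, not part of the official instructions; it assumes only a POSIX shell and the standard ``CONDA_PREFIX`` variable that conda sets on activation:

```shell
# Check whether a conda environment is active; if so, its lib directory
# may shadow spack-stack libraries at build or run time.
if [ -n "${CONDA_PREFIX:-}" ]; then
    echo "conda environment active: ${CONDA_PREFIX}"
    echo "consider 'conda deactivate' before building JEDI"
else
    echo "no conda environment active"
fi
```

On Linux, running ``ldd`` on a suspect shared library (or ``otool -L`` on macOS) then shows which copies of ``hdf5`` or ``openssl`` are actually resolved at load time.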

Git LFS Smudge error when running ``ecbuild``
---------------------------------------------
4 changes: 2 additions & 2 deletions docs/inside/developer_tools/cmake.rst
@@ -39,7 +39,7 @@ Installing CMake and CTest
^^^^^^^^^^^^^^^^^^^^^^^^^^

This step is only necessary if you are working outside preconfigured JEDI environments or containers, and when not following the
- `spack-stack <https://spack-stack.readthedocs.io/en/1.5.0/>`_ instructions to set up your environment.
+ `spack-stack <https://spack-stack.readthedocs.io/en/1.5.1/>`_ instructions to set up your environment.

For the Mac, use `homebrew <https://brew.sh/>`_ to install CMake.
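For example (a sketch assuming Homebrew is already installed; ``ctest`` ships as part of the CMake package, so no separate install is needed):

```shell
# Install CMake via Homebrew; ctest is bundled with it.
# Guarded so the snippet degrades gracefully where brew is absent.
if command -v brew >/dev/null 2>&1; then
    brew install cmake
else
    echo "Homebrew not found; see https://brew.sh/ for install instructions"
fi

# Verify both tools afterwards:
cmake --version || echo "cmake not yet on PATH"
ctest --version || echo "ctest not yet on PATH"
```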

@@ -198,7 +198,7 @@ source directories.
Installing ecbuild
^^^^^^^^^^^^^^^^^^

- As before, the steps shown in this section are only necessary if you are working outside preconfigured JEDI environments or containers, and when not following the recommendation to use `spack-stack <https://spack-stack.readthedocs.io/en/1.5.0/>`_ to set up your environment.
+ As before, the steps shown in this section are only necessary if you are working outside preconfigured JEDI environments or containers, and when not following the recommendation to use `spack-stack <https://spack-stack.readthedocs.io/en/1.5.1/>`_ to set up your environment.

For all systems, you need to have CMake and eigen3 installed before installing ecbuild.
To install these on the Mac:
14 changes: 7 additions & 7 deletions docs/using/building_and_running/running_skylab.rst
@@ -8,27 +8,27 @@ List of spack, software, and AMIs

Versions used:

- - spack-stack-1.5.0 from September 2023
+ - spack-stack-1.5.1 from November 2023

-   * https://github.com/JCSDA/spack-stack/tree/1.5.0
+   * https://github.com/JCSDA/spack-stack/tree/1.5.1

-   * https://spack-stack.readthedocs.io/en/1.5.0
+   * https://spack-stack.readthedocs.io/en/1.5.1

- AMI available in us-east-1 region (N. Virginia)

- Red Hat 8 with gnu-11.2.1 and openmpi-4.1.5:

- AMI Name skylab-6.0.0-redhat8
+ AMI Name skylab-6.1.0-redhat8

- AMI ID ami-0f1750cc8882b7d75 (https://us-east-1.console.aws.amazon.com/ec2/home?region=us-east-1#ImageDetails:imageId=ami-0f1750cc8882b7d75)
+ AMI ID ami-06497c2e0f2ded6cf (https://us-east-1.console.aws.amazon.com/ec2/home?region=us-east-1#ImageDetails:imageId=ami-06497c2e0f2ded6cf)

- AMI available in us-east-2 region (Ohio)

- Red Hat 8 with gnu-11.2.1 and openmpi-4.1.5:

- AMI Name skylab-6.0.0-redhat8
+ AMI Name skylab-6.1.0-redhat8

- AMI ID ami-0d671c49d7bf7918c (https://us-east-2.console.aws.amazon.com/ec2/v2/home?region=us-east-2#ImageDetails:imageId=ami-0d671c49d7bf7918c)
+ AMI ID ami-0b1ce08e2fd42333b (https://us-east-2.console.aws.amazon.com/ec2/v2/home?region=us-east-2#ImageDetails:imageId=ami-0b1ce08e2fd42333b)

Note: It is necessary to use c6i.4xlarge or larger instances of this family (recommended: c6i.8xlarge when running the `skylab-atm-land-small` experiment).

10 changes: 6 additions & 4 deletions docs/using/jedi_environment/cloud/singlenode.rst
@@ -12,7 +12,7 @@ As described elsewhere in :doc:`this chapter <index>`, there are several steps y

When you have completed these steps, you are ready to launch a single JEDI EC2 instance through the `EC2 Dashboard <https://console.aws.amazon.com/ec2>`_ on the AWS console.

- As part of this release, an Amazon Media Image (AMI) is available that has the necessary `spack-stack-1.5.0` environment for `skylab-6.0.0` pre-installed. For more information on how to find this AMI, refer to :doc:`Building and running SkyLab <../../building_and_running/running_skylab>` in this documentation.
+ As part of this release, an Amazon Machine Image (AMI) is available that has the necessary `spack-stack-1.5.1` environment for `skylab-6.1.0` pre-installed. For more information on how to find this AMI, refer to :doc:`Building and running SkyLab <../../building_and_running/running_skylab>` in this documentation.


.. _singlenode-launch:
@@ -22,9 +22,9 @@ Launching instance

This section provides detailed instructions on how to build and use an EC2 instance based on an existing AMI. The AMI can be thought of as a pre-built template that provides a software stack, and just needs the configuration details of the EC2 instance (such as the number of cores, the amount of memory, etc.).

- The following example uses the ``skylab-6.0.0-redhat8`` AMI.
+ The following example uses the ``skylab-6.1.0-redhat8`` AMI.

- 1. Log into the AWS Console and select the EC2 service. In the sidebar on the left, scroll down to the Images section and click on the "AMIs" option. Select ``skylab-6.0.0-redhat8`` from the list of AMIs. Click on "Launch instance from AMI".
+ 1. Log into the AWS Console and select the EC2 service. In the sidebar on the left, scroll down to the Images section and click on the "AMIs" option. Select ``skylab-6.1.0-redhat8`` from the list of AMIs. Click on "Launch instance from AMI".
2. Give your instance a meaningful name so that you can identify it later in the list of running instances.
3. Select an instance type that has enough memory for your experiment. For available options, see https://aws.amazon.com/ec2/instance-types. Note that because you only have one node, you will need a large amount of memory when running higher-resolution experiments. For low-resolution experiments, instances like c6i.2xlarge may be sufficient, but for c96 experiments, instances with at least 512 GB of memory are required.

@@ -45,6 +45,8 @@ The following example uses the ``skylab-6.0.0-redhat8`` AMI.
+-----------------------------------------+---------------------------------+--------------------------+
| ``skylab-6.0.0-redhat8`` | c6i.4xlarge | Intel Ice Lake 8375C |
+-----------------------------------------+---------------------------------+--------------------------+
+ | ``skylab-6.1.0-redhat8`` | c6i.4xlarge | Intel Ice Lake 8375C |
+ +-----------------------------------------+---------------------------------+--------------------------+

4. Select an existing key pair (for which you hold the private key on your machine) or create a new key pair and follow the process.
5. Check the entries under "Network settings". Make sure that the network is correct (usually the default is), that the subnet is public (usually indicated by the name), and that "Auto-assign public IP" is enabled. Choose the existing security group "Global SSH" or create a new security group that allows SSH traffic from anywhere so that you can connect to the instance from your local machine.
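For repeatable launches, the console steps above can also be scripted with the AWS CLI. This is a sketch, not part of the official instructions: the AMI ID is the us-east-1 value from the release notes above, while the key pair name, security group, and instance name are placeholders to replace with your own (valid AWS credentials are required):

```shell
# Launch one EC2 instance from the skylab AMI (us-east-1 ID shown above).
# Key name, security group, and Name tag are illustrative placeholders.
aws ec2 run-instances \
  --region us-east-1 \
  --image-id ami-06497c2e0f2ded6cf \
  --instance-type c6i.4xlarge \
  --key-name my-key-pair \
  --security-groups "Global SSH" \
  --associate-public-ip-address \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=my-skylab-instance}]'
```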
@@ -84,7 +86,7 @@ After launching the instance through the AWS console, select the instance and cl
[default]
region = us-east-1
- **For AWS Red Hat 8:** After logging in, follow the instructions in https://spack-stack.readthedocs.io/en/1.5.0/PreConfiguredSites.html#amazon-web-services-red-hat-8 to load the basic spack-stack modules for GNU. Please note that the AMI IDs in the spack-stack 1.5.0 release documentation are incorrect - they are correct in these JEDI docs release notes. Proceed with loading the appropriate modules for your application, for example for the ``skylab-6.0.0`` release:
+ **For AWS Red Hat 8:** After logging in, follow the instructions in https://spack-stack.readthedocs.io/en/1.5.1/PreConfiguredSites.html#amazon-web-services-red-hat-8 to load the basic spack-stack modules for GNU. Please note that the AMI IDs in the spack-stack 1.5.1 release documentation are incorrect; they are correct in these JEDI docs release notes. Proceed with loading the appropriate modules for your application, for example for the ``skylab-6.1.0`` release:

.. code-block:: bash
2 changes: 1 addition & 1 deletion docs/using/jedi_environment/index.rst
@@ -8,7 +8,7 @@ But JEDI does not exist in a vacuum. Like any modern, sophisticated software pa

In order to help JEDI users and developers quickly create a productive and consistent computing environment, the JEDI team provides a number of portability tools. These include:

- * A complete software stack called `spack-stack <https://github.com/jcsda/spack-stack>`_ for compiled and Python dependencies based on the open-source `spack <https://github.com/spack/spack>`_ package manager, originally developed by the `Lawrence Livermore National Laboratory (LLNL) <https://computing.llnl.gov/projects/spack-hpc-package-manager>`_, with `instructions <https://spack-stack.readthedocs.io/en/1.5.0/>`_ for building and using spack-stack on HPC, the cloud, and generic macOS and Linux systems.
+ * A complete software stack called `spack-stack <https://github.com/jcsda/spack-stack>`_ for compiled and Python dependencies based on the open-source `spack <https://github.com/spack/spack>`_ package manager, originally developed by the `Lawrence Livermore National Laboratory (LLNL) <https://computing.llnl.gov/projects/spack-hpc-package-manager>`_, with `instructions <https://spack-stack.readthedocs.io/en/1.5.1/>`_ for building and using spack-stack on HPC, the cloud, and generic macOS and Linux systems.
* Machine images for cloud computing (e.g. AMIs for `Amazon Web Services <https://aws.amazon.com>`_)
* :doc:`Environment modules <modules>` for selected HPC systems
* Docker and Singularity software :doc:`containers <containers/container_overview>`.
18 changes: 9 additions & 9 deletions docs/using/jedi_environment/modules.rst
@@ -3,7 +3,7 @@
Using spack-stack modules to build and run JEDI
===============================================

- The instructions in this section are specific to the use of spack-stack environment modules (``lmod/lua`` or ``tcl/tk``) for building and running JEDI applications. For general information on using spack-stack to build and run software, see the `spack-stack documentation <https://spack-stack.readthedocs.io/en/1.5.0>`_.
+ The instructions in this section are specific to the use of spack-stack environment modules (``lmod/lua`` or ``tcl/tk``) for building and running JEDI applications. For general information on using spack-stack to build and run software, see the `spack-stack documentation <https://spack-stack.readthedocs.io/en/1.5.1>`_.

One of the big advantages of spack-stack is that it automatically generates modules for all compiled packages and Python packages and works in exactly the same way on HPCs, on the cloud, and on a personal computer. Environment modules are available on basically all HPC systems and any modern macOS or Linux distribution, and are an easy and effective way to manage software libraries. There are two main flavors, the older ``tcl/tk`` modules and the newer ``lmod/lua`` modules, with the latter being superior and therefore preferred, if available. The two implementations share similar commands, such as:
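For illustration, the shared commands look like this in both flavors; the module name used here is a placeholder, since actual module names are site-specific:

```shell
module avail            # list all modules available on the system
module load stack-gcc   # load a module (placeholder name)
module list             # show currently loaded modules
module unload stack-gcc # unload a single module
module purge            # unload all loaded modules
```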

@@ -48,7 +48,7 @@ Orion

Orion is an HPC system located at Mississippi State University for the purpose of furthering NOAA’s scientific research and collaboration.

- Follow the instructions in https://spack-stack.readthedocs.io/en/1.5.0/PreConfiguredSites.html#msu-orion to load the basic spack-stack modules for Intel or GNU. Proceed with loading the appropriate modules for your application, for example for the ``Skylab v6`` release:
+ Follow the instructions in https://spack-stack.readthedocs.io/en/1.5.1/PreConfiguredSites.html#msu-orion to load the basic spack-stack modules for Intel or GNU. Proceed with loading the appropriate modules for your application, for example for the ``Skylab v6`` release:

.. code-block:: bash
@@ -149,7 +149,7 @@ Discover

`Discover <https://www.nccs.nasa.gov/systems/discover>`_ is a 90,000-core supercomputing cluster capable of delivering 3.5 petaflops of high-performance computing for Earth system applications from weather to seasonal to climate predictions.

- Follow the instructions in https://spack-stack.readthedocs.io/en/1.5.0/PreConfiguredSites.html#nasa-discover to load the basic spack-stack modules for Intel or GNU. Proceed with loading the appropriate modules for your application, for example for the ``Skylab v6`` release:
+ Follow the instructions in https://spack-stack.readthedocs.io/en/1.5.1/PreConfiguredSites.html#nasa-discover to load the basic spack-stack modules for Intel or GNU. Proceed with loading the appropriate modules for your application, for example for the ``Skylab v6`` release:

.. code-block:: bash
@@ -182,7 +182,7 @@ Hera

Hera is an HPC system located in NOAA's NESCC facility in Fairmont, WV. The following bash shell commands are necessary to access the installed spack-stack modules (substitute equivalent csh shell commands as appropriate):

- Follow the instructions in https://spack-stack.readthedocs.io/en/1.5.0/PreConfiguredSites.html#noaa-rdhpcs-hera to load the basic spack-stack modules for Intel or GNU. Proceed with loading the appropriate modules for your application, for example for the ``Skylab v6`` release:
+ Follow the instructions in https://spack-stack.readthedocs.io/en/1.5.1/PreConfiguredSites.html#noaa-rdhpcs-hera to load the basic spack-stack modules for Intel or GNU. Proceed with loading the appropriate modules for your application, for example for the ``Skylab v6`` release:

.. code-block:: bash
@@ -210,7 +210,7 @@ Cheyenne

`Cheyenne <https://www2.cisl.ucar.edu/resources/computational-systems/cheyenne/cheyenne>`_ is a 5.34-petaflops, high-performance computer built for NCAR by SGI.

- Follow the instructions in https://spack-stack.readthedocs.io/en/1.5.0/PreConfiguredSites.html#ncar-wyoming-cheyenne to load the basic spack-stack modules for Intel or GNU. Proceed with loading the appropriate modules for your application, for example for the ``Skylab v6`` release:
+ Follow the instructions in https://spack-stack.readthedocs.io/en/1.5.1/PreConfiguredSites.html#ncar-wyoming-cheyenne to load the basic spack-stack modules for Intel or GNU. Proceed with loading the appropriate modules for your application, for example for the ``Skylab v6`` release:

.. code-block:: bash
@@ -254,7 +254,7 @@ Casper

The `Casper <https://www2.cisl.ucar.edu/resources/computational-systems/casper>`_ cluster is a heterogeneous system of specialized data analysis and visualization resources, large-memory, multi-GPU nodes, and high-throughput computing nodes.

- Follow the instructions in https://spack-stack.readthedocs.io/en/1.5.0/PreConfiguredSites.html#ncar-wyoming-casper to load the basic spack-stack modules for Intel. Proceed with loading the appropriate modules for your application, for example for the ``Skylab v6`` release:
+ Follow the instructions in https://spack-stack.readthedocs.io/en/1.5.1/PreConfiguredSites.html#ncar-wyoming-casper to load the basic spack-stack modules for Intel. Proceed with loading the appropriate modules for your application, for example for the ``Skylab v6`` release:

.. code-block:: bash
@@ -311,7 +311,7 @@ Once logged into S4, you must then log into s4-submit to load the spack-stack mo
ssh -Y s4-submit
- Follow the instructions in https://spack-stack.readthedocs.io/en/1.5.0/PreConfiguredSites.html#uw-univ-of-wisconsin-s4 to load the basic spack-stack modules for Intel or GNU. Proceed with loading the appropriate modules for your application, for example for the ``Skylab v6`` release:
+ Follow the instructions in https://spack-stack.readthedocs.io/en/1.5.1/PreConfiguredSites.html#uw-univ-of-wisconsin-s4 to load the basic spack-stack modules for Intel or GNU. Proceed with loading the appropriate modules for your application, for example for the ``Skylab v6`` release:

.. code-block:: bash
@@ -408,7 +408,7 @@ Narwhal

Narwhal is an HPE Cray EX system located at the Navy DSRC. It has 2,176 standard compute nodes (AMD 7H12 Rome, 128 cores, 238 GB) and 12 large-memory nodes (995 GB). It has 590 TB of memory and is rated at 12.8 peak PFLOPS.

- Follow the instructions in https://spack-stack.readthedocs.io/en/1.5.0/PreConfiguredSites.html#navy-hpcmp-narwhal to load the basic spack-stack modules for Intel or GNU. Proceed with loading the appropriate modules for your application, for example for the ``Skylab v6`` release:
+ Follow the instructions in https://spack-stack.readthedocs.io/en/1.5.1/PreConfiguredSites.html#navy-hpcmp-narwhal to load the basic spack-stack modules for Intel or GNU. Proceed with loading the appropriate modules for your application, for example for the ``Skylab v6`` release:

.. code-block:: bash
@@ -462,5 +462,5 @@ AWS AMIs
--------
For more information about using Amazon Web Services please see :doc:`JEDI on AWS <./cloud/index>`.

- As part of this release, Amazon Media Images (AMI) are available that have the necessary ``spack-stack-1.5.0`` environment for ``skylab-6.0.0`` pre-installed. For more information on how to find these AMIs, refer to :doc:`Building and running SkyLab <../building_and_running/running_skylab>` in this documentation.
+ As part of this release, Amazon Machine Images (AMIs) are available that have the necessary ``spack-stack-1.5.1`` environment for ``skylab-6.1.0`` pre-installed. For more information on how to find these AMIs, refer to :doc:`Building and running SkyLab <../building_and_running/running_skylab>` in this documentation.
