
Commit

Update doc, fix example
JeanKossaifi committed Jun 9, 2023
1 parent 326c950 commit 6f18c75
Showing 2 changed files with 29 additions and 7 deletions.

doc/source/modules/api.rst (23 changes: 22 additions & 1 deletion)
@@ -72,7 +72,11 @@ Layers

 In addition to the full architectures, we also provide building blocks:
 
-.. automodule:: neuralop.models.fno_block
+Neural operator Layers
+++++++++++++++++++++++
+
+**Spectral convolutions** (in Fourier domain):
+.. automodule:: neuralop.models.spectral_convolution
     :no-members:
     :no-inherited-members:
 
@@ -86,6 +90,23 @@ In addition to the full architectures, we also provide building blocks:
     FactorizedSpectralConv2d
     FactorizedSpectralConv3d
 
+
+**Spherical convolutions**:
+
+.. automodule:: neuralop.models.spherical_convolution
+    :no-members:
+    :no-inherited-members:
+
+.. autosummary::
+    :toctree: generated
+    :template: class.rst
+
+    FactorizedSphericalConv
+
+
+Other resolution invariant operations
++++++++++++++++++++++++++++++++++++++
+
 Automatically apply resolution dependent domain padding:
 
 .. automodule:: neuralop.models.padding
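For readers of the updated api.rst section: the new layout documents the spectral and spherical convolution layers under their own module paths. The sketch below only illustrates importing the classes listed in the autosummary blocks above; the instantiation is hypothetical and its argument names are assumptions, not taken from this commit.

    # Illustrative sketch based on the module paths and class names documented above.
    from neuralop.models.spectral_convolution import FactorizedSpectralConv
    from neuralop.models.spherical_convolution import FactorizedSphericalConv

    # Hypothetical instantiation: the (in_channels, out_channels, n_modes) argument
    # names are an assumption for illustration, not confirmed by this diff.
    spectral_layer = FactorizedSpectralConv(in_channels=3, out_channels=3, n_modes=(16, 16))
    spherical_layer = FactorizedSphericalConv(in_channels=3, out_channels=3, n_modes=(16, 16))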

examples/plot_SFNO_swe.py (13 changes: 7 additions & 6 deletions)
@@ -24,13 +24,14 @@

 # %%
 # Loading the Navier-Stokes dataset in 128x128 resolution
-train_loader, test_loaders = load_spherical_swe(n_train=128, batch_size=4, test_resolutions=[(128, 256), (256, 512)], n_tests=[10, 10], test_batch_sizes=[4, 4],)
+train_loader, test_loaders = load_spherical_swe(n_train=500, batch_size=4, train_resolution=(32, 64),
+                                                test_resolutions=[(32, 64), (64, 128)], n_tests=[50, 50], test_batch_sizes=[10, 10],)
 
 
 # %%
 # We create a tensorized FNO model
 
-model = SFNO(n_modes=(64, 128), in_channels=3, out_channels=3, hidden_channels=32, projection_channels=64, factorization='dense')
+model = SFNO(n_modes=(32, 32), in_channels=3, out_channels=3, hidden_channels=32, projection_channels=64, factorization='dense')
 model = model.to(device)
 
 n_params = count_params(model)
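Assembled outside the diff view, the updated data-loading and model-creation code from this hunk reads as follows. The imports and the device definition are assumptions added for self-containment (they live elsewhere in plot_SFNO_swe.py and their exact paths are not shown in this commit); the two calls themselves mirror the added lines above.

    import torch

    # Assumed import paths; only the two calls below appear in the diff above.
    from neuralop.models import SFNO
    from neuralop.datasets import load_spherical_swe
    from neuralop.utils import count_params

    device = 'cuda' if torch.cuda.is_available() else 'cpu'

    # Spherical SWE data at the new, smaller resolutions: (32, 64) for training,
    # (32, 64) and (64, 128) for testing.
    train_loader, test_loaders = load_spherical_swe(
        n_train=500, batch_size=4, train_resolution=(32, 64),
        test_resolutions=[(32, 64), (64, 128)], n_tests=[50, 50],
        test_batch_sizes=[10, 10],
    )

    # The SFNO now keeps 32x32 modes instead of 64x128.
    model = SFNO(n_modes=(32, 32), in_channels=3, out_channels=3,
                 hidden_channels=32, projection_channels=64,
                 factorization='dense')
    model = model.to(device)

    n_params = count_params(model)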
@@ -49,10 +49,10 @@
 # %%
 # Creating the losses
 l2loss = LpLoss(d=2, p=2, reduce_dims=(0,1))
-h1loss = H1Loss(d=2, reduce_dims=(0,1))
+# h1loss = H1Loss(d=2, reduce_dims=(0,1))
 
-train_loss = h1loss
-eval_losses={'h1': h1loss, 'l2': l2loss}
+train_loss = l2loss
+eval_losses={'l2': l2loss} #'h1': h1loss,
 
 
 # %%
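After this hunk, the example trains and evaluates on the L2 loss only, with the H1 loss left commented out. A minimal sketch of the resulting loss setup; the import path for LpLoss and H1Loss is an assumption, since the example's import block is not part of this diff.

    # Assumed import path; the constructor calls mirror the lines above.
    from neuralop import LpLoss, H1Loss

    l2loss = LpLoss(d=2, p=2, reduce_dims=(0, 1))
    # h1loss = H1Loss(d=2, reduce_dims=(0, 1))  # no longer used for training

    train_loss = l2loss
    eval_losses = {'l2': l2loss}  # 'h1': h1loss could be re-added for evaluation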
@@ -103,7 +104,7 @@
 #
 # In practice we would train a Neural Operator on one or multiple GPUs
 
-test_samples = test_loaders[32].dataset
+test_samples = test_loaders.dataset[32]
 
 fig = plt.figure(figsize=(7, 7))
 for index in range(3):
