
Merge branch 'develop' into 'master'
Release 4.2.0

See merge request remi.cresson/otbtf!99
Cresson Remi committed Sep 12, 2023
2 parents f4f355f + af693c4 commit a8c84a5
Showing 13 changed files with 383 additions and 68 deletions.
13 changes: 11 additions & 2 deletions .gitlab-ci.yml
Original file line number Diff line number Diff line change
@@ -1,5 +1,5 @@
variables:
OTBTF_VERSION: 4.1.0
OTBTF_VERSION: 4.2.0
OTB_BUILD: /src/otb/build/OTB/build # Local OTB build directory
OTBTF_SRC: /src/otbtf # Local OTBTF source directory
OTB_TEST_DIR: $OTB_BUILD/Testing/Temporary # OTB testing directory
@@ -164,7 +164,7 @@ ctest:
extends: .tests_base
stage: Applications Test
before_script:
- pip3 install pytest pytest-cov pytest-order
- pip install pytest pytest-cov pytest-order
- mkdir -p $ARTIFACT_TEST_DIR
- cd $CI_PROJECT_DIR

@@ -189,6 +189,15 @@ sr4rs:
- export PYTHONPATH=$PYTHONPATH:$PWD/sr4rs
- python -m pytest --junitxml=$ARTIFACT_TEST_DIR/report_sr4rs.xml $OTBTF_SRC/test/sr4rs_unittest.py

decloud:
extends: .applications_test_base
script:
- git clone https://github.com/CNES/decloud.git
- pip install -r $PWD/decloud/docker/requirements.txt
- wget -P decloud_data --no-verbose --recursive --level=inf --no-parent -R "index.html*" --cut-dirs=3 --no-host-directories http://indexof.montpellier.irstea.priv/projets/geocicd/decloud/
- export DECLOUD_DATA_DIR="$PWD/decloud_data"
- pytest decloud/tests/train_from_tfrecords_unittest.py

otbtf_api:
extends: .applications_test_base
script:
4 changes: 3 additions & 1 deletion Dockerfile
@@ -27,7 +27,9 @@ RUN ln -s /usr/bin/python3 /usr/local/bin/python && ln -s /usr/bin/pip3 /usr/loc
RUN pip install --no-cache-dir pip --upgrade
# NumPy version is conflicting with system's gdal dep and may require venv
ARG NUMPY_SPEC="==1.22.*"
RUN pip install --no-cache-dir -U wheel mock six future tqdm deprecated "numpy$NUMPY_SPEC" packaging requests \
# This is to avoid https://github.com/tensorflow/tensorflow/issues/61551
ARG PROTO_SPEC="==4.23.*"
RUN pip install --no-cache-dir -U wheel mock six future tqdm deprecated "numpy$NUMPY_SPEC" "protobuf$PROTO_SPEC" packaging requests \
&& pip install --no-cache-dir --no-deps keras_applications keras_preprocessing

# ----------------------------------------------------------------------------
4 changes: 2 additions & 2 deletions README.md
@@ -33,8 +33,8 @@ The documentation is available on [otbtf.readthedocs.io](https://otbtf.readthedo
You can use our latest GPU enabled docker images.

```bash
docker run --runtime=nvidia -ti mdl4eo/otbtf:4.0.0-gpu otbcli_PatchesExtraction
docker run --runtime=nvidia -ti mdl4eo/otbtf:4.0.0-gpu python -c "import otbtf"
docker run --runtime=nvidia -ti mdl4eo/otbtf:4.2.0-gpu otbcli_PatchesExtraction
docker run --runtime=nvidia -ti mdl4eo/otbtf:4.2.0-gpu python -c "import otbtf"
```

You can also build OTBTF from sources (see the documentation)
8 changes: 8 additions & 0 deletions RELEASE_NOTES.txt
@@ -1,3 +1,11 @@
Version 4.2.0 (12 sep 2023)
----------------------------------------------------------------
* Add new python modules: `otbtf.layers` (with new classes `DilatedMask`, `ApplyMask`, `ScalarsTile`, `ArgMax`, `Max`) and `otbtf.ops` (`one_hot()`)
* Fix an error in the documentation
* Update the otbtf-keras tutorial
* Add decloud testing in CI
* Fix protobuf version in dockerfile (see https://github.com/tensorflow/tensorflow/issues/61551)

Version 4.1.0 (23 may 2023)
----------------------------------------------------------------
* Add no-data values support for inference in TensorflowModelServe application
96 changes: 57 additions & 39 deletions doc/api_tutorial.md
@@ -184,6 +184,19 @@ def dataset_preprocessing_fn(examples: dict):

As you can see, we don't modify the input tensor, since we want to use it
as is in the model.
Note that since version 4.2.0 the `otbtf.ops.one_hot` can ease the transform:

```python
def dataset_preprocessing_fn(examples: dict):
return {
INPUT_NAME: examples["input_xs_patches"],
TARGET_NAME: otbtf.ops.one_hot(
labels=examples["labels_patches"],
nb_classes=N_CLASSES
)
}

```
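The internals of `otbtf.ops.one_hot` are not shown in this diff; as a rough illustration only, here is a hypothetical pure-Python sketch of the semantics (the actual implementation presumably wraps `tf.one_hot`, and `one_hot_sketch` is not an OTBTF function):

```python
# Illustrative sketch only, NOT the otbtf.ops.one_hot implementation:
# map each integer class label to a one-hot vector of length nb_classes.
def one_hot_sketch(labels, nb_classes):
    """Turn integer class labels into one-hot vectors."""
    return [
        [1.0 if c == label else 0.0 for c in range(nb_classes)]
        for label in labels
    ]

print(one_hot_sketch([0, 2, 1], nb_classes=3))
# → [[1.0, 0.0, 0.0], [0.0, 0.0, 1.0], [0.0, 1.0, 0.0]]
```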

### Model inputs preprocessing

@@ -258,10 +271,7 @@ and the estimated values.
out_tconv1 = _tconv(out_conv4, 64, "tconv1") + out_conv3
out_tconv2 = _tconv(out_tconv1, 32, "tconv2") + out_conv2
out_tconv3 = _tconv(out_tconv2, 16, "tconv3") + out_conv1
out_tconv4 = _tconv(out_tconv3, N_CLASSES, "classifier", None)

softmax_op = tf.keras.layers.Softmax(name=OUTPUT_SOFTMAX_NAME)
predictions = softmax_op(out_tconv4)
predictions = _tconv(out_tconv3, N_CLASSES, OUTPUT_SOFTMAX_NAME, "softmax")

return {TARGET_NAME: predictions}

@@ -375,32 +385,39 @@ polluted by the convolutional padding.
For a 2D convolution of stride \(s\) and kernel size \(k\), we can deduce the
valid output size \(y\) from input size \(x\) using this expression:
$$
y = \left[\frac{x - k + 1}{s}\right]
y = \left[\frac{x - k }{s}\right] + 1
$$
For a 2D transposed convolution of stride \(s\) and kernel size \(k\), we can
deduce the valid output size \(y\) from input size \(x\) using this expression:
$$
y = (x * s) - k + 1
y = x * s - k + 2
$$

Let's consider a chunk of input image of size 128, and check the valid output
Let's consider a chunk of input image of size 64, and check the valid output
size of our model:

| Conv. name | Conv. type | Kernel | Stride | Out. size | Valid out. size |
|------------|-------------------|--------|--------|-----------|-----------------|
| *conv1* | Conv2D | 3 | 2 | 64 | 63 |
| *conv2* | Conv2D | 3 | 2 | 32 | 30 |
| *conv3* | Conv2D | 3 | 2 | 16 | 14 |
| *conv4* | Conv2D | 3 | 2 | 8 | 6 |
| *tconv1* | Transposed Conv2D | 3 | 2 | 16 | 10 |
| *tconv2* | Transposed Conv2D | 3 | 2 | 32 | 18 |
| *tconv3* | Transposed Conv2D | 3 | 2 | 64 | 34 |
| Conv. name | Conv. type | Kernel | Stride | Out. size | Valid out. size |
|----------------|-------------------|--------|--------|-----------|-----------------|
| *input* | / | / | / | 64 | 64 |
| *conv1* | Conv2D | 3 | 2 | 32 | 31 |
| *conv2* | Conv2D | 3 | 2 | 16 | 15 |
| *conv3* | Conv2D | 3 | 2 | 8 | 7 |
| *conv4* | Conv2D | 3 | 2 | 4 | 3 |
| *tconv1* | Transposed Conv2D | 3 | 2 | 8 | 5 |
| *tconv2* | Transposed Conv2D | 3 | 2 | 16 | 9 |
| *tconv3* | Transposed Conv2D | 3 | 2 | 32 | 17 |
| *classifier* | Transposed Conv2D | 3 | 2 | 64 | 33 |

This shows that our model can be applied in a fully convolutional fashion
without generating blocking artifacts, using the central part of the output of
size 34. This is equivalent to remove \((128 - 24)/2 = 47\) pixels from
the borders of the output. We can hence use the output cropped with **64**
pixels, named ***predictions_crop64***.
size 33. This is equivalent to removing \((64 - 33)/2 = 15.5\) pixels from
the borders of the output. We round up to the nearest power of 2 to keep the
convolutions consistent between two adjacent image chunks, hence we remove 16
pixels from the borders. We can hence use the output cropped by **16** pixels,
named ***predictions_crop16*** in the model outputs.
By default, cropped outputs in `otbtf.ModelBase` are generated for the following
values: `[16, 32, 64, 96, 128]` but that can be changed setting `inference_cropping`
in the model `__init__()` (see the reference API documentation for details).
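The valid-size column of the table can be reproduced from the two expressions above with a few lines of Python (a standalone sketch; `conv_valid` and `tconv_valid` are illustrative helpers, not OTBTF functions):

```python
# Reproduce the "Valid out. size" column of the table above, using the
# expressions from the text (kernel k = 3, stride s = 2 throughout):
#   Conv2D:            y = (x - k) // s + 1
#   Transposed Conv2D: y = x * s - k + 2
def conv_valid(x, k=3, s=2):
    return (x - k) // s + 1

def tconv_valid(x, k=3, s=2):
    return x * s - k + 2

size = 64  # input chunk size
for name in ("conv1", "conv2", "conv3", "conv4"):
    size = conv_valid(size)
    print(name, size)   # 31, 15, 7, 3
for name in ("tconv1", "tconv2", "tconv3", "classifier"):
    size = tconv_valid(size)
    print(name, size)   # 5, 9, 17, 33
```

The final value, 33, is the valid output size used in the text to derive the 16-pixel cropping margin.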

!!! Info

@@ -427,10 +444,11 @@ In the following subsections, we run `TensorflowModelServe` over the input
image, with the following parameters:

- the input name is ***input_xs***
- the output name is ***predictions_crop64*** (cropping margin of 64 pixels)
- we choose a receptive field of ***256*** and an expression field of
***128*** so that they match the cropping margin of 64 pixels.

- the output name is ***predictions_crop16*** (cropping margin of 16 pixels)
- we choose a receptive field of ***64*** and an expression field of
***32*** so that they match the cropping margin of 16 pixels (since we
remove 16 pixels from each side in the x and y dimensions, the expression
field is 32 pixels smaller than the receptive field in each dimension).
### Command Line Interface

@@ -439,14 +457,14 @@ Open a terminal and run the following command:
```commandline
otbcli_TensorflowModelServe \
-source1.il $DATADIR/fake_spot6.jp2 \
-source1.rfieldx 256 \
-source1.rfieldy 256 \
-source1.rfieldx 64 \
-source1.rfieldy 64 \
-source1.placeholder "input_xs" \
-model.dir /tmp/my_1st_savedmodel \
-model.fullyconv on \
-output.names "predictions_crop64" \
-output.efieldx 128 \
-output.efieldy 128 \
-output.names "predictions_crop16" \
-output.efieldx 32 \
-output.efieldy 32 \
-out softmax.tif
```

@@ -459,14 +477,14 @@ python wrapper:
import otbApplication
app = otbApplication.Registry.CreateApplication("TensorflowModelServe")
app.SetParameterStringList("source1.il", ["fake_spot6.jp2"])
app.SetParameterInt("source1.rfieldx", 256)
app.SetParameterInt("source1.rfieldy", 256)
app.SetParameterInt("source1.rfieldx", 64)
app.SetParameterInt("source1.rfieldy", 64)
app.SetParameterString("source1.placeholder", "input_xs")
app.SetParameterString("model.dir", "/tmp/my_1st_savedmodel")
app.EnableParameter("fullyconv")
app.SetParameterStringList("output.names", ["predictions_crop64"])
app.SetParameterInt("output.efieldx", 128)
app.SetParameterInt("output.efieldy", 128)
app.SetParameterStringList("output.names", ["predictions_crop16"])
app.SetParameterInt("output.efieldx", 32)
app.SetParameterInt("output.efieldy", 32)
app.SetParameterString("out", "softmax.tif")
app.ExecuteAndWriteOutput()
```
@@ -479,14 +497,14 @@ Using PyOTB is nicer:
import pyotb
pyotb.TensorflowModelServe({
"source1.il": "fake_spot6.jp2",
"source1.rfieldx": 256,
"source1.rfieldy": 256,
"source1.rfieldx": 64,
"source1.rfieldy": 64,
"source1.placeholder": "input_xs",
"model.dir": "/tmp/my_1st_savedmodel",
"fullyconv": True,
"output.names": ["predictions_crop64"],
"output.efieldx": 128,
"output.efieldy": 128,
"output.names": ["predictions_crop16"],
"output.efieldx": 32,
"output.efieldy": 32,
"out": "softmax.tif",
})
```
@@ -499,4 +517,4 @@ pyotb.TensorflowModelServe({
control the output image chunk size and tiling/stripping layout. Combined
with the `optim` parameters, you will likely always find the best settings
suited for the hardware. Also, the receptive and expression fields sizes
have a major contribution.
have a major contribution.
10 changes: 5 additions & 5 deletions doc/docker_troubleshooting.md
@@ -52,13 +52,13 @@ sudo service docker {status,enable,disable,start,stop,restart}
Run a simple command in a one-shot container:

```bash
docker run mdl4eo/otbtf:3.4.0-cpu otbcli_PatchesExtraction
docker run mdl4eo/otbtf:4.2.0-cpu otbcli_PatchesExtraction
```

You can also use the image in interactive mode with bash:

```bash
docker run -ti mdl4eo/otbtf:3.4.0-cpu bash
docker run -ti mdl4eo/otbtf:4.2.0-cpu bash
```

### Mounting file systems
@@ -70,7 +70,7 @@ to use inside the container:
The following command shows you how to access the folder from the docker image.

```bash
docker run -v /mnt/disk1/:/data/ -ti mdl4eo/otbtf:3.4.0-cpu bash -c "ls /data"
docker run -v /mnt/disk1/:/data/ -ti mdl4eo/otbtf:4.2.0-cpu bash -c "ls /data"
```
Beware of ownership issues! see the last section of this doc.

@@ -81,7 +81,7 @@ any directory.

```bash
docker create --interactive --tty --volume /home/$USER:/home/otbuser/ \
--name otbtf mdl4eo/otbtf:3.4.0-cpu /bin/bash
--name otbtf mdl4eo/otbtf:4.2.0-cpu /bin/bash
```

!!! warning
@@ -160,7 +160,7 @@ automatically pull image

```bash
docker create --interactive --tty --volume /home/$USER:/home/otbuser \
--name otbtf mdl4eo/otbtf:3.4.0-cpu /bin/bash
--name otbtf mdl4eo/otbtf:4.2.0-cpu /bin/bash
```

Start a background container process:
34 changes: 23 additions & 11 deletions doc/docker_use.md
@@ -5,13 +5,13 @@ We recommend to use OTBTF from official docker images.
Latest CPU-only docker image:

```commandline
docker pull mdl4eo/otbtf:4.0.0-cpu
docker pull mdl4eo/otbtf:4.2.0-cpu
```

Latest GPU-ready docker image:

```commandline
docker pull mdl4eo/otbtf:4.0.0-gpu
docker pull mdl4eo/otbtf:4.2.0-gpu
```

Read more in the following sections.
@@ -25,12 +25,12 @@ Since OTBTF >= 3.2.1 you can find the latest docker images on

| Name | Os | TF | OTB | Description | Dev files | Compute capability |
|------------------------------------------------------------------------------------| ------------- |-------|-------| ---------------------- | --------- | ------------------ |
| **mdl4eo/otbtf:4.0.0-cpu** | Ubuntu Jammy | r2.12 | 8.1.0 | CPU, no optimization | no | 5.2,6.1,7.0,7.5,8.6|
| **mdl4eo/otbtf:4.0.0-cpu-dev** | Ubuntu Jammy | r2.12 | 8.1.0 | CPU, no optimization (dev) | yes | 5.2,6.1,7.0,7.5,8.6|
| **mdl4eo/otbtf:4.0.0-gpu** | Ubuntu Jammy | r2.12 | 8.1.0 | GPU, no optimization | no | 5.2,6.1,7.0,7.5,8.6|
| **mdl4eo/otbtf:4.0.0-gpu-dev** | Ubuntu Jammy | r2.12 | 8.1.0 | GPU, no optimization (dev) | yes | 5.2,6.1,7.0,7.5,8.6|
| **gitlab.irstea.fr/remi.cresson/otbtf/container_registry/otbtf:4.0.0-gpu-opt** | Ubuntu Jammy | r2.12 | 8.1.0 | GPU with opt. | no | 5.2,6.1,7.0,7.5,8.6|
| **gitlab.irstea.fr/remi.cresson/otbtf/container_registry/otbtf:4.0.0-gpu-opt-dev** | Ubuntu Jammy | r2.12 | 8.1.0 | GPU with opt. (dev) | yes | 5.2,6.1,7.0,7.5,8.6|
| **mdl4eo/otbtf:4.2.0-cpu** | Ubuntu Jammy | r2.12 | 8.1.0 | CPU, no optimization | no | 5.2,6.1,7.0,7.5,8.6|
| **mdl4eo/otbtf:4.2.0-cpu-dev** | Ubuntu Jammy | r2.12 | 8.1.0 | CPU, no optimization (dev) | yes | 5.2,6.1,7.0,7.5,8.6|
| **mdl4eo/otbtf:4.2.0-gpu** | Ubuntu Jammy | r2.12 | 8.1.0 | GPU, no optimization | no | 5.2,6.1,7.0,7.5,8.6|
| **mdl4eo/otbtf:4.2.0-gpu-dev** | Ubuntu Jammy | r2.12 | 8.1.0 | GPU, no optimization (dev) | yes | 5.2,6.1,7.0,7.5,8.6|
| **gitlab.irstea.fr/remi.cresson/otbtf/container_registry/otbtf:4.2.0-gpu-opt** | Ubuntu Jammy | r2.12 | 8.1.0 | GPU with opt. | no | 5.2,6.1,7.0,7.5,8.6|
| **gitlab.irstea.fr/remi.cresson/otbtf/container_registry/otbtf:4.2.0-gpu-opt-dev** | Ubuntu Jammy | r2.12 | 8.1.0 | GPU with opt. (dev) | yes | 5.2,6.1,7.0,7.5,8.6|

The list of older releases is available [here](#older-images).

@@ -51,13 +51,13 @@ You can then use the OTBTF `gpu` tagged docker images with the **NVIDIA runtime*
With Docker version earlier than 19.03 :

```bash
docker run --runtime=nvidia -ti mdl4eo/otbtf:4.0.0-gpu bash
docker run --runtime=nvidia -ti mdl4eo/otbtf:4.2.0-gpu bash
```

With Docker version including and after 19.03 :

```bash
docker run --gpus all -ti mdl4eo/otbtf:4.0.0-gpu bash
docker run --gpus all -ti mdl4eo/otbtf:4.2.0-gpu bash
```

You can find some details on the **GPU docker image** and some **docker tips
@@ -80,7 +80,7 @@ See here how to install docker on Ubuntu
1. Install [WSL2](https://docs.microsoft.com/en-us/windows/wsl/install-win10#manual-installation-steps) (Windows Subsystem for Linux)
2. Install [docker desktop](https://www.docker.com/products/docker-desktop)
3. Start **docker desktop** and **enable WSL2** from *Settings* > *General* then tick the box *Use the WSL2 based engine*
3. Open a **cmd.exe** or **PowerShell** terminal, and type `docker create --name otbtf-cpu --interactive --tty mdl4eo/otbtf:4.0.0-cpu`
3. Open a **cmd.exe** or **PowerShell** terminal, and type `docker create --name otbtf-cpu --interactive --tty mdl4eo/otbtf:4.2.0-cpu`
4. Open **docker desktop**, and check that the docker is running in the **Container/Apps** menu
![Docker desktop, after the docker image is downloaded and ready to use](images/docker_desktop_1.jpeg)
5. From **docker desktop**, click on the icon highlighted as shown below, and use the bash terminal that should pop up!
@@ -160,4 +160,16 @@ Here you can find the list of older releases of OTBTF:
| **mdl4eo/otbtf:3.4.0-gpu-dev** | Ubuntu Focal | r2.8 | 8.1.0 | GPU, no optimization (dev) | yes | 5.2,6.1,7.0,7.5,8.6|
| **gitlab.irstea.fr/remi.cresson/otbtf/container_registry/otbtf:3.4.0-gpu-opt** | Ubuntu Focal | r2.8 | 8.1.0 | GPU with opt. | no | 5.2,6.1,7.0,7.5,8.6|
| **gitlab.irstea.fr/remi.cresson/otbtf/container_registry/otbtf:3.4.0-gpu-opt-dev** | Ubuntu Focal | r2.8 | 8.1.0 | GPU with opt. (dev) | yes | 5.2,6.1,7.0,7.5,8.6|
| **mdl4eo/otbtf:4.0.0-cpu** | Ubuntu Jammy | r2.12 | 8.1.0 | CPU, no optimization | no | 5.2,6.1,7.0,7.5,8.6|
| **mdl4eo/otbtf:4.0.0-cpu-dev** | Ubuntu Jammy | r2.12 | 8.1.0 | CPU, no optimization (dev) | yes | 5.2,6.1,7.0,7.5,8.6|
| **mdl4eo/otbtf:4.0.0-gpu** | Ubuntu Jammy | r2.12 | 8.1.0 | GPU, no optimization | no | 5.2,6.1,7.0,7.5,8.6|
| **mdl4eo/otbtf:4.0.0-gpu-dev** | Ubuntu Jammy | r2.12 | 8.1.0 | GPU, no optimization (dev) | yes | 5.2,6.1,7.0,7.5,8.6|
| **gitlab.irstea.fr/remi.cresson/otbtf/container_registry/otbtf:4.0.0-gpu-opt** | Ubuntu Jammy | r2.12 | 8.1.0 | GPU with opt. | no | 5.2,6.1,7.0,7.5,8.6|
| **gitlab.irstea.fr/remi.cresson/otbtf/container_registry/otbtf:4.0.0-gpu-opt-dev** | Ubuntu Jammy | r2.12 | 8.1.0 | GPU with opt. (dev) | yes | 5.2,6.1,7.0,7.5,8.6|
| **mdl4eo/otbtf:4.1.0-cpu** | Ubuntu Jammy | r2.12 | 8.1.0 | CPU, no optimization | no | 5.2,6.1,7.0,7.5,8.6|
| **mdl4eo/otbtf:4.1.0-cpu-dev** | Ubuntu Jammy | r2.12 | 8.1.0 | CPU, no optimization (dev) | yes | 5.2,6.1,7.0,7.5,8.6|
| **mdl4eo/otbtf:4.1.0-gpu** | Ubuntu Jammy | r2.12 | 8.1.0 | GPU, no optimization | no | 5.2,6.1,7.0,7.5,8.6|
| **mdl4eo/otbtf:4.1.0-gpu-dev** | Ubuntu Jammy | r2.12 | 8.1.0 | GPU, no optimization (dev) | yes | 5.2,6.1,7.0,7.5,8.6|
| **gitlab.irstea.fr/remi.cresson/otbtf/container_registry/otbtf:4.1.0-gpu-opt** | Ubuntu Jammy | r2.12 | 8.1.0 | GPU with opt. | no | 5.2,6.1,7.0,7.5,8.6|
| **gitlab.irstea.fr/remi.cresson/otbtf/container_registry/otbtf:4.1.0-gpu-opt-dev** | Ubuntu Jammy | r2.12 | 8.1.0 | GPU with opt. (dev) | yes | 5.2,6.1,7.0,7.5,8.6|

3 changes: 2 additions & 1 deletion otbtf/__init__.py
@@ -2,7 +2,7 @@
# ==========================================================================
#
# Copyright 2018-2019 IRSTEA
# Copyright 2020-2022 INRAE
# Copyright 2020-2023 INRAE
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -33,4 +33,5 @@

from otbtf.tfrecords import TFRecords # noqa
from otbtf.model import ModelBase # noqa
from otbtf import layers, ops # noqa
__version__ = pkg_resources.require("otbtf")[0].version