Commit 89abfb4

rework sphinx markdown directives (#2088)
1 parent 3e15460 commit 89abfb4

21 files changed: +1424 −1645 lines changed
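This commit converts the documentation's Doxygen-style `@sphinxdirective` blocks, which carried raw reStructuredText, into MyST-flavored Markdown fenced directives, which Sphinx can parse natively (presumably via the `myst_parser` extension) and which GitHub renders as ordinary code blocks instead of raw markup. A minimal sketch of the recurring pattern, using a hypothetical page name `some_page`:

Before:

    @sphinxdirective

    .. toctree::
       :maxdepth: 1
       :hidden:

       some_page

    @endsphinxdirective

After:

    ```{toctree}
    ---
    maxdepth: 1
    hidden:
    ---

    some_page
    ```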

demos/README.md (+32 −33)

@@ -1,39 +1,38 @@
 # Demos {#ovms_docs_demos}

-@sphinxdirective
+```{toctree}
+---
+maxdepth: 1
+hidden:
+---

-.. toctree::
-   :maxdepth: 1
-   :hidden:
-
-   ovms_demo_age_gender_guide
-   ovms_demo_horizontal_text_detection
-   ovms_demo_optical_character_recognition
-   ovms_demo_face_detection
-   ovms_demo_face_blur_pipeline
-   ovms_demo_capi_inference_demo
-   ovms_demo_single_face_analysis_pipeline
-   ovms_demo_multi_faces_analysis_pipeline
-   ovms_docs_demo_ensemble
-   ovms_docs_demo_mediapipe_image_classification
-   ovms_docs_demo_mediapipe_multi_model
-   ovms_docs_demo_mediapipe_object_detection
-   ovms_docs_demo_mediapipe_holistic
-   ovms_docs_image_classification
-   ovms_demo_using_onnx_model
-   ovms_demo_tf_classification
-   ovms_demo_person_vehicle_bike_detection
-   ovms_demo_vehicle_analysis_pipeline
-   ovms_demo_real_time_stream_analysis
-   ovms_demo_bert
-   ovms_demo_gptj_causal_lm
-   ovms_demo_llama_2_chat
-   ovms_demo_stable_diffusion
-   ovms_demo_universal-sentence-encoder
-   ovms_demo_speech_recognition
-   ovms_demo_benchmark_client
-
-@endsphinxdirective
+ovms_demo_age_gender_guide
+ovms_demo_horizontal_text_detection
+ovms_demo_optical_character_recognition
+ovms_demo_face_detection
+ovms_demo_face_blur_pipeline
+ovms_demo_capi_inference_demo
+ovms_demo_single_face_analysis_pipeline
+ovms_demo_multi_faces_analysis_pipeline
+ovms_docs_demo_ensemble
+ovms_docs_demo_mediapipe_image_classification
+ovms_docs_demo_mediapipe_multi_model
+ovms_docs_demo_mediapipe_object_detection
+ovms_docs_demo_mediapipe_holistic
+ovms_docs_image_classification
+ovms_demo_using_onnx_model
+ovms_demo_tf_classification
+ovms_demo_person_vehicle_bike_detection
+ovms_demo_vehicle_analysis_pipeline
+ovms_demo_real_time_stream_analysis
+ovms_demo_bert
+ovms_demo_gptj_causal_lm
+ovms_demo_llama_2_chat
+ovms_demo_stable_diffusion
+ovms_demo_universal-sentence-encoder
+ovms_demo_speech_recognition
+ovms_demo_benchmark_client
+```

 OpenVINO Model Server demos have been created to showcase the usage of the model server as well as to demonstrate its capabilities. Check out the list below to see complete step-by-step examples of using OpenVINO Model Server with real-world use cases:

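A note on the options block: MyST accepts directive options either as the YAML block between `---` markers used throughout this commit, or as short-form `:key: value` lines; an empty value such as `hidden:` enables the option as a flag. A sketch of the equivalent short form for the toctree above, abbreviated to a single entry:

    ```{toctree}
    :maxdepth: 1
    :hidden:

    ovms_demo_age_gender_guide
    ```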

demos/benchmark/README.md (+8 −9)

@@ -1,15 +1,14 @@
 # Benchmark Client {#ovms_demo_benchmark_client}

-@sphinxdirective
+```{toctree}
+---
+maxdepth: 1
+hidden:
+---

-.. toctree::
-   :maxdepth: 1
-   :hidden:
-
-   ovms_demo_benchmark_app
-   ovms_demo_benchmark_app_cpp
-
-@endsphinxdirective
+ovms_demo_benchmark_app
+ovms_demo_benchmark_app_cpp
+```

 ## Python
 | Demo | Description |

demos/image_classification/README.md (+9 −10)

@@ -1,16 +1,15 @@
 # Image Classification Demos {#ovms_docs_image_classification}

-@sphinxdirective
+```{toctree}
+---
+maxdepth: 1
+hidden:
+---

-.. toctree::
-   :maxdepth: 1
-   :hidden:
-
-   ovms_demo_image_classification
-   ovms_demo_image_classification_cpp
-   ovms_demo_image_classification_go
-
-@endsphinxdirective
+ovms_demo_image_classification
+ovms_demo_image_classification_cpp
+ovms_demo_image_classification_go
+```

 ## Python
 | Demo | Description |

docs/accelerators.md (+39 −57)

@@ -26,35 +26,26 @@ Before using GPU as OpenVINO Model Server target device, you need to:

 Running inference on GPU requires the model server process security context account to have correct permissions. It must belong to the render group, identified by the command:

-@sphinxdirective
-.. code-block:: sh
-
-   stat -c "group_name=%G group_id=%g" /dev/dri/render*
-
-@endsphinxdirective
+```bash
+stat -c "group_name=%G group_id=%g" /dev/dri/render*
+```

 The default account in the docker image is preconfigured. If you change the security context, use the following command to start the model server container:

-@sphinxdirective
-.. code-block:: sh
-
-   docker run --rm -it --device=/dev/dri --group-add=$(stat -c "%g" /dev/dri/render* | head -n 1) -u $(id -u):$(id -g) \
-   -v ${PWD}/models/public/resnet-50-tf:/opt/model -p 9001:9001 openvino/model_server:latest-gpu \
-   --model_path /opt/model --model_name resnet --port 9001 --target_device GPU
-
-@endsphinxdirective
+```bash
+docker run --rm -it --device=/dev/dri --group-add=$(stat -c "%g" /dev/dri/render* | head -n 1) -u $(id -u):$(id -g) \
+-v ${PWD}/models/public/resnet-50-tf:/opt/model -p 9001:9001 openvino/model_server:latest-gpu \
+--model_path /opt/model --model_name resnet --port 9001 --target_device GPU
+```

 The GPU device can also be used on Windows hosts with Windows Subsystem for Linux 2 (WSL2). In that scenario, extra docker parameters are needed. See the command below.
 Use device `/dev/dxg` instead of `/dev/dri` and mount the volume `/usr/lib/wsl`:

-@sphinxdirective
-.. code-block:: sh
-
-   docker run --rm -it --device=/dev/dxg --volume /usr/lib/wsl:/usr/lib/wsl -u $(id -u):$(id -g) \
-   -v ${PWD}/models/public/resnet-50-tf:/opt/model -p 9001:9001 openvino/model_server:latest-gpu \
-   --model_path /opt/model --model_name resnet --port 9001 --target_device GPU
-
-@endsphinxdirective
+```bash
+docker run --rm -it --device=/dev/dxg --volume /usr/lib/wsl:/usr/lib/wsl -u $(id -u):$(id -g) \
+-v ${PWD}/models/public/resnet-50-tf:/opt/model -p 9001:9001 openvino/model_server:latest-gpu \
+--model_path /opt/model --model_name resnet --port 9001 --target_device GPU
+```

 > **NOTE**:
 > The public docker image includes the OpenCL drivers for GPU in version 22.28 (RedHat) and 22.35 (Ubuntu).
@@ -136,15 +127,12 @@ Make sure you have passed the devices and access to the devices you want to use

 Below is an example of the command with the AUTO Plugin as target device. It includes extra docker parameters to enable GPU (/dev/dri), besides CPU.

-@sphinxdirective
-.. code-block:: sh
-
-   docker run --rm -d --device=/dev/dri --group-add=$(stat -c "%g" /dev/dri/render* | head -n 1) \
-   -u $(id -u):$(id -g) -v ${PWD}/models/public/resnet-50-tf:/opt/model -p 9001:9001 openvino/model_server:latest-gpu \
-   --model_path /opt/model --model_name resnet --port 9001 \
-   --target_device AUTO
-
-@endsphinxdirective
+```bash
+docker run --rm -d --device=/dev/dri --group-add=$(stat -c "%g" /dev/dri/render* | head -n 1) \
+-u $(id -u):$(id -g) -v ${PWD}/models/public/resnet-50-tf:/opt/model -p 9001:9001 openvino/model_server:latest-gpu \
+--model_path /opt/model --model_name resnet --port 9001 \
+--target_device AUTO
+```

 The `Auto Device` plugin can also use the [PERFORMANCE_HINT](performance_tuning.md) plugin config property that enables you to specify a performance mode for the plugin.

@@ -154,29 +142,23 @@ To enable Performance Hints for your application, use the following command:

 LATENCY

-@sphinxdirective
-.. code-block:: sh
-
-   docker run --rm -d --device=/dev/dri --group-add=$(stat -c "%g" /dev/dri/render* | head -n 1) -u $(id -u):$(id -g) \
-   -v ${PWD}/models/public/resnet-50-tf:/opt/model -p 9001:9001 openvino/model_server:latest-gpu \
-   --model_path /opt/model --model_name resnet --port 9001 \
-   --plugin_config '{"PERFORMANCE_HINT": "LATENCY"}' \
-   --target_device AUTO
-
-@endsphinxdirective
+```bash
+docker run --rm -d --device=/dev/dri --group-add=$(stat -c "%g" /dev/dri/render* | head -n 1) -u $(id -u):$(id -g) \
+-v ${PWD}/models/public/resnet-50-tf:/opt/model -p 9001:9001 openvino/model_server:latest-gpu \
+--model_path /opt/model --model_name resnet --port 9001 \
+--plugin_config '{"PERFORMANCE_HINT": "LATENCY"}' \
+--target_device AUTO
+```

 THROUGHPUT

-@sphinxdirective
-.. code-block:: sh
-
-   docker run --rm -d --device=/dev/dri --group-add=$(stat -c "%g" /dev/dri/render* | head -n 1) -u $(id -u):$(id -g) \
-   -v ${PWD}/models/public/resnet-50-tf:/opt/model -p 9001:9001 openvino/model_server:latest-gpu \
-   --model_path /opt/model --model_name resnet --port 9001 \
-   --plugin_config '{"PERFORMANCE_HINT": "THROUGHPUT"}' \
-   --target_device AUTO
-
-@endsphinxdirective
+```bash
+docker run --rm -d --device=/dev/dri --group-add=$(stat -c "%g" /dev/dri/render* | head -n 1) -u $(id -u):$(id -g) \
+-v ${PWD}/models/public/resnet-50-tf:/opt/model -p 9001:9001 openvino/model_server:latest-gpu \
+--model_path /opt/model --model_name resnet --port 9001 \
+--plugin_config '{"PERFORMANCE_HINT": "THROUGHPUT"}' \
+--target_device AUTO
+```

 > **NOTE**: currently, the AUTO plugin cannot be used with the `--shape auto` parameter while a GPU device is enabled.

@@ -186,22 +168,22 @@ OpenVINO Model Server can be used also with NVIDIA GPU cards by using NVIDIA plu
 The docker image of OpenVINO Model Server including support for NVIDIA can be built from sources:

 ```bash
-   git clone https://github.com/openvinotoolkit/model_server.git
-   cd model_server
-   make docker_build NVIDIA=1 OV_USE_BINARY=0
-   cd ..
+git clone https://github.com/openvinotoolkit/model_server.git
+cd model_server
+make docker_build NVIDIA=1 OV_USE_BINARY=0
+cd ..
 ```
 Check also [building from sources](https://github.com/openvinotoolkit/model_server/blob/main/docs/build_from_source.md).

 Example command to run a container with NVIDIA support:

 ```bash
-   docker run -it --gpus all -p 9000:9000 -v ${PWD}/models/public/resnet-50-tf:/opt/model openvino/model_server:latest-cuda --model_path /opt/model --model_name resnet --port 9000 --target_device NVIDIA
+docker run -it --gpus all -p 9000:9000 -v ${PWD}/models/public/resnet-50-tf:/opt/model openvino/model_server:latest-cuda --model_path /opt/model --model_name resnet --port 9000 --target_device NVIDIA
 ```

 For models with layers not supported by the NVIDIA plugin, you can use the virtual plugin `HETERO`, which can use multiple devices listed after the colon:
 ```bash
-   docker run -it --gpus all -p 9000:9000 -v ${PWD}/models/public/resnet-50-tf:/opt/model openvino/model_server:latest-cuda --model_path /opt/model --model_name resnet --port 9000 --target_device HETERO:NVIDIA,CPU
+docker run -it --gpus all -p 9000:9000 -v ${PWD}/models/public/resnet-50-tf:/opt/model openvino/model_server:latest-cuda --model_path /opt/model --model_name resnet --port 9000 --target_device HETERO:NVIDIA,CPU
 ```

 Check the supported [configuration parameters](https://github.com/openvinotoolkit/openvino_contrib/tree/master/modules/nvidia_plugin#supported-configuration-parameters) and [supported layers](https://github.com/openvinotoolkit/openvino_contrib/tree/master/modules/nvidia_plugin#supported-layers-and-limitations).
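The same pattern covers code samples: an `@sphinxdirective` block wrapping `.. code-block:: sh` becomes a plain Markdown fence with a language tag, so the snippet is highlighted both by Sphinx and by GitHub's renderer. A minimal sketch of the mapping, with an illustrative command:

Before:

    @sphinxdirective
    .. code-block:: sh

       echo "serving"

    @endsphinxdirective

After:

    ```bash
    echo "serving"
    ```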

docs/advanced_topics.md (+11 −12)

@@ -1,17 +1,16 @@
 # Advanced Features {#ovms_docs_advanced}

-@sphinxdirective
-
-.. toctree::
-   :maxdepth: 1
-   :hidden:
-
-   ovms_sample_cpu_extension
-   ovms_docs_model_cache
-   ovms_docs_custom_loader
-   ovms_extras_nginx-mtls-auth-readme
-
-@endsphinxdirective
+```{toctree}
+---
+maxdepth: 1
+hidden:
+---
+
+ovms_sample_cpu_extension
+ovms_docs_model_cache
+ovms_docs_custom_loader
+ovms_extras_nginx-mtls-auth-readme
+```

 ## CPU Extensions
 Implement any CPU layer that is not yet supported by OpenVINO as a shared library.

docs/api_reference_guide.md (+12 −13)

@@ -1,18 +1,17 @@
 # API Reference Guide {#ovms_docs_server_api}

-@sphinxdirective
-
-.. toctree::
-   :maxdepth: 1
-   :hidden:
-
-   ovms_docs_grpc_api_tfs
-   ovms_docs_grpc_api_kfs
-   ovms_docs_rest_api_tfs
-   ovms_docs_rest_api_kfs
-   ovms_docs_c_api
-
-@endsphinxdirective
+```{toctree}
+---
+maxdepth: 1
+hidden:
+---
+
+ovms_docs_grpc_api_tfs
+ovms_docs_grpc_api_kfs
+ovms_docs_rest_api_tfs
+ovms_docs_rest_api_kfs
+ovms_docs_c_api
+```

 ## Introduction

docs/binary_input.md (+10 −11)

@@ -1,17 +1,16 @@
 # Support for Binary Encoded Image Input Data {#ovms_docs_binary_input}

-@sphinxdirective
+```{toctree}
+---
+maxdepth: 1
+hidden:
+---

-.. toctree::
-   :maxdepth: 1
-   :hidden:
-
-   ovms_docs_binary_input_layout_and_shape
-   ovms_docs_binary_input_tfs
-   ovms_docs_binary_input_kfs
-   ovms_docs_demo_tensorflow_conversion
-
-@endsphinxdirective
+ovms_docs_binary_input_layout_and_shape
+ovms_docs_binary_input_tfs
+ovms_docs_binary_input_kfs
+ovms_docs_demo_tensorflow_conversion
+```

 While OpenVINO models cannot process images directly in their binary format, the model server can accept them and convert
 them automatically from JPEG/PNG to an OpenVINO-friendly format using the built-in [OpenCV](https://opencv.org/) library. To take advantage of this feature, there are two requirements:

docs/clients.md (+9 −9)

@@ -1,14 +1,14 @@
 # Clients {#ovms_docs_clients}

-@sphinxdirective
-
-.. toctree::
-   :maxdepth: 1
-   :hidden:
-
-   ovms_docs_clients_tfs
-   ovms_docs_clients_kfs
-@endsphinxdirective
+```{toctree}
+---
+maxdepth: 1
+hidden:
+---
+
+ovms_docs_clients_tfs
+ovms_docs_clients_kfs
+```

 In this section you can find short code samples to interact with OpenVINO Model Server endpoints via:
 - [TensorFlow Serving API](./clients_tfs.md)
