OpenVINO 2023.3
===============

  • An open-source toolkit for optimizing and deploying deep learning models.
    Boost your AI deep-learning inference performance!
  • Use PyTorch models directly, without converting them first (see the sketch below).
    Learn more...
  • OpenVINO via PyTorch 2.0 torch.compile()
    Use OpenVINO directly in PyTorch-native applications!
    Learn more...
  • Do you like Generative AI? You will love how it performs with OpenVINO!
    Check out our new notebooks...
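
A minimal sketch of the direct PyTorch workflow, assuming the ``openvino``,
``torch``, and ``torchvision`` packages are installed (ResNet-50 is just an
illustrative model choice):

.. code-block:: python

   import openvino as ov
   import torch
   import torchvision

   # Any torch.nn.Module works; ResNet-50 is used here only as an example.
   model = torchvision.models.resnet50(weights=None).eval()

   # convert_model accepts the PyTorch module directly; example_input
   # lets OpenVINO trace the expected input shape.
   ov_model = ov.convert_model(model, example_input=torch.rand(1, 3, 224, 224))

   # Compile for a device and run inference on a NumPy array.
   compiled = ov.compile_model(ov_model, "CPU")
   result = compiled(torch.rand(1, 3, 224, 224).numpy())
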
.. button-ref::  get_started
   :ref-type: doc
   :class: ov-homepage-banner-btn
   :color: primary
   :outline:

   Get started
.. rst-class:: openvino-diagram

   .. image:: _static/images/ov_homepage_diagram.png
      :align: center


.. grid:: 2 2 3 3
   :class-container: ov-homepage-higlight-grid

   .. grid-item-card:: Performance Benchmarks
      :link: openvino_docs_performance_benchmarks
      :link-alt: performance benchmarks
      :link-type: doc

      See the latest benchmark numbers for OpenVINO and OpenVINO Model Server

   .. grid-item-card:: Work with Multiple Model Formats
      :link: openvino_docs_model_processing_introduction
      :link-alt: Supported Model Formats
      :link-type: doc

      OpenVINO supports different model formats: PyTorch, TensorFlow, TensorFlow Lite, ONNX, and PaddlePaddle.

   .. grid-item-card:: Deploy at Scale with OpenVINO Model Server
      :link: ovms_what_is_openvino_model_server
      :link-alt: model server
      :link-type: doc

      Cloud-ready deployments for microservice applications

   .. grid-item-card:: Optimize Models
      :link: openvino_docs_model_optimization_guide
      :link-alt: model optimization
      :link-type: doc

      Boost performance using quantization and compression with NNCF

   .. grid-item-card:: Use OpenVINO in PyTorch Apps with torch.compile()
      :link: pytorch_2_0_torch_compile
      :link-alt: torch.compile
      :link-type: doc

      Optimize model graph generation with the PyTorch 2.0 torch.compile() backend, as shown in the sketch after this grid

   .. grid-item-card:: Optimize and Deploy Generative AI
      :link: gen_ai_guide
      :link-alt: gen ai
      :link-type: doc

      Enhance the efficiency of Generative AI
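
The ``torch.compile()`` card above maps to a dedicated OpenVINO backend. A
hedged sketch, assuming ``torch`` >= 2.0 and the ``openvino`` package are
installed (installing the package registers the ``"openvino"`` backend name):

.. code-block:: python

   import torch
   import torchvision

   # An ordinary PyTorch model; ResNet-50 is only an example.
   model = torchvision.models.resnet50(weights=None).eval()

   # Routing graph compilation through OpenVINO is a one-line change.
   compiled_model = torch.compile(model, backend="openvino")

   with torch.no_grad():
       output = compiled_model(torch.rand(1, 3, 224, 224))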


Feature Overview
----------------

.. grid:: 1 2 2 2
   :class-container: ov-homepage-feature-grid

   .. grid-item-card:: Local Inference & Model Serving

      You can either link directly with OpenVINO Runtime to run inference locally or use OpenVINO Model Server
      to serve model inference from a separate server or within a Kubernetes environment.

   .. grid-item-card:: Improved Application Portability

      Write an application once, deploy it anywhere, and get maximum performance from the hardware. Automatic
      device discovery allows for superior deployment flexibility. OpenVINO Runtime supports Linux, Windows,
      and macOS and provides Python, C++, and C APIs. Use your preferred language and OS.

   .. grid-item-card:: Minimal External Dependencies

      Designed with minimal external dependencies, OpenVINO reduces the application footprint and simplifies
      installation and dependency management. Popular package managers enable application dependencies to be
      easily installed and upgraded. Custom compilation for your specific model(s) further reduces final binary size.

   .. grid-item-card:: Enhanced App Start-Up Time

      In applications where fast start-up is required, OpenVINO significantly reduces first-inference latency by
      using the CPU for initial inference and then switching to another device once the model has been compiled
      and loaded into memory. Compiled models are cached, improving start-up time even more, as sketched below.
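
A minimal sketch of that start-up behavior, assuming a local ``model.xml`` IR
file (a placeholder path): the AUTO device starts inference on the CPU while
the target device finishes compiling, and ``CACHE_DIR`` turns on model caching:

.. code-block:: python

   import openvino as ov

   core = ov.Core()
   # Cache compiled models on disk so later runs skip recompilation.
   core.set_property({"CACHE_DIR": "model_cache"})

   # "model.xml" is a placeholder for your own OpenVINO IR file.
   model = core.read_model("model.xml")

   # AUTO begins inference on the CPU, then hands over to the best
   # available device once its compiled model is ready.
   compiled = core.compile_model(model, "AUTO")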



.. toctree::
   :maxdepth: 2
   :hidden:

   GET STARTED <get_started>
   LEARN OPENVINO <learn_openvino>
   OPENVINO WORKFLOW <openvino_workflow>
   DOCUMENTATION <documentation>
   ABOUT OPENVINO <about_openvino>