
KubeShare

Share GPU between Pods in Kubernetes

Features

  • Treat GPU as a first-class resource.
  • Compatible with the native "nvidia.com/gpu" system.
  • Extensible architecture supports custom scheduling policies without modifying KubeShare.

Prerequisite & Limitation

  • A Kubernetes cluster with garbage collection, DNS enabled, and the Nvidia GPU device plugin installed (a quick check is sketched after this list).
  • Only supports Kubernetes clusters that use the environment variable NVIDIA_VISIBLE_DEVICES to control which GPUs are made accessible inside a container.
  • Only one GPU model per node.
  • CUDA == 10.0 (other versions not tested)
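
A quick way to confirm that the Nvidia device plugin is advertising GPUs on your nodes (the exact output depends on your cluster):

kubectl describe nodes | grep -i "nvidia.com/gpu"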

Run

Installation

kubectl apply -f build/deployment/
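
After the manifests are applied, you can verify that the SharePod CRD is registered and the KubeShare components are running (the component pod names and their namespace depend on the manifests under build/deployment/):

kubectl get crd | grep sharepod
kubectl get pods --all-namespaces | grep kubeshare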

Uninstallation

kubectl delete -f build/deployment/

SharePod

SharePod Lifecycle


  1. A user creates a SharePod to request a portion of a GPU.
  2. kubeshare-scheduler schedules the pending SharePod.
  3. kubeshare-device-manager creates a corresponding Pod object behind the SharePod, with the same namespace and name plus some extra critical settings, and the Pod starts to run (see the commands after this list).
  4. kubeshare-device-manager synchronizes the Pod's ObjectMeta and PodStatus to the SharePod's status.
  5. The user deletes the SharePod; the Pod is also garbage collected by K8s.
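
Because the shadow Pod shares the SharePod's namespace and name (step 3), both objects can be inspected side by side; sharepod1 below is just an example name, and the commands assume the CRD's resource name is sharepod:

kubectl get sharepod sharepod1 -o yaml
kubectl get pod sharepod1 -o yaml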

SharePod Specification

apiVersion: sharedgpu.goc/v1
kind: SharePod
metadata:
  name: sharepod1
  annotations:
    "sharedgpu/gpu_request": "0.5" # required if allocating GPU
    "sharedgpu/gpu_limit": "1.0" # required if allocating GPU
    "sharedgpu/gpu_mem": "1073741824" # required if allocating GPU # 1Gi, in bytes
    "sharedgpu/sched_affinity": "red" # optional
    "sharedgpu/sched_anti-affinity": "green" # optional
    "sharedgpu/sched_exclusion": "blue" # optional
spec: # PodSpec
  containers:
  - name: cuda
    image: nvidia/cuda:9.0-base
    command: ["nvidia-smi", "-L"]
    resources:
      limits:
        cpu: "1"
        memory: "500Mi"

Because K8s forbids floating-point requests for custom (extended) device resources, the GPU resource usage definitions are moved into annotations.
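
For contrast, expressing the same fractional request through the native extended-resource system would be rejected, since extended resources such as nvidia.com/gpu only accept integer quantities; the snippet below is illustrative only and will not be admitted by the API server:

  resources:
    limits:
      nvidia.com/gpu: "0.5"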

  • sharedgpu/gpu_request (required if allocating GPU): guaranteed GPU usage of the Pod, gpu_request <= "1.0".
  • sharedgpu/gpu_limit (required if allocating GPU): maximum extra usage if the GPU still has free resources, gpu_request <= gpu_limit <= "1.0".
  • sharedgpu/gpu_mem (required if allocating GPU): maximum GPU memory usage of the Pod, in bytes.
  • spec (required): a normal PodSpec definition to be run in K8s.
  • sharedgpu/sched_affinity (optional): only schedule the SharePod alongside SharePods with the same sched_affinity label, or onto an idle GPU.
  • sharedgpu/sched_anti-affinity (optional): do not schedule SharePods that share the same sched_anti-affinity label onto the same GPU.
  • sharedgpu/sched_exclusion (optional): at most one sched_exclusion label may exist on a device, including the empty label.
  • sharedgpu/gpu_model (optional): only assign the Pod to a node with the specified GPU model; run kubectl describe node | grep sharedgpu/gpu_model_info to check a node's GPU model, e.g. GeForce GTX 1080 (a minimal example follows this list).
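
A minimal sketch of a SharePod pinned to a specific GPU model through this annotation (the pod name and model string are placeholders):

apiVersion: sharedgpu.goc/v1
kind: SharePod
metadata:
  name: sharepod-model
  annotations:
    "sharedgpu/gpu_request": "0.5"
    "sharedgpu/gpu_limit": "1.0"
    "sharedgpu/gpu_mem": "1073741824" # 1Gi, in bytes
    "sharedgpu/gpu_model": "GeForce GTX 1080"
spec: # PodSpec
  containers:
  - name: cuda
    image: nvidia/cuda:9.0-base
    command: ["nvidia-smi", "-L"]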

We also support the Kubernetes nodeSelector. To use this feature, first add the label to the node, then add a matching nodeSelector to the SharePod's spec:

  nodeSelector:
    disktype: ssd
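
Putting it together, label the node first, then reference that label from the SharePod's PodSpec (the node name and label value below are placeholders):

kubectl label nodes node01 disktype=ssd

apiVersion: sharedgpu.goc/v1
kind: SharePod
metadata:
  name: sharepod-ssd
  annotations:
    "sharedgpu/gpu_request": "0.5"
    "sharedgpu/gpu_limit": "1.0"
    "sharedgpu/gpu_mem": "1073741824" # 1Gi, in bytes
spec: # PodSpec
  nodeSelector:
    disktype: ssd
  containers:
  - name: cuda
    image: nvidia/cuda:9.0-base
    command: ["nvidia-smi", "-L"]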

Example

SharePod usage demo clip

All yaml files used in the clip are located in REPO_ROOT/doc/yaml.


SharePod with NodeName and GPUID (advanced)

Follow this section to understand how to place a SharePod on a GPU that is already used by other SharePods.
kubeshare-scheduler fills metadata.annotations["sharedgpu/GPUID"] and spec.nodeName to schedule a SharePod.

apiVersion: sharedgpu.goc/v1
kind: SharePod
metadata:
  name: sharepod1
  annotations:
    "sharedgpu/gpu_request": "0.5"
    "sharedgpu/gpu_limit": "1.0"
    "sharedgpu/gpu_mem": "1073741824" # 1Gi, in bytes
    "sharedgpu/GPUID": "abcde"
spec: # PodSpec
  nodeName: node01
  containers:
  - name: cuda
    image: nvidia/cuda:9.0-base
    command: ["nvidia-smi", "-L"]
    resources:
      limits:
        cpu: "1"
        memory: "500Mi"

A GPU is shared between multiple SharePods if the SharePods own the same <nodeName, GPUID> pair.
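
For example, a second SharePod meant to land on the same GPU as sharepod1 above reuses the identical nodeName and GPUID (the values below mirror the earlier example and are placeholders):

apiVersion: sharedgpu.goc/v1
kind: SharePod
metadata:
  name: sharepod2
  annotations:
    "sharedgpu/gpu_request": "0.3"
    "sharedgpu/gpu_limit": "1.0"
    "sharedgpu/gpu_mem": "1073741824" # 1Gi, in bytes
    "sharedgpu/GPUID": "abcde" # same GPUID as sharepod1
spec: # PodSpec
  nodeName: node01 # same node as sharepod1
  containers:
  - name: cuda
    image: nvidia/cuda:9.0-base
    command: ["nvidia-smi", "-L"]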

The following demonstrates how kubeshare-scheduler schedules SharePods with the GPUID mechanism on a single node with two physical GPUs:

Initial status

GPU1(null)       GPU2(null)
+--------------+ +--------------+
|              | |              |
|              | |              |
|              | |              |
+--------------+ +--------------+

Pending list: Pod1(0.2)
kubeshare-scheduler decides to bind Pod1 on an idle GPU:
    randomString(5) => "zxcvb"
    Register Pod1 with GPUID: "zxcvb"

GPU1(null)       GPU2(zxcvb)
+--------------+ +--------------+
|              | |   Pod1:0.2   |
|              | |              |
|              | |              |
+--------------+ +--------------+

Pending list: Pod2(0.3)
kubeshare-scheduler decides to bind Pod2 on an idle GPU:
    randomString(5) => "qwert"
    Register Pod2 with GPUID: "qwert"

GPU1(qwert)      GPU2(zxcvb)
+--------------+ +--------------+
|   Pod2:0.3   | |   Pod1:0.2   |
|              | |              |
|              | |              |
+--------------+ +--------------+

Pending list: Pod3(0.4)
kubeshare-scheduler decides to share the GPU which Pod1 is using with Pod3:
    Register Pod2 with GPUID: "zxcvb"

GPU1(qwert)      GPU2(zxcvb)
+--------------+ +--------------+
|   Pod2:0.3   | |   Pod1:0.2   |
|              | |   Pod3:0.4   |
|              | |              |
+--------------+ +--------------+

Delete Pod2 (GPUID qwert no longer exists)

GPU1(null)       GPU2(zxcvb)
+--------------+ +--------------+
|              | |   Pod1:0.2   |
|              | |   Pod3:0.4   |
|              | |              |
+--------------+ +--------------+

Pending list: Pod4(0.5)
kubeshare-scheduler decides to bind Pod4 on an idle GPU:
    randomString(5) => "asdfg"
    Register Pod4 with GPUID: "asdfg"

GPU1(asdfg)      GPU2(zxcvb)
+--------------+ +--------------+
|   Pod4:0.5   | |   Pod1:0.2   |
|              | |   Pod3:0.4   |
|              | |              |
+--------------+ +--------------+
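
After scheduling, the <nodeName, GPUID> pair chosen by kubeshare-scheduler can be read back from the SharePod object itself (Pod3 here refers to the walkthrough above; substitute the real SharePod name):

kubectl get sharepod pod3 -o yaml | grep -E "GPUID|nodeName"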

More details in System Architecture

Build

Compiling

git clone https://github.com/NTHU-LSALAB/KubeShare.git
cd KubeShare
make

  • bin/kubeshare-scheduler: schedules pending SharePods to a node and device, i.e. <nodeName, GPUID>.
  • bin/kubeshare-device-manager: handles scheduled SharePods and creates the corresponding Pod objects; communicates with kubeshare-config-client on every node.
  • bin/kubeshare-config-client: a DaemonSet on every node that configures the GPU isolation settings.

Directories & Files

  • cmd/: where the main functions of the three binaries are located.
  • crd/: the CRD specification yaml file.
  • docker/: materials for all Docker images used in the yaml files.
  • pkg/: KubeShare core components, the SharePod type, and the API clientset produced by code-generator.
  • code-gen.sh: code-generator script.
  • go.mod: KubeShare dependencies.

GPU Isolation Library

Please refer to Gemini.

TODO

  • Convert the vGPU UUID update trigger from a dummy-Pod-creation handler to having the dummy Pod send data to the controller directly.
  • Add PodSpec.SchedulerName support to kubeshare-scheduler.
  • Add a Docker version check at the init phase of config-client.

Issues

For any problems, please open a GitHub issue. Thanks.

Publication

Our paper was accepted by ACM HPDC 2020, and an introduction video is also available on YouTube.
