kubeflow/01_Distributed_Training.ipynb
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Background\n",
"Deep learning has shown that being able to train large models on vast amounts of data can drastically improve model performance. \n",
"\n",
"\n",
"However, consider the problem of training a deep network with millions, or even billions, of parameters. How do we achieve this without waiting for days, or even weeks? Dean et al. propose a different training paradigm that allows us to train and serve a model on multiple physical machines. The authors propose two novel methodologies to accomplish this: `model parallelism` and `data parallelism`.\n",
"\n",
"\n",
"## Model Parallelism\n",
"When a big model cannot fit into a single node's memory, model-parallel training can be employed to handle it. Model-parallel training has two key features:\n",
"1. Each worker task is responsible for estimating a different part of the model parameters, so the computation logic in each worker differs from that of the others.\n",
"2. There is application-level data communication between workers. \n",
"\n",
"![Model Parallelism](./images/model_parallelism.jpg)\n",
"\n",
"\n",
"## Data Parallelism\n",
"\n",
"The algorithm distributes the data across the various tasks:\n",
"1. Each worker task computes its estimate on a different part of the dataset.\n",
"2. Tasks then exchange their estimates with one another to arrive at the combined estimate for the step (a toy sketch follows below).\n",
"\n",
"![Data Parallelism](./images/data_parallelism.png)\n",
"\n"
]
},
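{
"cell_type": "markdown",
"metadata": {},
"source": [
"Below is a toy sketch of synchronous data-parallel SGD in plain NumPy (our own illustration, not any framework's API): each simulated worker computes a gradient on its own shard of the mini-batch, and the shared parameters are updated with the average of those gradients."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Toy synchronous data-parallel SGD: each worker computes a gradient on its\n",
"# shard of the batch; the shards' gradients are averaged before the update.\n",
"import numpy as np\n",
"\n",
"np.random.seed(0)\n",
"X, y = np.random.randn(64, 10), np.random.randn(64)\n",
"w = np.zeros(10)  # shared parameters (the 'parameter server' state)\n",
"\n",
"num_workers, lr = 4, 0.1\n",
"for step in range(100):\n",
"    grads = []\n",
"    for Xs, ys in zip(np.array_split(X, num_workers), np.array_split(y, num_workers)):\n",
"        grads.append(2 * Xs.T @ (Xs @ w - ys) / len(ys))  # MSE gradient on the shard\n",
"    w -= lr * np.mean(grads, axis=0)  # exchange estimates: average, then update\n",
"print('final loss:', np.mean((X @ w - y) ** 2))"
]
},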
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Distributed Training in TensorFlow\n",
"\"Data Parallelism\" is the most common training configuration. It involves multiple tasks in a `worker` job training the same model on different mini-batches of data, updating shared parameters hosted in one or more tasks in a `ps` (parameter server) job. All tasks typically run on different machines or containers. There are many ways to specify this structure in TensorFlow, and the TensorFlow team is building libraries that will simplify the work of specifying a replicated model. Other platforms, such as `MXNet` and `Petuum`, offer the same abstraction. \n",
"\n",
"- __In-graph replication__. In this approach, the client builds a single tf.Graph that contains one set of parameters (in tf.Variable nodes pinned to /job:ps); and multiple copies of the compute-intensive part of the model, each pinned to a different task in /job:worker.\n",
"\n",
"- __Between-graph replication__. In this approach, there is a separate client for each /job:worker task, typically in the same process as the worker task. Each client builds a similar graph containing the parameters (pinned to /job:ps as before using tf.train.replica_device_setter to map them deterministically to the same tasks); and a single copy of the compute-intensive part of the model, pinned to the local task in /job:worker.\n",
"\n",
"- __Asynchronous training__. In this approach, each replica of the graph has an independent training loop that executes without coordination. It is compatible with both forms of replication above.\n",
"\n",
"- __Synchronous training__. In this approach, all of the replicas read the same values for the current parameters, compute gradients in parallel, and then apply them together. It is compatible with in-graph replication (e.g. using gradient averaging as in the CIFAR-10 multi-GPU trainer), and between-graph replication (e.g. using the tf.train.SyncReplicasOptimizer).\n"
]
},
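{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal between-graph replication sketch, assuming the TF 1.x distributed API, is shown below. The `localhost` addresses are placeholders; on Kubeflow, the TFJob operator supplies the real cluster definition to each container via the `TF_CONFIG` environment variable."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Minimal between-graph replication sketch (TF 1.x API; addresses are placeholders).\n",
"import tensorflow as tf\n",
"\n",
"cluster = tf.train.ClusterSpec({\n",
"    \"ps\": [\"localhost:2222\"],\n",
"    \"worker\": [\"localhost:2223\", \"localhost:2224\"],\n",
"})\n",
"\n",
"# Every worker runs this same client code with its own task_index.\n",
"task_index = 0\n",
"server = tf.train.Server(cluster, job_name=\"worker\", task_index=task_index)\n",
"\n",
"# replica_device_setter pins tf.Variable nodes to /job:ps and keeps the\n",
"# compute-intensive ops on the local /job:worker task.\n",
"with tf.device(tf.train.replica_device_setter(\n",
"        worker_device=\"/job:worker/task:%d\" % task_index,\n",
"        cluster=cluster)):\n",
"    w = tf.Variable(tf.zeros([10, 1]))  # hosted on the parameter server\n",
"    x = tf.placeholder(tf.float32, [None, 10])\n",
"    y = tf.matmul(x, w)                 # computed on the local worker"
]
},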
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Examples\n",
"\n",
"We will demonstrate distributed training with two frameworks: TensorFlow and PyTorch."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Tensorflow"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Check the TensorFlow Job Spec"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!cat ./distributed-training-jobs/distributed-tensorflow-job.yaml"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Submit TFJob distributed training job"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!kubectl create -f distributed-training-jobs/distributed-tensorflow-job.yaml"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Get all TFJobs"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!kubectl get tfjob"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Check TFJob Status"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!kubectl describe tfjob distributed-tensorflow-job"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Check all the pods created by this TFJob"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!kubectl get pod | grep distributed-tensorflow-job"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Check logs of one worker pod\n",
"Re-run the following cell periodically to see the logs."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!kubectl logs distributed-tensorflow-job-worker-0"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### PyTorch"
]
},
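{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before submitting the job, here is a minimal sketch of the pattern each PyTorchJob replica typically runs (an illustration, not the actual training script baked into the job): the Kubeflow PyTorch operator sets `MASTER_ADDR`, `MASTER_PORT`, `RANK`, and `WORLD_SIZE` in each pod, so the script can initialize the process group from the environment and wrap its model in `DistributedDataParallel`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch of what runs inside each PyTorchJob replica (not the actual job script).\n",
"import torch\n",
"import torch.distributed as dist\n",
"from torch.nn.parallel import DistributedDataParallel\n",
"\n",
"# The operator-provided MASTER_ADDR/MASTER_PORT/RANK/WORLD_SIZE env vars let\n",
"# init_process_group rendezvous with the default env:// method.\n",
"dist.init_process_group(backend=\"gloo\")\n",
"\n",
"model = DistributedDataParallel(torch.nn.Linear(10, 1))\n",
"loss = model(torch.randn(8, 10)).sum()\n",
"loss.backward()  # DDP all-reduces (averages) gradients across replicas here"
]
},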
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!cat ./distributed-training-jobs/distributed-pytorch-job.yaml"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!kubectl apply -f ./distributed-training-jobs/distributed-pytorch-job.yaml"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!kubectl describe pytorchjob distributed-pytorch-job"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!kubectl get pod | grep distributed-pytorch-job"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Check logs of one worker pod\n",
"Re-run the following cell periodically to see the logs."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!kubectl logs distributed-pytorch-job-master-0"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.9"
}
},
"nbformat": 4,
"nbformat_minor": 2
}