
Deep Learning with Multiple GPUs on Rescale: TensorFlow Tutorial

In a previous post, we showed examples of using multiple GPUs to train a deep neural network (DNN) using the Torch machine learning library. In this post, we will focus on performing multi-GPU training using TensorFlow.

In particular, we will explore data-parallel GPU training with multi-GPU and multi-node configurations on Rescale. We will leverage Rescale’s existing MPI configured clusters to easily launch TensorFlow distributed training workers. For a basic example of training with TensorFlow on a single GPU, see this previous post.

Preparing Data
To make our multi-GPU training sessions more interesting, we will be using some larger datasets. Later, we will show a training job on the popular ImageNet image classification dataset. Before we start with this 150 GB dataset, we will prepare a smaller dataset to be in the same format as ImageNet and test our jobs with that in order to make sure the TensorFlow trainer is working properly. In order to keep data local during training, Rescale syncs the dataset to local storage on the GPU nodes before training starts. Waiting for a large dataset like ImageNet to sync when iteratively developing a model using just a few examples is wasteful. For this reason, we will start with the smaller Flowers dataset and then move to ImageNet once we have a working example.
The Inception training scripts read images serialized as TFRecords, so first let's download the flowers dataset and preprocess its images into that format. All the examples we show today come from the inception model in the tensorflow/models repository on GitHub, so we start by cloning that repository:

git clone https://github.com/tensorflow/models

Now we use the bazel build tool to make the flowers download-and-preprocess script and then run that script:

pushd models/inception
bazel clean
bazel build inception/download_and_preprocess_flowers
popd
models/inception/bazel-bin/inception/download_and_preprocess_flowers $(pwd)/flowers

This should download a ~220MB archive and create something like this:

$ ls flowers
flower_photos.tgz  train-00000-of-00002  validation-00000-of-00002
raw-data           train-00001-of-00002  validation-00001-of-00002
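
If you want to sanity-check the shards before uploading, a few lines of Python will count the serialized examples in each one. This is an optional check and assumes a local TensorFlow 1.x-style install:

import glob
import tensorflow as tf
# Count the serialized Example protos in each training shard.
for shard in sorted(glob.glob('flowers/train-*')):
    count = sum(1 for _ in tf.python_io.tf_record_iterator(shard))
    print('{0}: {1} examples'.format(shard, count))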

Next, we assemble this directory into an archive to upload to Rescale. Optionally, you can first delete the raw-data directory and the original flower_photos.tgz archive, since all the necessary information is now encoded as TFRecords:

pushd flowers
rm -rf raw-data flower_photos.tgz
popd
tar czf flowers.tar.gz flowers

We have assembled all these operations in a preprocessing job on Rescale here for you to clone and run yourself.

Next, let’s take the flowers.tar.gz file we just produced and convert it to an input file for the next step:
[Screenshot: adding flowers.tar.gz as an input file for the training job]

Now we have our preprocessed flower image TFRecords ready for training.

Single Node – Multiple GPUs
The next step is to take this input dataset and train a model with it. We will be using the Inception v3 DNN architecture from the tensorflow/models repository mentioned above. Training on a single node with multiple GPUs looks something like this:

[Figure: single-node multi-GPU training, from https://github.com/tensorflow/models/tree/master/inception#how-to-train-from-scratch]

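Under the hood, imagenet_train builds one copy of the model (a "tower") per GPU, computes gradients on each tower in parallel, and averages them before applying a single update. Here is a minimal sketch of that data-parallel tower pattern with a toy stand-in model, assuming a TF 1.x-style API; it is not the repository's actual Inception code:

import tensorflow as tf
NUM_GPUS = 2        # assumption: matches --num_gpus
BATCH_PER_GPU = 32
def tower_loss(images, labels):
    # Tiny stand-in model; the real job builds Inception v3 here.
    w = tf.get_variable('w', [784, 10])
    b = tf.get_variable('b', [10], initializer=tf.zeros_initializer())
    logits = tf.matmul(images, w) + b
    return tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
        labels=labels, logits=logits))
opt = tf.train.GradientDescentOptimizer(0.01)
tower_grads = []
with tf.variable_scope(tf.get_variable_scope()):
    for i in range(NUM_GPUS):
        with tf.device('/gpu:%d' % i), tf.name_scope('tower_%d' % i):
            # Each tower gets its own slice of the input batch (random data here).
            images = tf.random_normal([BATCH_PER_GPU, 784])
            labels = tf.random_uniform([BATCH_PER_GPU], maxval=10, dtype=tf.int32)
            loss = tower_loss(images, labels)
            tf.get_variable_scope().reuse_variables()  # share weights across towers
            tower_grads.append(opt.compute_gradients(loss))
# Average the per-tower gradients and apply a single synchronized update.
avg_grads = []
for grads_and_vars in zip(*tower_grads):
    grads = [g for g, _ in grads_and_vars]
    avg_grads.append((tf.add_n(grads) / float(NUM_GPUS), grads_and_vars[0][1]))
train_op = opt.apply_gradients(avg_grads)
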
We will first create a Rescale job that runs on a single node, since that has fewer moving parts than the multi-node case. We will actually run 3 processes in the job:

  • Main GPU-based model training
  • CPU-based evaluation of checkpointed models on the validation set
  • TensorBoard visualization tool

So let’s get started! First build the training and evaluation scripts.

pushd models/inception
bazel clean
bazel build inception/imagenet_train inception/imagenet_eval
popd

Next, create some output directories and start the main training process: 

mkdir -p out/train out/eval
models/inception/bazel-bin/inception/imagenet_train --num_gpus=$RESCALE_GPUS_PER_SLOT --batch_size=32 --train_dir=out/train --data_dir=flowers

$RESCALE_GPUS_PER_SLOT is an environment variable set in every Rescale job environment. In this command line, we point to the flowers directory with our training images and to the empty out/train directory where TensorFlow will write logs and model files.

Evaluation of accuracy on the validation set can be done separately and does not need GPU acceleration:

CUDA_VISIBLE_DEVICES='' models/inception/bazel-bin/inception/imagenet_eval --eval_dir=out/eval --checkpoint_dir=$HOME/work/out/train --eval_interval_secs=1200 --data_dir=flowers

imagenet_eval sits in a loop and wakes up every eval_interval_secs seconds to evaluate the accuracy of the most recent model checkpoint in out/train against the validation TFRecords in the flowers directory. Accuracy results are logged to out/eval. CUDA_VISIBLE_DEVICES is an important parameter here: by default, TensorFlow allocates GPU memory even when it is not going to use the GPU. Without this setting, the training and evaluation processes together would exhaust the GPU's memory and cause training to fail.
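
If you control session creation in your own TensorFlow scripts, an alternative to hiding the GPU is to stop TensorFlow from grabbing all GPU memory up front. Here is a small sketch of that option, assuming a TF 1.x-style Session API; in this tutorial we stick with CUDA_VISIBLE_DEVICES since we run the repository's scripts unmodified:

import tensorflow as tf
# Allocate GPU memory on demand instead of claiming it all at startup.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
# Or cap the fraction of GPU memory this process may claim:
# config.gpu_options.per_process_gpu_memory_fraction = 0.5
sess = tf.Session(config=config)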

Finally, TensorBoard is a handy tool for monitoring TensorFlow's progress. TensorBoard runs its own web server to show plots of training progress, a graph of the model, and many other visualizations. To start it, we just point it at the out directory where our training and evaluation processes write their output:

CUDA_VISIBLE_DEVICES='' tensorboard --logdir=out

TensorBoard pulls in logs from all subdirectories of logdir, so it will show the training and evaluation data together.

Putting this all together:

DATASET=flowers
pushd models/inception
bazel clean
bazel build inception/imagenet_train inception/imagenet_eval
popd
mkdir -p out/train out/eval
echo "Training on $RESCALE_GPUS_PER_SLOT GPUs"
CUDA_VISIBLE_DEVICES='' tensorboard --logdir=out &
( sleep 600; CUDA_VISIBLE_DEVICES= models/inception/bazel-bin/inception/imagenet_eval --eval_dir=out/eval --checkpoint_dir=$HOME/work/out/train --eval_interval_secs=1200 --num_examples=350 --data_dir=$DATASET ) &
models/inception/bazel-bin/inception/imagenet_train --num_gpus=$RESCALE_GPUS_PER_SLOT --batch_size=32 --train_dir=out/train --data_dir=$DATASET

Since this all runs in a single shell, we background the TensorBoard and evaluation processes. We also delay the start of the evaluation process, since the training process needs a few minutes to initialize and create the first model checkpoint.

You can run this training job on Rescale here.

Since TensorBoard runs its own web server without any authentication, access is blocked by default on Rescale. The easiest way to get access to TensorBoard is to open an SSH tunnel to the node and forward port 6006:

ssh -L 6006:localhost:6006 <username>@<node-address>



Now navigate to http://localhost:6006 and you should see something like this:

[Screenshot: TensorBoard dashboard]

Multiple Nodes
The current state of the art limits the number of GPU cards that fit in a single node to around 8. Additionally, CUDA peer-to-peer, the mechanism a TensorFlow process uses to distribute work amongst GPUs, is currently limited to 8 GPU devices. While these numbers will continue to increase, it is still convenient to have a mechanism to scale training out for large models and datasets. TensorFlow distributed training synchronizes updates between training processes over the network, so it can be used with any network fabric and is not limited by CUDA implementation details. Distributed training consists of some number of workers and parameter servers, as shown here:
[Figure: distributed training with worker and parameter server processes, from https://github.com/tensorflow/models/tree/master/inception#how-to-train-from-scratch]

Parameter servers provide the model parameters, which the workers use to evaluate an input batch. After each worker finishes its batch, the error gradients are fed back to the parameter servers, which use them to produce updated model parameters. In the context of a GPU cluster, we run one worker process per GPU in the cluster and then pick enough parameter servers to keep up with processing of the gradients.
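
Inside imagenet_distributed_train, this split is expressed with TensorFlow's standard distributed primitives: every process builds a ClusterSpec describing all parameter servers and workers, starts a server for its own role, and places variables on the parameter server tasks. The sketch below illustrates the idea, assuming the TF 1.x distributed runtime and using localhost addresses so it can run on a single machine; it is not the repository's actual code:

import tensorflow as tf
# Toy single-machine cluster: one parameter server and two workers.
cluster = tf.train.ClusterSpec({
    'ps': ['localhost:2222'],
    'worker': ['localhost:2223', 'localhost:2224'],
})
job_name, task_id = 'worker', 0  # each process is told its role and index
server = tf.train.Server(cluster, job_name=job_name, task_index=task_id)
if job_name == 'ps':
    server.join()  # parameter servers just host variables and serve requests
else:
    # Variables land on the ps tasks; compute ops stay on this worker.
    with tf.device(tf.train.replica_device_setter(
            worker_device='/job:worker/task:%d' % task_id, cluster=cluster)):
        global_step = tf.Variable(0, trainable=False, name='global_step')
        # ... build the Inception model and training op here ...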

Following the instructions here, we set up a worker per GPU and a parameter server per node. We will take advantage of the MPI configuration that comes with every Rescale cluster.

To start, we need to generate the host strings that will be passed to each parameter server and worker, with each process getting a unique hostname:port combination, for example:

--ps_hosts='host1:2222,host2:2222'
--worker_hosts='host1:2223,host1:2224,host2:2223,host2:2224'

We want a single entry per host for the parameter servers and a single entry per GPU for the workers. We take advantage of the machine files that are automatically set up on every Rescale cluster: $HOME/machinefile is just a list of the hosts in the cluster, and $HOME/machinefile.gpu is the same list annotated with the number of GPUs on each host. We parse them to generate our host strings in a Python script, make_hoststrings.py:

#!/usr/bin/env python
import os.path
PS_PORT = 2222
WORKER_PORT_START = 2223
# One parameter server per host, all listening on the same port.
ps_string = ','.join('{0}:{1}'.format(hostname.strip(), PS_PORT)
                     for hostname in open(os.path.expanduser('~/machinefile')))
# One worker per GPU: each machinefile.gpu line looks like "host1 slots=2 max_slots=2".
workers = []
for hostline in open(os.path.expanduser('~/machinefile.gpu')):
    hostname, slots, max_slots = hostline.split()
    slots = int(slots.split('=')[1])
    workers += ['{0}:{1}'.format(hostname, WORKER_PORT_START + i)
                for i in range(slots)]
print ps_string, ','.join(workers)
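
For example, on a hypothetical two-node cluster whose machinefile lists host1 and host2, each annotated with slots=2 in machinefile.gpu, the script prints host1:2222,host2:2222 followed by host1:2223,host1:2224,host2:2223,host2:2224, matching the flag values shown earlier.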

Next, we have a script, tf_mpistart.sh, that takes these host strings and launches the imagenet_distributed_train script with the proper task ID and GPU whitelist:

#!/bin/bash
# Usage: tf_mpistart.sh <ps|worker> <data_dir>
PROC_TYPE=$1
DATADIR=$2
host_strings=$(./make_hoststrings.py)
PS_HOSTS=$(echo $host_strings | cut -d' ' -f1)
WORKER_HOSTS=$(echo $host_strings | cut -d' ' -f2)
# The global MPI rank becomes the TensorFlow task ID.
TASK_ID=$OMPI_COMM_WORLD_RANK
if [ $PROC_TYPE == 'ps' ]; then
    # Parameter servers should not touch the GPUs.
    CUDADEV=''
else
    # Each worker is pinned to the GPU matching its node-local rank.
    CUDADEV=$OMPI_COMM_WORLD_LOCAL_RANK
fi
CUDA_VISIBLE_DEVICES=$CUDADEV models/inception/bazel-bin/inception/imagenet_distributed_train --batch_size=32 --data_dir=$DATADIR --train_dir=out/train --job_name=$PROC_TYPE --task_id=$TASK_ID --ps_hosts="$PS_HOSTS" --worker_hosts="$WORKER_HOSTS" >>${PROC_TYPE}-$TASK_ID.log 2>&1

tf_mpistart.sh will be run with OpenMPI's mpirun, so the $OMPI_* environment variables are automatically injected. We use $OMPI_COMM_WORLD_RANK to get a global task index and $OMPI_COMM_WORLD_LOCAL_RANK to get a node-local GPU index.
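
For example, when launching workers across two nodes with four GPUs each, OpenMPI will typically assign global ranks 0-3 to the first node and 4-7 to the second, with local ranks 0-3 on each node mapping those workers onto GPUs 0-3.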

Now, putting it all together:

DATASET=flowers
mv tf_mpistart.sh make_hoststrings.py models $DATASET shared/
cd shared
pushd models/inception
bazel clean
bazel --output_base=$HOME/work/shared/.cache build inception/imagenet_distributed_train inception/imagenet_eval
popd
mkdir -p out/train out/eval
CUDA_VISIBLE_DEVICES= tensorboard --logdir=out &
mpirun -machinefile ~/machinefile ./tf_mpistart.sh ps $DATASET &
( sleep 600; CUDA_VISIBLE_DEVICES= models/inception/bazel-bin/inception/imagenet_eval --eval_dir=out/eval --checkpoint_dir=$HOME/work/shared/out/train --eval_interval_secs=1200 --num_examples=350 --data_dir=$DATASET ) &
mpirun -machinefile ~/machinefile.gpu ./tf_mpistart.sh worker $DATASET

We start with much of the same directory creation and bazel build boilerplate. The two differences are:

1. We move all the inputs into the shared/ subdirectory so they are visible to every node in the cluster.
2. We call the bazel build command with the --output_base option so that bazel doesn't symlink the build products into $HOME/.cache but instead writes them to the shared filesystem.

Next we launch TensorBoard and imagenet_eval locally on the MPI master. These 2 processes don’t need to be replicated across nodes with mpirun.

Finally, we launch the parameter servers with tf_mpistart.sh ps using the one-entry-per-host machinefile, and then the workers with tf_mpistart.sh worker using the per-GPU machinefile.gpu.
Here is an example job performing the above distributed training on the flowers dataset using 2 Jade nodes (8 K520 GPUs). Note that since we are using the MPI infrastructure already set up on Rescale, we can use this same example for any number of nodes or GPUs per node. With the appropriate machinefiles, the number of workers and parameter servers is set automatically to match the resources.

Training on ImageNet
Now that we have developed the machinery to launch a TensorFlow distributed training job on the smaller flowers dataset, we are ready to train on the full ImageNet dataset. Downloading of ImageNet requires permission here. You can request access and upon acceptance, you will be given a username and password to download the necessary tarballs.
We can then run a preparation job similar to the flowers job above to download the dataset and format the images into TFRecords:

# *************************
# SET IMAGENET VALUES BELOW
# *************************
export DATA_DIR=$HOME/work/imagenet-data
export IMAGENET_USERNAME=
export IMAGENET_ACCESS_KEY=
cd models/inception
bazel build inception/download_and_preprocess_imagenet
bazel-bin/inception/download_and_preprocess_imagenet "${DATA_DIR}"
rm -rf imagenet-data/raw-data imagenet-data/*.gz
tar -cf ilsvrc2012.tar imagenet-data
rm -rf imagenet-data

You can clone and run this preparation job on Rescale here.
If you have already downloaded the 3 necessary inputs from the ImageNet site (ILSVRC2012_img_train.tar, ILSVRC2012_img_val.tar, and ILSVRC2012_bbox_train_v2.tar.gz) and have placed them somewhere accessible via HTTP (like an AWS S3 bucket), you can customize models/inception/inception/data/download_imagenet.sh in the tensorflow/models repository to download from your custom location:

# ************************
# SET BASE_URL VALUE BELOW
# ************************
export DATA_DIR=$HOME/work/imagenet-data
export BASE_URL=
sed -i "s#BASE_URL=.*#BASE_URL=${BASE_URL}#" download_imagenet_noauth.sh
mv download_imagenet_noauth.sh models/inception/inception/data/download_imagenet.sh
cd models/inception
bazel build inception/download_and_preprocess_imagenet
bazel-bin/inception/download_and_preprocess_imagenet "${DATA_DIR}"
rm -rf imagenet-data/raw-data imagenet-data/*.gz
tar -cf ilsvrc2012.tar imagenet-data
rm -rf imagenet-data

Clone and run this version of the preparation job here.
Finally, we can make some slight modifications to our multi-GPU flowers jobs to take the imagenet-data dataset directory instead of the flowers directory by changing the $DATASET variable at the top:

DATASET=imagenet-data
pushd models/inception
bazel clean
bazel build inception/imagenet_train
bazel build inception/imagenet_eval
popd
mkdir -p out/train out/eval
echo "Training on $RESCALE_GPUS_PER_SLOT GPUs"
CUDA_VISIBLE_DEVICES= tensorboard --logdir=out &
( sleep 600; CUDA_VISIBLE_DEVICES= models/inception/bazel-bin/inception/imagenet_eval --eval_dir=out/eval --checkpoint_dir=$HOME/work/out/train --eval_interval_secs=1200 --data_dir=$DATASET ) &
models/inception/bazel-bin/inception/imagenet_train --num_gpus=$RESCALE_GPUS_PER_SLOT --batch_size=32 --train_dir=out/train --data_dir=$DATASET
rm -rf $DATASET models

And the distributed training case:

DATASET=imagenet-data
mv tf_mpistart.sh make_hoststrings.py models $DATASET shared/
cd shared
pushd models/inception
bazel clean
bazel --output_base=$HOME/work/shared/.cache build inception/imagenet_distributed_train inception/imagenet_eval
popd
mkdir -p out/train out/eval
CUDA_VISIBLE_DEVICES= tensorboard --logdir=out &
mpirun -machinefile ~/machinefile ./tf_mpistart.sh ps $DATASET &
( sleep 600; CUDA_VISIBLE_DEVICES= models/inception/bazel-bin/inception/imagenet_eval --eval_dir=out/eval --checkpoint_dir=$HOME/work/shared/out/train --eval_interval_secs=1200 --data_dir=$DATASET ) &
mpirun -machinefile ~/machinefile.gpu ./tf_mpistart.sh worker $DATASET
rm -rf $DATASET models

We have gone through all the details for performing multi-GPU, single and multi-node model training with TensorFlow. In an upcoming post, we will discuss the performance ramifications of distributed training, and look at how well it scales on different server configurations.

Rescale Jobs
Here is a summary of the Rescale jobs used in this example. Click the links to import the jobs to your Rescale account.
Flowers dataset preprocessing
Single node flowers training 
Multiple nodes flowers training
ImageNet ILSVRC2012 download and preprocessing 
ImageNet ILSVRC2012 download from existing S3 bucket

Author

  • Mark Whitney

    Mark Whitney is a director of engineering at Rescale. His areas of expertise include high performance computing architectures, quantum information research, and cloud computing. He holds a PhD in computer science from the University of California, Berkeley.
