Learning note tf065: TensorFlowOnSpark


The Hadoop big data ecosystem is divided into YARN (resource scheduling), HDFS (storage), and the MapReduce computing framework. By analogy, distributed TensorFlow corresponds to the MapReduce computing framework, and Kubernetes corresponds to the YARN scheduler. TensorFlowOnSpark addresses storage access and scheduling (and can use remote direct memory access, RDMA, for fast tensor transfer), bringing deep learning and big data together. TensorFlowOnSpark (TFoS) is an open-source project from Yahoo: https://github.com/yahoo/Tens… . It supports distributed TensorFlow training and prediction on Apache Spark clusters. TensorFlowOnSpark acts as a bridge: each Spark executor starts a corresponding TensorFlow process, and the two sides interact through remote procedure calls (RPC).

TensorFlowOnSpark architecture. The TensorFlow training program runs inside a Spark cluster, and its lifecycle on the cluster has these steps: Reserve, where each executor reserves a port for its TensorFlow process and starts a data/control message listener; Start, where the TensorFlow main function is launched in each executor; data acquisition, either via the TensorFlow readers and QueueRunners mechanism, which reads HDFS data files directly so Spark never touches the data, or via feeding, where Spark RDD data is sent to the TensorFlow nodes and passed into the TensorFlow computation graph through the feed_dict mechanism; and Shutdown, which closes the TensorFlow worker and parameter-server nodes on the executors. Data path: Spark driver > Spark executor > parameter server > TensorFlow core > gRPC, RDMA > HDFS dataset. http://yahoohadoop.tumblr.com… .
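The feeding mode described above can be pictured as a producer/consumer handoff: a Spark-side thread pushes RDD partitions into a queue, and the TensorFlow side pulls records out and groups them into batches for `feed_dict`. A minimal pure-Python sketch (no Spark or TensorFlow; the queue stands in for the executor-to-TF channel):

```python
import threading
import queue

def spark_feed(partitions, q):
    """Producer: the Spark executor side pushes each RDD partition, then a sentinel."""
    for part in partitions:
        q.put(part)
    q.put(None)  # signal end of data

def tf_train_loop(q, batch_size):
    """Consumer: the TensorFlow side drains the queue and groups records into batches."""
    batches = []
    buf = []
    while True:
        part = q.get()
        if part is None:
            break
        for record in part:
            buf.append(record)
            if len(buf) == batch_size:
                # here a real worker would call sess.run(..., feed_dict={x: buf})
                batches.append(list(buf))
                buf.clear()
    return batches

q = queue.Queue()
partitions = [[1, 2, 3], [4, 5, 6, 7], [8]]
producer = threading.Thread(target=spark_feed, args=(partitions, q))
producer.start()
batches = tf_train_loop(q, batch_size=4)
producer.join()
print(batches)  # [[1, 2, 3, 4], [5, 6, 7, 8]]
```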

TensorFlowOnSpark MNIST example: https://github.com/yahoo/Tens… . Standalone-mode Spark cluster on a single machine. Install Spark and Hadoop, and deploy a Java 1.8.0 JDK. Download Spark 2.1.0 (http://spark.apache.org/downl…) and Hadoop 2.7.3 (http://hadoop.apache.org/#Dow…). TensorFlow version 0.12.1 is better supported.
Modify the configuration files, set the environment variables, and start Hadoop: $HADOOP_HOME/sbin/start-all.sh. Check out the TensorFlowOnSpark source code:
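A typical set of environment variables for this setup might look like the following; the paths are illustrative assumptions, so adjust them to wherever you unpacked the downloads:

```shell
# Illustrative install locations -- adjust to your machine
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export HADOOP_HOME=$HOME/hadoop-2.7.3
export SPARK_HOME=$HOME/spark-2.1.0-bin-hadoop2.7
# Put the Hadoop and Spark commands on the PATH
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$SPARK_HOME/bin
```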

git clone --recurse-submodules https://github.com/yahoo/TensorFlowOnSpark.git
cd TensorFlowOnSpark
git submodule init
git submodule update --force
git submodule foreach --recursive git clean -dfx

Package the source code into a zip for task submission:

cd TensorFlowOnSpark/src
zip -r ../tfspark.zip *

Set the TensorFlowOnSpark root environment variable:

cd TensorFlowOnSpark
export TFoS_HOME=$(pwd)

Start the Spark master node:

${SPARK_HOME}/sbin/start-master.sh

Configure two worker instances and connect them to the master node via the Spark master URL:

export MASTER=spark://$(hostname):7077
export SPARK_WORKER_INSTANCES=2
export CORES_PER_WORKER=1
export TOTAL_CORES=$((${CORES_PER_WORKER} * ${SPARK_WORKER_INSTANCES}))
${SPARK_HOME}/sbin/start-slave.sh -c ${CORES_PER_WORKER} -m 3G ${MASTER}

Submit the task that converts the MNIST zip file into an RDD dataset on HDFS:

${SPARK_HOME}/bin/spark-submit \
--master ${MASTER} --conf spark.ui.port=4048 --verbose \
${TFoS_HOME}/examples/mnist/mnist_data_setup.py \
--output examples/mnist/csv \
--format csv

View the processed datasets:

hadoop fs -ls hdfs://localhost:9000/user/libinggen/examples/mnist/csv

View the saved images and label vectors:

hadoop fs -ls hdfs://localhost:9000/user/libinggen/examples/mnist/csv/train/labels

The RDD data are saved separately as a training set and a test set. The conversion script:
https://github.com/yahoo/Tens… .

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy
import tensorflow as tf
from array import array
from tensorflow.contrib.learn.python.learn.datasets import mnist
def toTFExample(image, label):
  """Serializes an image/label as a TFExample byte string"""
  example = tf.train.Example(
    features = tf.train.Features(
      feature = {
        'label': tf.train.Feature(int64_list=tf.train.Int64List(value=label.astype("int64"))),
        'image': tf.train.Feature(int64_list=tf.train.Int64List(value=image.astype("int64")))
      }
    )
  )
  return example.SerializeToString()
def fromTFExample(bytestr):
  """Deserializes a TFExample from a byte string"""
  example = tf.train.Example()
  example.ParseFromString(bytestr)
  return example
def toCSV(vec):
  """Converts a vector/array into a CSV string"""
  return ','.join([str(i) for i in vec])
def fromCSV(s):
  """Converts a CSV string to a vector/array"""
  return [float(x) for x in s.split(',') if len(s) > 0]
def writeMNIST(sc, input_images, input_labels, output, format, num_partitions):
  """Writes MNIST image/label vectors into parallelized files on HDFS"""
  # load MNIST gzip into memory
  #MNIST image and marker vector are written into HDFS
  with open(input_images, 'rb') as f:
    images = numpy.array(mnist.extract_images(f))
  with open(input_labels, 'rb') as f:
    if format == "csv2":
      labels = numpy.array(mnist.extract_labels(f, one_hot=False))
    else:
      labels = numpy.array(mnist.extract_labels(f, one_hot=True))
  shape = images.shape
  print("images.shape: {0}".format(shape))          # 60000 x 28 x 28
  print("labels.shape: {0}".format(labels.shape))   # 60000 x 10
  # create RDDs of vectors
  imageRDD = sc.parallelize(images.reshape(shape[0], shape[1] * shape[2]), num_partitions)
  labelRDD = sc.parallelize(labels, num_partitions)
  output_images = output + "/images"
  output_labels = output + "/labels"
  # save RDDs in the requested format
  if format == "pickle":
    imageRDD.saveAsPickleFile(output_images)
    labelRDD.saveAsPickleFile(output_labels)
  elif format == "csv":
    imageRDD.map(toCSV).saveAsTextFile(output_images)
    labelRDD.map(toCSV).saveAsTextFile(output_labels)
  elif format == "csv2":
    imageRDD.map(toCSV).zip(labelRDD).map(lambda x: str(x[1]) + "|" + x[0]).saveAsTextFile(output)
  else: # format == "tfr":
    tfRDD = imageRDD.zip(labelRDD).map(lambda x: (bytearray(toTFExample(x[0], x[1])), None))
    # requires: --jars tensorflow-hadoop-1.0-SNAPSHOT.jar
    tfRDD.saveAsNewAPIHadoopFile(output, "org.tensorflow.hadoop.io.TFRecordFileOutputFormat",
                                 keyClass="org.apache.hadoop.io.BytesWritable",
                                 valueClass="org.apache.hadoop.io.NullWritable")
#  Note: this creates TFRecord files w/o requiring a custom Input/Output format
#  else: # format == "tfr":
#    def writeTFRecords(index, iter):
#      output_path = "{0}/part-{1:05d}".format(output, index)
#      writer = tf.python_io.TFRecordWriter(output_path)
#      for example in iter:
#        writer.write(example)
#      return [output_path]
#    tfRDD = imageRDD.zip(labelRDD).map(lambda x: toTFExample(x[0], x[1]))
#    tfRDD.mapPartitionsWithIndex(writeTFRecords).collect()
def readMNIST(sc, output, format):
  """Reads/verifies previously created output"""
  output_images = output + "/images"
  output_labels = output + "/labels"
  imageRDD = None
  labelRDD = None
  if format == "pickle":
    imageRDD = sc.pickleFile(output_images)
    labelRDD = sc.pickleFile(output_labels)
  elif format == "csv":
    imageRDD = sc.textFile(output_images).map(fromCSV)
    labelRDD = sc.textFile(output_labels).map(fromCSV)
  else: # format.startswith("tf"):
    # requires: --jars tensorflow-hadoop-1.0-SNAPSHOT.jar
    tfRDD = sc.newAPIHadoopFile(output, "org.tensorflow.hadoop.io.TFRecordFileInputFormat",
                                keyClass="org.apache.hadoop.io.BytesWritable",
                                valueClass="org.apache.hadoop.io.NullWritable")
    imageRDD = tfRDD.map(lambda x: fromTFExample(str(x[0])))
  num_images = imageRDD.count()
  num_labels = labelRDD.count() if labelRDD is not None else num_images
  samples = imageRDD.take(10)
  print("num_images: ", num_images)
  print("num_labels: ", num_labels)
  print("samples: ", samples)
if __name__ == "__main__":
  import argparse
  from pyspark.context import SparkContext
  from pyspark.conf import SparkConf
  parser = argparse.ArgumentParser()
  parser.add_argument("-f", "--format", help="output format", choices=["csv","csv2","pickle","tf","tfr"], default="csv")
  parser.add_argument("-n", "--num-partitions", help="Number of output partitions", type=int, default=10)
  parser.add_argument("-o", "--output", help="HDFS directory to save examples in parallelized format", default="mnist_data")
  parser.add_argument("-r", "--read", help="read previously saved examples", action="store_true")
  parser.add_argument("-v", "--verify", help="verify saved examples after writing", action="store_true")

  args = parser.parse_args()

  sc = SparkContext(conf=SparkConf().setAppName("mnist_parallelize"))
  if not args.read:
    # Note: these files are inside the mnist.zip file
    writeMNIST(sc, "mnist/train-images-idx3-ubyte.gz", "mnist/train-labels-idx1-ubyte.gz", args.output + "/train", args.format, args.num_partitions)
    writeMNIST(sc, "mnist/t10k-images-idx3-ubyte.gz", "mnist/t10k-labels-idx1-ubyte.gz", args.output + "/test", args.format, args.num_partitions)
  if args.read or args.verify:
    readMNIST(sc, args.output + "/train", args.format)
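The toCSV/fromCSV helpers above are plain string converters, so their round-trip behavior can be checked without Spark:

```python
def toCSV(vec):
    """Converts a vector/array into a CSV string"""
    return ','.join([str(i) for i in vec])

def fromCSV(s):
    """Converts a CSV string back to a list of floats"""
    return [float(x) for x in s.split(',') if len(s) > 0]

row = [0.0, 128.0, 255.0]
line = toCSV(row)
print(line)           # 0.0,128.0,255.0
print(fromCSV(line))  # [0.0, 128.0, 255.0]
```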

Submit the training task to start training; it generates an mnist_model directory in HDFS. Command:

${SPARK_HOME}/bin/spark-submit \
--master ${MASTER} \
--py-files ${TFoS_HOME}/examples/mnist/spark/mnist_dist.py \
--conf spark.cores.max=${TOTAL_CORES} \
--conf spark.task.cpus=${CORES_PER_WORKER} \
--conf spark.executorEnv.JAVA_HOME="$JAVA_HOME" \
${TFoS_HOME}/examples/mnist/spark/mnist_spark.py \
--cluster_size ${SPARK_WORKER_INSTANCES} \
--images examples/mnist/csv/train/images \
--labels examples/mnist/csv/train/labels \
--format csv \
--mode train \
--model mnist_model

mnist_dist.py constructs the distributed TensorFlow task: it defines the main function of the distributed job, map_fun, which is launched on each executor, and uses the feeding approach for data acquisition. Get the TensorFlow cluster and server instances:

cluster, server = TFNode.start_cluster_server(ctx, 1, args.rdma)

TFNode is provided by the TFNode.py module inside tfspark.zip.
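Under the hood, start_cluster_server assembles a tf.train.ClusterSpec-style mapping from the host:port pairs that the executors reserved. A simplified illustration (the function name and layout here are assumptions for exposition, not the actual TFNode code):

```python
def build_cluster_spec(addresses, num_ps):
    """Split reserved executor addresses into parameter servers and workers.

    addresses -- list of "host:port" strings, one per Spark executor
    num_ps    -- how many of them act as parameter servers
    """
    return {
        "ps": addresses[:num_ps],
        "worker": addresses[num_ps:],
    }

spec = build_cluster_spec(
    ["node1:2222", "node2:2222", "node3:2222"], num_ps=1)
print(spec)
# {'ps': ['node1:2222'], 'worker': ['node2:2222', 'node3:2222']}
```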

mnist_spark.py is the main training program. The TensorFlowOnSpark deployment steps are as follows:

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from pyspark.context import SparkContext
from pyspark.conf import SparkConf
import argparse
import os
import numpy
import sys
import tensorflow as tf
import threading
import time
from datetime import datetime
from tensorflowonspark import TFCluster
import mnist_dist
sc = SparkContext(conf=SparkConf().setAppName("mnist_spark"))
executors = sc._conf.get("spark.executor.instances")
num_executors = int(executors) if executors is not None else 1
num_ps = 1
parser = argparse.ArgumentParser()
parser.add_argument("-b", "--batch_size", help="number of records per batch", type=int, default=100)
parser.add_argument("-e", "--epochs", help="number of epochs", type=int, default=1)
parser.add_argument("-f", "--format", help="example format: (csv|pickle|tfr)", choices=["csv","pickle","tfr"], default="csv")
parser.add_argument("-i", "--images", help="HDFS path to MNIST images in parallelized format")
parser.add_argument("-l", "--labels", help="HDFS path to MNIST labels in parallelized format")
parser.add_argument("-m", "--model", help="HDFS path to save/load model during train/inference", default="mnist_model")
parser.add_argument("-n", "--cluster_size", help="number of nodes in the cluster", type=int, default=num_executors)
parser.add_argument("-o", "--output", help="HDFS path to save test/inference output", default="predictions")
parser.add_argument("-r", "--readers", help="number of reader/enqueue threads", type=int, default=1)
parser.add_argument("-s", "--steps", help="maximum number of steps", type=int, default=1000)
parser.add_argument("-tb", "--tensorboard", help="launch tensorboard process", action="store_true")
parser.add_argument("-X", "--mode", help="train|inference", default="train")
parser.add_argument("-c", "--rdma", help="use rdma connection", default=False)
args = parser.parse_args()
print("{0} ===== Start".format(datetime.now().isoformat()))
if args.format == "tfr":
  images = sc.newAPIHadoopFile(args.images, "org.tensorflow.hadoop.io.TFRecordFileInputFormat",
                               keyClass="org.apache.hadoop.io.BytesWritable",
                               valueClass="org.apache.hadoop.io.NullWritable")
  def toNumpy(bytestr):
    example = tf.train.Example()
    example.ParseFromString(bytestr)
    features = example.features.feature
    image = numpy.array(features['image'].int64_list.value)
    label = numpy.array(features['label'].int64_list.value)
    return (image, label)
  dataRDD = images.map(lambda x: toNumpy(str(x[0])))
else:
  if args.format == "csv":
    images = sc.textFile(args.images).map(lambda ln: [int(x) for x in ln.split(',')])
    labels = sc.textFile(args.labels).map(lambda ln: [float(x) for x in ln.split(',')])
  else: # args.format == "pickle":
    images = sc.pickleFile(args.images)
    labels = sc.pickleFile(args.labels)
  print("zipping images and labels")
  dataRDD = images.zip(labels)
#1. Reserve a port for executing each tensorflow process in the executor
cluster = TFCluster.run(sc, mnist_dist.map_fun, args, args.cluster_size, num_ps, args.tensorboard, TFCluster.InputMode.SPARK)
#2. Start tensorflow main function
cluster.start(mnist_dist.map_fun, args)
if args.mode == "train":
  #3. Train
  cluster.train(dataRDD, args.epochs)
else:
  #3. Inference
  labelRDD = cluster.inference(dataRDD)
  labelRDD.saveAsTextFile(args.output)
#4. Close the TensorFlow worker and parameter-server nodes on the executors
cluster.shutdown()
print("{0} ===== Stop".format(datetime.now().isoformat()))
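Conceptually, cluster.train replays the RDD once per epoch and slices it into fixed-size batches for the workers. A plain-Python sketch of that batching loop (illustrative only, not the TFCluster implementation):

```python
def feed_epochs(data, epochs, batch_size):
    """Yield (epoch, batch) pairs, replaying the dataset once per epoch."""
    for epoch in range(epochs):
        for i in range(0, len(data), batch_size):
            yield epoch, data[i:i + batch_size]

batches = list(feed_epochs([1, 2, 3, 4, 5], epochs=2, batch_size=2))
print(len(batches))  # 6 batches: 3 per epoch x 2 epochs
print(batches[0])    # (0, [1, 2])
```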

Inference command:

${SPARK_HOME}/bin/spark-submit \
--master ${MASTER} \
--py-files ${TFoS_HOME}/examples/mnist/spark/mnist_dist.py \
--conf spark.cores.max=${TOTAL_CORES} \
--conf spark.task.cpus=${CORES_PER_WORKER} \
--conf spark.executorEnv.JAVA_HOME="$JAVA_HOME" \
${TFoS_HOME}/examples/mnist/spark/mnist_spark.py \
--cluster_size ${SPARK_WORKER_INSTANCES} \
--images examples/mnist/csv/test/images \
--labels examples/mnist/csv/test/labels \
--mode inference \
--format csv \
--model mnist_model \
--output predictions

You can also run on Amazon EC2, or in YARN mode on a Hadoop cluster.

Reference material:
TensorFlow Technical Analysis and Practice (book)

Recommendations for machine learning jobs in Shanghai are welcome; my WeChat: qingxingfengzi