TensorFlow end-to-end whirlwind tutorial

Time: 2020-01-22

I am currently working on an equipment anomaly detection project for a senior labmate, which is how I first came into contact with TF. This tutorial serves both as my notes from this period and as a quick index for other teammates and newcomers to the project.

By “end-to-end” I mean the whole process: environment setup, model development, deployment to production, and finally calling the model from a client. So far I have not seen a single article that covers the entire flow. The goal of this post is to walk the whole chain, so that TF beginners can quickly grasp the big picture instead of getting stuck in the implementation details of specific models.

OK, let’s go!

TF installation

There is not much to say about this step: just follow the installation guide on the official TF website. The only thing to note is that, before installing, you should configure a PyPI mirror; otherwise the download speed will be painful.
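For example, a mirror can be configured with pip itself; the Tsinghua mirror below is just one common choice, and any mirror close to you will do:

pip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple
pip install tensorflow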

Model development

TF development is divided into two steps: algorithm development and model export.

For the development part, articles on the internet almost all use the MNIST dataset as the example. But speaking from experience, I do not recommend that complete beginners start with such a complex example. It is better to spend your energy on understanding the basic concepts of TF first, then move quickly to the following steps to build an overall picture of ML development and application.

So I recommend taking a look at O’Reilly’s Hello, TensorFlow!. This article not only explains the basic concepts of TF, but also gives a brief introduction to TensorBoard. Its only drawback is that the TF API has changed a lot, so the sample code needs some adjustment to run.

A TF model has to be exported before it can be used in a production environment. Unfortunately, the article above does not mention this. You can refer to my example on GitHub, which not only adapts the sample code above but also adds the model-export code, something that is rarely covered in typical articles.
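To give a feel for what export involves, here is a minimal sketch in TF 1.x style; the graph, the tensor names and the export path below are just placeholders, and the real code is in the GitHub example above:

import tensorflow as tf

# a trivial graph: y = x * w
x = tf.placeholder(tf.float32, shape=[None, 1], name="input")
w = tf.Variable([[2.0]], name="weight")
y = tf.matmul(x, w, name="output")

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # each numbered subdirectory under the export path becomes one model version for TF Serving
    tf.saved_model.simple_save(
        sess,
        "/home/ml/modules/export/1",
        inputs={"input": x},
        outputs={"output": y})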

Running environment

The running environment here refers to a local running environment; after all, that is more convenient for development.

For this step, using Docker is the easiest way, and TensorFlow Serving has ready-made images to use. The documentation is here.

Here are some quick commands for reference:

  • docker load < bitnami_tensorflow_serving.tar.xz
  • docker run -d -v <model directory>:/bitnami/model-data -P --expose 9000 --name tensorflow-serving bitnami/tensorflow-serving

    • Note that the model directory must be given as an absolute path

/bitnami/model-data is the base model directory of the image. You can confirm this after entering the container: the configuration file used can be found from the command-line arguments of the TF Serving process started inside the container, /opt/bitnami/tensorflow-serving/conf/tensorflow-serving.conf, which contains the base path. You will also find that the model name included by default is “inception”.
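Its contents are roughly of the following form (the exact file may vary between image versions); it is the same model_config_list format used in the next section:

model_config_list: {
  config: {
    name: "inception",
    base_path: "/bitnami/model-data",
    model_platform: "tensorflow"
  }
}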

At the same time, don’t forget to expose the port.

Model startup

Model startup was actually already touched on above, but I think it deserves its own section.

Attentive readers can probably guess from the command above that “starting a model” is nothing more than telling TF Serving which directory contains the exported model. For example:

tensorflow_model_server --model_name=mnist --port=9000 --model_base_path=/home/ml/modules/export/
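Note that TF Serving expects the base path to contain numbered version subdirectories, each holding one exported model, so for the command above the directory would look roughly like this:

/home/ml/modules/export/
    1/
        saved_model.pb
        variables/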

If you start multiple models, you need to write a model configuration file, for example:

model_config_list: {
  config: {
    name: "mnist",
    base_path: "/tmp/mnist_model",
    model_platform: "tensorflow"
  },
  config: {
    name: "inception",
    base_path: "/tmp/inception_model",
    model_platform: "tensorflow"
  }
}

At startup, the command becomes:

tensorflow_model_server --port=9000 --model_config_file=<configuration file>

Unfortunately, at the time of writing, TF Serving cannot reload this configuration file dynamically when serving multiple models, which is undoubtedly inconvenient for deploying new models and updating existing ones.

If you use the Docker image above, the docker run command shown earlier already starts the model. Interested readers can enter the container to inspect the actual command details (run ps -ef after entering).
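Assuming the container was named tensorflow-serving as in the docker run command above (and that the image ships with bash), that would be:

docker exec -it tensorflow-serving bash
ps -ef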

Calling the model

It’s finally time to test our results!

Unfortunately, the tensorflow-serving-api package does not support Python 3 at the moment, and I am too lazy to maintain two Python environments, so I took an unconventional route: call the model from Java instead. After all, it is just gRPC.

Inspired by this article (which is also a good introduction), I finally got it working:

  • Import the proto files of TF and TF Serving
  • The rest is just writing a simple gRPC client.

For the proto files, you can refer to my example; for the complete code from this article, see my example project (which also happens to be a vertx-grpc example).

With that, the TensorFlow pipeline is complete end to end; what remains is to concentrate on developing the models themselves!
