TensorFlow Rust in Practice, Part 2: Integrating Actix-Web to Provide HTTP Services

Time:2019-8-11

Last time we got TensorFlow Rust up and running. This time we'll take what we built with TensorFlow and serve it over an HTTP interface. With the release of Actix Web 1.0, I think now is a good time to build something with it.

This article assumes some knowledge of Futures and how they work. I'll try to explain things in simple terms, but an understanding of the Futures ecosystem will be very helpful when reading this article, so I suggest starting with tokio if you haven't already.

Some people suggest waiting for async/await and friends to land before going deep into Futures. I think you should get your hands dirty now: asynchronous programming is always going to be challenging.

Again, for the impatient, you can find the reference code on the actix-web branch:
https://github.com/cetra3/mtc…

1. API Definition

The API here is very simple. We want to mimic what we did on the command line: submit an image and get an image back. To make things more interesting, we'll also provide a way to return the bounding boxes as a JSON array.

When it comes to submitting binary data over HTTP, a few options come to mind:

  • Just submit the raw data
  • Use multipart/form-data
  • Serialize the submission as JSON

I think raw data is the simplest, so let's do that! multipart/form-data might be appropriate, but when would you need to handle multiple images at once? JSON seems a bit wasteful, because you inevitably end up encoding the binary data with Base64 or something similar.

So our API is like this:

  • Submit the raw image file in a POST request
  • Run the session and extract faces with the MTCNN algorithm
  • Return the bounding boxes as JSON, or return the image as JPEG like in the command-line example

2. The Mtcnn Struct

In the last post we simply did everything in the main function, but we'll have to do some refactoring to work with actix. We want to encapsulate the MTCNN behaviour in a struct that can be passed around, with the ultimate goal of using it as application state.

2.1 Struct Definition

Let's include everything we need in the struct:

  • The graph
  • The session
  • The input Tensors that stay the same across requests

First, we create a new file, mtcnn.rs, with the struct definition:

use tensorflow::{Graph, Session, Tensor};

pub struct Mtcnn {
    graph: Graph,
    session: Session,
    min_size: Tensor<f32>,
    thresholds: Tensor<f32>,
    factor: Tensor<f32>
}

Now we'll handle initialization with a new method. Since initializing some of these values can fail, we'll return a Result:

pub fn new() -> Result<Self, Box<dyn Error>> {

    let model = include_bytes!("mtcnn.pb");

    let mut graph = Graph::new();
    graph.import_graph_def(&*model, &ImportGraphDefOptions::new())?;

    let session = Session::new(&SessionOptions::new(), &graph)?;

    let min_size = Tensor::new(&[]).with_values(&[40f32])?;
    let thresholds = Tensor::new(&[3]).with_values(&[0.6f32, 0.7f32, 0.7f32])?;
    let factor = Tensor::new(&[]).with_values(&[0.709f32])?;

    Ok(Self {
        graph,
        session,
        min_size,
        thresholds,
        factor
    })

}
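
A quick note on layout: both new and the run method we're about to write live inside an impl Mtcnn { ... } block in mtcnn.rs. For reference, a possible set of imports for this file so far might look like the sketch below. This is an assumption about layout, not copied from the original repository; depending on your version of the image crate you may also need GenericImageView in scope for pixels(), width() and height():

//mtcnn.rs imports needed by the struct, `new`, and the upcoming `run` method
use std::error::Error;

use image::{DynamicImage, GenericImageView};
use tensorflow::{
    Graph, ImportGraphDefOptions, Session, SessionOptions, SessionRunArgs, Status, Tensor,
};

Later sections (the BBox serialization and the overlay drawing) will pull in a few more imports from serde_derive and imageproc.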

2.2 The run Method

I'm going to pick up the pace here, so if you get stuck or aren't sure what's going on, check out the first part of this TensorFlow Rust series, which explains these steps in detail.

We've added everything we need to run a session. Now let's create a method that does what the API needs to do: take an image and respond with some bounding boxes (the locations of faces):

pub fn run(&self, img: &DynamicImage) -> Result<Vec<BBox>, Status> {
    ...
}

Again, we return a Result, because the run method can fail in some cases. We use TensorFlow's Status type as the error type.

Like our previous main function, we need to flatten the image into tensor input:

let input = {
    let mut flattened: Vec<f32> = Vec::new();

    //Note: the channels are pushed in reverse (BGR) order, matching the original command-line code
    for (_x, _y, rgb) in img.pixels() {
        flattened.push(rgb[2] as f32);
        flattened.push(rgb[1] as f32);
        flattened.push(rgb[0] as f32);
    }

    Tensor::new(&[img.height() as u64, img.width() as u64, 3])
        .with_values(&flattened)?
};

Then we feed in all the relevant inputs. This is the same as our previous main function, except that we borrow the values from self instead of creating them on every run:

let mut args = SessionRunArgs::new();

args.add_feed(
    &self.graph.operation_by_name_required("min_size")?,
    0,
    &self.min_size,
);
args.add_feed(
    &self.graph.operation_by_name_required("thresholds")?,
    0,
    &self.thresholds,
);
args.add_feed(
    &self.graph.operation_by_name_required("factor")?,
    0,
    &self.factor,
);
args.add_feed(&self.graph.operation_by_name_required("input")?, 0, &input);

Next, we grab the output we want:

let bbox = args.request_fetch(&self.graph.operation_by_name_required("box")?, 0);
let prob = args.request_fetch(&self.graph.operation_by_name_required("prob")?, 0);

2.3 Running the Session

Now that we have all the arguments set, we can run the session:

&self.session.run(&mut args)?;

Uh oh! We get a compiler error:

error[E0596]: cannot borrow `self.session` as mutable, as it is behind a `&` reference
  --> src/mtcnn.rs:68:10
   |
36 |     pub fn run(&self, img: &DynamicImage) -> Result<DynamicImage, Box<dyn Error>> {
   |                ----- help: consider changing this to be a mutable reference: `&mut self`
...
68 |         &self.session.run(&mut args)?;
   |          ^^^^^^^^^^^^ `self` is a `&` reference, so the data it refers to cannot be borrowed as mutable

It turns out that the Session::run() method takes &mut self. What can we do about that?

  • Make our run method take &mut self
  • Do some tricky interior mutability
  • Submit an issue to the tensorflow-rust crate asking whether Session really needs &mut self

We chose the third option!
Update your Cargo.toml to point at git instead of the crates.io version:

tensorflow = { git = "https://github.com/tensorflow/rust"}

2.4 Getting the Bounding Boxes (Face Locations)

This hasn't changed from our earlier main function. We take the results and put them into our BBox struct:

//Our bounding box extents
let bbox_res: Tensor<f32> = args.fetch(bbox)?;
//Our facial probability
let prob_res: Tensor<f32> = args.fetch(prob)?;

//Let's store the results as a Vec<BBox>
let mut bboxes = Vec::new();

let mut i = 0;
let mut j = 0;

//While we have responses, iterate through
while i < bbox_res.len() {
    //Add in the 4 floats from the `bbox_res` array.
    //Notice the y1, x1, etc.. is ordered differently to our struct definition.
    bboxes.push(BBox {
        y1: bbox_res[i],
        x1: bbox_res[i + 1],
        y2: bbox_res[i + 2],
        x2: bbox_res[i + 3],
        prob: prob_res[j], // Add in the facial probability
    });

    //Step `i` ahead by 4.
    i += 4;
    //Step `j` ahead by 1.
    j += 1;
}

debug!("BBox Length: {}, BBoxes:{:#?}", bboxes.len(), bboxes);

Ok(bboxes)

So far, our run method is complete.

2.5 JSON Format of the BBox Struct

We intend to respond with JSON representing the BBox struct, so let's derive Serialize from serde_derive:

use serde_derive::Serialize;

#[derive(Copy, Clone, Debug, Serialize)]
pub struct BBox {
    pub x1: f32,
    pub y1: f32,
    pub x2: f32,
    pub y2: f32,
    pub prob: f32,
}
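
Just to illustrate what the derive gives us, here is a small hedged check. It assumes serde_json is added as a (dev-)dependency; it is not part of the article's code:

#[cfg(test)]
mod tests {
    use super::BBox;

    #[test]
    fn bbox_serializes_to_json() {
        let bbox = BBox { x1: 471.0, y1: 287.0, x2: 495.0, y2: 317.0, prob: 0.99 };

        //Fields are written out in declaration order,
        //producing something like {"x1":471.0,"y1":287.0,...,"prob":0.99}
        let json = serde_json::to_string(&bbox).unwrap();
        assert!(json.starts_with("{\"x1\":"));
    }
}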

2.6 Drawing the Output Image

We'll add a function that takes an image and a list of bounding boxes and returns the output image:

pub fn overlay(img: &DynamicImage, bboxes: &Vec<BBox>) -> DynamicImage

Not much has changed here; we just return the image instead of saving it to a file:

//Let's clone the input image
let mut output_image = img.clone();

//Iterate through all bounding boxes
for bbox in bboxes {
    //Create a `Rect` from the bounding box.
    let rect = Rect::at(bbox.x1 as i32, bbox.y1 as i32)
        .of_size((bbox.x2 - bbox.x1) as u32, (bbox.y2 - bbox.y1) as u32);

    //Draw a green line around the bounding box
    draw_hollow_rect_mut(&mut output_image, rect, LINE_COLOUR);
}

output_image

OK, our Mtcnn struct and its methods are complete! Could we go further? Absolutely, but for now I think this is all we need. We've encapsulated the behaviour and created a couple of very useful functions.
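
As a quick sanity check before wiring this into Actix, the refactored struct can reproduce part 1's behaviour in a handful of lines. This is only a sketch: detect_to_file is a hypothetical helper, and the crate::mtcnn path assumes the module layout used in this article:

use std::error::Error;

use crate::mtcnn::{overlay, Mtcnn};

//Hypothetical helper, not part of the article's code
fn detect_to_file(input: &str, output: &str) -> Result<(), Box<dyn Error>> {
    let mtcnn = Mtcnn::new()?;
    let img = image::open(input)?;

    //Find the faces, then draw their bounding boxes onto a copy of the image
    let bboxes = mtcnn.run(&img)?;
    let out = overlay(&img, &bboxes);

    out.save(output)?;
    Ok(())
}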

3. The New main Method

Instead of running as a command-line program, we're now running a self-hosted web application. Since we no longer have input and output files, the arguments the application takes need to change.
I think the only argument we need initially is the listen address, and even then we should fall back to a sensible default. So let's keep this very small with the help of structopt:

#[derive(StructOpt)]
struct Opt {
    #[structopt(
        short = "l",
        long = "listen",
        help = "Listen Address",
        default_value = "127.0.0.1:8000"
    )]
    listen: String,
}

3.1 Logging

Actix Web uses the log crate to surface errors and debug messages, so let's use log instead of println!. I like pretty_env_logger because it prints the different levels in different colours and gives us useful timestamps.
pretty_env_logger is still configured through environment variables, so let's set a default for RUST_LOG and start the logger:

//Set the `RUST_LOG` var if none is provided
if env::var("RUST_LOG").is_err() {
    env::set_var("RUST_LOG", "mtcnn=DEBUG,actix_web=DEBUG");
}

//Create a timestamped logger
pretty_env_logger::init_timed();

This sets up DEBUG level logs for our app and Actix web, but allows us to change the log level through environment variables.

4. Actix and State

We need to hand some state to Actix: our Mtcnn struct, so that handlers can call its run method. There are a number of ways to provide state to Actix, but the simplest is the App::data method. Since we're stepping into a multithreaded world, we'll have to think about Send and Sync.

Okay, so how do we share data between threads? As a first step, let's look at std::sync. Since we know mtcnn's run function doesn't need a mutable reference, only a shared &self reference, we can simply wrap it in an Arc. If we did need a mutable reference we'd also want a Mutex, but that can be avoided by using the master branch of tensorflow-rust.
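
For contrast, if run did still need &mut self, the shared state would have to look something like this hypothetical sketch (not what we end up doing), and every request would serialize on the lock:

use std::sync::{Arc, Mutex};

//Hypothetical: only needed if Mtcnn::run took `&mut self`
let mtcnn = Arc::new(Mutex::new(Mtcnn::new()?));

//Handlers would then have to lock before running:
//let bboxes = mtcnn.lock().unwrap().run(&img);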

So let’s create an Arc:

let mtcnn = Arc::new(Mtcnn::new()?);

Now you can instantiate the service:

HttpServer::new(move || {
    App::new()
        //Add in our mtcnn struct, we clone the reference for each worker thread
        .data(mtcnn.clone())
        //Add in a logger to see the requests coming through
        .wrap(middleware::Logger::default())
        // Add in some routes here
        .service(
            ...
        )
})
.bind(&opt.listen)? // Use the listener from the command arguments
.run()

To summarize what we've accomplished (a fuller sketch of main follows the list):

  • First, build an HttpServer
  • It takes a closure that returns an App; one App is instantiated per server worker thread
  • Add the Arc<Mtcnn> with the data method, cloning the reference for each worker
  • Add a logging middleware
  • Set up some routes with the service method
  • Bind to the listen address and run
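
Putting those steps together, a minimal main might look something like the following sketch. It is only indicative: the exact imports depend on your module layout (the Mtcnn struct is assumed to live in a mtcnn module), and the routes are added in the sections that follow:

mod mtcnn;

use std::{env, error::Error, sync::Arc};

use actix_web::{middleware, App, HttpServer};
use structopt::StructOpt;

use crate::mtcnn::Mtcnn;

fn main() -> Result<(), Box<dyn Error>> {
    //Parse the listen address from the command line
    let opt = Opt::from_args();

    //Default the log level if the caller hasn't set one
    if env::var("RUST_LOG").is_err() {
        env::set_var("RUST_LOG", "mtcnn=DEBUG,actix_web=DEBUG");
    }
    pretty_env_logger::init_timed();

    //One Mtcnn shared between all worker threads
    let mtcnn = Arc::new(Mtcnn::new()?);

    HttpServer::new(move || {
        App::new()
            .data(mtcnn.clone())
            .wrap(middleware::Logger::default())
            //Routes are registered here in the following sections, e.g.:
            //.service(web::resource("/api/v1/bboxes").to_async(return_bboxes))
    })
    .bind(&opt.listen)?
    .run()?;

    Ok(())
}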

5. Processing Requests

Actix Web is an asynchronous framework built on tokio, but our run method is synchronous and takes a while to complete. In other words, it blocks. We can mix synchronous and asynchronous code, although it is, of course, a little cumbersome.

5.1 Handler Definition and Extractors

Actix 1.0 makes extensive use of extractors, which give handler definitions a rather different shape. You specify what you want the handler to receive, and Actix wires it all up for you. Note that this means some mistakes only show up at runtime: I once used the wrong type in a web::Data parameter and didn't find out until the request failed.

So what do we need to extract from our request? The bytes of the request body, and our mtcnn:

fn handle_request(
    stream: web::Payload,
    mtcnn: web::Data<Arc<Mtcnn>>,
) -> impl Future<Item = HttpResponse, Error = ActixError> {

    ...

}

We'll be using the web::Data<Arc<Mtcnn>> type a lot, so let's create a type alias for it:

type WebMtcnn = web::Data<Arc<Mtcnn>>;

6. Getting the Image from the Payload

Note: the payload here is the body of the HTTP request, i.e. everything that follows the headers.

We need a way to retrieve the image from the payload and return a Future. The web::Payload struct implements Stream with Item set to Bytes.

Getting individual chunks of bytes from the stream isn't much use on its own; we want the whole body so we can decode the image! So let's convert the Stream into a Future, merging all the chunks of bytes we receive into one big bucket of bytes. That sounds complicated, but fortunately Stream has a method for exactly this: concat2.

concat2 is a very powerful combinator that collects the result of each Stream poll into a single collection, as long as the item implements Extend (and a couple of other traits), which Bytes does.

So like this:

stream.concat2().and_then(....)

6.1 Image Decoding and web::block

The second thing we need to deal with is that decoding the image blocks the thread until it's done. For a huge image that could take many milliseconds! So we want to make sure the event loop isn't blocked while that happens. Fortunately, Actix Web has a way to wrap blocking code up as a future:

stream.concat2().and_then(move |bytes| {
    web::block(move || {
        image::load_from_memory(&bytes)
    })
})

We take the stream, convert it into a future yielding the bytes, then use web::block to decode the bytes into an image on a background thread and return the result. The load_from_memory function returns a Result, which means we can use it as the closure's return value.

6.2 Reconciling Error Types

So our Item goes from Bytes to DynamicImage, but we haven't dealt with the error types yet, and this won't compile. What error type should we use? Let's use actix_web::Error, aliased as ActixError:

use actix_web::Error as ActixError;

fn get_image(stream: web::Payload) -> impl Future<Item = DynamicImage, Error = ActixError> {
    stream.concat2().and_then(move |bytes| {
        web::block(move || {
            image::load_from_memory(&bytes)
        })
    })
}

Well, when we tried to compile, there was an error:

error[E0271]: type mismatch resolving `<impl futures::future::Future as futures::future::IntoFuture>::Error == actix_http::error::PayloadError`
  --> src/main.rs:67:22
   |
67 |     stream.concat2().and_then(move |bytes| {
   |                      ^^^^^^^^ expected enum `actix_threadpool::BlockingError`, found enum `actix_http::error::PayloadError`
   |
   = note: expected type `actix_threadpool::BlockingError<image::image::ImageError>`
              found type `actix_http::error::PayloadError`
              
(There are a few more errors not shown here.)

When you combine streams, map them into futures, and try to get output from these combinators, you're really dealing with two types: Item and Error.
Juggling multiple error types can make the code ugly. Unlike with Result, you can't just use the question mark to convert into the right error automatically. When ops::Try and the async/await syntax stabilize, things may get easier, but for now we have to find ways to deal with these error types ourselves.

We can use the from_err() method, which does essentially the same job as the question mark (?), except that it works on futures. We have two futures in play: the bytes from the stream, and the image from the blocking closure. And we have three error types: the payload error, the image load_from_memory error, and the blocking error:

fn get_image(stream: web::Payload)
  -> impl Future<Item = DynamicImage, Error = ActixError> {
    stream.concat2().from_err().and_then(move |bytes| {
        web::block(move || {
            image::load_from_memory(&bytes)
        }).from_err()
    })
}

7. Getting the Bounding Boxes from the Image

At its core, we just need to run:

mtcnn.run(&img)

But we want to run it in the thread pool:

web::block(|| mtcnn.run(&img))

Let's work out the function signature. At a minimum we need the image and our Mtcnn struct, and we want to return a Vec of BBox. We'll keep the error type consistent, so we'll use ActixError.

The function is declared as follows:

fn get_bboxes(img: DynamicImage, mtcnn: WebMtcnn) 
  -> impl Future<Item = Vec<BBox>, Error = ActixError> 

We need from_err() on web::block to convert the error type, and move to hand the image over to the closure:

fn get_bboxes(img: DynamicImage, mtcnn: WebMtcnn) -> impl Future<Item = Vec<BBox>, Error = ActixError> {
    web::block(move || mtcnn.run(&img)).from_err()
}

But compilation errors still occur:

error[E0277]: `*mut tensorflow_sys::TF_Status` cannot be sent between threads safely
  --> src/main.rs:75:5
   |
75 |     web::block(move || mtcnn.run(&img)).from_err()
   |     ^^^^^^^^^^ `*mut tensorflow_sys::TF_Status` cannot be sent between threads safely
   |
   = help: within `tensorflow::Status`, the trait `std::marker::Send` is not implemented for `*mut tensorflow_sys::TF_Status`
   = note: required because it appears within the type `tensorflow::Status`
   = note: required by `actix_web::web::block`

tensorflow::Status, our error type, cannot be sent between threads.

The quick fix is to convert the error into a String:

fn get_bboxes(img: DynamicImage, mtcnn: WebMtcnn) -> impl Future<Item = Vec<BBox>, Error = ActixError> {
    web::block(move || mtcnn.run(&img).map_err(|e| e.to_string())).from_err()
}

Because String implements Send, it allows Result to be sent across threads.

8. Returning the Bounding Boxes as JSON

Okay, we have two functions: one to get the image out of the request, and one to get the bounding boxes. Now let's return a JSON HttpResponse:

fn return_bboxes(
    stream: web::Payload,
    mtcnn: WebMtcnn,
) -> impl Future<Item = HttpResponse, Error = ActixError> {
    // Get the image from the input stream
    get_image(stream) 
        // Get the bounding boxes from the image
        .and_then(move |img| get_bboxes(img, mtcnn)) 
        // Map the bounding boxes to a json HttpResponse
        .map(|bboxes| HttpResponse::Ok().json(bboxes))
}

Next, register the route in our App:

HttpServer::new(move || {
    App::new()
        .data(mtcnn.clone()) 
        .wrap(middleware::Logger::default()) 
        // our new API service
        .service(web::resource("/api/v1/bboxes").to_async(return_bboxes))
})
.bind(&opt.listen)?
.run()

Start it up and submit a request with curl:

$ curl --data-binary @rustfest.jpg  http://localhost:8000/api/v1/bboxes

[{"x1":471.4591,"y1":287.59888,"x2":495.3053,"y2":317.25327,"prob":0.9999908}....

Using jmespath, we can count the 120 faces found:

$ curl -s --data-binary @rustfest.jpg  http://localhost:8000/api/v1/bboxes | jp "length(@)"
120

9. Returning the Overlay Image

The other API call we want returns the image with the bounding boxes drawn on top. This isn't a big stretch from what we already have, but drawing the boxes on the image is a blocking operation, so we'll send it off to the thread pool as well.
Let's wrap the overlay function and turn it into a future:

fn get_overlay(img: DynamicImage, bboxes: Vec<BBox>)
   -> impl Future<Item = Vec<u8>, Error = ActixError> {
    web::block(move || {
        let output_img = overlay(&img, &bboxes);
        
        ...

    }).from_err()
}

We want to return a Vec of u8 bytes so we can use it as the response body. So we allocate a buffer and write the image into it in JPEG format:

let mut buffer = vec![];

output_img.write_to(&mut buffer, JPEG)?; // write out our buffer

Ok(buffer)

Let's try compiling what we have so far:

fn get_overlay(img: DynamicImage, bboxes: Vec<BBox>)
  -> impl Future<Item = Vec<u8>, Error = ActixError> {
    web::block(move || {
        let output_img = overlay(&img, &bboxes);

        let mut buffer = Vec::new();

        output_img.write_to(&mut buffer, JPEG)?; // write out our buffer

        Ok(buffer)
    }).from_err()
}

Not quite; we're missing a type annotation:

error[E0282]: type annotations needed
  --> src/main.rs:82:5
   |
82 |     web::block(move || {
   |     ^^^^^^^^^^ cannot infer type for `E`

Why is there a type problem here? It comes down to this line:

Ok(buffer) // What's the `Error` type here?

At the moment the only error in the closure comes from the write_to method, which is an ImageError. But nothing on this line pins down the error type; it could be anything.
I can think of three ways to deal with this:

Method 1: Annotate the types on web::block:

web::block::<_,_,ImageError>

It looks a bit messy, but it compiles.

Method 2: Use as to declare Result type:

Ok(buffer) as Result<_, ImageError>

Method 3: Use map to return a buffer when successful:

output_img.write_to(&mut buffer, JPEG).map(|_| buffer)

I think #2 is probably the simplest for readability. The web::block function takes three type parameters, which can be confusing when you first read the code. #3 works too, but I think it looks a little odd.

Ultimately, my choice is:

fn get_overlay(img: DynamicImage, bboxes: Vec<BBox>)
   -> impl Future<Item = Vec<u8>, Error = ActixError> {
    web::block(move || {
        let output_img = overlay(&img, &bboxes);

        let mut buffer = Vec::new();

        output_img.write_to(&mut buffer, JPEG)?;

        // Type annotations required for the `web::block`
        Ok(buffer) as Result<_, ImageError> 
    }).from_err()
}

9.1 API Call

Okay, we now have functions returning futures for the image, the bounding boxes, and the overlaid image. Let's stitch them together and return an HttpResponse:

fn return_overlay(
    stream: web::Payload,
    mtcnn: WebMtcnn,
) -> impl Future<Item = HttpResponse, Error = ActixError> {
    //... magic happens here
}

The first step is to get the image from the byte stream:

get_image(stream)

Then we get the bounding boxes:

get_image(stream).and_then(move |img| {
    get_bboxes(img, mtcnn)
})

9.2 Keeping Hold of the Image

Now we want to produce the overlaid image, but we have a problem: what happened to the image? get_bboxes takes ownership of the image, runs face detection on it, and returns a future of only the bounding boxes. There are a few options. We could clone the image before passing it to get_bboxes, but that means copying memory. We could wait for Pin and the async/await syntax to land, which might make this easier to handle.
Or we can adjust our get_bboxes method:

fn get_bboxes(
    img: DynamicImage,
    mtcnn: WebMtcnn,
) -> impl Future<Item = (DynamicImage, Vec<BBox>), Error = ActixError> {
    web::block(move || {
        mtcnn
            .run(&img)
            .map_err(|e| e.to_string())
            //Return both the image and the bounding boxes
            .map(|bboxes| (img, bboxes))
    })
    .from_err()
}

Remember to update the return_bboxes method as well:

fn return_bboxes(
    stream: web::Payload,
    mtcnn: WebMtcnn,
) -> impl Future<Item = HttpResponse, Error = ActixError> {
    get_image(stream)
        .and_then(move |img| get_bboxes(img, mtcnn))
        .map(|(_img, bboxes)| HttpResponse::Ok().json(bboxes))
}

9.3 Getting the Overlay

It would be great if Rust could splat tuples into function arguments. Unfortunately it can't, so we need a small closure:

//Create our image overlay
.and_then(|(img, bbox)| get_overlay(img, bbox))
.map(|buffer| {
    // Return an `HttpResponse` here
})

9.4 Creating the Response

Our HttpResponse needs the buffer packaged up as the body:

HttpResponse::with_body(StatusCode::OK, buffer.into())

And we set the Content-Type header to image/jpeg:

let mut response = HttpResponse::with_body(StatusCode::OK, buffer.into());

response
    .headers_mut()
    .insert(CONTENT_TYPE, HeaderValue::from_static("image/jpeg"));
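
As an aside, the same response can be built a little more compactly with the response builder; a possible alternative (the final code below keeps the with_body form from above):

//Alternative: set the Content-Type and the body through the builder in one go
let response = HttpResponse::Ok()
    .content_type("image/jpeg")
    .body(buffer);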

Here is the final implementation of the overlay handler:

fn return_overlay(
    stream: web::Payload,
    mtcnn: WebMtcnn,
) -> impl Future<Item = HttpResponse, Error = ActixError> {
    get_image(stream)
        .and_then(move |img| {
            get_bboxes(img, mtcnn)
        })
        .and_then(|(img, bbox)| get_overlay(img, bbox))
        .map(|buffer| {
            let mut response = HttpResponse::with_body(StatusCode::OK, buffer.into());
            response
                .headers_mut()
                .insert(CONTENT_TYPE, HeaderValue::from_static("image/jpeg"));
            response
        })
}

Register this route in the App:

HttpServer::new(move || {
    App::new()
        .data(mtcnn.clone()) //Add in our data handler
        //Add in a logger to see the requests coming through
        .wrap(middleware::Logger::default()) 
        //JSON bounding boxes
        .service(web::resource("/api/v1/bboxes").to_async(return_bboxes))
        //Image overlay
        .service(web::resource("/api/v1/overlay").to_async(return_overlay))
})
.bind(&opt.listen)?
.run()

Run:

$ curl --data-binary @rustfest.jpg  http://localhost:8000/api/v1/overlay > output.jpg

Result: the output image, with a bounding box drawn around every detected face.

10. Summary

We've gradually transformed a CLI application into an HTTP service and had a taste of asynchronous programming. As you can see, Actix Web is a very capable, general-purpose web framework. My interest in it comes from it having everything needed to build a web application: a rich set of components, thread pools, and good performance. Writing asynchronous code with Actix isn't always elegant yet, but the future looks promising, as many developers are working on exactly this problem.

If you are looking for more Actix examples, this sample repository is your best choice: https://github.com/actix/exam…

I look forward to seeing the future construction of the community!
