Exploration of serverless and microservices (I) – how to run a Spring Boot project on serverless

Time: 2022-1-14

Preface

As technology evolves, we have more and more options for implementing our business logic. Can serverless, as a cutting-edge technology, open up new possibilities for microservice architecture?

This article summarizes what I have learned recently from running a Spring Boot microservice project on serverless.

What is serverless

There is no need for me to explain what microservices and Spring Boot are.

What is serverless?

According to the CNCF definition, serverless refers to the concept of building and running applications that do not require server management.

Serverless does not mean computing without servers; it means that developers and companies can run their workloads without understanding or managing the underlying servers.

Generally speaking, serverless encapsulates the underlying computing resources; you only need to provide functions to run.

Another concept worth mentioning here is FaaS (Function as a Service). The logic we run on serverless is usually at function-level granularity.

Therefore, microservices that are split with reasonable granularity are a natural fit for serverless.

The value of serverless for microservices

  1. Each microservice API is called at a different frequency; serverless makes it possible to manage cost and elasticity precisely.
  2. There is no need to worry about scaling out the entire service when an API receives a burst of calls; serverless scales capacity up and down automatically.
  3. There is no need to operate and maintain the containers, servers, and load balancers behind each service.
  4. It shields you from the steep learning curve of container orchestration tools such as Kubernetes.
  5. The stateless nature of serverless also fits well with microservices exposing RESTful APIs.

Preliminary practice

First, you need to prepare a Spring Boot project; you can quickly create one on start.spring.io.

In terms of business development, serverless is no different from traditional microservice development, so I quickly wrote a todo back-end service that supports create, read, update, and delete.

The sample code is here.
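For illustration, a minimal sketch of such a todo controller might look like the following (the class name, endpoints, and in-memory storage are my own placeholders, not necessarily what the sample repository uses):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

import org.springframework.web.bind.annotation.*;

// Hypothetical todo service: ordinary Spring MVC CRUD, nothing serverless-specific.
@RestController
@RequestMapping("/todos")
public class TodoController {

    private final Map<Long, String> todos = new ConcurrentHashMap<>();
    private final AtomicLong idGenerator = new AtomicLong();

    @PostMapping
    public Long create(@RequestBody String content) {
        long id = idGenerator.incrementAndGet();
        todos.put(id, content);
        return id;
    }

    @GetMapping("/{id}")
    public String read(@PathVariable Long id) {
        return todos.get(id);
    }

    @PutMapping("/{id}")
    public void update(@PathVariable Long id, @RequestBody String content) {
        todos.put(id, content);
    }

    @DeleteMapping("/{id}")
    public void delete(@PathVariable Long id) {
        todos.remove(id);
    }
}
```

Exactly the same code would run on a virtual machine, in a container, or on a serverless platform; the differences appear only in how it is deployed and started.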

So where are the real differences in using serverless?

If you simply want to deploy a single service, there are two main differences:

  1. Deployment mode
  2. Startup mode

Deployment mode

Because we cannot touch the server, the deployment mode changes significantly.

Traditional microservices are usually deployed directly onto virtual machines, or scheduled as containers with Kubernetes.

The traditional deployment relationship is roughly shown in the figure below.
[Figure: traditional deployment relationship]

If serverless is used, our microservices usually need to be split at a finer granularity in order to fit the FaaS model.
Therefore, the relationship when deploying microservices on serverless is roughly as shown in the figure below.
[Figure: serverless deployment relationship]

With serverless, you only need to provide code, since the platform supplies the runtime environment. There are usually two ways to deploy microservices on serverless:

  1. Code package upload deployment
  2. Image deployment

The first method differs most from traditional deployment: you package and upload the code, and you need to specify an entry function or a listening port.
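For example, on AWS Lambda the entry function is a class that implements the RequestHandler interface; the class name and payload types below are placeholders, and real projects typically bridge to Spring Boot through an adapter such as Spring Cloud Function:

```java
import java.util.Map;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

// Hypothetical entry function for code-package deployment on AWS Lambda:
// the platform calls handleRequest directly instead of starting the app on a port.
public class TodoHandler implements RequestHandler<Map<String, Object>, String> {

    @Override
    public String handleRequest(Map<String, Object> event, Context context) {
        // Dispatch the incoming event to the todo business logic here.
        return "ok";
    }
}
```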

The second method is almost the same as the traditional one: push the built image to an image registry, then select that image when deploying on the serverless platform.

Startup mode

A serverless instance is created when it is needed and destroyed when it is idle, which is what makes serverless pay-per-use billing possible.

Therefore, there is a cold-start process the first time a serverless function is called. A cold start means the platform has to allocate computing resources and then load and start the code, so the cold-start time varies with the runtime environment and the code itself.

Java, as a statically typed language, has long been criticized for its startup speed, and Spring's startup time is slower still, as everyone knows. The combination of Java and Spring therefore produces sloth-like startup, which can mean an extremely long wait on the first call to the service.

However, don't worry: Spring provides two solutions for shortening startup time.

  1. One is Spring Fu
  2. The other is Spring Native

Spring Fu

Spring Fu is an incubator for JaFu (Java DSL) and KoFu (Kotlin DSL), which configure Spring Boot explicitly with code in a declarative way, with great discoverability thanks to auto-completion. It offers fast startup (about 40% faster than regular auto-configuration on a minimal Spring MVC application) and low memory consumption, and it is a good fit for GraalVM native images because of its (almost) reflection-free approach. Compiled with the GraalVM compiler, application startup time can drop sharply, to roughly 1% of the original.
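A rough sketch of what this looks like with the JaFu DSL, based on the Spring Fu samples (the exact API may differ between versions), is shown below:

```java
import org.springframework.fu.jafu.JafuApplication;

import static org.springframework.fu.jafu.Jafu.webApplication;
import static org.springframework.fu.jafu.webmvc.WebMvcServerDsl.webMvc;
import static org.springframework.web.servlet.function.ServerResponse.ok;

// All configuration is expressed as code: no @SpringBootApplication,
// no component scanning, no annotation-driven auto-configuration.
public class Application {

    public static final JafuApplication app = webApplication(a -> a
            .enable(webMvc(s -> s
                    .port(8080)
                    .router(r -> r.GET("/hello", request -> ok().body("Hello"))))));

    public static void main(String[] args) {
        app.run(args);
    }
}
```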

However, Spring Fu is still at a very early stage and has many rough edges. In addition, adopting Spring Fu requires a lot of code changes, because it does away with all annotations, so I did not use it this time.

Spring Native

Spring Native compiles Spring applications into native executables with the GraalVM native-image compiler, providing a native deployment option packaged in lightweight containers. The goal of Spring Native is to support Spring Boot applications on this new platform with almost no code changes.

Therefore, I chose Spring Native: the code itself does not need to change; you only add some plugins and dependencies to build a native image.

Native images have several benefits:

  1. Unused code is removed at build time
  2. The classpath is determined at build time
  3. No lazy class loading: everything in the executable is loaded into memory at startup
  4. Some code runs at build time

These characteristics greatly speed up program startup.

I will explain how to use it in the next article. For the details, see this official tutorial, which I also followed.

Here are my comparison test results.

I deployed and tested the compiled images in three environments: locally, on Tencent Cloud Serverless Cloud Functions, and on AWS Lambda.

| Specification | Spring Boot cold-start duration | Spring Native cold-start duration |
| --- | --- | --- |
| Local Mac, 16 GB memory | 1 second | 79 ms |
| Tencent Cloud serverless, 256 MB memory | 13 seconds | 300 ms |
| AWS serverless, 256 MB memory | 21 seconds | 1 second |

The results show that Spring Native greatly improves startup speed, and raising the serverless instance specification improves it further.

If the serverless cold start is kept within one second, it is acceptable for most businesses. Moreover, only the first request incurs a cold start; subsequent requests have the same response time as an ordinary microservice.

In addition, the major serverless platforms now support provisioned (pre-warmed) instances, that is, creating instances ahead of traffic to reduce cold-start time and deliver better response times for the business.

Summary

As an advanced technology, serverless brings us many benefits:

  1. Automatic scaling for elasticity and concurrency
  2. Fine-grained resource allocation
  3. Loose coupling
  4. Freedom from operations and maintenance

But serverless is not perfect. When we try to apply it to microservices, it still has many problems to solve.

  1. Difficult to monitor and debug

    1. This is currently a widely acknowledged pain point.
  2. There may be more cold starts

    1. Splitting microservices down to function granularity also spreads out the calls, so each function is invoked less frequently and cold starts happen more often.
  3. Interaction between functions becomes more complex

    1. As the granularity gets finer, the already complex interactions in large microservice projects become even more complex.

To sum up, serverless still has a long way to go before it can completely replace traditional virtual machines for microservices.

Next steps

I will continue to explore the practice of serverless and microservices.

In the following articles, I will discuss these topics:

  • Inter-service calls in serverless
  • Database access in serverless
  • Service registration and discovery in serverless
  • Service circuit breaking and degradation in serverless
  • Service splitting in serverless