In 2021, can serverless replace microservices?

Source | Serverless official account
Translated by | Orange J
Author | Mariliis Retter

“Can serverless replace microservices?” is a hot topic in Zhihu’s serverless category.

Some argue that microservices and serverless are different things: although you can build microservices on a serverless back end, there is no direct path from one to the other. Others argue that because the functions in a serverless application can be seen as smaller, atomized services, serverless naturally embodies several microservice concepts, making the two a perfect match. In 2021, will serverless finally replace microservices? What does the path from microservices to serverless look like? This article compares the strengths and weaknesses of microservices in a serverless environment.

Conceptually, microservices fit the functional structure of serverless well. Serverless makes it easy to deploy different services and isolate them at runtime; on the storage side, services like DynamoDB let each microservice own an independent database and scale it independently.

Before diving into the details, though, don’t rush to pick a side. Consider whether microservices actually suit your team’s situation, rather than choosing them because “this is the trend.”

Advantages of microservices in a serverless environment

Selective scalability and concurrency

Serverless makes it easy to manage concurrency and scalability, and a microservice architecture lets us take full advantage of that: each microservice can set its own concurrency and scalability levels according to its needs. This is valuable from several perspectives: it reduces exposure to DDoS attacks, lowers the financial risk of runaway cloud bills, allows better allocation of resources, and so on.

Fine-grained resource allocation

Because scalability and concurrency can be set independently, you can control resource allocation priorities at a fine-grained level. With Lambda functions, each microservice can have a different memory allocation to match its needs. For example, customer-facing services can get a higher memory allocation, since that helps speed up execution, while latency-insensitive internal services can run with cost-optimized memory settings.
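As a minimal sketch of this idea, per-service settings can be kept in one place and turned into the keyword arguments that the boto3 Lambda client's `update_function_configuration` and `put_function_concurrency` calls expect. The service names and numbers below are illustrative assumptions, not recommendations:

```python
# Hypothetical per-service profiles: a customer-facing service gets more
# memory and concurrency; a latency-insensitive internal job gets less.
SERVICE_PROFILES = {
    "checkout-api":   {"memory_mb": 1024, "reserved_concurrency": 100},
    "nightly-report": {"memory_mb": 256,  "reserved_concurrency": 5},
}

def memory_kwargs(service: str) -> dict:
    """Kwargs for boto3's lambda client: update_function_configuration(**kwargs)."""
    return {"FunctionName": service,
            "MemorySize": SERVICE_PROFILES[service]["memory_mb"]}

def concurrency_kwargs(service: str) -> dict:
    """Kwargs for boto3's lambda client: put_function_concurrency(**kwargs)."""
    return {"FunctionName": service,
            "ReservedConcurrentExecutions": SERVICE_PROFILES[service]["reserved_concurrency"]}
```

Keeping the profiles in one data structure makes the resource-allocation decisions reviewable in a single diff, independent of deployment tooling.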

The same applies to storage mechanisms. For example, DynamoDB tables or Aurora Serverless databases can be provisioned with different capacity levels according to the needs of the specific (micro)service they serve.

Loose coupling

This is a general property of microservices rather than something unique to serverless, but it makes it easier to decouple components with different responsibilities in the system.

Support for multiple runtimes

The ease of configuring, deploying, and running serverless functions makes systems built on multiple runtimes practical.

Although the Node.js JavaScript runtime is one of the most popular technologies for back-end web applications, it cannot be the best tool for every task. For data-intensive tasks, predictive analytics, and any kind of machine learning, you might choose Python as the programming language; for large projects, specialized platforms such as SageMaker may be a better fit.

With a serverless infrastructure, you can choose Node.js for regular back-end APIs and Python for data-intensive work without extra operational effort. Obviously, this may add work for your team in code maintenance and team management.

Independence of development teams

Different developers or teams can work on their own microservices, fixing bugs and extending features without interfering with one another. Tools such as AWS SAM and the Serverless Framework give developers more operational independence, and the emergence of the AWS CDK gives development teams even higher independence without compromising quality or operational standards.

Disadvantages of microservices in serverless

Difficult to monitor and debug

Among the many challenges serverless brings, monitoring and debugging may be the hardest, because compute and storage are scattered across many different functions and databases, not to mention queues, caches, and other services. These problems come with microservices themselves, and specialized platforms already exist to address all of them; whether a development team should adopt such a platform should be weighed against its cost.

May experience more cold starts

A cold start occurs when a FaaS platform (such as Lambda) needs to start a new virtual machine to run your function’s code. If your function’s workload is latency-sensitive, you are likely to run into problems, because a cold start adds anywhere from a few hundred milliseconds to a few seconds to the total startup time. After a request completes, the FaaS platform usually keeps the micro-VM idle for a while, waiting for the next request, and shuts it down after roughly 10–60 minutes (yes, it varies a lot). The result: the more frequently your functions are executed, the more likely it is that a micro-VM is already up and running for incoming requests, avoiding cold starts.
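The trade-off above can be sketched with a toy simulation. Assuming exponentially distributed gaps between invocations and a fixed keep-alive window (both deliberate simplifications; real platforms vary), the fraction of invocations that hit a cold start falls sharply as invocation frequency rises:

```python
import random

def cold_start_fraction(mean_gap_s: float, keep_alive_s: float = 600.0,
                        n: int = 10_000, seed: int = 1) -> float:
    """Estimate the fraction of invocations that hit a cold start.

    Assumes Poisson arrivals (exponential inter-invocation gaps) and a fixed
    keep-alive window of keep_alive_s seconds -- illustrative assumptions,
    not measured platform behavior.
    """
    rng = random.Random(seed)
    cold = sum(1 for _ in range(n)
               if rng.expovariate(1.0 / mean_gap_s) > keep_alive_s)
    return cold / n

# A function invoked about once a minute almost never goes cold under a
# 10-minute keep-alive, while one invoked about every 30 minutes goes cold
# most of the time.
busy = cold_start_fraction(60)     # frequent invocations
quiet = cold_start_fraction(1800)  # rare invocations
```

This is exactly why splitting traffic across many small functions, as discussed next, can increase the cold-start rate of each one.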

When an application is distributed across hundreds or thousands of microservices, invocations may be spread out over the services, lowering the call frequency of each individual function. Note the wording: invocations may be scattered. Depending on your business logic and how your system behaves, this negative impact may be minimal or negligible.

Other shortcomings

The microservice concept itself has other inherent drawbacks. These are not intrinsically linked to serverless; nevertheless, every team adopting this style of architecture should take care to reduce the associated risks and costs:

  • Determining service boundaries is not easy and can lead to architectural problems.
  • A wider attack surface.
  • Overhead of service orchestration.
  • Synchronizing compute and storage (when needed) in a performant, scalable way is not easy.

Challenges and best practices of microservices in serverless

How big should a microservice be in serverless?

When people first learn about serverless, the concept of “function as a service” (FaaS) is easily confused with function statements in a programming language. We are currently at a point where no perfect line can be drawn, but experience shows that very small serverless functions are not a good idea.

When you decide to split a (micro)service into independent functions, you take on the hardest problems serverless has. So, as a reminder: keep related logic together in a single function whenever possible.
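A minimal sketch of that advice: one Lambda-style handler that routes the related operations of a single (micro)service internally, instead of deploying each operation as its own tiny function. The operation names and payload shape are illustrative assumptions:

```python
# One function per service, not one function per operation: related logic
# stays together behind a single entry point.

def _create(payload: dict) -> dict:
    return {"status": "created", "item": payload}

def _get(payload: dict) -> dict:
    return {"status": "ok", "id": payload.get("id")}

HANDLERS = {"create": _create, "get": _get}

def handler(event: dict, context=None) -> dict:
    """Lambda-style entry point routing related operations of one service."""
    op = event.get("operation")
    if op not in HANDLERS:
        return {"status": "error", "message": f"unknown operation: {op}"}
    return HANDLERS[op](event.get("payload", {}))
```

Splitting `_create` and `_get` into separate deployed functions would only be worth it if one of the criteria below applies.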

Of course, the decision-making process should also weigh the advantages of having an independent microservice.

You can frame the decision like this: “If I split up this microservice:

  • Will it allow different teams to work independently?
  • Will it benefit from fine-grained resource allocation or selective scalability?”

If not, you should consider bundling this service with another service that needs similar resources, covers a related context, or executes related workloads.

Loosely coupled architecture

There are many ways to orchestrate microservices by composing serverless functions.

When synchronous communication is needed, functions can be invoked directly (i.e., the AWS Lambda RequestResponse invocation type), but this leads to a tightly coupled architecture. A better choice is to use Lambda Layers or an HTTP API, so that future modifications or migrations of a service will not affect its clients.

For asynchronous communication models, we have several options, such as queues (SQS), topic notifications (SNS), EventBridge, or DynamoDB Streams.
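Those managed services need cloud credentials, but the loosely coupled pattern they enable can be sketched in-process: publishers emit events to a topic without knowing who consumes them. This is a toy stand-in for SNS/EventBridge-style fan-out, not their API:

```python
from collections import defaultdict
from typing import Callable, Dict, List

class EventBus:
    """In-memory topic/subscriber bus illustrating publish/subscribe decoupling."""

    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        """Register a handler for a topic; publishers never see handlers."""
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        """Deliver an event to every subscriber of the topic."""
        for handler in self._subscribers[topic]:
            handler(event)
```

With a real broker, the publishing service would keep working unchanged as consumers are added, removed, or migrated, which is the point of the loose coupling described above.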

Cross-component isolation

Ideally, a microservice should not expose implementation details to its consumers. Serverless platforms like Lambda provide an API to invoke functions, but using it directly is itself a disclosure of implementation details. Ideally, we add an implementation-agnostic HTTP API layer on top of the functions to make them truly isolated.

The importance of concurrency limits and throttling policies

To mitigate DDoS attacks, when using AWS API Gateway and similar services, set a separate concurrency limit and throttling policy for each public-facing endpoint. These services generally set a global concurrency quota for an entire region of the cloud platform. Without per-endpoint limits, an attacker only needs to target a single endpoint to exhaust your quota and paralyze your entire system in that region.
