AWS Lambda takes off rapidly with the help of the Serverless Framework



Microservice architecture differs from the traditional monolithic application scheme: the single application is split into multiple core functions. Each function is called a service and can be built and deployed separately, which means the services do not affect each other while they run.

When this design concept is taken further, it becomes serverless. "Serverless" may sound ridiculous; in fact, servers still exist, but we no longer need to pay attention to or provision them. This lets developers focus more on implementing functionality.

The typical example of serverless is AWS Lambda.

AWS Lambda

If you are a Java developer, you have probably heard of or used lambdas in JDK 1.8, but Lambda in AWS has nothing to do with lambdas in the JDK.

Here, AWS Lambda is a computing service that can run code without provisioning or managing servers. With Lambda, we can run code for almost any type of application or back-end service with zero administration. All we have to do is upload the corresponding code; Lambda handles all the work needed to run and scale it.

To put it bluntly:

A Lambda function is like a method that implements one piece of functionality (in practice, we usually keep each Lambda function as single-purpose as possible), and we expose this method as a service to be called.

You may have a question here: since a Lambda function is a "method", who calls it? And how?

How to call Lambda

To answer that question, we need to log in to AWS, open the Lambda service, and create a Lambda function (hello lambda).

Since a Lambda function is a method, you need to select the corresponding runtime, as shown in the figure below. There is always one that suits you; Node.js is used here.

Click the Create function button in the lower right corner to enter the configuration page.

You can configure the trigger that starts the Lambda function at the position underlined in red in the figure above. Click Add trigger.

As the figure shows, many built-in AWS services can trigger Lambda:

  • API Gateway (it will be used in the demo later; it is also the most common way to call Lambda)
  • ALB – Application Load Balancer
  • CloudFront
  • DynamoDB
  • S3
  • SNS – Simple Notification Service
  • SQS – Simple Queue Service

The above are just some of AWS's built-in services. If you scroll down, you will find that you can also configure many non-AWS event sources.

By now, you should have the answer to the question above. For the time being, you don't need any trigger; first, click the Test button in the upper right corner to test the Lambda function.

A simple Lambda function has been implemented. The response underlined in red just tells you that each request has a corresponding request ID, and the START/END markers let you quickly locate the log content (viewable through CloudWatch, which I won't expand on here).

You may already be thinking about how you could use AWS Lambda. In fact, there are many examples on the AWS official website.

Classic cases

For example, to adapt image display to multiple platforms, after an original image is uploaded to S3, a Lambda function resizes it into images suited to the different platforms' sizes.

Another example: using AWS Lambda and Amazon API Gateway to build a back end that verifies and processes API requests. When a user publishes a post, subscribers receive the corresponding notification.

Next, we use Lambda to implement the classic case of a distributed order service.

Order service demo

To enhance the user experience, improve program throughput, or decouple the architectural design, we usually use message-oriented middleware.

Suppose a common scenario: if a user chooses to have an invoice issued when placing an order, the invoice service needs to be called. Obviously, the invoice service is not on the critical path of the program, so in this scenario we can decouple via message middleware. There are two services:

  1. Order service
  2. Invoice Service

If Lambda is used to implement the two services, the overall design idea is as follows:

In reality, we cannot create our services by clicking buttons in the AWS console. In actual AWS development, we create the relevant AWS services by writing a CloudFormation template (hereinafter CFT, which is really just a definition in YAML or JSON format). For the demo shown in the figure above, there are several services we need to create:

  • Lambda * 2
  • API Gateway
  • SQS

If you write AWS-native CFT, there is still a lot to implement.

But… lazy programmers always bring surprises.

Serverless Framework

The pain of writing JDBC led to the emergence of various persistence-layer frameworks. Similarly, the pain of writing AWS-native CFT led to the Serverless Framework (hereinafter SF), which helps us define the related serverless components. (By the way, are you using GraphQL yet?)

SF not only simplifies writing AWS-native CFT, it also simplifies defining services across clouds. Like the facade in the design pattern, it builds a facade layer on top, hides the details of the different underlying services, and lowers the threshold for using multiple cloud providers together. The currently supported cloud services include the following.

We won't explain SF in depth for the moment; in our demo, we just use SF for the definitions.

Install serverless framework

If you have Node installed, you only need one npm command to install it globally:

npm install -g serverless

After installation, check the installed version to verify that it succeeded:

sls --version

Configure serverless framework

Because we want to use AWS Lambda, we need to do SF's basic configuration; at a minimum, SF must have permission to create AWS services. When you create an AWS user, you get an AK (access_key_id) and SK (secret_access_key), which are really just a form of user name and password.

Then add the configuration by the following command:

serverless config credentials --provider aws --key 1234 --secret 5678 --profile custom-profile
  • --provider the cloud service provider
  • --key your AK
  • --secret your SK
  • --profile if you have multiple accounts, add a profile name to distinguish them quickly

After running the above command, a file named credentials is created in the ~/.aws/ directory to store the above configuration, like this:
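With the example values above, the credentials file would contain roughly the following (the profile name and keys here are just the placeholders from the command):

```
[custom-profile]
aws_access_key_id = 1234
aws_secret_access_key = 5678
```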

By now, the preparatory work is complete; let's start writing our definition.

Create serverless application

Create a serverless application with the following command

sls create --template aws-nodejs --path ./demo --name lambda-sqs-lambda
  • --template specifies the template to create from
  • --path specifies the directory to create in
  • --name specifies the name of the service to create

After running the above command, enter the demo directory, which has the following structure and content:

➜  demo tree
├── handler.js
└── serverless.yml

0 directories, 2 files

Because we use Node.js to write the serverless application, also execute the following command in the demo directory to initialize it, since we will use two npm packages later:

npm init -y

The current structure looks like this (in fact, there is one more file, package.json):

➜  demo tree
├── handler.js
├── package.json
└── serverless.yml

0 directories, 3 files

So far, the preparatory work is ready; the next step is to write serverless.yml. The threshold is very low: just write YAML with the corresponding keys. Isn't that simple? Open the serverless.yml file and take a look. What do we have?

# Welcome to Serverless!
# This file is the main config file for your service.
# It's very minimal at this point and uses default values.
# You can always add more config options for more control.
# We've included some commented out config examples here.
# Just uncomment any of them to get that config option.
# For full config options, check the docs:
# Happy Coding!

service: lambda-sqs-lambda
# app and org for use with
#app: your-app-name
#org: your-org-name

# You can pin your service to only deploy with a specific Serverless version
# Check out our docs for more details
# frameworkVersion: "=X.X.X"

provider:
  name: aws
  runtime: nodejs12.x

# you can overwrite defaults here
#  stage: dev
#  region: us-east-1

# you can add statements to the Lambda function's IAM Role here
#  iamRoleStatements:
#    - Effect: "Allow"
#      Action:
#        - "s3:ListBucket"
#      Resource: { "Fn::Join" : ["", ["arn:aws:s3:::", { "Ref" : "ServerlessDeploymentBucket" } ] ]  }
#    - Effect: "Allow"
#      Action:
#        - "s3:PutObject"
#      Resource:
#        Fn::Join:
#          - ""
#          - - "arn:aws:s3:::"
#            - "Ref" : "ServerlessDeploymentBucket"
#            - "/*"

# you can define service wide environment variables here
#  environment:
#    variable1: value1

# you can add packaging information here
#  include:
#    - include-me.js
#    - include-me-dir/**
#  exclude:
#    - exclude-me.js
#    - exclude-me-dir/**

functions:
  hello:
    handler: handler.hello
#    The following are a few example events you can configure
#    NOTE: Please make sure to change your handler code to work with those events
#    Check the event documentation for details
#    events:
#      - http:
#          path: users/create
#          method: get
#      - websocket: $connect
#      - s3: ${env:BUCKET}
#      - schedule: rate(10 minutes)
#      - sns: greeter-topic
#      - stream: arn:aws:dynamodb:region:XXXXXX:table/foo/stream/1970-01-01T00:00:00.000
#      - alexaSkill: amzn1.ask.skill.xx-xx-xx-xx
#      - alexaSmartHome: amzn1.ask.skill.xx-xx-xx-xx
#      - iot:
#          sql: "SELECT * FROM 'some_topic'"
#      - cloudwatchEvent:
#          event:
#            source:
#              - "aws.ec2"
#            detail-type:
#              - "EC2 Instance State-change Notification"
#            detail:
#              state:
#                - pending
#      - cloudwatchLog: '/aws/lambda/hello'
#      - cognitoUserPool:
#          pool: MyUserPool
#          trigger: PreSignUp
#      - alb:
#          listenerArn: arn:aws:elasticloadbalancing:us-east-1:XXXXXX:listener/app/my-load-balancer/50dc6c495c0c9188/
#          priority: 1
#          conditions:
#            host:
#            path: /hello

#    Define function environment variables here
#    environment:
#      variable2: value2

# you can add CloudFormation resource templates here
#  Resources:
#    NewResource:
#      Type: AWS::S3::Bucket
#      Properties:
#        BucketName: my-new-bucket
#  Outputs:
#     NewOutput:
#       Description: "Description for the output"
#       Value: "Some output value"

At first glance you may feel dazzled, but this is actually a fairly complete set of Lambda configuration. We don't need such detailed content; the file simply serves as a reference.

Next, we define everything we need for the demo (key comments are written in the code):

service: lambda-sqs-lambda        # name of the service

provider:
  name: aws                       # the cloud service provider is AWS
  runtime: nodejs12.x             # version of the Node.js runtime
  region: ap-northeast-1          # ap-northeast-1 is the Tokyo region
  stage: dev                      # the deployment stage is dev
  iamRoleStatements:              # IAM role allowing the Lambda function to send messages to the queue
    - Effect: Allow
      Action:
        - sqs:SendMessage
      Resource:
        - Fn::GetAtt: [ receiverQueue, Arn ]

functions:                        # define the two Lambda functions
  order:
    handler: app/order.checkout   # entry point: the checkout method in app/order.js
    events:                       # trigger: API Gateway fires the function on POST /order
      - http:
          method: post
          path: order
  invoice:
    handler: app/invoice.generate # entry point: the generate method in app/invoice.js
    timeout: 30
    events:                       # trigger: SQS fires the function to consume messages when the queue has any
      - sqs:
          arn:
            Fn::GetAtt:
              - receiverQueue
              - Arn

resources:
  Resources:
    receiverQueue:                # define the SQS queue that the Lambda functions depend on
      Type: AWS::SQS::Queue
      Properties:
        QueueName: ${self:custom.conf.queueName}

# package:
#   exclude:
#     - node_modules/**

custom:
  conf: ${file(conf/config.json)} # pull in externally defined configuration variables

config.json only defines the name of the queue, just to illustrate the flexibility of configuration:

{
  "queueName": "receiverQueue"
}

Because we want to simulate order generation, we use a UUID as the order number; and because we need to call the AWS service API, we need the AWS SDK. So we install these two packages:

{
  "name": "lambda-sqs-lambda",
  "version": "1.0.0",
  "description": "demo for lambda",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "license": "MIT",
  "dependencies": {
    "uuid": "^8.1.0"
  },
  "devDependencies": {
    "aws-sdk": "^2.6.15"
  }
}

Next, we can write the code for the two Lambda functions.

Order Lambda Function

The order service is very simple: it receives an order request and quickly returns the successful order to the user, while sending the order-placed message to SQS for the downstream invoice service to issue an invoice.

'use strict';

const config = require('../conf/config.json')
const AWS = require('aws-sdk');
const sqs = new AWS.SQS();
const { v4: uuidv4 } = require('uuid');

module.exports.checkout = async (event, context) => {
    let statusCode = 200
    let message

    if (!event.body) {
        return {
            statusCode: 400,
            body: JSON.stringify({
                message: 'No order body was found',
            }),
        };
    }

    // The region and account ID can be parsed out of the function ARN
    const region = context.invokedFunctionArn.split(':')[3]
    const accountId = context.invokedFunctionArn.split(':')[4]
    const queueName = config['queueName']

    // Assemble the URL of the SQS queue
    const queueUrl = `https://sqs.${region}.amazonaws.com/${accountId}/${queueName}`
    const orderId = uuidv4()

    try {
        // Send the order message to the SQS queue
        await sqs.sendMessage({
            QueueUrl: queueUrl,
            MessageBody: event.body,
            MessageAttributes: {
                orderId: {
                    StringValue: orderId,
                    DataType: 'String',
                },
            },
        }).promise();

        message = 'Order message is placed in the Queue!';
    } catch (error) {
        message = error;
        statusCode = 500;
    }

    // Quickly return the order ID
    return {
        statusCode,
        body: JSON.stringify({
            message, orderId,
        }),
    };
};

Invoice Lambda Function

The logic of the invoice service is also very simple: it consumes messages from the specified SQS queue and sends the issued invoice to the email address in the customer's order information.

'use strict';

module.exports.generate = (event, context, callback) => {
    try {
        for (const record of event.Records) {
            const messageAttributes = record.messageAttributes;
            console.log('OrderId is  -->  ', messageAttributes.orderId.stringValue);
            console.log('Message Body -->  ', record.body);
            const reqBody = JSON.parse(record.body)
            // Sleep for 20 seconds to simulate the time-consuming invoice generation
            setTimeout(() => {
                // The order body is assumed to carry the customer's email address
                console.log("Receipt is generated and sent to :" + reqBody.email);
            }, 20000)
        }
    } catch (error) {
        callback(error);
    }
};

The code for this demo is now complete. From it you can see:

We paid no attention to the underlying details of Lambda or SQS; we wrote only the simple code logic and the wiring definitions between the services.

Finally, let’s take a look at the overall directory structure

├── app
│   ├── invoice.js
│   └── order.js
├── conf
│   └── config.json
├── package.json
└── serverless.yml

2 directories, 5 files

Publish the Lambda app

Before publishing, install the necessary packages (uuid and aws-sdk):

npm install

Publishing an application is very simple. You only need one command:

sls deploy -v

After running the above command, it takes a few dozen seconds; at the end of the build, our deployed service information is printed out.

The endpoints entry in the figure above is the API Gateway address we will visit later to trigger Lambda. Before calling it, let's go to the AWS console to look at the services we defined.

lambda functions


API Gateway


From the build information in the figure above, you should also see the name of an S3 bucket. We did not create S3; it is created automatically by SF to store the zipped Lambda package.


Call the API Gateway endpoint to test Lambda

Open the SQS service and you will find a message:

Next, let's look at the consumption by the invoice Lambda function. Open CloudWatch to view the log:

As the log shows, the program "spent" 20 seconds before printing the email-to-customer log line (actual email delivery could be implemented with the AWS SES email service).

So far, a complete demo has been finished; not much code was actually written, yet the whole chain has been wired together.

Delete service

Lambda charges per invocation. To avoid extra costs, services are usually destroyed after a demo. Destroying the newly created services with SF is also very simple: just execute this command in the directory containing serverless.yml:

sls remove

Summary and feeling

AWS Lambda is the typical example of serverless. With Lambda, finer-grained "services" can be realized, and development speeds up with no server setup. Lambda can also combine with many AWS services, for example receiving requests and passing computation results to downstream services. In addition, many third-party partners are joining Lambda's trigger ecosystem, giving Lambda more trigger possibilities. Combined with CI/CD, a fast functional loop can be achieved.

The AWS free tier is enough for you to play with Lambda.
