Enterprise RPC framework zrpc


go-zero, a popular open source project, is a full-featured microservice framework that integrates various engineering practices and covers both web and RPC protocols. Today, let's analyze its RPC part, zrpc.

zrpc is built on top of grpc, with built-in modules such as service registration, load balancing and interceptors, and includes microservice governance features such as adaptive load shedding, adaptive circuit breaking and rate limiting. It is a simple, easy-to-use enterprise RPC framework that can be used directly in production.

A first look at zrpc

zrpc supports both direct connection and etcd-based service discovery. We take etcd-based service discovery as an example to demonstrate the basic usage of zrpc:

Configuration

Create the hello.yaml configuration file as follows:

Name: hello.rpc           # service name
ListenOn:                 # service listening address
Etcd:
  Hosts:
    -                     # etcd service address
  Key: hello.rpc          # service registration key
Create proto file

Create the hello.proto file and generate the corresponding go code

syntax = "proto3";

package pb;

service Greeter {
  rpc SayHello (HelloRequest) returns (HelloReply) {}
}

message HelloRequest {
  string name = 1;
}

message HelloReply {
  string message = 1;
}
Generate go code

protoc --go_out=plugins=grpc:. hello.proto
Server side
package main

import (
	"context"
	"flag"
	"log"

	"github.com/tal-tech/go-zero/core/conf"
	"github.com/tal-tech/go-zero/zrpc"
	"google.golang.org/grpc"
	// the import of the generated pb package was elided in the original
)

type Config struct {
	zrpc.RpcServerConf
}

var cfgFile = flag.String("f", "./hello.yaml", "cfg file")

func main() {
	flag.Parse()

	var cfg Config
	conf.MustLoad(*cfgFile, &cfg)

	srv, err := zrpc.NewServer(cfg.RpcServerConf, func(s *grpc.Server) {
		pb.RegisterGreeterServer(s, &Hello{})
	})
	if err != nil {
		log.Fatal(err)
	}
	defer srv.Stop()
	srv.Start()
}

type Hello struct{}

func (h *Hello) SayHello(ctx context.Context, in *pb.HelloRequest) (*pb.HelloReply, error) {
	return &pb.HelloReply{Message: "hello " + in.Name}, nil
}
Client side
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/tal-tech/go-zero/core/discov"
	"github.com/tal-tech/go-zero/zrpc"
	// the import of the generated pb package was elided in the original
)

func main() {
	client := zrpc.MustNewClient(zrpc.RpcClientConf{
		Etcd: discov.EtcdConf{
			Hosts: []string{""}, // etcd address elided in the original
			Key:   "hello.rpc",
		},
	})

	conn := client.Conn()
	hello := pb.NewGreeterClient(conn)
	reply, err := hello.SayHello(context.Background(), &pb.HelloRequest{Name: "go-zero"})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(reply.Message)
}

Start the service and check whether the service is registered:

ETCDCTL_API=3 etcdctl get hello.rpc --prefix

The output shows that the service has been registered:


Run the client to see the output:

hello go-zero

This example demonstrates the basic usage of zrpc. As you can see, building an RPC service with zrpc is very simple and takes only a few lines of code. Let's continue to explore its internals.

How zrpc works

The following figure shows the architecture and main components of zrpc


zrpc is mainly composed of the following modules:

  • Discov: service discovery module, which implements service discovery based on etcd
  • Resolver: service registration module, which implements grpc's resolver.Builder interface and registers it with grpc
  • Interceptor: interceptor module, which intercepts requests and responses
  • Balancer: load balancing module, which implements the P2C load balancing algorithm and registers it with grpc
  • Client: zrpc client, responsible for initiating requests
  • Server: zrpc server, which is responsible for processing requests

This section introduced the main components of zrpc and the functions of each module. The resolver and balancer modules implement grpc's public interfaces to provide a custom resolver and balancer. The interceptor module is the functional core of zrpc: adaptive load shedding, adaptive circuit breaking and Prometheus service metric collection are all implemented there.

Interceptor module

grpc provides an interceptor mechanism, mainly used to perform additional processing before and after a request. Interceptors are divided into client interceptors and server interceptors, and further into unary interceptors and stream interceptors. Here we focus on unary interceptors; stream interceptors work the same way.


The client interceptor is defined as follows:

type UnaryClientInterceptor func(ctx context.Context, method string, req, reply interface{}, cc *ClientConn, invoker UnaryInvoker, opts ...CallOption) error

Here method is the method name, req and reply are the request and response respectively, cc is the client connection object, and invoker is the handler that actually executes the RPC method; it is called inside the interceptor.

The server interceptor is defined as follows:

type UnaryServerInterceptor func(ctx context.Context, req interface{}, info *UnaryServerInfo, handler UnaryHandler) (resp interface{}, err error)

Here req is the request, info contains the attributes of the requested method, and handler is a wrapper around the server-side method, which is likewise called inside the interceptor.
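To make the wrap-and-invoke pattern concrete without pulling in grpc, here is a minimal, self-contained model of unary interceptor chaining. The Handler and Interceptor types below are simplified stand-ins for grpc's types, not zrpc's actual code:

```go
package main

import "fmt"

// Handler is a simplified stand-in for a grpc unary handler.
type Handler func(req string) (string, error)

// Interceptor receives the request and the next handler, mirroring how a
// grpc unary server interceptor wraps and then invokes its handler.
type Interceptor func(req string, next Handler) (string, error)

// chain composes interceptors so the first one listed runs outermost.
func chain(h Handler, ics ...Interceptor) Handler {
	for i := len(ics) - 1; i >= 0; i-- {
		next, ic := h, ics[i]
		h = func(req string) (string, error) {
			return ic(req, next)
		}
	}
	return h
}

func main() {
	base := func(req string) (string, error) { return "reply:" + req, nil }
	logging := func(req string, next Handler) (string, error) {
		fmt.Println("before call:", req)
		resp, err := next(req) // the wrapped handler is invoked inside the interceptor
		fmt.Println("after call:", resp)
		return resp, err
	}
	resp, _ := chain(base, logging)("ping")
	fmt.Println(resp) // prints "reply:ping"
}
```

This is the same shape grpc uses: the interceptor decides when (and whether) to call the handler it wraps, which is what makes circuit breaking and metric collection possible.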

zrpc has a wealth of built-in interceptors, including adaptive load shedding, adaptive circuit breaking, permission verification, Prometheus metric collection and more. Since there are many interceptors and space is limited, we cannot analyze them all one by one; here we focus on two: adaptive circuit breaking and Prometheus service monitoring metric collection.

Built-in interceptor analysis

Adaptive circuit breaker

When the client sends requests to the server, it records the errors the server returns. When errors reach a certain proportion, the client trips the breaker and drops a certain proportion of requests to protect downstream dependencies; it recovers automatically. The adaptive circuit breaker in zrpc follows the overload protection strategy from Google SRE; the algorithm is as follows:

dropRatio = max(0, (requests − K × accepts) / (requests + 1))


Requests: total requests

Accepts: number of normal requests

K: multiplier (Google SRE recommends 2)

The aggressiveness of the circuit breaker can be tuned by changing K: lowering K makes the adaptive algorithm more aggressive, while raising K makes it less aggressive.
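A quick worked example of the formula above, ignoring the small protection constant zrpc adds in its accept method (the dropRatio helper below is illustrative, not zrpc's code):

```go
package main

import (
	"fmt"
	"math"
)

// dropRatio computes the Google SRE client rejection probability:
// max(0, (requests - K*accepts) / (requests + 1)).
func dropRatio(total, accepts int64, k float64) float64 {
	return math.Max(0, (float64(total)-k*float64(accepts))/float64(total+1))
}

func main() {
	// Healthy service: 90 of 100 requests accepted, K = 2 -> nothing is dropped.
	fmt.Println(dropRatio(100, 90, 2)) // prints 0
	// Degraded service: only 40 of 100 accepted -> roughly 20% of requests dropped.
	fmt.Printf("%.2f\n", dropRatio(100, 40, 2)) // prints 0.20
}
```

With K = 2 the breaker only starts dropping once fewer than half of the requests succeed, which is why lowering K makes it trip earlier.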

The circuit breaker interceptor is defined as follows:

func BreakerInterceptor(ctx context.Context, method string, req, reply interface{},
	cc *grpc.ClientConn, invoker grpc.UnaryInvoker, opts ...grpc.CallOption) error {
	// breaker name: target + method name
	breakerName := path.Join(cc.Target(), method)
	return breaker.DoWithAcceptable(breakerName, func() error {
		// actually execute the call
		return invoker(ctx, method, req, reply, cc, opts...)
	}, codes.Acceptable)
}

The accept method implements the Google SRE overload protection algorithm to decide whether to trip the breaker:

func (b *googleBreaker) accept() error {
	// accepts is the number of normal requests, total is the total number of requests
	accepts, total := b.history()
	weightedAccepts := b.k * float64(accepts)
	// the drop-ratio algorithm
	dropRatio := math.Max(0, (float64(total-protection)-weightedAccepts)/float64(total+1))
	if dropRatio <= 0 {
		return nil
	}

	// drop the request with probability dropRatio
	if b.proba.TrueOnProba(dropRatio) {
		return ErrServiceUnavailable
	}

	return nil
}

The doReq method first checks whether the breaker is open. If it is, it returns an error directly (or invokes the fallback); otherwise it executes the request and accumulates the request counters:

func (b *googleBreaker) doReq(req func() error, fallback func(err error) error, acceptable Acceptable) error {
	if err := b.accept(); err != nil {
		if fallback != nil {
			return fallback(err)
		} else {
			return err
		}
	}

	defer func() {
		if e := recover(); e != nil {
			b.markFailure()
			panic(e)
		}
	}()

	// execute the RPC request here
	err := req()
	if acceptable(err) {
		// on a normal request, both total and accepts are incremented
		b.markSuccess()
	} else {
		// on a failed request, only total is incremented
		b.markFailure()
	}

	return err
}
Prometheus metric collection

Monitoring the current service status through Prometheus metric collection is a standard practice in the industry, and zrpc also relies on Prometheus for its metrics.

The Prometheus interceptor is defined as follows. It collects the service's monitoring metrics, mainly the latency and error codes of RPC calls, using Prometheus's histogram and counter data types:

func UnaryPrometheusInterceptor() grpc.UnaryServerInterceptor {
	return func(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (
		interface{}, error) {
		// record the time before execution
		startTime := timex.Now()
		resp, err := handler(ctx, req)
		// after execution, compute the elapsed time of the call
		metricServerReqDur.Observe(int64(timex.Since(startTime)/time.Millisecond), info.FullMethod)
		// count the error code corresponding to the method
		metricServerReqCodeTotal.Inc(info.FullMethod, strconv.Itoa(int(status.Code(err))))
		return resp, err
	}
}

Adding custom interceptors

In addition to the rich built-in interceptors, zrpc also supports adding custom interceptors.

On the client side, add a unary interceptor with the AddInterceptor method:

func (rc *RpcClient) AddInterceptor(interceptor grpc.UnaryClientInterceptor) {
	// body elided in the original
}

On the server side, add unary interceptors with the AddUnaryInterceptors method:

func (rs *RpcServer) AddUnaryInterceptors(interceptors ...grpc.UnaryServerInterceptor) {
	// body elided in the original
}

Resolver module

zrpc service registration architecture diagram:


zrpc customizes the resolver module to implement service registration. Since zrpc is built on grpc, customizing a resolver in grpc requires implementing the resolver.Builder interface:

type Builder interface {
	Build(target Target, cc ClientConn, opts BuildOptions) (Resolver, error)
	Scheme() string
}

The Build method returns a Resolver, which is defined as follows:

type Resolver interface {
	ResolveNow(ResolveNowOptions)
	Close()
}
Two resolvers, direct and discov, are defined in zrpc; here we focus on discov, which performs etcd-based service discovery. A custom resolver must be registered through the Register method provided by grpc:

func RegisterResolver() {
	resolver.Register(&dirBuilder) // direct resolver
	resolver.Register(&disBuilder) // etcd-based discov resolver
}

When the zrpc server starts, the Start method is called and the service address is registered in etcd:

func (ags keepAliveServer) Start(fn RegisterFn) error {
	// register the service address
	if err := ags.registerEtcd(); err != nil {
		return err
	}

	// start the service
	return ags.Server.Start(fn)
}

When the zrpc client starts, grpc calls the Build method of the custom resolver internally. By calling the UpdateState method of resolver.ClientConn inside Build, zrpc registers the service addresses inside the grpc client:

func (d *discovBuilder) Build(target resolver.Target, cc resolver.ClientConn, opts resolver.BuildOptions) (
	resolver.Resolver, error) {
	hosts := strings.FieldsFunc(target.Authority, func(r rune) bool {
		return r == EndpointSepChar
	})
	// service discovery
	sub, err := discov.NewSubscriber(hosts, target.Endpoint)
	if err != nil {
		return nil, err
	}

	update := func() {
		var addrs []resolver.Address
		for _, val := range subset(sub.Values(), subsetSize) {
			addrs = append(addrs, resolver.Address{
				Addr: val,
			})
		}
		// register the service addresses with grpc
		cc.UpdateState(resolver.State{
			Addresses: addrs,
		})
	}
	sub.AddListener(update)
	update()

	// return the custom resolver.Resolver
	return &nopResolver{cc: cc}, nil
}

In discov, all addresses of the specified service are obtained from etcd by calling the load method:

func (c *cluster) load(cli EtcdClient, key string) {
	var resp *clientv3.GetResponse
	for {
		var err error
		ctx, cancel := context.WithTimeout(c.context(cli), RequestTimeout)
		// get all addresses of the specified service from etcd
		resp, err = cli.Get(ctx, makeKeyPrefix(key), clientv3.WithPrefix())
		cancel()
		if err == nil {
			break
		}

		// retry on error
		logx.Error(err)
		time.Sleep(coolDownInterval)
	}

	var kvs []KV
	for _, ev := range resp.Kvs {
		kvs = append(kvs, KV{
			Key: string(ev.Key),
			Val: string(ev.Value),
		})
	}

	c.handleChanges(key, kvs)
}

It then watches for changes to the service addresses through Watch:

func (c *cluster) watch(cli EtcdClient, key string) {
	rch := cli.Watch(clientv3.WithRequireLeader(c.context(cli)), makeKeyPrefix(key), clientv3.WithPrefix())
	for {
		select {
		case wresp, ok := <-rch:
			if !ok {
				logx.Error("etcd watch chan has been closed")
				return
			}
			// apply the changed key-value pairs (further error handling elided in the original)
			c.handleWatchEvents(key, wresp.Events)
		}
	}
}

This part introduced how zrpc customizes its resolver and how etcd-based service discovery works; with it, you can understand how zrpc registers and discovers services internally. There is a lot of source code, and we have only walked through the overall flow. If you are interested in zrpc's source code, you can study it on your own.

Balancer module

Schematic diagram of load balancing:


Avoiding overload is an important goal of any load balancing strategy: a good algorithm balances server resources well. Commonly used algorithms include round robin, random, hash and weighted round robin. However, simple algorithms often perform poorly in complex scenarios; for example, round robin can easily cause load imbalance when service response times grow. Therefore zrpc ships a custom default algorithm, P2C (power of two choices). As with the resolver, a custom balancer must implement grpc's balancer.Builder interface. Since the process is similar to customizing the resolver, we will not walk through it here; interested readers can consult the grpc documentation.

Note that zrpc performs load balancing on the client side, whereas a common alternative is to balance in an intermediate proxy such as nginx.

The default load balancing algorithm in zrpc framework is P2C. The main idea of this algorithm is:

  1. Perform two random selections from the list of available nodes to obtain nodes A and B
  2. Compare nodes A and B and select the node with the lower load

The pseudo code is as follows:


The main algorithm logic is implemented in the Pick method:

func (p *p2cPicker) Pick(ctx context.Context, info balancer.PickInfo) (
	conn balancer.SubConn, done func(balancer.DoneInfo), err error) {
	p.lock.Lock()
	defer p.lock.Unlock()

	var chosen *subConn
	switch len(p.conns) {
	case 0:
		return nil, nil, balancer.ErrNoSubConnAvailable
	case 1:
		chosen = p.choose(p.conns[0], nil)
	case 2:
		chosen = p.choose(p.conns[0], p.conns[1])
	default:
		var node1, node2 *subConn
		for i := 0; i < pickTimes; i++ {
			// two random indexes, adjusted so they are distinct
			a := p.r.Intn(len(p.conns))
			b := p.r.Intn(len(p.conns) - 1)
			if b >= a {
				b++
			}
			// randomly pick two of all the nodes
			node1 = p.conns[a]
			node2 = p.conns[b]
			// stop once both candidate nodes are healthy
			if node1.healthy() && node2.healthy() {
				break
			}
		}

		// select one of the two nodes
		chosen = p.choose(node1, node2)
	}

	atomic.AddInt64(&chosen.inflight, 1)
	atomic.AddInt64(&chosen.requests, 1)
	return chosen.conn, p.buildDoneFunc(chosen), nil
}

The choose method compares the load of the randomly selected nodes to decide which one to pick:

func (p *p2cPicker) choose(c1, c2 *subConn) *subConn {
	start := int64(timex.Now())
	if c2 == nil {
		atomic.StoreInt64(&c1.pick, start)
		return c1
	}

	if c1.load() > c2.load() {
		c1, c2 = c2, c1
	}

	pick := atomic.LoadInt64(&c2.pick)
	if start-pick > forcePick && atomic.CompareAndSwapInt64(&c2.pick, pick, start) {
		return c2
	} else {
		atomic.StoreInt64(&c1.pick, start)
		return c1
	}
}

The above introduced the design and implementation of zrpc's default load balancing algorithm. How is the custom balancer registered with grpc? Just as the resolver package provides a Register method, so does the balancer package:

func init() {
	balancer.Register(newBuilder())
}

func newBuilder() balancer.Builder {
	return base.NewBalancerBuilder(Name, new(p2cPickerBuilder))
}

After the balancer is registered, how does grpc know which one to use? This is specified through a configuration option: in NewClient, zrpc configures it with the grpc.WithBalancerName method:

func NewClient(target string, opts ...ClientOption) (*client, error) {
	var cli client
	opts = append(opts, WithDialOption(grpc.WithBalancerName(p2c.Name)))
	if err := cli.dial(target, opts...); err != nil {
		return nil, err
	}

	return &cli, nil
}

This part introduced the principle and implementation of zrpc's load balancing algorithm, how zrpc registers its custom balancer, and how grpc is told to use it. It should give you a deeper understanding of load balancing.


First, the basic usage of zrpc was introduced. As you can see, zrpc is very simple to use: only a few lines of code are needed to build a high-performance RPC service with built-in service governance capabilities. Of course, this was not a comprehensive guide to zrpc's basic usage; you can consult the documentation to learn more.

Then several important modules of zrpc and their implementation principles were introduced, along with some source code analysis. The interceptor module is the core of zrpc, with rich built-in features such as circuit breaking, monitoring and load shedding that are essential for building highly available microservices. The resolver and balancer modules customize grpc's resolver and balancer; through them you can understand how service registration and discovery work end to end and how to build your own service discovery system, and custom load balancing algorithms should no longer seem mysterious.

Finally, zrpc is an RPC framework tempered by real engineering practice. Whether you want to use it in production or learn from its design, it is an open source project well worth studying. I hope this article helps you learn more about zrpc.

Project address


Tal Technology
