Unveiling TiDB's new optimizer: an analysis of the Cascades Planner


Author: Mingcong Han

In “Ten Minutes as a Contributor Series | Adding Optimization Rules for the Cascades Planner”, we briefly introduced the background knowledge of Cascades. This article introduces the framework and principles of the Cascades Planner, TiDB's new optimizer.

A brief introduction to TiDB's current optimizer

The job of a query optimizer in a relational database is to generate a good query plan for a SQL statement within reasonable overhead. The basic composition of TiDB's current optimizer is introduced in TiDB source code reading series (VI): overview of the SELECT statement. The optimization process of TiDB's current optimizer is mainly divided into logical optimization and physical optimization. Logical optimization transforms a logical operator tree into a better, logically equivalent logical operator tree, while physical optimization transforms a logical operator tree into a physical operator tree. The physical operator tree is what we call the physical execution plan; it is handed over to TiDB's execution engine to complete the subsequent SQL execution.

Logical optimization

In TiDB, the AST (abstract syntax tree) of a SQL statement has already been transformed into the corresponding logical operator tree before it enters the logical optimization stage. Logical optimization is therefore a process of logically equivalent transformations on a logical operator tree. It is rule-based optimization (RBO), and the principle behind these rules is the equivalence transformation of relational algebra. Typical rules include column pruning, predicate pushdown, etc. The principles and implementations of TiDB's existing logical optimization rules are covered in two source reading articles: (VII) rule-based optimization and (XXI) rule-based optimization II.

As the number of logical optimization rules in TiDB grows, several problems in the current optimization framework have gradually been exposed:

  1. The optimizer requires every logical optimization rule to be profitable: the logical execution plan obtained after a transformation must be better than the one before it (e.g. predicate pushdown). However, some optimization rules are beneficial only in specific scenarios (e.g. aggregate pushdown). Such rules are hard to add to the current optimizer, so in those specific scenarios the optimizer's execution plan is not optimal.
  2. Regardless of the SQL statement, in the logical optimization stage all optimization rules are tried in the same fixed order to see whether they can act on the current logical execution plan. For example, the first rule executed is always column pruning. The order between logical optimization rules must be carefully arranged by experienced optimizer developers; for example, partition processing must happen after predicate pushdown. As a result, anyone adding an optimization rule needs to understand all the other rules in order to place it correctly, which raises the bar for contribution.
  3. In the logical optimization stage, each rule is executed at most once during the sequential traversal. In practice, however, a rule that has already been executed may become applicable again. Take this simple SQL as an example: `select b from t where a > 1`, where `a` is a primary key of type `int`. We eventually generate this physical execution plan:

    TableScan(table: t, range:(1, inf]) -> TableReader(a, b) -> Projection(b)

    The schema of the TableReader contains both columns `a` and `b`; that is, TiDB reads two columns from TiKV but eventually needs only the first one. The reason is that the optimizer performs column pruning first and predicate pushdown afterwards, yet after predicates are pushed down, column pruning could take effect again. Since that second round of column pruning cannot be executed in the current optimizer, TiDB reads one extra column of data from TiKV, which increases network I/O during SQL execution.

Physical optimization

Physical optimization is the process of transforming a logical operator tree into a physical operator tree; we gave a detailed introduction in TiDB source code reading series (VIII): cost-based optimization. In physical optimization, the optimizer optimizes the query plan according to the distribution of the data (statistics). Physical optimization is a memoized search: the goal is to find, for a logical execution plan, the physical execution plans that satisfy specific physical properties, and to select the one with the lowest cost as the search result. It is therefore also called cost-based optimization (CBO). Examples include choosing a scan path (which index to use) for a DataSource, or choosing an execution algorithm (HashJoin, MergeJoin or IndexJoin) for a Join.

In TiDB, physical optimization not only selects physical operators but also handles pushing operators down to TiKV. For example, the aggregation operator is split into a final mode and a partial mode, and the partial-mode aggregation is pushed down to the TiKV coprocessor for execution. For details, see TiDB source code reading series (XXII): hash aggregation.

At present, four operators can be pushed down: Selection, Limit, TopN and Aggregation.

  • For the Selection operator: since Selections are pushed down into the DataSource during the logical optimization stage, in the physical optimization stage, if a filter condition in the Selection cannot be converted into a range condition for the scan, a coprocessor-level Selection is generated.
  • For the Limit, TopN and Aggregation operators: they are allowed to be pushed down if and only if their child node is a DataSource.
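The pushdown condition described above can be sketched as a single check. This is a toy illustration, not TiDB's actual code; the operator names are plain strings for brevity:

```go
package main

import "fmt"

// canPushDown sketches the current rule: Limit, TopN and Aggregation
// may be pushed down only when their child is a DataSource. Selection
// is handled separately (it is folded into the DataSource earlier).
func canPushDown(op, child string) bool {
	switch op {
	case "Limit", "TopN", "Aggregation":
		return child == "DataSource"
	}
	return false
}

func main() {
	fmt.Println(canPushDown("Aggregation", "DataSource")) // true
	fmt.Println(canPushDown("Aggregation", "Join"))       // false
}
```

The rigidity of this check is exactly the problem listed below: the decision depends only on the operator pair, not on the engine or the data.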

The following figure shows how a simple aggregate query is optimized into its final physical plan:

[Figure: the optimization process of a simple aggregate query]

  1. First, in the logical optimization stage, the filter conditions in the Selection are pushed down into the AccessConds of the DataSource.
  2. In the physical optimization stage, `A.pk > 10` is converted into a range condition on the primary key, while `A.value > 1` generates a Selection at the TiKV layer.
  3. Meanwhile, because the child node of the Aggregation is a DataSource, the Aggregation is also pushed down to TiKV.

The above physical optimization process has several potential problems:

  • The operator pushdown logic is too simple: apart from Selection, only one operator can be pushed down. This makes it hard to handle new pushdown operators added in the future (such as Projection), and impossible to push operators down flexibly in special scenarios.
  • Poor scalability: it is difficult to extend support to other storage engines and implement the corresponding operator pushdown, e.g. TiFlash (a columnar storage engine that was not yet open source).
  • It is difficult to add optimization rules for special data sources, which further limits the optimizer's search space.

The Volcano/Cascades Optimizer

The Volcano/Cascades optimizer is a classic optimizer framework, originating from the papers “The Volcano Optimizer Generator: Extensibility and Efficient Search” and “The Cascades Framework for Query Optimization”, whose main author is Goetz Graefe. The Cascades framework has been implemented by many well-known database systems. Let's briefly introduce some basic concepts proposed in these two papers.

The Volcano Optimizer Generator

The Volcano Optimizer Generator positions itself as an optimizer “generator”, and its core contribution is a search engine. The authors provide a basic framework for a database query optimizer; database implementers can generate a query optimizer by implementing the corresponding interfaces for their own data model. Setting aside the “generator” aspect, here are some of the methods it proposes on the “optimizer” side:

  1. The Volcano Optimizer uses two-stage optimization. It uses “logical algebra” to represent the various relational algebra operators, and “physical algebra” to represent the implementation algorithms of those operators. Transformations convert between logical algebras, and cost-based selection drives the conversion from logical algebra to physical algebra.
  2. Changes in the Volcano Optimizer are described by rules: transformation rules describe changes between logical algebras, and implementation rules describe conversions from logical algebra to physical algebra.
  3. The results of operators and expressions in the Volcano Optimizer are represented by properties. A logical property can be derived from logical algebra, mainly including the operator's schema and statistics; a physical property can be derived from physical algebra and represents physical characteristics of the data produced by the operator, such as being sorted by some key or being distributed across the cluster by some key.
  4. The Volcano Optimizer uses a top-down dynamic programming algorithm (memoized search).

Cascades Optimizer

The Cascades Optimizer is the follow-up to the Volcano Optimizer and further improves on it. Here are some basic concepts in the Cascades Optimizer.


Memo

During the search, the space explored by the Cascades Optimizer is a forest of relational algebra operator trees, and the data structure that stores this forest is called the Memo. The two most basic concepts in the Memo are the expression Group (hereinafter Group) and the Group Expression (corresponding to a relational algebra operator). Each Group stores a set of logically equivalent Group Expressions, and the child nodes of a Group Expression are Groups. The following figure shows a Memo composed of five Groups:

[Figure: a Memo composed of five Groups]

From the Memo above we can extract the following two equivalent operator trees. Storing the two trees in a Memo avoids storing redundant operators (such as Scan(a) and Scan(b)):

[Figure: two equivalent operator trees extracted from the Memo]
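The mutual recursion between Group and Group Expression can be sketched with toy Go types. This is only an illustration of the sharing idea, not TiDB's actual definitions (which appear later in this article); the `newGroup` helper and the use of a slice instead of a linked list are simplifications:

```go
package main

import "fmt"

// GroupExpr is a toy relational operator; its children are Groups.
type GroupExpr struct {
	Operand  string // stands in for the logical operator itself
	Children []*Group
}

// Group is a toy set of logically equivalent GroupExprs.
type Group struct {
	Equivalents []*GroupExpr
}

// newGroup wraps a single expression into a fresh Group.
func newGroup(e *GroupExpr) *Group {
	return &Group{Equivalents: []*GroupExpr{e}}
}

func main() {
	// Build the Memo for Join(Scan(a), Scan(b)).
	scanA := newGroup(&GroupExpr{Operand: "Scan(a)"})
	scanB := newGroup(&GroupExpr{Operand: "Scan(b)"})
	join := newGroup(&GroupExpr{Operand: "Join", Children: []*Group{scanA, scanB}})

	// Adding the commuted join reuses the same child Groups, so the
	// Scan(a) and Scan(b) operators are stored only once.
	join.Equivalents = append(join.Equivalents,
		&GroupExpr{Operand: "Join", Children: []*Group{scanB, scanA}})

	fmt.Println(len(join.Equivalents)) // 2 equivalent expressions share 2 child Groups
}
```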


Rule

In the Volcano Optimizer, rules are divided into transformation rules and implementation rules. A transformation rule adds logically equivalent Group Expressions to the Memo; it is atomic, acting only on a small local fragment of the operator tree. Each transformation rule has its own matching conditions, and by continually matching and applying transformation rules on the operator trees, the search space is expanded in order to find a possible optimal solution. An implementation rule selects a physical operator for a Group Expression.

In cascade optimizer, the two types of rules are no longer distinguished.


Pattern

A Pattern describes the local characteristics of a Group Expression. Each rule has its own Pattern, and a rule can only be applied to a Group Expression that satisfies its Pattern. The left side of the figure below defines a `Selection -> Projection` Pattern, and a matching Group Expression is marked by the red dotted line in the Memo on the right.

[Figure: a Selection -> Projection Pattern and the matching Group Expression in the Memo]

Searching Algorithm

The Cascades Optimizer makes a detailed design for the order in which rules are applied. For example, each rule has two methods, promise and condition. promise represents the importance of a rule in the current search process: the higher the promise value, the more likely the rule is to be useful, and when the promise value is less than or equal to 0 the rule is not executed. condition determines directly, by returning a boolean, whether a rule can be applied in the current process. After a rule is applied successfully, the optimizer computes the set of rules that may be applied next.

The search algorithm of the Cascades Optimizer differs from that of the Volcano Optimizer. The Volcano Optimizer divides the search into two phases: the first enumerates all logically equivalent logical algebras, and the second uses top-down dynamic programming to search for the physical algebra with the lowest cost. The Cascades Optimizer merges these two phases, providing guidance to steer the execution order of rules, enumerating logically equivalent operators and generating physical operators at the same time. This avoids enumerating all logical execution plans, but the downside is that wrong guidance can make the search converge to a local optimum, so the best execution plan may not be found.

The Volcano/Cascades Optimizer uses branch-and-bound to prune the search space. Since both search top-down, an upper bound on cost can be set during the search: if a branch exceeds the preset cost upper bound before its leaf nodes are reached, the branch can be pruned early.

Design of the TiDB Cascades Planner

Based on the principles of the Volcano/Cascades Optimizer, we redesigned an optimizer for TiDB: the TiDB Cascades Planner. We hope the new optimizer will solve the problems of the current optimizer and also bring some new features:

  • Optimization rules are easy to implement: defining a rule only requires implementing a few simple interfaces.
  • Optimization rules are easy to extend: there is no need to consider the execution order of the rules.
  • Optimization rules can be applied repeatedly, increasing the optimizer's search space.
  • For rules that are not always beneficial, the result can be selected by cost.
  • Operator pushdown to the storage layer is more flexible, making it convenient to add new pushdown operators in the future.
  • It is easier for TiDB to connect to other storage or computing engines, such as TiFlash.
  • The optimizer's capability can be graded, so different optimization levels can be chosen for queries of different complexity.

Basic data structures

First, let's introduce some basic data structures in the TiDB Cascades Planner. The following concepts are basically the same as in the Volcano/Cascades Optimizer introduced above, with only minor differences.


GroupExpr

A GroupExpr is a wrapper around a `LogicalPlan`. The difference from a `LogicalPlan` is that a GroupExpr's child nodes are no longer `LogicalPlan`s but Groups:

type GroupExpr struct {
  ExprNode plannercore.LogicalPlan
  Children []*Group
  Group    *Group
}

Group

A Group is a set of logically equivalent GroupExprs. In other words, the operator tree generated from any GroupExpr in a Group is logically equivalent to the others.

type Group struct {
  Equivalents *list.List

  ImplMap map[string]Implementation
  Prop    *property.LogicalProperty

  EngineType EngineType
}

To speed up the search, we apply some fine-grained optimizations to the Group. For example, when inserting a GroupExpr into Equivalents, we always keep LogicalPlans of the same type stored contiguously in the linked list, and we maintain a map as an index pointing to the first GroupExpr of each type.
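That indexing trick can be sketched as follows. This is a simplified illustration using a slice instead of `list.List`; the type and method names are hypothetical:

```go
package main

import "fmt"

type groupExpr struct{ operand string }

// group keeps expressions of the same operand type contiguous, and a
// map records where each type's run begins, so rules can jump straight
// to the candidate expressions of the type they match.
type group struct {
	equivalents []*groupExpr
	firstOfType map[string]int // operand -> index of the first expr of that type
}

func (g *group) insert(e *groupExpr) {
	if g.firstOfType == nil {
		g.firstOfType = map[string]int{}
	}
	if pos, ok := g.firstOfType[e.operand]; ok {
		// Splice the new expr right after the first one of its type.
		tail := append([]*groupExpr{e}, g.equivalents[pos+1:]...)
		g.equivalents = append(g.equivalents[:pos+1], tail...)
		// Shift the recorded start positions after the insertion point.
		for op, p := range g.firstOfType {
			if p > pos {
				g.firstOfType[op] = p + 1
			}
		}
		return
	}
	g.firstOfType[e.operand] = len(g.equivalents)
	g.equivalents = append(g.equivalents, e)
}

func main() {
	g := &group{}
	g.insert(&groupExpr{operand: "Join"})
	g.insert(&groupExpr{operand: "Projection"})
	g.insert(&groupExpr{operand: "Join"}) // lands next to the first Join
	fmt.Println(g.equivalents[1].operand) // Join
}
```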

From the two definitions above we can see that Group and GroupExpr reference each other recursively, together forming the Memo data structure.


Operand

An Operand is the type identifier of a LogicalPlan, used to describe a Pattern.

type Operand int

const (
  OperandAny Operand = iota
  // ... one Operand value per logical operator type
)


Pattern

A Pattern is a tree data structure that represents the local shape of a logical operator tree. Note that a Pattern can only match the type of a logical operator (through its Operand); it cannot constrain the specific contents of the operator.

type Pattern struct {
  Operand
  Children []*Pattern
}
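Matching by operand type only, as the text describes, can be sketched like this. The types here are standalone toys, not TiDB's real `Pattern` (which matches against the Memo rather than a plain tree):

```go
package main

import "fmt"

type Operand int

const (
	OperandAny Operand = iota
	OperandSelection
	OperandAggregation
)

type Pattern struct {
	Operand  Operand
	Children []*Pattern
}

// Expr is a toy operator tree to match against.
type Expr struct {
	Operand  Operand
	Children []*Expr
}

// match reports whether expr structurally matches pat, comparing only
// operand types; the operators' internal contents are never inspected.
func match(pat *Pattern, expr *Expr) bool {
	if pat.Operand != OperandAny && pat.Operand != expr.Operand {
		return false
	}
	if len(pat.Children) == 0 {
		return true // a leaf pattern matches any subtree below
	}
	if len(pat.Children) != len(expr.Children) {
		return false
	}
	for i, c := range pat.Children {
		if !match(c, expr.Children[i]) {
			return false
		}
	}
	return true
}

func main() {
	// Pattern: Selection -> Aggregation
	pat := &Pattern{Operand: OperandSelection,
		Children: []*Pattern{{Operand: OperandAggregation}}}
	expr := &Expr{Operand: OperandSelection,
		Children: []*Expr{{Operand: OperandAggregation}}}
	fmt.Println(match(pat, expr)) // true
}
```

Because the match looks only at operand types, anything finer (e.g. the join type) must be checked elsewhere, which is exactly what the `Match()` method of a rule is for.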


Transformation Rule

Transformation is an interface type used to define a logical transformation rule.

  • The `GetPattern()` method returns the Pattern that the transformation rule needs to match.
  • Because a Pattern can only describe operator types, not constraints on the contents of a LogicalPlan, the `Match()` method handles finer-grained matching conditions. For example, a Pattern can only say that we want an operator of type Join; it cannot say whether the Join should be an InnerJoin or a LeftOuterJoin. Such conditions are judged in `Match()`.
  • The `OnTransform()` method defines the concrete content of the transformation rule. It returns the new GroupExprs, whether to delete the old `GroupExpr`, and whether to delete all `GroupExpr`s in the Group.
type Transformation interface {
  GetPattern() *memo.Pattern
  Match(expr *memo.ExprIter) bool
  OnTransform(old *memo.ExprIter) (newExprs []*memo.GroupExpr, eraseOld bool, eraseAll bool, err error)
}

Let's take a transformation rule, `PushSelDownAggregation`, as an example to introduce the three methods in detail.

The Pattern this rule matches is `Selection -> Aggregation`, and its function is to push the Selection down through the Aggregation. For example, for the SQL `select a, sum(b) from t group by a having a > 10 and max(c) > 10`, the condition `a > 10` in the HAVING clause can be pushed below the Aggregation. More precisely, an expression in the Selection can be pushed down as long as all the columns it references appear in the GROUP BY columns.

Refer to the following figure:

[Figure: applying PushSelDownAggregation]

  1. The Selection in Group 0 matches the Pattern `Selection -> Aggregation`.
  2. In `OnTransform()`, the condition `a > 10` in the Selection is pushed down below a new Aggregation, and the remaining condition `max(c) > 10` becomes a new Selection.
  3. Because `OnTransform()` returns `eraseOld` as `true`, the original GroupExpr is finally removed from the Group.
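The core of this rule is splitting the Selection's conditions into a pushable part and a remaining part. The sketch below illustrates that split with simplified types (`Cond` and `splitConds` are hypothetical names; the real rule works on TiDB expressions):

```go
package main

import "fmt"

// Cond is a toy filter condition with the columns it references.
type Cond struct {
	Text string
	Cols []string
}

// splitConds applies the criterion from the text: a condition can be
// pushed below the aggregation iff every column it references appears
// in the GROUP BY columns.
func splitConds(conds []Cond, groupBy map[string]bool) (pushed, remained []Cond) {
	for _, c := range conds {
		ok := true
		for _, col := range c.Cols {
			if !groupBy[col] {
				ok = false
				break
			}
		}
		if ok {
			pushed = append(pushed, c)
		} else {
			remained = append(remained, c)
		}
	}
	return
}

func main() {
	// having a > 10 and max(c) > 10, with group by a
	conds := []Cond{
		{Text: "a > 10", Cols: []string{"a"}},
		{Text: "max(c) > 10", Cols: []string{"c"}},
	}
	pushed, remained := splitConds(conds, map[string]bool{"a": true})
	fmt.Println(pushed[0].Text, remained[0].Text) // a > 10 max(c) > 10
}
```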

Implementation/Implementation Rule

An Implementation is a wrapper around a physical plan together with its cost calculation.

type Implementation interface {
  CalcCost(outCount float64, children ...Implementation) float64
  SetCost(cost float64)
  GetCost() float64
  GetPlan() plannercore.PhysicalPlan

  AttachChildren(children ...Implementation) Implementation
  ScaleCostLimit(costLimit float64) float64
}

`ImplementationRule` is an interface type that defines a physical implementation for a logical operator.

  • An `ImplementationRule` can only be matched by Operand, so it also needs a `Match()` method for finer-grained matching against the operator's internal details.
  • The `OnImplement()` method generates the corresponding Implementation for a GroupExpr.
type ImplementationRule interface {
  Match(expr *memo.GroupExpr, prop *property.PhysicalProperty) (matched bool)
  OnImplement(expr *memo.GroupExpr, reqProp *property.PhysicalProperty) (memo.Implementation, error)
}

Let's again take aggregation as an example. There are two typical physical execution modes for aggregation: hash aggregation and stream aggregation.

The ImplementationRule that implements HashAgg is ImplHashAgg, whose source code is located in planner/cascades/implementation_rules.go. Because HashAgg cannot satisfy any physical property required by an upper-level node, in the `Match()` method, if the property passed down by the upper-level node is not empty, we cannot convert the Aggregation into a HashAgg. In the `OnImplement()` method, we simply convert the logical Aggregation into a physical HashAgg.
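The property check in `Match()` can be sketched as follows. This is a simplified illustration of the logic just described, not the real ImplHashAgg code; `PhysicalProperty` is reduced to its sort items:

```go
package main

import "fmt"

// PhysicalProperty is a toy stand-in for TiDB's physical property:
// here it carries only the required sort columns.
type PhysicalProperty struct {
	SortItems []string
}

func (p *PhysicalProperty) IsEmpty() bool { return len(p.SortItems) == 0 }

// implHashAggMatch mirrors the check described above: hash aggregation
// cannot deliver any ordering, so the rule matches only when the
// required property is empty.
func implHashAggMatch(prop *PhysicalProperty) bool {
	return prop.IsEmpty()
}

func main() {
	fmt.Println(implHashAggMatch(&PhysicalProperty{}))                         // true
	fmt.Println(implHashAggMatch(&PhysicalProperty{SortItems: []string{"a"}})) // false
}
```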


Enforcer Rule

An Enforcer forcibly adds a Sort operator to the operator tree to satisfy the order property required by the parent node. How the Enforcer is used in the implementation phase is introduced below.

type Enforcer interface {
  NewProperty(prop *property.PhysicalProperty) (newProp *property.PhysicalProperty)
  OnEnforce(reqProp *property.PhysicalProperty, child memo.Implementation) (impl memo.Implementation)
  GetEnforceCost(g *memo.Group) float64
}


LogicalProperty

A LogicalProperty contains the schema and statistics. Because all GroupExprs in a Group are logically equivalent, they share the same LogicalProperty.

type LogicalProperty struct {
  Stats  *StatsInfo
  Schema *expression.Schema
}


PhysicalProperty

A PhysicalProperty records the order-by items and the expected count, which are consistent with those described in TiDB source code reading series (VIII): cost-based optimization and are not repeated here.

Adapter Model

To let TiDB work on top of a variety of different storage components, we introduced the adapter model into the TiDB Cascades Planner. In this model, we add to the LogicalPlan various operators that collect data from storage engines, such as `TiKVTableGather`, `TiFlashTableGather`, or even `MySQLGather`. Eventually, these Gather operators are rewritten into `TableReader` or `IndexReader`. Thus all operators above a Gather are executed in TiDB, while all operators below a Gather are executed on the corresponding storage engine. This has two advantages:

  • We can distinguish different storage engines during the logical optimization stage and design different operator pushdown strategies for each.
  • If TiDB wants to use another storage engine, it only needs to implement the corresponding Gather operator and, in the physical optimization stage, the corresponding Reader operator.

[Figure: the adapter model]

Optimization process

In the current design, the TiDB Cascades Planner divides the search process into three phases:

  • Preprocessing phase.
  • Exploration phase: the logical search phase.
  • Implementation phase: the physical implementation phase.

The source code of this part is located in planner/cascades/optimize.go.

Preprocessing Phase

In the preprocessing stage, we apply logical transformations that are always beneficial to the original logical operator tree, such as column pruning.

Exploration Phase

In the logical search stage, similar to TiDB's current logical optimization, we apply logically equivalent transformations to the input logical operator tree. The difference is that we first convert the LogicalPlan tree into a Group tree, and then apply transformation rules on the Group tree to realize the logically equivalent changes.

The implementation of the search algorithm mainly involves three functions; we introduce their roles bottom-up.

1) findMoreEquiv(group, groupExpr)

`findMoreEquiv(group, groupExpr)` applies all transformation rules to a GroupExpr to search for more logically equivalent GroupExprs. The process is as follows:

  1. First, the candidate transformation rules are obtained according to the Operand of the GroupExpr. We group all transformation rules by the topmost Operand of their Patterns; for example, when the GroupExpr is a Selection, only transformation rules whose Pattern starts with Selection are tried.
  2. Check whether there is a structure, rooted at the GroupExpr, that matches the rule's Pattern.
  3. If such a structure is found, the `Match()` method further determines whether the details (such as the type of a Join) match.
  4. Finally, if `Match()` succeeds, the `OnTransform()` method is called to apply the transformation rule.
  5. If `OnTransform()` returns new `GroupExpr`s, they are inserted into the Group and the Group is marked as unexplored, to ensure the newly generated GroupExprs can also be searched later.
  6. If `OnTransform()` returns `eraseOld` as `true`, the current GroupExpr is removed from the Group when `findMoreEquiv()` finishes.
  7. If `OnTransform()` returns `eraseAll` as `true`, all GroupExprs in the current Group are deleted, the new GroupExpr is inserted, and the search of the current Group ends.
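Step 1, grouping the rules by the top Operand of their Patterns, can be sketched as a simple dispatch map. This is a toy illustration; the `rule` type and the rule names in the map are hypothetical stand-ins:

```go
package main

import "fmt"

// rule is a stub for a transformation rule.
type rule struct{ name string }

// transformationMap groups rules by the topmost operand of their
// pattern, so only plausible candidates are tried for an expression.
var transformationMap = map[string][]rule{
	"Selection":   {{"PushSelDownAggregation"}, {"PushSelDownJoin"}},
	"Aggregation": {{"EliminateSingleGroupAgg"}},
}

// candidateRules returns the rules whose pattern starts with operand.
func candidateRules(operand string) []rule {
	return transformationMap[operand]
}

func main() {
	for _, r := range candidateRules("Selection") {
		fmt.Println(r.name)
	}
}
```

This is why adding a new rule in the Cascades Planner does not require reasoning about a global rule order: the rule only declares what it matches, and the search loop finds it.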


2) exploreGroup(group)

`exploreGroup()` recursively calls `findMoreEquiv()` on the GroupExprs of the whole Group tree, bottom-up. The main process is:

  1. Traverse all GroupExprs in the current Group, and first recursively call `exploreGroup()` on the child Groups of these GroupExprs, until no new GroupExpr is generated in the child Groups.
  2. When all child Groups of a GroupExpr have been fully explored, call `findMoreEquiv()` on the current GroupExpr. If `eraseCur` is `true`, remove the GroupExpr from the Group.


Finally, `exploreGroup()` is called on the top-level Group (the root Group) in a loop, until no Group generates any new GroupExpr.

At this point, all logically equivalent LogicalPlan trees are stored in the Groups. Next, we select the physical plan tree with the lowest cost for these LogicalPlans.

Implementation Phase

The implementation phase is similar to physical optimization in the current optimizer: it converts logical plans into the physical plan with the lowest cost. The difference is that the old optimizer can only select a physical plan for a single LogicalPlan tree, whereas in the Cascades Planner we select a physical plan for a Group tree, i.e. for a set of logically equivalent LogicalPlan trees.

We can divide this process into three parts.

1) implGroupExpr(groupExpr, reqPhysicalProp)

`implGroupExpr` generates Implementations for a GroupExpr based on the physical property passed down from the upper layer. The process is simple: try all the corresponding implementation rules on the current GroupExpr, and return the Implementations generated by the successful matches.

func (opt *Optimizer) implGroupExpr(groupExpr *memo.GroupExpr, reqPhysProp *property.PhysicalProperty) (impls []memo.Implementation, err error) {
  for _, rule := range opt.GetImplementationRules(groupExpr.ExprNode) {
    if !rule.Match(groupExpr, reqPhysProp) {
      continue
    }
    impl, err := rule.OnImplement(groupExpr, reqPhysProp)
    if err != nil {
      return nil, err
    }
    if impl != nil {
      impls = append(impls, impl)
    }
  }
  return impls, nil
}

2) implGroup(group, reqPhysicalProp, costLimit)

`implGroup()` recursively generates the optimal Implementation for a Group according to the physical property passed down from the upper layer.

  • The implementation phase is actually a memoized search. Each Group records the best Implementation corresponding to each physical property, so before searching we can check whether the optimal Implementation for `reqPhysicalProp` already exists among the historical results.
  • `costLimit` is the cost upper bound used for pre-pruning during the search. Note that using `costLimit` requires cost to increase monotonically from bottom to top. Take the figure below as an example. Expr 0 and Expr 1 are logically equivalent GroupExprs in Group 0, and the cost of the best Implementation generated from Expr 0 is 1000. We then search Expr 1 with `costLimit = 1000`, hoping that Expr 1 produces a better (cheaper) Implementation. However, during the downward search we find that the best Implementation of Expr 4 costs 1200, which is greater than the `costLimit`. In other words, the cost of any Implementation generated from Expr 1 must exceed 1200, so on this path Expr 1 cannot beat the Implementation generated from Expr 0. We therefore prune this search path and stop searching Expr 1 and Expr 3.

[Figure: pruning the search with costLimit]

  • Before generating Implementations, `fillGroupStats()` is called on the current Group to fill in the statistics in the LogicalProperty.
  • Finally, `implGroupExpr()` is called to generate Implementations, and `implGroup()` is called recursively to search the child Groups.
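The branch-and-bound pruning with `costLimit` can be sketched on a toy cost tree. This is only an illustration of the bound propagation under the monotonicity assumption stated above; the `expr` type and `bestCost` helper are hypothetical:

```go
package main

import "fmt"

// expr is a toy plan node with its own cost and child subtrees.
type expr struct {
	selfCost float64
	children []*expr
}

// bestCost returns the total cost of e's subtree if it stays within
// costLimit; ok is false when the branch is pruned. The bound shrinks
// as cost accumulates, which is valid only because cost grows
// monotonically from bottom to top.
func bestCost(e *expr, costLimit float64) (cost float64, ok bool) {
	cost = e.selfCost
	if cost > costLimit {
		return 0, false // prune: already over the bound
	}
	for _, c := range e.children {
		childCost, childOK := bestCost(c, costLimit-cost)
		if !childOK {
			return 0, false // a child branch exceeded the remaining budget
		}
		cost += childCost
	}
	return cost, true
}

func main() {
	// Expr 0's best implementation costs 1000; Expr 1's subtree contains
	// a node whose best cost is already 1200, so the branch is pruned.
	expr1 := &expr{selfCost: 50, children: []*expr{{selfCost: 1200}}}
	_, ok := bestCost(expr1, 1000)
	fmt.Println(ok) // false: the search path is pruned early
}
```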


3) Enforcer

We did not introduce the Enforcer in detail above, so let's add that here. Suppose we have a SQL statement like this:

select b, sum(c) over (partition by b) from t

This is a query with a window function, in which rows are grouped by column `b`. Because the current implementation of Window requires the lower-level operator to be ordered by the grouping columns, when the data is not ordered by column `b`, we must add a Sort operator under the Window operator to satisfy the physical property passed down by the Window operator.

When the PhysicalProperty passed down from the upper level in `implGroup()` is not empty, we also invoke the EnforcerRules for the Group. An EnforcerRule first forcibly adds a Sort operator, and then calls `implGroup()` on the current Group again with an empty PhysicalProperty.
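The enforcer flow can be sketched as follows: implement the group once under the required property, and once under the empty property with a Sort placed on top, then keep the cheaper result. All names, operators and costs here are hypothetical toys for illustration:

```go
package main

import "fmt"

// prop is a toy physical property carrying only the sort columns.
type prop struct{ sortItems []string }

// implGroup is a stub: under the empty property the group's best plan
// is a HashAgg; under a sort requirement only a StreamAgg qualifies.
func implGroup(p prop) (plan string, cost float64) {
	if len(p.sortItems) == 0 {
		return "HashAgg", 100
	}
	return "StreamAgg", 180 // satisfies the required order by itself
}

// implWithEnforcer compares the direct implementation against the
// enforcer path: re-implement with the empty property, add a Sort on
// top (here with a flat cost of 50), and keep the cheaper plan.
func implWithEnforcer(p prop) (string, float64) {
	direct, directCost := implGroup(p)
	child, childCost := implGroup(prop{}) // second call, empty property
	enforced, enforcedCost := "Sort("+child+")", childCost+50
	if enforcedCost < directCost {
		return enforced, enforcedCost
	}
	return direct, directCost
}

func main() {
	plan, cost := implWithEnforcer(prop{sortItems: []string{"b"}})
	fmt.Println(plan, cost) // Sort(HashAgg) 150
}
```

With these (made-up) costs the enforcer path wins: sorting the output of a HashAgg is cheaper than a StreamAgg that produces the order natively; with different statistics the comparison could go the other way, which is exactly why both candidates are costed.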


Summary

This article introduced the composition and principles of the TiDB Cascades Planner framework. The introduction of the Cascades Planner solves some problems in the existing optimizer and brings some new features to TiDB. We hope the Cascades Planner lowers the barrier for the community to participate in developing the TiDB optimizer and attracts more contributors. We also hope to make TiDB a more “general” SQL computing component in the future, so that TiDB can be connected to other storage engines more easily. Finally, you are very welcome to join SIG Planner to exchange ideas with us.

Original post: https://pingcap.com/blog-cn/tidb-cascades-planner/
