1、 Definition of dynamic programming
Dynamic programming is a method for solving a complex problem by decomposing the original problem into relatively simpler subproblems.
2、 Difference between dynamic programming and greedy algorithm
 If we start from A0 and add one element at a time, we obtain A1, then A2, and so on: A0 → A1, A1 → A2, A2 → A3, … This is strict inductive reasoning, i.e., the first form of mathematical induction we often use.
 To derive A(i+1), only its immediately preceding state Ai is needed (no earlier states are required). Such a model is called a (first-order) Markov model, and the corresponding reasoning process is the "greedy method".
 {A1 → A2}; {A1, A2 → A3}; {A1, A2, A3 → A4}; … This is the second form of mathematical induction (strong induction).
 To derive A(i+1), all of the preceding states A1, …, Ai may be needed. Such a model is called a higher-order Markov model, and the corresponding reasoning process is "dynamic programming".
 Basic idea: if the optimal solution of the problem can be derived from the optimal solutions of its subproblems, first solve the subproblems optimally and then construct the optimal solution of the original problem from them; if the subproblems overlap heavily, solve them bottom-up, from the smallest subproblems gradually up to the original problem.
 Usage conditions: the problem can be divided into several related subproblems, and the solutions of the subproblems are reused.
 Optimal substructure:
 An optimal solution of the problem contains optimal solutions of its subproblems.
 This restricts the set of subproblems to those that actually appear in an optimal solution, reducing the complexity of the implementation.
 It makes a bottom-up computation possible.
 Overlapping subproblems: in the process of solving the problem, the solutions of many subproblems are used many times.
 The design steps of a dynamic programming algorithm are as follows:
 Characterize the structure of an optimal solution.
 Recursively define the value (cost) of an optimal solution.
 Compute the value of an optimal solution bottom-up, recording the information needed to construct it.
 Construct an optimal solution from the recorded information.
 Features of dynamic programming:
 The original problem is divided into a series of subproblems;
 Each subproblem is solved only once and its result is saved in a table, so it can be looked up directly when needed later, avoiding repeated computation and saving time;
 The computation proceeds bottom-up;
 The optimal solution of the whole problem depends on the optimal solutions of the subproblems via the state transition equation (each subproblem is called a state, and the solution of the final state is reduced to the solutions of earlier states).

3、 Properties of problems solvable by dynamic programming
Problems that can be solved by dynamic programming generally have three properties:
(1) Optimization principle: if the subproblem solutions contained in an optimal solution of the problem are themselves optimal, the problem is said to have optimal substructure, i.e., it satisfies the optimization principle.
(2) No aftereffect: once the state of a certain stage is determined, it is not affected by later decisions. In other words, the process after a given state does not affect earlier states; it depends only on the current state.
(3) Overlapping subproblems: the subproblems are not independent; a subproblem may be used many times in later stages of decision-making.
4、 General approach to solving dynamic programming problems
The problems handled by dynamic programming are multi-stage decision-making problems: starting from an initial state, decisions are chosen at intermediate stages until an end state is reached. These decisions form a decision sequence and determine an activity route (usually the optimal one) that completes the whole process, as shown in the diagram below. The design of a dynamic programming solution follows a fairly fixed pattern, generally going through the following steps.

Initial state → │ decision 1 │ → │ decision 2 │ → … → │ decision n │ → end state
(1) Stage division: divide the problem into several stages according to its temporal or spatial characteristics. The stages obtained must be ordered (or sortable); otherwise the problem cannot be solved this way.
(2) Determine the states and state variables: the objective situation of the problem at each stage is expressed by different states. The choice of state must satisfy the no-aftereffect property.
(3) Determine the decisions and write the state transition equation: decision and state transition are naturally related, since a state transition derives the state of the current stage from the state and decision of the previous stage. Once the decisions are fixed, the state transition equation can be written. In practice this is often done in reverse: the decision method and the state transition equation are determined from the relationship between the states of two adjacent stages.
(4) Find the boundary conditions: the state transition equation is a recurrence, which needs a termination condition or boundary condition.
Generally, once the stages, states, and state transition decisions are determined, the state transition equation (including the boundary conditions) can be written.
In practical applications, the design can follow these simplified steps:
(1) Analyze the properties of an optimal solution and characterize its structure.
(2) Define the optimal value recursively.
(3) Compute the optimal value bottom-up, or top-down with memoization (the "memo method").
(4) Construct an optimal solution of the problem from the information recorded while computing the optimal value.
The overall pattern can be written as the following pseudocode, where x_i[j] denotes the value of state j at stage i:

for (j = 1; j <= m; j = j + 1)           // the first stage
    x_n[j] = initial value;

for (i = n - 1; i >= 1; i = i - 1)       // the other n - 1 stages
    for (j = 1; j <= f(i); j = j + 1)    // f(i): an expression depending on i
        x_i[j] = max (or min) { g(x_{i+1}[j1 : j2]), ..., g(x_{i+1}[jk : jk+1]) };

t = g(x_1[j1 : j2]);   // obtain the optimum of the whole problem from the subproblem optima

// trace back the states on the optimal route
print(x_1[j_1]);
for (i = 2; i <= n - 1; i = i + 1)
{
    t = t - x_{i-1}[j_{i-1}];            // remove the contribution of the previous stage

    for (j = 1; j <= f(i); j = j + 1)
        if (t == x_i[j])                 // find the state of stage i on the optimal route
            break;
}