Algorithm analysis: dynamic programming


1. Definition of dynamic programming

Dynamic programming is a method for solving a complex problem by decomposing the original problem into relatively simple subproblems.

2. Difference between dynamic programming and the greedy algorithm

Given the premise An of a problem of size n, we want to derive an unknown solution B (we use An to denote "the known condition of a problem of size n").
If the problem size is reduced to 0, that is, A0 is known, then A0 -> B can be obtained directly.
  • Starting from A0 and adding one element at a time gives the chain of transitions A0 -> A1, then A1 -> A2, A2 -> A3, and so on. This is strict inductive reasoning, i.e. the first form of mathematical induction.
  • If deriving Ai+1 requires only the immediately preceding state Ai (and no earlier states), we call this a (first-order) Markov model, and the corresponding reasoning process is the "greedy method".
However, Ai and Ai+1 are often not necessary and sufficient conditions for each other. As i grows, the premise carries less and less of the needed information, and the next state cannot be derived from the previous state alone. In that case we can adopt the following scheme:
  • {A1 -> A2}; {A1, A2 -> A3}; {A1, A2, A3 -> A4}; ... This is the second form of mathematical induction (strong induction).
  • If deriving Ai+1 requires all of the preceding states, we call this a higher-order Markov model, and the corresponding reasoning process is "dynamic programming".
The state transition diagrams of the two processes above are as follows (figure not reproduced here).
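A small example can make the contrast concrete (the denominations below are chosen for illustration and are not from the text): for coin change with coins {1, 3, 4} and amount 6, a greedy method that always takes the largest coin uses only its current state and returns 4+1+1 = 3 coins, while dynamic programming, which derives each state from all smaller states, finds 3+3 = 2 coins.

```python
def greedy_coins(coins, target):
    """Greedy: repeatedly take the largest coin that still fits."""
    count = 0
    for c in sorted(coins, reverse=True):
        while target >= c:
            target -= c
            count += 1
    return count if target == 0 else None

def dp_coins(coins, target):
    """DP: best[i] = minimum coins for amount i, derived from all
    smaller amounts best[i - c]."""
    INF = float("inf")
    best = [0] + [INF] * target
    for i in range(1, target + 1):
        for c in coins:
            if c <= i and best[i - c] + 1 < best[i]:
                best[i] = best[i - c] + 1
    return best[target]

print(greedy_coins([1, 3, 4], 6))  # 3 coins (4 + 1 + 1)
print(dp_coins([1, 3, 4], 6))      # 2 coins (3 + 3)
```

The greedy method is not wrong for every coin system, but as soon as the last state alone is not enough, only the dynamic programming recurrence remains correct.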
3. Principles of dynamic programming
  • Basic idea: if the optimal solution of the problem can be derived from the optimal solutions of its subproblems, solve the subproblems optimally first and then construct the optimal solution of the original problem from them; if the subproblems overlap heavily, solve them bottom-up, working gradually from the smallest subproblems up to the original problem.
  • Usage conditions: the problem can be divided into several related subproblems, and the solutions of the subproblems are reused.
    • Optimal substructure:
      • The optimal solution of the problem contains optimal solutions of its subproblems
      • Restricting attention to only those subproblems that appear in the optimal solution shrinks the subproblem set and reduces implementation complexity
      • The problem can be solved bottom-up
    • Overlapping subproblems: in the process of solving the problem, the solutions of many subproblems are used multiple times.
  • The design steps of a dynamic programming algorithm are as follows:
    • Analyze the structure of the optimal solution
    • Recursively define the cost of the optimal solution
    • Compute the cost of the optimal solution bottom-up, recording the information needed to construct the optimal solution
    • Construct the optimal solution from the recorded information
  • Features of dynamic programming:
    • The original problem is divided into a series of subproblems;
    • Each subproblem is solved only once and its result is saved in a table, where it can be looked up directly when needed later, avoiding repeated calculation and saving computation time
    • Bottom-up calculation
    • The optimal solution of the whole problem depends on the optimal solutions of the subproblems (the state transition equation): each subproblem is called a state, and the solution of the final state is reduced to the solutions of other states
  • Problems solvable by dynamic programming generally have three properties:
    (1) Optimization principle: if the solutions of the subproblems contained in an optimal solution of the problem are themselves optimal, the problem is said to have optimal substructure, i.e. it satisfies the optimization principle.
    (2) No aftereffect: once the state of a certain stage is determined, it is not affected by later decisions. In other words, the process after a given state depends only on that current state, not on how the state was reached.
    (3) Overlapping subproblems: the subproblems are not independent; a subproblem may be used many times in later stages of the decision process.

4. General approach to solving dynamic programming problems

Dynamic programming deals with multi-stage decision problems: starting from an initial state, the process reaches the end state through a sequence of intermediate decisions. These decisions form a decision sequence and determine an activity route (usually the optimal one) through the whole process, as shown below. The design of a dynamic programming solution follows a fairly fixed pattern, generally going through the following steps.

    Initial state → decision 1 → decision 2 → … → decision n → end state
    (1) Stage division: divide the problem into several stages according to its temporal or spatial characteristics. Note that the stages must be ordered (or orderable); otherwise the problem cannot be solved this way.
    (2) Determine the states and state variables: express the objective situation of the problem at each stage with distinct states. The choice of state must satisfy the no-aftereffect property.
    (3) Determine the decisions and write the state transition equation: decisions and state transitions are naturally linked, since a state transition derives the state of the current stage from the state and decision of the previous stage. So once the decisions are determined, the state transition equation can be written. In practice this is often done in reverse: the decision method and the state transition equation are determined from the relationship between the states of two adjacent stages.
    (4) Find the boundary conditions: the state transition equation is a recurrence, which needs a termination condition or boundary condition.
    In general, once the stages, states, and transition decisions are determined, the state transition equation (including its boundary conditions) can be written.
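Steps (1)-(4) can be walked through on a deliberately tiny problem (the staircase example is illustrative, not from the text): counting the ways to climb n steps taking 1 or 2 steps at a time. The stage is the current step, the state is the number of ways to reach it, the transition combines the two previous states, and the boundary conditions terminate the recurrence.

```python
# Stage/state: ways[i] = number of ways to reach step i.
# Transition:  ways[i] = ways[i-1] + ways[i-2]
# Boundary:    ways[0] = 1, ways[1] = 1

def climb(n):
    if n < 2:
        return 1
    ways = [0] * (n + 1)
    ways[0] = ways[1] = 1          # boundary conditions
    for i in range(2, n + 1):      # one stage per step, in order
        ways[i] = ways[i - 1] + ways[i - 2]
    return ways[n]

print(climb(5))  # 8
```

Note the no-aftereffect property here: ways[i] depends only on the states before it, never on how those states were themselves reached.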
    In practical applications, the design can follow these simplified steps:
    (1) Analyze the properties of an optimal solution and characterize its structure.
    (2) Define the optimal value recursively.
    (3) Compute the optimal value bottom-up, or top-down with memoization (the memo method).
    (4) Construct an optimal solution of the problem from the information recorded while computing the optimal value.
5. Description of the algorithm implementation
The main difficulty of dynamic programming lies in the design, that is, determining the four steps above. Once the design is complete, the implementation is straightforward.
The most important thing is to determine the three elements of dynamic programming:
(1) the stages of the problem; (2) the states of each stage;
(3) the recurrence relation between one stage and the next.
The recurrence relation must transform the next-smaller problem into the larger one; seen this way, dynamic programming can often be implemented with a recursive program. However, because the bottom-up recurrence fully reuses the saved solutions of the subproblems to avoid repeated calculation, for large-scale problems it has an advantage that plain recursion cannot match, and this is the core of the dynamic programming algorithm.
The optimal decision table is a two-dimensional table in which rows represent decision stages and columns represent problem states. The entry to be filled in generally corresponds to the optimal value of the problem at a certain stage and state (such as the shortest path, the longest common subsequence, the maximum value, etc.). The table is filled starting from the first row and first column, in row-major or column-major order according to the recurrence relation, and the optimal solution of the whole problem is finally obtained from the completed table by a simple comparison or computation.
A typical recurrence of this kind is the 0-1 knapsack transition:
          f(n, m) = max{ f(n-1, m), f(n-1, m-w[n]) + P(n, m) }
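A bottom-up sketch of filling this table follows, interpreting w[n] as the weight of item n and P(n, m) as the item's value p[n] (that reading of P is an assumption; the text does not define it).

```python
# f[i][m] = best value achievable using the first i items within capacity m,
# following f(n, m) = max{ f(n-1, m), f(n-1, m - w[n]) + p[n] }.

def knapsack(w, p, cap):
    n = len(w)
    f = [[0] * (cap + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):          # rows: decision stages (items)
        for m in range(cap + 1):       # columns: states (capacities)
            f[i][m] = f[i - 1][m]      # decision: skip item i
            if w[i - 1] <= m:          # decision: take item i, if it fits
                f[i][m] = max(f[i][m], f[i - 1][m - w[i - 1]] + p[i - 1])
    return f[n][cap]

print(knapsack([2, 3, 4, 5], [3, 4, 5, 6], 5))  # 7 (take weights 2 and 3)
```

Each cell is computed from the row above it, exactly the stage-by-stage table fill described in the paragraph before the recurrence.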
Steps of the algorithm implementation:

1. Create a one-dimensional or two-dimensional array to save the result of each subproblem. Which one to create depends on the problem. Roughly: if the problem gives a single one-dimensional array to operate on, a one-dimensional array is enough; if it gives two one-dimensional arrays, or two variables of different kinds, e.g. the volumes of the objects and the total volume in the knapsack problem, or the coin denominations and the total amount in the change-making problem, a two-dimensional array is needed.
Note: where a two-dimensional array would be needed, one can often create a one-dimensional array instead and use the rolling-array technique, i.e. keep overwriting the values of the one-dimensional array; this is described in detail later.
2. Set the boundary values of the array: for a one-dimensional array, set the first element; for a two-dimensional array, set the first row and the first column. For a rolling one-dimensional array in particular, initialize the whole array first, then update it with different values as the data changes.
3. Find the state transition equation, i.e. the relation between each state and the states before it, and write the code according to it.
4. Return the required value, usually the last element of the one-dimensional array or the bottom-right cell of the two-dimensional array.
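The rolling-array note in step 1 can be sketched on the knapsack recurrence: a single one-dimensional array is reused across stages, and iterating the capacity downwards ensures each item contributes at most once (the item weights and values below are illustrative).

```python
# f[m] always holds the best value achievable within capacity m using the
# items processed so far; one row of the 2-D table is "rolled" in place.

def knapsack_1d(w, p, cap):
    f = [0] * (cap + 1)                      # step 2: boundary values
    for i in range(len(w)):                  # one pass per item (stage)
        for m in range(cap, w[i] - 1, -1):   # step 3: transition, m downwards
            f[m] = max(f[m], f[m - w[i]] + p[i])
    return f[cap]                            # step 4: answer is the last cell

print(knapsack_1d([2, 3, 4, 5], [3, 4, 5, 6], 5))  # 7
```

Iterating m upwards instead would let f[m - w[i]] already include item i, silently turning the 0-1 knapsack into the unbounded one, which is why the downward order matters.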
Basic code framework:


for (j = 1; j <= m; j = j + 1)          // the first stage
    xn[j] = initial value;

for (i = n-1; i >= 1; i = i - 1)        // the other n-1 stages
    for (j = 1; j <= f(i); j = j + 1)   // f(i) is an expression related to i
        xi[j] = max (or min) { g(xi+1[j1 : j2]), ..., g(xi+1[jk : jk+1]) };

t = g(x1[j1 : j2]);  // derive the optimal solution of the whole problem
                     // from the optimal solutions of the subproblems
print(x1[j1]);
for (i = 2; i <= n-1; i = i + 1)
{
    t = t - xi-1[ji];
    for (j = 1; j <= f(i); j = j + 1)
        if (t == xi[j])
            break;
}

