Dynamic Programming – Algorithm Introduction

Time: 2021-8-8

Algorithm Introduction

Dynamic programming (DP) is a method used in mathematics, management science, computer science, economics, and bioinformatics to solve complex problems by decomposing the original problem into relatively simple subproblems.
Dynamic programming is typically applied to problems with overlapping subproblems and optimal substructure. It often takes far less time than a naive solution.
The basic idea behind dynamic programming is simple. In general, to solve a given problem, we solve its different parts (the subproblems), then combine the subproblem solutions into a solution for the original problem. Dynamic programming is often used to optimize recursive computations such as the Fibonacci sequence: naive recursion recomputes many identical subproblems, and applying dynamic programming removes that redundant work.
Usually many subproblems are very similar, so the dynamic programming method tries to solve each subproblem only once, which naturally prunes the computation: once the solution of a given subproblem has been calculated, it is memoized and stored, so the next time the same subproblem arises its solution can simply be looked up. This approach is particularly useful when the number of repeated subproblems grows exponentially with the size of the input.

Fibonacci sequence

Definition: the sequence starts with 0 and 1, and each subsequent number is the sum of the previous two.

1. Brute force

Even for the brute-force solution, we can use the idea of dynamic programming to write down the state transition equation:

$$
f(n)=f(n-1)+f(n-2)
$$

The code implementation comes out:

def fib(n):
    # Boundary condition (base case)
    if n in (1,2):
        return 1
    return fib(n-1) + fib(n-2)

In fact, brute-force recursion is inefficient, as drawing the recursion tree makes clear:

(Figure: recursion tree of the brute-force fib(20))

To calculate f(20), we first calculate f(19) and f(18); to calculate f(19), f(18) is calculated again. This repeated calculation of the same subproblems is what makes the naive recursion inefficient.
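The repeated work is easy to quantify. A small sketch (the `call_count` instrumentation is mine, not part of the original code) counts how many times the brute-force function is invoked for n = 20:

```python
call_count = 0

def fib_naive(n):
    # Count every invocation to expose the repeated subproblems.
    global call_count
    call_count += 1
    if n in (1, 2):
        return 1
    return fib_naive(n - 1) + fib_naive(n - 2)

result = fib_naive(20)
print(result, call_count)  # 6765 13529
```

The call count is 2·fib(n) − 1, so it grows exponentially just like the answer itself.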

2. Memoization

Use an array or dictionary to store values that have already been computed, like a cache, to avoid repeated work.

The code implementation is as follows:

from typing import List


def fib(n: int, tb: List):
    # Boundary condition (base case); the caller passes tb = [None] * n
    if n in (1, 2):
        return 1
    # tb caches fib(i) at index i - 1; compute only on a cache miss
    if tb[n - 1] is None:
        tb[n - 1] = fib(n - 1, tb) + fib(n - 2, tb)
    return tb[n - 1]
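As an aside, Python's standard library can apply the same memoization automatically via `functools.lru_cache`; a sketch of that alternative to the hand-managed list:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n: int) -> int:
    # lru_cache stores each computed fib(n), so every n is solved once.
    if n in (1, 2):
        return 1
    return fib(n - 1) + fib(n - 2)

print(fib(20))  # 6765
```

The decorator handles the cache lookup and storage that the `tb` list does by hand.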

The recursion diagram is as follows:

(Figure: recursion tree after memoization)

In this way, the redundant calculations in the recursion tree are removed, and the time complexity drops from exponential (roughly O(2^n) for the naive recursion) to O(n), a dramatic improvement.

In terms of the direction of reasoning, this is a top-down method: we start from the final result, i.e. the root of the recursion tree, and recurse downward until the base cases return, as shown in the following figure:

(Figure: top-down recursion order)

3. Bottom-up iteration with a DP array

(Figure: bottom-up iteration order)

In fact, we can also solve the problem iteratively from the bottom up, building f(20) from the smallest cases f(1) and f(2). The code is as follows:

def fib(n):
    if n in (1, 2):
        return 1
    # dp[i] holds the i-th Fibonacci number
    dp = [0] * (n + 1)
    dp[1] = dp[2] = 1
    for i in range(3, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]
    return dp[n]

4. DP array space optimization

Notice that each result depends only on the previous two results, so we only need to keep those two values, reducing the space to O(1).

def fib(n):
    if n in (1, 2):
        return 1
    # dp_1 holds f(i), dp_2 holds f(i-1)
    dp_1 = dp_2 = 1
    for i in range(3, n + 1):
        dp_1, dp_2 = dp_1 + dp_2, dp_1
    return dp_1
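To gain confidence that the space optimization preserves behavior, the two versions can be cross-checked against each other (the function names here are mine):

```python
def fib_array(n):
    # Full DP array version, O(n) space.
    if n in (1, 2):
        return 1
    dp = [0] * (n + 1)
    dp[1] = dp[2] = 1
    for i in range(3, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]
    return dp[n]

def fib_two_vars(n):
    # Rolling-variable version, O(1) space.
    if n in (1, 2):
        return 1
    dp_1 = dp_2 = 1
    for _ in range(3, n + 1):
        dp_1, dp_2 = dp_1 + dp_2, dp_1
    return dp_1

# The two versions should agree on every input.
assert all(fib_array(n) == fib_two_vars(n) for n in range(1, 30))
print(fib_two_vars(20))  # 6765
```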

The coin change problem

Let’s look at the following problem first: you are given coins of k face values c1, c2, …, ck, each available in unlimited quantity, and a total amount. Find the least number of coins needed to make up that amount exactly; if it cannot be done, the algorithm returns -1.

Top-down thinking

Thinking steps:

  1. This problem has optimal substructure, and its subproblems are independent of each other, so dynamic programming applies.
  2. Define the DP function: dp(amount) = n means that at least n coins are needed to make up amount. This definition comes naturally: the only variable in the problem statement is the amount, and the quantity we want is the minimum number of coins n.
  3. List the state transition equation:

    $$
    dp(amount)=\min\big(dp(amount-c_1)+1,\ dp(amount-c_2)+1,\ \dots\big)
    $$

  4. Note the boundary conditions: dp(0) = 0 (an amount of 0 needs no coins), and the amount is impossible to make up when the recursion drives it negative, i.e. when it is positive but smaller than every coin denomination.

The code implementation is as follows:

from typing import List


def min_coin_num(coins: List, amount: int):
    def dp(n):
        # Boundary conditions
        if n == 0:  # amount is 0: no coins needed
            return 0
        if n < 0:  # amount is negative: this branch has no solution
            return -1
        ret = float("inf")
        for coin in coins:
            sub_problem = dp(n - coin)
            if sub_problem == -1:
                continue
            ret = min(ret, sub_problem + 1)
        return ret if ret != float("inf") else -1

    return dp(amount)
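A quick usage check of the top-down version (the function is repeated here verbatim so the snippet runs standalone; the example coin sets and amounts are mine):

```python
from typing import List

def min_coin_num(coins: List, amount: int):
    def dp(n):
        if n == 0:
            return 0
        if n < 0:
            return -1
        ret = float("inf")
        for coin in coins:
            sub_problem = dp(n - coin)
            if sub_problem == -1:
                continue
            ret = min(ret, sub_problem + 1)
        return ret if ret != float("inf") else -1

    return dp(amount)

print(min_coin_num([1, 2, 5], 11))  # 3, e.g. 5 + 5 + 1
print(min_coin_num([2], 3))         # -1, odd amounts are unreachable
```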

Drawing the recursion tree shows that there are still redundant calculations. We can optimize this with a memo that records computed results, so the next time a result is needed it does not have to be recomputed.

(Figure: recursion tree for the coin change problem)

Optimized code:

from typing import List


# Memoized version
def min_coin_num(coins: List, amount: int):
    memo = [None] * (amount + 1)

    def dp(n):
        # Boundary conditions (checked before the memo lookup so that a
        # negative n never indexes the list)
        if n == 0:  # amount is 0: no coins needed
            return 0
        if n < 0:  # amount is negative: this branch has no solution
            return -1
        if memo[n] is not None:
            return memo[n]
        ret = float("inf")
        for coin in coins:
            sub_problem = dp(n - coin)
            if sub_problem == -1:
                continue
            ret = min(ret, sub_problem + 1)
        # Record the result in the memo
        memo[n] = ret if ret != float("inf") else -1
        return memo[n]

    return dp(amount)

An array serves as the memo here; a dictionary works just as well.
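For example, a dictionary-based memo needs no pre-sizing; a sketch in which only the memo container differs from the array version:

```python
from typing import Dict, List

def min_coin_num(coins: List, amount: int):
    memo: Dict[int, int] = {}  # maps amount -> least number of coins

    def dp(n):
        if n == 0:
            return 0
        if n < 0:
            return -1
        if n in memo:
            return memo[n]
        ret = float("inf")
        for coin in coins:
            sub_problem = dp(n - coin)
            if sub_problem == -1:
                continue
            ret = min(ret, sub_problem + 1)
        # The dict grows on demand, so no [None] * (amount + 1) is needed.
        memo[n] = ret if ret != float("inf") else -1
        return memo[n]

    return dp(amount)

print(min_coin_num([1, 2, 5], 11))  # 3
```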

Bottom-up thinking

The top-down direction recursively decomposes the final problem into subproblems one by one. Alternatively, we can compute the results from the bottom up: starting from the initial cases and iterating a finite number of times until we reach the final result.

Code implementation:

from typing import List


# Bottom-up
def min_coin_num(coins: List, amount: int):
    # dp[n] = least number of coins needed to make up amount n
    dp = [float("inf")] * (amount + 1)
    dp[0] = 0

    for n in range(1, amount + 1):
        for coin in coins:
            if coin <= n:
                dp[n] = min(dp[n], dp[n - coin] + 1)

    return dp[amount] if dp[amount] != float("inf") else -1
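To see the filling order concretely, it can help to return the whole table for a toy input (the helper name `coin_dp_table` is mine):

```python
from typing import List

def coin_dp_table(coins: List, amount: int) -> List:
    # Same bottom-up loop as above, but the full table is returned
    # so the intermediate entries can be inspected.
    dp = [float("inf")] * (amount + 1)
    dp[0] = 0
    for n in range(1, amount + 1):
        for coin in coins:
            if coin <= n:
                dp[n] = min(dp[n], dp[n - coin] + 1)
    return dp

# Each entry dp[n] is the least number of coins that make up n.
print(coin_dp_table([1, 2, 5], 7))  # [0, 1, 1, 2, 2, 1, 2, 2]
```

Every dp[n] is computed only from entries dp[n - coin] that are already final, which is why the iteration can proceed in a single forward pass.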

Method summary

Applicable situations: the problem has optimal substructure, and its subproblems are independent of each other.

Thinking directions: 1. top-down recursion; 2. bottom-up finite iteration.

State transition equation: the general form is $dp(\text{variable}_1, \text{variable}_2, \dots) = \text{target result}$.

Optimization method: an array or dictionary is used as a memo to record the results of the intermediate subproblem to avoid repeated calculation.