Examples of dynamic programming programs, with a detailed explanation of the dynamic programming algorithm

Updated 2024-03-21
9 answers
  1. Anonymous users2024-02-07

    I've written a piece on dynamic programming based on my own experience; it should help you understand it from a simple perspective.

    Follow "Computing Advertising Ecology" and reply "DP" to get a thorough explanation of dynamic programming.

  2. Anonymous users2024-02-06

    Dynamic programming can only be applied to problems with optimal substructures. The optimal substructure means that the local optimal solution determines the global optimal solution (for some problems, this requirement cannot be fully satisfied, so sometimes a certain approximation needs to be introduced). Simply put, a problem can be solved by breaking it down into sub-problems.

    The problem to be solved is decomposed into a number of sub-problems; the sub-problems are solved first, and then the solution of the original problem is assembled from the solutions of these sub-problems (this part is similar to the divide-and-conquer method). Unlike in divide and conquer, in problems suited to dynamic programming the sub-problems obtained by decomposition are often not independent of each other. If such a problem is solved by divide and conquer, the number of sub-problems generated is too large, and some sub-problems are recomputed many times.

    If we can save the answers to the solved sub-problems and look them up when needed, we can avoid a great deal of repeated computation and save time. It is usually possible to record the answers to all solved sub-problems in a single table.

    The solutions to the sub-problems contained in an optimal solution of the problem are themselves optimal: the overall problem contains many sub-problems, and the solutions to those sub-problems are also optimal.

    When a problem is solved top-down with a recursive algorithm, the sub-problems that arise are not always new: some sub-problems are recomputed many times. This is what is meant by overlapping sub-problems. The dynamic programming algorithm takes advantage of this overlap: it computes each sub-problem only once and saves the result in a table; when an already-solved sub-problem is needed again, it simply looks the result up in the table, thereby achieving much higher efficiency.

    Obviously, the corresponding mathematical expression for this problem is:

    f(n) = f(n-1) + f(n-2) for n > 2, where f(1) = 1, f(2) = 2. It is natural to solve this with a recursive function.
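    The recurrence with f(1) = 1, f(2) = 2 can be sketched both naively and with memoization; the memoized version computes each sub-problem only once (a minimal sketch, not from the original answer):

```python
from functools import lru_cache

def f_naive(n):
    """Plain recursion: recomputes the same sub-problems many times."""
    if n <= 2:
        return n
    return f_naive(n - 1) + f_naive(n - 2)

@lru_cache(maxsize=None)
def f_memo(n):
    """Same recurrence, but each sub-problem is computed only once."""
    if n <= 2:
        return n
    return f_memo(n - 1) + f_memo(n - 2)

print(f_memo(10))  # 89
```

    The naive version takes exponential time because of the overlapping sub-problems; the memoized version is linear.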

  3. Anonymous users2024-02-05

    Compared with other algorithms, dynamic programming greatly reduces the amount of computation and produces richer results: it finds not only the optimal value from the initial state to the target state, but also the optimal values to intermediate states, which is very useful for many practical problems. Compared with general algorithms, dynamic programming also has a disadvantage: it occupies a lot of space. But for problems whose space requirements are modest, dynamic programming is often the best method.

    Both dynamic programming and greedy algorithms are common methods for constructing optimal solutions. Dynamic programming has no fixed problem-solving template, and applying it requires considerable skill.

    Dynamic programming is a branch of operations research: it is the process of optimizing a multi-stage decision process. In the early 1950s, the American mathematician Richard Bellman and others put forward the famous principle of optimality while studying multi-stage decision problems, thus creating dynamic programming.

  4. Anonymous users2024-02-04

    Dynamic programming algorithms.

    Similar to the divide-and-conquer method, the basic idea is to decompose the problem to be solved into a number of sub-problems.

    However, the sub-problems obtained by decomposition are often not independent of each other, while the number of distinct sub-problems is often only polynomial. When solving with divide and conquer, some sub-problems are recomputed many times.

    If you save the answers to the solved sub-problems and look them up when needed, you can avoid a great deal of repeated computation and obtain a polynomial-time algorithm.

    Solving steps for dynamic programming.

    a.Find out the properties of the optimal solution and characterize its structure.

    b.Recursively define the optimal value.

    c.The optimal value is calculated in a bottom-up fashion.

    d.Based on the information obtained when calculating the optimal value, the optimal solution is constructed.
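    The four steps above can be sketched on a standard textbook example, rod cutting (an illustration of my choosing, not from the answer itself): the optimal value is computed bottom-up (step c), and the recorded first cuts are used to construct an optimal solution (step d).

```python
def cut_rod(prices, n):
    """Bottom-up rod cutting.
    prices[i] is the price of a piece of length i (prices[0] is unused).
    Returns (best revenue for length n, list of piece lengths achieving it)."""
    best = [0] * (n + 1)       # best[j]: optimal revenue for length j
    first_cut = [0] * (n + 1)  # first piece used in an optimal cut of length j
    for j in range(1, n + 1):  # step c: compute optimal values bottom-up
        for i in range(1, j + 1):
            if prices[i] + best[j - i] > best[j]:
                best[j] = prices[i] + best[j - i]
                first_cut[j] = i
    pieces = []                # step d: reconstruct an optimal solution
    while n > 0:
        pieces.append(first_cut[n])
        n -= first_cut[n]
    return best[-1], pieces

prices = [0, 1, 5, 8, 9, 10, 17, 17, 20]
print(cut_rod(prices, 8))  # (22, [2, 6])
```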

  5. Anonymous users2024-02-03

    a.Calculated from the bottom up.

    b.Calculated from the top down.

    c.Calculated from large to small.

    d.Calculate from small to large.

    Correct answer: AD
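    Both orders named in the answer can be demonstrated on the same recurrence (a small sketch of mine, using the Fibonacci numbers as the example): a top-down memoized version and a bottom-up version that fills values from small to large.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib_top_down(n):
    # Top-down: recurse from n, caching each sub-problem's result.
    return n if n < 2 else fib_top_down(n - 1) + fib_top_down(n - 2)

def fib_bottom_up(n):
    # Bottom-up: compute values from small to large, keeping only two.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib_top_down(20), fib_bottom_up(20))  # 6765 6765
```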

  6. Anonymous users2024-02-02

    This problem does not even need dynamic programming.

    1, 2, 4, 8 ......

    A geometric sequence whose n-th term is 2^(n-1) meets the requirement.
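    The answer seems to rely on the fact that the terms 1, 2, 4, ..., 2^(n-1) can represent every integer up to 2^n - 1 (this reading of the underlying problem is an assumption; the problem statement itself is not in the thread). A greedy decomposition, which is just the binary representation, illustrates it:

```python
def powers_needed(target):
    """Decompose target into distinct powers of two, largest first
    (equivalent to reading off its binary representation)."""
    pieces, p = [], 1
    while p * 2 <= target:  # largest power of two not exceeding target
        p *= 2
    while target:
        if p <= target:
            pieces.append(p)
            target -= p
        p //= 2
    return pieces

print(powers_needed(13))  # [8, 4, 1]
```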

  7. Anonymous users2024-02-01

    1. Describe the structural characteristics of the optimal solution.

    2. Recursively define the value of an optimal solution.

    3. Calculate the value of an optimal solution from the bottom up.

    4. Construct an optimal solution from the calculated information.

    I. Basic concepts.

    Dynamic programming is a process in which each decision depends on the current state and then causes a state transition. A decision sequence is generated as the state changes, so this multi-stage process of optimal decision-making is called dynamic programming.

    II. Basic ideas and strategies.

    The basic idea is similar to the divide-and-conquer method: decompose the problem to be solved into several sub-problems (stages) and solve the stages in order, with the solution of each earlier sub-problem providing useful information for the solution of the later one. When solving any sub-problem, list the various possible local solutions, keep those that may lead to an optimum, and discard the others. The sub-problems are solved in turn, and the solution of the last sub-problem is the solution of the original problem.

    Since most of the problems solved by dynamic programming have overlapping sub-problems, each sub-problem is solved only once in order to reduce repeated computation, and the states of the different stages are stored in a two-dimensional array.

    The biggest difference from the divide-and-conquer method is that the sub-problems obtained by decomposition are often not independent of each other (that is, the solution of the next stage is built on the solution of the previous stage).

    III. Applicable Circumstances.

    Problems that can be solved by dynamic programming generally have three properties:

    1) Optimality principle: if the solutions to the sub-problems contained in an optimal solution of the problem are themselves optimal, the problem is said to have optimal substructure, that is, it satisfies the optimality principle.

    2) No aftereffect: once the state of a given stage is determined, it is not affected by decisions made in later stages. That is, the process after a given state depends only on that state, not on how the state was reached.

    3) There are overlapping sub-problems: that is, the sub-problems are not independent of each other, and a sub-problem may be used many times in the next stage of decision-making. (This property is not necessary for dynamic programming to apply, but without it, dynamic programming algorithms do not have an advantage over other algorithms).
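    All three properties, and the two-dimensional array mentioned above, can be seen in the standard 0/1 knapsack problem (a textbook illustration of my choosing, not taken from this answer):

```python
def knapsack(weights, values, capacity):
    """0/1 knapsack with a two-dimensional table:
    dp[i][c] = best value using the first i items with capacity c."""
    n = len(weights)
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for c in range(capacity + 1):
            dp[i][c] = dp[i - 1][c]                  # skip item i
            if weights[i - 1] <= c:                  # or take item i
                take = dp[i - 1][c - weights[i - 1]] + values[i - 1]
                dp[i][c] = max(dp[i][c], take)
    return dp[n][capacity]

print(knapsack([2, 3, 4, 5], [3, 4, 5, 6], 5))  # 7
```

    Each table entry depends only on the previous row (no aftereffect), each is computed once (overlapping sub-problems handled by the table), and each is built from optimal sub-answers (optimal substructure).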

  8. Anonymous users2024-01-31

    Step 1: Describe the structural features of the optimal solution.

    Step 2: Recursively define the value of an optimal solution.

    Step 3: Calculate the value of an optimal solution from the bottom up.

    Step 4: Construct an optimal solution from the calculated information.
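    Step 4, constructing an optimal solution from the computed information, can be sketched with the longest common subsequence (an example of my choosing): after filling the table bottom-up, walk it backwards to recover an actual subsequence, not just its length.

```python
def lcs(a, b):
    """Longest common subsequence of strings a and b,
    returning the subsequence itself."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):          # steps 2-3: fill the table bottom-up
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    out, i, j = [], m, n               # step 4: walk the table backwards
    while i and j:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1]); i -= 1; j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))

print(lcs("ABCBDAB", "BDCABA"))  # BCBA
```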

  9. Anonymous users2024-01-30

    Once the state is represented well, the state-transition equation follows naturally.
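    The coin-change problem is a small illustration of this point (my example, not the answerer's): choosing the state as "minimum coins for amount a" makes the transition equation almost write itself.

```python
def min_coins(coins, amount):
    """State: dp[a] = minimum number of coins needed to make amount a.
    Transition: dp[a] = min(dp[a - c] + 1) over every coin c <= a."""
    INF = float("inf")
    dp = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and dp[a - c] + 1 < dp[a]:
                dp[a] = dp[a - c] + 1
    return dp[amount] if dp[amount] != INF else -1

print(min_coins([1, 2, 5], 11))  # 3  (5 + 5 + 1)
```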
