
Sometimes code takes a long time to run, or never finishes at all, even if your logic is exemplary. Dynamic programming converts a big problem into several smaller problems of the same type and, as Algo.Monster shows, by solving these smaller problems we break down the big one. While solving the smaller problems, we record the solutions that would otherwise be recomputed and store them in an array, so that the next time we encounter the same small problem we can simply look up the result. Dynamic programming is an art, and the more problems you solve, the easier it becomes. Solving such problems requires a different mindset: optimizing code by following the ideas of dynamic programming.

What is dynamic programming?

Wikipedia defines dynamic programming as follows:

Dynamic programming solves a complex problem by breaking it down into a series of simpler subproblems, solving each subproblem only once, and storing their solutions.

In other words, dynamic programming must have the following three characteristics.

  1. Decompose the original problem into several similar subproblems. (Emphasis on “similar subproblems”)
  2. All subproblems need to be solved only once. (Emphasis on “only once”)
  3. Store the solutions of the subproblems. (Emphasis on “store”)

We can summarize as follows: to solve a complex problem, decompose it into several simpler subproblems, then build the optimal solution of the target problem from the optimal solutions of those subproblems.

What is memoization?

Memoization is a top-down approach to problem-solving with dynamic programming. It is called memoization because we create a memo, a note to ourselves, of the value returned by each subproblem we solve.

Please note that memoization (without an “r”) is the correct spelling, not memorization.

Memoization means there is no re-computation, which makes the algorithm more efficient. Thus, memoization guarantees efficiency in dynamic programming, while selecting the right subproblems ensures that the dynamic program goes through all the possibilities to find the best one.
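To make this concrete, here is a minimal sketch (my own illustration, not from the article) contrasting a naive Fibonacci function, which recomputes the same subproblems exponentially many times, with a memoized version that solves each subproblem once:

```python
from functools import lru_cache

# Naive version: fib_naive(n) recomputes fib_naive(k) for small k
# over and over, so its running time grows exponentially with n.
def fib_naive(n):
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

# Memoized version: the cache stores each subproblem's answer,
# so every fib_memo(k) is computed exactly once.
@lru_cache(maxsize=None)
def fib_memo(n):
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

print(fib_memo(50))  # 12586269025, returned almost instantly
```

Calling `fib_naive(50)` would take minutes; the memoized version answers immediately because only 51 distinct subproblems exist.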

Now that we understand memoization, it is time to learn the dynamic programming process.

How to solve dynamic programming problems?

This question really asks the same thing we have been discussing: how do you learn dynamic programming?

Dynamic programming is about using past results (histories) to avoid recomputation. We need some variables to keep these histories, and they are usually kept in a one-dimensional or two-dimensional array. Let’s talk about the three essential steps of dynamic programming.

Three steps to easy analysis

Step 1: Define the meaning of each element in the array.

Step 2: Find the relationship between the elements in the array. Dynamic programming is about finding the optimal substructure that turns a large problem into smaller ones.

Step 3: Find the initial values.
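The three steps above can be sketched on the classic climbing-stairs problem (count the ways to reach step n taking 1 or 2 steps at a time). This example is my own illustration of the steps, not taken from the article:

```python
def climb_stairs(n):
    if n < 2:
        return 1
    # Step 1: dp[i] means the number of distinct ways to reach step i.
    dp = [0] * (n + 1)
    # Step 3: initial values -- one way to be at step 0 or step 1.
    dp[0] = 1
    dp[1] = 1
    # Step 2: relationship -- the last move was either 1 step or 2 steps,
    # so dp[i] = dp[i - 1] + dp[i - 2].
    for i in range(2, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]
    return dp[n]

print(climb_stairs(10))  # 89
```

Notice the order of work: decide what each array element means first, then the recurrence between elements, then the initial values that anchor the recurrence.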

For example

Suppose the problem has N stages; each stage has more than one state, the number of states at different stages is not necessarily the same, and a state in one stage can lead to several states in the next stage. Then we naturally have to compute the states of the final stage from the states of each previous stage.

The good news is that sometimes we don’t need to track all the states, as in the simple chessboard problem: how many moves does it take to get from the top-left corner of the board to the bottom-right corner? Such questions mainly help us understand stages and states.

There can indeed be multiple states in a given stage, just as we can reach many positions after N moves in this problem. But which state gets us furthest at step N+1? That’s right: the position that is farthest at stage N. In a familiar phrase, “the next optimum is obtained from the current optimum.” So to compute the final optimum it is sufficient to store the optimum of each stage, and an algorithm that solves problems of this nature is called greedy. The computation itself is a simple recurrence.
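The “farthest position per stage” idea resembles the classic minimum-jumps problem (from each index you can jump at most `nums[i]` steps; reach the last index in the fewest jumps). A minimal sketch of this greedy pattern, my own illustration rather than the article’s:

```python
def min_jumps(nums):
    # Stage = number of jumps taken so far.
    # We store only the optimum of each stage: the farthest index reachable.
    jumps = 0
    current_end = 0   # farthest index reachable with `jumps` moves
    farthest = 0      # farthest index reachable with one more move
    for i in range(len(nums) - 1):
        farthest = max(farthest, i + nums[i])
        if i == current_end:      # finished exploring this stage
            jumps += 1
            current_end = farthest
    return jumps

print(min_jumps([2, 3, 1, 1, 4]))  # 2  (jump 0 -> 1 -> 4)
```

Only one number per stage is stored, which is exactly why the greedy approach is so cheap when “next optimum from current optimum” holds.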

It’s good when problems divide neatly into stages and states. This way we handle a large class of problems at once: those where the optimal solution of a stage can be obtained from the optimal solution of the previous stage. But what if the optimal solution cannot be obtained from the last stage alone?


What if the previous two stages are needed to get the current optimum?

That is not substantially different from using only one previous stage. The most problematic case is when you need all previous stages to work.

Let’s take another example: a maze.

When computing the shortest route from the start to the end, you can’t just save the state of the current stage, because the problem requires the shortest route overall, so you have to know which positions you have passed through before. Even if your current position is the same, a different earlier route will affect your options afterward. What you need to save at this point is every state experienced in each previous stage, and you compute the next stage based on that information.

There may not be many states in each stage, but because each state can branch into multiple states in the next stage, the number of candidate solutions, and therefore the time complexity, grows exponentially. Oh, and the situation just mentioned, where the earlier route affects the choice of the next step, is called having posteriority (an after-effect).
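By contrast, when a grid problem has no such after-effect, one value per state of the current stage is all you need. A minimal sketch (my own illustration, not from the article) for the minimum path sum in a grid, moving only right or down:

```python
def min_path_sum(grid):
    rows, cols = len(grid), len(grid[0])
    # dp[c] holds the minimum cost to reach column c of the current row;
    # only the previous stage (row) matters, never the full route taken.
    dp = [float("inf")] * cols
    dp[0] = 0
    for r in range(rows):
        dp[0] += grid[r][0]
        for c in range(1, cols):
            # Arrive from above (old dp[c]) or from the left (dp[c - 1]).
            dp[c] = min(dp[c], dp[c - 1]) + grid[r][c]
    return dp[-1]

print(min_path_sum([[1, 3, 1],
                    [1, 5, 1],
                    [4, 2, 1]]))  # 7  (path 1 -> 3 -> 1 -> 1 -> 1)
```

Because the best cost of a cell depends only on the cells above and to its left, storing one row of states keeps the computation polynomial instead of exponential.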

Is dynamic programming used in real life?


In Google Maps, dynamic programming can be used to find the shortest path between a source and a series of destinations among the various available routes.

In networking, dynamic programming is used to transfer data from a single sender to multiple receivers in a sequential fashion.

And of course, there are some of our favorite things: shopping, supermarket coupons, and e-commerce coupons from various sources, where dynamic programming can save us a lot of time and money.
