How to solve a Dynamic Programming Problem?

To solve a problem using dynamic programming, we need to check for two necessary properties: 

  • Overlapping Subproblems: When solutions to the same subproblems are needed repeatedly while solving the larger problem, the problem is said to have the overlapping subproblems property.
     

N-th Fibonacci Series as Overlapping Subproblems
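To see overlapping subproblems concretely, here is a minimal Python sketch (the counter name `call_count` is ours, purely for illustration) that counts how often the naive recursive Fibonacci recomputes the same state:

```python
call_count = {}

def fib(n):
    # Naive recursion: the same state is recomputed many times.
    call_count[n] = call_count.get(n, 0) + 1
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

fib(5)
print(call_count)  # fib(2) alone is computed 3 times while evaluating fib(5)
```

Storing each result the first time it is computed removes these repeats, which is exactly what memoization does.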

  • Optimal Substructure Property: If the optimal solution of the given problem can be obtained from the optimal solutions of its subproblems, then the problem is said to have the Optimal Substructure Property.

Steps to solve a Dynamic programming problem:

  1. Identify if it is a Dynamic programming problem.
  2. Decide a state expression with the fewest parameters.
  3. Formulate state and transition relationships.
  4. Do tabulation (or memoization).

1) How to classify a problem as a Dynamic Programming algorithm Problem?

  • Typically, problems that require maximizing or minimizing certain quantities, counting problems that count arrangements under certain conditions, and certain probability problems can be solved using Dynamic Programming.
  • All dynamic programming problems satisfy the overlapping subproblems property, and most of the classic Dynamic programming problems also satisfy the optimal substructure property. Once we observe these properties in a given problem, we can be confident that it can be solved using Dynamic Programming.

2) Deciding the state:

Dynamic programming problems are mostly concerned with the state and its transitions. This most fundamental step must be carried out with extreme care, because the state transition depends on the state definition you choose.

State:

A state is a set of parameters that can uniquely identify a certain position or standing in the given problem. This set of parameters should be as small as possible to reduce the state space.

3) Formulating a relation among the states:

This step is the hardest part of a Dynamic Programming problem and calls for a lot of intuition, observation, and practice.

Example:

Given 3 numbers {1, 3, 5}, the task is to count the total number of ways we can form a number N using the sum of the given three numbers (repetitions and different arrangements are allowed).

The total number of ways to form 6 is: 8
1+1+1+1+1+1
1+1+1+3
1+1+3+1
1+3+1+1
3+1+1+1
3+3
1+5
5+1
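The eight arrangements listed above can be cross-checked with a small brute-force enumeration in Python (a throwaway sketch; `count_compositions` is our own helper name):

```python
from itertools import product

def count_compositions(n, parts=(1, 3, 5)):
    # Try every ordered sequence of parts of each possible length
    # and count those that sum to n.
    total = 0
    for length in range(1, n + 1):  # a sum of n uses at most n terms
        for seq in product(parts, repeat=length):
            if sum(seq) == n:
                total += 1
    return total

print(count_compositions(6))  # 8
```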

Following are the steps to solve the problem:

  • We choose a state for the given problem.
  • N will be used as the state parameter because it can uniquely identify any subproblem.
  • The DP state will look like state(N), where state(N) is the total number of arrangements required to form N using the elements 1, 3, and 5.
  • Next, we identify the transition relation between any two states and calculate state(N).

3.1) How to Compute the state?

Since we can only use 1, 3, or 5 to form a given number N, let us assume that we already know the results for N = 1, 2, 3, 4, 5, 6.

Let us say we know the result for:
state (n = 1), state (n = 2), state (n = 3) ……… state (n = 6) 
Now, we wish to know the result of state (n = 7). Since we can only add 1, 3, or 5, we can reach a sum total of 7 in the following 3 ways:

1) Adding 1 to all possible combinations of state (n = 6) 
Eg: [ (1+1+1+1+1+1) + 1] 
[ (1+1+1+3) + 1] 
[ (1+1+3+1) + 1] 
[ (1+3+1+1) + 1] 
[ (3+1+1+1) + 1] 
[ (3+3) + 1] 
[ (1+5) + 1] 
[ (5+1) + 1]

2) Adding 3 to all possible combinations of state (n = 4)
[(1+1+1+1) + 3] 
[(1+3) + 3] 
[(3+1) + 3]

3) Adding 5 to all possible combinations of state(n = 2) 
[ (1+1) + 5]

(Note how it is sufficient to add only on the right side – all the add-from-the-left cases are already covered, either in the same state or in another one; e.g., [ 1+(1+1+1+3) ] is not needed in state (n = 6) because it is covered by [(1+1+1+1) + 3] in state (n = 4).)

Now, think carefully and convince yourself that the above three cases cover all possible ways to form a sum total of 7.
Therefore, we can say that the result for 
state(7) = state (6) + state (4) + state (2) 
OR
state(7) = state (7-1) + state (7-3) + state (7-5)
In general, 
state(n) = state(n-1) + state(n-3) + state(n-5)

Below is the implementation for the above approach:

C++




// Returns the number of arrangements to
// form 'n'
int solve(int n)
{
    // base case
    if (n < 0)
        return 0;
    if (n == 0)
        return 1;

    return solve(n - 1) + solve(n - 3) + solve(n - 5);
}


Java




import java.io.*;
  
class GFG {
  
    // Returns the number of arrangements to
    // form 'n'.
    public static int solve(int n)
    {
        // base case
        if (n < 0)
            return 0;
        if (n == 0)
            return 1;
  
        return solve(n - 1) + solve(n - 3) + solve(n - 5);
    }
    public static void main(String[] args) {}
}


Python3




# Python program that returns the number of arrangements to form 'n'
def solve(n):
    # base case
    if n < 0:
        return 0
    if n == 0:
        return 1
    return solve(n - 1) + solve(n - 3) + solve(n - 5)

# This code is contributed by ishankhandelwals.


C#




using System;
  
public class GFG {
  
    // Returns the number of arrangements to
    // form 'n'
    public static int solve(int n)
    {
        // base case
        if (n < 0)
            return 0;
        if (n == 0)
            return 1;
  
        return solve(n - 1) + solve(n - 3) + solve(n - 5);
    }
  
    static public void Main() {}
}
  
// This code is contributed by akashish__


Javascript




// JS code that returns the number of arrangements to
// form 'n'
function solve(n)
{
    // base case
    if (n < 0)
        return 0;
    if (n == 0)
        return 1;
    return solve(n - 1) + solve(n - 3) + solve(n - 5);
}
  
// This code is contributed by ishankhandelwals.


Time Complexity: O(3^n), as at every stage we branch into three recursive calls and the height of the recursion tree is of the order of n.
Auxiliary Space: O(n), the extra space used by the recursion call stack.

The above code takes exponential time because it calculates the same states again and again. To fix this, we just need to add memoization.

4) Adding memoization or tabulation for the state

This is the simplest part of a dynamic programming solution: simply store the solution of each state, so the next time that state is needed we can fetch it from memory instead of recomputing it.

Adding memoization to the above code:

C++




// dp[] must be filled with -1 before the first call
// (e.g. memset(dp, -1, sizeof(dp)))
int dp[MAXN];

// This function returns the number of
// arrangements to form 'n'
int solve(int n)
{
    // base case
    if (n < 0)
        return 0;
    if (n == 0)
        return 1;

    // Checking if already calculated
    if (dp[n] != -1)
        return dp[n];

    // Storing the result and returning
    return dp[n] = solve(n - 1) + solve(n - 3) + solve(n - 5);
}


Java




// dp[] must be filled with -1 before the first call
// (e.g. Arrays.fill(dp, -1))
int[] dp = new int[MAXN];
  
// This function returns the number of
// arrangements to form 'n'
int solve(int n)
{
    // base case
    if (n < 0)
        return 0;
    if (n == 0)
        return 1;
  
    // Checking if already calculated
    if (dp[n] != -1)
        return dp[n];
  
    // Storing the result and returning
    return dp[n]
        = solve(n - 1) + solve(n - 3) + solve(n - 5);
}


Python3




# dp is initialized to -1; MAXN must be large enough for n
dp = [-1] * MAXN

# This function returns the number of
# arrangements to form 'n'
def solve(n):
    # base case
    if n < 0:
        return 0
    if n == 0:
        return 1

    # Checking if already calculated
    if dp[n] != -1:
        return dp[n]

    # Storing the result and returning
    dp[n] = solve(n - 1) + solve(n - 3) + solve(n - 5)
    return dp[n]

# This code is contributed by ishankhandelwals.


C#




using System;
  
public class GFG
{
    
    // dp[] must be filled with -1 before the first call
    // (e.g. Array.Fill(dp, -1))
    public static int[] dp = new int[MAXN];

    // This function returns the number of
    // arrangements to form 'n'
    public static int solve(int n)
    {
        // base case
        if (n < 0)
            return 0;
        if (n == 0)
            return 1;

        // Checking if already calculated
        if (dp[n] != -1)
            return dp[n];

        // Storing the result and returning
        return dp[n] = solve(n - 1) + solve(n - 3) + solve(n - 5);
    }
  
    static public void Main (){
  
        // Code
    }
}
  
// This code is contributed by akashish__


Javascript




// Initialize to -1
let dp = [];
for (let i = 0; i < MAXN; i++) {
    dp.push(-1);
}
  
// This function returns the number of
// arrangements to form 'n'
function solve(n) {
    // base case
    if (n < 0)
        return 0;
    if (n == 0)
        return 1;
  
    // Checking if already calculated
    if (dp[n] != -1)
        return dp[n];
  
    // Storing the result and returning
    return dp[n] = solve(n - 1) + solve(n - 3)
        + solve(n - 5);
}
// contributed by akashish__


Time Complexity: O(n), as we make at most 3n function calls and there are no repeated calculations, since previously computed results are returned directly.
Auxiliary Space: O(n), the extra space used by the dp[] array and the recursion call stack.
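The same recurrence can also be filled in bottom-up (tabulation), which avoids recursion entirely. Here is a minimal Python sketch of tabulation for state(n) = state(n-1) + state(n-3) + state(n-5), with `solve_tab` as our own illustrative name:

```python
def solve_tab(n):
    # dp[i] holds the number of arrangements that sum to i
    dp = [0] * (n + 1)
    dp[0] = 1  # one way to form 0: the empty sum
    for i in range(1, n + 1):
        for step in (1, 3, 5):
            if i - step >= 0:
                dp[i] += dp[i - step]
    return dp[n]

print(solve_tab(6))  # 8, matching the enumeration earlier in this article
```

Tabulation uses the same O(n) time but replaces the recursion call stack with an explicit O(n) table.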

