
Climbing Stairs: Your First Dynamic Programming Problem

7 min read
Tags: leetcode · problem · easy · dynamic-programming

You are at the bottom of a staircase with n steps. Each time you can climb either 1 step or 2 steps. How many distinct ways can you reach the top?

This is LeetCode 70: Climbing Stairs, and it is the single best introduction to dynamic programming. If you have ever stared at a DP problem and thought "I have no idea where to start," this is the one to learn first. The pattern it teaches shows up in dozens of harder problems.

[Figure: a staircase from the ground to step 5, with arrows showing +1 and +2 moves toward the goal]
At each position you can take 1 step or 2 steps. How many distinct ways to reach step 5?

Why this problem matters

Climbing Stairs looks almost too simple. But it contains the core idea behind all of dynamic programming: break a problem into overlapping subproblems, solve each one once, and combine the results.

Once you see how that works here, you will recognize the same structure in House Robber, Coin Change, Decode Ways, and many others. This is the gateway problem.

Starting with recursion

The most natural way to think about this problem is recursively. To reach step n, you must have come from either step n-1 (taking 1 step) or step n-2 (taking 2 steps). So the number of ways to reach step n is the number of ways to reach step n-1 plus the number of ways to reach step n-2.

def climb_stairs(n):
    if n <= 1:
        return 1
    return climb_stairs(n - 1) + climb_stairs(n - 2)

This is correct, but it is slow. For n = 40, it makes hundreds of millions of recursive calls. The problem is that we keep recomputing the same subproblems: climb_stairs(3) gets called many times, each time branching into more duplicate work. The time complexity is O(2^n).

If you draw out the recursion tree for n = 5, you will see climb_stairs(2) computed three separate times and climb_stairs(1) computed five times. That is the hallmark of overlapping subproblems, and it is exactly what DP is designed to fix.

Adding memoization (top-down DP)

The fix is simple: cache the result of each subproblem so you never compute it twice. This is called memoization, or "top-down" dynamic programming.

def climb_stairs(n):
    memo = {}

    def helper(i):
        if i <= 1:
            return 1
        if i in memo:
            return memo[i]
        memo[i] = helper(i - 1) + helper(i - 2)
        return memo[i]

    return helper(n)

Now each value from 0 to n is computed exactly once and looked up every time after that. Time drops from O(2^n) to O(n). Space is O(n) for the memo dictionary plus O(n) for the call stack.
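As an aside, Python's standard library can do the caching for you with functools.lru_cache, so the memoized version collapses to the naive one plus a decorator (a sketch, equivalent to the helper above):

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # caches each n's result the first time it is computed
def climb_stairs(n):
    if n <= 1:
        return 1
    return climb_stairs(n - 1) + climb_stairs(n - 2)

print(climb_stairs(40))  # returns instantly: each subproblem is solved once
```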

This is a huge improvement, but we can do better. The recursive call stack is unnecessary overhead, and we can eliminate it by flipping our approach.

Building a DP table (bottom-up)

Instead of starting at n and recursing down, we start at 0 and build up. This is "bottom-up" dynamic programming.

Define dp[i] as the number of distinct ways to reach step i. The recurrence is:

  • dp[0] = 1 (one way to stand at the ground: do nothing)
  • dp[1] = 1 (one way to reach step 1: take a single step)
  • dp[i] = dp[i-1] + dp[i-2] for all i >= 2

Let's trace through this for n = 5:

Base cases: dp[0] = 1, dp[1] = 1

i:   0  1  2  3  4  5
dp:  1  1  0  0  0  0

There is 1 way to reach step 0 (do nothing) and 1 way to reach step 1 (one single step).

dp[2] = dp[1] + dp[0] = 1 + 1 = 2

i:   0  1  2  3  4  5
dp:  1  1  2  0  0  0

Arrive from step 0 (+2) or step 1 (+1). Two ways total.

dp[3] = dp[2] + dp[1] = 2 + 1 = 3

i:   0  1  2  3  4  5
dp:  1  1  2  3  0  0

Arrive from step 1 (+2) or step 2 (+1). Three ways total.

dp[4] = dp[3] + dp[2] = 3 + 2 = 5

i:   0  1  2  3  4  5
dp:  1  1  2  3  5  0

Arrive from step 2 (+2) or step 3 (+1). Five ways total.

dp[5] = dp[4] + dp[3] = 5 + 3 = 8

i:   0  1  2  3  4  5
dp:  1  1  2  3  5  8

Arrive from step 3 (+2) or step 4 (+1). Eight ways to reach step 5.

The answer is dp[5] = 8. There are 8 distinct ways to climb 5 stairs.
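If you want to see those 8 ways concretely, a tiny brute-force enumerator (purely illustrative, not part of the solution) lists every step sequence:

```python
def step_sequences(n):
    """Yield every sequence of 1-steps and 2-steps that sums to n."""
    if n == 0:
        yield ()
        return
    for first in (1, 2):
        if first <= n:
            for rest in step_sequences(n - first):
                yield (first,) + rest

paths = list(step_sequences(5))
print(len(paths))  # 8
print(paths[0])    # (1, 1, 1, 1, 1)
```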

If that recurrence looks familiar, it should. It is the Fibonacci sequence: 1, 1, 2, 3, 5, 8, 13, ... Climbing Stairs is literally Fibonacci with a different story wrapped around it.

def climb_stairs(n):
    if n <= 1:
        return 1  # guard: the dp[1] assignment below assumes n >= 1

    dp = [0] * (n + 1)
    dp[0] = 1
    dp[1] = 1

    for i in range(2, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]

    return dp[n]

Time: O(n). We fill in each cell of the DP table exactly once.

Space: O(n). We store an array of n + 1 values.

No recursion, no call stack overhead. Just a simple loop. But we can still squeeze out one more improvement.

Optimizing to O(1) space

Look at the recurrence again: dp[i] = dp[i-1] + dp[i-2]. Each value depends on only the two values immediately before it. We do not need the entire array. Two variables are enough.

def climb_stairs(n):
    if n <= 1:
        return 1

    prev2 = 1  # dp[i-2]
    prev1 = 1  # dp[i-1]

    for i in range(2, n + 1):
        current = prev1 + prev2
        prev2 = prev1
        prev1 = current

    return prev1

Same O(n) time, but now O(1) space. Two variables instead of an entire array. This is the optimal solution.

Let's break down what happens for n = 5:

  • Start: prev2 = 1, prev1 = 1
  • i=2: current = 2, shift: prev2 = 1, prev1 = 2
  • i=3: current = 3, shift: prev2 = 2, prev1 = 3
  • i=4: current = 5, shift: prev2 = 3, prev1 = 5
  • i=5: current = 8, shift: prev2 = 5, prev1 = 8
  • Return prev1 = 8

This space optimization trick works whenever your DP recurrence only looks back a fixed number of steps. If dp[i] depends on dp[i-1] and dp[i-2], you need two variables. If it depends on dp[i-1], dp[i-2], and dp[i-3], you need three. This is the "constant-state linear DP" pattern.
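The three-variable case appears in Tribonacci Number (LeetCode 1137), where dp[i] = dp[i-1] + dp[i-2] + dp[i-3]. A sketch using that problem's base cases T(0) = 0, T(1) = 1, T(2) = 1:

```python
def tribonacci(n):
    if n == 0:
        return 0
    if n <= 2:
        return 1

    a, b, c = 0, 1, 1  # T(i-3), T(i-2), T(i-1)
    for _ in range(3, n + 1):
        a, b, c = b, c, a + b + c  # slide the three-value window forward
    return c
```

Same shape as the two-variable version, just one more slot in the rolling window.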

Complexity analysis

Approach             Time     Space
Naive recursion      O(2^n)   O(n) call stack
Memoized recursion   O(n)     O(n) memo + stack
Bottom-up DP table   O(n)     O(n) array
O(1) space DP        O(n)     O(1)

All four approaches produce the correct answer. The difference is efficiency. The progression from brute force to optimal is a pattern you will see over and over in DP problems:

  1. Write the recursive solution to get the logic right.
  2. Add memoization to eliminate redundant work.
  3. Convert to bottom-up to remove recursion overhead.
  4. Optimize space by keeping only what the recurrence needs.

You do not always need to go all the way to step 4. But knowing how to get there is what separates someone who memorized a solution from someone who understands the technique.

Edge cases to watch for

Before submitting, make sure your solution handles these:

  • n = 0: return 1. There is one way to stay at the ground (do nothing). Some problem statements define n >= 1, so check.
  • n = 1: return 1. Only one way: take a single step.
  • n = 2: return 2. Either two single steps or one double step.
  • Large n: the O(1) space solution handles n = 45 (the LeetCode constraint) instantly.
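A few quick assertions cover all of these at once (a self-contained check using the O(1)-space version from above):

```python
def climb_stairs(n):
    # O(1)-space version from above
    if n <= 1:
        return 1
    prev2, prev1 = 1, 1
    for _ in range(2, n + 1):
        prev2, prev1 = prev1, prev1 + prev2
    return prev1

assert climb_stairs(0) == 1
assert climb_stairs(1) == 1
assert climb_stairs(2) == 2
assert climb_stairs(45) == 1836311903  # largest n in the LeetCode constraints
```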

The building blocks

This problem is built on a single reusable pattern: linear DP with constant state. The idea is straightforward. You iterate through a sequence, and at each position you compute a value that depends on a fixed number of previous values. Because the lookback window is constant, you can replace the full DP array with a handful of variables.

This pattern has a specific shape:

  • State: a small, fixed number of previous results (here, the last two values)
  • Transition: combine those previous results to get the current one (dp[i] = dp[i-1] + dp[i-2])
  • Space optimization: since you only look back a constant number of steps, replace the array with variables

The same building block appears in many other problems:

  • House Robber (LeetCode 198): dp[i] = max(dp[i-1], dp[i-2] + nums[i]). Same two-variable lookback, different transition.
  • Decode Ways (LeetCode 91): count valid decodings of a digit string. Again depends on dp[i-1] and dp[i-2].
  • Min Cost Climbing Stairs (LeetCode 746): same staircase structure, but with costs on each step.
  • Fibonacci Number (LeetCode 509): literally the same recurrence with no disguise.
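To make the "same brick" point concrete, here is how House Robber's transition slots into the identical two-variable skeleton (a sketch; nums is the list of house values):

```python
def rob(nums):
    prev2 = 0  # best loot considering houses up to i-2
    prev1 = 0  # best loot considering houses up to i-1
    for value in nums:
        # either skip this house (keep prev1) or rob it on top of prev2
        prev2, prev1 = prev1, max(prev1, prev2 + value)
    return prev1
```

Only the transition changed, from addition to a max; the rolling two-variable state is exactly the one used for Climbing Stairs.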

Once you internalize linear DP with constant state as a building block, these problems stop looking like separate challenges. They are all the same brick in different walls.

From understanding to recall

You have read through four versions of the solution and the progression makes sense. But here is the thing: understanding a solution and being able to produce it under pressure are very different skills. In an interview, nobody walks you through the recursion-to-DP pipeline. You have to build it yourself, from scratch.

Spaced repetition bridges that gap. Instead of solving Climbing Stairs once and forgetting the details in two weeks, you revisit the core pattern at increasing intervals. Each time you write the recurrence from scratch, identify the base cases, and optimize the space. After a few repetitions, the technique is automatic.

The linear DP with constant state pattern is one of roughly 60 reusable building blocks that cover hundreds of LeetCode problems. Learning them individually and drilling them with spaced repetition is a far more effective strategy than grinding problems randomly and hoping the patterns stick.

Related posts

  • Best Time to Buy and Sell Stock - Another problem where tracking a running value gives you an O(n) single-pass solution
  • Contains Duplicate - A great warm-up for understanding when and why to trade space for time
  • Two Sum - The classic intro problem, and a natural companion to Climbing Stairs for building your foundation