Jump Game: Greedy Reachability
You are given an array of non-negative integers nums where each element represents your maximum jump length from that position. Starting at the first index, determine if you can reach the last index.
This is LeetCode 55: Jump Game, a classic medium problem that tricks people into reaching for dynamic programming when a simple greedy scan is all you need. Once you see the greedy reachability insight, the solution is five lines of code.
Why this problem matters
Jump Game is one of the cleanest examples of a problem where greedy beats DP. The DP solution works but runs in O(n^2). The greedy solution runs in O(n) with O(1) space. Understanding why greedy works here builds intuition for recognizing greedy-friendly problems in general.
The core pattern, tracking the farthest point you can reach as you scan forward, shows up in Jump Game II, gas station problems, and other reachability problems. It is a building block worth internalizing.
The brute force DP approach
The most natural approach is to think recursively. From index 0, try every possible jump length (1 to nums[0]). For each landing index, recursively check if you can reach the end from there.
```python
def can_jump_recursive(nums):
    def helper(i):
        if i >= len(nums) - 1:
            return True
        for jump in range(1, nums[i] + 1):
            if helper(i + jump):
                return True
        return False

    return helper(0)
```
This explores every possible path, which means exponential time in the worst case. Not great.
You can optimize with memoization (top-down DP) by caching whether each index can reach the end:
```python
def can_jump_dp(nums):
    n = len(nums)
    memo = [None] * n
    memo[n - 1] = True

    def helper(i):
        if memo[i] is not None:
            return memo[i]
        farthest_jump = min(i + nums[i], n - 1)
        for j in range(i + 1, farthest_jump + 1):
            if helper(j):
                memo[i] = True
                return True
        memo[i] = False
        return False

    return helper(0)
```
This avoids recomputing subproblems, but each index can still scan forward through up to n other indices. That makes it O(n^2) in the worst case. For a bottom-up version, you would iterate from right to left and fill the same table. Same time complexity.
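The bottom-up version mentioned above might look like the following sketch (the function name is mine; other table-filling orders are possible):

```python
def can_jump_dp_bottom_up(nums):
    # reachable[i] is True if index i can reach the last index
    n = len(nums)
    reachable = [False] * n
    reachable[n - 1] = True            # the last index trivially reaches itself
    for i in range(n - 2, -1, -1):     # fill the table from right to left
        farthest_jump = min(i + nums[i], n - 1)
        # index i reaches the end if any index it can jump to does
        reachable[i] = any(reachable[j] for j in range(i + 1, farthest_jump + 1))
    return reachable[0]
```

Each entry still scans forward through up to n other indices, so the time complexity is the same O(n^2) as the memoized version.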
The DP approach is correct. But it is doing way more work than necessary.
The key insight: greedy farthest reach
Here is the idea that changes everything. You do not need to know which path leads to the end. You just need to know whether you can reach the end. And for that, you only need to track one thing: the farthest index you can reach so far.
Walk through the array from left to right. At each index i, update the farthest reachable index: farthest = max(farthest, i + nums[i]). If at any point i > farthest, you are stuck at an index you could never have reached, so return False. If farthest ever reaches or passes the last index, return True.
That is the entire algorithm. One variable, one pass.
Think of it this way: as you walk forward, you are expanding your "reachable horizon." Each position you visit can potentially push that horizon further out. If the horizon ever fails to include where you are standing, you are stuck. If it reaches the end, you are done.
Walking through it step by step
Let's trace through both the passing case ([2, 3, 1, 1, 4]) and the failing case ([3, 2, 1, 0, 4]), tracking the current index i and the value of farthest at each step.
Step 1: i = 0, nums[0] = 2. farthest = max(0, 0 + 2) = 2.
From index 0 we can reach up to index 2. farthest = 2.
Step 2: i = 1, nums[1] = 3. farthest = max(2, 1 + 3) = 4.
From index 1 we can reach index 4. farthest jumps to 4, the last index!
Step 3: i = 2, nums[2] = 1. farthest = max(4, 2 + 1) = 4.
i + nums[i] = 3, which does not beat farthest. No change.
Step 4: i = 3, nums[3] = 1. farthest = max(4, 3 + 1) = 4.
Still 4. We already know we can reach the end.
Step 5: i = 4. We reached the last index. Return True!
farthest >= 4. We can reach the end. Done!
Now the failing case: [3, 2, 1, 0, 4]
Step 1: i = 0, nums[0] = 3. farthest = max(0, 0 + 3) = 3.
We can reach up to index 3. Looking good so far.
Step 2: i = 1, nums[1] = 2. farthest = max(3, 1 + 2) = 3.
1 + 2 = 3, does not beat farthest. No change.
Step 3: i = 2, nums[2] = 1. farthest = max(3, 2 + 1) = 3.
2 + 1 = 3, still stuck at 3.
Step 4: i = 3, nums[3] = 0. farthest = max(3, 3 + 0) = 3.
The zero traps us. farthest stays at 3. We cannot get past index 3.
Step 5: i = 4. But farthest = 3 < 4. We never reach here. Return False.
farthest < i, so this index is unreachable. The answer is False.
Notice the critical difference. In the passing case, index 1 with value 3 pushes the farthest reach all the way to the end. In the failing case, the zero at index 3 acts as a wall. Every path that reaches index 3 gets stuck there because 3 + 0 = 3, which does not extend the reach.
The greedy solution
Here is the complete greedy solution in Python:
```python
def can_jump(nums):
    farthest = 0
    for i in range(len(nums)):
        if i > farthest:
            return False
        farthest = max(farthest, i + nums[i])
    return True
```
Five lines of logic. Let's break it down:
- Initialize farthest = 0. We start at index 0, so the farthest we have confirmed reaching is 0.
- Loop through every index. At each index, first check: is i > farthest? If yes, this index is unreachable; we got stuck somewhere before here. Return False.
- If we can reach index i, update farthest to max(farthest, i + nums[i]). From this position, we could jump as far as i + nums[i].
- If we make it through the entire loop without getting stuck, we can reach the end. Return True.
You can add an early exit: if farthest >= len(nums) - 1, return True immediately. This is a minor optimization that does not change the worst-case complexity but can save time on inputs where the reach extends to the end early.
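With that early exit added, the solution might look like this (a minor variant of the function above; the name is mine):

```python
def can_jump_early_exit(nums):
    target = len(nums) - 1
    farthest = 0
    for i in range(len(nums)):
        if i > farthest:
            return False                 # stuck: this index was never reachable
        farthest = max(farthest, i + nums[i])
        if farthest >= target:
            return True                  # the reach already covers the last index
    return True
```

On an input like [100, 0, 0, 0, 0], this returns after the first iteration instead of scanning the whole array.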
Why greedy beats DP here
The DP approach asks: "For each index, can it reach the end?" and fills a table from right to left. Each entry might scan forward through many indices. The total work is O(n^2) in the worst case.
The greedy approach asks a simpler question: "What is the farthest index I can reach?" It only needs a running maximum, which updates in O(1) per step. Total work: O(n).
Why is the simpler question sufficient? Because reachability here is a prefix property. If you can reach index 5, you can also reach indices 0 through 4: whatever position let you jump to (or past) index 5 could just as well make a shorter jump and land on any earlier index. So the set of reachable indices is always exactly {0, ..., farthest}. You never need to check whether intermediate indices are reachable individually. You just need to know the frontier, the farthest point, and whether it keeps advancing past each new index.
This is a hallmark of problems where greedy works: the optimal substructure is so simple that you do not need a full table of subproblem results. A single running value captures everything.
| Approach | Time | Space |
|---|---|---|
| Recursive (no memo) | O(2^n) | O(n) call stack |
| DP with memoization | O(n^2) | O(n) |
| Greedy farthest reach | O(n) | O(1) |
Complexity analysis
Time: O(n). One pass through the array. Each index is visited exactly once, and each visit does O(1) work (one comparison, one max operation).
Space: O(1). A single variable farthest. No arrays, no hash maps, no recursion stack.
This is optimal. You must inspect every element at least once (the answer could depend on any single element being zero), so O(n) time is the best possible. And O(1) space means no overhead at all.
Edge cases
Before submitting, make sure your solution handles these:
- Single element [0]: you are already at the last index. Return True.
- All zeros [0, 0, 0]: stuck at index 0. Return False (unless the array has length 1).
- Large first element [100, 0, 0, 0, 0]: the first jump reaches the end. Return True.
- Zeros in the middle [2, 0, 0]: you can jump from index 0 straight to index 2, skipping the zeros. Return True.
- Zero right before the end [1, 1, 0, 1]: reach goes index 0 to 1 to 2. At index 2, nums[2] = 0, so farthest stays at 2. At index 3, i = 3 > farthest = 2. Return False.
The greedy solution handles all of these without special cases.
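A quick way to confirm that is a handful of assertions against the greedy solution (repeated here so the snippet is self-contained):

```python
def can_jump(nums):
    farthest = 0
    for i in range(len(nums)):
        if i > farthest:
            return False
        farthest = max(farthest, i + nums[i])
    return True

# the five edge cases from the list above
assert can_jump([0]) is True                  # single element
assert can_jump([0, 0, 0]) is False           # all zeros
assert can_jump([100, 0, 0, 0, 0]) is True    # large first element
assert can_jump([2, 0, 0]) is True            # zeros in the middle
assert can_jump([1, 1, 0, 1]) is False        # zero right before the end
```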
The building blocks
This problem is built on two reusable patterns that CodeBricks drills independently:
1. Greedy local accumulation
The pattern of scanning through a sequence and making a locally optimal update at each step that builds toward a global answer:
```python
best = initial_value
for item in sequence:
    best = update(best, item)
    if done(best):
        return early_result
return best
```
In Jump Game, best is farthest, and the update is max(farthest, i + nums[i]). In Buy and Sell Stock, best is max_profit, and you update based on the running minimum. In Maximum Subarray (Kadane's algorithm), best is the running maximum subarray sum. The skeleton is the same. The only thing that changes is what you accumulate and how you update.
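To see how little changes between instances of the skeleton, here is a standard formulation of Kadane's algorithm in the same shape (shown for comparison, not as this post's main solution):

```python
def max_subarray(nums):
    # two pieces of rolling state: best sum seen so far,
    # and best sum of a subarray ending at the current element
    best = cur = nums[0]
    for x in nums[1:]:
        cur = max(x, cur + x)     # extend the current run, or start fresh at x
        best = max(best, cur)     # locally optimal update toward the global answer
    return best
```

Swap the state variable and the update rule and you are back at Jump Game's farthest-reach loop.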
2. Running min/max
The idea of maintaining an extremal value (minimum or maximum) as you iterate. Here, we track a running maximum of i + nums[i]. In stock problems, you track a running minimum of prices. In trapping rain water, you track running maximums from both directions.
The running max is what lets us answer "can I reach this index?" in O(1). Without it, you would need to check every previous index, which is where the O(n^2) DP approach comes from. The running max compresses all that information into a single value.
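For comparison, here is a common running-minimum formulation of Best Time to Buy and Sell Stock (one standard way to write it):

```python
def max_profit(prices):
    # the running minimum plays the role that farthest plays in Jump Game:
    # it compresses everything seen so far into a single value
    min_price = float('inf')
    profit = 0
    for p in prices:
        min_price = min(min_price, p)          # cheapest buy seen so far
        profit = max(profit, p - min_price)    # best profit selling at today's price
    return profit
```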
The greedy local accumulation pattern works when the optimal answer can be determined by a single pass with a rolling state variable. If the problem requires you to remember which choices you made (not just the best result so far), you probably need DP or backtracking instead.
Common mistakes
1. Forgetting the unreachability check. If you skip the if i > farthest: return False check, your code will happily process indices you could never actually reach, giving wrong answers for cases like [0, 1].
2. Using DP when greedy suffices. The DP solution is correct but needlessly slow. In an interview, showing the greedy approach after discussing DP demonstrates that you understand the tradeoff and can identify when a simpler strategy works.
3. Off-by-one on the loop bound. You need to iterate through all indices, including the last one (or return True after the loop). Some people stop at n - 1 and forget to handle the final check.
4. Confusing this with Jump Game II. Jump Game asks "can you reach the end?" (boolean). Jump Game II asks "what is the minimum number of jumps?" (integer). The greedy approach for II tracks jump boundaries, not just farthest reach.
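For contrast, here is a sketch of the Jump Game II greedy with jump boundaries (one standard formulation; it assumes the last index is reachable, as LeetCode 45 guarantees):

```python
def min_jumps(nums):
    jumps = 0
    current_end = 0     # the boundary of the current jump's reach
    farthest = 0        # the farthest the next jump could land
    for i in range(len(nums) - 1):   # no need to jump from the last index
        farthest = max(farthest, i + nums[i])
        if i == current_end:         # reached the boundary: take another jump
            jumps += 1
            current_end = farthest
    return jumps
```

Note the extra state: a boundary and a jump counter on top of the farthest reach. That is the structural difference between the two problems.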
When to use this pattern
Look for these signals:
- The problem asks about reachability or whether some goal is achievable
- You are scanning a sequence and each element contributes a local constraint or capability
- The answer depends on an extremal value (farthest, highest, lowest) accumulated over the scan
- You do not need to reconstruct the actual path or sequence of choices
Problems that use the same greedy accumulation pattern:
- Jump Game II (LeetCode 45): minimum number of jumps (greedy with jump boundaries)
- Gas Station (LeetCode 134): can you complete a circular route? (running sum)
- Best Time to Buy and Sell Stock (LeetCode 121): running minimum for max profit
- Maximum Subarray (LeetCode 53): Kadane's algorithm, running max subarray sum
From understanding to recall
You have read the greedy solution and it makes sense. Five lines, one variable, one pass. Simple. But can you write it from scratch in an interview without looking at it?
That is the real challenge. The logic is simple but the details matter: initializing farthest to 0, checking i > farthest before updating, using max(farthest, i + nums[i]). These are easy to second-guess under pressure if you have not practiced writing them from memory.
Spaced repetition closes that gap. You practice writing the greedy farthest-reach loop from scratch at increasing intervals. After a few rounds, the pattern is automatic. You see "can you reach the end?" and the code flows out without hesitation.
The greedy local accumulation pattern is one of roughly 60 reusable building blocks that cover hundreds of LeetCode problems. Learning them individually and drilling them with spaced repetition is far more effective than grinding random problems and hoping they stick.
Related posts
- Best Time to Buy and Sell Stock - Another greedy single-pass problem using running min/max
- Climbing Stairs - The classic intro to DP, helpful for understanding why DP is overkill for Jump Game
- Product of Array Except Self - Another problem where a clever single-pass approach replaces brute force
CodeBricks breaks Jump Game into its greedy-local-accumulation and running-min-max building blocks, then drills them independently with spaced repetition. You type each piece from scratch until the pattern is automatic. When a greedy reachability problem shows up in your interview, you do not think about it. You just write it.