
Meeting Rooms II: Min Heap Scheduling

7 min read
leetcode · problem · medium · heap

Meeting Rooms II (LeetCode 253) is one of the most practical problems on the platform. It shows up in real scheduling systems, conference room booking tools, and resource allocation engines. The question is simple: given a list of meeting time intervals, find the minimum number of conference rooms you need so that no two overlapping meetings share a room.

It is also a perfect introduction to min heap scheduling. The heap lets you efficiently track which room frees up next, and that single insight turns what could be a messy simulation into a clean O(n log n) solution.

The problem

You are given an array of meeting time intervals [[start, end], ...]. Each meeting runs from start up to (but not including) end. Two meetings overlap if one starts before the other ends. Return the minimum number of conference rooms required.

Example: intervals = [[0,30],[5,10],[15,20]]

[Figure: timeline of the three meetings and the resulting room assignment. Room 1 holds [0, 30]; Room 2 holds [5, 10], then [15, 20].]
[5,10] starts while [0,30] is still running, so it needs a second room. [15,20] reuses Room 2 after [5,10] ends.

Meeting [0,30] runs the entire time. When [5,10] starts at time 5, the first meeting is still going, so we need a second room. Later, [15,20] starts at 15, and by then [5,10] has ended (it ended at 10), so it can reuse Room 2. The answer is 2 rooms.

Why a min heap works

The key question at each new meeting is: has the earliest-ending active meeting finished yet? If yes, that room is free and we reuse it. If no, every room is still occupied and we need a new one.

A min heap of end times gives you the earliest ending time in O(1). You compare the new meeting's start time against the heap's minimum:

  • If start >= heap_min, a room is free. Pop the old end time and push the new one (same room, new meeting).
  • If start < heap_min, no room is free. Push the new end time (new room).

Because heapreplace swaps one end time for another, the heap only grows when a genuinely new room is needed, so its size equals the number of rooms allocated so far. The final heap size is therefore the peak number of simultaneous meetings, which is your answer.

The algorithm

  1. Sort the meetings by start time.
  2. Initialize a min heap with the end time of the first meeting.
  3. For each remaining meeting:
    • If its start time is at least the heap minimum, pop the minimum (free that room).
    • Push the current meeting's end time onto the heap.
  4. Return the heap size.

import heapq

def min_meeting_rooms(intervals: list[list[int]]) -> int:
    if not intervals:
        return 0

    intervals.sort(key=lambda x: x[0])
    heap = [intervals[0][1]]  # end time of first meeting

    for start, end in intervals[1:]:
        if start >= heap[0]:
            heapq.heapreplace(heap, end)  # reuse room
        else:
            heapq.heappush(heap, end)     # new room

    return len(heap)

Short and clean. heapreplace is a combined pop-and-push that is slightly more efficient than doing them separately. The comparison start >= heap[0] checks whether the earliest-ending meeting has finished by the time the new one starts.
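If an interviewer follows up with "which meeting goes in which room?", the same heap can carry (end_time, room_id) pairs instead of bare end times. Here is a sketch of that extension (min_rooms_with_assignment is a name invented here; it is not part of the LeetCode problem):

```python
import heapq

def min_rooms_with_assignment(intervals: list[list[int]]) -> tuple[int, list[int]]:
    """Return (room_count, assignment), where assignment[i] is the room
    index given to intervals[i]. Illustrative extension, not LeetCode 253."""
    if not intervals:
        return 0, []
    order = sorted(range(len(intervals)), key=lambda i: intervals[i][0])
    heap: list[tuple[int, int]] = []   # (end_time, room_id) of active meetings
    assignment = [0] * len(intervals)
    next_room = 0
    for i in order:
        start, end = intervals[i]
        if heap and start >= heap[0][0]:
            _, room = heapq.heappop(heap)  # earliest-ending room is free: reuse it
        else:
            room = next_room               # every room busy: open a new one
            next_room += 1
        assignment[i] = room
        heapq.heappush(heap, (end, room))
    return next_room, assignment
```

On the worked example it returns (2, [0, 1, 1]): the first meeting gets room 0, and the other two share room 1.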

Visual walkthrough

Let's trace through the algorithm on [[0,30],[5,10],[15,20]]. After sorting (these are already sorted by start), we process each meeting and watch the heap evolve.

Step 1: Process [0, 30]. Heap is empty. Allocate Room 1, push end time 30.


First meeting always gets a new room. Heap = [30].

Step 2: Process [5, 10]. Heap min is 30. Since 5 < 30, this overlaps. New room needed.


5 < 30 means no room is free. Push 10 onto the heap. Heap = [10, 30]. Rooms = 2.

Step 3: Process [15, 20]. Heap min is 10. Since 15 >= 10, we reuse that room.


15 >= 10, so [5,10]'s room is free. Pop 10, push 20. Heap = [20, 30]. Still 2 rooms.

Done! The max heap size during the process was 2. Answer: 2 rooms.


The heap tracked active meetings at each step. Peak size = 2 = answer.

The critical moment is step 2. When [5,10] arrives, the heap minimum is 30. Since 5 < 30, the first room is not free. We push 10 onto the heap, growing it to size 2 (two rooms). In step 3, when [15,20] arrives, the heap minimum is 10. Since 15 >= 10, a room is free. We pop 10 and push 20. The heap stays at size 2. The answer is 2.
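The whole trace can be reproduced in a few lines by recording the heap after every step (a throwaway instrumented version of the loop above):

```python
import heapq

# Reproduce the walkthrough: snapshot the heap after each meeting is processed.
intervals = sorted([[0, 30], [5, 10], [15, 20]])
heap = [intervals[0][1]]
states = [sorted(heap)]
for start, end in intervals[1:]:
    if start >= heap[0]:
        heapq.heapreplace(heap, end)  # reuse the room that just freed up
    else:
        heapq.heappush(heap, end)     # every room busy: allocate another
    states.append(sorted(heap))
print(states)  # [[30], [10, 30], [20, 30]]
```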

Why sorting is essential

If you processed meetings in an arbitrary order, the heap comparison would no longer mean "has the earliest-ending active meeting finished by now?" Consider the order [[15,20],[5,10],[0,30]]: [15,20] gets a room (heap = [20]), then [5,10] sees 5 < 20 and takes a second room (heap = [10, 20]), then [0,30] sees 0 < 10 and takes a third (heap = [10, 20, 30]). That reports 3 rooms when the true answer is 2.

Sorting by start time gives you a clean invariant: you see meetings in the order they actually begin. Combined with the heap tracking the earliest end time, every room-reuse decision is optimal.
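To see the failure concretely, here is the same heap update applied without sorting (a deliberately broken variant, written only to illustrate the point):

```python
import heapq

def rooms_no_sort(intervals: list[list[int]]) -> int:
    # Deliberately broken: identical to the real solution except the
    # intervals are processed in the order given, without sorting.
    heap = [intervals[0][1]]
    for start, end in intervals[1:]:
        if start >= heap[0]:
            heapq.heapreplace(heap, end)
        else:
            heapq.heappush(heap, end)
    return len(heap)

print(rooms_no_sort([[0, 30], [5, 10], [15, 20]]))  # 2: input happens to be sorted
print(rooms_no_sort([[15, 20], [5, 10], [0, 30]]))  # 3: wrong, the answer is 2
```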

Why >= instead of >? A meeting ending at time 10 means the room is free starting at time 10. So a new meeting starting at exactly 10 can use that room. The >= handles this correctly.
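A small experiment makes the difference visible (the rooms helper and its strict flag are invented for this comparison):

```python
import heapq

def rooms(intervals: list[list[int]], strict: bool) -> int:
    # strict=True uses `start > heap[0]` (the off-by-one variant);
    # strict=False uses `start >= heap[0]` (the correct comparison).
    intervals = sorted(intervals)
    heap = [intervals[0][1]]
    for start, end in intervals[1:]:
        free = start > heap[0] if strict else start >= heap[0]
        if free:
            heapq.heapreplace(heap, end)
        else:
            heapq.heappush(heap, end)
    return len(heap)

print(rooms([[1, 5], [5, 10]], strict=False))  # 1: back-to-back meetings share a room
print(rooms([[1, 5], [5, 10]], strict=True))   # 2: the stricter check wastes a room
```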

Approach 2: sweep line (event-based)

There is an alternative approach that does not use a heap at all. Instead, treat every meeting as two events: a +1 at the start (someone needs a room) and a -1 at the end (someone leaves a room). Sort all events by time, and sweep through them while tracking the running count.

The peak of that running count is the answer.

def min_meeting_rooms_sweep(intervals: list[list[int]]) -> int:
    events = []
    for start, end in intervals:
        events.append((start, 1))   # meeting starts
        events.append((end, -1))    # meeting ends

    events.sort()  # sort by time, then by type (-1 before +1 at same time)
    rooms = 0
    peak = 0

    for _, delta in events:
        rooms += delta
        peak = max(peak, rooms)

    return peak

This works because Python sorts tuples lexicographically. At the same time value, (10, -1) comes before (10, 1), meaning a meeting ending at 10 frees its room before a meeting starting at 10 claims one. That is the correct behavior.
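You can verify that tie-breaking directly (plain Python, nothing assumed beyond the intervals above):

```python
# Build and sort the event list; tuple sorting puts the -1 (end) event
# ahead of the +1 (start) event whenever the times are equal.
intervals = [[0, 30], [5, 10], [15, 20]]
events = sorted((t, d) for s, e in intervals for t, d in ((s, 1), (e, -1)))
print(events)  # [(0, 1), (5, 1), (10, -1), (15, 1), (20, -1), (30, -1)]

touching = [[1, 5], [5, 10]]
tie = sorted((t, d) for s, e in touching for t, d in ((s, 1), (e, -1)))
print(tie)     # [(1, 1), (5, -1), (5, 1), (10, -1)]  <- end frees the room first
```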

Trace for [[0,30],[5,10],[15,20]]:

Events sorted: (0,+1), (5,+1), (10,-1), (15,+1), (20,-1), (30,-1)

Event     rooms  peak
(0, +1)   1      1
(5, +1)   2      2
(10, -1)  1      2
(15, +1)  2      2
(20, -1)  1      2
(30, -1)  0      2

Peak is 2. Same answer, different lens.

Which approach should you use?

Both are O(n log n) because of sorting. The heap approach uses O(n) space for the heap, while the sweep line uses O(n) space for the event list.

Approach    Time        Space  Notes
Min heap    O(n log n)  O(n)   Sort + heap operations
Sweep line  O(n log n)  O(n)   Sort events + linear scan

The heap approach is more intuitive for this specific problem because it directly models the concept of "which room frees up next." The sweep line is more general and transfers to problems where you care about the maximum overlap of any set of ranges.

Pick whichever one you can explain clearly in an interview. Both are correct, and neither has a meaningful performance advantage.

Edge cases

No meetings. Return 0. The code handles this with the early return.

Non-overlapping meetings. Something like [[1,5],[6,10],[11,15]]. The heap never grows past size 1 because each new meeting starts after the previous one ends. Answer: 1 room.

All meetings overlap. [[1,10],[2,10],[3,10]]. Each new meeting starts before any previous one ends, so the heap grows by one each time. Answer: 3 rooms (one per meeting).

Meetings that touch at endpoints. [[1,5],[5,10]]. Since 5 >= 5, the second meeting reuses the room. Answer: 1 room.
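Those four cases make a compact regression check. This block re-includes the heap solution from above so it runs standalone:

```python
import heapq

def min_meeting_rooms(intervals: list[list[int]]) -> int:
    if not intervals:
        return 0
    intervals = sorted(intervals)
    heap = [intervals[0][1]]
    for start, end in intervals[1:]:
        if start >= heap[0]:
            heapq.heapreplace(heap, end)  # reuse room
        else:
            heapq.heappush(heap, end)     # new room
    return len(heap)

assert min_meeting_rooms([]) == 0                            # no meetings
assert min_meeting_rooms([[1, 5], [6, 10], [11, 15]]) == 1   # non-overlapping
assert min_meeting_rooms([[1, 10], [2, 10], [3, 10]]) == 3   # all overlap
assert min_meeting_rooms([[1, 5], [5, 10]]) == 1             # touching endpoints
print("all edge cases pass")
```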

Complexity analysis

Time: O(n log n). Sorting the intervals takes O(n log n). Each of the n meetings involves one heap operation (push or replace), each O(log n). Total: O(n log n).

Space: O(n). The heap can hold up to n end times in the worst case (when all meetings overlap). The sweep line variant also uses O(n) for the event list.

The building blocks

Meeting Rooms II is built on one core building block that appears across many LeetCode scheduling and interval problems.

Greedy interval scheduling

Sort by start time, then make a local decision for each interval using a min heap to track the constraint. The heap holds "end times" and answers the question "is any resource free right now?" in O(log n) per operation.

This same pattern powers:

  • Task Scheduler (greedy frequency counting to minimize idle time)
  • Non-overlapping Intervals (sort by end time, greedily keep non-conflicting)
  • Merge Intervals (sort by start, merge overlapping neighbors)
  • Car Pooling (sweep line over pickup/dropoff events)

The underlying technique is always the same: sort to create order, then sweep with some tracking structure (heap, counter, or running state) to make greedy decisions. If your brute force involves checking all pairs of intervals for overlap, sorting plus a heap almost certainly gives you O(n log n).
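As a sanity check on the "peak concurrency" framing: the answer equals the maximum number of meetings active at some meeting's start time (concurrency only increases at starts), which can be brute-forced in O(n²). The name brute_force_rooms is invented here; the sketch is useful only for cross-checking the O(n log n) solutions:

```python
def brute_force_rooms(intervals: list[list[int]]) -> int:
    # O(n^2): for each meeting, count how many meetings (itself included)
    # are running at the moment it starts. The peak count is the answer.
    best = 0
    for s, _ in intervals:
        active = sum(1 for a, b in intervals if a <= s < b)
        best = max(best, active)
    return best

print(brute_force_rooms([[0, 30], [5, 10], [15, 20]]))  # 2
```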

If you can write the sort-then-heap solution from scratch, you have the building block for any "how many concurrent X at peak" problem. Calendar conflicts, server connections, CPU task scheduling, operating room bookings: they all reduce to this same pattern.

Why you will forget this (and how to fix it)

The algorithm is only a few lines, but the details matter. Is it start >= heap[0] or start > heap[0]? Do you pop before you push, or use heapreplace? Should you sort by start time or end time?

Each of those decisions has a reason behind it, and getting any one wrong produces subtle bugs. The fix is not to memorize the code but to internalize the logic: "sort by start, compare new start to earliest end, reuse or allocate." Once that reasoning is second nature, the code writes itself.

Spaced repetition is how you get there. Type the solution from scratch, verify it, then do it again in a few days. After a handful of reps, min heap scheduling becomes automatic.

Related posts

  • Merge Intervals - The foundational interval pattern: sort by start, sweep and merge overlapping ranges
  • Task Scheduler - Another scheduling problem that uses greedy frequency counting instead of a heap

Visual Learner? Understand how heaps work under the hood with our Heap Data Structure Guide.