
Arrays: The Foundation of Everything

5 min read
data-structures · arrays · patterns

Every data structure you will ever learn is either built on top of arrays or exists because arrays could not solve a particular problem well enough. Hash maps use arrays internally. Heaps are arrays with a clever indexing scheme. Even linked lists are often taught as the "what if we did not have contiguous memory" alternative to arrays.

If you want to truly understand data structures, start here. The array data structure is the foundation.

What is an array?

An array is a collection of elements stored in contiguous memory. That word, contiguous, is the key to everything. It means the elements sit right next to each other in memory, with no gaps.

Figure: five integers (10, 20, 30, 40, 50) stored contiguously at addresses 0x100 through 0x110. Each index maps directly to a memory address: base + (index * element_size).

When you create an array of five integers, the computer allocates a single block of memory big enough to hold all five. If each integer takes 4 bytes, that is 20 bytes in a row. The first element starts at the base address, the second is 4 bytes later, the third is 8 bytes later, and so on.

This is why array indexing starts at 0 in most languages. The index is really an offset: address = base + (index * element_size). Index 0 means "zero offset from the start." Index 3 means "three elements past the start."

This simple formula is what gives arrays their superpower.
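To make the offset arithmetic concrete, here is a tiny Python sketch. The base address and 4-byte element size are illustrative values (matching the figure above); real addresses depend on the allocator.

```python
# Sketch of index-to-address arithmetic (base and element_size are made up).
base = 0x100        # hypothetical address of the first element
element_size = 4    # bytes per 32-bit integer

def address_of(index):
    return base + index * element_size

for i in range(5):
    print(f"arr[{i}] -> {hex(address_of(i))}")
# arr[0] -> 0x100, arr[1] -> 0x104, ..., arr[4] -> 0x110
```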

Key operations and array time complexity

Because elements are contiguous, the computer can jump to any element by calculating its address. No scanning, no following pointers, no searching. Just math.

Figure: accessing arr[2] is O(1), a direct jump to its address; inserting 99 at index 2 is O(n) because every later element must shift one slot to the right.

Here is the full breakdown:

Operation                    | Time Complexity | Why
Access by index              | O(1)            | Direct address calculation
Search (unsorted)            | O(n)            | Must check each element
Search (sorted)              | O(log n)        | Binary search works
Insert at end                | O(1) amortized  | Just place it (if space exists)
Insert at beginning/middle   | O(n)            | Must shift elements right
Delete from end              | O(1)            | Just shrink the size
Delete from beginning/middle | O(n)            | Must shift elements left

The pattern is clear. Arrays are incredible for reading and expensive for inserting or deleting anywhere except the end.
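To see where the O(n) comes from, here is a rough Python sketch of a middle insert done by hand over a fixed-size buffer. (Python's built-in list.insert performs essentially the same shifting internally.)

```python
# Sketch: inserting into the middle means shifting every later element right.
def insert_at(buf, length, index, value):
    for i in range(length, index, -1):   # walk backwards from the end
        buf[i] = buf[i - 1]              # shift one slot to the right
    buf[index] = value
    return length + 1                    # new logical length

buf = [10, 20, 30, 40, 50, None]         # one spare slot at the end
n = insert_at(buf, 5, 2, 99)
print(buf[:n])                           # [10, 20, 99, 30, 40, 50]
```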

This tradeoff shapes everything. It is the reason queues are often backed by linked lists, why databases use B-trees instead of sorted arrays, and why so many interview problems come down to finding a way to avoid unnecessary insertions.

Fixed vs dynamic arrays

In lower-level languages like C, and even in Java, plain arrays have a fixed size. In C you declare int arr[10] and that is it. Ten slots. If you need an eleventh, you are out of luck (unless you manually allocate a bigger block and copy everything over).

Most modern languages give you dynamic arrays instead: Python's list, JavaScript's Array, Java's ArrayList, C++'s vector. These resize automatically when they run out of space.

How? When a dynamic array fills up, it allocates a new block (usually double the current size), copies everything over, and frees the old block. This copy is O(n), but it happens so rarely that the amortized cost of appending stays O(1).

Think of it this way: you pay a big tax every now and then, but spread across all your appends, the average cost per operation is constant.

The key insight: dynamic arrays are still arrays under the hood. They still provide O(1) access by index. They still store elements contiguously. The resizing is just a clever wrapper that hides the fixed-size limitation.
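Here is a minimal sketch of the doubling strategy, using a Python list of None slots to stand in for a raw memory block. Real implementations differ in details such as the growth factor and initial capacity.

```python
# Minimal sketch of a doubling dynamic array (backing list stands in for raw memory).
class DynamicArray:
    def __init__(self):
        self._capacity = 1
        self._size = 0
        self._data = [None] * self._capacity

    def append(self, value):
        if self._size == self._capacity:
            self._grow()                     # rare O(n) copy
        self._data[self._size] = value       # usual O(1) placement
        self._size += 1

    def _grow(self):
        self._capacity *= 2                  # double the block
        new_data = [None] * self._capacity
        for i in range(self._size):
            new_data[i] = self._data[i]      # copy everything over
        self._data = new_data

    def __getitem__(self, index):
        return self._data[index]             # still O(1) access by index

arr = DynamicArray()
for x in range(10):
    arr.append(x)
print(arr[7])   # 7
```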

When to use arrays vs other structures

Arrays are the right choice when:

  • You need fast random access by index
  • You mostly read data rather than insert or delete
  • You want cache-friendly iteration (contiguous memory is great for CPU caches)
  • The size is known upfront, or appending to the end is the main operation

Consider something else when:

  • You need frequent insertions or deletions in the middle (linked list or balanced tree)
  • You need fast membership testing (hash set gives you O(1) lookups)
  • You need key-value pairs (hash map)
  • The data has a hierarchical structure (tree)
  • You need FIFO ordering with fast dequeue (deque or linked list)

In practice, arrays (and dynamic arrays) are the default. Most of the time, you start with an array and only reach for something else when you hit a specific bottleneck.

Array interview problems: the essential list

Arrays show up in interview problems more than any other data structure. Below are the problems every developer should know, grouped by the technique they use. Each one teaches you something different about working with arrays.

Direct array manipulation

These problems test your ability to work with array indices, in-place modifications, and clever traversal strategies.

  • Rotate Array - Rotates elements in place using the triple-reverse trick, a pure array manipulation technique (sketched after this list).
  • Merge Sorted Array - Merges two sorted arrays into one by filling from the end, exploiting the existing buffer space.
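As a sketch of the triple-reverse trick, assuming a right rotation by k: reverse the whole array, then reverse the first k elements, then reverse the remaining n - k.

```python
# Sketch of the triple-reverse rotation (rotate right by k, in place).
def rotate(nums, k):
    n = len(nums)
    k %= n

    def reverse(lo, hi):
        while lo < hi:
            nums[lo], nums[hi] = nums[hi], nums[lo]
            lo, hi = lo + 1, hi - 1

    reverse(0, n - 1)   # reverse everything
    reverse(0, k - 1)   # reverse the first k
    reverse(k, n - 1)   # reverse the rest

a = [1, 2, 3, 4, 5, 6, 7]
rotate(a, 3)
print(a)   # [5, 6, 7, 1, 2, 3, 4]
```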

Hash map on arrays

When brute-forcing an array problem gives you O(n^2), a hash map often drops it to O(n) by trading space for time.

  • Two Sum - Uses a hash map to find complement pairs in a single pass through the array (sketched after this list).
  • Contains Duplicate - Uses a set (hash-based) to detect repeated elements in O(n) time.
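Here is a rough sketch of the Two Sum idea under the usual assumptions (exactly one valid pair, return its indices): a single pass with a hash map from value to the index where it was seen.

```python
# Sketch of Two Sum: one pass, hash map from value -> index where it was seen.
def two_sum(nums, target):
    seen = {}
    for i, x in enumerate(nums):
        complement = target - x
        if complement in seen:            # its partner appeared earlier
            return [seen[complement], i]
        seen[x] = i
    return []                             # no pair found

print(two_sum([2, 7, 11, 15], 9))         # [0, 1]
```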

Greedy / single pass

These problems scan the array once, making locally optimal decisions that lead to the global answer.
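A classic example of this shape is the single-buy stock-profit problem. The sketch below tracks the cheapest price seen so far and the best profit available at each step; one pass, two running values.

```python
# Sketch of a greedy single pass: best profit from one buy then one sell.
def max_profit(prices):
    lowest = float("inf")
    best = 0
    for price in prices:
        lowest = min(lowest, price)        # cheapest price seen so far
        best = max(best, price - lowest)   # best profit if we sold today
    return best

print(max_profit([7, 1, 5, 3, 6, 4]))      # 5  (buy at 1, sell at 6)
```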

Prefix / suffix techniques

Some array problems become simple once you precompute cumulative information from the left and right.
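The simplest instance is a prefix-sum array. The sketch below precomputes cumulative sums once so that any range sum can then be answered in O(1).

```python
# Sketch of prefix sums: O(n) precompute, O(1) range-sum queries afterwards.
def build_prefix(nums):
    prefix = [0]
    for x in nums:
        prefix.append(prefix[-1] + x)      # prefix[i] = sum of nums[:i]
    return prefix

def range_sum(prefix, lo, hi):             # sum of nums[lo..hi], inclusive
    return prefix[hi + 1] - prefix[lo]

p = build_prefix([3, 1, 4, 1, 5, 9])
print(range_sum(p, 1, 3))                  # 6  (1 + 4 + 1)
```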

Two pointers on arrays

When the array is sorted (or you sort it first), two pointers let you search pairs in O(n) instead of O(n^2).

  • The Two Pointers Pattern - A complete guide to the technique, covering sorted pair sums, partitioning, and converging pointers on arrays.
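As a minimal sketch of the converging-pointer case, here is the sorted pair-sum search, assuming the array is already sorted in ascending order.

```python
# Sketch of two pointers on a sorted array: find a pair that sums to target.
def pair_sum_sorted(nums, target):
    lo, hi = 0, len(nums) - 1
    while lo < hi:
        s = nums[lo] + nums[hi]
        if s == target:
            return [lo, hi]
        if s < target:
            lo += 1        # need a bigger sum, move the left pointer right
        else:
            hi -= 1        # need a smaller sum, move the right pointer left
    return []

print(pair_sum_sorted([1, 3, 4, 6, 8, 11], 10))   # [2, 3]  (4 + 6)
```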

Sliding window on arrays

For subarray problems with a contiguous constraint, the sliding window pattern keeps a running window and adjusts its boundaries.
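For a fixed-size window, the idea looks like the sketch below: keep a running sum and slide it one element at a time, adding the element entering the window and dropping the one leaving it.

```python
# Sketch of a fixed-size sliding window: max sum of any k consecutive elements.
def max_window_sum(nums, k):
    window = sum(nums[:k])                    # sum of the first window
    best = window
    for i in range(k, len(nums)):
        window += nums[i] - nums[i - k]       # slide: add new, drop old
        best = max(best, window)
    return best

print(max_window_sum([2, 1, 5, 1, 3, 2], 3))  # 9  (5 + 1 + 3)
```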

Wrapping up

Arrays are deceptively simple. The concept of "elements stored next to each other in memory" does not sound like much. But that single property, contiguous storage, gives you O(1) random access, excellent cache performance, and a foundation that nearly every other data structure builds on.

The tradeoff is equally clear: inserting or removing elements anywhere except the end costs O(n) because everything must shift. Understanding this tradeoff is the first step to knowing which data structure fits a given problem.

The problems listed above are not random LeetCode exercises. Each one teaches a specific way to leverage (or work around) the strengths and weaknesses of arrays. Master them, and you will have a toolkit that covers the vast majority of array interview problems you will ever see.