Software Architecture Patterns in Coding Problems
If you have taken a software engineering course, you have probably seen architecture patterns on a slide deck somewhere. Pipe-and-filter. Client-server. Layered. Master-slave. You memorize them for the exam, draw some boxes and arrows, and move on with your life.
But here is a secret that nobody tells you in that class: these same software architecture patterns show up every time you solve a coding problem. The reason architecture exists is because systems are too complex to build as one giant blob. You have to break them into focused, independent pieces that communicate through clean interfaces.
That is the exact same skill you need to solve hard algorithm problems. Decomposition is the core of both disciplines. The best interview candidates do not just write code. They architect their solutions.
The four patterns
Before we connect these to code, let's look at the four architecture patterns side by side.
Each pattern answers a different question about how components should be organized. And each one has a direct analog in how you structure a coding solution.
Pipe-and-Filter = Multi-pass Array Processing
Product of Array Except Self: forward pass, backward pass, combine. Each stage takes an array and produces an array.
Also see: Sliding Window, prefix/suffix patterns
Client-Server = Data Structure with Query API
LRU Cache: the cache is the server, get/put are client requests, hash map + linked list is the hidden implementation.
Also see: Implement Trie, Min Stack, design problems
Master-Slave = Divide-and-Conquer / Tree Recursion
Maximum Depth of Binary Tree: the root delegates to left and right children, then combines their answers.
Also see: Merge Intervals, Course Schedule, any recursive decomposition
Layered = DP Tables and BFS Levels
Coin Change: dp[amount] only depends on dp[amount - coin]. Each layer builds on the previous one.
Also see: Number of Islands (BFS levels), Climbing Stairs
Let's walk through each one.
Breaking problems down is architecture
The core skill of software architecture is decomposition: taking something large and complex and splitting it into smaller, focused pieces that each do one thing well. That is also the core skill of problem solving.
When you face a hard coding problem, your first job is never "write the code." Your first job is to figure out how the problem breaks into smaller sub-problems. Once you see the sub-problems, the code almost writes itself.
Take Climbing Stairs. At first it looks like you need to enumerate all possible ways to climb n steps. That sounds overwhelming. But the moment you decompose it, the problem collapses:
- To reach step i, you either came from step i - 1 or step i - 2.
- So dp[i] = dp[i - 1] + dp[i - 2].
That is architecture. You took a big, vague problem and decomposed it into two small, precise sub-problems. The solution follows directly from the decomposition.
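That recurrence translates almost directly into code. Here is a minimal Python sketch (the function name is my own; it keeps only the last two layers instead of a full table, since dp[i] never looks further back than dp[i - 2]):

```python
def climb_stairs(n: int) -> int:
    """Number of distinct ways to climb n steps, taking 1 or 2 at a time."""
    if n <= 2:
        return n
    prev2, prev1 = 1, 2  # dp[1] and dp[2]
    for _ in range(3, n + 1):
        # dp[i] = dp[i - 1] + dp[i - 2]: the whole decomposition in one line
        prev2, prev1 = prev1, prev2 + prev1
    return prev1
```

Once the decomposition is clear, the loop body is a single line.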
This is not a coincidence. It is the same mental skill, whether you are designing a microservice or solving a LeetCode problem. The best engineers decompose instinctively.
Pipe-and-filter in code
In software architecture, pipe-and-filter means data flows through a sequence of processing stages. Each filter takes input, transforms it, and passes the result to the next filter. The constraint is that each stage works with the same type of data. Input goes in, transformed output comes out.
This pattern is everywhere in coding problems. Whenever you solve something with multiple passes over the data, each pass transforming or accumulating information, you are building a pipe-and-filter pipeline.
Product of Array Except Self
Product of Array Except Self is a textbook pipeline:
- Forward pass (Filter 1): Build the prefix products array. Takes an array, produces an array.
- Backward pass (Filter 2): Build the suffix products array. Takes an array, produces an array.
- Combine step (Filter 3): Multiply prefix and suffix at each index. Takes two arrays, produces the final array.
Each stage has one job. Each stage takes array-shaped data and produces array-shaped data. The stages are independent enough that you could reason about each one in isolation. That is the power of pipe-and-filter.
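The three filters above can be sketched in Python like this (function name mine; each stage reads one array and writes another):

```python
def product_except_self(nums: list[int]) -> list[int]:
    n = len(nums)

    # Filter 1 (forward pass): prefix[i] = product of everything left of i
    prefix = [1] * n
    for i in range(1, n):
        prefix[i] = prefix[i - 1] * nums[i - 1]

    # Filter 2 (backward pass): suffix[i] = product of everything right of i
    suffix = [1] * n
    for i in range(n - 2, -1, -1):
        suffix[i] = suffix[i + 1] * nums[i + 1]

    # Filter 3 (combine): multiply the two intermediate arrays index by index
    return [prefix[i] * suffix[i] for i in range(n)]
```

Because each filter only touches its own input and output, you can test each pass in isolation before wiring them together.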
Sliding Window as a filter pipeline
The Sliding Window Pattern is another pipeline, but the "filters" run in a loop:
- Expand: Move the right pointer to include more data.
- Validate: Check if the current window meets the constraint.
- Contract: If the window is invalid (or we want to try smaller), move the left pointer.
- Record: Update the best answer seen so far.
Each step is a filter. Data (the window state) flows through them in sequence. You can debug any step independently. If your answer is wrong, you can ask: "Is my expand step correct? Is my validation correct? Is my contraction correct?" Pipe-and-filter gives you that isolation.
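As one concrete instance of this loop-shaped pipeline, here is a sketch of Minimum Size Subarray Sum (shortest contiguous subarray with sum at least target), with the four filters labeled:

```python
def min_subarray_len(target: int, nums: list[int]) -> int:
    """Length of the shortest subarray with sum >= target, or 0 if none."""
    best = float("inf")
    window_sum = 0
    left = 0
    for right, value in enumerate(nums):
        window_sum += value                      # Expand: grow the window
        while window_sum >= target:              # Validate: constraint met?
            best = min(best, right - left + 1)   # Record: note the best answer
            window_sum -= nums[left]             # Contract: shrink from the left
            left += 1
    return 0 if best == float("inf") else best
```

If this returns a wrong answer, you can check each labeled step independently, which is exactly the isolation the pipeline buys you.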
Client-server in code
The client-server pattern is about separation of concerns through a request-response interface. A client asks for something. A server does the work internally and returns a result. The client does not know or care how the server works inside. It just trusts the API.
In coding problems, this pattern shows up whenever you build a data structure that responds to queries. The "server" is your data structure. The "client" is the code that calls its methods. And the key insight is that the internal implementation is hidden behind a clean interface.
LRU Cache
LRU Cache is the purest client-server problem on LeetCode. Think about it:
- The server: A cache backed by a hash map and a doubly linked list.
- The client API: Two operations, get(key) and put(key, value).
- The hidden implementation: The linked list maintains access order. The hash map provides O(1) lookups. The client never sees any of this.
When you design your LRU Cache, you are literally designing a server. You decide what data structures to use internally. You define the interface. You make sure the client gets correct responses regardless of what happens under the hood. This is architecture.
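A compact sketch of that server in Python (here collections.OrderedDict stands in for the hash map plus doubly linked list; the client only ever sees get and put):

```python
from collections import OrderedDict

class LRUCache:
    """The 'server': get/put is the public API; everything else is hidden."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data = OrderedDict()  # insertion order doubles as recency order

    def get(self, key: int) -> int:
        if key not in self.data:
            return -1
        self.data.move_to_end(key)  # mark key as most recently used
        return self.data[key]

    def put(self, key: int, value: int) -> None:
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict the least recently used
```

Swapping OrderedDict for a hand-rolled linked list would change the internals but not the interface, which is the whole point of client-server.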
Implement Trie
Implement Trie follows the same pattern. The server is a tree of nodes. The client API is insert(word), search(word), and startsWith(prefix). The caller does not need to know about the tree structure, the children maps, or the isEnd flags. They just call the API and get answers.
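Here is one way the server side might look, as a sketch using nested dicts for the children maps (the "$" sentinel standing in for the isEnd flag is my own convention):

```python
class Trie:
    def __init__(self):
        self.root = {}  # each node maps char -> child node; "$" marks a word end

    def insert(self, word: str) -> None:
        node = self.root
        for ch in word:
            node = node.setdefault(ch, {})
        node["$"] = True

    def _walk(self, prefix: str):
        """Internal helper: follow prefix through the tree, or return None."""
        node = self.root
        for ch in prefix:
            if ch not in node:
                return None
            node = node[ch]
        return node

    def search(self, word: str) -> bool:
        node = self._walk(word)
        return node is not None and "$" in node

    def startsWith(self, prefix: str) -> bool:
        return self._walk(prefix) is not None
```

Note that _walk is an internal detail: the client API remains exactly insert, search, and startsWith.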
If you understand client-server architecture, you understand how to approach every "Design X" problem on LeetCode. Define the interface first, then build the internals to support it.
Master-slave in code
The master-slave pattern is about coordination. One component (the master) breaks a problem into pieces, delegates each piece to a worker (slave), and then combines their results. The master does not do the heavy lifting itself. It orchestrates.
In coding, this is divide-and-conquer. It is tree recursion. It is any problem where a function breaks the work into sub-problems, delegates them to recursive calls, and merges the results.
Merge Intervals
Merge Intervals has a clear master-slave structure. The "master" function does two things:
- Sorts the intervals (preprocessing, like a master organizing work before delegation).
- Iterates and merges overlapping intervals, effectively delegating the "should I merge these two?" decision to a comparison at each step.
The master coordinates the overall strategy. The comparison logic at each step is the worker doing a small, focused task.
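That master-plus-worker split looks like this in Python (function name mine; the one-line overlap check is the "worker"):

```python
def merge_intervals(intervals: list[list[int]]) -> list[list[int]]:
    # Master step 1: organize the work by sorting on start time
    merged = []
    for start, end in sorted(intervals):
        # Worker step: should this interval merge with the previous one?
        if merged and start <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], end)  # extend the last interval
        else:
            merged.append([start, end])              # start a new interval
    return merged
```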
Maximum Depth of Binary Tree
Maximum Depth of Binary Tree is the clearest example of master-slave recursion. Each node is a master that:
- Delegates to the left child: "What is your depth?"
- Delegates to the right child: "What is your depth?"
- Combines the results: 1 + max(leftDepth, rightDepth).
The root does not compute the entire tree's depth by itself. It delegates to its children, who delegate to their children, all the way down. Each node trusts its workers to return the right answer. This is exactly how a master-slave architecture works in a distributed system: the master sends tasks out, collects results, and aggregates them.
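The delegation above is three lines of code (the TreeNode class here is the standard LeetCode shape):

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def max_depth(root) -> int:
    if root is None:
        return 0                        # an absent worker reports depth 0
    left = max_depth(root.left)         # delegate to the left child
    right = max_depth(root.right)       # delegate to the right child
    return 1 + max(left, right)         # combine the workers' answers
```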
Course Schedule
Course Schedule takes this further. The main function is the master that must check every node in the graph for cycles. It does not do all the DFS traversal itself in one giant loop. Instead, it iterates over every node and delegates a DFS call for each one. Each DFS call is a worker exploring one connected component. The master coordinates which nodes still need visiting and aggregates the final answer.
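One sketch of that coordination, using a three-state DFS for cycle detection (state names and helper structure are my own):

```python
def can_finish(num_courses: int, prerequisites: list[list[int]]) -> bool:
    graph = [[] for _ in range(num_courses)]
    for course, prereq in prerequisites:
        graph[prereq].append(course)

    # 0 = unvisited, 1 = on the current DFS path, 2 = fully explored
    state = [0] * num_courses

    def dfs(node: int) -> bool:
        """Worker: explore one node; report True if its subgraph is cycle-free."""
        if state[node] == 1:
            return False          # back edge found: a cycle exists
        if state[node] == 2:
            return True           # already settled by an earlier worker
        state[node] = 1
        for nxt in graph[node]:
            if not dfs(nxt):
                return False
        state[node] = 2
        return True

    # Master: delegate a DFS for every node and aggregate the answers
    return all(dfs(node) for node in range(num_courses))
```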
Layered architecture in code
Layered architecture means your system is organized into horizontal layers, and each layer only communicates with the layer directly above or below it. The presentation layer talks to the business logic layer. The business logic layer talks to the data access layer. You never skip layers.
In coding problems, layered architecture appears in two major places: dynamic programming and BFS.
Coin Change
Coin Change has a beautifully layered structure. The DP table is built layer by layer:
- dp[0] is the base case (0 coins needed for amount 0).
- dp[1] depends only on dp[1 - coin] for each coin denomination.
- dp[2] depends only on dp[2 - coin] for each coin denomination.
- dp[amount] depends only on previous layers.
Each "layer" (each value of amount) only talks to the layer below it (amount - coin). You never skip layers. dp[5] does not magically depend on dp[0] unless there is a coin with value 5. The layered constraint makes the solution correct by construction, because each layer fully resolves before the next one uses it.
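In code, the layering is just the order of the outer loop (function name mine; the inner loop only ever reads lower layers):

```python
def coin_change(coins: list[int], amount: int) -> int:
    """Fewest coins summing to amount, or -1 if it cannot be made."""
    INF = float("inf")
    dp = [INF] * (amount + 1)
    dp[0] = 0  # base layer: zero coins make amount 0
    for amt in range(1, amount + 1):          # build each layer in order
        for coin in coins:
            if coin <= amt and dp[amt - coin] + 1 < dp[amt]:
                dp[amt] = dp[amt - coin] + 1  # reads only a lower layer
    return -1 if dp[amount] == INF else dp[amount]
```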
This is why DP problems are hard at first but become systematic once you see the layered pattern. You just need to figure out what each layer is and how it depends on the previous layer.
Number of Islands
Number of Islands uses BFS, which is inherently layered. When you run BFS from a cell:
- Layer 0: The starting cell.
- Layer 1: All cells at distance 1 from the start.
- Layer 2: All cells at distance 2 from the start.
Each layer is fully processed before the next layer begins. A cell at distance 2 is never visited before all cells at distance 1 are handled. This layered processing is what guarantees BFS finds the shortest path in unweighted graphs. The architecture (layers that depend only on the previous layer) is what makes the algorithm correct.
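Here is a sketch of the full problem, where the FIFO queue is what enforces the layer-by-layer order (marking visited cells in place is a common shortcut; a separate visited set would also work):

```python
from collections import deque

def num_islands(grid: list[list[str]]) -> int:
    if not grid:
        return 0
    rows, cols = len(grid), len(grid[0])
    count = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != "1":
                continue
            count += 1
            # BFS: flood this island; the queue drains one layer at a time
            queue = deque([(r, c)])
            grid[r][c] = "0"  # mark visited so no cell is enqueued twice
            while queue:
                cr, cc = queue.popleft()
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = cr + dr, cc + dc
                    if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == "1":
                        grid[nr][nc] = "0"
                        queue.append((nr, nc))
    return count
```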
The takeaway
Good architecture in a codebase and good architecture in a coding solution come from the same skill: decompose the problem, isolate responsibilities, and make each piece clean and independently understandable.
When you learn software architecture patterns in class, you are not learning something separate from coding. You are learning the same thinking patterns that the best problem solvers use every day:
- Pipe-and-filter teaches you to build multi-pass solutions where each pass has one job.
- Client-server teaches you to design clean interfaces that hide complexity.
- Master-slave teaches you to delegate sub-problems and trust recursive calls.
- Layered teaches you to build solutions where each step depends only on the previous step.
The next time you sit down with a hard problem in an architecture coding interview, do not just start writing code. Ask yourself: what is the architecture of this solution? Which pattern fits? How does this problem decompose?
That question alone will get you further than memorizing a hundred solutions.