Verification vs. Validation: Are We Building It Right, or Building the Right Thing?
There are two questions you need to answer about any piece of software:
- Did we build it right? (Verification)
- Did we build the right thing? (Validation)
These sound similar, but they are fundamentally different questions. Getting them confused is one of the most common mistakes in software engineering, and it leads to projects that pass every test but still fail in production because they solve the wrong problem.
Verification: did we build it right?
Verification is about correctness. Does the software do what the specification says it should do? Does the code match the design? Do the outputs match the expected outputs for a given set of inputs?
Verification asks: given the requirements we wrote down, does this implementation satisfy them?
This is the world of testing, code reviews, and static analysis. You have a specification, you have an implementation, and you check whether the implementation matches the specification.
```python
# Specification: function should return the sum of two numbers
# Verification: does the implementation match?
def add(a, b):
    return a + b

# Unit tests (verification)
assert add(2, 3) == 5    # Pass: matches specification
assert add(-1, 1) == 0   # Pass: matches specification
assert add(0, 0) == 0    # Pass: matches specification
```
The tests pass. The implementation matches the specification. Verification is satisfied. But notice that verification has nothing to say about whether "sum of two numbers" was the right feature to build in the first place. That is validation's job.
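To make the contrast concrete, here is a hypothetical validation failure for that same feature: suppose users actually needed to total an arbitrary list of line items, not add exactly two numbers. The original `add` passes every test against its spec, but the spec itself was wrong. (`total` is an illustrative name, not anything from the example above.)

```python
# Hypothetical: what users may have actually needed -- the sum of any
# number of line items, not exactly two. add(a, b) verifies cleanly
# against its spec, but it is the wrong spec.
def total(amounts):
    """Sum an arbitrary list of amounts."""
    return sum(amounts)

assert total([2, 3]) == 5          # covers the original two-number case
assert total([1, 2, 3, 4]) == 10   # the case the original spec left out
```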
Types of verification
Unit testing verifies individual functions and methods in isolation. Each test checks one behavior against its specification.
Integration testing verifies that modules work together correctly. The individual pieces might work in isolation, but do they produce the right result when combined?
Code reviews are a form of manual verification. Another developer reads your code and checks whether it correctly implements the intended logic.
Static analysis uses tools to verify properties of the code without running it. Type checking, linting, and security scanning are all verification activities. They check whether the code conforms to rules and specifications.
Formal verification uses mathematical proofs to verify that code satisfies its specification for all possible inputs, not just the test cases you thought of. This is rare in practice but used in safety-critical systems like aviation and medical devices.
The common thread: verification always compares the implementation against a specification. If the specification says "return the sorted array," verification checks whether the returned array is actually sorted. It does not ask whether sorting was the right thing to do.
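As a minimal sketch of that sorting example, verification can be written as explicit checks against the spec: the output must be sorted, and it must contain the same elements as the input. (The function and helper names here are illustrative.)

```python
# Hypothetical spec: "return the sorted array".
# Verification compares the output against that spec -- nothing more.

def sort_numbers(values):
    """Implementation under test (here it simply delegates to sorted)."""
    return sorted(values)

def is_sorted(seq):
    """Spec check: every element is <= its successor."""
    return all(a <= b for a, b in zip(seq, seq[1:]))

data = [3, 1, 2]
result = sort_numbers(data)

assert is_sorted(result)       # the output satisfies the spec
assert sorted(data) == result  # same elements, just reordered
# Verification is satisfied -- but nothing here asks whether sorting
# was the right feature to build. That question belongs to validation.
```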
Validation: did we build the right thing?
Validation is about usefulness. Does the software actually solve the user's problem? Does it meet their real needs, not just the requirements document's version of their needs?
Validation asks: even if this software works perfectly, is it the right software?
This is a harder question, and it is where many projects fail. You can build a flawless piece of software that nobody wants to use. Every test passes. Every requirement is met. But the requirements were wrong, or incomplete, or based on assumptions that turned out to be false.
A concrete example
Imagine a team building an internal tool for a warehouse. The requirements say: "The system should allow workers to scan items and update inventory counts." The team builds exactly that. Every unit test passes. The integration tests pass. The code review is clean. Verification is complete.
Then they deploy it. The warehouse workers hate it. Why? Because they need to scan items while walking, and the interface requires two hands. The requirements never mentioned one-handed operation because nobody asked the workers how they actually do their jobs. The software was built right, but it was not the right software.
Validation would have caught this. A prototype test with actual warehouse workers, a user observation session, or even a conversation about their workflow would have revealed the constraint before thousands of engineering hours were spent.
Types of validation
User acceptance testing (UAT) puts the software in front of real users and asks: does this solve your problem? This is the most direct form of validation.
Beta testing releases the software to a small group of real users in a real environment. Their feedback reveals whether the software meets actual needs.
Prototyping builds a simplified version of the software and tests it with users before investing in the full implementation. This catches validation failures early when they are cheap to fix.
A/B testing compares two versions of the software with real users to determine which one better meets their needs. This is validation through data.
Requirements reviews involve stakeholders reviewing the requirements before implementation begins. This is an early form of validation: are we even planning to build the right thing?
The common thread: validation always involves the real world. Real users, real environments, real workflows. It checks whether the software meets actual needs, not just documented requirements.
The classic summary
The distinction is often summarized in two questions:
| | Verification | Validation |
|---|---|---|
| Question | Are we building the product right? | Are we building the right product? |
| Checks against | Specification / requirements | User needs / real-world use |
| Focus | Correctness | Usefulness |
| Methods | Testing, code review, static analysis | UAT, beta testing, prototyping |
| Timing | During development | Before and after development |
| Can be automated | Mostly yes | Mostly no |
| Found by | Developers and QA | Users and stakeholders |
Why both matter
Verification without validation means you build the wrong thing perfectly. Every test passes, but nobody uses the software because it does not solve their actual problem.
Validation without verification means you identified the right problem but built a broken solution. Users want the feature, but it crashes, produces wrong results, or corrupts data.
You need both. Validation ensures you are solving the right problem. Verification ensures your solution actually works.
Where projects fail
Most engineering teams are good at verification. They write tests, run CI pipelines, do code reviews. The tooling is mature and the process is well understood.
Most engineering teams are bad at validation. They assume the requirements are correct. They build what the product manager wrote in the ticket without questioning whether the ticket describes the right thing. They ship features that technically work but miss the point.
The most expensive bugs in software are not off-by-one errors or null pointer exceptions. Those are verification failures, and they are relatively cheap to find and fix. The most expensive bugs are building the wrong feature entirely. That is a validation failure, and it costs months of engineering time.
A useful mental model: verification is an inside-out check (does the code match the spec?), while validation is an outside-in check (does the product match reality?). You need both directions.
Verification and validation in the SDLC
If you have studied the Software Development Lifecycle, verification and validation map directly to its phases:
- Requirements phase: Validation happens here. Are we capturing the right requirements? Do they reflect what users actually need?
- Design phase: Both. Verification checks that the design satisfies the requirements. Validation checks that the design would actually solve the user's problem.
- Implementation phase: Verification dominates. Unit tests, integration tests, code reviews all check the code against the spec.
- Testing phase: Both. Test cases verify correctness. User acceptance testing validates usefulness.
- Maintenance phase: Validation resurfaces. Users provide feedback on whether the software meets their evolving needs.
The V-Model of software development makes this explicit by pairing each development phase with a corresponding verification or validation activity. Requirements pair with acceptance testing (validation). Design pairs with integration testing (verification). Implementation pairs with unit testing (verification).
Verification and validation in coding interviews
Even in a 45-minute coding interview, both concepts apply.
Verification is when you test your solution against the examples. Does your code produce the expected output for the given inputs? Do your edge cases pass? This is what most candidates focus on, and it is important.
Validation is when you make sure you are solving the right problem. Did you read the constraints correctly? Are you handling the right edge cases? Is your solution optimized for the right complexity target? Many candidates fail interviews not because their code is buggy, but because they solved a slightly different problem than the one asked.
The first two minutes of any interview problem should be validation: "Let me make sure I understand what is being asked." Restate the problem. Clarify constraints. Confirm edge cases. This is exactly the requirements phase of the SDLC applied to a 45-minute window.
Common mistakes
Treating testing as only verification. If your test suite only checks "does the code do what the spec says," you are missing half the picture. Some of your tests should validate that the spec itself is correct.
Skipping validation because you trust the requirements. Requirements are written by humans. Humans make assumptions, forget edge cases, and misunderstand user needs. Always question whether the requirements are right, not just whether the code matches them.
Doing validation too late. If you wait until the software is fully built to show it to users, fixing validation failures is extremely expensive. Validate early with prototypes, mockups, and conversations.
Confusing the two. Saying "we tested it" does not mean both verification and validation happened. Automated tests are almost always verification. Validation requires human judgment about whether the software meets real needs.
The takeaway
Verification and validation are both essential, and they answer different questions. Verification asks whether the software works correctly. Validation asks whether it is the right software to build.
The best engineering teams do both continuously. They write tests and run CI (verification). They also talk to users, prototype early, and question assumptions (validation). Neither one alone is enough.
A perfectly verified system that nobody needs is a waste. A validated idea with a broken implementation is useless. You need to build the right thing, and you need to build it right.
Related posts
- How the SDLC Applies to Solving Coding Problems maps the five SDLC phases to problem-solving steps, including where verification and validation fit.
- Implementation in Software Engineering and Coding Interviews covers how technical debt and coding principles affect the quality of your implementation.
- Coupling in Software Design and Cohesion in Software Design cover design principles that make verification easier by keeping modules focused and independent.