Testing
Confidence, not coverage
What this chapter is
This chapter is not about frameworks. It is not about coverage percentages. It is not about mocks, spies, or tools.
This chapter is about trust.
Testing exists so you can change a system without fear.
The core truth
Tests do not prove correctness.
They prove stability of behavior.
A test says:
"This must remain true, even when everything else changes."
🧠 Mental Model Tests lock intent in place.
Why testing exists at all
Without tests:
- refactors hesitate
- fear grows
- changes slow
- systems calcify
With tests:
- change becomes routine
- design improves
- confidence replaces caution
Testing is not overhead. Fear is.
Tests protect behavior, not implementation
The biggest testing mistake is testing how something works instead of what it does.
Tests should care about:
- inputs
- outputs
- side effects
- invariants
- failure behavior
They should not care about:
- internal steps
- helper functions
- temporary structure
🧠 Architect's Note Good tests survive refactors. Bad tests resist them.
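As an illustration only — `apply_discount` is an invented example, not from any particular codebase — a behavioral test checks inputs, outputs, and failure behavior, never internal steps:

```python
# Hypothetical function under test. Its internals could be rewritten
# entirely; the tests below would not notice.
def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent, never accepting invalid percents."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Behavioral tests: inputs, outputs, and failure behavior only.
assert apply_discount(100.0, 10) == 90.0    # known input, known output
assert apply_discount(100.0, 100) == 0.0    # boundary: full discount
try:
    apply_discount(100.0, 150)              # failure behavior is part of the contract
    assert False, "expected ValueError"
except ValueError:
    pass
```

Because nothing here touches helper functions or intermediate state, any refactor that preserves the contract leaves these tests green.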
Invariants are the highest-value tests
An invariant is something that must always be true.
Examples:
- balances never go negative
- users cannot access forbidden data
- state transitions are valid
- identity never changes
Invariants deserve tests first.
🧠 Mental Model If an invariant breaks, the system is lying.
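A minimal sketch of an invariant check, assuming a toy `Account` type invented for this example: the test asserts the invariant after every operation, regardless of which operations ran.

```python
import random

# Hypothetical example type; the invariant is "balance never goes negative".
class Account:
    def __init__(self) -> None:
        self.balance = 0

    def deposit(self, amount: int) -> None:
        self.balance += amount

    def withdraw(self, amount: int) -> None:
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

# Drive the account with many random valid operations; check the
# invariant after each one, not any particular sequence of steps.
rng = random.Random(0)  # seeded, so the test is deterministic
acct = Account()
for _ in range(1000):
    op = rng.choice(["deposit", "withdraw"])
    amount = rng.randint(1, 50)
    try:
        getattr(acct, op)(amount)
    except ValueError:
        pass  # a rejected withdrawal is fine; a negative balance is not
    assert acct.balance >= 0, "invariant violated: negative balance"
```

The point is the shape of the test: it protects a truth that must hold always, not a trace of how the code got there.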
Boundaries deserve tests
Boundaries are where:
- assumptions meet reality
- failures occur
- misuse happens
Test boundaries:
- input validation
- external interfaces
- persistence edges
- failure paths
🧠 Perspective Most serious bugs enter through boundaries.
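One way to sketch a boundary test, with `parse_age` as a hypothetical validator at the edge where raw outside input meets the system:

```python
# Hypothetical validation boundary: untrusted string in, trusted int out.
def parse_age(raw: str) -> int:
    value = int(raw)          # raises ValueError on non-numeric input
    if not 0 <= value <= 150:
        raise ValueError("age out of range")
    return value

# Test the boundary from both sides: what crosses, and what is rejected.
assert parse_age("42") == 42
for bad in ["-1", "200", "abc"]:
    try:
        parse_age(bad)
        assert False, f"expected rejection of {bad!r}"
    except ValueError:
        pass
```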
Failure paths matter more than success paths
Happy paths are easy. Failure paths are forgotten.
Testing failure paths ensures:
- partial failures don't corrupt state
- retries are safe
- recovery is possible
🧠 Architect's Note If failure isn't tested, it isn't designed.
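A sketch of a retry-safety test, assuming a hypothetical `process_payment` that deduplicates by payment id: retrying the same payment must not change state twice.

```python
# Invented example: a payment processor made idempotent by a dedup key.
processed: set = set()
balances = {"alice": 100}

def process_payment(payment_id: str, user: str, amount: int) -> None:
    if payment_id in processed:   # duplicate delivery: a no-op
        return
    balances[user] -= amount
    processed.add(payment_id)

# The failure-path test: simulate a retry and assert state is unchanged.
process_payment("pay-1", "alice", 30)
process_payment("pay-1", "alice", 30)   # simulated retry after a timeout
assert balances["alice"] == 70, "retry must not double-charge"
```

The test encodes the design decision directly: if retries are supposed to be safe, a test somewhere must retry.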
Tests enable design feedback
Tests are not just safety nets. They are design mirrors.
When tests are:
- hard to write
- fragile
- overly complex
…the design is usually the problem.
🧠 Mental Model Difficulty testing is a signal, not an inconvenience.
Unit vs integration is the wrong debate
The real questions are:
- What behavior am I protecting?
- What assumptions am I verifying?
- Where does failure hurt most?
Use the smallest test that gives confidence. Use larger tests where boundaries demand it.
🧠 Perspective Confidence, not purity, is the goal.
Tests and time
Systems change. Tests must change more slowly.
If every small refactor breaks tests:
- tests are overfitted
- intent is unclear
- trust erodes
Tests should anchor behavior, not freeze structure.
Testing and ownership
Tests encode responsibility.
When a test fails, it should be obvious:
- what broke
- why it matters
- who owns the fix
Ambiguous tests create blame. Clear tests create action.
Minimal practice (still no code)
Problem: "A system processes payments and updates balances."
Ask:
- What must always be true after processing?
- What must never happen?
- What happens if processing is retried?
- What failures are acceptable?
Write tests for those answers — not the steps.
What beginners gain here
- Confidence to change code
- Less fear of breaking things
- Better understanding of intent
What experienced developers recognize
- Why refactors stall
- Why test suites feel brittle
- Why velocity drops over time
Tests protected structure, not behavior.
What this chapter deliberately avoids
- Testing libraries
- Tool comparisons
- Mocking strategies
- Coverage metrics
Those are secondary.
Closing
Testing is not about perfection. It is about permission.
Permission to change. Permission to refactor. Permission to improve.
A system without tests demands caution. A system with good tests invites progress.