Definition of Deployable
Definition
The “definition of deployable” is your organization’s agreed-upon set of non-negotiable quality criteria that every artifact must pass before it can be deployed to any environment. This definition should be automated, enforced by the pipeline, and treated as the authoritative verdict on whether a change is ready for deployment.
Key principles:
- Pipeline is definitive: If the pipeline passes, the artifact is deployable—no exceptions
- Automated validation: All criteria are checked automatically, not manually
- Consistent across environments: The same standards apply whether deploying to test or production
- Fails fast: The pipeline rejects artifacts that don’t meet the standard immediately
Why This Matters
Without a clear, automated definition of deployable, teams face:
- Inconsistent quality standards: Different people have different opinions on “ready”
- Manual gatekeeping: Deployment approvals become bottlenecks
- Surprise failures: Issues that should have been caught earlier appear in production
- Blame culture: Unclear accountability when problems arise
- Deployment fear: Uncertainty about readiness causes risk aversion
A strong definition of deployable creates:
- Confidence: Everyone trusts that pipeline-approved artifacts are safe
- Speed: No waiting for manual approvals or meetings
- Clarity: Unambiguous standards for the entire team
- Accountability: The pipeline (and the team that maintains it) owns quality
What Should Be in Your Definition
Your definition of deployable should include automated checks in the following areas; a sketch of how these checks might be encoded as data appears after the lists below:
Security
- Static security scans (SAST) pass
- Dependency vulnerability scans show no critical issues
- Secrets are not embedded in code
- Authentication/authorization tests pass
Functionality
- All unit tests pass
- Integration tests pass
- End-to-end tests pass
- Regression tests pass
- Business logic behaves as expected
Compliance
- Code meets regulatory requirements
- Audit trails are in place
- Required documentation is generated
- Compliance tests pass
Performance
- Response time meets thresholds
- Resource usage is within acceptable limits
- Load tests pass
- No memory leaks detected
Reliability
- Error rates are within acceptable bounds
- Circuit breakers and retries work correctly
- Graceful degradation is in place
- Health checks pass
Code Quality
- Code style/linting checks pass
- Code coverage meets minimum threshold
- Static analysis shows no critical issues
- Technical debt is within acceptable limits
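One way to make these criteria enforceable is to encode them as data the pipeline reads, rather than as tribal knowledge. The sketch below is a minimal illustration of that idea; every command name is a hypothetical placeholder for whatever tools your team actually runs.

```python
# Sketch: the definition of deployable encoded as data the pipeline consumes.
# Every command name here is a hypothetical placeholder, not a real tool.
DEFINITION_OF_DEPLOYABLE = {
    "security":      ["run-sast-scan", "audit-dependencies", "scan-for-secrets"],
    "functionality": ["run-unit-tests", "run-integration-tests", "run-e2e-tests"],
    "compliance":    ["run-compliance-tests", "generate-audit-docs"],
    "performance":   ["run-load-tests", "check-resource-limits"],
    "reliability":   ["run-chaos-checks", "verify-health-endpoints"],
    "code_quality":  ["run-linter", "check-coverage-threshold", "run-static-analysis"],
}

# Flatten into the ordered list of gates a pipeline would execute.
ALL_GATES = [cmd for checks in DEFINITION_OF_DEPLOYABLE.values() for cmd in checks]
print(f"{len(ALL_GATES)} gates in the definition of deployable")
```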
Example Implementations
Anti-Pattern: Manual Approval Process
Problem: Manual steps delay feedback, introduce inconsistency, and reduce confidence.
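A minimal sketch of the anti-pattern, reduced to its essence: the deployment blocks until a person says yes. The prompt and wording are illustrative assumptions.

```python
# Anti-pattern sketch: a deploy step that blocks on a human decision.
# Quality becomes whatever the gatekeeper happens to check that day.
def deploy(artifact: str) -> None:
    answer = input(f"Release manager: approve deployment of {artifact}? [y/N] ")
    if answer.strip().lower() != "y":
        raise SystemExit("Deployment rejected by manual gatekeeper.")
    print(f"Deploying {artifact}...")

deploy("myservice-1.4.2")
```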
Good Pattern: Automated Pipeline Gates
Benefit: Every commit is automatically validated against all criteria. If it passes, it’s deployable.
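A minimal sketch of an automated gate runner, assuming each criterion is a command that exits non-zero on failure. The specific tools (ruff, bandit, pip-audit, pytest) are illustrative stand-ins, not requirements.

```python
"""Sketch of automated pipeline gates: run every check in order and fail
fast on the first non-zero exit. Tool choices are illustrative assumptions."""
import subprocess
import sys

GATES = [
    ("lint", ["ruff", "check", "."]),
    ("static security scan", ["bandit", "-r", "src"]),
    ("dependency audit", ["pip-audit"]),
    ("unit tests", ["pytest", "tests/unit", "-q"]),
    ("integration tests", ["pytest", "tests/integration", "-q"]),
]

def main() -> int:
    for name, cmd in GATES:
        print(f"gate: {name}")
        if subprocess.run(cmd).returncode != 0:
            print(f"FAILED: {name} -- the artifact is not deployable")
            return 1
    print("All gates passed: the artifact is deployable to any environment.")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```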
What is Improved
- Removes bottlenecks: No waiting for manual approval meetings
- Increases quality: Automated checks catch issues more consistently than ad-hoc manual review
- Reduces cycle time: Deployable artifacts are identified in minutes, not days
- Improves collaboration: Shared understanding of quality standards
- Enables continuous delivery: Trust in the pipeline makes frequent deployments safe
- Reduces stress: Clear criteria eliminate guesswork and blame
Common Patterns
Progressive Quality Gates
Structure your pipeline to fail fast on quick checks, then run expensive tests:
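A sketch of that staging, assuming three tiers ordered by cost; the tier contents are placeholders, and a stub stands in for real check runs.

```python
# Progressive gates: cheap checks first, expensive suites only if they pass.
# Tier contents and the stubbed runner are illustrative assumptions.
STAGES = [
    ("commit stage (seconds)", ["lint", "unit tests"]),
    ("acceptance stage (minutes)", ["integration tests", "SAST scan"]),
    ("extended stage (tens of minutes)", ["end-to-end tests", "load tests"]),
]

def run_check(name: str) -> bool:
    # Stub: replace with a real invocation of the named check.
    print(f"  running {name}")
    return True

for stage, checks in STAGES:
    print(stage)
    # all() short-circuits, so a failure stops before costlier stages run.
    if not all(run_check(c) for c in checks):
        raise SystemExit(f"Failed in '{stage}'; later, costlier stages never run.")
print("deployable")
```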
Context-Specific Definitions
Some criteria may vary by context:
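For example, an internal tool might tolerate higher latency than a regulated payments service while still sharing the same baseline. The contexts and numbers in this sketch are illustrative assumptions.

```python
# Sketch: one shared baseline definition, with context-specific thresholds
# layered on top. Context names and values are illustrative assumptions.
BASELINE = {
    "coverage_min": 0.80,
    "max_p95_latency_ms": 500,
    "allow_critical_vulns": False,
}

CONTEXT_OVERRIDES = {
    "internal-tool": {"max_p95_latency_ms": 1500},  # looser latency bar
    "payments-service": {"coverage_min": 0.90},     # stricter for regulated code
}

def definition_for(context: str) -> dict:
    return {**BASELINE, **CONTEXT_OVERRIDES.get(context, {})}

print(definition_for("payments-service"))
```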
Error Budget Approach
Use error budgets to balance speed and reliability:
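A minimal sketch of the arithmetic, assuming a 99.9% availability SLO over a 30-day window (about 43 minutes of allowable downtime); the observed figure is a placeholder.

```python
# Sketch of an error-budget check against an assumed 99.9% availability SLO.
SLO = 0.999
WINDOW_MINUTES = 30 * 24 * 60
budget_minutes = (1 - SLO) * WINDOW_MINUTES      # ~43.2 minutes of downtime

observed_downtime_minutes = 30.0                 # placeholder measurement
remaining = budget_minutes - observed_downtime_minutes

if remaining <= 0:
    print("Error budget exhausted: pause feature deploys, prioritize reliability.")
else:
    print(f"{remaining:.1f} min of budget left ({remaining / budget_minutes:.0%}).")
```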
If error budget is exhausted, focus shifts to reliability work instead of new features.
FAQ
Who decides what goes in the definition of deployable?
The entire team—developers, QA, operations, security, and product—should collaboratively define these standards. It should reflect genuine risks and requirements, not arbitrary bureaucracy.
What if the pipeline passes but we find a bug in production?
This indicates a gap in your definition of deployable. Add a test to catch that class of bug in the future. The definition should evolve based on production learnings.
Can we skip pipeline checks for “urgent” hotfixes?
No. If the pipeline can’t validate a hotfix quickly enough, that’s a problem with your pipeline, not your process. Fix the pipeline, don’t bypass it. Bypassing quality checks for “urgent” changes is how critical bugs reach production.
How strict should our definition be?
Strict enough to prevent production incidents, but not so strict that it becomes a bottleneck. If your pipeline rejects 90% of commits, your standards may be too rigid. If production incidents are frequent, your standards may be too lax.
Should manual testing be part of the definition?
Manual exploratory testing is valuable for discovering edge cases, but it should inform the definition, not be the definition. Automate the validations that result from manual testing discoveries.
What about things we can’t test automatically?
Some requirements (like UX polish or accessibility) are harder to automate fully. For these:
- Automate what you can (e.g., accessibility checkers, visual regression tests)
- Make manual checks lightweight and concurrent, not blockers
- Continuously work to automate more
Health Metrics
- Pipeline pass rate: Should be 70-90% (too high = tests too lax, too low = tests too strict)
- Pipeline execution time: Should be < 30 minutes for full validation
- Production incident rate: Should decrease over time as definition improves
- Manual override rate: Should be near zero (manual overrides indicate broken process)
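These thresholds can themselves be checked automatically. Below is a sketch using the numbers suggested above; the input metrics are placeholders, not real telemetry.

```python
# Sketch: evaluate pipeline health against the suggested thresholds.
# The metric values are placeholders for real CI telemetry.
metrics = {
    "pipeline_pass_rate": 0.82,    # fraction of runs that pass
    "pipeline_minutes": 24,        # full-validation wall time
    "manual_override_rate": 0.0,   # overrides per deployment
}

findings = []
if not 0.70 <= metrics["pipeline_pass_rate"] <= 0.90:
    findings.append("pass rate outside 70-90%: checks may be too lax or too strict")
if metrics["pipeline_minutes"] >= 30:
    findings.append("pipeline exceeds 30 minutes: split or parallelize stages")
if metrics["manual_override_rate"] > 0:
    findings.append("manual overrides detected: investigate why the pipeline was bypassed")

print("\n".join(findings) or "pipeline health within suggested bounds")
```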
Additional Resources
- Dave Farley: Real Example of a Deployment Pipeline in the Fintech Industry
- Continuous Delivery: The Deployment Pipeline
- Accelerate: Building Quality In - Nicole Forsgren, Jez Humble, Gene Kim
- Site Reliability Engineering: Implementing SLOs