Technical Due Diligence Checklist for Startup Investors
As both a startup founder and angel investor, I’ve been on both sides of technical due diligence. This checklist reflects what I look for when evaluating a company’s technical capabilities before an investment decision.
Why Technical Due Diligence Matters
Technical due diligence isn’t about finding a perfect codebase—those don’t exist. It’s about understanding:
- Can this team execute? Technical capability is a proxy for execution ability
- Are there hidden risks? Security vulnerabilities, scalability limits, technical debt
- Is the technology defensible? Does it create sustainable competitive advantage?
- What investment is needed? Post-funding technical needs and timeline
The goal is informed investment decisions, not technical perfection.
Architecture and Infrastructure
System Design
Architecture documentation
- Architecture diagram exists and is current
- Components and their interactions are clearly defined
- Data flows are documented
- External dependencies are identified
Questions to ask:
- Walk me through how a typical request flows through your system
- What happens when this component fails?
- How has your architecture evolved over the past year?
Separation of concerns
- Clear boundaries between frontend, backend, and data layers
- Services have well-defined responsibilities
- Business logic is separated from infrastructure concerns
- API contracts are explicit
Red flags:
- Monolithic applications with no decomposition plan
- Circular dependencies between components
- Business logic scattered across multiple layers
Scalability approach
- Current capacity is understood and documented
- Scaling strategy is articulated (vertical, horizontal, or both)
- Bottlenecks are identified
- Cost implications of scaling are understood
Questions to ask:
- What happens at 10x current load?
- Where will the system break first under scale?
- How quickly can you scale up? Scale down?
Resilience and fault tolerance
- Single points of failure are identified
- Failure modes are understood
- Recovery procedures exist
- Disaster recovery plan is documented
Red flags:
- No monitoring or alerting
- Single database with no replication
- Critical processes running on single instances
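One concrete thing I look for in a resilience discussion is whether the team handles transient failures deliberately rather than letting them cascade. A minimal sketch of the retry-with-backoff pattern I hope to see somewhere in the codebase (the `fn` and `sleep` parameters are placeholders for a real call and timer, not anything from a specific company):

```python
import random
import time

def call_with_retries(fn, max_attempts=4, base_delay=0.5, sleep=time.sleep):
    """Call fn(), retrying transient failures with exponential backoff and jitter."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts:
                raise  # exhausted retries: surface the failure
            # Exponential backoff with jitter to avoid synchronized retry storms.
            delay = base_delay * (2 ** (attempt - 1)) * random.uniform(0.5, 1.5)
            sleep(delay)
```

Teams that have thought about failure modes can usually point to where this lives (client library, service mesh, or queue) and explain why the backoff and give-up thresholds are set the way they are.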
Infrastructure
Cloud architecture
- Cloud provider selection is justified
- Infrastructure is documented
- Multi-region or availability zone strategy exists
- Backup and recovery processes are tested
Infrastructure as code
- Infrastructure is codified (Terraform, CloudFormation, Pulumi)
- Changes are version controlled
- Deployment is repeatable
- Environments can be reproduced
Red flags:
- Manual server configuration
- No documentation of infrastructure setup
- “Snowflake” servers that can’t be reproduced
Environment management
- Development, staging, and production are properly separated
- Environment parity is maintained
- Data handling between environments follows best practices
- Access controls differ appropriately by environment
Cost management
- Cloud costs are monitored
- Cost trends are understood
- Optimization practices exist
- Budget vs. actual is tracked
Code Quality and Practices
Development Practices
Version control
- Git (or equivalent) is used properly
- Branching strategy is defined and followed
- Commit history is meaningful
- Code reviews are required before merge
Questions to ask:
- How does code get from development to production?
- Who can merge to main/master?
- How do you handle hotfixes?
Code review process
- All code is reviewed before merging
- Review standards are documented
- Reviewers have appropriate context
- Feedback is constructive and acted upon
CI/CD pipeline
- Automated build process exists
- Tests run automatically
- Deployment is automated or semi-automated
- Rollback capability exists
Red flags:
- Manual deployments to production
- No automated testing
- “It works on my machine” deployment culture
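When I ask about rollback capability, I want to hear that reverting is an automated branch of the deploy flow, not a manual scramble. A toy sketch of the shape I listen for, with all the callables (`activate`, `health_check`) standing in for a real pipeline’s traffic switch and monitoring steps:

```python
def deploy(new_release, activate, health_check, current_release):
    """Activate new_release; revert to current_release if health checks fail.

    All arguments are hypothetical stand-ins for a real pipeline's
    build artifact, traffic-switch step, and post-deploy monitoring.
    """
    activate(new_release)
    if health_check():
        return new_release          # checks passed: new release stays live
    activate(current_release)       # automated rollback to the known-good release
    return current_release
```

The detail that matters isn’t this function; it’s whether the team can tell you, without hesitation, what their `health_check` equivalent actually measures and how long a rollback takes end to end.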
Testing strategy
- Unit tests exist for critical functionality
- Integration tests cover key workflows
- Test coverage is measured and reasonable
- Tests run reliably (not flaky)
Questions to ask:
- What’s your test coverage? What areas are well-covered vs. sparse?
- How long does your test suite take to run?
- When was the last time tests caught a bug before production?
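When I sample tests, I’m looking for focused unit tests around revenue-critical logic, covering the happy path, the edge cases, and the rejected inputs. A sketch of the level of rigor I hope to find, using an entirely hypothetical billing helper:

```python
import unittest

def apply_discount(price_cents, percent):
    """Hypothetical billing helper: apply a percentage discount, rounding down."""
    if not (0 <= percent <= 100):
        raise ValueError("percent must be between 0 and 100")
    return price_cents * (100 - percent) // 100

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(10_000, 25), 7_500)

    def test_zero_and_full_discount(self):
        self.assertEqual(apply_discount(999, 0), 999)
        self.assertEqual(apply_discount(999, 100), 0)

    def test_rejects_out_of_range(self):
        with self.assertRaises(ValueError):
            apply_discount(999, 150)
```

A codebase where money-handling code has tests like these, and UI glue mostly doesn’t, usually signals a team that prioritizes coverage sensibly rather than chasing a number.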
Code Assessment
Code organization
- Structure is logical and consistent
- Naming conventions are followed
- Files and modules have single responsibilities
- Code is navigable for new developers
Technical debt
- Technical debt is acknowledged and tracked
- Debt is being actively managed
- No critical debt blocking progress
- Debt doesn’t indicate deeper problems
Questions to ask:
- What would you rebuild if you started over today?
- What technical debt keeps you up at night?
- How do you prioritize paying down debt vs. new features?
Dependency management
- Dependencies are current (or reasonably so)
- Known vulnerabilities are addressed
- Dependency choices are justified
- Lock files are used for reproducibility
Red flags:
- Major dependencies years out of date
- Known critical vulnerabilities unpatched
- Excessive dependencies for project size
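A quick smoke test during review is whether production dependencies are actually pinned. This toy check over pip-style requirement lines illustrates the idea; a real audit would use purpose-built tooling such as pip-audit or Dependabot rather than a regex:

```python
import re

def unpinned_requirements(lock_text):
    """Return pip-style requirement lines not pinned to an exact version.

    A deliberately simplified check for illustration only.
    """
    unpinned = []
    for line in lock_text.splitlines():
        line = line.split("#")[0].strip()   # drop comments and surrounding whitespace
        if not line:
            continue
        if not re.match(r"^[A-Za-z0-9._\-\[\]]+==", line):
            unpinned.append(line)
    return unpinned
```

If a five-minute script like this turns up surprises, it’s worth asking how the team would even know when a dependency drifts or a vulnerability lands.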
Bus factor
- Knowledge is distributed across team
- No single person owns critical systems exclusively
- Documentation enables knowledge transfer
- On-call rotation includes multiple people
Security Posture
Application Security
Authentication and authorization
- Authentication is properly implemented
- Authorization checks are consistent
- Session management follows best practices
- Password policies are appropriate
Red flags:
- Custom authentication implementation (vs. established libraries)
- Authorization checked inconsistently
- Passwords stored in plaintext or with weak hashing (e.g., unsalted MD5 or SHA-1)
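What “proper” password storage looks like, sketched with Python’s standard library (iteration count chosen for illustration; in production I’d expect a vetted library such as bcrypt or argon2 behind an established auth framework, not hand-rolled code):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=600_000):
    """Salted PBKDF2-SHA256 hash; per-user random salt, high iteration count."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password, salt, expected, iterations=600_000):
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(digest, expected)  # constant-time comparison
```

If a team can explain why each piece is there (salt, slow hash, constant-time compare), that’s a good sign even when the actual implementation is delegated to a library, as it should be.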
Input validation
- Input is validated on backend (not just frontend)
- SQL injection protections exist
- XSS protections are implemented
- File upload validation exists
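For SQL injection specifically, the thing I grep for is string interpolation into queries. The safe pattern binds user input as a parameter, shown here with an in-memory SQLite table (the schema is invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO users (email) VALUES (?)", ("alice@example.com",))

def find_user(conn, email):
    """Parameterized query: user input is bound as data, never interpolated into SQL."""
    return conn.execute(
        "SELECT id, email FROM users WHERE email = ?", (email,)
    ).fetchone()

# A classic injection payload is treated as a literal string and matches nothing:
assert find_user(conn, "' OR '1'='1") is None
```

A codebase that mixes parameterized queries with f-string SQL in the same module suggests the practice isn’t enforced, which matters more than any single query.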
Secrets management
- Secrets are not in code repositories
- Secrets are properly encrypted
- Access to secrets is controlled
- Secrets rotation is possible
Questions to ask:
- How do you manage API keys and credentials?
- Who has access to production secrets?
- When were credentials last rotated?
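The minimum bar I look for is that secrets live outside the repository and that missing configuration fails loudly rather than silently falling back to a hard-coded default. A sketch of that interface (the variable names are illustrative; real deployments would typically put a manager such as Vault or AWS Secrets Manager behind it):

```python
import os

def get_secret(name):
    """Read a secret from the environment; fail loudly if it is absent.

    Illustrative only: never ship a hard-coded fallback value here.
    """
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"missing required secret: {name}")
    return value
```

Grep the history, not just the current tree: a key committed and later “removed” is still compromised until it’s rotated.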
Security logging and monitoring
- Security events are logged
- Logs are retained appropriately
- Anomaly detection exists (or is planned)
- Incident response procedures exist
Data Protection
Encryption
- Data is encrypted at rest
- Data is encrypted in transit
- Encryption keys are properly managed
- Current encryption standards are used (e.g., AES-256 at rest, TLS 1.2+ in transit)
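For encryption in transit, a quick check is whether outbound clients keep certificate verification on and refuse legacy protocol versions. Python’s standard library makes the expected baseline easy to inspect:

```python
import ssl

# Default client context: certificate verification and hostname checking are on.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse legacy TLS 1.0/1.1

assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname is True
```

The corresponding red flag in code review is `verify=False` or a custom “accept any certificate” handler, which usually started as a workaround and never got removed.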
PII handling
- PII is identified and documented
- Handling follows regulations (GDPR, CCPA, etc.)
- Data minimization is practiced
- Deletion procedures exist
Access controls
- Principle of least privilege is followed
- Access is audited
- Off-boarding removes access promptly
- Third-party access is controlled
Questions to ask:
- What personal data do you collect? Where is it stored?
- How do you handle data deletion requests?
- Who has access to production data?
Security History
Previous incidents
- Past security incidents are disclosed
- Root causes were identified
- Remediation was completed
- Lessons were applied
Security assessments
- Penetration testing has been conducted
- Vulnerabilities from assessments were addressed
- Regular assessments are planned
- Third-party security review exists (for high-risk areas)
Team Assessment
Technical Leadership
CTO/Technical Lead evaluation
- Has relevant technical experience
- Can articulate architecture and decisions
- Demonstrates ownership of technical strategy
- Commands respect from engineering team
Questions to ask:
- What’s the biggest technical challenge you’ve faced? How did you solve it?
- What would you do differently if starting over?
- How do you balance technical excellence with shipping speed?
Decision-making
- Technical decisions have clear ownership
- Trade-offs are understood and documented
- Decisions are revisited when context changes
- Learning from mistakes is evident
Engineering Team
Team composition
- Team size is appropriate for stage
- Skills match technology stack
- Seniority mix is balanced
- Roles and responsibilities are clear
Tenure and turnover
- Reasonable tenure for stage
- Turnover is not excessive
- Departures are understood
- Knowledge transfer occurred for departures
Red flags:
- Excessive recent departures
- All institutional knowledge with founders
- Inability to explain why people left
Team culture
- Team communicates effectively
- Collaboration is evident
- Learning and growth are supported
- Work-life balance is reasonable
Hiring capability
- Hiring pipeline exists
- Interview process is defined
- Ability to attract talent is demonstrated
- Compensation is competitive
Questions I Always Ask
These questions reveal more than any checklist:
“Walk me through your deployment process” Reveals: automation level, risk management, team coordination, operational maturity.
“How would you handle 10x current load?” Reveals: scalability thinking, architecture understanding, planning capability.
“What’s your biggest technical debt item?” Reveals: self-awareness, honesty, prioritization ability.
“How do you prioritize bugs vs. features?” Reveals: quality culture, customer orientation, decision-making process.
“What would you build differently if starting over?” Reveals: learning ability, hindsight clarity, intellectual honesty.
“Tell me about your last production incident” Reveals: operational maturity, incident response, learning culture.
The quality of answers—not just content but depth, honesty, and self-awareness—reveals technical maturity more than specific technical details.
Red Flags Summary
Immediate concerns (often deal-breakers)
- Security vulnerabilities with customer data exposure
- No version control or backup systems
- Critical knowledge with single person who might leave
- Fundamental architecture flaws requiring complete rewrite
- Deceptive or evasive answers about technical state
Significant concerns (require remediation plan)
- Outdated dependencies with known vulnerabilities
- No automated testing or CI/CD
- Poor documentation of critical systems
- Excessive technical debt impeding velocity
- Security practices well below industry standards
Minor concerns (note for post-investment)
- Code style inconsistencies
- Incomplete documentation
- Some manual processes that could be automated
- Test coverage gaps in non-critical areas
Making the Investment Decision
Technical due diligence informs but doesn’t determine investment decisions. Consider:
Can issues be fixed? Most technical problems can be addressed with investment and time. The question is whether the team can do it.
Is the team capable? A strong team with technical debt is better than a weak team with clean code. Evaluate trajectory, not just current state.
What’s the investment required? Factor technical remediation into post-funding plans and burn rate.
Are there fundamental issues? Some problems indicate deeper issues—deceptive founders, incompetent leadership, or flawed business assumptions.
How I Support Due Diligence
For investors conducting technical due diligence, I provide:
Architecture review: Deep dive into system design and scalability potential.
Code assessment: Sampling of key modules for quality, security, and maintainability.
Team interviews: Evaluation of technical leadership and team capabilities.
Risk summary: Prioritized list of technical concerns with remediation estimates.
Investment recommendation: Overall technical assessment with go/no-go guidance.
If you’re evaluating a technology investment and need expert technical assessment, let’s discuss the engagement.