Understanding Secure SDLC
What Secure SDLC is and what security does at each development stage.
Introduction
A working application is not always a secure one. Most teams build software using the Software Development Life Cycle (SDLC), which sets goals around features, deadlines, and quality. The SDLC itself does not require any security work, so security usually comes in as a final check before release. By that point, fixing a security bug costs far more than it would have during design or coding.
Secure SDLC (SSDLC) adds security work to every SDLC stage, from Requirements to Maintenance.
This post describes what security does at each stage and which standards to reference.
What is SSDLC?
SSDLC is SDLC with security activities added at each stage. Security is not a separate process running in parallel: the security team joins the same project meetings, writes into the same documents, and tracks work in the same backlog.
Two reasons to do this:
- Cost. A security bug found in Design is a discussion. The same bug found in production is a patch, a regression test, and possibly a public disclosure. The later the catch, the higher the cost.
- Compliance. Standards like ISO 27001:2022 and regulations like SEOJK 21 MRTI expect security to be part of development, not added after.
SDLC has six stages. Here is what each one looks like when security is part of it:
| # | Stage | Project team activity | Security team activity |
|---|---|---|---|
| 1 | Planning | Kickoff, scope, timeline, budget | Not yet active |
| 2 | Requirements | Functional and non-functional needs | Add security requirements (auth, logging, data protection) to the same documents |
| 3 | Design | Architecture, APIs, DB schema, UI mockups | Threat modeling and security design review |
| 4 | Implementation | Write frontend and backend code | Provide secure coding guidance; run SAST, secret scanning, SCA |
| 5 | Testing | Functional and integration testing | DAST and penetration testing |
| 6 | Deployment and Maintenance | Deploy, monitor, patch | Dependency scanning, WAF, bug bounty/VDP, incident response, security regression |
Each stage maps to recognized standards and methodologies:
```mermaid
flowchart TD
    P["1. Planning"] --> R["2. Requirements"]
    R --> D["3. Design"]
    D --> I["4. Implementation"]
    I --> T["5. Testing"]
    T --> M["6. Deployment and Maintenance"]
    R -.-> RS["OWASP ASVS"]
    D -.-> DS["STRIDE / PASTA<br/>OWASP ASVS V1"]
    I -.-> IS["OWASP Cheat Sheets<br/>CWE catalog"]
    T -.-> TS["OWASP WSTG / MAS<br/>NIST SP 800-115"]
    M -.-> MS["CIS Benchmarks<br/>NIST SP 800-61"]
```
SSDLC in Practice
Stage 1 & 2: Planning and Requirements
The security team is not active during Planning; their work starts once Requirements begins.
In Requirements, the team writes security requirements alongside the functional ones. Most teams use OWASP ASVS as the baseline; its requirements fall into categories such as:
- Data protection. What sensitive data does the app handle? How is it classified under applicable law?
- Access control. What roles exist? Where are their boundaries?
- Error handling and logging. Which actions need an audit log?
- Compliance. Which standards or regulations apply (GDPR, OJK, PCI-DSS, ISO 27001:2022)?
The security requirements go in the same document as the functional requirements. They are reviewed, estimated, and tracked the same way.
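To make that concrete, here is a minimal sketch of what it can look like when security requirements sit next to functional ones in the same backlog. The IDs, field names, and requirement text are illustrative assumptions, not taken from a real project or from the ASVS itself.

```python
# A minimal sketch of security requirements captured next to functional ones.
# IDs, fields, and wording are illustrative, not from a real project.

requirements = [
    {
        "id": "REQ-012",
        "type": "functional",
        "text": "Users can export their transaction history as CSV.",
    },
    {
        "id": "REQ-013",
        "type": "security",
        "category": "Access control",  # category taken from the ASVS baseline
        "text": "Exports are limited to the requesting user's own transactions.",
    },
    {
        "id": "REQ-014",
        "type": "security",
        "category": "Error handling and logging",
        "text": "Every export request is written to the audit log with user ID and timestamp.",
    },
]

# Both kinds of requirements live in the same list and get the same review.
for req in requirements:
    print(f'{req["id"]} [{req["type"]}] {req["text"]}')
```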
Stage 3: Design
In Design, the engineering team turns requirements into a technical plan: how parts of the system communicate, how users log in, how data is stored. The security team adds two activities:
- Threat modeling. List what could go wrong with the design. The most common framework is STRIDE, which covers six threat types: spoofing, tampering, repudiation, information disclosure, denial of service, and elevation of privilege. For each part of the design, ask which of these apply (a minimal sketch follows below). PASTA is a more detailed alternative.
- Security design review. Check the technical design against OWASP ASVS chapter V1 (Architecture, Design and Threat Modeling). Are sessions handled safely? Are sensitive parameters exposed in the API? Is data encrypted in transit and at rest?
Findings go back to the design before any code is written. At this point, a fix is a paragraph in a doc.
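As a rough illustration of the STRIDE pass, the sketch below walks two hypothetical components through the six categories. The components and threats are invented for the example; a real threat model would come out of the actual design documents.

```python
# A minimal sketch of a STRIDE pass over design components.
# Component names and the threats listed for them are illustrative assumptions.

STRIDE = [
    "Spoofing", "Tampering", "Repudiation",
    "Information disclosure", "Denial of service", "Elevation of privilege",
]

# For each part of the design, record which STRIDE categories apply and why.
threat_model = {
    "Login endpoint": {
        "Spoofing": "Credential stuffing against the password form.",
        "Denial of service": "Unthrottled login attempts exhaust the auth service.",
    },
    "Payment database": {
        "Information disclosure": "Card data readable if backups are unencrypted.",
        "Tampering": "Direct writes bypass application-level validation.",
    },
}

for component, threats in threat_model.items():
    print(component)
    for category in STRIDE:
        if category in threats:
            print(f"  [{category}] {threats[category]}")
```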
Stage 4: Implementation
During Implementation, the security team coaches developers and runs automated scans. Coding guidelines come from the OWASP Cheat Sheet Series and OWASP ASVS. SAST scanners check the source against the CWE catalog.
| Activity | What it does | Common tools |
|---|---|---|
| Secure coding guidelines | Reference notes on input validation, secret handling, library versions, error logging | (internal) |
| SAST | Read source code, flag known-bad patterns like SQL injection and XSS | Semgrep, SonarQube, Checkmarx |
| Secret scanning | Find API keys, passwords, and tokens committed to the repo | Gitleaks, TruffleHog, GitGuardian |
| SCA | Check open-source dependencies for known CVEs | Snyk, OWASP Dependency-Check |
| Container and IaC | Scan Docker images and Terraform configs | Trivy, Checkov |
Each finding goes to a developer as a ticket. The security team triages first so developers only see real bugs.
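The table lists the tools most teams reach for. To show the underlying idea rather than any specific product, here is a simplified sketch of how secret scanning works: pattern-match the repository for strings that look like credentials. The patterns are deliberately minimal; real scanners such as Gitleaks or TruffleHog combine far larger rule sets with entropy analysis and commit-history scanning.

```python
# A minimal sketch of the idea behind secret scanning: match known-bad
# patterns against files in the repo. The patterns here are simplified
# illustrations, not a production rule set.

import re
from pathlib import Path

PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Private key header": re.compile(r"-----BEGIN (RSA|EC|OPENSSH) PRIVATE KEY-----"),
    "Hardcoded password": re.compile(r"""password\s*=\s*["'][^"']+["']""", re.IGNORECASE),
}

def scan(root: str) -> list[tuple[str, int, str]]:
    """Walk Python files under root and return (path, line, finding) tuples."""
    findings = []
    for path in Path(root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for name, pattern in PATTERNS.items():
                if pattern.search(line):
                    findings.append((str(path), lineno, name))
    return findings

if __name__ == "__main__":
    for path, lineno, name in scan("."):
        print(f"{path}:{lineno}: possible {name}")
```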
Stage 5: Testing
After QA signs off on functionality, the security team tests the app against structured guides:
- OWASP WSTG for web applications
- OWASP MAS for mobile applications (its testing guide, the MASTG, was formerly the MSTG)
For penetration tests, teams also reference NIST SP 800-115 and PTES.
Two activities:
- DAST. Run the app and probe it like an attacker; a small example follows below. Tools: Burp Suite, OWASP ZAP, and MobSF for mobile.
- Penetration testing. A pentester combines scanner output, business knowledge, and manual attack paths to find what scanners miss.
Findings follow the same retest loop as QA bugs.
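As one small example of what a DAST-style check does, the sketch below sends a request to a running staging instance and flags missing security headers. The URL and header list are assumptions for the example; real DAST tools go much further, crawling the application and actively injecting payloads.

```python
# A minimal sketch of one DAST-style check: hit a running instance of the
# app and verify that basic security headers are present. The target URL
# and header list are assumptions for the example.

import requests

EXPECTED_HEADERS = [
    "Content-Security-Policy",
    "Strict-Transport-Security",
    "X-Content-Type-Options",
]

def check_headers(base_url: str) -> list[str]:
    """Return the expected security headers missing from the response."""
    response = requests.get(base_url, timeout=10)
    return [h for h in EXPECTED_HEADERS if h not in response.headers]

if __name__ == "__main__":
    missing = check_headers("https://staging.example.com")  # hypothetical target
    for header in missing:
        print(f"Missing security header: {header}")
```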
Stage 6: Deployment and Maintenance
Releasing the app starts the long-running phase of security work. Standards:
- CIS Benchmarks for server, container, and cloud configuration
- NIST SP 800-61 or ISO/IEC 27035 for incident response
- ISO/IEC 29147 for vulnerability disclosure
| Activity | What it does |
|---|---|
| Vulnerability monitoring | Keep scanning dependencies and base images for new CVEs |
| WAF (Web Application Firewall) | Block known attack patterns while a patch is being prepared |
| Bug bounty and VDP | A channel for outside researchers to report bugs |
| Patch and incident response | Triage, fix, and disclose reported bugs |
| Security regression testing | Re-run security tests when features change or refactors land |
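
Security regression testing is easiest to keep alive when earlier findings become automated tests. Below is a minimal sketch of two such tests using pytest and requests; the URL and endpoint are hypothetical and stand in for whatever the pentest actually found.

```python
# A minimal sketch of security regression tests. Each test re-checks one
# earlier finding every time the suite runs. URL and endpoint are hypothetical.

import requests

BASE_URL = "https://staging.example.com"  # hypothetical staging instance

def test_export_requires_authentication():
    # A request with no session or token must be rejected, not served.
    response = requests.get(f"{BASE_URL}/api/export", timeout=10)
    assert response.status_code in (401, 403)

def test_security_headers_still_present():
    # Headers added after an earlier finding must survive later refactors.
    response = requests.get(BASE_URL, timeout=10)
    assert "Strict-Transport-Security" in response.headers
```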
Why It Matters
A bug found in Design costs less than the same bug found in production. The earlier the catch, the smaller the fix.
When security is missing from Requirements, encryption gets bolted onto a database that was never designed for it. When security is missing from Design, the permission model is wrong, and the pentest only discovers it after the fact. When security is missing from Implementation, vulnerable libraries reach production. When security is missing from Testing, real attackers find the bugs first.
Each gap is fixable on its own. Catching all of them early is the return from SSDLC.
Conclusion
SSDLC is SDLC with security at every stage from Requirements onward.
Most teams already do part of this work. The question is where the gaps are. Map your current SDLC against the activities and standards in this post and check which already happen, which do not, and where to start.