Understanding Secure SDLC

What Secure SDLC is and what security does at each development stage.

Introduction

A working application is not always a secure one. Most teams build software using the Software Development Life Cycle (SDLC), which sets goals around features, deadlines, and quality. SDLC does not require any security work. So security usually comes in as a final check before release. By that point, fixing a security bug costs more than it would have during design or coding.

Secure SDLC (SSDLC) adds security work to every SDLC stage, from Requirements to Maintenance.

This post describes what security does at each stage and which standards to reference.

What is SSDLC?

SSDLC is SDLC with security activities added at each stage. The security team is not a separate process running in parallel. They join the same project meetings, write into the same documents, and track work in the same backlog.

Two reasons to do this:

  1. Cost. A security bug found in Design is a discussion. The same bug found in production is a patch, a regression test, and possibly a public disclosure. The later the catch, the higher the cost.
  2. Compliance. Standards like ISO 27001:2022 and regulations like SEOJK 21 MRTI expect security to be part of development, not added after.

SDLC has six stages. Here is what each one looks like when security is part of it:

| # | Stage | Project team activity | Security team activity |
|---|-------|-----------------------|------------------------|
| 1 | Planning | Kickoff, scope, timeline, budget | Not yet active |
| 2 | Requirements | Functional and non-functional needs | Add security requirements (auth, logging, data protection) to the same documents |
| 3 | Design | Architecture, APIs, DB schema, UI mockups | Threat modeling and security design review |
| 4 | Implementation | Write frontend and backend code | Provide secure coding guidance; run SAST, secret scanning, SCA |
| 5 | Testing | Functional and integration testing | DAST and penetration testing |
| 6 | Deployment and Maintenance | Deploy, monitor, patch | Dependency scanning, WAF, bug bounty/VDP, incident response, security regression |

Each stage maps to a recognized methodology:

```mermaid
flowchart TD
    P["1. Planning"] --> R["2. Requirements"]
    R --> D["3. Design"]
    D --> I["4. Implementation"]
    I --> T["5. Testing"]
    T --> M["6. Deployment and Maintenance"]

    R -.-> RS["OWASP ASVS"]
    D -.-> DS["STRIDE / PASTA<br/>OWASP ASVS V1"]
    I -.-> IS["OWASP Cheat Sheets<br/>CWE catalog"]
    T -.-> TS["OWASP WSTG / MAS<br/>NIST SP 800-115"]
    M -.-> MS["CIS Benchmarks<br/>NIST SP 800-61"]
```

SSDLC in Practice

Stage 1 & 2: Planning and Requirements

The security team does not work during Planning. They wait until Requirements starts.

In Requirements, the team writes security requirements alongside the functional ones. Most teams use OWASP ASVS as the baseline. ASVS groups security requirements into categories:

  • Data protection. What sensitive data does the app handle? How is it classified under applicable law?
  • Access control. What roles exist? Where are their boundaries?
  • Error handling and logging. Which actions need an audit log?
  • Compliance. Which standards or regulations apply (GDPR, OJK, PCI-DSS, ISO 27001:2022)?

The security requirements go in the same document as the functional requirements. They are reviewed, estimated, and tracked the same way.
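Tracking both kinds of requirements in one backlog can be sketched as follows. This is a toy illustration; the item IDs, titles, and field names are hypothetical, not a real tracker's schema:

```python
# Hypothetical backlog items; "type" distinguishes security work from
# functional work, but both live in the same list and the same document.
backlog = [
    {"id": "REQ-101", "type": "functional",
     "title": "User can reset password via email"},
    {"id": "REQ-102", "type": "security",
     "title": "Password reset tokens expire after 15 minutes",
     "reference": "OWASP ASVS"},
    {"id": "REQ-103", "type": "security",
     "title": "Failed logins are written to the audit log",
     "reference": "OWASP ASVS"},
]

def security_requirements(items):
    """Filter out the security requirements for a focused review pass."""
    return [item for item in items if item["type"] == "security"]
```

The point of the shared structure: security items are estimated and reviewed in the same sprint rituals as everything else, not on a side list.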

Stage 3: Design

In Design, the engineering team turns requirements into a technical plan: how parts of the system communicate, how users log in, how data is stored. The security team adds two activities:

  • Threat modeling. List what could go wrong with the design. The most common framework is STRIDE, which has six threat types: spoofing, tampering, repudiation, information disclosure, denial of service, and elevation of privilege. For each part of the design, ask which of these apply. PASTA is a more detailed alternative.
  • Security design review. Check the technical design against OWASP ASVS section V1 (Architecture). Is communication encrypted? Are sessions handled safely? Are sensitive parameters exposed in the API? Is data encrypted in transit and at rest?

Findings go back to the design before any code is written. At this point, a fix is a paragraph in a doc.
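The STRIDE pass described above is essentially a checklist loop over design components. A minimal sketch, where the components and the threats judged applicable are hypothetical examples rather than a real model:

```python
# The six STRIDE threat categories.
STRIDE = ["Spoofing", "Tampering", "Repudiation",
          "Information disclosure", "Denial of service",
          "Elevation of privilege"]

# Hypothetical design components mapped to the threats found applicable so far.
threat_model = {
    "login endpoint": ["Spoofing", "Information disclosure",
                       "Denial of service"],
    "audit log": ["Tampering", "Repudiation"],
}

def open_questions(model):
    """For each component, list STRIDE categories not yet assessed."""
    return {component: [t for t in STRIDE if t not in threats]
            for component, threats in model.items()}
```

Running `open_questions` over the model surfaces the categories still to be discussed for each component, which is the useful output of a STRIDE session: a list of questions, not a verdict.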

Stage 4: Implementation

During Implementation, the security team coaches developers and runs automated scans. Coding guidelines come from the OWASP Cheat Sheet Series and OWASP ASVS. SAST scanners check the source against the CWE catalog.

| Activity | What it does | Common tools |
|----------|--------------|--------------|
| Secure coding guidelines | Reference notes on input validation, secret handling, library versions, error logging | (internal) |
| SAST | Read source code, flag known-bad patterns like SQL injection and XSS | Semgrep, SonarQube, Checkmarx |
| Secret scanning | Find API keys, passwords, and tokens committed to the repo | Gitleaks, TruffleHog, GitGuardian |
| SCA | Check open-source dependencies for known CVEs | Snyk, OWASP Dependency-Check |
| Container and IaC | Scan Docker images and Terraform configs | Trivy, Checkov |

Each finding goes to a developer as a ticket. The security team triages first so developers only see real bugs.
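As a toy illustration of what secret scanners like Gitleaks do under the hood: they match file contents against a catalog of regex rules. The sketch below implements a single rule, the AWS access key ID format; real tools ship hundreds of rules plus entropy checks:

```python
import re

# AWS access key IDs start with "AKIA" followed by 16 uppercase
# letters or digits. This is one rule out of many a real scanner uses.
AWS_KEY_PATTERN = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def scan_for_secrets(text):
    """Return any substrings that look like AWS access key IDs."""
    return AWS_KEY_PATTERN.findall(text)
```

A scanner runs this kind of check over every commit, which is why it catches a pasted key before it ever reaches a shared branch.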

Stage 5: Testing

After QA signs off on functionality, the security team tests the app against structured guides: OWASP WSTG for web applications and OWASP MAS for mobile. For penetration tests, teams also reference NIST SP 800-115 and PTES.

Two activities:

  • DAST. Run the app and probe it like an attacker. Tools: Burp Suite, OWASP ZAP, and MobSF for mobile.
  • Penetration testing. A pentester combines scanner output, business knowledge, and manual attack paths to find what scanners miss.

Findings follow the same retest loop as QA bugs.
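The core of a DAST tool's reflected-XSS probe is simple: send a marker payload, then check whether it comes back in the response unescaped. A minimal sketch of that check (the two response bodies below are hypothetical):

```python
# A marker payload a scanner might inject into a search field.
PAYLOAD = "<script>alert(1)</script>"

def looks_reflected(response_body, payload=PAYLOAD):
    """True if the payload appears verbatim (unescaped) in the response."""
    return payload in response_body

# An HTML-escaped echo is safe; a verbatim echo is a finding.
escaped = "You searched for: &lt;script&gt;alert(1)&lt;/script&gt;"
verbatim = "You searched for: <script>alert(1)</script>"
```

Real tools like Burp Suite and ZAP add crawling, payload variation, and context-aware parsing on top, but the pass/fail decision reduces to this comparison.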

Stage 6: Deployment and Maintenance

Releasing the app starts the long-running phase of security work. Standards:

  • CIS Benchmarks for server, container, and cloud configuration
  • NIST SP 800-61 or ISO/IEC 27035 for incident response
  • ISO/IEC 29147 for vulnerability disclosure

| Activity | What it does |
|----------|--------------|
| Vulnerability monitoring | Keep scanning dependencies and base images for new CVEs |
| WAF (Web Application Firewall) | Block known attack patterns while a patch is being prepared |
| Bug bounty and VDP | A channel for outside researchers to report bugs |
| Patch and incident response | Triage, fix, and disclose reported bugs |
| Security regression testing | Re-run security tests when features change or refactors land |
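Vulnerability monitoring is, at its core, a join between the project's dependency list and a CVE feed. A toy sketch of that join; the package names, versions, and advisory IDs below are made up:

```python
# Dependencies pinned in the project (hypothetical).
dependencies = {"libfoo": "1.2.0", "libbar": "3.1.4"}

# A made-up slice of a CVE feed: (package, affected version, advisory ID).
cve_feed = [
    ("libfoo", "1.2.0", "CVE-2024-0001"),
    ("libbaz", "0.9.0", "CVE-2024-0002"),
]

def affected(deps, feed):
    """Return advisories whose package and version match a pinned dependency."""
    return [cve for pkg, version, cve in feed if deps.get(pkg) == version]
```

Tools like Snyk and OWASP Dependency-Check do this continuously with real version-range matching, which is what turns a CVE announcement into a ticket instead of an incident.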

Why It Matters

A bug found in Design costs less than the same bug found in production. The earlier the catch, the smaller the fix.

When security is missing from Requirements, encryption is added later to a database that has no place for it. When security is missing from Design, the permission model is wrong, and the pentest finds out only after everything is built. When security is missing from Implementation, vulnerable libraries reach production. When security is missing from Testing, real attackers find the bugs first.

Each gap is fixable on its own. Catching all of them early is the return from SSDLC.

Conclusion

SSDLC is SDLC with security at every stage from Requirements onward.

Most teams already do part of this work. The question is where the gaps are. Map your current SDLC against the activities and standards in this post and check which already happen, which do not, and where to start.

This post is licensed under CC BY 4.0 by the author.