
The evolving SDLC: Navigating AI's impact on software delivery

AI has crossed a threshold. It's no longer just a library you import or a pre-trained model you call with an API key. Increasingly, it's a collaborator, an agentic component that can reason, generate code, and even orchestrate workflows across systems.

Sep. 3, 2025 · 3 min read
by Donovan Crewe

For engineers, this is both exhilarating and dangerous. The tooling promises velocity. Whole modules can appear in minutes, and copilots are filling in boilerplate before you even finish typing the function signature. But speed can be deceptive. Without discipline, what looks like acceleration is often just a shortcut to technical debt.
In this era of agentic coding, the skill of the engineer isn't diminished; it's magnified. The real work isn't asking an LLM to "write me a microservice" but making sure that service is robust, maintainable, and secure. Without that oversight, you don't get a scalable platform; you get a spaghetti dinner for five, delivered straight into production.
That's why the software development lifecycle (SDLC) still matters. Not the rigid, waterfall version consigned to dusty textbooks, but the evolving discipline that keeps systems alive and trustworthy. AI doesn't erase the SDLC. It reshapes it. Some fundamentals remain non-negotiable. Some practices need to evolve. And a few entirely new disciplines could be added if we want AI systems to survive outside the demo hall.

The non-negotiables: What must stay (even when agents write the code)

AI agents can generate scaffolding in minutes, refactor code on the fly, or even stitch together APIs without a human typing a single line. That doesn't make the bedrock of software engineering optional. It means these principles have to be applied with even more care, because the failure modes change when the code isn't entirely written by humans.

Scalability

An agent can spin up a service that works in isolation, but it won't naturally account for what happens when thousands of requests arrive at once, or when a single poorly designed loop spikes CPU usage. The SDLC's emphasis on scalable architecture is what ensures those generated components hold up under real-world load. Cloud-native patterns, container orchestration, and resource monitoring must remain central, because an agent can't anticipate your traffic curves or cost constraints.

Maintainability

Agent-generated code tends to be correct in small instances but messy in large ones. It may hardcode values, duplicate logic, or weave dependencies in ways that work now but complicate future changes. The SDLC's discipline around modular design, code review, and documentation keeps this from hardening into an unmanageable tangle. With models updating every few months, maintainability isn't just nice to have; it's the difference between swapping in a new component confidently and tearing down half your system to do it.
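
A small sketch of what that discipline looks like in practice: pulling hardcoded values into a single settings object so future changes touch one place. The `Settings` class and its fields are hypothetical examples, not from the article.

```python
from dataclasses import dataclass

# Hypothetical settings object: one place to change values an agent might
# otherwise hardcode at every call site.
@dataclass(frozen=True)
class Settings:
    api_base_url: str = "https://api.example.com"
    timeout_seconds: float = 5.0
    max_retries: int = 3

def build_request_url(settings: Settings, path: str) -> str:
    # Swapping environments means swapping one Settings instance, not a grep.
    return f"{settings.api_base_url}/{path.lstrip('/')}"
```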

Testing

Yes, agents can generate tests, but they often generate shallow ones: checking happy paths, mirroring the implementation, or skipping the failure cases entirely. That's why the SDLC's insistence on meaningful unit, integration, and regression testing is still indispensable. Engineers must design test suites that probe for robustness, malformed inputs, boundary conditions, and unexpected data. Without those guardrails, agent-written code can pass CI/CD and still fail users in production.
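
A sketch of what "probing for robustness" means, using a hypothetical `parse_quantity` function standing in for any agent-generated parsing code:

```python
def parse_quantity(raw: str) -> int:
    """Hypothetical function under test: parse user input into a bounded int."""
    value = int(raw.strip())
    if value < 1 or value > 1000:
        raise ValueError("quantity out of range")
    return value

def test_parse_quantity():
    # Happy path AND boundaries AND malformed input, not just the happy path.
    assert parse_quantity("5") == 5
    assert parse_quantity(" 1000 ") == 1000
    for bad in ("0", "1001", "abc", ""):
        try:
            parse_quantity(bad)
        except ValueError:
            pass  # rejection is the correct behavior
        else:
            raise AssertionError(f"{bad!r} should have been rejected")
```

The loop over `bad` inputs is the part a generated test suite most often omits.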

Security

Agents don't naturally think in terms of least privilege, secure defaults, or data handling policies. They'll happily suggest an endpoint with no authentication if that satisfies the prompt. The SDLC's security reviews and threat modeling ensure these blind spots are caught early. In the agentic era, engineers must extend those same principles to new risks like prompt injection or model misuse, while still enforcing the basics: input sanitization, encryption, and audit trails.
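
Input sanitization in particular is easy to codify as a secure default. A minimal sketch using parameterized queries (the `users` table and `find_user` helper are illustrative):

```python
import sqlite3

def find_user(conn: sqlite3.Connection, username: str):
    # Placeholders let the driver escape input; never interpolate user data
    # into the SQL string, whatever an agent's first draft looked like.
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
    return cur.fetchone()
```

With this shape, a classic injection payload is treated as a literal (and unmatched) username rather than as SQL.
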

The evolving disciplines: What can adapt when coding with AI

Where the fundamentals persist, their application shifts. Agent-assisted coding changes the texture of the work, which means the SDLC must adapt at every stage.

Requirements and design

Traditional requirements are binary: given X, the system must return Y. With AI agents, outputs can vary; a code generator might propose multiple valid implementations, or a language model might produce slightly different JSON each call. Requirements now need to specify constraints and behaviors rather than fixed results: "The function must run within 200ms," or "The generated SQL must sanitize inputs." Engineers must translate fuzzy AI outputs into deterministic system requirements.
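
A constraint-based requirement can be checked mechanically. This sketch accepts any implementation whose observable behavior satisfies the constraints; `LATENCY_BUDGET_S` and `REQUIRED_KEYS` are illustrative stand-ins for real, negotiated requirements.

```python
import time

LATENCY_BUDGET_S = 0.2                        # "must run within 200ms"
REQUIRED_KEYS = frozenset({"id", "status"})   # "output must contain these fields"

def meets_requirements(fn, payload) -> bool:
    """Pass or fail on behavior, not on one fixed expected output."""
    start = time.perf_counter()
    result = fn(payload)
    elapsed = time.perf_counter() - start
    return elapsed <= LATENCY_BUDGET_S and REQUIRED_KEYS <= set(result)
```

Two different generated implementations can both pass this gate; what is pinned down is the contract, not the code.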

Code reviews and pair programming

Agents are tireless, but they aren't opinionated about architecture or maintainability. The SDLC's code review phase becomes even more important, functioning like a "human-in-the-loop" checkpoint for AI contributions. Engineers aren't just checking syntax; they're validating that generated code aligns with patterns, fits the architecture, and doesn't sneak in subtle bugs or insecure defaults. Think of it as pair programming with a partner who never gets tired, but also never takes responsibility.

Deployment pipelines

Agents can generate and refactor code faster than any human. That means deployment pipelines must be ready to absorb frequent changes without sacrificing quality. CI/CD isn't optional; it's the safety net that ensures velocity doesn't come at the cost of reliability. Automated linting, static analysis, and container scanning help catch issues that an agent won't.

Monitoring and ops

When AI writes code, monitoring becomes a living part of the SDLC. Engineers need to watch not only service uptime, but also whether agent-generated components behave consistently over time. For example, an agent-written script that scrapes an external API might work today but break tomorrow if the API changes its schema. The SDLC must evolve to include monitoring for "silent failures": the kind that don't throw an error but still erode trust.
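
The schema-drift scenario above can be sketched as an explicit check that surfaces problems instead of silently passing bad data downstream. The expected field names here are hypothetical.

```python
# Illustrative expected shape of an external API's records.
EXPECTED_FIELDS = {"id": int, "title": str, "published": str}

def detect_drift(record: dict) -> list[str]:
    """Return human-readable problems; an empty list means the shape held."""
    problems = []
    for field, ftype in EXPECTED_FIELDS.items():
        if field not in record:
            problems.append(f"missing field: {field}")
        elif not isinstance(record[field], ftype):
            problems.append(f"wrong type for {field}: {type(record[field]).__name__}")
    return problems
```

Wired into a scheduled job or alerting hook, this turns a silent failure into a visible one.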

Beyond adapting existing practices, the agentic world introduces entirely new considerations for the SDLC.

What can be added: New practices for an agentic world

There are things the classical SDLC simply never accounted for, because we never before had non-deterministic collaborators generating production code.

Prompt and pattern management

Prompts are the new source code. They evolve, they can regress, and they need version control. An engineer should be able to answer which prompt version produced a given output. The SDLC needs explicit stages for reviewing, testing, and storing prompts and agent strategies, just like code. Most good agents now support rules management for exactly this case; keeping those rules up to date, like good documentation, will only benefit the team.
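
A minimal sketch of prompt versioning, assuming a hypothetical in-memory registry (a real team would back this with git or a database):

```python
import hashlib

class PromptRegistry:
    """Hypothetical registry: versions plus content hashes make it possible
    to answer "which prompt produced this output?" after the fact."""

    def __init__(self):
        self._versions: dict[str, list[str]] = {}

    def register(self, name: str, text: str) -> str:
        history = self._versions.setdefault(name, [])
        history.append(text)
        digest = hashlib.sha256(text.encode()).hexdigest()[:12]
        return f"{name}@v{len(history)}-{digest}"  # traceable identifier to log

    def get(self, name: str, version: int) -> str:
        # Retrieving an old version lets you reproduce, and bisect, a regression.
        return self._versions[name][version - 1]
```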

Reliability and guardrails for generated code

Agents can produce surprising implementations. The SDLC must now include automated sanity checks: linting, type checking, security scanning, and style enforcement for every generated snippet. This is about codifying "what good looks like" so that agent contributions start from a baseline of safety.
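
A minimal sanity gate for generated Python snippets might look like the sketch below. The banned-call list is an illustrative policy, not a complete security scanner; real pipelines would layer dedicated linters and scanners on top.

```python
import ast

BANNED_CALLS = {"eval", "exec"}  # illustrative policy, not exhaustive

def passes_sanity_checks(snippet: str) -> bool:
    """Reject snippets that don't parse or use obviously dangerous constructs."""
    try:
        tree = ast.parse(snippet)  # must at least be valid Python
    except SyntaxError:
        return False
    for node in ast.walk(tree):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in BANNED_CALLS:
                return False
    return True
```

Run before a generated snippet ever reaches review, this establishes the baseline of safety the paragraph describes.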

Human-in-the-loop validation

Agents can take a feature 80% of the way, but that last 20% (aligning with business requirements, handling edge cases, balancing trade-offs) is still human territory. The SDLC needs explicit steps for human validation of agent-generated artifacts before they're promoted downstream.

Continuous learning feedback loops

Unlike static code, AI agents learn and change over time. The SDLC must add feedback loops where production errors, user corrections, and performance data flow back into prompt refinement or retraining cycles. This isn't a one-off phase; it's a continuous rhythm built into operations.
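
That feedback loop can start as something very simple: aggregating production failures so the most common ones drive the next refinement cycle. The `FeedbackLoop` class and its categories are hypothetical.

```python
from collections import Counter

class FeedbackLoop:
    """Hypothetical collector: production failures flow back into prompt
    refinement or retraining, ranked by how often they occur."""

    def __init__(self):
        self._failures = Counter()

    def record_failure(self, category: str) -> None:
        self._failures[category] += 1

    def top_issues(self, n: int = 3):
        # What should the next refinement cycle focus on first?
        return self._failures.most_common(n)
```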

Why engineers matter more than ever

AI agents are accelerators, not architects. They'll get you from zero to prototype in minutes, but they won't tell you if the design is sustainable, if the security model is sound, or if the generated code will hold up when traffic triples.
That's the engineer's job. To enforce scalability patterns, ensure maintainability, design meaningful tests, and close the loop on monitoring and governance. In this sense, AI doesn't reduce the need for engineering skill; it raises the bar.
An agent can write a function, but only an engineer can make sure that the function belongs in the system. An agent can generate a microservice, but only an engineer can prevent it from becoming a brittle one-off. Without that discipline, you're not shipping faster; you're just accelerating toward entropy.

Building systems worthy of AI

Every era of technology has forced us to evolve the SDLC. Cloud gave us DevOps. Mobile forced responsive design. Now AI, and especially agentic coding, is demanding a new layer of discipline.
AI will happily hand you a pile of working code, but for now at least, engineering is what decides whether that pile becomes a resilient product or a plate of spaghetti.
Keep the fundamentals. Evolve your practices. Add the missing pieces. That's how we engineer with AI, just as we have with every new technology before it: not against it, not in awe of it, but alongside it.
The hype belongs to everyone. The future belongs to the engineers who know how to harness AI without losing the principles that got us here.