
From developer to orchestrator: My experiment with AI agents and Parallel Coding

Everyone is talking about using AI to write code faster, but they are still thinking single-threaded: one developer, one feature, one AI assistant.

DEC. 3, 2025 · 9 min read · by Adrian Obelmejias
But what if you could work on multiple features simultaneously? Not by multitasking (which we all know is a myth), but by orchestrating multiple specialized AI agents, each working in parallel on different aspects of a project?
The idea seemed crazy. The execution turned out to be surprisingly elegant.
Three days later, I had merged four production-ready features:
  • Lead document management API - Complete backend implementation with secure downloads and multi-tenant isolation
  • Users API bug fix - Diagnosed and fixed missing facility relationships
  • Contact deletion confirmation - Added confirmation dialog matching our UX patterns
  • Phone number formatting - Updated display format across multiple components to match Figma designs
Four different problems. Four different complexity levels. One complex backend feature requiring deep security analysis. Two QA feedback items. One bug blocking the frontend team. All developed in parallel. All reviewed in real-time. All shipped with confidence.
Here’s what I discovered.

Context switching is your new superpower

Traditional development requires deep focus. You load an entire feature into your brain, keep dozens of implementation details in working memory, and carefully craft each line. Context switching is brutal because you have to serialize and deserialize your entire mental model.
But what if your role shifted from writing code to guiding and reviewing code being written?
Suddenly, context switching becomes trivial. You’re not holding implementation details in your head. You’re making architectural decisions, catching mistakes, and ensuring quality. You become a real-time code reviewer for multiple streams of parallel development.
This is the fundamental shift: from developer to development orchestrator.

The architecture: How it actually works

The solution has three key components that work together:

1. Git Worktrees for parallel isolation

# Main repository stays clean
~/projects/client-platform
# Each feature gets its own parallel workspace
~/projects/client-worktrees/
├── feature-document-api/     # Complex backend feature
├── fix-users-api-bug/        # Urgent bug fix
├── feature-contact-confirm/  # QA feedback
└── fix-phone-formatter/      # UI polish
Each worktree is completely isolated:
  • Own git branch
  • Own Docker containers (via COMPOSE_PROJECT_NAME)
  • Own dependencies
  • No conflicts, no contamination
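The setup above can be sketched in a few commands. The repository path, branch name, and Compose project name are illustrative (a throwaway repo stands in for the real project):

```shell
# Illustrative worktree setup; paths and branch names are hypothetical.
set -e

# Throwaway repo standing in for the real project
repo="$(mktemp -d)/client-platform"
git init -q -b main "$repo"
cd "$repo"
git -c user.name=dev -c user.email=dev@example.com \
    commit -q --allow-empty -m "initial commit"

# One worktree per parallel workstream, each on its own branch
mkdir -p ../client-worktrees
git worktree add -q -b feature-document-api \
    ../client-worktrees/feature-document-api

# Namespace this worktree's containers so parallel stacks never collide
cd ../client-worktrees/feature-document-api
export COMPOSE_PROJECT_NAME=feature-document-api
# docker compose up -d   # now runs an isolated stack for this feature
```

Because `COMPOSE_PROJECT_NAME` prefixes every container, network, and volume Compose creates, four worktrees can run four full stacks side by side without colliding.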

2. Specialized AI agent profiles

Instead of a generic “AI assistant,” I created specialized agent profiles with deep, domain-specific knowledge:
.agents/
├── profiles/
│   ├── backend-dev.md      # Role, decisions, when to use patterns
│   ├── frontend-dev.md     # React role, component decisions
│   ├── architect.md        # System design, cross-cutting concerns
│   ├── reviewer.md         # Code quality, security checks
│   └── tester.md          # Testing strategies, edge cases
├── context/
│   ├── codebase-overview.md
│   ├── conventions.md
│   └── dependencies.md
└── workflows/
    ├── feature-development.md
    ├── bug-fix.md
    └── refactoring.md
Each profile contains hundreds of lines of project-specific knowledge:
  • References to your team’s standards (in /docs/)
  • Decision-making frameworks (when to use which pattern)
  • Architecture decisions and why (context for the patterns)
  • Common pitfalls and solutions (role-specific gotchas)
  • Compliance requirements (HIPAA in our case)
The key insight: Agent profiles focus on decision-making and context, not code patterns. The actual patterns live in your team’s documentation, where both humans and agents can reference them.
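As a concrete sketch, a profile is just a markdown file the agent loads as context. The section names, paths, and rules below are hypothetical examples of the kind of knowledge a profile carries, not a prescribed schema:

```shell
# Hypothetical profile excerpt; sections, paths, and rules are illustrative.
mkdir -p .agents/profiles
cat > .agents/profiles/backend-dev.md <<'EOF'
# Backend Developer Agent

## Role
Implement API endpoints. Follow the patterns documented in
/docs/api-conventions.md rather than inventing new ones.

## Decision framework
- New endpoint: start from the conventions doc, not from scratch.
- Anything touching PHI: add audit logging and permission checks first.

## Known pitfalls
- Never log request bodies on patient-facing routes (HIPAA).
- Every query must be scoped to the caller's organization
  (multi-tenant isolation).
EOF
```

Note that the profile points at `/docs/` for the actual patterns; the profile itself only carries the role, the decisions, and the gotchas.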

3. Strategic model selection

Not all agents need the same LLM model. Choosing the right model for each agent dramatically affects both quality and cost.
My model strategy follows a simple pattern: use extended thinking for architecture and complex debugging, use standard Sonnet 4.5 for implementation, and use auto mode for trivial fixes.
Why this matters:
  • Architect with extended thinking: When designing the document API, I need the model to deeply reason through security implications, edge cases, and multi-tenant concerns. Extended thinking catches things like “what if a user guesses a document ID?” that a quick response might miss.
  • Implementation with Sonnet 4.5: Once the architect has created the plan, implementation is more straightforward. Sonnet 4.5, without extended thinking, can follow a detailed plan perfectly and respond much faster.
  • Cost and speed efficiency: Extended thinking on Opus/Sonnet is expensive and slow. Reserve it for where complex reasoning actually matters.
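That routing can be captured in a tiny dispatch helper. The role names and model labels below are illustrative placeholders, not real model identifiers or a real CLI:

```shell
# Hypothetical role -> model routing; labels are illustrative.
model_for_role() {
  case "$1" in
    architect|debugger)       echo "extended-thinking" ;;  # deep reasoning
    backend-dev|frontend-dev) echo "sonnet-4.5"        ;;  # plan-following
    *)                        echo "auto"              ;;  # trivial fixes
  esac
}

model_for_role architect    # prints "extended-thinking"
model_for_role backend-dev  # prints "sonnet-4.5"
```

The point is that the mapping is a deliberate, reviewable decision per role rather than a default you inherit for every task.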

The paradigm shift: What this really means

This isn’t about typing faster. It’s about operating at a different level of abstraction.
Before: You’re a pianist playing a complex piece. Deep focus required, context switches are expensive, one piece at a time.
After: You’re a conductor leading an orchestra. Each musician (agent) plays their part; you ensure harmony, and you can oversee multiple movements simultaneously.
Your cognitive load shifts from:
  • Implementation details → Architectural decisions
  • Syntax and patterns → Business logic validation
  • Writing tests → Ensuring test coverage is comprehensive
  • Documentation → Ensuring docs are accurate
  • Sequential execution → Parallel orchestration
And here’s the hidden benefit for tech leads: While your agents are working, you can actually do the other parts of your job. Code reviews for other team members. Slack questions. Architecture decisions. Backlog grooming. Email. All those things that used to force you to context-switch away from coding.
When you were deep in implementation, an interruption was costly. Now? The agents keep working while you handle leadership responsibilities. When you return, you just review their progress. No mental model to rebuild, no “where was I?” moment.
You can finally be both a tech lead AND a productive engineer.

The real-world benefits

Here’s what I’ve noticed after several weeks of working this way:

1. Parallel development actually works

I can genuinely work on 3-4 features simultaneously. Not “switching between them” but actually progressing all of them in parallel while maintaining quality.

2. Context switching becomes effortless

When someone pings me about an urgent bug, I spin up a new worktree with a specialized agent, guide it to the fix, and never lose context on my other work.

3. Code consistency is automatic

Because agents follow the conventions their profiles point to, every endpoint, every component, and every test matches our standards. No more "Bob does it this way, Alice does it differently."

4. Compliance becomes automatic

Our backend agent profile includes detailed HIPAA compliance requirements. The agent never forgets to add audit logging, never exposes PHI in logs, and never skips permission checks. Compliance-by-default.

5. Documentation is never an afterthought

Agents document as they code: every function has a docstring, every API endpoint has a clear description, and every piece of complex logic is commented. Asking an agent to document is effortless, so it actually happens.

6. Quality improves through real-time review

Continuous real-time review catches issues immediately. No more “I’ll review it later” backlog. I caught a critical security issue in the document API during implementation: a user could have guessed document IDs and downloaded files from other organizations. Traditional code review might have caught it eventually. Real-time orchestration caught it before the code was ever committed.

The challenges

This approach isn’t magic. Here are the real challenges:

Challenge 1: The mental model shift

The problem: You instinctively want to grab the keyboard and code.
The reality: You have to resist. Your job is to guide and review, not implement. This takes practice.

Challenge 2: Agent hallucinations

The problem: AI agents sometimes write incorrect code with complete confidence.
The solution: You're reviewing everything in real-time. You catch mistakes just as you would in human code reviews, but faster, because you're continuously engaged.

Challenge 3: Setup overhead

The problem: Creating agent profiles and workflows takes time upfront.
The reality: It's an investment. After the first week, you're moving faster than you ever did before.

Challenge 4: Complex debugging

The problem: Some bugs require deep, systematic debugging that agents struggle with.
The solution: You can always jump into any worktree and code directly. You're orchestrating, not abstaining.

Challenge 5: Not every task fits this model

When this works:
  • Feature development with clear requirements
  • Bug fixes with reproducible issues
  • QA feedback implementation
  • Refactoring with a defined scope
  • Tasks where the “what” is clear, even if the “how” requires exploration
When to use traditional development:
  • Novel problems with no established patterns
  • Deep algorithmic work requiring sustained focus
  • Debugging complex race conditions or timing issues
  • Exploratory work where you’re figuring out the problem itself
  • Tasks where the fastest path is just writing the code yourself
The key is recognizing which approach fits the task. I still write code directly for complex debugging sessions or when exploring unfamiliar territory. Orchestration is a tool, not a religion.

The skills that changed

What I spend my time on has fundamentally shifted:
Less time on:
  • Typing boilerplate code (agents handle this perfectly)
  • Looking up syntax (agents know it)
  • Writing repetitive tests (agents are thorough)
  • Context switching between tasks (seamless now)
More time on:
  • Architectural decisions
  • Business logic validation
  • Security and compliance review
  • Pattern recognition and quality assessment
  • Guiding agents through ambiguous requirements
The result: I’m operating at a higher level. Instead of being in the weeds of implementation, I’m ensuring correctness, consistency, and quality across multiple streams of work.

The future of Parallel Coding

This is still early. We’re figuring out best practices, refining workflows, and discovering edge cases. But the core insight feels solid:
Software development is becoming more about orchestration and less about implementation.
The developers who thrive won’t be the fastest typists or the ones who memorize the most syntax. They’ll be the ones who:
  • Make great architectural decisions quickly
  • Recognize patterns and anti-patterns instantly
  • Guide AI agents effectively
  • Maintain context across multiple workstreams
  • Ensure quality through continuous review
We’re not being replaced. We’re being elevated.