The first 90 days of a Virtual Delivery Center engagement are mostly setup. Onboarding pods, calibrating governance, debugging the integration cadence with your existing tooling, finding the rhythm between scope-definition and delivery. None of it is wasted, but none of it shows the engagement in its real shape.
The interesting question is: what does the engagement look like at month 6, when the setup tax has cleared and the work is just delivery? A healthy VDC at six months looks specific and measurable. An unhealthy one looks busy. Here's how to tell which you have, with three indicators drawn from our own case-study patterns.
The setup tax: months 0–3
Before talking about month 6, reset the expectation about months 0–3. A new VDC engagement absorbs real setup overhead even when the platform handles most of it well:
- Codebase ramp. Even the best-vetted pod takes 2–4 weeks to internalize your codebase, conventions, and domain. Their throughput in week 2 is not their throughput in month 4.
- Governance calibration. The platform's default delivery cadence (sprint length, code-review SLAs, milestone gating) gets adjusted to fit your operating reality. This takes 2–3 sprints to stabilize.
- Integration debugging. Even with native GitHub / Jira / Monday integrations, your specific pipelines, branch policies, and review rules need 1–2 sprints of friction to settle.
- Outcome definition. The first quarter teaches both sides what "good acceptance" looks like for your work. By month 3, the milestone-acceptance pattern is repeatable.
If you're at month 2 of a new VDC engagement and frustrated by the velocity, that's normal. The gradient matters more than the absolute level. If month 2 feels worse than month 1, that's a signal. If month 2 feels marginally better than month 1, you're on track.
Month 6 is when the setup tax has fully cleared and the engagement is in steady state. That's where the indicators below become diagnostic.
Indicator 1: Velocity has stabilized AND moved up
The most-cited and most-misunderstood VDC metric. Two things must both be true:
(a) Velocity is stable. The pod's shipped-points-per-sprint (or whatever your throughput metric is) shows low variance over the last 4–6 sprints. Not flat — bounded. Engineering work has natural sprint-to-sprint variation; healthy stability means the variance is within a predictable band, not bouncing 3× between sprints.
(b) Velocity has trended up since month 3. Compared to the early-stable point (typically end of month 3), throughput at month 6 should be 1.5–2.5× higher. This isn't because the pod got bigger — it's because they got faster. The codebase ramp has paid off. Knowledge of your domain is now an asset.
If velocity is unstable at month 6, the pod is fighting structural friction — unclear scope, dependencies outside their control, or governance overhead they shouldn't be absorbing. If velocity is stable but flat (no improvement since month 3), the pod has plateaued — they've optimized within their constraints but the constraints themselves are limiting.
The healthy shape is: stable + still climbing, with the gradient flattening but not flat. Like a learning curve, not a step function.
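The two conditions can be sketched as a simple check, assuming you can export shipped-points-per-sprint as a list. The variance threshold (`max_cv`) and the 1.5–2.5× growth band are the heuristics from this article, not platform-defined values, and `velocity_health` is an illustrative helper, not a platform API:

```python
from statistics import mean, pstdev

def velocity_health(recent_sprints, month3_baseline,
                    max_cv=0.25, growth_band=(1.5, 2.5)):
    """Check the two month-6 velocity conditions.

    recent_sprints: shipped points for the last 4-6 sprints.
    month3_baseline: average shipped points at the end of month 3.
    max_cv: allowed coefficient of variation (heuristic stability band).
    growth_band: healthy month-3 -> month-6 multiplier range.
    """
    avg = mean(recent_sprints)
    # (a) stable: sprint-to-sprint variance stays within a predictable band
    stable = (pstdev(recent_sprints) / avg) <= max_cv
    # (b) trending up: throughput is 1.5-2.5x the month-3 baseline
    multiplier = avg / month3_baseline
    climbing = growth_band[0] <= multiplier <= growth_band[1]
    return {"stable": stable, "multiplier": round(multiplier, 2),
            "climbing": climbing, "healthy": stable and climbing}

# Example: low-variance recent sprints, ~1.9x a month-3 baseline of 20 points
print(velocity_health([38, 40, 36, 39, 37], month3_baseline=20))
```

A pod that fails only the `stable` check points at structural friction; one that fails only `climbing` points at a plateau, which maps to the two failure modes described above.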
Indicator 2: Governance friction has decreased
Governance friction is the time-tax of running the engagement. Standups that overrun. Code reviews that bounce three times before merging. Milestone sign-offs that take a week of back-and-forth. Escalations that need senior intervention to unblock.
At month 6, all of these should be measurably down from month 3:
- Code-review turnaround drops from days to hours. The pod and your senior engineers have learned each other's review style; comments converge faster.
- Milestone sign-offs happen the same day as the milestone demo, not the next sprint. Your team has internalized what "done" looks like for this pod.
- Escalation count drops to roughly zero per month. Things that needed your VP Engineering at month 2 are now handled between the pod's delivery manager and your engineering manager.
- Cross-team coordination happens directly between the pod and your dependent teams (data, infra, security) without your engineering leadership routing it.
If governance friction is flat or rising at month 6, something structural is wrong. Either the pod isn't trusted enough to operate autonomously (often a process problem, occasionally a quality problem), or your team hasn't internalized the platform's delivery model and is still trying to manage the pod as if it were staff augmentation. The adoption checklist covers what governance ownership should look like.
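The month-3-versus-month-6 comparison is easy to make explicit if you log a handful of friction metrics per month. A minimal sketch, where the metric names and sample numbers are illustrative rather than platform exports:

```python
def friction_trend(month3, month6):
    """Return, per metric, whether month-6 friction is down from month 3.

    month3 / month6: dicts of friction metrics where lower is better.
    """
    return {metric: month6[metric] < month3[metric] for metric in month3}

# Hypothetical readings for the four friction signals discussed above
month3 = {"review_turnaround_hours": 52, "signoff_lag_days": 6,
          "escalations_per_month": 4, "coordination_hops": 3}
month6 = {"review_turnaround_hours": 5, "signoff_lag_days": 0,
          "escalations_per_month": 0, "coordination_hops": 1}

trend = friction_trend(month3, month6)
# Healthy engagement: every metric is measurably down
print(all(trend.values()))
```

Any metric that is flat or rising flags the structural problem described above before it shows up as missed milestones.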
Indicator 3: The pod is shaping new work, not just executing it
This is the maturity signal that matters most for compounding value, and the one that's hardest to measure quantitatively. By month 6, a healthy pod is contributing to what gets built, not only how.
Practically:
- The pod's delivery manager surfaces architectural questions before they become rewrites. "We're going to hit a scaling wall on the API gateway in roughly three months — here's the redesign sketch we'd recommend."
- Pod members raise scope adjustments based on implementation reality. "This story as written takes 8 days; broken into these three smaller stories it takes 5 days and is more reversible."
- The pod proposes work it should pick up that you haven't even prioritized. "We've been seeing X pattern in the bug reports — there's a 2-sprint refactor that would eliminate the category."
- Architecture decisions involve the pod by default, not as a courtesy.
If the pod is still a pure order-taker at month 6 — only doing what's in Jira, never proposing, never pushing back — it's not yet a real engineering partner. It's high-grade staff augmentation. That's not necessarily wrong (some engagements deliberately want this), but it caps the value the engagement can deliver.
The healthy signal is when your engineering managers say things like "the pod caught X before we did" or "we're going to bring the pod into the architecture review." That's compounding value showing up.
Anti-pattern: the busy-but-unhealthy VDC
The trap to watch for at month 6 is an engagement that looks active but isn't producing compounding value. Surface signals:
- Lots of standups, lots of tickets in motion, lots of activity in chat.
- Velocity is "fine" but hasn't moved up since month 3.
- Milestones are getting hit but they're scoped smaller than they were at month 3.
- Code reviews still go three rounds. Standups still run over.
- Your engineering manager is still spending 25%+ of their week on the engagement.
This is the engagement working hard and going sideways. The cause is almost always one of three things:
- Scope is too granular. The pod is executing well on the tickets they're given, but the tickets are post-decomposition order-taker work. They're not getting fed coherent objectives, just chunks.
- Governance is duplicated. The platform's delivery manager and your engineering manager are both doing the same role, and the work flows through both. Pick one.
- Pod composition is wrong. The pod was assembled for the work as scoped at month 0. By month 6, the work has shifted enough that the pod skill mix is mismatched. Solution: rebalance the pod, which a VDC can do without a contract amendment — see the role index for what's available.
Diagnosing which one applies is the work of a single 90-minute session with the pod's delivery manager and your engineering leadership. The fix is usually simpler than the diagnosis suggests.
What month 12 looks like if month 6 was healthy
Brief preview, because it's the natural next question. By month 12 of a healthy VDC engagement:
- The pod has full institutional knowledge of your codebase, on par with mid-tenure employees.
- Velocity is 2–3× the initial baseline and has stabilized at the higher level.
- Governance friction is near zero — the pod operates with minimal touch from your engineering leadership.
- The pod is shaping at least 30% of incoming work, not just executing it.
- Pod composition has rotated through 1–2 specialist additions/removals to match how the work has evolved.
- The economics are clearly favorable vs. the alternatives — the math now backs up what felt right at month 3.
This is the "compounding value" payoff. It only happens if month 6 looked right.
Frequently asked questions
What if our engagement isn't showing these indicators at month 6?
Don't terminate — diagnose. The three anti-patterns above (granular scope, duplicated governance, wrong pod composition) cover most month-6 health issues, and all three are fixable inside the existing engagement. If a 90-minute diagnostic session with the pod's delivery manager doesn't surface a clear corrective, that's when termination is on the table.
How do we measure "pod shaping work" objectively?
Track the share of the pod's work items over the last quarter that were pod-originated (proposals the pod surfaced) rather than client-originated tickets. At month 6, healthy is 15–25%. Below 10%, the pod is order-taking. Above 35%, the pod may be over-extending its mandate (rare but worth checking).
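The thresholds can be sketched as a small classifier, assuming work items get tagged as pod-originated or client-originated at creation. The band boundaries are the heuristics from this answer, and `shaping_ratio` is an illustrative helper:

```python
def shaping_ratio(pod_originated, client_originated):
    """Classify the pod's work-shaping ratio for the last quarter.

    pod_originated: count of work items the pod proposed.
    client_originated: count of work items handed to the pod.
    """
    ratio = pod_originated / (pod_originated + client_originated)
    if ratio < 0.10:
        label = "order-taking"               # pod is purely executing
    elif ratio < 0.15:
        label = "below healthy band"
    elif ratio <= 0.25:
        label = "healthy"                    # the 15-25% month-6 band
    elif ratio <= 0.35:
        label = "above healthy band"
    else:
        label = "check for over-extended mandate"
    return round(ratio, 2), label

# Example: 12 pod-originated items out of 60 total last quarter
print(shaping_ratio(pod_originated=12, client_originated=48))
```

The counting is the hard part, not the arithmetic: it only works if "who originated this?" is recorded when the work item is created, not reconstructed later.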
What's a realistic velocity multiplier from month 3 to month 6?
1.5–2.5× is the healthy band. Below 1.5× suggests the pod is plateauing early. Above 2.5× often reflects scope shifting toward easier work rather than the pod actually getting faster — worth verifying.
Do these indicators apply to non-engineering pods?
The shape applies; the metrics differ. For data-engineering pods, replace velocity with shipped-pipeline-count or data-quality-score. For ML pods, replace it with model-iteration cadence and offline-eval improvements. The three indicators (stabilized + climbing throughput, decreasing governance friction, work-shaping behavior) translate across pod types.
Where to start
If you're inside a VDC engagement and want a structured month-6 health review, schedule a 30-minute call. We'll walk the three indicators against your engagement, identify which (if any) anti-pattern applies, and propose a corrective if needed.
If you're earlier in the engagement and want to set up the indicators properly from day 1, see the VDC build playbook. And if you're still pre-engagement, the 12-question adoption checklist covers the questions to ask before signing — most of the month-6 problems originate in unanswered or vaguely-answered procurement questions.