Part III — Building Structure
Chapter 8

Decomposition at Scale

From vision to milestone to task: how to break something impossibly large into pieces that a real team can actually execute — without losing the plot.

Picture a project. Not a small one. A big, messy, six-month effort that involves four teams, a leadership team that keeps changing its mind, a hard deadline nobody pushed back on, and a pile of technical work that nobody has fully mapped. You've been given ownership of it. Congratulations.

The first instinct of most engineers in this position is to start doing. Write a ticket. Pull up a whiteboard. Schedule a design meeting. There is something comforting about action. It feels like progress.

But here is the problem: if you start building before you have broken the project down correctly, you will build the wrong thing. Or you will build the right thing in the wrong order and create dependencies that paralyze you in month three. Or you will build all the easy parts and arrive at the hard parts with no runway left.

Decomposition is the act of turning a large, fuzzy project into smaller, concrete pieces. Done well, it is the single most powerful thing you can do at the start of any complex effort. Done badly — or skipped entirely — it is the root cause of most of the chaos you will experience later.

This chapter teaches you how to do it well.

Why Decomposition Is Harder Than It Looks

Every engineer knows what decomposition is, in theory. Break big things into small things. It sounds trivial. But there is a reason experienced teams still get it wrong on almost every large project.

The core difficulty is this: when you start decomposing a project, you are working with incomplete information. You don't fully understand the scope yet. You haven't discovered all the hidden dependencies. The requirements are still shifting. The team hasn't thought through all the edge cases.

So you decompose based on what you know, which means your decomposition will be wrong in places you haven't discovered yet. This is fine and expected. But it creates a dangerous temptation: to delay decomposition until you know more. "Let me do a bit more investigation first. Let me wait until the requirements are clearer."

Trap

Waiting for perfect clarity before decomposing is like waiting to draw a map until you've already visited everywhere. The map exists to help you navigate the territory you haven't been to yet. A rough map now is worth more than a perfect map later.

The second difficulty is that most people conflate three very different things when they try to decompose a project: outcomes, milestones, and tasks. They are not the same thing. Mixing them up creates a structure that looks organized but falls apart the moment you start executing.

Let's define each one precisely, because the distinction matters more than almost anything else in this chapter.

The Three Levels: Outcome, Milestone, Task

Think of a project as having three distinct levels of structure. Each level answers a different question. Getting confused between them is one of the most common and costly mistakes in project execution.

Figure 8.1 — The Three Levels of Project Structure
LEVEL 1 — OUTCOME (the "what changes in the world")

┌─────────────────────────────────────────────────────────┐
│ "All merchants can accept payments in under 2 seconds   │
│  globally, with 99.9% success rate."                    │
└──────────────────────┬──────────────────────────────────┘
                       │ breaks into
          ┌────────────┴────────────┐
          ▼                         ▼
LEVEL 2 — MILESTONES (the "what is demonstrably true at checkpoint N")

┌──────────────────┐       ┌──────────────────┐
│ M1: Routing      │       │ M2: Latency      │
│ service live     │       │ under 400ms in   │
│ in US region     │       │ all 3 regions    │
└────────┬─────────┘       └────────┬─────────┘
         │ breaks into              │ breaks into
    ┌────┴────┐                ┌────┴────┐
    ▼         ▼                ▼         ▼
LEVEL 3 — TASKS (the "what one person does this week")

┌────────┐ ┌────────┐     ┌────────┐ ┌────────┐
│ Build  │ │ Write  │     │ Profile│ │ Add    │
│ router │ │ health │     │ EU     │ │ caching│
│ logic  │ │ checks │     │ traffic│ │ layer  │
└────────┘ └────────┘     └────────┘ └────────┘
Each level answers a different question. Outcomes answer "why does this matter?" Milestones answer "how will we know we're making progress?" Tasks answer "what does one person do next?"

Level 1: The Outcome

The outcome is the top of the pyramid. It describes what changes in the world when the project is complete. Not what you built. Not what you shipped. What is now true that wasn't true before.

A good outcome statement is specific enough that someone could look at it six months from now and say with confidence, "yes, we achieved this" or "no, we didn't." It is not a feature list. It is not a technical description. It is a change in the state of the world.

Bad outcome: "Build a new payment routing system."
Good outcome: "All merchants process payments in under 2 seconds, globally, with a 99.9% success rate."

The difference is enormous. The bad version describes activity. The good version describes achievement. The bad version lets you ship something technically correct but completely useless. The good version forces you to confront whether what you built actually worked.

Level 2: Milestones

Milestones are checkpoints. They are points in time at which something is demonstrably true that wasn't true before. Not "we finished the design doc." Not "we started writing code." Something real and observable.

The test for a good milestone is simple: can you run a demo? Can you show a stakeholder something working? If the answer is no, you don't have a milestone — you have a plan item. Milestones are things you can point at and say "we got here."

Most projects have between three and seven milestones. Fewer than three usually means the milestones are too coarse — you won't know you're in trouble until it's too late. More than seven usually means you're managing tasks, not milestones, and you'll drown in overhead.

Level 3: Tasks

Tasks are the actual work. One person, one piece of output, a clear definition of done. "Write the routing logic for the payment service" is a task. "Design the architecture" is not a task — it is too large and too vague to be one. Broken down, it hides a dozen tasks, and its end state ("architecture reviewed and approved") is closer to a milestone.

Tasks should be completable in days, not weeks. If a task takes two weeks to complete, it almost certainly contains hidden sub-problems that haven't been discovered yet, and it will slip in ways that are hard to predict.

Key Distinction

Outcomes tell you where you're going. Milestones tell you when you're making progress toward it. Tasks are what you do on Tuesday. These are three fundamentally different things. If your project plan mixes them together on the same list, your decomposition is broken.

Drawing the Dependency Graph Before You Regret It

Once you have your milestones, the next step most engineers skip is drawing the dependency graph. Not a Gantt chart. Not a timeline. A graph — specifically, a picture of what depends on what.

A dependency is a relationship between two pieces of work where one cannot meaningfully start until the other is done. Dependency A → B means "we cannot start B until A is complete."

The reason you draw this before you do anything else is not to produce documentation. It is to find problems early. Almost every large project contains a dependency that nobody realized existed until someone hit it at full speed in month four. If you draw the graph upfront, you find these hidden dependencies now, when you still have options.

A team is rebuilding their authentication system. They have six milestones: new token service, new session storage, migration of web app, migration of mobile app, migration of third-party integrations, and shutdown of the old system. Seems straightforward.

Then someone draws the dependency graph and notices that mobile app migration depends on the token service AND the session storage being complete. And third-party integrations depend on mobile being migrated first, because several partners authenticate through the mobile SDK. And shutdown of the old system depends on all three migration paths being complete.

What looked like six parallel tracks is actually a mostly sequential chain with a long critical path. The project was being planned as a three-month effort. The critical path analysis revealed it was six months minimum. Better to know this on day one than on day sixty.

How to Draw It

You don't need special software. A whiteboard works. A Google Doc with a table works. The process is what matters, not the tool.

Start by listing all your milestones. Then, for each milestone, ask two questions: "What must be true before we can start this?" and "What does completing this unlock?" Write the answers down. Then draw lines between them.

When you're done, you will have a graph. It might be messy. That's fine. You're looking for three things:

Framework

Three Things to Find in Your Dependency Graph

01
The Critical Path

The longest chain of sequential dependencies in the graph. This is your real timeline. Not the timeline you want, not the one you promised leadership — the one dictated by physics. Every other chain in the graph can float, but the critical path cannot. If any item on the critical path slips, the project slips.

02
The Bottlenecks

Nodes in the graph that multiple other things depend on. If a bottleneck milestone slips, it doesn't just affect one thing — it blocks everything that depends on it. These are where your risk is concentrated. Put your best people on them first.

03
Cross-Team Dependencies

Places where your milestone depends on another team delivering something. These are the most dangerous nodes in the graph, because you don't control them. Find them early so you can start the conversation with those teams before you're blocked, not after.
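If your milestone list lives in a spreadsheet or a doc rather than on a whiteboard, the bottleneck check is easy to automate. Here is a minimal Python sketch, using hypothetical milestone names drawn from the authentication example above; the graph encoding (each milestone mapped to the milestones it depends on) is an assumption for illustration, not a prescribed tool:

```python
from collections import Counter

# Hypothetical dependency graph for the auth rebuild example:
# deps["B"] = ["A"] means milestone B cannot start until A is complete.
deps = {
    "token_service":    [],
    "session_storage":  [],
    "web_migration":    ["token_service"],
    "mobile_migration": ["token_service", "session_storage"],
    "third_party":      ["mobile_migration"],
    "shutdown":         ["web_migration", "mobile_migration", "third_party"],
}

# A bottleneck is a milestone that many other milestones depend on.
dependents = Counter(dep for needs in deps.values() for dep in needs)
for milestone, count in dependents.most_common():
    print(f"{milestone}: {count} dependent(s)")
# token_service and mobile_migration each have 2 dependents:
# those are the nodes where risk is concentrated.
```

A slip in either of the two-dependent nodes blocks multiple downstream milestones, which is exactly the signal you want before assigning people to work.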

Finding the Critical Path (Without a Gantt Chart)

"Critical path" is a term that sounds like project management jargon, but the concept is genuinely useful and simple. It is just the longest sequence of steps where each step depends on the previous one.

Here is the important thing to understand about the critical path: it is not a schedule, it is a constraint. No amount of adding people, working nights, or pulling levers will make a project take less time than its critical path. If the critical path is 12 weeks, the project is 12 weeks at minimum. You can parallelize other things, throw more resources at other things, but the critical path is the floor.

To find it manually, without any software:

Method

Finding the Critical Path in Four Steps

01
List all the chains in your graph

Walk the dependency graph and trace every path from start to end. Each path is a chain of milestones. Write them all down.

02
Estimate the duration of each milestone

Don't be precise. Use rough estimates in weeks: 1 week, 2 weeks, 4 weeks. The precision doesn't matter yet — you're looking for relative magnitudes.

03
Add up each chain

Sum the durations along each path. The longest sum is the critical path duration. Any other chain that takes less time has "float" — spare time that can absorb slippage.

04
Compare to your deadline

If the critical path is longer than your deadline, you have a problem that no amount of hope will solve. You need to either change the scope, add resources to critical-path items, or renegotiate the deadline. Do this on day one, not day sixty.

Here is a worked example. Say you have a project with these milestones and rough estimates:

Figure 8.2 — Critical Path Example
Milestone A: New data model (3 wks)
 │
 ├──→ Milestone B: Backend API (4 wks)
 │     │
 │     ├──→ Milestone D: Web app integration (2 wks)
 │     │     │
 │     │     └──→ Milestone F: Launch (1 wk)
 │     │
 │     └──→ Milestone E: Mobile app integration (5 wks)
 │           │
 │           └──→ Milestone F: Launch (same)
 │
 └──→ Milestone C: Data migration scripts (6 wks)
       │
       └──→ Milestone F: Launch (same)

Chain 1: A → B → D → F = 3+4+2+1 = 10 weeks
Chain 2: A → B → E → F = 3+4+5+1 = 13 weeks   ← CRITICAL PATH
Chain 3: A → C → F     = 3+6+1   = 10 weeks
The critical path is Chain 2, through mobile integration. If you only optimized the data migration (Chain 3), you'd spend effort on something that isn't blocking the launch. The real constraint is mobile.

Notice what this tells you immediately: spending six weeks optimizing the data migration scripts is completely pointless if mobile integration is the bottleneck. Chain 3 has three weeks of float. You can take three weeks longer on it and launch on the same day. But if mobile integration slips by even one week, the whole project slips.

This is the practical value of critical path analysis. It tells you where to worry and where you can relax.
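The four-step method is small enough to script. The following Python sketch enumerates every chain through Figure 8.2's graph and picks the longest; the dictionary encoding of the graph is just one convenient representation, not the only one:

```python
# Figure 8.2's milestones: durations in weeks, and which milestones
# each one unlocks (edge X -> Y means Y can start once X is done).
durations = {"A": 3, "B": 4, "C": 6, "D": 2, "E": 5, "F": 1}
unlocks = {
    "A": ["B", "C"],
    "B": ["D", "E"],
    "C": ["F"],
    "D": ["F"],
    "E": ["F"],
    "F": [],
}

def all_chains(node, path=()):
    """Enumerate every start-to-end path through the graph (step 1)."""
    path = path + (node,)
    if not unlocks[node]:
        yield path
    for nxt in unlocks[node]:
        yield from all_chains(nxt, path)

# Steps 2-3: sum the durations along each chain; the longest sum
# is the critical path.
chains = list(all_chains("A"))
critical = max(chains, key=lambda c: sum(durations[m] for m in c))
print(" -> ".join(critical), "=",
      sum(durations[m] for m in critical), "weeks")
# A -> B -> E -> F = 13 weeks
```

Brute-force path enumeration is fine here because milestone graphs are tiny; with three to seven milestones you will never need anything cleverer.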

When to Parallelize and When Sequential Is the Only Way

One of the most powerful things you can do with a project is to run work in parallel. If Team A can work on the backend while Team B works on the frontend, you've effectively halved the calendar time for those two efforts. This is a big deal, and it's one of the main reasons large projects need multiple teams.

But parallelization has limits that are often underestimated.

What You Can Parallelize

Work that is independent — meaning the output of one piece does not change the starting conditions of another — can be parallelized freely. Two teams building two separate microservices that communicate over a well-defined API can work simultaneously without stepping on each other. The API contract is the interface, and as long as both teams honor it, they are decoupled.

Similarly, work that can proceed against a stub or a mock of an unfinished dependency can often be parallelized. If Team B needs Team A's service, but Team A can publish a mock that Team B can code against, then Team B doesn't need to wait for Team A to finish. They just need to switch to the real thing when it's ready.

What You Cannot Parallelize

Work that has a genuine logical dependency cannot be parallelized. You cannot run the data migration before the data model is finalized. You cannot test the integration before there is something to integrate against. You cannot write the launch playbook for a feature that doesn't exist yet.

But there's a subtler category that trips up many teams: work that looks parallel but shares a bottleneck resource. If three teams are all working "in parallel" but they all depend on one shared infrastructure team to make configuration changes, those three teams are not actually running in parallel. They're queuing on a single resource. The parallelism is real on paper; in practice, the teams are serialized behind the shared resource.

Common Mistake

Planning six tracks of parallel work and then discovering that all six tracks require changes to a shared database schema — which one DBA owns — means you have six parallel tracks in theory and one sequential track in reality. Always check whether your "parallel" work shares any bottleneck resources.

The Human Coordination Cost

There is another cost to parallelization that rarely shows up in project plans: the overhead of coordinating parallel work. When two things run in parallel, someone has to make sure they're going to fit together correctly at the end. This means interface meetings, integration testing, sync points, and review cycles.

This is not hypothetical overhead. Fred Brooks, who managed the development of IBM's OS/360 operating system in the 1960s and wrote about the experience in his classic book The Mythical Man-Month, observed that adding more people to a late software project makes it later. The reason is coordination overhead: every person added to a project increases the number of communication channels, and those channels take time and energy to maintain.

A project with two people has one communication channel. With three people, three channels. With five people, ten channels. With ten people, forty-five channels. The parallelism benefits increase linearly. The coordination costs increase quadratically.

This doesn't mean parallelism is bad. It means you should parallelize intentionally — add parallel tracks where the benefit clearly outweighs the coordination cost, not just because you have the headcount available.
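Brooks's channel counts come from a simple pairwise formula: with n people there are n(n-1)/2 possible communication channels. A short sketch in Python:

```python
def channels(people: int) -> int:
    # Every pair of people is a potential communication channel:
    # n choose 2 = n * (n - 1) / 2, which grows quadratically.
    return people * (people - 1) // 2

for n in (2, 3, 5, 10):
    print(n, "people ->", channels(n), "channels")
# 2 people -> 1 channels
# 3 people -> 3 channels
# 5 people -> 10 channels
# 10 people -> 45 channels
```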

The Decomposition Meeting: Getting It Done With Your Team

Decomposition should not be a solo activity. You should involve the people doing the work, because they know things you don't. But you also should not hold a meeting with fifteen people and try to decompose a project by committee, because that produces mediocre output and exhausted participants.

The right size for a decomposition session is three to five people: you, the lead engineers for the major workstreams, and anyone who has deep knowledge of a specific area that is likely to contain surprises. That's it.

Here is a format that works in ninety minutes:

Meeting Format

The 90-Minute Decomposition Session

00
Before the meeting: send the outcome

Write down the outcome — what changes in the world when this project is done — and send it to participants before you meet. Give them 24 hours to read it and come prepared with questions. This saves 30 minutes of "wait, what are we actually building?" at the start.

01
Minutes 0–15: Align on the outcome

Read the outcome statement aloud. Ask: "Is there anything here that we disagree with or that we think is wrong?" Resolve disagreements before you decompose anything. If you can't align on the outcome, everything below it is built on sand.

02
Minutes 15–45: Brain-dump the work

Ask everyone to write down every major piece of work they can think of on sticky notes (physical or virtual). No filtering. No prioritizing. Just get everything out. This is where your senior engineers will surface the things you didn't know you didn't know.

03
Minutes 45–70: Group and label

Cluster the sticky notes into themes. Each cluster becomes a candidate milestone. Give each cluster a name that describes what is demonstrably true when that cluster is complete — not what the work is, but what it achieves.

04
Minutes 70–90: Draw the dependencies

Draw lines between milestones. Ask for each one: "Can we start this before that one is done?" Find the critical path. Identify cross-team dependencies. Flag anything that looks like a bottleneck. You won't finish the full analysis in twenty minutes, but you'll finish enough to know where the risks are.

The output of this session is not a finished plan. It is a rough structure you can refine. Within 24 hours of the session, write it up and share it with participants. Ask them to poke holes in it. Give them a week. Then finalize and share with the broader team.

The Hidden Work: What Doesn't Show Up on First Pass

Every decomposition misses things. The question is not whether your decomposition is complete — it isn't — but whether you've been systematic enough to catch the categories of work that are most often overlooked.

Here are the five most common categories of hidden work that engineers forget during decomposition:

Migration and cleanup
    What it looks like: Moving existing data, deprecating old APIs, cleaning up shadow systems that nobody documented
    Typical impact: High — often takes 2x as long as expected

Testing infrastructure
    What it looks like: Building test fixtures, setting up staging environments, writing integration tests for the new system
    Typical impact: High — often 30-40% of total engineering effort

Observability and alerting
    What it looks like: Adding metrics, logs, traces, and dashboards so you can tell if the new system is working
    Typical impact: Medium — skipped until launch, then rushed

Documentation and rollout
    What it looks like: Runbooks, launch guides, team training, communication to dependent teams
    Typical impact: Medium — the most underestimated soft work

Review cycles
    What it looks like: Design reviews, security reviews, privacy reviews, code reviews, production readiness reviews
    Typical impact: Variable — depends on your org, but never zero

The migration and testing categories are the ones that most consistently cause project timelines to blow up. A team will carefully estimate the core feature work, add it up, decide the project takes eight weeks, and then discover in week six that the data migration alone is a four-week effort that nobody planned for.

There is a simple check for this: after your decomposition, ask yourself "are we accounting for the transition from the old system to the new one, or just the new one?" Most teams account for the new system and forget the transition.

Scope Sizing: Getting the Estimates Right Enough

There is a whole industry of techniques for estimating software projects, and most of them are either too complex to use in practice or too imprecise to be useful. What follows is not the theoretically optimal approach — it is the approach that works in the real world, in the time you actually have.

Estimates Are Not Commitments

The most important thing to establish early is that an estimate is not a commitment. An estimate is your current best guess given current information. As you learn more, the estimate will change. That is not failure — that is how knowledge works.

If your organization treats estimates as commitments, you will get estimates that are padded to protect against uncertainty rather than estimates that accurately reflect the work. This produces planning that looks conservative and is actually useless, because it doesn't tell you where the real risks are.

Three-Point Estimation

For each major milestone, ask three questions instead of one:

Optimistic: How long would this take if everything went right? No surprises, no blocking dependencies, the team is focused, the requirements don't change. This is not your plan — it's your floor.

Pessimistic: How long would this take if we hit the kinds of problems we've hit before on similar work? This is not disaster planning — it's "what happens when the normal amount of bad stuff happens." This is the number most teams refuse to say out loud.

Most likely: Given the specific context of this project, what's your gut feeling? Usually somewhere between optimistic and pessimistic, but biased by whatever risks seem most present.

A simple formula called PERT gives you a single estimate from these three: (Optimistic + 4 × Most Likely + Pessimistic) / 6. This is not magic — it's just a weighted average that gives more weight to the most likely estimate while not ignoring the outliers.
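The formula is trivial to apply in practice; here is a quick Python sketch (the sample estimates are hypothetical):

```python
def pert(optimistic: float, most_likely: float, pessimistic: float) -> float:
    """PERT: weighted average of a three-point estimate, with 4x
    weight on the most-likely value."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

# Hypothetical milestone: 2 weeks if everything goes right,
# probably 4, and 9 if the normal amount of bad stuff happens.
print(pert(2, 4, 9))  # 4.5
```

Notice that the result (4.5 weeks) lands above the most-likely guess: the pessimistic tail pulls the estimate upward, which is exactly the correction most single-point estimates are missing.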

Practical Tip

When someone gives you a single-point estimate ("this will take two weeks"), ask: "What's your optimistic and pessimistic range?" The range tells you how confident they are. A confident person says "one to three weeks." An uncertain person says "two weeks to three months." The range is more useful than the number.

The Two-Week Rule

Any task that is estimated to take more than two weeks should be broken down further. This is not a hard rule — some genuine pieces of work take four or six weeks. But the rule exists for a reason: tasks that take more than two weeks almost always contain hidden sub-problems that haven't been discovered yet.

When someone says "the data model refactor will take six weeks," they usually mean "I have a vague sense that this is a large piece of work and I haven't actually broken it down." The six weeks is covering for uncertainty. If you make them break it into concrete two-week pieces, they will often discover complexity they hadn't realized was there — which is exactly what you want.

Keeping the Decomposition Alive as the Project Evolves

Decomposition is not something you do once at the beginning and then put in a drawer. It is a living structure that needs to be updated as the project progresses. Requirements change. Hidden complexity surfaces. Deadlines shift. Team members leave. The decomposition needs to reflect reality, not the initial plan.

A practical way to keep it alive: run a brief decomposition review once a month. It doesn't need to be long — thirty minutes is usually enough. Ask three questions:

What has changed? New information, new requirements, new blockers. How do these change the structure of the work?

What did we discover was harder or easier than expected? This updates your estimates and reveals whether your original decomposition was accurate.

Is the critical path still the same? Sometimes a different chain of dependencies becomes critical as the project progresses. If the critical path has moved, your resource allocation needs to move with it.

The goal is not to constantly replan. It is to make sure that your decomposition continues to reflect what is actually true, so that your decisions are based on reality rather than the plan you made on day one.

A Complete Example: Decomposing a Real-World Project

Let's walk through a complete decomposition of a realistic project. This is the kind of thing you might actually face.

The project: Migrate the company's primary user authentication system from an old, self-hosted solution to a new one built on a modern identity platform. Eight backend services need to be updated. Three mobile apps need new SDK versions. Roughly 50 million existing user accounts need to be migrated. The old system must remain operational until migration is complete. Target: fully complete in five months.

Step 1: Write the Outcome

"All 50 million user accounts authenticate exclusively through the new identity platform, with zero degradation in login success rate, the old authentication system fully decommissioned, and all eight backend services and three mobile apps integrated with the new system."

Step 2: Brain-Dump the Work

In a ninety-minute session with four engineers, the following work items surface (raw, unstructured):

Build the new identity service. Set up the new identity platform contract. Migrate user accounts. Build dual-write so old and new run in parallel. Update backend service #1 through #8. Update iOS app. Update Android app. Update web app. Migrate legacy OAuth integrations. Write the rollback plan. Set up monitoring. Data validation pipeline. Performance testing. Security review. Sunset old service. Write migration runbook. Train on-call engineers. Handle edge cases: SSO accounts, federated identities, accounts with unusual status flags.

Step 3: Group Into Milestones

Figure 8.3 — Milestones for Authentication Migration
M1: Foundation complete
    New identity platform contract signed, new identity service deployed to staging, dual-write infrastructure live in prod (reads stay on old)
    Estimated: 5 weeks

M2: Backend services migrated
    All 8 backend services reading from new identity service, old service still live but in shadow mode, monitoring and alerting in place
    Estimated: 6 weeks (parallel across 2 teams)

M3: Mobile apps migrated
    iOS, Android, web apps all shipping with new SDK, >95% of active users on new auth path
    Estimated: 8 weeks (app store review cycles add time)

M4: Legacy integrations migrated
    All OAuth integrations, SSO accounts, federated identities migrated and validated
    Estimated: 4 weeks

M5: Account bulk migration complete
    All 50M accounts migrated, data validation pipeline confirms correctness, performance benchmarks passed
    Estimated: 3 weeks

M6: Old system decommissioned
    Traffic fully on new system, old service shut down, runbooks complete, on-call trained
    Estimated: 2 weeks

Step 4: Draw Dependencies and Find the Critical Path

Figure 8.4 — Dependency Graph for Authentication Migration
                 M1: Foundation
                /      │       \
               /       │        \
     M2: Backend   M3: Mobile   M4: Legacy
     Services      Apps         Integrations
               \       │        /
                \      │       /
           M5: Account Bulk Migration
                       │
                 M6: Decommission

Chain 1: M1 → M2 → M5 → M6 = 5+6+3+2 = 16 weeks
Chain 2: M1 → M3 → M5 → M6 = 5+8+3+2 = 18 weeks   ← CRITICAL PATH
Chain 3: M1 → M4 → M5 → M6 = 5+4+3+2 = 14 weeks

Critical path runs through the mobile apps, which therefore have no
float; against the 20-week (five-month) target, the project has only
2 weeks of slack.
The mobile migration is the bottleneck. Backend services and legacy integrations have float. Resourcing decisions should prioritize mobile.

What This Tells You

The project is at minimum 18 weeks, not 20 (which is five months). You have two weeks of float before you're truly late. The mobile migration is the thing most likely to cause the project to slip — specifically, because of app store review cycles, which you don't control. That's your biggest risk, and you've found it on day one.

The backend services work and legacy integrations work have float, which means if those teams are a bit slow, it doesn't matter. You can reallocate some of their capacity to help the mobile team if mobile is struggling.

The hidden work that might have been missed without explicit analysis: the data validation pipeline (inside M5), the dual-write infrastructure (inside M1), and the on-call training (inside M6). All three were surfaced in the brain-dump and assigned to milestones. Without the brain-dump, they would likely have been "discovered" in month four.
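If you would rather double-check the float and slack numbers than eyeball them, a small Python sketch over Figure 8.4's chains does the arithmetic (the chain labels are shorthand for this example, not tooling):

```python
# The three chains from Figure 8.4, as lists of milestone
# durations in weeks.
chains = {
    "M1 -> M2 -> M5 -> M6 (backend)": [5, 6, 3, 2],
    "M1 -> M3 -> M5 -> M6 (mobile)":  [5, 8, 3, 2],
    "M1 -> M4 -> M5 -> M6 (legacy)":  [5, 4, 3, 2],
}
deadline = 20  # five months, counted as 20 weeks

# The critical path is the longest chain; every other chain's
# float is the gap between its length and the critical path's.
critical_len = max(sum(d) for d in chains.values())
for name, d in chains.items():
    total = sum(d)
    print(f"{name}: {total} wks, float {critical_len - total} wks")
print("slack against deadline:", deadline - critical_len, "weeks")
```

Running this confirms the analysis: the mobile chain is 18 weeks with zero float, the backend and legacy chains carry 2 and 4 weeks of float, and the deadline leaves 2 weeks of slack overall.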

Common Decomposition Mistakes and How to Avoid Them

Mistake 1: Decomposing by Team Instead of by Outcome

The most common decomposition mistake is to organize the work by which team owns it rather than by what outcomes the work produces. "Team A does X. Team B does Y. Team C does Z." This seems natural because it reflects how your organization is structured, but it produces a plan that is optimized for ownership rather than delivery.

The problem: each team's work might be perfectly decomposed within itself, but the dependencies between teams — the places where Team A's output is required before Team B can proceed — often aren't visible in this structure. You end up with three well-organized silos and a set of integration problems that nobody owns.

Decompose by outcome first. Then assign ownership to milestones. The teams serve the outcomes, not the other way around.

Mistake 2: Too Many Milestones

When you have more than seven milestones on a project, you're usually managing tasks, not milestones. You've decomposed too far at the structural level. The problem is that milestone reviews, status updates, and dependency management all have overhead. The more milestones you have, the more overhead you generate — and at some point, you're spending more energy tracking the work than doing it.

If you find yourself with twelve milestones, look for ones that can be merged. Ask: "Would it matter if these two milestones were combined into one? Would we lose important signal?" If the answer is no, combine them.

Mistake 3: Milestones That Are Activities, Not States

"Backend API development" is not a milestone. It's an activity. "Backend API deployed to staging and passing integration tests" is a milestone. The difference is simple: a milestone is something that is either true or false at a given moment. Either it happened or it didn't. An activity is something that is ongoing.

When your milestones are activities, you will have meetings where someone says "how's the backend API going?" and the answer is "it's going well, we're making progress." That tells you nothing about whether you're on track. When milestones are states, the answer is "either it's done or it isn't" — which is useful.

Mistake 4: Forgetting the Rollout

Almost every project decomposition accounts for building the thing and forgets about deploying it. Deployment is often 20-30% of the total effort on a large project. It includes feature flags, gradual rollouts, monitoring, rollback plans, customer communications, and support preparation. These are not trivial.

Always have at least one milestone in your decomposition that is explicitly about rollout and validation, separate from the milestone for building the feature.

The Key Principle of This Chapter

Decomposition is not planning. It is discovering the shape of the work. You cannot know what you don't know until you've drawn the map — and the map's most important feature is not where the roads go, but where the roads end.

Putting It Together: Your Decomposition Checklist

Before you leave the decomposition stage of a project and move into execution, run through this checklist. It takes ten minutes and it will catch 80% of the structural problems before they become execution problems.

Checklist

Before You Start Executing: The Decomposition Check

The outcome is written down in one sentence

It describes what changes in the world, not what gets built. Everyone on the team agrees on it.

There are between 3 and 7 milestones

Each milestone is a state, not an activity. Each can be demonstrated or shown to be true or false.

The dependency graph has been drawn

You know which things depend on which other things. The critical path has been identified.

Cross-team dependencies are identified and conversations have started

You haven't just written them down — you've reached out to the teams involved and confirmed their availability and timeline.

Hidden work has been accounted for

Migration, testing infrastructure, observability, documentation, and review cycles are in the plan, not assumed to be zero.

The critical path fits inside the deadline

If it doesn't, you've had the scope/deadline/resource conversation with leadership before you started execution, not after.

There is a rollout milestone

Deployment and validation are explicitly in the plan, not assumed to happen automatically after "done."

If you can check every item on this list, you are ready to start executing. If you can't — if there are items you're not sure about or that expose uncertainty — spend one more day on decomposition. That day will save you weeks later.

Most Common Mistakes

  • Starting execution without drawing the dependency graph — then discovering the critical path in month three when there's no room to respond
  • Confusing milestones (states) with activities — makes progress invisible
  • Decomposing by team ownership instead of by outcome — hides integration risks
  • Forgetting migration, testing infrastructure, and rollout in the plan

Three Questions for Your Next Project

  • If I drew the dependency graph right now, where would the critical path be — and does our current resourcing reflect that?
  • Are our milestones states ("X is deployed and working") or activities ("we are working on X")?
  • Have we accounted for migration, testing, observability, and rollout — or are those "to be figured out later"?
Next Chapter
Chapter 9 — Milestones That Actually Work