The Team You Cannot Order Around
Imagine you are six months into a large platform migration. Your team has done solid work. The architecture is clean, the core service is built, and the integration plan is written. Now the hard part begins: you need seven other teams to migrate their services onto your new platform within the quarter. None of those teams work for you. None of their managers work for your manager. They each have sprint plans that were set three months ago, and your migration is not in any of them.
This is not an unusual situation. It is the normal situation for any cross-team project at a company past about 200 engineers. And yet most execution playbooks spend almost no time on it. They talk about planning, milestones, risk registers. They do not talk about the lived experience of standing at the edge of your org with a compelling vision, a tight deadline, and teams on the other side of that boundary who have their own compelling visions and their own tight deadlines.
Let's be precise about the problem. When you communicate up — to your director or VP — you have a power asymmetry working in your favor. You can escalate. You can ask for a decision. You have a channel that, even if uncomfortable to use, exists and works. When you communicate down — to your own team — you have another kind of power. You can assign work, unblock people, create clarity. But when you communicate sideways, you have neither. You cannot command. You cannot compel. You are, in the most literal sense, on your own.
Everything you get from peer teams, you earn through influence, relationship, and smart communication. This chapter teaches you how.
Why Sideways Is the Hardest Direction
Here is something worth sitting with: most engineers who struggle with lateral collaboration are not bad communicators. They are good communicators who are using the wrong mental model. They are approaching peer teams as though the problem is informational — that if they just explain things clearly enough, the other team will understand and comply. But the problem is almost never informational. It is motivational and structural.
Peer teams don't fail to help you because they don't understand your project. They fail to help you because:
- Your project is not on their roadmap. They made commitments to their own stakeholders that they are being held to. Your request, however reasonable, is asking them to break or delay those commitments.
- The work lands on them, the benefit lands on you. You look good when the platform migration succeeds. They take on execution risk, engineer hours, and the operational burden of running on a new system — often with unclear upside for their own team metrics.
- They don't fully trust your design yet. This is often unspoken. They have not been part of the decisions that led to your design. They do not know if it is correct. Taking a dependency on something they didn't help build feels risky, and risk aversion is rational.
- There is no formal mechanism forcing them to prioritize you. They can defer. They can say they are looking into it. They can ask for more detail. Each of these is a polite way of saying: this is not my problem yet.
Lateral collaboration fails not because of bad intentions but because of misaligned incentives. The team that needs the work done is not the same team that has to do the work. Until you close that gap — either by making the work easy, making the outcome valuable to them, or making the cost of not doing it visible — you will keep hitting the same wall.
There is also a structural problem that gets worse as organizations scale. In a 20-person startup, cross-team communication is a Slack message. Everyone knows what everyone else is doing. There is a natural shared context. At 500 engineers, teams exist in something close to information silos. The payments team has deep knowledge of their fraud detection system and almost no knowledge of the recommendation team's data pipeline. When those two teams need to collaborate on something, they are not starting from a shared foundation. They are starting from two completely different mental models of how the world works.
This is the context gap. It is the source of more cross-team friction than any personality conflict or political motivation. And it is entirely solvable, but you have to solve it deliberately.
Building Trust Before You Need It
The single most common mistake principal engineers make in lateral communication is this: they only talk to peer teams when they need something. The first email to the infrastructure team is a request. The first meeting with the data platform team is to get an API built. The first time they appear in a partner team's Slack channel is to report a problem.
This is a terrible pattern. Not because it is rude — it is not rude — but because it means you have no trust in the bank when you need to make a withdrawal. You are showing up at the bank for the first time to ask for a loan.
Making Trust Deposits
Trust deposits are small actions that build goodwill with teams you will depend on later. They do not need to be grand gestures. They need to be genuine and consistent.
Learn their pain before you bring yours
Spend a meeting understanding what a peer team is struggling with before you ask them for anything. What are they being measured on? What is slowing them down? What did their last postmortem say? This is not small talk. It is the foundation for every conversation you will have with them later.
Send signals that cost you something
Acknowledge their work publicly. When a peer team ships something that helps your users, say so — in a shared channel, in an all-hands, in a message to their manager. This costs you nothing except a few minutes. But it signals that you pay attention, that you recognize their effort, and that you are not purely transactional.
Offer before you ask
If you have knowledge, code, tooling, or access that would help a peer team, share it without being asked. If your team built a great load-testing framework, show it to the infrastructure team. If you have production data that would help the ML team validate a model, offer it. Every time you do this you are demonstrating that collaboration with you is not a one-way street.
Do the small asks reliably
When a peer team asks you for something small — a review, a clarification, a pointer to documentation — do it fast and do it well. This sounds obvious but it is not how most people operate under deadline pressure. When you are busy, small asks from other teams are easy to defer. But every deferred small ask is a trust withdrawal you may not even notice making.
Tell them what is coming before it arrives
If your team is planning something that will affect a peer team — a new API that changes a contract, a migration that adds load to their database, a deprecation that removes something they depend on — tell them early. Not in a formal RFC. In a Slack message, a quick coffee chat, a casual heads-up. Early warning, before the formal process, is one of the most powerful trust signals you can send.
Trust Traps to Avoid
There are also behaviors that drain trust slowly, often without you noticing.
- Saying yes and then not delivering. This is worse than saying no. A peer team that says no disappoints you. A peer team that says yes and then goes silent forces you to manage around them under deadline pressure. Be honest about your capacity.
- Bringing decisions after they are made. Sending a peer team a design doc three days before you ship is not collaboration. It is notification with extra steps. It signals that you did not actually want their input — you wanted their blessing.
- Escalating before exhausting direct channels. If you go to someone's manager before trying to resolve something directly, you have spent a large amount of trust and you will not get it back easily.
- Changing requirements after work has started. Every time this happens without a real explanation and a genuine conversation about the impact, you are spending trust you will need later.
- Taking credit for shared work. When you present results from a joint project, name the teams that contributed. Skipping that credit is a slow, quiet trust drain; giving it consistently builds the kind of reputation that makes future collaboration easier.
The RFC as an Alignment Tool
RFC stands for Request for Comments. The term comes from the internet engineering community, where RFCs are how new protocols get proposed, debated, and agreed upon. The idea is simple: before you build something that affects other people, you write down what you intend to build and why, and you invite them to respond before work begins.
In a software company, the RFC is the most underused alignment tool available. Most teams use design docs as internal documents — written for themselves, reviewed by their own team, occasionally shared with other teams after the design is mostly fixed. That is a design doc. It is not an RFC.
A true RFC has a different purpose and a different audience. Its primary goal is not to document a decision that has been made. It is to surface the information, objections, and concerns that should shape the decision before it is made. This distinction matters enormously.
The Same Doc, Two Different Outcomes
Two teams at the same company are both planning a migration to a new authentication service. Team A writes a design doc, reviews it internally, and sends it to two peer teams two weeks before the planned migration date asking for "any last feedback before we go live." Both peer teams say it looks fine. The migration goes live. Three weeks later, a third team that was never in the conversation discovers the new auth service doesn't support a token format they rely on. A six-week rollback begins.
Team B publishes a one-page RFC four weeks before the migration. The RFC states the problem, the proposed approach, and explicitly asks: "What are we missing? Who else depends on the current auth flow?" Within 48 hours, the same third team responds in the comments saying they depend on a specific token format. The migration is redesigned in a day. It goes live on schedule. No rollback.
The difference was not the quality of the engineering. Both teams were competent. The difference was the process: one invited participation, one invited approval. Only one discovered what it didn't know.
Anatomy of a Good RFC
A good RFC is not a long document. The best ones I have seen are two to four pages. Their purpose is to create a conversation, not to impress anyone with thoroughness. Here is the structure that works.
The Problem Statement
One paragraph. No jargon.
The Proposed Solution
What you plan to do and why you chose it.
Alternatives Considered
The most important section most people skip.
Impact on Other Teams
The section that earns you the response you need.
Open Questions
Where you are explicitly asking for help.
Timeline and Deadline for Feedback
Without this, nothing happens.
Running the RFC Process
Publishing the document is only step one. An RFC without a process is just a document that nobody read.
Step 1: Send it directly, not just to a mailing list. Email the tech lead and manager of every team you named in the impact section. Send it to any team that might have an opinion even if you didn't name them. A mailing list post is easy to ignore. A direct message to a specific person is much harder to ignore.
Step 2: Walk the important ones through it in person. For any team where the impact is significant, do not rely on them reading the document. Schedule a 30-minute walkthrough. Reading a doc is passive. A conversation forces engagement and surfaces objections that would never appear in written comments.
Step 3: Respond to every comment within 24 hours. If someone raises a concern, respond to it — even if just to say "good point, we're looking into it." Silence from the author is the fastest way to kill engagement. People stop commenting when they feel like their comments go into a void.
Step 4: Log every decision the RFC produced. When a discussion results in a design change, document it in the RFC with a short explanation. "After discussion with Team Platform, we changed the approach in Section 2 to use pull-based rather than push-based sync. Reason: push-based would have required firewall changes in three data centers." This makes the RFC a living record, not just an initial proposal.
Step 5: Close it explicitly. When the feedback period is over, post an update: "RFC is now closed. We incorporated feedback from Teams X, Y, Z. Changes are summarized here. We are proceeding with the modified plan. Implementation starts Monday." This signals respect for the people who engaged and gives everyone a clear sense of where things stand.
RFC Mistakes That Kill Alignment
Publishing too late. An RFC published two weeks before launch is not asking for input. It is asking for a rubber stamp. Peer teams know the difference and resent being put in that position.
Writing for experts. If your RFC is full of internal codenames, team-specific acronyms, and assumed context, only people already close to your work will be able to engage. You need the people who are farthest from your work.
Making it too long. A 20-page design doc is not an RFC. Peer engineers will not read 20 pages. They will skim, miss the important section, and give you useless feedback. Keep it short.
Not updating it. An RFC that doesn't change based on feedback signals that the process is theater. If you don't incorporate any of the input you received, people will stop responding to your next one.
RFC Habits That Earn Engagement
Publish early, even when incomplete. It is better to share a rough RFC with clear open questions than to wait until it is polished. The earlier you share, the more time others have to catch problems you can still fix cheaply.
Lead with the why. The first paragraph should answer: why does this matter? Engineers engage with problems they understand as important. If they don't understand why you are doing this, they will not care how you are doing it.
Ask specific questions. "What are we missing?" gets nothing. "Team Auth: does this approach break your token refresh flow? We're unsure how to handle the expiry window in Section 3." gets a real answer.
Show your reasoning. When you explain why you chose one approach over another, you are inviting people to challenge your reasoning — not just your conclusion. This produces much better feedback.
Creating Shared Context
The RFC solves a specific problem: getting alignment on a specific proposal. But there is a deeper problem underneath it that the RFC alone cannot solve. That problem is the context gap.
The Context Gap Problem
Every team in a large organization lives inside its own context bubble. The payments team understands the payments system in extraordinary detail. They know its history, its failure modes, its quirky behaviors during peak traffic, the three incidents that shaped its current architecture, the ongoing debate about whether to rewrite the settlement engine. They have internalized all of this so deeply that it no longer feels like knowledge. It feels like reality.
The search team, two floors away, knows almost none of this. They know the payments API exists. They might know the general shape of it. But they don't know the quirks, the constraints, the history. When you ask the search team to do something that affects payments — or when payments asks search to do something — each team is working from a fundamentally different picture of the world.
This is not a problem of intelligence or effort. It is a structural consequence of how knowledge is distributed in a large organization. Each team has a rich model of their own system and a thin model of everyone else's. The thin models are full of assumptions that are plausible but wrong.
Consider a concrete example. The infrastructure team is planning a change to the way secrets are rotated in the production environment. They have designed a new system that rotates secrets automatically every 24 hours instead of every 90 days. This is, objectively, a security improvement. They are proud of it.
What they don't know: the ML training team has a long-running job that runs for 36 hours and reads a database credential at startup. Under the new rotation system, that credential will expire halfway through the job. The job will crash. The training run — which took three weeks to set up — will be lost.
The ML team never mentioned this because they had no idea the infrastructure team was touching secret rotation. The infrastructure team never thought to ask because they assumed 24 hours was a safe rotation window. Both assumptions were reasonable given each team's knowledge. Together they produced a silent failure that would only surface on the next training run.
This is the context gap. It is not fixed by better communication during the incident. It is fixed by building shared context before the incident.
Tools for Building Context
There is no single tool that eliminates the context gap. It requires a combination of practices, applied consistently.
The cross-team architecture walkthrough. Once a quarter, run a meeting where each team spends 20 minutes walking the others through the most important aspects of their system — the pieces other teams are most likely to be affected by. Not a deep technical dive. A high-level orientation. What does your system do? What are its dependencies? What are its most important constraints? What are the failure modes that would surprise someone who doesn't know it well? Done consistently, these sessions build a shared model of the system landscape that makes every future collaboration easier.
The incident broadcast. When something goes wrong in your system, write a short summary and send it to the engineering-wide channel — or at minimum to the teams who might be affected. Not a post-mortem. A one-paragraph summary: what happened, what broke, what you learned. This builds shared context in real time and signals that you believe in transparency. It also invites other teams to flag if the same failure mode might affect them.
The "what we're building" update. Every two weeks, a short post in a shared channel: what your team shipped, what you're working on, what is coming in the next month. This is not a status report to management. It is a signal to peer teams about what is coming their way. It gives them the chance to flag conflicts before they become emergencies.
The shared glossary. In large organizations, the same word often means completely different things to different teams. "Event" means something very specific to the eventing infrastructure team and something completely different to the analytics team. "User" might refer to an external customer in one team's vocabulary and an internal service account in another's. A shared glossary — even a simple one — dramatically reduces the miscommunication that happens when the same word is used with different meanings.
Embedded engineers. The highest-bandwidth context-sharing mechanism is a human being. For critical cross-team integrations, consider temporarily embedding an engineer from your team with the partner team for one or two weeks, or vice versa. Nothing closes the context gap faster than working side by side. The engineer comes back with a mental model that no document could have produced.
Think of shared context as an investment with a compounding return. Every hour you spend building context with a peer team today saves you multiple hours of misalignment, rework, and conflict in the future. The engineering organizations that seem to execute the fastest are almost always the ones that have invested the most in shared context. They move fast because they have built the shared understanding that makes parallel work safe.
Making It Easy to Say Yes
Let's say you have done everything right so far. You have built trust. You have published an RFC. You have created shared context. Now you are asking a peer team to do something specific — to migrate their service, to build an integration, to change their deployment pipeline, to adopt a new protocol. They understand why it matters. They want to help. And still, it is not happening.
The problem is usually not motivation. The problem is friction. There is work standing between their goodwill and their actual contribution, and that work has nowhere to go on their sprint board.
The Real Cost of "No"
When a peer team doesn't do what you need, it often isn't because they are saying no. It is because saying yes costs them something and no one has done the work to reduce that cost. Consider what it takes for a team to adopt your new platform:
- Someone has to read your documentation (if it exists)
- Someone has to set up a test environment
- Someone has to write migration code
- Someone has to do a load test to make sure their service still performs
- Someone has to update the deployment pipeline
- Someone has to run the migration during a maintenance window
- Someone has to be on call for a week afterward in case something breaks
That is a lot of someone. And all of that "someone" is headcount that this team is already using for their own roadmap. The question is not whether they support your migration in principle. Of course they do. The question is: who pays the cost of making it happen?
Most engineers who run cross-team initiatives answer this question implicitly: the partner team pays. The partner team does the migration work, the testing, the pipeline changes. This is the default assumption. And it is why cross-team adoption of new platforms, protocols, and standards is so slow at most companies.
The fastest cross-team migrations I have ever seen all share a common pattern: the team asking for the migration did most of the work.
The Yes Playbook
Here is the specific set of moves that make it easy for other teams to say yes.
Write the migration guide before anyone asks for it. Not a one-page overview. A step-by-step guide that a mid-level engineer who has never touched your system can follow from start to finish without asking anyone for help. Include the commands. Include the expected outputs. Include the common failure modes and how to fix them. The number of teams that will adopt your platform is directly proportional to the quality of this document.
Build the migration tooling. If teams need to transform data, update configs, or run a script to migrate to your new system, write the script for them. Don't give them a specification of what the script should do. Give them the script, tested on a real service, with instructions for how to run it. Every hour you spend on migration tooling saves partner teams three to five times that in migration work.
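To make that concrete, here is a minimal sketch of what handing a team the script can look like, assuming a hypothetical move from a legacy YAML service config to the new platform's format. The file shapes and field names are invented for illustration; the point is that a partner engineer runs one command and reviews a diff instead of interpreting a migration spec.

```python
#!/usr/bin/env python3
"""Convert a service's legacy config to the new platform's format.

Hypothetical field names and shapes; substitute your real schemas.
Requires PyYAML (pip install pyyaml).
"""
import argparse
import yaml


def convert(old: dict) -> dict:
    # Map the legacy shape onto the new platform's shape, with safe defaults.
    return {
        "service": old["name"],
        "platform": {
            "endpoint": old.get("legacy_endpoint", "").replace("/v1/", "/v2/"),
            "timeout_ms": int(old.get("timeout_seconds", 30)) * 1000,
            "retries": old.get("retries", 3),
        },
    }


def main() -> None:
    parser = argparse.ArgumentParser(description=__doc__)
    parser.add_argument("old_config", help="path to the legacy YAML config")
    parser.add_argument("new_config", help="path to write the converted config")
    args = parser.parse_args()

    with open(args.old_config) as f:
        old = yaml.safe_load(f)

    with open(args.new_config, "w") as f:
        yaml.safe_dump(convert(old), f, sort_keys=False)

    print(f"Wrote {args.new_config}. Review the diff, then run the smoke test from the migration guide.")


if __name__ == "__main__":
    main()
```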
Offer to do the first migration with them. Send an engineer from your team to sit with the first partner team through their migration. Not to do it for them — to do it alongside them. This catches the places where your documentation is wrong or incomplete. It builds trust between the two teams. And it gives you a reference case you can point to when convincing the next team: "We already did this with Team Search. Here's what it looked like. The migration took four hours."
Write the tickets they would have had to write. When you are asking a team to do work, create the JIRA or GitHub issues for that work. Write them clearly, with acceptance criteria. Scope them to be small enough to fit in a sprint. Make it so that the team's only job is to prioritize and execute — not to translate your request into actionable work items. This sounds like a small thing. It is not. It removes the invisible overhead that makes every cross-team request harder than it looks.
Create a shared test environment. One of the most common reasons cross-team integrations stall is that testing is hard. Setting up a test environment that exercises the integration between two systems requires knowledge that neither team fully has. Create a shared sandbox environment where partner teams can test their integration before touching production. Make joining it a one-click operation. Remove the setup cost entirely.
Take the on-call risk off their table. If a team is worried that migrating to your system will create new incidents for them, offer to take primary on-call responsibility for any issues that trace back to your platform for the first 30 days after their migration. This is a significant commitment. It is also the commitment that closes more deals than any other. It demonstrates that you believe in your system's quality and that you are not asking them to take a risk you would not take yourself.
There is a technique for reducing migration work that is almost always available and almost always underused. I call it the lazy migration pattern. Instead of asking a team to proactively migrate their service, you build a compatibility layer that handles both the old and new systems, and you auto-migrate their data or config in the background, invisibly, over time. They don't have to do anything. The migration happens. When it is complete you flip a flag and they are on the new system.
This only works for certain kinds of migrations, but when it does work, adoption goes from months to days. The lesson: if you want fast adoption, do not ask teams to work. Do the work for them and ask only for permission.
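Here is a minimal sketch of the pattern, assuming a key-value store being moved from an old backend to a new one. The old and new clients are hypothetical stand-ins exposing get and put; the idea carries over to configs, feature data, or anything else you can copy forward on access.

```python
class MigratingStore:
    """Compatibility layer for a lazy migration between two key-value backends.

    Reads prefer the new store, fall back to the old one, and copy the value
    forward on first access, invisibly to the caller. Writes land only in the
    new store, so the old backend drains over time.
    """

    def __init__(self, old_store, new_store):
        self.old = old_store  # hypothetical client exposing get(key) / put(key, value)
        self.new = new_store

    def get(self, key):
        value = self.new.get(key)
        if value is not None:
            return value
        value = self.old.get(key)
        if value is not None:
            self.new.put(key, value)  # the lazy migration happens here
        return value

    def put(self, key, value):
        self.new.put(key, value)

    def fully_migrated(self, keys) -> bool:
        # When every key of interest resolves from the new store, flip the flag
        # and retire the old backend.
        return all(self.new.get(k) is not None for k in keys)
```

A background job that walks the remaining keys covers the long tail, so completion does not depend on read traffic.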
When Peer Collaboration Breaks Down
Even when you do everything right, peer collaboration sometimes breaks down. A team that agreed to a timeline stops responding. A team that was enthusiastic about your RFC objects to every revision. A team that seemed aligned for three months suddenly announces a competing approach that invalidates your design.
Here is the framework for thinking through these situations.
First: assume a legitimate reason. The most common mistake when peer collaboration breaks down is to assume bad faith. Rarely is it bad faith. Almost always it is one of three things: their priorities changed (their manager got a new mandate), they discovered a technical constraint they didn't know about earlier, or there was a miscommunication somewhere in the chain that led them to believe something different from what you believe. Start by understanding which of these is true before deciding how to respond.
Second: have the direct conversation first. Before you email anyone's manager, before you escalate to your skip-level, before you write a passive-aggressive document laying out the timeline and who failed to deliver what — have a direct conversation with the person on the other side. Tell them what you are seeing. Ask them what is going on. Listen without defending. You will be surprised how often this resolves something that felt unsolvable.
Third: separate the technical from the organizational. Sometimes what looks like a technical disagreement is actually an organizational one. Two teams disagree about how to design an API. The technical arguments are real, but the underlying issue is that each team wants ownership of the piece of the system the API is part of. Resolving the technical question without addressing the ownership question will not produce durable alignment. Identify what is really being contested.
Fourth: make the cost of impasse visible. Sometimes a peer team is not moving because the cost of not moving is not visible to them. They see their own sprint pressure clearly. They do not see the impact of their delay on your project's timeline and on the users and the business. This is not malice. It is a focus problem. Your job is to make the cost of impasse concrete and visible — not as a threat, but as information. "If we don't resolve this by Friday, the Q3 launch is at risk. Here is what that means for customers." Said directly and without drama, this often creates movement where nothing else has.
Fifth: escalate as a last resort, not a first move. When escalation is necessary, it should come with a clear framing: "We have tried the following direct approaches and they have not resolved the situation. We are not asking you to pick a side. We are asking for help creating a space where we can resolve this." This framing matters. Escalating with "Team X is blocking us and won't cooperate" positions the other team as the problem. That may feel satisfying in the moment, but it damages your reputation and the relationship. Frame escalation as a request for facilitation, not a verdict.
Patterns That Work at Scale
As the number of teams involved in a project grows, the complexity of lateral communication grows faster than linearly. Two teams need one shared channel. Five teams need a coordination structure. Twenty teams need an operating model. Here are the patterns that work at each scale.
The Working Group (2–5 teams). Create a small group with one representative from each team. Meet weekly. The agenda is simple: what was done this week, what is blocked, what needs a decision. Keep the meeting to 30 minutes. Give each representative the authority to make decisions for their team on the issues the working group covers. Without that authority, the meeting becomes a relay station that slows everything down.
The Tech Lead Sync (5–10 teams). When you are coordinating enough teams that a single working group gets too large, shift to a tech lead sync model. Each team nominates one technical point of contact. This person is not a project manager. They are the engineer who understands the technical surface area most relevant to the coordination. They meet weekly. They own the technical alignment between teams. Above them, program managers or engineering managers handle the schedule, resource, and escalation concerns separately.
The Hub and Spoke model (10+ teams). Beyond about ten teams, trying to keep everyone in the same room stops working. The meeting is too large, the agenda is too diffuse, and the people who are responsible for only a small piece of the work spend most of the time waiting for the part that is relevant to them. Shift to a hub and spoke model. The platform team (hub) maintains bilateral relationships with each dependent team (spokes). The hub is responsible for aggregating what all the spokes need and translating it into a coherent plan. The spokes are responsible for deep knowledge of their piece and surfacing problems to the hub early.
The number of communication channels between N teams is N(N-1)/2. Two teams have 1 channel. Five teams have 10. Ten teams have 45. Twenty teams have 190. This is why large cross-team projects do not fail because people stop caring. They fail because the volume of information that needs to flow between all the teams exceeds the capacity of any reasonable communication structure to carry it. The solution is not more communication. It is better structure for the communication that needs to happen.
The interface contract. The most powerful way to reduce ongoing lateral communication overhead is to define clean, stable interfaces between teams. When Team A and Team B have agreed on exactly what Team A's service will provide and exactly what Team B expects to receive — and when that contract is written down, versioned, and tested — those two teams can work almost independently. They don't need weekly syncs. They don't need embedded engineers. They need a contract and a way to tell when the contract is violated.
This is the ideal end state for any cross-team relationship. Not constant communication but a clear agreement, a shared test suite, and the confidence that comes from both sides understanding exactly where their responsibility ends and the other's begins.
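To make that end state concrete, here is a minimal sketch of a contract both teams could check into a shared repository and verify in CI. The endpoint and field names are invented for illustration; the mechanism, a versioned and machine-checkable agreement that fails the build when either side drifts, is the point.

```python
# contract_v2.py: the agreed interface between provider and consumer teams.
# Checked into a shared repo; both sides run verify() in CI, so a violation
# fails the build of whichever team broke the agreement. Names are illustrative.

CONTRACT = {
    "version": "2.1",
    "endpoint": "/v2/payments/charge",
    "request_fields": {"amount_cents": int, "currency": str, "idempotency_key": str},
    "response_fields": {"charge_id": str, "status": str},
}


def verify(payload: dict, fields: dict, side: str) -> None:
    """Raise if a payload has drifted from the agreed shape."""
    for name, expected_type in fields.items():
        if name not in payload:
            raise AssertionError(f"{side} broke contract v{CONTRACT['version']}: missing '{name}'")
        if not isinstance(payload[name], expected_type):
            raise AssertionError(
                f"{side} broke contract v{CONTRACT['version']}: '{name}' should be "
                f"{expected_type.__name__}, got {type(payload[name]).__name__}"
            )


# Provider CI: verify(sample_response_from_service, CONTRACT["response_fields"], "provider")
# Consumer CI: verify(sample_request_we_send, CONTRACT["request_fields"], "consumer")
```

When both sides run a check like this against real samples in CI, a contract violation shows up as a failed build on the team that caused it, not as a surprise in a weekly sync.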