
GTM Engineering, as a discipline, makes a foundational assumption it rarely examines: that the problem to be solved is execution. Better data, better enrichment, better automation, better attribution. The implicit promise is that if you wire the machine correctly, the pipeline follows.
This is true, conditionally. The condition is that your strategic diagnosis is correct before you wire anything, and that you are building the right kind of system in the first place. If the motion you are scaling is the wrong motion, what you have built is an efficient system for proving the wrong thesis. And if the distribution architecture itself is wrong, no amount of execution precision closes the gap.
The discipline, properly understood, contains two distinct failure modes. The first is epistemological: the signal you are amplifying is corrupt at the source. The second is architectural: even with a clean signal, you may be building a push system in a distribution environment where the highest-leverage structures are self-propagating. Most GTM Engineering practice addresses neither.
The machine magnifies whatever judgment you feed into it — good and bad alike. Build the wrong machine on a corrupt signal, and you produce a more confident version of the wrong answer, faster.
What GTM Engineering Actually Is
The discipline is, at its structural core, a factory model for pipeline creation. The five-layer stack that anchors most GTM Engineering frameworks mirrors a production system almost exactly: raw material becomes data, targeting becomes audience intelligence, activation becomes engagement automation, processing becomes pipeline operations, and quality assurance becomes measurement and feedback.
Frederick Winslow Taylor would recognize the logic immediately. So would Toyota. The signal-to-revenue chain — detect signals, prioritize accounts, trigger outreach, qualify responses, convert to opportunities — is not a new logic. It is lean manufacturing applied to revenue.
The novelty is that software now lets a single operator build the factory. What required a Salesforce-scale engineering organization a decade ago can be assembled by a RevOps person with access to modern tooling and a clear description of what they want the system to do. The raw materials are the same. The assembly cost collapsed.
That collapse is the real shift the discipline is responding to. But what gets built with those newly affordable materials still reflects choices about both signal quality and system architecture — choices the discipline tends to treat as settled when they are not.
The First Failure Mode: Corrupt Signal
The Honesty Layer
Distribution does not primarily fail at the channel layer. It fails at the honesty layer because the social context of the sales interaction systematically produces false positives.
The buyer expresses interest. The founder performs with confidence. Both parties leave the conversation having told each other what the other wanted to hear. The CRM gets updated with ‘promising conversation.’ Nothing useful was learned. This is not a data quality problem. It is an architectural problem with the feedback mechanism itself.
No enrichment pipeline corrects for it. No scoring model detects it. Engagement automation applied on top of it produces the same useless signal faster and at a higher volume. The machinery amplifies the corruption rather than filtering it.
The reason communities — Reddit threads, Discord servers, niche Slack groups — often produce better signal than enrichment pipelines is structural, not incidental. The social architecture of a forum does not reward politeness the way a sales conversation does. People complain honestly when they are not performing for a salesperson. They describe failed solutions, workarounds, and the specific texture of the problem they cannot solve. That behavioral signal is qualitatively different from the firmographic signal, and it is systematically absent from the GTM Engineering stack as conventionally designed.
The Stated vs. Revealed Preference Gap
Standard ICP modeling begins with closed-won data. The advice is consistent: analyze your last 24 months of wins, find the clusters, build a scoring model, and layer in intent signals.
The problem is that closed-won data encodes your historical sales motion, not your actual product-market fit. If you have been selling through warm introductions and founder relationships, your closed-won cohort reflects who you could convince, not who you could genuinely serve. Running a regression on that dataset produces an ICP that is, in structural terms, a network topology map dressed as a customer profile. The scoring model will tell you to find more people who look like people who trusted you personally. That is not the same as finding people who need what you built.
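A minimal, fully synthetic sketch of that failure mode, in Python. The features (warm_intro, needs_product) and the win probabilities are invented assumptions, not real data. The mechanics are the point: when warm intros close regardless of need, a model trained on closed-won data learns the network, not the fit.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Whether the account genuinely needs the product is independent of
# how the deal was sourced.
needs_product = rng.binomial(1, 0.3, n)
warm_intro = rng.binomial(1, 0.2, n)

# Hypothetical founder-led motion: warm intros close largely on trust;
# cold deals close mostly when genuine need exists.
p_win = 0.05 + 0.55 * warm_intro + 0.25 * needs_product * (1 - warm_intro)
won = rng.binomial(1, p_win)

X = np.column_stack([warm_intro, needs_product])
model = LogisticRegression().fit(X, won)

# The dominant coefficient is warm_intro: the model has learned the
# founder's network topology, not product-market fit.
print(dict(zip(["warm_intro", "needs_product"], model.coef_[0].round(2))))
```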
What you want is not stated preference — the people who said yes inside a sales interaction — but revealed preference: the people who get real value, retain, expand, and refer others without being asked. Those populations overlap. They are not identical. The gap between them is where your actual ICP lives, and no amount of closed-won analysis closes that gap. The corrective is different data: post-sale retention rates, expansion behavior, referral patterns, and time-to-value segmented by how the customer was originally sold to.
That corrective is not founder intuition, either. It is post-sale truth-telling: evidence of which customers actually got value, not which customers said yes in the context of a sales interaction.
What Measurement Misses
Standard GTM measurement tracks pipeline velocity: meetings booked, pipeline created, stage conversion rates, funnel efficiency by channel. These are legitimate metrics. They are also all leading indicators measured inside the sales process.
What is structurally absent from most measurement practice is post-sale truth-telling. Churn rates by acquisition channel. Expansion rates by ICP segment. NPS segmented by how the customer was sold to. Time-to-value by persona. These are the metrics that reveal whether the motion is producing a genuine fit or a manufactured pipeline.
The reason teams do not track them is organizational, not technical. Sales owns the pre-close metrics and is incentivized to optimize them. Customer success owns the post-close metrics and inherits whatever the sales motion created. Nobody owns the through-line. GTM Engineering, as typically scoped, sits on the sales side of that fault line and inherits its blind spots. A complete measurement practice runs from first touch to twelve-month retention, and uses post-sale data to interrogate pre-sale assumptions rather than treating them as settled.
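What that through-line might look like in practice, as a rough sketch: one table holding both acquisition context and post-sale outcomes, cut by how the customer was sold. The schema here (acquisition_channel, sold_via, churned_12mo, and so on) is a hypothetical illustration, not a standard.

```python
import pandas as pd

# Hypothetical export joining CRM data with customer-success data.
customers = pd.read_csv("customers.csv")

# Post-sale truth-telling, cut by how the customer was acquired and sold:
# the through-line most dashboards never draw.
report = (
    customers
    .groupby(["acquisition_channel", "sold_via"])
    .agg(
        accounts=("customer_id", "count"),
        churn_12mo=("churned_12mo", "mean"),
        expansion_rate=("expanded", "mean"),
        median_nps=("nps", "median"),
        median_days_to_value=("days_to_value", "median"),
    )
    .sort_values("churn_12mo")
)
print(report)
```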
The Second Failure Mode: Wrong Physics
Manufacturing Interest vs. Detecting It
There is a philosophical shift embedded in the GTM Engineering discipline that the field itself has not fully articulated. The old outbound model tried to manufacture interest. Cold outreach, SDR sequences, volume-based prospecting — the logic was intervention: create a conversation that would not have happened otherwise.
The better GTM Engineering practice described in the discipline’s own frameworks inverts this. The most sophisticated architectures are not broadcasting to a list. They are monitoring for signals of existing pain: job postings that indicate a specific problem, technology adoption patterns that create a specific need, and organizational changes that open a specific buying window. The question shifts from ‘who can we email?’ to ‘who just experienced the problem we solve?’
That is a different philosophy with a different architecture underneath it. The signal ingestion, enrichment, scoring, and routing layer — not the sequencing and outreach layer — is where the real machine lives. Everything else is plumbing. This distinction matters because it reframes the entire discipline: the goal is not sales automation. It is early-warning detection of demand that already exists.
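A minimal sketch of that core loop: ingest signals, score them, route only above a threshold. The signal types, weights, and threshold are placeholder assumptions; the shape of the loop is the point.

```python
from dataclasses import dataclass

@dataclass
class Signal:
    account: str
    kind: str        # e.g. "job_posting", "tech_adoption", "org_change"
    strength: float  # 0..1, source-specific confidence

# Placeholder weights: how strongly each signal type indicates the
# specific problem the product solves.
WEIGHTS = {"job_posting": 0.5, "tech_adoption": 0.3, "org_change": 0.2}
THRESHOLD = 0.6

def score(signals: list[Signal]) -> float:
    # One crude way to combine signals: a weighted sum, capped at 1.0.
    return min(1.0, sum(WEIGHTS.get(s.kind, 0.0) * s.strength for s in signals))

def route(account: str, signals: list[Signal]) -> None:
    s = score(signals)
    if s >= THRESHOLD:
        print(f"{account}: score {s:.2f} -> open opportunity, assign a human")
    else:
        print(f"{account}: score {s:.2f} -> keep monitoring")

route("acme", [Signal("acme", "job_posting", 0.9),
               Signal("acme", "tech_adoption", 0.8)])
```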
The Distribution Physics Problem
Even the detection framing, however, describes a push system. You detect a signal, you initiate outreach, and a human conversation follows. The machine is more targeted and more efficient than mass outbound, but the underlying physics are the same: the company reaches toward the buyer.
The highest-leverage distribution structures work differently. They do not reach toward buyers. They create conditions under which buyers reach toward each other, and pull the product along.
Salesforce built a partner ecosystem that distributed the product through relationships the company did not have to initiate or maintain. Slack spread through workplace adoption patterns — one user brought it to a team, the team brought it to an organization. Notion spread through template sharing. Figma spread through design collaboration. None of these are outbound pipelines. None of them required a sequence. They are self-propagating loops: the act of using the product creates the conditions for more people to use the product.
The GTM Engineering playbook is sophisticated on push mechanics and almost entirely silent on pull architecture. It is a detailed treatment of how to build a better factory for initiating conversations. It has nothing to say about how to build a system where the right conversations initiate themselves.
Push systems scale linearly. You add inputs — signals, sequences, reps — and outputs grow proportionally.
Pull systems scale non-linearly. Each user, partner, or integration creates a new surface area for adoption without additional factory output.
The two architectures are not mutually exclusive. The best distribution strategies use push to seed the conditions under which pull becomes possible.
But you cannot get to pull by optimizing push. The architectural decision has to be made deliberately, before you build the factory, not after you have been running it for two years.
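The difference between the two regimes is easy to see in a toy model. The numbers below (outreach capacity, referral rate) are arbitrary assumptions chosen only to show the shapes: push grows linearly from day one, while pull looks worse for a long time and then overtakes.

```python
push_per_month = 20    # accounts the push factory converts each month
referral_rate = 0.15   # new users each existing user brings per month

push_users, pull_users = 0, 20.0   # pull seeded once, e.g. by early push
for month in range(1, 25):
    push_users += push_per_month     # linear: output proportional to input
    pull_users *= 1 + referral_rate  # compounding: proportional to installed base
    if month % 6 == 0:
        print(f"month {month:2d}: push={push_users:4d}  pull={pull_users:6.0f}")
```

In this toy run, pull does not overtake push until late in the second year, which is exactly why the architectural decision cannot be judged on a quarterly horizon.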
When the cost of building GTM machines drops toward zero, the advantage shifts upstream — not to better automation, but to better problem discovery and better distribution architecture. The machine is increasingly easy. Finding the real signal and choosing the right physics is where the game is moving.
The Binding Constraint Is Rarely What It Appears to Be
GTM Engineering frameworks treat execution capacity as the primary constraint. Build the infrastructure, instrument the funnel, automate the workflows. The implicit assumption is that the motion is correct, and the question is how to run it better.
But pipeline problems typically have a single binding constraint — one variable that, if addressed, unlocks disproportionate improvement. And the binding constraint is frequently not what the team believes it to be.
Channel instability is a distribution problem masquerading as an execution problem.
ICP drift is a positioning problem masquerading as a targeting problem.
Attribution failure is often a messaging problem masquerading as a measurement problem.
Messaging paralysis is usually an honesty problem masquerading as a creative problem.
Flat pipeline, despite high activity, is often an architectural problem masquerading as a volume problem.
Building across all layers while the binding constraint is unresolved does not fail slowly. It fails expensively because you have automated and instrumented a broken motion, which makes it harder, not easier, to see what is actually wrong.
The remedies for different binding constraints are not just different in degree. They are frequently contradictory. Adding signal detection and enrichment capacity is the right response to a volume problem. It is the wrong response to an honesty problem, where the issue is not that you have too little signal but that the signal you have is systematically misleading. Investing in self-propagating distribution architecture is the right response to a physics problem. It does nothing for a signal quality problem.
Applying the remedies for one constraint to another does not produce a smaller version of the right answer. It produces a more confident version of the wrong one.
The Prior Step
Before enrichment pipelines. Before ICP scoring models. Before engagement automation, signal routing, attribution dashboards, and self-propagating loop design. Two questions need answers that the conventional GTM Engineering buildout does not provide.
First: Is the signal honest? Are the indicators you are proposing to amplify — closed-won data, buyer conversations, intent signals, firmographic proxies — actually telling you what you think they are telling you? Or are they encoding the social architecture of the sales interaction rather than the underlying reality of buyer need?
Second: What kind of system should this be? Push mechanics, detection architecture, self-propagating loops, or some deliberate combination — the physics of how your distribution works is a strategic decision, and it needs to be made before you build the factory that instantiates it.
The Distribution Diagnostic exists to answer both questions before any motion is scaled. It is a fixed-scope engagement designed to identify the binding GTM constraint: not to assume it is execution capacity, not to assume it is signal volume, but to determine from evidence what is actually broken and in what order the repairs need to happen.
The output is not a strategy deck. It is a finding: here is what is actually broken, here is the evidence, and here is the order of operations for addressing it. Only then does the question of what to build — and at what layer, and with what physics — become answerable.
A founder who executes the GTM Engineering playbook correctly ends up with an efficient push system running on a potentially corrupt signal in a distribution environment that increasingly rewards pull. That is a more complete diagnosis of the problem than the field has so far produced. It is also a more useful starting point for building something that compounds.
Jonathan Colton is a distribution strategist and author of Distribution Is Hard — Don’t F*ck It Up.
