Every marketing KPI in common use measures activity, not decisions. The gap between the two is where margin, speed, and competitive advantage quietly leak out of the modern enterprise — and it has been that way since the discipline was instrumented.

Picture a CMO presenting a $40 million media budget to her CFO at the end of a quarter. The deck is sharp. Click-through rates are up. Cost per lead is down. ROAS is trending in the right direction. The CFO listens, looks at the last slide, and asks a single question.

"How much did we spend per decision this quarter?"

Nobody in the room can answer. Not the agency. Not the analytics lead. Not the platforms. Not because the question is unfair, but because the entire measurement architecture of modern marketing was built to answer a different question — and the answer to the one that actually matters has been technically out of reach for as long as anyone in the room has been doing this job.

That gap is not a metrics problem. It is a P&L problem. And it is the largest source of unrecognized waste in the modern enterprise marketing function. The argument that follows names the flaw in industry-agnostic terms, explains why it has been unfixable until now, and shows what changes when agentic AI makes Cost Per Decision an operable metric instead of an academic one.

The structural flaw nobody names

Every dominant marketing metric in use today measures an input or an intermediary action, not the decision that drives the business. CPC measures the cost of an event the audience may or may not remember the next morning. CPM measures the cost of a delivered impression that may or may not have been seen. CPL measures the cost of a form fill from a person who may or may not be a real prospect. CPA measures the cost of an action that may or may not correlate with revenue. ROAS measures the ratio of two numbers, one of which is unreliable.

Each of these metrics measures a cost. None of them measures the cost of the choice the customer actually made.

Businesses do not run on clicks. They do not run on leads. They do not run on impressions or sessions or engagement rates. Businesses run on a small number of high-consequence customer decisions — to enroll, to fund, to buy, to renew, to refer, to expand. Everything that happens before one of those decisions is a leading indicator at best, and a vanity number at worst. The marketing organization that confuses the two is — quietly, and at scale — paying for activity instead of paying for outcomes.

The deeper flaw is that traditional KPIs force optimization at the layer that is cheapest to measure, not the layer that is most valuable to improve. That is Goodhart's Law dressed in a media plan. A metric that becomes a target stops being a measure of the thing you actually care about, and turns into a measure of how good your team has gotten at producing the metric.

And the cleanest tell that this is a structural problem rather than a vertical one is that it shows up identically in every category. Health insurance has it: cost per lead is wildly disconnected from cost per member-year of revenue, and entire books of business are being underwritten on the wrong number. Financial services has it: cost per application is not cost per funded account, and the difference between the two is where every fintech P&L gets quietly destroyed. Retail has it: cost per session and cost per cart describe different universes, and only one of them shows up on the income statement. The names change. The structure does not.

Why it could not be fixed before

Here is the question that will be in the back of every reader's mind by now: if this has been obviously true for decades, why didn't anyone fix it?

The honest answer is that Cost Per Decision has always been the theoretically correct metric — and three operational realities made it practically impossible to compute. First, attribution lag: the decision that actually matters to the business often happens weeks or months after the touchpoint that influenced it. A Medicare Advantage shopper who first sees an ad in October may not finalize a plan choice until January, by which point the campaign that originated the interest is closed, paused, or already optimized against the wrong signal.

Second, data fragmentation: the signals required to connect a touch to a decision lived in different systems, with different identifiers, governed by different teams, refreshed on different schedules. Media data in one platform. CRM data in another. Eligibility, contract, or financial outcome data in a third. Stitching the lineage from "first impression" to "final decision" was a quarterly project even in well-instrumented organizations, and a non-starter in everyone else.

Third, human bandwidth: connecting the dots at the velocity the business operates was an analyst problem, not a dashboard problem. Even when the data existed and the identity could be resolved, the cost in human hours to maintain the connections — across campaign updates, platform changes, product launches, and audience definitions — exceeded what any marketing organization could justify funding at the scale required.

So the industry did the rational thing. It settled for proxy metrics. Not because the proxies were better, but because the proxies were what the toolchain could compute in real time. Strategy has always known what to measure. Tools have always dictated what got measured. That is the entire story of how the discipline arrived at its current measurement architecture, and it is the entire reason that architecture is about to be replaced.

"Marketing KPIs and business KPIs have been structurally misaligned since the discipline was instrumented — and AI is the first technology that makes the correction operable, not aspirational."

What changes with agentic AI

Most of what gets discussed under the heading of "AI in marketing" today is generative — you prompt, it writes. That is not the technology that fixes the measurement problem. Agentic AI is. The distinction matters because the Cost Per Decision shift requires something that plans, executes, and operates across systems on its own, not something that drafts copy when asked.

Three specific capabilities of agentic systems collapse the historical barriers, each one mapped to one of the failure modes above.

The first is cross-system signal stitching at operational scale. An agent can hold identity, intent, and outcome in working memory across a media stack, a CRM, an eligibility system, and a finance ledger — and can do it continuously rather than as a quarterly reconciliation. The data fragmentation problem does not disappear, but for the first time the integration work is being done by a system that does not get bored, does not miss a quarter, and does not require a re-platforming project every time a new tool enters the stack.
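To make the stitching idea concrete, here is a minimal sketch, assuming each system can expose records keyed by a resolvable identity. Every record, field name, and value below is illustrative; a real stack would need genuine identity resolution before anything like this is trustworthy.

```python
# Illustrative sketch: stitch media, CRM, and finance-ledger records into one
# day-ordered timeline per resolved identity. All data here is hypothetical.
from collections import defaultdict

media  = [{"id": "a1", "event": "impression", "cost": 0.02, "day": 1},
          {"id": "a1", "event": "click",      "cost": 1.50, "day": 3}]
crm    = [{"id": "a1", "event": "lead",       "cost": 0.00, "day": 5}]
ledger = [{"id": "a1", "event": "funded_account", "revenue": 900.0, "day": 40}]

def stitch(*systems):
    """Merge records from every system into one day-ordered timeline per identity."""
    timelines = defaultdict(list)
    for system in systems:
        for record in system:
            timelines[record["id"]].append(record)
    for events in timelines.values():
        events.sort(key=lambda r: r["day"])  # order each person's history by day
    return dict(timelines)

timeline = stitch(media, crm, ledger)["a1"]
print([e["event"] for e in timeline])
# → ['impression', 'click', 'lead', 'funded_account']
```

The point of the sketch is the continuity: once every system's records land in one timeline, "first impression to final decision" stops being a quarterly reconciliation and becomes a lookup.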

The second is decision-window attribution. An agent can wait for the actual decision event before allocating cost — and then back-allocate that cost across the touches that influenced it, on the timeline the decision actually unfolded on, not on a pre-defined 30-day attribution window that was chosen by a platform vendor in 2014. The attribution lag problem becomes a feature, not a constraint, because the system is patient in a way humans have not been able to afford to be.
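The back-allocation step can be sketched in a few lines. This uses uniform weighting purely for simplicity; a real system would weight touches by position, recency, or a fitted model, and the channel names and days below are invented for illustration.

```python
# Sketch of decision-window attribution: wait for the decision event, then
# back-allocate credit across the touches that preceded it. Uniform split
# shown for simplicity; all touch data is illustrative.

def back_allocate(touches, decision_day):
    """Split credit evenly across touches that occurred on or before the decision."""
    window = [t for t in touches if t["day"] <= decision_day]
    credit = {}
    share = 1.0 / len(window) if window else 0.0
    for t in window:
        credit[t["channel"]] = credit.get(t["channel"], 0.0) + share
    return credit

touches = [{"channel": "search", "day": 2},
           {"channel": "social", "day": 10},
           {"channel": "search", "day": 30}]

# The decision lands on day 90, well outside any fixed 30-day window:
# search touched twice, so it earns two thirds of the credit, social one third.
print(back_allocate(touches, decision_day=90))
```

Note what the patience buys: the day-30 search touch, which a fixed 30-day window anchored to the first impression would have discarded, is credited on the timeline the decision actually unfolded on.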

The third is continuous re-allocation. Once Cost Per Decision is computable on something closer to a real-time cadence, an agent can move spend toward higher-decision-yield channels in hours, not quarters. The compounding effect of this is hard to overstate. An organization that re-allocates against the right metric ten times in the time a competitor re-allocates against the wrong one twice will out-pace that competitor on every horizon that follows.
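A deliberately naive version of that re-allocation rule looks like this: next period's budget split follows each channel's decisions per dollar. This is a sketch under stated assumptions, not a production allocator; a real one would smooth across periods, cap the size of any single shift, and respect minimum spend floors. The channels and figures are invented.

```python
# Sketch of spend re-allocation toward higher decision-yield channels.
# Proportional rule only; illustrative channels and numbers.

def reallocate(budget, channels):
    """channels: {name: (spend, decisions)} -> {name: next_period_budget}."""
    yields = {name: decisions / spend for name, (spend, decisions) in channels.items()}
    total = sum(yields.values())
    return {name: budget * y / total for name, y in yields.items()}

current = {"search": (10_000, 50), "display": (10_000, 10)}

# search yields 0.005 decisions per dollar, display 0.001,
# so search earns five sixths of the next period's budget.
print(reallocate(20_000, current))
```

The rule itself is trivial. What was never trivial was computing the decisions-per-dollar inputs fast enough for a rule like this to run more than once a quarter.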

This is what the Performance Max signal loop work has been pointing at all along, and what Kyle's Daily Morning Drive is being built to demonstrate end-to-end: a working architecture where the agent does the part of the job that humans were never structurally able to do at the velocity the business required.

One important caveat, because the bullishness above invites it. None of this works on top of bad data. An agent operating against a polluted signal environment will optimize confidently in the wrong direction, faster than a human could have done the same thing. That is the entire reason the 2026 Signal Architecture Framework exists: Cost Per Decision is downstream of signal quality, and signal quality is the prerequisite no one wants to slow down to fix.

The thesis already holds, vertical by vertical

The clean way to test whether an argument is structural is to check whether it generalizes. The Counter-Intuitive KPI series has been doing exactly this, one vertical at a time, and the pattern is now unmistakable.

In health insurance, Cost Per Member-Year exposes the disconnect between the cost of producing a lead and the cost of producing a member who actually generates a year of revenue. Plans that look cheap to acquire on a CPL basis routinely turn out to be the most expensive cohort on the books once retention, eligibility, and downstream cost of care are factored in. Same product, different KPI — different business.

In financial services, Cost Per Funded Account separates curious applicants from real revenue events. The application-stage funnel rewards volume; the funded-account-stage funnel rewards quality. Optimizing against the first one is how fintechs end up spending five-figure CACs to acquire deposit balances that will never recoup the spend. Optimizing against the second one is how the smart operators have started running circles around the rest of the category.

In retail, Cost Per Cart shows what happens when intent-to-buy becomes the unit instead of the session. Sessions are infinite and cheap. Carts are scarce and revealing. A cart is a decision; a session is a hesitation. The retailers that figured this out reorganized their reporting around it within a quarter and saw their media efficiency curve bend almost immediately.

Three categories. One argument. Same structural flaw, same correction, same shape of result. And the verticals not yet written — higher education (cost per enrolled student versus cost per graduate), B2B SaaS (cost per opportunity versus cost per logo), automotive (cost per test drive versus cost per financed vehicle), property and casualty insurance (cost per quote versus cost per bound policy) — will each produce a piece that says the same thing with the noun changed. That is the point. Cost Per Decision is the noun-agnostic version of an argument the verticals have been making one at a time.

What this looks like on Monday morning

A thesis that cannot be acted on is a column, not a framework. So here is the practitioner version of the argument, intended to be runnable by a CMO with their team in one week.

Start with three diagnostic questions. One: what is the single decision your business actually runs on? Not the lead, not the opportunity, not the click — the decision that, when it happens, produces revenue. Two: what does it cost you, fully loaded, to produce one of those decisions today? Include the media, the production, the platform fees, the agency, the analytics, the operations time. Three: what signals would you need — and which of them are currently missing, broken, or stuck in a system you cannot reach — to compute that number on something faster than a quarterly cadence? The answers to those three questions, written down honestly on a single page, are the start of a Cost Per Decision operating model.
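The second diagnostic question is arithmetic, and it is worth seeing how small the arithmetic actually is once the inputs exist. The cost categories below mirror the list in the question; every figure is invented for illustration.

```python
# Sketch of the "fully loaded" Cost Per Decision arithmetic.
# Cost categories follow the diagnostic question; all numbers are illustrative.

def cost_per_decision(costs, decisions):
    """Sum every cost category and divide by the count of true decisions."""
    return sum(costs.values()) / decisions

quarter_costs = {
    "media":           2_400_000,
    "production":        300_000,
    "platform_fees":     150_000,
    "agency":            400_000,
    "analytics":         100_000,
    "operations_time":   250_000,
}

# $3.6M fully loaded, 1,200 real decisions produced in the quarter.
print(cost_per_decision(quarter_costs, decisions=1_200))
# → 3000.0
```

The formula is one division. The entire essay is about why the denominator, an honest count of real decisions attributable to the spend, has been the hard part.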

The organizational implication is real, and it is worth naming briefly because it is too large to fit in this piece. A Cost Per Decision shift is not a dashboard change. It is a re-aim of the measurement layer, the planning cadence, the optimization workflow, and — eventually — the org chart that wraps around all of it. The marketing organization that produces decisions at the speed and cost agentic systems make possible is not the marketing organization most companies have today. That is the next essay.

The competitive implication is simpler. Organizations that can compute Cost Per Decision faster than their competitors can compute Cost Per Lead will out-allocate them in every quarter that follows. The advantage is not a single-quarter efficiency gain. It is a compounding rate-of-learning gap, and rate-of-learning gaps in this discipline have historically been the gaps that do not close.

The unifying claim

Strip everything above to one sentence and it reads like this. Marketing KPIs and business KPIs have been structurally misaligned across every industry since the discipline was instrumented — and agentic AI is the first technology that makes the correction operable rather than aspirational. Every vertical case that has been published on this site, and every vertical case still to come, is a proof point for that single argument.

Uncommon Move exists to publish the working architectures behind that correction — the 2026 Signal Architecture Framework, the Cost Per Decision Calculator, the Morning Drive prototype, and the vertical applications that show what the correction looks like in real categories with real numbers. Not theory. Not vendor talking points. The actual systems, in the actual condition they need to be in, to make the metric that has always been correct finally measurable.

The first marketing organization in every category to compute Cost Per Decision faithfully will not be the most innovative one. It will be the one whose CFO asked the question in the quarterly review and whose CMO decided, that afternoon, that there would be a real answer by the next one. The rest of the category will catch up the way categories always catch up: years late, after the rate-of-learning gap has already done its work.
