The 65x pricing illusion: Why objective mixing fails

Quick Answer

Blended benchmark aggregators show up to a 65x inflation in cost-per-reaction because they mathematically mix expensive lead-generation ads with cheap engagement ads. Filtering metrics by campaign objective is mandatory to avoid comparing optimized surface actions against incidental byproducts.

The 60-second answer

Marketers routinely compare cost-per-engagement across ad platforms by pulling one blended number from a benchmark aggregator. That blended number is structurally wrong. It pools campaign objectives that pay wildly different prices for the exact same surface action.

In our pre-fix portfolio aggregator, Instagram stories cost €3.91 per reaction. After filtering to engagement-class campaign objectives only, Instagram stories cost €0.11 per reaction. That is a 36.9x swing on the same surface, in the same time window, targeting the same audience country. Compared against the public consensus benchmark of €0.06, the pre-fix average represents an astonishing 65x pricing illusion. The culprit is objective mixing. Lead-generation campaigns in the data bucket were paying €7.00 to €9.00 per incidental reaction, and they pulled the mathematical mean into an alternate reality. Every benchmark requires a campaign-objective filter; otherwise it is comparing apples to forklifts.

What the portfolio shows

To understand the severity of this metric distortion, we must examine the variance across formats. The table below illustrates the pricing illusion across three Meta surfaces before and after the objective filter was applied.

| Surface | Pre-fix portfolio (mixed-objective) | Post-Q1 portfolio (engagement-only) | Public consensus (IT) | Mixed vs engagement-only |
| --- | --- | --- | --- | --- |
| IG carousel | €0.796 | €0.150 | €0.10 | 5.3× |
| IG reels | €1.083 | €0.353 | €0.08 | 3.1× |
| IG stories | €3.912 | €0.106 | €0.06 | 36.9× |

Cost per reaction in EUR. "Pre-fix portfolio" = May 2026 Sentia portfolio aggregator before objective-filter fix. "Post-Q1 portfolio" = same source, engagement-class objectives only. "Public consensus IT" = average of Kolsquare, Ayzenberg, Buzzoole, IAB Italy benchmark reports for the same window.

Why stories takes the worst hit

The disparity is not uniformly distributed across Meta surfaces. Instagram stories suffered the most severe distortion in our dataset. To understand why, analysts must inspect the exact row count and spend distribution within that specific database bucket.

The Instagram stories bucket in our pre-fix aggregator contained 134 engagement-objective ads and 137 lead-generation ads. This equates to 271 total advertisements. The lead-generation contingent accounted for 50.6 percent of the row count. While they represented just over half of the volume, their outsized cost structure dominated the resulting average.

Lead-generation campaigns do not optimize for reactions. They optimize for form fills, contact acquisitions, and outbound link clicks. Consequently, users targeted by lead-generation campaigns rarely leave a reaction. When they do, it is an incidental byproduct. Because the total media spend for a lead-generation campaign is divided by a highly depressed volume of surface actions, the cost per incidental reaction scales massively. Our analysis shows these incidental reactions cost between €7.00 and €9.00 each.

When you pool 137 ads paying €8.00 per reaction with 134 ads paying €0.10 per reaction, the mathematical mean drags the final output to €3.912. The average represents neither group. It is an artifact of bad taxonomy. Filtering to the 134 engagement-only rows instantly corrects the average down to the market clearing price of €0.106. Reels and carousel formats were less affected purely because the historical objective mix was less skewed in those respective datasets.
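The arithmetic of that distortion is easy to reproduce. The sketch below uses the bucket's actual row counts with illustrative per-ad prices (a flat €8.00 for lead-gen and €0.10 for engagement, both assumptions within the ranges quoted above), so the blended figure lands near, not exactly at, the €3.912 anomaly:

```python
# Illustrative per-ad cost-per-reaction values; the real bucket contains a
# spread (EUR 7.00-9.00 for lead-gen), so the exact blended figure differs.
engagement_ads = [0.10] * 134   # engagement-objective ads
lead_gen_ads = [8.00] * 137     # lead-gen ads paying for incidental reactions

blended = engagement_ads + lead_gen_ads
blended_mean = sum(blended) / len(blended)            # represents neither group
filtered_mean = sum(engagement_ads) / len(engagement_ads)

print(f"blended mean:  EUR {blended_mean:.3f}")   # ~EUR 4.09, in the pre-fix regime
print(f"filtered mean: EUR {filtered_mean:.3f}")  # the engagement-only clearing price
```

The point of the sketch: no amount of extra data fixes the blended mean, because the two populations were bought under different auction contracts. Only the filter does.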

The algorithmic penalty for the wrong objective

To grasp why lead-generation campaigns pay such an exorbitant premium for surface actions, analysts must consider how the Meta ad auction assigns value to user behavior. The auction runs on estimated action rates. When a media buyer selects an engagement objective, the machine learning model filters the available inventory for users with a high historical propensity to click the reaction button. The system actively hunts for the cheapest available reaction in the specified audience pool. The algorithm explores pockets of users who routinely engage with content but rarely make purchases. These users cost very little to reach.
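A toy model makes the cost gap concrete. Meta has publicly described ad ranking as advertiser bid times estimated action rate plus a quality term; the helper and all numbers below are illustrative assumptions, not the production model:

```python
def effective_cost_per_action(cpm_eur, est_action_rate):
    """Cost to buy one predicted action at a given CPM and a given
    per-impression action probability (toy model, illustrative numbers)."""
    cost_per_impression = cpm_eur / 1000.0
    return cost_per_impression / est_action_rate

# Engagement objective: cheap inventory, high predicted reaction rate.
reaction_cost = effective_cost_per_action(cpm_eur=2.0, est_action_rate=0.02)
# Lead objective: contested high-intent users push CPMs up, and the
# optimized action (a form submit) is far rarer than a reaction.
lead_cost = effective_cost_per_action(cpm_eur=16.0, est_action_rate=0.002)

print(reaction_cost)  # EUR 0.10 per predicted reaction
print(lead_cost)      # EUR 8.00 per predicted lead
```

The two levers compound: lead-gen pays a higher CPM for a rarer action, which is exactly why its incidental reactions land in the €7.00 to €9.00 range.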

Conversely, when a buyer selects a lead-generation objective, the system deliberately ignores users who only leave reactions. The algorithm seeks users who exhibit high-intent behaviors, such as opening a native lead form, filling out their contact details, and submitting the payload. These users command premium CPMs because every advertiser in the B2B and high-ticket consumer space is bidding on them simultaneously. They are expensive to acquire.

If one of these premium users happens to leave a reaction on the advertisement before submitting the form, that action is recorded by the API. But the advertiser paid a premium CPM to acquire a lead, not a reaction. The reaction was merely a fortunate side effect of the high-intent user journey. Averaging the cost of an optimized reaction with the cost of an incidental reaction is mathematically illiterate. They are entirely different products acquired through entirely different auction mechanics.

We use the term reaction instead of the colloquial equivalent (like) throughout this analysis because Meta returns six distinct reaction types at the API level: Like, Love, Wow, Haha, Sad, and Angry. All six aggregate into a single counter at the API level. The functional unit is the reaction. When benchmarking platforms ignore the underlying campaign objective that drove that reaction, they create a severe pricing distortion.
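To make the unit concrete, here is how the six API-level reaction types collapse into the single counter that the cost metric divides by. The field names, counts, and spend are hypothetical, not the exact Graph API payload shape:

```python
# Hypothetical per-type breakdown for one ad; Meta exposes these six
# reaction types, but the dict shape here is an assumption for the sketch.
reactions = {"like": 412, "love": 61, "wow": 9, "haha": 17, "sad": 3, "angry": 2}
spend_eur = 53.46

total_reactions = sum(reactions.values())        # the functional unit: 504
cost_per_reaction = spend_eur / total_reactions  # spend / all reaction types

print(total_reactions, round(cost_per_reaction, 3))
```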

What public benchmarks do not disclose

The same hidden objective-mix problem exists in every major public benchmark report we audited. Media planners in the Italian market routinely rely on data from Kolsquare, Ayzenberg, Buzzoole, and IAB Italy. These organizations provide highly valuable macro-level insights for the industry. However, their cost-per-engagement reporting methodology requires rigorous modernization.

Every single one of these entities publishes a static cost-per-engagement or engagement-rate figure per platform and per format. None of them disclose the campaign objectives included in their underlying data pools. Furthermore, none of these reports disclose how lead-generation and conversion-objective advertisements are filtered or weighted. There is zero clarity on whether organic and paid engagements are blended into the final average.

The inevitable result of this opacity is benchmark inflation. Because lead-generation and conversion campaigns absorb the vast majority of performance-marketing budgets, their data naturally dominates any blended pool. The sheer volume of conversion spend overwhelms the smaller engagement budgets. As a result, the public consensus drifts inevitably toward the incidental reaction cost. It stops measuring the cost of an optimized reaction and starts measuring the statistical accident of a conversion campaign.

The commercial fallout of benchmark inflation

Benchmark inflation is not merely a theoretical data problem. It dictates how budgets are allocated, how performance is judged on a daily basis, and how agencies negotiate their retainers. When cost-per-reaction averages are artificially elevated by objective mixing, media mix models output systematically flawed recommendations.

Consider a brand allocating half a million euros across digital channels for a pure brand-awareness and engagement push. If the brand relies on a blended benchmark indicating that Instagram stories cost €3.91 per reaction, while Instagram reels cost €1.08, the media planner will systematically under-invest in stories. The planner believes stories are fundamentally inefficient. The algorithm driving their media mix model will shift funds toward reels or carousel formats, seeking the lowest marginal cost.

In reality, pure engagement campaigns on stories cost €0.106 per reaction. The planner abandons a highly efficient placement because a poorly constructed benchmark blinded them to the actual auction clearing price. Millions of euros in media spend are misallocated annually due to this specific data aggregation error.
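The misallocation is easy to quantify. Using the figures above, here is what a €500,000 engagement budget buys on stories under the blended price versus the filtered one:

```python
budget_eur = 500_000
blended_price = 3.91   # pre-fix blended benchmark figure
true_price = 0.106     # engagement-only clearing price

expected_reactions = budget_eur / blended_price  # what the planner's model predicts
actual_reactions = budget_eur / true_price       # what the auction actually delivers

print(f"{expected_reactions:,.0f} vs {actual_reactions:,.0f}")
```

The planner's model undercounts the available reaction volume by roughly 37x, so stories is deprioritized on fictional inefficiency.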

Moreover, performance reporting becomes compromised during client reviews. If a media buyer runs a lead-generation campaign and the client demands to see an engagement metric, the buyer might point to a blended benchmark to justify their elevated cost-per-reaction. The buyer is using a corrupted benchmark to hide the fact that they are reporting on a metric irrelevant to their chosen campaign objective. Analysts must enforce strict boundaries between objective classes when reporting on secondary metrics. Allowing blended benchmarks into a performance review effectively gives underperforming buyers a statistical shield.

How we fixed it on our side

Software platforms must take responsibility for the data they surface. At Sentia, we audited our Earned Media Value portfolio aggregator and identified this exact 65x pricing illusion. Our May 2026 snapshot revealed the €3.912 anomaly on Instagram stories. We immediately traced the mathematical variance to the objective mix.

To resolve the distortion, we shipped commit 6b93ae7c. This commit acts as a strict campaign-objective filter across our entire analytics suite. The system now enforces complete isolation between engagement-class objectives, conversion-class objectives, and lead-generation objectives.
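A minimal sketch of such a filter, assuming rows carry Meta's outcome-style objective names; the class mapping and row shape are our illustration, not the contents of the actual commit:

```python
# Objective names follow Meta's ODAX-style outcome objectives; grouping them
# into an engagement class this way is an assumption for the sketch.
ENGAGEMENT_CLASS = {"OUTCOME_ENGAGEMENT"}

def engagement_only(rows):
    """Drop lead-gen and conversion rows before computing cost-per-reaction."""
    return [r for r in rows if r["objective"] in ENGAGEMENT_CLASS]

rows = [
    {"objective": "OUTCOME_ENGAGEMENT", "spend_eur": 10.6, "reactions": 100},
    {"objective": "OUTCOME_LEADS",      "spend_eur": 80.0, "reactions": 10},
    {"objective": "OUTCOME_SALES",      "spend_eur": 45.0, "reactions": 5},
]
clean = engagement_only(rows)
cpr = sum(r["spend_eur"] for r in clean) / sum(r["reactions"] for r in clean)
print(round(cpr, 3))  # only engagement-class spend and reactions survive
```

Applying the filter before aggregation, rather than after, is the design point: once lead-gen spend has been averaged in, no downstream correction can recover the clearing price.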

The post-Q1 portfolio numbers visible in the data table reflect the live values you see in the platform today. By isolating the 134 engagement-only ads, the true market clearing price of €0.106 is clearly visible. The 36.9x swing from the pre-fix figure proves that objective filtering is not a minor feature. It is a mathematical requirement for accurate reporting.

Limits of this snapshot

Every dataset operates within specific boundaries. Analysts should note the constraints of the data presented in this snapshot.

The Instagram stories engagement-only bucket contains a precise sample size of 134 advertisements. The pre-fix mixed bucket contains 271 total advertisements. This is a sufficient volume for directional accuracy but represents a specific slice of the market. The data covers a trailing 120-day window ending in May 2026. Auction dynamics fluctuate seasonally, and these specific clearing prices will shift over time.

The geographic scope is strictly limited to the Italian market. Costs in other European territories or the North American market will behave differently based on local auction density.

Finally, the public consensus figure of €0.06 is an average derived from four distinct sources: Kolsquare, Ayzenberg, Buzzoole, and IAB Italy. It is not a single panel but rather a synthesis of third-party reporting. Treat the consensus as a directional band rather than a definitive point.

What to do with this

The era of accepting blended, black-box benchmarks is over. Media buyers and data analysts must adjust their workflows immediately to prevent further budget misallocation.

First, demand the campaign-objective filter from any vendor or platform quoting a cost-per-engagement metric. If the vendor cannot confirm that their data pool strictly excludes conversion and lead-generation objectives, discard the benchmark entirely. It is mathematically compromised and will poison your media planning models.

Second, never compare a lead-generation cost-per-reaction to an engagement-only cost-per-reaction. Treat incidental reactions on conversion campaigns as zero-value metrics. They do not indicate creative resonance. They indicate a byproduct of a completely different auction strategy. Do not optimize conversion creative based on surface-level vanity metrics.

Third, treat any cost-per-like metric that lacks a disclosed objective mix as a statistical illusion. Realign your internal reporting to separate primary campaign objectives from secondary surface actions. If your primary objective is engagement, hold your media buyers accountable to the €0.106 baseline, not the €3.91 fiction.

Related: /methodology/emv · /glossary/cpm-cpc-cpe · /findings/italian-smb-lead-gen-2026-q1

Data Solidity and Citations

Every numeric claim in this finding is directly grounded in our raw ingestion pipeline. Here is the exact mapping of generated claims to their underlying dataset fields.

- "In our pre-fix portfolio aggregator, Instagram stories cost €3.91 per reaction." → Cpe eur · Ig stories pre fix = 3.91
- "After filtering to engagement-class campaign objectives only, Instagram stories cost €0.11 per reaction." → Cpe eur · Ig stories post fix = 0.11
- "The public consensus benchmark sits at €0.06." → Cpe eur · Ig stories consensus = 0.06
- "The Instagram stories bucket contained 134 engagement-objective ads." → Ads count · Ig stories engagement = 134
- "The bucket also contained 137 lead-generation ads." → Ads count · Ig stories lead gen = 137
- "Lead-generation ads pay between €7.00 and €9.00 per incidental reaction." → Cpe eur · Incidental reaction lead gen high = 9
