ANALYTICS

Cookieless attribution: what landed, what we killed.

Google publicly abandoned the third-party cookie deprecation timeline on 22 July 2024. Most of the cookieless tooling that had been shipping urgently in 2023 is still live. This is a snapshot of where it ended up across the teams we audit, with the working parts separated from the ones that quietly got retired.

Does anyone still care about third-party cookies?

Safari and Firefox killed them years ago. Any audience heavy on iOS or privacy-mode users was already largely cookieless. Google keeping them on Chrome slowed the urgency but did not change the underlying distribution. Working number across the DTC brands we work with: about 58% of US mobile sessions were already third-party-cookieless before July 2024, driven by Safari share. For EU traffic the number is higher because of consent-mode opt-outs. If the deprecation never happens, your attribution still has to work for the majority of sessions where the cookie is not available.

Is server-side tagging actually worth the migration?

Yes, but not for the reason people cite. The privacy and ITP story is secondary. The real payoff is delivery reliability for Meta's Conversions API (CAPI) and TikTok's Events API. Server-side tagging ships events that browser-side tagging was dropping at rates between 8% and 22%, mostly to ad-blockers, Intelligent Tracking Prevention on iOS, and connection failures mid-session.
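
For context, the shape of a server-side Conversions API call is roughly the following. This is a minimal sketch, not our production container: the pixel ID, access token, and Graph API version are placeholders, and field names should be checked against Meta's current documentation.

  import time
  import hashlib
  import requests

  # Placeholders: substitute your own pixel ID and CAPI access token.
  PIXEL_ID = "YOUR_PIXEL_ID"
  ACCESS_TOKEN = "YOUR_CAPI_ACCESS_TOKEN"

  def hash_email(email: str) -> str:
      # Meta expects user identifiers normalised and SHA-256 hashed.
      return hashlib.sha256(email.strip().lower().encode()).hexdigest()

  def send_purchase(email: str, value: float, currency: str, event_id: str) -> dict:
      # event_id should match the browser pixel's event so Meta can deduplicate
      # when both the browser and server streams deliver the same conversion.
      payload = {
          "data": [{
              "event_name": "Purchase",
              "event_time": int(time.time()),
              "event_id": event_id,
              "action_source": "website",
              "user_data": {"em": [hash_email(email)]},
              "custom_data": {"value": value, "currency": currency},
          }],
          "access_token": ACCESS_TOKEN,
      }
      resp = requests.post(
          f"https://graph.facebook.com/v18.0/{PIXEL_ID}/events",  # API version is an assumption
          json=payload,
          timeout=10,
      )
      resp.raise_for_status()
      return resp.json()

The event_id line is the part teams most often get wrong: without matching IDs on the browser and server events, the two streams double-count.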

In A/B tests we ran across six brands in 2024, switching the primary event stream to server-side recovered an average of 14% more reported conversions, correcting the under-optimisation of paid budget against events that were being dropped. The platforms saw more of the conversions that were actually happening, their bidding models got better signal, and CPAs came down without spend changing.

The cost: GTM server container on Google Cloud Run or App Engine runs about $60 to $400 per month depending on event volume. Setup takes a competent analytics engineer 2 to 4 weeks. If your monthly paid spend is under ~$20k, the payback math is marginal. Above that, it is usually the highest-ROI measurement project in the stack.
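
The threshold comes straight out of the arithmetic: recovered conversions scale with spend while the container and setup costs are fixed. A rough sketch, where every number is an illustrative assumption rather than a client's actuals:

  # Illustrative only: every number here is an assumption, not client data.
  def monthly_payback(spend: float, cpa: float = 40.0, recovered_rate: float = 0.14,
                      infra: float = 250.0, setup_amortised: float = 1_250.0):
      """Rough check of whether server-side tagging pays for itself at a given spend."""
      reported = spend / cpa                 # conversions the browser stream reports
      recovered = reported * recovered_rate  # extra conversions the server stream surfaces
      fixed_cost = infra + setup_amortised   # setup cost spread over ~12 months
      return recovered, fixed_cost / recovered

  for spend in (10_000, 20_000, 100_000):
      recovered, cost_per = monthly_payback(spend)
      print(f"${spend:>7,}/mo spend -> ~{recovered:.0f} recovered conversions, "
            f"${cost_per:.0f} of fixed cost per recovered conversion")

At $10k of monthly spend the fixed cost per recovered conversion sits around the CPA itself; at $100k it is a rounding error. That is the whole payback argument.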

What about identity graphs (LiveRamp, Neustar, Epsilon)?

Two clients piloted LiveRamp RampID in 2023 to 2024. Both killed the pilot inside nine months. The match rates were not wrong. The integration cost and the contractual surface outpaced the incremental signal. For brands without a meaningful logged-in audience the payoff is thin. For brands with one (subscription, DTC with account purchase, gaming), their own first-party identity is usually the better investment. The one exception is large brands running a heavy connected-TV programme where cross-device reach measurement is the actual job; there the identity graph earns its keep.

Can MMM really substitute for multi-touch attribution?

They answer different questions. MMM answers "what channels drove incremental revenue this quarter". Multi-touch tries to answer "which ad touched the user who converted" and increasingly cannot, because the touches are unobservable. Teams that have killed multi-touch as a decision tool and kept MMM plus geo-lift tests have made noticeably better budget decisions through 2024. Teams still running both to reconcile them end up in a worse place than either one alone, because the numbers never match and the debate about why consumes the cycles that should have been spent on the geo test.
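
The geo-lift piece does not need a vendor to get started; the core readout is a difference-in-differences over matched regions. A minimal sketch, assuming a daily revenue table with hypothetical column names (geo, date, revenue, group, period):

  import pandas as pd

  # Hypothetical schema: one row per geo per day, with group in {"test", "control"}
  # and period in {"pre", "post"} relative to the spend change being tested.
  def geo_lift(df: pd.DataFrame) -> float:
      """Difference-in-differences estimate of incremental revenue per geo-day."""
      means = df.groupby(["group", "period"])["revenue"].mean()
      test_delta = means.loc[("test", "post")] - means.loc[("test", "pre")]
      control_delta = means.loc[("control", "post")] - means.loc[("control", "pre")]
      return test_delta - control_delta

In practice you would add matched-market selection and a bootstrap or Bayesian interval around the estimate; the sketch is just the arithmetic the quarterly validation rests on.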

The open-source MMM options that actually work in production (a sketch of the shared adstock-and-saturation core follows the list):

  • Robyn (Meta). Best-documented, largest community, steepest learning curve. Ridge regression with automated hyperparameter search.

  • LightweightMMM (Google). Bayesian, built on NumPyro. Lighter-weight than Robyn, closer to a research tool than a managed workflow.

  • PyMC-Marketing. Bayesian, more flexible on priors, more manual to operate.
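
All three share the same core: adstock-transform the spend, push it through a saturation curve, regress against revenue. A conceptual sketch of that shared structure in plain NumPy and scikit-learn follows; this is not the API of any of the libraries above, and the decay and half-saturation parameters are illustrative assumptions the real tools fit rather than hard-code.

  import numpy as np
  from sklearn.linear_model import Ridge

  def adstock(spend: np.ndarray, decay: float) -> np.ndarray:
      """Geometric carryover: each week keeps a decaying share of past spend's effect."""
      out = np.zeros(len(spend))
      for t in range(len(spend)):
          out[t] = spend[t] + (decay * out[t - 1] if t > 0 else 0.0)
      return out

  def saturate(x: np.ndarray, half_sat: float, shape: float = 1.0) -> np.ndarray:
      """Hill-style diminishing returns: response flattens as spend grows."""
      return x**shape / (x**shape + half_sat**shape)

  def fit_mmm(weekly_spend: np.ndarray, revenue: np.ndarray,
              decays: list[float], half_sats: list[float]) -> Ridge:
      # weekly_spend: (n_weeks, n_channels); revenue: (n_weeks,).
      transformed = np.column_stack([
          saturate(adstock(weekly_spend[:, i], decays[i]), half_sats[i])
          for i in range(weekly_spend.shape[1])
      ])
      model = Ridge(alpha=1.0)
      model.fit(transformed, revenue)
      return model  # model.coef_ is each channel's response per unit of saturated adstock

The difference between the tools is how those transform parameters get chosen: Robyn searches them with Nevergrad's gradient-free optimisers, while the Bayesian tools put priors on them and sample.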

What we tried and killed

  • UTM parameter standardisation as a primary fix. Necessary hygiene. Not a strategy. If your team thinks "we will fix attribution by making the UTMs consistent", they are about to spend six months on something that was never going to answer the question.

  • Probabilistic device-matching vendors. The signal-to-cost ratio was indefensible for all three teams that piloted them. The vendors we evaluated (ID5, The Trade Desk's UID 2.0 for stitching) did not produce measurable budget-allocation improvements over a six-month window.

  • Custom ML attribution models trained on user journeys. Impressive demos, unreadable output, and not one team we know of made a budget decision from them six months in. The failure mode is always the same: the model outputs touch-level weights, the team cannot tell whether the weights are right, and nobody moves money based on them.

  • Hope that Chrome deprecation would force the issue. It did not. It will not. Plan for the world you have, not the one announced at a Google conference.

What is working

The stack that pays back across the teams we see: server-side tagging feeding clean CAPI/Events API streams, weekly MMM refresh, quarterly geo-lift validation, and one person whose actual job is to reconcile platform-reported spend to shipped revenue in the warehouse monthly. That last piece is not glamorous. It is also the difference between a team making good budget decisions and a team making confident bad ones.
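
The reconciliation itself is not sophisticated, which is part of why it gets skipped. A minimal sketch of the monthly join, with hypothetical table and column names standing in for whatever the warehouse actually calls them:

  import pandas as pd

  # Hypothetical schemas: platform has channel, month, platform_revenue (ad-platform
  # reported); warehouse has channel, month, shipped_revenue (orders actually shipped).
  def reconcile(platform: pd.DataFrame, warehouse: pd.DataFrame) -> pd.DataFrame:
      """Platform-reported revenue vs shipped revenue, by channel and month."""
      merged = platform.merge(warehouse, on=["channel", "month"], how="outer")
      merged["delta"] = merged["platform_revenue"] - merged["shipped_revenue"]
      merged["overstatement"] = merged["delta"] / merged["shipped_revenue"]
      return merged.sort_values("overstatement", ascending=False)

The channel that reports twice the revenue the warehouse ever shipped is the channel whose in-platform CPA stops being trusted first.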

Basis: six server-side tagging A/B tests in 2024; two identity-graph pilot post-mortems; observed practice across 32 attribution stacks audited in the first nine months of 2024.

© 2026 8LAB. All loops reserved.

EXPERIMENTS