ANALYTICS

The attribution stack we actually see at 40+ growth teams.

This is a working snapshot, not a recommendation. We've been running attribution audits as part of engagement discovery since 2023. The stack below describes what is in active use at the mid-market and Series B to D companies we audited through 2024. Sample is weighted toward DTC, B2B SaaS, and marketplaces. Enterprise stacks look different.

Layer 1. Event collection

  • GA4 at ~94% of teams. Universal Analytics stopped processing data on 1 July 2023. The holdouts we still meet are on Matomo, Heap, or a warehouse-first setup with no product analytics tool on top.

  • Server-side tagging through GTM server container at ~48%. Up from ~20% at the start of 2024. The driver is CAPI reliability, not cookieless per se.

  • Meta CAPI at ~81%. TikTok Events API at ~43%. Close to universal in DTC, uneven in B2B. Google Ads Enhanced Conversions at ~68% but frequently misconfigured: we find duplicate-event issues on more than half of audits. The dedup pattern is sketched after this list.

  • Product analytics tools. Amplitude at ~34%, Mixpanel at ~22%, PostHog at ~11% and growing fastest. These sit alongside GA4, not in place of it.
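
The duplicate-event problem above is almost always a broken dedup handshake between the browser pixel and the server sender. A minimal sketch of the Meta CAPI side in Python; pixel ID, token, and event values are placeholders, and the browser pixel must fire the same event_name with the same event_id for Meta to drop the duplicate.

    # Server-side Meta CAPI send, deduplicated against the browser pixel.
    # Placeholders throughout; user_data fields are SHA-256 hashed per
    # Meta's spec (trimmed, lowercased, then hashed).
    import hashlib
    import time

    import requests

    PIXEL_ID = "YOUR_PIXEL_ID"        # placeholder
    ACCESS_TOKEN = "YOUR_CAPI_TOKEN"  # placeholder

    def sha256(value: str) -> str:
        return hashlib.sha256(value.strip().lower().encode()).hexdigest()

    payload = {
        "data": [{
            "event_name": "Purchase",
            "event_time": int(time.time()),
            "event_id": "order-10472",  # must match the browser pixel's eventID
            "action_source": "website",
            "user_data": {"em": [sha256("jane@example.com")]},
            "custom_data": {"currency": "USD", "value": 129.00},
        }]
    }

    resp = requests.post(
        f"https://graph.facebook.com/v19.0/{PIXEL_ID}/events",
        params={"access_token": ACCESS_TOKEN},
        json=payload,
        timeout=10,
    )
    resp.raise_for_status()

When audits find duplicates, the event_id is usually missing on one side or generated independently on both.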

Layer 2. Warehouse and identity

  • BigQuery at ~62%, Snowflake at ~22%, the rest split between Redshift and Databricks. GA4's free BigQuery export is the single biggest driver of warehouse adoption we see among companies that did not have a warehouse before 2023.

  • Identity resolution is where the stack fractures. Segment or RudderStack at ~55% of teams, doing CDP-level stitching. mParticle at the enterprise end. Amplitude's identity tables at the product-led end. The remaining ~30% rolled their own in-warehouse, usually with dbt models joining auth events to device events on email or hashed-email keys; a sketch of that join follows this list.

  • Reverse ETL (Hightouch, Census) at ~40%, almost always for pushing warehouse-resolved audiences back to ad platforms. This is more strategically important than it sounds: teams that own their audience definitions in the warehouse consistently make better targeting decisions than teams that own them inside ad platforms.
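
For the roll-your-own ~30%, the stitching core is a single join. A toy version in pandas for brevity; the teams in our sample do this in dbt over the warehouse, and every table and column name here is hypothetical.

    # In-warehouse identity stitching, reduced to its core join.
    # Hypothetical schema: auth events carry an email, anonymous
    # device events carry only a device_id.
    import hashlib

    import pandas as pd

    def hash_email(email: str) -> str:
        # Both sides of any cross-system join must normalize identically.
        return hashlib.sha256(email.strip().lower().encode()).hexdigest()

    auth = pd.DataFrame({
        "device_id": ["d1", "d2"],
        "email": ["jane@example.com", "sam@example.com"],
    })
    auth["email_sha256"] = auth["email"].map(hash_email)

    # One device maps to one user key; last-seen auth event wins.
    id_map = auth.drop_duplicates("device_id", keep="last")[
        ["device_id", "email_sha256"]
    ]

    # Anonymous device events inherit the user key through the map.
    events = pd.DataFrame({
        "device_id": ["d1", "d1", "d3"],
        "event_name": ["page_view", "purchase", "page_view"],
    })
    stitched = events.merge(id_map, on="device_id", how="left")
    print(stitched)  # d3 stays anonymous: nothing to stitch through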

Layer 3. Attribution model

  • Last-click inside GA4 as the default that nobody fully trusts but nobody removes. Still the decision layer for ~40% of teams. It overweights bottom-funnel and branded paid search, underweights almost everything else, and the teams that know this adjust for it mentally rather than formally.

  • Multi-touch attribution (data-driven in GA4, or bought through Rockerbox, Dreamdata, HockeyStack) at ~35%. Credibility is mixed. Most teams we audit have stopped using the output for budget decisions and kept it for directional trend. The underlying problem is that touch-level credit is fundamentally unobservable post-ATT (Apple's App Tracking Transparency): the model assigns weights that nothing in the data can validate.

  • Marketing mix modelling at ~28% and growing. Tool split: Meta's open-source Robyn leads at the self-serve end. Recast and Prescient are common at the managed-service end. Google's LightweightMMM shows up at the more technical teams. Refresh cadence is typically weekly. MMM without geo-lift validation has a credibility problem we keep seeing. A toy version of the transforms these tools fit appears after this list.

  • Geo-based incrementality tests as a validation layer on top of MMM at ~22%. Meta Marketing Science's GeoLift is the default; a few teams run custom DMA holdouts with NCSolver or their own matched-market design.

  • Incrementality tests on individual campaigns (lift tests in Meta/Google, custom holdouts) at ~52%. More prevalent than MMM because the setup cost is lower and the budget-decision payoff is often faster; the read-out arithmetic is sketched below.
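
For readers who have not been inside one of these tools: the core of every MMM named above is two transforms, carryover (adstock) and diminishing returns (saturation), fitted against revenue. A toy illustration with synthetic data; this is the idea, not Robyn's or LightweightMMM's implementation.

    # Toy MMM core: geometric adstock + hill saturation, then OLS.
    # All data below is synthetic; a real model also fits decay and
    # half_max (grid search or Bayesian priors) plus seasonality.
    import numpy as np

    def adstock(spend: np.ndarray, decay: float) -> np.ndarray:
        # Carryover: each week inherits a decayed tail of prior spend.
        out = np.zeros(len(spend))
        for t, x in enumerate(spend):
            out[t] = x + (decay * out[t - 1] if t > 0 else 0.0)
        return out

    def saturate(x: np.ndarray, half_max: float) -> np.ndarray:
        # Hill curve: diminishing returns as effective spend grows.
        return x / (x + half_max)

    rng = np.random.default_rng(0)
    weeks = 104
    spend = rng.gamma(2.0, 500.0, size=weeks)  # synthetic weekly spend
    media = saturate(adstock(spend, decay=0.6), half_max=1500.0)
    revenue = 20_000 + 45_000 * media + rng.normal(0, 2_000, weeks)

    # OLS on [intercept, transformed media] recovers the media effect.
    X = np.column_stack([np.ones(weeks), media])
    coef, *_ = np.linalg.lstsq(X, revenue, rcond=None)
    print(f"intercept={coef[0]:,.0f}  media_effect={coef[1]:,.0f}")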

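The holdout read-out itself is short arithmetic, which is part of why adoption outpaces MMM. Numbers below are illustrative, not audit data; a real read-out adds a confidence interval (e.g. a two-proportion test) before anyone moves budget.

    # Lift-test read-out for a simple user-level holdout.
    treat_users, treat_conv = 500_000, 6_100  # exposed group
    ctrl_users, ctrl_conv = 100_000, 1_000    # holdout group

    treat_rate = treat_conv / treat_users     # 1.22%
    ctrl_rate = ctrl_conv / ctrl_users        # 1.00%

    # Conversions the campaign caused, vs whatever last-click claims.
    incremental = treat_conv - ctrl_rate * treat_users  # 1,100
    lift = treat_rate / ctrl_rate - 1                   # +22%

    spend, aov = 50_000.0, 90.0                # illustrative
    iroas = incremental * aov / spend          # incremental ROAS ~ 1.98
    print(f"lift={lift:.1%}  incremental={incremental:,.0f}  iROAS={iroas:.2f}")
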
Layer 4. Decision surface

  • Looker Studio at ~50%. Cheap, GA4-native, ugly. Serves as the default CMO dashboard even at teams that have better tooling further up the stack.

  • Triple Whale or Northbeam at ~45% of DTC specifically. Both lean on modelled attribution. The industry is split on whether that is an improvement or a different kind of guess. Triple Whale's inclusion of a first-party post-purchase survey ("How did you hear about us?") as a signal is the most differentiated piece.

  • Hex, Mode, Sigma, or in-house notebooks at the analytics-heavy teams (~20%). These are the teams that treat attribution as an ongoing research problem rather than a configured product.

  • Tableau or Power BI at enterprise. Different world, different problem set.

Common failure modes across all four layers

  1. Measurement debt. Teams keep adding layers without removing what no longer pays back. We audit stacks where three different MTA products are producing three different conversion counts and nobody has decided which one to deprecate.

  2. Unreconciled platform numbers. Meta Ads Manager reports X conversions, the warehouse reports Y, and nobody on the team knows why. The gap is almost always a mix of CAPI dedup issues, attribution window differences, and post-view vs post-click defaults. Usually solvable in a week; usually outstanding for a year. A minimal version of the reconciliation follows this list.

  3. Over-segmented dashboards. Fifty dashboards, none of them the source of truth. The best teams have three or four, and the hierarchy between them is documented.

  4. Budget decisions made on attribution tools that the team does not trust. This is the expensive failure mode. The analytics team flags the numbers as unreliable, marketing uses them anyway because it is what they have, the CFO questions the ROAS math, and the debate eats the quarter.
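
The reconciliation in failure mode 2 is mechanically simple once both sides are aggregated to the same day grain and the same attribution window, which is exactly the step that gets skipped. A minimal sketch; the column names and the 10% tolerance are hypothetical.

    # Platform-vs-warehouse reconciliation, reduced to its core.
    import pandas as pd

    platform = pd.DataFrame({
        "date": ["2024-11-01", "2024-11-02"],
        "platform_conversions": [130, 118],  # as reported by Ads Manager
    })
    warehouse = pd.DataFrame({
        "date": ["2024-11-01", "2024-11-02"],
        "orders": [102, 95],                 # shipped orders, same window
    })

    recon = platform.merge(warehouse, on="date")
    recon["gap"] = recon["platform_conversions"] - recon["orders"]
    recon["gap_pct"] = recon["gap"] / recon["orders"]

    # Flag days outside tolerance; the usual suspects are CAPI dedup,
    # window mismatch (7d-click vs 1d-click), and post-view credit.
    print(recon[recon["gap_pct"].abs() > 0.10])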

What is actually working

The teams making the best budget decisions have two things in common, and it is not the tool list. They have one person whose full-time job is reconciling platform-reported spend to shipped revenue in the warehouse, monthly. And they have at least one geo or holdout experiment running against the MMM output at all times. Everything else is instrumentation.

Basis: 41 attribution stacks audited between January 2024 and November 2024. Mid-market and Series B to D weighted. Not representative of enterprise, small business, or agency-managed stacks.

