Attribution strategy · 5 min read

iOS Attribution After ATT: What Subscription Apps Can Still Measure With Confidence

A working stack for iOS attribution post-ATT, with concrete tradeoffs between SKAdNetwork 4, deterministic device matching, and probabilistic fingerprinting, written for subscription-app growth teams running real ad spend.

ATT didn't break iOS attribution. It made every team admit how much of their pre-2021 attribution was effectively guessing on the back of an IDFA they thought they owned. The numbers got worse for a while; then everyone adapted; and most apps are still running the same dashboards they used in 2020, looking at attribution that appears to work without anyone on the growth team actually trusting it.

This post is the working stack. Three measurement layers, what each one is for, where they break, and how a subscription app at $50k+ MRR should think about combining them.

Layer 1: SKAdNetwork 4

SKAdNetwork is the only attribution signal Apple guarantees. It's also the most coarse, the most delayed, and the most misunderstood layer in the stack.

SKAN 4 gives you three postbacks per install (0–2 days, 3–7 days, 8–35 days). Each carries a coarse conversion value (low / medium / high); the first can also carry a fine value (0–63), but only when Apple's crowd-anonymity threshold is met; and every postback fires after a randomized delay. There's no user-level signal. Everything aggregates at the campaign level, sometimes coarsened further when that threshold isn't met.

What that means in practice:

  • You won't see SKAN attribution for new campaigns for at least 48 hours because of the postback timer. Plan creative-testing cadence around that.
  • The "high" conversion value bucket should mean trial-to-paid, not first-open. If you encode opens as "high", every install looks like a winner and SKAN becomes useless for ROAS decisions.
  • The fine value (0–63) is the most actionable part of the schema when you have enough volume — encode your revenue tiers into the six bits you have and let Apple aggregate.

A conversion-value mapping for a subscription app might look like this:

Bits     Encoded event                                 Why
0–2      App open count (0, 1, 2+)                     Cheap engagement signal
3        Onboarding completed                          Activation gate
4        Trial started                                 The first revenue-shaped event
5        Trial converted to paid                       The signal that actually predicts LTV
coarse   low / medium / high → free / trial / paid     Backup for low-volume campaigns

The trial-converted bit is the one that earns its place. Everything else is context.
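
Packed into code, the table is small. Here's a sketch in Swift against the SKAN 4 API (SKAdNetwork.updatePostbackConversionValue, iOS 16.1+); the ConversionValueEncoder type and its fields are illustrative names, not part of any SDK:

```swift
import StoreKit

// A sketch of the bit map above. ConversionValueEncoder is an illustrative
// name, not an SDK type.
@available(iOS 16.1, *)
struct ConversionValueEncoder {
    var openCount = 0               // bits 0–2: 0, 1, 2+ (capped at 2)
    var onboardingCompleted = false // bit 3
    var trialStarted = false        // bit 4
    var trialConverted = false      // bit 5: the one that predicts LTV

    // Pack the current state into a 0–63 fine value.
    var fineValue: Int {
        var value = min(openCount, 2)
        if onboardingCompleted { value |= 1 << 3 }
        if trialStarted        { value |= 1 << 4 }
        if trialConverted      { value |= 1 << 5 }
        return value
    }

    // Coarse fallback (free / trial / paid) for low-volume campaigns.
    var coarseValue: SKAdNetwork.CoarseConversionValue {
        if trialConverted { return .high }
        if trialStarted   { return .medium }
        return .low
    }

    // Report the state via the SKAN 4 API. Locking the window once the trial
    // converts ends the conversion window early rather than waiting it out.
    func report() {
        SKAdNetwork.updatePostbackConversionValue(
            fineValue,
            coarseValue: coarseValue,
            lockWindow: trialConverted
        ) { error in
            if let error { print("SKAN update failed: \(error)") }
        }
    }
}
```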

Layer 2: Deterministic device matching

ATT didn't remove the device's ability to identify itself; it removed the cross-app identifier shared across SDKs. A modern MMP can still match an install back to a campaign click using a combination of device fingerprint signals (locale, timezone, OS version, device model, screen size, font set, IP at install time) that aren't ATT-gated and that, in aggregate, are unique enough at the install instant to produce a confident match.

The match window matters. A 24-hour deterministic match against a click that happened in the last hour produces a high-confidence attribution. A 24-hour match against a click that happened 23.5 hours ago is much weaker — the same fingerprint may legitimately belong to a different user.

This is where MMPs make different bets. Some are aggressive on probabilistic fallback and quietly push attributions into the "organic" bucket when they're unsure; others are conservative and report uncertainty explicitly. AppSprint is closer to the second camp: every attributed install carries a match-method label (deterministic, probabilistic, skadnetwork, unattributed) and the dashboard exposes the breakdown by campaign so you can see when a campaign's attribution confidence drops.

The growth team's job isn't to chase the highest match rate. It's to know which subset of the data is high-confidence enough to drive a budget decision.
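
To make that concrete, here's an illustrative sketch of acting on those match-method labels. The Install rows and the idea of working from a per-install export are assumptions, not a specific AppSprint API:

```swift
// Illustrative only: the enum mirrors the match-method labels above; the
// Install rows and where they come from are assumptions.
enum MatchMethod: String {
    case deterministic, probabilistic, skadnetwork, unattributed
}

struct Install {
    let campaign: String
    let matchMethod: MatchMethod
    let trialConverted: Bool
}

// Trial-to-paid per campaign, computed only over the installs you'd actually
// let drive a budget decision.
func trustedTrialConversion(_ installs: [Install]) -> [String: Double] {
    let trusted = installs.filter { $0.matchMethod == .deterministic }
    return Dictionary(grouping: trusted, by: \.campaign).mapValues { rows in
        Double(rows.filter(\.trialConverted).count) / Double(rows.count)
    }
}
```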

Layer 3: First-party SDK events

This is the layer most teams underweight. ATT didn't touch in-app event tracking at all. Your SDK still sees:

  • App opens, sessions, screen views
  • Custom events with arbitrary payloads
  • Purchase events, with full revenue and currency
  • Trial starts, renewal events, cancellation events (via RevenueCat or Superwall)
  • User identity within your own backend

The combination of (a) a deterministic match for the install and (b) first-party events for everything that happens after gives you 80% of what pre-ATT attribution gave you, with the user's consent shape unchanged. The remaining 20% — the cross-app graph that lets you say "this user saw an Instagram ad two weeks ago and then converted today" — is what SKAN is for.
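
A minimal sketch of what one of those first-party events could look like on the wire; the endpoint, field names, and RevenueEvent type are placeholders rather than any real ingestion API:

```swift
import Foundation

// A sketch of a first-party revenue event; endpoint and fields are placeholders.
struct RevenueEvent: Codable {
    let userId: String      // identity from your own backend
    let name: String        // "trial_started", "trial_converted", "renewal", ...
    let revenue: Decimal?   // full revenue, straight from the purchase
    let currency: String?
    let occurredAt: Date
}

func send(_ event: RevenueEvent) async throws {
    var request = URLRequest(url: URL(string: "https://events.example.com/v1/track")!)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    let encoder = JSONEncoder()
    encoder.dateEncodingStrategy = .iso8601
    request.httpBody = try encoder.encode(event)
    _ = try await URLSession.shared.data(for: request)
}
```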

The mistake is treating these layers as competing. They're complementary. SKAN handles the cross-app graph at the campaign level. Deterministic matching handles the per-install attribution at the device level. First-party events handle everything that happens after the install with full fidelity.

The stack that actually works

For a subscription app at $50k+ MRR running paid acquisition on Apple Search Ads, Meta, TikTok, and Google:

1. Set up SKAN 4 conversion values properly. Spend a week designing the bit map before you go live. The trial-converted bit is non-negotiable. The fine-value schema should mirror your LTV cohorts.

2. Pick an MMP that exposes match-method per install. If you can't tell which installs are deterministically matched, you can't trust the campaign-level reports.

3. Send first-party events for every revenue-shaped action. Trial start, initial purchase, renewal, cancellation, refund, billing issue. Even when SKAN coverage is good, first-party events are what let you compute LTV-by-campaign without 35-day delays.

4. Connect RevenueCat or Superwall. Don't roll your own subscription event pipeline. The reconciliation work (refunds, billing issues, plan changes) is easily more than a quarter of an engineer's time you don't need to spend. A minimal wiring sketch follows this list.

5. Read the dashboard at three time horizons. Day-of for top-of-funnel signal (installs, trials). Day-7 for trial-to-paid conversion. Day-30+ for cohort revenue and retention. Don't make budget decisions on day-of revenue; subscription apps that do that over-rotate on cheap installs and under-spend on the campaigns that actually pay back.
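
For step 4, the app-side wiring is small when RevenueCat is the source of truth. A rough sketch using names from RevenueCat's Swift SDK (Purchases.configure, PurchasesDelegate; verify against the current docs), with a hypothetical "pro" entitlement:

```swift
import RevenueCat

// A rough wiring sketch; "pro" is a hypothetical entitlement identifier and
// forwarding is left to your own first-party event pipeline.
final class SubscriptionEventBridge: NSObject, PurchasesDelegate {
    func start() {
        Purchases.configure(withAPIKey: "<revenuecat_public_api_key>")
        Purchases.shared.delegate = self
    }

    // Called when RevenueCat refreshes the customer's info (purchases,
    // restores, entitlement changes). Forward the new state to the
    // first-party event layer instead of rebuilding it from receipts.
    func purchases(_ purchases: Purchases, receivedUpdated customerInfo: CustomerInfo) {
        let isPaid = customerInfo.entitlements["pro"]?.isActive == true
        print("pro entitlement active:", isPaid)
    }
}
```

Renewals, cancellations, and refunds are better taken from RevenueCat's server-side webhooks than from the device; the delegate above is just the in-app view of the same state.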

Where AppSprint fits

AppSprint is opinionated about this stack: deterministic matching as the primary attribution signal, SKAN as the cross-app graph layer for campaigns where deterministic match is weak, first-party event ingestion as the revenue truth layer, and RevenueCat / Superwall integrations so subscription events show up correctly without a custom pipeline.

The single most useful thing it does is expose match_method per install on every attribution report. When a campaign's deterministic match rate drops below 70%, you'll see it. That's usually a creative or landing-page issue (the user clicked, but installed minutes later from a different network condition) — and it's exactly the kind of thing a growth team needs to see before the campaign spends another week on bad attribution.
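
The 70% check is simple enough to run over an exported report. An illustrative sketch; the AttributedInstall shape is an assumption about what a per-install export contains:

```swift
// Illustrative: the per-campaign 70% check, run over per-install rows from an
// attribution export. The AttributedInstall shape is an assumption.
struct AttributedInstall {
    let campaign: String
    let matchMethod: String // "deterministic", "probabilistic", "skadnetwork", "unattributed"
}

func campaignsBelowMatchThreshold(_ installs: [AttributedInstall],
                                  threshold: Double = 0.70) -> [String] {
    Dictionary(grouping: installs, by: \.campaign).compactMap { (campaign, rows) -> String? in
        let deterministic = rows.filter { $0.matchMethod == "deterministic" }.count
        return Double(deterministic) / Double(rows.count) < threshold ? campaign : nil
    }
}
```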

iOS attribution after ATT is harder than it was in 2019, but it isn't too hard to produce the data your team needs to make budget decisions every week. The job is to choose the right three layers and stop treating any one of them as the whole answer.
