SKAdNetwork 4 Conversion Value Mapping for Subscription Apps
A working SKAN 4 schema for subscription apps: the math behind fine vs coarse values, the postback timing tradeoffs, and the bit layout that lets a growth team read trial-funnel signal from the first postback.
SKAdNetwork is the part of the iOS attribution stack most teams configure once, ship, and never look at again. That's a mistake — the conversion-value schema you ship at launch determines what your campaigns can optimise toward for the next year, and the difference between a thoughtful mapping and a default one is usually 15–30% of paid conversions hidden in the wrong bucket.
This post is the working SKAN 4 schema for a subscription app at $50k+ MRR. The bit layout, the postback timing, what to encode and what to drop, and how the MMP should expose it.
A 60-second SKAN 4 refresher
Three postback windows: 0–2 days, 3–7 days, 8–35 days. Each postback carries:
- A coarse conversion value: `low`, `medium`, or `high` (or `null` if Apple's privacy threshold isn't met).
- A fine conversion value: an integer 0–63 (6 bits), included only in the first postback and only when Apple's per-campaign install volume is high enough.
- A campaign ID (SKAN 4's hierarchical source identifier), the source app ID (if Apple's threshold is met), and a few other fields.
The fine value is what most teams under-use. 6 bits is enough to encode a meaningful state machine for a subscription app's first 48 hours. The coarse value is the fallback for low-volume campaigns where Apple will redact the fine value.
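As a concrete picture of what arrives on the server, here's a minimal parse of a SKAN 4 postback body. The field names follow Apple's postback JSON (`source-identifier`, `conversion-value`, `coarse-conversion-value`, `postback-sequence-index`, `source-app-id`); verify the exact set against your own postbacks before relying on it.

```python
import json

def parse_postback(body: str) -> dict:
    """Decode the SKAN 4 fields this post cares about. Redacted fields
    are simply absent from the payload, so they come back as None."""
    p = json.loads(body)
    return {
        "campaign": p.get("source-identifier"),      # hierarchical source ID
        "fine": p.get("conversion-value"),           # 0-63, None if redacted
        "coarse": p.get("coarse-conversion-value"),  # "low"/"medium"/"high"/None
        "window": p.get("postback-sequence-index"),  # 0, 1, 2 for the three postbacks
        "source_app": p.get("source-app-id"),        # None below Apple's threshold
    }

example = ('{"source-identifier": "4271", "conversion-value": 21, '
           '"coarse-conversion-value": "medium", "postback-sequence-index": 0}')
parse_postback(example)
# first postback: fine value 21, coarse "medium", source app redacted
```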
The bit layout that works
Here's the schema I'd ship for a subscription app with a 7-day trial:
| Bits | Field | Values |
|---|---|---|
| 0–1 | App-open count | 0, 1, 2, 3+ |
| 2 | Onboarding completed | 0 or 1 |
| 3 | Push permission granted | 0 or 1 |
| 4 | Trial started | 0 or 1 |
| 5 | Trial converted to paid | 0 or 1 |
That uses all 6 bits. 64 possible states. The two bits that earn their place are bit 4 (trial started) and bit 5 (trial converted) — together they tell you the full first-purchase funnel.
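The table above translates directly into a pair of helpers. This is a sketch of the bit layout, not any SDK API — the field names mirror the table, and the open count is clamped at 3 so it fits in 2 bits.

```python
def encode_cv(opens: int, onboarded: bool, push: bool,
              trial_started: bool, trial_converted: bool) -> int:
    cv = min(opens, 3)               # bits 0-1: app-open count, capped at 3+
    cv |= int(onboarded) << 2        # bit 2: onboarding completed
    cv |= int(push) << 3             # bit 3: push permission granted
    cv |= int(trial_started) << 4    # bit 4: trial started
    cv |= int(trial_converted) << 5  # bit 5: trial converted to paid
    return cv                        # always 0-63

def decode_cv(cv: int) -> dict:
    return {
        "opens": cv & 0b11,
        "onboarded": bool(cv & 0b100),
        "push": bool(cv & 0b1000),
        "trial_started": bool(cv & 0b10000),
        "trial_converted": bool(cv & 0b100000),
    }

encode_cv(opens=2, onboarded=True, push=False,
          trial_started=True, trial_converted=False)
# → 22 (0b010110)
```

Keeping the encoder and decoder in one reviewable module is what makes the "schema lives in code" approach work: the server-side decode can never drift from the SDK-side encode.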
The coarse value should mirror the same funnel for low-volume campaigns:
- `low` = installed but no meaningful action
- `medium` = trial started
- `high` = trial converted to paid
That way you have the same signal in both representations, and the MMP can fall back to coarse without losing the most important decision (paid vs not).
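Because the coarse mapping mirrors bits 4 and 5 of the fine value, it can be derived rather than maintained separately — a small sketch, assuming the bit layout from the table above:

```python
def coarse_from_cv(cv: int) -> str:
    """Derive the coarse value from the 6-bit fine value so the two
    representations can never disagree."""
    if cv & 0b100000:   # bit 5: trial converted to paid
        return "high"
    if cv & 0b10000:    # bit 4: trial started
        return "medium"
    return "low"

coarse_from_cv(22)  # trial started, not converted → "medium"
```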
Why the fine value matters more than people think
Most articles tell you to encode revenue tier in the fine value: $0–5 = 0, $5–15 = 1, and so on. For a subscription app, that's the wrong shape. By the time the first postback fires (0–2 days after install), almost no users have converted to paid revenue — they're in a trial. Encoding revenue at that point means the fine value collapses to 0 for 95% of installs.
The right shape for subscription apps is funnel state, not revenue tier. The fine value tells you "this install completed onboarding, granted push, and started a trial" — five bits' worth of leading signal that predicts whether the install will eventually convert. That's the signal your bid optimiser actually needs.
If you want revenue tier, encode it in the second and third postbacks — those don't have a fine value, but the coarse `low` / `medium` / `high` can map to refund-adjusted net revenue after the trial converts.
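A sketch of that later-postback mapping. The dollar boundaries here are illustrative placeholders — set them from your own plan prices and LTV distribution, not from this example:

```python
def coarse_from_net_revenue(net_usd: float) -> str:
    """Map refund-adjusted net revenue to the coarse value for the
    second and third postbacks (illustrative thresholds)."""
    if net_usd >= 30.0:   # e.g. annual plan or a retained monthly sub
        return "high"
    if net_usd > 0.0:     # converted, but low or partially refunded revenue
        return "medium"
    return "low"          # still trialling, churned, or fully refunded
```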
Postback timing and the lockWindow flag
SKAN 4 changed the timing model, and most write-ups (and MMP defaults) still describe the old one. In SKAN 3, every conversion-value update reset a rolling 24-hour timer, so you could stretch the first postback out for days by updating on every open. SKAN 4's windows are fixed: updates inside the first 0–2 day window change the value but don't extend it, the value locks when the window closes, and Apple sends the first postback after a further random delay of roughly 24–48 hours. The new lever is the `lockWindow` flag on the update call — pass `true` and the window closes immediately, trading completeness of signal for an earlier postback.
For a subscription app, the practical playbook is to update the conversion value at every funnel step inside the first two days and not lock early. The fine value Apple eventually sends then reflects the user's final state at the end of day 2 — opens, onboarding, push, trial start. A 7-day trial can't convert inside that window, which is exactly why the coarse mapping should mirror the funnel: trial-to-paid shows up as `high` in the second (3–7 day) and third (8–35 day) postbacks, while bit 5 catches the users who skip the trial and purchase outright.
A working update strategy for a 7-day trial:
- Set the CV at install (0 opens, no trial)
- Update the CV when onboarding completes
- Update the CV when the trial starts
- Update the CV on any in-window purchase (sets bit 5)
- Let the first window run its full 2 days — only pass `lockWindow` if you need the read sooner than day 3–4
That sequence gives the first postback a fine value encoding "installed, onboarded, trialled" within two days, and the trial-to-paid outcome arrives as coarse `high` in the later postbacks.
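The update sequence above, sketched as successive recomputations of the 6-bit value — the `cv` helper here is a compressed version of the schema encoder, and in the app each step would pass the new value to SKAdNetwork's update API:

```python
def cv(opens, onboarded, trial, paid, push=False):
    """Compressed re-statement of the 6-bit schema from the table."""
    return min(opens, 3) | onboarded << 2 | push << 3 | trial << 4 | paid << 5

sequence = [
    cv(1, 0, 0, 0),  # install + first open        -> 1
    cv(1, 1, 0, 0),  # onboarding completes        -> 5
    cv(2, 1, 1, 0),  # trial starts                -> 22
    cv(3, 1, 1, 1),  # a purchase sets bit 5       -> 55
]
```

Whatever the last update sets is what Apple reports, so there's no harm in updating eagerly at every step — the value only ever gets more informative.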
Privacy threshold and what to do about it
Apple redacts data when an install group is too small. The thresholds aren't public, but the rough shape:
- Source app ID is redacted when the source app produced fewer than ~25 installs from the campaign.
- Fine value is redacted when the campaign produced fewer than ~25 conversions of that value.
- Coarse value is rarely redacted but can be when the campaign is tiny.
What this means in practice: large campaigns get full attribution, small campaigns lose the fine value first and the source app second. Don't try to read keyword-level data from SKAN — it's too coarse. Use SKAN for campaign-level cross-source-app analysis (which apps are sending you traffic) and use AdServices / first-party attribution for the per-install detail.
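The reporting pipeline should degrade the same way Apple does: read the fine value when it survives the threshold, fall back to coarse when it doesn't, and count the bare install when everything is redacted. A minimal sketch, assuming the bit layout and coarse mapping above (`fine`/`coarse` are the decoded postback fields, `None` when redacted):

```python
def funnel_signal(fine, coarse):
    """Best-available funnel read from a possibly-redacted postback."""
    if fine is not None:                # full 6-bit state survived
        if fine & 0b100000:
            return "paid"               # bit 5: trial converted to paid
        if fine & 0b10000:
            return "trial"              # bit 4: trial started
        return "installed"
    if coarse is not None:              # fine redacted, coarse survived
        return {"low": "installed", "medium": "trial", "high": "paid"}[coarse]
    return "installed"                  # everything redacted: count the install
```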
When SKAN actually drives decisions
Three places SKAN earns its place in the stack:
Cross-network campaign comparison. When you're running Meta, TikTok, and AppLovin, SKAN is the only attribution layer that's normalised across all three. AdServices only covers Apple Search Ads. First-party events tell you what happened after the install but not which ad network produced it. SKAN's campaign-level postbacks are the only signal that lets you compare networks on the same axis.
Discovery for new campaigns. When you launch a new campaign with no historical data, SKAN gives you a read within three to four days on whether installs are converting — still faster than waiting for D7 first-party conversion data.
Validation that deterministic attribution isn't lying. If your MMP claims 80% deterministic match rate for a campaign and SKAN's coarse value says 40% of those installs are converting at "high", but your first-party events show only 15% conversion, something's wrong — either the deterministic match is misattributing or your event pipeline is dropping conversions.
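The cross-check in that example is just arithmetic, but it's worth automating. A toy version — the 10-point tolerance is an illustrative choice, and the rates plugged in below are the ones from the paragraph:

```python
def skan_vs_first_party(skan_high_rate: float,
                        first_party_rate: float,
                        tolerance: float = 0.10):
    """Flag when SKAN's implied conversion rate and first-party events
    disagree by more than `tolerance`. Returns (ok, gap)."""
    gap = abs(skan_high_rate - first_party_rate)
    return gap <= tolerance, gap

ok, gap = skan_vs_first_party(skan_high_rate=0.40, first_party_rate=0.15)
# ok is False: a 25-point gap means misattribution or a dropped-event pipeline
```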
Where AppSprint fits
AppSprint receives SKAN postbacks directly from Apple, decodes the conversion value against your configured schema, and exposes the first / second / third postback breakdown per campaign. The dashboard shows the schema you've configured side-by-side with the actual postback distribution, so when a campaign's fine-value distribution drifts (e.g., trial starts dropping) you see it without writing a SQL query.
The CV schema itself is configured in code, not in the dashboard. That's deliberate — the schema should live next to the SDK setup so it's reviewable in PRs, not in a dashboard form that can be changed without a code review.
If you're running paid acquisition on iOS and your SKAN setup is whatever defaults your MMP shipped with, ship a real schema this week. It's a half-day of work and it's the most leveraged half-day of attribution work most subscription apps will do this year.