Apple Search Ads Revenue Attribution: From Keyword to Subscriber
Apple Search Ads campaign structure that exposes keyword-level revenue, the difference between AdServices and SKAdNetwork, and the reporting view that actually drives bid decisions for subscription apps.
Apple Search Ads is the cleanest paid channel iOS has. The user is already in the App Store, already searching for something close to your product, already past the friction that ads on Meta or TikTok have to overcome. The conversion rates are higher, the install quality is higher, and the keywords are explicit signals you can read directly.
It's also one of the channels most growth teams under-optimise, because the default reports look correct without actually answering the question they need to answer: which keywords produce subscribers, not just installs.
This post is the working setup. Campaign structure, the two attribution signals Apple gives you, how to combine them, and what the keyword-to-revenue view should look like in practice.
The two attribution signals Apple ships
Apple Search Ads exposes attribution two ways, and they're not the same.
AdServices framework (AAAttribution). The SDK requests an attribution token on device at install time. The MMP exchanges that token with Apple's server within 24 hours (the token expires after that) and gets back the campaign, ad group, keyword, country, and a few other fields for that specific install. Keyword-level data is included, and the attribution is deterministic. This is the signal you want for keyword decisions.
SKAdNetwork. Returns coarse, delayed, aggregated campaign-level postbacks. Keyword-level data is not included — SKAN only exposes the campaign and the conversion value. Useful as a fallback when AdServices fails (rare) or for cross-campaign aggregation; not useful for bid decisions.
A correctly configured MMP uses AdServices as the primary signal for Apple Search Ads attribution and treats SKAN as a coverage backup. AppSprint does the AdServices token exchange directly with Apple and stores the campaign, ad group, keyword, match type, and search term against the install.
Campaign structure that makes keyword-level revenue readable
The structure most subscription apps inherit looks something like: one "Search" campaign, one "Discovery" campaign, ad groups split by language. That structure is fine for spend; it's terrible for measurement.
The structure that produces a readable keyword-to-revenue view:
1. Separate intent types into separate campaigns.
- Brand campaign (your own app name and variants)
- Competitor campaign (competing apps' names)
- Category campaign (broad terms — "habit tracker", "expense app")
- Discovery campaign (Apple's discovery placement)
Each of these behaves differently at the revenue layer. Brand traffic converts at 40–60% from tap to install; category traffic converts at 5–15%. Blending them in one campaign hides that.
2. Within each campaign, group keywords by competitive intensity, not language. Language is a country-level setting on Apple Search Ads — it splits automatically. What doesn't split automatically is whether you're bidding on a high-volume head term that everyone competes on, vs a long-tail keyword that converts at 3× the rate. Group those into separate ad groups so you can see them separately in the report.
3. Run discovery and search side by side. Discovery campaigns sometimes outperform Search on subscription apps because they pull users from contexts (Today tab, Search tab default state) where intent is broader but the user isn't already comparing alternatives. Treat them as separate campaigns and let the data tell you the mix.
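The routing rules above can be written down as a function, which also makes them auditable when new keywords come in. A sketch with hypothetical term lists and a popularity threshold (Apple surfaces a 5–100 search popularity index; the cutoff of 50 is an assumption to tune per account):

```python
# Hypothetical term lists and threshold -- tune these per account.
BRAND_TERMS = {"appsprint"}                   # your app name and variants
COMPETITOR_TERMS = {"appsflyer", "adjust"}    # competing apps' names
HEAD_TERM_MIN_POPULARITY = 50                 # split head vs long-tail ad groups

def route_keyword(term: str, popularity: int) -> tuple[str, str]:
    """Return (campaign, ad_group) for a search keyword.

    Campaigns split by intent (brand / competitor / category); ad groups
    split by competitive intensity (head vs long-tail), not by language.
    """
    t = term.lower()
    if t in BRAND_TERMS:
        campaign = "brand"
    elif t in COMPETITOR_TERMS:
        campaign = "competitor"
    else:
        campaign = "category"
    ad_group = "head" if popularity >= HEAD_TERM_MIN_POPULARITY else "long-tail"
    return campaign, ad_group
```

Discovery stays out of this function deliberately: it's Apple's automated placement, so it gets its own campaign rather than a keyword list.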
The view that drives bid decisions
The reporting view that subscription teams actually need looks like this:
| Keyword | Taps | Installs | Trials | Paid conversions | Renewal revenue (D30) | Spend | ROAS |
|---|---|---|---|---|---|---|---|
| brand search | 1,200 | 580 | 320 | 195 | $4,680 | $890 | 5.26 |
| category head | 2,400 | 280 | 95 | 22 | $580 | $1,420 | 0.41 |
| long-tail 1 | 180 | 62 | 38 | 19 | $480 | $95 | 5.05 |
That table answers three questions in one row each:
- Is the keyword producing installs? (Install column)
- Are those installs converting to revenue? (Paid + Renewal columns)
- Is the spend justified by the revenue? (ROAS column)
A keyword's "expensive" or "cheap" label flips depending on which column you read. The category-head keyword above looks busy by taps and installs and dies at the revenue layer. The long-tail keyword looks small and is actually one of the best ROAS lines in the account.
This is why the AdServices signal matters. Without keyword-level attribution, you can't build this table. You're stuck reading campaign-level numbers and making bid decisions on a blended average that hides the real winners and losers.
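Once installs carry keyword-level attribution, the table reduces to a per-keyword aggregation plus two derived numbers. A sketch with an illustrative row type and a deliberately simple bid rule (the target ROAS and the minimum-paid-conversions threshold are assumptions, not recommendations):

```python
from dataclasses import dataclass

@dataclass
class KeywordRow:
    keyword: str
    taps: int
    installs: int
    trials: int
    paid: int            # paid conversions
    d30_revenue: float   # renewal revenue through day 30, in dollars
    spend: float

def roas(row: KeywordRow) -> float:
    """Return on ad spend: D30 revenue divided by spend."""
    return round(row.d30_revenue / row.spend, 2) if row.spend else 0.0

def bid_decision(row: KeywordRow, target_roas: float = 2.0) -> str:
    """Scale winners, cut losers, keep watching anything that hasn't
    produced enough paid conversions to judge."""
    if row.paid < 10:
        return "wait"    # not enough revenue signal yet
    return "scale" if roas(row) >= target_roas else "cut"
```

Run against the table above, the category head term (ROAS 0.41 on 22 paid conversions) comes back "cut" while the small long-tail line comes back "scale", which is exactly the flip the taps and installs columns hide.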
What to track at the subscription layer
Three events earn their place in the Apple Search Ads view:
Trial start (D0–D1). A leading signal. Worth watching, but don't optimise bids against it alone — trial-start volume can scale with cheap, unqualified clicks.
Paid conversion (D3–D14, depending on trial length). This is when the revenue starts. A keyword's paid conversion rate is the single most useful number in the report.
D30 renewal revenue. The lagging signal that tells you which paid conversions stuck. Keywords with strong paid conversion and weak D30 retention are the ones that look like winners for the first two weeks and then quietly underperform. The growth team should look at D30 revenue every Monday.
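The three events reduce to a small per-keyword aggregation over install-relative day offsets. A sketch with an illustrative event shape (the field names are not a real MMP schema):

```python
def keyword_funnel(events: list[dict]) -> dict:
    """Aggregate subscription events for one keyword's installs.

    Each event: {"type": "trial_start" | "paid_conversion" | "renewal",
                 "day": days since install, "revenue": dollars}.
    Field names are illustrative, not a real MMP schema.
    """
    trials = sum(1 for e in events if e["type"] == "trial_start" and e["day"] <= 1)
    paid = sum(1 for e in events if e["type"] == "paid_conversion")
    d30_revenue = sum(
        e["revenue"] for e in events
        if e["type"] in ("paid_conversion", "renewal") and e["day"] <= 30
    )
    return {
        "trials": trials,                 # leading signal (D0-D1)
        "paid": paid,                     # where revenue starts
        "trial_to_paid": round(paid / trials, 2) if trials else 0.0,
        "d30_revenue": round(d30_revenue, 2),   # lagging retention signal
    }
```

The `trial_to_paid` rate is the number worth sorting the Monday report by; a keyword with high trials and a low rate is exactly the cheap-unqualified-clicks pattern described above.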
What to avoid
Bidding on broad match across all campaigns. Broad match is fine for discovery, lethal for category campaigns where it'll burn budget on terms you'd never bid on directly. Use exact match in your category campaign; let discovery handle the rest.
Treating "search match" as a free expansion. Search match (Apple's automated keyword matching) is good for finding new terms. It's also indistinguishable from broad match if you don't review it weekly. Bid on the keywords search match surfaces; pause the ones that don't perform after 100 taps.
Ignoring the Search Term Report. Apple shows you the actual queries that triggered your ads. Most teams check this once. Check it weekly — the negative keywords you find there are the difference between a 4× ROAS account and a 1.5× ROAS account.
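The weekly search-term review described above is mechanical enough to script. A sketch assuming an illustrative row shape mapped from the Search Term Report export (column names are assumptions; the 100-tap threshold comes from the rule above):

```python
def review_search_terms(rows: list[dict], min_taps: int = 100) -> dict:
    """Split search-term-report rows into promote / negative / wait buckets.

    Each row: {"term": ..., "taps": ..., "paid": ...} -- column names are
    illustrative; map them from the actual Search Term Report export.
    """
    promote, negative, wait = [], [], []
    for r in rows:
        if r["taps"] < min_taps:
            wait.append(r["term"])      # not enough data to judge yet
        elif r["paid"] > 0:
            promote.append(r["term"])   # bid on it as an exact-match keyword
        else:
            negative.append(r["term"])  # add as a negative keyword
    return {"promote": promote, "negative": negative, "wait": wait}
```

The `negative` bucket is where the 4× vs 1.5× ROAS difference lives: terms with real tap volume and zero paid conversions are budget leaks until you exclude them.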
Where AppSprint fits
AppSprint connects to Apple Search Ads via the Search Ads API, exchanges the AdServices token on every install, and stores campaign, ad group, keyword, and match type per install. The reporting view shown above is the default Apple Search Ads dashboard in the product, computed automatically against RevenueCat or Superwall revenue events when those integrations are connected.
The dashboard also shows the AdServices attribution rate per campaign — when Apple's token exchange fails for a meaningful chunk of installs (it shouldn't, but it sometimes does), you'll see the drop and know to investigate before bid decisions degrade.
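That kind of attribution-rate check is easy to replicate in any pipeline. A minimal sketch; the 90% floor is an assumption to calibrate against your own baseline rather than a product default:

```python
def flag_low_attribution(campaigns: dict[str, tuple[int, int]],
                         min_rate: float = 0.9) -> list[str]:
    """Flag campaigns whose AdServices attribution rate dropped too low.

    campaigns maps name -> (total installs, installs with an AdServices
    match). The min_rate floor is an assumption; tune it to your baseline.
    """
    flagged = []
    for name, (installs, attributed) in campaigns.items():
        rate = attributed / installs if installs else 1.0
        if rate < min_rate:
            flagged.append(name)
    return flagged
```

A flagged campaign means the keyword-level table for it is under-counting, so its bids should be frozen until the token exchange recovers.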
For a subscription app spending $5k+/month on Apple Search Ads, the difference between campaign-level reporting and keyword-level revenue reporting is usually a 20–30% efficiency gain — the same spend producing 20–30% more paid conversions because the bids reallocate to keywords that actually pay back. That's the bet of the AdServices-first setup, and it's a gain that campaign-level reporting alone can't deliver.