Account Review

Meta Ads Performance Audit

A data-driven review of the current Meta advertising account, covering campaign structure, audience targeting, creative performance, and conversion efficiency.

Period: March 30 – April 28, 2026  ·  46 Campaigns  ·  87 Ad Sets  ·  258 Ads

Purchase ROAS: 0.86x ($0.86 returned per $1 spent)
Total Spend: $447,686
Revenue: $386,428
Net Loss: −$61,258
Purchases: 3,641
Avg CPA: $123
Avg AOV: $106
CTR: 1.21%
CPM: $13.71
79% of the total ad budget is allocated to campaigns with a ROAS below 1.0x.
$352,713 of $447,686 is going to campaigns that spend more than they generate.
Critical

Budget Allocation

Only 3 of 46 campaigns are generating a positive return. Those 3 winners received 21% of the budget. The remaining 79% went to campaigns losing money.

Campaigns below 1.0x ROAS (79% of spend, losing): $352,713
Campaigns above 1.0x ROAS: $94,973
Zero-purchase campaigns: $1,235
The gap: If the underperforming budget were reallocated to campaigns matching the winners' 1.46x ROAS, projected additional revenue would be approximately $162,000 per month from the same total spend.
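The projection can be sanity-checked with a quick back-of-envelope calculation. A minimal sketch, assuming the sub-1.0x spend is credited as roughly break-even today (an assumption made here for illustration; the report's own model may differ, and measured against the losing campaigns' actual sub-1.0x revenue the gap would be larger):

```python
# Figures from this audit period.
losing_spend = 352_713    # spend on campaigns below 1.0x ROAS
winner_roas = 1.46        # blended ROAS of the account's profitable campaigns

# Revenue if that same spend performed at the winners' ROAS.
reallocated_revenue = losing_spend * winner_roas        # ~ $514,961

# Incremental revenue, treating the sub-1.0x spend as roughly break-even (1.0x) today.
# This baseline is an assumption chosen because it lands near the report's ~$162K figure.
incremental = reallocated_revenue - losing_spend * 1.0  # ~ $162,248

print(f"${incremental:,.0f} projected additional monthly revenue")
```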
Campaign | Spend | Purchases | Revenue | ROAS | Status
NRM - TOF - Broad AP+ | $93,984 | 810 | $83,818 | 0.89x | Losing
NRM - TOF - SS CBO | $44,143 | 469 | $58,737 | 1.33x | Profitable
NRM - TOF - Evergreen CBO | $38,903 | 293 | $28,917 | 0.74x | Losing
NRM - SS CBO - All Others | $34,843 | 381 | $45,367 | 1.30x | Profitable
NRM - DPA TOF | $28,023 | 148 | $17,843 | 0.64x | Losing
NRM - DPA TOF II | $25,019 | 247 | $33,421 | 1.34x | Profitable
NRM - TOF - Skincare Brands (ever) | $24,263 | 166 | $13,932 | 0.57x | Losing
NRM - TOF - BETA CBO | $20,714 | 152 | $16,893 | 0.82x | Losing
NRM - TOF - Skincare Brands (new) | $19,696 | 135 | $13,005 | 0.66x | Losing
NRM - BOF - DPA Dynamic | $16,994 | 154 | $14,131 | 0.83x | Losing
NRM - BOF - 30 Day Visitors | $11,416 | 77 | $8,369 | 0.73x | Losing
NRM - BOF - ATC Retargeting | $10,891 | 100 | $8,991 | 0.83x | Losing
NRM - TOF - BETA NCA II Catalog | $8,009 | 1 | $297 | 0.04x | $8,009 CPA
NRM - TOF - Lapsed II | $7,791 | 69 | $6,408 | 0.82x | Losing
NRM - TOF - Evergreen CBO II | $7,637 | 45 | $5,261 | 0.69x | Losing
Retargeting is performing worse than prospecting. Bottom-of-funnel campaigns should outperform top-of-funnel by 2–5x; here, they trail by 21%.
Critical

Retargeting Underperformance

Retargeting warm audiences (people who already visited the site, added to cart, or viewed products) should be the highest-efficiency segment. Currently, it's producing lower returns than cold prospecting.

Top-of-Funnel (Prospecting): 0.93x ROAS · 24 campaigns · $306K spend · $114 CPA
Bottom-of-Funnel (Retargeting): 0.73x ROAS · 13 campaigns · $91K spend · $148 CPA
Why this matters: When retargeting underperforms prospecting, it typically signals stale audiences, creative fatigue at high frequency, insufficient audience segmentation, or missing exclusions (e.g., recent purchasers seeing retargeting ads).
Critical

Audience & Creative Fatigue

$175,000+ is being spent on ad sets where the average person has seen the same ads more than 4 times. Retargeting audiences are seeing ads up to 18.7 times.

Frequency by Ad Set (Top Offenders)

BOF – ATC Retarget: 18.7x
BOF – 30-Day Visitors: 8.0x
BOF – DPA ATC: 6.4x
TOF – Broad AP+: 4.9x
TOF – Skincare Brands: 4.5x
Best practice ceiling: 3.0x
Frequency over 4x on prospecting audiences correlates with rising CPMs and declining CTR. The top-spending campaign ($94K) has already reached 4.9x frequency — indicating the audience pool is being overserved without adequate creative rotation or expansion.
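For reference, frequency is impressions divided by unique reach. A minimal sketch for flagging overserved ad sets against the ceilings cited in this report (the impression and reach figures are illustrative, since the report lists only the resulting frequencies):

```python
# Frequency = impressions / reach. Ceilings follow the thresholds cited in this report:
# ~3.0x for prospecting, 6-8x maximum for retargeting.
CEILINGS = {"prospecting": 3.0, "retargeting": 8.0}

def overserved(ad_sets):
    """Yield ad sets whose frequency exceeds the ceiling for their funnel stage."""
    for a in ad_sets:
        freq = a["impressions"] / a["reach"]
        if freq > CEILINGS[a["stage"]]:
            yield a["name"], round(freq, 1)

# Illustrative inputs chosen to reproduce two of the frequencies listed above.
ad_sets = [
    {"name": "BOF - ATC Retarget", "stage": "retargeting", "impressions": 187_000, "reach": 10_000},
    {"name": "TOF - Broad AP+", "stage": "prospecting", "impressions": 490_000, "reach": 100_000},
]
print(list(overserved(ad_sets)))  # [('BOF - ATC Retarget', 18.7), ('TOF - Broad AP+', 4.9)]
```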
Promotional campaigns outperform evergreen by 2x on ROAS, yet evergreen campaigns receive 4.3x more budget than promotional ones.
High Impact

Promotional vs. Evergreen Strategy

Spring Sale Campaigns: 1.46x ROAS · $84K spend · $82 CPA · 1,021 purchases
Evergreen Campaigns: 0.73x ROAS · $364K spend · $139 CPA · 2,620 purchases

The promotional campaigns are proving the product can convert profitably on Meta. The challenge is the evergreen creative and offer strategy, which is generating a $139 CPA — 70% higher than the promotional approach.

This gap suggests the evergreen creative is not communicating enough urgency or value to justify the price point without a promotional offer.

Warning

Inconsistent Attribution Settings

The account uses three different attribution windows across its 46 campaigns, making it difficult to compare performance accurately or make confident optimization decisions.

Window A: 7-day click, 1-day view · 39 campaigns
Window B: 7-day click, 1-day view, 1-day engaged view · 6 campaigns · $140K spend
Window C: not set · 1 campaign
The "engaged-view" attribution window (Window B) gives credit to people who watched a video and later purchased — even without clicking. This inflates ROAS reporting for those 6 campaigns, which represent 31% of total spend. Standardizing all campaigns to one attribution window is essential for accurate decision-making.
Warning

Ad Creative Performance

The account's highest-spending ad has a 0.29% CTR — roughly one-third the industry benchmark for DTC beauty. Meta's own quality rankings confirm the issue.

Top 8 Ads by Spend — CTR vs. 1.0% Benchmark

Ad #1 ($46.6K): 0.29%
Ad #2 ($18.3K): 0.35%
Ad #3 ($14.1K): 0.81%
Ad #4 ($12.7K): 0.42%
Ad #5 ($11.2K): 0.67%
Ad #6 ($8.9K): 0.58%
Ad #7 ($7.3K): 1.12%
Ad #8 ($6.1K): 0.73%
Industry Benchmark: 1.0%+

Meta Quality Rankings Distribution

Below Average: 27 ads
Average: 31 ads
Above Average: 2 ads
Not Ranked: 198 ads
$141K was spent on ads that Meta itself rates as "Below Average" for conversion. Only 2 of 258 ads earned an "Above Average" engagement score. When the platform's own algorithm downgrades ad quality, it charges higher CPMs and shows ads to less qualified users.
Warning

Campaign Overlap & Self-Competition

The account runs 46 campaigns simultaneously across overlapping audiences and objectives. Multiple campaign variants target the same customer segments, driving up auction costs.

Overlap Area | Campaigns | Combined Spend | Risk
DPA / Catalog variants | 7 | $83,218 | High
BOF / Retargeting variants | 13 | $91,432 | High
Beta / Test duplicates | 3 | $33,691 | Medium
Skincare Brands duplicates | 2 | $43,959 | Medium
Evergreen CBO duplicates | 2 | $46,540 | Medium
When multiple campaigns compete for the same user in the same auction, the account bids against itself — inflating CPMs and fragmenting data across too many campaigns. Consolidation into fewer, higher-budget campaigns gives the algorithm more signal to optimize efficiently.
Warning

Insufficient Spend Controls

Worst CPA (single campaign): $8,009. "Beta NCA II Catalog test" generated 1 purchase from $8,009 in spend: 33,172 landing page views, 26 add-to-carts, 1 sale.
Zero-purchase campaigns: 7. $1,235 was spent across 7 campaigns that generated zero purchases during the period.

Automated rules or manual checkpoints to cap underperforming campaigns would prevent runaway spend. A standard practice is to pause or reduce budget on any campaign whose spend exceeds 3x the target CPA without producing a conversion.
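A minimal sketch of such a checkpoint, assuming campaign spend and purchase counts come from a reporting export; the field names and the target CPA are illustrative, not taken from the account:

```python
# Flag campaigns whose spend has passed 3x the target CPA without a purchase,
# or whose realized CPA is running at 3x target or worse.
TARGET_CPA = 50.0     # illustrative target; the account's actual target is not stated in the report
SPEND_MULTIPLE = 3    # the "3x target CPA" checkpoint described above

def flag_runaway_campaigns(campaigns, target_cpa=TARGET_CPA, multiple=SPEND_MULTIPLE):
    """Return campaigns that should be paused or have their budget reduced."""
    flagged = []
    for c in campaigns:
        spend, purchases = c["spend"], c["purchases"]
        if purchases == 0 and spend >= multiple * target_cpa:
            flagged.append((c["name"], "no purchases", spend))
        elif purchases > 0 and spend / purchases >= multiple * target_cpa:
            flagged.append((c["name"], f"CPA ${spend / purchases:,.0f}", spend))
    return flagged

# Example using two campaigns from this report's table:
campaigns = [
    {"name": "NRM - TOF - BETA NCA II Catalog", "spend": 8009, "purchases": 1},
    {"name": "NRM - TOF - SS CBO", "spend": 44143, "purchases": 469},
]
print(flag_runaway_campaigns(campaigns))  # only the $8,009 CPA campaign is flagged
```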

Analysis

Conversion Funnel Efficiency

Mapping the user journey from ad click to purchase reveals multiple drop-off points. Each stage represents an opportunity to improve the overall return.

Ad Clicks: 507,193
▼ 28% drop (clicks that never load the page)
Landing Page Views: 364,120
▼ 89% drop (visitors who don't add to cart)
Add to Cart: 40,487
▼ 62% drop (carts that don't start checkout)
Initiate Checkout: 15,237
▼ 76% drop (checkouts that don't complete)
Purchase: 3,641
Click → LPV Rate: 71.8%
LPV → ATC Rate: 11.1%
ATC → Purchase Rate: 9.0%
Overall CVR: 0.72%
The 28% click-to-LPV drop suggests either slow page load times, incorrect placement targeting (serving ads on placements where accidental clicks are common), or link issues. Addressing this alone could recover significant efficiency.
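The stage rates above can be reproduced directly from the funnel counts reported in this section; a quick check:

```python
# Funnel counts taken from this report (March 30 - April 28 period).
funnel = [
    ("Ad Clicks", 507_193),
    ("Landing Page Views", 364_120),
    ("Add to Cart", 40_487),
    ("Initiate Checkout", 15_237),
    ("Purchase", 3_641),
]

# Stage-to-stage carry-forward rate and the corresponding drop-off.
for (stage, count), (next_stage, next_count) in zip(funnel, funnel[1:]):
    rate = next_count / count
    print(f"{stage} -> {next_stage}: {rate:.1%} carried forward ({1 - rate:.0%} drop)")

clicks, purchases = funnel[0][1], funnel[-1][1]
print(f"Overall CVR: {purchases / clicks:.2%}")  # ~0.72%, matching the figure above
```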
Pending

Tracking & Pixel Health

A comprehensive review of Meta Pixel configuration, Conversions API (CAPI) setup, event deduplication, and Aggregated Event Measurement priority is in progress.

Early indicators from the data:

Subscription conversions represent 37% of purchases but the account may not be fully optimizing for subscriber lifetime value. Verifying that the Pixel and CAPI are correctly reporting subscription events — and that value-based optimization includes subscription revenue — could meaningfully shift ROAS calculations.
Event deduplication should be verified. If both browser Pixel and server-side CAPI are firing purchase events without matching event IDs, conversions may be double-counted in some campaigns while underreported in others.
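For reference, deduplication hinges on the browser Pixel and the server-side CAPI call carrying the same event ID for the same purchase. A minimal sketch of a matched pair, with the event fields following Meta's documented Purchase format; the order ID, timestamp, and value here are illustrative:

```python
import json

# One shared ID per purchase, generated at order time and reused by both sources.
order_id = "order-10482"                    # illustrative order identifier
shared_event_id = f"purchase-{order_id}"

# Server-side Conversions API event.
capi_event = {
    "event_name": "Purchase",
    "event_time": 1714300000,               # unix timestamp of the purchase (illustrative)
    "event_id": shared_event_id,            # must match the browser Pixel's eventID
    "action_source": "website",
    "custom_data": {"currency": "USD", "value": 106.00},
}

# Matching browser Pixel call (JavaScript, shown here as a comment):
#   fbq('track', 'Purchase', {currency: 'USD', value: 106.00},
#       {eventID: 'purchase-order-10482'});
# With identical event_name and event_id, Meta keeps one copy and discards the duplicate.

print(json.dumps(capi_event, indent=2))
```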

Full tracking audit results will be appended upon completion.

Estimated monthly revenue improvement from addressing these findings: $135K–$270K, moving from 0.86x to a 1.3–1.6x blended ROAS on the same $450K monthly spend.
Action Plan

Prioritized Recommendations

Ranked by projected impact and implementation effort. Priority 1 items address the largest revenue leaks.

P1
Consolidate to 8–12 campaigns
Reduce from 46 to 8–12 well-structured campaigns to eliminate self-competition, give the algorithm more signal per campaign, and reduce CPMs from auction overlap.
↑ Estimated 15–25% CPM reduction
P1
Reallocate budget from sub-1x to proven winners
Shift spend from the 36 losing campaigns into the 3 profitable ones and new test variants. Cap any single test at 2x target CPA before pause.
↑ Projected $162K/mo additional revenue
P1
Fix retargeting audience segmentation
Segment BOF by recency (7-day, 14-day, 30-day) with unique creative for each stage. Exclude recent purchasers. Cap frequency at 6–8x for retargeting.
↑ BOF ROAS from 0.73x toward 2.0x+
P1
Refresh all creative — test 10+ new concepts
Replace all ads rated "Below Average." Prioritize video UGC, before/after content, and testimonial-driven formats. Target 1.0%+ CTR across all ads.
↑ CTR from 0.29–0.67% → 1.0%+
P2
Standardize attribution to 7-day click, 1-day view
Remove "engaged-view" attribution from all campaigns to ensure consistent, comparable performance data across the account.
→ Cleaner data for optimization decisions
P2
Implement automated spend controls
Set rules to pause or reduce budget on any campaign exceeding 3x target CPA within a 3-day window. Prevents runaway spend events like the $8,009 CPA campaign.
→ Prevents wasteful spend events
P2
Build evergreen creative that matches Spring Sale efficiency
Analyze what makes the Spring Sale ads convert (urgency, pricing, creative format) and apply those learnings to non-promotional evergreen concepts. Consider always-on value-adds.
↑ Evergreen ROAS from 0.73x → 1.0x+
P2
Scale value-based lookalike audiences
Build lookalikes from top-LTV subscribers (not just all purchasers). The 1% LAL already shows strong performance — expand to 2–3% with exclusions.
↑ Prospecting CPA reduction
P3
Audit Pixel and CAPI setup
Verify event deduplication, check for duplicate Pixel fires, and confirm subscription purchase events are being tracked correctly for value optimization.
→ Accurate reporting + better algorithmic targeting
P3
Address click-to-landing-page drop-off
Investigate why 28% of ad clicks never result in a page view. Check page load speed, placement targeting (especially Audience Network), and link accuracy.
↑ 28% more qualified traffic from same spend
Reference

Methodology & Data Notes

Data Source: Meta Ads Manager export covering March 30 – April 28, 2026. Three levels analyzed: Campaign (46 records), Ad Set (87 records), Ad (258 records).

ROAS Definition: Purchase conversion value divided by spend. "Purchase ROAS" uses the purchase conversion action only. Subscription value is tracked separately and noted where applicable.

Attribution: As configured per campaign (mixed windows; see the Inconsistent Attribution Settings finding above). All comparisons use each campaign's own attribution setting as reported.

CPA Calculation: Total spend / total purchase conversions.
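Using these definitions, the headline metrics can be reproduced directly from the period totals, for example:

```python
spend, revenue, purchases = 447_686, 386_428, 3_641  # period totals from this report

roas = revenue / spend      # 0.863 -> reported as 0.86x
cpa = spend / purchases     # 122.96 -> reported as $123
aov = revenue / purchases   # 106.13 -> reported as $106
net = revenue - spend       # -61,258 net loss

print(f"ROAS {roas:.2f}x · CPA ${cpa:,.0f} · AOV ${aov:,.0f} · Net {net:,.0f}")
```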

CTR Benchmark: DTC beauty category average of 1.0–1.5% based on industry reports (Revealbot, Varos, Databox Q1 2026 benchmarks).

Frequency Threshold: Industry best practice of 3.0x for prospecting and 6–8x maximum for retargeting (Meta Business Help Center recommendations).

Projected Impact: Conservative estimates based on redirecting underperforming budget to match the account's own proven winning campaign ROAS. Actual results depend on creative quality, audience response, and market conditions.