Quick Answer

An email deliverability report combines ESP send data, mailbox provider feedback (Postmaster Tools, SNDS), authentication results (DMARC aggregate reports), and seedlist placement tests. The metrics that actually predict inbox performance are complaint rate, authentication pass rate, and provider-reported reputation — not open rate, which is broken by Apple Mail Privacy Protection.

Email Delivery Reports: What to Track and What to Ignore

By Braedon · Mailflow Authority · Email Deliverability · Updated 2026-05-16

A useful email deliverability report answers two questions: are my messages reaching the inbox, and if not, why? Most reports I see from clients answer neither. They lead with open rate (broken since Apple MPP), bury complaint rate (the metric that actually matters), and never reference Google Postmaster Tools or Microsoft SNDS at all.

This guide walks through the email delivery report I actually use during audits — which metrics belong, which are vanity, and how to wire them together so the report drives a decision instead of decorating a slide.

What an email deliverability report actually measures

Deliverability is the gap between what your ESP says happened and what mailbox providers say happened. Your ESP sees: sent, accepted, bounced, complained. It does not see: filtered to spam, promotions tab placement, image-blocking, or provider-side reputation scoring.

A real email delivery report pulls from four sources:

Source                  | What it tells you
ESP send logs           | Volume, bounce categories, complaint webhooks
Mailbox provider tools  | Google Postmaster, Microsoft SNDS, Yahoo CFL
DMARC aggregate reports | Cross-sender authentication and alignment
Seedlist tests          | Inbox vs spam placement at major providers
If any of those four is missing, your report is incomplete. Most ESP-native dashboards only cover the first one.

Metrics that drive decisions

These are the numbers I check first on any new audit, in order:

  1. Complaint rate per provider. Google enforces a hard 0.30% threshold under the Gmail bulk sender rules and starts filtering well before that. Microsoft acts on roughly the same band. Track this per provider, not in aggregate (see the sketch after this list).
  2. Authentication pass rate (SPF, DKIM, DMARC). From DMARC aggregate reports, broken down by source. Any third-party sender below 99% needs investigation.
  3. Google Postmaster domain and IP reputation. Anything below "Medium" is a problem. "Bad" means active spam folder placement.
  4. Microsoft SNDS color (red/yellow/green) and complaint rate. Color codes summarize spam-trap hits and complaint volume.
  5. Bounce rate by category. A spike in 550 5.1.1 (user unknown) means your list hygiene slipped. A spike in 421 4.7.0 means rate limiting or temporary block.
  6. Seedlist inbox placement. From GlockApps, MailGenius, or Litmus seedlists — measured weekly, not after each send.
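For the per-provider complaint calculation, here's a minimal sketch in Python. The file name, column names, and provider map are all illustrative; real ESP exports and webhook payloads vary by vendor, so map your own fields in:

```python
# Sketch: per-provider complaint rate from an ESP event export.
# Assumes a hypothetical CSV with one row per recipient event:
#   recipient, event   (event in {"delivered", "complained", ...})
import csv
from collections import defaultdict

# Rough provider buckets by recipient domain (illustrative, not exhaustive).
PROVIDERS = {
    "gmail.com": "Google", "googlemail.com": "Google",
    "outlook.com": "Microsoft", "hotmail.com": "Microsoft", "live.com": "Microsoft",
    "yahoo.com": "Yahoo", "aol.com": "Yahoo",
    "icloud.com": "Apple", "me.com": "Apple",
}

def complaint_rates(path):
    delivered, complained = defaultdict(int), defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["recipient"].rsplit("@", 1)[-1].lower()
            provider = PROVIDERS.get(domain, "Other/B2B")
            if row["event"] == "delivered":
                delivered[provider] += 1
            elif row["event"] == "complained":
                complained[provider] += 1
    return {p: complained[p] / delivered[p] for p in delivered if delivered[p]}

for provider, rate in complaint_rates("events.csv").items():
    flag = "  <-- above 0.10% alert threshold" if rate > 0.001 else ""
    print(f"{provider}: {rate:.3%}{flag}")
```

One caveat: Gmail does not send complaint feedback to most senders the way the Microsoft and Yahoo FBLs do, so the Google row in an ESP-side calculation will undercount. Treat the Postmaster Tools spam rate as the number of record for Google.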

Practitioner note: I weight Microsoft SNDS heavier than most people do. It's the only direct signal Microsoft gives you, and the spam-trap hits column will flag list quality problems three or four weeks before Outlook starts spam-foldering. If you're not on SNDS yet, that's your first homework.

Metrics that mislead

  • Open rate as an absolute number. Apple MPP pre-loads images, so any list with an Apple Mail share above 30% will show inflated, unreliable opens. Use it for relative campaign comparison only.
  • "Delivered" without complaint context. A 99.5% delivered rate with a 0.5% complaint rate is a worse outcome than a 92% delivered rate with a 0.05% complaint rate. The first one is on track to get blocked.
  • Spam score tools. Tools that rate your HTML "spam score" out of 10 do not reflect how Gmail or Outlook filters work. Ignore them.
  • Sender Score (Validity). Useful as a directional signal, but it's IP-based, and modern filtering weighs domain reputation more heavily than IP reputation.

How to wire the report together

I build deliverability reports as a single sheet with three tabs: provider view (Gmail, Microsoft, Yahoo, Apple, B2B), trend view (rolling 4-week complaint and authentication rates), and incident log (any reputation drops, blocklist hits, or threshold breaches).
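The trend tab is just a rolling window over weekly metrics. Here's a minimal pandas sketch, assuming a hypothetical weekly_metrics.csv with one row per provider per week (columns: week, provider, complaint_rate, dmarc_pass_rate):

```python
# Sketch: rolling 4-week trend view from a weekly metrics table.
import pandas as pd

df = pd.read_csv("weekly_metrics.csv", parse_dates=["week"])
df = df.sort_values(["provider", "week"])

# A 4-row window equals 4 weeks when there is exactly one row
# per provider per week; partial windows stay NaN.
trend = (
    df.groupby("provider")[["complaint_rate", "dmarc_pass_rate"]]
      .rolling(window=4)
      .mean()
      .add_suffix("_4wk")
)
print(trend.tail(8))
```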

For the data layer:

  • ESP exports via API into a warehouse (BigQuery or Postgres works fine)
  • Google Postmaster Tools pulled with the Gmail Postmaster Tools API (a pull sketch follows this list)
  • Microsoft SNDS pulled via its Automated Data Access endpoint (a key-based CSV URL, not a full API) or via tools like Mailhardener
  • DMARC aggregate reports parsed via Mailhardener, Dmarcian, or Postmark
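The Postmaster pull is the most mechanical part. A minimal sketch using the Gmail Postmaster Tools API (v1beta1) via google-api-python-client; token.json and example.com are placeholders, and I'm assuming you've already completed the OAuth consent flow for the postmaster.readonly scope:

```python
# Sketch: pull daily domain reputation and Gmail-measured spam rate
# from the Gmail Postmaster Tools API.
from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/postmaster.readonly"]
creds = Credentials.from_authorized_user_file("token.json", SCOPES)

service = build("gmailpostmastertools", "v1beta1", credentials=creds)
stats = (
    service.domains()
           .trafficStats()
           .list(parent="domains/example.com")
           .execute()
)
for day in stats.get("trafficStats", []):
    print(
        day["name"],                       # domains/example.com/trafficStats/YYYYMMDD
        day.get("domainReputation"),       # HIGH / MEDIUM / LOW / BAD
        day.get("userReportedSpamRatio"),  # Gmail's own complaint rate
    )
```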

The deliverability monitoring tools guide covers vendor selection in more depth. If you're managing more than one sending domain, you'll want a multi-domain monitoring layer that rolls up across all of them.

Practitioner note: The single biggest reporting upgrade most clients make is adding DMARC aggregate report parsing. You suddenly see every IP and source sending as your domain — including the ones you didn't authorize and the ESPs your marketing team signed up for without telling IT.
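Parsing those reports is less work than it sounds; the format is plain XML defined in RFC 7489. A minimal sketch that tallies aligned pass rate per source IP from a single unzipped report (report.xml is a placeholder; dedicated parsers handle volume, deduplication, and PTR enrichment):

```python
# Sketch: aligned DMARC pass rate per source IP from one RUA report
# (the XML file inside the .zip/.gz attachment), per RFC 7489.
import xml.etree.ElementTree as ET
from collections import defaultdict

def dmarc_pass_rates(xml_path):
    passed, total = defaultdict(int), defaultdict(int)
    for record in ET.parse(xml_path).getroot().iter("record"):
        row = record.find("row")
        ip = row.findtext("source_ip")
        count = int(row.findtext("count"))
        pe = row.find("policy_evaluated")
        # DMARC passes when either aligned DKIM or aligned SPF passes.
        if "pass" in (pe.findtext("dkim"), pe.findtext("spf")):
            passed[ip] += count
        total[ip] += count
    return {ip: passed[ip] / total[ip] for ip in total}

for ip, rate in sorted(dmarc_pass_rates("report.xml").items()):
    print(f"{ip}: {rate:.1%}")
```

Any IP in that output you can't name is exactly the unauthorized-sender problem described above.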

Reporting cadence and thresholds

Set thresholds, not just trend lines. Alert on the following (a minimal threshold check is sketched after the list):

  • Complaint rate above 0.10% on any provider (well below Google's 0.30% enforcement)
  • DMARC failure rate above 1% from any aligned source
  • Google Postmaster reputation drop of one tier
  • Microsoft SNDS color change to yellow or red
  • Bounce rate above 2% on any campaign
  • Inbox placement below 90% at any major provider in seedlist tests
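Encoding those thresholds as code keeps the alerting honest. A minimal sketch; the snapshot dict is hypothetical and would come from your warehouse, and the categorical alerts (Postmaster tier drops, SNDS color changes) are diffs against the prior snapshot, omitted here:

```python
# Sketch: evaluate the numeric alert thresholds above against
# a per-provider metrics snapshot.
THRESHOLDS = {
    "complaint_rate":  lambda v: v > 0.0010,  # above 0.10%
    "dmarc_fail_rate": lambda v: v > 0.01,    # above 1%
    "bounce_rate":     lambda v: v > 0.02,    # above 2%
    "inbox_placement": lambda v: v < 0.90,    # below 90%
}

def breaches(snapshot):
    alerts = []
    for provider, metrics in snapshot.items():
        for metric, tripped in THRESHOLDS.items():
            if metric in metrics and tripped(metrics[metric]):
                alerts.append(f"{provider}: {metric} = {metrics[metric]:.4f}")
    return alerts

snapshot = {
    "Google":    {"complaint_rate": 0.0014, "inbox_placement": 0.93},
    "Microsoft": {"complaint_rate": 0.0004, "bounce_rate": 0.031},
}
for alert in breaches(snapshot):
    print("ALERT:", alert)
```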

For monthly executive reporting, lead with the inbox placement number and the complaint rate. Those two predict revenue impact. Everything else is supporting evidence.

Common reporting mistakes

  • Reporting at the account level instead of per-sending-domain when you operate multiple brands or subdomains. Reputation lives at the domain, not the account.
  • Combining transactional and marketing mail in the same rollup. They should be separated — they have different baselines.
  • Conflating "sent" with "attempted." If your ESP suppresses 8% of your list before send, the denominator matters.

If your team is debugging current placement issues, start with why emails go to spam and the Gmail complaint rate threshold deep-dive before redesigning the report.

If you need help building a deliverability reporting stack that actually drives decisions, book a consultation. I set up DMARC aggregate parsing, Postmaster Tools integration, and weekly inbox placement reporting for agencies and SaaS teams every month.

Frequently Asked Questions

What should an email delivery report include?

At minimum: total sends, accepted, bounces (hard and soft), complaint rate per mailbox provider, authentication pass rates (SPF, DKIM, DMARC alignment), Google Postmaster reputation tiers, Microsoft SNDS color codes, and a seedlist inbox placement score. Skip open-rate-only reports — they're badly distorted by Apple MPP.

How often should you generate a deliverability report?

Weekly for active senders above 50,000 sends per month, monthly for smaller programs. Reputation problems compound fast — by the time a monthly report flags a complaint spike, you've usually been damaging your sender reputation for two to three weeks.

Are open rates still useful in a delivery report?

Only as a relative trend, not an absolute number. Apple Mail Privacy Protection pre-fetches images for opted-in users, inflating opens for any list with a meaningful Apple share. Use opens to compare campaign A vs B, not to judge inbox placement.

What's the difference between delivered and inboxed?

Delivered means the receiving server accepted the message (returned 250 OK). Inboxed means it landed in the primary inbox rather than spam, promotions, or junk. Your ESP can only report delivered. Inbox placement requires seedlist testing or panel data from a third-party tool.

How do I track deliverability across multiple ESPs?

Centralize with DMARC aggregate reports (every major receiving provider sends reports on mail claiming to be your domain to your RUA address), plus Google Postmaster Tools and Microsoft SNDS at the domain or IP level. ESP dashboards alone won't show you the cross-vendor picture.
