An inbox placement test sends your email to a seed list of test addresses across major providers (Gmail, Yahoo, Outlook, Apple) and reports where each lands — inbox, Promotions tab, spam folder, or rejected. Use GlockApps or similar for a real test. Run before major campaigns, after authentication changes, and when diagnosing delivery problems. The results show what your real users see, not what your ESP reports as 'delivered.'
Inbox Placement Test: How to Run One That Actually Means Something
What Inbox Placement Tests Measure
A proper inbox placement test (sometimes called an email placement test or inbox test) reveals where your email actually lands, broken down by:
- Provider: Gmail, Outlook, Yahoo, Apple Mail, AOL, etc.
- Folder: Inbox, Promotions (Gmail), spam folder, or rejected
- Authentication results: SPF, DKIM, DMARC pass/fail
- Score/diagnostic notes: specific reasons for placement decisions
This is different from your ESP's "delivered" metric, which just confirms the receiving server accepted the message — without telling you whether the recipient ever sees it. Inbox placement tools fill that gap by using real seed mailboxes to measure actual folder destination.
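The seed-list mechanics can be sketched in a few lines: fold per-address results into the per-provider folder breakdown a placement report shows. The data shape below is an assumption for illustration — real tools like GlockApps return their own, richer formats.

```python
from collections import Counter, defaultdict

# Hypothetical seed results: (provider, folder) per seed address.
# Real placement tools return richer data; this shape is illustrative only.
seed_results = [
    ("gmail", "inbox"), ("gmail", "promotions"), ("gmail", "spam"),
    ("outlook", "inbox"), ("outlook", "inbox"),
    ("yahoo", "spam"),
]

def placement_report(results):
    """Aggregate per-seed results into a per-provider folder percentage breakdown."""
    by_provider = defaultdict(Counter)
    for provider, folder in results:
        by_provider[provider][folder] += 1
    report = {}
    for provider, folders in by_provider.items():
        total = sum(folders.values())
        report[provider] = {f: round(100 * n / total, 1) for f, n in folders.items()}
    return report

print(placement_report(seed_results))
```

The same aggregation is what distinguishes a placement report from a "delivered" metric: every seed that was accepted still gets assigned a folder.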
Tools to Use
GlockApps ($79-559/month)
The standard choice. Sends to seed addresses across 30+ providers. Reports placement, spam scores, authentication results. Good for ongoing monitoring.
Mail-Tester (free or paid)
Quick one-off testing. Strong on technical diagnostics (authentication, content score). Limited inbox placement breadth — useful as a first sanity check before deeper testing, and a serviceable free option for solo senders.
Litmus and Email on Acid (paid)
Primarily rendering testing (how your email looks across clients), but include some inbox placement features. Better paired with GlockApps for full coverage.
Inbox Inspector by 250ok (now part of Validity)
Enterprise-tier; expensive but comprehensive for high-volume senders.
For most senders, GlockApps plus occasional Mail-Tester checks cover the practical use cases.
How to Run a Useful Test
1. Prepare a realistic test message
Use your actual email content — not a stripped-down test. The test should reflect what real subscribers receive.
2. Send from your real sending domain and ESP
Don't test from a "clean" sender if you actually send from a different one. The reputation of the real sending source matters.
3. Test before and after changes
If you're changing authentication, ESP, or template, test before and after to measure impact.
4. Run multiple tests across days
Single-test results are noisy. Run 3-5 tests across different times/days for a representative baseline.
5. Cross-reference with real metrics
Compare seed test results to actual campaign engagement. If inbox placement is reported at 95% but click rates are unusually low, real placement may be worse.
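Steps 4 and 5 can be made concrete: treat each run's inbox rate as one sample and summarize the mean and spread, so a single low reading shows up as noise around a baseline rather than a crisis. The function name and the numbers below are illustrative.

```python
from statistics import mean, stdev

def baseline(inbox_rates):
    """Summarize 3-5 test runs (inbox % each) into a baseline and its spread."""
    if len(inbox_rates) < 2:
        raise ValueError("need at least two runs for a baseline")
    return {"mean": round(mean(inbox_rates), 1),
            "spread": round(stdev(inbox_rates), 1)}

# Five runs across different days/times (illustrative numbers).
runs = [92.0, 88.0, 95.0, 90.0, 78.0]
print(baseline(runs))  # the one 78% run widens the spread, not the baseline
```

A wide spread relative to the mean is itself a signal: it tells you a single test result is not trustworthy on its own.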
Interpreting Results
Gmail Tab Distribution
Gmail's Primary/Promotions/Updates split confuses many senders. How to read it:
- Marketing → Promotions = healthy. This is where promotional mail belongs. Don't try to "escape" Promotions.
- Marketing → Primary = good, but may not be sustainable for sales-oriented content.
- Marketing → Spam = problem to investigate.
- Transactional → Promotions = problem. Transactional should land in Primary.
Provider Breakdowns
- Gmail dominant = focus on Gmail-specific signals (engagement, complaint rate)
- Outlook problems = often authentication-related (Outlook is strict)
- Yahoo problems = often complaint or RFC 8058 compliance
- Apple Mail problems = relatively rare; usually severe reputation issues
Spam Score
Most tests provide a numeric spam score (often 0-10). Useful but limited — real placement is decided by full filter chains, not just a single score.
Common Testing Mistakes
Testing only with Mail-Tester
Mail-Tester is a great quick check but doesn't measure cross-provider placement. Don't conclude "my email is fine" from one Mail-Tester score.
Testing without warmup
A test from a fresh, unwarmed sending source will almost always land in spam regardless of content quality. Test from your actual sending source with its existing reputation.
Testing identical messages repeatedly
Filters detect identical content sent to many addresses. Test with realistic content variation.
Ignoring authentication failures in tests
If a test shows authentication failures, your real users get them too — fix immediately.
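The authentication verdicts a test reports usually come from the receiver's Authentication-Results header (RFC 8601). A minimal stdlib sketch of pulling SPF/DKIM/DMARC verdicts out of such a header — the header value here is a made-up example:

```python
import re

def auth_results(header_value):
    """Extract spf/dkim/dmarc verdicts from an Authentication-Results header value."""
    verdicts = {}
    for mech in ("spf", "dkim", "dmarc"):
        m = re.search(rf"\b{mech}=(\w+)", header_value)
        if m:
            verdicts[mech] = m.group(1)
    return verdicts

# Example header value in the RFC 8601 style (domains and verdicts illustrative).
hdr = ("mx.example.com; spf=pass smtp.mailfrom=news.example.org; "
       "dkim=fail header.d=example.org; dmarc=pass header.from=example.org")
failures = [m for m, v in auth_results(hdr).items() if v != "pass"]
print(failures)  # any failure here is what real recipients' providers see too
```

Production parsers should handle the full RFC 8601 grammar (comments, multiple instances of the same method); this regex pass is a spot-check, not a validator.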
Over-reacting to one bad result
Placement varies. One test showing 78% inbox doesn't mean you have a crisis. Look for trends across multiple tests.
When to Run Tests
| Scenario | Test Frequency |
|---|---|
| Steady-state ongoing sender | Monthly |
| Diagnosing delivery problems | Immediately, then weekly during recovery |
| Major campaign | Before send + after send |
| Authentication changes | Before and after |
| ESP migration | Before, during, after |
| New sending domain | After warmup, before scaling |
Practitioner note: Inbox placement tests are diagnostic tools, not optimization tools. They tell you where you stand. They don't tell you how to fix problems. After a bad test, the next step is full diagnosis (authentication, reputation, list quality), not "tweak the email until it passes the test."
Practitioner note: GlockApps is the industry default for inbox testing because their seed lists are large enough to be statistically meaningful. Smaller free tools test against 5-10 addresses — useful for spot-checks but not representative. For production diagnostics, pay for GlockApps or similar.
Practitioner note: A common test interpretation error: senders see Gmail Promotions placement and think they have a deliverability problem. Promotions IS inbox for marketing email — recipients still see it, just in a different tab. Worry about Spam folder placement, not Promotions placement.
If you're running inbox placement tests but not sure how to interpret the results or what to fix, book a consultation. I'll review your test data and identify the specific actions that will improve your real-world placement.
v1.0 · May 2026
Frequently Asked Questions
What is an inbox placement test?
An inbox placement test sends your email to a curated list of seed addresses at major mailbox providers. After delivery, the test reports where each message landed: Inbox, Promotions/Updates tab (Gmail), spam folder, or rejected entirely. This shows actual placement, not just whether the receiving server accepted the message.
How is inbox placement different from delivery rate?
Delivery rate measures whether the receiving server accepted your message — it can be 99% even if half your messages land in spam. Inbox placement measures where the message actually appeared. A healthy sender has both: 98%+ delivery AND 90%+ inbox placement.
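The arithmetic behind that distinction, as a sketch (counts are illustrative):

```python
def rates(sent, accepted, inboxed):
    """Delivery rate vs. inbox placement rate from raw counts."""
    return {"delivery_pct": round(100 * accepted / sent, 1),
            "inbox_placement_pct": round(100 * inboxed / accepted, 1)}

# 10,000 sent, 9,900 accepted by receiving servers, only 5,000 in the inbox:
print(rates(sent=10_000, accepted=9_900, inboxed=5_000))
# "delivered" looks great (99.0%) while actual placement is poor (50.5%)
```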
What's a good inbox placement rate?
90%+ across major providers is healthy. 80-90% needs attention. Under 80% indicates real deliverability problems. By provider, expect: Gmail 85-95% inbox + 5-15% Promotions (which is acceptable for marketing), Outlook 85-95%, Yahoo 85-95%, Apple 90-95%.
How often should I run inbox placement tests?
Before major campaigns, after any authentication or infrastructure changes, monthly for ongoing senders, and immediately when diagnosing a delivery problem. Don't test obsessively — placement varies by send, so single test results are directional, not definitive.
Why do inbox placement tests sometimes show different results than real campaigns?
Seed addresses are fresh and lack the engagement history of real subscribers. Real subscribers who engage actively get better placement; dormant subscribers worse. Seed test results reflect baseline sender reputation, not per-recipient personalization. Use seed tests as a baseline indicator, not a perfect predictor of real campaign placement.
Want this handled for you?
Free 30-minute strategy call. Walk away with a plan either way.