PLG Metrics Dashboard: The 12 Numbers That Actually Predict Growth
Stop tracking vanity metrics. The exact dashboard 17 product-led growth companies use to predict revenue 90 days out. Includes SQL queries and alert thresholds.
TL;DR
Your analytics dashboard is lying to you.
Not because the data's wrong. Because you're looking at outcomes when you should be tracking inputs.
I spent three months embedded with 17 product-led growth companies, from pre-seed to Series B. All had dashboards. Most were useless.
The pattern was consistent:
What they tracked:
What they didn't know:
Every dashboard showed lagging indicators. By the time MRR dropped, the problem had started 60-90 days earlier in activation or engagement.
The companies that cracked PLG, the ones consistently growing 15-25% month-over-month, tracked different numbers: leading indicators that predicted revenue weeks before it materialized.
This guide breaks down the exact 12 metrics they monitored, how to calculate them, and which alert thresholds trigger action.
"We were celebrating 20% user growth while ignoring that our activation rate had dropped from 42% to 31% over two months. By the time we noticed the MRR impact, we'd burned through eight product releases. This dashboard would have caught it in week one." - Alex Kumar, VP Product at Clearview
Before we build the right dashboard, let's understand why most are broken.
What gets tracked:
| Metric | Why It's Tracked | Why It's Misleading |
|---|---|---|
| Total users | Feels like growth | Includes dead accounts, doesn't show quality |
| Monthly sign-ups | Easy to measure | Doesn't tell you if they activate or churn day 2 |
| MRR | Board wants to see it | Lags actual problems by 60-90 days |
| DAU/MAU ratio | "Industry standard" | Meaningless without segmentation by cohort |
These aren't wrong to track. They're just outcomes, not inputs. They tell you what happened, not why or what's coming next.
Example:
A developer tools company I worked with celebrated their best month ever in July:
Beautiful growth story.
Except when you looked at leading indicators:
What actually happened: A product update in early July broke part of the onboarding flow. New users were signing up (thanks to marketing spend) but hitting a bug that prevented them from completing setup.
Marketing was driving leads. Product was losing them.
They didn't notice for 6 weeks because they watched total users and MRR, both of which kept climbing on the lag from previous months' healthy cohorts.
When MRR growth finally stalled in September (↓3%), they spent three weeks diagnosing. The onboarding bug had been fixed by then, but they'd lost two months of potential activated users.
The fix: Track activation and early engagement as leading indicators. You spot the problem in July week 1, not September week 3.
The other extreme: companies tracking 40+ metrics in Amplitude or Mixpanel.
When everything's a metric, nothing's a priority.
I reviewed a SaaS company's dashboard last month. It had 67 metrics across 4 tabs.
I asked: "If you could only check 5 numbers to know if you're healthy, which 5?"
A twenty-minute discussion. No clear answer.
The rule: If you can't recite your core metrics from memory, you have too many.
Here's what works: twelve metrics organized into four tiers that correspond to customer journey stages.
Metrics 1-3: These predict whether a new user will stick around.
1. Activation Rate
(Users who reach activation milestone / Total sign-ups) × 100
What it is: The percentage of users who complete your "aha moment" action within the first 7 days.
Your activation milestone depends on your product:
Why it matters: Users who activate are 8-12x more likely to convert to paid than those who don't.
Benchmarks:
Alert threshold: If activation drops >5 percentage points week-over-week, investigate immediately.
2. Time to First Value (TTFV)
Median hours from sign-up to activation milestone
What it is: How long it takes new users to get value from your product.
Why it matters: Every hour of delay correlates with 3-5% activation drop-off.
Benchmarks by product type:
Alert threshold: If TTFV increases >15%, investigate friction in onboarding.
3. Day 1 Retention
(Users active on Day 1 after sign-up / Total sign-ups) × 100
What it is: Do users come back the next day?
Why it matters: If users don't return Day 1, they probably never will. Day 1 retention predicts Day 30 retention with 73% accuracy.
Benchmarks:
Alert threshold: Track by cohort. If a cohort's Day 1 retention is >8 points below trailing 4-week average, something changed.
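If you want to pull this straight from your warehouse, here's a rough PostgreSQL sketch using the users and events tables described in the implementation section below. It treats any event in the 24-48 hour window after sign-up as a Day 1 return, which is an assumption to adapt to your own definition of "active":
SELECT
  DATE_TRUNC('week', u.created_at) as cohort_week,
  COUNT(DISTINCT u.id) as sign_ups,
  COUNT(DISTINCT e.user_id) as returned_day_1,
  ROUND(100.0 * COUNT(DISTINCT e.user_id) / COUNT(DISTINCT u.id), 2) as day_1_retention
FROM users u
LEFT JOIN events e
  ON e.user_id = u.id
  -- any event in the 24-48h window after sign-up counts as a Day 1 return
  AND e.timestamp >= u.created_at + INTERVAL '1 day'
  AND e.timestamp < u.created_at + INTERVAL '2 days'
WHERE u.created_at >= NOW() - INTERVAL '30 days'
  AND u.created_at <= NOW() - INTERVAL '2 days'   -- only users whose Day 1 window has elapsed
GROUP BY cohort_week
ORDER BY cohort_week DESC;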
Metrics 4-6: These predict retention and expansion potential.
4. Week 1 Retention
(Users active in Week 1 / Users who activated) × 100
What it is: Of users who activated, what percentage are still using the product in week 1?
Why it matters: This is where most drop-off happens. Users activated, then realized the product wasn't valuable enough to integrate into their workflow.
Benchmarks:
Alert threshold: <50% means your product isn't sticky enough post-activation.
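A rough sketch for this one, using the same users and events tables as the implementation section below. It reads "week 1" as any activity in the 7 days after activation and only scores cohorts whose window has fully elapsed; both are assumptions to adjust:
SELECT
  DATE_TRUNC('week', u.activated_at) as cohort_week,
  COUNT(DISTINCT u.id) as activated_users,
  COUNT(DISTINCT e.user_id) as retained_week_1,
  ROUND(100.0 * COUNT(DISTINCT e.user_id) / COUNT(DISTINCT u.id), 2) as week_1_retention
FROM users u
LEFT JOIN events e
  ON e.user_id = u.id
  -- "week 1" here = any activity in the 7 days after activation
  AND e.timestamp >= u.activated_at + INTERVAL '1 day'
  AND e.timestamp < u.activated_at + INTERVAL '8 days'
WHERE u.activated_at IS NOT NULL
  AND u.activated_at >= NOW() - INTERVAL '90 days'
  AND u.activated_at <= NOW() - INTERVAL '8 days'  -- only cohorts whose week-1 window has elapsed
GROUP BY cohort_week
ORDER BY cohort_week DESC;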
5. Core Action Frequency
Median number of times activated users perform core action per week
What it is: How often do engaged users actually use your product?
Core action examples:
Why it matters: Frequency predicts willingness to pay. Users who perform core action 3+ times/week are 4x more likely to convert to paid.
Benchmarks (weekly):
Alert threshold: If median frequency drops below 3x/week, engagement is weakening.
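Here's a minimal sketch, assuming your core action is logged as a single event. The 'core_action' event name below is a placeholder, not something in your schema; substitute whatever you actually track:
WITH weekly_counts AS (
  SELECT
    e.user_id,
    DATE_TRUNC('week', e.timestamp) as week,
    COUNT(*) as core_actions
  FROM events e
  JOIN users u ON u.id = e.user_id
  WHERE u.activated_at IS NOT NULL
    AND e.event_name = 'core_action'   -- hypothetical event name; substitute your own
    AND e.timestamp >= NOW() - INTERVAL '30 days'
  GROUP BY e.user_id, week
)
SELECT
  week,
  -- note: users with zero core actions in a week are not counted; include them for a stricter median
  PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY core_actions) as median_core_actions_per_user
FROM weekly_counts
GROUP BY week
ORDER BY week DESC;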
6. Feature Adoption (Power Features)
(Users who used 3+ core features / Activated users) × 100
What it is: What percentage of users engage with multiple features (not just the entry point)?
Why it matters: Multi-feature adoption creates lock-in. Users who adopt 3+ features have 67% lower churn.
How to calculate: Define your 5-7 "core features" (not nice-to-haves). Track % of users who've used at least 3 in their first 30 days.
Benchmarks:
Alert threshold: If <20% of activated users adopt multiple features, you likely have a discoverability problem.
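A rough way to compute this from the events table, assuming each core feature is logged under its own event name. The five event names below are hypothetical placeholders; substitute your own list of 5-7 core features:
WITH feature_usage AS (
  SELECT
    u.id as user_id,
    COUNT(DISTINCT e.event_name) as core_features_used
  FROM users u
  LEFT JOIN events e
    ON e.user_id = u.id
    -- hypothetical core-feature event names; substitute your own 5-7
    AND e.event_name IN ('created_report', 'invited_teammate', 'connected_integration',
                         'set_up_alert', 'exported_data')
    AND e.timestamp < u.created_at + INTERVAL '30 days'
  WHERE u.activated_at IS NOT NULL
  GROUP BY u.id
)
SELECT
  COUNT(*) as activated_users,
  COUNT(*) FILTER (WHERE core_features_used >= 3) as multi_feature_users,
  ROUND(100.0 * COUNT(*) FILTER (WHERE core_features_used >= 3) / COUNT(*), 2) as power_feature_adoption
FROM feature_usage;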
Metrics 7-9: These predict revenue.
7. PQL Rate
(Product Qualified Leads / Activated users) × 100
What it is: Percentage of activated users who hit your PQL threshold (usage/engagement criteria that indicates buying intent).
Example PQL criteria:
Why it matters: PQLs convert to paid 10-15x better than random activated users.
Benchmarks:
Alert threshold: A dropping PQL rate is your earliest revenue warning signal, usually 60 days before MRR impact.
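A rough sketch, reusing the same 'became_pql' event that the conversion query in the implementation section below relies on:
SELECT
  DATE_TRUNC('week', u.activated_at) as cohort_week,
  COUNT(DISTINCT u.id) as activated_users,
  COUNT(DISTINCT e.user_id) as pqls,
  ROUND(100.0 * COUNT(DISTINCT e.user_id) / COUNT(DISTINCT u.id), 2) as pql_rate
FROM users u
LEFT JOIN events e
  ON e.user_id = u.id
  AND e.event_name = 'became_pql'   -- same PQL event used in the conversion query below
WHERE u.activated_at IS NOT NULL
  AND u.activated_at >= NOW() - INTERVAL '60 days'
GROUP BY cohort_week
ORDER BY cohort_week DESC;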
8. PQL→Customer Conversion Rate
(PQLs who converted to paid / Total PQLs) × 100
What it is: Of users who hit PQL threshold, how many actually pay?
Why it matters: This is the single most important metric for PLG. It separates product-market fit from product-market delusion.
If users love your product enough to hit power-usage thresholds but won't pay, you have a pricing/packaging problem, not a product problem.
Benchmarks:
Alert threshold: If this drops below 18%, your revenue engine is breaking. Investigate within 48 hours.
9. Time to Convert
Median days from activation to first payment
What it is: How long does it take users to upgrade to paid?
Why it matters: Longer conversion cycles mean lower LTV and slower revenue growth. Also signals friction in upgrade flow.
Benchmarks by ACV:
Alert threshold: If median time increases >20%, investigate: Is your pricing page broken? Friction in checkout? Sales team slow to respond?
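A minimal sketch using the users and subscriptions tables from the implementation section below, treating a user's earliest active subscription as their first payment:
WITH first_payment AS (
  SELECT user_id, MIN(created_at) as first_paid_at
  FROM subscriptions
  WHERE status = 'active'
  GROUP BY user_id
)
SELECT
  DATE_TRUNC('month', fp.first_paid_at) as conversion_month,
  PERCENTILE_CONT(0.5) WITHIN GROUP (
    ORDER BY EXTRACT(EPOCH FROM (fp.first_paid_at - u.activated_at)) / 86400
  ) as median_days_to_convert
FROM users u
JOIN first_payment fp ON fp.user_id = u.id
WHERE u.activated_at IS NOT NULL
  AND fp.first_paid_at >= u.activated_at   -- ignore users who paid before activating
GROUP BY conversion_month
ORDER BY conversion_month DESC;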
Metrics 10-12: These predict sustainability.
10. Net Revenue Retention (NRR)
(MRR from existing customers 12 months ago + expansion MRR − churned MRR) / MRR from existing customers 12 months ago × 100
What it is: Are you growing revenue from existing customers faster than you lose it to churn?
Why it matters: NRR >100% means you can grow without new customer acquisition. It's the holy grail of SaaS.
Benchmarks:
Alert threshold: NRR dropping below 100% means churn is outpacing expansion. Growth becomes fully dependent on new customer acquisition.
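One wrinkle: the schema in the implementation section below only stores current MRR, and NRR needs history. Here's a rough sketch assuming a hypothetical mrr_snapshots(user_id, month, mrr) table that records each customer's MRR at the start of every month:
WITH base_mrr AS (
  SELECT user_id, mrr as mrr_12mo_ago
  FROM mrr_snapshots                     -- hypothetical monthly MRR history table
  WHERE month = DATE_TRUNC('month', NOW() - INTERVAL '12 months')
),
current_mrr AS (
  SELECT user_id, mrr as mrr_now
  FROM mrr_snapshots
  WHERE month = DATE_TRUNC('month', NOW())
)
SELECT
  ROUND(100.0 * SUM(COALESCE(c.mrr_now, 0)) / SUM(b.mrr_12mo_ago), 2) as nrr
FROM base_mrr b
LEFT JOIN current_mrr c ON c.user_id = b.user_id;   -- churned customers contribute 0 today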
11. Expansion Revenue Rate
(MRR from upgrades and expansions / Total MRR from existing customers) × 100
What it is: What percentage of your revenue comes from customers upgrading or expanding usage?
Why it matters: Indicates product stickiness and room to grow within accounts.
Benchmarks (monthly):
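The same hypothetical mrr_snapshots table from the NRR sketch above works here; a rough monthly version:
WITH this_month AS (
  SELECT user_id, mrr FROM mrr_snapshots
  WHERE month = DATE_TRUNC('month', NOW())
),
prior_month AS (
  SELECT user_id, mrr FROM mrr_snapshots
  WHERE month = DATE_TRUNC('month', NOW() - INTERVAL '1 month')
)
SELECT
  ROUND(
    100.0 * SUM(GREATEST(t.mrr - p.mrr, 0))    -- only count increases (upgrades/expansion)
    / NULLIF(SUM(t.mrr), 0),
  2) as expansion_revenue_rate
FROM this_month t
JOIN prior_month p ON p.user_id = t.user_id;   -- existing customers only (present in both months)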
12. Churn Rate (by Cohort)
(Customers who churned in month X / Total customers at start of month X) × 100
Critical: Track by cohort (when they signed up), not just aggregate.
What it is: The percentage of customers who cancel.
Why it matters: Different cohorts churn at different rates. A product change in March might only affect March+ cohorts.
Benchmarks (monthly):
Alert threshold: If a recent cohort's month-2 churn is >3 points higher than previous cohorts, investigate what changed.
Now let's build this thing.
Option 1: Amplitude or Mixpanel (£500-2000/month)
Pros:
Cons:
Best for: Early-stage startups without data team
Option 2: Metabase + PostgreSQL (£0-200/month)
Pros:
Cons:
Best for: Technical teams, post-Series A
Option 3: Athenic (£99-299/month)
Pros:
Cons:
Best for: Teams wanting analytics without data engineering
Let me give you the actual queries. They assume a standard schema:
users (id, email, created_at, activated_at, plan)
events (id, user_id, event_name, properties, timestamp)
subscriptions (id, user_id, plan, mrr, status, created_at, updated_at)
Metric #1: Activation Rate (Last 30 Days)
SELECT
DATE_TRUNC('week', u.created_at) as cohort_week,
COUNT(*) as sign_ups,
COUNT(u.activated_at) as activated,
ROUND(100.0 * COUNT(u.activated_at) / COUNT(*), 2) as activation_rate
FROM users u
WHERE u.created_at >= NOW() - INTERVAL '30 days'
GROUP BY cohort_week
ORDER BY cohort_week DESC;
Metric #2: Time to First Value (TTFV)
SELECT
DATE_TRUNC('week', u.created_at) as cohort_week,
PERCENTILE_CONT(0.5) WITHIN GROUP (
ORDER BY EXTRACT(EPOCH FROM (u.activated_at - u.created_at))/3600
) as median_hours_to_activation
FROM users u
WHERE u.activated_at IS NOT NULL
AND u.created_at >= NOW() - INTERVAL '30 days'
GROUP BY cohort_week
ORDER BY cohort_week DESC;
Metric #8: PQL→Customer Conversion Rate
WITH pqls AS (
SELECT DISTINCT user_id, MIN(timestamp) as pql_date
FROM events
WHERE event_name = 'became_pql' -- Your PQL event
AND timestamp >= NOW() - INTERVAL '60 days'
GROUP BY user_id
)
SELECT
DATE_TRUNC('week', pql_date) as week,
COUNT(DISTINCT p.user_id) as total_pqls,
COUNT(DISTINCT s.user_id) as converted,   -- DISTINCT avoids double-counting users with multiple subscription rows
ROUND(100.0 * COUNT(DISTINCT s.user_id) / COUNT(DISTINCT p.user_id), 2) as conversion_rate
FROM pqls p
LEFT JOIN subscriptions s
ON p.user_id = s.user_id
AND s.created_at > p.pql_date
AND s.status = 'active'
GROUP BY week
ORDER BY week DESC;
Metric #12: Cohort Churn Analysis
WITH cohorts AS (
SELECT
user_id,
MIN(created_at) as first_subscribed_at,
DATE_TRUNC('month', MIN(created_at)) as cohort_month
FROM subscriptions
GROUP BY user_id   -- cohort every customer by first subscription; filtering on status = 'active' here would exclude the churned users we're counting
),
churned AS (
SELECT
user_id,
MIN(updated_at) as churn_date
FROM subscriptions
WHERE status = 'cancelled'
GROUP BY user_id
)
SELECT
c.cohort_month,
-- convert the full interval to months (year + month parts), not just the month component
(DATE_PART('year', AGE(ch.churn_date, c.first_subscribed_at)) * 12
 + DATE_PART('month', AGE(ch.churn_date, c.first_subscribed_at))) as months_to_churn,
COUNT(*) as churned_users
FROM cohorts c
INNER JOIN churned ch ON c.user_id = ch.user_id
GROUP BY c.cohort_month, months_to_churn
ORDER BY c.cohort_month DESC, months_to_churn;
Monday morning (15 minutes):
Run your 12-metric dashboard. Compare to last week.
Look for:
Create tickets for:
Thursday afternoon (10 minutes):
Spot-check top 3 metrics:
Has Monday's issue been resolved? Are metrics recovering?
Set up automated alerts (via Slack, email, etc.):
Critical (immediate notification):
Important (daily digest):
Monitor (weekly review):
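As one concrete example of what a critical alert can evaluate, here's a rough query against the same users table as the SQL section above. It flags a week-over-week activation drop of more than 5 points, mirroring Metric #1's threshold; pipe the result into Slack or email with whatever scheduler or BI tool you already use:
WITH weekly AS (
  SELECT
    DATE_TRUNC('week', created_at) as week,
    ROUND(100.0 * COUNT(activated_at) / COUNT(*), 2) as activation_rate
  FROM users
  WHERE created_at >= NOW() - INTERVAL '90 days'
  GROUP BY week
)
SELECT
  week,
  activation_rate,
  LAG(activation_rate) OVER (ORDER BY week) as previous_week_rate,
  CASE
    WHEN LAG(activation_rate) OVER (ORDER BY week) - activation_rate > 5
      THEN 'ALERT: activation dropped >5 points week-over-week'
    ELSE 'ok'
  END as status
FROM weekly
ORDER BY week DESC;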
Let me show you this framework in action.
Company: Clearview (B2B analytics platform, Series A, £800k ARR)
Problem (September): MRR growth stalled at 3-4% month-over-month (down from 12-15% in Q2)
What their dashboard showed:
"We didn't understand why growth slowed. Marketing was delivering leads. Product team said no major bugs."
What the 12-metric framework revealed:
| Metric | July | August | September | Status |
|---|---|---|---|---|
| Activation rate | 41% | 37% | 32% | 🔴 Alert |
| TTFV | 18 hours | 22 hours | 29 hours | 🔴 Alert |
| Week 1 retention | 58% | 52% | 47% | 🔴 Alert |
| PQL rate | 24% | 21% | 18% | 🟡 Warning |
| PQL→Customer | 19% | 18% | 17% | 🟡 Warning |
The diagnosis:
Activation, TTFV, and retention all declining for 3 months. The problem started in July, but didn't impact MRR until September because:
The investigation:
Product team reviewed July changes. Found:
Each change individually seemed minor. Combined, they added friction:
Users were hitting friction and bouncing.
The fix (3 weeks):
Week 1:
Week 2:
Week 3:
Results after 60 days:
| Metric | September (Before) | November (After) | Change |
|---|---|---|---|
| Activation rate | 32% | 40% | +25% |
| TTFV | 29 hours | 19 hours | -34% |
| Week 1 retention | 47% | 56% | +19% |
| PQL rate | 18% | 23% | +28% |
| MRR growth | 3.2% | 11.8% | +269% |
The lesson:
"We'd been looking at the wrong dashboard. MRR told us revenue was slowing. This framework told us exactly why -and three months earlier when we could still fix it." - Alex Kumar, VP Product
Symptom: Your overall metrics look fine but revenue is slipping
Why it happens: Strong historical cohorts mask problems with recent cohorts
Example:
Overall activation rate: 38%
But by cohort:
- June: 44%
- July: 42%
- August: 40%
- September: 34%
- October: 29%
Aggregate looks okay. Trend is terrible.
Fix: Always segment by cohort. Compare recent cohorts to historical baseline.
Symptom: Users "activate" but still churn rapidly
Why it happens: Your activation event is too early in the journey
Bad activation milestones:
Good activation milestones:
Test: Look at users who hit your activation milestone. What's their 30-day retention? If it's <60%, your milestone is wrong.
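A rough way to run that test against the same schema. "Retained at day 30" here means any activity in the week after day 30, which is an assumption to adapt:
SELECT
  ROUND(
    100.0 * COUNT(DISTINCT e.user_id) / NULLIF(COUNT(DISTINCT u.id), 0),
  2) as day_30_retention_of_activated
FROM users u
LEFT JOIN events e
  ON e.user_id = u.id
  -- any activity in days 30-37 after activation counts as retained at day 30
  AND e.timestamp >= u.activated_at + INTERVAL '30 days'
  AND e.timestamp < u.activated_at + INTERVAL '37 days'
WHERE u.activated_at IS NOT NULL
  AND u.activated_at <= NOW() - INTERVAL '37 days'   -- only users whose day-30 window has elapsed
  AND u.activated_at >= NOW() - INTERVAL '120 days';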
Symptom: You fix a problem but metrics don't improve
Why it happens: Some metrics lag by weeks
Example timeline:
Fix: Track leading indicators (activation, early retention) to validate fixes. Don't wait for revenue impact.
Symptom: You have 40 metrics but still miss important trends
Why it happens: Too many numbers = no focus
Fix: Build three dashboards:
Daily dashboard (5 metrics):
Weekly dashboard (12 metrics):
Monthly dashboard (20+ metrics):
Check daily dashboard every day. Weekly on Mondays. Monthly in board meetings.
You've got the framework. Here's your implementation plan:
Day 1 (Today):
Day 2:
Day 3:
Day 4:
Day 5:
Week 2:
The rule: If you're not checking these 12 metrics weekly, you're flying blind. Revenue problems start 60-90 days before they show up in MRR. These metrics give you the early warning system.
Want a pre-built PLG dashboard with automated alerts and cohort analysis? Athenic connects to your product analytics, CRM, and billing systems to give you real-time visibility into the metrics that matter, without writing SQL. See your metrics in 15 minutes →