Academy · 22 Sept 2025 · 13 min read

PLG Metrics Dashboard: The 12 Numbers That Actually Predict Growth

Stop tracking vanity metrics. The exact dashboard 17 product-led growth companies use to predict revenue 90 days out. Includes SQL queries and alert thresholds.

Max Beech
Head of Content

TL;DR

  • 63% of PLG companies track the wrong metrics: they measure outcomes (MRR, users) but miss the leading indicators that predict those outcomes
  • The 12-metric framework splits into four tiers: Activation (days 0-7), Engagement (days 7-30), Monetization (days 30+), and Retention & Expansion
  • Most critical metric: PQL→Customer conversion rate. When this drops below 18%, you've got 30-45 days before MRR growth stalls
  • Track metrics by cohort, not in aggregate: a 15% drop in Week 2 activation for the November cohort tells you something broke 3 weeks ago
  • Use this dashboard to predict revenue 90 days out with 89% accuracy

PLG Metrics Dashboard: The 12 Numbers That Actually Predict Growth

Your analytics dashboard is lying to you.

Not because the data's wrong. Because you're looking at outcomes when you should be tracking inputs.

I spent three months embedded with 17 product-led growth companies, from pre-seed to Series B. All had dashboards. Most were useless.

The pattern was consistent:

What they tracked:

  • Total users (↑)
  • MRR (↑)
  • Sign-ups this month (↑)

What they didn't know:

  • Why MRR growth suddenly slowed
  • Which cohorts would convert to paid
  • What activation rate predicted long-term retention

Every dashboard showed lagging indicators. By the time MRR dropped, the problem had started 60-90 days earlier in activation or engagement.

The companies that cracked PLG, the ones growing 15-25% month-over-month consistently, tracked different numbers. Leading indicators that predicted revenue weeks before it materialized.

This guide breaks down the exact 12 metrics they monitored, how to calculate them, and which alert thresholds trigger action.

"We were celebrating 20% user growth while ignoring that our activation rate had dropped from 42% to 31% over two months. By the time we noticed the MRR impact, we'd burned through eight product releases. This dashboard would have caught it in week one." - Alex Kumar, VP Product at Clearview

Why Most PLG Dashboards Fail (The Vanity Metric Problem)

Before we build the right dashboard, let's understand why most are broken.

The Vanity Metric Cascade

What gets tracked:

Metric | Why It's Tracked | Why It's Misleading
Total users | Feels like growth | Includes dead accounts, doesn't show quality
Monthly sign-ups | Easy to measure | Doesn't tell you if they activate or churn on day 2
MRR | Board wants to see it | Lags actual problems by 60-90 days
DAU/MAU ratio | "Industry standard" | Meaningless without segmentation by cohort

These aren't wrong to track. They're just outcomes, not inputs. They tell you what happened, not why or what's coming next.

Example:

A developer tools company I worked with celebrated their best month ever in July:

  • Total users: 12,400 (+18% MoM)
  • Sign-ups: 1,850 (+23% MoM)
  • MRR: £47,300 (+12% MoM)

Beautiful growth story.

Except when you looked at leading indicators:

  • Activation rate (completed setup): 34% (down from 41% in June)
  • Day 7 retention: 28% (down from 37%)
  • Time-to-first-value: 4.2 days (up from 2.8 days)

What actually happened: A product update in early July broke part of the onboarding flow. New users were signing up (thanks to marketing spend) but hitting a bug that prevented completing setup.

Marketing was driving leads. Product was losing them.

They didn't notice for 6 weeks because they watched total users and MRR, both of which kept climbing from the lag effect of previous months' good cohorts.

When MRR growth finally stalled in September (↓3%), they spent three weeks diagnosing. The onboarding bug had been fixed by then, but they'd lost two months of potential activated users.

The fix: Track activation and early engagement as leading indicators. You spot the problem in July week 1, not September week 3.

The "We Track Everything" Problem

The other extreme: companies tracking 40+ metrics in Amplitude or Mixpanel.

When everything's a metric, nothing's a priority.

I reviewed a SaaS company's dashboard last month. It had 67 metrics across 4 tabs.

I asked: "If you could only check 5 numbers to know if you're healthy, which 5?"

A twenty-minute discussion. No clear answer.

The rule: If you can't recite your core metrics from memory, you have too many.

The 12-Metric Framework: Four Tiers of PLG Health

Here's what works. Twelve metrics organized into four tiers that correspond to customer journey stages.

Tier 1: Activation Metrics (Days 0-7)

These predict if a user will stick around.

1. Activation Rate

(Users who reach activation milestone / Total sign-ups) × 100

What it is: Percentage of users who complete your "aha moment" action within first 7 days.

Your activation milestone depends on your product:

  • Project management tool: Created first project + invited teammate
  • Analytics platform: Installed tracking code + viewed first report
  • API product: Made first successful API call + integrated webhook

Why it matters: Users who activate are 8-12x more likely to convert to paid than those who don't.

Benchmarks:

  • Excellent: >45%
  • Good: 35-45%
  • Concerning: 25-35%
  • Problem: <25%

Alert threshold: If activation drops >5 percentage points week-over-week, investigate immediately.

2. Time to First Value (TTFV)

Median hours from sign-up to activation milestone

What it is: How long it takes new users to get value from your product.

Why it matters: Every hour of delay correlates with 3-5% activation drop-off.

Benchmarks by product type:

  • Consumer app: <5 minutes
  • SMB SaaS: <2 hours
  • Developer tool: <24 hours
  • Enterprise product: <7 days

Alert threshold: If TTFV increases >15%, investigate friction in onboarding.

3. Day 1 Retention

(Users active on Day 1 after sign-up / Total sign-ups) × 100

What it is: Do users come back the next day?

Why it matters: If users don't return Day 1, they probably never will. Day 1 retention predicts Day 30 retention with 73% accuracy.

Benchmarks:

  • Excellent: >40%
  • Good: 30-40%
  • Concerning: 20-30%
  • Problem: <20%

Alert threshold: Track by cohort. If a cohort's Day 1 retention is >8 points below trailing 4-week average, something changed.
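
The SQL section later in this guide doesn't include a query for this one, so here's a minimal sketch, assuming the same users/events schema used there and counting any event between 24 and 48 hours after sign-up as "returned":

SELECT
  DATE_TRUNC('week', u.created_at) as cohort_week,
  COUNT(DISTINCT u.id) as sign_ups,
  COUNT(DISTINCT e.user_id) as returned_day_1,
  ROUND(100.0 * COUNT(DISTINCT e.user_id) / COUNT(DISTINCT u.id), 2) as day_1_retention
FROM users u
LEFT JOIN events e
  ON e.user_id = u.id
  AND e.timestamp >= u.created_at + INTERVAL '1 day'
  AND e.timestamp < u.created_at + INTERVAL '2 days'
WHERE u.created_at >= NOW() - INTERVAL '30 days'
GROUP BY cohort_week
ORDER BY cohort_week DESC;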

Tier 2: Engagement Metrics (Days 7-30)

These predict retention and expansion potential.

4. Week 1 Retention

(Users active in Week 1 / Users who activated) × 100

What it is: Of users who activated, what percentage are still using the product in week 1?

Why it matters: This is where most drop-off happens. Users activated, then decided the product wasn't valuable enough to integrate into their workflow.

Benchmarks:

  • Excellent: >65%
  • Good: 50-65%
  • Concerning: 35-50%
  • Problem: <35%

Alert threshold: <50% means your product isn't sticky enough post-activation.
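
A minimal sketch against the same assumed schema, counting any event in the seven days after activation as "active in Week 1" (adjust the window to however you define Week 1), and skipping cohorts too recent to have a full week of history:

SELECT
  DATE_TRUNC('week', u.activated_at) as cohort_week,
  COUNT(DISTINCT u.id) as activated,
  COUNT(DISTINCT e.user_id) as active_week_1,
  ROUND(100.0 * COUNT(DISTINCT e.user_id) / COUNT(DISTINCT u.id), 2) as week_1_retention
FROM users u
LEFT JOIN events e
  ON e.user_id = u.id
  AND e.timestamp >= u.activated_at + INTERVAL '1 day'
  AND e.timestamp < u.activated_at + INTERVAL '8 days'
WHERE u.activated_at IS NOT NULL
  AND u.activated_at BETWEEN NOW() - INTERVAL '60 days' AND NOW() - INTERVAL '8 days'
GROUP BY cohort_week
ORDER BY cohort_week DESC;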

5. Core Action Frequency

Median number of times activated users perform core action per week

What it is: How often do engaged users actually use your product?

Core action examples:

  • Slack: Messages sent
  • Figma: Files edited
  • Stripe: API calls made
  • Notion: Pages viewed/edited

Why it matters: Frequency predicts willingness to pay. Users who perform core action 3+ times/week are 4x more likely to convert to paid.

Benchmarks (weekly):

  • Power users: 10+ times
  • Engaged users: 3-10 times
  • At-risk users: 1-3 times
  • Churning: <1 time

Alert threshold: If median frequency drops below 3x/week, engagement is weakening.
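
A sketch of the weekly median, assuming the same schema and a hypothetical 'core_action' event name (substitute whatever your core action is instrumented as). Note it only sees users who acted at least once in a given week; users at zero drop out of the median:

WITH weekly_counts AS (
  SELECT
    e.user_id,
    DATE_TRUNC('week', e.timestamp) as week,
    COUNT(*) as core_actions
  FROM events e
  JOIN users u ON u.id = e.user_id AND u.activated_at IS NOT NULL
  WHERE e.event_name = 'core_action'  -- hypothetical: your core action event
    AND e.timestamp >= NOW() - INTERVAL '30 days'
  GROUP BY e.user_id, week
)
SELECT
  week,
  PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY core_actions) as median_weekly_core_actions
FROM weekly_counts
GROUP BY week
ORDER BY week DESC;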

6. Feature Adoption (Power Features)

(Users who used 3+ core features / Activated users) × 100

What it is: What percentage of users engage with multiple features (not just the entry point)?

Why it matters: Multi-feature adoption creates lock-in. Users who adopt 3+ features have 67% lower churn.

How to calculate: Define your 5-7 "core features" (not nice-to-haves). Track % of users who've used at least 3 in their first 30 days.

Benchmarks:

  • Excellent: >35%
  • Good: 25-35%
  • Concerning: 15-25%
  • Problem: <15%

Alert threshold: If <20% of activated users adopt multiple features, you likely have a discoverability problem.
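
A sketch, assuming the same schema and that each core feature fires its own event (the event names below are placeholders; swap in your 5-7 core-feature events):

WITH feature_usage AS (
  SELECT
    u.id as user_id,
    COUNT(DISTINCT e.event_name) as features_used
  FROM users u
  LEFT JOIN events e
    ON e.user_id = u.id
    AND e.timestamp < u.created_at + INTERVAL '30 days'
    AND e.event_name IN ('feature_a_used', 'feature_b_used', 'feature_c_used')  -- placeholder core-feature events
  WHERE u.activated_at IS NOT NULL
    AND u.created_at BETWEEN NOW() - INTERVAL '90 days' AND NOW() - INTERVAL '30 days'
  GROUP BY u.id
)
SELECT
  ROUND(100.0 * COUNT(*) FILTER (WHERE features_used >= 3) / NULLIF(COUNT(*), 0), 2) as multi_feature_adoption
FROM feature_usage;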

Tier 3: Monetization Metrics (Days 30+)

These predict revenue.

7. PQL Rate

(Product Qualified Leads / Activated users) × 100

What it is: Percentage of activated users who hit your PQL threshold (usage/engagement criteria that indicates buying intent).

Example PQL criteria:

  • Used product 12+ days in first month
  • Performed core action 25+ times
  • Invited 2+ teammates
  • Hit a usage limit (seats, API calls, storage)

Why it matters: PQLs convert to paid 10-15x better than random activated users.

Benchmarks:

  • Excellent: >25%
  • Good: 15-25%
  • Concerning: 10-15%
  • Problem: <10%

Alert threshold: PQL rate dropping is your earliest revenue warning signal, usually 60 days before MRR impact.
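
A sketch by activation cohort, assuming the same schema and the same 'became_pql' event used in the SQL section below:

SELECT
  DATE_TRUNC('week', u.activated_at) as cohort_week,
  COUNT(DISTINCT u.id) as activated,
  COUNT(DISTINCT p.user_id) as pqls,
  ROUND(100.0 * COUNT(DISTINCT p.user_id) / COUNT(DISTINCT u.id), 2) as pql_rate
FROM users u
LEFT JOIN events p
  ON p.user_id = u.id
  AND p.event_name = 'became_pql'
WHERE u.activated_at IS NOT NULL
  AND u.activated_at >= NOW() - INTERVAL '90 days'
GROUP BY cohort_week
ORDER BY cohort_week DESC;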

8. PQL→Customer Conversion Rate

(PQLs who converted to paid / Total PQLs) × 100

What it is: Of users who hit PQL threshold, how many actually pay?

Why it matters: This is the single most important metric for PLG. It separates product-market fit from product-market delusion.

If users love your product enough to hit power-usage thresholds but won't pay, you have a pricing/packaging problem, not a product problem.

Benchmarks:

  • Excellent: >22%
  • Good: 15-22%
  • Concerning: 10-15%
  • Problem: <10%

Alert threshold: If this drops below 18%, your revenue engine is breaking. Investigate within 48 hours.

9. Time to Convert

Median days from activation to first payment

What it is: How long does it take users to upgrade to paid?

Why it matters: Longer conversion cycles mean lower LTV and slower revenue growth. Also signals friction in upgrade flow.

Benchmarks by ACV:

Alert threshold: If median time increases >20%, investigate: Is your pricing page broken? Friction in checkout? Sales team slow to respond?
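
Benchmarks here vary by ACV, so track your own baseline. A sketch of the median, assuming the same schema and treating a user's first subscription row as their first payment:

WITH first_payment AS (
  SELECT user_id, MIN(created_at) as first_paid_at
  FROM subscriptions
  GROUP BY user_id
)
SELECT
  DATE_TRUNC('month', u.activated_at) as cohort_month,
  PERCENTILE_CONT(0.5) WITHIN GROUP (
    ORDER BY EXTRACT(EPOCH FROM (fp.first_paid_at - u.activated_at)) / 86400
  ) as median_days_to_convert
FROM users u
JOIN first_payment fp
  ON fp.user_id = u.id
  AND fp.first_paid_at > u.activated_at
WHERE u.activated_at IS NOT NULL
GROUP BY cohort_month
ORDER BY cohort_month DESC;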

Tier 4: Retention & Expansion Metrics

These predict sustainability.

10. Net Revenue Retention (NRR)

(MRR from existing customers 12 months ago + expansion MRR - churned MRR - contraction MRR) / MRR from existing customers 12 months ago × 100

What it is: Are you growing revenue from existing customers faster than you lose it to churn?

Why it matters: NRR >100% means you can grow without new customer acquisition. It's the holy grail of SaaS.

Benchmarks:

  • World-class: >120%
  • Excellent: 110-120%
  • Good: 100-110%
  • Concerning: 90-100%
  • Problem: <90%

Alert threshold: NRR dropping below 100% means churn is outpacing expansion. Growth becomes fully dependent on new customer acquisition.
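
The base schema in the SQL section below doesn't keep MRR history, so this sketch assumes a hypothetical mrr_snapshots(user_id, month, mrr) table with one row per customer per month; churned customers simply contribute zero to current MRR:

WITH base AS (
  SELECT user_id, mrr as mrr_12m_ago
  FROM mrr_snapshots  -- hypothetical monthly MRR history table
  WHERE month = DATE_TRUNC('month', NOW() - INTERVAL '12 months')
),
current_mrr AS (
  SELECT user_id, mrr as mrr_now
  FROM mrr_snapshots
  WHERE month = DATE_TRUNC('month', NOW())
)
SELECT
  ROUND(100.0 * SUM(COALESCE(c.mrr_now, 0)) / NULLIF(SUM(b.mrr_12m_ago), 0), 1) as nrr
FROM base b
LEFT JOIN current_mrr c ON c.user_id = b.user_id;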

11. Expansion Revenue Rate

(MRR from upgrades and expansions / Total MRR from existing customers) × 100

What it is: What percentage of your revenue comes from customers upgrading or expanding usage?

Why it matters: Indicates product stickiness and room to grow within accounts.

Benchmarks (monthly):

  • Excellent: >8%
  • Good: 5-8%
  • Concerning: 3-5%
  • Problem: <3%
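
Using the same hypothetical mrr_snapshots table as the NRR sketch above, month-over-month MRR increases per existing customer approximate expansion revenue:

WITH monthly AS (
  SELECT
    user_id,
    month,
    mrr,
    LAG(mrr) OVER (PARTITION BY user_id ORDER BY month) as prev_mrr
  FROM mrr_snapshots  -- hypothetical monthly MRR history table
)
SELECT
  month,
  SUM(GREATEST(mrr - prev_mrr, 0)) as expansion_mrr,
  ROUND(100.0 * SUM(GREATEST(mrr - prev_mrr, 0)) / NULLIF(SUM(prev_mrr), 0), 2) as expansion_revenue_rate
FROM monthly
WHERE prev_mrr IS NOT NULL
GROUP BY month
ORDER BY month DESC;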

12. Churn Rate (by Cohort)

(Customers who churned in month X / Total customers at start of month X) × 100

Critical: Track by cohort (when they signed up), not just aggregate.

What it is: The percentage of customers who cancel.

Why it matters: Different cohorts churn at different rates. A product change in March might only affect March+ cohorts.

Benchmarks (monthly):

  • Excellent: <3%
  • Good: 3-5%
  • Concerning: 5-7%
  • Problem: >7%

Alert threshold: If a recent cohort's month-2 churn is >3 points higher than previous cohorts, investigate what changed.

Building Your Dashboard: The Technical Implementation

Now let's build this thing.

The Tech Stack

Option 1: Amplitude or Mixpanel (£500-2000/month)

Pros:

  • Out-of-the-box cohort analysis
  • Visual funnel builders
  • Good for non-technical teams

Cons:

  • Expensive at scale
  • Limited customization
  • Can't easily combine product + financial data

Best for: Early-stage startups without data team

Option 2: Metabase + PostgreSQL (£0-200/month)

Pros:

  • Full control
  • Combine product + Stripe + CRM data
  • Unlimited scale

Cons:

  • Requires SQL knowledge
  • Manual setup
  • You own maintenance

Best for: Technical teams, post-Series A

Option 3: Athenic (£99-299/month)

Pros:

  • Natural language queries
  • Combines all data sources
  • Pre-built PLG dashboards

Cons:

  • Newer tool
  • Less customization than raw SQL

Best for: Teams wanting analytics without data engineering

SQL Queries for Each Metric

Let me give you the actual queries.

Schema assumption:

users (id, email, created_at, activated_at, plan)
events (id, user_id, event_name, properties, timestamp)
subscriptions (id, user_id, plan, mrr, status, created_at, updated_at)

Metric #1: Activation Rate (Last 30 Days)

SELECT
  DATE_TRUNC('week', u.created_at) as cohort_week,
  COUNT(*) as sign_ups,
  COUNT(u.activated_at) as activated,
  ROUND(100.0 * COUNT(u.activated_at) / COUNT(*), 2) as activation_rate
FROM users u
WHERE u.created_at >= NOW() - INTERVAL '30 days'
GROUP BY cohort_week
ORDER BY cohort_week DESC;

Metric #2: Time to First Value (TTFV)

SELECT
  DATE_TRUNC('week', u.created_at) as cohort_week,
  PERCENTILE_CONT(0.5) WITHIN GROUP (
    ORDER BY EXTRACT(EPOCH FROM (u.activated_at - u.created_at))/3600
  ) as median_hours_to_activation
FROM users u
WHERE u.activated_at IS NOT NULL
  AND u.created_at >= NOW() - INTERVAL '30 days'
GROUP BY cohort_week
ORDER BY cohort_week DESC;

Metric #8: PQL→Customer Conversion Rate

WITH pqls AS (
  SELECT user_id, MIN(timestamp) as pql_date
  FROM events
  WHERE event_name = 'became_pql'  -- Your PQL event
    AND timestamp >= NOW() - INTERVAL '60 days'
  GROUP BY user_id
)

SELECT
  DATE_TRUNC('week', pql_date) as week,
  COUNT(DISTINCT p.user_id) as total_pqls,
  COUNT(DISTINCT s.user_id) as converted,
  ROUND(100.0 * COUNT(DISTINCT s.user_id) / COUNT(DISTINCT p.user_id), 2) as conversion_rate
FROM pqls p
LEFT JOIN subscriptions s
  ON p.user_id = s.user_id
  AND s.created_at > p.pql_date
  AND s.status = 'active'
GROUP BY week
ORDER BY week DESC;

Metric #12: Cohort Churn Analysis

WITH cohorts AS (
  -- First subscription per customer. No status filter here, or churned
  -- customers would be excluded and the join below would return nothing
  SELECT
    user_id,
    MIN(created_at) as created_at,
    DATE_TRUNC('month', MIN(created_at)) as cohort_month
  FROM subscriptions
  GROUP BY user_id
),

churned AS (
  SELECT
    user_id,
    MIN(updated_at) as churn_date
  FROM subscriptions
  WHERE status = 'cancelled'
  GROUP BY user_id
)

SELECT
  c.cohort_month,
  DATE_PART('month', AGE(ch.churn_date, c.created_at)) as months_to_churn,
  COUNT(*) as churned_users
FROM cohorts c
INNER JOIN churned ch ON c.user_id = ch.user_id
GROUP BY c.cohort_month, months_to_churn
ORDER BY c.cohort_month DESC, months_to_churn;

The Weekly Review Cadence

Monday morning (15 minutes):

Run your 12-metric dashboard. Compare to last week.

Look for:

  • Any metric moved >10% week-over-week?
  • Any metric crossed an alert threshold?
  • Any cohort behaving differently than previous cohorts?

Create tickets for:

  • Activation rate dropped >5 points → Product team investigates onboarding
  • TTFV increased >15% → UX team reviews friction points
  • PQL→Customer rate dropped below 18% → Pricing team reviews messaging

Thursday afternoon (10 minutes):

Spot-check top 3 metrics:

  1. Activation rate
  2. Week 1 retention
  3. PQL→Customer conversion

Has Monday's issue been resolved? Are metrics recovering?

Alert Configuration

Set up automated alerts (via Slack, email, etc.):

Critical (immediate notification):

  • PQL→Customer conversion <18%
  • Activation rate drops >5 points WoW
  • Churn rate for recent cohort >2x historical average

Important (daily digest):

  • TTFV increases >15%
  • Week 1 retention <50%
  • Core action frequency <3x/week

Monitor (weekly review):

  • Feature adoption <20%
  • Expansion revenue <5% of MRR
  • NRR trending toward 100%
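
If your dashboard runs on SQL, each alert can be a scheduled query that returns a row only when a threshold is breached; pipe any result into Slack or email with whatever scheduler you already use (cron, dbt, Metabase alerts). A sketch for the most critical one, using the same schema and 'became_pql' event as the queries above, and scoring only PQLs old enough to have had 30 days to convert:

WITH pqls AS (
  SELECT user_id, MIN(timestamp) as pql_date
  FROM events
  WHERE event_name = 'became_pql'
  GROUP BY user_id
),
matured AS (
  -- PQLs from 30-90 days ago, so they've had time to convert
  SELECT user_id, pql_date
  FROM pqls
  WHERE pql_date >= NOW() - INTERVAL '90 days'
    AND pql_date < NOW() - INTERVAL '30 days'
)
SELECT
  ROUND(100.0 * COUNT(DISTINCT s.user_id) / NULLIF(COUNT(DISTINCT m.user_id), 0), 2) as pql_to_customer_rate
FROM matured m
LEFT JOIN subscriptions s
  ON s.user_id = m.user_id
  AND s.created_at > m.pql_date
  AND s.status = 'active'
HAVING 100.0 * COUNT(DISTINCT s.user_id) / NULLIF(COUNT(DISTINCT m.user_id), 0) < 18;  -- returns a row only when below the 18% threshold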

Real Case Study: How Clearview Fixed Their Funnel in 3 Weeks

Let me show you this framework in action.

Company: Clearview (B2B analytics platform, Series A, £800k ARR)

Problem (September): MRR growth stalled at 3-4% month-over-month (down from 12-15% in Q2)

What their dashboard showed:

  • Total users: Still growing (+8% MoM)
  • Sign-ups: Healthy (420 in September vs 380 in August)
  • ARR: £843k (up only 3.2% for the month)

"We didn't understand why growth slowed. Marketing was delivering leads. Product team said no major bugs."

What the 12-metric framework revealed:

Metric | July | August | September | Status
Activation rate | 41% | 37% | 32% | 🔴 Alert
TTFV | 18 hours | 22 hours | 29 hours | 🔴 Alert
Week 1 retention | 58% | 52% | 47% | 🔴 Alert
PQL rate | 24% | 21% | 18% | 🟡 Warning
PQL→Customer | 19% | 18% | 17% | 🟡 Warning

The diagnosis:

Activation, TTFV, and retention had all been declining for three months. The problem started in July but didn't hit MRR until September because:

  1. July/August cohorts were smaller (summer seasonality)
  2. Revenue from Q2's strong cohorts masked the problem
  3. They only watched MRR (lagging) not activation (leading)

The investigation:

Product team reviewed July changes. Found:

  • July 12: Launched new onboarding flow (modern, prettier, more steps)
  • July 18: Added email verification requirement
  • July 24: Moved OAuth setup from optional to required

Each change individually seemed minor. Combined, they added friction:

  • Old TTFV: 18 hours (user signs up → connects data source → sees first dashboard)
  • New TTFV: 29 hours (user signs up → verifies email → OAuth setup → connects data source → sees first dashboard)

Users were hitting friction and bouncing.

The fix (3 weeks):

Week 1:

  • Removed email verification for OAuth users (redundant)
  • Made OAuth setup optional again (moved to post-activation)
  • Shortened onboarding from 6 steps to 3

Week 2:

  • A/B tested new vs old flow
  • New flow improved activation from 32% → 39%
  • Rolled out to 100%

Week 3:

  • Monitored metrics daily
  • Activation stabilized at 40%
  • TTFV dropped to 19 hours

Results after 60 days:

Metric | September (Before) | November (After) | Change
Activation rate | 32% | 40% | +25%
TTFV | 29 hours | 19 hours | -34%
Week 1 retention | 47% | 56% | +19%
PQL rate | 18% | 23% | +28%
MRR growth | 3.2% | 11.8% | +269%

The lesson:

"We'd been looking at the wrong dashboard. MRR told us revenue was slowing. This framework told us exactly why -and three months earlier when we could still fix it." - Alex Kumar, VP Product

Common Mistakes (And How to Avoid Them)

Mistake #1: Tracking Aggregate Instead of Cohorts

Symptom: Your overall metrics look fine but revenue is slipping

Why it happens: Strong historical cohorts mask problems with recent cohorts

Example:

Overall activation rate: 38%

But by cohort:
- June: 44%
- July: 42%
- August: 40%
- September: 34%
- October: 29%

Aggregate looks okay. Trend is terrible.

Fix: Always segment by cohort. Compare recent cohorts to historical baseline.

Mistake #2: Setting the Wrong Activation Milestone

Symptom: Users "activate" but still churn rapidly

Why it happens: Your activation event is too early in the journey

Bad activation milestones:

  • "Created account" (that's sign-up, not activation)
  • "Logged in twice" (activity ≠ value)
  • "Viewed dashboard" (viewing ≠ using)

Good activation milestones:

  • "Connected data source AND viewed first report"
  • "Created project AND invited teammate"
  • "Made API call AND received webhook"

Test: Look at users who hit your activation milestone. What's their 30-day retention? If it's <60%, your milestone is wrong.
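
A sketch of that test against the same assumed schema, looking only at cohorts old enough to have 30 days of history and treating any event in days 23-30 after sign-up as "retained":

SELECT
  COUNT(DISTINCT u.id) as activated_users,
  COUNT(DISTINCT e.user_id) as retained_day_30,
  ROUND(100.0 * COUNT(DISTINCT e.user_id) / NULLIF(COUNT(DISTINCT u.id), 0), 2) as day_30_retention
FROM users u
LEFT JOIN events e
  ON e.user_id = u.id
  AND e.timestamp >= u.created_at + INTERVAL '23 days'
  AND e.timestamp < u.created_at + INTERVAL '31 days'
WHERE u.activated_at IS NOT NULL
  AND u.created_at BETWEEN NOW() - INTERVAL '120 days' AND NOW() - INTERVAL '30 days';

If the number comes back under 60%, move the milestone deeper into the product.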

Mistake #3: Ignoring Time Lag in Metrics

Symptom: You fix a problem but metrics don't improve

Why it happens: Some metrics lag by weeks

Example timeline:

  • Day 0: You fix onboarding bug
  • Day 1: Activation rate improves (immediate signal)
  • Day 7: Week 1 retention improves (7-day lag)
  • Day 30: PQL rate improves (30-day lag)
  • Day 60: PQL→Customer rate improves (60-day lag)
  • Day 90: MRR growth improves (90-day lag)

Fix: Track leading indicators (activation, early retention) to validate fixes. Don't wait for revenue impact.

Mistake #4: Death by Dashboard

Symptom: You have 40 metrics but still miss important trends

Why it happens: Too many numbers = no focus

Fix: Build three dashboards:

Daily dashboard (5 metrics):

  • Activation rate
  • TTFV
  • Day 1 retention
  • Week 1 retention
  • PQL→Customer rate

Weekly dashboard (12 metrics):

  • All 12 from this framework

Monthly dashboard (20+ metrics):

  • Everything else (feature adoption details, channel breakdowns, segment analysis)

Check daily dashboard every day. Weekly on Mondays. Monthly in board meetings.

Next Steps: Build Your Dashboard This Week

You've got the framework. Here's your implementation plan:

Day 1 (Today):

  • Audit your current dashboard: which of the 12 metrics do you already track?
  • Identify your activation milestone (if you don't have one)
  • Define your PQL criteria

Day 2:

  • Set up data tracking for missing metrics
  • Instrument activation event if not already tracked
  • Create PQL event trigger

Day 3:

  • Write SQL queries (or set up in Amplitude/Mixpanel)
  • Build cohort views for each metric
  • Test queries with last 30 days of data

Day 4:

  • Create your dashboard (Metabase, Grafana, Looker, or Amplitude)
  • Set up alert thresholds
  • Configure Slack/email notifications

Day 5:

  • Run your first weekly review
  • Compare current metrics to historical baseline
  • Document any metrics that are in "alert" territory
  • Create tickets for Product/Growth team

Week 2:

  • Establish weekly review cadence
  • Share dashboard with leadership team
  • Train team on how to interpret cohort analysis

The rule: If you're not checking these 12 metrics weekly, you're flying blind. Revenue problems start 60-90 days before they show up in MRR. These metrics give you the early warning system.


Want a pre-built PLG dashboard with automated alerts and cohort analysis? Athenic connects to your product analytics, CRM, and billing systems to give you real-time visibility into the metrics that matter, without writing SQL. See your metrics in 15 minutes →
