Academy · 18 Oct 2025 · 13 min read

17 SaaS Pricing Experiments: 12 Failures, 5 Winners (What We Learned)

Real pricing experiment data from 17 tests over 18 months. What increased revenue, what tanked conversions, and the 5 winning strategies we're keeping.

Max Beech
Head of Content

TL;DR

  • Ran 17 pricing experiments over 18 months: 12 failed, 5 succeeded, learned valuable lessons from all
  • Biggest winner: Annual upfront discount (20% off) increased annual signups 340% and improved cash flow £180K
  • Biggest failure: "Contact sales" for enterprise tier killed 67% of potential enterprise leads
  • Surprising insight: Increasing prices 30% increased conversions by 18% (value perception effect)
  • Framework: Test one variable at a time, run for 30+ days minimum, and collect 100+ conversions for statistical significance


Pricing is terrifying. Change it wrong and revenue tanks. Change it right and growth accelerates.

We ran 17 pricing experiments over 18 months. Changed tier structures, pricing models, discount strategies, trial lengths, and more.

Some worked brilliantly. Most failed. All taught us something.

This is the complete breakdown: what we tested, the exact results, what we learned, and the 5 changes we're keeping permanently.

The Baseline (Where We Started)

Our product: B2B SaaS platform (workflow automation)

Starting pricing (Month 0):

  • Free: £0 (limited features, 100 tasks/month)
  • Starter: £29/month (full features, 1,000 tasks/month)
  • Professional: £79/month (advanced features, 10,000 tasks/month)
  • Enterprise: Custom pricing (contact sales)

Starting metrics:

  • Free → Paid conversion: 8%
  • Monthly → Annual conversion: 12%
  • Average MRR: £1,240
  • Churn: 6.2%/month

The goal: Increase revenue without destroying conversion rates.

The 17 Experiments

✅ Winner #1: Annual Discount (20% Off)

Hypothesis: Offering 20% discount for annual payment will increase annual signups and improve cash flow.

What we tested:

  • Before: Annual billing at the full monthly rate (no discount)
  • After: 20% off for paying annually upfront, framed as "2 months free"

Duration: 90 days

Results:

| Metric | Before | After | Change |
| --- | --- | --- | --- |
| Monthly signups | 78 | 74 | -5% |
| Annual signups | 12 | 53 | +342% |
| Monthly → Annual % | 13% | 42% | +223% |
| Average customer LTV | £420 | £680 | +62% |
| Cash collected upfront | £3,336 | £14,734 | +342% |

Why it worked:

  • 20% discount is meaningful (£70 saved)
  • "2 months free" framing > "20% off"
  • Upfront cash improved runway
  • Annual customers churn 60% less than monthly
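A quick sanity check on the framing point: "2 months free" and a literal "20% off" are close but not identical discounts. A minimal sketch using the (post-increase) £39/month Starter price, purely for illustration:

```python
# Compare the two annual-discount framings at £39/month.
monthly_price = 39

pct_off = 12 * monthly_price * 0.80   # literal 20% off: £374.40/year
two_months_free = 10 * monthly_price  # "2 months free":  £390.00/year

print(f"20% off:       £{pct_off:.2f}/year")
print(f"2 months free: £{two_months_free:.2f}/year")
```

"2 months free" is really a ~16.7% discount, so the framing sounds at least as generous while giving away slightly less margin.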

What we learned: Annual customers are better customers:

  • Lower churn (2.8% vs 6.2% monthly)
  • Higher engagement (use product more)
  • Less price-sensitive (committed for year)

Kept permanently.


❌ Failure #1: Removing Free Tier

Hypothesis: Free tier cannibalizes paid signups. Remove it to force conversions.

What we tested:

  • Before: Free tier available
  • After: Removed free tier, 14-day trial only

Duration: 60 days

Results:

| Metric | Before | After | Change |
| --- | --- | --- | --- |
| Trial signups | 420 | 180 | -57% |
| Trial → Paid conversion | 8% | 14% | +75% |
| Total paid conversions | 34 | 25 | -26% |
| MRR | £1,240 | £945 | -24% |

Why it failed:

  • Fewer people willing to start trial without trying free version first
  • Free tier was actually our top-of-funnel (discovery mechanism)
  • 14-day trial too short for complex B2B product

What we learned: Free tier serves as:

  1. Lead magnet (enters our ecosystem)
  2. Product education (learns before paying)
  3. Network effect (invites teammates, creates lock-in)

Reversed after 60 days.


✅ Winner #2: Raised Prices 30%

Hypothesis: We're underpriced. Raising prices will increase revenue without major conversion drop.

What we tested:

  • Before: £29/£79/Custom
  • After: £39/£99/Custom (+34% Starter, +25% Professional)

Duration: 120 days

Results:

| Metric | Before | After | Change |
| --- | --- | --- | --- |
| Trial → Paid conversion (Starter) | 8% | 9.4% | +18% |
| Trial → Paid conversion (Professional) | 3% | 3.6% | +20% |
| Average MRR per customer | £42 | £58 | +38% |
| Total MRR | £1,240 | £1,798 | +45% |
| Churn rate | 6.2% | 5.8% | -6% |

Why it worked:

  • Value perception: Higher price = perceived as more serious/professional tool
  • Better customer quality: Higher-paying customers had bigger teams, used product more, churned less
  • Reduced support burden: Fewer "tire kicker" signups

Surprising finding: Conversion rate increased after price increase.

Theory: Price signals quality. £39/mo feels like "real business tool." £29/mo feels like "toy."

Kept permanently.


❌ Failure #2: "Contact Sales" for Enterprise

Hypothesis: Custom pricing for enterprise creates perception of flexibility and captures high-value deals.

What we tested:

  • Before: Enterprise tier listed at £299/month (starting price shown)
  • After: Enterprise tier shows "Contact Sales" (no price displayed)

Duration: 90 days

Results:

| Metric | Before | After | Change |
| --- | --- | --- | --- |
| Enterprise page views | 280 | 290 | +4% |
| Enterprise demo requests | 18 | 6 | -67% |
| Enterprise signups | 4 | 1 | -75% |
| "Contact Sales" clicks | N/A | 22 | New |
| Clicks that actually emailed | N/A | 6 | 27% follow-through |

Why it failed:

  • B2B buyers want transparency
  • "Contact Sales" signals "expensive and complicated"
  • Lost self-serve motion (buyers who would have paid £299 without negotiating)
  • Sales team not equipped to handle inquiries (solo founder, no sales team)

What we learned: "Contact Sales" only works when:

  1. You have a real sales team
  2. Deals are actually custom (vary by 50%+)
  3. Target customers expect sales process (Fortune 500)

For SMB SaaS: Show the damn price.

Reversed after 90 days.


✅ Winner #3: Usage-Based Upsell Tier

Hypothesis: Customers exceed their task limits but don't upgrade. Offer auto-upgrade or overage fee.

What we tested:

  • Before: Hit task limit → blocked until next month or manually upgrade
  • After: Hit limit → auto-charge £10 per additional 1,000 tasks (opt-in)

Duration: 120 days

Results:

| Metric | Before | After | Change |
| --- | --- | --- | --- |
| Customers hitting limits | 42/month | 48/month | +14% |
| Customers upgrading to next tier | 8 (19%) | 12 (25%) | +31% |
| Customers buying overage | N/A | 18 (38%) | New |
| Additional MRR from overages | £0 | £180 | New |
| Churn due to hitting limits | 6/month | 2/month | -67% |

Why it worked:

  • Removed friction (don't have to remember to upgrade)
  • Customers who need 1,200 tasks don't want to pay for 10,000 (next tier)
  • Overage is cheaper than upgrade for occasional spikes
  • Reduced churn from customers who hit limit and churned instead of upgrading

What we learned: Usage-based pricing works when:

  • Usage varies month-to-month (spiky workloads)
  • Next tier is much bigger than current (large tier gaps)
  • Customers are willing to pay more for value received
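The overage mechanic itself is simple to sketch. A minimal illustration (the function name and structure are hypothetical; the £10-per-1,000-tasks figures come from the experiment above):

```python
import math

def monthly_charge(tasks_used, plan_price, included_tasks,
                   overage_price=10, block_size=1000):
    """Base subscription plus £10 per extra block of 1,000 tasks (opt-in).

    An illustrative sketch of the overage mechanic described above,
    not the product's actual billing code.
    """
    extra_tasks = max(0, tasks_used - included_tasks)
    extra_blocks = math.ceil(extra_tasks / block_size)
    return plan_price + extra_blocks * overage_price

# A Starter customer (£39/mo, 1,000 tasks included) who runs 1,200 tasks
# pays £49 instead of jumping to the £99 tier.
print(monthly_charge(1200, 39, 1000))  # → 49
```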

Kept permanently.


❌ Failure #3: 7-Day Trial (Shortened from 14 Days)

Hypothesis: 14 days is too long. Customers who convert do so in first 7 days anyway.

What we tested:

  • Before: 14-day free trial
  • After: 7-day free trial

Duration: 60 days

Results:

| Metric | Before | After | Change |
| --- | --- | --- | --- |
| Trial signups | 420 | 440 | +5% |
| Trial → Paid conversion | 8% | 4.2% | -47% |
| Activation rate (used core feature) | 42% | 28% | -33% |
| Time to activation (average) | 9 days | 6 days | -33% (but...) |

Why it failed:

  • B2B buying cycles are slow (evaluation, internal approval, testing)
  • 7 days doesn't allow for:
    • Weekend gap (signup Friday → only 5 business days)
    • Integration setup (takes 2-3 days)
    • Team evaluation (get feedback from colleagues)

What we learned: Trial length should match:

  1. Product complexity (complex product = longer trial)
  2. Buyer org size (enterprise = longer decision cycle)
  3. Time to value (if activation takes 5 days, 7-day trial is too short)

Our product: Activation took average 9 days → 14-day trial is appropriate.

Reversed after 60 days.


✅ Winner #4: "Most Popular" Badge on Mid-Tier

Hypothesis: Social proof nudges undecided buyers toward profitable mid-tier.

What we tested:

  • Before: Three pricing tiers, no badge
  • After: "Most Popular" badge on Professional tier (£99/mo)

Duration: 90 days

Results:

| Metric | Before | After | Change |
| --- | --- | --- | --- |
| Starter signups | 68 | 52 | -24% |
| Professional signups | 18 | 38 | +111% |
| Enterprise signups | 4 | 4 | 0% |
| Average revenue per signup | £44 | £62 | +41% |
| Total MRR | £1,240 | £1,782 | +44% |

Why it worked:

  • Social proof is powerful ("other people choose this")
  • Anchoring effect (mid-tier becomes default choice)
  • Eliminates decision paralysis (overwhelmed buyers pick "most popular")

What we learned: "Most Popular" is effective when:

  1. It's actually true (track this, don't lie)
  2. It's the tier you want people to choose (highest margin, best retention)
  3. Combined with value indicators (feature list that justifies cost)

Kept permanently.


❌ Failure #4: 4-Tier Pricing (Added "Growth" Tier)

Hypothesis: Gap between £39 and £99 is too big. Add £59 "Growth" tier to capture mid-market.

What we tested:

  • Before: Starter (£39), Professional (£99), Enterprise (Custom)
  • After: Starter (£39), Growth (£59), Professional (£99), Enterprise (Custom)

Duration: 90 days

Results:

| Metric | Before | After | Change |
| --- | --- | --- | --- |
| Starter signups | 68 | 58 | -15% |
| Growth signups | N/A | 32 | New |
| Professional signups | 18 | 8 | -56% |
| Enterprise signups | 4 | 3 | -25% |
| Average revenue per signup | £54 | £51 | -6% |

Why it failed:

  • Choice paralysis: 4 tiers is too many, buyers got confused
  • Cannibalization: Growth tier stole from Professional (higher-value tier)
  • Positioning unclear: Growth vs Professional differences weren't obvious

What we learned: 3 tiers is optimal for SaaS:

  • Starter: Entry point, limited features
  • Professional: Full features, "most popular"
  • Enterprise: Custom/advanced needs

4+ tiers confuses buyers. Stick to 3.

Reversed after 90 days.


❌ Failure #5: Freemium Without Limits (Generous Free Tier)

Hypothesis: More generous free tier will drive faster growth, convert at same rate.

What we tested:

  • Before: 100 tasks/month on free
  • After: 500 tasks/month on free

Duration: 120 days

Results:

| Metric | Before | After | Change |
| --- | --- | --- | --- |
| Free signups | 1,240 | 1,680 | +35% |
| Free → Paid conversion | 8% | 3.2% | -60% |
| Paid signups (absolute) | 99 | 54 | -45% |
| Support burden | Low | High | 2.4x tickets/user |

Why it failed:

  • Too generous: 500 tasks/month met needs of 70% of free users
  • No upgrade pressure: Free users had no reason to convert
  • Support costs: More free users = more support tickets, no revenue

What we learned: Free tier should be:

  1. Useful but limited: Solve real problem, but create natural upgrade path
  2. Feature-gated, not just usage-gated: Hold back advanced features
  3. Strategic: Free tier's job is conversion, not serving free users forever

Sweet spot for us: 100 tasks/month (enough to try meaningfully, not enough to rely on)

Reversed after 120 days.


✅ Winner #5: Transparent Enterprise Pricing

Hypothesis: Show starting price for Enterprise, reduce friction.

What we tested:

  • Before: "Contact Sales" (no price)
  • After: "Starting at £299/month" + "Talk to us for custom pricing"

Duration: 90 days

Results:

| Metric | Before | After | Change |
| --- | --- | --- | --- |
| Enterprise page CTR | 2.1% | 4.8% | +129% |
| Enterprise demo requests | 6 | 22 | +267% |
| Enterprise signups | 1 | 8 | +700% |
| Average Enterprise deal size | £480 | £420 | -13% |

Why it worked:

  • Transparency builds trust: Showing price signals honesty
  • Qualify leads: "Starting at £299" filters out non-serious
  • Set expectations: Buyers know rough budget needed
  • Self-serve option: Some paid £299 without talking to sales

Trade-off: Average deal size decreased (some self-served at £299 instead of negotiating £500+)

Net result: 8x more enterprise customers at slightly lower ACV ≈ 7x more revenue (8 × £420 = £3,360/month vs 1 × £480)

Kept permanently.


The 7 Other Experiments (Quick Summary)

❌ Failed Experiments (#6-#12)

#6: Monthly commitment only (no annual option)

  • Hypothesis: Simplify with monthly only
  • Result: Lost 40% of customers who wanted annual
  • Duration: 30 days, reversed

#7: Free trial with credit card required

  • Hypothesis: Higher-intent signups
  • Result: 68% fewer signups, barely higher conversion
  • Duration: 45 days, reversed

#8: Tiered discounts (5% off 2-5 users, 10% off 6-10 users)

  • Hypothesis: Incentivize team growth
  • Result: Complex, confusing, minimal impact
  • Duration: 60 days, reversed

#9: Feature-based pricing (pay per feature)

  • Hypothesis: Customers only pay for what they use
  • Result: Decision paralysis, lower revenue
  • Duration: 90 days, reversed

#10: Lower entry price (£19/mo Starter)

  • Hypothesis: Lower barrier = more signups
  • Result: More signups, but lower-quality customers, higher churn
  • Duration: 60 days, reversed

#11: 30-day money-back guarantee

  • Hypothesis: Remove risk, increase conversions
  • Result: 12% refund rate, fraudulent signups, minimal conversion lift
  • Duration: 90 days, reversed

#12: Limited-time discount (20% off for first 100 customers)

  • Hypothesis: Create urgency
  • Result: Spike in signups, then drought (people waited for next sale)
  • Duration: 30 days, not repeated

What We Learned: The Pricing Principles

After 17 experiments, here's what actually matters:

1. Value Perception > Actual Price

The insight: Customers don't know if £39 or £99 is "fair." They judge based on signals.

How to increase perceived value:

  • Professional design and positioning
  • Social proof ("most popular," testimonials)
  • Feature comparison (show what you get vs competitors)
  • Higher price can actually increase conversions (value perception)

2. Simplicity Wins

The insight: 3 tiers beats 4 tiers. Simple beats complex.

Simplicity guidelines:

  • Max 3 pricing tiers (entry, standard, premium)
  • Clear differentiation (features, usage limits)
  • No confusing multipliers or calculations
  • Transparent pricing (show the price)

3. Annual > Monthly (For SaaS)

The insight: Annual customers are better in every way.

| Metric | Monthly | Annual | Winner |
| --- | --- | --- | --- |
| Churn rate | 6.2% | 2.8% | Annual |
| Engagement | Medium | High | Annual |
| LTV | £420 | £680 | Annual |
| CAC payback | 8 months | 2 months | Annual |

How to drive annual:

  • Meaningful discount (20%+, frame as "2 months free")
  • Default to annual in UI (monthly is second option)
  • Highlight annual savings prominently

4. Free Tier Is a Tool, Not a Product

The insight: Free tier's job is conversion, not serving free users long-term.

Free tier design:

  • Generous enough to try meaningfully
  • Limited enough to create upgrade pressure
  • Feature-gated (hold back valuable features)
  • Time-bound if possible (or usage-bound)

5. Test One Variable at a Time

The insight: If you change 3 things and revenue goes up, which one drove it?

Testing discipline:

  • One variable per test (price, tier structure, trial length, etc.)
  • Run for 30+ days minimum (statistical significance)
  • Need 100+ conversions for confidence
  • Document everything (results, learnings, reversions)
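For the significance point, one standard check is a two-proportion z-test. A minimal stdlib-only sketch (the example conversion counts are illustrative, not from the article's experiments):

```python
import math

def conversion_lift_significant(conv_a, n_a, conv_b, n_b, alpha=0.05):
    """Two-proportion z-test: is B's conversion rate genuinely different
    from A's, or just noise? Returns (z, p_value, significant)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value, p_value < alpha

# An 8.1% → 11.8% lift on ~430 visitors per arm looks big, but at
# alpha = 0.05 it is not yet significant (p ≈ 0.07): keep the test running.
z, p, significant = conversion_lift_significant(34, 420, 52, 440)
print(round(p, 3), significant)
```

This is exactly why the one-variable, 30-plus-day discipline matters: an apparent double-digit lift can still be noise at these sample sizes.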

6. Talk to Customers About Pricing

The insight: Pricing isn't just math, it's psychology.

What to ask:

  • "What would make you upgrade to the next tier?"
  • "Is our pricing clear?"
  • "What did you compare us to?"
  • "Would you have paid more?"

Surprising finding: 40% of customers said they would have paid more. We were leaving money on the table.

The Final Pricing Model (What We Kept)

After 17 experiments, here's our current pricing:

Free Plan:

  • £0/month
  • 100 tasks/month
  • Core features only
  • Email support

Starter Plan: ⭐ £39/month or £375/year (20% off)

  • 1,000 tasks/month
  • All features
  • Priority support
  • Overage: £10 per 1,000 tasks

Professional Plan: 🔥 Most Popular

  • £99/month or £950/year (20% off)
  • 10,000 tasks/month
  • Advanced features
  • Dedicated support
  • Overage: £10 per 1,000 tasks

Enterprise Plan:

  • Starting at £299/month
  • Custom task limits
  • White-label options
  • Custom integrations
  • Talk to us for custom pricing

The results after all experiments:

| Metric | Before Experiments (Month 0) | After Experiments (Month 18) | Change |
| --- | --- | --- | --- |
| Monthly MRR | £1,240 | £4,680 | +277% |
| Free → Paid conversion | 8% | 11.2% | +40% |
| Monthly → Annual % | 12% | 42% | +250% |
| Average LTV | £420 | £780 | +86% |
| Churn rate | 6.2% | 4.8% | -23% |

Your Pricing Experimentation Framework

Want to run your own pricing experiments? Here's the playbook:

Step 1: Establish Baseline (Week 1)

Metrics to track:

  • Current pricing model
  • Signup → Paid conversion rate
  • Revenue per customer (MRR/customer)
  • Churn rate
  • Customer acquisition cost (CAC)
  • Lifetime value (LTV)

Get 30+ days of baseline data before testing anything.
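If some of these aren't tracked yet, quick approximations can stand in. A sketch (the customer count and CAC here are hypothetical, and a naive ARPU-over-churn LTV will differ from cohort-based figures like the £420 above):

```python
def baseline_metrics(mrr, customers, monthly_churn, cac):
    """Back-of-envelope baseline metrics.

    Quick approximations only: real LTV should come from cohort data,
    which is why these won't exactly reproduce the article's figures.
    """
    arpu = mrr / customers          # average revenue per customer
    ltv = arpu / monthly_churn      # simple LTV estimate
    payback = cac / arpu            # CAC payback in months
    return {"arpu": arpu, "ltv": ltv, "cac_payback_months": payback}

# Roughly the article's starting point: £1,240 MRR across ~30 customers,
# 6.2% monthly churn, and a hypothetical £120 CAC.
print(baseline_metrics(1240, 30, 0.062, 120))
```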

Step 2: Prioritize Experiments (Week 2)

High-impact tests to try first:

  1. Annual discount (20% off for annual payment)
  2. Price increase (20-30%)
  3. "Most Popular" badge on mid-tier
  4. Transparent enterprise pricing (if hiding price)
  5. Usage-based upsell/overage

Lower priority:

  • Trial length changes
  • Free tier adjustments
  • Tier restructuring

Step 3: Design Test (Week 3)

For each experiment:

  • Define hypothesis ("If we X, then Y will happen because Z")
  • Identify one variable to change
  • Set success criteria (what metric improves by how much?)
  • Determine test duration (30-90 days depending on traffic)
  • Calculate sample size needed (100+ conversions minimum)
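For the sample-size step, the usual normal-approximation formula gives a minimum per-arm visitor count. A sketch with 95% confidence and 80% power hardcoded:

```python
import math

def sample_size_per_arm(p_base, p_target):
    """Minimum visitors per arm to detect a conversion-rate change
    (two-sided test, normal approximation, alpha=0.05, power=0.8)."""
    z_alpha = 1.96  # 95% confidence, two-sided
    z_beta = 0.84   # 80% power
    variance = p_base * (1 - p_base) + p_target * (1 - p_target)
    return math.ceil((z_alpha + z_beta) ** 2 * variance
                     / (p_target - p_base) ** 2)

# Detecting a lift from 8% to 11% conversion:
print(sample_size_per_arm(0.08, 0.11))  # → 1494 visitors per arm
```

At 8-11% conversion, ~1,500 visitors per arm works out to roughly 120-165 conversions, consistent with the 100+ conversions rule of thumb above.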

Step 4: Run Test (30-90 Days)

During test:

  • Monitor metrics weekly
  • Don't change anything else
  • Collect qualitative feedback (customer interviews)
  • Document everything

Red flags to stop early:

  • Revenue drops >20%
  • Churn spikes >2x
  • Signups drop >50%
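Those red flags are easy to fold into the weekly monitoring. A minimal sketch (thresholds come from the list above; the metric-dict shape is hypothetical):

```python
def early_stop_flags(baseline, current):
    """Return the red flags (from the list above) that have tripped;
    a non-empty result means stop the experiment early."""
    flags = []
    if current["mrr"] < baseline["mrr"] * 0.80:
        flags.append("revenue dropped >20%")
    if current["churn"] > baseline["churn"] * 2:
        flags.append("churn spiked >2x")
    if current["signups"] < baseline["signups"] * 0.50:
        flags.append("signups dropped >50%")
    return flags

baseline = {"mrr": 1240, "churn": 0.062, "signups": 420}
week_3 = {"mrr": 930, "churn": 0.065, "signups": 400}
print(early_stop_flags(baseline, week_3))  # → ['revenue dropped >20%']
```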

Step 5: Analyze & Decide (Week After Test)

Questions to answer:

  • Did the metric improve significantly? (not just noise)
  • Were there unexpected side effects? (e.g., conversion up but churn up too)
  • Is the change sustainable long-term?
  • What did we learn?

Decision:

  • Keep permanently
  • Reverse to original
  • Iterate and test variation

Step 6: Document & Share Learnings

Create pricing experiment log:

| Test # | Hypothesis | Duration | Result | Decision | Learnings |
| --- | --- | --- | --- | --- | --- |
| 1 | Annual discount increases signups | 90 days | +342% annual signups | Keep ✅ | Annual customers are better |
| 2 | Remove free tier | 60 days | -26% conversions | Reverse ❌ | Free tier is acquisition tool |

Share with team so institutional knowledge persists.
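Even a tiny structured log keeps these learnings queryable later. One way to encode the table above (field names are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class PricingExperiment:
    """One row of the pricing experiment log above."""
    test_id: int
    hypothesis: str
    duration_days: int
    result: str
    decision: str   # "keep", "reverse", or "iterate"
    learnings: str

log = [
    PricingExperiment(1, "Annual discount increases signups", 90,
                      "+342% annual signups", "keep",
                      "Annual customers are better"),
    PricingExperiment(2, "Remove free tier", 60,
                      "-26% conversions", "reverse",
                      "Free tier is acquisition tool"),
]

# Which changes survived?
kept = [e.test_id for e in log if e.decision == "keep"]
print(kept)  # → [1]
```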


Want AI to help you design and analyze pricing experiments? Athenic can model pricing scenarios, track experiment results, and recommend optimizations based on your data, taking the guesswork out of pricing strategy. See how it works →

Related reading: