Your pricing hasn't changed in two years. It's based on what seemed reasonable when you launched: three tiers, per-user pricing, annual discounts. Competitors charge roughly the same, so it feels validated.
But when you analyze product usage data, you discover something surprising. Your highest-value customers barely use the features you charge premium for. Meanwhile, a feature you include for free is the primary driver of customer retention and expansion.
Your pricing model is completely disconnected from how customers actually get value from your product.
This is more common than you'd think. Most B2B pricing is designed based on market research, competitive positioning, and business model assumptions. It's rarely informed by actual usage data showing what customers value and what drives willingness to pay.
After helping five B2B companies redesign pricing based on product analytics, I've learned that usage data reveals pricing opportunities that market research misses entirely.
Here's how to use analytics to make pricing decisions that align with actual customer value.
The Value Metric Analysis
Your pricing metric (what you charge for—users, storage, API calls, seats) should correlate with the value customers receive. If it doesn't, you're leaving money on the table or charging for the wrong thing.
Step 1: Identify potential value metrics
List all the things you could theoretically charge for:
- Number of users/seats
- Amount of data/storage/processing
- Number of projects/workspaces/accounts
- Usage volume (API calls, reports generated, emails sent)
- Outcome achieved (deals closed, leads generated, savings realized)
Each represents a different dimension of customer value.
Step 2: Analyze correlation with customer value
For each potential metric, run correlation analysis against outcomes you know indicate value:
- Retention rate
- Net revenue retention (expansion revenue)
- Customer satisfaction/NPS
- Contract value at renewal
Example findings:
| Potential Metric | Correlation with NRR | Correlation with Retention |
|---|---|---|
| Number of users | 0.23 (weak) | 0.31 (weak) |
| Data processed (GB) | 0.71 (strong) | 0.68 (strong) |
| Reports generated | 0.54 (moderate) | 0.49 (moderate) |
| Integrations connected | 0.82 (very strong) | 0.79 (very strong) |
This data tells a clear story: customers who connect more integrations expand revenue at much higher rates than customers who just add users. Your pricing should reflect that.
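The correlation pass above can be sketched in a few lines of Python. The per-account records and metric names here are hypothetical; in practice you'd pull usage counts and net revenue retention from your warehouse.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-account data: (seats, integrations connected, NRR).
customers = [
    (5, 1, 0.92),
    (40, 2, 1.01),
    (12, 6, 1.24),
    (8, 5, 1.18),
    (60, 3, 1.05),
    (15, 8, 1.31),
]

seats = [c[0] for c in customers]
integrations = [c[1] for c in customers]
nrr = [c[2] for c in customers]

print(f"seats vs NRR:        {pearson(seats, nrr):+.2f}")
print(f"integrations vs NRR: {pearson(integrations, nrr):+.2f}")
```

Run the same calculation for every candidate metric against every outcome (retention, NRR, contract value at renewal) to fill in a table like the one above.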
Step 3: Validate willingness to pay
Correlation with retention shows what customers value. But will they pay for it?
Survey high-usage customers: "If we charged based on [value metric] instead of [current metric], would that feel more aligned with the value you get?"
Run pricing experiments in specific segments or geographies. Test integration-based pricing with new customers and measure conversion rates vs. user-based pricing.
If customers who use the value metric heavily also have higher LTV and express willingness to pay for it, you've found your pricing metric.
The Feature Value Hierarchy
Not all features are created equal. Some drive retention and expansion. Others are table stakes. A few are rarely used but critical when they're needed. Your pricing tiers should reflect this hierarchy.
Step 1: Classify features by impact
For each major feature, calculate:
- Adoption rate: What % of customers use this feature?
- Retention impact: Do users who adopt this retain better?
- Expansion correlation: Do users who adopt this expand revenue more often?
This creates a feature value matrix:
| Feature | Adoption Rate | Retention Lift | Expansion Correlation |
|---|---|---|---|
| Basic reporting | 94% | +12 pts | 1.1x |
| Custom dashboards | 67% | +34 pts | 2.3x |
| API access | 23% | +41 pts | 3.1x |
| Mobile app | 51% | +8 pts | 1.0x |
| Advanced filters | 39% | +28 pts | 1.8x |
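A matrix like this can be computed directly from account records. The sketch below assumes a simple per-account shape (features used, retained, expanded); your real data model will differ.

```python
def feature_value_matrix(customers, features):
    """Per feature: adoption rate, retention lift (percentage points),
    and expansion ratio (adopters' expansion rate vs non-adopters')."""
    def rate(group, key):
        return sum(c[key] for c in group) / len(group) if group else 0.0

    matrix = {}
    for f in features:
        adopters = [c for c in customers if f in c["features"]]
        others = [c for c in customers if f not in c["features"]]
        other_exp = rate(others, "expanded")
        matrix[f] = {
            "adoption": len(adopters) / len(customers),
            "retention_lift_pts": 100 * (rate(adopters, "retained")
                                         - rate(others, "retained")),
            "expansion_ratio": (rate(adopters, "expanded") / other_exp
                                if other_exp else None),
        }
    return matrix

# Hypothetical accounts: features used, renewed?, expanded revenue?
accounts = [
    {"features": {"reporting", "api"}, "retained": 1, "expanded": 1},
    {"features": {"reporting"}, "retained": 1, "expanded": 0},
    {"features": {"reporting", "dashboards"}, "retained": 1, "expanded": 1},
    {"features": {"reporting"}, "retained": 0, "expanded": 0},
]

for feat, row in feature_value_matrix(accounts, ["api", "dashboards"]).items():
    print(feat, row)
```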
Step 2: Identify tier placement opportunities
High adoption, low impact (Basic reporting, Mobile app): These are table stakes. Everyone expects them. They should be in your base tier. Don't gate these—they won't drive upsells, but their absence will prevent conversions.
Medium adoption, high impact (Custom dashboards, Advanced filters): These are your tier differentiators. Power users need them, but not everyone. Perfect for mid-tier and top-tier placement.
Low adoption, very high impact (API access): These are enterprise features. A small percentage of customers need them badly and will pay premium. These belong in your highest tier or as add-ons.
Step 3: Validate with cohort analysis
Compare revenue outcomes for customers on different tiers:
- Starter tier customers: What % upgrade within 12 months? To which tier?
- Professional tier customers: What's their expansion rate vs. downgrades?
- Enterprise tier customers: What's their retention vs. lower tiers?
If 60% of Starter customers upgrade to Professional within 6 months, your Starter tier is under-featured. If only 5% of Professional customers ever upgrade to Enterprise, your Enterprise tier is over-featured or over-priced.
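The upgrade-rate check is simple once you have each account's time-to-upgrade. A minimal sketch, with a hypothetical cohort where each entry is months from signup to upgrade (None if the account never upgraded):

```python
def upgrade_rate(months_to_upgrade, window_months=12):
    """Share of a cohort that upgraded within the window.
    Entries are months from signup to upgrade, or None if never."""
    upgraded = sum(
        1 for m in months_to_upgrade if m is not None and m <= window_months
    )
    return upgraded / len(months_to_upgrade)

# Hypothetical Starter cohort.
starter = [2, 5, None, 3, None, 8, 1, None, 4, 6]
print(f"Starter upgrades within 12 months: {upgrade_rate(starter):.0%}")
```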
The Usage-Based Pricing Opportunity
Many B2B companies charge flat fees regardless of usage intensity. This creates value misalignment: light users overpay (and churn), heavy users underpay (leaving money on the table).
Usage-based pricing aligns cost with value, but only if you pick the right usage metric.
Evaluate usage-based pricing fit:
Signal 1: Wide variance in usage intensity
If your 90th percentile user uses 10x more than your median user, usage-based pricing captures that variance. If usage is relatively uniform, stick to flat pricing.
Analyze the distribution of your value metric across customers. If it's highly skewed (some customers use far more than others), usage-based pricing might make sense.
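The skew check is a one-liner with the standard library. The usage numbers below are hypothetical monthly API-call volumes per customer:

```python
from statistics import median, quantiles

def usage_skew(values):
    """Ratio of 90th-percentile usage to median usage."""
    p90 = quantiles(values, n=10)[8]  # 9 cut points; index 8 is the 90th pct
    return p90 / median(values)

# Hypothetical monthly API-call volumes: a long tail of heavy users.
calls = [120, 150, 90, 200, 180, 130, 110, 95, 4_000, 6_500]
ratio = usage_skew(calls)
print(f"p90/median ratio: {ratio:.1f}")
if ratio >= 10:
    print("Highly skewed usage: worth testing usage-based pricing")
```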
Signal 2: Usage correlates with business outcomes
High-usage customers should have better retention, higher NRR, and stronger satisfaction. If usage doesn't correlate with these outcomes, usage-based pricing won't work—you'd be charging more to customers who aren't getting proportional value.
Signal 3: Usage is predictable and controllable
Usage-based pricing works when customers can predict and control their usage. If usage spikes unexpectedly, it creates bill shock and churn.
Example: Charging per API call works if customers can estimate call volume. Charging per "data point processed" fails if customers can't predict how many data points they'll have.
Step 4: Test usage-based pricing in a segment
Don't rip out your current pricing. Test usage-based models with new customers or in a specific segment.
Track:
- Conversion rates (vs. flat pricing)
- Revenue per customer
- Retention rates
- Customer satisfaction with pricing model
If usage-based pricing shows equal or better conversion, higher revenue, and maintained retention, consider broader rollout.
The Freemium Boundary Analysis
If you offer a free tier, analytics reveals where to draw the line between free and paid.
The wrong approach: Limit free tier by arbitrary metrics (10 users, 5 projects, 100MB storage)
The right approach: Limit free tier just below the threshold where users extract serious business value
Step 1: Identify the activation threshold
At what usage level do free users become valuable enough that they should pay?
Analyze free user behavior and identify the inflection point:
- Free users who create 1-3 reports rarely convert (2% conversion rate)
- Free users who create 4-10 reports sometimes convert (18% conversion rate)
- Free users who create 11+ reports frequently convert (47% conversion rate)
The inflection point is around 10 reports. That's when usage indicates serious value extraction.
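Finding the inflection point is a matter of bucketing free users by usage and computing conversion per bucket. A sketch with hypothetical free-user records:

```python
def conversion_by_bucket(users, buckets):
    """Conversion rate of free users grouped by reports created.
    `buckets` is a list of (low, high) ranges; high=None is open-ended."""
    rates = {}
    for low, high in buckets:
        group = [
            u for u in users
            if u["reports"] >= low and (high is None or u["reports"] <= high)
        ]
        if group:
            rates[(low, high)] = sum(u["converted"] for u in group) / len(group)
    return rates

# Hypothetical free users: reports created, converted to paid?
users = [
    {"reports": 2, "converted": 0},
    {"reports": 1, "converted": 0},
    {"reports": 6, "converted": 1},
    {"reports": 5, "converted": 0},
    {"reports": 12, "converted": 1},
    {"reports": 15, "converted": 1},
    {"reports": 11, "converted": 0},
]

for rng, rate in conversion_by_bucket(users, [(1, 3), (4, 10), (11, None)]).items():
    print(rng, f"{rate:.0%}")
```

Look for the bucket boundary where conversion jumps sharply; that's your candidate inflection point.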
Step 2: Set the free limit just below the inflection point
Cap free tier at 8 reports per month. This lets users validate value (create a few reports), but forces conversion right before they extract serious value (11+ reports).
Step 3: Validate with cohort comparison
Compare free users who hit the cap vs. those who don't:
- Users who hit the cap: 34% convert to paid within 60 days
- Users who don't hit the cap: 4% convert
If hitting the cap drives conversion (not just churn), you've found the right boundary.
If users who hit the cap mostly churn rather than convert, your free tier is too restrictive. You're forcing users to pay before they've experienced enough value to justify it.
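The cap-hit comparison can be sketched the same way, splitting free users on whether they reached the cap and comparing outcome rates. The records and outcome labels here are hypothetical:

```python
def cap_cohorts(users, cap):
    """60-day outcomes for free users who hit the report cap vs those who didn't."""
    def outcome_rates(group):
        n = len(group) or 1
        return {
            "converted": sum(u["outcome"] == "converted" for u in group) / n,
            "churned": sum(u["outcome"] == "churned" for u in group) / n,
        }
    hit = [u for u in users if u["reports"] >= cap]
    under = [u for u in users if u["reports"] < cap]
    return {"hit_cap": outcome_rates(hit), "under_cap": outcome_rates(under)}

# Hypothetical free users with their 60-day outcome.
users = [
    {"reports": 8, "outcome": "converted"},
    {"reports": 9, "outcome": "converted"},
    {"reports": 8, "outcome": "churned"},
    {"reports": 3, "outcome": "churned"},
    {"reports": 2, "outcome": "active"},
    {"reports": 5, "outcome": "churned"},
]

print(cap_cohorts(users, cap=8))
```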
The Pricing Experiment Framework
Don't make pricing changes based solely on analytics. Validate with experiments.
Test 1: Tier value perception
Show current customers (in survey or interview) your proposed tier structure. Ask:
- "Which tier would you choose if you were buying today?"
- "Do you feel like the features in each tier align with the price?"
- "Are there features in a higher tier you don't need, or features in a lower tier that feel essential?"
If 40% of current Professional customers would choose Starter tier if buying today, your Professional tier isn't differentiated enough.
Test 2: Price sensitivity by segment
Run a Van Westendorp price sensitivity analysis by customer segment (company size, industry, use case).
Ask four questions:
- At what price would this product be so expensive you wouldn't consider it?
- At what price would you consider it expensive, but still might buy it?
- At what price would you consider it a bargain?
- At what price would it be so cheap you'd question the quality?
Plot responses to find the optimal price range for each segment. You might discover enterprise customers are willing to pay 3x what SMB customers will, justifying segment-specific pricing.
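A simplified read of the survey data can be computed directly. The full Van Westendorp method intersects four cumulative curves; this sketch uses only the two extreme thresholds and calls a price "acceptable" when under half of respondents rate it too cheap and under half rate it too expensive. The survey numbers are hypothetical per-seat monthly prices.

```python
def acceptable_price_range(responses, candidate_prices):
    """Simplified Van Westendorp sketch: a price is acceptable when
    fewer than half of respondents call it too cheap and fewer than
    half call it too expensive."""
    n = len(responses)
    acceptable = []
    for price in candidate_prices:
        too_cheap = sum(1 for r in responses if price <= r["too_cheap"]) / n
        too_expensive = sum(1 for r in responses if price >= r["too_expensive"]) / n
        if too_cheap < 0.5 and too_expensive < 0.5:
            acceptable.append(price)
    return (min(acceptable), max(acceptable)) if acceptable else None

# Hypothetical SMB-segment survey thresholds (USD per seat per month).
smb = [
    {"too_cheap": 10, "too_expensive": 60},
    {"too_cheap": 15, "too_expensive": 80},
    {"too_cheap": 10, "too_expensive": 70},
]
print("SMB acceptable range:", acceptable_price_range(smb, range(5, 150, 5)))
```

Run it per segment; non-overlapping ranges between SMB and enterprise are the signal for segment-specific pricing.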
Test 3: New customer pricing variants
A/B test pricing models with new signups (if you have sufficient volume):
- Variant A: Current pricing
- Variant B: New value-based pricing
- Variant C: Usage-based pricing
Track conversion rates, activation rates, and early retention (30/60 day). This shows not just which pricing converts better, but which attracts customers who actually succeed with the product.
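To judge whether a conversion difference between variants is real or noise, a two-proportion z-test is a common choice (my addition; the source doesn't prescribe a test). The counts below are hypothetical:

```python
from math import erf, sqrt

def two_proportion_test(conversions_a, n_a, conversions_b, n_b):
    """Two-sided z-test for a difference in conversion rate between
    two pricing variants. Returns (rate_a, rate_b, p_value)."""
    p_a, p_b = conversions_a / n_a, conversions_b / n_b
    pooled = (conversions_a + conversions_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se if se else 0.0
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_a, p_b, p_value

# Hypothetical results: current pricing vs a usage-based variant.
rate_a, rate_b, p = two_proportion_test(52, 1_000, 78, 1_000)
print(f"current: {rate_a:.1%}, usage-based: {rate_b:.1%}, p = {p:.3f}")
```

Apply the same test to activation and 30/60-day retention so the winning variant is the one that attracts customers who succeed, not just customers who sign up.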
Analytics reveals what customers value and how they use your product. Experiments validate whether they'll pay for it. Together, they create pricing that aligns with real value delivery instead of market assumptions.