Jennifer sits in the conference room as her CEO asks, "What should we price this new product at?" She says "$99/month?" knowing even as the words leave her mouth that it's a guess. Her CEO presses: "Based on what?" Jennifer has no data, no customer research, no pricing model. "...it feels right?" she offers weakly.
This scene plays out at companies everywhere because most pricing decisions are based on gut feel instead of customer research. Teams pick a number that "sounds reasonable" or matches competitor pricing or hits some arbitrary margin target. Then they launch and discover customers won't pay it, or worse, they would have paid 3x more.
Good pricing isn't guessing. It's systematically researching what customers actually value and what they'll realistically pay for it. Here's the framework for pricing research that finds the right price through customer conversations.
The Pricing Research Framework
The goal is to answer three critical questions through structured research: What will customers actually pay (price sensitivity)? What do they value most (feature prioritization)? How should you package different tiers (bundling and tiers)?
Your method combines qualitative customer interviews with quantitative surveys to get both the "why" behind pricing decisions and the statistical validation. The typical timeline is four to six weeks for complete pricing research from initial interviews through analysis and recommendations.
The Pricing Interview Process
Step 1: Qualitative Interviews (10-15 customers)
Start with 10 to 15 qualitative interviews across three groups: current customers who are already paying, prospects who are evaluating but haven't bought yet, and churned customers who left so you can understand price objections. This mix gives you perspectives from people at different stages of their buying decision.
Structure each 45-minute interview in four parts. Part one covers current state for 10 minutes. Ask "Walk me through how you currently solve this problem" and follow up with what tools they use, what those tools cost, what's working, and what's not. Listen carefully for their current spend levels, what alternatives they've tried, and their biggest pain points.
Part two is value discovery for 15 minutes. Ask "What would make a solution worth paying for?" Then dig into what outcomes matter most, how they measure success, and what 10x better would look like. Listen for the value drivers that matter to them and the specific success metrics they use.
Part three focuses on feature prioritization for 10 minutes. Show them your list of potential features and ask "Which of these are must-haves versus nice-to-haves?" Use the MoSCoW prioritization method where must-have means they won't buy without it, should-have means very important, could-have means nice but not critical, and won't-have means they don't care. Listen for which features are truly important and which are deal-breakers.
Part four tests price sensitivity for 10 minutes using the Van Westendorp Price Sensitivity Meter. Tell them "I'm going to ask you four questions about pricing. There are no wrong answers." Then ask: At what price would this product be so expensive you wouldn't consider buying it? At what price would it be priced so low you'd question the quality? At what price would it start getting expensive but not be out of the question? And at what price would you consider it a bargain? Record all four price points for later analysis.
Step 2: Analyze Interview Data
After completing 10 to 15 interviews, plot your Van Westendorp results. You'll have data showing each customer's four price points: too cheap, bargain, getting expensive, and too expensive. For example, customer 1 might say $20 is too cheap, $50 is a bargain, $150 is getting expensive, and $300 is too expensive. Customer 2 might say $30, $75, $200, and $400 respectively.
Plot cumulative curves for all four metrics: the percentage who consider a given price too cheap (descending as price rises), the percentage who consider it too expensive (ascending), the percentage who call it a bargain (descending), and the percentage who say it's getting expensive (ascending). Where these curves intersect tells you the critical price points.
The Optimal Price Point (OPP) is where "too expensive" and "too cheap" intersect. The Indifference Price Point (IPP) is where "bargain" and "getting expensive" intersect. For example, your results might show OPP at $125 a month and IPP at $175 a month, giving you a recommended pricing range of $125 to $175 monthly. This is data-backed pricing guidance instead of guessing.
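The OPP/IPP calculation can be sketched in a few lines of plain Python. The six respondents below are illustrative, not real research data; with real data you'd plot the curves, but finding the crossings numerically works the same way.

```python
# Each tuple: (too_cheap, bargain, getting_expensive, too_expensive)
# Illustrative respondent data, not real research results.
responses = [
    (30, 60, 100, 150),    # price-sensitive small team
    (100, 180, 300, 450),
    (50, 90, 140, 200),
    (160, 200, 350, 500),  # enterprise buyer with a bigger budget
    (40, 70, 120, 180),
    (90, 150, 250, 380),
]

def share(answers, keep):
    """Fraction of respondents whose answer satisfies `keep`."""
    return sum(keep(a) for a in answers) / len(answers)

def crossing(grid, descending, ascending):
    """First grid price where the falling curve meets the rising one."""
    for p in grid:
        if descending(p) <= ascending(p):
            return p
    return None

too_cheap = [r[0] for r in responses]
bargain = [r[1] for r in responses]
getting_exp = [r[2] for r in responses]
too_exp = [r[3] for r in responses]

grid = range(30, 501, 10)
# OPP: "% who still call this too cheap" meets "% who already call it too expensive"
opp = crossing(grid,
               lambda p: share(too_cheap, lambda a: a >= p),
               lambda p: share(too_exp, lambda a: a <= p))
# IPP: "% who still call this a bargain" meets "% who call it getting expensive"
ipp = crossing(grid,
               lambda p: share(bargain, lambda a: a >= p),
               lambda p: share(getting_exp, lambda a: a <= p))
print(f"OPP ≈ ${opp}/month, IPP ≈ ${ipp}/month")
```

Note that depending on your respondents, the IPP can land above or below the OPP; either way, the two points bracket your defensible pricing range.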
Step 3: Feature Value Analysis
From your interviews, rank features by how many customers said each was a must-have. Must-haves where 80% or more say they won't buy without it include features like launch coordination, sales enablement, and team collaboration. Should-haves where 50 to 80% rate them as very important include analytics dashboard, templates library, and integrations. Nice-to-haves where less than 50% care include custom branding, advanced reporting, and API access.
Use this feature ranking for packaging decisions. Put must-haves in all tiers since customers won't buy without them. Put should-haves in higher tiers to create differentiation. Save nice-to-haves for enterprise tier only to justify premium pricing.
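The must-have thresholds above translate directly into a packaging rule. A minimal sketch, using this article's example features with illustrative vote counts:

```python
# feature -> number of the 15 interviewees calling it a must-have
# (counts are illustrative, not real research data)
votes = {
    "launch coordination": 14,
    "sales enablement": 13,
    "team collaboration": 12,
    "analytics dashboard": 10,
    "templates library": 9,
    "integrations": 8,
    "custom branding": 5,
    "advanced reporting": 4,
    "api access": 3,
}
TOTAL = 15

def tier(count, total):
    share = count / total
    if share >= 0.80:
        return "all tiers"      # must-have: customers won't buy without it
    if share >= 0.50:
        return "higher tiers"   # should-have: creates tier differentiation
    return "enterprise only"    # nice-to-have: justifies premium pricing

for feature, count in sorted(votes.items(), key=lambda kv: -kv[1]):
    print(f"{feature:20s} {count}/{TOTAL} -> {tier(count, TOTAL)}")
```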
The Conjoint Analysis (Advanced)
For more precise feature valuation:
What is Conjoint Analysis?
Conjoint analysis shows customers product variations with different features and prices, and asks them to choose their preferred option.
Analysis reveals:
- How much each feature is worth
- Price sensitivity
- Optimal package configurations
Example Conjoint Setup
Show each customer 8-10 choice tasks like:
Option A:
- Basic features
- Email support
- $99/month
Option B:
- Advanced features
- Phone support
- $199/month
Ask: "Which would you choose?"
Repeat with different combinations.
Analysis output:
Feature values:
- Advanced analytics: Worth $50/month to customers
- Phone support: Worth $30/month
- Integrations: Worth $40/month
- Custom branding: Worth $15/month
Use this to build tiers that maximize value capture.
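A rough "counting" pass over choice data can be done in plain Python. Real conjoint studies fit a choice model (logit or hierarchical Bayes) to recover dollar values like those above; comparing how often profiles with a feature win their choice task is a crude but readable first approximation. The choice tasks below are illustrative.

```python
# Each row is one profile shown in a head-to-head choice task:
# (has_advanced_analytics, has_phone_support, price, was_chosen)
# Illustrative data, not real responses.
profiles = [
    (True,  True,  199, True),  (False, False,  99, False),
    (True,  False, 149, True),  (False, True,  149, False),
    (False, True,  129, True),  (False, False,  99, False),
    (True,  True,  249, False), (True,  False, 199, True),
    (False, True,   99, True),  (False, False,  99, False),
]

def chosen_share(rows, idx, value):
    """Share of shown profiles with rows[idx] == value that were chosen."""
    sub = [r for r in rows if r[idx] == value]
    return sum(r[3] for r in sub) / len(sub)

# "Lift": how much more often a profile wins when the feature is present.
lift_analytics = chosen_share(profiles, 0, True) - chosen_share(profiles, 0, False)
lift_phone = chosen_share(profiles, 1, True) - chosen_share(profiles, 1, False)
print(f"advanced analytics lift: {lift_analytics:+.2f}")
print(f"phone support lift: {lift_phone:+.2f}")
```

Because features and prices are deliberately varied across tasks, a bigger lift signals a feature customers will trade money for; a proper choice model then converts that into a dollar value per feature.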
The Quantitative Survey (100+ responses)
After qualitative interviews, validate with a larger sample:
Survey Structure
Section 1: Screening
- Are you the decision-maker for [category] tools?
- What's your company size/revenue?
- Do you currently use similar tools?
Section 2: Feature Importance
For each feature, rate importance (1-5):
- Not important
- Slightly important
- Moderately important
- Very important
- Extremely important
Section 3: Price Sensitivity
Same Van Westendorp questions:
- Too cheap
- Bargain
- Getting expensive
- Too expensive
Section 4: Package Preference
Show 3 package options. Ask: "Which would you choose for your team?"
Analyze: Which package gets most votes?
Survey Distribution
Send to:
- Email list (prospects + customers)
- LinkedIn outreach
- Industry communities
- Paid survey panel (if needed)
Goal: 100+ qualified responses
Timeline: 2-3 weeks
The Competitive Price Analysis
Research competitor pricing:
Data to Collect
For each competitor:
- Pricing tiers (names, prices)
- What's included in each tier
- Discounts offered (annual, volume)
- Average contract size
- Common objections (from sales team)
Example:
Competitor A:
- Starter: $49/month (limited features)
- Pro: $149/month (most features)
- Enterprise: $499/month (custom)
Competitor B:
- Basic: $79/month
- Growth: $199/month
- Scale: Custom pricing
Your positioning:
- Need to be competitive in $100-$200 range
- Can differentiate with unique features at similar price
- Or charge premium ($250+) with clear differentiation
Synthesizing Pricing Research
Combine all data sources:
Input 1: Van Westendorp Results
Recommended price range: $125-$175/month (OPP to IPP)
Input 2: Feature Value Analysis
Must-haves: Launch coordination, enablement, collaboration
Worth paying for: Analytics (+$50), integrations (+$40), phone support (+$30)
Input 3: Competitive Analysis
Competitors priced at: $50-$200/month
Market expects: 3 tiers (Starter, Pro, Enterprise)
Input 4: Business Goals
Need: $150K ARR to hit revenue target
Require: $100+ ACV (annual contract value)
Output: Recommended Pricing
Starter: $99/month
- Must-have features only
- Email support
- Up to 5 users
Pro: $199/month ← Target most customers here
- All features
- Phone support
- Up to 25 users
- Analytics + integrations
Enterprise: Custom (starts $499/month)
- Unlimited users
- Dedicated CSM
- Custom integrations
- SLA
Rationale:
- Pro tier at $199 sits just above the Van Westendorp IPP ($175), at the top of the acceptable range
- Competitive with market
- Feature packaging based on must-haves vs. nice-to-haves
- Drives customers to Pro tier (most value)
The Price Testing Process
Don't launch without testing:
Test 1: Landing Page Price Test
Method: A/B test pricing on landing page
Setup:
- Control: $199/month
- Variant A: $149/month
- Variant B: $249/month
Measure:
- Conversion to trial/demo
- Trial to paid conversion
- Revenue per visitor
Run for: 2-4 weeks, 1000+ visitors per variant
Winning price: Highest revenue per visitor
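The revenue-per-visitor comparison is simple arithmetic. A sketch with illustrative traffic and conversion numbers (run a significance check before trusting small differences):

```python
# Illustrative A/B test results; not real traffic data.
variants = {
    "control $199": {"visitors": 1200, "paid": 24, "price": 199},
    "A $149":       {"visitors": 1180, "paid": 26, "price": 149},
    "B $249":       {"visitors": 1210, "paid": 20, "price": 249},
}

def revenue_per_visitor(v):
    # First-month revenue attributed to each landing-page visitor.
    return v["paid"] * v["price"] / v["visitors"]

winner = max(variants, key=lambda name: revenue_per_visitor(variants[name]))
for name, v in variants.items():
    print(f"{name}: ${revenue_per_visitor(v):.2f} per visitor")
print("winner:", winner)
```

Notice that a higher price can win on revenue per visitor even with fewer conversions, which is exactly why conversion rate alone is the wrong metric.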
Test 2: Sales Call Testing
Method: Sales team tests different price points
Process:
- Weeks 1-2: Pitch at $199/month
- Weeks 3-4: Pitch at $249/month
- Weeks 5-6: Pitch at $179/month
Measure:
- Close rate
- Objection frequency
- Average deal size
Find: Price with best close rate × deal size
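Scoring each price point by close rate × deal size gives expected revenue per pitch. A sketch with illustrative counts (at these sample sizes the differences are noisy, so treat the result as directional):

```python
# price -> sales-call results at that price point (illustrative data)
tests = {
    199: {"pitches": 40, "closed": 10, "avg_deal": 2388},  # $199/mo, annual deal
    249: {"pitches": 38, "closed": 7,  "avg_deal": 2988},
    179: {"pitches": 41, "closed": 12, "avg_deal": 2148},
}

def revenue_per_pitch(t):
    # Expected revenue per sales conversation: close rate x average deal size.
    return (t["closed"] / t["pitches"]) * t["avg_deal"]

best = max(tests, key=lambda price: revenue_per_pitch(tests[price]))
for price, t in tests.items():
    print(f"${price}/mo: ${revenue_per_pitch(t):.0f} expected per pitch")
print("best price point:", best)
```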
Test 3: Upgrade Path Testing
For existing customers:
Test upgrade messaging and pricing:
- Offer analytics upgrade for +$50/month
- Measure: What % take it?
- If <5%, price too high
- If >20%, price too low
- Target: 10-15% attach rate
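These thresholds are easy to encode as a check. A tiny helper using the cut-offs above (the 5%, 10-15%, and 20% bounds come straight from this section's rules of thumb):

```python
def attach_rate_verdict(takers, offered):
    """Interpret an upgrade attach rate against the rules of thumb above."""
    rate = takers / offered
    if rate < 0.05:
        return "price too high"
    if rate > 0.20:
        return "price too low"
    if 0.10 <= rate <= 0.15:
        return "on target"
    return "acceptable"

print(attach_rate_verdict(12, 100))  # 12% attach rate
```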
Common Pricing Research Mistakes
Mistake 1: Only asking "What would you pay?"
A direct question produces biased answers (everyone anchors low)
Problem: Not reliable
Fix: Use Van Westendorp (4 questions) or conjoint analysis
Mistake 2: Only talking to existing customers
They're already bought-in, not price-sensitive
Problem: Miss prospect concerns
Fix: Interview prospects and churned customers too
Mistake 3: Copying competitor pricing
You match competitor prices exactly
Problem: Race to bottom, no differentiation
Fix: Use competitive data as one input, not only input
Mistake 4: Asking only about price
You skip feature value research
Problem: Don't know what to include in each tier
Fix: Prioritize features (must-have vs. nice-to-have)
Mistake 5: No quantitative validation
You interview 5 people, call it done
Problem: Too small sample
Fix: Validate with 100+ survey responses
Quick Start: Pricing Research in 4 Weeks
Week 1: Qualitative Interviews
- Day 1-2: Recruit 10-15 interview participants
- Day 3-5: Conduct interviews (Van Westendorp + feature prioritization)
Week 2: Competitive Analysis
- Day 1-3: Research 5-10 competitor pricing structures
- Day 4-5: Analyze pricing ranges and positioning
Week 3: Quantitative Survey
- Day 1-2: Build survey (Van Westendorp + features + package preference)
- Day 3-5: Distribute survey, collect 100+ responses
Week 4: Analysis and Recommendation
- Day 1-2: Analyze Van Westendorp data (plot curves, find OPP/IPP)
- Day 3: Analyze feature value and package preferences
- Day 4: Create pricing recommendation with rationale
- Day 5: Present to leadership
Deliverable: Pricing recommendation document with research backing
Impact: Data-driven pricing vs. guessing
The Uncomfortable Truth
Most companies set pricing by:
- Guessing what sounds right
- Copying competitors
- Internal cost-plus calculations
They don't:
- Interview customers about willingness to pay
- Quantify feature value
- Test pricing before launching
What works:
- Van Westendorp Price Sensitivity Meter (4 questions, find optimal range)
- Feature value analysis (what's must-have vs. nice-to-have)
- Quantitative validation (100+ survey responses)
- Competitive context (one input, not only input)
- Testing (A/B test before committing)
The best pricing research:
- Qualitative interviews (10-15 customers)
- Quantitative survey (100+ responses)
- Competitive analysis (market context)
- Feature value analysis (for packaging)
- Price testing (validate before launch)
If you can't explain why your price is $199 vs. $149 with data, you haven't done enough research.
Research deeply. Validate quantitatively. Test before launching.