You ask a customer: "Would you pay $500/month for this?"
They say: "That sounds reasonable."
You launch at $500/month. Nobody buys.
This is the pricing research trap. Customers aren't lying when they say a price sounds reasonable. But what people say they'll pay and what they actually pay are different things.
Pricing research works when you ask questions that reveal true willingness-to-pay, not hypothetical answers.
Here's how to research pricing in ways that actually inform decisions.
Why "Would You Pay $X?" Doesn't Work
The direct question fails for several reasons:
Reason 1: Customers lowball
They think if they say "too expensive," you'll lower the price. So they understate what they'd actually pay.
Reason 2: Customers don't know
They've never bought a product like yours before. They have no reference point. "Sounds reasonable" is a guess.
Reason 3: Context matters
A customer might pay $500/month if they have budget approval and believe they'll see ROI. They won't pay if they're bootstrapped or skeptical of value.
The direct question strips away context. Better questions embed context so answers are grounded in reality.
Reason 4: Hypotheticals are easy to say yes to
"Would you pay?" is hypothetical. Hypotheticals have no consequences. People say yes to things they'd never actually buy.
The Research Questions That Reveal True Willingness-to-Pay
Question 1: "What do you currently spend solving this problem?"
This anchors pricing in their current reality.
If they say "we spend $2,000/month across three tools," you know they have $2K/month budget allocated to this problem. Your pricing should fit within (or justify exceeding) that budget.
If they say "we handle it manually, no budget allocated," you're asking them to create new budget. That's harder. You'll need to prove value clearly.
Question 2: "If you stopped using our product, what would you do instead? What would that cost?"
This reveals the alternative you're actually competing against. If they'd go back to a manual process (zero cost), your pricing needs to prove clear ROI. If they'd switch to a competitor at $800/month, you have flexibility up to that point.
Their answer tells you the switching cost and the price ceiling.
Question 3: "What ROI or payback period would make this a no-brainer purchase?"
This reveals their value expectations.
If they say "we'd need 3-month payback," you can work backwards: What price achieves 3-month payback given the value they'll get?
If they say "we'd need to save 10 hours per week," calculate the cost of those 10 hours and price accordingly.
Question 4: "At what price does this become too expensive to consider?"
This reveals the upper bound.
If they say "over $1,000/month would be hard to justify," you know $1K+ is a red line.
But don't stop there. Follow up: "What about $800? $600?" Find the range where they transition from "definitely too expensive" to "worth considering."
Question 5: "At what price does this feel so cheap you'd question the quality?"
This reveals the lower bound.
If they say "under $200/month would make me wonder if it's enterprise-ready," you know $200 is your floor. Pricing below that hurts perceived value.
Question 6: "If we offered three tiers—Basic, Pro, Enterprise—which would you choose and why?"
This tests packaging, not just price points.
Listen for which features they care about. If everyone chooses Basic because it has everything they need, you're leaving money on the table. If everyone chooses Enterprise because Basic is too limited, your packaging is wrong.
Question 7: "How much effort would it take to get budget approval for this purchase?"
This reveals buying friction.
If they say "I have discretionary budget up to $5K," you know deals under $5K close fast. If they say "anything over $10K needs VP approval and takes months," you know pricing impacts sales velocity.
The Van Westendorp Price Sensitivity Meter
A structured pricing research method that reveals acceptable price ranges.
Ask four questions:
- "At what price would this be so expensive you wouldn't consider it?" (Too expensive)
- "At what price would you consider it expensive but still worth considering?" (Expensive but acceptable)
- "At what price would you consider it a bargain?" (Cheap)
- "At what price would it be so cheap you'd question the quality?" (Too cheap)
Plot the responses as cumulative curves:
- X-axis: Price points
- Y-axis: Cumulative percentage of respondents ("expensive" and "too expensive" climb as price rises; "cheap" and "too cheap" fall)
Find where the curves intersect:
- Optimal Price Point: where the "too expensive" and "too cheap" curves cross
- Acceptable Range: commonly defined as the band between where "too cheap" crosses "expensive" (lower bound) and where "too expensive" crosses "cheap" (upper bound)
This gives you a data-driven price range, not just gut feel.
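Here's a minimal sketch of that analysis in Python, assuming you've collected all four answers from each respondent. The survey numbers are made up (real studies need far more than ten responses), and the range bounds follow the common convention described above.

```python
import numpy as np

# Hypothetical answers (dollars/month) to the four Van Westendorp questions,
# one value per respondent per question. Real studies need much larger samples.
too_cheap     = np.array([100, 150, 150, 200, 250, 300, 300, 350, 400, 450])
cheap         = np.array([150, 200, 250, 300, 350, 400, 400, 450, 500, 550])
expensive     = np.array([250, 300, 350, 400, 450, 500, 550, 600, 650, 700])
too_expensive = np.array([350, 400, 450, 500, 600, 650, 700, 750, 800, 900])

prices = np.linspace(50, 1100, 500)

# Cumulative shares: "expensive"/"too expensive" rise with price,
# "cheap"/"too cheap" fall with price.
pct_too_expensive = np.array([(too_expensive <= p).mean() for p in prices])
pct_expensive     = np.array([(expensive     <= p).mean() for p in prices])
pct_cheap         = np.array([(cheap         >= p).mean() for p in prices])
pct_too_cheap     = np.array([(too_cheap     >= p).mean() for p in prices])

def crossing(rising_curve, falling_curve):
    """Return the first price where the rising curve meets the falling curve."""
    diff = rising_curve - falling_curve
    return prices[np.argmax(diff >= 0)]

opp = crossing(pct_too_expensive, pct_too_cheap)  # Optimal Price Point
pmc = crossing(pct_expensive, pct_too_cheap)      # lower bound of acceptable range
pme = crossing(pct_too_expensive, pct_cheap)      # upper bound of acceptable range

print(f"Optimal price point: ~${opp:,.0f}")
print(f"Acceptable range: ~${pmc:,.0f} to ~${pme:,.0f}")
```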
How to Test Packaging and Tiering
Pricing isn't just about dollar amounts. It's about how you package features into tiers.
Method 1: Feature preference surveys
List 10-15 features. Ask customers:
- "Which features are must-haves vs. nice-to-haves?"
- "Which features would you pay extra for?"
Group responses by customer segment (SMB vs. Enterprise, novice vs. expert). Look for patterns.
If SMBs only care about Features A, B, C and enterprises also need D, E, F, you have natural tier differentiation.
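A minimal sketch of that tally, assuming responses collected into a simple table. The segments, features, and ratings below are hypothetical, and real surveys have many respondents per segment.

```python
import pandas as pd

# Hypothetical survey responses: one row per (respondent, feature) rating.
responses = pd.DataFrame({
    "segment": ["SMB", "SMB", "SMB", "SMB",
                "Enterprise", "Enterprise", "Enterprise", "Enterprise"],
    "feature": ["A", "B", "D", "E", "A", "B", "D", "E"],
    "rating":  ["must-have", "must-have", "nice-to-have", "nice-to-have",
                "must-have", "must-have", "must-have", "must-have"],
})

# Share of each segment that rates a feature as a must-have.
must_have_share = (
    responses.assign(is_must_have=responses["rating"].eq("must-have"))
    .groupby(["segment", "feature"])["is_must_have"]
    .mean()
    .unstack(fill_value=0)
)

print(must_have_share.round(2))
# Features only Enterprise treats as must-haves are natural candidates for the
# higher tier; features every segment needs belong in the base tier.
```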
Method 2: Conjoint analysis (for advanced research)
Show customers multiple pricing scenarios with different feature combinations and price points:
- Scenario A: Features X, Y, Z at $300/mo
- Scenario B: Features X, Y at $200/mo
- Scenario C: Features X, Y, Z, W at $500/mo
Ask: "Which would you choose?"
Conjoint analysis statistically determines which features drive willingness-to-pay and which don't.
This is rigorous but requires larger sample sizes (100+) and analysis expertise.
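To show the shape of the idea without the rigor, here's a simplified sketch: encode each scenario's varying attributes plus price, record which scenario each respondent chose, and fit a model whose coefficients approximate how much each attribute moves the choice. It uses plain logistic regression rather than the conditional logit a real conjoint study would use, and all the choice data is made up.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Simplified choice data based on the three scenarios above. Features X and Y
# appear in every scenario, so their value can't be estimated; only attributes
# that vary across scenarios (Z, W, price) can. All numbers are hypothetical.
# Columns: has_feature_Z, has_feature_W, price_in_$100s
scenarios = np.array([
    [1, 0, 3.0],   # Scenario A: X, Y, Z at $300/mo
    [0, 0, 2.0],   # Scenario B: X, Y at $200/mo
    [1, 1, 5.0],   # Scenario C: X, Y, Z, W at $500/mo
])

# Simulate 20 respondents each choosing one scenario: 12 pick A, 6 pick B, 2 pick C.
X = np.tile(scenarios, (20, 1))
chosen = np.array([1, 0, 0] * 12 + [0, 1, 0] * 6 + [0, 0, 1] * 2)

model = LogisticRegression().fit(X, chosen)

coef_z, coef_w, coef_price = model.coef_[0]
print(f"feature Z: {coef_z:+.2f}, feature W: {coef_w:+.2f}, price per $100: {coef_price:+.2f}")

# A positive feature coefficient pulls choices toward a scenario; the price
# coefficient should come out negative. Dividing a feature coefficient by the
# absolute price coefficient gives a rough dollar value for that feature.
if coef_price < 0:
    print(f"Implied value of feature Z: ~${100 * coef_z / abs(coef_price):,.0f}/mo")
```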
Method 3: Prototype testing
Show customers your proposed pricing page with three tiers. Ask:
- "Which tier would you choose?"
- "Is anything confusing about these tiers?"
- "Are there features in the wrong tier?"
If everyone picks the cheapest tier, you're not differentiating enough. If everyone picks the middle tier, your highest tier might not be compelling.
How to Interpret Answers (What Customers Say vs. What It Means)
They say: "That seems expensive."
What it means: Could be three things:
- They genuinely can't afford it (budget constraint)
- They don't see enough value to justify the price (value perception issue)
- They're negotiating (they'll pay, but want a discount)
How to probe: "Expensive compared to what? What would make the price feel justified?"
They say: "I'd need to check with my boss / procurement / finance."
What it means: The price is above their discretionary authority.
What to ask: "What's your approval threshold? What would make approval easier?"
They say: "We don't have budget for this right now."
What it means: One of three things:
- True budget constraint (timing issue, not pricing issue)
- Not a priority (value issue)
- Soft no (they don't want to say "we won't buy")
How to probe: "If budget weren't a constraint, would this solve a problem you're actively trying to fix?"
They say: "I'd pay $X for this."
What it means: That's their starting negotiation point. Actual willingness-to-pay is probably 20-30% higher.
What to ask: "What value would you expect to get at that price?"
They say: "This would save us so much time/money."
What it means: They see clear value. You have pricing power.
What to ask: "How much time/money would it save? What's that worth to you?"
The Mistake of Asking Current Customers About Pricing
Current customers are the wrong people to ask about future pricing.
Why:
Problem 1: They're anchored to current price
If you're currently $200/month, they'll resist $400/month even if $400 is fair for the value.
Problem 2: They'll lowball to protect themselves
They fear a price increase. Of course they'll say "current price is perfect, don't change it."
Problem 3: They might not represent your target market
Your early customers aren't always your ideal future customers. Pricing for them locks you into a segment you're trying to move beyond.
Instead:
- Research pricing with prospects and new customers who haven't anchored on your current price
- Research with your ideal target segment, not just whoever's easiest to reach
- If you must ask current customers, frame it carefully: "If you were evaluating us today, what would you expect to pay?"
How to Research Pricing for New Products or Features
When you don't have an existing product, pricing research is harder but still possible.
Approach 1: Reference class forecasting
Research what customers pay for adjacent solutions:
- Competitors
- Alternative tools they currently use
- Budget allocated to this category
Example: "We're building a new analytics tool. What do you currently pay for analytics? What would make you switch?"
Their current spend is your pricing reference point.
Approach 2: Value-based interviews
Don't ask about price directly. Ask about value:
- "If this solved [problem], how much time would you save?"
- "How much revenue would that unlock?"
- "What does it cost you when this problem isn't solved?"
Calculate the value you deliver, then price as a fraction of that value (10-30% is common).
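A minimal sketch of that arithmetic; the hours saved, hourly cost, and revenue figures are hypothetical placeholders drawn from the kinds of answers above.

```python
# Translate value-based interview answers into a candidate price range.
# All inputs are hypothetical examples, not real customer data.

hours_saved_per_month = 40        # "this would save roughly 10 hours a week"
loaded_hourly_cost = 80           # assumed fully loaded cost per hour
extra_revenue_per_month = 1_500   # "it would unlock about this much revenue"

monthly_value = hours_saved_per_month * loaded_hourly_cost + extra_revenue_per_month

# Price at a fraction of delivered value; 10-30% is the range cited above.
low_price = 0.10 * monthly_value
high_price = 0.30 * monthly_value

print(f"Estimated value delivered: ${monthly_value:,.0f}/mo")
print(f"Candidate price range:     ${low_price:,.0f}-${high_price:,.0f}/mo")
```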
Approach 3: Prototype pricing pages
Create mockups of pricing tiers with dollar amounts. Show them to target customers.
Ask:
- "Which tier would you choose?"
- "Does this pricing feel fair for what you're getting?"
- "What concerns come up when you see these prices?"
You're testing perception before committing to actual pricing.
When Research Says Lower Prices but You Shouldn't Listen
Sometimes pricing research suggests lowering prices, but that's the wrong move.
Red flag 1: Customers say it's expensive but still buy
If research says "too expensive" but win rates and retention are strong, ignore the research. Customers complain about price as a negotiation tactic.
Real price problems show up in metrics: high abandonment, low win rates, price-driven churn.
Red flag 2: Low-value customers want lower prices
If SMB customers say you're expensive but enterprise customers happily pay, the issue isn't pricing—it's targeting.
Don't lower prices to capture low-value segments. Focus on high-value segments where your pricing works.
Red flag 3: Customers want more for less
Of course customers want full features at the lowest tier. That doesn't mean you should give it to them.
Good packaging intentionally gates valuable features behind higher tiers. Don't collapse tiers just because customers asked.
The Pricing Research You Should Run at Each Stage
Pre-launch (no customers yet):
- Research what target customers currently pay for alternatives
- Test pricing page prototypes with prospects
- Use the Van Westendorp method to find your acceptable price range
Early stage (first 50 customers):
- Interview customers about value received vs. expectations
- Ask churned customers if price was a factor
- Test packaging by analyzing which features drive retention
Growth stage (hundreds of customers):
- Survey customers on willingness-to-pay for new features
- Test new pricing tiers with A/B experiments
- Research pricing for new segments you're entering
Scale (thousands of customers):
- Run regular price sensitivity studies
- Test pricing changes with controlled experiments (see the sketch after this list)
- Research pricing by segment and vertical
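For those controlled experiments, here's a minimal readout sketch: a two-proportion z-test on win rates at two price points. The deal counts and the $500 vs. $600 prices are hypothetical.

```python
from math import sqrt
from scipy.stats import norm

# Hypothetical controlled pricing experiment: half of new prospects see the
# current price, half see the new, higher price. Compare win rates.
wins_current, trials_current = 62, 400   # 15.5% win rate at current price
wins_new, trials_new = 45, 400           # 11.3% win rate at new price

p1 = wins_current / trials_current
p2 = wins_new / trials_new
p_pooled = (wins_current + wins_new) / (trials_current + trials_new)

# Two-proportion z-test: is the drop in win rate larger than chance?
se = sqrt(p_pooled * (1 - p_pooled) * (1 / trials_current + 1 / trials_new))
z = (p1 - p2) / se
p_value = 2 * (1 - norm.cdf(abs(z)))

print(f"Win rate: {p1:.1%} (current) vs {p2:.1%} (new), z = {z:.2f}, p = {p_value:.3f}")

# A significant drop in win rate isn't automatically a veto: multiply win rate
# by price to compare expected revenue per prospect at each price point.
print(f"Revenue per prospect: ${p1 * 500:,.0f} vs ${p2 * 600:,.0f} (hypothetical $500 vs $600)")
```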
The Pricing Research You Shouldn't Waste Time On
Don't ask: "What's a fair price for this?"
"Fair" is meaningless. Fair to whom? Based on what?
Don't ask: "Would you buy this at $X?"
Hypotheticals don't predict actual behavior.
Don't ask: "What features should we include in each tier?"
Customers will ask for everything in the cheapest tier. Packaging is your job, not theirs.
Don't ask: "How much would you pay for Feature X?"
Features don't have prices. Outcomes do. Ask about the value of the outcome, not the feature.
The Metric That Validates Your Pricing Research
Good pricing research predicts actual behavior.
Test: After researching pricing, launch it. Then measure:
- Win rate: Did pricing help or hurt deal closure?
- Expansion rate: Are customers upgrading to higher tiers?
- Churn rate: Are customers leaving due to price?
- Willingness-to-pay observed vs. stated: Did customers pay what research suggested?
If research said "customers will pay $500" but win rates collapse at $500, your research methods need work.
If research said "$500 feels expensive" but customers happily pay and expand, you validated pricing despite objections.
Research informs pricing. Metrics validate it.
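One way to make that comparison concrete: a minimal sketch that puts stated willingness-to-pay from research next to what closed deals actually paid. The interview numbers and deal data are hypothetical.

```python
from statistics import median

# Hypothetical research output and closed-won deal data for the same segment.
stated_wtp = [500, 450, 600, 500, 550, 400, 500]          # dollars/month from interviews
closed_deals = [430, 470, 390, 520, 450, 410, 480, 440]   # actual contracted price/month

deals_proposed = 60   # deals where the researched price was quoted
deals_won = 21        # how many of those closed

stated_median = median(stated_wtp)
observed_median = median(closed_deals)
win_rate = deals_won / deals_proposed

print(f"Stated WTP (median):     ${stated_median:,.0f}/mo")
print(f"Observed price (median): ${observed_median:,.0f}/mo")
print(f"Gap: {observed_median / stated_median - 1:+.0%} | Win rate at quoted price: {win_rate:.0%}")

# A large negative gap plus a weak win rate means the research over-read
# willingness-to-pay; a small gap with healthy win and expansion rates means
# the research held up in the market.
```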