Pricing Research in 3 Weeks: Van Westendorp Meets Reality

I spent three weeks running pricing research that produced three wildly different answers: $12, $47, and $200.

Same product. Same target market. Same research methodology—the Van Westendorp Price Sensitivity Meter, the academic gold standard that every pricing consultant recommends. The model promised to reveal the "optimal price point" through four simple questions. Instead, it revealed that our target market had no idea what they'd actually pay.

The problem wasn't the methodology. It was the assumption that people can accurately predict what they'll pay for something they've never bought before.

We'd run the Van Westendorp analysis by the book. We asked 200 prospects four questions: at what price would this product be too expensive, expensive but you'd still consider it, a bargain, and too cheap to be credible. The model analyzed the intersections and spat out an "optimal price range" of $39-$58.

We launched at $49. Conversion rate was 4%. We were expecting 12-15% based on industry benchmarks.

I spent the next six months learning that pricing research isn't about finding the right number—it's about understanding the gap between what people say they'll pay and what they actually pay. That gap is where real pricing strategy lives.

Why Van Westendorp Told Us Nothing Useful

The Van Westendorp Price Sensitivity Meter is elegant in theory. You plot four curves based on prospect responses and find the intersection points that reveal price acceptance ranges. It's visual, it's data-driven, and it gives executives confidence that pricing is "scientifically validated."
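The mechanics are simple enough to sketch in a few lines of Python. The response numbers below are invented, and the exact intersection definitions vary a bit between sources, but this is the general shape of the calculation:

```python
# Minimal Van Westendorp sketch. Respondent data here is hypothetical;
# a real study would load ~200 survey responses instead.
import numpy as np

# One price per respondent for each of the four questions (made-up numbers).
too_cheap     = np.array([5, 8, 10, 12, 15, 20, 25])
bargain       = np.array([15, 20, 25, 29, 35, 39, 45])
expensive     = np.array([39, 45, 49, 55, 60, 75, 90])
too_expensive = np.array([60, 75, 90, 99, 120, 150, 200])

prices = np.linspace(1, 250, 500)  # price grid to evaluate the curves on

# Share of respondents who would call each grid price "too cheap", "a bargain",
# "getting expensive", or "too expensive".
pct_too_cheap     = np.array([(too_cheap >= p).mean() for p in prices])
pct_cheap         = np.array([(bargain >= p).mean() for p in prices])
pct_expensive     = np.array([(expensive <= p).mean() for p in prices])
pct_too_expensive = np.array([(too_expensive <= p).mean() for p in prices])

def crossing(falling, rising):
    """Approximate price where a falling curve first dips below a rising one."""
    return prices[np.argmax(falling <= rising)]

opp = crossing(pct_too_cheap, pct_too_expensive)      # Optimal Price Point
ipp = crossing(pct_cheap, pct_expensive)              # Indifference Price Point
pmc = crossing(pct_too_cheap, 1 - pct_cheap)          # Point of Marginal Cheapness
pme = crossing(1 - pct_expensive, pct_too_expensive)  # Point of Marginal Expensiveness

print(f"acceptable range ~${pmc:.0f}-${pme:.0f}, OPP ~${opp:.0f}, IPP ~${ipp:.0f}")
```

The math is clean. What the meter can't tell you is whether the inputs mean anything.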

It also assumes people understand the value of your product well enough to have opinions about its price. For established product categories, this works. For anything novel or complex, it produces fiction.

Our product was a B2B SaaS tool that automated something prospects currently did manually with spreadsheets. They knew the pain of the manual process. They had no mental model for what automation should cost.

When we asked "at what price would this be too expensive?" they anchored on random numbers. Some compared it to Excel ($10/month). Others compared it to hiring a junior analyst ($4,000/month). The averages were meaningless because the mental models were incompatible.

I watched the interview recordings afterward. Prospects would pause for 15-20 seconds before answering the price questions, visibly uncomfortable. They were making up numbers because they felt obligated to answer. Those fabricated numbers became our pricing strategy.

The worst part? The model made us feel scientific about being completely wrong.

What We Should Have Asked Instead

Three months after launch, we were desperate. At 4% conversion, the product wasn't viable. We needed to understand why pricing wasn't working, and Van Westendorp data couldn't tell us.

So I stopped asking about price and started asking about behavior.

The question that changed everything: "Walk me through the last time you had to decide whether to buy software like this. What made you pull out your credit card or walk away?"

Nobody mentioned price first. They talked about trust. They talked about whether they believed the product would actually work. They talked about whether they could get their team to adopt it. They talked about whether it would integrate with their existing tools.

Price came up eventually, but it was always in context: "I'd pay $200/month if I was confident it would save me 10 hours a week. But I'm not confident yet, so even $20 feels risky."

This revealed the actual problem. Our pricing research had focused on finding the right number. The real question was: what needs to be true for prospects to believe our product is worth any price?

We weren't losing deals because our price was wrong. We were losing deals because we hadn't built enough trust to justify even a cheap price.

The Three-Week Framework That Actually Worked

After the Van Westendorp disaster, I rebuilt our pricing research around behavioral truth instead of hypothetical preferences.

Week one was about understanding the decision context, not the price number. I interviewed 30 prospects who had recently evaluated similar tools—including competitors and alternatives they'd chosen instead of us.

I didn't ask about price directly. I asked them to reconstruct their decision timeline: What triggered the search? Who was involved? What did you need to see to feel confident? What almost stopped you from buying?

The patterns were obvious within 15 interviews. Prospects needed three things before price mattered:

First, proof from a peer that it actually worked. Not a case study—a real conversation with someone like them who'd already bought it and gotten value.

Second, confidence they could implement it without a nightmare integration project. They'd been burned before by "simple" tools that took six months to deploy.

Third, a clear understanding of what success looked like in 30 days. They needed to know what "working" meant so they could evaluate whether to expand or cancel.

Price never came up as a primary concern in these conversations. It was always conditional: "I'd pay X if Y was true."

Week two was about testing willingness to pay in context. Instead of asking "what would you pay for this product?" I showed prospects different versions of our value proposition with different price points attached.

"Here's option A: $29/month, self-service setup, email support."

"Here's option B: $99/month, we help you set it up, priority support, quarterly business reviews."

"Which would you choose and why?"

This forced people to make tradeoffs instead of evaluating price in a vacuum. The responses were dramatically different from Van Westendorp.

At $29, prospects worried it was too cheap to include real support. At $99, they wanted more hand-holding than we'd planned to offer. When we floated a $149 tier, the package felt right, but several said they'd need manager approval, which would slow the deal.

The sweet spot wasn't a number—it was a combination of price and package that matched their willingness to take risk. Self-serve buyers wanted cheap and simple. Risk-averse buyers wanted expensive and supported.

Week three was about validating with real behavior. We didn't launch a full campaign. We created three landing page variations with different price points and drove 200 visitors to each through targeted ads.

We measured click-through to the demo request form. Not "would you buy this?" but "will you click this button right now?"

The $49 price point (our Van Westendorp "optimal") generated 4% click-through. The $99 price point with a stronger support package generated 11% click-through. The $149 price point with white-glove onboarding generated 8% click-through but attracted enterprise prospects instead of SMB.

This behavioral data told us something Van Westendorp couldn't: higher prices with stronger packages generated more interest because they reduced perceived risk. We'd been trying to compete on price when prospects wanted to buy confidence.
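A quick way to convince yourself a gap like 4% versus 11% isn't noise at this sample size is a two-proportion z-test on the raw click counts. The sketch below is a back-of-the-envelope check, not our original analysis; it uses only Python's standard library and plugs in those rates at 200 visitors per page:

```python
# Rough significance check: 4% of 200 visitors vs 11% of 200 visitors.
from statistics import NormalDist

def two_proportion_ztest(clicks_a, n_a, clicks_b, n_b):
    """Pooled two-proportion z-test; returns (z, two-sided p-value)."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    pooled = (clicks_a + clicks_b) / (n_a + n_b)
    se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

z, p = two_proportion_ztest(clicks_a=8, n_a=200, clicks_b=22, n_b=200)
print(f"z = {z:.2f}, p = {p:.3f}")  # roughly z ≈ 2.7, p ≈ 0.008
```

A p-value under 1% won't tell you why the $99 page worked, but it does rule out the "it's just random traffic" objection.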

The Uncomfortable Discovery About "Too Expensive"

The most counterintuitive finding from our three-week research: the prospects who said our price was "too expensive" in surveys were not the ones who failed to convert.

I ran the analysis. Of the prospects who'd told us $49 was "expensive but would consider," about 60% actually converted when we followed up. Of the prospects who'd said $49 was "a bargain," only 12% converted.
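The shape of that analysis is simple: join each prospect's survey answer about the launch price to whether they eventually converted, then compare rates per bucket. A minimal sketch, with hypothetical IDs, column names, and data:

```python
# Hypothetical survey and CRM extracts; the real data came from our survey tool and CRM.
import pandas as pd

survey = pd.DataFrame({
    "prospect_id": [1, 2, 3, 4, 5, 6],
    "price_response": ["bargain", "expensive_but_consider", "bargain",
                       "expensive_but_consider", "too_expensive", "bargain"],
})
crm = pd.DataFrame({
    "prospect_id": [1, 2, 3, 4, 5, 6],
    "converted": [False, True, False, True, False, True],
})

rates = (
    survey.merge(crm, on="prospect_id")
          .groupby("price_response")["converted"]
          .agg(["mean", "count"])
          .rename(columns={"mean": "conversion_rate", "count": "n"})
)
print(rates)  # one conversion rate per survey bucket
```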

The "bargain" respondents were tire-kickers who had no intention of buying. They said $49 was cheap because they weren't actually planning to spend any money. The "expensive but would consider" respondents were serious buyers doing real evaluation.

This flipped our entire understanding of price resistance. Complaints about price weren't objections—they were negotiation. People who thought carefully about whether our product was worth $49 were qualified buyers. People who instantly said "that's a great price!" were not.

I learned to treat "too expensive" as a sign of engagement, not rejection. The question wasn't whether they thought our price was high—it was whether they were doing the mental math to justify the purchase.

We started using "expensive but would consider" responses as a qualification signal. Those prospects got more attention, not less. We focused on proving value, not defending price.

What Pricing Research Actually Reveals

After running pricing research for a dozen products, I've learned that the point isn't to find the optimal number. It's to understand the value perception gaps that make pricing feel risky to buyers.

Van Westendorp asks "what would you pay?" The better question is "what would need to be true for you to feel confident paying X?"

That question reveals the real work. Sometimes the answer is better positioning. Sometimes it's stronger social proof. Sometimes it's a different package that reduces implementation risk. Sometimes it's a totally different business model.

I worked with a company whose pricing research revealed prospects were anchoring on the wrong competitor. We thought we competed with $500/month enterprise tools. Prospects were comparing us to $10/month consumer apps.

The pricing problem wasn't the number—it was category confusion. We needed to change how prospects understood what we did before pricing made sense. No amount of price optimization would fix a positioning problem.

Another company discovered their pricing problem was actually a sales process problem. Prospects were fine with the price in principle, but the procurement process took 90 days and involved six stakeholders. By the time they got approval, the champion had lost momentum and the deal died.

The pricing fix? Offer a 30-day pilot at 50% off to start immediately, then convert to full price with simplified procurement. Same end price, different structure, 3x conversion rate.

The Real Three-Week Timeline

Here's what pricing research actually looks like when you're trying to find truth instead of validate a number you've already picked:

Week one: Interview 20-30 recent buyers and near-miss prospects. Don't ask about price. Ask them to reconstruct their decision process. Find the moments where they almost walked away and what changed their mind.

Record every call. Go back and note how far into the conversation price comes up naturally. If it's in the first 2 minutes, you have a positioning problem. If it's after 10 minutes, you're probably in the right ballpark.

Week two: Test price in context, not isolation. Show prospects packages at different price points. Force them to choose. Watch which objections are about price versus risk, capability, or fit.

Ask this exact question: "If I could wave a magic wand and eliminate one barrier to buying this today, what would it be?" If nobody says price, your pricing research is distracting you from the real problem.

Week three: Test with real behavior, not surveys. Drive traffic to landing pages with different price points. Measure clicks, demo requests, trial signups—real actions that indicate real intent.

Run this for at least 200 visitors per variation. Fewer than that and you're just measuring noise. But 200 visitors with real intent beats 2,000 survey responses from people guessing.
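If you want a feel for why 200 is a floor rather than a nice-to-have, an approximate power calculation shows what a test that size can and can't detect. This is a textbook approximation using our rough 4% baseline, not our original analysis:

```python
# Approximate power of a two-proportion z-test at 200 visitors per arm.
from statistics import NormalDist

def approx_power(p1, p2, n, alpha=0.05):
    """Rough power of a two-proportion z-test with n visitors per arm."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    se = ((p1 * (1 - p1) + p2 * (1 - p2)) / n) ** 0.5
    return 1 - NormalDist().cdf(z_alpha - abs(p2 - p1) / se)

baseline = 0.04  # ~4% click-through on the control page
for target in (0.06, 0.08, 0.11):
    print(f"4% vs {target:.0%} at 200/arm: power ≈ {approx_power(baseline, target, 200):.2f}")
# 4% vs 6% is essentially invisible at this size; 4% vs 11% has decent odds of showing up.
```

In other words, 200 visitors per page is only enough when the differences are as large as the ones we saw. If you're chasing a one- or two-point lift, you need far more traffic.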

Why Most Teams Skip the Hard Part

The Van Westendorp method is popular because it's clean. You run surveys, analyze data, produce charts, present to executives. It feels rigorous and objective.

The behavioral approach is messy. You're interpreting qualitative interviews, testing multiple packages, measuring click-through rates that might not translate to revenue. It's harder to present in a slide deck.

But here's what I've learned: executives don't actually want a scientifically optimal price. They want confidence that the price will work in market.

Clean methodology that produces wrong answers doesn't help them. Messy research that reveals behavioral truth does.

I've stopped presenting "the optimal price is $X based on Van Westendorp analysis." Instead I say: "We tested three packages in market with real prospects. The $99 package with strong support generates 2.5x more interest than the $49 self-serve package. Here's why."

That story is harder to tell but easier to act on. It gives product clarity on what to build into each package. It gives sales clarity on how to position value. It gives marketing clarity on what messaging resonates.

A single number from Van Westendorp gives you none of that.

The Question That Matters More Than Price

After three weeks of pricing research, the question we should have been asking wasn't "what's the right price?"

It was: "What's preventing prospects from believing our product is worth any price?"

For us, the answer was lack of proof. Prospects needed to see that companies like them had implemented successfully and gotten value fast. No price point would fix that.

We spent the next quarter building a customer proof engine: video testimonials, implementation case studies, a Slack community where prospects could ask current customers questions directly.

Conversion rate went from 4% to 14% at the same $49 price point. Then we raised prices to $99 with a stronger package, and conversion rate held at 13%.

The pricing research didn't tell us to build proof. The behavioral research did. That's the difference between asking what people will pay and understanding what makes them willing to pay anything at all.

Most pricing research fails because it tries to find a number when the real problem is building enough value and trust to justify any number. Van Westendorp can't tell you that. Only talking to real buyers about real decisions can.