I'd just finished rebuilding our messaging framework. Three months of work—customer interviews, persona development, competitor analysis, messaging testing.
The new messaging tested well. Prospects said it was "clear" and "compelling." Sales said it was "much better than before."
We rolled it out across the website, sales materials, and campaigns. Then we waited for results.
Six months later: No improvement in win rates. No change in average deal size. No increase in close velocity.
The messaging sounded better, but it wasn't performing better.
I couldn't figure out why until the VP of Revenue Operations asked me a simple question: "Did you look at revenue data when you built the messaging?"
"I looked at customer interview data and win/loss analysis."
"That's qualitative. Did you look at which segments actually generate the most revenue? Which deal sizes close fastest? Which customers expand most? Which buyer titles correlate with higher ASP?"
I hadn't. I'd built messaging based on what customers said in interviews, not what revenue data revealed about who actually bought and why.
He sent me a Salesforce report I'd never seen: Revenue performance segmented by industry, company size, buyer title, initial deal size, time to close, and expansion rate.
The patterns destroyed every messaging assumption I'd made.
Our messaging targeted VP-level buyers. Revenue data showed Director-level buyers closed 40% faster with 25% higher win rates.
Our messaging led with enterprise scale and security. Revenue data showed mid-market deals had 2.1x better unit economics than enterprise.
Our messaging emphasized our broadest use case. Revenue data showed our most specific, narrow use case drove 3x higher expansion revenue.
I'd built messaging for who I thought we should sell to. Revenue data showed who actually bought from us—and they were completely different buyers.
What Revenue Data Revealed About Buyer Behavior
After that conversation, I spent two weeks analyzing revenue data with RevOps. The insights were uncomfortable.
Finding #1: High-ASP Segments Needed Different Messaging
I'd built one messaging framework for all segments. "Personas" were our segmentation model—CTO persona, VP Ops persona, Security leader persona.
Revenue data revealed personas weren't the relevant segmentation. Deal size was.
RevOps segmented all closed deals by average selling price (ASP) and looked at patterns:
Low-ASP deals ($100-300K):
- Average sales cycle: 42 days
- Win rate: 64%
- Buyer title: Director or Manager level
- Primary objection (from lost deals): Price vs. alternatives
- Expansion rate (12 months): 23%
Mid-ASP deals ($300-600K):
- Average sales cycle: 68 days
- Win rate: 54%
- Buyer title: VP level
- Primary objection: Integration complexity
- Expansion rate: 51%
High-ASP deals ($600K+):
- Average sales cycle: 104 days
- Win rate: 41%
- Buyer title: C-level (with VP as champion)
- Primary objection: Organizational change management
- Expansion rate: 71%
These weren't just different deal sizes. They were fundamentally different buying motions with different decision criteria, different objections, and different value perception.
But our messaging treated them identically. Same value props, same competitive positioning, same demo flow.
The uncomfortable truth: Revenue data showed our messaging was optimized for mid-ASP deals (it worked there—54% win rate). But it was underperforming in low-ASP deals (should be >70% win rate given simpler buying motion) and catastrophically underperforming in high-ASP deals (41% win rate was unacceptable).
PMM decision: Build segment-specific messaging:
- Low-ASP: Emphasize fast time-to-value, simple setup, clear ROI. Address price objections directly with TCO comparison.
- Mid-ASP: Emphasize integration capabilities, flexibility, and workflow optimization.
- High-ASP: Emphasize change management support, executive-level business outcomes, and risk mitigation.
Result: Within two quarters:
- Low-ASP win rate: 64% → 73%
- Mid-ASP win rate: 54% → 58% (incremental improvement)
- High-ASP win rate: 41% → 52% (massive improvement)
Same product, different messaging by deal size.
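If you want to run this kind of cut yourself, the mechanics are simple. Below is a minimal pandas sketch, assuming a closed-deal export (a hypothetical deals.csv) with columns like amount, is_won, days_to_close, and expanded_12mo; those field names are placeholders for illustration, not an actual Salesforce schema.

```python
import pandas as pd

# Hypothetical CRM export: one row per closed opportunity. Column names are assumptions.
deals = pd.read_csv("deals.csv")

# Bucket deals into the same ASP bands used above.
bands = pd.cut(
    deals["amount"],
    bins=[100_000, 300_000, 600_000, float("inf")],
    labels=["Low ($100-300K)", "Mid ($300-600K)", "High ($600K+)"],
)

summary = deals.assign(asp_band=bands).groupby("asp_band", observed=True).agg(
    deals=("amount", "size"),
    win_rate=("is_won", "mean"),               # closed-won share of all closed deals
    avg_cycle_days=("days_to_close", "mean"),
    expansion_rate=("expanded_12mo", "mean"),  # share that expanded within 12 months
)
print(summary.round(2))
```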
Finding #2: Fast-Close Profiles Revealed Urgency Triggers
I'd been messaging around "why buy our product category" (generic urgency). Revenue data revealed specific urgency triggers that varied by customer profile.
RevOps segmented deals by time-to-close and looked for patterns in fast-close vs. slow-close deals.
Deals that closed in <45 days (fast):
- Trigger event 68% of the time: Recent security incident, audit failure, or compliance deadline
- Buying committee size: 2-3 people (small, decisive)
- Evaluation process: Shortened—often skip full POC, move straight to procurement
- Common buyer title: Director of Security or Compliance lead
Deals that took 90+ days (slow):
- Trigger event only 31% of the time; motivation was typically a general "we should improve this" (no urgency)
- Buying committee size: 5-7 people (large, slow)
- Evaluation process: Comprehensive POC, multiple vendor comparisons, endless internal discussions
- Common buyer title: VP of IT or Operations (strategic but not urgent)
Fast-close deals weren't faster because our sales team executed better. They were faster because the buyer had urgent pain—something had gone wrong, a deadline was looming, executive pressure was high.
Our messaging didn't speak to these urgency triggers. It spoke to generic efficiency gains and long-term value.
PMM decision: Rebuild messaging to speak to specific trigger events:
- For security/compliance buyers: Lead with "avoid [recent incident type]" and "meet compliance deadlines without delays"
- For efficiency buyers: Lead with "eliminate manual processes costing your team X hours per week"
Build separate landing pages, sales plays, and email sequences for different trigger events instead of one generic "why buy" message.
Result: Percentage of deals closing in <60 days increased from 34% to 48%. We'd made urgency explicit instead of hoping prospects would infer it.
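The time-to-close cut is just as mechanical. A sketch on the same hypothetical deal export, this time assuming a 0/1 trigger_event flag and a buyer_title field; again, the names are illustrative:

```python
import pandas as pd

deals = pd.read_csv("deals.csv")  # hypothetical export; field names are illustrative
won = deals[deals["is_won"]].copy()
won["fast_close"] = won["days_to_close"] < 45

# Share of fast vs. slow wins that had a documented trigger event (trigger_event is 0/1).
print(won.groupby("fast_close")["trigger_event"].mean().round(2))

# Which buyer titles show up in fast vs. slow wins (row shares sum to 1).
print(pd.crosstab(won["fast_close"], won["buyer_title"], normalize="index").round(2))
```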
Finding #3: Expansion Revenue Patterns Revealed True Value
I'd built messaging around our primary use case—the one most customers bought for initially.
Revenue data showed that expansion revenue patterns revealed what customers actually valued most, and it wasn't the primary use case.
RevOps analyzed expansion deals (existing customers buying more):
Initial purchase use case:
- Use Case A: 62% of new customers
- Use Case B: 38% of new customers
Expansion purchases (12 months later):
Customers who initially bought for Use Case A:
- 71% expanded to add Use Case C
- 48% expanded to add Use Case D
- 12% expanded to add more of Use Case A
Customers who initially bought for Use Case B:
- 83% expanded to add Use Case C
- 61% expanded to add Use Case D
- 34% expanded to add more of Use Case B
The pattern was undeniable: Use Case C drove the most expansion revenue, but we weren't leading with it in messaging.
Why? Because Use Case C was harder to explain upfront. It required customers to understand the platform first. So we led with Use Case A (easy to explain, easy first sale).
But revenue data showed Use Case C had the highest customer lifetime value. It drove expansion. It created lock-in. It was what customers valued most after 12 months.
PMM decision: Reposition Use Case C as the primary value prop, with Use Case A as the entry point ("Start with A, expand to C").
Build messaging that set expectations upfront: "Most customers start by solving [Use Case A], then discover our platform enables [Use Case C], which typically drives 2-3x more value."
Result: Customers who understood the Use Case C value prop upfront were 2.4x more likely to expand within 12 months. We'd aligned messaging with long-term value instead of just initial sale ease.
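For the expansion cut, the trick is joining who a customer was at initial purchase with what they bought later. A minimal sketch, assuming two hypothetical exports (customers.csv and expansions.csv) with made-up column names:

```python
import pandas as pd

# Hypothetical exports: one row per customer with their initial use case,
# and one row per expansion line item purchased in the following 12 months.
customers = pd.read_csv("customers.csv")    # columns: customer_id, initial_use_case
expansions = pd.read_csv("expansions.csv")  # columns: customer_id, expansion_use_case

joined = expansions.merge(customers, on="customer_id")

# For each initial use case, what share of those customers added each expansion use case?
share = (
    joined.groupby(["initial_use_case", "expansion_use_case"])["customer_id"]
    .nunique()
    .div(customers.groupby("initial_use_case")["customer_id"].nunique(),
         level="initial_use_case")
)
print(share.round(2).unstack())
```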
The Messaging Changes Revenue Data Forced
After analyzing revenue patterns, I rebuilt our core messaging around three revenue insights:
Change #1: Buyer Title-Specific Messaging
Old approach: One "buyer persona" (VP of Operations) with one value prop.
Revenue data insight: Win rates varied dramatically by buyer title.
- Director-level buyers: 61% win rate
- VP-level buyers: 54% win rate
- C-level buyers: 43% win rate
Why the variance?
I interviewed sales reps on wins and losses by buyer level:
Director-level wins: They had direct pain (dealt with the problem daily), clear ROI calculation (hours saved per week), and faster decision authority (fewer stakeholders to convince).
VP-level wins: They cared about strategic impact (team efficiency, scalability) but needed a stronger business case (broader organizational value, not just team-level).
C-level losses: We were messaging tactical efficiency. C-level buyers cared about strategic outcomes (revenue impact, competitive advantage, market positioning). Our messaging was too small for their lens.
New messaging by buyer level:
- Directors: Lead with "eliminate X hours of manual work per week" (tactical, personal)
- VPs: Lead with "scale operations without proportional headcount increases" (strategic team-level)
- C-suite: Lead with "reduce time-to-market for new products by 30%" (business outcomes, competitive advantage)
Same product, different language based on buyer altitude.
Change #2: Segment-Specific Value Props
Old approach: Generic value prop emphasizing flexibility and power.
Revenue data insight: Deal size and win rate varied dramatically by segment.
High-performing segments (where we had >60% win rates and high ASP):
- Mid-market healthcare
- Financial services (compliance-driven)
- SaaS companies (100-500 employees)
Low-performing segments (where we had <40% win rates and heavy discounting):
- Enterprise retail
- Manufacturing
- Small businesses (<50 employees)
I'd been building generic messaging trying to appeal to all segments. Revenue data said we should lean into where we won and stop pretending we had product-market fit everywhere.
New approach: Build segment-specific landing pages, case studies, and messaging for our three high-performing segments.
Lead homepage messaging with "Built for mid-market healthcare, financial services, and SaaS companies."
Yes, this narrowed the market our messaging addressed. But revenue data showed we weren't winning outside those segments anyway—we were just burning sales cycles.
Result: Inbound lead quality improved (more leads from target segments, fewer from low-fit segments). Sales qualified leads faster. Win rates in target segments improved 12 percentage points.
Change #3: Outcome-Based Messaging Over Feature-Based
Old messaging: Led with features and capabilities.
"Our platform provides X, Y, and Z capabilities so you can manage [process]."
Revenue data insight: Fast-close deals and high-expansion customers responded to outcomes, not features.
I analyzed win/loss interview notes alongside revenue data:
High-value customers (high ASP + high expansion):
- Talked about business outcomes: "We needed to reduce compliance risk," "We had to speed up product launches," "We were losing customers due to slow onboarding."
- Rarely mentioned specific features until late in buying process
- Bought based on confidence we could solve their specific business problem
Low-value customers (low ASP + low expansion):
- Talked about feature checklists: "Do you have Feature X? What about Feature Y?"
- Compared feature matrices across vendors
- Bought based on price and feature parity
Our messaging was optimized for feature comparison, which attracted low-value buyers. We weren't speaking the language of high-value buyers.
New messaging: Lead with business outcomes, mention features as enablers.
Old: "Our platform offers automated workflows, custom dashboards, and API integrations."
New: "Reduce compliance audit preparation from 6 weeks to 3 days. (Powered by automated workflows, custom dashboards, and API integrations.)"
Result: Inbound demo requests from feature shoppers decreased 22%. Inbound demo requests from outcome-focused buyers increased 47%. Average deal size of inbound leads increased by $80K.
The Uncomfortable Questions Revenue Data Raised
Analyzing revenue data didn't just inform messaging improvements. It raised uncomfortable strategic questions:
Question #1: Should we stop selling to segments where we don't win?
Revenue data showed 38% win rate in retail segment vs. 67% in healthcare. We were spending equal sales and marketing resources on both.
If we explicitly positioned as "not for retail," we'd lose potential revenue. But we were already losing those deals—we were just wasting sales cycles pretending we might win.
Decision: Explicitly deprioritize retail in messaging, reallocate resources to healthcare. Short-term revenue risk (lose retail deals we might close). Long-term revenue gain (higher win rates in focused segments).
Question #2: Should we change our product roadmap based on expansion data?
Expansion revenue data showed Use Case C drove 2.8x more expansion than Use Case A. But product roadmap was heavily weighted toward improving Use Case A features (because that's what most new customers bought for).
Should we shift roadmap investment toward expansion drivers (Use Case C) instead of acquisition drivers (Use Case A)?
Decision: 60/40 split—60% of roadmap toward acquisition use cases (need to keep new customer pipeline healthy), 40% toward expansion use cases (maximize lifetime value).
Question #3: Should we increase prices in high-value segments?
Revenue data showed mid-market healthcare customers had:
- 71% win rate (highest of any segment)
- 11% average discount rate (lowest of any segment)
- 89% 12-month retention (highest of any segment)
- 68% expansion rate (highest of any segment)
We were clearly underpriced in this segment. They bought at full price, expanded aggressively, and rarely churned.
Should we increase prices specifically for healthcare?
Decision: Test 20% price increase for new healthcare customers. Win rate dropped from 71% to 64% (still excellent), but revenue per customer increased 43% (price increase + maintained expansion). Net revenue impact: +$2.6M annually.
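The arithmetic behind that call is worth spelling out. The figures below are hypothetical (100 qualified healthcare opportunities a year, a $400K base ASP) just to show how to weigh a win-rate drop against a price increase; only the win rates come from the data above.

```python
# Hypothetical volumes and pricing for illustration only; only the win rates are real.
opps = 100                # qualified healthcare opportunities per year (assumed)
base_asp = 400_000        # pre-increase average selling price (assumed)

before = opps * 0.71 * base_asp          # 71% win rate at the old price
after = opps * 0.64 * base_asp * 1.20    # 64% win rate at a 20% higher price

print(f"before: ${before:,.0f}  after: ${after:,.0f}  delta: ${after - before:,.0f}")
```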
Revenue data didn't just change messaging—it challenged our entire GTM strategy.
What I'd Tell PMMs About Revenue Data
If you're building messaging based on customer interviews, personas, and win/loss feedback without looking at revenue data, you're guessing at what works.
Here's what to ask RevOps for:
Win rate and ASP by segment. Not overall win rate—that tells you nothing. Win rate and deal size by industry, company size, and buyer title. Find where you actually win and message to those buyers.
Sales cycle by buyer profile. Which buyers close fast? Which drag out? Fast-close profiles reveal where your value prop resonates. Slow-close profiles reveal where you're not creating urgency.
Expansion revenue patterns. What do customers buy after the initial purchase? That reveals what they value most. Message to long-term value, not just initial sale.
Discount rate by competitive scenario. Which competitors force heavy discounting? That reveals where your differentiation is weak. Which deals close at full price? That reveals where positioning is strong.
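If your CRM captures the primary competitor on each opportunity, that last cut is a few lines. A sketch with assumed field names (list_price, sold_price, primary_competitor):

```python
import pandas as pd

deals = pd.read_csv("deals.csv")  # hypothetical export; field names are assumptions
won = deals[deals["is_won"]].copy()
won["discount_rate"] = 1 - won["sold_price"] / won["list_price"]

# Average discount given when each competitor was in the deal;
# the biggest numbers point to where differentiation is weakest.
print(
    won.groupby("primary_competitor")["discount_rate"]
    .mean()
    .sort_values(ascending=False)
    .round(2)
)
```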
Revenue data tells you what actually works, not what you think should work.
Customer interviews tell you what people say they care about. Revenue data tells you what they actually pay for.
Build messaging based on what wins, not what sounds good.