The CMO asked me to share PMM's quarterly metrics in the marketing all-hands.
I pulled up my dashboard. It showed:
- Win rate by competitor
- Average deal size by vertical
- Sales cycle length by segment
- Discount frequency trending down
- Feature objections from lost deals
- Churn rate by customer segment
- Expansion revenue rate
The CMO looked confused. "These are sales metrics. Where are your marketing metrics?"
"These are PMM metrics. They tell me if positioning and enablement are working."
"What about content downloads? Campaign engagement? MQL conversion?"
"Those are demand gen metrics. PMM doesn't generate demand—we enable revenue. These metrics show whether we're doing that effectively."
The CMO wasn't convinced, but the CRO was listening from the back of the room. He spoke up: "This is the first time I've seen PMM track metrics I actually care about."
That comment changed the trajectory of PMM's role in our company.
Most product marketers track marketing metrics—content performance, campaign engagement, lead conversion. Those metrics tell you whether your marketing is working. They don't tell you whether your product marketing is working.
PMM's job isn't to generate leads. It's to make revenue more efficient—higher win rates, larger deals, faster sales cycles, better retention.
If you're not tracking sales metrics, you're not measuring what actually matters.
Here are the seven sales metrics I track as a product marketer, what they reveal, and how I use them to drive PMM decisions.
Metric #1: Win Rate by Competitor
What it measures: The percentage of competitive deals we win when facing each specific competitor.
Why it matters: This is the single best measure of whether your competitive positioning and battle cards actually work.
If your win rate against Competitor A is 58% but against Competitor B is 31%, you have a Competitor B problem. Either your battle card isn't working, or you're genuinely disadvantaged and need to avoid those deals.
How I track it:
I work with sales ops to ensure every competitive opportunity in Salesforce is tagged with the competitor name. Then I calculate:
Win rate vs. Competitor A = (Deals won vs. A) / (Total deals vs. A)
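If you'd rather compute this from a raw CRM export than eyeball a dashboard, here's a minimal pandas sketch. The `competitor` and `won` column names are assumptions about how your Salesforce export is tagged, and the deal data is made up:

```python
import pandas as pd

# Made-up closed competitive opportunities; in practice this comes from a
# Salesforce report where sales ops has tagged each deal with a competitor.
deals = pd.DataFrame({
    "competitor": ["A", "A", "A", "B", "B", "C", "C", "C"],
    "won":        [True, True, False, True, False, False, False, True],
})

# Win rate per competitor = deals won vs. X / total deals vs. X.
# Taking the mean of a boolean column gives the fraction of True values.
win_rate = deals.groupby("competitor")["won"].mean()
print(win_rate)  # A: 0.67, B: 0.50, C: 0.33 for this sample
```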
I track this monthly and look for:
- Trending changes (is our win rate improving or declining?)
- Variations by segment (do we win more in enterprise vs. mid-market against this competitor?)
- Correlation with battle card usage (do deals where sales used the battle card have higher win rates?)
What this revealed:
Our overall win rate was 62%, which seemed great.
But when I segmented by competitor:
- vs. Competitor A: 71% win rate (we were crushing them)
- vs. Competitor B: 48% win rate (roughly even)
- vs. Competitor C: 29% win rate (we were getting destroyed)
Our overall win rate looked good because we weren't facing Competitor C very often. But when we did, we usually lost.
I interviewed sales reps who'd lost to Competitor C. The pattern was clear: Competitor C positioned on feature X, which we genuinely couldn't match. Our battle card tried to position around it, but there was no credible counter.
PMM decision: Stop trying to compete with Competitor C in deals where feature X is a requirement. Work with sales ops to identify those deals early and disqualify fast rather than waste time on deals we'll lose.
Win rate vs. Competitor C stayed low (we avoided those deals), but overall win rate improved to 68% because we stopped losing winnable deals to unwinnable competitive situations.
Metric #2: Average Selling Price (ASP) by Vertical
What it measures: The average deal size closed in each industry vertical or customer segment.
Why it matters: This reveals which positioning resonates strongly enough to command premium pricing, and which segments see you as a commodity.
If ASP in healthcare is $680K but ASP in manufacturing is $180K, either healthcare simply has bigger budgets or your positioning creates a stronger perception of value in healthcare.
How I track it:
I segment all closed deals by industry/vertical and calculate average deal size over rolling 12-month periods.
I also track:
- Discount rate by vertical (are we discounting more in certain segments?)
- Deal size distribution (are there outliers skewing the average?)
- Deal size trend over time (is ASP increasing or decreasing in each vertical?)
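Here's a sketch of the same calculation in pandas, trimming to a trailing 12-month window and reporting the median alongside the mean to catch outliers. The column names are assumptions about your export, and the numbers are illustrative:

```python
import pandas as pd

# Made-up closed-won deals; "vertical", "amount", and "close_date" are
# assumed field names from a CRM export.
deals = pd.DataFrame({
    "vertical":   ["finserv", "finserv", "healthcare", "retail", "manufacturing"],
    "amount":     [800_000, 760_000, 620_000, 290_000, 185_000],
    "close_date": pd.to_datetime(["2024-02-01", "2024-06-15", "2024-05-10",
                                  "2024-03-20", "2024-04-05"]),
})

# Keep only deals closed in the trailing 12 months, then aggregate by vertical.
cutoff = deals["close_date"].max() - pd.DateOffset(months=12)
recent = deals[deals["close_date"] > cutoff]

asp = recent.groupby("vertical")["amount"].agg(["mean", "median", "count"])
print(asp)  # a mean far above the median flags outliers skewing ASP
```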
What this revealed:
Our ASP overall was $420K.
By vertical:
- Financial services: $780K average
- Healthcare: $620K average
- Retail: $290K average
- Manufacturing: $185K average
We'd been investing equal messaging and positioning effort across all verticals. But financial services buyers valued our product 4x more than manufacturing buyers.
When I dug deeper, I found financial services buyers cared about compliance and risk reduction—outcomes with massive perceived value. Manufacturing buyers cared about operational efficiency—valuable, but harder to quantify.
Our generic messaging emphasized product features. It didn't emphasize the compliance value that financial services buyers cared about most.
PMM decision: Build vertical-specific positioning that emphasized the highest-value outcomes for each segment. For financial services: compliance and risk reduction. For healthcare: patient data security. For retail: fraud prevention.
Within six months, ASP in financial services increased to $940K. Healthcare increased to $710K.
We stopped trying to force-fit manufacturing into our positioning and accepted that it was a lower ASP segment—still valuable, but not worth the same level of PMM investment.
Metric #3: Sales Cycle Length by Segment
What it measures: How long it takes from opportunity creation to close, segmented by customer type, deal size, or product.
Why it matters: Short sales cycles mean your positioning creates urgency and clarity. Long sales cycles mean buyers are uncertain, need extensive evaluation, or don't perceive enough value to prioritize the decision.
PMM can't eliminate legitimate evaluation processes, but we can reduce unnecessary friction—unclear value props, weak differentiation, missing proof points.
How I track it:
I calculate median sales cycle (not average—outliers skew average) for:
- Segment (SMB, mid-market, enterprise)
- Vertical
- Product or use case
- Competitive vs. non-competitive deals
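The calculation itself is simple; the discipline is using the median. A sketch with made-up opportunity data (the `created` and `closed` field names are assumptions):

```python
import pandas as pd

# Made-up opportunities with creation and close timestamps.
opps = pd.DataFrame({
    "segment": ["smb", "smb", "mid", "mid", "ent"],
    "created": pd.to_datetime(["2024-01-02", "2024-01-10", "2024-01-05",
                               "2024-02-01", "2024-01-15"]),
    "closed":  pd.to_datetime(["2024-02-04", "2024-02-20", "2024-03-10",
                               "2024-04-15", "2024-05-30"]),
})

opps["cycle_days"] = (opps["closed"] - opps["created"]).dt.days

# Median, not mean: one stalled 300-day deal would drag the average up
# without telling you anything about the typical buying process.
print(opps.groupby("segment")["cycle_days"].median())
```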
What this revealed:
Overall sales cycle was 76 days.
By segment:
- SMB: 34 days
- Mid-market: 68 days
- Enterprise: 127 days
That seemed normal—bigger deals take longer.
But when I segmented by product/use case within each segment:
- Use Case A (SMB): 28 days
- Use Case B (SMB): 52 days
Same segment, same deal size range, but Use Case B took nearly 2x as long to close.
I interviewed sales on Use Case B deals. The pattern: prospects struggled to build an internal business case. They understood the problem, but couldn't quantify the ROI enough to get budget approval.
We had ROI calculators for Use Case A (showing clear cost savings). We had nothing for Use Case B (value was productivity improvement, harder to quantify).
PMM decision: Build ROI calculators and business case templates specifically for Use Case B, emphasizing productivity gains in quantified terms.
Sales cycle for Use Case B dropped from 52 days to 41 days over the next quarter—still longer than Use Case A, but 20% faster because we'd removed friction from the buying process.
Metric #4: Discount Frequency and Depth
What it measures: How often sales discounts deals, and by how much.
Why it matters: High discount rates signal weak positioning. If sales constantly discounts to close deals, buyers don't perceive enough value to pay full price.
Occasional discounts are normal (multi-year deals, competitive displacement, strategic accounts). But systematic discounting means your pricing doesn't align with perceived value.
How I track it:
I track:
- Percentage of deals closed with discount (discount frequency)
- Average discount percentage when discounts are given (discount depth)
- Discount frequency by segment, competitor, and sales rep
Most importantly, I track discount reasons:
- Competitive pressure (competitor offered lower price)
- Budget constraints (buyer had limited budget)
- ROI uncertainty (buyer needed to prove value first)
- Strategic relationship (exec-level decision to invest in key account)
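Frequency, depth, and reason mix all fall out of one small script. A sketch, assuming sales ops captures a `discount_pct` and a `discount_reason` on each closed deal (both field names are assumptions, and the data is made up):

```python
import pandas as pd

# Made-up closed-won deals with discount data captured at close.
deals = pd.DataFrame({
    "discount_pct":    [0, 15, 20, 0, 25, 10],
    "discount_reason": [None, "competitive", "roi_uncertainty", None,
                        "budget", "roi_uncertainty"],
})

discounted = deals[deals["discount_pct"] > 0]

frequency = len(discounted) / len(deals)       # share of deals discounted at all
depth     = discounted["discount_pct"].mean()  # average discount when one is given
reasons   = discounted["discount_reason"].value_counts(normalize=True)

print(f"frequency: {frequency:.0%}, depth: {depth:.0f}%")
print(reasons)  # a large "roi_uncertainty" share is the positioning signal
```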
What this revealed:
62% of deals closed with some discount. Average discount depth was 18%.
That seemed high, but I didn't know if it was a positioning problem or a market dynamics problem.
When I segmented by discount reason:
- Competitive pressure: 34% of discounted deals
- Budget constraints: 22% of discounted deals
- ROI uncertainty: 31% of discounted deals
- Strategic relationship: 13% of discounted deals
ROI uncertainty was the largest category. Sales was discounting because buyers weren't confident they'd realize value fast enough to justify full price.
This wasn't a pricing problem—it was a positioning problem. We weren't demonstrating value credibly enough.
PMM decision: Build customer proof points showing time-to-value. Case studies emphasizing "achieved ROI in 90 days" instead of generic success stories. Reference calls with customers who'd realized fast value.
Discount frequency dropped from 62% to 51% over two quarters. When buyers believed they'd get value fast, they paid full price.
Metric #5: Feature Objections in Lost Deals
What it measures: Which missing features or product gaps are mentioned in deals we lose.
Why it matters: This tells you whether you're losing on positioning (you have the feature but didn't communicate it) or product gaps (you genuinely lack what buyers need).
If prospects keep asking for Feature X and you have it, that's a positioning failure—sales doesn't know how to position it.
If prospects keep asking for Feature Y and you don't have it, that's roadmap input for product.
How I track it:
I run win/loss interviews on every lost competitive deal and code the reasons prospects gave for choosing the competitor.
I categorize objections:
- Feature gaps (we lack a feature they needed)
- Positioning failures (we have the feature but didn't communicate it effectively)
- Pricing (competitor was cheaper)
- Relationship (competitor had existing relationship)
- Other
I track feature gap mentions over time to see patterns.
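Once the interviews are coded, counting mentions is trivial. A sketch in plain Python, with made-up objection tags per lost deal:

```python
from collections import Counter

# Made-up win/loss interview codes: one list of tagged objections per lost deal.
lost_deals = [
    ["feature_a", "pricing"],
    ["feature_a"],
    ["feature_b", "relationship"],
    ["feature_a", "feature_c"],
]

mentions = Counter(tag for deal in lost_deals for tag in deal)
total = len(lost_deals)

# Report each objection with the share of lost deals it appeared in.
for tag, n in mentions.most_common():
    print(f"{tag}: mentioned in {n}/{total} lost deals ({n / total:.0%})")
```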
What this revealed:
I ran interviews on 40 lost competitive deals over one quarter.
Feature objections appeared in 28 of those deals (70%).
When I coded which features:
- Feature A: mentioned in 15 deals (38%)
- Feature B: mentioned in 8 deals (20%)
- Feature C: mentioned in 5 deals (13%)
Feature A kept coming up. I checked with product—we had Feature A, but it was buried in our settings and poorly documented.
This wasn't a roadmap gap. This was a positioning and enablement gap. Sales didn't know we had it, so they couldn't counter the objection.
PMM decision: Update battle cards to explicitly address Feature A. Train sales on how to demo it. Add it to competitive comparison pages.
Win rate in deals where Feature A was mentioned improved from 22% to 54% over the next two quarters.
Features B and C were legitimate product gaps. I took that data to product with evidence: "We're losing $X in pipeline per quarter because we don't have Feature B. Here's the business case for prioritizing it."
Metric #6: Churn Rate by Segment
What it measures: What percentage of customers cancel or don't renew, segmented by how they were originally sold.
Why it matters: High churn in specific segments reveals positioning or ICP problems. If you're selling to customers who don't get value, that's a PMM failure—wrong positioning attracted wrong customers.
How I track it:
I track 12-month retention rate for:
- Customers sold on different use cases or positioning angles
- Customers from different verticals
- Customers from different deal sizes
- Customers sold through different channels
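Here's a sketch of the cohort comparison, assuming each customer record carries the original use-case positioning (`sold_on`) and a 12-month renewal flag (both assumed fields; the data is made up):

```python
import pandas as pd

# Made-up customer list: original positioning angle and 12-month retention flag.
customers = pd.DataFrame({
    "sold_on":       ["use_case_a"] * 4 + ["use_case_b"] * 4,
    "retained_12mo": [True, True, True, True, True, False, True, False],
})

# Mean of a boolean column = retention rate per cohort.
retention = customers.groupby("sold_on")["retained_12mo"].mean()
print(retention)  # a wide gap between cohorts points at a positioning problem
```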
What this revealed:
Overall 12-month retention was 87%, which seemed healthy.
But when I segmented by original use case positioning:
- Customers sold on Use Case A: 94% retention
- Customers sold on Use Case B: 76% retention
Huge gap.
I interviewed churned customers from Use Case B. Pattern: they'd bought expecting outcome X, but our product delivered outcome Y. The positioning had set wrong expectations.
Sales had been selling Use Case B because it was an easier initial sell, but those customers churned fast because they didn't get the value we'd promised.
PMM decision: Tighten the ICP definition for Use Case B positioning. Only target companies where the product could genuinely deliver the outcome the positioning promised. Stop using Use Case B positioning as a generic "get in the door" message.
New customer retention for Use Case B improved to 88% over the next year because we stopped selling to customers who'd churn.
Metric #7: Expansion Revenue Rate
What it measures: How much existing customers expand their contracts over time.
Why it matters: Expansion revenue reveals whether your original positioning led customers to discover additional value, or if you undersold them initially.
High expansion rates can signal either good land-and-expand positioning or weak initial positioning that left money on the table.
How I track it:
I track:
- Percentage of customers who expand within 12 months
- Average expansion deal size as percentage of original deal
- Time to first expansion
- Features/products customers expand into
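A sketch of the first two numbers, assuming each account record carries the original contract value and any expansion booked within 12 months (field names assumed, data made up):

```python
import pandas as pd

# Made-up accounts: initial contract value and 12-month expansion bookings.
accounts = pd.DataFrame({
    "initial_acv":   [100_000, 250_000, 80_000, 400_000],
    "expansion_acv": [50_000, 0, 40_000, 180_000],  # 0 = no expansion
})

expanded = accounts[accounts["expansion_acv"] > 0]

pct_expanding = len(expanded) / len(accounts)
avg_expansion = (expanded["expansion_acv"] / expanded["initial_acv"]).mean()

print(f"{pct_expanding:.0%} of customers expanded within 12 months; "
      f"average expansion was {avg_expansion:.0%} of the original deal")
```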
What this revealed:
68% of customers expanded within 12 months. Average expansion was 47% of original deal size.
That seemed good, but when I looked at which features customers expanded into:
- 82% expanded to Feature Set A (which we hadn't included in initial deal)
- 61% expanded to Feature Set B (also not in initial deal)
These weren't upsells to premium features. These were core features that customers needed from day one but hadn't bought initially.
I interviewed sales. The pattern: they were selling "starter packages" to reduce initial deal friction, then expanding later once customers saw value.
This was leaving money on the table. If 82% of customers eventually bought Feature Set A, we should've included it in initial positioning, not positioned it as an add-on.
PMM decision: Repackage the standard offering to include Feature Sets A and B by default, increasing initial deal size and reducing expansion deal complexity.
Average initial deal size increased 38%. Expansion revenue rate dropped to 54% (because we were selling more upfront), but net revenue per customer increased because we weren't waiting 6 months to sell features customers needed from day one.
Why These Metrics Matter More Than Marketing Metrics
Every PMM dashboard should answer one question: "Is our positioning making revenue more efficient?"
Marketing metrics (content downloads, MQLs, campaign engagement) don't answer that question. They tell you whether people are interested, not whether that interest converts to revenue.
Sales metrics tell you whether your positioning:
- Wins competitive deals (win rate by competitor)
- Commands premium pricing (ASP by segment)
- Creates urgency (sales cycle length)
- Justifies full price (discount frequency)
- Communicates capabilities (feature objections)
- Attracts right-fit customers (churn by segment)
- Positions full value upfront (expansion rate)
These are the metrics that determine PMM's business impact.
When I switched from tracking marketing metrics to tracking sales metrics, PMM's credibility with the revenue team completely changed.
The CRO started inviting me to forecast calls because I understood pipeline dynamics.
The CFO started asking me for input on pricing strategy because I tracked ASP and discount patterns.
Sales leadership started treating PMM as partners instead of content producers because I measured success the same way they did.
If you're a product marketer tracking MQLs and content downloads, you're measuring the wrong things.
Track what actually matters: revenue efficiency.