The VP Marketing asked me to set OKRs for the PMM team. "Make them measurable," she said. "Tie them to business outcomes."
I spent two weeks crafting what I thought were excellent OKRs:
Objective: Drive product adoption and revenue growth
Key Results:
- Increase monthly active users by 25%
- Improve marketing qualified lead (MQL) conversion by 15%
- Launch 12 products with 90%+ on-time delivery
- Achieve 95% sales certification on new products
- Reduce customer churn by 10%
I was proud of these. They were specific, measurable, tied to business outcomes. I presented them confidently.
The VP Marketing frowned. "How much of MAU growth does PMM actually control?"
Me: "Well, that's shared with Product and Marketing..."
Her: "And MQL conversion—isn't that mostly Demand Gen's responsibility?"
Me: "Yes, but PMM influences messaging..."
Her: "So you're being measured on metrics you don't control?"
I realized my mistake. I'd chosen metrics that sounded impressive but didn't reflect what PMM actually owned. I'd be accountable for outcomes I couldn't directly influence.
Worse, my key results were all lagging indicators. By the time we'd know if we hit them, it would be too late to course-correct.
We spent the next quarter iterating on PMM OKRs. We went through three complete revisions before landing on a framework that actually worked.
Here's what I learned about setting OKRs for product marketing—and what actually drives accountability versus what creates false precision.
What I Got Wrong: Choosing Metrics I Didn't Control
My first attempt at PMM OKRs had a fatal flaw: I'd chosen metrics PMM influenced but didn't own.
The test: "If my entire PMM team executed perfectly, could we still miss this metric because other teams underperformed?"
For every metric on my list, the answer was yes:
MAU growth: Could miss if Product builds features customers don't want or if Customer Success fails at onboarding.
MQL conversion: Could miss if Demand Gen targets wrong audience or if website UX is broken.
Churn reduction: Could miss if Product has quality issues or if Customer Success doesn't intervene.
These weren't PMM OKRs. They were company OKRs that PMM contributed to.
The problem with being measured on metrics you don't control: You end up spending all your time negotiating with other teams instead of doing your actual job.
What I should have done: Distinguish between metrics PMM contributes to and metrics PMM owns.
Contribute to (company goals, PMM inputs):
- Revenue growth
- Customer acquisition
- Product adoption
- Customer retention
Own (PMM-controlled outputs and outcomes):
- Sales win rates in competitive deals
- Launch readiness and execution quality
- Sales team product knowledge and certification
- Time to productivity for new product launches
The first set is important but shared. The second set is directly influenced by the quality of PMM's work.
The Framework That Actually Worked: Inputs, Outputs, Outcomes
After three failed attempts, we landed on a framework with three layers:
Layer 1: Inputs (Activities PMM commits to doing)
These are the things PMM will definitely do, regardless of outcome. They establish baseline execution expectations.
Examples:
- Conduct 30 customer interviews per quarter
- Publish competitive intel updates weekly
- Deliver sales enablement 2 weeks before every launch
- Complete win/loss interviews for 80% of closed deals
Why this matters: Establishes discipline and baseline rigor. If PMM isn't doing these inputs consistently, we can't expect good outcomes.
Layer 2: Outputs (Deliverables PMM creates)
These are the artifacts PMM produces. Quality and timeliness matter.
Examples:
- 90% of sales reps certified on new products within 2 weeks of launch
- Battle cards for top 5 competitors refreshed monthly
- Launch briefs delivered 6 weeks before ship date
- Messaging frameworks completed for 100% of T1/T2 launches
Why this matters: Demonstrates PMM is delivering what stakeholders need, when they need it. Addresses the "PMM is too slow" complaint.
Layer 3: Outcomes (Business impact from PMM work)
These are the results that matter. They're influenced by PMM quality but not entirely controlled.
Examples:
- Win rate against top competitor improves from 35% to 45%
- New product attach rate reaches 25% within 90 days of launch
- Sales ramp time for new hires reduced from 90 to 60 days
- Customer-reported positioning clarity increases 20% in surveys
Why this matters: Proves PMM drives business value, not just creates deliverables.
Our Final OKRs (What Actually Worked)
After those three revisions, here's what our Q3 2024 PMM OKRs looked like:
Objective 1: Improve competitive win rates through better intelligence and enablement
KR1 (Output): Deliver refreshed battle cards for top 3 competitors monthly, with 90%+ sales team utilization
KR2 (Outcome): Increase win rate against Competitor X from 35% to 45%
KR3 (Input): Complete 20 win/loss interviews and identify top 3 competitive gaps
Why this worked: Clear PMM ownership. We control battle card quality and timeliness (Output). We influence win rates through enablement quality (Outcome). We commit to research rigor (Input).
Objective 2: Drive faster adoption of new products through launch excellence
KR1 (Output): Achieve 95% sales certification on all T1/T2 launches within 2 weeks
KR2 (Outcome): Reach 30% attach rate for new product within 60 days of launch
KR3 (Input): Deliver launch briefs and enablement materials 4 weeks before ship date
Why this worked: We control enablement delivery and certification (Output, Input). We heavily influence adoption through positioning quality (Outcome).
Objective 3: Build systematic competitive intelligence program
KR1 (Input): Publish weekly competitive intel updates capturing 80%+ of major competitor changes
KR2 (Output): Competitive intelligence database with 100% coverage of top 5 competitors
KR3 (Outcome): 80% of sales reps report competitive intel is "very helpful" in deals (quarterly survey)
Why this worked: We control the process and deliverables. We measure impact through stakeholder feedback, which PMM directly influences.
What Changed: From Vanity Metrics to Real Accountability
The difference between our first OKRs and final OKRs:
First attempt (what sounded good):
- Increase MAU 25%
- Improve MQL conversion 15%
- Reduce churn 10%
These looked impressive but were mostly outside PMM's control.
Final version (what PMM actually owns):
- Win rate against Competitor X: 35% → 45%
- Sales certification rate: 95% within 2 weeks
- Attach rate for new products: 30% within 60 days
These were directly influenced by PMM work quality.
The shift: From company metrics PMM contributes to → metrics PMM owns.
The Leading Indicators That Actually Predicted Success
The biggest revelation: OKRs should include leading indicators you can influence, not just lagging indicators you measure.
Lagging indicator: Win rate improved 10 percentage points
Leading indicators:
- 90% of sales reps used battle cards in competitive deals
- 85% of sales reps reported battle cards were "very helpful"
- Average time from competitive intel to updated battle card: <5 days
Why leading indicators matter: You can see problems early and fix them before they impact outcomes.
Example: Three weeks into the quarter, we checked leading indicators:
- Battle card usage: Only 60% (target: 90%)
- Battle card helpfulness rating: 70% (target: 85%)
This told us our battle cards weren't good enough. We revised them mid-quarter. By quarter end, usage hit 88% and win rates improved.
Without leading indicators, we'd only know at quarter end that we missed the target—too late to fix it.
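To make that mid-quarter check concrete, here's a minimal sketch in Python. The indicator names, targets, and actuals are the hypothetical figures from the example above, not output from any tool we used.

```python
# Minimal sketch of a mid-quarter leading-indicator check. Indicator names,
# targets, and actuals are the hypothetical figures from the example above.

LEADING_INDICATORS = {
    "battle_card_usage_pct":       {"target": 90, "actual": 60},
    "battle_card_helpfulness_pct": {"target": 85, "actual": 70},
    "intel_to_battle_card_days":   {"target": 5,  "actual": 4, "lower_is_better": True},
}

def flag_off_track(indicators: dict) -> list[str]:
    """Return the indicators that are currently missing their targets."""
    off_track = []
    for name, data in indicators.items():
        if data.get("lower_is_better"):
            missing = data["actual"] > data["target"]
        else:
            missing = data["actual"] < data["target"]
        if missing:
            off_track.append(name)
    return off_track

for name in flag_off_track(LEADING_INDICATORS):
    print(f"Off track: {name} -- revisit tactics now, not at quarter end")
```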
The Metrics We Stopped Tracking (And Why)
Stopped tracking: Number of launches delivered
Why: Volume doesn't equal quality. We were doing 12 launches per quarter but half were poorly executed. Switching to "90% of T1/T2 launches delivered with full enablement 4 weeks before ship date" forced quality over quantity.
Stopped tracking: MQLs generated from product launches
Why: Too many variables outside PMM's control (paid spend, list quality, event attendance). Switched to "attach rate within 60 days," which better reflects positioning quality.
Stopped tracking: Customer NPS
Why: Influenced by product quality, customer success, support—not primarily PMM. Switched to "positioning clarity" in customer surveys, which PMM directly impacts.
Stopped tracking: Sales quota attainment
Why: Dozens of factors influence this. Switched to "sales ramp time," which PMM enablement directly impacts.
The principle: Stop tracking metrics that make PMM look good but don't reflect PMM quality.
How We Actually Measured These OKRs
Some OKRs are easy to measure (sales certification rates, launch delivery dates). Others require building measurement systems.
Measuring: Win rates in competitive deals
System:
- Salesforce report filtering for "Competitor X" in competitive field
- Monthly export to calculate win rate trend
- Win/loss interviews to validate data quality
Cadence: Updated monthly, reviewed in team OKR check-ins
Owner: PMM Ops person maintains report, shares with team
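The monthly roll-up itself is simple enough to script. Here's a minimal sketch, assuming a hypothetical CSV export with competitor, stage, and close_month columns; your actual report fields will differ.

```python
# Minimal sketch of the monthly win-rate trend from a Salesforce export.
# The file name and column names (competitor, stage, close_month) are
# hypothetical placeholders -- map them to your actual report fields.
import csv
from collections import defaultdict

def win_rate_trend(path: str, competitor: str = "Competitor X") -> dict[str, float]:
    """Return {close_month: win_rate} for closed deals against one competitor."""
    wins: dict[str, int] = defaultdict(int)
    total: dict[str, int] = defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["competitor"] != competitor:
                continue
            if row["stage"] not in ("Closed Won", "Closed Lost"):
                continue
            total[row["close_month"]] += 1
            if row["stage"] == "Closed Won":
                wins[row["close_month"]] += 1
    return {month: wins[month] / total[month] for month in sorted(total)}

# Example: the trend reviewed in monthly OKR check-ins.
# for month, rate in win_rate_trend("closed_deals_export.csv").items():
#     print(f"{month}: {rate:.0%}")
```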
Measuring: Battle card utilization
System:
- Battle cards stored in Highspot (tracks views and downloads)
- Monthly survey asking sales reps: "Did you use battle cards in competitive deals this month?"
- Win/loss interviews asking if battle card was helpful
Cadence: Usage data pulled monthly, survey quarterly
Owner: PMM Ops pulls data, competitive intel PMM analyzes
Measuring: Positioning clarity
System:
- Added two questions to the quarterly customer survey:
  - "How clearly do you understand what [product] does?" (1-5 scale)
  - "How well can you explain our product's value to colleagues?" (1-5 scale)
- Tracked quarter-over-quarter change
Cadence: Quarterly customer survey (handled by Customer Success)
Owner: PMM analyzes results, shares with team
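The quarter-over-quarter math behind a target like "positioning clarity increases 20%" is just the change in average rating. A minimal sketch, with hypothetical 1-5 responses:

```python
# Minimal sketch of the quarter-over-quarter positioning-clarity calculation.
# The ratings below are hypothetical 1-5 responses to the first survey question.

def qoq_change(previous: list[int], current: list[int]) -> float:
    """Percent change in the average rating versus the prior quarter."""
    prev_avg = sum(previous) / len(previous)
    curr_avg = sum(current) / len(current)
    return (curr_avg - prev_avg) / prev_avg * 100

q2_ratings = [3, 4, 3, 2, 4, 3, 3]  # "How clearly do you understand what [product] does?"
q3_ratings = [4, 4, 3, 4, 5, 3, 4]

print(f"Positioning clarity: {qoq_change(q2_ratings, q3_ratings):+.1f}% quarter over quarter")
```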
The key: Don't let perfect measurement prevent setting useful OKRs. Directional accuracy is enough.
The Quarterly Rhythm That Made OKRs Useful
OKRs only work if you actually use them to drive decisions and prioritization.
Week 1 of quarter:
- Team workshop: Review last quarter's OKR performance
- Set current quarter OKRs based on company priorities
- Assign owners for each key result
- Define measurement systems and reporting cadence
Weekly:
- 15-minute OKR standup in team meeting
- Each owner shares progress, flags blockers
- Team identifies what needs to accelerate or pivot
Mid-quarter (Week 6-7):
- Full team review of OKR progress
- Identify if we're on track or need course correction
- Adjust tactics if leading indicators are off
End of quarter:
- Score OKRs (0.0 to 1.0 scale)
- Retrospective: What worked, what didn't
- Document learnings for next quarter
The discipline: If you're not reviewing OKRs weekly, they're not actually driving your work.
What "Good" OKR Performance Actually Looks Like
Common misconception: You should hit 100% of OKRs.
Reality: If you hit 100% of OKRs, you set them too low.
The scoring framework we used:
0.0-0.3: Significantly missed. Something went wrong (lack of resources, wrong strategy, external factors).
0.4-0.6: Partially achieved. Made progress but fell short of ambitious target.
0.7-0.9: Achieved or exceeded. Good execution and realistic but stretching goals.
1.0: Crushed it. Either got lucky or set targets too conservatively.
Our target: Average 0.7 across all OKRs
If we're consistently scoring 0.9+, we're not setting ambitious enough goals. If we're scoring below 0.5, we're being unrealistic or under-resourced.
Example Q3 results:
Objective 1: Competitive win rates
- KR1 (Battle cards refreshed monthly): 1.0 (achieved)
- KR2 (Win rate 35% → 45%): 0.8 (reached 43%)
- KR3 (20 win/loss interviews): 0.6 (completed 12)
- Objective score: 0.8 (strong performance)
Objective 2: Product launch adoption
- KR1 (95% sales certification): 0.9 (achieved 93%)
- KR2 (30% attach rate): 0.5 (reached 22%)
- KR3 (Launch briefs 4 weeks early): 1.0 (achieved)
- Objective score: 0.8 (good execution, but adoption below target)
Overall quarter: 0.8 average—strong performance
The 0.5 on attach rate triggered an investigation. We learned positioning was fine but pricing was the barrier. We fed that insight to the Product team, and it was fixed the following quarter.
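For reference, the roll-up math is nothing more than averaging: each objective's score is the mean of its key results, and the quarter score is the mean of the objectives. A minimal sketch using the Q3 numbers above (simple averaging is one common convention, and it matches those figures):

```python
# Minimal sketch of the end-of-quarter roll-up: each objective's score is the
# simple average of its key-result scores, and the quarter score averages the
# objectives. KR scores below are the Q3 example figures from this post.

OKRS = {
    "Competitive win rates": {
        "Battle cards refreshed monthly": 1.0,
        "Win rate 35% -> 45%": 0.8,
        "20 win/loss interviews": 0.6,
    },
    "Product launch adoption": {
        "95% sales certification": 0.9,
        "30% attach rate": 0.5,
        "Launch briefs 4 weeks early": 1.0,
    },
}

def objective_score(key_results: dict[str, float]) -> float:
    """Average the key-result scores for one objective."""
    return sum(key_results.values()) / len(key_results)

objective_scores = {name: objective_score(krs) for name, krs in OKRS.items()}
quarter_score = sum(objective_scores.values()) / len(objective_scores)

for name, score in objective_scores.items():
    print(f"{name}: {score:.1f}")
print(f"Quarter average: {quarter_score:.1f}")  # 0.8 -> strong quarter
```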
The Mistakes Teams Make with PMM OKRs
Mistake 1: Too many OKRs
Some teams set 5-6 objectives with 15+ key results. Nobody can focus on that many things.
Fix: 3 objectives max, 3 key results each. Focus beats breadth.
Mistake 2: All lagging indicators
OKRs with only end-state metrics (win rates, revenue, adoption) provide no early signals.
Fix: Mix of leading and lagging indicators. Include inputs and outputs alongside outcomes.
Mistake 3: Metrics you don't control
Being measured on company-wide metrics that PMM influences but doesn't own.
Fix: Focus on metrics PMM directly impacts. Contribute to company goals, but don't own them as PMM OKRs.
Mistake 4: No measurement system
Setting OKRs without infrastructure to measure them.
Fix: Define measurement approach when setting the OKR. If you can't measure it, don't make it a key result.
Mistake 5: Set and forget
Creating OKRs at start of quarter then never reviewing them.
Fix: Weekly check-ins, mid-quarter deep review, end-of-quarter retrospective.
For Teams Building PMM Measurement Systems
As PMM teams grow, tracking multiple OKRs across competitive intelligence, launches, and enablement becomes complex. Some teams find value in platforms like Segment8 that consolidate metrics from competitive programs, launch performance, and sales enablement into unified dashboards—reducing the operational overhead of measuring PMM impact across disconnected systems.
The Uncomfortable Truth About PMM OKRs
Most PMM teams avoid setting real OKRs because it creates accountability they're not ready for.
It's easier to say "PMM contributes to revenue growth" than to commit to "Win rate against Competitor X improves from 35% to 45%."
The first is vague enough to claim success regardless of results. The second is specific enough to be held accountable.
Real OKRs create real accountability. That's uncomfortable.
But it's also how PMM earns credibility and influence. When you commit to measurable outcomes and deliver them, you become indispensable. When you hide behind vague contributions, you become expendable.
The teams that scale PMM successfully:
- Set specific OKRs focused on metrics PMM owns
- Mix inputs, outputs, and outcomes
- Include leading indicators for early course correction
- Review progress weekly and adjust tactics mid-quarter
- Accept accountability for results
The teams that stay in ambiguous "we contribute" mode:
- Avoid specific commitments
- Measure only lagging indicators
- Set OKRs once and forget them
- Blame other teams when metrics miss
The choice is yours: Vague contributions that make you replaceable, or specific outcomes that make you essential.
Set real OKRs. Track them rigorously. Deliver them consistently. That's how PMM becomes strategic.