I'd been using the Pragmatic Framework for two years. I knew the boxes. I could build buyer personas, create positioning, plan launches, enable sales.
Then I watched a senior PMM use the framework differently.
We were debating which market segment to target: mid-market or enterprise. Product wanted enterprise (bigger deals). Sales wanted mid-market (shorter cycles). Everyone had opinions, nobody had data.
The senior PMM said: "Let's use the framework to decide."
She pulled out Market box insights (buyer research), Focus box analysis (competitive positioning by segment), Business box data (pricing and win rates), Programs box metrics (launch success by segment), and Readiness box feedback (what sales could actually execute).
Within 30 minutes, the data made the choice obvious: mid-market for the next 12 months, then expand to enterprise.
That's when I learned: Basic Pragmatic is about executing the boxes. Advanced Pragmatic is about using the boxes to drive strategic decisions.
Beyond Executing the Framework: Strategic Application
Most PMMs use the Pragmatic Framework tactically. They interview customers and build personas for the Market box. They write positioning statements for the Focus box. They create pricing calculators for the Business box. They plan launches for the Programs box. They make sales decks for the Readiness box.
That's execution. It's valuable. But it's not strategic.
Advanced Pragmatic means using the framework to make market entry decisions—which segment to target, which category to own. It means influencing product strategy—what to build and what not to build. It means creating competitive moats—sustainable differentiation competitors can't copy. It means driving organizational change—getting product, sales, and marketing aligned on a single direction.
Here's how I learned to do this.
Using Buyer Research to Reshape Product Roadmap
For a year, I'd been doing basic Market box work: interview customers, document personas, share insights with product. Product would thank me, file the insights, and continue building whatever they'd already planned.
Then I learned to connect buyer research directly to product decisions.
I was at a company where product built features based on customer requests. A customer would ask for feature X, product would add it to the roadmap, we'd build it, and nobody would use it. This happened repeatedly.
I interviewed customers not about features but about jobs. What decision were you trying to make when you opened the product? What happened when you couldn't accomplish that? How much time or money did that cost you?
One customer said: "I needed to forecast next quarter's pipeline for a board meeting. I spent three hours manually pulling data from Salesforce into a spreadsheet. I presented it to the board and they questioned my numbers because the data was already a week old. I looked incompetent."
That's not a feature request. That's a job—produce an accurate forecast for board scrutiny—with real business impact. He does this weekly. The current solution (manual spreadsheets) costs him three hours per week and creates board-level risk.
I interviewed 15 more customers about their jobs. Forecasting next quarter's pipeline came up in 12 of those conversations. Everyone did it weekly. Everyone felt board-level pressure. Everyone used manual spreadsheets.
I mapped this to the feature product was planning to build: a competitor tracking dashboard. I asked the same 15 customers about competitor tracking. Three of them mentioned it. They did it quarterly. The business impact was low—nice to know, not need to know. They had no current solution because it wasn't urgent enough to solve.
I presented this to product: "We're planning to build competitor tracking. Three customers mentioned it, they do it quarterly, low business impact. But 12 customers desperately need automated forecasting for board meetings. They do it weekly, high business impact, currently using painful manual processes. Which should we build?"
Product built the forecasting feature. Adoption was 80% within the first month. The competitor tracking dashboard stayed on the backlog.
What changed: I stopped documenting what customers asked for and started mapping jobs to urgency based on frequency and business impact. Product used that data to prioritize features that solved urgent problems instead of nice-to-have requests.
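If you want to make that prioritization explicit, the scoring is simple enough to sketch in a few lines. This is illustrative only: the weights and numbers below are stand-ins, not the actual interview data.

```python
# Minimal sketch of job-based prioritization (hypothetical weights and numbers).
# Urgency = how many buyers mentioned the job x how often they do it x how much failure costs.

FREQ_WEIGHT = {"weekly": 52, "monthly": 12, "quarterly": 4}   # occurrences per year
IMPACT_WEIGHT = {"low": 1, "medium": 3, "high": 9}            # relative cost of failure

jobs = [
    {"job": "Forecast next quarter's pipeline", "mentions": 12,
     "frequency": "weekly", "impact": "high"},
    {"job": "Track competitor activity", "mentions": 3,
     "frequency": "quarterly", "impact": "low"},
]

def urgency(job):
    """Rough urgency score: reach times frequency times business impact."""
    return job["mentions"] * FREQ_WEIGHT[job["frequency"]] * IMPACT_WEIGHT[job["impact"]]

for job in sorted(jobs, key=urgency, reverse=True):
    print(f"{job['job']}: urgency score {urgency(job)}")
```

The exact weights matter less than forcing every candidate feature through the same frequency-and-impact lens.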
Finding Underserved Segments Through Win/Loss Patterns
Most win/loss analysis asks a single question: Why did we win or lose this deal?
I learned to ask a different question: What pattern of customers do we consistently win, and what pattern do we consistently lose?
I analyzed six months of win/loss data and segmented it by customer type. Mid-market SaaS companies: 65% win rate, 6-week sales cycle, $40K average contract value, 95% retention. Why we won: fast implementation compared to alternatives. Why we lost: price compared to enterprise platforms.
Enterprise companies: 25% win rate, 6-month sales cycle, $150K average contract value, 70% retention. Why we won: existing relationship with champion. Why we lost: missing features enterprise required.
SMB companies: 45% win rate, 3-week sales cycle, $10K average contract value, 60% retention. Why we won: easy buying process. Why we lost: churned after 12 months when they outgrew basic features.
We were trying to win in all three segments. Our positioning was "for companies of all sizes." Our roadmap had features for SMB (simple workflows), mid-market (integrations), and enterprise (advanced security). We weren't great at any of them.
I presented to product and the exec team: "We have 65% win rate in mid-market with 95% retention. We have 25% win rate in enterprise with 70% retention. We should dominate mid-market and stop competing in enterprise until we have the features to actually win those deals."
Product focused the roadmap on mid-market capabilities. We stopped positioning for enterprise. We let enterprise deals go. Within two quarters, our mid-market win rate increased to 78% and retention stayed above 90%.
What changed: I stopped treating win/loss as deal-by-deal feedback and started looking for segment-level patterns. The data showed us which customers we were built to serve and which ones we should walk away from until we could genuinely win.
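The roll-up itself is mechanical once win/loss records live somewhere structured, like a CRM export. Here's a minimal sketch; the field names and sample deals are hypothetical, not our actual data.

```python
# Sketch: roll deal-level win/loss records up into segment-level patterns.
# Records and field names are hypothetical stand-ins for a CRM export.
from collections import defaultdict

deals = [
    {"segment": "mid-market", "won": True,  "acv": 42_000,  "cycle_weeks": 6},
    {"segment": "mid-market", "won": False, "acv": 38_000,  "cycle_weeks": 7},
    {"segment": "enterprise", "won": False, "acv": 150_000, "cycle_weeks": 26},
    {"segment": "smb",        "won": True,  "acv": 10_000,  "cycle_weeks": 3},
    # ...six months of deals
]

by_segment = defaultdict(list)
for deal in deals:
    by_segment[deal["segment"]].append(deal)

for segment, records in by_segment.items():
    win_rate = sum(d["won"] for d in records) / len(records)
    avg_acv = sum(d["acv"] for d in records) / len(records)
    avg_cycle = sum(d["cycle_weeks"] for d in records) / len(records)
    print(f"{segment}: win rate {win_rate:.0%}, "
          f"avg ACV ${avg_acv:,.0f}, avg cycle {avg_cycle:.0f} weeks")
```

Pair each segment's numbers with the qualitative "why we won / why we lost" notes and the pattern usually jumps out.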
Building Positioning That Competitors Can't Copy
Most positioning is feature-based. We have X capability. We do Y better than competitors.
The problem: competitors add that feature six months later, and your differentiation disappears.
I learned to position on workflow integration instead of features.
We sold a revenue intelligence product. I could position on features: "We have forecast accuracy and pipeline visibility." Competitors could copy those features in a quarter. They'd announce "Now with forecast accuracy" and suddenly we weren't different anymore.
Instead, I positioned on workflow: "We're the only revenue intelligence platform built into your existing forecast workflow—no new tools, no data migration, works with what you already use."
Competitors couldn't copy this in a quarter. It required deep integrations with the tools our customers already used. It required understanding the weekly forecast workflow. It required designing around how revenue leaders actually worked, not just what features they said they wanted.
When a competitor announced "Now with forecast accuracy," it didn't matter. Their solution required migrating data and learning a new tool. Ours fit into the workflow they already had.
The positioning held for 18 months even as competitors added similar features. The differentiation wasn't the feature—it was how we delivered it within existing workflows.
Positioning Between Two Extremes
I worked at a sales engagement company competing against Outreach and Salesloft. We positioned as "better sales engagement" and lost constantly. We were smaller, less feature-rich, and couldn't compete on enterprise capabilities.
Then I learned the Goldilocks positioning strategy: position between two extremes instead of against the market leader.
The extremes were obvious. On one side: Outreach and Salesloft—enterprise platforms, expensive, slow to implement, incredibly feature-rich. On the other side: spreadsheets and manual processes—free, fast, but severely limited in capabilities.
We weren't trying to be better than Outreach. We couldn't win that fight. We also weren't a spreadsheet replacement. We had too much capability for that comparison.
So I positioned in the middle: "Not as heavy as enterprise platforms, not as limited as spreadsheets. The right fit for mid-market teams who need power without complexity."
Buyers who found Outreach too expensive came to us. Buyers who found spreadsheets too limited came to us. We stopped competing with either extreme—we owned the middle.
Pipeline doubled within a quarter. We weren't fighting for enterprise deals we'd lose or SMB deals we couldn't monetize. We were winning deals from customers who needed exactly what we offered: more than a spreadsheet, less than an enterprise platform.
Using Pricing Research to Validate Product Ideas
Most companies price after the product is built. We built this, now what should we charge?
I learned to use pricing research before building features to validate whether customers would actually pay for them.
Product wanted to build a major feature: custom reporting dashboards. The engineering estimate was 6 months. Before product started building, I ran pricing research with 20 customers.
If we built custom reporting dashboards, how much more would you pay per month? Most said $0-$50. Would you pay $200 per month for this capability? Most said no. If we don't build this, would you churn? Nobody said yes.
I ran the same research on automated forecast accuracy. How much more would you pay? Most said $200-$500. Would you pay $300 per month? Most said yes immediately. If we don't build this, would you churn? Half said they'd strongly consider leaving.
I presented to product: "Custom reporting has low willingness-to-pay, most customers won't pay extra for it, and nobody will churn without it. Automated forecasting has high willingness-to-pay, customers will pay $300/month, and we risk churn if we don't build it. Build forecasting first."
Product built forecasting. We priced it at $250/month additional. Attach rate was 60% within the first quarter. Custom reporting stayed in the backlog.
What changed: Pricing research became product validation. Before investing 6 months of engineering time, we asked customers what they'd pay for. High willingness-to-pay signaled urgent problems. Low willingness-to-pay signaled nice-to-haves that weren't worth building.
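The comparison behind that recommendation doesn't need special tooling. A rough sketch, with placeholder responses standing in for the 20 interviews:

```python
# Sketch: compare willingness-to-pay research across candidate features.
# Responses are hypothetical placeholders (truncated samples), not the real interviews.
# The "target price" was $200/month for reporting and $300/month for forecasting.

surveys = {
    "custom reporting dashboards": {
        "stated_wtp": [0, 0, 25, 50, 50],     # $/month each customer said they'd add
        "yes_at_target_price": 1,             # customers who said yes at the proposed price
        "would_churn_without": 0,
        "n": 20,
    },
    "automated forecast accuracy": {
        "stated_wtp": [200, 250, 300, 400, 500],
        "yes_at_target_price": 16,
        "would_churn_without": 10,
        "n": 20,
    },
}

for feature, s in surveys.items():
    median_wtp = sorted(s["stated_wtp"])[len(s["stated_wtp"]) // 2]
    print(f"{feature}: median stated WTP ${median_wtp}/mo, "
          f"{s['yes_at_target_price'] / s['n']:.0%} yes at target price, "
          f"{s['would_churn_without'] / s['n']:.0%} would consider churning without it")
```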
Creating Upgrade Paths Through Pricing Tiers
Most pricing is flat: one tier, one price, take it or leave it.
I learned to design pricing tiers that create natural upgrade paths as customers grow.
I designed three tiers. Starter at $500/month with basic features and limited to 5 users. Professional at $1,500/month with advanced features and unlimited users. Enterprise at custom pricing with custom workflows and dedicated support.
But here's what made it work: I designed the Professional tier to solve a problem Starter customers would hit after 6 months—team growth beyond the 5-user limit.
I talked to customers who'd been on Starter for 6+ months. Most had grown their team beyond 5 people. They were paying $500/month but couldn't add new users. The Professional tier solved exactly that problem at exactly the moment they hit it.
Forty percent of Starter customers upgraded to Professional within 9 months. We didn't have to sell the upgrade. The pricing structure created the upgrade motion naturally as customers grew.
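The tier structure is easy to express as data; the part that matters is the upgrade trigger. Here's a sketch of flagging Starter accounts that have outgrown their seat limit. The account records and field names are hypothetical.

```python
# Sketch: tier definitions plus a simple upgrade-trigger check.
# Tier limits mirror the structure described above; account data is hypothetical.

TIERS = {
    "starter":      {"price_per_month": 500,  "max_users": 5},
    "professional": {"price_per_month": 1500, "max_users": None},  # unlimited users
    "enterprise":   {"price_per_month": None, "max_users": None},  # custom pricing
}

accounts = [
    {"name": "Acme Co", "tier": "starter", "active_users": 7, "months_on_tier": 7},
    {"name": "Globex",  "tier": "starter", "active_users": 4, "months_on_tier": 3},
]

def ready_to_upgrade(account):
    """Flag accounts that have hit their tier's user cap: the natural upgrade moment."""
    limit = TIERS[account["tier"]]["max_users"]
    return limit is not None and account["active_users"] > limit

for account in accounts:
    if ready_to_upgrade(account):
        cap = TIERS[account["tier"]]["max_users"]
        print(f"{account['name']} has outgrown {account['tier']}: "
              f"{account['active_users']} users on a {cap}-seat plan")
```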
Using Launches to Shape Market Categories
Most companies launch features with the same positioning each time. Every launch reinforces the same category, the same message, the same competitive frame.
I learned to use sequential launches to shift market positioning gradually over time.
We were positioned as a "sales engagement platform"—an established category, easy for buyers to understand, but incredibly crowded with competition.
I didn't want to stay there. I wanted to move to "revenue intelligence"—a newer category with less competition, but buyers didn't understand it yet.
I couldn't shift positioning overnight. If I launched in Q1 saying "We're now a revenue intelligence platform," buyers would be confused. Sales wouldn't know how to pitch it. The market wasn't ready.
So I sequenced the positioning shift across four launches. Q1 launch: positioned as "sales engagement platform" (where we were). Q2 launch: positioned as "sales engagement with revenue intelligence" (introduced the new concept). Q3 launch: positioned as "revenue intelligence for sales teams" (shifted the emphasis). Q4 launch: positioned as "revenue intelligence platform" (owned the new category).
Each launch reinforced the shift. By Q4, we'd moved from the crowded sales engagement category to the newer revenue intelligence category. Win rate increased 25% because we'd redefined the competitive landscape through sequential positioning shifts.
Buyers expected the change because we'd been preparing them for three quarters. Sales knew how to pitch it because they'd been practicing the transition. The market understood the category because we'd been educating them progressively.
Using Beta Programs to Test Positioning and Pricing
Most beta programs test product functionality. Does the feature work? Are there bugs? Can customers use it?
I learned to use beta programs to test positioning and pricing before general availability.
For one product launch, I ran the beta with two cohorts. Cohort A saw positioning as "workflow automation platform" and pricing at $2,000/month. Cohort B saw positioning as "sales productivity tool" and pricing at $1,000/month.
I measured two things: how quickly each cohort activated and how well each retained. Cohort A took an average of 6 weeks to activate; Cohort B took 2 weeks. After 90 days, Cohort A had 90% retention and Cohort B had 60%.
The data showed a trade-off: the "sales productivity tool" package drove faster adoption, but the "workflow automation platform" package drove far better retention. Each cohort saw positioning and price together, so I couldn't fully separate the two effects, but the signal was clear: customers who paid $2,000 for the workflow story valued the product more than customers who paid $1,000 for the productivity pitch.
I launched with "workflow automation platform" positioning at $2,000/month. Adoption was slightly slower than it could have been, but retention was excellent. We'd validated positioning and pricing with real customer behavior, not guesses.
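Instrumenting a beta this way mostly means tagging each account with its cohort and comparing activation and retention. A minimal sketch with invented account data:

```python
# Sketch: compare two beta cohorts on time-to-activation and 90-day retention.
# Account records are invented; in practice this comes from product analytics.
from statistics import mean

accounts = [
    {"cohort": "A_workflow_2000",      "days_to_activate": 41, "retained_90d": True},
    {"cohort": "A_workflow_2000",      "days_to_activate": 45, "retained_90d": True},
    {"cohort": "B_productivity_1000",  "days_to_activate": 13, "retained_90d": False},
    {"cohort": "B_productivity_1000",  "days_to_activate": 15, "retained_90d": True},
    # ...the rest of the beta accounts
]

for cohort in sorted({a["cohort"] for a in accounts}):
    rows = [a for a in accounts if a["cohort"] == cohort]
    activation = mean(a["days_to_activate"] for a in rows)
    retention = sum(a["retained_90d"] for a in rows) / len(rows)
    print(f"{cohort}: avg {activation:.0f} days to activate, "
          f"{retention:.0%} retained at 90 days")
```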
Tracking Objections to Identify Product Gaps
Most objection handling is defensive. Here's how to overcome this objection. Here's what to say when prospects push back.
I learned to track objections systematically to identify product gaps that were costing us revenue.
I built a simple tracking system where sales logged every objection they heard in deals: the objection itself, how frequently it came up, and whether it was a deal-breaker or just a negotiation tactic.
After a quarter, the data showed clear patterns. "You don't have SSO" came up in 60% of enterprise deals and was a deal-breaker for 40% of those. "No mobile app" came up in 30% of deals and was a deal-breaker for 10%. "No custom reporting" came up in 25% of deals and was a deal-breaker for 5%.
I calculated the revenue impact. No SSO: losing 24% of enterprise deals (60% frequency × 40% deal-breaker rate). Average enterprise deal was $150K. We were evaluating 20 enterprise deals per quarter. We were losing approximately $720K ARR because we didn't have SSO.
No mobile app: losing 3% of deals. Revenue impact: $90K ARR. No custom reporting: losing 1.25% of deals. Revenue impact: $37K ARR.
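The math is just frequency times deal-breaker rate times deal volume times average deal size. Here's that roll-up as a sketch, using the figures above:

```python
# Sketch: convert tracked objections into estimated ARR at risk.
# Lost ARR ~= objection frequency x deal-breaker rate x deals evaluated x avg deal size.

DEALS_PER_QUARTER = 20
AVG_DEAL_SIZE = 150_000

objections = [
    {"objection": "No SSO",              "frequency": 0.60, "deal_breaker_rate": 0.40},
    {"objection": "No mobile app",       "frequency": 0.30, "deal_breaker_rate": 0.10},
    {"objection": "No custom reporting", "frequency": 0.25, "deal_breaker_rate": 0.05},
]

def arr_at_risk(obj):
    """Share of deals lost to this objection, converted to dollars per quarter."""
    lost_share = obj["frequency"] * obj["deal_breaker_rate"]
    return lost_share * DEALS_PER_QUARTER * AVG_DEAL_SIZE

for obj in sorted(objections, key=arr_at_risk, reverse=True):
    print(f"{obj['objection']}: ~${arr_at_risk(obj):,.0f} ARR at risk per quarter")
```

Sorting by that number is the whole prioritization argument.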
I presented to product: "We're losing $720K ARR because we don't have SSO. We're losing $90K because we don't have mobile. We're losing $37K because we don't have custom reporting. Build in that order."
Product built SSO. Win rate in enterprise deals improved from 25% to 38% within two quarters. The objection tracking system turned sales feedback into product prioritization based on actual revenue impact, not just which objection was loudest.
Using Win Wires to Train Product Teams
Most win wires go to sales. Here's how we won this deal. Here's what worked. Copy this approach.
I learned to send win wires to product with competitive insights they could use to shape roadmap.
After every competitive win, I wrote a 3-minute win wire covering four things: why the customer was evaluating and what trigger event made them start looking, which competitors they considered, why they chose us instead, and what almost made them choose a competitor instead of us.
I sent these to product weekly, not just to sales.
Product started seeing real market feedback in near-real-time. They learned which capabilities consistently won deals. They learned which gaps consistently came close to losing deals. They learned which competitive threats were growing and which ones were fading.
After six months of weekly win wires, product came to me and said: "We're seeing the same competitor mentioned in 40% of win wires, and the reason we're winning is our integration with Salesforce. We should double down on that integration and make it even stronger."
They were right. The win wires had trained them to see patterns I was seeing in the market. They started making product decisions based on real competitive feedback, not just on what customers asked for in feature requests.
Using the Framework for Strategic Decisions
The most advanced way to use the Pragmatic Framework isn't about executing any single box better. It's about using all five boxes together to make strategic decisions.
I was at a company where product wanted to build a mobile app. The CEO wanted it. Product said customers were asking for it. Engineering estimated 9 months of development time.
Instead of debating based on opinions, I used the framework to evaluate systematically.
Market box: I interviewed 20 customers about mobile. Fifteen percent mentioned mobile as a requirement. For most, it was nice-to-have, not a deal-breaker. Only 3% said they'd churn without it. I analyzed win/loss data: we'd never lost a deal because we didn't have mobile.
Focus box: I evaluated whether mobile capability would change our competitive positioning. Our competitors didn't have mobile either. Having it wouldn't differentiate us. Not having it wasn't hurting us.
Business box: I ran pricing research. How much more would you pay for mobile? Most customers said $0-$50/month. Would you pay $200/month for mobile access? Most said no. The willingness-to-pay was minimal.
Programs box: I evaluated how mobile would impact our launch strategy. It wouldn't open new market segments. It wouldn't change how we position the core product. It would be a nice feature addition, not a market-shifting launch.
Readiness box: I talked to sales about whether they could sell mobile. Most said: "Customers ask about it occasionally, but it's never the reason they buy or don't buy."
I presented this to the exec team: "Market box shows only 3% of customers would churn without mobile, and we've never lost a deal because of it. Focus box shows it won't change our competitive position. Business box shows minimal willingness-to-pay. Programs box shows it won't shift our market positioning. Readiness box shows sales rarely positions it as a selling point. Recommendation: delay mobile for 12 months and invest the 9 months of engineering time in capabilities that do differentiate us."
We delayed mobile. We built automated forecasting instead, which Market box data showed 70% of customers desperately needed. Win rate increased. Revenue grew. Nobody churned because we didn't have mobile.
The framework didn't make the decision. It organized the data so the decision became obvious.
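In practice, "organizing the data" can be as lightweight as writing each box's finding into a single structure before anyone starts arguing. A sketch of what I mean, with the findings above compressed into one place:

```python
# Sketch: capture the per-box evidence for a decision in one structure.
# Findings are the ones described above, compressed; the format is just an illustration.

evaluation = {
    "decision": "Build a mobile app? (9 months of engineering)",
    "boxes": {
        "Market":    {"finding": "3% churn risk without it; zero deals lost over it",      "supports_build": False},
        "Focus":     {"finding": "No competitor has mobile; no differentiation either way", "supports_build": False},
        "Business":  {"finding": "Willingness-to-pay of $0-50/month",                       "supports_build": False},
        "Programs":  {"finding": "Opens no new segment; a feature add, not a launch",       "supports_build": False},
        "Readiness": {"finding": "Sales says it's never the reason deals close",            "supports_build": False},
    },
}

print(evaluation["decision"])
for name, box in evaluation["boxes"].items():
    print(f"  {name}: {box['finding']}")

votes = [box["supports_build"] for box in evaluation["boxes"].values()]
print(f"Boxes supporting the build: {sum(votes)}/{len(votes)}")
```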
The Shift from Tactical to Strategic
Most PMMs spend their careers executing the boxes. They build personas for Market. They write positioning for Focus. They create pricing calculators for Business. They plan launches for Programs. They make sales decks for Readiness.
That's valuable work. But it's tactical, not strategic.
The shift from tactical to strategic happens when you stop asking "How do I complete this box?" and start asking "What decision does this box help us make better?"
Tactical PMMs complete buyer personas and share them with product. Strategic PMMs use buyer research to reshape product roadmap by connecting customer jobs to feature urgency.
Tactical PMMs write positioning statements and give them to sales. Strategic PMMs build positioning that creates defensible competitive moats by focusing on workflows competitors can't easily replicate.
Tactical PMMs set prices based on competitor analysis. Strategic PMMs use pricing research to validate product ideas before engineering builds them.
Tactical PMMs plan launches and coordinate teams. Strategic PMMs use sequential launches to shift market categories over time.
Tactical PMMs train sales on product features. Strategic PMMs track sales objections to identify revenue-impacting product gaps.
The framework is the same. How you use it changes everything.
After using Pragmatic for two years tactically, I started using it strategically. My impact on the business increased dramatically. I wasn't just executing GTM work—I was shaping product strategy, influencing market perception, and driving decisions that changed company direction.
That's advanced Pragmatic. Not better execution of the boxes. Better use of the framework to drive the decisions that matter most.