Building a Win/Loss Taxonomy: How to Categorize Feedback You Can Actually Use
Random win/loss notes don't drive decisions. A solid taxonomy turns anecdotes into patterns you can measure, track, and act on systematically.
You've conducted 50 win/loss interviews. You have 50 documents full of insights. And when your VP asks "why are we losing to Competitor X?" you have to manually search through notes, reconstruct patterns from memory, and hope you're not missing critical data points.
This is the problem with unstructured win/loss feedback. Every interview is valuable. Collectively, they're impossible to analyze.
A win/loss taxonomy solves this. It's a standardized categorization system that lets you tag every piece of feedback consistently, then aggregate across deals to spot patterns, trends, and systematic issues.
Here's how to build a taxonomy that turns interview notes into strategic intelligence.
Why Standard Categories Matter More Than Detailed Notes
Detailed interview notes capture nuance. But nuance doesn't scale.
If Interview 1 says "integration was complicated," Interview 2 says "API documentation was unclear," and Interview 3 says "our developers couldn't figure out the setup," you have three related problems that look like three different problems because you used different words.
A taxonomy forces consistency. All three get tagged as "Integration: Setup Complexity." Now you can see that 3 out of 20 losses mentioned integration setup as a factor. That's 15% of deals—worth investigating.
Without a taxonomy, those three comments stay buried in prose. With one, they become a pattern you can measure and track over time.
The Core Taxonomy Structure: Primary Loss Reasons
Start with high-level categories that answer: Why did we lose this deal?
Product Fit
- Missing required features
- Doesn't solve core use case
- Poor performance or reliability
- Integration limitations
- Technical architecture mismatch
Price and Value
- Price too high
- ROI case not compelling
- Value not differentiated from alternatives
- Budget constraints (external factor)
Market Position
- Competitor had better brand/reputation
- Competitor had deeper relationship
- Incumbent advantage (switching cost)
- Lack of category leadership
Sales Process
- Poor sales execution or responsiveness
- Demo didn't resonate
- Champion couldn't build internal consensus
- Lost executive sponsorship
- Long or unclear sales cycle
External Factors
- Timing (postponed, not right now)
- Organizational change (merger, reorg, freeze)
- Compliance or regulatory issues
- Geographic or industry constraints
Categories should be mutually exclusive: any loss should map clearly to one primary reason. If you find yourself debating which of two categories a deal belongs in, the categories aren't well defined and your taxonomy needs refinement.
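If it helps to make this concrete, here's a minimal sketch (in Python, purely as an illustration) of the categories above encoded as a controlled vocabulary, so a tag is validated against a fixed list rather than typed free-form. The names simply mirror the lists above; swap in your own.

```python
# Controlled vocabulary for primary loss reasons, mirroring the lists above.
# With a fixed structure, a tag is either valid or rejected outright.
PRIMARY_LOSS_REASONS = {
    "Product Fit": [
        "Missing required features",
        "Doesn't solve core use case",
        "Poor performance or reliability",
        "Integration limitations",
        "Technical architecture mismatch",
    ],
    "Price and Value": [
        "Price too high",
        "ROI case not compelling",
        "Value not differentiated from alternatives",
        "Budget constraints (external factor)",
    ],
    "Market Position": [
        "Competitor had better brand/reputation",
        "Competitor had deeper relationship",
        "Incumbent advantage (switching cost)",
        "Lack of category leadership",
    ],
    "Sales Process": [
        "Poor sales execution or responsiveness",
        "Demo didn't resonate",
        "Champion couldn't build internal consensus",
        "Lost executive sponsorship",
        "Long or unclear sales cycle",
    ],
    "External Factors": [
        "Timing (postponed, not right now)",
        "Organizational change (merger, reorg, freeze)",
        "Compliance or regulatory issues",
        "Geographic or industry constraints",
    ],
}

def validate_primary_tag(category: str, subcategory: str) -> None:
    """Reject any tag that isn't part of the agreed taxonomy."""
    if category not in PRIMARY_LOSS_REASONS:
        raise ValueError(f"Unknown primary category: {category!r}")
    if subcategory not in PRIMARY_LOSS_REASONS[category]:
        raise ValueError(f"{subcategory!r} is not listed under {category!r}")
```

The point isn't the code; it's that tags come from an agreed list, not from whatever phrasing the tagger happens to remember.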
Secondary Tags: The Context That Explains Patterns
Primary categories tell you what happened. Secondary tags tell you why it matters.
Competitor context
- Which competitor won (if known)
- What competitor capability was most compelling
- Whether this was an incumbent vs. newcomer dynamic
Deal characteristics
- Deal size (segment: SMB, mid-market, enterprise)
- Industry vertical
- Use case or buyer persona
- Sales cycle length
Stakeholder dynamics
- Who championed your solution (buyer persona)
- Who vetoed or blocked (IT, security, procurement, executive)
- Whether consensus existed or stakeholders conflicted
Stage of failure
- Never engaged
- Lost in discovery
- Lost after demo
- Lost in evaluation/POC
- Lost in negotiation
Secondary tags reveal patterns primary categories miss. If you're losing enterprise deals to incumbents but winning mid-market greenfield opportunities, that's a go-to-market strategy insight. You won't see it unless you tag deals by size and competitive context.
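One lightweight way to keep a deal's primary reason and secondary tags together is a single record per deal. The sketch below is illustrative only: the field names, and the idea of an "incumbent" secondary tag, are placeholders to map onto whatever your CRM or interview tool actually exports.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DealOutcome:
    """One tagged win or loss. Field names are illustrative."""
    deal_id: str
    outcome: str                            # "win" or "loss"
    primary_reason: str                     # one category from the taxonomy
    primary_subreason: str
    competitor: Optional[str] = None        # which competitor won, if known
    segment: Optional[str] = None           # "SMB", "mid-market", "enterprise"
    vertical: Optional[str] = None
    persona: Optional[str] = None
    stage_of_failure: Optional[str] = None  # "discovery", "demo", "evaluation/POC", "negotiation"
    secondary_tags: list = field(default_factory=list)

# The enterprise-vs-incumbent insight above then becomes a simple filter
# (assumes "incumbent" is one of your agreed secondary tags):
def enterprise_incumbent_losses(deals):
    return [
        d for d in deals
        if d.outcome == "loss"
        and d.segment == "enterprise"
        and "incumbent" in d.secondary_tags
    ]
```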
How to Tag Wins (Yes, Categorize Why You Win Too)
Most teams only analyze losses. That's a mistake.
If you don't know why you win, you can't replicate success. You need a win taxonomy that mirrors your loss taxonomy.
Why we won:
Product Superiority
- Better features or capabilities
- Easier to use or implement
- Better performance
- Superior integrations
Value and ROI
- Better price-to-value ratio
- Clearer ROI case
- Lower total cost of ownership
Market Position
- Stronger brand or reputation in target segment
- Better customer references
- Category leadership
Sales Execution
- Responsive and consultative sales process
- Demo resonated with key stakeholders
- Champion successfully built consensus
- Strong executive relationship
Competitive Displacement
- Competitor failed to deliver
- Competitor had product gaps
- Competitive pricing or licensing advantage
When you tag both wins and losses consistently, you can compare: Do we win when product matters and lose when brand matters? Do we win against Competitor A but lose to Competitor B? These insights shape strategy.
Tagging Confidence Levels: When You're Not Sure
Not every interview gives you clear answers. Sometimes buyers don't know why they chose what they chose. Sometimes they won't tell you. Sometimes you have CRM notes but no interview.
Add confidence tags to account for data quality:
High confidence: Direct buyer interview with specific details
Medium confidence: Interview happened but the buyer was vague, or the data comes from sales notes that appear reliable
Low confidence: No interview, CRM notes only, or buyer deflected questions
This prevents you from treating speculation as fact. If 10 losses are tagged "price" but 8 are low confidence, you don't actually know price was the issue—you know your sales reps guessed it was price.
Only make strategic decisions based on high-confidence data. Use low-confidence tags to identify patterns worth investigating with better research.
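As a rough sketch of how this plays out at analysis time (assuming each tagged record carries a confidence field like the levels above), count only high-confidence tags and treat low-confidence ones as a research backlog:

```python
from collections import Counter

def loss_reason_counts(records, include_medium=False):
    """Count primary loss reasons over trustworthy records only."""
    allowed = {"high", "medium"} if include_medium else {"high"}
    return Counter(
        r["primary_reason"] for r in records if r["confidence"] in allowed
    )

def follow_up_backlog(records):
    """Low-confidence tags aren't evidence; they're interviews worth re-running."""
    return [r["deal_id"] for r in records if r["confidence"] == "low"]
```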
The Taxonomy That Evolves With Your Market
Your first taxonomy won't be perfect. That's fine. Start with something, then refine as you learn.
Refinement pattern 1: Split overly broad categories
If "Product Fit" captures 60% of losses, it's too broad. Split it into more specific subcategories: "Missing Features," "Integration Issues," "Usability Problems," "Performance Concerns."
Refinement pattern 2: Merge rarely-used categories
If you have 12 categories but 3 of them account for only 2% of losses combined, you're over-indexing on edge cases. Merge rare categories into "Other" until they're common enough to warrant their own bucket.
Refinement pattern 3: Add new categories as market evolves
If you enter a regulated industry and suddenly "Compliance Requirements" comes up in 20% of deals, add it as a primary category. Your taxonomy should reflect current market dynamics, not historical ones.
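If it's useful to see pattern 2's merge rule mechanically, here's a small sketch; the 5% threshold is an arbitrary illustration, not a recommendation.

```python
from collections import Counter

def merge_rare_categories(counts: Counter, min_share: float = 0.05) -> Counter:
    """Fold categories below min_share of all tagged losses into 'Other'."""
    total = sum(counts.values())
    merged: Counter = Counter()
    for category, n in counts.items():
        if total and n / total < min_share:
            merged["Other"] += n
        else:
            merged[category] += n
    return merged
```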
Review your taxonomy quarterly. Change it more often and you can't track trends across periods; less often and you're working with stale categories.
Turning Taxonomy Into Dashboards That Drive Decisions
Once you have consistent tagging, you can build views that answer strategic questions:
Question: Where should product invest next?
View: Top product-related loss reasons, filtered by deal size and vertical
If "Missing Feature X" is the #1 reason for enterprise losses but doesn't appear in SMB losses, you know where feature investment has the highest revenue impact.
Question: How do we perform against specific competitors?
View: Win rate by competitor, segmented by primary loss reason
If you lose to Competitor A mostly on price but lose to Competitor B mostly on features, you need different strategies for each competitive situation.
Question: Is our sales process improving?
View: Sales process-related losses over time (quarterly trend)
If Q1 had 15% of losses due to sales execution but Q3 has only 5%, your sales training is working. If it's increasing, you have a systematic sales problem.
Question: Are we winning in our target segment?
View: Win rate by deal size, industry, and persona
If you win 70% of deals in mid-market healthcare but only 30% in enterprise fintech, your positioning and GTM strategy should reflect where you actually win.
A good taxonomy makes these dashboards trivial to build. Without one, every question requires manual research.
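To illustrate what "trivial" can look like, here's a rough sketch of two of the views above, assuming your tagged deals land in a pandas DataFrame; the column names are placeholders for whatever your own records use.

```python
import pandas as pd

# A few illustrative rows; in practice these come from your tagged records.
deals = pd.DataFrame([
    {"outcome": "loss", "competitor": "Competitor A",
     "primary_reason": "Price and Value", "segment": "enterprise"},
    {"outcome": "loss", "competitor": "Competitor B",
     "primary_reason": "Product Fit", "segment": "enterprise"},
    {"outcome": "win", "competitor": "Competitor A",
     "primary_reason": None, "segment": "mid-market"},
])

# View: win rate by competitor.
win_rate_by_competitor = (
    deals.assign(won=deals["outcome"].eq("win"))
         .groupby("competitor")["won"]
         .mean()
)

# View: primary loss reasons per competitor (losses only).
loss_reasons_by_competitor = (
    deals[deals["outcome"] == "loss"]
         .groupby(["competitor", "primary_reason"])
         .size()
)

print(win_rate_by_competitor)
print(loss_reasons_by_competitor)
```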
The Tagging Workflow That Ensures Consistency
Taxonomy only works if everyone tags consistently. That requires process.
Step 1: Create a tagging guide with examples
Don't just list categories. Show examples of what fits each category and what doesn't.
"Missing Features: Customer needed capability we don't have (e.g., SSO, HIPAA compliance, specific integration)"
"Integration Issues: We have the capability but setup/implementation was too complex (e.g., API confusion, unclear documentation)"
The line between these two categories is subtle. Examples make the distinction clear.
Step 2: Assign one person to do initial tagging
If five people tag interviews, you'll get five different interpretations of categories. One person ensures consistency.
After interviews, the researcher tags findings based on the guide. Over time, they develop intuition for edge cases.
Step 3: Review tags quarterly for drift
Even with one person tagging, interpretation drifts over time. Quarterly, spot-check 10 random interviews and verify tags still match the guide. If not, re-tag historical data or update the guide.
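The spot-check itself can be as simple as pulling a random sample of tagged interviews each quarter; a throwaway sketch, assuming your tagged interviews live in any list-like collection:

```python
import random

def sample_for_review(tagged_interviews, k=10, seed=None):
    """Pull up to k random tagged interviews for a quarterly drift check."""
    rng = random.Random(seed)
    pool = list(tagged_interviews)
    return rng.sample(pool, k=min(k, len(pool)))
```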
Consistent taxonomy requires discipline. But the payoff is analysis that reflects reality rather than the biases of whoever happens to be interpreting the data.
Kris Carter
Founder, Segment8
Founder & CEO at Segment8. Former PMM leader at Procore (pre/post-IPO) and Featurespace. Spent 15+ years helping SaaS and fintech companies punch above their weight through sharp positioning and GTM strategy.