Launch Tier Frameworks: T1/T2/T3 Criteria & Resource Allocation

I watched a VP of Product Marketing defend their launch tiering system to an angry executive team. They'd just spent six weeks coordinating a T1 launch for a feature update that generated $200K in pipeline. Meanwhile, a genuinely transformative product sat in beta for three months because "we didn't have the resources for another T1."

The executive asked the obvious question: "Why did we treat a minor feature like a company-defining launch and starve our actual strategic bet?"

The VP pulled up their tiering criteria document. It was a beautiful matrix with weighted scores across eight dimensions, including market size, competitive differentiation, revenue potential, strategic importance, and technical complexity. They'd scored the feature update at 87 points. The transformative product scored 82.

The executive stared at the spreadsheet and said what everyone was thinking: "This is bullshit. We optimized for scoring well on your framework instead of asking what actually matters to the business."

I've seen this pattern repeat at a dozen companies. PMMs build elaborate tiering frameworks that feel rigorous but produce absurd outcomes. You end up with three T1 launches per quarter because everything scores high enough to qualify. Your tiering system becomes a political weapon where whoever lobbies hardest gets T1 resources.

The problem isn't that tiering frameworks are wrong—it's that most PMMs design them to score launches instead of to make resource allocation decisions. Those are fundamentally different objectives.

What Tiering Actually Does

Before I understood how tiering worked, I thought it was about organizing launches into neat categories. T1 is big, T2 is medium, T3 is small. Simple taxonomy.

That's not what tiering does. Tiering is a forcing function for resource allocation battles.

Here's what actually happens when you launch a product: Six different teams (PMM, product, sales, marketing, CS, eng) all have work to do. Each team has a finite budget and capacity. Someone has to decide how much of that budget goes to this launch versus everything else competing for attention.

Without tiering, every product manager argues their launch deserves full company support. Every launch becomes a negotiation. Sales leadership doesn't know which launches to prioritize in enablement. Marketing doesn't know which campaigns get the biggest budget. PMMs don't know where to spend their time.

Tiering makes one strategic decision—"This is a T1"—that cascades into a dozen resource allocation decisions automatically:

A T1 means sales gets two full enablement sessions plus ongoing coaching. Marketing allocates $150K to the campaign. PMM dedicates 60% of their time for eight weeks. The CEO mentions it in the board deck.

A T2 means sales gets one enablement webinar. Marketing allocates $30K. PMM dedicates 20% of their time for four weeks. The launch doesn't make the board deck.

A T3 means sales gets an email with talk tracks. Marketing repurposes existing content. PMM spends a week total. Nobody above director level hears about it.

The tier isn't a category—it's a resource commitment. And the criteria for assigning tiers should optimize for one question: "What level of resource commitment will maximize return for the business?"

That's a completely different question than "How do we score this launch on eight weighted dimensions?"

The Resource Allocation Lens

I rebuilt our tiering framework after watching that VP get torn apart by executives. Instead of starting with scoring criteria, I started with resource constraints.

I mapped out exactly what each tier would receive across every team:

T1 Commitment:

  • PMM: One PMM dedicated full-time for 8 weeks pre-launch, 4 weeks post-launch
  • Sales: Two full-day enablement sessions, customized pitch decks per segment, weekly office hours for 60 days
  • Marketing: $150K campaign budget, dedicated demand gen PM, analyst briefings, PR push
  • Product: Executive sponsor, launch bug priority over new features, engineering support for demos
  • CS: Proactive outreach to top 100 accounts, expansion playbooks, specialized training
  • Executive: CEO keynote, board presentation, all-hands announcement, customer advisory board discussion

T2 Commitment:

  • PMM: One PMM at 30% capacity for 4 weeks pre-launch, 2 weeks post-launch
  • Sales: One enablement webinar, standard pitch deck, FAQ document
  • Marketing: $30K campaign budget, email campaigns, blog posts, social amplification
  • Product: PM sponsor, launch bugs prioritized within normal sprint cycle
  • CS: Email announcement to customer base, standard documentation
  • Executive: VP mention in quarterly planning, skip-level update

T3 Commitment:

  • PMM: One PMM at 10% capacity for 2 weeks total
  • Sales: Email with 3-sentence pitch and talk tracks, no live training
  • Marketing: $5K budget, repurposed content only, organic social
  • Product: No special prioritization
  • CS: Knowledge base article, no proactive outreach
  • Executive: None
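
If it helps to make those commitments legible to planning docs or tooling, the whole table can live as plain data. Below is a minimal sketch in Python; the figures mirror the lists above, while the `TierCommitment` structure and its field names are illustrative, not any real system's schema.

```python
# Minimal sketch: the tier commitments above encoded as plain data.
# Figures mirror the lists in this section; the TierCommitment structure
# itself is illustrative, not a real planning tool.
from dataclasses import dataclass


@dataclass(frozen=True)
class TierCommitment:
    pmm: str               # PMM time commitment
    sales: str             # enablement commitment
    marketing_budget: int  # campaign budget, in dollars
    product: str
    cs: str
    executive: str


TIERS = {
    "T1": TierCommitment(
        pmm="1 PMM full-time, 8 weeks pre-launch + 4 weeks post-launch",
        sales="Two full-day enablement sessions, per-segment decks, 60 days of office hours",
        marketing_budget=150_000,
        product="Executive sponsor, launch bugs over new features, demo support",
        cs="Proactive outreach to top 100 accounts, expansion playbooks",
        executive="CEO keynote, board presentation, all-hands announcement",
    ),
    "T2": TierCommitment(
        pmm="1 PMM at 30% capacity, 4 weeks pre-launch + 2 weeks post-launch",
        sales="One enablement webinar, standard deck, FAQ",
        marketing_budget=30_000,
        product="PM sponsor, bugs prioritized in normal sprints",
        cs="Email announcement, standard documentation",
        executive="VP mention in quarterly planning",
    ),
    "T3": TierCommitment(
        pmm="1 PMM at 10% capacity, 2 weeks total",
        sales="Email with pitch and talk tracks, no live training",
        marketing_budget=5_000,
        product="No special prioritization",
        cs="Knowledge base article only",
        executive="None",
    ),
}

# Usage: look up what a tier actually costs before anyone argues for it.
print(TIERS["T1"].marketing_budget)  # 150000
```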

When I showed this to leadership, the conversation shifted immediately. Instead of debating scoring methodologies, we debated resource strategy.

"Can we actually deliver two full-day enablement sessions per quarter? What if we have three T1s?"

"No. We can do maybe four T1s per year. So we need to pick the four launches that justify this level of investment."

That constraint forced the right question: "Which launches are worth a dedicated PMM for three months, $150K in marketing spend, and two days of the entire sales team's time?"

Suddenly the minor feature update didn't look like a T1 anymore.

The Three Questions That Determine Tier

Once you've defined the resource commitment for each tier, the criteria become obvious. You're asking three questions:

Question 1: Will this launch create enough new revenue to justify the resource investment?

Not "is this strategically important" or "does this have competitive differentiation." Those are real considerations, but they don't drive tiering.

The question is: If we dedicate a PMM full-time for three months, spend $150K on campaigns, and take two days of sales time for enablement—will this generate enough pipeline to make that ROI positive?

For a T1, I use a simple benchmark: The launch should create at least 3x the fully-loaded cost of the resources committed. If the total resource commitment costs $400K (PMM time, marketing budget, sales opportunity cost, etc.), the launch should generate at least $1.2M in new pipeline within six months.

If it won't hit that threshold, it's not a T1. Doesn't matter how "strategic" it is.

This forces honest conversations. A product manager will argue their launch is strategic because it enters a new market. I'll ask: "Will we generate $1.2M in pipeline from that new market within six months?" If the answer is no, we tier it down and allocate resources accordingly.
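
The arithmetic behind that benchmark is worth writing down once. In the sketch below, the individual cost lines are assumptions for illustration; only the roughly $400K total and the 3x multiple come from the benchmark itself.

```python
# Sketch of the 3x pipeline benchmark from Question 1.
# The individual cost lines are illustrative assumptions; only the ~$400K
# total and the 3x multiple come from the benchmark described above.
PIPELINE_MULTIPLE = 3

t1_costs = {
    "pmm_time": 90_000,                 # assumed: ~3 months of a fully-loaded PMM
    "marketing_budget": 150_000,        # from the T1 commitment
    "sales_opportunity_cost": 120_000,  # assumed: two full days of the sales org
    "other_teams": 40_000,              # assumed: product, CS, and exec time
}

fully_loaded_cost = sum(t1_costs.values())
pipeline_threshold = PIPELINE_MULTIPLE * fully_loaded_cost

print(f"Fully-loaded T1 cost: ${fully_loaded_cost:,}")                 # $400,000
print(f"Minimum 6-month pipeline for a T1: ${pipeline_threshold:,}")   # $1,200,000
```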

Question 2: Does this launch require behavior change across the entire sales organization?

Some launches require every rep to sell differently. New positioning, new use cases, new competitive differentiation. You're asking hundreds of people to unlearn old pitches and learn new ones.

That level of behavior change requires T1 resources. You need multiple enablement sessions, ongoing coaching, executive reinforcement, continuous feedback loops. Anything less and reps won't change how they sell.

Other launches are additive. "Here's a new feature that solves X problem for Y customer segment. If you see this use case, mention this capability." That doesn't require company-wide behavior change. A T2 or T3 enablement approach works fine.

The mistake I see PMMs make: They tier based on product scope (big product = T1) instead of behavior change scope. I've seen massive product launches that were terrible T1s because they didn't change how anyone sold. And I've seen small feature launches that were perfect T1s because they enabled a completely new sales motion.

Question 3: What's the cost of under-investing versus over-investing?

This is the asymmetric risk question.

Some launches have massive downside risk if you under-resource them. You're entering a new category where first-mover advantage matters. Competitors are watching. Analysts are forming opinions. Early customer success determines whether you get market traction or get written off.

Under-investing in these launches is catastrophic. You get one shot to make a market impression. If the launch flops because you didn't enable sales properly or didn't run a big enough marketing campaign, you've poisoned the market. Even a huge re-launch six months later won't recover the momentum.

Those launches should be T1 even if the short-term revenue projection is modest. The cost of failure is too high.

Other launches have low downside risk. You're adding a feature to an existing product. Customers will discover it organically. Even if the launch generates zero awareness, the feature will eventually get adopted through normal product usage and CS conversations.

Over-investing in these launches is wasteful. You spend T1 resources on something that would succeed as a T3. That's $150K in marketing budget you could have spent elsewhere.

I learned this the hard way when we treated an API update as a T1 because the product team was excited about it. We spent eight weeks on enablement and campaigns. Six months later, 90% of adoption came from customers who read the changelog and enabled it themselves. We'd have gotten the same outcome spending 10% of the resources.
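
If it helps to keep the three answers side by side when the group makes the call, a plain record like the sketch below is enough. It is deliberately not a scoring system; the structure and the two example launches (loosely modeled on the stories above) are hypothetical.

```python
# Rough sketch for recording the group's answers to the three questions.
# This is a record of a judgment call, not a scoring system; the structure
# and example entries are hypothetical.
from dataclasses import dataclass


@dataclass
class TierCall:
    launch: str
    pipeline_vs_cost: str        # Q1: will it return at least 3x the resource cost?
    behavior_change: str         # Q2: does it change how the whole sales org sells?
    cost_of_underinvesting: str  # Q3: what happens if we under- or over-invest?
    tier: str                    # the call the group actually made
    rationale: str


calls = [
    TierCall(
        launch="New-category product (hypothetical)",
        pipeline_vs_cost="Modest in the first six months",
        behavior_change="Yes, a new motion for every rep",
        cost_of_underinvesting="Catastrophic; one shot at the market impression",
        tier="T1",
        rationale="Asymmetric downside outweighs the modest near-term pipeline",
    ),
    TierCall(
        launch="API update (hypothetical)",
        pipeline_vs_cost="Little direct pipeline",
        behavior_change="No, adoption happens in-product",
        cost_of_underinvesting="Low; customers find it in the changelog",
        tier="T3",
        rationale="Organic adoption; GTM spend adds little",
    ),
]

for call in calls:
    print(f"{call.launch}: {call.tier} ({call.rationale})")
```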

The Political Reality of Tiering

The hardest part of tiering isn't the framework—it's the organizational politics.

Every product manager believes their launch deserves T1 treatment. Every executive has a strategic priority they want elevated. Your tiering framework becomes a battleground where people lobby for higher tiers to get more resources.

I've watched PMMs try to solve this with "objective" scoring systems. They build elaborate point systems where launches are scored on weighted criteria. The idea is that if the framework is objective, people can't argue with the results.

This never works. People just game the scoring system. They inflate market size estimates. They claim competitive differentiation that doesn't exist. They argue their launch is strategically important because it aligns with some executive's OKR.

You end up spending more time debating scores than making actual tiering decisions.

Here's what actually works: Own the subjectivity.

I run tiering as a small group decision with four people in the room: VP of Product Marketing, VP of Product, CRO, and CMO. We look at the upcoming launches, we review the resource commitments for each tier, and we make a call based on the three questions above.

Someone always disagrees. A product manager argues their launch deserves T1. A sales leader wants more resources for a T2. That's fine. We hear them out, we consider the argument, and then we make a decision and move on.

The key is making the trade-offs explicit. When someone argues for promoting a launch to T1, I ask: "Which current T1 should we demote to make room?" That forces the real conversation.

We can't run four T1s in a quarter—we don't have the resources. So if we promote this launch to T1, something else gets demoted. Which launch should get fewer resources?

That question usually ends the lobbying. People stop arguing for T1 when they realize it means taking resources away from another launch they also care about.

How Tiers Actually Work in Practice

The mistake I made early in my career: I thought the tier determined the launch's success. T1 launches would succeed, T3 launches would fail.

That's not how it works. Tier determines resource allocation. Success depends on whether the launch strategy matches the resource level.

I've seen brilliant T3 launches that generated massive adoption because the PMM designed a strategy that worked with limited resources. They picked a product that could be explained in one email. They enabled sales with simple talk tracks that reps could memorize in five minutes. They targeted a customer segment that was already asking for the capability.

The launch succeeded because the strategy was optimized for T3 constraints.

I've also seen T1 launches flop despite massive resource investment because the fundamental strategy was broken. Great enablement can't save a confused value prop. A $150K marketing campaign can't create demand for a product nobody wants.

The tier gives you resources. You still have to know what to do with them.

Here's what I've learned about each tier:

T1 is for category-defining moments. You're changing how the market thinks about your company. That requires sustained investment, executive involvement, and company-wide alignment. Use T1 when you need to shift customer perception, enter a new market, or launch a product that changes your competitive position.

T2 is for meaningful additions to existing motions. You're not changing how the company goes to market, but you're giving sales a new capability to sell or giving customers a new reason to buy. Use T2 when the launch enhances what you already do well but doesn't require wholesale behavior change.

T3 is for improvements that customers will discover organically. You're making the product better, but you don't need to create awareness or change behavior. Customers will find the capability through normal product usage. Use T3 when adoption will happen naturally without significant GTM investment.

The tiers aren't value judgments. A T3 launch isn't less important than a T1—it just needs different resources. Some of the most valuable product improvements I've launched were T3s because they solved customer problems without requiring massive GTM overhead.

When Tiering Breaks Down

I've seen three scenarios where tiering frameworks fail completely:

Scenario 1: You have too many T1s.

If you're running more than one T1 per quarter, your tiering system isn't working. T1 requires full organizational focus. You can't give full focus to three simultaneous launches.

What usually happens: Teams spread T1 resources across multiple launches. Each launch gets half the enablement it needs, half the marketing budget it needs, half the PMM attention it needs. None of the launches succeed at T1 level.

Better approach: Pick one T1 per quarter. Be ruthless. Tier everything else down and resource it appropriately.

Scenario 2: You tier based on product scope instead of GTM scope.

Big product launches aren't automatically T1. The tier should reflect the GTM complexity, not the engineering complexity.

I've seen huge product launches that were perfect T2s because the GTM motion was straightforward: "Customers are already asking for this, here's how to sell it." And I've seen small feature launches that required T1 because they enabled a completely new sales play.

Tier based on what the launch requires from GTM teams, not what it required from engineering.

Scenario 3: You let politics override strategy.

An executive declares that their pet project is a T1. You don't have the resources for another T1, but you can't say no to an executive, so you tier it T1 and under-resource everything.

This destroys the tiering system. If tiers are political instead of strategic, nobody trusts them. Teams stop planning around tiers because they know the tiers will change based on who has the most influence.

The only solution is executive alignment on tiering principles before you assign any specific tiers. Get the leadership team to agree: We can support X T1s per year, Y T2s per quarter, unlimited T3s. Any launch that exceeds those caps gets deferred or tiered down.

Make that agreement in advance. When someone lobbies for an exception, point back to the agreement.
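
One way to keep that agreement from eroding is to write the caps down where the planning process has to check them. A minimal sketch, assuming illustrative cap values; the real numbers for X and Y are whatever the leadership team agrees to in advance.

```python
# Minimal sketch of checking planned launches against agreed tier caps.
# The T1 cap echoes "maybe four T1s per year" earlier in this section;
# the T2 cap and the structure of the check are illustrative assumptions.
TIER_CAPS = {
    "T1": ("year", 4),     # per the earlier capacity conversation
    "T2": ("quarter", 2),  # assumed for illustration
    # T3: uncapped
}


def over_cap(tier: str, planned_in_period: int) -> bool:
    """True if one more launch at this tier would exceed the agreed cap."""
    if tier not in TIER_CAPS:
        return False
    _period, cap = TIER_CAPS[tier]
    return planned_in_period + 1 > cap


# A fifth T1 in the year gets deferred or tiered down, per the agreement.
print(over_cap("T1", planned_in_period=4))   # True
print(over_cap("T3", planned_in_period=12))  # False
```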

What I'd Tell My Younger Self

If I could go back to when I built my first tiering framework, I'd tell myself this:

Tiering isn't about organizing launches into categories. It's about forcing honest resource allocation decisions in an environment where everyone wants more resources than you have.

Build your framework around constraints, not criteria. Start by defining exactly what each tier receives across every team. Make the resource commitments so clear that people understand the cost of each tier.

Then use three questions to assign tiers: Will this generate enough revenue to justify the investment? Does this require behavior change across the entire organization? What's the cost of getting this wrong?

Everything else—scoring systems, weighted matrices, elaborate criteria—is theater. It makes tiering feel rigorous but produces bad decisions.

The best tiering frameworks are simple. Everyone understands what each tier means. Everyone knows how tiers get assigned. And everyone trusts that the tiers reflect actual strategic priorities instead of political maneuvering.

That simplicity is harder than it looks. It requires saying no to executives, demoting launches that product teams are excited about, and defending resource constraints when people want exceptions.

But it's the only way tiering actually works. And when it works, it transforms how you launch products—because you're finally investing the right resources in the right launches instead of spreading everything thin across too many priorities.