Your product has 247 tracked events. Every button click, page view, and hover is instrumented. Your engineering team spent three months building the tracking infrastructure.
And when product asks "Should we invest in Feature X or Feature Y?" nobody knows which events to query to answer that question.
This is the trap of comprehensive event tracking. Tracking everything feels thorough, but it creates noise that obscures signal. You end up with hundreds of events that sounded useful when defined but never actually inform decisions.
After building event tracking strategies for six B2B products and cleaning up dozens of bloated analytics implementations, I've learned that effective event tracking isn't about completeness. It's about tracking the specific events that answer specific business questions.
Here's how to build an event strategy that actually gets used.
The Business Question Framework
Don't start with "What can we track?" Start with "What do we need to decide?"
List your top 10 business questions—the decisions your team needs to make quarterly. For example:
- Should we build Feature X or prioritize Feature Y?
- Which onboarding approach drives better activation?
- What drives expansion revenue vs. churn?
- Which user segments have the highest LTV?
- Do free users ever convert to paid, and what triggers conversion?
For each question, identify the minimum events needed to answer it. This becomes your core event taxonomy.
Question: "Should we build Feature X or prioritize Feature Y?"
Events needed:
- feature_x_used (with properties: user_id, timestamp, completion_success)
- feature_y_used (same properties)
- customer_renewed or customer_churned (to correlate feature usage with retention)
- revenue_expansion (to correlate with upsell)
That's four events that answer a strategic product question. You don't need to track every interaction within Feature X—just whether it was used and whether usage correlates with business outcomes.
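Those four events can be sketched as tracking calls. This is a minimal illustration, not a real vendor SDK: the track() helper, and properties beyond the ones named above (account_id, amount_usd), are assumptions.

```python
from datetime import datetime, timezone

def track(event_name, **properties):
    """Hypothetical stand-in for an analytics SDK's track call:
    builds the event payload and would hand it to your backend."""
    payload = {
        "event": event_name,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        **properties,
    }
    # a real implementation would send(payload) here; returning it
    # lets you inspect what would be recorded
    return payload

track("feature_x_used", user_id="u_123", completion_success=True)
track("feature_y_used", user_id="u_123", completion_success=False)
track("customer_renewed", user_id="u_123", account_id="acct_9")
track("revenue_expansion", user_id="u_123", amount_usd=500)
```

Note that the strategic question needs only these calls at the feature boundary, not instrumentation inside the feature.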
Question: "Which onboarding approach drives better activation?"
Events needed:
- onboarding_step_completed (with properties: step_name, user_id, timestamp, onboarding_version)
- activation_achieved (your specific activation definition)
- user_signed_up (to track cohorts)
Three events that let you compare onboarding variants and measure which drives activation. You don't need granular tracking of every tooltip and modal—just whether users complete each step and achieve activation.
This question-first approach prevents tracking bloat. Every event maps to a decision you actually need to make.
The Three-Tier Event Hierarchy
Organize events into three tiers based on importance and frequency of use.
Tier 1: Core business events (5-10 events)
These answer your most critical business questions. They should be:
- Stable (change rarely, so historical analysis remains valid)
- High-signal (directly tied to business outcomes)
- Frequently queried (used weekly or daily for decision-making)
Examples:
- user_activated (achieved first value)
- user_retained (came back after 30 days)
- feature_adopted (used core feature)
- trial_started / trial_converted
- customer_expanded / customer_churned
These events get the most attention: clean definitions, rigorous QA, comprehensive documentation. If these are wrong, your entire analytics foundation is broken.
Tier 2: Feature and workflow events (20-40 events)
These track specific product areas and workflows. They're used for feature-specific analysis but not company-wide reporting.
Examples:
- report_created / report_shared
- integration_connected / integration_used
- team_member_invited / team_member_activated
- dashboard_customized
These events help product teams understand feature adoption and usage patterns. They're important but not critical infrastructure.
Tier 3: Diagnostic events (as needed)
These are temporary tracking for specific investigations. You add them, analyze, and often remove them.
Examples:
- new_ui_flow_started (tracking adoption of a redesign)
- checkout_step_3_error_shown (debugging a specific problem)
- mobile_app_feature_attempted (validating mobile usage patterns)
Diagnostic events aren't permanent infrastructure. They're investigation tools. Add them for 30-90 days, get the insights you need, then sunset them.
This tier system prevents event sprawl. You're clear about which events are permanent infrastructure vs. temporary investigation.
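One way to keep the tiers enforceable is a small event registry that records each event's tier and, for Tier 3 diagnostics, a sunset date, so audits can flag stale ones automatically. The structure and event assignments below are illustrative assumptions, not a prescribed schema.

```python
from datetime import date

# Hypothetical registry: every tracked event gets a tier; Tier 3
# diagnostic events also carry a sunset date.
EVENT_REGISTRY = {
    "user_activated":        {"tier": 1},
    "trial_converted":       {"tier": 1},
    "report_created":        {"tier": 2},
    "integration_connected": {"tier": 2},
    "checkout_step_3_error_shown": {"tier": 3, "sunset": date(2025, 3, 1)},
}

def overdue_diagnostics(registry, today):
    """Return Tier 3 events whose sunset date has passed."""
    return [name for name, meta in registry.items()
            if meta["tier"] == 3 and meta.get("sunset") and meta["sunset"] < today]
```

Running overdue_diagnostics in a scheduled job turns "we should sunset that" into a concrete reminder instead of a good intention.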
Event Naming Conventions That Scale
Poor naming creates confusion. Good naming makes events self-documenting.
Use object-action structure
[object]_[action] format makes events scannable and predictable.
Good:
- report_created
- dashboard_viewed
- user_invited
- integration_connected
Bad:
- create_report (action-first is less scannable)
- new_dashboard (what happened to the dashboard?)
- invitation_sent (passive voice is ambiguous)
Object-first naming groups related events together alphabetically in your analytics tool. All report_* events appear together, making exploration intuitive.
Use past tense for completed actions
Events represent things that already happened. Past tense makes this clear.
Good: payment_processed, trial_started, feature_enabled
Bad: process_payment, start_trial, enable_feature
Past tense reads naturally in queries: "Users who trial_started in the last 30 days" vs. "Users who start_trial in the last 30 days."
Be specific enough to be useful, general enough to be stable
Too specific: signup_button_clicked_homepage_hero_section_variant_b
Too general: button_clicked
Right level: signup_started with properties: source: homepage_hero, variant: b
The event name should be stable. The properties can vary without breaking historical analysis.
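The naming rules above can be enforced mechanically in CI or at event-registration time. Here is a rough heuristic checker, assuming the object_action convention with a past-tense final verb; the "-ed" test plus a small irregular-verb allowlist is an approximation, so it validates format, not semantics.

```python
import re

# Lowercase snake_case with at least one underscore: object_action.
NAME_PATTERN = re.compile(r"^[a-z]+(_[a-z0-9]+)+$")
# Small allowlist for irregular past-tense verbs; extend as needed.
IRREGULAR_PAST = {"begun", "sent", "shown", "built", "made"}

def is_conventional(event_name):
    """Heuristic: does this name follow object_action in past tense?"""
    if not NAME_PATTERN.match(event_name):
        return False
    verb = event_name.rsplit("_", 1)[-1]
    return verb.endswith("ed") or verb in IRREGULAR_PAST
```

A check like this rejects create_report and new_dashboard at registration time, before a bad name ships and accumulates history.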
Event Properties: What to Include
Events without properties are timestamps. Events with properties are insights.
Required properties for every event:
- user_id (who did this?)
- timestamp (when did this happen?)
- session_id (which session was this part of?)
These enable basic segmentation and sequencing.
Context properties (add to most events):
- platform (web, mobile, API)
- account_tier (free, paid, enterprise)
- user_role (admin, member, viewer)
- account_id (for B2B products where multiple users belong to one account)
These enable segmentation without event bloat. You don't create report_created_by_admin and report_created_by_member events—you create one report_created event with user_role property.
Outcome properties (add to completion events):
- success: true/false (did the action complete successfully?)
- error_type (if failed, why?)
- time_to_complete (how long did it take?)
These turn events from counts into quality measurements. Not just "100 reports created" but "85 reports created successfully, 15 failed due to data errors."
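The shift from counts to quality measurements is a small aggregation over outcome properties. The event records below are illustrative; the shape matches the properties described above.

```python
from collections import Counter

# Sample completion events carrying outcome properties.
events = [
    {"event": "report_created", "success": True},
    {"event": "report_created", "success": False, "error_type": "data_error"},
    {"event": "report_created", "success": True},
    {"event": "report_created", "success": False, "error_type": "data_error"},
]

succeeded = sum(1 for e in events if e["success"])
failed = [e for e in events if not e["success"]]
errors = Counter(e["error_type"] for e in failed)

# Quality measurement, not just a count:
print(f"{succeeded}/{len(events)} succeeded; failures by type: {dict(errors)}")
```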
Avoid property bloat:
Don't add properties you won't query. "Maybe we'll want to know this someday" leads to property sprawl that slows queries and confuses analysis.
Only add properties that answer specific questions you're actually asking.
The Event Audit Process
Event taxonomies drift over time. New events get added without considering existing coverage. Similar events track the same action differently. Technical debt accumulates.
Run quarterly event audits to keep your taxonomy clean.
Step 1: Identify unused events
Query your analytics tool for events that haven't been referenced in any dashboard, report, or analysis in the last 90 days.
If nobody's querying an event for three months, it's not useful. Deprecate it.
Step 2: Find duplicate or overlapping events
Look for events that track essentially the same thing with different names:
- report_generated and report_created probably shouldn't both exist
- user_login and session_started might be redundant
- trial_begin and trial_started are duplicates
Consolidate to one canonical event. Update documentation. Sunset duplicates.
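Candidate duplicates can be surfaced automatically by comparing event names for similarity. This sketch uses stdlib difflib; the 0.75 threshold is an assumed starting point, and every hit still needs human review before consolidating.

```python
import difflib
from itertools import combinations

def likely_duplicates(event_names, threshold=0.75):
    """Return pairs of event names similar enough to be consolidation
    candidates, using difflib's string-similarity ratio."""
    pairs = []
    for a, b in combinations(sorted(event_names), 2):
        if difflib.SequenceMatcher(None, a, b).ratio() >= threshold:
            pairs.append((a, b))
    return pairs
```

Run it over your full event list each quarter and triage the output; near-misses like trial_begin vs. trial_started may need a lower threshold or a shared-prefix check.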
Step 3: Check for missing critical events
Review your business questions list. Can you answer each question with current events?
If you can't measure something critical to business decisions, that's a gap. Add the missing event.
Step 4: Validate event accuracy
Spot-check that events fire correctly:
- Trigger the action manually and verify the event appears
- Check that properties contain expected values
- Validate counts match reality (if you had 100 signups last week, does user_signed_up show 100 events?)
Broken events are worse than missing events. They create false confidence in bad data.
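Count validation can run as an automated check comparing analytics counts against a source of truth such as your production database. The function below is a sketch; the 2% tolerance for late-arriving or dropped events is an assumption to tune for your pipeline.

```python
def validate_event_count(event_count, source_of_truth_count, tolerance=0.02):
    """True if the analytics count is within tolerance of the
    source-of-truth count (e.g., signups in the production DB)."""
    if source_of_truth_count == 0:
        return event_count == 0
    drift = abs(event_count - source_of_truth_count) / source_of_truth_count
    return drift <= tolerance

# 98 user_signed_up events vs. 100 actual signups: within tolerance.
validate_event_count(98, 100)
# 80 vs. 100: 20% drift, investigate the instrumentation.
validate_event_count(80, 100)
```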
When to Track Everything (Temporarily)
There are valid reasons to instrument comprehensively for short periods.
Reason 1: You're investigating a problem and don't know the cause
If conversion drops and you don't know why, temporarily track granular user actions through that flow. Once you identify the problem, remove the granular tracking.
Reason 2: You're redesigning a critical flow and need before/after comparison
Track detailed interactions with the old flow and the new flow for 60 days. Compare behavior. Once you have your insights, remove detailed tracking and return to key milestones only.
Reason 3: You're entering a new market and don't know usage patterns yet
Track broadly for the first 90 days to understand how users actually use the product. Then pare down to the events that proved useful.
Comprehensive tracking is a discovery tool, not a permanent strategy.
When your event strategy answers specific business questions with clean, well-defined events, analytics becomes a decision-making tool instead of a data warehouse you're afraid to query.