Our activation rate plateaued at 51% and wouldn't budge. We'd optimized the obvious things: simplified onboarding, fixed bugs, improved messaging. We'd hit diminishing returns.
The product team wanted to declare victory ("51% is pretty good!"). I wanted to get to 65%.
The only way forward: systematic experimentation.
Over four months, I designed and ran 12 experiments testing different approaches to increasing activation. I expected 8-9 to work.
Reality: Only 4 had positive impact. 3 had no effect. 5 failed outright, and three of those actually decreased activation.
But those 4 winners took us from 51% → 64% activation. And the 8 experiments that failed taught me as much as the ones that succeeded.
Here's every experiment I ran, what happened, and what I learned.
The Baseline: 51% Activation
Before experimenting, I needed to be crystal clear on the current state:
Activation definition: User completes first meaningful project with their real data within 14 days of signup
Current activation rate: 51%
Current time-to-activation: 3.8 days average
Current onboarding completion: 68% (but only 51% hit full activation)
Where users dropped off:
- 32% never completed onboarding at all
- 17% completed onboarding but never created a meaningful project
- 51% reached full activation
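The arithmetic behind these numbers is simple enough to recompute from raw events. Here's a minimal sketch in Python, assuming a hypothetical per-user record with signup, onboarding, and first-project timestamps (the field names and data are illustrative, not our actual schema):

```python
from datetime import datetime, timedelta

# Hypothetical per-user records (illustrative fields and dates, not our real schema).
users = [
    {"signed_up": datetime(2024, 1, 2), "completed_onboarding": True,
     "first_real_project": datetime(2024, 1, 5)},
    {"signed_up": datetime(2024, 1, 3), "completed_onboarding": True,
     "first_real_project": None},
    {"signed_up": datetime(2024, 1, 4), "completed_onboarding": False,
     "first_real_project": None},
]

ACTIVATION_WINDOW = timedelta(days=14)

def is_activated(user):
    # Activated = first meaningful project with real data within 14 days of signup.
    # Note this is stricter than merely completing onboarding.
    project = user["first_real_project"]
    return project is not None and project - user["signed_up"] <= ACTIVATION_WINDOW

activated = [u for u in users if is_activated(u)]
activation_rate = len(activated) / len(users)
avg_days = sum(
    (u["first_real_project"] - u["signed_up"]).days for u in activated
) / len(activated)

print(f"Activation rate: {activation_rate:.0%}")
print(f"Average time-to-activation: {avg_days:.1f} days")
```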
My hypothesis: There were untapped opportunities to increase activation by testing unconventional approaches.
Experiment 1: Gamification & Progress Bars ❌ FAILED
Hypothesis: Users would be more motivated to complete onboarding if we made it feel like a game with points, badges, and progress tracking.
What we built:
- Progress bar showing "You're 73% complete!"
- Points system for completing each step
- Badges for milestones ("First project created!")
- Leaderboard showing top users (opt-in)
Expected impact: +8-10 percentage points to activation
Actual results after 4 weeks:
- Activation rate: 51% → 48% (-3 percentage points)
- Completion time: 3.8 days → 4.2 days
- User feedback: Mixed to negative
What users said:
"The badges and points felt like a distraction. I just wanted to get my work done."
"This isn't a game. I'm trying to solve a real problem and the gamification made it feel less serious."
"I don't care about points. I care about getting insights from my data."
Why it failed:
Our users were business professionals trying to accomplish work tasks. Gamification felt juvenile and patronizing.
Exception: Early career users (20-25 years old) responded slightly better, but they were only 8% of our base.
What I learned: Gamification works for consumer products and learning platforms. For B2B productivity tools, users want efficiency, not entertainment.
Experiment 2: AI-Powered Onboarding Assistant ✅ WON
Hypothesis: Users would activate faster if an AI assistant proactively helped them through onboarding.
What we built:
- Chat interface accessible throughout onboarding
- AI trained on common onboarding questions
- Proactive suggestions: "I noticed you're trying to [X]. Want help?"
- Ability to ask natural language questions
Expected impact: +5-7 percentage points
Actual results after 4 weeks:
- Activation rate: 51% → 57% (+6 percentage points)
- Completion time: 3.8 days → 2.9 days
- Support tickets during onboarding: -34%
What users said:
"The AI caught me when I was stuck and gave me exactly the answer I needed."
"I didn't want to submit a support ticket for a simple question. The AI answered it instantly."
"Having someone (even if it's AI) to ask questions made me less anxious about getting things wrong."
Why it worked:
Users got stuck on small questions that blocked progress. The AI removed that friction without requiring them to wait for human support.
Key success factors:
- AI was trained on actual support tickets, so answers were relevant
- Proactive suggestions appeared at moments when users showed struggle signals (a sketch of that trigger logic follows this list)
- Fallback to human support when AI couldn't help
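For anyone wondering what "struggle signals" means in practice: simple behavioral heuristics are enough. A rough sketch of that kind of trigger logic; the signal names and thresholds are illustrative, not our production rules:

```python
from dataclasses import dataclass

# Illustrative "struggle signal" heuristic (signal names and thresholds are examples,
# not production rules): nudge the assistant when a user looks stuck on a step.
@dataclass
class StepSession:
    seconds_on_step: float = 0.0
    validation_errors: int = 0
    back_navigations: int = 0

def shows_struggle(session: StepSession) -> bool:
    """Fire a proactive suggestion when any simple 'stuck' heuristic trips."""
    return (
        session.seconds_on_step > 60       # idling on a single step
        or session.validation_errors >= 2  # repeated form errors
        or session.back_navigations >= 3   # bouncing back and forth
    )

if shows_struggle(StepSession(seconds_on_step=95)):
    print("Offer help: 'I noticed you might be stuck on this step. Want a hand?'")
```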
What I learned: AI works best as a friction-removal tool, not as a replacement for good onboarding design. It caught edge cases and unusual questions that we couldn't anticipate.
Experiment 3: Video Tutorials (Detailed) ❌ FAILED
Hypothesis: Users would understand onboarding better with comprehensive video tutorials showing every step.
What we built:
- 12-minute video tutorial covering entire onboarding process
- Step-by-step screencasts with narration
- Embedded in onboarding flow with "Watch this first" recommendation
Expected impact: +6-8 percentage points
Actual results after 4 weeks:
- Activation rate: 51% → 49% (-2 percentage points)
- Completion time: 3.8 days → 5.1 days
- Video completion rate: 18% (most users skipped or abandoned mid-video)
What users did:
- 62% started the video
- 29% watched >50% of it
- 18% completed the full video
- Users who watched the full video activated at 43% (lower than baseline)
Why it failed:
Watching a 12-minute video felt like homework before being allowed to use the product. Users wanted to do things, not watch things.
Users who completed the video actually activated at a lower rate, likely because they spent 12 minutes passively learning instead of actively using the product.
What I learned: Long-form video tutorials feel like obstacles, not aids. If you must use video, keep it under 90 seconds and make it optional.
Experiment 4: Micro-Videos (Contextual) ✅ WON
Hypothesis: Short, contextual videos at point of need would help more than one long tutorial.
What we built:
- 30-60 second video clips embedded in specific onboarding steps
- Triggered when user paused on a step for >30 seconds
- Skippable and optional
- Showed exactly how to complete the current step
Expected impact: +4-6 percentage points
Actual results after 4 weeks:
- Activation rate: 51% → 55% (+4 percentage points)
- Completion time: 3.8 days → 3.2 days
- Video engagement: 47% of users watched at least one micro-video
Why it worked:
Videos appeared at the exact moment users needed help with a specific step. They were short enough to watch without feeling like a time commitment.
Users who watched 2+ micro-videos activated at 68% vs. 51% baseline.
What I learned: Contextual micro-content at the point of need > comprehensive upfront tutorials. Help users exactly when and where they're stuck, not before they encounter problems.
Experiment 5: Mandatory Onboarding Checklist ❌ FAILED
Hypothesis: Forcing users to complete all onboarding steps before accessing the product would ensure they were properly set up.
What we built:
- Blocked access to main product until onboarding checklist was 100% complete
- 7 required steps, no skipping allowed
- Estimated time: 15-20 minutes
Expected impact: +10-12 percentage points (force completion = higher activation)
Actual results after 2 weeks (experiment stopped early):
- Activation rate: 51% → 38% (-13 percentage points)
- Trial-to-paid conversion: 22% → 14%
- Angry user emails: 47 in 2 weeks
What users said:
"I wanted to explore the product before spending 20 minutes on setup. This felt like being locked out."
"I abandoned the trial because I wasn't willing to invest that much time before seeing if the product worked."
"Let me decide what I need to set up. I don't want to be forced through arbitrary steps."
Why it failed:
Users resist forced workflows. Mandatory onboarding felt like a barrier between them and the value they wanted to experience.
What I learned: You can guide users toward best practices, but forcing them creates resistance. Recommended paths work better than mandatory paths.
Experiment 6: Quick Wins First, Full Setup Later ✅ WON
Hypothesis: Users would be more willing to complete full setup after experiencing a quick win.
What we built:
- New onboarding flow: Get quick win in 3 minutes → Then option to do full setup
- Used sample data to show immediate value
- After users saw results, offered: "Want to see this with your real data? Let's connect your data source."
Expected impact: +7-9 percentage points
Actual results after 4 weeks:
- Activation rate: 51% → 59% (+8 percentage points)
- Completion time: 3.8 days → 2.6 days
- % who completed full setup after quick win: 73%
Why it worked:
Users experienced value before investing effort. This created motivation to complete the harder steps (data connection, configuration).
Quick win → Investment works better than Investment → Quick win
What I learned: Show value first, ask for work second. Users are willing to invest effort once they've seen the product works.
Experiment 7: Social Proof & Testimonials ⚪ NO EFFECT
Hypothesis: Showing social proof during onboarding would increase user confidence and completion.
What we built:
- Customer testimonials embedded in onboarding
- Usage stats: "2,847 users completed this step today"
- Success stories: "Company X achieved Y result using this feature"
Expected impact: +3-5 percentage points
Actual results after 4 weeks:
- Activation rate: 51% → 51% (no change)
- Users who clicked on testimonials: 8%
- User feedback: Mostly ignored
Why it had no effect:
Users were focused on their own setup. Testimonials felt like marketing content that distracted from their task.
Exception: Social proof worked better in the consideration phase (before signup), but didn't affect activation.
What I learned: During onboarding, users are in "do mode" not "evaluate mode." Social proof doesn't overcome friction or add value to the experience.
Experiment 8: Simplified Onboarding (Fewer Steps) ✅ WON
Hypothesis: Reducing required onboarding steps would increase completion.
What we changed:
- Old flow: 7 required steps
- New flow: 3 required steps (moved 4 steps to "optional advanced setup")
- Focused on minimum path to value
Expected impact: +5-8 percentage points
Actual results after 4 weeks:
- Activation rate: 51% → 58% (+7 percentage points)
- Completion time: 3.8 days → 2.1 days
- % who later completed "optional" steps: 62%
Why it worked:
Less friction = higher completion. Users who reached activation quickly were more willing to go back and complete advanced setup later.
Counterintuitive finding: Making steps optional increased the likelihood users would complete them compared to when they were required.
What I learned: Minimize required steps. Make everything else optional and easy to do later. Users who get value will come back for advanced features.
Experiment 9: Personalized Onboarding Paths ⚪ NO EFFECT
Hypothesis: Customizing onboarding based on user role/industry would improve relevance and activation.
What we built:
- Role selection: "I'm a [Marketer/Analyst/Manager/etc]"
- Industry selection: "I work in [SaaS/Ecommerce/Finance/etc]"
- Customized terminology, examples, and templates based on selections
Expected impact: +6-8 percentage points
Actual results after 4 weeks:
- Activation rate: 51% → 52% (+1 percentage point, not statistically significant)
- Users who selected role/industry: 78%
- User feedback: Positive but no impact on activation
Why minimal impact:
While users appreciated personalized examples, it didn't remove onboarding friction or speed up time-to-value. It was nice-to-have, not game-changing.
What I learned: Personalization is valuable for engagement and satisfaction, but doesn't necessarily move activation metrics unless it removes specific friction for specific segments.
Experiment 10: Email Drip Campaign ⚪ NO EFFECT
Hypothesis: Sending educational emails during onboarding would guide users to completion.
What we built:
- 5-email sequence sent over 10 days
- Tips, best practices, case studies
- Reminders to complete onboarding
Expected impact: +4-6 percentage points
Actual results after 4 weeks:
- Activation rate: 51% → 51.4% (negligible change)
- Email open rates: 34%
- Click-through rates: 8%
Why no effect:
Users who were going to activate did so within the first few days. Users who delayed activation weren't being stopped by a lack of information; they were stopped by friction, lack of time, or lack of urgency.
What I learned: More communication doesn't solve activation problems. Removing friction does. Email works better for re-engagement than for initial activation.
Experiment 11: Human Check-In (High-Touch) ❌ FAILED (Cost)
Hypothesis: Personal outreach from success team would increase activation for at-risk users.
What we did:
- Identified users at risk of not activating (showing struggle signals)
- Success team member emailed offering help
- Scheduled 15-min onboarding calls for interested users
Expected impact: +8-12 percentage points for targeted segment
Actual results after 4 weeks:
- Activation rate for targeted segment: 34% → 48% (+14 percentage points) ✅
- But: Cost per activation: $140
- Scalability: Not sustainable at current volume
Why mixed result:
It worked for activation: Human help dramatically improved completion rates.
But cost was prohibitive: At 600 signups/month, this would cost $84K/month.
What I learned: High-touch works but doesn't scale. Reserve it for high-value accounts, or build triggers that identify who truly needs help instead of offering it to everyone.
Experiment 12: Incentive Rewards ❌ FAILED
Hypothesis: Offering rewards for completing activation would motivate users.
What we tested:
- Complete onboarding → Get $25 credit toward subscription
- Complete within 3 days → Get $50 credit
Expected impact: +10-15 percentage points
Actual results after 4 weeks:
- Activation rate: 51% → 54% (+3 percentage points)
- But: Credit redemption created adverse selection (deal-seekers, not committed users)
- 90-day retention of incentivized users: 41% vs. 67% for non-incentivized
Why it failed:
Incentives attracted users who wanted the credit, not users who wanted the product. They activated to get the reward, then churned.
What I learned: Financial incentives can inflate activation metrics but don't improve long-term engagement or retention. Focus on product value, not bribes.
The Winning Combination
After testing 12 experiments, I kept the 4 that worked and stacked their effects:
- AI-powered onboarding assistant: +6 percentage points
- Contextual micro-videos: +4 percentage points
- Quick wins first, full setup later: +8 percentage points
- Simplified onboarding (fewer required steps): +7 percentage points
Theoretical stacked impact: +25 percentage points
Actual stacked impact: +13 percentage points (51% → 64%)
Why less than theoretical: Improvements overlapped (they solved some of the same friction points), so effects weren't fully additive.
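A toy way to see why lifts measured in isolation can't simply be summed: each fix rescues a share of a shrinking pool of not-yet-activated users, and overlapping fixes compete for the same stuck users. The numbers below are a made-up model for illustration, not our actual funnel math:

```python
# Toy model (illustrative, not our funnel): why individual lifts don't add up.
baseline = 0.51
isolated_lifts = [0.06, 0.04, 0.08, 0.07]   # each measured alone against the 51% baseline

# Interpret each lift as rescuing a share of the users who weren't activating.
rescue_rates = [lift / (1 - baseline) for lift in isolated_lifts]

# Even if the four fixes were fully independent, each acts on a smaller remaining
# pool of stuck users, so the combined lift is less than the sum of the parts.
remaining_failures = 1 - baseline
for r in rescue_rates:
    remaining_failures *= (1 - r)

print(f"Naive sum of lifts:   {baseline + sum(isolated_lifts):.0%}")  # ~76%
print(f"Independent stacking: {1 - remaining_failures:.0%}")          # ~72%
print("Actual result was 64%, lower still, because the fixes overlapped "
      "and partly solved the same friction points.")
```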
But 64% activation was a massive improvement from 51%.
What I Learned About Experimentation
Learning 1: Most Experiments Fail
Success rate: 4 out of 12 (33%)
I expected 8-9 to work. Only 4 had positive impact.
This is normal. Most experiments fail. That's why you experiment—to find the few things that actually work.
The key: Run experiments rigorously so you know what failed and can kill it quickly.
Learning 2: Small Wins Compound
No single experiment took us from 51% to 64%. Four small wins (4-8 points each) stacked to create the big improvement.
Don't wait for the one big idea. Run lots of small experiments. Keep what works. The compound effect is powerful.
Learning 3: Removing Friction > Adding Features
The experiments that worked removed friction:
- AI assistant removed question-answering friction
- Micro-videos removed confusion friction
- Quick wins removed motivation friction
- Simplified flow removed step-count friction
The experiments that failed added features (gamification, tutorials, incentives) without removing core friction.
Onboarding optimization is friction removal, not feature addition.
Learning 4: Measure Long-Term Impact, Not Just Activation
The incentive experiment taught me this lesson: activation went up, but retention went down.
Always track:
- Activation rate (did experiment work?)
- Time-to-activation (did it speed things up?)
- 90-day retention (did it attract the right users?)
- Product engagement (are they actually using it?)
A successful experiment improves all four metrics, not just activation.
Learning 5: Context Beats Content
Detailed video tutorial failed. Contextual micro-videos succeeded.
Mandatory checklist failed. Optional, contextual guidance succeeded.
The pattern: Generic, upfront content doesn't work. Contextual, point-of-need help does.
Lesson: Meet users where they are with exactly what they need in that moment.
How to Run Activation Experiments
Step 1: Baseline Your Current State
Before experimenting, establish:
- Current activation rate
- Current time-to-activation
- Current drop-off points
- Current support burden
This is your baseline for measuring impact.
Step 2: Generate Hypotheses
Brainstorm experiments based on:
- User interview insights (where do they struggle?)
- Drop-off data (where do they abandon?)
- Competitive research (what do others do?)
- Team ideas (product, CS, sales)
Aim for 10-15 experiment ideas.
Step 3: Prioritize by ICE Score
ICE framework:
- Impact: How much could this improve activation? (1-10)
- Confidence: How sure am I this will work? (1-10)
- Ease: How easy to build and test? (1-10)
ICE Score = (Impact + Confidence + Ease) / 3
Run highest-scoring experiments first.
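In practice this is a spreadsheet column or a ten-line script. A minimal sketch, with illustrative ideas and scores:

```python
# ICE scoring for an experiment backlog (ideas and scores are illustrative).
ideas = [
    {"name": "AI onboarding assistant", "impact": 7, "confidence": 6, "ease": 5},
    {"name": "Gamified progress bar",   "impact": 8, "confidence": 7, "ease": 8},
    {"name": "Contextual micro-videos", "impact": 6, "confidence": 6, "ease": 7},
]

def ice_score(idea):
    """ICE Score = (Impact + Confidence + Ease) / 3, each rated 1-10."""
    return (idea["impact"] + idea["confidence"] + idea["ease"]) / 3

# Run the highest-scoring experiments first.
for idea in sorted(ideas, key=ice_score, reverse=True):
    print(f"{idea['name']}: {ice_score(idea):.1f}")
```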
Step 4: Design Rigorous Tests
For each experiment:
- Define hypothesis clearly
- Identify success metrics
- Determine sample size needed
- Set test duration (usually 4 weeks)
- Plan A/B test (50/50 split)
Never run experiments without control groups.
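"Determine sample size needed" is the step most teams skip. A rough sketch using the standard two-proportion estimate (normal approximation, standard library only); the 51% baseline is ours, but the 5-point minimum detectable lift, alpha, and power are example choices:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_arm(p_baseline, min_lift, alpha=0.05, power=0.80):
    """Approximate users needed per arm to detect a given lift in a
    two-proportion A/B test (two-sided, normal approximation)."""
    p1, p2 = p_baseline, p_baseline + min_lift
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# Example: detect a 5-point lift from a 51% baseline.
print(sample_size_per_arm(0.51, 0.05))   # roughly 1,561 users per arm
```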
Step 5: Ship, Measure, Learn
Ship: Launch the experiment to 50% of users.
Measure: Track activation rate, time-to-activation, and retention.
Learn: Analyze results, user feedback, and behavioral data.
Decision rules:
- If activation improves >3 points: Keep it
- If activation drops: Kill it immediately
- If no change: Kill it (no benefit = not worth maintaining)
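The ">3 points" rule only holds up if the lift is larger than the noise, which is part of why the control group matters. A quick significance check I'd pair with it, sketched as a standard two-proportion z-test (the user counts below are examples):

```python
from statistics import NormalDist

def two_proportion_z_test(activated_a, n_a, activated_b, n_b):
    """Two-sided z-test for a difference in activation rates
    between control (A) and variant (B)."""
    p_a, p_b = activated_a / n_a, activated_b / n_b
    p_pool = (activated_a + activated_b) / (n_a + n_b)
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_b - p_a, p_value

# Example: control activates 816 of 1,600 users (51%), variant 880 of 1,600 (55%).
lift, p = two_proportion_z_test(816, 1600, 880, 1600)
print(f"Lift: {lift:+.1%}, p-value: {p:.3f}")   # e.g. "Lift: +4.0%, p-value: 0.023"
```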
Step 6: Stack Winners, Kill Losers
Keep successful experiments. Kill everything else.
Don't get attached to ideas. The data decides what stays.
Stack successful experiments to compound improvements.
The Uncomfortable Truth
Most activation improvement efforts fail because teams:
Guess instead of experiment:
- Build what they think will work
- Don't A/B test
- Can't prove impact
- Keep failed initiatives forever
Give up too early:
- Run 2-3 experiments
- When they fail, declare "we've tried everything"
- Stop experimenting
Succeed at the wrong metric:
- Optimize for activation without checking retention
- Celebrate vanity wins that don't improve business outcomes
The best teams:
- Run 10-15 experiments per quarter
- Expect 60-70% to fail
- Kill failures fast, scale winners
- Stack small wins to create big improvements
- Measure long-term impact, not just activation
I ran 12 experiments. 8 failed. 4 worked.
Those 4 took activation from 51% → 64%.
That's how experimentation works. Most things fail. Find the things that work, stack them, and compound the wins.
Don't stop experimenting because most experiments fail. Stop experimenting when you run out of ideas to test.
We're at 64% activation now. Target is 75%.
I've got 9 more experiments queued up.