Your product team just redesigned the onboarding flow. It looks clean, modern, and intuitive—on paper.
You launch it. Three days later, customer success is fielding a surge of support tickets. New users are getting stuck at Step 3. They're confused about where to upload files. Some are abandoning the setup entirely.
This could have been caught with user testing. Watching a real user try to complete onboarding would have revealed the confusion before it hit production.
But most B2B companies skip user testing. They assume it's only for consumer apps, or they can't get busy enterprise users to participate, or their product is "too complex" to test in short sessions.
All of these assumptions are wrong.
Here's how to run user testing for B2B products—even complex, technical, enterprise software.
Why B2B Companies Skip User Testing (and Why They Shouldn't)
Excuse 1: "Our users are too busy to participate in testing"
True, enterprise users have less spare time than consumer app testers. But they'll participate if you:
- Pay them appropriately ($100-$200 for 45 minutes)
- Respect their time (start on time, end on time, don't waste a minute)
- Frame it as helping them improve a tool they use every day
Excuse 2: "Our product is too complex to test in an hour"
You're not testing the entire product. You're testing specific workflows: Can a new user complete setup? Can an admin add a team member? Can a user export a report?
Complex products need testing more, not less. That's where usability issues hide.
Excuse 3: "We get feedback from customers already"
Feedback is reactive. Customers tell you about problems after they've struggled with them. User testing is proactive. You catch problems before they ship.
Excuse 4: "We're B2B, not a consumer app. Usability matters less."
B2B users deal with terrible UX all the time, so they tolerate bad design. But "tolerate" isn't "love." Given a choice between a clunky tool and a smooth tool that solves the same problem, users choose smooth.
Poor usability doesn't just frustrate users. It slows adoption, increases support burden, and gives competitors an opening.
The User Testing Scenarios Worth Running
Don't test your entire product. Test the specific workflows where usability issues create the most friction.
Scenario 1: First-time user onboarding
Task: "You just signed up for [product]. Your goal is to [complete core setup task]. Walk me through how you'd do that."
This reveals:
- Where new users get confused
- What steps feel unclear or unnecessary
- Where they abandon or call support
Scenario 2: Core workflow completion
Task: "You need to [primary use case your product solves]. Show me how you'd do that."
This reveals:
- Whether users understand the intended workflow
- Where they take inefficient paths or get stuck
- What features they expect but don't find
Scenario 3: Finding and using a specific feature
Task: "You want to [use Feature X]. How would you find and use it?"
This reveals:
- Whether navigation makes sense
- Whether feature naming/labeling is intuitive
- Whether users understand what features do
Scenario 4: Error recovery
Task: "You just made a mistake and want to undo it. What would you do?"
This reveals:
- Whether error messages are clear
- Whether recovery paths exist and are discoverable
- Whether users feel confident or anxious using the product
Scenario 5: Admin/configuration tasks
Task: "You need to add a new team member and give them specific permissions. Walk me through that."
This reveals:
- Whether admin workflows are intuitive (often neglected in design)
- Whether permission models make sense to non-technical users
- Where admins struggle or need documentation
The User Testing Protocol That Works for B2B
Step 1: Recruit the right participants
Test with people who match your target users:
- Same role (if you're building for data analysts, test with data analysts)
- Same level of expertise (don't test with power users if most customers are beginners)
- Same company context (test with users from similar company sizes and industries)
Recruit 5-8 participants per round. Five is usually enough to surface most major issues; beyond eight, you hit diminishing returns.
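If you want to see why that range holds, a commonly cited rule of thumb (the Nielsen/Landauer model) assumes each participant independently surfaces roughly 31% of the usability issues in a given workflow. The sketch below is just that back-of-the-envelope model, not a guarantee for your product; the 0.31 figure and the helper name are assumptions for illustration.

```python
# Rough sketch of why 5-8 participants is usually enough, using the
# commonly cited Nielsen/Landauer rule of thumb: each participant
# independently surfaces about 31% of the usability issues in a workflow.
# The 0.31 figure is an industry heuristic, not a guarantee.

def share_of_issues_found(participants: int, hit_rate: float = 0.31) -> float:
    """Expected share of issues seen by at least one participant."""
    return 1 - (1 - hit_rate) ** participants

for n in (3, 5, 8):
    print(f"{n} participants -> ~{share_of_issues_found(n):.0%} of issues")
# 3 participants -> ~67%, 5 -> ~84%, 8 -> ~95%
```

That curve is why a ninth or tenth session mostly re-confirms what the first five already showed.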
Step 2: Set up the session
- Schedule 45-60 minutes
- Use screen sharing (Zoom, Google Meet, etc.)
- Record the session (with permission) so you can review later
- Have one person facilitate, one person take notes
Step 3: Frame the session correctly
Start by setting expectations:
"Thanks for joining. We're testing our product, not testing you. There are no wrong answers. In fact, if something's confusing, that's our fault, not yours. I'm going to ask you to try some tasks and think out loud as you go. I won't be able to help much—I want to see what you'd do if I weren't here. Sound good?"
This removes pressure and encourages honest reactions.
Step 4: Give scenario-based tasks, not instructions
Bad: "Click the Settings icon, then click Add User, then enter their email."
Good: "You want to add Sarah to your team so she can view reports. How would you do that?"
The first tells them what to do. The second tests whether they can figure it out.
Step 5: Encourage thinking out loud
Ask users to narrate their thought process:
"As you work through this, tell me what you're thinking. What are you looking for? What do you expect to happen?"
This reveals not just where they struggle, but why they're confused.
Step 6: Ask follow-up questions
When users get stuck, don't jump in to help immediately. Ask:
- "What are you looking for right now?"
- "What would you expect to happen if you clicked here?"
- "Does this make sense to you, or is something unclear?"
Let them struggle for 15-20 seconds. That discomfort reveals friction points.
Step 7: Debrief at the end
After tasks, ask:
- "What was easier than expected? What was harder?"
- "Was there anything confusing or frustrating?"
- "How does this compare to other tools you use for this task?"
This captures impressions that didn't come up during task execution.
What to Look For During User Testing
You're not just watching whether users complete tasks. You're watching how they complete them.
Red flag 1: Long pauses
If a user stops and stares at the screen for 10+ seconds, they're stuck. Something isn't intuitive.
Red flag 2: Misinterpreting labels or icons
If a user says "I'm looking for Settings" and clicks "Preferences," your labeling is confusing.
Red flag 3: Clicking around hoping something works
If users are clicking random things without a clear hypothesis, they don't understand the interface model.
Red flag 4: Saying "I would Google this" or "I would ask someone"
If users immediately defer to external help, your product isn't self-explanatory enough.
Red flag 5: Expressing frustration or apologizing
If users say "sorry, I'm not good at this," that's not a user problem. That's a design problem. If someone who matches your target persona can't figure it out, neither will your customers.
Green flag 1: Smooth, confident task completion
Users complete tasks without hesitation. They don't question whether they're doing it right.
Green flag 2: Positive surprise
Users say things like "oh, that's easier than I expected" or "nice, I like that." This indicates good UX.
Green flag 3: Accurate predictions
Users say "I expect clicking this will do X" and they're right. This means your interface is predictable and intuitive.
How to Synthesize Findings into Actionable Insights
After testing 5-8 users, you'll have hours of recordings and pages of notes. Now what?
Step 1: Tag issues by severity
Critical (P0): Users couldn't complete the task or made serious errors
High (P1): Users completed the task but struggled, got confused, or took inefficient paths
Medium (P2): Users mentioned frustration or confusion but ultimately figured it out
Low (P3): Minor UI issues that didn't block progress
Step 2: Tag issues by frequency
- Universal: 5+ out of 6 participants hit this issue
- Common: 3-4 out of 6 participants hit this issue
- Occasional: 1-2 out of 6 participants hit this issue
Step 3: Prioritize fixes
Fix immediately: Critical issues that are common or universal
Fix soon: High-severity issues that are common, or critical issues that are occasional
Backlog: Medium or low-severity issues
If 6 out of 6 users couldn't complete onboarding without help, that's a critical, universal issue. Drop everything and fix it.
If 1 out of 6 users mentioned they didn't like a button color, that's occasional and low-severity. Note it, but don't prioritize it.
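If you track findings in a spreadsheet or a simple script, the rules above collapse into a small lookup. The sketch below mirrors the buckets in this article; the function names, thresholds, and data shapes are illustrative, not part of any particular research tool.

```python
# Minimal sketch of the severity-by-frequency triage described above.
# Bucket names mirror this article; thresholds and function names are
# illustrative, not part of any particular research tool.

def frequency_bucket(affected: int, total: int) -> str:
    """Classify how widespread an issue was across participants."""
    share = affected / total
    if share >= 0.8:
        return "universal"   # e.g., 5+ out of 6
    if share >= 0.5:
        return "common"      # e.g., 3-4 out of 6
    return "occasional"      # e.g., 1-2 out of 6

def fix_priority(severity: str, frequency: str) -> str:
    """Map severity (P0-P3) and frequency to a fix bucket."""
    if severity == "P0" and frequency in ("universal", "common"):
        return "fix immediately"
    if severity == "P0" or (severity == "P1" and frequency in ("universal", "common")):
        return "fix soon"
    return "backlog"  # medium/low severity, or high severity seen only occasionally

# 6 of 6 users needed help to finish onboarding: critical and universal.
print(fix_priority("P0", frequency_bucket(6, 6)))  # fix immediately
# 1 of 6 users disliked a button color: low severity, occasional.
print(fix_priority("P3", frequency_bucket(1, 6)))  # backlog
```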
Step 4: Document findings with video clips
When you report findings to your team, include short video clips of users struggling.
A 20-second clip of a user saying "I don't understand what this means" is 10x more convincing than a bullet point that says "users found the label confusing."
Seeing real users struggle creates empathy and urgency.
The Testing Cadence That Catches Issues Early
Before major releases: Test new features or redesigns before launch. Catch issues while they're still easy to fix.
Quarterly: Run testing on existing workflows to catch UX debt that's accumulated over time.
After usability changes: If you ship a fix based on testing, validate that the fix actually solved the problem by testing again.
When support tickets spike: If you see a surge in support volume around a specific workflow, test it to find the root cause.
Common B2B User Testing Mistakes
Mistake 1: Testing with your own team
Your teammates are too familiar with the product. They'll breeze through tasks that confuse real users. Always test with external participants who match your target users.
Mistake 2: Guiding users too much
If you help users when they get stuck, you won't see where the real friction is. Let them struggle (within reason).
Mistake 3: Testing too many things at once
Focus each session on 2-3 key tasks. If you test 10 workflows in 45 minutes, you won't get deep insights on any of them.
Mistake 4: Only testing with power users
Power users adapt to bad UX. Test with typical users or even beginners to see if your product is intuitive for people who don't already know it deeply.
Mistake 5: Treating findings as requests
If one user says "I wish there was a shortcut here," that's feedback. If five users struggle with the same workflow, that's a usability issue. Don't treat every comment as a feature request.
When User Testing Reveals Bigger Problems
Sometimes testing uncovers issues bigger than UX:
Issue: Users don't understand what the product does
This isn't a UX fix. This is a positioning or education problem. Users need better onboarding, clearer messaging, or in-product education.
Issue: Users are trying to do things the product isn't designed for
This might mean your positioning attracts the wrong users, or your product is missing a core use case.
Issue: Users complete tasks but don't see the value
They can use the product, but they don't understand why it matters. This is a value communication problem, not a usability problem.
User testing sometimes reveals that the real issue isn't how you built something, but what you built or who you built it for. Those insights are just as valuable as finding a confusing button.
The ROI of User Testing
One round of user testing costs:
- Participant incentives: $500-$1,000 (5 users × $100-$200 each)
- Researcher time: 6-8 hours (recruiting, running sessions, synthesis)
- Total: roughly $1,000 in hard costs, plus team time
Compare that to:
- Support costs from thousands of confused users
- Churn from users who can't figure out your product
- Lost deals because prospects found a competitor easier to use
One round of testing before launch can prevent months of support debt and lost revenue.
User testing isn't expensive. Shipping bad UX is.