I built a dashboard with 47 metrics and spent three months watching nobody use it.
Not the VP of Sales. Not the Chief Product Officer. Not even my own manager. The Looker dashboard I'd spent weeks building sat there, perfectly organized with color-coded tiles and trend lines, generating exactly zero decisions.
The worst part? I kept updating it every week. I'd refresh the data, adjust the filters, add new visualizations. I told myself people just needed time to adopt it. The real problem was that I'd built a dashboard that measured everything except what actually mattered.
I learned this the hard way during a quarterly business review when our CMO asked a simple question: "What impact has product marketing had on revenue this quarter?"
I pulled up my beautiful dashboard and started explaining our 23% increase in battlecard downloads, our 89% sales training completion rate, and our improved messaging adoption scores. She stopped me halfway through.
"That's great, but did we close more deals? Did we expand accounts faster? Did we reduce time-to-first-value?"
I didn't have answers. My dashboard was full of activity metrics that made me look busy, not impact metrics that proved I was effective. That's when I realized most PMM dashboards are executive theater—designed to show we're doing things, not that those things matter.
The Vanity Metrics Problem
The first version of my dashboard tracked everything I could measure. Content downloads. Email open rates. Training session attendance. Competitive intelligence report views. Page views on our messaging guide.
These metrics had one thing in common: they were easy to collect and made me look productive. Downloads were up 40%! Training attendance hit 92%! Our messaging doc got 347 views!
None of them answered the question executives actually care about: is product marketing making it easier to grow revenue?
I spent a month interviewing the stakeholders I thought were my dashboard's audience—Sales VP, Product VP, our CMO, three different regional sales directors. I asked them what decisions they needed to make and what data would help them make those decisions faster.
Not one person mentioned battlecard downloads.
What they did mention: Why are we losing to Competitor X in enterprise deals? Which customer segments have the highest expansion rates? Are new reps ramping to productivity faster after we changed the onboarding program? Do launches actually generate pipeline or just noise?
These questions required different metrics entirely. I wasn't tracking sales cycle length by competitor. I didn't know expansion revenue by segment. I couldn't connect enablement programs to rep productivity. I had no idea which launches drove qualified pipeline versus vanity traffic.
I'd been measuring my inputs (how much stuff I produced) when stakeholders needed to see my outputs: whether that stuff changed business outcomes.
What Executives Actually Look At
I rebuilt my dashboard around a simple test: if this number moved 20% in either direction, would someone change their behavior?
If battlecard downloads dropped 20%, would anyone do anything differently? Probably not. If win rates against our top competitor dropped 20%, would Sales change their approach? Absolutely.
That filter eliminated 80% of my original metrics. What remained were the numbers that actually drove decisions:
Competitive win rates became the anchor metric for my competitive intelligence work. Not "how many battlecards did we create" but "are we winning more against the competitors we're targeting?" I tracked this monthly, broken out by our top three competitors and by deal size.
When our win rate against Competitor A dropped from 64% to 52% over two months, it triggered a response. We interviewed recent losses, updated positioning, ran targeted enablement sessions, and watched the win rate climb back to 61% the following quarter. That's a metric that matters—it changed behavior and we could measure the impact.
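For anyone wiring this up themselves, here's roughly how that cut looks in practice. This is a minimal sketch, assuming a CRM export (a hypothetical closed_deals.csv with close date, primary competitor, deal amount, and won/lost outcome); the file name, column names, and deal-size bands are placeholders, not a prescription.

```python
import pandas as pd

# Hypothetical CRM export: one row per closed opportunity.
# Assumed columns: close_date, primary_competitor, amount, outcome ("won"/"lost").
deals = pd.read_csv("closed_deals.csv", parse_dates=["close_date"])

TOP_COMPETITORS = ["Competitor A", "Competitor B", "Competitor C"]
deals = deals[deals["primary_competitor"].isin(TOP_COMPETITORS)].copy()

# Bucket deals by size so win rates are compared within comparable segments.
deals["deal_band"] = pd.cut(
    deals["amount"],
    bins=[0, 25_000, 100_000, float("inf")],
    labels=["SMB", "Mid-market", "Enterprise"],
)
deals["month"] = deals["close_date"].dt.to_period("M")
deals["won"] = deals["outcome"].eq("won")

# Monthly win rate and deal count per competitor per deal band.
win_rates = (
    deals.groupby(["month", "primary_competitor", "deal_band"], observed=True)["won"]
    .agg(win_rate="mean", deals="count")
    .reset_index()
)
print(win_rates.tail(9))
```

The tooling doesn't matter; what matters is that the number is sliced the way the conversation with Sales actually happens: by competitor and by deal size, month over month.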
Launch-generated pipeline replaced launch "engagement" metrics. I stopped tracking blog post views and started tracking how many qualified opportunities came from launch campaigns within 90 days.
Our Q2 launch generated 47,000 website visits and 12 qualified opportunities worth $340K. Our Q3 launch generated 12,000 website visits and 31 qualified opportunities worth $890K. The second launch was objectively more successful despite lower vanity metrics. Without pipeline tracking, we would have called the first launch a winner and doubled down on the wrong tactics.
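The 90-day attribution itself is easy to approximate. Here's a minimal sketch, assuming two hypothetical exports (campaign touches and opportunities keyed by account) and a deliberately crude rule: count qualified opportunities created within 90 days of an account's first launch touch. The file names, columns, and stage labels are assumptions.

```python
import pandas as pd

# Hypothetical exports; file and column names are assumptions.
# campaign_touches.csv: account_id, campaign, touch_date
# opportunities.csv:    account_id, opp_id, created_date, stage, amount
touches = pd.read_csv("campaign_touches.csv", parse_dates=["touch_date"])
opps = pd.read_csv("opportunities.csv", parse_dates=["created_date"])

LAUNCH_CAMPAIGN = "q3_launch"                                # assumed campaign identifier
WINDOW = pd.Timedelta(days=90)
QUALIFIED_STAGES = ["Qualified", "Proposal", "Closed Won"]   # assumed stage names

# First launch touch per account, then opportunities created within the window.
first_touch = (
    touches[touches["campaign"] == LAUNCH_CAMPAIGN]
    .groupby("account_id")["touch_date"].min()
    .reset_index()
    .rename(columns={"touch_date": "first_touch"})
)
qualified = opps[opps["stage"].isin(QUALIFIED_STAGES)]
merged = qualified.merge(first_touch, on="account_id")
in_window = merged[
    (merged["created_date"] >= merged["first_touch"])
    & (merged["created_date"] <= merged["first_touch"] + WINDOW)
]

print(f"Launch-sourced qualified opps: {len(in_window)}")
print(f"Launch-sourced pipeline: ${in_window['amount'].sum():,.0f}")
```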
Message adoption in actual sales conversations became the measure of enablement effectiveness. I partnered with our RevOps team to pull Gong conversation data and track whether reps were using our core value props in discovery calls.
After rolling out new messaging in January, adoption went from 23% of calls to 71% of calls over six weeks. More importantly, deals where reps used the new messaging had a 19% higher win rate. That number got Sales leadership to buy into every subsequent messaging update.
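Measuring adoption doesn't require anything fancier than a transcript export and a phrase list. Here's a sketch under those assumptions; the CSV columns and value-prop phrases below are placeholders, not Gong's actual API or our actual messaging.

```python
import pandas as pd

# Hypothetical transcript export from the conversation-intelligence tool
# (a RevOps-provided CSV, not a live API). Columns are assumptions.
# calls.csv: call_id, call_date, rep, transcript
calls = pd.read_csv("calls.csv", parse_dates=["call_date"])

# Core value-prop phrases from the new messaging (illustrative placeholders).
VALUE_PROPS = [
    "time to first value",
    "single source of truth",
    "deploy without downtime",
]

def uses_new_messaging(transcript: str) -> bool:
    """A call counts as adopting the messaging if any core phrase appears."""
    text = str(transcript).lower()
    return any(phrase in text for phrase in VALUE_PROPS)

calls["adopted"] = calls["transcript"].apply(uses_new_messaging)

# Weekly adoption rate: share of calls that used at least one core phrase.
weekly_adoption = (
    calls.set_index("call_date")["adopted"]
    .sort_index()
    .resample("W")
    .mean()
    .rename("adoption_rate")
)
print(weekly_adoption.tail(6))
```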
Time-to-productivity for new reps replaced training completion rates. I worked with Sales Ops to track how long it took new hires to close their first deal and reach their first full quota month.
When we revamped onboarding in Q3, time-to-first-deal dropped from 67 days to 52 days. Time-to-full-quota dropped from 4.2 months to 3.6 months. The Sales VP immediately allocated budget to expand the program—something that never happened when I reported "95% training completion."
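Both ramp metrics fall out of data Sales Ops already has. A minimal sketch, assuming hypothetical exports of rep start dates, closed deals, and monthly quota attainment; the file and column names are made up for illustration, and the month conversion is an approximation.

```python
import pandas as pd

# Hypothetical Sales Ops exports; file and column names are assumptions.
# reps.csv:             rep_id, start_date, onboarding_cohort
# rep_deals.csv:        rep_id, close_date, amount
# quota_attainment.csv: rep_id, month (YYYY-MM), attainment (1.0 = full quota)
reps = pd.read_csv("reps.csv", parse_dates=["start_date"])
deals = pd.read_csv("rep_deals.csv", parse_dates=["close_date"])
quota = pd.read_csv("quota_attainment.csv", parse_dates=["month"])

# Time-to-first-deal: days from start date to first closed deal.
first_deal = (
    deals.groupby("rep_id")["close_date"].min()
    .reset_index()
    .rename(columns={"close_date": "first_close"})
)
ramp = reps.merge(first_deal, on="rep_id", how="left")
ramp["days_to_first_deal"] = (ramp["first_close"] - ramp["start_date"]).dt.days

# Time-to-full-quota: months from start to the first month at >= 100% attainment.
first_full = (
    quota[quota["attainment"] >= 1.0]
    .groupby("rep_id")["month"].min()
    .reset_index()
    .rename(columns={"month": "first_full_month"})
)
ramp = ramp.merge(first_full, on="rep_id", how="left")
# Approximate months using an average month length of 30.44 days.
ramp["months_to_full_quota"] = (ramp["first_full_month"] - ramp["start_date"]).dt.days / 30.44

# Compare cohorts before and after the onboarding revamp.
print(ramp.groupby("onboarding_cohort")[["days_to_first_deal", "months_to_full_quota"]].mean())
```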
The Leading vs. Lagging Balance
The harder lesson was understanding when to track leading indicators versus lagging indicators.
Lagging indicators are outcomes—win rates, revenue, deal velocity. They're what executives care about most, but they lag reality by weeks or months. By the time you see win rates dropping, you're already losing deals you can't get back.
Leading indicators predict those outcomes before they happen. They give you time to intervene.
I learned this during a brutal quarter where our win rates dropped 11% and nobody saw it coming. Well, the data saw it coming—we just weren't looking at the right data.
Two months before win rates fell, our sales certification scores had dropped. New reps were scoring 73% on product knowledge tests versus the usual 88%. One month before win rates fell, competitive intelligence requests from Sales spiked 40%—reps were confused about how to position against a competitor's new feature.
Both were leading indicators screaming that we had a problem. I'd been tracking both metrics but didn't recognize them as predictive. I thought of them as "program health" metrics, not early warning signals.
Now my dashboard has both layers. Lagging indicators tell me what happened. Leading indicators tell me what's about to happen so I can fix it.
For competitive positioning, I track competitor mentions in Gong calls (leading) and competitive win rates (lagging). When mentions of a competitor spike, I know we need to update battlecards before win rates start falling.
For messaging effectiveness, I track message adoption in sales calls (leading) and deal conversion rates for opportunities where reps used the messaging (lagging). When adoption starts dropping, I know we need refresher enablement before conversion rates suffer.
For launches, I track sales team confidence scores pre-launch (leading) and launch-generated pipeline (lagging). If reps don't feel confident selling the new capability before launch, pipeline numbers will be disappointing. The leading indicator gives me time to run additional enablement.
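The mechanics of the early warning can be embarrassingly simple: compare the latest value of each leading indicator to its trailing baseline and flag when it moves past a threshold, before the paired lagging metric has a chance to react. Here's a sketch of that idea; the metric names and thresholds are illustrative, not the exact values I use.

```python
from dataclasses import dataclass

@dataclass
class IndicatorPair:
    """A leading metric watched so its lagging counterpart doesn't surprise us."""
    name: str
    leading: str
    lagging: str
    # Relative change in the leading metric (vs. trailing baseline) that should
    # trigger a response; these thresholds are illustrative, not prescriptive.
    alert_threshold: float

PAIRS = [
    IndicatorPair("Competitive positioning", "competitor_mentions_per_100_calls",
                  "competitive_win_rate", alert_threshold=0.25),
    IndicatorPair("Messaging effectiveness", "message_adoption_rate",
                  "messaging_deal_conversion", alert_threshold=-0.15),
    IndicatorPair("Launch readiness", "rep_confidence_score",
                  "launch_generated_pipeline", alert_threshold=-0.10),
]

def leading_indicator_alert(history: list[float], threshold: float) -> bool:
    """Flag when the latest value moves past `threshold` relative to the
    average of the prior three periods (the trailing baseline)."""
    if len(history) < 4:
        return False
    baseline = sum(history[-4:-1]) / 3
    change = (history[-1] - baseline) / baseline
    return change >= threshold if threshold > 0 else change <= threshold

# Example: competitor mentions jump from ~12 per 100 calls to 18 -> alert fires,
# leaving time to refresh battlecards before win rates move.
print(leading_indicator_alert([11, 12, 13, 18], threshold=0.25))  # True
```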
The Mistake I Still See Everywhere
The most common dashboard mistake I see other PMMs make is tracking metrics that prove they're busy, not metrics that prove they're effective.
I reviewed dashboards for a dozen PMM teams last year. Every single one tracked content creation. "We published 47 blog posts this quarter!" "We created 23 sales assets!" "We ran 8 training sessions!"
Not one tracked whether that content changed anything.
When I asked "What happened after you published those 47 blog posts?" the answer was usually vague. Some traffic. Some leads. Maybe influenced some deals. But no clear connection between effort and outcome.
The shift from activity to impact is uncomfortable because impact metrics are harder to move. You can create 47 blog posts if you try hard enough. You can't force 47 blog posts to generate $500K in pipeline—that requires the posts to actually be good and reach the right people.
Activity metrics make you look productive in the short term. Impact metrics make you valuable in the long term. Most PMMs choose productivity over value because it's safer.
Until someone asks "What would change if we eliminated this role?" and you realize your entire dashboard is filled with metrics that don't answer that question.
Building a Dashboard That Gets Used
After multiple failed attempts, I finally built a dashboard that stakeholders actually opened without me sending reminder emails.
The secret wasn't better visualizations or more sophisticated tracking. It was ruthless focus on the smallest possible set of metrics that drove the biggest decisions.
My final dashboard has 8 metrics. That's it.
Three competitive metrics: win rate vs. top competitor, competitive deal cycle length, competitive discount depth. These tell Sales whether our competitive positioning is working.
Two enablement metrics: time-to-first-deal for new reps, message adoption in Gong calls. These tell Sales whether our enablement is effective.
Two launch metrics: launch-generated pipeline, sales confidence scores. These tell Product and Marketing whether our launches are worth the investment.
One customer insight metric: customer interview velocity. This tells Product whether we're generating enough qualitative insight to inform roadmap decisions.
Each metric appears on a single dashboard with three views: current month, trend over 12 months, and breakdown by segment or region. No complicated filters. No drill-downs that require training to navigate. One screen, eight numbers, three views each.
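If it helps to see the whole thing at a glance, the dashboard boils down to a spec you could write on an index card: eight metrics, each tied to an owner and a decision, each rendered in the same three views. The sketch below mirrors that structure; the decision wording is paraphrased for illustration, not a quote from any stakeholder.

```python
# Eight metrics, one owner and one decision each, three shared views.
VIEWS = ["current_month", "trend_12_months", "breakdown_by_segment_or_region"]

DASHBOARD = [
    {"metric": "win_rate_vs_top_competitor",    "owner": "VP of Sales",
     "decision": "invest in additional competitor training?"},
    {"metric": "competitive_deal_cycle_length", "owner": "Sales",
     "decision": "is our competitive positioning working?"},
    {"metric": "competitive_discount_depth",    "owner": "Sales",
     "decision": "is our competitive positioning working?"},
    {"metric": "time_to_first_deal_new_reps",   "owner": "Sales",
     "decision": "expand or rework the onboarding program?"},
    {"metric": "message_adoption_in_calls",     "owner": "Sales",
     "decision": "run refresher enablement on core messaging?"},
    {"metric": "launch_generated_pipeline",     "owner": "Product / Marketing",
     "decision": "are launches worth the investment?"},
    {"metric": "sales_confidence_scores",       "owner": "Product / Marketing",
     "decision": "add enablement before the next launch?"},
    {"metric": "customer_interview_velocity",   "owner": "Product",
     "decision": "is there enough qualitative insight for roadmap planning?"},
]

assert len(DASHBOARD) == 8  # one screen, eight numbers, three views each
```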
The VP of Sales checks it every Monday. Product looks at it before roadmap planning. My manager uses it in her QBRs with the executive team. It gets used because it's simple enough to understand in 30 seconds and relevant enough to drive real decisions.
What I'd Tell My Past Self
If I could rebuild that first dashboard knowing what I know now, I'd start with one question: "What decision would change if this metric moved 20%?"
If the answer is "nothing" or "I'm not sure," delete the metric. No matter how easy it is to track or how impressive the trend looks.
I'd track half as many metrics and spend twice as much time making sure the data was actually accurate. My early dashboards were full of metrics that were directionally correct but not precisely right. Close enough for a weekly check-in, but not reliable enough to bet a strategy on.
The first time someone made a big decision based on my dashboard and then discovered the data was slightly off, I lost credibility for six months. Precision matters more than quantity.
I'd connect every metric to a specific stakeholder decision. Not "Sales might find this interesting" but "The VP of Sales uses this to decide whether to invest in additional competitor training." If I couldn't name the decision-maker and the decision, the metric didn't belong on the dashboard.
I'd build the dashboard with my stakeholders, not for them. The dashboards that got used were the ones where I sat with Sales Ops, Product Ops, and Marketing Ops and asked "What do you need to see to make your job easier?" The dashboards I built in isolation stayed in isolation.
Most importantly, I'd accept that the best PMM dashboards are boring. No fancy visualizations. No clever data science. Just the core metrics that make it obvious whether product marketing is making the business better.
The goal isn't to build a dashboard that makes you look smart. It's to build a dashboard that makes your stakeholders' jobs easier. Those are different things.
The Real Purpose of a PMM Dashboard
After years of building dashboards that nobody used, and a few that actually got used, I've learned that a PMM dashboard has one job: prove that product marketing is a revenue function, not a support function.
That means tracking metrics that connect to money. Pipeline generated. Win rates improved. Deal cycles shortened. Expansion rates increased. Time-to-productivity reduced.
Everything else is noise.
The uncomfortable truth is that most PMMs avoid revenue metrics because revenue is harder to influence than activity. It's easier to create 50 battlecards than to improve win rates by 5%. It's easier to run 10 training sessions than to reduce ramp time by 30 days.
But easy metrics don't build credibility or protect your headcount when budget cuts come. Revenue metrics do.
I learned this during a round of layoffs where every team had to justify their budget. The teams with dashboards full of activity metrics had to fight for every dollar. The teams with dashboards tied to revenue kept their budgets and got increases.
Your dashboard is a bet on what you think matters. Choose metrics that prove product marketing drives growth, or choose metrics that prove you're busy. Only one of those keeps you employed when things get hard.