The Slack message from our VP of Marketing arrived at 9:47 AM on a Tuesday: "Gartner says AI skills gaps are the #1 challenge for PMM leaders in 2025. Let's get ahead of this. Find us a training program."
Three hours of research later, I had a shortlist. LinkedIn Learning's AI for Marketers. Coursera's Generative AI Specialization. A boutique firm offering "AI Literacy for GTM Teams" at $2,000 per person. The course descriptions all promised the same thing: transform your team from AI novices to AI-powered marketing machines in six weeks.
We chose the boutique option. Six people, six weeks, $12,000 total. The syllabus looked perfect. Week 1: Understanding large language models. Week 2: Prompt engineering fundamentals. Week 3: AI tools for content creation. Week 4: Predictive analytics and personalization. Week 5: Ethical AI and bias mitigation. Week 6: Building your AI implementation roadmap.
The team completed every module. They aced the quizzes. They built impressive final projects—AI-generated customer personas, automated competitive intelligence dashboards, predictive lead scoring models.
And then, two weeks after the course ended, I ran a simple audit: how many people were actually using AI tools in their daily work?
The answer was zero.
The Confession That Changed Everything
I cornered Sarah, one of our senior PMMs, in the kitchen. She'd been the most enthusiastic about the training, the first to volunteer, the one who built the most sophisticated final project.
"Why aren't you using any of this?" I asked.
She looked at me like I'd asked her to explain quantum physics. "Using what, exactly?"
"The AI tools. The frameworks from the course. Any of it."
"Oh." She poured her coffee slowly. "Because I don't know which tool to use. We learned about ChatGPT, Claude, Jasper, Copy.ai, like fifteen different platforms. But nobody told me which one I'm supposed to use. Am I allowed to put customer data into ChatGPT? Do we have a corporate account? Is there a budget? What if I pick the wrong tool and violate some policy I don't know about?"
The problem clicked into focus. We'd spent $12,000 teaching people how to drive, but we hadn't given them the keys to a car. Or told them where they were allowed to drive. Or whether they were even authorized to be behind the wheel.
What the Research Actually Shows
Gartner's 2025 report on product marketing leaders identifies AI/GenAI skills gaps as the top challenge. Every article about the finding recommends the same solution: training programs, upskilling initiatives, AI literacy courses.
But there's a second data point that nobody connects to the first one: 61% of marketers already use AI tools.
Read that again. The majority of marketers are already using AI. So how can skills gaps be the #1 challenge?
The answer is that "AI literacy" is a placeholder term for a completely different problem. The actual challenges aren't technical. They're organizational and strategic:
Which tools should we standardize on? Teams are drowning in options. ChatGPT, Claude, Gemini, Jasper, Copy.ai, Writesonic, Persado, Dynamic Yield, HubSpot Breeze, Salesforce Einstein. Every vendor is bolting AI features onto their existing platforms. There's no clear framework for evaluating which tools solve real problems versus which are just AI-washing.
What are we allowed to put into these tools? Legal says no customer data in unapproved tools. IT says any tool that doesn't pass security review is forbidden. Finance says we can't add new vendor contracts without budget approval. Meanwhile, individual contributors are quietly using free ChatGPT accounts because nobody has given them a sanctioned alternative.
Who decides what gets automated? The copywriter who spent three years developing voice and tone guidelines doesn't want AI generating first drafts that sound generic. The analyst who built the competitive intelligence process doesn't trust AI to identify what's actually important versus what's just mentioned frequently. These aren't knowledge gaps. They're legitimate concerns about quality and judgment.
How do we measure whether it's actually helping? Most AI tools promise time savings. But if Sarah generates a blog post in 10 minutes with AI instead of 90 minutes manually, what happens to those 80 saved minutes? Do they get reallocated to strategic work? Or do they just create the expectation that she should now produce eight times more content?
No training course addresses these questions. Because these aren't curriculum problems. They're organizational design problems masquerading as skills gaps.
The Spreadsheet That Nobody Wanted
After the Sarah conversation, I did something that felt absurdly low-tech: I made a spreadsheet.
Column 1: Common PMM tasks (competitive research, content creation, data analysis, customer research synthesis, launch planning).
Column 2: AI tools that claim to help with each task.
Column 3: Actual cost (including hidden costs like learning curve and integration time).
Column 4: Security/compliance status at our company.
Column 5: Who currently owns this task.
Column 6: Whether automating it actually saves time or just creates more work.
I spent two days filling it out. Then I scheduled a meeting with Sarah and the team to review it.
The reaction was immediate and visceral. They hated it.
Not because the analysis was wrong. Because it forced decisions that nobody wanted to make.
For competitive research, seven AI tools could automate various parts of the process. But our current approach—manually tracking competitors via a combination of news alerts, customer interviews, and sales feedback—worked well. It was slow, but it was thorough. Automating it would be faster but less nuanced.
The spreadsheet asked: is speed more valuable than nuance? Nobody wanted to answer.
For content creation, AI could generate first drafts quickly. But our content strategy relied on a specific voice and deep subject matter expertise. AI-generated content would require heavy editing to match our standards.
The spreadsheet asked: would generating mediocre drafts that need significant revision actually save time versus starting from scratch with a clear point of view? Nobody wanted to answer.
The spreadsheet made visible what everyone had been avoiding: adopting AI isn't a knowledge problem. It's a prioritization problem. And prioritization requires making hard choices about what matters most.
The Three Questions That Actually Matter
I scrapped the spreadsheet (too confrontational) and distilled it into three questions that became our AI adoption framework:
Question 1: What specific problem are we trying to solve?
Not "should we use AI?" but "what are we actually trying to fix?" If the answer is "we want to be more efficient," that's not specific enough. Efficient at what? If the answer is "we need to scale content production," why? Is there proven demand for more content, or are we just assuming more is better?
This question forced us to separate real problems from FOMO. It turned out we had three legitimate problems: (1) competitive intelligence took too long to synthesize, (2) sales enablement requests were overwhelming our capacity, and (3) customer research insights were stuck in interview recordings and never made it into messaging.
Those were problems worth solving. "Use AI because everyone else is" wasn't.
Question 2: What are we willing to give up?
AI tools that save time usually trade something for that speed. Nuance for scale. Customization for automation. Human judgment for algorithmic consistency.
This question forced us to acknowledge what we valued most. For competitive intelligence, we valued speed over nuance—we needed to react faster even if the analysis was less deep. For sales enablement, we valued consistency over customization—better for every rep to have good-enough materials than for the top 20% to have perfect materials and everyone else to have nothing. For customer research, we valued accessibility over comprehensive synthesis—better to have partial insights available immediately than complete insights that never get used.
Once we acknowledged these tradeoffs explicitly, tool selection became straightforward.
Question 3: Who has authority to make this change?
The fatal flaw in most AI adoption: nobody clearly owns the decision. IT needs to approve security. Legal needs to approve data handling. Finance needs to approve budget. The team lead needs to approve workflow changes. Individual contributors need to actually use the tool.
This question forced us to map decision rights before we evaluated tools. Who could unilaterally say yes? Who had veto power? Who needed to be consulted versus informed?
In our case: IT had veto power on security. Legal had veto power on customer data handling. Team leads had final decision authority on workflow changes. Individual contributors had input but not veto power. Finance had budget approval but couldn't override the team lead on tool selection within approved budget.
Clarifying this in advance eliminated most of the organizational paralysis.
What We Actually Did
With those three questions answered, our AI adoption became dramatically simpler.
For competitive intelligence: We implemented a specialized monitoring tool that aggregates competitor news, product updates, and customer sentiment. Not because it was the most sophisticated AI tool, but because it solved our specific problem (synthesis takes too long) and the tradeoff made sense (speed over nuance).
For sales enablement: We adopted an AI content generation tool specifically for creating battle card variations and objection handling scripts. Not for primary positioning documents—those still require human strategic thinking—but for the derivative content that was overwhelming our capacity.
For customer research: We implemented an AI transcription and synthesis tool that extracts key themes from interview recordings. Not as a replacement for deep analysis, but as a way to make insights accessible immediately instead of waiting for comprehensive synthesis that often never happened.
Total cost: $8,400 per year. Less than the $12,000 we'd spent on training.
The team adopted all three tools within two weeks. No additional training required. Because the real barrier was never knowledge.
What This Means for the Industry
The AI literacy crisis in product marketing is real. But it's not the crisis that training vendors want you to think it is.
The crisis isn't that PMMs don't understand how AI works. It's that organizations haven't made the strategic decisions required for AI adoption to work:
- Which specific problems are we trying to solve?
- What tradeoffs are we willing to make?
- Who has authority to make changes?
- How do we measure success?
- What happens to the time we save?
These are leadership questions, not training questions. Sending people to courses without addressing these organizational issues is like teaching someone to speak French without planning to send them to France. Technically useful, practically irrelevant.
The product marketing leaders who navigate this successfully in 2025 won't be the ones who send their teams to the most training. They'll be the ones who make clear strategic decisions about what problems matter most and what tradeoffs they're willing to accept.
For teams navigating these strategic AI adoption decisions while managing competitive intelligence and market shifts, platforms like Segment8 help centralize the workflows that AI tools can augment—but only after you've made the hard strategic choices about what to automate and what to keep human.
The Uncomfortable Truth
Six months after the original training investment, I ran into the boutique firm's founder at a conference. I told her the story—how the training didn't lead to adoption, how organizational clarity mattered more than technical knowledge.
She wasn't surprised.
"Ninety percent of our clients have the same experience," she said. "The training works great. People learn the skills. And then they go back to organizations that haven't made any of the strategic decisions required for them to actually use those skills."
"So why do you keep selling training?" I asked.
"Because companies are willing to pay for training. They're not willing to do the hard work of organizational decision-making. Training feels like action. Making strategic choices about tradeoffs feels like politics."
She was right. And that's exactly why the AI literacy crisis will persist regardless of how many people complete AI training courses.
The knowledge is the easy part. The courage to make strategic choices is what's actually missing.
And you can't teach that in a six-week course.