AI Roadmaps Need Experiments, Not Features
The AI Roadmap Trap

Welcome, builders of the future.
If you've ever found yourself promising an AI feature on your roadmap that you're not even sure is possible, you're not alone.
Traditional roadmaps with milestones and deadlines often fall short when applied to AI, which is inherently fuzzy and experimental.
This guide is your cheat code to building AI roadmaps that reflect reality, allowing your team to experiment, learn, and keep leadership aligned.
Plus: a few of my favorite weekly finds—from AI agents reshaping tools like Jira and Slack, to research challenging how we even tell humans and machines apart.
Let’s dive in.
My favorite weekly finds
🛠️ Tools
Brainnote is an AI thought organizer that summarizes your ideas in seconds.
Claude AI Integrations now connects directly with popular work tools like Jira and Confluence, moving beyond simple chatbot functionality towards more agentic AI capabilities.
Google's Data Science Agent automates your data analysis setup in Colab, handling tasks like data cleaning and exploration.
Teamble offers a conversation coach inside Slack and Microsoft Teams to help you give and receive workplace feedback more effectively.
Chikka uses AI voice agents to conduct customer interviews, providing deeper insights without the need for manual interviews.
Currents analyzes social media discussions to deliver real-time insights about what your target audience is talking about.
Guse lets you automate any workflow using a familiar spreadsheet interface, integrating with over 200 apps.
📰 Intelligent Insights
Ethan Mollick discusses the increasing sycophancy of ChatGPT and the growing evidence that AI can be "hyper persuasive," reminding us to remain critical of AI interactions.
The Minimal Turing Test: researchers challenge people to prove they're human using just one word, a novel approach to distinguishing humans from AI.
An MIT study reveals that implementing AI in a financial services company led to a 50% workforce reduction, 18% lower turnover, and 40% cost savings, highlighting the transformative impact of AI on work processes.
Ellie Pavlick on AI's Language Understanding: Computer scientist Ellie Pavlick explores whether AI truly understands language, noting that even its creators can't fully explain how their models work despite knowing the code.
📰 ICYMI
Leading with Confidence Around Powerful Stakeholders (learn more)
Your ‘Perfect’ Strategy Plan Is Already Dead (learn more)
Strategy Is a Daily Habit! (learn more)
AI Roadmaps Need Experiments, Not Features
Why should I care?
Ever been stuck promising an AI feature on your roadmap that you’re not even sure is possible?
Yeah, me too.
The way we build traditional roadmaps, with milestones, deadlines, and features, breaks down fast when applied to AI.
You can’t treat AI like a standard product backlog item. It’s fuzzy. It’s experimental.
And if you roadmap it wrong, you burn trust, time, and team morale.
This post is your cheat code.
It’ll show you how to build AI roadmaps that actually reflect reality and give your team room to experiment, learn, and still keep leadership aligned.
If you’re managing AI products (or about to), you’ll want this in your toolkit.
Let’s dive in.
1- Ditch Features, Think in Capabilities
Bryan Bischof (former Head of AI at Hex) gave me a much better mental model: the capability funnel.
Instead of shipping a binary feature (“Done or not?”), you track progress through increasing levels of usefulness. Think of it like stages in skill development.
Let’s say you’re building a query assistant:
Can it generate syntactically valid SQL?
Can those queries actually execute without crashing?
Are the results even remotely relevant?
Do they match the user’s intent?
Do they actually solve the full job to be done?
Each level is a checkpoint. It's not "working" vs. "not working"; it's "how useful is this right now?"
That’s huge for PMs, because it gives you a way to show measurable progress even before full value is unlocked.
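To make the funnel concrete, here's a minimal sketch in Python of what level-by-level scoring could look like for a query assistant. Everything here is illustrative: `assistant` is a hypothetical stand-in for your model call, the fixture database is a toy, and the deeper levels (matches intent, solves the full job) are left as stubs because they typically need human or LLM graders.

```python
import sqlite3

# Tiny fixture database so the sketch is self-contained.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER, region TEXT, total REAL)")

def assistant(question: str) -> str:
    """Hypothetical model call; hard-coded so the sketch runs end to end."""
    return "SELECT region, SUM(total) AS revenue FROM orders GROUP BY region"

def parses(sql: str) -> bool:
    """Level 1: syntactically valid SQL? EXPLAIN plans without executing."""
    try:
        db.execute("EXPLAIN " + sql)
        return True
    except sqlite3.Error:
        return False

def executes(sql: str) -> bool:
    """Level 2: runs without crashing?"""
    try:
        db.execute(sql).fetchall()
        return True
    except sqlite3.Error:
        return False

def relevant(sql: str, question: str) -> bool:
    """Level 3: results roughly on topic? Placeholder for a real grader."""
    return "region" in sql.lower()  # illustrative heuristic only

LEVELS = [
    ("valid SQL", lambda sql, q: parses(sql)),
    ("executes", lambda sql, q: executes(sql)),
    ("relevant", relevant),
    # Levels 4-5 (matches intent, solves the full job) usually need
    # human review; add them as your graders mature.
]

def funnel_depth(question: str) -> int:
    """How many funnel levels does one response clear?"""
    sql = assistant(question)
    depth = 0
    for name, check in LEVELS:
        if not check(sql, question):
            break
        depth += 1
    return depth

print(funnel_depth("Revenue by region?"))  # -> 3
```

Run this over a batch of test questions and you get a funnel report ("90% parse, 70% execute, 40% relevant...") instead of a single pass/fail, which is exactly the measurable-progress story you can take to leadership.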
2- The Best Teams Roadmap Experiments, Not Features
Here’s the big shift: smart teams don’t lock in features—they lock in experiments.
Eugene Yan (applied scientist at Amazon) shared how he plans ML projects:
Step 1: Two weeks to ask “Do we have the right data?”
Step 2: A month to explore “Can AI actually solve this?”
Step 3: Six weeks to build a prototype and A/B test it.
He’s not jumping to build. He’s running a feasibility sequence. Each phase has a clear “keep going or stop” point.
That structure gives leadership confidence (less risk of sunk costs) and gives the team freedom to learn, iterate, and adapt.
Even though modern LLMs are less dependent on traditional feature engineering, the same principle holds: time-box learning, validate often, and don’t overcommit before you understand what’s possible.
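Here's one way that sequence could be written down, as a sketch: the roadmap becomes a list of time-boxed experiments, each with the question it must answer and an explicit go/no-go gate. The field names, timeboxes, and criteria below are illustrative, not Eugene's actual template.

```python
from dataclasses import dataclass

@dataclass
class Experiment:
    question: str        # what this phase must answer
    timebox_weeks: int   # hard stop, regardless of progress
    exit_criterion: str  # the "keep going or stop" check

roadmap = [
    Experiment("Do we have the right data?", 2,
               "Labeled data covers 80%+ of target use cases"),
    Experiment("Can AI actually solve this?", 4,
               "Offline eval beats the heuristic baseline"),
    Experiment("Does it work for real users?", 6,
               "Prototype wins the A/B test on the primary metric"),
]

for phase, exp in enumerate(roadmap, start=1):
    print(f"Phase {phase} ({exp.timebox_weeks} wk): {exp.question}")
    print(f"  go/no-go: {exp.exit_criterion}")
```

Notice what's missing: no feature names, no ship dates. The only commitments are learning goals and decision points.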
3- Want Fast AI Progress? Build Boring Infra First.
There’s one more piece most teams overlook: evaluation infrastructure.
Without it, your experiments are vibes-based. With it, you move with precision.
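As a rough illustration of how small that infrastructure can start, here's a sketch of a golden-set harness: a handful of fixed cases you re-run on every prompt or model change, so regressions show up as numbers instead of vibes. `model` and the `must_contain` checks are hypothetical placeholders for your own system and graders.

```python
# Golden set: fixed inputs with a simple expectation for each.
golden_set = [
    {"input": "Revenue by region?", "must_contain": "GROUP BY"},
    {"input": "Top 5 customers by total spend", "must_contain": "LIMIT 5"},
]

def model(prompt: str) -> str:
    """Hypothetical; replace with your assistant call."""
    return "SELECT region, SUM(total) FROM orders GROUP BY region"

def run_evals() -> float:
    passed = sum(case["must_contain"] in model(case["input"])
                 for case in golden_set)
    score = passed / len(golden_set)
    print(f"golden set: {passed}/{len(golden_set)} passed ({score:.0%})")
    return score

run_evals()  # run on every change; flag the build if the score drops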
```