Part 2 of 2. ← Read Part 1: AI Adoption and Product Management: Proven Across Industries
Dogfooding means using your own product. Most teams do it occasionally: test a feature, check a flow, make sure nothing’s broken. This is the story of what happens when dogfooding becomes your entire job.
In Part 1, I wrote about running a business without AI and wishing I’d had it. That was me on the outside, using tools that happened to be available.
My primary work tool right now is an AI product that we’re building in-house, and most of what I use it for …is building other AI products. I’m way beyond dogfooding: I’m living inside the dog.
You Can Only Break What You Know
Dogfooding comes naturally to me because I come from a QA background. I’m not always intentionally trying to break things, but I happen to be good at it (for better or worse).
Here’s something I’ve believed through every product I’ve ever worked on: the only way you can truly break your product is if you know it. If you don’t know how it’s supposed to work (the happy path, the sad path, the edge cases, all of it) then how can you break it? You can’t find what’s wrong if you don’t deeply understand what right looks like.
Think about your own product. Could you walk through every flow eyes closed, backwards and forwards? I can with ours. And honestly, I probably think about the non-happy path more than the happy path. That’s where the real product lives. In the cracks, the edge cases, the moments where a user does something you didn’t anticipate.
Because I’m using our platform constantly, not just to test it but to actually do my work, I catch issues that might not surface otherwise. Sometimes it comes from me actively pushing the product to its limits, trying to do the most with it. But more often, it comes from me debugging someone else’s problem and uncovering the root cause.
Someone on my team will surface an issue because they were testing something, and when I dig in, I find the real crack in the foundation. Then it’s the hunt: tracing it back, hitting the “aha” moment, going down the rabbit hole of how it connects to other features, other bugs, things that slipped past QA, things nobody thought to check.
And because this is an AI product, there’s an additional layer of complexity. Is this an LLM problem, or is this a problem with our application? I genuinely try to give the benefit of the doubt that it’s not us. But that process of tracing, investigating, and disambiguating is where I surface the issues that are ours. It’s frustrating. But I’m also doing detective work into a black box, learning how the AI actually behaves, figuring it out in real time because there’s no playbook. That’s what makes building AI products different from anything I’ve done before.
Here’s a concrete example. When Claude Opus 4.6 launched with support for a 1 million token context window, I was pushing our platform to take advantage of it. Our LiveView sessions started crashing during long responses. We assumed it was a memory issue from the larger context size. Turns out it was a rendering problem: every stream chunk was triggering a full re-parse of the entire accumulated response, and the process would slow down until the page refreshed. We traced it back, found the real culprit, and fixed it. That’s the kind of issue you only find when you’re living in it: true dogfooding.
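To see why that kind of streaming bug hurts, here’s a simplified sketch: re-parsing the whole accumulated response on every chunk makes total work grow quadratically with response length, while parsing only the new chunk keeps it linear. The function names and numbers are illustrative, not our actual code.

```python
# Toy model of the streaming bug. "Work" here is just characters processed;
# the names and chunk sizes are illustrative assumptions, not the real app.

def render_full_reparse(chunks):
    """Re-parse the entire accumulated response on every chunk (the bug)."""
    accumulated = ""
    work = 0
    for chunk in chunks:
        accumulated += chunk
        work += len(accumulated)  # cost of re-parsing everything so far
    return work

def render_incremental(chunks):
    """Parse only the new chunk and append it (the fix)."""
    work = 0
    for chunk in chunks:
        work += len(chunk)  # cost proportional to new content only
    return work

chunks = ["x" * 100] * 1000  # a long streamed response: 1,000 chunks
print(render_full_reparse(chunks))  # 50,050,000 characters processed
print(render_incremental(chunks))   # 100,000 characters processed
```

With a 1M-token context producing very long responses, that quadratic blow-up is exactly the kind of slowdown that looks like a memory problem until you trace it.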
That kind of experience has also built my technical abilities in ways I didn’t expect. Navigating the “is it the LLM or is it us?” question over and over has forced me to understand APIs, scripts, model capabilities and limitations, context windows, and application architecture at a level I never planned on. But when you’re the one in the product every day, tracing issues back to their root cause, you learn fast because you have to.
The Discipline of the Dual Hat
Here’s the challenge of dogfooding as a PM: sometimes I’m the user hitting a wall, sometimes I’m the PM deciding whether that wall matters. I have to know which one I am at any given moment.
When I catch something, I don’t fix it immediately. I log it. I weigh it against the roadmap. Because while I am a heavy user of this product, I am not the paying customer. I know that my issues are not necessarily the general issues that need to be solved. My use of this product is way beyond any typical user.
Here’s the thing that keeps me grounded: I know our users. Not just personas or research summaries. I know how they actually use the product, what they touch, what they ignore. And I know my experience is the extreme edge case. That’s what lets me separate “this matters to me” from “this matters to our users.” When something’s unique to my usage, I deprioritize it, even if it’s frustrating.
But there have been more than a few times where I flagged something, deprioritized it (because I’m a power user living inside the dog), and then a real user ran into the exact same issue. It’s happened enough that I’ve joked with my team: just have the customer submit the ticket so I actually get heard. But it reinforces why I keep logging everything, even the things I deprioritize, because sometimes it takes a second voice to turn something from an edge case into a priority.
So what does a PM do when she hits a flow she needs but the product doesn’t support yet? Well… they call me the workaround queen 👑.
I’m not going to sit around waiting for a feature to be built – I’m going to make that flow work for me and find a workaround. And then when I share those workarounds with our tech lead, the response is almost always: “That’s actually really cool. That’s something we need to build.”
There’s a real tension there. I’m building workarounds because what I’m doing isn’t common enough to prioritize… yet. Most people’s mental model of AI is still a chat box. Type a question, get an answer. The real power is way beyond that, and most users haven’t gotten there. I’m just further in because I live in the product every day.
So the cycle becomes: use the product → hit a wall → build a workaround → share the workaround → it becomes the product. That’s dogfooding at its best. The instinct to deprioritize myself is the right discipline. But the workarounds are feature discovery in disguise.
And there’s another layer to this. While I’m not the paying customer, part of the goal is figuring out how to get the paying customer to where I am. How do we get them past the basic chat and into the real power of the product? And how do I fix the flaws I’m finding now, before they ever encounter them? I’m not just the edge case. I’m scouting the path our users are eventually going to walk.
AI as a Tool, Not a Crutch
In Part 1, I talked about the work AI could have saved me during NOLA Phud. The recipe math, the grocery logistics, the macro calculations. That was looking back and seeing lost time. Now I’m looking forward, and every improvement compounds into the next.
I’m a firm believer in human-in-the-loop. I don’t ever just use AI and blindly trust the output. Yes, that takes more time. But in my experience, I would rather brain-dump my thoughts, have AI organize them, and then review and refine, than have to sit and think about how to perfectly articulate everything from scratch. The human is still doing the thinking. The AI is helping me get what’s in my head into something structured. And yes, the way it structures things shapes the output. That’s why the review step isn’t optional… it’s where I make sure it still sounds like me and says what I actually mean.
And here’s what people don’t always see: it’s not like I took a manual process and made it an AI process and called it a day. The evolution is more like: I took a manual process, made it an AI process, then made it a better AI process, and then made it an even more optimized one after that.
And at every stage, I’m still the one telling the AI what to do. It doesn’t just know. I have to define the task, structure the prompt, refine the output, and iterate until it’s right. That takes time. Sometimes more time upfront than just doing the task manually. But that initial time investment is what makes it a one-shot the next time. And the time after that. And the time after that. The payoff isn’t in the first run. It’s in every run after.
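That “one-shot the next time” payoff can be made concrete with a toy sketch: once a prompt has been defined and refined, it becomes a reusable template, and every later run is just filling in the blanks. The template text and function name below are illustrative assumptions, not our actual setup.

```python
# A refined prompt saved as a reusable template: the upfront work happens
# once, and every later run just substitutes the variables.
from string import Template

# Hypothetical template; the wording is an illustrative stand-in.
MEETING_NOTES_PROMPT = Template(
    "You are a note-taker for a $meeting_type meeting.\n"
    "Summarize the transcript into decisions, risks, and action items.\n"
    "Transcript:\n$transcript"
)

def build_prompt(meeting_type: str, transcript: str) -> str:
    """Later runs are one-shot: fill the template instead of re-thinking it."""
    return MEETING_NOTES_PROMPT.substitute(
        meeting_type=meeting_type, transcript=transcript
    )

prompt = build_prompt("kickoff", "Alice: let's start with scope...")
```

The design choice is the point: the thinking lives in the template, so the marginal cost of each subsequent run is near zero.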
If you’ve automated something with AI once and stopped there, you’ve probably only scratched the surface. The real gains come from the second, third, and fourth generation of the same workflow. Let me show you what I mean with the simplest example I can think of: meeting notes. Not a generic transcriber that gives you the same output regardless of context. Something that knows whether it’s a kickoff, a stakeholder interview, or an external user session, and structures the output accordingly.
Generation one: handwriting or typing notes in a Google Doc during the meeting. Half-listening, half-transcribing.
Generation two: downloading Zoom transcripts after the call and pasting them into a prompt to generate structured notes. Better, but still manual.
Generation three: automations that route notes to Slack, upload to Google Drive, with different processes for different meeting types. Each with their own action items. Getting closer, but I still had to find the time to run them and then circle back on every action item.
Generation four: a full agent pipeline. Zoom transcripts go in, and what comes out is structured meeting notes, triaged action items (is this a feature? a bug? a task?), de-duplicated against our existing backlog, with draft tickets and prompt templates ready for review. It lands in my Slack. I review, refine, and ship. Nothing goes live without me looking at it.
What used to be a full afternoon of post-meeting work is now a review-and-refine cycle. Four generations of the same task, each one a multiplier on the last, and each one only happened because I was using the previous one every day and feeling where it fell short. Today I had four discovery sessions back to back. Instead of sifting through notes while exhausted, I queried my action items, filtered down to what outreach was needed, and drafted follow-ups from the meeting context.
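For the curious, the shape of that generation-four pipeline can be sketched in a few lines. The triage rules, function names, and dedup logic below are toy stand-ins for what an LLM-backed agent actually does; they’re illustrative assumptions, not the real implementation.

```python
# Hypothetical sketch of the pipeline shape: transcript action items in,
# triaged and de-duplicated draft tickets out, ready for human review.

def triage(action_item: str) -> str:
    """Classify an action item as a feature, bug, or task (toy keyword rules)."""
    text = action_item.lower()
    if "crash" in text or "error" in text:
        return "bug"
    if "add" in text or "support" in text:
        return "feature"
    return "task"

def dedupe(items: list[str], backlog: list[str]) -> list[str]:
    """Drop items that already exist (case-insensitive match) in the backlog."""
    seen = {entry.lower() for entry in backlog}
    return [item for item in items if item.lower() not in seen]

def run_pipeline(transcript_items: list[str], backlog: list[str]) -> list[dict]:
    """Action items in, draft tickets out; nothing ships without review."""
    fresh = dedupe(transcript_items, backlog)
    return [{"title": item, "type": triage(item)} for item in fresh]

tickets = run_pipeline(
    ["Add support for 1M context", "Fix streaming crash", "Update roadmap doc"],
    backlog=["add support for 1m context"],  # already tracked, gets dropped
)
# Two draft tickets survive dedup, each pre-classified for review.
```

The real version replaces the keyword rules with model calls, but the structure is the same: classify, de-duplicate against the backlog, draft, and stop at a human.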
The Walls Are Coming Down
There’s a new model, a new tool, a new capability every day. But here’s what I’ve realized: it’s not about PMs who use AI versus PMs who don’t. It’s about becoming full-stack in everything.
I’m not an expert in construction. Or finance. Or hospitality. Or real estate. But I don’t need to be. That’s the thing I keep learning over and over. I’ve built cost estimation tools for construction teams, investment screening workflows for private equity, financial models for real estate acquisitions, and compliance review agents. Prompts that query historical project data to find comparable estimates. Agents that run multi-step analysis and flag issues before humans even look.
I can build prompts that play the role of an industry expert. I can lean on my product skills to do the discovery work, figure out where the pain points are, and identify where AI workflows can help. In Part 1, I talked about not knowing what I didn’t know. Now I’ve realized that’s not a limitation. It’s the starting point. The industries change. The discovery process doesn’t.
Right now I have my hands in five or six products. Most PMs are supposed to manage one or two. I’m not saying that to complain. I’m saying it because AI tools make it possible. Context-switching is part of it, but the real unlock is how everything works together.
Automations that turn recorded interviews into notes and post them to Slack. MCP tools that connect to everything from GitHub to our CRM to internal project setup, letting me update documentation, tickets, and project files all at once. GitHub epics and issues that stay in sync without me manually chasing them down. Pre-discovery interview responses that feed into draft interview guides. The labor disappears. What’s left is the actual product work: the thinking, the listening, the asking the right questions.
Even with interview guides drafted for me, I’m still the one in the room, picking up on what someone just said, and digging deeper. For a product rewrite, most teams do five to ten discovery interviews total. I did ten-plus in a single week. A three-month discovery project? I was able to draft 95% of the v1 deliverables in two weeks. That’s not me being superhuman. That’s the multiplier. And because we’re still building, still iterating, I run into limitations.
When I hit something that would make not just me but our users more productive, I call it out. We discuss it, prioritize it, and often it becomes the next thing we build. Don’t get me wrong, I’m busy. But without AI tools, I wouldn’t be drowning. I’d be sinking like the Titanic.
I’m able to be a better PM because of these tools. But I’m also able to learn about engineering and design in ways that weren’t possible before. I can go into Figma and create design prototypes without needing to take time from our engineers. When a feature I’ve been anticipating hits a PR, I pull the branch down and test it locally. AI tools made that possible for someone who barely knew their way around a terminal. When I’m diving into something new, I want to figure it out, I want to bridge the gap, I want to understand how things connect.
The real unlock is this: AI lets me get what’s in my head out into something tangible. I can vibecode something that looks the way I see the UX looking. I don’t need the code to be great. I just need it to be real enough to hand to an engineer or designer and say, “Can you perfect this?” Because so much of product work is about getting what’s in your brain into a form other people can see, react to, and build from. And AI makes that translation faster and more accurate than it’s ever been.
If you’re a PM still describing features entirely in words and hoping your team sees what you see, try building it. Even badly. The conversation that happens around a rough prototype is worth more than the most detailed user story you’ll ever write.
I was talking to Cam, one of our other product managers, recently, and we both looked at each other and said: I don’t even remember what it was like before we used this. I don’t know how we were efficient before. Obviously we were. We were getting the job done, doing great work. But the level we can deliver at now is something else entirely.
In Part 1, I talked about not knowing what I didn’t know. Now I do. And I can’t go back.
Cam can’t either. More on that from him soon.
That’s what dogfooding really looks like.