Update! I wrote a follow-up post here: How My AI Product “Second Brain” Evolved.
I’ve been refining my approach to using LLMs for product work, and I figured it’s time to write up how I actually use them day-to-day.
I think the most valuable thing an AI assistant can do isn’t to write your PRD or draft your strategy docs. It’s to push back on weak reasoning, spot gaps you missed, and force you to articulate why your idea is actually good. It’s less a ghostwriter and more a skeptical colleague who shares your product philosophy. With the right prompts, AI assistants are also really good at creating background and framing documents, such as explainers that synthesize complex topics or summaries of technical concepts.
So let’s walk through how I’ve set this up, what makes it work, and how you might build something similar for yourself.
The Philosophy: Sparring Partner, Not Ghostwriter
I believe LLMs are most useful when you give them two things: context and constraints.
- Context tells the model who you are, what you’re working on, and what “good” looks like in your world.
- Constraints keep the model from going off the rails with generic advice or hallucinated frameworks.
Every prompt I use is designed to provide both. They’re opinionated on purpose. I’d rather have an assistant that pushes back on bad ideas than one that says “Great idea!” to everything.
The goal isn’t to have AI write presentations for me. It’s to have a thinking partner that:
- Challenges weak problem statements before I waste time on solutions
- Spots missing success criteria I forgot to define
- Asks “why?” when my reasoning gets hand-wavy
- Points out when I’m jumping to solutions before understanding the problem
- Helps me create background docs and explainers that set context for others
The Building Blocks
The system works because of how all the pieces fit together. Here’s the general folder structure I maintain, a set of Markdown files organized like this:
```
llm-prompts/
├── prompts/        # System prompts for different use cases
│   ├── pm/         # Product management prompts
│   └── technical/  # Technical/engineering prompts
├── context/        # Personal context files (who I am, how I work)
├── reference/      # Syntax guides and reference docs
└── work/           # Saved feedback and refined docs
```
The magic isn’t in any single prompt—it’s in how you combine them. Let me break down each layer.
Layer 1: System Prompts
These are the instructions that tell the AI how to behave for a specific task. I have different prompts for different jobs:
- General PM sparring: A prompt that knows my product philosophy and pushes back on weak reasoning. I use this for thinking through tradeoffs, preparing for meetings, and sanity-checking my approach.
- Document review: Prompts specifically designed to critique PRDs, OKRs, strategy docs, and other artifacts. These encode what “good” looks like and call out common anti-patterns.
- Idea stress-testing: A prompt that I stole from my friend Stephen, which simulates a debate between an optimist and a skeptic to pressure-test new ideas before I get too attached to them.
- Technical understanding: Prompts that help me understand systems, architectural decisions, and technical concepts well enough to lead effectively (I’m not an engineer, but I need to hold my own in architecture reviews).
The key is that each prompt is opinionated. They’re not generic “be helpful” instructions—they encode specific philosophies about what good work looks like.
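To make that concrete, here’s an entirely hypothetical sketch of how a PRD-review prompt might encode those opinions (the wording is mine, not a prompt from my repo):

```markdown
# PRD Review Prompt (illustrative sketch)

You are a skeptical senior product leader reviewing a PRD.

- Challenge the problem statement first. If it is vague or already
  assumes a solution, say so before commenting on anything else.
- Flag success metrics that are not measurable or have no baseline.
- Call out scope that does not trace back to the stated problem.
- Do not soften feedback with "great start" framing. Be direct.
```

The point is that the prompt takes positions. A generic "review this document" instruction produces generic feedback.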
Layer 2: Personal Context
This is where it gets powerful. I maintain files that describe:
- Who I am: My role, my experience, my communication style
- How I work: My product philosophy, my management approach, my values
- What I’m working on: Current projects, team context, company priorities
When I start a conversation, I can pull in the relevant context files alongside my prompt. The model then has the background it needs to give me advice that actually fits my situation—not generic best practices from a blog post.
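A context file can be as simple as a bulleted Markdown page. The specifics below are invented for illustration:

```markdown
# context/how-i-work.md (hypothetical example)

- Role: Senior PM on a B2B platform team
- Philosophy: fall in love with the problem, not the solution;
  every PRD needs a measurable success metric before review
- Communication style: direct feedback preferred; skip the preamble
- Pet peeves: solutions without problem statements, unmeasurable goals
```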
Layer 3: Reference Materials
Sometimes you need the model to follow specific formats or conventions. I keep reference files for things like wiki markup syntax, documentation templates, or internal style guides. These ensure the output is actually usable without a bunch of reformatting.
How I Actually Use This
I use Windsurf as my daily driver, and it has a feature that makes this whole system work: the @ mention. In the chat panel, I can reference any file by typing @ followed by the path. Windsurf then includes that file’s contents as context for the conversation.
This means I can compose my “assistant” on the fly by combining:
- A system prompt for the task at hand
- Relevant personal context files
- The document or code I’m working on
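You don’t need Windsurf to use this pattern. Here’s a minimal script sketch (file names are hypothetical) that concatenates a system prompt and context files into one block you can paste into any chat tool:

```python
from pathlib import Path


def compose_prompt(*paths: str) -> str:
    """Join prompt and context files into a single system prompt.

    Each section is prefixed with a comment naming its source file,
    so the model (and you) can tell the layers apart.
    """
    sections = []
    for p in paths:
        text = Path(p).read_text(encoding="utf-8").strip()
        sections.append(f"<!-- source: {p} -->\n{text}")
    return "\n\n".join(sections)


# Hypothetical usage:
# system = compose_prompt(
#     "prompts/pm/prd-review.md",
#     "context/product-philosophy.md",
# )
```

The ordering matters: system prompt first, then context, then the document under discussion, so the model reads the instructions before the material they apply to.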
Example: Document Review
When I need feedback on a PRD before sharing it with stakeholders, I’ll start a conversation and reference my PRD review prompt plus my product philosophy context. Then I paste in the PRD and ask for critique.
The model comes back with feedback grounded in my own standards—not generic advice. It’ll call out if my problem statement is vague, if my success metrics aren’t measurable, or if I’m jumping to solutions before properly framing the problem. Exactly the kind of pushback I’d want from a senior colleague.
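In practice, the request looks something like this (the paths and wording here are a made-up example, not my actual files):

```
@prompts/pm/prd-review.md @context/product-philosophy.md

Review the PRD below before I share it with stakeholders. Focus on
the problem statement and success metrics before anything else.

[pasted PRD]
```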
Example: Brainstorming Partner
For early-stage thinking, I use a more conversational prompt that knows how I like to explore ideas. I’ll describe what I’m thinking about and ask it to poke holes, suggest angles I haven’t considered, or help me articulate why something feels off.
This is particularly useful before big meetings. I can rehearse my reasoning and get challenged on the weak spots before I’m in front of stakeholders.
Example: Technical Understanding
I’m not an engineer, but I work with technical teams. When I need to understand how a system works—well enough to ask good questions or spot when something doesn’t add up—I use prompts designed for technical explanation.
The key is that these prompts know to explain things without condescension but also without assuming I know the jargon. They cite specific files and line numbers when relevant, and they explain the “why” behind design decisions.
Connecting to Real Data
One feature that’s made a big difference is MCP (Model Context Protocol) servers. These connect the AI to external data sources—internal wikis, documentation sites, code repositories, APIs—so it can ground its responses in actual information rather than just its training data.
In my prompts, I tell the model which MCP servers are available and when to use them. For example, my technical prompts instruct the model to:
- Search official documentation first to ground answers in verified information
- Check internal wikis for known issues, edge cases, and workarounds
- Look at code repositories when documentation is incomplete
- Always cite sources with links so I can verify
This turns the AI from a general-purpose assistant into something more like an expert who has access to your company’s actual knowledge base. The difference in answer quality is significant—instead of generic advice, I get responses that reference real docs and real code.
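Most MCP-capable clients are configured with a small JSON file that maps server names to the commands that launch them. The exact file location and the server packages vary by tool, and the package names below are placeholders, so treat this as an illustrative shape rather than a copy-paste config:

```json
{
  "mcpServers": {
    "internal-wiki": {
      "command": "npx",
      "args": ["-y", "@your-org/wiki-mcp-server"],
      "env": { "WIKI_API_TOKEN": "..." }
    },
    "docs-search": {
      "command": "npx",
      "args": ["-y", "@your-org/docs-mcp-server"]
    }
  }
}
```

Once the servers are registered, the prompts only need to name them and describe when each one should be consulted.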
Keeping a Record
One practical tip: save the conversation output somewhere useful.
I have a work/ folder organized by topic where I save feedback and refined thinking. When the model gives me good critique on a PRD, I’ll ask it to write a summary of the key issues to a Markdown file I can reference later. This keeps the insights from getting lost in chat history.
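The request itself can be one sentence at the end of the conversation (the path here is a made-up example):

```
Summarize the key issues from this review into
work/checkout-prd/review-notes.md, one bullet per issue,
with the suggested fix for each.
```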
What I’ve Learned
A few things that have made this work better over time:
- Context files are worth the investment. I have files that describe who I am, how I work, and what I value. Updating these takes time, but it pays off in every conversation.
- Pushback is a feature, not a bug. These prompts are designed to challenge bad thinking. If the model is pushing back on your approach, consider that it might be right.
- Iterate on the prompts. I update these regularly based on what works and what doesn’t. If a prompt isn’t helping, change it.
- Less context is often more. Including too much context can dilute the signal. Start with the minimum you need, add more if the model seems confused.
This setup isn’t a silver bullet that makes thinking go away—it’s just a way to encode my preferences and philosophies into something an LLM can use as a baseline for pushing back on my thinking. I still write my own PRDs, OKRs, and strategy docs—the artifacts that represent my actual thinking. But I let AI help me create background documents, explainers, and context-setting materials. And I have a sparring partner that catches the gaps I miss, challenges the assumptions I glossed over, and asks the uncomfortable questions before stakeholders do.
If you build something similar, I’d love to hear how it goes. The prompts are important, but the structure matters just as much: context plus constraints, composed on the fly for the task at hand. That’s the thing that makes it work.