A couple of weeks ago I wrote about how I use AI for product work—the basic setup of context files, prompts, and the @ mention system in Windsurf. Since then the system has evolved quite a bit, so I figured it’s time for an update.
The philosophy has shifted a bit. I still don’t use AI to do my core thinking—I write my own PRDs and strategy docs. But I’ve come to rely on it more as a helpful assistant for the work around the work: reviewing documents before I share them, researching technical questions, summarizing my week, preparing for meetings. It’s now less “sparring partner” and more “capable colleague who’s always available.”
From Windsurf to Claude Code / OpenCode
The biggest shift was moving from Windsurf to Claude Code, Anthropic’s terminal-based AI assistant. Claude Code runs in your terminal and has direct access to your filesystem, which changes how you can structure these workflows.
The key feature that made this worthwhile is slash commands. Instead of manually @-mentioning prompt files, I can type /ask-se and Claude Code automatically loads the right context, reads the relevant files, and knows how to respond. It’s a small thing, but removing that friction makes a real difference in how often I actually use these tools.
I also started using OpenCode, an open-source alternative that works similarly. Both tools read from the same instruction files, so I maintain one set of prompts that work in either environment.
Slash Commands
The prompts I described in the original post are now wrapped in slash commands. The files live in .claude/commands/ and look like this:
```
.claude/commands/
├── prd.md       # Review a PRD
├── okr.md       # Review OKRs
├── debate.md    # Stress-test a product idea
├── ask-se.md    # Research technical questions
├── briefing.md  # Calendar briefing
├── today.md     # End-of-day summary
├── weekly.md    # Weekly summary
└── ...
```
Each command file contains instructions for the AI, plus a $ARGUMENTS placeholder for any input I provide. When I run /debate should we build a developer portal?, Claude Code reads the debate prompt, substitutes my question for the placeholder, and runs through its process.
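For illustration, here’s roughly what such a command file can look like. This is a minimal sketch, not my actual prompt—the wording is invented, but the `$ARGUMENTS` substitution works as shown:

```markdown
<!-- .claude/commands/debate.md (illustrative sketch) -->
Simulate a debate about the following product idea:

$ARGUMENTS

Have an optimist and a skeptic each make their strongest case,
then summarize where they agree and what evidence would settle
the remaining disagreements.
```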
The commands I use most often:
/prd — Reviews a PRD I’ve written and pushes back on vague problem statements, missing success metrics, or unclear scope. I run this before sharing drafts with stakeholders.
/debate — Simulates a debate between an optimist and a skeptic about a product idea. This is probably the command I reach for most when I’m still forming an opinion about something.
/ask-se — Helps me answer specific technical questions about our products. For example, today I ran /ask-se On Gateway HTTP Logpush jobs, what would trigger "unknown" as the action? The command uses MCP (Model Context Protocol) servers to search our public documentation and internal wiki, then synthesizes an answer I can actually use. It’s how I learn the product deeply without having to interrupt engineers or dig through docs myself.
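Claude Code wires MCP servers up through a JSON config. A sketch of the shape, assuming a hypothetical docs-search server—the server name and package here are placeholders, not our actual setup:

```json
{
  "mcpServers": {
    "docs-search": {
      "command": "npx",
      "args": ["-y", "docs-search-mcp"]
    }
  }
}
```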
The mental overhead of remembering file paths and composing context manually has mostly disappeared. I just type the command and start the conversation.
Skills: Persistent Methodology
Commands are things I invoke explicitly. Skills are different—they’re methodology files that get applied automatically when relevant.
I have three skills set up:
- pm-thinking — Applies my product philosophy to any PM-related work. When I’m reviewing a PRD, it automatically checks for problem-first thinking, measurable outcomes, and clear non-goals.
- cloudflare-context — Loads knowledge about Cloudflare’s products and triggers proactive use of internal data sources (more on this below).
- data-team-context — Specific context about my team’s priorities, current initiatives, and constraints.
The skill files live in .claude/skills/ and contain triggers (when to apply) and behaviors (what to do). For example, the pm-thinking skill flags anti-patterns like “vague success metrics” or “jumping to solutions without understanding the problem.”
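A sketch of what a skill file can look like—the frontmatter fields follow Claude Code’s skill format, but the trigger and behavior wording here is illustrative, not my actual file:

```markdown
---
name: pm-thinking
description: Apply PM review methodology when working on PRDs, OKRs, or strategy docs.
---

When reviewing product documents:
- Check that the problem is stated before any solution.
- Flag success metrics that are not measurable.
- Ask for explicit non-goals if none are listed.
```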
This means even when I’m not explicitly running a PM-focused command, the AI still knows to apply my methodology. It’s like having a linter for product thinking that runs in the background.
Daily and Weekly Summaries
One of the more useful additions: automated work journaling. At the end of the day, I run /today. The command:
- Finds files I modified that day using filesystem timestamps
- Reads the key files to understand what I actually worked on
- Asks if there’s anything else to add
- Generates a summary focused on outcomes, not tasks
- Saves it to a structured folder:
`work/cloudflare/weeknotes/2025/01/week-01/2025-01-06.md`
The output follows a consistent format:
```markdown
# Tuesday, December 16, 2025

## Summary
Focused on Q1 planning and customer research. Clarified success metrics
for the data quality initiative and documented common Logpush questions.

## What I Worked On
- **Q1 Planning:** Reviewed OKR drafts, identified gaps in success metrics
- **Customer Research:** Documented Logpush egress IP questions for support docs
```
Then on Friday (or Monday morning), I run /weekly. It reads all the daily notes for the week and synthesizes them into a summary—the kind of thing I can use to prepare for 1:1s or status updates.
This has been surprisingly effective for a simple reason: by the time Friday rolls around, it’s hard to remember everything I did that week. The daily notes capture work while it’s fresh, and the weekly summary rolls it up into something useful.
Calendar Briefings
The most custom piece of this system is the calendar briefing. I got the idea from an interview with Webflow’s CPO on Claire Vo’s podcast about building an AI chief of staff. Following her example, I wrote a Python script that:
- Connects to Google Calendar via OAuth
- Fetches events for today, tomorrow, or the week
- Generates a briefing that flags meetings that could be async, suggests prep work, and warns if the day is overloaded
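The briefing logic itself is just heuristics over the fetched events. A minimal sketch of that last step, with the thresholds, field names, and flagging rules invented for illustration (the OAuth and fetching parts are omitted):

```python
from dataclasses import dataclass


@dataclass
class Event:
    title: str
    minutes: int
    attendees: int


def build_briefing(events: list[Event], max_hours: float = 5.0) -> str:
    """Turn a day's calendar events into a short briefing using simple heuristics."""
    lines = []
    for e in events:
        # Large meetings are candidates for an async update instead.
        note = " (could this be async?)" if e.attendees >= 8 else ""
        lines.append(f"- {e.title}, {e.minutes} min{note}")
    total = sum(e.minutes for e in events) / 60
    if total > max_hours:
        lines.append(f"Warning: {total:.1f}h of meetings today; consider making one async.")
    return "\n".join(lines)
```

In practice the AI layers richer judgment on top of the raw event list, but a deterministic pass like this keeps the obvious checks cheap and reliable.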
I run /briefing tomorrow before wrapping up for the day. It gives me a head start on thinking about what’s coming and whether I need to prepare anything.
The briefing uses context from my about-me.md file, so it knows things like which recurring meetings are more important than others, and what kind of prep I typically need for different meeting types.
One wrinkle: Claude Code and OpenCode have slightly different formats for commands. Claude Code uses plain markdown; OpenCode expects frontmatter with a description field.
So I built a sync command. Running /sync-to-opencode compares the two folders and copies any changed commands, adding the frontmatter OpenCode needs. It means I can maintain one set of prompts (in .claude/commands/) and have them work in both tools.
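The core of that sync is small. A sketch of the idea—the frontmatter field comes from OpenCode’s format as described above, but the function shape and change-detection are my own simplification:

```python
from pathlib import Path


def sync_command(src: Path, dest_dir: Path, description: str) -> Path:
    """Copy a Claude Code command file into the OpenCode commands folder,
    prepending the frontmatter OpenCode expects. Only writes when the
    resulting content differs, so unchanged commands are left alone."""
    synced = f"---\ndescription: {description}\n---\n\n{src.read_text()}"
    out = dest_dir / src.name
    if not out.exists() or out.read_text() != synced:
        out.write_text(synced)
    return out
```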
This is the kind of meta-tooling that probably isn’t worth it for most people, but I switch between the tools often enough that keeping them in sync manually was annoying.
What I’ve Learned Since the First Post
A few things that have become clearer:
- Automation compounds. Each individual piece—slash commands, skills, daily summaries—provides modest value on its own. But they compound. The daily notes feed the weekly summary. The skills apply across all conversations. The slash commands reduce friction enough that I actually use the system instead of just thinking about using it.
- Building custom workflows is easier than ever. With slash commands and skills, I can create tools that work exactly how I want to work. The /ask-se command knows my products, our customers, and the kinds of questions I typically need answered. The daily summary knows my file structure and my preferred format. It’s not about adapting to a tool’s workflow—it’s about encoding my workflow into a tool.
- Scripts fill the gaps. The calendar briefing couldn’t be a pure prompt—it needed to actually fetch data from an external API. Having a scripts/ folder for Python code that extends the system has been useful for anything that requires real I/O.
- Keep work output local. The prompts and context files live in a private git repo so I can sync them between machines and version them over time. But the actual work output—weeknotes, briefings, any documents I’m working on—stays on my local machine, not in the repo. The AI helps me do the work, but the work itself doesn’t need to live in the cloud.
If you’re building something similar, I’d recommend starting with the basics from the first post: context files, opinionated prompts, and a tool that lets you compose them easily. The slash commands and automation came later, once I understood what workflows I actually used repeatedly.
The specific tooling matters less than the approach. Figure out where AI can genuinely help—reviewing your work, researching questions, reducing busywork—and then build the smallest system that makes those things easy to do.