
The B2B Product Leadership Delusion

Jason Knight wrote about a fascinating disconnect between how B2B product leaders rate themselves and how their teams see them. The data from his survey is striking:

Across the board, B2B Product Leaders think they’re doing pretty well in all of these areas, but B2B IC PMs are not convinced. The difference is stark, and they can’t both be right.

The survey measured six core responsibilities—setting strategy, aligning teams, enabling prioritization, fostering ownership, removing blockers, and investing in people. In every category, leaders rated themselves significantly higher than their ICs rated them. Jason offers three possible explanations: leaders are doing poorly and don’t know it, leaders are doing well but not communicating it, or ICs have unreasonable expectations. He concludes:

Product Leaders need to do a much better job of setting expectations within their teams and communicating with them openly and well. IC Product Managers need to do a much better job of understanding the constraints of their business context and, indeed, the business they work for.

I keep coming back to the iceberg effect he mentions—where only some of the work someone does is visible. This cuts both ways. Leaders underestimate how opaque their work is to their teams, and ICs underestimate the constraints leaders are working within. The gap isn’t just about performance; it’s about mutual understanding.

The invention of "classic rock"

Daniel Parris wrote a statistical analysis of when rock became “classic rock”, and it’s not the story I expected.

He assumed the genre emerged organically from music nerds debating on message boards and in the pages of Rolling Stone. Instead:

What I found was a deliberate realignment engineered by music executives chasing an ephemeral advertising demographic. Like many entertainment industry decisions, it was a small (mostly male) group of executives quietly deciding the future of popular culture behind closed doors.

The data shows two concentrated periods when stations rapidly switched to classic rock: the mid-1980s (to capture aging Boomers entering their peak earning years) and the mid-1990s (after the Telecommunications Act enabled Clear Channel to buy up local stations and prioritize low-risk, high-profit formats).

The kicker is that this rebrand was designed around economic incentives that have since eroded. Radio isn’t the default distribution channel anymore. On streaming, music can just exist without being packaged for a hyper-valuable consumer cohort.

Another reminder that so much of what feels like culture is really just business decisions made in conference rooms.

How to Set Up OpenCode as Your Product Second Brain

This is the hands-on companion to How I Use AI for Product Work and How My AI Product Second Brain Evolved. Those posts explain the philosophy; this one gets you to a working OpenCode setup in 30 minutes.

By the end, you’ll have a folder structure, three useful slash commands, and a context file that makes the AI actually useful.

Prerequisites

You’ll need OpenCode installed and configured. Follow the installation guide to get started. If you can run opencode in your terminal and get a response, you’re ready.

Step 1: Create the Folder Structure

Create a new directory for your product second brain. I recommend keeping it in a git repo so you can version your prompts over time.

mkdir -p product-ai/{context,prompts/pm,.opencode/command}
cd product-ai
git init

Your structure should look like this:

product-ai/
├── .opencode/
│   └── command/       # Slash commands live here
├── context/           # Personal context files
├── prompts/
│   └── pm/            # PM-specific prompts
├── AGENTS.md          # Instructions for OpenCode
└── opencode.jsonc     # OpenCode config

Step 2: Create Your Context File

The context file tells the AI who you are and how you work. Create context/about-me.md:

# About Me

## Role
[Your title] at [Company], working on [your area].

## What I Care About
- [Your product philosophy, e.g., "Start with the problem, not the solution"]
- [Your working style, e.g., "Bias toward shipping and learning"]
- [Your communication preferences, e.g., "Direct feedback, no hedging"]

## Current Focus
- [Project or initiative 1]
- [Project or initiative 2]

Keep this updated as your focus changes. The more specific you are, the more useful the AI becomes.

Step 3: Create AGENTS.md

This file tells OpenCode how to behave in your repo. Create AGENTS.md in the root:

# Product AI Second Brain

Read this file before responding.

## Who I Am
Read `context/about-me.md` for personal context.

## Slash Commands
Run these by typing the command in OpenCode:

| Command | Purpose |
|---------|---------|
| `/prd` | Review a PRD |
| `/debate` | Stress-test a product idea |
| `/okr` | Review OKRs |

Step 4: Add Your First Commands

Slash commands are markdown files in .opencode/command/. Each command file is a thin wrapper that points to a full prompt file in prompts/pm/. This separation keeps commands simple while allowing prompts to be detailed and shareable.

Command 1: /debate (Stress-test an Idea)

Create .opencode/command/debate.md:

---
description: Stress-test a product idea with pro vs skeptic debate
---

# Product Debate

Read these files before proceeding:
- `prompts/pm/debate-product-idea.md` - **REQUIRED: Full debate framework**
- `context/about-me.md` - Your product beliefs and context

## Your Task

$ARGUMENTS

## Instructions

Then create the prompt file prompts/pm/debate-product-idea.md with your full debate methodology. The basic structure: define two personas (a Visionary who argues for the idea and a Skeptic who pokes holes), have them debate, then synthesize the strongest arguments from both sides.

Command 2: /okr (Review OKRs)

Create .opencode/command/okr.md:

---
description: Review OKRs for clarity and outcome-orientation
---

# OKR Review

Read these files before proceeding:
- `prompts/pm/review-okrs.md` - **REQUIRED: Full OKR review framework**
- `context/about-me.md` - Your product beliefs and context

## Your Task

$ARGUMENTS

## Instructions

Then create prompts/pm/review-okrs.md with your OKR criteria. Mine checks for outcome-orientation (are these outputs or outcomes?), measurability, and whether the key results actually ladder up to the objective.

Command 3: /prd (Review a PRD)

Create .opencode/command/prd.md:

---
description: Review a PRD for completeness and clarity
---

# PRD Review

Read these files before proceeding:
- `prompts/pm/review-prd.md` - **REQUIRED: Full PRD review framework**
- `context/about-me.md` - Your product beliefs and context

## Your Task

$ARGUMENTS

## Instructions

Then create prompts/pm/review-prd.md with your PRD review criteria.

Step 5: Try It Out

Navigate to your product-ai directory and run OpenCode:

cd product-ai
opencode

Test each command:

/debate Should we build a self-serve dashboard for customers to debug their own issues?
/okr [paste your OKRs here]

What’s Next

Once you’re comfortable with this setup, consider adding:

  • More commands: /retro for retrospectives, /feedback for drafting colleague feedback
  • Skills: Methodology files that OpenCode loads automatically based on context (see the skills documentation)
  • Daily summaries: A /today command that summarizes what you worked on
  • Project-specific context: Folders for major initiatives with their own context files

The key is to start small and add complexity as you identify repeated workflows. If you find yourself doing the same thing more than twice, it’s probably worth automating.

If you build something useful on top of this, I’d love to hear about it!

Learning in the Age of AI

Scott H. Young has a thoughtful piece on what’s still worth learning in a world with AI. He cuts through both the panic and the hype to look at what the data actually shows. The biggest finding is probably not all that surprising: early-career workers in AI-exposed fields are getting hit hardest.

Another report from the Stanford Digital Economy Lab notes that early-career workers in AI-exposed fields (such as programming) have seen a relative decline in employment, even as employment among workers aged 30 years and older increased. This matches my intuition that AI coding agents can do a lot of junior developer tasks pretty well, but struggle to match the experience needed to tackle more serious work.

Young’s advice is to cultivate generalist skills. Not the content-free “critical thinking” kind, but genuinely transferable knowledge:

In an environment of change, it’s better to be the hardy dandelion rather than the hothouse orchid. Similarly, I expect with AI-induced change, people who have maintained diverse interests and skills will be best positioned to take advantage of the change, whereas extreme specialists will face a greater risk of extinction.

Don't Outsource Your Love of Music to AI

I’m late to this one, but I like Liz Pelly’s take on Spotify Wrapped. It’s not just about music—it’s about what happens when we let corporations automate our memories:

Spotify Wrapped now feels like just another example of something personal and precious that is being automated away from us; another example of a supposedly unbearable task of thinking and writing being “offloaded” in order to make life more frictionless.

The post is essentially about friction—and why we need it. She argues that working through the process of remembering what mattered to us and thinking critically about our year is what keeps us sharp and curious. When we just accept what a streaming service tells us about our taste, we’re not just outsourcing a task. We’re losing our own sense of what connected with us and why.

It encourages music fans to believe that the records they streamed the most must be the ones they liked the most, which is surely not always the case.

Her suggestion is straightforward: write your own list. It doesn’t have to be polished—a notes app screenshot, a handwritten list, whatever. Just something that came from you, not from an algorithm optimizing for engagement metrics.

What's Actually Working with AI

Natalia Quintero wrote about what she’s learned from talking to more than 100 companies about AI implementation. This part about the problem with early adopters and isolated workflows stood out:

AI doesn’t spread like other software. Think about Asana. If one person decides to organize their team’s tasks there, everyone benefits automatically because the work is more organized, and someone on the team has taken responsibility for that organization. You don’t need to learn the tool to get value from your colleague using it. AI doesn’t work that way. If you develop workflows around how you work, that value doesn’t automatically translate to the rest of the company. Your prompts, your GPTs, your automations—they’re built around your context, your processes, and your way of thinking. They don’t transfer.

That’s the adoption problem in a nutshell. A power user’s AI setup is like their personal note-taking system—valuable for them but not portable. It explains why enterprise rollouts don’t work the way everyone expects.

The recruiting firm example is good: they trained 10 champions who built tools their peers wanted to use. One person automated scheduling coordination (saving 2–10 hours per task), and suddenly 30 others got curious. Peer-to-peer beats top-down mandates.

(If you’re curious about my setup, I wrote about it here.)

Humans make mistakes, and so does AI. It's fine.

Will Larson has a good post about implementing “Agent Skills” in their internal agent framework at Imprint. The whole piece is worth reading, but I wanted to highlight this observation:

Humans make mistakes all the time. For example, I’ve seen many dozens of JIRA tickets from humans that don’t explain the actual problem they are having. People are used to that, and when a human makes a mistake, they blame the human. However, when agents make a mistake, a surprising percentage of people view it as a fundamental limitation of agents as a category, rather than thinking that, “Oh, I should go update that prompt.”

There’s a double standard at play here that I’ve noticed too. When a colleague writes a confusing document, we ask them to clarify. When an agent produces something off, we tend to smirk and declare the technology fundamentally broken.

The fix is often as simple as updating a prompt—the same way you’d coach a team member to write better tickets. Skills, in Larson’s implementation, are essentially reusable prompt snippets that encode learned behaviors across workflows. It’s the kind of organizational knowledge we build up with people over time, just made explicit.

How My AI Product "Second Brain" Evolved

A couple of weeks ago I wrote about how I use AI for product work—the basic setup of context files, prompts, and the @ mention system in Windsurf. Since then the system has evolved quite a bit, so I figured it’s time for an update.

The philosophy has shifted a bit. I still don’t use AI to do my core thinking—I write my own PRDs and strategy docs. But I’ve come to rely on it more as a helpful assistant for the work around the work: reviewing documents before I share them, researching technical questions, summarizing my week, preparing for meetings. It’s now less “sparring partner” and more “capable colleague who’s always available.”

From Windsurf to Claude Code / OpenCode

The biggest shift was moving from Windsurf to Claude Code, Anthropic’s terminal-based AI assistant. Claude Code runs in your terminal and has direct access to your filesystem, which changes how you can structure these workflows.

The key feature that made this worthwhile is slash commands. Instead of manually @-mentioning prompt files, I can type /ask-se and Claude Code automatically loads the right context, reads the relevant files, and knows how to respond. It’s a small thing, but removing that friction makes a real difference in how often I actually use these tools.

I also started using OpenCode, an open-source alternative that works similarly. Both tools read from the same instruction files, so I maintain one set of prompts that work in either environment.

Slash Commands

The prompts I described in the original post are now wrapped in slash commands. The files live in .claude/commands/ and look like this:

.claude/commands/
├── prd.md          # Review a PRD
├── okr.md          # Review OKRs
├── debate.md       # Stress-test a product idea
├── ask-se.md       # Research technical questions
├── briefing.md     # Calendar briefing
├── today.md        # End-of-day summary
├── weekly.md       # Weekly summary
└── ...

Each command file contains instructions for the AI, plus a $ARGUMENTS placeholder for any input I provide. When I run /debate should we build a developer portal?, Claude Code reads the debate prompt, substitutes my question for the placeholder, and runs through its process.
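The expansion step is simple to picture. Here is a minimal Python sketch of the idea—an illustration of the concept only, not Claude Code’s actual implementation:

```python
def render_command(command_text: str, arguments: str) -> str:
    """Expand a slash-command template: drop optional YAML frontmatter,
    then substitute the $ARGUMENTS placeholder with the user's input."""
    body = command_text
    if body.startswith("---"):
        # frontmatter ends at the next line that starts with '---'
        end = body.find("\n---", 3)
        if end != -1:
            body = body[end + 4:].lstrip("\n")
    return body.replace("$ARGUMENTS", arguments)


template = "---\ndescription: demo\n---\n\n# Debate\n\n## Your Task\n\n$ARGUMENTS\n"
prompt = render_command(template, "should we build a developer portal?")
```

The point is just that the command file is a static template and the text after the command name lands wherever `$ARGUMENTS` appears.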

The commands I use most often:

  • /prd — Reviews a PRD I’ve written and pushes back on vague problem statements, missing success metrics, or unclear scope. I run this before sharing drafts with stakeholders.
  • /debate — Simulates a debate between an optimist and a skeptic about a product idea. This is probably the command I reach for most when I’m still forming an opinion about something.
  • /ask-se — Helps me answer specific technical questions about our products. For example, today I ran /ask-se On Gateway HTTP Logpush jobs, what would trigger "unknown" as the action? The command uses MCP (Model Context Protocol) servers to search our public documentation and internal wiki, then synthesizes an answer I can actually use. It’s how I learn the product deeply without having to interrupt engineers or dig through docs myself.

The mental overhead of remembering file paths and composing context manually has mostly disappeared. I just type the command and start the conversation.

Skills: Persistent Methodology

Commands are things I invoke explicitly. Skills are different—they’re methodology files that get applied automatically when relevant.

I have three skills set up:

  1. pm-thinking — Applies my product philosophy to any PM-related work. When I’m reviewing a PRD, it automatically checks for problem-first thinking, measurable outcomes, and clear non-goals.
  2. cloudflare-context — Loads knowledge about Cloudflare’s products and triggers proactive use of internal data sources (more on this below).
  3. data-team-context — Specific context about my team’s priorities, current initiatives, and constraints.

The skill files live in .claude/skills/ and contain triggers (when to apply) and behaviors (what to do). For example, the pm-thinking skill flags anti-patterns like “vague success metrics” or “jumping to solutions without understanding the problem.”
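For a sense of what that looks like, here is a hypothetical skill file in the same spirit—the section names and frontmatter fields are illustrative, not the exact schema:

```markdown
---
name: pm-thinking
description: Apply product-management review heuristics to any PM artifact
---

# PM Thinking

## When to apply
- Reviewing PRDs, OKRs, strategy docs, or roadmaps

## Behaviors
- Check that the document leads with a problem, not a solution
- Flag vague success metrics ("improve engagement") and ask for measurable outcomes
- Look for explicit non-goals and scope boundaries
```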

This means even when I’m not explicitly running a PM-focused command, the AI still knows to apply my methodology. It’s like having a linter for product thinking that runs in the background.

Daily and Weekly Summaries

One of the more useful additions: automated work journaling. At the end of the day, I run /today. The command:

  1. Finds files I modified that day using filesystem timestamps
  2. Reads the key files to understand what I actually worked on
  3. Asks if there’s anything else to add
  4. Generates a summary focused on outcomes, not tasks
  5. Saves it to a structured folder: work/cloudflare/weeknotes/2025/01/week-01/2025-01-06.md
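The first step—finding the day’s files—is easy to sketch. Assuming the command walks the working directory and compares modification times (a simplification of whatever it actually does under the hood):

```python
from datetime import date, datetime
from pathlib import Path


def files_modified_today(root: str) -> list[Path]:
    """Find files whose mtime falls on today's date, so the daily
    summary only looks at what actually changed."""
    today = date.today()
    return [
        p for p in Path(root).rglob("*")
        if p.is_file()
        and datetime.fromtimestamp(p.stat().st_mtime).date() == today
    ]
```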

The output follows a consistent format:

# Tuesday, December 16, 2025

## Summary
Focused on Q1 planning and customer research. Clarified success
metrics for data quality initiative and documented common Logpush questions.

## What I Worked On
- **Q1 Planning:** Reviewed OKR drafts, identified gaps in success metrics
- **Customer Research:** Documented Logpush egress IP questions for support docs

Then on Friday (or Monday morning), I run /weekly. It reads all the daily notes for the week and synthesizes them into a summary—the kind of thing I can use to prepare for 1:1s or status updates.

This has been surprisingly effective for a simple reason: by the time Friday rolls around, it’s hard to remember everything I did during the week. The daily notes capture work while it’s fresh, and the weekly summary rolls it up into something useful.

Calendar Briefings

The most custom piece of this system is the calendar briefing. I got this idea from an interview with Webflow’s CPO on Claire Vo’s podcast, about building an AI chief of staff. Following her example I wrote a Python script that:

  1. Connects to Google Calendar via OAuth
  2. Fetches events for today, tomorrow, or the week
  3. Generates a briefing that flags meetings that could be async, suggests prep work, and warns if the day is overloaded
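The analysis in step 3 can be sketched as a pure function over the fetched events. The heuristics below (short two-person meetings as async candidates, a five-hour overload threshold) are invented for illustration, not the script’s actual rules:

```python
from datetime import datetime, timedelta


def briefing(events: list[dict], overload_hours: float = 5.0) -> dict:
    """Given calendar events (dicts with 'summary', datetime 'start'
    and 'end', and an 'attendees' count), flag possible async
    candidates and warn when the day is overloaded with meetings."""
    total = sum((e["end"] - e["start"]).total_seconds() / 3600 for e in events)
    async_candidates = [
        e["summary"] for e in events
        if e["attendees"] <= 2 and (e["end"] - e["start"]) <= timedelta(minutes=30)
    ]
    return {
        "meeting_hours": round(total, 1),
        "overloaded": total > overload_hours,
        "async_candidates": async_candidates,
    }
```

The OAuth and fetching parts are just standard Google Calendar API plumbing; the value is in what the briefing chooses to surface.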

I run /briefing tomorrow before wrapping up for the day. It gives me a head start on thinking about what’s coming and whether I need to prepare anything.

The briefing uses context from my about-me.md file, so it knows things like which recurring meetings are more important than others, and what kind of prep I typically need for different meeting types.

Syncing Between Tools

One wrinkle: Claude Code and OpenCode have slightly different formats for commands. Claude Code uses plain markdown; OpenCode expects frontmatter with a description field.

So I built a sync command. Running /sync-to-opencode compares the two folders and copies any changed commands, adding the frontmatter OpenCode needs. It means I can maintain one set of prompts (in .claude/commands/) and have them work in both tools.
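A simplified sketch of that sync logic in Python (deriving the description from the command’s first heading is an invented convention for this example; the real command may work differently):

```python
from pathlib import Path


def sync_commands(claude_dir: str, opencode_dir: str) -> int:
    """Copy slash commands from the Claude Code folder to the OpenCode
    folder, prepending the frontmatter OpenCode expects. Returns the
    number of files written; unchanged files are skipped."""
    src, dst = Path(claude_dir), Path(opencode_dir)
    dst.mkdir(parents=True, exist_ok=True)
    synced = 0
    for cmd in src.glob("*.md"):
        body = cmd.read_text()
        if body.startswith("---"):
            out = body  # already has frontmatter
        else:
            # derive a description from the first markdown heading
            title = next(
                (line.lstrip("# ") for line in body.splitlines() if line.startswith("#")),
                cmd.stem,
            )
            out = f"---\ndescription: {title}\n---\n\n{body}"
        target = dst / cmd.name
        if not target.exists() or target.read_text() != out:
            target.write_text(out)
            synced += 1
    return synced
```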

This is the kind of meta-tooling that probably isn’t worth it for most people, but I switch between the tools often enough that keeping them in sync manually was annoying.

What I’ve Learned Since the First Post

A few things that have become clearer:

  • Automation compounds. Each individual piece—slash commands, skills, daily summaries—provides modest value on its own. But they compound. The daily notes feed the weekly summary. The skills apply across all conversations. The slash commands reduce friction enough that I actually use the system instead of just thinking about using it.
  • Building custom workflows is easier than ever. With slash commands and skills, I can create tools that work exactly how I want to work. The /ask-se command knows my products, our customers, and the kinds of questions I typically need answered. The daily summary knows my file structure and my preferred format. It’s not about adapting to a tool’s workflow—it’s about encoding my workflow into a tool.
  • Scripts fill the gaps. The calendar briefing couldn’t be a pure prompt—it needed to actually fetch data from an external API. Having a scripts/ folder for Python code that extends the system has been useful for anything that requires real I/O.
  • Keep work output local. The prompts and context files live in a private git repo so I can sync them between machines and version them over time. But the actual work output—weeknotes, briefings, any documents I’m working on—stays on my local machine, not in the repo. The AI helps me do the work, but the work itself doesn’t need to live in the cloud.

If you’re building something similar, I’d recommend starting with the basics from the first post: context files, opinionated prompts, and a tool that lets you compose them easily. The slash commands and automation came later, once I understood what workflows I actually used repeatedly.

The specific tooling matters less than the approach. Figure out where AI can genuinely help—reviewing your work, researching questions, reducing busywork—and then build the smallest system that makes those things easy to do.

Introducing TL;DL: AI-Powered Podcast Summaries

Do you ever listen to a podcast episode and wish you had a summary to reference later? Not the whole transcript or someone else’s review, just a concise breakdown of the key points that you can scan quickly when you need to remember what was covered. Well, that’s why I spent the weekend building TL;DL (Too Long; Didn’t Listen). It generates AI summaries from podcast episodes.

Beyond that, there were a few other podcast use cases I kept running into:

  • Catching up on episodes I missed. Sometimes a podcast gets 10 episodes ahead while life happens. A summary helps me decide which ones are worth going back to.
  • Getting a feel for a new podcast. Before committing to a full episode, I want to know if a new show covers topics in a way that works for how I think and learn.
  • Quick reference after listening. When I want to apply something from an episode—like a framework or technique—I don’t want to re-listen to an hour of audio to find the relevant 5 minutes.

So I built something for myself, and now I’m making it available to others.

How It Works

Registered users submit an Apple Podcasts episode URL and choose a summary template; the system does the rest. It transcribes the audio (using OpenAI Whisper), generates a summary (using GPT-5.2), and caches everything for a year.

The TL;DL submission form with three template options

The three templates are designed for different types of content:

  • Key Takeaways & Practical Steps — This is the default, and it’s what I use most. The summary includes an overview, key insights, actionable steps, and notable quotes. Best for professional development and craft podcasts where you want to walk away with something to implement.
  • Narrative Summary — For story-driven content and interviews. Instead of bullet points, this generates flowing prose that captures the arc of the conversation, including key moments and themes.
  • ELI5 (Explain Like I’m 5) — For technical or complex topics. It breaks down dense material using everyday analogies and simple language.

The ELI5 Template Passed the Real Test

My wife is a therapist. She listens to highly technical psychology podcasts about things like Transference-Focused Psychotherapy and pathological narcissism. When I ran a recent episode she listened to through TL;DL, she was genuinely impressed by the “Key Takeaways” summary. It captured the clinical nuances accurately.

I, on the other hand, didn’t understand a word of it.

So I generated another summary using the “ELI5” template, and suddenly I could follow along. Concepts like devaluation got explained as “when a patient puts down the therapist, the therapy, or anything connected to it.” The technical frameworks became accessible. Here’s the episode page if you want to toggle between the two summaries yourself.

A Note about Podcast Creators and Attribution

Attribution matters to me. Every episode page prominently displays the podcast name, creator names, and both a “Listen on Apple Podcasts” link and a “Website” link to the official podcast website. My hope is that TL;DL helps expand a podcast’s audience by making the content more accessible. Summaries should bring people to a podcast, not replace the experience of listening—after all, most podcasts already publish transcripts.

That said, if creators would rather not have their podcast processed, they can opt out and I’ll add their show to the blocklist.

The Technical Bits

For those interested in the stack: TL;DL runs entirely on Cloudflare’s edge platform. Cloudflare Workers handles the serverless compute, Workers KV stores the cached transcripts and summaries, and Cloudflare Queues manages the background job processing.

One interesting technical challenge was job status consistency. When you submit an episode, you want to see the status update in real-time as it progresses from “queued” to “transcribing” to “summarizing” to “completed.” Workers KV is eventually consistent, which meant status updates could lag by up to a minute. Users would refresh and still see “queued” even after the job was done.

I solved this with Durable Objects, Cloudflare’s strongly consistent coordination layer. The job status gets written to both the Durable Object (for immediate reads) and KV (for persistence and fallback). The UI now updates instantly.

Audio file handling for long episodes was another challenge. OpenAI Whisper has a 25MB file size limit. For podcasts that exceed this, I implemented MP3 frame-aware chunking—splitting the audio at frame boundaries so transcription can be stitched back together cleanly. The overlap handling ensures no words get lost between chunks.
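Frame-aware splitting comes down to only cutting where an MP3 frame sync word appears (a 0xFF byte followed by a byte whose top three bits are set). A simplified sketch—it ignores ID3 tags and the overlap handling mentioned above:

```python
def split_at_frame_boundaries(audio: bytes, max_bytes: int) -> list[bytes]:
    """Split an MP3 byte stream into chunks no larger than max_bytes,
    cutting only at frame sync boundaries so each chunk starts on a
    decodable frame."""
    # Collect every frame-sync offset in the stream.
    syncs = [
        i for i in range(len(audio) - 1)
        if audio[i] == 0xFF and (audio[i + 1] & 0xE0) == 0xE0
    ]
    chunks, start = [], 0
    while len(audio) - start > max_bytes:
        # Largest frame boundary that keeps the chunk within the limit.
        candidates = [s for s in syncs if start < s <= start + max_bytes]
        if not candidates:
            break  # no boundary in range; stop rather than cut mid-frame
        cut = max(candidates)
        chunks.append(audio[start:cut])
        start = cut
    chunks.append(audio[start:])
    return chunks
```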

What’s Next

Beyond solving my own problem, this was one of those projects where the building itself was the reward. The technical challenges were interesting, the product felt useful from day one, and I got to learn Durable Objects properly.

Submitting new episodes is currently invite-only while I iron out the rough edges. If you’re interested in access, reach out. For now, you can browse existing summaries to see how it works.

Building a music discovery app (and what I learned about Product)

I miss liner notes. In the age of infinite streaming and algorithmic playlists I find myself longing for the days when you’d flip open a CD case and actually read about the music you were listening to. Who produced this? What’s the story behind the album? Why does this track feel different from everything else they’ve made?

Spotify and Apple Music are great at giving you more music. They’re less good at helping you understand why you might love something, or what to explore next. So I built my own solution—and then rebuilt it twice.

The problem I was trying to solve

My relationship with Last.fm goes back to 2007. In case you’re not familiar, Last.fm is a service that “scrobbles” (tracks) everything you listen to, building a comprehensive history of your musical life. It’s become a wonderful archive of my taste evolution over nearly two decades.

Last.fm is great at telling you what you listened to. It’s less useful for helping you understand why you might love something, or what else you should explore. Spotify and Apple Music’s algorithmic playlists are fine, but they often feel like they’re optimizing for engagement rather than genuine discovery.

I wanted a tool that would:

  • Show me context about the artists and albums in my listening history
  • Help me discover music through similarity and connection, not just popularity metrics
  • Give me that “liner notes” depth I was craving
  • Work with my existing Last.fm data (18 years of listening history is a lot to throw away)

So I started building, first by copy-pasting from GPT-4 (the olden days!), and most recently with Antigravity + Claude Opus 4.5 (we’ve come a long way since 2023). Here’s where it all stands today…

Listen To More: three iterations and counting

Listen To More is the core project—a music discovery platform that combines real-time listening data with AI-powered insights.

The first version was simple: a personal dashboard that pulled my Last.fm data and displayed it nicely. Functional, but limited. The second version added some AI summaries using OpenAI’s API. Better, but still rough around the edges.

The current version—iteration three—is a complete rebuild focused on speed and multi-user support. What started as “a thing I made for myself” is now something anyone can use. Sign in with your Last.fm account, and you get:

  • Rich album and artist pages with AI-generated summaries, complete with source citations (so you know the AI isn’t just making things up)
  • Your personal stats showing recent listening activity, top artists and albums over different time periods
  • Weekly insights powered by AI that analyze your 7-day listening patterns and suggest albums you might love
  • Cross-platform streaming links for every album—Spotify, Apple Music, and more
  • A Discord bot so you can share music discoveries with friends

The tech stack is Hono on Cloudflare Workers, with D1 (SQLite) for the database and KV for caching. The whole thing is server-side rendered with vanilla JavaScript for progressive enhancement. Pages load in about 300ms, then AI summaries stream in asynchronously.

I chose this stack partly because I work at Cloudflare and wanted to understand our developer platform better. More on that later.

Extending the ecosystem with MCP servers

MCP stands for Model Context Protocol. In plain terms, it’s a standard that lets AI assistants (like Claude) connect to external data sources and tools. Think of it as giving an AI the ability to actually use personalized data rather than just answer questions based on pre-training.

I built two MCP servers to extend my music discovery ecosystem:

Last.fm MCP Server

Available at lastfm-mcp.com, this server lets AI assistants access your Last.fm listening data. Once connected, you can have conversations like:

  • “When did I start listening to Led Zeppelin?”
  • “What was I obsessed with in summer 2023?”
  • “Show me how my music taste has evolved over the years”

The AI can pull your actual scrobble data, analyze trends, and give you personalized insights. It supports temporal queries (looking at specific time periods), similar artists discovery, and comprehensive listening statistics.
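At its core, a temporal query is just a filter plus aggregation over scrobbles. A sketch, assuming each scrobble is a dict with an 'artist' field and a datetime 'played_at' (field names invented for this example):

```python
from collections import Counter
from datetime import datetime


def top_artists_between(scrobbles: list[dict], start: datetime,
                        end: datetime, n: int = 3) -> list[tuple[str, int]]:
    """Filter scrobbles to a time window and count plays per artist,
    the kind of query behind 'what was I obsessed with in summer 2023?'"""
    window = [s for s in scrobbles if start <= s["played_at"] < end]
    return Counter(s["artist"] for s in window).most_common(n)
```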

Discogs MCP Server

This one connects to Discogs—the massive music database and marketplace that’s especially popular with vinyl collectors. If you have a Discogs collection, the MCP server lets AI assistants:

  • Search your collection with intelligent mood mapping (“find something mellow for a Sunday evening”)
  • Get context-aware recommendations based on what you own
  • Provide collection analytics and insights

Both servers run on Cloudflare Workers and use OAuth for secure authentication. They’re open source if you want to poke around or deploy your own.

What I learned

I’m a Product Manager, not an engineer. But I’ve found that having more technical depth broadens the scope of things I am able to contextualize—and makes me more confident in my interactions with engineers. Here’s what building these projects reinforced for me:

  • Side projects are a low-stakes learning environment. When you’re building for yourself, there’s no pressure to ship by a deadline or meet someone else’s requirements. You can experiment, break things, and iterate freely. I tried approaches that would have been too risky to propose in a work context—some of them broke the site spectacularly, others worked beautifully.
  • There’s no substitute for using your own product. I use these tools every day. That constant exposure surfaces issues and opportunities that you’d never catch in a quarterly review or user interview. The feature prioritization becomes obvious when you’re feeling your own friction.
  • Building with your company’s tools is invaluable. I now have deep, practical knowledge of Cloudflare Workers, D1, KV, and the rest of our developer platform. When I’m talking to customers or evaluating feature requests, I’m drawing on real experience, not just documentation. I can empathize with the developer experience because I’ve lived it.
  • The fun matters. I keep coming back to these projects because I genuinely enjoy working on them. The satisfaction of solving a problem you personally care about is different from the satisfaction of shipping something at work. Both are valuable, but the former is what sustains a side project through the inevitable rough patches.

What’s next

I have a list of features I’d love to add—better recommendations, more sophisticated listening pattern analysis, maybe even integration with other music services. But I’m also learning to pace myself. These projects aren’t going anywhere, and part of the joy is the slow, steady improvement over time.

If you’re curious, you can check them out here:

And if you’re a PM thinking about starting a technical side project: do it. Pick something you personally care about, use tools you want to learn, and give yourself permission to build slowly. The lessons transfer in ways you won’t expect.