Release: tldl v1.2.0 — RSS-first monitoring

Project: tldl
Summary: Your favorite podcasts, summarized.

RSS-first monitoring brings tldl's episode detection close to real-time instead of lagging hours behind Podcast Index. The monitor now fetches RSS feeds directly with conditional GETs and queues full episode metadata.

Breaking changes

  • The episode ID shape has changed for RSS-sourced episodes. Existing episodes keep their old IDs.

What’s new

  • RSS-first detection path for all monitored feeds.
  • Cron cadence bumped from every 3 hours to every 30 minutes.

Fixes

  • Episodes retitled after publish are now deduped by audio URL.
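
The fix boils down to keying episodes on the audio enclosure URL, which stays stable when a title changes after publish. A minimal sketch of that dedupe (field names are illustrative):

```python
def dedupe_by_audio_url(episodes: list[dict]) -> list[dict]:
    """Keep the first episode seen per audio URL, so a retitled
    episode doesn't get queued a second time."""
    seen: set[str] = set()
    unique: list[dict] = []
    for ep in episodes:
        if ep["audio_url"] not in seen:
            seen.add(ep["audio_url"])
            unique.append(ep)
    return unique
```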

Under the hood

  • New rssSourced flag on queue messages lets the consumer skip Podcast Index enrichment.
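
On the consumer side the branch is roughly this shape (a hedged sketch, not the real queue schema; the `episode_id` field and the `enrich_from_podcast_index` callable are stand-ins):

```python
def handle_message(message: dict, enrich_from_podcast_index) -> dict:
    """RSS-sourced messages already carry full metadata from the feed,
    so the extra Podcast Index lookup can be skipped."""
    if message.get("rssSourced"):
        return message
    # Legacy path: merge in whatever Podcast Index knows about the episode.
    return {**message, **enrich_from_podcast_index(message["episode_id"])}
```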

What actually changed about being a PM

I have decided that in this new AI era I will be practicing FDD. Fear-Driven Development. Every time I send a pull request, which happens a lot now, I’m terrified of an engineer sending it back to me and asking me to please stay in my lane and stop sending them slop. So I plan, write specs and implementation plans, test thoroughly, and I don’t trust the agent’s inevitable confidence.

I’ll come back to that, but let me first frame what this post is about. The loudest take on PM work right now is that AI is collapsing the role — that we’re one product cycle away from redundancy, or being reduced to prompt jockeys. That hasn’t been my experience at all. The job got more hands-on, harder (brain fry is real), but also a lot more fun. What follows is what actually shifted for me over the last 5 months at Cloudflare, what didn’t, and a couple of things I got wrong.

What changed

We all ship now. The biggest shift in my day-to-day is that my team and I write code. Not as a vanity exercise or to replace engineers — there is no universe in which I’m touching our data pipeline code. But for prototypes, internal tools, small features, and live bugs or UX improvements that are safe for us to do, we just go ahead and make a PR instead of adding things to the backlog. This quote from one of our EMs sums it up well:

I like these dashboard revamps. Far better if PMs can express their visions for the product directly to Opencode, avoids a lot of back and forth.

This is where FDD comes in. The terror of shipping slop is the thing that keeps me responsible. Telling Claude “hey, build me X” is a fast road to code that doesn’t work the way it should, or worse, works but introduces ten regressions along the way. So the pattern I run now is: brainstorm first, usually with a skill that forces me to articulate what I’m actually trying to do; write a spec; turn the spec into an implementation plan; and only then start generating code. Counterintuitively, the planning is what makes the whole thing faster (and better!). Skip the brainstorm and you’ll spend ten rounds of PR review untangling code the model wrote confidently and incorrectly. Plan properly and the build itself is usually the fast part.

Context is the product, and evals are the new PRD. The second change is arguably even more consequential: I spend real time maintaining a context layer. A CLAUDE.md, a library of skills, stakeholder memory, agent routing, a second brain that feeds all of it. None of this was part of my job a year ago, and treating AI as a chat window means missing most of what it can do. My own PM rubrics live inside this layer too. My /okr and /prd commands load the problem-first frameworks and antipattern checklists I’d apply myself, so the first pass on any draft or review is already done before I open the doc.

The same shift is showing up in specs and PRDs. When a document has two audiences (the team building the product and the agent helping them) the writing changes. Ambiguity gets expensive. Rhetorical flourishes don’t survive the first load into context. A good spec is one that loads cleanly as context and runs as a plan — that’s a different job than writing a memo for stakeholders. Ornella at Braintrust has a good post on how evals are the new PRDs:

An eval is a structured, repeatable test that answers one question. Does my AI system do the right thing? Think of it as a unit test for AI behavior.

Her argument is that in AI-native products the eval suite is what actually defines the product. A PRD says what you want; an eval tells you whether you got it. For PMs, the artifact that matters is the one you can run.

We take more load off engineering. CUSTESCs (our customer escalations) used to take hours and hours of PM and engineering capacity. Someone would dig through Jira, read code across a dozen repos, chase down the relevant ClickHouse tables, check the docs against what the customer expected, and go back and forth for days before anyone had a useful working hypothesis. Our team now has a /custesc command that does most of that in parallel. It pulls the ticket, runs three agents at once across code, Jira, and the wiki, generates and runs ClickHouse queries to check the leading hypothesis, and passes the draft through a blind validator and challenger before it lands as a classified analysis. Ticket ID to root cause in about 20 minutes, most of the time.

This moves where the investigation work sits. A CUSTESC used to be a tax on engineering. Now I can run the full investigation myself and come to engineering with a classified issue and a working hypothesis, instead of a vague “can someone take a look at this?” One enormous side benefit: I’ve learned more about our products in the past couple of months than I have in the 1.5 years before that.

What didn’t change

Figuring out what to bet on. What to say yes/no to is still the hardest part of the job. AI can lay out the tradeoffs; it can’t tell you which user/business opportunities to prioritize this quarter. The roadmap is still a set of bets you own. (What “roadmap” even means in this new world deserves a post of its own, but suffice it to say we have fully adopted Now/Next/Later.)

Trust with engineers. There is no AI shortcut for being useful to your team over time. Showing up and owning the ambiguous stuff is still the job. The calls no one wants to make are still yours. If anything, PMs being able to prototype makes the line between helping and stepping on toes harder to hold. It’s a human line, and it gets redrawn every week.

Owning outcomes when things go wrong. AI doesn’t absorb accountability. It won’t take the hit in a postmortem, and it won’t rebuild trust with a customer after an incident — you will. The hardest moments in the job haven’t changed.

What I got wrong

I underestimated the context layer. For the first year I treated AI as a chat tool: ask good questions, get good answers. I thought prompt engineering was the skill. The thing I missed is that skills, memory, agent routing, and the specs you load in are the product. Prompts sit downstream.

I thought adoption would be gradual. I assumed PMs would pick this up on a normal curve: some fast, some slow, most in the middle. What I’m seeing industry-wide is a widening gap between PMs who are willing to change how they work and PMs who aren’t. You can see the gap in how fast they investigate a problem, how concretely they argue about a design, and how useful they are to other teams.

Getting started is easy and the early wins are obvious. The hard part is being open to changing your job. I was talking to my wife the other day about what I’m doing, and she asked the obvious question: “Why are you automating your job away?” My answer: the people who automate their own jobs away are the ones who become more valuable, because the craft is now in orchestration — setting up the layers so the AI does the right thing.

Where this leaves us

I don’t know what this job looks like in another year. The pattern of the last few months has been that the ground shifts faster than the opinions written about it, and most of the stable-sounding takes age badly within a quarter. What I’m trying to do is pay attention to what’s getting easier, what’s getting harder, and what’s making me uncomfortable. And reminding myself that “uncomfortable” is usually where the real learning happens.

Stand Out of Our Light

It’s my firm conviction, now more than ever, that the degree to which we are able and willing to struggle for ownership of our attention is the degree to which we are free.

– James Williams, Stand Out of Our Light: Freedom and Resistance in the Attention Economy

Is Hip-Hop in Decline? A Statistical Analysis

I love this blog and try not to link to it too much, but this one, about how fewer people listen to hip-hop, was especially great.

So, what’s filled the space hip-hop once dominated? A blend of new arrivals and familiar mainstays. Latin music—led by Bad Bunny—and Asian pop, powered by K-pop acts like BTS, have expanded their global footprint. At the same time, legacy formats are resurging: country is booming, driven in large part by Morgan Wallen, while the loosely defined “alternative” category continues to gain share across the charts.

I particularly love how he tries to avoid causation/correlation errors in his hypotheses. Like this one I hadn’t thought about:

Streaming adoption laggards: Hip-hop uniquely benefited from early streaming adopters in the 2010s. Younger listeners—who were predisposed to the genre—were among the first to embrace platforms like Spotify, giving hip-hop an outsized digital footprint. More recently, late adopters—like country fans, older cohorts, and global audiences—have rebalanced the charts, lifting genres like country and K-pop.

I am finally — FINALLY — off WordPress

A quick meta-post incoming! This site has been running on WordPress and Dreamhost for 18 years. It worked fine, but the overhead was really starting to get to me: a MySQL database, monthly hosting costs, plugin updates that arrive every other week, and embarrassing page load times…

I’ve wanted to move to a static site for years, but it felt impossible. Every time I started to think about it I just gave up. How do I migrate 1,700 posts without breaking almost 20 years of URLs? What do I do about search? The Last.fm widget? Email routing? The existing CSS? There were too many things I didn’t know I didn’t know, so I never got very far.


A few months ago I started working on a fresh migration plan with Claude Code, using the obra/superpowers skill set. Not to write code yet, but to think through the plan. What’s the best long-term architecture for a move like this? What’s the actual order of operations? Where are the traps?

We iterated on it many times over several weeks. Each pass surfaced something I hadn’t thought about: redirect strategy, shortcode handling, whether my existing CSS depended on WordPress-specific class names, email routing before cancelling the old host, rollback options if something went wrong… The plan got longer and more detailed, but it also got clearer. What had felt like an insurmountable thing gradually became something with known phases, concrete steps, and tradeoffs I could actually evaluate.

By the end of the planning process, the migration had a 1,300-line plan covering everything from exporting WordPress content to the DNS cutover runbook. And then Claude and I did the actual migration… in a single weekend.

So WordPress and Dreamhost are both gone. What you see here now is Astro, deployed to Cloudflare Workers, with no database and no server-side runtime. Content lives as Markdown files in a git repo. Search runs entirely client-side via Pagefind. And of course, it’s also the fastest the site has ever been.


A lot of people I know have a project like this — something they’ve wanted to do for years, have started and abandoned, and have filed away under “someday.” It stays there because the project doesn’t feel possible. You can’t begin to think of all the complexities involved, so you just don’t start.

I think we’ve learned now that AI is pretty good at making work legible. If you iterate on a plan long enough the project stops being a vague, scary thing and becomes a checklist you can actually run through. So hey, find your “I’m free” thing. It might be more doable than it looks.

Evals Are the New PRD

Braintrust makes a good case (apologies for the X.com link…) for rethinking how PMs work on AI products: the eval replaces the PRD.

An eval is a structured, repeatable test that answers one question. Does my AI system do the right thing? You define a set of inputs along with expected outputs, run them through your AI system, and score the results using algorithms or AI judges.

The eval becomes both the spec and the acceptance criteria. The directive to engineering:

“Here is the eval. Make this number go up.”

That’s very different from how most teams work today, but I can definitely see the industry moving this way. Product usage generates signals, observability captures them, and evals turn them into improvement targets. The PM’s job is to define what “good” looks like in code and to curate the data that reveals what “bad” looks like.
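
The loop she describes (a set of inputs, expected outputs, and a scorer) can be sketched in a few lines. This is a toy harness to show the shape, not Braintrust's SDK:

```python
def run_eval(cases: list[dict], system, score) -> float:
    """Run each input through the system and average the scores (0.0-1.0)."""
    results = [score(system(case["input"]), case["expected"]) for case in cases]
    return sum(results) / len(results)


def exact_match(output, expected) -> float:
    # The simplest "algorithmic" scorer; real suites mix rules and AI judges.
    return 1.0 if output == expected else 0.0
```

The directive to engineering then becomes literal: make the number this returns go up.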

The PM skills that transfer are the same ones that always mattered — discovering needs and opportunities, and making judgment calls about what to build for business value. The difference is that instead of a document that describes the intent, you have a test suite that encodes it.

No One Else Can Speak the Words on Your Lips

Ben Roy explains why prompting an LLM to write an essay misunderstands what writing actually is:

People fundamentally can’t prompt good essays into existence because writing is not a top-down exercise of applying knowledge you have upfront and asking an LLM to create something. AI agents also can’t create good essays for the same reason. Even though their step-by-step reasoning is more complex and iterative than human prompting, a chain of thought is still trying to accomplish a predefined goal. By contrast, real writing is bottom up. You don’t know what you want to say in advance. It’s a process of discovery where you start with a set of half-baked ideas and work with them in non-linear ways to find out what you really think.

I will continue to argue that for general business writing LLMs are fantastic if they are given the right context and guidance, and that they can save hours of work (with high-quality results). But all my experiments with using LLMs for creative writing have so far fallen flat. Maybe (likely?) that will change within the next few months. But for now, the brain work this kind of writing requires remains. Not a bad thing, imo.

Zombie Flow

Derek Thompson goes into the history of the “flow” concept, and how tech and entertainment companies learned to simulate it without any of the substance psychologist Mihaly Csikszentmihalyi originally had in mind:

Algorithmic flow is flow without achievement, flow without challenge, flow without even volition… To be lost in the lazy river of algorithmic media is to be lost in the current of life without a mind. Zombie flow.

Ten years ago the question was how to get into flow more often. Now it might be how to get out of the fake version fast enough to remember what the real one felt like.

AI might actually need more PMs

Amol Avasare, Anthropic’s Head of Growth, said on Lenny’s Podcast that maybe PM jobs are not going to shrink as much as we may have thought…

Rather than immediately replacing PMs, AI is currently increasing engineering leverage the fastest, which creates new pressure on PMs and designers. In larger organizations, that may actually increase the value of PMs who can guide priorities, manage alignment, and sharpen decision-making—especially as engineers take on more “mini-PM” responsibilities.

Eight years of wanting, three months of building with AI

Lalit Maganti writes about building a SQLite parser with AI — a project he’d been putting off for eight years, finished in three months. His comparison of AI coding to slot machines is uncomfortably familiar:

I found myself up late at night wanting to do “just one more prompt,” constantly trying AI just to see what would happen even when I knew it probably wouldn’t work. The sunk cost fallacy kicked in too: I’d keep at it even in tasks it was clearly ill-suited for, telling myself “maybe if I phrase it differently this time.”

Also, I agree that this is still true today, but I’m not convinced it will remain true beyond 2026:

AI is an incredible force multiplier for implementation, but it’s a dangerous substitute for design.