Is Hip-Hop in Decline? A Statistical Analysis

I love this blog and try not to link to it too much, but this one about how fewer people listen to hip-hop was especially great.

So, what’s filled the space hip-hop once dominated? A blend of new arrivals and familiar mainstays. Latin music—led by Bad Bunny—and Asian pop, powered by K-pop acts like BTS, have expanded their global footprint. At the same time, legacy formats are resurging: country is booming, driven in large part by Morgan Wallen, while the loosely defined “alternative” category continues to gain share across the charts.

I particularly love how he tries to avoid causation/correlation errors in his hypotheses. Like this one I hadn’t thought about:

Streaming adoption laggards: Hip-hop uniquely benefited from early streaming adopters in the 2010s. Younger listeners—who were predisposed to the genre—were among the first to embrace platforms like Spotify, giving hip-hop an outsized digital footprint. More recently, late adopters—like country fans, older cohorts, and global audiences—have rebalanced the charts, lifting genres like country and K-pop.

I am finally — FINALLY — off WordPress

A quick meta-post incoming! This site has been running on WordPress and Dreamhost for 18 years. It worked fine, but the overhead was really starting to get to me: a MySQL database, monthly hosting costs, plugin updates that arrive every other week, and embarrassing page load times...

I've wanted to move to a static site for years, but it felt impossible. Every time I started to think about it I just gave up. How do I migrate 1,700 posts without breaking almost 20 years of URLs? What do I do about search? The Last.fm widget? Email routing? The existing CSS? There were too many things I didn't know I didn't know, so I never got very far.

Evals Are the New PRD

Braintrust makes a good case (apologies for the X.com link…) for rethinking how PMs work on AI products: the eval replaces the PRD.

An eval is a structured, repeatable test that answers one question. Does my AI system do the right thing? You define a set of inputs along with expected outputs, run them through your AI system, and score the results using algorithms or AI judges.
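To make the definition above concrete, here is a minimal sketch of an eval harness in Python. None of this is Braintrust's actual API; every name is illustrative, and the exact-match scorer stands in for the "algorithms or AI judges" a real eval would use.

```python
# Minimal eval harness sketch: a set of inputs with expected outputs,
# run through the system under test and scored. All names are hypothetical.

def ai_system(prompt: str) -> str:
    """Stand-in for the AI system under test."""
    return {"2+2": "4", "capital of France": "Paris"}.get(prompt, "unknown")

def score(expected: str, actual: str) -> float:
    """Simplest possible judge: exact match. Real evals often use
    fuzzier scoring algorithms or an LLM-as-judge instead."""
    return 1.0 if expected == actual else 0.0

def run_eval(cases: list[tuple[str, str]]) -> float:
    """Run every case through the system and return the mean score --
    the single number engineering is asked to make go up."""
    scores = [score(expected, ai_system(prompt)) for prompt, expected in cases]
    return sum(scores) / len(scores)

cases = [
    ("2+2", "4"),
    ("capital of France", "Paris"),
    ("meaning of life", "42"),  # deliberately failing case
]
print(run_eval(cases))
```

The point of the sketch is the shape, not the scoring: the case list is the spec, and the aggregate score is the acceptance criterion.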

The eval becomes both the spec and the acceptance criteria. The directive to engineering:

“Here is the eval. Make this number go up.”

That’s very different to how most teams work today, but I can definitely see the industry moving this way. Product usage generates signals, observability captures them, and evals turn them into improvement targets. The PM’s job is to define what “good” looks like in code and curate the data that reveals what “bad” looks like.

The PM skills that transfer are the same ones that always mattered — discovering needs and opportunities, and making judgment calls about what to build for business value. The difference is that instead of a document that describes the intent, you have a test suite that encodes it.

No One Else Can Speak the Words on Your Lips

Ben Roy explains why prompting an LLM to write an essay misunderstands what writing actually is:

People fundamentally can’t prompt good essays into existence because writing is not a top-down exercise of applying knowledge you have upfront and asking an LLM to create something. AI agents also can’t create good essays for the same reason. Even though their step-by-step reasoning is more complex and iterative than human prompting, a chain of thought is still trying to accomplish a predefined goal. By contrast, real writing is bottom up. You don’t know what you want to say in advance. It’s a process of discovery where you start with a set of half-baked ideas and work with them in non-linear ways to find out what you really think.

I will continue to argue that for general business writing LLMs are fantastic if they are given the right context and guidance, and that they can save hours of work (with high-quality results). But all my experiments with using LLMs for creative writing have so far fallen flat. Maybe—likely?—that will change within the next few months. But for now, the brain work this kind of writing requires remains. Not a bad thing imo.

Zombie Flow

Derek Thompson goes into the history of the “flow” concept, and how tech and entertainment companies learned to simulate it without any of the substance psychologist Mihaly Csikszentmihalyi originally had in mind:

Algorithmic flow is flow without achievement, flow without challenge, flow without even volition… To be lost in the lazy river of algorithmic media is to be lost in the current of life without a mind. Zombie flow.

Ten years ago the question was how to get into flow more often. Now it might be how to get out of the fake version fast enough to remember what the real one felt like.

AI might actually need more PMs

Amol Avasare, Anthropic’s Head of Growth, said on Lenny’s Podcast that maybe PM jobs are not going to shrink as much as we may have thought…

Rather than immediately replacing PMs, AI is currently increasing engineering leverage the fastest, which creates new pressure on PMs and designers. In larger organizations, that may actually increase the value of PMs who can guide priorities, manage alignment, and sharpen decision-making—especially as engineers take on more “mini-PM” responsibilities.

Eight years of wanting, three months of building with AI

Lalit Maganti writes about building a SQLite parser with AI — a project he’d been putting off for eight years, finished in three months. His comparison of AI coding to slot machines is uncomfortably familiar:

I found myself up late at night wanting to do “just one more prompt,” constantly trying AI just to see what would happen even when I knew it probably wouldn’t work. The sunk cost fallacy kicked in too: I’d keep at it even in tasks it was clearly ill-suited for, telling myself “maybe if I phrase it differently this time.”

Also, I agree that this is still true today, but I’m not convinced it will remain true beyond 2026:

AI is an incredible force multiplier for implementation, but it’s a dangerous substitute for design.

Endgame for the open web

Anil Dash has a long essay on the state of the open web and not all of it rings true for me, but buried in the opening is a wonderful definition of what the open web actually is:

The open web is something extraordinary: anybody can use whatever tools they have, to create content following publicly documented specifications, published using completely free and open platforms, and then share that work with anyone, anywhere in the world, without asking for permission from anyone. Think about how radical that is.

It does feel like if the web were invented in 2026, it would not stay an open technology for long (see also AI, and how much open-source models are lagging).

Negative space in writing

Tracy Durnell explores non-visual negative space—what happens when writing leaves room for the reader to think:

The current design trend of business and self-help style books is to use tons of subheadings and callout boxes and always, a list of the key points at the end of the chapter. While this is a highly skimmable format and often nice visual design, it essentially sucks the negative space out of the text — the places in which the reader might step back and consider their own examples or anticipate what point the author is trying to make. There’s no time for hunches here.

And:

The negative space of the text helps build the aesthetic experience. Small details flavor the text with a sense of reality. Drawing out events — leaving questions unresolved and conflicts unsettled — can build tension. And textual space creates a gap for the reader to make the personal decodings of the text that build meaning.

Not everything has to get to the point immediately. Sometimes the best thing a writer can do is leave room for the reader to get there on their own. I’m thinking about this because I’m currently reading The Will of the Many. It is slow, and long, and one of the best books I’ve read in ages. The negative space is probably a big reason why I love it so much.

Agentic manual testing

Simon Willison has a practical guide on manual testing with coding agents. Two tips I’ve already started using:

It’s still quick for an agent to write out a demo file and then compile and run it. I sometimes encourage it to use /tmp purely to avoid those files being accidentally committed to the repository later on.

And:

If an agent finds something that doesn’t work through their manual testing, I like to tell them to fix it with red/green TDD. This ensures the new case ends up covered by the permanent automated tests.