
The AI baseline has moved

Geoffrey Huntley wrote about what happens when people finally “get” AI:

If you’re having trouble sleeping because of all the things that you want to create, congratulations. You’ve made it through to the other side of the chasm, and you are developing skills that employers in 2026 are expecting as a bare minimum.

The only question that remains is whether you are going to be a consumer of these tools or someone who understands them deeply and automates your job function? Trust me, you want to be in the latter camp because consumption is now the baseline for employment.

Knowing how to use these tools is no longer a differentiator. The gap is between people who consume AI outputs and people who understand the systems well enough to build on top of them.

For product managers, this means that prompting ChatGPT for a first draft doesn’t count as an AI skill anymore. The question is whether you can wire together agents, automate your own workflows, and spot opportunities others miss because they’re still thinking in manual processes.

The Jevons Paradox and the Future of Knowledge Work

I keep thinking about this essay by Mike Fisher about what happens when automation makes work easier. His central argument challenges the assumption that’s baked into most AI-and-jobs discourse:

In every domain where automation becomes powerful, the pattern remains consistent. Human expertise becomes more valuable because the total volume of meaningful work increases. Early fears of automation nearly always assume a fixed amount of work being redistributed. But work is not fixed. Work expands when constraints are removed.

He anchors this on the Jevons Paradox—the 19th-century observation that improved steam engine efficiency led to more coal consumption, not less. He then traces the pattern through radiology, where the number of US radiologists grew from 30,723 in 2014 to 36,024 in 2023, despite Hinton’s 2016 prediction that deep learning would make them obsolete within five years.

He concludes:

AI will reshape the profession, but only in the sense that cars reshaped transportation or spreadsheets reshaped finance. Not by eliminating the field, but by expanding its scope. Not by reducing labor, but by elevating it. Not by shrinking opportunity, but by multiplying it. The world does not need fewer people who understand systems. It needs far more of them.

I find this framing useful because it shifts the question from “will AI take my job?” to “how will the work change as the volume increases?” That’s a much more interesting thing to figure out (which is also why I have been so focused on expanding my Product Second Brain).

Why "Correction of Error" Gets Incidents (and Product Failures) Wrong

I’ve covered “root cause” thinking in incident reviews before, and Lorin Hochstein takes aim at a related issue: AWS’s “Correction of Error” terminology:

I hate the term “Correction of Error” because it implies that incidents occur as a result of errors. As a consequence, it suggests that the function of a post-incident review process is to identify the errors that occurred and to fix them. I think this view of incidents is wrong, and dangerously so: It limits the benefits we can get out of an incident review process.

What makes his critique compelling is the observation that production systems are full of defects that never cause outages:

If your system is currently up (which I bet it is), and if your system currently has multiple undetected defects in it (which I also bet it does), then it cannot be the case that defects are a sufficient condition for incidents to occur. In other words, defects alone can’t explain incidents.

This applies to product work too. When users report problems, our instinct is to find “the bug” and fix it. But often the bug has been there for months—what changed is the context around it. A new user flow, a spike in traffic, a feature interaction we didn’t anticipate. If we stop at “fixed the bug,” we miss the chance to understand why the system let that failure through in the first place.

Why AI in Interviews Is Bad for Candidates, Not Just Companies

A quick post on LinkedIn about interviewing a candidate who used real-time AI got more engagement than is usual for me. And as often happens when something goes semi-viral, some folks took issue with what I said, so I want to expand on the point I was trying to make (it wasn’t that “AI is cheating”).

Here’s what I wrote:

I had my first experience interviewing a candidate who used real-time AI today. If you’re someone who uses AI daily, it’s so easy to spot. The pause before the answer, the constant eyes flicking to the other screen, the perfectly-manicured 3-point answer…

Friends, just don’t do this. It’s too easy to spot, and it will also set you up for failure, because it might get you a job that you’re not a good fit for, which is bad for everyone.

Use AI in your job, for sure. But don’t use it to get the job. The interview process is about you. Be you.

One response called this “absolutely myopic” (I had to double-check that I hadn’t accidentally posted on Hacker News) and asked why candidates shouldn’t use AI if it allows for “a better, more creative answer.” Another suggested that if candidates will use AI on the job anyway, then the “real you” isn’t going to be working, so what’s the difference?

Let’s dig into this.

What interviews are actually for

I don’t interview people to test whether they can produce a good answer to a question. I interview people to understand how they think, what they’ve actually done, and whether we’ll work well together.

When I ask “Tell me about a time you had to make a difficult prioritization decision,” I’m not looking for the theoretically optimal framework. I want to hear your story. The messy details and the trade-offs you wrestled with. The thing you got wrong and what you learned from it. AI can’t give me that. It can only give me a polished summary of what prioritization frameworks exist.

One commenter put it well: “It’s about both the company and the individual, so you will often talk about their real experience, what they did, how they felt, what did they learn, digging deeper into their real experience to find out the interesting things that could make them a good match.”

AI might help you phrase things more clearly. But if it’s generating your answers, you’re hiding the very thing I’m trying to evaluate.

The fit problem

Here’s the part that didn’t seem to land: using AI to get a job you’re not qualified for is bad for you.

Let’s say the AI-assisted interview works. You get hired. Now what? You show up on day one, and the expectations are set based on how you performed in those interviews. But that wasn’t you. That was a performance enhanced by a tool you won’t have in the same way during actual work conversations, whiteboard sessions, and quick chat exchanges where people expect you to just… know things.

I’ve seen what happens when there’s a mismatch between interview performance and actual capability. It’s not a fun experience for anyone, least of all the person who’s now struggling in a role they weren’t ready for. One person called it “artificial buzzword ventriloquism” in the comments. Harsh, but not wrong.

It’s about context, not absolutes

A few commenters suggested that interviews should evolve to assume AI assistance, since that’s how people will actually work. One person wrote: “By prohibiting AI during interviews, the interview environment diverges from actual job conditions and fails to evaluate a critical skill: the ability to effectively use one of the most powerful productivity tools available today.”

I think there’s something to this. In fact, our interview process includes a take-home assessment where we explicitly encourage candidates to use AI. We want to see how they approach a problem, how they structure their thinking, and yes, how they use modern tools to get to a good answer. That’s a legitimate skill worth evaluating.

But that’s different from what happened in my interview, where someone was clearly trying to hide their AI usage while answering questions about their past experience. That’s not “using AI as a tool.” That’s using AI as a mask.


I think candidates should absolutely use AI to prepare for interviews: research the company, practice answering common questions, refine their resume.

But in the interview itself, when I’m asking about your experience and your thinking, I need to hear from you. Not because AI is cheating, but because the whole point is to figure out if you are the right fit for this role and this team. If I can’t evaluate that, we can’t make a good hiring decision. And that’s bad for both of us.

The invisible 40% of engineering work

Anton Zaides wrote a good post about shadow work in engineering teams. He discovered a senior engineer on his team was spending over 40% of his time on work that didn’t show up anywhere—code reviews, mentoring, ad-hoc support fixes, etc.

This part is important:

The shadow backlog isn’t the problem—in my opinion, that’s probably the work that should have been done in the first place. The solution is to stop doing it under the table and make sure you have space for it. The more people don’t agree with your roadmap because it was decided for them, the more shadow backlog you’ll have.

The shadow backlog is a symptom of a roadmap that doesn’t reflect reality—and that often happens when engineering teams are not involved in planning and prioritization. That is the real fix—making sure everyone understands and is aligned on the roadmap, and making sure this kind of BAU (Business As Usual) work is visible and planned for.

The B2B Product Leadership Delusion

Jason Knight wrote about a fascinating disconnect between how B2B product leaders rate themselves and how their teams see them. The data from his survey is striking:

Across the board, B2B Product Leaders think they’re doing pretty well in all of these areas, but B2B IC PMs are not convinced. The difference is stark, and they can’t both be right.

The survey measured six core responsibilities—setting strategy, aligning teams, enabling prioritization, fostering ownership, removing blockers, and investing in people. In every category, leaders rated themselves significantly higher than their ICs rated them. Jason offers three possible explanations: leaders are doing poorly and don’t know it, leaders are doing well but not communicating it, or ICs have unreasonable expectations. He concludes:

Product Leaders need to do a much better job of setting expectations within their teams and communicating with them openly and well. IC Product Managers need to do a much better job of understanding the constraints of their business context and, indeed, the business they work for.

I keep coming back to the iceberg effect he mentions—where only some of the work someone does is visible. This cuts both ways. Leaders underestimate how opaque their work is to their teams, and ICs underestimate the constraints leaders are working within. The gap isn’t just about performance; it’s about mutual understanding.

The invention of "classic rock"

Daniel Parris wrote a statistical analysis of when rock became “classic rock”, and it’s not the story I expected.

He assumed the genre emerged organically from music nerds debating on message boards and in the pages of Rolling Stone. Instead:

What I found was a deliberate realignment engineered by music executives chasing an ephemeral advertising demographic. Like many entertainment industry decisions, it was a small (mostly male) group of executives quietly deciding the future of popular culture behind closed doors.

The data shows two concentrated periods when stations rapidly switched to classic rock: the mid-1980s (to capture aging Boomers entering their peak earning years) and the mid-1990s (after the Telecommunications Act enabled Clear Channel to buy up local stations and prioritize low-risk, high-profit formats).

The kicker is that this rebrand was designed around economic incentives that have since eroded. Radio isn’t the default distribution channel anymore. On streaming, music can just exist without being packaged for a hyper-valuable consumer cohort.

Another reminder that so much of what feels like culture is really just business decisions made in conference rooms.

How to Set Up OpenCode as Your Product Second Brain

This is the hands-on companion to How I Use AI for Product Work and How My AI Product Second Brain Evolved. Those posts explain the philosophy; this one gets you to a working OpenCode setup in 30 minutes.

By the end, you’ll have a folder structure, three useful slash commands, and a context file that makes the AI actually useful.

Prerequisites

You’ll need OpenCode installed and configured. Follow the installation guide to get started. If you can run opencode in your terminal and get a response, you’re ready.

Step 1: Create the Folder Structure

Create a new directory for your product second brain. I recommend keeping it in a git repo so you can version your prompts over time.

mkdir -p product-ai/{context,prompts/pm,.opencode/command}
cd product-ai
git init

Your structure should look like this:

product-ai/
├── .opencode/
│   └── command/       # Slash commands live here
├── context/           # Personal context files
├── prompts/
│   └── pm/            # PM-specific prompts
├── AGENTS.md          # Instructions for OpenCode
└── opencode.jsonc     # OpenCode config
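
One note on opencode.jsonc: the steps below never touch it, and a near-empty file is enough to get going. Here’s a minimal sketch; the $schema URL follows the OpenCode docs, but check the docs for the options your version supports:

// opencode.jsonc — minimal starting config; add model/provider options later as needed
{
  "$schema": "https://opencode.ai/config.json"
}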

Step 2: Create Your Context File

The context file tells the AI who you are and how you work. Create context/about-me.md:

# About Me

## Role
[Your title] at [Company], working on [your area].

## What I Care About
- [Your product philosophy, e.g., "Start with the problem, not the solution"]
- [Your working style, e.g., "Bias toward shipping and learning"]
- [Your communication preferences, e.g., "Direct feedback, no hedging"]

## Current Focus
- [Project or initiative 1]
- [Project or initiative 2]

Keep this updated as your focus changes. The more specific you are, the more useful the AI becomes.

Step 3: Create AGENTS.md

This file tells OpenCode how to behave in your repo. Create AGENTS.md in the root:

# Product AI Second Brain

Read this file before responding.

## Who I Am
Read `context/about-me.md` for personal context.

## Slash Commands
Run these by typing the command in OpenCode:

| Command | Purpose |
|---------|---------|
| `/prd` | Review a PRD |
| `/debate` | Stress-test a product idea |
| `/okr` | Review OKRs |

Step 4: Add Your First Commands

Slash commands are markdown files in .opencode/command/. Each command file is a thin wrapper that points to a full prompt file in prompts/pm/; when you run a command, OpenCode substitutes whatever you type after it for the $ARGUMENTS placeholder you’ll see below. This separation keeps commands simple while allowing prompts to be detailed and shareable.

Command 1: /debate (Stress-test an Idea)

Create .opencode/command/debate.md:

---
description: Stress-test a product idea with pro vs skeptic debate
---

# Product Debate

Read these files before proceeding:
- `prompts/pm/debate-product-idea.md` - **REQUIRED: Full debate framework**
- `context/about-me.md` - Your product beliefs and context

## Your Task

$ARGUMENTS

## Instructions

Then create the prompt file prompts/pm/debate-product-idea.md with your full debate methodology. The basic structure: define two personas (a Visionary who argues for the idea and a Skeptic who pokes holes), have them debate, then synthesize the strongest arguments from both sides.
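
To make that structure concrete, here’s a minimal sketch of what prompts/pm/debate-product-idea.md could look like. The section names and round count are illustrative placeholders, not the exact prompt from the original posts:

# Debate a Product Idea

## Personas
- **Visionary**: makes the strongest possible case for the idea.
- **Skeptic**: pokes holes in it: feasibility, risks, opportunity cost.

## Process
1. The Visionary opens with the case for the idea.
2. The Skeptic responds to each point directly.
3. Run two more rounds, each persona responding to the other’s latest points.
4. Synthesize: pull the strongest arguments from both sides and end with a recommendation.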

Command 2: /okr (Review OKRs)

Create .opencode/command/okr.md:

---
description: Review OKRs for clarity and outcome-orientation
---

# OKR Review

Read these files before proceeding:
- `prompts/pm/review-okrs.md` - **REQUIRED: Full OKR review framework**
- `context/about-me.md` - Your product beliefs and context

## Your Task

$ARGUMENTS

## Instructions

Then create prompts/pm/review-okrs.md with your OKR criteria. Mine checks for outcome-orientation (are these outputs or outcomes?), measurability, and whether the key results actually ladder up to the objective.
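
Based on those criteria, a minimal sketch of prompts/pm/review-okrs.md might look like this (the wording is mine; adapt it to your own bar):

# Review OKRs

For each objective and its key results, check:

1. **Outcome-orientation**: Is each key result a real outcome, or an output in disguise?
2. **Measurability**: Can you tell unambiguously whether each key result was hit?
3. **Laddering**: If every key result lands, is the objective actually achieved?

Summarize the issues found and suggest a rewrite for any weak key result.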

Command 3: /prd (Review a PRD)

Create .opencode/command/prd.md:

---
description: Review a PRD for completeness and clarity
---

# PRD Review

Read these files before proceeding:
- `prompts/pm/review-prd.md` - **REQUIRED: Full PRD review framework**
- `context/about-me.md` - Your product beliefs and context

## Your Task

$ARGUMENTS

## Instructions

Then create prompts/pm/review-prd.md with your PRD review criteria.
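
The post leaves the PRD criteria entirely up to you. As a starting point, here’s a sketch of prompts/pm/review-prd.md based only on the “completeness and clarity” focus from the command description; the specific checks are my assumptions:

# Review a PRD

Check the document for:

1. **Problem clarity**: Is the user problem stated before the solution?
2. **Completeness**: Are success metrics, scope boundaries, and open questions covered?
3. **Clarity**: Flag any requirement a reader could interpret in two different ways.

List the top issues in priority order, each with a concrete suggestion.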

Step 5: Try It Out

Navigate to your product-ai directory and run OpenCode:

cd product-ai
opencode

Test each command:

/debate Should we build a self-serve dashboard for customers to debug their own issues?
/okr [paste your OKRs here]
/prd [paste your PRD here]

What’s Next

Once you’re comfortable with this setup, consider adding:

  • More commands: /retro for retrospectives, /feedback for drafting colleague feedback
  • Skills: Methodology files that OpenCode loads automatically based on context (see the skills documentation)
  • Daily summaries: A /today command that summarizes what you worked on
  • Project-specific context: Folders for major initiatives with their own context files

The key is to start small and add complexity as you identify repeated workflows. If you find yourself doing the same thing more than twice, it’s probably worth automating.

If you build something useful on top of this, I’d love to hear about it!

Learning in the Age of AI

Scott H. Young has a thoughtful piece on what’s still worth learning in a world with AI. He cuts through both the panic and the hype to look at what the data actually shows. The biggest finding is probably not all that surprising: early-career workers in AI-exposed fields are getting hit hardest.

Another report from the Stanford Digital Economy Lab notes that early-career workers in AI-exposed fields (such as programming) have seen a relative decline in employment, even as employment among workers aged 30 years and older increased.

This matches my intuition that AI coding agents can do a lot of junior developer tasks pretty well, but struggle to match the experience needed to tackle more serious work.

Young’s advice is to cultivate generalist skills. Not the content-free “critical thinking” kind, but genuinely transferable knowledge:

In an environment of change, it’s better to be the hardy dandelion rather than the hothouse orchid. Similarly, I expect with AI-induced change, people who have maintained diverse interests and skills will be best positioned to take advantage of the change, whereas extreme specialists will face a greater risk of extinction.

Don't Outsource Your Love of Music to AI

I’m late to this one, but I like Liz Pelly’s take on Spotify Wrapped. It’s not just about music—it’s about what happens when we let corporations automate our memories:

Spotify Wrapped now feels like just another example of something personal and precious that is being automated away from us; another example of a supposedly unbearable task of thinking and writing being “offloaded” in order to make life more frictionless.

The post is essentially about friction—and why we need it. She argues that working through the process of remembering what mattered to us and thinking critically about our year is what keeps us sharp and curious. When we just accept what a streaming service tells us about our taste, we’re not just outsourcing a task. We’re losing our own sense of what connected with us and why.

It encourages music fans to believe that the records they streamed the most must be the ones they liked the most, which is surely not always the case.

Her suggestion is straightforward: write your own list. It doesn’t have to be polished—a notes app screenshot, a handwritten list, whatever. Just something that came from you, not from an algorithm optimizing for engagement metrics.