<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Elezea - RSS Feed</title><description>Rian van der Merwe&apos;s blog</description><link>https://elezea.com/</link><item><title>I am finally — FINALLY — off WordPress</title><link>https://elezea.com/2026/04/free-from-wordpress/</link><guid isPermaLink="true">https://elezea.com/2026/04/free-from-wordpress/</guid><description>After 18 years on WordPress and Dreamhost, I migrated elezea.com to Astro and Cloudflare Workers in a weekend. AI-assisted planning is what finally made it possible.</description><pubDate>Thu, 16 Apr 2026 02:24:03 GMT</pubDate><content:encoded>&lt;p&gt;A quick meta-post incoming! This site has been running on WordPress and Dreamhost for 18 years. It worked &lt;em&gt;fine&lt;/em&gt;, but the overhead was really starting to get to me: a MySQL database, monthly hosting costs, plugin updates that arrive every other week, and embarrassing page load times...&lt;/p&gt;
&lt;p&gt;I&apos;ve wanted to move to a static site for years, but it felt impossible. Every time I started to think about it I just gave up. How do I migrate 1,700 posts without breaking almost 20 years of URLs? What do I do about search? The &lt;a href=&quot;http://Last.fm&quot;&gt;Last.fm&lt;/a&gt; widget? Email routing? The existing CSS? There were too many things I didn&apos;t know I didn&apos;t know, so I never got very far.&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;A few months ago I started working on a fresh migration plan with Claude Code, using the &lt;a href=&quot;https://github.com/obra/superpowers&quot;&gt;obra/superpowers&lt;/a&gt; skill set. Not to write code yet, but to think through the plan. What&apos;s the best long-term architecture for a move like this? What&apos;s the actual order of operations? Where are the traps?&lt;/p&gt;
&lt;p&gt;We iterated on it many times over several weeks. Each pass surfaced something I hadn&apos;t thought about: redirect strategy, shortcode handling, whether my existing CSS depended on WordPress-specific class names, email routing before cancelling the old host, rollback options if something went wrong... The plan got longer and more detailed, but it also got clearer. What had felt like an insurmountable thing gradually became something with known phases, concrete steps, and tradeoffs I could actually evaluate.&lt;/p&gt;
&lt;p&gt;By the end of the planning process, the migration had a 1,300-line plan covering everything from exporting WordPress content to the DNS cutover runbook. And then Claude and I did the actual migration... in a single weekend.&lt;/p&gt;
&lt;p&gt;So WordPress and Dreamhost are both gone. What you see here now is &lt;a href=&quot;https://astro.build&quot;&gt;Astro&lt;/a&gt;, deployed to &lt;a href=&quot;https://workers.cloudflare.com/&quot;&gt;Cloudflare Workers&lt;/a&gt;, with no database and no server-side runtime. Content lives as Markdown files in a git repo. Search runs entirely client-side via &lt;a href=&quot;https://pagefind.app/&quot;&gt;Pagefind&lt;/a&gt;. And of course, it&apos;s also the fastest the site has ever been.&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;A lot of people I know have a project like this — something they&apos;ve wanted to do for years, have started and abandoned, and have filed away under &amp;quot;someday.&amp;quot; It stays there because the project doesn&apos;t feel possible. You can&apos;t begin to think of all the complexities involved, so you just don&apos;t start.&lt;/p&gt;
&lt;p&gt;I think we&apos;ve learned now that AI is pretty good at making work legible. If you iterate on a plan long enough the project stops being a vague, scary thing and becomes a checklist you can actually run through. So hey, find your &amp;quot;I&apos;m free&amp;quot; thing. It might be more doable than it looks.&lt;/p&gt;
&lt;br&gt;&lt;br&gt;&lt;hr&gt;Thanks for still believing in RSS! Feel free to &lt;a href=&quot;https://elezea.com/contact&quot;&gt;get in touch&lt;/a&gt;.</content:encoded><author>Rian van der Merwe</author></item><item><title>Evals Are the New PRD</title><link>https://elezea.com/2026/04/evals-are-the-new-prd/</link><guid isPermaLink="true">https://elezea.com/2026/04/evals-are-the-new-prd/</guid><description>For AI products, the eval replaces the PRD — it defines what good looks like and is the acceptance criteria.</description><pubDate>Wed, 15 Apr 2026 17:31:37 GMT</pubDate><content:encoded>&lt;p&gt;&lt;a href=&quot;https://x.com/braintrust/status/2039356267949445230&quot;&gt;Braintrust makes a good case&lt;/a&gt; (apologies for the &lt;a href=&quot;http://X.com&quot;&gt;X.com&lt;/a&gt; link...) for rethinking how PMs work on AI products: the eval replaces the PRD.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;An eval is a structured, repeatable test that answers one question. Does my AI system do the right thing? You define a set of inputs along with expected outputs, run them through your AI system, and score the results using algorithms or AI judges.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The eval becomes both the spec and the acceptance criteria. The directive to engineering:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&amp;quot;Here is the eval. Make this number go up.&amp;quot;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;That&apos;s &lt;em&gt;very&lt;/em&gt; different to how most teams work today, but I can definitely see the industry moving this way. Product usage generates signals, observability captures them, and evals turn them into improvement targets. The PM&apos;s job is to define what &amp;quot;good&amp;quot; looks like in code and curate the data that reveals what &amp;quot;bad&amp;quot; looks like.&lt;/p&gt;
&lt;p&gt;The PM skills that transfer are the same ones that always mattered — discovering needs and opportunities, and making judgment calls about what to build for business value. The difference is that instead of a document that describes the intent, you have a test suite that encodes it.&lt;/p&gt;
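&lt;p&gt;For illustration, a minimal eval really is just a scored loop over input/expected pairs. Everything below is hypothetical (the model call, the cases, the exact-match scorer); it&apos;s a sketch of the shape, not any real harness:&lt;/p&gt;

```python
# Minimal eval sketch. run_model() stands in for whatever AI system is
# under test; real evals often score with fuzzier checks or an AI judge.
def run_model(prompt):
    # Placeholder "system": normalize whitespace and case.
    return prompt.strip().lower()

# Each case pairs an input with the output we expect.
EVAL_CASES = [
    {"input": "  Hello  ", "expected": "hello"},
    {"input": "WORLD", "expected": "world"},
]

def run_eval(cases):
    # Score each case 1 or 0 with an exact-match check,
    # then average into the single number to "make go up".
    scores = [1 if run_model(c["input"]) == c["expected"] else 0 for c in cases]
    return sum(scores) / len(scores)

print(run_eval(EVAL_CASES))
```

&lt;p&gt;The spec lives in &lt;code&gt;EVAL_CASES&lt;/code&gt;, and the acceptance criterion is the score.&lt;/p&gt;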
&lt;br&gt;&lt;br&gt;&lt;hr&gt;Thanks for still believing in RSS! Feel free to &lt;a href=&quot;https://elezea.com/contact&quot;&gt;get in touch&lt;/a&gt;.</content:encoded><author>Rian van der Merwe</author></item><item><title>No One Else Can Speak the Words on Your Lips</title><link>https://elezea.com/2026/04/no-one-else-can-speak-the-words-on-your-lips/</link><guid isPermaLink="true">https://elezea.com/2026/04/no-one-else-can-speak-the-words-on-your-lips/</guid><description>Ben Roy on why LLMs can&apos;t write good essays: real writing is a bottom-up process of discovery, not a top-down application of what you already know.</description><pubDate>Tue, 14 Apr 2026 02:17:00 GMT</pubDate><content:encoded>&lt;p&gt;Ben Roy explains why &lt;a href=&quot;https://benroy.substack.com/p/no-one-else-can-speak-the-words-on&quot;&gt;prompting an LLM to write an essay misunderstands what writing actually is&lt;/a&gt;:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;People fundamentally can&apos;t prompt good essays into existence because writing is not a top-down exercise of applying knowledge you have upfront and asking an LLM to create something. AI agents also can&apos;t create good essays for the same reason. Even though their step-by-step reasoning is more complex and iterative than human prompting, a chain of thought is still trying to accomplish a predefined goal. By contrast, real writing is bottom up. You don&apos;t know what you want to say in advance. It&apos;s a process of discovery where you start with a set of half-baked ideas and work with them in non-linear ways to find out what you really think.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;I will continue to argue that for general business writing LLMs are fantastic if they are given the right context and guidance, and that they can save &lt;em&gt;hours&lt;/em&gt; of work (with high quality results). But all my experiments with using LLMs for creative writing have so far fallen flat. Maybe (likely?) that will change within the next few months. But for now, the brain work this kind of writing requires remains. Not a bad thing imo.&lt;/p&gt;
&lt;br&gt;&lt;br&gt;&lt;hr&gt;Thanks for still believing in RSS! Feel free to &lt;a href=&quot;https://elezea.com/contact&quot;&gt;get in touch&lt;/a&gt;.</content:encoded><author>Rian van der Merwe</author></item><item><title>Zombie Flow</title><link>https://elezea.com/2026/04/zombie-flow/</link><guid isPermaLink="true">https://elezea.com/2026/04/zombie-flow/</guid><description>Derek Thompson traces the concept of &quot;flow&quot; from Csikszentmihalyi to the algorithmic feeds that simulate it, and argues the skill we need now is getting out of zombie flow, not into flow.</description><pubDate>Sun, 12 Apr 2026 14:54:13 GMT</pubDate><content:encoded>&lt;p&gt;Derek Thompson &lt;a href=&quot;https://www.derekthompson.org/p/how-zombie-flow-took-over-culture&quot;&gt;goes into the history of the &amp;quot;flow&amp;quot; concept&lt;/a&gt;, and how tech and entertainment companies learned to simulate it without any of the substance psychologist Mihaly Csikszentmihalyi originally had in mind:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Algorithmic flow is flow without achievement, flow without challenge, flow without even volition... To be lost in the lazy river of algorithmic media is to be lost in the current of life without a mind. Zombie flow.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Ten years ago the question was how to get into flow more often. Now it might be how to get out of the fake version fast enough to remember what the real one felt like.&lt;/p&gt;
&lt;br&gt;&lt;br&gt;&lt;hr&gt;Thanks for still believing in RSS! Feel free to &lt;a href=&quot;https://elezea.com/contact&quot;&gt;get in touch&lt;/a&gt;.</content:encoded><author>Rian van der Merwe</author></item><item><title>AI might actually need more PMs</title><link>https://elezea.com/2026/04/ai-might-actually-need-more-pms/</link><guid isPermaLink="true">https://elezea.com/2026/04/ai-might-actually-need-more-pms/</guid><description>Amol Avasare on why AI-accelerated engineering might increase the value of product managers, not replace them.</description><pubDate>Fri, 10 Apr 2026 20:51:22 GMT</pubDate><content:encoded>&lt;p&gt;Amol Avasare, Anthropic&apos;s Head of Growth, said &lt;a href=&quot;https://tldl-pod.com/episode/1627920305_1000759379580&quot;&gt;on Lenny&apos;s Podcast&lt;/a&gt; that maybe PM jobs are &lt;em&gt;not&lt;/em&gt; going to shrink as much as we may have thought...&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Rather than immediately replacing PMs, AI is currently increasing engineering leverage the fastest, which creates new pressure on PMs and designers. In larger organizations, that may actually increase the value of PMs who can guide priorities, manage alignment, and sharpen decision-making—especially as engineers take on more &amp;quot;mini-PM&amp;quot; responsibilities.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;br&gt;&lt;br&gt;&lt;hr&gt;Thanks for still believing in RSS! Feel free to &lt;a href=&quot;https://elezea.com/contact&quot;&gt;get in touch&lt;/a&gt;.</content:encoded><author>Rian van der Merwe</author></item><item><title>Eight years of wanting, three months of building with AI</title><link>https://elezea.com/2026/04/eight-years-of-wanting-three-months-of-building-with-ai/</link><guid isPermaLink="true">https://elezea.com/2026/04/eight-years-of-wanting-three-months-of-building-with-ai/</guid><description>Lalit Maganti on building real software with AI — the slot machine addiction, the corrosion, and why design still needs a human.</description><pubDate>Fri, 10 Apr 2026 20:41:58 GMT</pubDate><content:encoded>&lt;p&gt;Lalit Maganti &lt;a href=&quot;https://lalitm.com/post/building-syntaqlite-ai/&quot;&gt;writes about building a SQLite parser with AI&lt;/a&gt; — a project he&apos;d been putting off for eight years, finished in three months. His comparison of AI coding to slot machines is uncomfortably familiar:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;I found myself up late at night wanting to do &amp;quot;just one more prompt,&amp;quot; constantly trying AI just to see what would happen even when I knew it probably wouldn&apos;t work. The sunk cost fallacy kicked in too: I&apos;d keep at it even in tasks it was clearly ill-suited for, telling myself &amp;quot;maybe if I phrase it differently this time.&amp;quot;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Also, I agree that this is still true &lt;em&gt;today&lt;/em&gt;, but I&apos;m not convinced it will remain true beyond 2026:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;AI is an incredible force multiplier for implementation, but it&apos;s a dangerous substitute for design.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;br&gt;&lt;br&gt;&lt;hr&gt;Thanks for still believing in RSS! Feel free to &lt;a href=&quot;https://elezea.com/contact&quot;&gt;get in touch&lt;/a&gt;.</content:encoded><author>Rian van der Merwe</author></item><item><title>Endgame for the open web</title><link>https://elezea.com/2026/03/endgame-for-the-open-web/</link><guid isPermaLink="true">https://elezea.com/2026/03/endgame-for-the-open-web/</guid><description>Anil Dash defines the open web as the radical ability to create and share using open specs, free platforms, and no gatekeepers—and argues that every aspect of that architecture is now under coordinated attack.</description><pubDate>Sun, 29 Mar 2026 17:57:15 GMT</pubDate><content:encoded>&lt;p&gt;Anil Dash has &lt;a href=&quot;https://anildash.com/2026/03/27/endgame-open-web/&quot;&gt;a long essay on the state of the open web&lt;/a&gt; and not all of it rings true for me, but buried in the opening is a wonderful definition of what the open web actually is:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The open web is something extraordinary: anybody can use whatever tools they have, to create content following publicly documented specifications, published using completely free and open platforms, and then share that work with anyone, anywhere in the world, without asking for permission from anyone. Think about how radical that is.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;It does feel like if the web had been invented in 2026, it would &lt;em&gt;not&lt;/em&gt; have been left as an open technology for long (see also AI, and how far open-source models are lagging).&lt;/p&gt;
&lt;br&gt;&lt;br&gt;&lt;hr&gt;Thanks for still believing in RSS! Feel free to &lt;a href=&quot;https://elezea.com/contact&quot;&gt;get in touch&lt;/a&gt;.</content:encoded><author>Rian van der Merwe</author></item><item><title>Negative space in writing</title><link>https://elezea.com/2026/03/negative-space-in-writing/</link><guid isPermaLink="true">https://elezea.com/2026/03/negative-space-in-writing/</guid><description>Tracy Durnell on how modern writing formats strip out the reflective pauses where readers build their own meaning.</description><pubDate>Tue, 24 Mar 2026 00:14:41 GMT</pubDate><content:encoded>&lt;p&gt;Tracy Durnell explores &lt;a href=&quot;https://tracydurnell.com/2026/03/06/non-visual-negative-space/&quot;&gt;non-visual negative space&lt;/a&gt;—what happens when writing leaves room for the reader to think:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The current design trend of business and self-help style books is to use tons of subheadings and callout boxes and always, a list of the key points at the end of the chapter. While this is a highly skimmable format and often nice visual design, it essentially sucks the negative space out of the text — the places in which the reader might step back and consider their own examples or anticipate what point the author is trying to make. There&apos;s no time for hunches here.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;And:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The negative space of the text helps build the aesthetic experience. Small details flavor the text with a sense of reality. Drawing out events — leaving questions unresolved and conflicts unsettled — can build tension. And textual space creates a gap for the reader to make the personal decodings of the text that build meaning.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Not everything has to get to the point immediately. Sometimes the best thing a writer can do is leave room for the reader to get there on their own. I&apos;m thinking about this because I&apos;m currently reading &lt;a href=&quot;https://amzn.to/4rUrxmy&quot;&gt;The Will of the Many&lt;/a&gt;. It is slow, and long, and one of the best books I&apos;ve read in ages. The negative space is probably a big reason why I love it so much.&lt;/p&gt;
&lt;br&gt;&lt;br&gt;&lt;hr&gt;Thanks for still believing in RSS! Feel free to &lt;a href=&quot;https://elezea.com/contact&quot;&gt;get in touch&lt;/a&gt;.</content:encoded><author>Rian van der Merwe</author></item><item><title>Agentic manual testing</title><link>https://elezea.com/2026/03/agentic-manual-testing/</link><guid isPermaLink="true">https://elezea.com/2026/03/agentic-manual-testing/</guid><description>Two practical tips from Simon Willison on testing with coding agents: write demo files to /tmp to keep repos clean, and use red/green TDD to turn manually discovered bugs into permanent automated tests.</description><pubDate>Mon, 23 Mar 2026 23:53:30 GMT</pubDate><content:encoded>&lt;p&gt;Simon Willison has &lt;a href=&quot;https://simonwillison.net/guides/agentic-engineering-patterns/agentic-manual-testing/&quot;&gt;a practical guide on manual testing with coding agents&lt;/a&gt;. Two tips I&apos;ve already started using:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;It&apos;s still quick for an agent to write out a demo file and then compile and run it. I sometimes encourage it to use /tmp purely to avoid those files being accidentally committed to the repository later on.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;And:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;If an agent finds something that doesn&apos;t work through their manual testing, I like to tell them to fix it with red/green TDD. This ensures the new case ends up covered by the permanent automated tests.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;br&gt;&lt;br&gt;&lt;hr&gt;Thanks for still believing in RSS! Feel free to &lt;a href=&quot;https://elezea.com/contact&quot;&gt;get in touch&lt;/a&gt;.</content:encoded><author>Rian van der Merwe</author></item><item><title>From Assistant to Collaborator: How My AI Second Brain Grew Up</title><link>https://elezea.com/2026/03/from-assistant-to-collaborator-how-my-ai-second-brain-grew-up/</link><guid isPermaLink="true">https://elezea.com/2026/03/from-assistant-to-collaborator-how-my-ai-second-brain-grew-up/</guid><description>Over the past few months my AI second brain crossed a threshold from tool I invoke to collaborator I dispatch — driven by multi-agent workflows, cross-session memory, and deep domain expertise.</description><pubDate>Sat, 14 Mar 2026 15:13:36 GMT</pubDate><content:encoded>&lt;p&gt;Over the past few months I’ve been writing about how I use AI for product work. The &lt;a href=&quot;https://elezea.com/2025/12/ai-for-product-management/&quot;&gt;first post&lt;/a&gt; covered the philosophy: context files, opinionated prompts, and how to compose the right inputs for each task. The &lt;a href=&quot;https://elezea.com/2025/12/how-my-ai-product-second-brain-evolved/&quot;&gt;second&lt;/a&gt; added slash commands and daily summaries. The &lt;a href=&quot;https://elezea.com/2026/01/how-to-set-up-opencode-as-your-product-second-brain/&quot;&gt;third&lt;/a&gt; was a hands-on setup guide. And the &lt;a href=&quot;https://elezea.com/2026/02/project-brains-organizing-complex-initiatives-for-ai-assisted-work/&quot;&gt;fourth&lt;/a&gt; introduced project brains for keeping complex initiatives organized.&lt;/p&gt;
&lt;p&gt;This post covers a different kind of change. The earlier additions were incremental: more commands, better context, smoother workflows. What changed recently feels more like a threshold. The system graduated from a tool I invoke for specific tasks to something closer to a collaborator I dispatch to do real work. Three capabilities drove that shift: multi-agent orchestration, cross-session memory, and the encoding of domain expertise into the system itself.&lt;/p&gt;
&lt;h2&gt;Multi-Agent Workflows&lt;/h2&gt;
&lt;p&gt;The clearest example is customer escalation investigations. As a PM for data products, I regularly investigate customer-reported issues: logging gaps, data discrepancies, behavior that doesn’t match expectations. These investigations require pulling information from multiple sources and cross-referencing it all into an analysis that engineering can act on.&lt;/p&gt;
&lt;p&gt;I built a &lt;a href=&quot;https://github.com/rianvdm/product-ai-public/blob/main/04-docs/multi-agent-investigation.md&quot;&gt;slash command that handles this as a multi-phase workflow&lt;/a&gt;. When I run it with a ticket ID, here’s what happens:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;The system reads the customer ticket, extracts the core problem, identifies which product area is involved, and classifies the issue type.&lt;/li&gt;
&lt;li&gt;Three specialist agents launch simultaneously, each focused on a different data source. One searches the codebase for the relevant logic and recent changes. Another searches for related tickets and prior incidents across projects. A third checks documentation and internal wiki pages for relevant operational context.&lt;/li&gt;
&lt;li&gt;A fourth agent receives the combined findings and produces database queries that can confirm or refute the working hypothesis.&lt;/li&gt;
&lt;li&gt;The system combines everything into a structured analysis: issue classification, root cause anchored in code where possible, customer impact, and recommended next steps.&lt;/li&gt;
&lt;li&gt;A &lt;a href=&quot;https://github.com/rianvdm/product-ai-public/blob/main/.opencode/agent/blind-validator.md&quot;&gt;blind validator&lt;/a&gt; independently re-fetches every source cited in the draft to verify the claims hold up. Then an adversarial challenger looks for alternative explanations and tests whether the classification is correct.&lt;/li&gt;
&lt;/ol&gt;
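&lt;p&gt;The orchestration shape of those phases can be sketched in a few lines. The agent functions below are illustrative stand-ins, not the actual opencode agents; only the pattern (parallel specialists, then synthesis) is the point:&lt;/p&gt;

```python
# Sketch of the fan-out/fan-in pattern: three specialist "agents" run in
# parallel, then their findings are combined into one report.
# The agent functions are hypothetical stand-ins, not real agent calls.
from concurrent.futures import ThreadPoolExecutor

def code_agent(ticket):
    return {"source": "code", "finding": f"relevant logic for {ticket}"}

def tickets_agent(ticket):
    return {"source": "tickets", "finding": f"prior incidents related to {ticket}"}

def docs_agent(ticket):
    return {"source": "docs", "finding": f"operational context for {ticket}"}

def investigate(ticket):
    # Phase 2: the specialists run simultaneously.
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(agent, ticket)
                   for agent in (code_agent, tickets_agent, docs_agent)]
        findings = [f.result() for f in futures]
    # Phases 4 and 5 would synthesize and then independently validate;
    # here we just attach a confidence field to the combined result.
    return {"ticket": ticket, "findings": findings, "confidence": "medium"}
```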
&lt;p&gt;The output is a document I can review with an engineering colleague or paste into a chat thread. It includes a confidence assessment and a data collection status table showing what was checked and what was unavailable, along with how the analysis compensated for gaps.&lt;/p&gt;
&lt;p&gt;The command file that orchestrates all of this isn’t prompting in the traditional sense. It defines which agents to dispatch, what information each one needs, when to wait for results before proceeding, and how to handle failures gracefully. Writing this felt more like designing a workflow than writing a prompt.&lt;/p&gt;
&lt;p&gt;I’ve applied the same pattern to other tasks. A “fix feasibility” command evaluates whether a ticket describes a code change simple enough for a PM to implement with AI coding assistance, and produces an implementation brief if the answer is yes. The specific use cases differ, but the architecture is the same: break the problem into specialist tasks that run in parallel, then synthesize and validate the results.&lt;/p&gt;
&lt;h2&gt;Cross-Session Memory&lt;/h2&gt;
&lt;p&gt;AI conversations are stateless by default. Every new session starts from zero, which means re-explaining context that should already be established. Over a few weeks of working on the same projects, this friction adds up.&lt;/p&gt;
&lt;p&gt;I addressed this with a &lt;a href=&quot;https://github.com/rianvdm/product-ai-public/blob/main/04-docs/cross-session-memory.md&quot;&gt;four-layer memory system&lt;/a&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The first layer is &lt;a href=&quot;https://github.com/rianvdm/product-ai-public/blob/main/01-context/stable-facts-template.md&quot;&gt;stable facts&lt;/a&gt;: a compact file that captures the current state of all active work, including project status, recent decisions, and environment constraints. This is the primary orientation file. When I start a session, the AI reads it and immediately knows what’s in flight.&lt;/li&gt;
&lt;li&gt;The second is a session log: a reverse-chronological list of handoff notes. Each entry records what happened in a session and what threads remain open. The last three entries give enough context to pick up where I left off.&lt;/li&gt;
&lt;li&gt;Third, a corrections file. This holds behavioral fixes for things the AI consistently gets wrong. It’s a staging area that should shrink over time as fixes get promoted elsewhere.&lt;/li&gt;
&lt;li&gt;And finally, a decisions log: a cross-cutting record of decisions that don’t belong to a specific project. Each entry captures context and rationale so I don’t relitigate settled questions.&lt;/li&gt;
&lt;/ul&gt;
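&lt;p&gt;As a rough illustration, loading these four layers at the start of a session could look like the sketch below. The file names are hypothetical, not the ones in my repo:&lt;/p&gt;

```python
# Hypothetical /session-start style loader for the four memory layers.
from pathlib import Path

MEMORY_FILES = [
    "stable-facts.md",  # layer 1: current state of all active work
    "session-log.md",   # layer 2: reverse-chronological handoff notes
    "corrections.md",   # layer 3: behavioral fixes awaiting promotion
    "decisions.md",     # layer 4: cross-cutting decision record
]

def load_memory(root):
    # Read whichever layers exist; a missing file is skipped, not an
    # error, so a fresh setup still starts cleanly.
    memory = {}
    for name in MEMORY_FILES:
        path = Path(root) / name
        if path.exists():
            memory[name] = path.read_text()
    return memory
```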
&lt;p&gt;Two commands manage this. &lt;a href=&quot;https://github.com/rianvdm/product-ai-public/blob/main/.opencode/command/session-start.md&quot;&gt;&lt;code&gt;/session-start&lt;/code&gt;&lt;/a&gt; loads all four files and presents a brief summary of current state and recent sessions. &lt;a href=&quot;https://github.com/rianvdm/product-ai-public/blob/main/.opencode/command/session-end.md&quot;&gt;&lt;code&gt;/session-end&lt;/code&gt;&lt;/a&gt; reviews the conversation, writes a handoff note, and then checks whether any learnings should be promoted to infrastructure.&lt;/p&gt;
&lt;p&gt;“Promote to infrastructure” means taking something learned during a session and baking it into the files the agent actually reads. A correction about how to handle a specific edge case in escalation investigations might start in the corrections file, then get promoted into the escalation command or a domain skill once it’s validated. The corrections file shrinks over time as knowledge graduates into the right places.&lt;/p&gt;
&lt;p&gt;This creates a loop where the system improves its own instructions. I approve every change, so it’s not self-modifying in a creepy way. But in practice each work session can make the next one slightly better, and the compound effect over weeks is noticeable.&lt;/p&gt;
&lt;h2&gt;Domain Expertise&lt;/h2&gt;
&lt;p&gt;The earlier posts described skills like &lt;a href=&quot;https://github.com/rianvdm/product-ai-public/blob/main/.opencode/skills/pm-thinking/SKILL.md&quot;&gt;&lt;code&gt;pm-thinking&lt;/code&gt;&lt;/a&gt;, which applies product methodology (problem-first thinking, measurable outcomes) to any PM-related conversation. That’s useful, but generic. It works the same way regardless of what product you’re building.&lt;/p&gt;
&lt;p&gt;The bigger shift was building skills that encode institutional knowledge about specific products. I now have skills for each major product area my team owns: log delivery, analytics, audit logs, alerting, and data pipelines. Each skill contains the product’s architecture and common failure modes, along with which code repositories to search and which database tables hold relevant data.&lt;/p&gt;
&lt;p&gt;This is what makes the multi-agent workflows genuinely useful. When the code investigator agent examines an escalation about missing logs, the domain skill tells it which service handles job state and which repository contains the delivery pipeline. It also flags recent architectural changes that might be relevant. Without that context, the agent produces plausible-sounding analysis that misses the specific details engineering needs.&lt;/p&gt;
&lt;p&gt;Now every investigation that uses a skill validates or extends the knowledge it contains, and &lt;code&gt;/session-end&lt;/code&gt; catches insights that should be added back.&lt;/p&gt;
&lt;h2&gt;How The Work Changes&lt;/h2&gt;
&lt;p&gt;A few practical observations from working this way:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The role has shifted from “write the right prompt” to “design the right process.” The escalation command is a workflow with phases, dependencies, and validation steps. Thinking about it that way produces better results than trying to pack everything into a single conversation.&lt;/li&gt;
&lt;li&gt;Validation has to be built in. The blind validator exists because agents make mistakes. They cite files that don’t exist, mischaracterize what code does, or draw conclusions the evidence doesn’t support. Catching those issues before they reach anyone else is the whole point.&lt;/li&gt;
&lt;li&gt;Cross-session memory requires discipline. The system only works if I run &lt;code&gt;/session-end&lt;/code&gt; after substantive sessions and keep stable facts current. When I skip it, the next session starts cold and I lose the compounding benefit. Automation helps, but the commitment to maintain the memory is mine.&lt;/li&gt;
&lt;li&gt;And domain skills need regular maintenance. Products change. Code gets refactored, pipelines get rearchitected. Skills that aren’t periodically updated drift from reality. I haven’t solved this well yet. It’s still a manual process of noticing when a skill’s knowledge is stale and updating it.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The system still makes mistakes. Multi-agent workflows are more thorough than single-prompt conversations, but they’re not infallible. The confidence assessment in the escalation output exists because sometimes the answer is “medium confidence, we couldn’t confirm this from the available data.” That honesty about limitations is more useful than false certainty.&lt;/p&gt;
&lt;h2&gt;Where This Is Going&lt;/h2&gt;
&lt;p&gt;I’m sure the specific commands and skills will look different in six months as I learn what works and what doesn’t. But the underlying pattern feels durable: compose specialist agents with deep domain context, validate their output, and feed learnings back into the system.&lt;/p&gt;
&lt;p&gt;I’ve published updated files to the &lt;a href=&quot;https://github.com/rianvdm/product-ai-public&quot;&gt;Product AI Public repo&lt;/a&gt;, including the session memory commands and a generalized version of the multi-agent escalation workflow. If you’re building something similar, those might be useful starting points.&lt;/p&gt;
&lt;p&gt;The value of this system is in how the pieces reinforce each other. Domain skills make agents useful for real investigations. Session memory means the system gets smarter over time. And the promote-to-infrastructure loop ties it together, so each piece of work has a chance to make the next one better.&lt;/p&gt;
&lt;br&gt;&lt;br&gt;&lt;hr&gt;Thanks for still believing in RSS! Feel free to &lt;a href=&quot;https://elezea.com/contact&quot;&gt;get in touch&lt;/a&gt;.</content:encoded><author>Rian van der Merwe</author></item><item><title>When Using AI Leads to “Brain Fry”</title><link>https://elezea.com/2026/03/when-using-ai-leads-to-brain-fry/</link><guid isPermaLink="true">https://elezea.com/2026/03/when-using-ai-leads-to-brain-fry/</guid><description>Research shows that &quot;AI brain fry&quot; — cognitive exhaustion from overseeing AI agents — is real, and that productivity actually dips after using more than three AI tools simultaneously.</description><pubDate>Sat, 14 Mar 2026 13:58:45 GMT</pubDate><content:encoded>&lt;p&gt;I am definitely &lt;a href=&quot;https://hbr.org/2026/03/when-using-ai-leads-to-brain-fry&quot;&gt;feeling the &amp;quot;brain fry&amp;quot;&lt;/a&gt; right now:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;We found that the phenomenon described in these posts—cognitive exhaustion from intensive oversight of AI agents—is both real and significant. We call it “AI brain fry,” which we define as &lt;em&gt;mental fatigue from excessive use or oversight of AI tools beyond one’s cognitive capacity.&lt;/em&gt; Participants described a “buzzing” feeling or a mental fog with difficulty focusing, slower decision-making, and headaches.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The research is fascinating and worth reading, with super interesting findings like this:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;As employees go from using one AI tool to two simultaneously, they experience a significant increase in productivity. As they incorporate a third tool, productivity again increases, but at a lower rate. After three tools, though, productivity scores &lt;em&gt;dipped&lt;/em&gt;. Multitasking is &lt;a href=&quot;https://pmc.ncbi.nlm.nih.gov/articles/PMC7075496/&quot;&gt;notoriously unproductive&lt;/a&gt;, and yet we fall for its allure time and again.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Earlier this week I had this thought: &amp;quot;Oh no, I think I&apos;ve blown out my context window. I wish I could add some more tokens to my brain. Until then I might just have to respond to new requests with &lt;code&gt;429 Too Many Requests&lt;/code&gt;.&amp;quot;&lt;/p&gt;
&lt;p&gt;And that&apos;s when I realized I probably need to go touch grass or something.&lt;/p&gt;
&lt;br&gt;&lt;br&gt;&lt;hr&gt;Thanks for still believing in RSS! Feel free to &lt;a href=&quot;https://elezea.com/contact&quot;&gt;get in touch&lt;/a&gt;.</content:encoded><author>Rian van der Merwe</author></item><item><title>AI should help us produce better code</title><link>https://elezea.com/2026/03/ai-should-help-us-produce-better-code/</link><guid isPermaLink="true">https://elezea.com/2026/03/ai-should-help-us-produce-better-code/</guid><description>Shipping worse code with AI agents is a choice. Simon Willison and Mitchell Hashimoto both argue we should engineer our processes so agents make our code better, not worse.</description><pubDate>Sat, 14 Mar 2026 13:43:35 GMT</pubDate><content:encoded>&lt;p&gt;As usual, Simon Willison &lt;a href=&quot;https://simonwillison.net/guides/agentic-engineering-patterns/better-code/&quot;&gt;hits the nail on the head here&lt;/a&gt;:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;If adopting coding agents demonstrably reduces the quality of the code and features you are producing, you should address that problem directly: figure out which aspects of your process are hurting the quality of your output and fix them. Shipping worse code with agents is a &lt;em&gt;choice&lt;/em&gt;. We can choose to ship code &lt;a href=&quot;https://simonwillison.net/guides/agentic-engineering-patterns/code-is-cheap/#good-code&quot;&gt;that is better&lt;/a&gt; instead.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Also see Mitchell Hashimoto’s &lt;a href=&quot;https://mitchellh.com/writing/my-ai-adoption-journey&quot;&gt;idea of “harness engineering”&lt;/a&gt;:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;It is the idea that anytime you find an agent makes a mistake, you take the time to engineer a solution such that the agent never makes that mistake again.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;br&gt;&lt;br&gt;&lt;hr&gt;Thanks for still believing in RSS! Feel free to &lt;a href=&quot;https://elezea.com/contact&quot;&gt;get in touch&lt;/a&gt;.</content:encoded><author>Rian van der Merwe</author></item><item><title>On Meeting Your Child Again, and Again</title><link>https://elezea.com/2026/03/on-meeting-your-child-again-and-again/</link><guid isPermaLink="true">https://elezea.com/2026/03/on-meeting-your-child-again-and-again/</guid><description>*Derek Thompson writes about three reasons to be a parent—the most compelling being that parenthood means constantly meeting new versions of your child, &quot;a permanent relationship with strangers, plural to the extreme.&quot;*</description><pubDate>Sat, 07 Mar 2026 22:11:52 GMT</pubDate><content:encoded>&lt;p&gt;Derek Thompson wrote a &lt;a href=&quot;https://www.derekthompson.org/p/three-reasons-to-be-a-parent&quot;&gt;wonderful essay on what happens when you become a parent&lt;/a&gt;:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The baby you bring home from the hospital is not the baby you rock to sleep at two weeks, and the baby at three months is a complete stranger to both. In a phenomenological sense, parenting a newborn is not at all like parenting &amp;quot;a&amp;quot; singular newborn, but rather like parenting hundreds of babies, each one replacing the previous week&apos;s child, yet retaining her basic facial structure. &amp;quot;Parenthood abruptly catapults us into a permanent relationship with a stranger,&amp;quot; Andrew Solomon wrote in &lt;em&gt;Far From the Tree&lt;/em&gt;. Almost. Parenthood catapults us into a permanent relationship with &lt;em&gt;strangers&lt;/em&gt;, plural to the extreme.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;br&gt;&lt;br&gt;&lt;hr&gt;Thanks for still believing in RSS! Feel free to &lt;a href=&quot;https://elezea.com/contact&quot;&gt;get in touch&lt;/a&gt;.</content:encoded><author>Rian van der Merwe</author></item><item><title>Why It&apos;s Still Valuable To Learn To Code</title><link>https://elezea.com/2026/03/why-its-still-valuable-to-learn-to-code/</link><guid isPermaLink="true">https://elezea.com/2026/03/why-its-still-valuable-to-learn-to-code/</guid><description>Carson Gross&apos;s essay on AI and junior programmers applies just as much to product managers: you can&apos;t build an effective AI-powered workflow until you&apos;ve spent years developing the underlying judgment it&apos;s meant to amplify.</description><pubDate>Fri, 06 Mar 2026 22:47:32 GMT</pubDate><content:encoded>&lt;p&gt;Carson Gross &lt;a href=&quot;https://htmx.org/essays/yes-and/&quot;&gt;has a good essay on whether junior programmers should still learn to code&lt;/a&gt; given how capable AI has become. His core warning to students:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Yes, AI can generate the code for this assignment. Don&apos;t let it. You &lt;em&gt;have&lt;/em&gt; to write the code. I explain that, if they don&apos;t write the code, they will not be able to effectively &lt;em&gt;read&lt;/em&gt; the code. The ability to read code is certainly going to be valuable, maybe &lt;em&gt;more&lt;/em&gt; valuable, in an AI-based coding future. If you can&apos;t read the code you are going to fall into &lt;a href=&quot;https://www.youtube.com/watch?v=m-W8vUXRfxU&quot;&gt;The Sorcerer&apos;s Apprentice Trap&lt;/a&gt;, creating systems &lt;a href=&quot;https://www.youtube.com/watch?v=GFiWEjCedzY&quot;&gt;you don&apos;t understand and can&apos;t control&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;And on what separates senior engineers who can use AI well from those who can&apos;t:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Senior programmers who already have a lot of experience from the pre-AI era are in a good spot to use LLMs effectively: they know what &apos;good&apos; code looks like, they have experience with building larger systems and know what matters and what doesn&apos;t. The danger with senior programmers is that they stop programming entirely and start suffering from brain rot.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;This maps directly onto what I&apos;ve been writing about with &lt;a href=&quot;https://elezea.com/2025/12/ai-for-product-management/&quot;&gt;AI for product work&lt;/a&gt; and the &lt;a href=&quot;https://elezea.com/2025/12/how-my-ai-product-second-brain-evolved/&quot;&gt;second brain setup I&apos;ve built&lt;/a&gt;. The system works because I spent years writing and reading PRDs, strategy docs, and OKRs—enough to develop actual opinions about what good looks like. You have to do the work first, &lt;em&gt;then&lt;/em&gt; the second brain is worth building.&lt;/p&gt;
&lt;br&gt;&lt;br&gt;&lt;hr&gt;Thanks for still believing in RSS! Feel free to &lt;a href=&quot;https://elezea.com/contact&quot;&gt;get in touch&lt;/a&gt;.</content:encoded><author>Rian van der Merwe</author></item><item><title>An AI Wake-Up Call</title><link>https://elezea.com/2026/03/an-ai-wake-up-call/</link><guid isPermaLink="true">https://elezea.com/2026/03/an-ai-wake-up-call/</guid><description>Matt Shumer makes the case that AI is fundamentally different from previous waves of automation because it&apos;s a general substitute for cognitive work, and the escape routes that existed before are closing fast.</description><pubDate>Sun, 01 Mar 2026 22:05:04 GMT</pubDate><content:encoded>&lt;p&gt;Matt Shumer&apos;s &lt;a href=&quot;https://www.linkedin.com/pulse/something-big-happening-matt-shumer-so5he&quot;&gt;Something Big Is Happening&lt;/a&gt; has made the rounds over the last couple of weeks, but just in case you haven&apos;t seen it, I think it&apos;s very much worth reading. He&apos;s an AI startup founder writing for the non-technical people in his life:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;AI isn&apos;t replacing one specific skill. It&apos;s a general substitute for cognitive work. It gets better at everything simultaneously. When factories automated, a displaced worker could retrain as an office worker. When the internet disrupted retail, workers moved into logistics or services. But AI doesn&apos;t leave a convenient gap to move into. Whatever you retrain for, it&apos;s improving at that too.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Previous waves of automation always left somewhere to go. The uncomfortable implication here is that the escape routes are closing as fast as they open.&lt;/p&gt;
&lt;p&gt;There are too many quotes worth commenting on, but this observation about what we tell our kids feels important:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The people most likely to thrive are the ones who are deeply curious, adaptable, and effective at using AI to do things they actually care about. Teach your kids to be builders and learners, not to optimize for a career path that might not exist by the time they graduate.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Predictions about the pace of change tend to be simultaneously too aggressive and too conservative in ways that are hard to anticipate. But the direction feels right, and the practical advice is sound: use the tools seriously, don&apos;t assume they can&apos;t do something just because it seems too hard, and spend your energy adapting rather than debating whether this is real.&lt;/p&gt;
&lt;br&gt;&lt;br&gt;&lt;hr&gt;Thanks for still believing in RSS! Feel free to &lt;a href=&quot;https://elezea.com/contact&quot;&gt;get in touch&lt;/a&gt;.</content:encoded><author>Rian van der Merwe</author></item><item><title>Toolshed, blueprints, and why good agents need good DevEx</title><link>https://elezea.com/2026/03/toolshed-blueprints-and-why-good-agents-need-good-devex/</link><guid isPermaLink="true">https://elezea.com/2026/03/toolshed-blueprints-and-why-good-agents-need-good-devex/</guid><description>Stripe&apos;s Alistair Gray goes deep on how they built their internal coding agents, and the infrastructure patterns that make them work at scale.</description><pubDate>Sun, 01 Mar 2026 18:36:16 GMT</pubDate><content:encoded>&lt;p&gt;Alistair Gray published &lt;a href=&quot;https://stripe.dev/blog/minions-stripes-one-shot-end-to-end-coding-agents-part-2&quot;&gt;part two of Stripe’s “Minions” series&lt;/a&gt;, going deeper on how they built their internal coding agents. It’s a great read throughout, but three ideas really stood out to me.&lt;/p&gt;
&lt;p&gt;First, blueprints. These are workflows that mix deterministic steps with agentic ones:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Blueprints are workflows defined in code that direct a minion run. Blueprints combine the determinism of workflows with agents’ flexibility in dealing with the unknown: a given node can run either deterministic code or an agent loop focused on a task. In essence, a blueprint is like a collection of agent skills interwoven with deterministic code so that particular subtasks can be handled most appropriately.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;If you &lt;em&gt;know&lt;/em&gt; a step should always happen the same way, don’t let an LLM decide how to do it. Let the agent handle the ambiguous parts, and hardcode the rest (this can also dramatically reduce token cost).&lt;/p&gt;
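&lt;p&gt;To make the idea concrete, here&apos;s a minimal sketch of what a blueprint-style runner could look like. This is not Stripe&apos;s actual API; every name here is invented for illustration:&lt;/p&gt;

```python
# Hypothetical sketch of a blueprint-style workflow: each node is either
# plain deterministic code or a call into an agent loop. Not Stripe's API;
# all names are invented for illustration.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Node:
    name: str
    run: Callable[[dict], dict]  # takes and returns the shared run state
    agentic: bool = False        # True if this step delegates to an LLM agent

def checkout_branch(state: dict) -> dict:
    # Deterministic: always done the same way, so no LLM is involved.
    state["branch"] = f"minion/{state['task_id']}"
    return state

def implement_change(state: dict) -> dict:
    # Agentic: the ambiguous part is delegated to an agent loop
    # (stubbed here with a placeholder string).
    state["diff"] = f"agent output for: {state['task']}"
    return state

def run_blueprint(nodes: list[Node], state: dict) -> dict:
    # The workflow itself is ordinary, deterministic code.
    for node in nodes:
        state = node.run(state)
    return state

blueprint = [
    Node("checkout", checkout_branch),
    Node("implement", implement_change, agentic=True),
]
result = run_blueprint(blueprint, {"task_id": "42", "task": "fix lint"})
```

&lt;p&gt;The point is the shape: the orchestration is plain code, and only the nodes marked agentic hand control to a model.&lt;/p&gt;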
&lt;p&gt;Second, their centralized MCP server:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;We built a centralized internal MCP server called Toolshed, which makes it easy for Stripe engineers to author new tools and make them automatically discoverable to our agentic systems. All our agentic systems are able to use Toolshed as a shared capability layer; adding a tool to Toolshed immediately grants capabilities to our whole fleet of hundreds of different agents.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;A shared tool layer that all agents can use… 500 tools, one server, hundreds of agents. Very cool idea.&lt;/p&gt;
&lt;p&gt;And third, what they call “shifting feedback left”:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;We have pre-push hooks to fix the most common lint issues. A background daemon precomputes lint rule heuristics that apply to a change and caches the results of running those lints, so developers can usually get lint fixes in well under a second on a push.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;If you can catch a problem before it hits CI, do it there. A sub-second lint fix on push is better than a 10-minute CI failure, whether you’re a person or an LLM burning tokens.&lt;/p&gt;
&lt;p&gt;So much of Stripe’s agent success is built on top of investments they made for &lt;em&gt;human&lt;/em&gt; developer productivity. Good dev environments, fast feedback loops, shared tooling. The agents benefit from all of it, and developers remain in control.&lt;/p&gt;
&lt;br&gt;&lt;br&gt;&lt;hr&gt;Thanks for still believing in RSS! Feel free to &lt;a href=&quot;https://elezea.com/contact&quot;&gt;get in touch&lt;/a&gt;.</content:encoded><author>Rian van der Merwe</author></item><item><title>Project Brains: Organizing Complex Initiatives for AI-Assisted Work</title><link>https://elezea.com/2026/02/project-brains-organizing-complex-initiatives-for-ai-assisted-work/</link><guid isPermaLink="true">https://elezea.com/2026/02/project-brains-organizing-complex-initiatives-for-ai-assisted-work/</guid><description>How a simple folder structure with a living CONTEXT.md file eliminates context fragmentation and makes AI assistants dramatically more useful on complex projects.</description><pubDate>Tue, 24 Feb 2026 01:46:50 GMT</pubDate><content:encoded>&lt;p&gt;I’ve written before about &lt;a href=&quot;https://elezea.com/2025/12/ai-for-product-management/&quot;&gt;how I use AI for product work&lt;/a&gt; and &lt;a href=&quot;https://elezea.com/2026/01/how-to-set-up-opencode-as-your-product-second-brain/&quot;&gt;how that workflow evolved&lt;/a&gt; with slash commands and skills. This post focuses on how to maintain context for complex, long-running projects.&lt;/p&gt;
&lt;h2&gt;The Problem: Context Fragmentation&lt;/h2&gt;
&lt;p&gt;When I’m working on a major initiative, relevant information ends up scattered everywhere: PRDs in one tool, tickets in another, meeting notes in a third, plus emails and chat threads. Every time I return to a project after a few days, I spend time reconstructing where things stand.&lt;/p&gt;
&lt;p&gt;AI assistants can make this worse because each conversation starts fresh. I can reference files, but the model doesn’t know which files matter for this project, what decisions we’ve already made, or what questions remain open. I end up re-explaining context that should be obvious.&lt;/p&gt;
&lt;p&gt;Project brains solve this by creating a dedicated folder for each major initiative with a standard structure that both humans and AI can navigate.&lt;/p&gt;
&lt;h2&gt;What a Project Brain Looks Like&lt;/h2&gt;
&lt;p&gt;The structure looks like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;projects/[project-name]/
├── CONTEXT.md        # The hub: status, stakeholders, decisions, open questions
├── artifacts/        # PRDs, specs, designs, one-pagers
├── decisions/        # Decision logs with rationale and alternatives
├── research/         # Customer feedback, data analysis, technical investigation
└── meetings/         # Meeting notes related to this project
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;CONTEXT.md&lt;/code&gt; file is a living document that answers the questions I’d need to answer every time I pick up a project:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;What’s the current status?&lt;/li&gt;
&lt;li&gt;Who are the stakeholders and what do they care about?&lt;/li&gt;
&lt;li&gt;What decisions have we made and why?&lt;/li&gt;
&lt;li&gt;What questions are still open?&lt;/li&gt;
&lt;li&gt;Where are the relevant artifacts?&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;When I start a conversation about a project, I point the AI to the project folder. It reads &lt;code&gt;CONTEXT.md&lt;/code&gt; first, then can drill into specific artifacts as needed. The model immediately knows the project state without me explaining it.&lt;/p&gt;
&lt;h2&gt;A Real Example&lt;/h2&gt;
&lt;p&gt;Say I’m working on adding observability to an internal platform—something that needs coordination across multiple teams over several months. The &lt;code&gt;CONTEXT.md&lt;/code&gt; includes:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Quick reference table:&lt;/strong&gt; Status, PM, engineering lead, target dates, links to the PRD and relevant tickets. Everything I’d need to orient myself.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Problem statement:&lt;/strong&gt; A clear articulation of the user pain. In this case: “Platform incidents go undetected until users report them, and debugging takes hours due to lack of visibility.”&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Success metrics with baselines and targets:&lt;/strong&gt; Things like uptime targets, reduction in mean time to resolution, and alert accuracy. These anchor every conversation about scope.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Key decisions made:&lt;/strong&gt; A table showing what was decided, when, why, and what alternatives we considered. When someone asks “why aren’t we including component X in v1?”, the answer is already documented.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Open questions:&lt;/strong&gt; A checklist of unresolved issues. This prevents the AI from assuming things are settled when they’re not.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Links:&lt;/strong&gt; Direct paths to the PRD, spec, analysis docs, and related pages.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The &lt;code&gt;decisions/&lt;/code&gt; folder contains detailed decision logs for significant choices. The &lt;code&gt;research/&lt;/code&gt; folder holds whatever analysis informed the project direction. The &lt;code&gt;meetings/&lt;/code&gt; folder captures sync notes that would otherwise disappear into Gemini notes in a Google Drive… somewhere.&lt;/p&gt;
&lt;h2&gt;When to Create a Project Brain&lt;/h2&gt;
&lt;p&gt;Not every task needs this treatment. I create a project brain when:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;The work spans multiple weeks or months.&lt;/strong&gt; Short-term tasks don’t need the overhead.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Multiple stakeholders are involved.&lt;/strong&gt; If I need to coordinate with other teams, having a single source of context helps.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Decisions require documented rationale.&lt;/strong&gt; If someone might ask “why did you do it this way?” later, a decision log is worth the investment.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;The project crosses team boundaries.&lt;/strong&gt; Cross-functional initiatives benefit from dedicated context that doesn’t live in any one team’s space.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For simpler work, I use a flatter folder structure with documents organized by type. Project brains are for the complex initiatives where context fragmentation is a real cost.&lt;/p&gt;
&lt;h2&gt;How AI Uses Project Brains&lt;/h2&gt;
&lt;p&gt;The payoff comes when I’m working with AI on project-specific tasks. A few examples:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Preparing for a meeting:&lt;/strong&gt; “Read the &lt;code&gt;CONTEXT.md&lt;/code&gt; in the [project] folder. I have a spec review meeting tomorrow. What are the open questions I should raise?”&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Drafting an update:&lt;/strong&gt; “Based on the project context, draft a status update for leadership. Focus on progress since the start of the month and remaining blockers.”&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Decision analysis:&lt;/strong&gt; “We need to decide whether to include [component] in scope. Read the research folder and the current &lt;code&gt;CONTEXT.md&lt;/code&gt;. What would you recommend and why?”&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The AI knows the project history, the stakeholders, the constraints. Its recommendations are grounded in documented context rather than generic best practices.&lt;/p&gt;
&lt;h2&gt;Maintaining the Project Brain&lt;/h2&gt;
&lt;p&gt;The value depends on keeping &lt;code&gt;CONTEXT.md&lt;/code&gt; current. I’ve found a few practices help:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Update after significant events.&lt;/strong&gt; When a decision is made, a meeting happens, or the status changes, update the file immediately. “I’ll do it later” means it won’t happen. LLMs are great at making these updates, so you can simply say “update relevant files based on the session we just concluded.”&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Move open questions to resolved.&lt;/strong&gt; When a question gets answered, don’t delete it. Mark it resolved and note the answer. This preserves the reasoning trail.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Link, don’t duplicate.&lt;/strong&gt; &lt;code&gt;CONTEXT.md&lt;/code&gt; should point to artifacts, not contain them. Keep PRDs in the artifacts folder. Keep meeting notes in the meetings folder. The context file is a hub, not a repository.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Scaffolding New Projects&lt;/h2&gt;
&lt;p&gt;I have a slash command that scaffolds new project brains:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;/new-project platform-observability
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This creates the folder structure, generates a &lt;code&gt;CONTEXT.md&lt;/code&gt; from a template, and fills out a rough draft based on whatever context I provide. Removing the friction of setup means I’m more likely to actually use the system. You can &lt;a href=&quot;https://github.com/rianvdm/product-ai-public/blob/main/.opencode/command/new-project.md&quot;&gt;view the command here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The template includes the standard sections (Quick Reference, Problem Statement, Success Metrics, etc.) with placeholder text. I fill in what I know and mark other sections as TBD. Even an incomplete project brain is more useful than scattered notes.&lt;/p&gt;
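&lt;p&gt;For reference, here&apos;s a minimal sketch of what that scaffolding amounts to, assuming the folder and section names from this post (the real slash command does more, like drafting content from provided context):&lt;/p&gt;

```python
# Rough sketch of a project-brain scaffold: create the standard folders and
# a CONTEXT.md stub with placeholder sections. Folder and section names
# follow this post; everything else is illustrative.
from pathlib import Path

SECTIONS = ["Quick Reference", "Problem Statement", "Success Metrics",
            "Key Decisions", "Open Questions", "Links"]

def new_project(name: str, root: str = "projects") -> Path:
    base = Path(root) / name
    # The standard subfolders: artifacts, decisions, research, meetings.
    for sub in ["artifacts", "decisions", "research", "meetings"]:
        (base / sub).mkdir(parents=True, exist_ok=True)
    # CONTEXT.md is the hub: one heading per standard section, marked TBD.
    body = f"# {name}\n\n" + "".join(f"## {s}\n\nTBD\n\n" for s in SECTIONS)
    (base / "CONTEXT.md").write_text(body)
    return base

new_project("platform-observability")
```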
&lt;h2&gt;What I’ve Learned&lt;/h2&gt;
&lt;p&gt;A few observations from using this approach:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Structure beats volume.&lt;/strong&gt; A well-organized project brain with sparse content is more useful than a folder full of undifferentiated documents. The AI (and future me) can navigate structure. It can’t navigate chaos.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Decision logs compound.&lt;/strong&gt; Every decision I document now saves time later. When stakeholders ask “why didn’t we do X?”, I can point to a decision log instead of reconstructing my reasoning from memory.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;code&gt;CONTEXT.md&lt;/code&gt; is for humans too.&lt;/strong&gt; I originally built this for AI assistance, but I reference these files constantly in my own work. The discipline of maintaining project context helps me stay oriented, not just the AI.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;The folder structure is flexible.&lt;/strong&gt; Some projects need more subfolders (like &lt;code&gt;research/customer-interviews/&lt;/code&gt;). Some need fewer. The template is a starting point, not a requirement.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This approach requires discipline to maintain, and the upfront setup takes time. But for complex initiatives where context fragmentation is a real problem, project brains have been worth the investment. The AI becomes a more useful collaborator when it has access to the same context I do.&lt;/p&gt;
&lt;p&gt;I’m still iterating on the structure. I suspect the template will look different six months from now as I learn what sections actually get used and which ones I skip every time. The point isn’t to get the folder structure perfect, but to stop losing context between conversations and start building on what you already know.&lt;/p&gt;
&lt;br&gt;&lt;br&gt;&lt;hr&gt;Thanks for still believing in RSS! Feel free to &lt;a href=&quot;https://elezea.com/contact&quot;&gt;get in touch&lt;/a&gt;.</content:encoded><author>Rian van der Merwe</author></item><item><title>The A.I. Disruption Has Arrived, and It Sure Is Fun</title><link>https://elezea.com/2026/02/the-a-i-disruption-has-arrived-and-it-sure-is-fun/</link><guid isPermaLink="true">https://elezea.com/2026/02/the-a-i-disruption-has-arrived-and-it-sure-is-fun/</guid><description>Paul Ford on vibe coding and what it means when software becomes cheap and fast to ship—acknowledging every objection while pointing at history, and ending with a quiet question about whether the trade might actually be worth it.</description><pubDate>Sat, 21 Feb 2026 17:09:08 GMT</pubDate><content:encoded>&lt;p&gt;Paul Ford &lt;a href=&quot;https://www.nytimes.com/2026/02/18/opinion/ai-software.html?unlocked_article_code=1.N1A.OHAz.2tA0w960TSwX&amp;amp;smid=url-share&quot;&gt;writes about vibe coding for the NYT&lt;/a&gt; (gift link) and what happens when software suddenly becomes cheap and fast to ship:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;There are many arguments against vibe coding through A.I. It is an ecological disaster, with data centers consuming billions of gallons of water for cooling each year; it can generate bad, insecure code; it creates cookie-cutter apps instead of real, thoughtful solutions; the real value is in people, not software. All of these are true and valid. But I’ve been around too long. The web wasn’t “real” software until it was. Blogging wasn’t publishing. Big, serious companies weren’t going to migrate to the cloud, and then one day they did.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;And then he brings it home in a way that continues to make him one of my favorite web writers:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The simple truth is that I am less valuable than I used to be. It stings to be made obsolete, but it’s fun to code on the train, too. And if this technology keeps improving, then everyone who tells me how hard it is to make a report, place an order, upgrade an app or update a record — they could get the software they deserve, too. That might be a good trade, long term.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;We can grieve what we lost while also being optimistic about the future AI is unlocking for all of us. It’s uncomfortable, but that’s OK; all technological shifts are.&lt;/p&gt;
&lt;br&gt;&lt;br&gt;&lt;hr&gt;Thanks for still believing in RSS! Feel free to &lt;a href=&quot;https://elezea.com/contact&quot;&gt;get in touch&lt;/a&gt;.</content:encoded><author>Rian van der Merwe</author></item><item><title>The Father-Daughter Divide</title><link>https://elezea.com/2026/02/the-father-daughter-divide/</link><guid isPermaLink="true">https://elezea.com/2026/02/the-father-daughter-divide/</guid><description>Isabel Woodford&apos;s Atlantic piece on the father-daughter divide finds that 28% of American women are estranged from their fathers—and the root of it is simpler than you&apos;d expect: daughters want emotional closeness, and many dads don&apos;t know how to give it.</description><pubDate>Sat, 21 Feb 2026 16:56:23 GMT</pubDate><content:encoded>&lt;p&gt;Isabel Woodford has a research-heavy &lt;a href=&quot;https://www.theatlantic.com/family/2026/02/father-daughter-divide/684466/&quot;&gt;essay in The Atlantic&lt;/a&gt; about why dads and daughters crave closeness but struggle to find it. 28% of American women are estranged from their father, and even where relationships are intact, they tend to be thinner—more transactional, less emotionally honest—than daughters want.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;At the root of the modern father-daughter divide seems to be a mismatch in expectations. Fathers, generally speaking, have for generations been less involved than mothers in their kids’ (and especially their daughters’) lives. But lots of children today expect more: more emotional support and more egalitarian treatment. Many fathers, though, appear to have struggled to adjust to their daughters’ expectations. The result isn’t a relationship that has suddenly ruptured so much as one that has failed to fully adapt.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;And the psychological explanation that cuts deepest:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;“What generates closeness is another person’s vulnerability,” Coleman explained, and dads may not be ready for that.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Daughters aren’t asking for grand gestures or dramatic change—they’re asking for their fathers to show up emotionally. Which turns out to be hard for a lot of men who were raised to see that kind of openness as weakness.&lt;/p&gt;
&lt;br&gt;&lt;br&gt;&lt;hr&gt;Thanks for still believing in RSS! Feel free to &lt;a href=&quot;https://elezea.com/contact&quot;&gt;get in touch&lt;/a&gt;.</content:encoded><author>Rian van der Merwe</author></item><item><title>The AI baseline has moved</title><link>https://elezea.com/2026/02/the-ai-baseline-has-moved/</link><guid isPermaLink="true">https://elezea.com/2026/02/the-ai-baseline-has-moved/</guid><description>Geoffrey Huntley argues that simply using AI tools is now table stakes for employment. The real differentiator is understanding them deeply enough to automate your own job function.</description><pubDate>Sat, 21 Feb 2026 16:53:14 GMT</pubDate><content:encoded>&lt;p&gt;Geoffrey Huntley wrote about &lt;a href=&quot;https://ghuntley.com/teleport/&quot;&gt;what happens when people finally “get” AI&lt;/a&gt;:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;If you’re having trouble sleeping because of all the things that you want to create, congratulations. You’ve made it through to the other side of the chasm, and you are developing skills that employers in 2026 are expecting as a bare minimum.&lt;/p&gt;
&lt;p&gt;The only question that remains is whether you are going to be a consumer of these tools or someone who understands them deeply and automates your job function? Trust me, you want to be in the latter camp because consumption is now the baseline for employment.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Knowing &lt;em&gt;how to use&lt;/em&gt; these tools is no longer a differentiator. The gap is between people who consume AI outputs and people who understand the systems well enough to build on top of them.&lt;/p&gt;
&lt;p&gt;For product managers, this means that prompting ChatGPT for a first draft doesn’t count as an AI skill anymore. The question is whether you can wire together agents, automate your own workflows, and spot opportunities others miss because they’re still thinking in manual processes.&lt;/p&gt;
&lt;br&gt;&lt;br&gt;&lt;hr&gt;Thanks for still believing in RSS! Feel free to &lt;a href=&quot;https://elezea.com/contact&quot;&gt;get in touch&lt;/a&gt;.</content:encoded><author>Rian van der Merwe</author></item></channel></rss>