<?xml version="1.0" encoding="UTF-8"?><?xml-stylesheet href="https://elezea.com/wp-content/themes/elz_2023/styles/pretty-feed-v3.xsl" type="text/xsl"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/"
  xmlns:wfw="http://wellformedweb.org/CommentAPI/" xmlns:dc="http://purl.org/dc/elements/1.1/"
  xmlns:atom="http://www.w3.org/2005/Atom" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
  xmlns:slash="http://purl.org/rss/1.0/modules/slash/" >
  <channel>
    <title>Elezea by Rian van der Merwe - RSS Feed</title>
    <atom:link href="https://elezea.com/category/uncategorized/feed/" rel="self" type="application/rss+xml" />
    <link>https://elezea.com/category/uncategorized/</link>
    <description>A personal blog about product, technology, and interesting things that are worth sharing.</description>
    <lastBuildDate>Thu, 02 Apr 2026 17:43:52 +0000</lastBuildDate>
    <language>en-US</language>
    <sy:updatePeriod>hourly</sy:updatePeriod>
    <sy:updateFrequency>1</sy:updateFrequency>
    <generator>https://wordpress.org/?v=6.9.4</generator>
          <item>
        <title>Endgame for the open web</title>
        <link>https://elezea.com/2026/03/endgame-for-the-open-web/</link>
        <pubDate>Sun, 29 Mar 2026 17:57:15 +0000</pubDate>
        <dc:creator>Rian van der Merwe</dc:creator>
        <guid isPermaLink="false">https://elezea.com/2026/03/endgame-for-the-open-web/</guid>
        <description>
          <![CDATA[Anil Dash defines the open web as the radical ability to create and share using open specs, free platforms, and no gatekeepers—and argues that every aspect of that architecture is now under coordinated attack.]]>
        </description>
        <content:encoded>
          <![CDATA[<p>Anil Dash has <a href="https://anildash.com/2026/03/27/endgame-open-web/">a long essay on the state of the open web</a> and not all of it rings true for me, but buried in the opening is a wonderful definition of what the open web actually is:</p>
<blockquote>
<p>The open web is something extraordinary: anybody can use whatever tools they have, to create content following publicly documented specifications, published using completely free and open platforms, and then share that work with anyone, anywhere in the world, without asking for permission from anyone. Think about how radical that is.</p>
</blockquote>
<p>It does feel like, if the web were invented in 2026, it would <em>not</em> have remained an open technology for long (see also AI, and how far open source models are lagging behind).</p>
          <br>
          <br>
          <hr>
          Thanks for still believing in RSS! Get in touch <a href="https://elezea.com/contact">here</a> if you'd like.]]>
        </content:encoded>
                      </item>
          <item>
        <title>Negative space in writing</title>
        <link>https://elezea.com/2026/03/negative-space-in-writing/</link>
        <pubDate>Tue, 24 Mar 2026 00:14:41 +0000</pubDate>
        <dc:creator>Rian van der Merwe</dc:creator>
        <guid isPermaLink="false">https://elezea.com/2026/03/negative-space-in-writing/</guid>
        <description>
          <![CDATA[Tracy Durnell on how modern writing formats strip out the reflective pauses where readers build their own meaning.]]>
        </description>
        <content:encoded>
          <![CDATA[<p>Tracy Durnell explores <a href="https://tracydurnell.com/2026/03/06/non-visual-negative-space/">non-visual negative space</a>—what happens when writing leaves room for the reader to think:</p>
<blockquote>
<p>The current design trend of business and self-help style books is to use tons of subheadings and callout boxes and always, a list of the key points at the end of the chapter. While this is a highly skimmable format and often nice visual design, it essentially sucks the negative space out of the text — the places in which the reader might step back and consider their own examples or anticipate what point the author is trying to make. There&#8217;s no time for hunches here.</p>
</blockquote>
<p>And:</p>
<blockquote>
<p>The negative space of the text helps build the aesthetic experience. Small details flavor the text with a sense of reality. Drawing out events — leaving questions unresolved and conflicts unsettled — can build tension. And textual space creates a gap for the reader to make the personal decodings of the text that build meaning.</p>
</blockquote>
<p>Not everything has to get to the point immediately. Sometimes the best thing a writer can do is leave room for the reader to get there on their own. I&#8217;m thinking about this because I&#8217;m currently reading <a href="https://amzn.to/4rUrxmy">The Will of the Many</a>. It is slow, and long, and one of the best books I&#8217;ve read in ages. The negative space is probably a big reason why I love it so much.</p>
          <br>
          <br>
          <hr>
          Thanks for still believing in RSS! Get in touch <a href="https://elezea.com/contact">here</a> if you'd like.]]>
        </content:encoded>
                      </item>
          <item>
        <title>Agentic manual testing</title>
        <link>https://elezea.com/2026/03/agentic-manual-testing/</link>
        <pubDate>Mon, 23 Mar 2026 23:53:30 +0000</pubDate>
        <dc:creator>Rian van der Merwe</dc:creator>
        <guid isPermaLink="false">https://elezea.com/2026/03/agentic-manual-testing/</guid>
        <description>
          <![CDATA[Two practical tips from Simon Willison on testing with coding agents: write demo files to /tmp to keep repos clean, and use red/green TDD to turn manually discovered bugs into permanent automated tests.]]>
        </description>
        <content:encoded>
          <![CDATA[<p>Simon Willison has <a href="https://simonwillison.net/guides/agentic-engineering-patterns/agentic-manual-testing/">a practical guide on manual testing with coding agents</a>. Two tips I&#8217;ve already started using:</p>
<blockquote>
<p>It&#8217;s still quick for an agent to write out a demo file and then compile and run it. I sometimes encourage it to use /tmp purely to avoid those files being accidentally committed to the repository later on.</p>
</blockquote>
<p>And:</p>
<blockquote>
<p>If an agent finds something that doesn&#8217;t work through their manual testing, I like to tell them to fix it with red/green TDD. This ensures the new case ends up covered by the permanent automated tests.</p>
</blockquote>
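<p>The red/green loop Willison describes can be sketched in a few lines of Python. The <code>slugify</code> function and its bug are hypothetical, purely to show the shape of the loop: write a failing test for the manually discovered bug first (red), then fix the code until it passes (green), so the case stays covered forever:</p>

```python
# Red/green TDD sketch: the test below is written first, against the
# buggy behavior an agent found manually, and fails (red). The fix in
# slugify() then makes it pass (green) and keeps the case covered.
def slugify(title):
    # Fixed implementation; the hypothetical original bug left
    # uppercase characters in the slug.
    return title.lower().replace(" ", "-")

def test_slugify_lowercases():
    assert slugify("Open Web") == "open-web"  # failed before the fix

test_slugify_lowercases()
```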
          <br>
          <br>
          <hr>
          Thanks for still believing in RSS! Get in touch <a href="https://elezea.com/contact">here</a> if you'd like.]]>
        </content:encoded>
                      </item>
          <item>
        <title>From Assistant to Collaborator: How My AI Second Brain Grew Up</title>
        <link>https://elezea.com/2026/03/from-assistant-to-collaborator-how-my-ai-second-brain-grew-up/</link>
        <pubDate>Sat, 14 Mar 2026 15:13:36 +0000</pubDate>
        <dc:creator>Rian van der Merwe</dc:creator>
        <guid isPermaLink="false">https://elezea.com/?p=10834</guid>
        <description>
          <![CDATA[Over the past few months my AI second brain crossed a threshold from tool I invoke to collaborator I dispatch — driven by multi-agent workflows, cross-session memory, and deep domain expertise.]]>
        </description>
        <content:encoded>
          <![CDATA[<p>Over the past few months I’ve been writing about how I use AI for product work. The <a href="https://elezea.com/2025/12/ai-for-product-management/">first post</a> covered the philosophy: context files, opinionated prompts, and how to compose the right inputs for each task. The <a href="https://elezea.com/2025/12/how-my-ai-product-second-brain-evolved/">second</a> added slash commands and daily summaries. The <a href="https://elezea.com/2026/01/how-to-set-up-opencode-as-your-product-second-brain/">third</a> was a hands-on setup guide. And the <a href="https://elezea.com/2026/02/project-brains-organizing-complex-initiatives-for-ai-assisted-work/">fourth</a> introduced project brains for keeping complex initiatives organized.</p>
<p>This post covers a different kind of change. The earlier additions were incremental: more commands, better context, smoother workflows. What changed recently feels more like a threshold. The system graduated from a tool I invoke for specific tasks to something closer to a collaborator I dispatch to do real work. Three capabilities drove that shift: multi-agent orchestration, cross-session memory, and the encoding of domain expertise into the system itself.</p>
<h2>Multi-Agent Workflows</h2>
<p>The clearest example is customer escalation investigations. As a PM for data products, I regularly investigate customer-reported issues: logging gaps, data discrepancies, behavior that doesn’t match expectations. These investigations require pulling information from multiple sources and cross-referencing it all into an analysis that engineering can act on.</p>
<p>I built a <a href="https://github.com/rianvdm/product-ai-public/blob/main/04-docs/multi-agent-investigation.md">slash command that handles this as a multi-phase workflow</a>. When I run it with a ticket ID, here’s what happens:</p>
<ol>
<li>The system reads the customer ticket, extracts the core problem, identifies which product area is involved, and classifies the issue type.</li>
<li>Three specialist agents launch simultaneously, each focused on a different data source. One searches the codebase for the relevant logic and recent changes. Another searches for related tickets and prior incidents across projects. A third checks documentation and internal wiki pages for relevant operational context.</li>
<li>A fourth agent receives the combined findings and produces database queries that can confirm or refute the working hypothesis.</li>
<li>The system combines everything into a structured analysis: issue classification, root cause anchored in code where possible, customer impact, and recommended next steps.</li>
<li>A <a href="https://github.com/rianvdm/product-ai-public/blob/main/.opencode/agent/blind-validator.md">blind validator</a> independently re-fetches every source cited in the draft to verify the claims hold up. Then an adversarial challenger looks for alternative explanations and tests whether the classification is correct.</li>
</ol>
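<p>The fan-out/fan-in shape of phases 2 and 4 is the core of the pattern. Here is a minimal Python sketch of just that shape; the specialist functions (<code>search_codebase</code>, <code>search_tickets</code>, <code>search_docs</code>) are hypothetical stand-ins for real agent dispatch, which would carry its own prompts and tools:</p>

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for the three specialist agents; in the real
# system each of these dispatches a sub-agent against a data source.
def search_codebase(ticket):
    return f"code findings for {ticket['id']}"

def search_tickets(ticket):
    return f"related tickets for {ticket['id']}"

def search_docs(ticket):
    return f"doc context for {ticket['id']}"

def investigate(ticket):
    """Phase 2: run the specialists in parallel (fan-out), then
    combine their findings into one analysis object (fan-in)."""
    specialists = [search_codebase, search_tickets, search_docs]
    with ThreadPoolExecutor(max_workers=len(specialists)) as pool:
        futures = [pool.submit(s, ticket) for s in specialists]
        findings = [f.result() for f in futures]  # wait for all
    return {"ticket": ticket["id"], "findings": findings}

analysis = investigate({"id": "ESC-1234"})
```

<p>The synthesis and validation phases would consume <code>analysis</code> downstream; the point of the sketch is only that the specialists run independently and the orchestrator blocks until all of them report back.</p>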
<p>The output is a document I can review with an engineering colleague or paste into a chat thread. It includes a confidence assessment and a data collection status table showing what was checked and what was unavailable, along with how the analysis compensated for gaps.</p>
<p>The command file that orchestrates all of this isn’t prompting in the traditional sense. It defines which agents to dispatch, what information each one needs, when to wait for results before proceeding, and how to handle failures gracefully. Writing this felt more like designing a workflow than writing a prompt.</p>
<p>I’ve applied the same pattern to other tasks. A “fix feasibility” command evaluates whether a ticket describes a code change simple enough for a PM to implement with AI coding assistance, and produces an implementation brief if the answer is yes. The specific use cases differ, but the architecture is the same: break the problem into specialist tasks that run in parallel, then synthesize and validate the results.</p>
<h2>Cross-Session Memory</h2>
<p>AI conversations are stateless by default. Every new session starts from zero, which means re-explaining context that should already be established. Over a few weeks of working on the same projects, this friction adds up.</p>
<p>I addressed this with a <a href="https://github.com/rianvdm/product-ai-public/blob/main/04-docs/cross-session-memory.md">four-layer memory system</a>:</p>
<ul>
<li>The first layer is <a href="https://github.com/rianvdm/product-ai-public/blob/main/01-context/stable-facts-template.md">stable facts</a>: a compact file that captures the current state of all active work, including project status, recent decisions, and environment constraints. This is the primary orientation file. When I start a session, the AI reads it and immediately knows what’s in flight.</li>
<li>The second is a session log: a reverse-chronological list of handoff notes. Each entry records what happened in a session and what threads remain open. The last three entries give enough context to pick up where I left off.</li>
<li>Third, a corrections file. This holds behavioral fixes for things the AI consistently gets wrong. It’s a staging area that should shrink over time as fixes get promoted elsewhere.</li>
<li>And finally, a decisions log: a cross-cutting record of decisions that don’t belong to a specific project. Each entry captures context and rationale so I don’t relitigate settled questions.</li>
</ul>
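<p>The session log's reverse-chronological discipline is simple to picture in code. A minimal sketch, assuming a plain-text log and a hypothetical <code>prepend_handoff</code> helper standing in for what a session-end step would do:</p>

```python
from datetime import date

def prepend_handoff(log_text, summary, open_threads):
    """Prepend a handoff note so the log stays reverse-chronological:
    the newest entry is always at the top, and reading the first few
    entries is enough to pick up where the last session left off."""
    entry = (
        f"## {date.today().isoformat()}\n"
        f"{summary}\n"
        "Open threads: " + ", ".join(open_threads)
    )
    return entry + "\n\n" + log_text

log = prepend_handoff(
    "## 2026-03-10\nPrior session notes.",
    "Investigated ESC-1234; root cause in delivery pipeline.",
    ["confirm fix with engineering"],
)
```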
<p>Two commands manage this. <a href="https://github.com/rianvdm/product-ai-public/blob/main/.opencode/command/session-start.md"><code>/session-start</code></a> loads all four files and presents a brief summary of current state and recent sessions. <a href="https://github.com/rianvdm/product-ai-public/blob/main/.opencode/command/session-end.md"><code>/session-end</code></a> reviews the conversation, writes a handoff note, and then checks whether any learnings should be promoted to infrastructure.</p>
<p>“Promote to infrastructure” means taking something learned during a session and baking it into the files the agent actually reads. A correction about how to handle a specific edge case in escalation investigations might start in the corrections file, then get promoted into the escalation command or a domain skill once it’s validated.</p>
<p>This creates a loop where the system improves its own instructions. I approve every change, so it’s not self-modifying in a creepy way. But in practice each work session can make the next one slightly better, and the compound effect over weeks is noticeable.</p>
<h2>Domain Expertise</h2>
<p>The earlier posts described skills like <a href="https://github.com/rianvdm/product-ai-public/blob/main/.opencode/skills/pm-thinking/SKILL.md"><code>pm-thinking</code></a>, which applies product methodology (problem-first thinking, measurable outcomes) to any PM-related conversation. That’s useful, but generic. It works the same way regardless of what product you’re building.</p>
<p>The bigger shift was building skills that encode institutional knowledge about specific products. I now have skills for each major product area my team owns: log delivery, analytics, audit logs, alerting, and data pipelines. Each skill contains the product’s architecture and common failure modes, along with which code repositories to search and which database tables hold relevant data.</p>
<p>This is what makes the multi-agent workflows genuinely useful. When the code investigator agent examines an escalation about missing logs, the domain skill tells it which service handles job state and which repository contains the delivery pipeline. It also flags recent architectural changes that might be relevant. Without that context, the agent produces plausible-sounding analysis that misses the specific details engineering needs.</p>
<p>Now every investigation that uses a skill validates or extends the knowledge it contains, and <code>/session-end</code> catches insights that should be added back.</p>
<h2>How The Work Changes</h2>
<p>A few practical observations from working this way:</p>
<ul>
<li>The role has shifted from “write the right prompt” to “design the right process.” The escalation command is a workflow with phases, dependencies, and validation steps. Thinking about it that way produces better results than trying to pack everything into a single conversation.</li>
<li>Validation has to be built in. The blind validator exists because agents make mistakes. They cite files that don’t exist, mischaracterize what code does, or draw conclusions the evidence doesn’t support. Catching those issues before they reach anyone else is the whole point.</li>
<li>Cross-session memory requires discipline. The system only works if I run <code>/session-end</code> after substantive sessions and keep stable facts current. When I skip it, the next session starts cold and I lose the compounding benefit. Automation helps, but the commitment to maintain the memory is mine.</li>
<li>And domain skills need regular maintenance. Products change. Code gets refactored, pipelines get rearchitected. Skills that aren’t periodically updated drift from reality. I haven’t solved this well yet. It’s still a manual process of noticing when a skill’s knowledge is stale and updating it.</li>
</ul>
<p>The system still makes mistakes. Multi-agent workflows are more thorough than single-prompt conversations, but they’re not infallible. The confidence assessment in the escalation output exists because sometimes the answer is “medium confidence, we couldn’t confirm this from the available data.” That honesty about limitations is more useful than false certainty.</p>
<h2>Where This Is Going</h2>
<p>I’m sure the specific commands and skills will look different in six months as I learn what works and what doesn’t. But the underlying pattern feels durable: compose specialist agents with deep domain context, validate their output, and feed learnings back into the system.</p>
<p>I’ve published updated files to the <a href="https://github.com/rianvdm/product-ai-public">Product AI Public repo</a>, including the session memory commands and a generalized version of the multi-agent escalation workflow. If you’re building something similar, those might be useful starting points.</p>
<p>The value of this system is in how the pieces reinforce each other. Domain skills make agents useful for real investigations. Session memory means the system gets smarter over time. And the promote-to-infrastructure loop ties it together, so each piece of work has a chance to make the next one better.</p>
          <br>
          <br>
          <hr>
          Thanks for still believing in RSS! Get in touch <a href="https://elezea.com/contact">here</a> if you'd like.]]>
        </content:encoded>
                      </item>
          <item>
        <title>When Using AI Leads to &#8220;Brain Fry&#8221;</title>
        <link>https://elezea.com/2026/03/when-using-ai-leads-to-brain-fry/</link>
        <pubDate>Sat, 14 Mar 2026 13:58:45 +0000</pubDate>
        <dc:creator>Rian van der Merwe</dc:creator>
        <guid isPermaLink="false">https://elezea.com/2026/03/when-using-ai-leads-to-brain-fry/</guid>
        <description>
          <![CDATA[Research shows that "AI brain fry" — cognitive exhaustion from overseeing AI agents — is real, and that productivity actually dips after using more than three AI tools simultaneously.]]>
        </description>
        <content:encoded>
          <![CDATA[<p>I am definitely <a href="https://hbr.org/2026/03/when-using-ai-leads-to-brain-fry">feeling the &quot;brain fry&quot;</a> right now:</p>
<blockquote>
<p>We found that the phenomenon described in these posts—cognitive exhaustion from intensive oversight of AI agents—is both real and significant. We call it “AI brain fry,” which we define as <em>mental fatigue from excessive use or oversight of AI tools beyond one’s cognitive capacity.</em> Participants described a “buzzing” feeling or a mental fog with difficulty focusing, slower decision-making, and headaches.</p>
</blockquote>
<p>The research is fascinating and worth reading, with super interesting findings like this:</p>
<blockquote>
<p> As employees go from using one AI tool to two simultaneously, they experience a significant increase in productivity. As they incorporate a third tool, productivity again increases, but at a lower rate. After three tools, though, productivity scores <em>dipped</em>. Multitasking is <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC7075496/">notoriously unproductive</a>, and yet we fall for its allure time and again.</p>
</blockquote>
<p>Earlier this week I had this thought: &quot;Oh no, I think I&#8217;ve blown out my context window. I wish I could add some more tokens to my brain. Until then I might just have to respond to new requests with <code>429 Too Many Requests</code>.&quot;</p>
<p>And that&#8217;s when I realized I probably need to go touch grass or something.</p>
          <br>
          <br>
          <hr>
          Thanks for still believing in RSS! Get in touch <a href="https://elezea.com/contact">here</a> if you'd like.]]>
        </content:encoded>
                      </item>
          <item>
        <title>AI should help us produce better code</title>
        <link>https://elezea.com/2026/03/ai-should-help-us-produce-better-code/</link>
        <pubDate>Sat, 14 Mar 2026 13:43:35 +0000</pubDate>
        <dc:creator>Rian van der Merwe</dc:creator>
        <guid isPermaLink="false">https://elezea.com/2026/03/ai-should-help-us-produce-better-code/</guid>
        <description>
          <![CDATA[Shipping worse code with AI agents is a choice. Simon Willison and Mitchell Hashimoto both argue we should engineer our processes so agents make our code better, not worse.]]>
        </description>
        <content:encoded>
          <![CDATA[<p>As usual, Simon Willison <a href="https://simonwillison.net/guides/agentic-engineering-patterns/better-code/">hits the nail on the head here</a>:</p>
<blockquote>
<p>If adopting coding agents demonstrably reduces the quality of the code and features you are producing, you should address that problem directly: figure out which aspects of your process are hurting the quality of your output and fix them. Shipping worse code with agents is a <em>choice</em>. We can choose to ship code <a href="https://simonwillison.net/guides/agentic-engineering-patterns/code-is-cheap/#good-code">that is better</a> instead.</p>
</blockquote>
<p>Also see Mitchell Hashimoto’s <a href="https://mitchellh.com/writing/my-ai-adoption-journey">idea of “harness engineering”</a>:</p>
<blockquote>
<p>It is the idea that anytime you find an agent makes a mistake, you take the time to engineer a solution such that the agent never makes that mistake again.</p>
</blockquote>
          <br>
          <br>
          <hr>
          Thanks for still believing in RSS! Get in touch <a href="https://elezea.com/contact">here</a> if you'd like.]]>
        </content:encoded>
                      </item>
          <item>
        <title>On Meeting Your Child Again, and Again</title>
        <link>https://elezea.com/2026/03/on-meeting-your-child-again-and-again/</link>
        <pubDate>Sat, 07 Mar 2026 22:11:52 +0000</pubDate>
        <dc:creator>Rian van der Merwe</dc:creator>
        <guid isPermaLink="false">https://elezea.com/?p=10828</guid>
        <description>
          <![CDATA[Derek Thompson writes about three reasons to be a parent—the most compelling being that parenthood means constantly meeting new versions of your child, "a permanent relationship with strangers, plural to the extreme."]]>
        </description>
        <content:encoded>
          <![CDATA[<p>Derek Thompson wrote a <a href="https://www.derekthompson.org/p/three-reasons-to-be-a-parent">wonderful essay on what happens when you become a parent</a>:</p>
<blockquote>
<p>The baby you bring home from the hospital is not the baby you rock to sleep at two weeks, and the baby at three months is a complete stranger to both. In a phenomenological sense, parenting a newborn is not at all like parenting &quot;a&quot; singular newborn, but rather like parenting hundreds of babies, each one replacing the previous week&#8217;s child, yet retaining her basic facial structure. &quot;Parenthood abruptly catapults us into a permanent relationship with a stranger,&quot; Andrew Solomon wrote in <em>Far From the Tree</em>. Almost. Parenthood catapults us into a permanent relationship with <em>strangers</em>, plural to the extreme.</p>
</blockquote>
          <br>
          <br>
          <hr>
          Thanks for still believing in RSS! Get in touch <a href="https://elezea.com/contact">here</a> if you'd like.]]>
        </content:encoded>
                      </item>
          <item>
        <title>Why It&#8217;s Still Valuable To Learn To Code</title>
        <link>https://elezea.com/2026/03/why-its-still-valuable-to-learn-to-code/</link>
        <pubDate>Fri, 06 Mar 2026 22:47:32 +0000</pubDate>
        <dc:creator>Rian van der Merwe</dc:creator>
        <guid isPermaLink="false">https://elezea.com/2026/03/while-its-still-valuable-to-learn-to-code/</guid>
        <description>
          <![CDATA[Carson Gross's essay on AI and junior programmers applies just as much to product managers: you can't build an effective AI-powered workflow until you've spent years developing the underlying judgment it's meant to amplify.]]>
        </description>
        <content:encoded>
          <![CDATA[<p>Carson Gross <a href="https://htmx.org/essays/yes-and/">has a good essay on whether junior programmers should still learn to code</a> given how capable AI has become. His core warning to students:</p>
<blockquote>
<p>Yes, AI can generate the code for this assignment. Don&#8217;t let it. You <em>have</em> to write the code. I explain that, if they don&#8217;t write the code, they will not be able to effectively <em>read</em> the code. The ability to read code is certainly going to be valuable, maybe <em>more</em> valuable, in an AI-based coding future. If you can&#8217;t read the code you are going to fall into <a href="https://www.youtube.com/watch?v=m-W8vUXRfxU">The Sorcerer&#8217;s Apprentice Trap</a>, creating systems <a href="https://www.youtube.com/watch?v=GFiWEjCedzY">you don&#8217;t understand and can&#8217;t control</a>.</p>
</blockquote>
<p>And on what separates senior engineers who can use AI well from those who can&#8217;t:</p>
<blockquote>
<p>Senior programmers who already have a lot of experience from the pre-AI era are in a good spot to use LLMs effectively: they know what &#8216;good&#8217; code looks like, they have experience with building larger systems and know what matters and what doesn&#8217;t. The danger with senior programmers is that they stop programming entirely and start suffering from brain rot.</p>
</blockquote>
<p>This maps directly onto what I&#8217;ve been writing about with <a href="https://elezea.com/2025/12/ai-for-product-management/">AI for product work</a> and the <a href="https://elezea.com/2025/12/how-my-ai-product-second-brain-evolved/">second brain setup I&#8217;ve built</a>. The system works because I spent years writing and reading PRDs, strategy docs, and OKRs—enough to develop actual opinions about what good looks like. You have to do the work first, <em>then</em> the second brain is worth building.</p>
          <br>
          <br>
          <hr>
          Thanks for still believing in RSS! Get in touch <a href="https://elezea.com/contact">here</a> if you'd like.]]>
        </content:encoded>
                      </item>
      </channel>
</rss>