<?xml version="1.0" encoding="UTF-8"?><?xml-stylesheet href="https://elezea.com/wp-content/themes/elz_2023/styles/pretty-feed-v3.xsl" type="text/xsl"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/"
  xmlns:wfw="http://wellformedweb.org/CommentAPI/" xmlns:dc="http://purl.org/dc/elements/1.1/"
  xmlns:atom="http://www.w3.org/2005/Atom" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
  xmlns:slash="http://purl.org/rss/1.0/modules/slash/" >
  <channel>
    <title>Elezea by Rian van der Merwe - RSS Feed</title>
    <atom:link href="https://elezea.com/2025/12/measuring-ais-impact-on-shipping-speed-and-code-quality/feed/" rel="self" type="application/rss+xml" />
    <link>https://elezea.com/2025/12/measuring-ais-impact-on-shipping-speed-and-code-quality/</link>
    <description>A personal blog about product, technology, and interesting things that are worth sharing.</description>
    <lastBuildDate>Thu, 02 Apr 2026 17:43:52 +0000</lastBuildDate>
    <language>en-US</language>
    <sy:updatePeriod>hourly</sy:updatePeriod>
    <sy:updateFrequency>1</sy:updateFrequency>
    <generator>https://wordpress.org/?v=6.9.4</generator>
          <item>
        <title>Measuring AI&#8217;s Impact on Shipping Speed and Code Quality</title>
        <link>https://elezea.com/2025/12/measuring-ais-impact-on-shipping-speed-and-code-quality/</link>
        <pubDate>Tue, 16 Dec 2025 19:18:21 +0000</pubDate>
        <dc:creator>Rian van der Merwe</dc:creator>
        <guid isPermaLink="false">https://elezea.com/?p=10714</guid>
        <description>
          <![CDATA[Will Larson has a good post about how they&#8217;re adopting AI at his company. The process is interesting, but this is the part that jumped out at me: My biggest fear for AI adoption is that [teams] can focus on creating the impression of adopting AI, rather than focusing on creating additional productivity. Optics are [&#8230;]]]>
        </description>
        <content:encoded>
          <![CDATA[<p>Will Larson <a href="https://lethain.com/company-ai-adoption/">has a good post</a> about how they&#8217;re adopting AI at his company. The process is interesting, but this is the part that jumped out at me:</p>
<blockquote>
<p>My biggest fear for AI adoption is that [teams] can focus on creating the impression of adopting AI, rather than focusing on creating additional productivity. Optics are a core part of any work, but almost all interesting work occurs where optics and reality intersect.</p>
</blockquote>
<p>It&#8217;s really hard to figure out whether AI tools are (1) helping teams ship faster and (2) doing so without sacrificing quality.</p>
<p>We&#8217;re working on exactly this problem right now at Cloudflare. Our proposed approach sidesteps the problem of per-commit AI attribution (did Copilot write this line? did Claude?) by correlating team-level AI tool usage with team-level health metrics over time. If a team&#8217;s AI adoption increases by 30% and their change failure rate stays stable, that&#8217;s a useful signal. If AI usage spikes and incidents start trending up, that&#8217;s worth investigating.</p>
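<p>To make that concrete, here&#8217;s a rough sketch of what the correlation check could look like. This is a minimal illustration, not our actual implementation, and the weekly numbers and metric names are made up:</p>
<pre><code>from statistics import correlation  # stdlib, Python 3.10+; Pearson's r

# Hypothetical weekly numbers for a single team (illustrative only).
# Share of the team's merged changes that used an AI assistant:
ai_adoption = [0.10, 0.15, 0.22, 0.30, 0.35, 0.41, 0.48, 0.55]

# Change failure rate over the same weeks (failed deploys / total deploys):
change_failure_rate = [0.06, 0.05, 0.07, 0.06, 0.05, 0.06, 0.07, 0.06]

r = correlation(ai_adoption, change_failure_rate)
print(f"Pearson r: {r:.2f}")

# r near zero is the good case: adoption climbed while quality held steady.
# A strongly positive r is the "incidents trending up" signal to dig into.
</code></pre>
<p>In practice you&#8217;d run this per team and per health metric (change failure rate, incident counts, and so on), since the team-level view over time is what makes the signal interpretable.</p>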
<p>The key insight is that you don&#8217;t need perfect attribution to get directionally useful data. Correlation isn&#8217;t causation, and teams adopting AI tools may already be more experimental or higher-performing. But at least you&#8217;re measuring something real instead of something like &#8220;# of lines written by AI&#8221;, which leads straight to the <a href="https://en.wikipedia.org/wiki/Goodhart%27s_law">Goodhart&#8217;s Law</a> problem where metrics become targets.</p>
          <br>
          <br>
          <hr>
          Thanks for still believing in RSS! Get in touch <a href="https://elezea.com/contact">here</a> if you&#8217;d like.]]>
        </content:encoded>
                      </item>
      </channel>
</rss>