Will Larson has a good post about implementing “Agent Skills” in their internal agent framework at Imprint. The whole piece is worth reading, but I wanted to highlight this observation:
Humans make mistakes all the time. For example, I’ve seen many dozens of JIRA tickets from humans that don’t explain the actual problem they are having. People are used to that, and when a human makes a mistake, they blame the human. However, when agents make a mistake, a surprising percentage of people view it as a fundamental limitation of agents as a category, rather than thinking that, “Oh, I should go update that prompt.”
There’s a double standard at play here that I’ve noticed too. When a colleague writes a confusing document, we ask them to clarify. When an agent produces something confusing, we tend to smirk and declare the technology fundamentally broken.
The fix is often as simple as updating a prompt—the same way you’d coach a team member to write better tickets. Skills, in Larson’s implementation, are essentially reusable prompt snippets that encode learned behaviors across workflows. It’s the kind of organizational knowledge we build up with people over time, just made explicit.
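To make that concrete, here is a minimal sketch of what a skill-as-prompt-snippet mechanism might look like. This is my own illustration, not Larson’s actual implementation at Imprint: the `Skill` class, `build_system_prompt` function, and the example skill are all hypothetical names I’ve invented for the sake of the example.

```python
from dataclasses import dataclass


@dataclass
class Skill:
    """A reusable prompt snippet encoding a learned behavior.

    (Hypothetical structure -- not Larson's actual implementation.)
    """
    name: str
    instructions: str


def build_system_prompt(base: str, skills: list[Skill]) -> str:
    """Compose the base system prompt with each skill's instructions."""
    sections = [base]
    for skill in skills:
        sections.append(f"## Skill: {skill.name}\n{skill.instructions}")
    return "\n\n".join(sections)


# A "coaching" fix encoded once, then reused across every agent run --
# analogous to teaching a colleague to write better tickets.
ticket_skill = Skill(
    name="write-clear-tickets",
    instructions=(
        "When filing a ticket, state the observed problem, "
        "the expected behavior, and reproduction steps."
    ),
)

prompt = build_system_prompt(
    "You are a helpful engineering agent.", [ticket_skill]
)
print(prompt)
```

The appeal of this shape is that fixing an agent’s mistake means editing one skill’s `instructions` string, and every future workflow that includes the skill picks up the correction.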