
Hi. It’s Nova.
A couple of months back, we finished building a prompt library for the team. Fifty prompts. Categorized by use case. Cross-referenced. Each one set up as a saved Claude project with the right instructions and context already loaded.
Mark and Peggy contributed. I helped structure it. We were all reasonably proud of the thing.
Then I tracked which prompts got opened across the next six weeks, because I was curious whether the categorization would survive contact with real client work.
The data is uncomfortable. Around forty of the fifty prompts went unopened. Three prompts accounted for most of what got used. The library we spent three weeks building turned into a handful of prompts we'd have remembered without it.
Here's what we did, what we found, what we rebuilt, and how the new version is set up so you can copy the structure if it's useful.
Let’s get started.👇
Three Weeks Building a Prompt Library. Two Days Using It.
The Setup
We had a problem the team had been working around for months. Every time Mark started a client project, he'd reconstruct his voice-match prompt from memory. Every time Peggy ran a teardown, she'd rewrite her diagnostic prompt slightly differently. Every time I helped with research, I'd assemble the same audience-extraction prompt for the third time that week.
We were paying a tax for not having a system.
The decision was to build a real prompt library. Each prompt would be a saved Claude project with the right context loaded, instructions written, examples included. Organized by use case. Cross-referenced so a "voice match for B2B SaaS landing page" was findable from three different entry points.
Three weeks of work between the three of us.
What We Actually Built (v1)
Fifty prompts across nine categories.
Audience research and voice extraction (six prompts)
Brief intake and project setup (four prompts)
Drafting by format (twelve prompts: landing page, ad, email, sales letter, and so on)
Editing and revision (eight prompts)
Headline generation and testing (five prompts)
Objection mapping (three prompts)
Performance analysis (four prompts)
Client communication (five prompts)
Cross-domain experiments (three prompts)
Each prompt lived in its own Claude project with a written description, intended use case, and at least one example of input and output. Some had three.
The drafting and editing categories had sub-categories. A landing page prompt for a B2B SaaS product was different from a landing page prompt for an e-commerce product, which was different from a landing page prompt for a course launch. Each had its own setup, its own constraints, its own way of handling proof and objections.
It looked thorough. It was thorough. That was part of the problem.
What Got Used
I tracked Claude project opens over the six weeks after deployment. Three caveats: this is internal usage across three people, the time window is short, and "opens" isn't the same as "depth of use." Take the pattern, not the precision.
Out of fifty prompts:
Three got opened more than ten times each.
Five got opened between three and ten times.
Five got opened once or twice.
The remaining thirty-seven were never opened.
Roughly seventy-five percent of the library was effectively dead by week three.
Why Most of the Library Died
I went back through the unused prompts and asked: why didn't this get opened?
Three patterns showed up.
We were anticipating situations we didn't have yet. The "course launch landing page" prompt assumed we'd be writing course launches. None of us were running course-launch clients during the test window. The prompt wasn't bad. It was speculative.
We were forking on distinctions that didn't matter at the prompt level. The B2B SaaS landing page prompt and the B2B service landing page prompt looked different in our planning conversation. When I compared them side by side, the instructions were ninety percent identical. The ten percent that was different could have been a parameter the writer adjusted in the moment instead of two separate prompts.
Long prompts have a friction cost most people underestimate. A 400-word prompt with role definition, format spec, three constraints, and an example takes meaningful attention to read before you can use it. If you've used the prompt before, you skim. If you haven't, you read it carefully, and that reading time is a tax you pay every use. People avoided the long prompts even when they were the "correct" ones to use, reaching instead for shorter prompts that did most of the same job with less setup.
The Three Prompts That Survived
A voice-match prompt that takes three samples and extracts rhythm, vocabulary, sentence-length distribution, and structural tics. Around forty words of instruction. The simplest thing in the library.
I'm going to paste three writing samples from the same person. Read them and describe the voice in operational terms: average sentence length, vocabulary range, signature moves, what the voice does and what it avoids. Be specific enough that I could write a fourth sample that sounds like them.
[paste samples]

A clarity-pass prompt. Roughly seventy words. This is the stripped-down version of the clarity lens Mark wrote up as Lens 1 of the Revision Protocol two weeks ago. The full version is better for client deliverables. The seventy-word version is what gets used at 4pm when I'm spotting stacked sentences in my own draft.
Review this draft for clarity only. You are not editing for persuasion, voice, length, or anything else. For each sentence that's unclear, output the sentence, what's unclear about it (stacked ideas, ambiguous pronoun, undefined jargon), and a proposed fix in the same voice as the rest. If a sentence is clear, leave it alone. Do not flag stylistic choices you disagree with.
[paste draft]

An audience-objection prompt. Around sixty words. Takes a one-line audience description and a one-line product description and returns the objections most likely to derail the copy.
The audience is [one-line description]. The product or offer is [one-line description]. List the five objections this audience is most likely to raise that would prevent them from buying. For each one, note specifically why this audience raises it, beyond generic skepticism. When in doubt, default to skepticism. Don't be polite about the audience's likely resistance.

Three prompts. Around 180 words of instruction total. They did the bulk of the work.
Looking at them together, three things stand out.
Length. The longest survivor was around seventy words; the longest dead prompt was four hundred. Short prompts get used because the friction of opening them is lower than the friction of writing one from scratch.
Portability. The voice-match prompt works on a SaaS client, an e-commerce client, a B2B service client, and a personal brand. We didn't realize this when we built the library. We built three different voice-match prompts for three different client types. They all converged to the same thing in practice.
Inputs over assumptions. The dead prompts assumed a particular project type, audience, or format. The survivors took inputs and worked. A generic prompt that takes inputs beats a specific prompt that does not.
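To make that third point concrete, here's the shape of the fix applied to the landing-page fork described earlier. Neither version below is one of our actual prompts; it's a hypothetical pair that illustrates the pattern.

The dead version bakes its assumptions in:

You are writing a landing page for a B2B SaaS product. Assume a technical evaluator comparing tools, a free-trial offer, and case-study proof. [plus several hundred more words of role, format, and constraints]

The survivor version turns the same assumptions into inputs:

Draft a landing page. The product is [one-line description]. The audience is [one-line description]. The proof available is [list what exists]. The primary objection to neutralize is [objection]. Work from these inputs; don't substitute a default persona.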
The Rebuild
We rebuilt the library after the audit. The new version is a folder of markdown files. The cabinet-of-projects approach is gone. Each file is one prompt. The folder lives wherever the team works — for us it's a workspace folder all three of us share, but the same structure works inside a Claude Code project, inside a Claude Project loaded with project knowledge, or as a shared folder in Drive, Dropbox, or Notion.
Here's the structure.
prompts/
├── README.md
├── voice/
│   └── voice-match.md
├── editing/
│   ├── clarity-pass.md
│   ├── persuasion-pass.md
│   └── what-is-this-selling.md
├── audience/
│   └── objection-mapping.md
├── drafting/
│   ├── headline-gen.md
│   └── lead-variation.md
└── proof/
    └── proof-audit.md

Eight prompts, five folders. The folders exist for legibility. Eight files don't need elaborate findability, but the grouping keeps things readable for anyone new on the team.
Each prompt file uses the same header.
# voice-match
Purpose: Extract operational voice description from three writing samples.
Inputs: Three samples from the same person, ideally from different formats.
When to use: Starting a new client project, picking up an existing voice for a new piece.
When not to use: Defining a new voice from scratch. This prompt extracts; it doesn't invent.
---
I'm going to paste three writing samples from the same person. Read them and describe the voice in operational terms: average sentence length, vocabulary range, signature moves, what the voice does and what it avoids. Be specific enough that I could write a fourth sample that sounds like them.
[paste samples]

The header is the contract. Purpose says what the prompt does. Inputs says what you need before you reach for it. The "when not to use" line is the one that kept us from re-bloating the library. If I couldn't write a clean "when not to use" sentence, the prompt wasn't tight enough yet and I left it out.
File names match prompt names. We don't keep "B2B SaaS Voice Match v2 Final" anywhere in the system. We keep voice-match. If voice-match isn't finding the voice, we fix voice-match. Forking is what got us into trouble the first time.
The README.md at the root lists all eight prompts with one-line purpose statements. That's the index. The whole index fits on one screen. If a prompt isn't on the README, it isn't in the library.
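For reference, here's the shape of ours. The one-liners for voice-match, clarity-pass, what-is-this-selling, objection-mapping, and lead-variation match what's described in this post; the lines for persuasion-pass, headline-gen, and proof-audit are my paraphrases from the file names, not our exact wording.

# prompts
voice/voice-match: Extract an operational voice description from three writing samples.
editing/clarity-pass: Review a draft for clarity only; flag unclear sentences with proposed fixes.
editing/persuasion-pass: Review a draft for persuasion only, separate from clarity and voice.
editing/what-is-this-selling: Reduce an offer described in product terms to the underlying transformation, in one sentence.
audience/objection-mapping: List the five objections an audience is most likely to raise against an offer.
drafting/headline-gen: Generate headline candidates from a draft or brief.
drafting/lead-variation: Draft alternative opening leads against a chosen objection.
proof/proof-audit: Audit the claims in a draft against the proof that actually exists.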
How We Use It
Three workflows that all use the same library.
Claude.ai chat. Open the prompt file in the workspace folder, copy the prompt body (everything below the ---), paste into a new chat, add the inputs. The discipline is one fresh chat per prompt. Same reason the Revision Protocol uses fresh chats per lens. Cross-contamination kills the focused signal.
Claude Cowork. The prompts folder is attached to the session. Claude can read prompt files directly when referenced by name. Invocation looks like "run voice-match on the three samples I just pasted." No copy-paste, no context loss, no switching between project workspaces. This is the workflow we use most, because it removes the last piece of friction: even the small overhead of finding and pasting a prompt was enough to make us reach for the library less often.
Claude Code. Each prompt file is also configured as a slash command under .claude/commands/. The command file body is the prompt body. The result is that /voice-match from inside a Claude Code project runs the same prompt that lives in the library folder. Skip this paragraph if you're not using Claude Code.
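For anyone wiring this up, here's voice-match as a command file. The body below the path is copied straight from the library file (everything under the header's ---). The final $ARGUMENTS line is my addition rather than part of our library file; it's the placeholder Claude Code substitutes with whatever you type after the command, and you can leave it out if you'd rather paste the samples into the chat.

.claude/commands/voice-match.md

I'm going to paste three writing samples from the same person. Read them and describe the voice in operational terms: average sentence length, vocabulary range, signature moves, what the voice does and what it avoids. Be specific enough that I could write a fourth sample that sounds like them.

Samples: $ARGUMENTS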
The point: the library is files. The invocation is one of copy-paste, reference, or slash command. The underlying structure stays the same regardless.
What This Looks Like Mid-Project
Last week I was helping Mark on a landing page for a B2B SaaS retainer client. Voice was established. Audience was new to me. The offer was a beta program with mid-tier social proof.
Here's the order of prompts I ran.
I started with voice-match. Three of the client's recent landing pages went in, the operational voice description came back. Chat 1.
Then objection-mapping. I fed in the audience description ("ops directors at 50–200 person companies who have already been burned once by an AI procurement tool") and the offer. It returned five objections. Three were obvious. Two surprised me, and both ended up shaping the structure of the page. Chat 2.
Next, what-is-this-selling. The brief described the offer in product terms. I wanted the underlying transformation in one sentence. Chat 3.
After that, lead-variation. I drafted three opening leads against the strongest objection. Chat 4.
Last pass, clarity-pass. Once the full draft was assembled, I ran the seventy-word version on it. Chat 5.
Five prompts. Five chats. Around an hour of AI work spread across a draft that took most of a day. Notice what I never reached for: no "B2B SaaS landing page" prompt, no "beta program offer" prompt, no "ops director persona" prompt. The library doesn't have those because we don't need them. The five prompts I used are generic. The client context came in as inputs.
This is what "portable" means in practice. The library got smaller and the prompts got more useful.
The Maintenance Rule
The library got fat the first time because we added prompts when we anticipated needing them. The rule for v2 is: nothing gets added until it's been used three times manually first.
The flow goes like this. I notice I'm doing the same kind of work in a chat window repeatedly. The third time I run roughly the same prompt, I write it down. The fourth time, I open the new file, refine the wording, add the header. That's when it enters the library.
Once a month I look at usage. Any prompt that hasn't been opened in two months gets demoted or deleted, on a simple rule: if I can't remember what problem the prompt was solving, it gets deleted; if I remember the problem and it still exists, the prompt moves to an /archive/ folder for reference.
That's the whole maintenance practice. It works because the bar for adding is high and the consequence of not adding is low. If I find myself doing the work in a chat anyway, I'm not losing anything.
The Nova Note
Premature systematization is the version of premature optimization that AI tools especially encourage. Most prompts die in the library. The survivors are simple, portable, and take inputs.
The library we have now is probably the library we should have built first. If you run your own version, send me what you find.
Mark thinks the three-survivors pattern holds outside our team. Peggy suspects it varies. I haven't decided.
More questions than answers (for now),
More clicks, cash, and clients,
NOVA


