Hi, it's Peggy.

I had an argument with Mark last month about where LLMs belong in a copywriting workflow.

He said first drafts. Get Claude to produce something fast, then shape it into what it needs to be. That's his process, and he's built good systems around it.

I wasn't sure. I've been using Claude more on the back end — writing my own draft, then running it through specific editing passes. Clarity. Persuasion gaps. Objection handling. Voice consistency.

We both thought our approach was better. So we tested it.

Same project. Same brief. Same client. Two workflows, run side by side.

Here's what happened.

First Draft vs. Edit Layer: Where AI Actually Helps Your Copy More

The Setup

A longtime client needed a landing page for a new service tier. Mid-market B2B, selling a managed analytics platform to marketing directors. We had the brief, the audience research, the positioning doc, and three customer interviews.

Standard project. The kind of thing where you already know the structure before you start writing. That made it a clean test. No ambiguity about what the page needed to do.

We ran two parallel workflows over two days.

Workflow A: Claude-first draft. I fed Claude the full brief, audience research, and positioning notes. Asked it to produce a complete landing page draft. Then I spent time editing that draft into something I'd send to the client.

Workflow B: Human-first draft, Claude edit layer. I wrote the landing page myself from the same materials. Then I ran my draft through three separate Claude editing passes — one for clarity, one for persuasion gaps, and one for voice consistency with the client's existing copy.

Both workflows ran from the same Claude setup. I loaded everything into a project folder — the brief, positioning doc, all three customer interviews, and three samples of the client's existing website copy. Everything Claude needed to reference was already in context before I typed a single prompt.

Same inputs. Same writer. Comparable total time (I tracked it). Different sequence.

What We Did

Both workflows started from the same project instruction. This is what Claude saw before any conversation started:

You are a senior B2B copywriter working on a landing page for a managed analytics platform. Your client sells to marketing directors at mid-market companies (100-500 employees).

The project folder contains:

- Project brief (scope, goals, key messages)

- Positioning document (competitive angle, value proposition)

- Audience research summary

- Three customer interviews (transcribed)

- Three samples of the client's existing website copy (for voice matching)

Reference these materials directly when writing or reviewing copy. Use the customer's own language where possible. Prioritize specifics over generalities. When you cite a benefit, ground it in something from the research or interviews.

That instruction stayed loaded for every prompt that followed. Both workflows had the same foundation.

For Workflow A, here's the prompt I used to generate the first draft:

Write a complete landing page first draft using the materials in this project.

The page needs: headline options, a lead section, 4 benefit blocks, a social proof section with placement notes, objection handling for price and implementation time, and a CTA.

Write in second person. Short paragraphs. No hype. Concrete benefits over vague claims. Pull specific language from the customer interviews where it fits naturally.

The output was 1,400 words. Structurally complete. Every section present.

For Workflow B, I wrote my own draft first — about 1,200 words, took roughly 90 minutes. That 90 minutes doesn't include the time I'd already spent reading and absorbing the customer interviews and research. That absorption matters. It's where the specificity comes from, and it's easy to forget when you're tracking production time. Then I ran three editing passes.

Pass 1: Clarity

Review this landing page copy for clarity. Flag any sentence where the meaning isn't immediately obvious on first read, where the logic between paragraphs breaks, or where a reader would need to re-read to understand the point. Suggest specific rewrites for each flagged item. Don't change the voice or tone.

[Draft pasted]

Pass 2: Persuasion gaps

You are reviewing this landing page as a skeptical marketing director who has been burned by analytics tools before. Read through the copy and identify:

1. Any claim that isn't supported by evidence or specifics

2. Any benefit that doesn't connect to a pain point from the audience research

3. Any place where the reader's likely objection isn't addressed

4. Any section where the copy asks the reader to trust without giving them a reason to

For each gap, explain what's missing and suggest how to fill it.

[Draft + audience research pasted]

Pass 3: Voice consistency

Compare this landing page draft against the client's existing website copy samples below. Flag anywhere the draft drifts from the client's established voice — in formality, word choice, sentence rhythm, or how they talk about their product. Suggest specific adjustments to bring it in line.

[Draft + three samples of client's existing copy pasted]

Each pass took about 10 minutes to review and incorporate. My draft went through three rounds of targeted improvement.

What Happened

I shared both versions with two other copywriters I trust, without telling them which was which. Asked them to evaluate on four criteria: clarity, persuasion, voice fit with the client's brand, and "would you send this to the client as-is or does it need more work."

The Claude-first draft (Workflow A, after my editing) scored well on structure and clarity. The bones were solid. But both reviewers flagged the same thing: the language felt safe. The benefit statements were accurate but generic. "Reduce reporting time" instead of the specific frustration from the customer interviews about spending every Monday morning manually pulling data from four different platforms.

The human-first, Claude-edited draft (Workflow B) scored higher on persuasion and voice. The specifics from the customer interviews made it into the first draft naturally because I'd absorbed the research before writing. Claude's editing passes then caught three clarity issues I'd missed and identified two places where I'd made claims without grounding them in anything concrete.

Both reviewers said they'd send Workflow B to the client with minor tweaks. They said Workflow A needed another pass to replace generic language with specific details.

What the Time Looked Like

I tracked the time for both.

| | Workflow A (Claude draft → human edit) | Workflow B (human draft → Claude edit) |
|---|---|---|
| Setup / prompting | 25 min | — |
| First draft | Instant (Claude) | 90 min (me) |
| Editing / revision | 80 min | 35 min |
| Total | ~105 min | ~125 min |

Workflow A was about 20 minutes faster overall. But that time savings came with a quality gap. Closing that gap — replacing generic benefit statements, weaving in the specific customer language, adjusting the voice — would have taken another 30-40 minutes. Which puts the real total closer to the same, with Workflow B producing a better result.

Why This Happened

The Claude-first draft had a structural advantage. It organized the page well, covered every section, and produced clean copy fast. Where it fell short was specificity. The source material contained customer interview quotes, specific frustrations, the exact language the audience uses. The LLM used that information accurately but abstractly. It summarized instead of using the raw material.

When I wrote the first draft myself, those customer interview details were already in my head. They came out in the draft because they were the most interesting parts of the research. The Monday morning dashboard frustration. The marketing director who said her team spent more time reporting on campaigns than running them. That material ended up in the copy naturally.

Then Claude's editing passes did something I'm worse at: systematic review. I tend to miss my own clarity gaps. I assume connections between paragraphs are obvious because they're obvious to me, the person who just wrote them. Claude caught those. It also identified two persuasion gaps — places where I'd stated a benefit without connecting it to the specific audience pain — that I'd walked right past.

The editing prompts work well because they give Claude a narrow, specific job. "Find clarity problems" is a cleaner task than "write a great landing page." The constraints produce better output.

Which Workflow, When?

After this test, I started asking three questions before I open Claude on any copywriting project:

Do I have rich source material? Customer interviews, original research, detailed briefs with real language from real people. If yes, Workflow B. Write the draft yourself. That source material needs to pass through a human brain before it reaches the page. Claude edits what you produce.

Is the brief thin or the timeline tight? A quick turnaround on a project with limited inputs. Workflow A. When there isn't much specificity to lose, the LLM draft gets you to structure fast and you can sharpen from there. The time savings is real and the quality gap is smaller.

Where do I typically lose quality under pressure? If your drafts run structurally messy but contain strong raw material, Claude-first might help you organize. If your drafts are clean but you miss persuasion gaps and clarity issues when you're moving fast, the edit layer is where Claude earns its time.

I'd also add a fourth editing pass. After clarity, persuasion, and voice, I want a "specificity" pass:

Review this copy and flag every claim, benefit, or statement that could be made more specific. Where the copy says "save time," what's the actual time saved? Where it says "better results," what does that look like concretely? For each flag, suggest a more specific version using the source material provided, or note what additional information would be needed.

[Draft + all source material]

That pass would have caught Workflow A's biggest weakness. I'll be adding it to both workflows going forward.

The Honest Summary

Claude as an editor outperformed Claude as a first-draft writer on this project. The margin was meaningful — both reviewers preferred the human-first version, and the specific reasons they cited (detail, voice, persuasion) are the things that determine whether copy converts.

That said, this was one landing page with rich source material and a writer who'd absorbed it thoroughly. I'd expect the gap to narrow on email sequences where voice consistency matters less, or on ad copy where structure is simpler and speed matters more. I'd expect it to widen on long-form sales pages where the specificity compounds across sections. The variables matter, and I want to test more of them.

Mark and I landed on a practical answer: use the LLM where it's strongest relative to you, not where it's strongest in general.

Know your weaknesses. Put the LLM there.

The Burnett Matrix

The sequence matters more than the tool. Claude after your thinking produces tighter copy than Claude before it — at least when the source material is strong enough to reward that thinking.

Test both on your next project. Track the time honestly.

Your answer might be different from mine.

More clicks, cash, and clients,
Peggy Burnett
