MODULE_06

AI for Writing & Content

~40-65 min active work · 2 prompts · 3 artifacts · Prereq: Module 05

Build a complete AI writing system: a JSON voice profile that captures how you actually sound, a self-improving humanizer skill, and one finished content piece produced using the full workflow.

01

Build a JSON writer's profile that captures every dimension of your voice and use it as persistent context in your workspace

02

Spot AI-generated writing patterns on sight and explain the mechanical reason each one exists

03

Write role definitions specific enough to produce output nobody else gets from the same model

04

Build a self-improving humanizer skill that gets sharper every time you correct it

05

Run the full human+AI writing workflow on any content task: ideate, outline, draft, edit, refine

module-06-writing/ ARTIFACT
writer-profile.json - your voice DNA across every measurable dimension
humanizer-skill.md - self-improving voice enforcement skill with corrections log
Your finished content piece - produced using the full 5-stage workflow
+ session-notes.md from the Learning Extraction Prompt

Almost Every AI Model Writes the Same Way

Not because they can't do better. Because you haven't told them who you are, what good looks like, or how you actually sound on the page.

Open any model right now. Type "write me a blog post about productivity." Read what comes back. It will follow a predictable structure, reach for the same words as every other model, and fall into the same bullet-point format. It'll conclude by restating everything it just said. Swap the topic to fitness, finance, marketing. Same voice. Same structure. Same feeling of reading content that was written by nobody, for nobody.

Some models are better than others at writing. Claude is often considered the best at producing content that doesn't sound too much like AI, and Kimi is also very solid. The difference comes down largely to their training data.

Here's what's happening mechanically. You know from Module 2 that vague prompts create flat probability distributions. "Write me a blog post" is one of the flattest writing prompts possible. The model has billions of tokens of internet writing in its training data, and without a stronger signal pulling it toward your voice, it samples from the dead center of that distribution. The statistical average of all writing on the internet. That average is what "AI writing" sounds like.

This is fixable. Not with a better prompt. With a better system.

The Patterns That Give AI Away

AI writing has a fingerprint. Once you see it, you can't unsee it.

"In today's rapidly evolving landscape." "It's worth noting that." "Let's dive in." Parallel structures where every sentence follows the same rhythm. Hedging qualifiers tucked into every claim ("While results may vary..."). Conclusions that summarize everything you just read as if you'd already forgotten it. Em dashes everywhere, three of them per paragraph sometimes. Exactly three examples for every point, never two, never four, always three. For more examples, search "signs of AI writing" and see Wikipedia's page on the topic.

These aren't random quirks. They're high-probability tokens. The model defaults to them because they sit at the peak of the distribution for "professional writing." They're safe. They're average. They're the statistical equivalent of beige paint.

My position: if your AI output contains more than two of these patterns, the problem isn't the model. It's what you gave it to work with. Which was nothing.

Your Writing DNA, in a File

A voice guide isn't "tone: professional, friendly." That instruction is nearly useless. Too vague to narrow the probability distribution. Too generic to activate any specific pattern.

A real voice profile captures how you actually write. Sentence architecture: short punchy fragments or long exploratory sentences? What's your ratio? Vocabulary fingerprint: Anglo-Saxon roots or Latinate? Jargon or plain speech? What words show up in everything you write? What words would you never type? Punctuation habits. Opening patterns. How you handle transitions. The things you never do (the negative space is often more diagnostic than the positive).

You'll build this as JSON. From Module 4: JSON is for structured data the model consumes. Key-value pairs parsed with near-perfect accuracy. This gets attached to your workspace alongside your context file. Two files, two purposes. Context file: who you are. Voice profile: how you sound.
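As a rough illustration, a voice profile might be shaped like this. The field names and values here are invented for the example; the prompt in this module builds the actual profile from your own samples:

```json
{
  "sentence_architecture": {
    "average_length": "medium, 12-18 words",
    "fragment_usage": "frequent, for emphasis",
    "short_to_long_ratio": "roughly 2:1"
  },
  "vocabulary_fingerprint": {
    "register": "plain speech, Anglo-Saxon roots",
    "signature_words": ["actually", "ship", "concrete"],
    "banned_words": ["leverage", "utilize", "delve"]
  },
  "punctuation_habits": {
    "em_dashes": "never",
    "parentheticals": "occasional short asides"
  },
  "negative_space": [
    "never opens with a rhetorical question",
    "never ends with a summary of the piece"
  ]
}
```

Notice how much of the file is negative space: key-value pairs the model can parse reliably, telling it what to avoid as much as what to do.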

Frameworks Give AI a Spine

Left to its own devices, AI defaults to the same skeleton every time. Introduction. Three body sections. Conclusion. You've seen it. You're sick of it.

Writing frameworks replace that skeleton with proven structures. AIDA (Attention, Interest, Desire, Action) for sales copy. PAS (Problem, Agitation, Solution) for content that moves people. StoryBrand for narrative-driven pieces. The inverted pyramid for journalism. Before-After-Bridge for transformation stories.

You don't need to memorize these. You need to reference them. "Structure this email using PAS" gives the model a strong architectural constraint. Combined with your voice profile, the output has structure AND voice. Without a framework, the model builds its own. And its default framework is boring.
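Combining a framework reference with your attached profile can be as short as this. The task details below are invented purely to show the shape of the prompt:

```
Using the attached writer-profile.json, write a 150-word outreach
email structured as PAS:
- Problem: the reader's weekly reporting process eats a full day
- Agitation: what that costs in missed decisions, not just hours
- Solution: one concrete next step, no feature list
```

The framework constrains the architecture; the profile constrains the voice. Neither alone is enough.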

The Self-Improving Skill

You built skills in Module 5. A humanizer skill is one that strips AI patterns and enforces your voice rules. The twist: this skill gets better over time.

Every time you correct the output ("remove all em dashes, I never use those," "stop starting paragraphs with transition words," "I'd never say 'it's worth noting'"), you add that correction to the skill's constraints. A <corrections_log> section that grows with use. After a month, you have 20 specific rules. After three months, 50. The skill becomes a detailed negative-space definition of your voice: everything the AI should NOT do when writing as you.

This is a compounding asset. The more you use it, the more precisely it matches your voice. Every correction is data. Capture it or lose it.
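One possible shape for humanizer-skill.md, sketched here for illustration only (the section names and dates are invented; your skill will grow its own structure from your corrections):

```markdown
# Humanizer Skill

## Voice rules
- Enforce writer-profile.json on every draft.
- Strip hedging qualifiers unless the claim genuinely needs one.

## Banned patterns
- "It's worth noting that", "Let's dive in", "In today's ... landscape"
- Exactly-three-item lists used as a reflex

<corrections_log>
- 2025-01-12: remove all em dashes, I never use those
- 2025-01-19: stop starting paragraphs with transition words
- 2025-02-02: I'd never say "it's worth noting"
</corrections_log>
```

Each correction you make in conversation gets appended to the log, so the file itself is the memory.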

The Workflow That Just Works

Most people's AI writing process is: type a prompt, get output, publish (or give up). That's a coin flip, not a workflow.

Five stages.

Ideate: you bring the thinking, the angle, the argument. Human-led.

Outline: collaborative, AI proposes structure using your framework, you adjust.

Draft: AI-led, generated inside your voice profile and role constraints.

Edit: the stage most people skip, and the one that matters most. Not "make it better" (worthless). "The second paragraph buries the lede; the key point is X, move it to the opening" (useful).

Refine: final sweep for AI patterns using your humanizer skill.

Writing WITH AI and editing WITH AI are different skills requiring different prompts. The prompt below teaches both.

One more thing. The quality of what you feed AI determines the ceiling of what comes back. Most people give AI the lowest-quality version of their thinking and wonder why the output is mediocre. Your input IS the ceiling.

Where the Models Stand for Writing

Quick honest assessment. Claude is strongest at voice matching and style adherence when given structured examples. GPT-5.2 has the widest creative range and best literary rhythm. Gemini 3.1 Pro defaults to concise, direct output; useful for some formats, needs more steering for nuanced voice.

All of them improve dramatically with a structured voice profile. The system is portable. Don't marry a model. Test your profile on whatever you use.

Voice Forensics (Build Your Voice Profile)

This module has two prompts that run in sequence. Start here. This is where you build the foundation of your writing system. The AI analyzes your actual writing, not a questionnaire about how you think you write, and produces a JSON profile that captures your voice across every measurable dimension: sentence architecture, vocabulary fingerprint, rhythm patterns, punctuation habits, structural tendencies, and the negative space (what you never do).

Before you paste

Make sure you're in your course project

Your my-context.md, my-learning-style.md, and my-knowledge.md files should be attached

Have 2-3 writing samples ready. Things YOU wrote. Emails, posts, proposals, messages, anything in your natural voice. At least 500 words total across all samples. More is better. If you can't find anything, the prompt has a fallback.

What happens in each phase

voice-forensics.xml - 5 phases

PHASE 1 Context Bridge and Sample Collection: understand your writing situation, collect raw material for analysis

PHASE 2 Voice Forensics Analysis: analyze your writing across every measurable dimension, show the analysis transparently

PHASE 3 Profile Construction: build the complete JSON writer's profile from the validated analysis

PHASE 4 Validation Experiment: test the profile with a side-by-side generation experiment

PHASE 5 Profile Refinement and Delivery: finalize the JSON profile and prepare it for use as persistent workspace context

What you should see: The AI reads your files, asks about your writing situation, then asks for samples. It analyzes them in front of you, showing you what it found in each dimension: how your sentences move, what words you reach for, what you never do. Then it produces the JSON profile, writes a paragraph using it alongside a paragraph in default AI voice, and asks you to compare. You'll iterate until it captures you accurately.

Time: 15-25 minutes

voice-forensics.xml

When it's done

Save as writer-profile.json in your module-06-writing/ folder

Attach to your course project (it becomes persistent context for all future writing tasks)

Writing Lab (Build Your System, Produce Real Content)

This is the main event. You take the voice profile from Prompt 1 and build a complete writing system: a self-improving humanizer skill, a creative role definition, and a finished piece of content produced using the full five-stage workflow.

Before you paste

Stay in your course project (or start a new conversation in it)

Your writer-profile.json from Prompt 1 should be attached to the project

Have a real content task ready. Something you actually need to deliver this week. A LinkedIn post, a client email, a blog draft, a proposal. Not a hypothetical.

Optional but recommended: 3-5 examples of writing you admire in the format you're producing. These become the seed for a swipe file you can build over time.

After the exercise, run the Learning Extraction Prompt to update your knowledge file and save your session notes

What happens in each phase

writing-lab.xml - 5 phases + mastery gate

PHASE 1 Context Bridge and Real Task Selection: connect to your writing situation, lock in a real content task

PHASE 2 AI-ism Detection Lab: identify AI writing patterns by seeing them generated, then see the difference with your profile applied

PHASE 3 Humanizer Skill Construction: build the self-improving humanizer skill from your corrections

PHASE 4 Role Definition and Framework Workshop: write creative, specific role definitions and use writing frameworks as scaffolding

PHASE 5 Full Writing Workflow on Real Task: produce a finished piece using voice profile + role definition + framework + humanizer skill

GATE Mastery Gate: 6 application-based questions testing mechanical understanding applied to writing scenarios

What you should see: The AI generates content in your domain with no voice profile loaded, and you identify the AI patterns together. Then it regenerates with your profile, and you catch what's still off. Each correction becomes a rule in your humanizer skill. After that, you workshop a creative role definition and test it against a generic one (the difference teaches itself). Then you run the full writing workflow on your real task, end to end, using every tool you've built. By the end you have a finished piece and a system you can reuse from tomorrow.

Time: 25-40 minutes

writing-lab.xml

When it's done

Save humanizer-skill.md to your module-06-writing/ folder

Save your finished content piece to module-06-writing/

Both also live in your course project as reusable assets

Save Your Work

Run the Learning Extraction Prompt to update my-knowledge.md with what you learned.

Save to module-06-writing/

writer-profile.json: your voice DNA

humanizer-skill.md: your self-improving voice enforcement skill

Your finished content piece

session-notes.md from the Learning Extraction Prompt

Optional next step: Build swipe files for each content format you produce regularly. Collect 3-5 examples of writing you admire per format. Structure them as few-shot reference documents (you know from Module 3 that examples are attention anchors). Attach the relevant swipe file when working on that format. One voice profile + format-specific swipe files + your humanizer skill = a writing system that gets sharper with use.

Next: Module 7: AI for Images & Visual Content.
You just built a system for making words sound like you. Module 7 does the equivalent for visuals: extracting a consistent style and producing images that look like they belong together.

Run This After Every Module

After completing the module prompts above, paste this into the same conversation as Prompt 2. The AI reviews everything that just happened and extracts what you actually learned: not what was presented, but what you demonstrated.

learning-extraction-prompt.xml