MODULE_05 Foundation

From Conversations to Systems

~35-50 min active work · 5 phases · 5 artifacts · Prereq: Module 04

The model forgets everything between conversations. Build a system prompt, configure your workspace, and formalize three skills so you never start from scratch again.

01. Design XML system prompts with identity, context, instructions, and constraints that persist across every conversation in a workspace

02. Set up a configured AI workspace (Claude Project, ChatGPT Project, or Gemini Gem) loaded with your context file from Module 4

03. Formalize your best prompts as reusable, documented skills you can run repeatedly without rebuilding from scratch

module-05-systems/ · artifacts
system-prompt.xml - XML system prompt with identity, context, instructions, and constraints
skill-01.md, skill-02.md, skill-03.md - three formalized skills for tasks you actually do
workspace-config.md - platform, files attached, setup decisions
+ session-notes.md from the Learning Extraction Prompt

The Model Doesn't Remember You

You've spent four modules building real infrastructure. Context files. XML prompt architecture. An understanding of how attention and prediction shape output.

The model doesn't remember any of it.

Not the context file from Module 4. Not the prompt you perfected in Module 3. Not even the conversation from five minutes ago. Close the chat window and it's gone. The next time you open a new conversation, the model has zero knowledge that you exist.

This isn't "bad memory." It's no memory. Every conversation starts from absolute scratch.

Most people don't realize this. They treat AI like a colleague who remembers yesterday's meeting. It's closer to a brilliant stranger you meet fresh every time, who happens to know an astonishing amount about everything except you.

Now, Claude, ChatGPT, and Gemini have all rolled out "memory" features over the past year. Worth knowing what these actually are: separate systems that save snippets from your conversations and inject them into future ones. The prediction engine itself still starts from zero. Memory is a layer on top. Useful, but patchy. It misses things. It can't replace the deliberate context engineering you learned in Module 4. You'll test this in the prompt below.

So the question becomes: how do you stop starting over?

System Prompts: Your Context, Permanently Loaded

A system prompt is context that loads automatically at the top of every conversation in a workspace. You built a context file in Module 4. A system prompt makes that permanent. It gets injected before your first message in every new chat, so the model always knows who you are, what you do, and how you want it to behave.

Remember the context hierarchy from Module 4? System prompt sits at the very top. Highest attention weight. Everything the model generates in that conversation gets shaped by what's in your system prompt. This is why getting it right matters more than any individual prompt you'll ever write.

Here's the anatomy. Four XML sections:

<identity> · Who the AI is · Shapes every response

Role, expertise, personality traits that shape how it responds. This section bends the prediction engine across the entire conversation. A system prompt with identity set to "a direct, senior content strategist who hates fluff" produces fundamentally different output than "a friendly assistant." Specificity narrows probability distributions (Module 2). Identity is the widest-reaching specificity you can set.

<context> · Background information · What the AI always needs

Your work domain, your audience, your brand guidelines, your constraints. The stuff from your Module 4 context file that applies to every task, not just one.

<instructions> · Default behaviors and rules · Standing orders

How to respond, what format to use, what tone to hit, what to do when uncertain. These are standing orders, not per-task directions.

<constraints> · Hard limits · Things the AI must always or never do

Don't do X. Always do Y. Never assume Z. Use absolute language here (Module 3 showed you that XML tags create attention boundaries; constraints inside their own tag get treated as a distinct category by the model).

Most people write system prompts that are either absurdly long or uselessly vague. A 4,000-token system prompt stuffed with your entire project brief triggers context rot (Module 4). A one-sentence prompt like "You are a helpful marketing assistant" tells the model almost nothing. Sweet spot: 500 to 1,500 tokens of high-signal, well-structured XML.
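To make the anatomy concrete, here's a minimal sketch of a four-section system prompt. The role, audience, and rules below are illustrative placeholders (not part of the course materials), and the outer <system_prompt> wrapper is optional; swap in your own Module 4 material:

```xml
<system_prompt>
  <identity>
    You are a direct, senior content strategist for a B2B SaaS
    marketing team. You favor plain language and concrete claims,
    and you flag fluff when you see it.
  </identity>

  <context>
    The user runs content for a 20-person startup selling scheduling
    software to dental practices. Audience: practice owners, not
    developers. Brand voice: confident, warm, no jargon.
  </context>

  <instructions>
    Default to short paragraphs and active voice. When a request is
    ambiguous, ask one clarifying question before drafting.
    Deliver drafts in markdown.
  </instructions>

  <constraints>
    Never invent statistics or customer quotes. Always preserve the
    user's terminology for product features. Never exceed the
    requested word count.
  </constraints>
</system_prompt>
```

Scale each section up with your own specifics until you land in that 500-to-1,500-token range; the structure stays the same.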

Workspaces: The Configured Environment

A workspace is a system prompt plus attached files plus a persistent environment where every conversation inherits the same configuration. Open a new chat inside your workspace and everything is already loaded. Your identity. Your context. Your rules.

Here's where the platforms stand right now.

Claude · Claude Projects · Best fit for this course

Attach files to the Knowledge section, add your system prompt via Project Instructions, and Claude uses RAG to retrieve relevant sections from those files (Module 4 covered how file naming affects retrieval). Memory features on Pro and Max plans persist project-specific context across sessions. Infinite Chats eliminates context window limit errors.

ChatGPT · ChatGPT Projects · Similar feature set

Upload files, set custom instructions, project-specific memory. Custom GPTs can't be used inside Projects as of February 2026, but the system prompt replaces that need. GPT-5.2 powers all conversations.

Gemini · Gemini Gems · Google Workspace native

Custom instructions plus up to 10 uploaded files, with Google Drive integration that keeps files synced automatically. Less structured, but native Google Workspace integration is a real advantage if your team already lives there.

There are also tools for building this kind of environment with ANY model; that will be the topic of an upcoming weekly implementation guide.

My position: use Claude Projects for this course. If you prefer another platform, the XML is portable. The system prompt you write works anywhere that accepts custom instructions.

Skills: Prompts That Don't Die in Chat Windows

Your best prompts live in chat windows you'll never scroll back to find. Maybe you bookmarked one once. Either way, that prompt is effectively dead.

A skill is a prompt that's been documented, saved as a file, and designed to be run repeatedly. You know the XML architecture from Module 3. A skill wraps it in a reusable package: a clear name, a purpose statement, what inputs it needs, the prompt itself, and what the expected output looks like.

Think of skills like tools in a toolkit. Your workspace is the workshop. Your system prompt is the workbench configuration. Skills are individual tools you reach for when a specific job comes up. "Write a client proposal." "Review a draft and flag problems." Each one is a saved, tested prompt you can run anytime inside your configured workspace.

By the end of this module, you'll have three. Built for tasks you actually do. Tested inside your workspace.
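As a sketch of what one of those skill files might look like (the task, inputs, and wording below are hypothetical examples, not prescribed by the course):

```markdown
# Skill: Draft Review

**Purpose:** Review a draft and flag problems before it goes out.

**Inputs:** The draft text, plus the intended audience if it
differs from the workspace default.

**Prompt:**

<task>
Review the draft below. Flag vague claims, missing evidence, and
tone mismatches with our brand voice. List each issue with its
location and a suggested fix. Do not rewrite the draft.
</task>

<draft>
{paste draft here}
</draft>

**Expected output:** A numbered list of issues, each with
location, problem, and suggested fix.
```

The {paste draft here} placeholder marks the per-run input; everything else stays fixed, which is what makes the skill repeatable.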

This is the last infrastructure module. Modules 6 through 10 are domain application: writing, images, video, research, code. You'll walk into those modules with a workspace that knows who you are, a system prompt that shapes every output, and skills you can use from day one. The building phase is done. The using phase starts next.

Systems Architect

This module uses one interactive mega-prompt that walks you through five phases: bridging to your context, testing the no-memory reality, building a system prompt, configuring your workspace, and formalizing three reusable skills.

Before you paste

Make sure you're in your course project

Your my-context.md, my-learning-style.md, and my-knowledge.md files should be attached to the project

Have your Module 4 context file ready (you'll load it into your workspace during the exercise)

Know which AI platform you use most: Claude, ChatGPT, Gemini, or something else

After the exercise, run the Learning Extraction Prompt to update your knowledge file and save your session notes

What happens in each phase

systems-architect.xml - 5 phases

PHASE 1 Context Bridge: connect to your situation, identify your most common AI tasks and current setup

PHASE 2 The No-Memory Experiment: prove the model starts from zero every time by comparing blank chat vs configured project

PHASE 3 System Prompt Architecture: build an XML system prompt section by section (identity, context, instructions, constraints)

PHASE 4 Workspace Configuration: set up your configured environment on your platform with system prompt and files

PHASE 5 Skill Formalization: produce three reusable skills for real work tasks, tested inside your workspace

What you should see: The AI reads your files, asks about your most common AI tasks, then runs you through experiments that make the no-memory problem visceral. You'll see the difference between a blank chat and a configured workspace with your own eyes. Then you build a system prompt section by section, get it reviewed against the mechanics from Modules 2 through 4, and test it live on a real task. After that, you formalize three skills for work you actually do. The mastery gate at the end tests whether you can design system prompts and skills for scenarios you haven't seen before. Expect 35 to 50 minutes of active work.

Shortcut seekers: the prompt stands alone. Paste it and go. Deep investors: reading the concept section first means you'll already have the four-section anatomy in your head before the AI walks you through building one, which makes the design decisions feel less arbitrary and more deliberate.

systems-architect.xml

Save Your Work

Run the Learning Extraction Prompt to update my-knowledge.md with what you learned.

Save to module-05-systems/

Your XML system prompt

Your 3 formalized skills (one file per skill)

Workspace configuration notes (what platform you used, what files you attached, any setup decisions)

session-notes.md from the Learning Extraction Prompt
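For the configuration notes, a few lines are enough. A sketch of workspace-config.md, with the platform, file names, and decisions as examples only:

```markdown
# Workspace Configuration

- Platform: Claude Projects
- System prompt: system-prompt.xml, pasted into Project Instructions
- Files attached: my-context.md, my-knowledge.md
- Setup decisions: kept brand guidelines inside the system prompt
  rather than a separate file, so they sit at the top of the
  context hierarchy
```

The setup-decisions line is the one future you will want most: it records why the workspace is shaped the way it is.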

Next: Module 6: AI for Writing & Content.
Your workspace is built. Now you'll use it to create a voice and style guide, then produce content that actually sounds like you wrote it.

Run This After Every Module

After completing the module prompt above, paste this into the same conversation. The AI reviews everything that just happened and extracts what you actually learned: not what was presented, but what you demonstrated.

learning-extraction-prompt.xml