MODULE_09 Domain

AI for Research & Analysis

~30-45 min active work · 1 prompt · 2 artifacts · Prereq: Module 05

Learn the three-layer research stack: decompose questions, match tools, synthesize findings. Stop outsourcing your thinking to polished AI reports.

01 Break any complex question into answerable sub-questions before you touch an AI tool

02 Match each sub-question to the right research approach (deep research agents, conversational AI with search, your own data, specialist sources) and explain your reasoning

03 Evaluate conflicting findings across sources and assign honest confidence levels

04 Produce a structured research analysis on a real work question using the full decompose, execute, synthesize workflow

module-09-research/ ARTIFACT
research-analysis.md - your completed research analysis on a real work question
research-workflow.md - your reusable XML-structured research workflow (also save as a skill in your workspace)
+ session-notes.md from the Learning Extraction Prompt

The Three-Layer Research Stack

You open Perplexity. Or Claude. Or ChatGPT. You type something big and messy: "Should we expand into the UK market?" or "What's the best project management tool for a team of 12?" You hit enter. A wall of confident, well-sourced text comes back. You copy it into a doc and move on with your day.

That's not research. That's outsourcing your thinking.

The tools aren't the problem. They're better than they've ever been. Every major AI platform now ships a "Deep Research" mode that searches hundreds of sources, cross-references findings, and hands you a polished report in minutes.

Perplexity does it. ChatGPT does it. Gemini, Claude, Grok. All of them. And the reports look incredible.

But looking incredible and being useful are two different things.

Deep Research tools are execution engines. They're brilliant at finding and compiling information. What they can't do is tell you whether you asked the right question.

Feed a deep research agent something vague and you'll get back a polished, well-cited, confidently wrong report. Your boss might believe it. You might believe it. The footnotes are there. The structure is clean. And the underlying conclusion is built on sand, because the question itself was broken.

Research has three layers. Almost everyone only touches one.

Layer 1: Decomposition. You take your big messy question and crack it into specific, answerable sub-questions. "Should we expand into the UK?" splits into: What are the regulatory requirements for our industry? Who are the existing competitors and how entrenched are they? What's the realistic cost structure? Where do the customers actually buy? Each of those can be answered. The original question can't. Not directly. Not well.

Layer 2: Execution. Running each sub-question through the right tool. This is the layer everyone jumps to. Yes, the tools are good at it. But "good at it" only counts if you gave them a sharp question and picked the right tool for that specific type of question. A deep research agent crushing a landscape scan doesn't help you if the answer lived in your own spreadsheet the whole time.

Layer 3: Synthesis. Combining what you found, deciding which sources to trust when they contradict each other, spotting the gaps in your own research, and building a conclusion that owns its uncertainty instead of hiding it. This is where decisions get made. AI can assist here, but you drive.

Most people skip Layer 1 entirely. They rush through Layer 2 with a single tool. They never even attempt Layer 3. The result is a false sense of certainty built on lazy questions.
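The three layers map directly onto the workflow document you'll produce at the end of this module. Here's a minimal sketch of what that XML structure could look like, using the UK expansion example from above. The tag names and attributes are illustrative assumptions, not a required schema; your own research-workflow.md can use whatever structure fits your work:

```xml
<research-workflow>
  <question stakes="market-entry decision, this quarter">
    Should we expand into the UK market?
  </question>

  <decompose>
    <!-- Layer 1: crack the big question into answerable sub-questions -->
    <sub-question id="q1" tool="deep-research-agent">
      What are the regulatory requirements for our industry in the UK?
    </sub-question>
    <sub-question id="q2" tool="conversational-ai-search">
      Who are the existing competitors and how entrenched are they?
    </sub-question>
    <sub-question id="q3" tool="own-data">
      What's the realistic cost structure, based on our current numbers?
    </sub-question>
  </decompose>

  <execute>
    <!-- Layer 2: one finding per sub-question, sources recorded -->
    <finding ref="q1" sources="..." confidence="medium">...</finding>
  </execute>

  <synthesize>
    <!-- Layer 3: you drive. Reconcile conflicts, name gaps, own uncertainty -->
    <conflicts>...</conflicts>
    <gaps>...</gaps>
    <conclusion confidence="...">...</conclusion>
  </synthesize>
</research-workflow>
```

Note that the skeleton forces all three layers to exist: you can't fill in a conclusion without a decompose block above it, which is exactly the discipline most people skip.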

You already have a head start.

Your context file from Module 04 and your configured workspace from Module 05 taught you how to structure what the model sees.

Research applies that same muscle at a bigger scale: deciding which information goes in, from where, in what order, to produce analysis you'd actually put money behind.

One thing worth getting honest about...

Citation presence is not citation accuracy. A Columbia Journalism Review study tested eight AI search tools and found that even the best performer carried a 37% error rate in its citations. More than one in three cited claims had problems: wrong sources, sources that didn't support the stated claim, or sources that argued something entirely different from what the AI attributed to them. The report looked polished. The footnotes existed. Over a third of them were broken.

That's not a reason to avoid AI research. It's a reason to stop outsourcing your thinking to it. The skill you're building here isn't "how to use Perplexity." It's how to think like a researcher who happens to have tireless, very fast assistants who occasionally make things up.

Two mediocre tools with great question decomposition will beat one perfect tool with a lazy question. Every time.

Research Analyst

The prompt turns the AI into a senior research methodologist who walks you through the complete three-layer research stack using YOUR actual work question. Not a hypothetical. Not a classroom exercise. The project on your desk, the decision you're weighing this week.

Before you paste

Make sure you're in your course project

Your my-context.md, my-learning-style.md, and my-knowledge.md files should be attached to the project

Have a real research question ready. Something where the answer would actually change a decision you're making this week or this month. Not "something interesting." Something with stakes.

After the exercise, run the Learning Extraction Prompt to update your knowledge file and save your session notes

What happens in each phase

research-analyst-prompt.xml - 5 phases + mastery gate

PHASE 1 Context Bridge: establish your real research question with actual stakes

PHASE 2 Question Decomposition: break the messy question into sharp, answerable sub-questions

PHASE 3 Tool Matching: assign each sub-question to the right research category

PHASE 4 Source Evaluation & Synthesis: run sub-questions, evaluate findings, build the findings grid

PHASE 5 Artifact Production: produce the research analysis and reusable workflow document

GATE Mastery Gate: 7 real-world scenarios testing the complete research workflow

What you should see: The AI reads your files, asks for your real research question, then walks you through each layer. You'll decompose a messy question into sharp sub-questions, match each to the right tool category, evaluate findings for trustworthiness, and synthesize everything into a structured analysis. By the end, you'll have a completed research analysis and a reusable workflow document. Expect 30-45 minutes of active work.

research-analyst-prompt.xml

Save Your Work

Run the Learning Extraction Prompt to update my-knowledge.md with what you learned.

Save to module-09-research/

research-analysis.md: your completed research analysis on a real work question

research-workflow.md: your reusable XML-structured research workflow (also save as a skill in your Module 05 workspace)

session-notes.md from the Learning Extraction Prompt

Next: Module 10: AI for Code & Building.
You just learned to write research questions as precise specifications for information. Module 10 applies the same muscle to writing specifications for software. Instead of a research analysis, you walk away with a working tool.

Run This After Every Module

After completing the module prompt above, paste this into the same conversation. The AI reviews everything that just happened and extracts what you actually learned: not what was presented, but what you demonstrated.

learning-extraction-prompt.xml