
SAMWISE Prompt Bank — AINS-M6001 (Personal Attention & Influence Systems)

These prompts guide post-lecture reflection. They are deliberately specific and adversarial, probing hidden assumptions, ethical edge cases, and the links between formal structure and real tradeoffs.

Faculty may assign a subset; students choose one as the SAMWISE Prompt in reflection_template.md.


Lecture 1 — Marketing world models

  1. Ontology vs vanity: Which entities did you include because they flatter your self-image rather than because they change outcomes?

  2. Commensurability: Where did you convert trust or attention into numbers you cannot defend under challenge?

  3. Exogenous fog: What important driver did you label “noise” or “luck” to avoid modeling it?

  4. Objective function: If your model implicitly maximizes one metric, state it in one sentence. Is it reach, revenue, reputation, or ego?

  5. Tail risk: What single event would make your world model irrelevant overnight? Is that event in your scenario set?

  6. Next perturbation: Name an audience or channel you refuse to model — what changes if you must include it?


Lecture 2 — Attention allocation

  1. Stated vs revealed: Where does your calendar contradict your stated strategy?

  2. Recovery: Where is rest modeled — and what breaks if you delete it?

  3. Coupling: What external agent (employer, client, family) constrains your best creative hours? Encode one coupling.

  4. Metric gaming: If you optimized your tracked metric only, what would you stop doing that you believe matters ethically?

  5. Diminishing returns: Point to a block where extra hours yield sharply lower marginal output. Are you spending there anyway?

  6. Counterfactual: If you moved 20% of your attention from creation to distribution, what happens to trust two weeks later? Hypothesize even if your model doesn’t include lag yet.
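The counterfactual above can be sketched as a toy simulation. The trust dynamics, lag length, and every constant below are illustrative assumptions, not course-supplied values.

```python
# Toy sketch: move 20 points of attention from creation to distribution and
# watch a hypothetical trust signal respond with a two-week lag.
# All dynamics and constants are illustrative assumptions.

def simulate_trust(creation_share, weeks=6, lag=2):
    """Weekly trust levels under a fixed attention split.

    Assumed dynamics: trust decays 5% per week and is replenished in
    proportion to creation quality from `lag` weeks earlier.
    """
    trust = [1.0]
    history = []
    for week in range(weeks):
        history.append(creation_share)  # quality proxy for this week
        lagged = history[week - lag] if week >= lag else creation_share
        trust.append(trust[-1] * 0.95 + 0.1 * lagged)
    return trust

baseline = simulate_trust(creation_share=0.7)  # current split
shifted = simulate_trust(creation_share=0.5)   # 20 points moved to distribution
delta = shifted[2] - baseline[2]               # trust gap two weeks after the shift
```

Even this crude lag model forces the question the prompt asks: the cost of the shift shows up weeks later, not immediately.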


Lecture 3 — Credibility and leverage

  1. Proof rate: How does your weekly evidence throughput compare with the strength of the claims you make?

  2. Convexity: Where is your downside in reputation larger than your upside from the next campaign?

  3. Silence: Name a situation where the best marketing move is to say nothing. Does your model allow abstention?

  4. Borrowed trust: Where are you leaning on someone else’s credibility — and what happens if they withdraw?

  5. Inconsistency tax: List one past inconsistency your audience might remember. Is that memory in your state?

  6. Ethical edge: Describe a persuasion tactic your model could optimize but you will not deploy. Is non-deployment encoded or only in your head?


Lecture 4 — Message–market fit

  1. Niche horror: What audience are you afraid to exclude — and how does fear widen your niche artificially?

  2. Distance metric: Why did you choose your mismatch metric? What changes if you switch metric?

  3. Evidence: What interview or log data would falsify your audience_need_vector?

  4. Differentiation: What do you refuse to claim — and who does that refusal delight?

  5. Competitive substitute: If a competitor copied your message verbatim, what remains yours — proof, speed, relationship?

  6. Overfit: Could your message vectors be storytelling after the fact? How would you test that?
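As a sketch of why the choice of mismatch metric matters (prompt 2), the snippet below compares a cosine (direction-only) mismatch with a Euclidean (magnitude-sensitive) one between a hypothetical message vector and audience_need_vector. The vectors and both metric choices are illustrative assumptions.

```python
import math

def cosine_mismatch(message_vector, audience_need_vector):
    """1 - cosine similarity: 0 means perfectly aligned direction."""
    dot = sum(m * a for m, a in zip(message_vector, audience_need_vector))
    norm_m = math.sqrt(sum(m * m for m in message_vector))
    norm_a = math.sqrt(sum(a * a for a in audience_need_vector))
    return 1.0 - dot / (norm_m * norm_a)

def euclidean_mismatch(message_vector, audience_need_vector):
    """Straight-line distance: penalizes intensity gaps, not just direction."""
    return math.sqrt(sum((m - a) ** 2
                         for m, a in zip(message_vector, audience_need_vector)))

# Same direction, double the intensity: cosine sees a perfect fit,
# Euclidean sees a real gap. Switching metrics changes the verdict.
message = [2.0, 4.0, 0.0]
need = [1.0, 2.0, 0.0]
```

If your conclusion flips when the metric changes, the metric is doing strategic work and deserves its own justification.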


Lecture 5 — Content economics

  1. Unit cost honesty: What is the true hours-per-unit for your best work — including rework and anxiety?

  2. Half-life: Which channel decay are you ignoring because it hurts?

  3. Quality proxy: What observable variable stands in for quality — and how gameable is it?

  4. Backlog risk: What happens if you skip a week — does your model punish you realistically?

  5. Burnout path: Simulate (mentally or in code) three months at your planned cadence. Where does the human break?

  6. Abstain: When is silence optimal? Would your optimizer agree?
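The half-life prompt can be made concrete with a simple exponential-decay sketch; the channel half-lives and reach numbers below are illustrative assumptions, not measured values.

```python
def residual_reach(initial_reach, half_life_days, day):
    """Reach remaining `day` days after publishing, under exponential decay."""
    return initial_reach * 0.5 ** (day / half_life_days)

# Hypothetical channels: a fast feed (1-day half-life) vs an evergreen
# search channel (90-day half-life), one week after publishing.
feed = residual_reach(1000.0, half_life_days=1.0, day=7.0)
evergreen = residual_reach(1000.0, half_life_days=90.0, day=7.0)
```

The point of the exercise is the gap between the two numbers: ignoring decay on a fast channel overstates your backlog's value by orders of magnitude.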


Lecture 6 — Integration

  1. Coherence: In one paragraph, describe your integrated system — what is the biggest internal inconsistency?

  2. Sensitivity: Which three parameters, when perturbed by ±20%, move outcomes the most — are you measuring them in real life?

  3. Life delta: What real decision changed because of the model — what decision resisted change?

  4. Meta-learning: What mistake did you repeat across lectures — what guardrail stops it next term?

  5. Fork: If another student forked your model and maximized reach only, what breaks first — trust, fit, or burnout?

  6. Ethics: Where does your integrated model hide moral tradeoffs inside “optimization”?
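One reading of the sensitivity prompt is a one-at-a-time parameter sweep, sketched below. The outcome model, parameter names, and values are invented for illustration only.

```python
def sensitivity(model, params, bump=0.2):
    """Largest outcome swing when each parameter moves ±bump, others fixed."""
    base = model(params)
    swings = {}
    for name, value in params.items():
        up = model({**params, name: value * (1 + bump)})
        down = model({**params, name: value * (1 - bump)})
        swings[name] = max(abs(up - base), abs(down - base))
    return swings

# Invented outcome model: reach grows linearly with cadence and channel
# strength, but quadratically with quality.
def reach_model(p):
    return p["posts_per_week"] * p["quality"] ** 2 * p["channel_factor"]

params = {"posts_per_week": 3.0, "quality": 0.8, "channel_factor": 100.0}
ranked = sorted(sensitivity(reach_model, params).items(),
                key=lambda kv: -kv[1])
```

Under this toy model, quality dominates the ranking; that is the kind of parameter worth actually measuring.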


Lecture 7 — MCP and real signals

  1. Falsification: What signal, if it moved against you, would force a rule change — not a story change?

  2. Scope creep: Where are you tempted to pull more data than your ethics or MCP policy allows?

  3. Synthetic honesty: If you used synthetic data, where is the documented bridge to behavior you could measure later?

  4. Metric vs meaning: Which number are you tracking because it is easy, not because it is strategic?

  5. Differentiation in data: What would “proof of distinctiveness” look like as an observable — not a brand statement?

  6. Agency: Who else’s privacy is implicated by your MCP connections — did you model that?
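The falsification prompt can be encoded as a pre-committed tripwire rather than a post-hoc narrative judgment. The signal values, floor, and three-week window below are assumptions for illustration.

```python
def rule_change_required(signal_history, floor, weeks=3):
    """True when the signal has stayed below a pre-committed floor for
    `weeks` consecutive weeks (a tripwire, not a story)."""
    recent = signal_history[-weeks:]
    return len(recent) == weeks and all(s < floor for s in recent)

# Hypothetical weekly reply-rate signal, with a floor committed in advance.
triggered = rule_change_required([0.12, 0.04, 0.03, 0.02], floor=0.05)
```

Committing to the floor and window before the data arrives is what makes the rule change forced rather than negotiable.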


Lecture 8 — Defense and lineage

  1. Lineage gap: If a reviewer opened your repo, where would they get lost — and what will you fix before defense?

  2. Feared question: What is the one challenge you hope to avoid — draft your answer without defensiveness.

  3. Revision story: Name the single biggest mistake you corrected across the eight lectures. What guardrail prevents recurrence?

  4. Automation boundary: What will you not hand to an agent or MCP workflow — and is that boundary documented?

  5. Tail + ethics: Give one scenario where aggressive optimization is legal but wrong for your brand. Does your model encode “stop”?

  6. Handoff: One sentence you want faculty to remember about your system after you leave the room.


Disclose AI assistance where required by program policy. Undisclosed substitution of reflection is misconduct.