SAMWISE Prompt Bank — AINS-M6001 (Personal Attention & Influence Systems)
These prompts guide post-lecture reflection. They are specific and adversarial, probing hidden assumptions, ethics edges, and the links between formal structure and real tradeoffs.
Faculty may assign a subset; students choose one as the SAMWISE Prompt in reflection_template.md.
Lecture 1 — Marketing world models
Ontology vs vanity: Which entities did you include because they flatter your self-image rather than because they change outcomes?
Commensurability: Where did you convert trust or attention into numbers you cannot defend under challenge?
Exogenous fog: What important driver did you label “noise” or “luck” to avoid modeling it?
Objective function: If your model implicitly maximizes one metric, state it in one sentence. Is it reach, revenue, reputation, or ego?
Tail risk: What single event would make your world model irrelevant overnight? Is that event in your scenario set? (One way to encode a scenario set is sketched after this list.)
Next perturbation: Name an audience or channel you refuse to model — what changes if you must include it?
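Several of these prompts assume the world model is written down rather than carried in your head. A minimal Python sketch of one possible encoding follows; the entities, drivers, objective sentence, and the platform_ban tail scenario are all illustrative assumptions, not course-supplied values.

```python
from dataclasses import dataclass, field

@dataclass
class Scenario:
    """A named perturbation applied to the model's drivers."""
    name: str
    driver_shocks: dict[str, float]  # multiplicative shocks to drivers

@dataclass
class WorldModel:
    entities: list[str]                        # who/what you actually model
    drivers: dict[str, float]                  # named drivers, not "noise"
    objective: str                             # stated in one sentence
    scenarios: list[Scenario] = field(default_factory=list)

    def outcome(self, shocks: dict[str, float] | None = None) -> float:
        """Toy outcome: product of drivers, optionally shocked."""
        shocks = shocks or {}
        result = 1.0
        for name, value in self.drivers.items():
            result *= value * shocks.get(name, 1.0)
        return result

# Illustrative values only.
model = WorldModel(
    entities=["newsletter readers", "one anchor client"],
    drivers={"reach": 2_000, "trust": 0.6, "conversion": 0.02},
    objective="Maximize qualified replies per month, not raw reach.",
    scenarios=[Scenario("platform_ban", {"reach": 0.05})],  # the tail event
)

for s in model.scenarios:
    print(s.name, model.outcome(s.driver_shocks))  # does the tail survive?
```

If the shocked outcome collapses, the event belongs in your scenario set, not in the "exogenous fog."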
Lecture 2 — Attention allocation
Stated vs revealed: Where does your calendar contradict your stated strategy?
Recovery: Where is rest modeled — and what breaks if you delete it?
Coupling: What external agent (employer, client, family) constrains your best creative hours? Encode one coupling, as in the sketch after this list.
Metric gaming: If you optimized your tracked metric only, what would you stop doing that you believe matters ethically?
Diminishing returns: Point to a block where extra hours yield sharply lower marginal output. Are you spending there anyway?
Counterfactual: If you moved 20% attention from creation to distribution, what happens to trust two weeks later (even if your model doesn’t include lag yet — hypothesize)?
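For the coupling prompt above, one minimal way to encode an external claim on your hours is as a hard feasibility constraint rather than a preference. The hour figures and block names below are illustrative assumptions.

```python
# Weekly attention budget with one external coupling encoded as a hard
# constraint. All hours and block names are illustrative.
WEEKLY_HOURS = 50
blocks = {"create": 18, "distribute": 10, "client_work": 15, "rest": 7}

# Coupling: an employer claims 15 weekday-morning hours, and those hours
# must come out of client_work, not creation or rest.
EMPLOYER_CLAIM = 15

def feasible(blocks: dict[str, int]) -> bool:
    if sum(blocks.values()) > WEEKLY_HOURS:
        return False
    if blocks["client_work"] < EMPLOYER_CLAIM:
        return False  # the coupling binds before your preferences do
    if blocks["rest"] <= 0:
        return False  # deleting rest is a modeling error, not a plan
    return True

print(feasible(blocks))  # True; now try moving 5 h from client_work to create
```

Treating the external claim as infeasibility rather than mere disutility forces the tradeoff into the open.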
Lecture 3 — Credibility and leverage
Proof rate: What is your evidence throughput per week vs your claim strength?
Convexity: Where is your downside in reputation larger than your upside from the next campaign?
Silence: Name a situation where the best marketing move is to say nothing. Does your model allow abstention?
Borrowed trust: Where are you leaning on someone else’s credibility — and what happens if they withdraw?
Inconsistency tax: List one past inconsistency your audience might remember. Is that memory in your state?
Ethical edge: Describe a persuasion tactic your model could optimize but you will not deploy. Is non-deployment encoded or only in your head? (See the sketch after this list.)
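The silence and ethical-edge prompts both ask whether restraint is structural. A minimal sketch, assuming a toy scored action set: abstention is a first-class action and banned tactics are excluded before optimization, so non-deployment lives in the model rather than in your head. All action names and scores are hypothetical.

```python
# Abstention and non-deployment as structure, not willpower.
actions = {
    "launch_campaign": 3.0,
    "publish_case_study": 2.5,
    "say_nothing": 0.0,           # abstention is a first-class action
    "manufactured_scarcity": 5.0, # highest score, ethically off-limits
}
BANNED = {"manufactured_scarcity"}  # encoded, not "only in your head"

def best_action(actions: dict[str, float], reputation_risk: float) -> str:
    allowed = {a: v for a, v in actions.items() if a not in BANNED}
    def value(a: str) -> float:
        # Speaking carries reputation risk; silence does not.
        return allowed[a] - (0.0 if a == "say_nothing" else reputation_risk)
    return max(allowed, key=value)

print(best_action(actions, reputation_risk=4.0))  # -> say_nothing
```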
Lecture 4 — Message–market fit
Niche horror: What audience are you afraid to exclude — and how does that fear widen your niche artificially?
Distance metric: Why did you choose your mismatch metric? What changes if you switch metric? (A concrete flip is sketched after this list.)
Evidence: What interview or log data would falsify your audience_need_vector?
Differentiation: What do you refuse to claim — and who does that refusal delight?
Competitive substitute: If a competitor copied your message verbatim, what remains yours — proof, speed, relationship?
Overfit: Could your message vectors be storytelling after the fact? How would you test that?
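For the distance-metric prompt, here is a small concrete demonstration that the metric choice alone can flip a fit ranking. audience_need_vector matches the prompt's name; all vector values are made up for illustration.

```python
import math

audience_need_vector = [0.9, 0.1, 0.4]
messages = {
    "msg_a": [0.8, 0.1, 0.3],
    "msg_b": [2.7, 0.3, 1.2],  # same direction as the need, larger magnitude
}

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def cosine_distance(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return 1 - dot / norm

for name, vec in messages.items():
    print(name,
          round(euclidean(audience_need_vector, vec), 3),
          round(cosine_distance(audience_need_vector, vec), 4))
# Euclidean prefers msg_a; cosine prefers msg_b. The metric decides the ranking.
```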
Lecture 5 — Content economics
Unit cost honesty: What is the true hours-per-unit for your best work — including rework and anxiety?
Half-life: Which channel decay are you ignoring because it hurts?
Quality proxy: What observable variable stands in for quality — and how gameable is it?
Backlog risk: What happens if you skip a week — does your model punish you realistically?
Burnout path: Simulate (mentally or in code) three months at your planned cadence. Where does the human break? (A starting-point simulation follows this list.)
Abstain: When is silence optimal? Would your optimizer agree?
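For the burnout-path prompt, a deliberately crude twelve-week simulation you could adapt; every rate and threshold here is an assumption to be replaced with your own numbers.

```python
# Twelve weeks at a planned cadence, with fatigue and channel decay.
# All rates and thresholds are illustrative assumptions.
POSTS_PER_WEEK = 3
HOURS_PER_POST = 4
RECOVERY_HOURS = 6       # what happens if you delete this?
FATIGUE_LIMIT = 40.0
HALF_LIFE_WEEKS = 2.0    # per-channel content half-life

decay = 0.5 ** (1 / HALF_LIFE_WEEKS)
fatigue, reach = 0.0, 0.0
for week in range(1, 13):
    fatigue += POSTS_PER_WEEK * HOURS_PER_POST - RECOVERY_HOURS
    quality = max(0.0, 1.0 - fatigue / FATIGUE_LIMIT)  # gameable proxy
    reach = reach * decay + POSTS_PER_WEEK * quality
    if fatigue >= FATIGUE_LIMIT:
        print(f"week {week}: the human breaks (fatigue={fatigue:.0f})")
        break
    print(f"week {week}: fatigue={fatigue:.0f} reach={reach:.1f}")
```

If the break arrives before week twelve, the cadence, not the human, is the bug.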
Lecture 6 — Integration
Coherence: One paragraph describing your integrated system — what is the biggest internal inconsistency?
Sensitivity: Top three parameters that move outcomes ±20% — are you measuring them in life? (A crude sweep follows this list.)
Life delta: What real decision changed because of the model — what decision resisted change?
Meta-learning: What mistake did you repeat across lectures — what guardrail stops it next term?
Fork: If another student forked your model and maximized reach only, what breaks first — trust, fit, or burnout?
Ethics: Where does your integrated model hide moral tradeoffs inside “optimization”?
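For the sensitivity prompt, a one-at-a-time ±20% sweep is the simplest starting point. The outcome function and parameter values below are illustrative assumptions; only the sweep pattern is the point.

```python
# Crude one-at-a-time sensitivity sweep: perturb each parameter by ±20%
# and rank by normalized outcome swing. Values are illustrative.
params = {"reach": 2_000.0, "trust": 0.6, "conversion": 0.02, "price": 400.0}

def outcome(p: dict[str, float]) -> float:
    # Diminishing returns on reach; trust saturates at a cap.
    return (p["reach"] ** 0.5) * min(p["trust"], 0.7) * p["conversion"] * p["price"]

base = outcome(params)
swings = {}
for name in params:
    hi = outcome({**params, name: params[name] * 1.2})
    lo = outcome({**params, name: params[name] * 0.8})
    swings[name] = (hi - lo) / base  # normalized swing per parameter

for name, swing in sorted(swings.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {swing:+.0%}")   # measure the top three in real life
```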
Lecture 7 — MCP and real signals
Falsification: What signal, if it moved against you, would force a rule change — not a story change? (A pre-committed example follows this list.)
Scope creep: Where are you tempted to pull more data than your ethics or MCP policy allows?
Synthetic honesty: If you used synthetic data, where is the documented bridge to behavior you could measure later?
Metric vs meaning: Which number are you tracking because it is easy, not because it is strategic?
Differentiation in data: What would “proof of distinctiveness” look like as an observable — not a brand statement?
Agency: Who else’s privacy is implicated by your MCP connections — did you model that?
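For the falsification prompt, one way to make a rule change automatic rather than narrative is to pre-commit it in code. The signal name, window, threshold, and replacement rule below are all hypothetical.

```python
# A falsification rule pre-committed in code: if the tracked signal moves
# against you past a threshold, the rule changes instead of the story.
from statistics import mean

reply_rate_last_4_weeks = [0.028, 0.021, 0.016, 0.012]  # hypothetical pull
FALSIFICATION_THRESHOLD = 0.02

rule = "Publish twice weekly on the current channel."
if mean(reply_rate_last_4_weeks) < FALSIFICATION_THRESHOLD:
    rule = "Halve cadence; reallocate hours to direct outreach for 4 weeks."

print(rule)  # the replacement rule fires; no story required
```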
Lecture 8 — Defense and lineage
Lineage gap: If a reviewer opened your repo, where would they get lost — and what will you fix before defense?
Feared question: What is the one challenge you hope to avoid — draft your answer without defensiveness.
Revision story: Name the single biggest mistake you corrected across the eight lectures. What guardrail prevents recurrence?
Automation boundary: What will you not hand to an agent or MCP workflow — and is that boundary documented?
Tail + ethics: Give one scenario where aggressive optimization is legal but wrong for your brand. Does your model encode “stop”?
Handoff: One sentence you want faculty to remember about your system after you leave the room.
Legal / integrity note
Disclose AI assistance where required by program policy. Undisclosed substitution of reflection is misconduct.