What /blindspots Does
/blindspots is a Claude prompt prefix that does something most codes don't: it changes what Claude attends to before answering. Specifically, it makes Claude audit the assumptions baked into your question, surface the 2-3 things you probably haven't considered, and THEN answer.
In our testing of 120 popular Claude codes, only 5 codes showed consistent reasoning-shift behavior — meaning they produced genuinely different conclusions, not just different wording. /blindspots is one of them.
When /blindspots Earns Its Place
You use /blindspots when:
- You're about to execute a decision you've been thinking about for weeks
- The answer feels too obvious (that's usually when you're missing something)
- You're presenting a plan to your team and want pre-mortem input first
- You're asking for advice and suspect your framing is part of the problem
You don't use /blindspots when:
- The question is genuinely simple
- You just want a fast answer
- You've already thought through alternatives and picked one
A Real Example
Let me show you what /blindspots actually does with a question a founder might ask.
Baseline (no prefix):
Q: Should I hire a head of sales or a founding sales engineer first?
A: Both roles have value at different stages. A head of sales brings strategic leadership and team-building experience. A founding sales engineer brings technical credibility and direct deal-closing ability. Consider your current sales process maturity, the technical complexity of your product, and your team's existing sales experience.
Classic Claude hedge. Enumerates the tradeoffs. Doesn't help you decide.
With /blindspots:
Q: /blindspots Should I hire a head of sales or a founding sales engineer first?
A: Direct answer: founding sales engineer, unless you already have someone on the team doing sales well.
But here's what your question assumes that I'd push back on:
You're assuming your next hire should be in sales. Most early-stage companies with "we need to hire sales" problems actually have product-market fit problems, and adding sales hires masks the underlying issue instead of solving it. Before hiring, I'd want to know: what's your current conversion rate from qualified call to close? If it's under 20% with a technical founder pitching, no salesperson will fix that — the product isn't ready yet.
You're framing this as head-of-sales vs founding-sales-engineer. There's a third option you didn't name: an experienced individual contributor AE who's been a founding rep at another startup. Cheaper than a VP, more execution-focused than an engineer. Often the right first hire between $500K and $3M ARR.
You haven't said what your current sales motion looks like. If you're doing founder-led sales, a VP walks into a vacuum — no process, no data, no cadence. They need 3-6 months to build, during which you're paying executive salary for setup work. A sales engineer integrates with existing founder-led motion from day one.
Given those, my direct answer: founding sales engineer if you're under $2M ARR, VP of Sales if you're above $2M ARR with a working motion already.
Same question. Fundamentally different answer. Not just better phrasing — /blindspots found three hidden assumptions that change what the correct recommendation actually is.
How /blindspots Works Under The Hood
Most prompt codes work by changing Claude's output style. /blindspots works by inserting a reasoning step BEFORE the answer:
- Read the user's question
- Identify 2-3 assumptions or unstated constraints in the question
- Evaluate whether each assumption is load-bearing (would the answer change if the assumption is wrong?)
- Answer the question + surface the most load-bearing assumptions
This is what we call a "reasoning-shifter" pattern — the prefix changes what Claude attends to before generating, not just how the output is phrased. See our full classification of tested codes for which others fall in this category.
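Mechanically, all the prefix does on the client side is change the text that reaches Claude. A minimal sketch of that composition, assuming the standard Messages API request shape (apply_blindspots and build_request are hypothetical helper names, and the model ID is a placeholder — check Anthropic's docs for current IDs):

```python
def apply_blindspots(question: str) -> str:
    """Prepend the /blindspots prefix so the assumption audit
    runs before Claude commits to an answer."""
    return f"/blindspots {question.strip()}"

def build_request(question: str, model: str = "claude-sonnet-4-6") -> dict:
    """Build a Messages-API-shaped payload. The model ID here is
    a placeholder assumption, not a documented value."""
    return {
        "model": model,
        "max_tokens": 1024,
        "messages": [
            {"role": "user", "content": apply_blindspots(question)}
        ],
    }

req = build_request("Should I hire a head of sales or a founding sales engineer first?")
print(req["messages"][0]["content"])
```

The point is that nothing magic happens client-side: the reasoning shift comes entirely from Claude seeing the prefix before the question.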
/blindspots vs Similar-Sounding Codes
Several codes claim to do similar things. They don't all work:
/blindspots vs /skeptic
- /blindspots surfaces what you DIDN'T ask. Audits assumptions.
- /skeptic rejects the framing of what you DID ask. Refuses the question if it's broken.
They overlap but they're not redundant. Use /blindspots when you want an answer + context; use /skeptic when you want Claude to refuse the question if it's malformed.
/blindspots vs /deepthink
- /blindspots finds missing context (horizontal depth)
- /deepthink goes deeper on what you already asked (vertical depth)
/deepthink makes Claude reason longer about the given question. /blindspots makes Claude reason about what's NOT in the given question.
/blindspots vs ULTRATHINK
- /blindspots tests as reasoning-shifter (changes conclusions)
- ULTRATHINK tests as placebo (changes vocabulary, not reasoning)
See our anti-pattern library for why ULTRATHINK doesn't work.
/blindspots vs "what am I missing?"
You can just ask "what am I missing?" at the end of any prompt. That works about 60% as well as /blindspots. The difference: /blindspots forces the audit to happen BEFORE the answer, not as an afterthought once Claude has already committed to a direction.
Advanced Usage
Stack with L99 for decisive pre-mortems
L99 /blindspots We're about to launch at $49/month. Should we?
Combines forced-commitment (L99) with assumption-audit (/blindspots). Claude picks a side AND names the assumptions that would make the other side correct.
Stack with PERSONA for expert pre-mortems
PERSONA Act as a VP of Product at a Series B SaaS who has launched 4 products. /blindspots Here's my product launch plan: [paste]
Gives you an experienced operator applying the /blindspots audit to your specific plan, with the operator's industry context shaping which assumptions count as "load-bearing."
Stack with /punch for sharp feedback
/blindspots /punch Review my pitch deck for investor call tomorrow: [paste]
The audit surfaces assumptions. /punch makes the feedback direct instead of mealy-mouthed.
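The stacking patterns above are plain string composition: codes go in front, in the order shown in the examples. A tiny sketch (stack is a hypothetical helper, not part of any SDK):

```python
def stack(codes: list[str], question: str) -> str:
    """Join prompt codes in order, then the question.
    Earlier codes appear first, matching the stacking examples."""
    return " ".join([*codes, question.strip()])

print(stack(["L99", "/blindspots"], "We're about to launch at $49/month. Should we?"))
```

The same helper covers the /punch stack: stack(["/blindspots", "/punch"], "Review my pitch deck...").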
Common Failure Modes
Failure 1: Applying to trivial questions
/blindspots on "what's the capital of France?" is wasteful. The question has no assumptions to audit. Save the code for questions where assumptions actually matter.
Failure 2: Expecting it to find information Claude doesn't have
/blindspots can only audit assumptions within the context you provide. It can't tell you about market conditions Claude doesn't know about. If you need external information, provide it upfront.
Failure 3: Using without real context
Vague questions produce vague /blindspots output. "Should I raise a round?" gets you generic assumption-auditing. "Should I raise a $2M seed at $12M valuation to extend my 6-month runway while we're at $30K MRR with 40% month-over-month growth?" gets you specific assumption-auditing.
FAQ
Is /blindspots an official Anthropic feature?
No. Like L99 and /skeptic, it's a community-discovered prompt pattern that reliably triggers a specific reasoning behavior in Claude. Not in the docs. No API parameter.
Does it work with Claude Haiku 4.5?
Yes. Tested cleanly across Haiku 4.5, Sonnet 4.6, and Opus 4. Output quality is best on Sonnet — Haiku's audits are shorter and sometimes miss nuance.
Will /blindspots stop working in future Claude versions?
Potentially, if Anthropic bakes assumption-auditing into RLHF. For now it's safe to use. We re-test when new models ship and update the /insights dashboard classifications.
Can I use it for code reviews?
Yes — it's excellent for that. /blindspots Review this authentication flow: [paste] will surface assumptions in your threat model that you didn't notice, not just point out bugs.
How many assumptions does it typically surface?
2-4. If you want more, ask explicitly: /blindspots List the top 5 assumptions I'm making in this plan.
Where are more codes like this?
Our free prompts library has all 5 reasoning-shifters plus ~95 other tested codes. The Cheat Sheet has the full classification + before/after test data for each.
Try /blindspots Right Now
Take the most important decision you're making this week. Write it in Claude.ai as a clear question. Prefix it with /blindspots. Send.
The 2-3 assumptions Claude surfaces will either (a) confirm your plan is solid or (b) save you from making an expensive mistake. Takes 30 seconds.