April 16, 2026 · Claude Skills Hub · claude, haiku-4.5, prompt-codes

Claude Haiku 4.5 Prompt Codes That Actually Work (Tested vs Sonnet 4.6)

Re-tested 120 prompt codes when Haiku 4.5 shipped. 20% of rankings shifted. Here's which codes still work, which got better, and which broke.

Why Model Version Matters For Prompt Codes

When a new Claude model ships, community-discovered prompt codes don't always carry forward cleanly. RLHF training shifts, vocabulary distributions change, and what reliably triggered a behavior on Sonnet 3.5 might do nothing on Sonnet 4.6.

When Haiku 4.5 shipped, I re-ran our testing harness on all 120 codes. About 20% of rankings shifted. Some codes got better. A few broke. Most held up.
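The ranking comparison boils down to a diff over per-code classification tables. A minimal sketch, assuming made-up classification labels — the real harness and scoring rubric aren't published here:

```python
def ranking_shifts(old: dict, new: dict) -> dict:
    """Return codes whose classification changed between model versions.

    `old` and `new` map prompt-code name -> classification label.
    Labels are illustrative, not the article's actual taxonomy.
    """
    return {
        code: (old[code], new[code])
        for code in old
        if code in new and old[code] != new[code]
    }

# Illustrative tables -- not the real test results:
sonnet_46 = {"/trim": "formatter", "ULTRATHINK": "placebo", "PARETO": "synthesizer"}
haiku_45 = {"/trim": "formatter", "ULTRATHINK": "placebo", "PARETO": "degraded"}
shifted = ranking_shifts(sonnet_46, haiku_45)
# shifted == {"PARETO": ("synthesizer", "degraded")}
```

Run that over all 120 codes and the shifted fraction is your "20% of rankings moved" number.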

Here's the full breakdown for anyone switching to Haiku 4.5 or using it for cost-sensitive workloads.

What Changed With Haiku 4.5

Codes that got BETTER on Haiku 4.5

L99 — Haiku 4.5 actually commits more decisively than Sonnet 4.6. Sonnet occasionally over-explains the tradeoffs before giving the answer; Haiku skips straight to the call. If you're using L99 primarily for speed + decisiveness, Haiku is the better choice.

/trim — Noticeably tighter outputs on Haiku than Sonnet. Haiku's base verbosity is lower, so /trim compounds.

/raw — Works more reliably on Haiku. Sonnet sometimes adds a preamble ("Here's the response without formatting:") before the raw output. Haiku drops it cleanly.
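If you're stuck on Sonnet and still hit that preamble, stripping it post-hoc is cheap. The regex below covers only the phrasing quoted above — it's an illustrative assumption, not an exhaustive list of Claude's preamble variants:

```python
import re

# Matches a leading line like "Here's the response without formatting:".
# Extend the alternatives as you observe other preambles in the wild.
PREAMBLE = re.compile(
    r"^here(?:'s| is) the (?:response|output)[^\n]*:\s*\n",
    re.IGNORECASE,
)

def strip_preamble(text: str) -> str:
    """Drop a single leading preamble line if present."""
    return PREAMBLE.sub("", text, count=1)

print(strip_preamble("Here's the response without formatting:\nraw body"))
# prints "raw body"
```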

/focus — Better on Haiku for code-in-monorepo questions where you want Claude to anchor on a specific file. The smaller attention window actually helps here.

Codes that HELD UP unchanged

Most codes maintained their classification across models:

  • /skeptic — Same reasoning-shifter behavior. Equally good on Haiku.
  • /blindspots — Surfaces the same kinds of assumptions. Quality equivalent.
  • /ghost — Works cleanly on Haiku. Maybe slightly better at stripping AI tells (Haiku's style is less flowery to begin with).
  • /mirror — Matches reference style. Both models equivalent.
  • /punch — Sharpens language. Haiku's output is slightly shorter; same directness.
  • PERSONA — Works the same with real specifics. Fails the same way with vague personas.
  • OODA — Same framework application. Output quality matches.

Codes that got WORSE on Haiku 4.5

/deepthink — Haiku doesn't have the attention budget to do truly deep reasoning. /deepthink on Sonnet produces 5-8 reasoning steps; on Haiku it produces 3-4 and truncates earlier. For genuinely hard reasoning, stay on Sonnet.

ULTRATHINK — Still placebo on Haiku. Actually slightly worse — produces shorter verbose hedging instead of longer verbose hedging. Net: still no reasoning shift.

PARETO — Quality drop on Haiku. The 80/20 synthesis needs more context than Haiku comfortably holds. Use Sonnet for multi-source synthesis.

Long persona setups (CRISPE) — CRISPE framework prompts are slightly less effective on Haiku because the long setup context gets diluted. They still work, but need tighter editing. See the CRISPE guide for how to tighten them.

Codes that BROKE completely

None, actually. Every previously-working code still works at some level on Haiku 4.5. The failures are gradual, not binary.

The one code we had to reclassify: ARTIFACTS. Haiku 4.5 doesn't support artifacts the same way Sonnet 4.6 does in the API. If you're using ARTIFACTS as a prefix in Claude Code with Haiku, expect inconsistent results.

The Haiku-Specific Recommendations

If your workload is on Haiku 4.5 (usually for cost reasons — Haiku is ~7x cheaper than Sonnet), here's the tested set of codes that actually pull their weight:

For decisions

  • L99 — forces commitment. Better on Haiku than Sonnet.
  • /skeptic — rejects broken framings. Equivalent.

For writing

  • /ghost — strips AI tells. Equivalent.
  • /punch — sharpens language. Marginal Haiku advantage.
  • /trim — cuts filler. Haiku advantage.
  • /mirror — matches style. Equivalent.

For analysis

  • /blindspots — surfaces assumptions. Equivalent.
  • /focus — anchors on specific context. Haiku advantage.
  • PERSONA — expert perspective (with real specifics). Equivalent.

Skip on Haiku

  • /deepthink — use Sonnet
  • PARETO — use Sonnet
  • ULTRATHINK — skip on any model (placebo)
  • Long CRISPE setups — tighten them or use Sonnet

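If you're wiring this into a script, the recommendations above collapse into a small routing table. The model ID strings and dict structure here are our assumptions based on the article's naming, not an official mapping — check the current API model list before shipping this:

```python
# Codes with a clear model winner, per the classifications above.
# Anything not listed defaults to Haiku (the cheap path).
CODE_MODEL = {
    "/deepthink": "claude-sonnet-4-6",  # Haiku truncates deep reasoning
    "PARETO": "claude-sonnet-4-6",      # synthesis needs more context
    "L99": "claude-haiku-4-5",          # Haiku commits more decisively
    "/trim": "claude-haiku-4-5",        # lower base verbosity compounds
}

def pick_model(code: str, default: str = "claude-haiku-4-5") -> str:
    """Route a prompt code to the cheapest model that handles it well."""
    return CODE_MODEL.get(code, default)
```

Call `pick_model("/deepthink")` and you get Sonnet; everything unlisted falls through to Haiku.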
See classifications for 40 deep-tested codes →

Why Haiku 4.5 For Prompt Codes

Cost

Haiku is roughly 7-8x cheaper than Sonnet per token. For high-volume use cases (processing many docs, running automated scripts, batch generation), the cost difference is significant.
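The batch math is worth sanity-checking for your own volumes. The per-million-token rates below are placeholders chosen to match the article's ~7-8x ratio, not real prices — look up current API pricing before budgeting:

```python
# Placeholder blended rates ($/M tokens); only the ~7.5x ratio matters here.
HAIKU_PER_MTOK = 1.0
SONNET_PER_MTOK = 7.5

def batch_cost(docs: int, tokens_per_doc: int, rate_per_mtok: float) -> float:
    """Cost of a batch job: total tokens / 1M * rate."""
    return docs * tokens_per_doc / 1_000_000 * rate_per_mtok

# 10,000 docs at ~2,000 tokens each:
haiku_cost = batch_cost(10_000, 2_000, HAIKU_PER_MTOK)    # $20
sonnet_cost = batch_cost(10_000, 2_000, SONNET_PER_MTOK)  # $150
```

At high volume the gap is the whole ballgame: same batch, $20 vs $150 under these assumed rates.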

Speed

Haiku outputs faster. For interactive use where you're iterating on prompts, the faster feedback loop matters. Especially when combined with short-output codes like /trim and /punch.

"Good enough" threshold

For 70% of founder tasks, Haiku 4.5 is good enough. The reasoning-shifter codes (L99, /skeptic, /blindspots) all work. You save the Sonnet budget for tasks that genuinely need it — complex multi-step reasoning, long-context synthesis, expert-level technical work.

When to switch to Sonnet

  • Architecture decisions with 5+ interacting systems
  • Long-context synthesis (multiple documents over 30K tokens)
  • Code review on 200+ line functions
  • Strategic reasoning with many constraints
  • Anything where the cost of a wrong answer is high

For everything else, Haiku 4.5 + the tested prompt codes above is the efficient frontier.

The Haiku 4.5 Workflow We Actually Use

For running the /prompts live demo and the automated outreach agent:

  • Default: Haiku 4.5 with /focus + /punch for most tasks
  • Escalation to Sonnet 4.6: Only for /deepthink questions or multi-doc synthesis
  • Opus 4.5: Never. Not worth the 20x cost delta vs Sonnet for our workload

The same principle applies to your own Claude usage. Start on Haiku. Escalate only when Haiku output is visibly worse. Most tasks don't need more than Haiku.
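The "start on Haiku, escalate only when visibly worse" rule can be sketched as a tiny wrapper. Everything here is an assumption: the model IDs, the `run(model, task)` signature standing in for your API client, and especially the degradation check — a real one might score against a rubric or use a judge model rather than crude length heuristics:

```python
HAIKU = "claude-haiku-4-5"
SONNET = "claude-sonnet-4-6"

def looks_degraded(output: str) -> bool:
    """Crude proxy for 'visibly worse': truncated or suspiciously short."""
    return len(output) < 50 or output.rstrip().endswith("...")

def route(task, run):
    """Try Haiku first; escalate to Sonnet only on visible failure.

    `run(model, task) -> str` is a stand-in for your API wrapper.
    """
    out = run(HAIKU, task)
    if looks_degraded(out):
        out = run(SONNET, task)
    return out
```

The point of the sketch: escalation is reactive, not predictive. You pay Sonnet prices only after Haiku has demonstrably failed.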

FAQ

Does Claude Code support Haiku 4.5?

Yes. Set the model flag when running: `claude --model claude-haiku-4-5-20251001`. Or set the `model` field in your Claude Code settings file (`settings.json`): `"model": "claude-haiku-4-5"`.

Is Haiku 4.5 good enough for Claude Code agent work?

For agents that mostly read/write files and do simple reasoning, yes. For agents that need to plan multi-step architecture changes, use Sonnet.

Which model should I use with the Cheat Sheet?

All codes in the Cheat Sheet are tested across Haiku 4.5 and Sonnet 4.6, with per-code classification notes indicating where each model shines.

What about Opus 4.5?

Opus is stronger than Sonnet on genuinely hard reasoning but costs 20x more. For prompt-code-driven workflows, the marginal improvement usually doesn't justify the cost. We test on Opus quarterly to check for behavior changes but don't recommend it as default.

When will you re-test on Claude 5?

When it ships. If rankings shift significantly, we republish the /insights classifications and email everyone who bought the Pro Cheat Sheet (lifetime updates included).

Do the free codes still work on Haiku 4.5?

Yes. Our 100 free prompt codes all work on Haiku 4.5. Performance notes for each model are in the Cheat Sheet's tested classifications.

Bottom Line

Haiku 4.5 + 5-7 well-tested prompt codes = most founder workloads handled for 1/7th the Sonnet cost. The marginal quality gain from Sonnet is worth paying for only on specific reasoning-heavy tasks.

The reasoning-shifter codes (L99, /skeptic, /blindspots) all work identically on both models. If you've been using these on Sonnet, switch to Haiku for most of your usage. The quality difference is smaller than you'd expect.

Want the full research library?

120 tested Claude prompt codes with before/after output and token deltas.

See the Cheat Sheet — $15