All tools / score_hook · 1 credit · synthesize

What is score_hook?

score_hook is a Hooklayer MCP tool that takes a TikTok/Reels/Shorts hook string and returns a 0-100 score against proven viral patterns. The response includes the percentile, the matched viral pattern, three rewrites at materially higher scores, a 6-signal evidence array, and a `would_fail_because` counterfactual naming the closest variant that scores 20 points lower.

It's the AI-slop quality gate every content workflow needs: score every generated hook before publishing, reject anything below 70, auto-rewrite to 85+. One credit per call, deterministic via a 24-hour hash cache (same hook input → same score).
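That gate is a few lines to wire into a pipeline. A minimal sketch, assuming you already have a score_hook response as a dict and that each rewrite carries `text` and `score` fields (the helper name, the rewrite sub-shape, and the 70 threshold are illustrative, not part of the API contract):

```python
# Quality-gate sketch: keep the original hook if it clears the threshold,
# otherwise fall back to the highest-scoring rewrite.
# `response` mirrors the score_hook JSON; the rewrite sub-shape
# ({"text": ..., "score": ...}) is an assumption for this sketch.

def gate_hook(text: str, response: dict, threshold: int = 70) -> tuple[str, int]:
    """Return (hook_text, score) after applying the quality gate."""
    if response["score"] >= threshold:
        return text, response["score"]
    # Below the bar: pick the best of the three suggested rewrites.
    best = max(response["rewrites"], key=lambda r: r["score"])
    return best["text"], best["score"]
```

In an agent loop you would call score_hook, pass the result through `gate_hook`, and only publish what comes out the other side.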

Calibration anchors at 10/30/50/70/85/95 enforce score discrimination. The model is required to name the closest anchor in the `why` field — so "82/100" comes with "Closer to the 85 anchor — has the contrast but missing one concrete number."

Six signals are returned per scoring: specificity, emotional_stake, scroll_stop_velocity, credibility_signal, pattern_freshness, share_trigger. Each is scored 0-10 with an evidence string quoting the concrete phrase from the hook that drove the score, so agents cite the phrase rather than paraphrasing it.

Inputs & outputs

Endpoint: POST /api/v1/hook/score

Inputs

  • text (string, required)

    The hook to score (3-200 chars typical)

  • platform (string, required)

    tiktok | reels | shorts — affects pattern matching

  • niche (string)

    Optional niche context like "Finance & Business"

Output fields

  • score

    0-100 hook quality score

  • percentile

    Rank vs corpus of representative hooks

  • pattern_match

    Named viral pattern (Knowledge Gap, Result First, Receipt, Mid-Action Drop, etc.)

  • rewrites

    3 rewritten versions at materially higher scores using different patterns

  • why

    One sentence naming the closest calibration anchor

  • breakdown

    5 sub-scores 0-20 each (hook_strength, structure, retention, CTA, shareability)

  • signals

    6 evidence-backed signals 0-10 each with cited evidence strings

  • would_fail_because

    One-sentence counterfactual naming the closest -20 version

  • cache_hit

    True when input matches a cached response within 24h TTL
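Put together, the fields above compose a response shaped roughly like the sketch below. Values are illustrative, and the nested sub-shapes for rewrites and signals are assumptions for the sketch, not documented guarantees:

```python
# Illustrative score_hook response shape. Field values and the nested
# rewrite/signal sub-shapes are assumptions, not a contract.
example_response = {
    "score": 82,
    "percentile": 87,
    "pattern_match": "Receipt",
    "rewrites": [
        {"text": "I lost $14K to this mistake. Here's the receipt.", "score": 91},
    ],
    "why": "Closer to the 85 anchor: has the contrast but missing one concrete number.",
    "breakdown": {"hook_strength": 18, "structure": 16, "retention": 17,
                  "CTA": 14, "shareability": 17},
    "signals": {
        "specificity": {"score": 8, "evidence": "$14K"},
        # ...5 more signals, each with a quoted evidence string
    },
    "would_fail_because": "Without the dollar figure it reads as a vague claim.",
    "cache_hit": False,
}
```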

cURL

curl -X POST https://hooklayer.dev/api/v1/hook/score \
  -H "Authorization: Bearer hl_live_..." \
  -H "Content-Type: application/json" \
  -d '{"text": "3 things every Gen Z investor needs to know", "platform": "tiktok", "niche": "Finance"}'

Example prompts

Paste any of these into Claude Desktop (with Hooklayer connected) to see the live response.

Score with evidence

Use Hooklayer score_hook on this hook: "3 things every Gen Z investor needs to know." Show me the full signals[] array — all 6 with the evidence strings verbatim — plus the would_fail_because field and the calibration anchor named in the why field.

Expected output: Score lands ~50 (the generic listicle anchor). Signals show specificity=2 / emotional_stake=1 / pattern_freshness=2 etc. would_fail_because explains how it could score in the 20s.

A/B compare three hooks

Score these three hooks via Hooklayer and compare their signals[] arrays — not just the numeric scores. I want to know which one wins on credibility_signal and which one wins on specificity, with evidence. Hooks: (1) "Stop wasting money on coffee" (2) "I lost $14K to this Robinhood mistake — here's the receipt" (3) "Most 22-year-olds don't know about this 401k loophole."

Expected output: Returns 3 score_hook responses. Hook 2 (Receipt pattern with concrete number + stake) wins. The comparison shows the discriminating signals, not just the numbers.

Reject AI-slop before publishing

I have a draft hook generated by ChatGPT: "Today we will discuss the importance of saving money for retirement." Run it through Hooklayer score_hook. If the score is below 70, return the 3 rewrites it suggests and tell me which rewrite I should use based on the signals it shows.

Expected output: Score lands ~30 (AI-slop anchor). Returns 3 rewrites using Receipt / Mid-Action Drop / Knowledge Gap patterns. Agent picks the highest-scoring rewrite based on pattern_match + signals.

Frequently asked

How does score_hook know what makes a hook viral?

The scoring rubric matches the input hook against 11 named viral patterns (Knowledge Gap, Receipt, Mid-Action Drop, Identity Call, Contrarian Strike, Confession, Result First, Status Threat, Pattern Interrupt, Curiosity Escalation, Generic). Each pattern is calibrated against scoring anchors at 10/30/50/70/85/95 with example hooks at each anchor — so a hook scoring 78 is mathematically closer to the 85 anchor than the 70 one. The model must name the closest anchor in the why field.

Is the score deterministic? Will I get the same number twice?

Yes, within 24 hours. The same input (text + platform + niche) returns the same response from a SHA-256-keyed cache. Low sampling temperature (0.2) plus the cache means scores don't drift across runs. A cache hit is surfaced via the cache_hit boolean. Bumping the rubric version invalidates prior cached responses.
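The caching behavior can be pictured as hashing the full input tuple. This is a sketch of the idea only; the exact key composition (separator, field order, how the rubric version is folded in) is an assumption based on the description above:

```python
import hashlib

def cache_key(text: str, platform: str, niche: str = "",
              rubric_version: str = "v1") -> str:
    """SHA-256 over all scoring inputs: identical input -> identical key
    -> identical cached response. Bumping rubric_version changes every
    key, which invalidates prior cache entries."""
    raw = "|".join([rubric_version, platform, niche, text])
    return hashlib.sha256(raw.encode()).hexdigest()
```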

What's the difference between score and percentile?

Score is the absolute calibration anchor placement (0-100 against named anchor examples). Percentile is rank against a representative feed of 100 hooks for the same platform/niche — answering "where would this land if dropped into a feed?" A 78 score might be 82nd percentile in a saturated finance niche but 65th percentile in a less competitive one.

Why do I get 3 rewrites instead of 1 best version?

Variety of patterns. Each rewrite uses a different viral pattern from the original so you can test which voice fits your account. A "Receipt" rewrite reads differently from a "Mid-Action Drop" rewrite: both might score 85, but one will fit your brand and the other won't. The agent or user picks based on voice, not just score.

Can I score hooks for Reels or YouTube Shorts?

Yes. The platform parameter accepts tiktok | reels | shorts, and the pattern matcher adjusts: Shorts weights the first 1-2 seconds more heavily than TikTok, and Reels rewards multi-character openers more. The 11 viral patterns are platform-agnostic, but the weights shift.

What does would_fail_because tell me?

The closest version of your hook that would score 20+ points LOWER, and why. Example: "If this said 'crypto secrets' instead of 'no crypto, no dropshipping', the anti-hype share trigger disappears and share rate drops roughly 40%." Names what to avoid, not just what to copy. The intelligence layer that prevents you from regressing into AI-slop on your next draft.

Try score_hook in 30 seconds.

100 free credits at signup. No card. Works in Claude Desktop, Cursor, n8n, or any MCP client.