All tools / predict_virality · 2 credits · synthesize

What is predict_virality?

predict_virality is a Hooklayer MCP tool that scores a draft script 0-100 against viral patterns AND runs an adversarial second-pass that hunts failure modes. Returns the headline virality_score (skeptical), optimistic_score (upstream), score_range, signals with evidence, attack_vectors[] with severity + mitigation, and a would_fail_because counterfactual. Breaks the self-grading loop.

Two structurally independent scores. The optimistic_score comes from the upstream forensic analyzer that mirrors the generation distribution — useful but susceptible to cardinal coupling (it scores almost any well-formed Hooklayer-generated script at ~87). The adversarial score (the headline virality_score) comes from a separate Claude pass with the opposite system prompt: "your job is to BREAK this script." When the gap exceeds 15 points, agents should use the adversarial number for go/no-go decisions.
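The decision rule above can be sketched in a few lines. A minimal sketch: field names match the documented response schema, and the 15-point cutoff is the one stated in the docs.

```python
def decision_score(response: dict) -> tuple[int, bool]:
    """Pick the number to use for a go/no-go call.

    Assumes the documented response fields: virality_score (the
    adversarial headline) and optimistic_score (upstream analyzer).
    """
    gap = response["optimistic_score"] - response["virality_score"]
    # Past a 15-point disagreement, the optimistic number is unreliable
    # for this specific script; defer to the adversarial one.
    optimistic_reliable = gap <= 15
    return response["virality_score"], optimistic_reliable

# gap is 60 here, so the optimistic score is flagged unreliable
decision_score({"virality_score": 15, "optimistic_score": 75})
```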

Named attack vectors. Returns 3-5 specific failure modes from a fixed taxonomy: skip_at_3s, share_blocker, algorithm_suppressed, audience_mismatch, credibility_collapse, comment_killer, saturation_clone, anti_climax, platform_mismatch. Each has severity (low | medium | high) and a one-line mitigation. Actionable, not vague.

Calibration anchors at 25 / 45 / 65 / 80 / 95. The post-processor names the closest anchor and flags whether the upstream score AGREES, OVERSTATES, or UNDERSTATES relative to that anchor. Lets you trust the score band even when you don't trust the number.
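A plausible reconstruction of that post-processing step. The anchor list comes from the docs; the 5-point "agrees" band is an assumption for illustration, not a documented threshold.

```python
ANCHORS = (25, 45, 65, 80, 95)  # documented calibration anchors

def calibration_check(upstream_score: int, agree_band: int = 5) -> dict:
    """Name the closest anchor and how the upstream score relates to it."""
    anchor = min(ANCHORS, key=lambda a: abs(a - upstream_score))
    drift = upstream_score - anchor
    if abs(drift) <= agree_band:  # assumed tolerance for "agrees"
        agreement = "agrees"
    elif drift > 0:
        agreement = f"overstated by ~{drift}"
    else:
        agreement = f"understated by ~{-drift}"
    return {"anchor": anchor, "agreement": agreement}

calibration_check(87)  # closest anchor is 80; upstream reads high
```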

Inputs & outputs

Endpoint: POST /api/v1/predict

Inputs

  • script (string, required)

    The draft script to predict on (or a video URL — auto-extracts transcript)

  • niche (string)

    Niche context

Output fields

  • virality_score

    Headline score = adversarial (skeptical) — what the script realistically earns after attack vectors land

  • optimistic_score

    Upstream forensic-analyzer score (preserved for back-compat)

  • score_range

    [adversarial, optimistic] — honest variance band

  • adversarial_gap

    optimistic minus adversarial — when >15, the optimistic score is unreliable

  • signals

    6 evidence-backed signals: hook_strength, structural_clarity, retention_design, emotional_stakes, shareability_trigger, specificity_density

  • attack_vectors

    3-5 failure modes with severity + mitigation

  • would_fail_because

    One-sentence counterfactual: closest version that drops 20+ points and why

  • calibration_check

    anchor (25 | 45 | 65 | 80 | 95) + agreement (agrees | overstated by ~X | understated by ~X)

  • cache_hit

    True when input matches a cached response within 24h TTL

cURL

curl -X POST https://hooklayer.dev/api/v1/predict \
  -H "Authorization: Bearer hl_live_..." \
  -H "Content-Type: application/json" \
  -d '{
    "script": "I will give you $1,000. Spend it on a phone or put it in an index fund. One is worth $19,000 in 30 years..."
  }'

Example prompts

Paste any of these into Claude Desktop (with Hooklayer connected) to see the live response.

Adversarial pre-publish check

Use Hooklayer predict_virality on this draft script: "Hey guys today I want to talk about saving money. So basically you should save more and spend less. That is the whole video bye." Show me BOTH the headline virality_score (adversarial) AND the optimistic_score side by side with the adversarial_gap. List every attack_vectors item with its severity and mitigation. If the gap is >15, explicitly tell me the optimistic score is unreliable.

Expected output: Returns optimistic ~75, adversarial ~15, gap ~60. attack_vectors lists skip_at_3s, share_blocker, and similar failure modes, each with severity high. The gap is the cardinal coupling problem made visible.

Calibration agreement check

Score this script with Hooklayer predict_virality: [paste script]. Surface the calibration_check field — anchor and agreement. If the agreement says "overstated by ~X" tell me what specifically in the script is inflating the upstream score.

Expected output: Returns calibration_check.anchor (e.g. "65") and agreement (e.g. "agrees" or "overstated by ~12"). The agent reports drift in plain language.

Chain after viral_remix

First call Hooklayer viral_remix on a viral TikTok URL with my_topic="early-stage SaaS hiring." Take the fresh_script.hook from the response and call score_hook on it. Then run predict_virality on the FULL fresh_script. Show me all three responses chained. Use the headline virality_score from predict_virality (the adversarial number, not the optimistic_score) as the go/no-go signal.

Expected output: Demonstrates the no-self-grading pattern. viral_remix doesn't self-score; score_hook gives an independent hook signal; predict_virality gives the independent script signal via adversarial pass.

Frequently asked

Why are there two scores?

Cardinal coupling. The same AI distribution that generates scripts also scores them — so well-formed Hooklayer-generated scripts always land at the upstream analyzer's "good not great" mode (~87). predict_virality fixes this by running an adversarial second-pass with the opposite system prompt — its job is to BREAK the script, not validate it. The adversarial_score is the realistic number after failure modes land; the optimistic_score is preserved for back-compat.

When should I use the adversarial vs the optimistic score?

For go/no-go decisions, use the adversarial (it's the headline virality_score in v2). For directional sanity-checking and comparison across scripts, the optimistic is fine. The adversarial_gap field tells you how much they disagree — gap >15 means the optimistic is unreliable for this specific script and you should defer to the adversarial.

What does each attack_vector mean?

skip_at_3s: hook doesn't survive the scroll-test. share_blocker: no reason to send to a friend. algorithm_suppressed: format/topic triggers known suppression. audience_mismatch: implied audience and message misaligned. credibility_collapse: speaker claims authority not earned. comment_killer: nothing invites engagement. saturation_clone: format/pattern overdone. anti_climax: payoff weaker than hook promised. platform_mismatch: written for wrong platform.
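Since the taxonomy is fixed, an agent can guard against schema drift by checking returned vector names against it. A small sketch; the name field on each item is an assumption about the item shape.

```python
# The nine documented failure modes.
TAXONOMY = {
    "skip_at_3s", "share_blocker", "algorithm_suppressed",
    "audience_mismatch", "credibility_collapse", "comment_killer",
    "saturation_clone", "anti_climax", "platform_mismatch",
}

def unknown_vectors(attack_vectors: list[dict]) -> list[str]:
    """Names returned outside the fixed taxonomy (schema-drift guard)."""
    return [v["name"] for v in attack_vectors if v["name"] not in TAXONOMY]
```

An empty result means every returned vector is one the mitigation playbook already covers.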

Is the score deterministic?

Yes, within 24 hours: a hash cache keyed on (script, niche) returns the same response for identical inputs. The cache_hit boolean surfaces this. Credits are still charged on cache hits — the promise is determinism, not cost relief.
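That determinism implies a content-addressed cache key over the (script, niche) pair. A minimal sketch; the real hashing scheme is not documented, this is purely illustrative.

```python
import hashlib

def cache_key(script: str, niche: str = "") -> str:
    """Stable key over (script, niche); identical inputs map to one entry."""
    # Length-prefix each part so ("ab", "") and ("a", "b") can't collide.
    payload = f"{len(script)}:{script}|{len(niche)}:{niche}"
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()
```

Same inputs always produce the same key, which is what makes the 24-hour TTL observable via cache_hit.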

Can I pass a video URL instead of a script string?

Yes. URLs auto-extract via Whisper (adds ~3-5s latency, no extra credit). Useful for scoring competitor videos retroactively — pass their URL, get the adversarial breakdown of why their video did or didn't work.

What does would_fail_because tell me beyond attack_vectors?

attack_vectors lists 3-5 named failure modes from a taxonomy. would_fail_because is a single sentence describing the closest version of the script that scores 20+ points lower and why. Different layers — attack_vectors is "what could break THIS script"; would_fail_because is "what would be a worse version of the same idea." Together they tell you what to defend against and what to avoid drifting into.

Try predict_virality in 30 seconds.

100 free credits at signup. No card. Works in Claude Desktop, Cursor, n8n, or any MCP client.