
Hooklayer for OpenAI Agents SDK

OpenAI's Agents SDK supports MCP natively. Register Hooklayer as an MCP server via the SDK's server config, pass your Bearer hl_live_ key in the authorization header, and the 7 Hooklayer tools become callable from any OpenAI-powered agent. Works with GPT-5, GPT-5-mini, and reasoning models.

What works in OpenAI Agents SDK

  • All 7 Hooklayer tools
  • Cross-provider compatible (use Hooklayer with GPT models)
  • Streamable HTTP MCP transport
  • Bearer auth (no OAuth needed for backend agents)
  • recommended_chain readable by GPT-5 reasoning loop

Setup (60s)

Agent file: your_agent.py

import asyncio
import os

from agents import Agent, Runner
from agents.mcp import MCPServerStreamableHttp


async def main():
    # Register Hooklayer as a streamable-HTTP MCP server
    async with MCPServerStreamableHttp(
        name="hooklayer",
        params={
            "url": "https://hooklayer.dev/api/mcp",
            "headers": {
                "Authorization": f"Bearer {os.environ['HOOKLAYER_API_KEY']}",  # hl_live_*
            },
        },
    ) as hooklayer:
        # Create an agent with Hooklayer's tools available
        agent = Agent(
            name="content-research-agent",
            model="gpt-5",
            mcp_servers=[hooklayer],
            instructions=(
                "You are a viral-content research agent. When the user asks about "
                "a TikTok creator, use Hooklayer's analyze_account tool and then "
                "fire the recommended_chain that the response returns."
            ),
        )

        # Run the agent
        result = await Runner.run(agent, "Analyze @humphreytalks and tell me what's portable.")
        print(result.final_output)


asyncio.run(main())
  1. Install the SDK

     pip install openai-agents (Python) or npm install @openai/agents (TypeScript). v1.0+ required for MCP support.

  2. Get your API keys

     OPENAI_API_KEY (from platform.openai.com) and HOOKLAYER_API_KEY (hl_live_ from hooklayer.dev/auth/signup).

  3. Register Hooklayer as an MCP server

     In your agent code, instantiate the SDK's MCP server class with Hooklayer's URL and Bearer authorization header, then include it in the Agent's mcp_servers list.

  4. Run the agent

     Run the agent with a user prompt. The OpenAI model reads Hooklayer's tool catalog, picks the right tool, and fires it.
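Step 4 works because of MCP's standard discovery handshake: the SDK sends one tools/list call per session and the model plans against the returned catalog. A minimal sketch of that exchange, assuming the MCP JSON-RPC shape (descriptions abridged; only the tools named on this page are listed):

```python
import json

# JSON-RPC 2.0 request the SDK sends once per session to discover tools
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# Abridged response shape; only tools named on this page are shown
list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {"name": "analyze_account", "description": "..."},
            {"name": "score_hook", "description": "..."},
            {"name": "match_voice", "description": "..."},
            # ...plus the remaining Hooklayer tools
        ]
    },
}

# The catalog of names the model chooses from
catalog = [tool["name"] for tool in list_response["result"]["tools"]]
print(json.dumps(catalog))
```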

Example prompts for OpenAI Agents SDK

Paste any of these to see Hooklayer respond live.

Backend agent: analyze creators and email a report

# Python pseudo-code:
result = agent.run(
    "Analyze @humphreytalks, @herfirst100k, and @mrbeast on TikTok. "
    "Compare their replicability scores and return the highest. "
    "Fire match_voice on the winner using their top 3 video URLs."
)
# result.final_output contains the comparison + voice_metrics output

Expected: Backend agent runs 3 analyze_account calls in parallel + 1 match_voice on the winner. Total cost: 17 credits. Demonstrates non-interactive (server-side) Hooklayer use.

Reasoning model + Hooklayer for go/no-go content gates

# Agent uses gpt-5 reasoning mode:
result = agent.run(
    "I want to post this hook on TikTok: [hook text]. "
    "Run it through Hooklayer's score_hook. If it scores below 70, "
    "auto-rewrite using the rewrites[] array and re-score. Repeat "
    "until the score crosses 75 or we hit 3 rewrites."
)

Expected: Reasoning loop that gates publish on Hooklayer score. Iterates the rewrites array deterministically. Cost: 1 credit per score call (24h cache means iteration is cheap).
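The gate described above is small enough to write as a deterministic loop. A sketch only: call_score_hook is a stub standing in for the real MCP tool call, and its response shape (a score plus a rewrites[] array) mirrors what this page describes:

```python
def call_score_hook(hook: str) -> dict:
    # Stub: in the real agent this is one score_hook MCP call
    # (1 credit; the 24h cache makes repeat scoring cheap).
    return {"score": 62 + 10 * hook.count("!"), "rewrites": [hook + "!"]}


def gate_hook(hook: str, floor: int = 70, target: int = 75, max_rewrites: int = 3):
    result = call_score_hook(hook)
    if result["score"] >= floor:
        return hook, result["score"]      # passes the gate as-is
    for _ in range(max_rewrites):
        hook = result["rewrites"][0]      # take the top suggested rewrite
        result = call_score_hook(hook)    # re-score the rewrite
        if result["score"] >= target:
            break
    return hook, result["score"]


final_hook, final_score = gate_hook("You need a Roth IRA")
print(final_hook, final_score)
```

The loop terminates after at most max_rewrites re-scores, so the credit cost is bounded before the agent ever runs.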

Multi-provider workflow: OpenAI generates, Hooklayer scores

# Hybrid agent:
result = agent.run(
    "Generate 5 TikTok hooks for a personal-finance video about Roth IRAs. "
    "Then run each through Hooklayer's score_hook. Return the top 2 hooks "
    "with their signals[] arrays and would_fail_because counterfactuals."
)

Expected: OpenAI handles generation; Hooklayer handles scoring/verification. The structurally-independent scoring prevents OpenAI from grading its own homework.
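The generate-then-verify split can be sketched with both sides stubbed. In the real agent, generation happens inside the OpenAI model and scoring is Hooklayer's score_hook tool; the stub scores below are arbitrary placeholders:

```python
def generate_hooks(topic: str, n: int = 5) -> list[str]:
    # Stub generator standing in for the OpenAI model
    return [f"Hook {i}: the {topic} mistake costing you ${i}000/yr" for i in range(1, n + 1)]


def score_hook(hook: str) -> dict:
    # Stub scorer standing in for the structurally independent Hooklayer call
    return {"score": 50 + sum(map(ord, hook)) % 40, "signals": [], "would_fail_because": None}


# Generation and scoring never share state, so the scorer can't be
# steered by the generator's own confidence.
hooks = generate_hooks("Roth IRA")
ranked = sorted(hooks, key=lambda h: score_hook(h)["score"], reverse=True)
top_two = ranked[:2]
print(top_two)
```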

Frequently asked

Does Hooklayer work with non-Anthropic models?

Yes. MCP is provider-agnostic — Hooklayer is just an HTTP API that speaks the MCP protocol. OpenAI Agents SDK, LangChain, custom Python/TypeScript clients, and any HTTP MCP-capable system can call it. No Anthropic dependency.

Can GPT-5 read the recommended_chain field automatically?

Yes. The recommended_chain is just JSON in the response. GPT-5's reasoning loop reads the structured output, sees the suggested next tools with pre-filled params, and decides whether to execute them. Same agentic pattern Claude uses.
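Concretely, an agent loop can treat recommended_chain as data and dispatch it. The field names inside each chain entry below are illustrative assumptions for this sketch; only the recommended_chain field itself comes from this page:

```python
# Illustrative analyze_account response fragment (entry fields are assumed)
response = {
    "replicability_score": 81,
    "recommended_chain": [
        {"tool": "match_voice", "params": {"video_urls": ["https://..."]}},
    ],
}


def dispatch(chain: list[dict], tools: dict) -> list[str]:
    # Execute each suggested step with its pre-filled params
    executed = []
    for step in chain:
        tools[step["tool"]](**step["params"])
        executed.append(step["tool"])
    return executed


calls = []
tools = {"match_voice": lambda video_urls: calls.append(video_urls)}
executed = dispatch(response["recommended_chain"], tools)
print(executed)
```

In practice the model does this dispatch itself inside its reasoning loop; the sketch just shows that the chain is plain structured data, not anything provider-specific.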

How does authentication differ from Claude.ai?

For backend agents, use the Bearer hl_live_ key directly in the Authorization header — no OAuth needed because there's no end-user OAuth flow. OAuth 2.1 is only needed when an end user authorizes a third-party app to call Hooklayer on their behalf.

Does the SDK cache tool catalogs?

Yes. The SDK calls tools/list once per session and caches the catalog. If you deploy a new Hooklayer tool, your agent picks it up on the next session start (not mid-conversation).

Is there a rate-limit difference for backend agents?

No. Hooklayer's rate limits are per-API-key, not per-client. Starter tier: 60 req/min. Pro: 300/min. Agency: 1,000/min. Backend agents typically run well below these caps.

Can I use multiple MCP servers in one agent?

Yes. The mcp_servers parameter accepts a list. Common pattern: Hooklayer for content intelligence + Composio for posting + Stripe for billing. Each server's tools appear in the agent's combined tool catalog with namespacing.
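The combined-catalog idea can be sketched as simple name prefixing: each server's tool names get the server name as a namespace so tools from different providers can't collide. The composio tool name below is a made-up placeholder; only the Hooklayer names come from this page:

```python
# Hypothetical per-server catalogs
catalogs = {
    "hooklayer": ["analyze_account", "score_hook", "match_voice"],
    "composio": ["post_video"],  # placeholder tool name
}

# Namespaced combined catalog the agent would plan against
combined = sorted(f"{server}.{tool}" for server, tools in catalogs.items() for tool in tools)
print(combined)
```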

Try it in OpenAI Agents SDK.

100 free credits at signup. No card. OpenAI Agents SDK setup in 60 seconds.