Hooklayer for LangChain
LangChain 0.3+ ships an MCP client adapter. Pass Hooklayer's URL and Bearer header to the MCPToolkit constructor; all 7 tools are returned as standard LangChain Tool objects you can hand to AgentExecutor, LangGraph nodes, or any chain composition. Works with OpenAI, Anthropic, Vertex AI models.
What works in LangChain
- All 7 Hooklayer tools as LangChain Tools
- Cross-provider model support
- LangGraph node integration
- Streaming responses
- OpenTelemetry tracing
Setup (90s)
Example script: your_agent.py
```python
from langchain_mcp_adapters import MCPToolkit
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_openai import ChatOpenAI

# Connect to Hooklayer's remote MCP
toolkit = MCPToolkit(
    server_url="https://hooklayer.dev/api/mcp",
    headers={"Authorization": f"Bearer {HOOKLAYER_API_KEY}"},
    transport="http",
)
hooklayer_tools = toolkit.get_tools()  # list of 7 LangChain Tools

# Hand off to a tool-calling agent
llm = ChatOpenAI(model="gpt-5", temperature=0)
agent = create_tool_calling_agent(llm, hooklayer_tools, prompt)
executor = AgentExecutor(agent=agent, tools=hooklayer_tools, verbose=True)
result = executor.invoke({
    "input": "Analyze @humphreytalks and execute the recommended_chain."
})
```

1. Install the adapter
   pip install langchain-mcp-adapters langchain langchain-openai. v0.3+ required.
2. Get your API key
   Sign up at hooklayer.dev/auth/signup for an hl_live_ Bearer key.
3. Wire the MCPToolkit
   Instantiate MCPToolkit with server_url plus an Authorization header, then call get_tools() to receive 7 LangChain Tool objects.
4. Compose with your chain
   Pass the tools to AgentExecutor, LangGraph nodes, or any chain that accepts a list of BaseTool. Standard LangChain composition rules apply.
Example prompts for LangChain
Paste any of these to see Hooklayer respond live.
LangGraph workflow: analyze → remix → score
```python
# LangGraph nodes wired in sequence:
# Node 1: extract_handle (string parser)
# Node 2: hooklayer_analyze (calls analyze_account tool)
# Node 3: hooklayer_remix (calls viral_remix on top recommended_chain url)
# Node 4: hooklayer_score (calls score_hook on the generated hook)
# Node 5: conditional_edge (if score >= 75 → save, else → loop to node 3)
```

Expected: A declarative state-machine workflow using LangGraph. Each Hooklayer call is a typed node, and the loop-back edge gives quality-gated content generation.
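The loop-back gate in Node 5 can be sketched in plain Python before wiring it into LangGraph. In this sketch, viral_remix and score_hook are local stubs with canned, deterministic scores (v1 → 60, v2 → 70, v3 → 80), not real Hooklayer calls:

```python
def viral_remix(url: str, attempt: int) -> str:
    # Stub: a real call would go through the MCP toolkit's viral_remix tool.
    return f"hook-v{attempt} for {url}"

def score_hook(hook: str) -> int:
    # Stub scoring, canned for determinism: v1 -> 60, v2 -> 70, v3 -> 80.
    version = int(hook.split()[0].split("-v")[1])
    return 50 + 10 * version

def quality_gated_generate(url: str, threshold: int = 75, max_attempts: int = 5) -> str:
    for attempt in range(1, max_attempts + 1):
        hook = viral_remix(url, attempt)
        if score_hook(hook) >= threshold:
            return hook  # conditional edge: score >= threshold -> save
    return hook  # budget exhausted; return the last attempt

print(quality_gated_generate("https://example.com/post"))
# -> hook-v3 for https://example.com/post
```

The same if/else becomes the predicate of a LangGraph conditional edge, with max_attempts preventing an infinite remix loop.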
AgentExecutor with multi-model fallback
```python
# Try Claude first, fall back to GPT-5 if rate-limited:
for model in [ChatAnthropic(model="claude-opus-4-7"), ChatOpenAI(model="gpt-5")]:
    try:
        agent = create_tool_calling_agent(model, hooklayer_tools, prompt)
        result = AgentExecutor(agent=agent, tools=hooklayer_tools).invoke(...)
        break
    except RateLimitError:
        continue
```

Expected: Cross-provider resilience. Hooklayer tools work identically across models because they're MCP-defined, not provider-specific. Failover is graceful.
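Stripped of the LangChain classes, the failover pattern is just a try/except loop over candidate callables. A runnable sketch with stand-in models — RateLimitError here is a local stub, where real code would catch the provider SDK's rate-limit exception:

```python
class RateLimitError(Exception):
    """Local stub; real code catches the provider SDK's rate-limit error."""

def flaky_model(prompt: str) -> str:
    # Simulates a provider that is currently rate-limited.
    raise RateLimitError("429 Too Many Requests")

def backup_model(prompt: str) -> str:
    # Simulates the fallback provider answering normally.
    return f"answer to: {prompt}"

result = None
for model in (flaky_model, backup_model):
    try:
        result = model("Analyze @humphreytalks")
        break  # first model that answers wins
    except RateLimitError:
        continue  # move on to the next provider

print(result)  # answer to: Analyze @humphreytalks
```

Because the tool list is the same object across providers, only the model changes between iterations.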
Tracing with LangSmith / OpenTelemetry
```python
# Set LANGCHAIN_TRACING_V2=true and LANGCHAIN_API_KEY for LangSmith.
# Every Hooklayer tool call is traced as a span with:
#   - tool name (analyze_account, score_hook, etc.)
#   - latency
#   - credits consumed (from response)
#   - cache_hit boolean
# Use the trace to debug long agent runs and identify which tool is the
# latency hotspot.
```

Expected: Full observability. LangSmith / OTel traces show each Hooklayer call with timing, credits, and cache behavior — essential for cost-tuning agent workflows.
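The span fields listed above can be approximated without LangSmith. A hypothetical tracing decorator (not an official API) that records tool name, latency, credits, and cache_hit from each canned response:

```python
import time

traces: list[dict] = []

def traced(tool_name: str):
    # Hypothetical wrapper; LangSmith/OTel would record real spans instead.
    def wrap(fn):
        def inner(*args, **kwargs):
            start = time.perf_counter()
            response = fn(*args, **kwargs)
            traces.append({
                "tool": tool_name,
                "latency_ms": (time.perf_counter() - start) * 1000,
                "credits": response.get("credits", 0),
                "cache_hit": response.get("cache_hit", False),
            })
            return response
        return inner
    return wrap

@traced("score_hook")
def score_hook(hook: str) -> dict:
    # Canned response shaped like the fields described above.
    return {"score": 82, "credits": 1, "cache_hit": False}

score_hook("Your first 3 seconds decide everything.")
print(traces[0]["tool"])  # score_hook
```

Summing latency_ms per tool over a long run is enough to spot the hotspot the comment block describes.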
Frequently asked
Does LangChain support OAuth or only Bearer?
For backend agents (no end-user in the loop), Bearer auth via header injection is the standard pattern — LangChain doesn't need OAuth here. For end-user-facing agents where each user authorizes their own Hooklayer access, implement an OAuth handshake before instantiating MCPToolkit per-user.
Can I use Hooklayer tools in a custom LangChain chain (not Agent)?
Yes. The tools returned by MCPToolkit.get_tools() are standard LangChain BaseTool instances — you can call them directly via .invoke() in any chain composition, not just AgentExecutor.
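A minimal sketch of that direct-invoke pattern, using a hypothetical stand-in class for a tool from get_tools() (real tools are BaseTool instances that take a dict of arguments):

```python
class StubTool:
    """Hypothetical stand-in for a BaseTool returned by get_tools()."""
    name = "score_hook"

    def invoke(self, args: dict) -> dict:
        # A real tool would forward args to Hooklayer over MCP.
        return {"score": 82, "hook": args["hook"]}

score_tool = StubTool()
out = score_tool.invoke({"hook": "Stop scrolling for 3 seconds."})
print(out["score"])  # 82
```

In a chain composition, that .invoke() call slots in anywhere a runnable step does — no agent loop required.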
How does this work with LangGraph state machines?
Each Hooklayer tool becomes a node in your LangGraph. Wire them with conditional edges (e.g., if score_hook returns < 70, loop back to viral_remix). The recommended_chain field from analyze_account can directly populate downstream node parameters.
Will LangChain re-fetch the tool catalog on every call?
No. MCPToolkit caches the catalog for the lifetime of the toolkit instance. Re-instantiate the toolkit if you deploy a new Hooklayer tool you want to use mid-process.
Can I use Hooklayer with LangSmith evaluation?
Yes. Hooklayer's response includes deterministic fields (signals[], would_fail_because, calibration_check) that make agent outputs evaluable. LangSmith's LLM-as-judge can grade against these structured fields rather than free-text outputs.
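A sketch of grading against those structured fields. The field names come from the answer above; the sample payloads and the grade() helper are hypothetical, not a LangSmith API:

```python
# Rubric built from deterministic fields, so no LLM-as-judge is needed
# for these checks.
expected = {
    "signals": ["curiosity_gap", "pattern_break"],
    "calibration_check": True,
}
agent_output = {
    "signals": ["curiosity_gap", "pattern_break"],
    "calibration_check": True,
    "would_fail_because": None,
}

def grade(output: dict, rubric: dict) -> bool:
    # Exact-match grading on each rubric field.
    return all(output.get(key) == value for key, value in rubric.items())

print(grade(agent_output, expected))  # True
```

An LLM judge can then be reserved for the genuinely free-text parts of the output.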
Are there async versions of the tools?
Yes. MCPToolkit returns async-capable tools — use .ainvoke() instead of .invoke() to call Hooklayer asynchronously. Essential for parallel calls (e.g., analyzing 5 creators in parallel via asyncio.gather).
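The parallel-analysis pattern mentioned above, sketched with a stub coroutine standing in for a tool's .ainvoke() call (the sleep simulates network latency):

```python
import asyncio

async def analyze_account(handle: str) -> dict:
    # Stub for tool.ainvoke({"handle": handle}); sleep simulates latency.
    await asyncio.sleep(0.01)
    return {"handle": handle, "score": 70}

async def main() -> list[dict]:
    handles = ["@a", "@b", "@c", "@d", "@e"]
    # All five calls run concurrently; results come back in input order.
    return await asyncio.gather(*(analyze_account(h) for h in handles))

results = asyncio.run(main())
print(len(results))  # 5
```

With real tools, the five analyses take roughly one round-trip instead of five sequential ones.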
Try it in LangChain.
100 free credits at signup. No card required. LangChain setup in 90 seconds.
