Anchor Browser
Anchor is a platform for agentic AI browser automation. It solves the challenge of automating workflows for web applications that lack APIs or have limited API coverage, and it simplifies the creation, deployment, and management of browser-based automations by turning complex web interactions into simple API endpoints.
This notebook provides a quick overview for getting started with the Anchor Browser tools. For more information on Anchor Browser, visit Anchorbrowser.io or the Anchor Browser Docs.
Overview
Integration details
The Anchor Browser package for LangChain is langchain-anchorbrowser.
Tool features
Tool Name | Package | Description | Parameters |
---|---|---|---|
AnchorContentTool | langchain-anchorbrowser | Extract text content from web pages | url, format |
AnchorScreenshotTool | langchain-anchorbrowser | Take screenshots of web pages | url, width, height, image_quality, wait, scroll_all_content, capture_full_height, s3_target_address |
AnchorWebTaskToolKit | langchain-anchorbrowser | Perform intelligent web tasks using AI (Simple & Advanced modes) | see below |
The parameters accepted by the langchain-anchorbrowser tools are only a subset of those listed in the corresponding Anchor Browser API references: Get Webpage Content, Screenshot Webpage, and Perform Web Task.
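For example, several of the optional screenshot parameters can be passed directly on invocation. The snippet below is a minimal sketch: the parameter names come from the table above, but the values are illustrative assumptions; check the Screenshot Webpage API reference for the exact accepted values and defaults.
# Illustrative sketch: pass a few optional screenshot parameters (names taken
# from the table above; values are assumptions, see the Screenshot Webpage
# API reference for exact accepted values).
from langchain_anchorbrowser import AnchorScreenshotTool

screenshot_tool = AnchorScreenshotTool()
screenshot_tool.invoke(
    {
        "url": "https://www.anchorbrowser.io",
        "width": 1280,
        "height": 720,
        "capture_full_height": True,  # assumed to be a boolean flag
    }
)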
Info: Anchor currently implements the SimpleAnchorWebTaskTool and AdvancedAnchorWebTaskTool tools for LangChain, both backed by the browser_use agent. Each tool's parameters are listed in the table below.
AnchorWebTaskToolKit Tools
The tools in this toolkit differ only in their Pydantic input schema, summarized in the table below (a short sketch follows the table).
Tool Name | Package | Parameters |
---|---|---|
SimpleAnchorWebTaskTool | langchain-anchorbrowser | prompt, url |
AdvancedAnchorWebTaskTool | langchain-anchorbrowser | prompt, url, output_schema |
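As a minimal sketch of the simpler variant, assuming SimpleAnchorWebTaskTool is exported from the package like its advanced counterpart and accepts the prompt and url parameters listed above:
# Illustrative sketch of the simple web-task tool (prompt + url only).
# The advanced variant additionally accepts an output_schema, as shown in
# the Invocation section below.
from langchain_anchorbrowser import SimpleAnchorWebTaskTool

simple_web_task_tool = SimpleAnchorWebTaskTool()
simple_web_task_tool.invoke(
    {
        "prompt": "Summarize the main headline on this page",
        "url": "https://www.anchorbrowser.io",
    }
)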
Setup
The integration lives in the langchain-anchorbrowser package.
%pip install --quiet -U langchain-anchorbrowser
Credentials
Use your Anchor Browser credentials. You can create an API key on the Anchor Browser API Keys page.
import getpass
import os
if not os.environ.get("ANCHORBROWSER_API_KEY"):
os.environ["ANCHORBROWSER_API_KEY"] = getpass.getpass("ANCHORBROWSER API key:\n")
Instantiation
Instantiate the Anchor Browser tools.
from langchain_anchorbrowser import (
    AnchorContentTool,
    AnchorScreenshotTool,
    AdvancedAnchorWebTaskTool,
)
anchor_content_tool = AnchorContentTool()
anchor_screenshot_tool = AnchorScreenshotTool()
anchor_advanced_web_task_tool = AdvancedAnchorWebTaskTool()
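Each instance is a regular LangChain tool, so you can inspect its name, description, and argument schema before calling it (the exact values depend on the installed package version):
# Inspect the standard LangChain tool attributes of an Anchor Browser tool.
print(anchor_content_tool.name)
print(anchor_content_tool.description)
print(anchor_content_tool.args)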
Invocation
Invoke directly with args
The full list of available arguments appears in the tool features table above.
# Get Markdown Content for https://www.anchorbrowser.io
anchor_content_tool.invoke(
    {"url": "https://www.anchorbrowser.io", "format": "markdown"}
)
# Get a Screenshot for https://docs.anchorbrowser.io
anchor_screenshot_tool.invoke(
    {"url": "https://docs.anchorbrowser.io", "width": 1280, "height": 720}
)
# Run an advanced web task on a Grafana dashboard and return structured output
anchor_advanced_web_task_tool.invoke(
    {
        "prompt": "Collect the node names and their CPU average %",
        "url": "https://play.grafana.org/a/grafana-k8s-app/navigation/nodes?from=now-1h&to=now&refresh=1m",
        "output_schema": {
            "nodes_cpu_usage": [
                {"node": "string", "cluster": "string", "cpu_avg_percentage": "number"}
            ]
        },
    }
)
Invoke with ToolCall
We can also invoke the tool with a model-generated ToolCall, in which case a ToolMessage will be returned:
# This is usually generated by a model, but we'll create a tool call directly for demo purposes.
model_generated_tool_call = {
    "args": {"url": "https://www.anchorbrowser.io", "format": "markdown"},
    "id": "1",
    "name": anchor_content_tool.name,
    "type": "tool_call",
}
anchor_content_tool.invoke(model_generated_tool_call)
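To inspect the result, capture the returned ToolMessage; its tool_call_id matches the id of the originating tool call ("1" in this demo). The slice below simply keeps the printed output short, assuming the content is text.
# Capture and inspect the ToolMessage returned for the tool call.
tool_msg = anchor_content_tool.invoke(model_generated_tool_call)
print(type(tool_msg).__name__)      # ToolMessage
print(tool_msg.tool_call_id)        # "1"
print(str(tool_msg.content)[:200])  # first 200 characters of the page content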
Chaining
We can use our tool in a chain by first binding it to a tool-calling model and then calling it:
Use within an agent
%pip install -qU langchain langchain-openai
from langchain.chat_models import init_chat_model

if not os.environ.get("OPENAI_API_KEY"):
    os.environ["OPENAI_API_KEY"] = getpass.getpass("OPENAI API key:\n")

llm = init_chat_model(model="gpt-4o", model_provider="openai")
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableConfig, chain
prompt = ChatPromptTemplate(
    [
        ("system", "You are a helpful assistant."),
        ("human", "{user_input}"),
        ("placeholder", "{messages}"),
    ]
)
# specifying tool_choice will force the model to call this tool.
llm_with_tools = llm.bind_tools(
    [anchor_content_tool], tool_choice=anchor_content_tool.name
)
llm_chain = prompt | llm_with_tools
@chain
def tool_chain(user_input: str, config: RunnableConfig):
    input_ = {"user_input": user_input}
    ai_msg = llm_chain.invoke(input_, config=config)
    tool_msgs = anchor_content_tool.batch(ai_msg.tool_calls, config=config)
    return llm_chain.invoke({**input_, "messages": [ai_msg, *tool_msgs]}, config=config)
tool_chain.invoke("Extract the content of https://www.anchorbrowser.io and summarize it in one paragraph")
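Alternatively, the tools can be handed to a prebuilt agent that decides when to call them. The sketch below is an illustrative pattern, not part of the langchain-anchorbrowser package, and assumes LangGraph is installed (pip install langgraph):
# Illustrative agent pattern: let a LangGraph ReAct agent choose among the
# Anchor Browser tools. Requires `pip install langgraph`.
from langgraph.prebuilt import create_react_agent

agent = create_react_agent(
    llm,
    [anchor_content_tool, anchor_screenshot_tool, anchor_advanced_web_task_tool],
)

result = agent.invoke(
    {"messages": [("human", "Summarize the content of https://www.anchorbrowser.io")]}
)
print(result["messages"][-1].content)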
API reference
Related
- Tool conceptual guide
- Tool how-to guides