Browserless MCP Server

The Browserless MCP server gives AI assistants full browser automation capabilities through the Model Context Protocol. Connect Claude Desktop, Cursor, VS Code, Windsurf, or any MCP-compatible client to the hosted server and start scraping, searching, mapping, crawling, running Lighthouse audits, exporting, downloading, and running custom browser code — no infrastructure required.

Prerequisites

  • A Browserless account, with either an API token from your account dashboard or OAuth sign-in
  • An MCP-compatible client (Claude Desktop, Cursor, VS Code, Windsurf, etc.)

Hosted Server

Browserless provides a hosted MCP server ready to use:

https://mcp.browserless.io/mcp

No installation or environment variables required. See Authentication for how to connect.

Authentication

The hosted server supports three authentication methods:

| Method | Best for |
|---|---|
| OAuth (Browserless account login) | Clients that support OAuth — no token needed |
| Authorization header | Clients that support custom headers |
| token query parameter | URL-only clients (e.g. Claude.ai custom connectors) |

When multiple methods are present, they are evaluated in this order: Authorization header (plain API key) → token query parameter → OAuth JWT.

OAuth

For clients that support OAuth (for example Claude Desktop or Cursor), the hosted server can authenticate you through your Browserless account - no API token required. When you connect, your client will open a browser window to sign in. After authenticating, the server resolves your account automatically.

OAuth is enabled on the hosted server at https://mcp.browserless.io/mcp with no extra configuration needed.

API Token

Pass your API token as a Bearer header or query parameter:

  • Header (recommended): Authorization: Bearer your-token-here
  • Query parameter: ?token=your-token-here
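
For header-capable clients, a minimal server entry might look like the sketch below. The exact config file location and schema vary by client, so treat this as illustrative rather than definitive:

```json
{
  "mcpServers": {
    "browserless": {
      "url": "https://mcp.browserless.io/mcp",
      "headers": {
        "Authorization": "Bearer your-token-here"
      }
    }
  }
}
```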

Client Setup

Claude.ai supports MCP servers via custom connectors. Since the connector form only accepts a URL, pass your token as a query parameter:

  1. Go to Settings > Connectors in Claude.ai.
  2. Click Add custom connector.
  3. Enter a name (e.g., Browserless) and the following URL:
     https://mcp.browserless.io/mcp?token=your-token-here
  4. Click Add.

Replace your-token-here with your Browserless API token from the account dashboard.

tip

Clients that support OAuth can connect without a token - the server will prompt you to sign in with your Browserless account.

Regional Endpoints

By default, the hosted MCP server connects to the US West (San Francisco) Browserless region. To use a different region, pass the endpoint as a header or query parameter:

| Region | Endpoint |
|---|---|
| US West — San Francisco (default) | https://production-sfo.browserless.io |
| Europe — London | https://production-lon.browserless.io |
| Europe — Amsterdam | https://production-ams.browserless.io |

Using the x-browserless-api-url header (for clients that support headers):

```json
{
  "mcpServers": {
    "browserless": {
      "url": "https://mcp.browserless.io/mcp",
      "headers": {
        "Authorization": "Bearer your-token-here",
        "x-browserless-api-url": "https://production-sfo.browserless.io"
      }
    }
  }
}
```

Using the browserlessUrl query parameter (for URL-only clients like Claude.ai):

https://mcp.browserless.io/mcp?token=your-token-here&browserlessUrl=https://production-sfo.browserless.io

Tools

The MCP server exposes eight tools to your AI assistant:

browserless_smartscraper

Scrapes any webpage using cascading strategies — HTTP fetch, proxy, headless browser, and CAPTCHA solving — automatically selecting the best approach.

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| url | string | Yes | | The URL to scrape (http or https) |
| formats | string[] | No | ["markdown"] | Output formats: markdown, html, screenshot, pdf, links |
| timeout | number | No | 30000 | Request timeout in milliseconds |

Output formats:

  • markdown — Page content converted to clean Markdown (default)
  • html — Raw HTML of the page
  • screenshot — Full-page screenshot as a PNG image
  • pdf — PDF rendering of the page
  • links — All links found on the page
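
As an illustration, a call requesting both Markdown and links might pass arguments like this (a sketch — in practice your MCP client constructs the tool call for you):

```json
{
  "url": "https://example.com",
  "formats": ["markdown", "links"],
  "timeout": 30000
}
```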

browserless_function

Executes custom Puppeteer JavaScript code on the Browserless cloud. Your function receives a Puppeteer page object and optional context data, and returns { data, type } to control the response payload and Content-Type.

| Parameter | Type | Required | Description |
|---|---|---|---|
| code | string | Yes | JavaScript (ESM) code to execute. The default export receives { page, context } and should return { data, type } |
| context | object | No | Optional context object passed to the function |
| timeout | number | No | Request timeout in milliseconds |
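
A minimal sketch of a code payload: the default export receives { page, context } and returns { data, type }. Here context.url is an assumption for this example, not a fixed API field — pass whatever shape your function expects via the context parameter:

```javascript
// Hypothetical `code` payload for browserless_function. Navigates to a URL
// taken from `context`, reads the page title, and returns it as JSON.
export default async function scrapeTitle({ page, context }) {
  await page.goto(context.url, { waitUntil: "domcontentloaded" });
  const title = await page.title();
  // `data` becomes the response payload; `type` sets the Content-Type.
  return { data: { title }, type: "application/json" };
}
```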

browserless_download

Runs custom Puppeteer code and returns the file that Chrome downloads during execution. Useful for downloading CSVs, PDFs, images, or any file from a website.

| Parameter | Type | Required | Description |
|---|---|---|---|
| code | string | Yes | JavaScript (ESM) code that triggers a file download in the browser |
| context | object | No | Optional context object passed to the function |
| timeout | number | No | Request timeout in milliseconds |
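
A sketch of a code payload that triggers a download — the URL and the #export-csv selector are assumptions for illustration; substitute the page and element that produce the file you want:

```javascript
// Hypothetical `code` payload for browserless_download. Navigates to a page
// and clicks an export button so Chrome starts a file download, which the
// tool then returns.
export default async function triggerDownload({ page }) {
  await page.goto("https://example.com/report", { waitUntil: "networkidle2" });
  await page.click("#export-csv");
  // Give the download a moment to start before the session ends.
  await new Promise((resolve) => setTimeout(resolve, 1000));
}
```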

browserless_export

Exports a webpage by URL in its native format (HTML, PDF, image, etc.). Set includeResources to bundle all page assets into a ZIP archive for offline use.

| Parameter | Type | Required | Description |
|---|---|---|---|
| url | string | Yes | The URL to export (http or https) |
| gotoOptions | object | No | Puppeteer Page.goto() options (waitUntil, timeout, referer) |
| bestAttempt | boolean | No | When true, proceed even if awaited events fail or timeout |
| includeResources | boolean | No | Bundle all linked resources (CSS, JS, images) into a ZIP file |
| waitForTimeout | number | No | Milliseconds to wait after page load before exporting |
| timeout | number | No | Request timeout in milliseconds |

browserless_search

Searches the web via SearXNG and can return results from web, news, or image sources. Each result URL can optionally be scraped to get markdown, HTML, links, or screenshots.

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| query | string | Yes | | The search query string |
| limit | number | No | 10 | Maximum number of results to return (capped by plan limits) |
| lang | string | No | "en" | Language code for search results |
| country | string | No | | Country code for geo-targeted results |
| location | string | No | | Location string for geo-targeted results |
| tbs | string | No | | Time-based filter: day, week, month, year |
| sources | string[] | No | ["web"] | Search sources: web, news, images |
| categories | string[] | No | | Filter by categories: github, research, pdf |
| scrapeOptions | object | No | | Options for scraping each search result (see below) |
| timeout | number | No | 30000 | Request timeout in milliseconds |

Scrape options (optional sub-object on scrapeOptions):

| Parameter | Type | Description |
|---|---|---|
| formats | string[] | Output formats for scraped content: markdown, html, links, screenshot |
| onlyMainContent | boolean | Extract only the main content using Readability |
| includeTags | string[] | Only include content from these HTML tags |
| excludeTags | string[] | Exclude content from these HTML tags |
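
For example, a search that also scrapes each hit to main-content Markdown could pass arguments like this sketch:

```json
{
  "query": "headless browser automation",
  "limit": 5,
  "sources": ["web", "news"],
  "scrapeOptions": {
    "formats": ["markdown"],
    "onlyMainContent": true
  }
}
```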

browserless_map

Discovers and maps the URLs on a website by reading sitemaps and extracting links. Returns a list of URLs with optional titles and descriptions. Use the search parameter to order results by relevance to a query.

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| url | string | Yes | | The base URL to start mapping from (http or https) |
| search | string | No | | Search query to order results by relevance |
| limit | number | No | 100 | Maximum number of links to return (max: 5000) |
| sitemap | string | No | "include" | Sitemap handling: include, skip, only |
| includeSubdomains | boolean | No | true | Include URLs from subdomains |
| ignoreQueryParameters | boolean | No | true | Exclude URLs with query parameters |
| timeout | number | No | 30000 | Request timeout in milliseconds |

browserless_performance

Runs a Lighthouse performance audit on any URL via the Browserless /performance API. Returns scores and detailed metrics for accessibility, best practices, performance, PWA, and SEO. Optionally filter by category or supply performance budgets.

note

Audits can take 30–120 seconds depending on the site.

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| url | string | Yes | | The URL to audit (http or https) |
| categories | string[] | No | all | Lighthouse categories: accessibility, best-practices, performance, pwa, seo |
| budgets | object[] | No | | Lighthouse performance budgets array |
| timeout | number | No | | Request timeout in milliseconds |
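
Assuming budgets follows Lighthouse's standard budget format (an array of path-scoped budgets, with sizes in kilobytes), a performance-only audit with budgets might be called like this sketch:

```json
{
  "url": "https://example.com",
  "categories": ["performance"],
  "budgets": [
    {
      "path": "/*",
      "resourceSizes": [
        { "resourceType": "script", "budget": 300 },
        { "resourceType": "total", "budget": 1000 }
      ]
    }
  ]
}
```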

browserless_crawl

Crawls a website starting from a seed URL and scrapes every discovered page using the Browserless /crawl API. Follows links up to a configurable depth and supports sitemap discovery, path filtering, subdomain handling, and custom scrape options. Returns scraped content (markdown, HTML, or raw text) for each page along with metadata.

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| url | string | Yes | | The URL to crawl (http or https) |
| limit | number | No | 100 | Maximum pages to crawl (max: 10,000) |
| maxDepth | number | No | 5 | Maximum link-follow depth from the root URL |
| maxRetries | number | No | 1 | Retry attempts per failed page |
| allowExternalLinks | boolean | No | false | Follow links to external domains |
| allowSubdomains | boolean | No | false | Follow links to subdomains |
| sitemap | string | No | "auto" | Sitemap handling: auto, force, skip |
| includePaths | string[] | No | | Regex patterns for URL paths to include |
| excludePaths | string[] | No | | Regex patterns for URL paths to exclude |
| delay | number | No | 200 | Delay between requests in milliseconds |
| scrapeOptions | object | No | | Per-page scrape settings (see below) |
| waitForCompletion | boolean | No | true | Wait for crawl to finish; if false, returns immediately with a crawl ID |
| pollInterval | number | No | 5000 | Polling interval in ms when waiting for completion |
| maxWaitTime | number | No | 300000 | Maximum wait time in ms (default 5 minutes) |
| timeout | number | No | 30000 | HTTP request timeout in milliseconds |

Scrape options (optional sub-object on scrapeOptions):

| Parameter | Type | Default | Description |
|---|---|---|---|
| formats | string[] | ["markdown"] | Output formats: markdown, html, rawText |
| onlyMainContent | boolean | true | Extract only the main content using Readability |
| includeTags | string[] | | HTML tag selectors to include |
| excludeTags | string[] | | HTML tag selectors to exclude |
| waitFor | number | 0 | Time in ms to wait after page load before scraping |
| headers | object | | Custom HTTP headers to send with each request |
| timeout | number | | Navigation timeout in milliseconds |
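
Putting the crawl parameters and per-page scrape options together, a bounded crawl might be called with arguments like this sketch (the excludePaths pattern is an arbitrary example):

```json
{
  "url": "https://example.com",
  "limit": 50,
  "maxDepth": 3,
  "excludePaths": ["^/blog/archive/"],
  "scrapeOptions": {
    "formats": ["markdown"],
    "onlyMainContent": true
  }
}
```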

Example Usage

Ask your AI assistant:

Scrape https://example.com and summarize the content.

Take a screenshot and extract all links from https://example.com.

Download the CSV export from https://example.com/report.

Export https://example.com as a full offline ZIP with all assets.

Search for "headless browser automation" and summarize the top 5 results.

Map all the pages on https://example.com and list them.

Run a Lighthouse audit on https://example.com and show me the scores.

Crawl https://example.com up to 3 levels deep and summarize each page.

Resources

The MCP server also exposes these resources that your AI assistant can read:

| Resource | Description |
|---|---|
| browserless://api-docs | Smart Scraper API documentation and parameter reference |
| browserless://status | Live status of the Browserless API connection |

Prompt Templates

Built-in prompt templates help your AI assistant use the tools effectively:

| Prompt | Description |
|---|---|
| scrape-url | Scrape a webpage and summarize its content |
| extract-content | Extract specific information from a webpage using custom instructions |

Further Reading