REST API Tools
The Browserless MCP server exposes eight stateless tools that wrap the corresponding Browserless REST APIs. Each tool maps to a single REST endpoint and returns its result in one call — useful when the AI just needs content, a file, or a one-off result without holding browser state.
For the stateful, multi-turn tool, see the Browser Agent page.
browserless_smartscraper
Scrapes any webpage using cascading strategies — HTTP fetch, proxy, headless browser, and CAPTCHA solving — automatically selecting the best approach. Wraps the /smart-scrape API.
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| url | string | Yes | — | The URL to scrape (http or https) |
| formats | string[] | No | ["markdown"] | Output formats: markdown, html, screenshot, pdf, links |
| timeout | number | No | 30000 | Request timeout in milliseconds |
Output formats:
- markdown — Page content converted to clean Markdown (default)
- html — Raw HTML of the page
- screenshot — Full-page screenshot as a PNG image
- pdf — PDF rendering of the page
- links — All links found on the page
Example prompts:
Scrape https://browserless.io and summarize the content.
Extract all links from https://browserless.io.
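Under the hood, a tool call translates to a single REST request. A minimal sketch of how the arguments for browserless_smartscraper might be assembled, with the defaults from the table above filled in (the helper function is illustrative, not part of the server):

```javascript
// Hypothetical helper: build the browserless_smartscraper arguments,
// applying the documented defaults when the caller omits a field.
function buildSmartScrapeArgs({ url, formats, timeout } = {}) {
  if (!/^https?:\/\//.test(url ?? "")) {
    throw new Error("url must start with http:// or https://");
  }
  return {
    url,
    formats: formats ?? ["markdown"], // default output format
    timeout: timeout ?? 30000,        // default 30 s
  };
}

const scrapeArgs = buildSmartScrapeArgs({ url: "https://browserless.io" });
// scrapeArgs.formats → ["markdown"], scrapeArgs.timeout → 30000
```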
browserless_function
Executes custom Puppeteer JavaScript code on the Browserless cloud. Your function receives a Puppeteer page object and optional context data, and returns { data, type } to control the response payload and Content-Type. Wraps the /function API.
| Parameter | Type | Required | Description |
|---|---|---|---|
| code | string | Yes | JavaScript (ESM) code to execute. The default export receives { page, context } and should return { data, type } |
| context | object | No | Optional context object passed to the function |
| timeout | number | No | Request timeout in milliseconds |
Example prompts:
Load script.js and run it with the context { "url": "https://example.com" }.
Run a Puppeteer script that opens https://browserless.io, waits for the hero to render, and returns the visible text.
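The code parameter is ESM source sent as a string. A minimal sketch of a module that matches the contract above — the default export receives { page, context } and returns { data, type } — where the navigation options and return shape are illustrative:

```javascript
// ESM source for the `code` parameter of browserless_function.
// The default export receives { page, context } and returns { data, type }.
const code = `
export default async function ({ page, context }) {
  await page.goto(context.url, { waitUntil: "networkidle2" });
  const text = await page.evaluate(() => document.body.innerText);
  return { data: text, type: "text/plain" };
}
`;

// Tool arguments as the MCP server would receive them.
const functionArgs = {
  code,
  context: { url: "https://browserless.io" },
  timeout: 30000,
};
```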
browserless_download
Runs custom Puppeteer code and returns the file that Chrome downloads during execution. Useful for downloading CSVs, PDFs, images, or any file from a website. Wraps the /download API.
| Parameter | Type | Required | Description |
|---|---|---|---|
| code | string | Yes | JavaScript (ESM) code that triggers a file download in the browser |
| context | object | No | Optional context object passed to the function |
| timeout | number | No | Request timeout in milliseconds |
Example prompts:
Download the CSV export from https://example.com/report.
Trigger the "Download PDF" button on https://example.com/invoice/123 and return the file.
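A sketch of code that clicks a download button so the /download API can capture the resulting file. The selector and the fixed wait are assumptions about the target page; the API returns whichever file Chrome saves while the function runs:

```javascript
// ESM source for the `code` parameter of browserless_download.
const downloadCode = `
export default async function ({ page, context }) {
  await page.goto(context.url, { waitUntil: "networkidle2" });
  // Hypothetical selector for the export button on the target page.
  await page.click("#export-csv");
  // Give Chrome time to finish writing the file to disk.
  await new Promise((resolve) => setTimeout(resolve, 5000));
}
`;

const downloadArgs = {
  code: downloadCode,
  context: { url: "https://example.com/report" },
};
```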
browserless_export
Exports a webpage by URL in its native format (HTML, PDF, image, etc.). Set includeResources to bundle all page assets into a ZIP archive for offline use. Wraps the /export API.
| Parameter | Type | Required | Description |
|---|---|---|---|
| url | string | Yes | The URL to export (http or https) |
| gotoOptions | object | No | Puppeteer Page.goto() options (waitUntil, timeout, referer) |
| bestAttempt | boolean | No | When true, proceed even if awaited events fail or timeout |
| includeResources | boolean | No | Bundle all linked resources (CSS, JS, images) into a ZIP file |
| waitForTimeout | number | No | Milliseconds to wait after page load before exporting |
| timeout | number | No | Request timeout in milliseconds |
Example prompt:
Export https://example.com as a full offline ZIP with all assets.
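A sketch of the arguments for a full offline export, using the field names from the table above (the specific option values are illustrative):

```javascript
// Arguments for browserless_export: bundle the page plus all linked
// CSS/JS/images into a single ZIP for offline viewing.
const exportArgs = {
  url: "https://example.com",
  includeResources: true,  // produce a ZIP instead of bare HTML
  gotoOptions: { waitUntil: "networkidle2", timeout: 30000 },
  bestAttempt: true,       // don't fail hard if an awaited event times out
  waitForTimeout: 1000,    // extra settle time after load, in ms
};
```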
browserless_search
Searches the web via SearXNG and returns results from web, news, or image sources. Optionally scrapes each result URL to get markdown, HTML, links, or screenshots. Wraps the /search API.
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| query | string | Yes | — | The search query string |
| limit | number | No | 10 | Maximum number of results to return (capped by plan limits) |
| lang | string | No | "en" | Language code for search results |
| country | string | No | — | Country code for geo-targeted results |
| location | string | No | — | Location string for geo-targeted results |
| tbs | string | No | — | Time-based filter: day, week, month, year |
| sources | string[] | No | ["web"] | Search sources: web, news, images |
| categories | string[] | No | — | Filter by categories: github, research, pdf |
| scrapeOptions | object | No | — | Options for scraping each search result (see below) |
| timeout | number | No | 30000 | Request timeout in milliseconds |
Scrape options (optional sub-object on scrapeOptions):
| Parameter | Type | Description |
|---|---|---|
| formats | string[] | Output formats for scraped content: markdown, html, links, screenshot |
| onlyMainContent | boolean | Extract only the main content using Readability |
| includeTags | string[] | Only include content from these HTML tags |
| excludeTags | string[] | Exclude content from these HTML tags |
Example prompts:
Search for "headless browser automation" and summarize the top 5 results.
Find the latest news about Puppeteer and return markdown for each result.
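A sketch of search arguments that combine a time filter with per-result scraping, using the parameter and scrapeOptions fields from the tables above (the query and tag choices are illustrative):

```javascript
// Arguments for browserless_search: query the web and scrape each hit.
const searchArgs = {
  query: "headless browser automation",
  limit: 5,
  sources: ["web", "news"],
  tbs: "week",                  // only results from the past week
  scrapeOptions: {
    formats: ["markdown"],
    onlyMainContent: true,      // strip nav/footers via Readability
    excludeTags: ["aside", "form"],
  },
};
```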
browserless_map
Discovers and maps all URLs on a website using Browserless. Crawls a site via sitemaps and link extraction to find all pages. Returns a list of URLs with optional titles and descriptions. Use the search parameter to order results by relevance to a query. Wraps the /map API.
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| url | string | Yes | — | The base URL to start mapping from (http or https) |
| search | string | No | — | Search query to order results by relevance |
| limit | number | No | 100 | Maximum number of links to return (max: 5000) |
| sitemap | string | No | "include" | Sitemap handling: include, skip, only |
| includeSubdomains | boolean | No | true | Include URLs from subdomains |
| ignoreQueryParameters | boolean | No | true | Exclude URLs with query parameters |
| timeout | number | No | 30000 | Request timeout in milliseconds |
Example prompt:
Map all the pages on https://browserless.io and list them.
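A sketch of map arguments that rely on the sitemap alone and rank results by relevance, using the fields from the table above (the search term is illustrative):

```javascript
// Arguments for browserless_map: enumerate pages from the sitemap only,
// ordered by relevance to the search term.
const mapArgs = {
  url: "https://browserless.io",
  search: "pricing",            // order results by relevance to this query
  limit: 500,
  sitemap: "only",              // trust the sitemap, skip link extraction
  includeSubdomains: false,
  ignoreQueryParameters: true,
};
```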
browserless_performance
Runs a Lighthouse performance audit on any URL via the Browserless /performance API. Returns scores and detailed metrics for accessibility, best practices, performance, PWA, and SEO. Optionally filter by category or supply performance budgets.
Audits can take 30–120 seconds, depending on the site.
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| url | string | Yes | — | The URL to audit (http or https) |
| categories | string[] | No | all | Lighthouse categories: accessibility, best-practices, performance, pwa, seo |
| budgets | object[] | No | — | Lighthouse performance budgets array |
| timeout | number | No | — | Request timeout in milliseconds |
Example prompts:
Run a Lighthouse audit on https://browserless.io and show me the scores.
Audit https://example.com for accessibility and SEO only.
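A sketch of audit arguments that restrict the categories and add a budget. The budgets array is assumed to follow Lighthouse's standard budget.json shape (path plus resourceSizes entries); the 150 KB script budget is illustrative:

```javascript
// Arguments for browserless_performance: audit two categories only and
// flag pages whose JavaScript payload exceeds 150 KB.
const perfArgs = {
  url: "https://example.com",
  categories: ["performance", "seo"],
  budgets: [
    {
      path: "/*",  // apply this budget to every page under the origin
      resourceSizes: [{ resourceType: "script", budget: 150 }], // KB
    },
  ],
};
```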
browserless_crawl
Crawls a website starting from a seed URL and scrapes every discovered page using the Browserless /crawl API. Follows links up to a configurable depth and supports sitemap discovery, path filtering, subdomain handling, and custom scrape options. Returns scraped content (markdown, HTML, or raw text) for each page along with metadata.
The Crawl API is in beta and only available for Cloud plans. Parameters and response shapes may change in future releases. Contact us to enable access on your account.
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| url | string | Yes | — | The URL to crawl (http or https) |
| limit | number | No | 100 | Maximum pages to crawl (max: 10,000) |
| maxDepth | number | No | 5 | Maximum link-follow depth from the root URL |
| maxRetries | number | No | 1 | Retry attempts per failed page |
| allowExternalLinks | boolean | No | false | Follow links to external domains |
| allowSubdomains | boolean | No | false | Follow links to subdomains |
| sitemap | string | No | "auto" | Sitemap handling: auto, force, skip |
| includePaths | string[] | No | — | Regex patterns for URL paths to include |
| excludePaths | string[] | No | — | Regex patterns for URL paths to exclude |
| delay | number | No | 200 | Delay between requests in milliseconds |
| scrapeOptions | object | No | — | Per-page scrape settings (see below) |
| waitForCompletion | boolean | No | true | Wait for crawl to finish; if false, returns immediately with a crawl ID |
| pollInterval | number | No | 5000 | Polling interval in ms when waiting for completion |
| maxWaitTime | number | No | 300000 | Maximum wait time in ms (default 5 minutes) |
| timeout | number | No | 30000 | HTTP request timeout in milliseconds |
Scrape options (optional sub-object on scrapeOptions):
| Parameter | Type | Default | Description |
|---|---|---|---|
| formats | string[] | ["markdown"] | Output formats: markdown, html, rawText |
| onlyMainContent | boolean | true | Extract only the main content using Readability |
| includeTags | string[] | — | HTML tag selectors to include |
| excludeTags | string[] | — | HTML tag selectors to exclude |
| waitFor | number | 0 | Time in ms to wait after page load before scraping |
| headers | object | — | Custom HTTP headers to send with each request |
| timeout | number | — | Navigation timeout in milliseconds |
Example prompt:
Crawl https://example.com up to 3 levels deep and summarize each page.
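A sketch of crawl arguments that scope the crawl to one path, scrape each page as markdown, and return a crawl ID instead of blocking. The path patterns and limits are illustrative:

```javascript
// Arguments for browserless_crawl: blog posts only, fire-and-forget.
const crawlArgs = {
  url: "https://example.com",
  limit: 200,
  maxDepth: 3,
  includePaths: ["^/blog/"],        // regex applied to the URL path
  excludePaths: ["^/blog/tag/"],    // skip tag index pages
  sitemap: "auto",
  delay: 200,                       // be polite between requests, in ms
  scrapeOptions: { formats: ["markdown"], onlyMainContent: true },
  waitForCompletion: false,         // return a crawl ID immediately
};
```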
Resources
The MCP server also exposes these resources that your AI assistant can read:
| Resource | Description |
|---|---|
| browserless://api-docs | Smart Scraper API documentation and parameter reference |
| browserless://status | Live status of the Browserless API connection |
Prompt Templates
Built-in prompt templates help your AI assistant use the tools effectively:
| Prompt | Description |
|---|---|
| scrape-url | Scrape a webpage and summarize its content |
| extract-content | Extract specific information from a webpage using custom instructions |
Further reading
- Browserless MCP Server — connection setup, authentication, and regional endpoints
- Browser Agent — the stateful, multi-turn browserless_agent tool
- REST APIs — direct REST API access without MCP