Browserless GraphQL API

Aside from running your dedicated fleet on chrome.browserless.io, we also expose a GraphQL API for other operations. This document goes over queries you can use to gather more insight into your fleet's health and metrics.

note

This GraphQL API is specifically for monitoring and managing private Browserless deployments (checking session pressure, metrics, etc.). If you are looking for browser automation capabilities, you should use BrowserQL instead, which is our dedicated browser automation API.

Pressure

The pressure query shows how much load your instance(s) are under and whether they can accept more traffic. The data is real-time, so you can make a request to verify your instance can take more traffic before running your puppeteer.connect call.

warning

The pressure queries are still in BETA and may experience breaking schema changes.

GraphQL Example:

{
  pressure(apiToken: "YOUR_API_TOKEN_HERE") {
    running
    recentlyRejected
    queued
    isAvailable
    date
  }
}

cURL Example:

curl --location 'https://api.browserless.io/graphql' \
--header 'Content-Type: application/json' \
--data '{"query":"{\n pressure(apiToken: \"YOUR_API_TOKEN_HERE\") {\n running\n recentlyRejected\n queued\n isAvailable\n date\n }\n}","variables":{}}'

This request will return a JSON object with the following payload:

{
  "data": {
    "pressure": {
      "running": 0,
      "recentlyRejected": 0,
      "queued": 0,
      "isAvailable": true,
      "date": 1524762532204
    }
  }
}

You can use the isAvailable boolean to check if the instance can handle more connections before running your puppeteer.connect call. This is useful for pre-session checks to keep your workers healthy and prevent rejections or queuing. You're also free to write any custom check with the other provided fields in the JSON response.
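
For instance, a pre-session check in Node.js might look like the sketch below. This is a minimal illustration, assuming Node 18+ (for the built-in fetch) and puppeteer-core; the token placeholder and the WebSocket endpoint are examples, so adapt them to your own deployment.

// Check pressure before connecting, to avoid adding load to a busy instance.
import puppeteer from "puppeteer-core";

const API_TOKEN = "YOUR_API_TOKEN_HERE";

async function connectIfAvailable() {
  const res = await fetch("https://api.browserless.io/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      query: `{ pressure(apiToken: "${API_TOKEN}") { isAvailable queued running } }`,
    }),
  });
  const { data } = await res.json();

  if (!data.pressure.isAvailable) {
    // Back off (or retry later) instead of queuing more work on the instance.
    throw new Error(`Instance busy: ${data.pressure.queued} queued, ${data.pressure.running} running`);
  }

  // Illustrative connection string; use the endpoint for your own fleet.
  return puppeteer.connect({
    browserWSEndpoint: `wss://chrome.browserless.io?token=${API_TOKEN}`,
  });
}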

Metrics

The metrics query gives you insight into how your worker(s) are performing. It details things like successful, rejected, and timed-out sessions. Eventually this will be expanded to include things like average session time and other helpful data points.

Below is an example of a request for metrics and the resulting payload.

warning

The metrics queries are still in BETA and may experience breaking schema changes.

GraphQL Example:

{
  metrics(apiToken: "YOUR_API_TOKEN_HERE") {
    successful
    rejected
    timedout
    queued
    cpu
    memory
    date
  }
}

cURL Example:

curl --location 'https://api.browserless.io/graphql' \
--header 'Content-Type: application/json' \
--data '{"query":"{\n metrics(apiToken: \"YOUR_API_TOKEN_HERE\") {\n successful\n rejected\n timedout\n queued\n cpu\n memory\n date\n }\n}","variables":{}}'

This request returns an array of objects detailing the metrics of your instance(s). If there's more than one instance, stats are aggregated together in 5-minute intervals. CPU and memory are averaged across instances.

{
  "data": {
    "metrics": [
      {
        "successful": 0,
        "rejected": 0,
        "timedout": 0,
        "queued": 0,
        "cpu": 0.002734375,
        "memory": 0.9055320561641963,
        "date": 1524227700000
      },
      //...
    ]
  }
}
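
As an example of consuming these buckets, the sketch below sums the 5-minute intervals to compute an overall rejection rate. It's a minimal illustration, assuming Node 18+ (built-in fetch); the field names match the metrics query above, while the thresholds and error handling are up to you.

// Fetch metrics and compute the share of sessions that were rejected.
const API_TOKEN = "YOUR_API_TOKEN_HERE";

async function getRejectionRate(): Promise<number> {
  const res = await fetch("https://api.browserless.io/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      query: `{ metrics(apiToken: "${API_TOKEN}") { successful rejected timedout date } }`,
    }),
  });
  const { data } = await res.json();

  // Sum the 5-minute buckets, then divide rejected by the total session count.
  const totals = data.metrics.reduce(
    (acc, m) => ({
      successful: acc.successful + m.successful,
      rejected: acc.rejected + m.rejected,
    }),
    { successful: 0, rejected: 0 }
  );
  const total = totals.successful + totals.rejected;
  return total === 0 ? 0 : totals.rejected / total;
}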

Sessions

Not for Session Management

This GraphQL sessions query is for monitoring currently running browser sessions. It should not be confused with the REST sessions endpoint, which is used to create and manage persistent browser session endpoints.

The sessions query shows what browsers you have instantiated and running. For security purposes, this only works for customers with a dedicated fleet.

GraphQL Example:

{
  sessions(apiToken: "YOUR_API_TOKEN_HERE") {
    description
    devtoolsFrontendUrl
    live
    kill
    title
    type
    url
    trackingId
    browserId
    browserWSEndpoint
    browserWSEndpointClient
  }
}

cURL Example:

curl --location 'https://api.browserless.io/graphql' \
--header 'Content-Type: application/json' \
--data '{
"query": "{ sessions(apiToken: \"YOUR_API_TOKEN_HERE\") { description devtoolsFrontendUrl live kill title type url trackingId browserId browserWSEndpoint browserWSEndpointClient } }",
"variables": {}
}'
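
For a programmatic view of what's currently running, the sketch below lists active sessions from Node.js. It's a minimal illustration, assuming Node 18+ (built-in fetch); the selected fields are a subset of the query above, and the trackingId labeling is just an example.

// List running sessions with a few identifying fields.
const API_TOKEN = "YOUR_API_TOKEN_HERE";

async function listRunningSessions() {
  const res = await fetch("https://api.browserless.io/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      query: `{ sessions(apiToken: "${API_TOKEN}") { title url trackingId browserWSEndpoint } }`,
    }),
  });
  const { data } = await res.json();

  for (const session of data.sessions) {
    // Fall back to a label when a session was started without a trackingId.
    console.log(`${session.trackingId ?? "untracked"}: ${session.title} (${session.url})`);
  }
  return data.sessions;
}
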
warning

For security purposes, we limit the number of failed GraphQL request attempts. If you encounter rate limiting errors, you'll need to wait until the top of the hour before making additional requests.