Real-time integration

This integration method puts the entire job request into a single query that you form yourself.

– 100% success rate and accurate data.
– Data sources: direct URL, search, ads, images, suggestions, shopping product, product pricing, hotels, and more.
– Requires an open connection to return the acquired results.
– Single query – no batches.
– Supports any SERP keyword.
– Parsing: raw HTML or, in most cases, structured JSON.

Endpoint: POST scraper-api.smartproxy.com/v2/scrape

Integration examples:

URL:
https://scraper-api.smartproxy.com/v2/scrape?target=google_search&query=world&domain=com&access_token=pass2021

cURL:
curl -u username:password 'https://scraper-api.smartproxy.com/v2/scrape' -H "Content-Type: application/json" -d '{"target": "google_search", "domain": "com", "query": "world"}'

PHP:

<?php

// Smartproxy credentials
$username = "SPusername";
$password = "SPpassword";

// Job parameters: scrape a Google search for "world" on google.com and return parsed JSON
$search = [
    'target' => 'google_search',
    'domain' => 'com',
    'query' => 'world',
    'parse' => true
];

$ch = curl_init();

$headers = ['Content-Type: application/json'];

$options = [
    CURLOPT_URL => 'https://scraper-api.smartproxy.com/v2/scrape',
    CURLOPT_USERPWD => sprintf('%s:%s', $username, $password),
    CURLOPT_POSTFIELDS => json_encode($search),
    CURLOPT_RETURNTRANSFER => 1,
    CURLOPT_ENCODING => 'gzip, deflate',
    CURLOPT_HTTPHEADER => $headers,
    CURLOPT_SSL_VERIFYPEER => false,
    CURLOPT_SSL_VERIFYHOST => false
];
curl_setopt_array($ch, $options);

// The connection stays open until the job finishes and the response is returned
$result = curl_exec($ch);
if (curl_errno($ch)) {
    echo 'Error: ' . curl_error($ch);
}
curl_close($ch);

// Decode the JSON response and print it
$result = json_decode($result);
var_dump($result);

?>

Python:

import requests

headers = {
    'Content-Type': 'application/json'
}

# Job parameters: scrape a Google search for "world" on google.com and return parsed JSON
task_params = {
    'target': 'google_search',
    'domain': 'com',
    'query': 'world',
    'parse': True
}

# Smartproxy credentials
username = 'SPusername'
password = 'SPpassword'

# The connection stays open until the job finishes and the response is returned
response = requests.post(
    'https://scraper-api.smartproxy.com/v2/scrape',
    headers=headers,
    json=task_params,
    auth=(username, password)
)
print(response.text)

Check out our recipes for a step-by-step overview of SERP Scraping API integration.

📘 Encoding

Please note that if you use the direct data source (i.e., you provide the URL yourself), everything you put into the URL string has to be URL-encoded.
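
For example, here is a minimal sketch of preparing an encoded URL in Python before placing it in the request. The url parameter name is an assumption used for illustration – check the parameter reference for the target you use:

from urllib.parse import quote

# The page you want scraped; spaces, '&' and '=' would break the outer query
# string if left unencoded
target_url = 'https://www.google.com/search?q=hello world&hl=en'

# Percent-encode every reserved character so the value survives as a single
# query-string parameter
encoded = quote(target_url, safe='')

request_url = (
    'https://scraper-api.smartproxy.com/v2/scrape'
    '?url=' + encoded + '&access_token=pass2021'  # 'url' is a hypothetical parameter name
)
print(request_url)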

How to use it

  1. You send us a query. If you need to narrow it down, add parameters – post query-string parameters the same way you post JSON ones. Don't forget to include your credentials (tokens).

  2. We retrieve the content you need.

  3. We need an open connection to return the requested data. A successful response comes back with HTTP status code 200 and contains either parsed JSON or raw HTML – see the sketch below.
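
A minimal sketch of handling this synchronous response in Python, continuing the example above (credentials and job parameters are the placeholder values from the earlier snippet):

import requests

task_params = {
    'target': 'google_search',
    'domain': 'com',
    'query': 'world',
    'parse': True
}

# Blocks until the job completes; the connection must stay open the whole time
response = requests.post(
    'https://scraper-api.smartproxy.com/v2/scrape',
    json=task_params,
    auth=('SPusername', 'SPpassword')
)

if response.status_code == 200:
    # Parsed jobs return structured JSON; unparsed jobs return raw HTML
    data = response.json() if task_params['parse'] else response.text
    print(data)
else:
    print('Request failed:', response.status_code, response.text)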

❗️ Keep an open connection

If the connection is closed before the job is completed, the data is lost.

The timeout limit for open connections is 150 seconds. In rare cases of heavy load, we may not be able to return the data within that window.
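
One way to account for this on the client side – a sketch, not an official recommendation – is to set a read timeout slightly above the 150-second limit and resubmit the job if the connection times out (run_job is just an illustrative helper name):

import requests

def run_job(task_params, auth, retries=2):
    # Allow a little more than the 150 s server-side open-connection limit
    for attempt in range(retries + 1):
        try:
            response = requests.post(
                'https://scraper-api.smartproxy.com/v2/scrape',
                json=task_params,
                auth=auth,
                timeout=160
            )
            if response.status_code == 200:
                return response
        except requests.Timeout:
            # The connection closed before the job completed, so that result is
            # lost – resubmit the job on the next attempt
            pass
    return None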

Need any help with your setup? Drop us a line via chat.