Asynchronous Requests

Queue up multiple requests and receive a task_id instantly. Once a task is completed, you can retrieve its results using that task_id.

Queue a Single Task

Endpoint for a single query or URL: https://scraper-api.smartproxy.com/v2/task

Make a POST request to this endpoint with your preferred parameters. The response contains the task_id, along with the parameters used, so you can retrieve the result once the task is done.

curl -u username:password -X POST \
  --url https://scraper-api.smartproxy.com/v2/task \
  -H "Content-Type: application/json" \
  -d "{\"url\": \"https://ip.smartproxy.com\", \"target\": \"universal\"}"

import requests

payload = {
    'target': 'universal',
    'url': 'https://ip.smartproxy.com'
}

response = requests.post(
    'https://scraper-api.smartproxy.com/v2/task',
    auth=('username', 'password'),
    json=payload
)
print(response.text)

<?php
$params = array(
    'url' => 'https://ip.smartproxy.com',
    'target' => 'universal'
);

$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, 'https://scraper-api.smartproxy.com/v2/task');
curl_setopt($ch, CURLOPT_USERPWD, 'username:password');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_POSTFIELDS, json_encode($params));
curl_setopt($ch, CURLOPT_HTTPHEADER, array('Content-Type: application/json'));

$result = curl_exec($ch);
if (curl_errno($ch)) {
    echo 'Error: ' . curl_error($ch);
} else {
    echo $result;
}
curl_close($ch);
?>

Retrieve result using task_id

Make a GET request to this endpoint, replacing {task_id} with the ID received from the previous POST request, to retrieve the result:

curl -u username:password https://scraper-api.smartproxy.com/v2/task/{task_id}/results
import requests

response = requests.get(
    'https://scraper-api.smartproxy.com/v2/task/{task_id}/results',
    auth=('username', 'password')
)
print(response.text)

<?php
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, 'https://scraper-api.smartproxy.com/v2/task/{task_id}/results');
curl_setopt($ch, CURLOPT_USERPWD, 'username:password');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);

$result = curl_exec($ch);
if (curl_errno($ch)) {
    echo 'Error: ' . curl_error($ch);
} else {
    echo $result;
}
curl_close($ch);
?>
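Putting the two calls together, here is a minimal submit-and-poll sketch in Python. The host and paths come from the examples above; the polling interval and the assumption that a finished task returns HTTP 200 from the results endpoint are ours, so adjust them to match the API's actual behavior:

```python
import time

import requests

API_HOST = "https://scraper-api.smartproxy.com"


def results_url(task_id: str) -> str:
    """Build the results endpoint URL for a given task_id."""
    return f"{API_HOST}/v2/task/{task_id}/results"


def submit_and_poll(payload: dict, auth: tuple, interval: float = 5.0, attempts: int = 20):
    """Queue a task, then poll the results endpoint until it responds with 200.

    Treating any non-200 status as "still running" is an assumption; check
    the response codes the API actually returns for pending tasks.
    """
    task = requests.post(f"{API_HOST}/v2/task", auth=auth, json=payload).json()
    task_id = task["id"]
    for _ in range(attempts):
        resp = requests.get(results_url(task_id), auth=auth)
        if resp.status_code == 200:
            return resp.json()
        time.sleep(interval)
    raise TimeoutError(f"task {task_id} did not finish in time")


# Usage (requires valid credentials):
# result = submit_and_poll(
#     {"target": "universal", "url": "https://ip.smartproxy.com"},
#     auth=("username", "password"),
# )
```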

Queue multiple tasks

Make a POST request to the batch endpoint (https://scraper-api.smartproxy.com/v2/task/batch), providing multiple queries or URLs in JSON format.

🚧

With a single batch, you can submit either multiple queries or multiple URLs, but not both. A batch must also have only one target, such as google_search in the example below.

import requests
import json

with open('queries.json', 'r') as f:
    payload = json.loads(f.read())

response = requests.post(
    'https://scraper-api.smartproxy.com/v2/task/batch',
    auth=('username', 'password'),
    json=payload,
)
print(response.text)

<?php
$payload = json_decode(file_get_contents('queries.json'), true);

$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, 'https://scraper-api.smartproxy.com/v2/task/batch');
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, json_encode($payload));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_HTTPAUTH, CURLAUTH_BASIC);
curl_setopt($ch, CURLOPT_USERPWD, 'username:password');
curl_setopt($ch, CURLOPT_HTTPHEADER, array('Content-Type: application/json'));

$response = curl_exec($ch);
if ($response === false) {
    echo 'Error: ' . curl_error($ch);
} else {
    echo $response;
}
curl_close($ch);
?>

Example of the queries.json file used in the above code:

{
  "query": [
    "blue",
    "skyline",
    "below"
  ],
  "target": "google_search",
  "parse": "true"
}
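The same payload can be built and validated in code before submission. This sketch enforces the two batch rules noted above (either queries or URLs, and a single target per batch); the helper name and its signature are our own, not part of the API:

```python
def make_batch(target: str, queries=None, urls=None, parse="true") -> dict:
    """Build a batch payload: either multiple queries or multiple URLs,
    never both, with exactly one target per batch."""
    if bool(queries) == bool(urls):
        raise ValueError("provide either 'queries' or 'urls', not both or neither")
    payload = {"target": target, "parse": parse}
    if queries:
        payload["query"] = list(queries)
    else:
        payload["url"] = list(urls)
    return payload


# Equivalent to the queries.json example above:
# make_batch("google_search", queries=["blue", "skyline", "below"])
```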

Receive task status to your callback_url

This works with any async endpoint: simply include callback_url as one of the parameters.

Once the task is done, we will make a POST request to your provided URL with the task_id and the parameters used.
You can use a webhook-testing website to try out receiving the response. Example using the single task endpoint:

curl -u username:password -X POST \
  --url https://scraper-api.smartproxy.com/v2/task \
  -H "Content-Type: application/json" \
  -d "{\"url\": \"https://ip.smartproxy.com\", \"target\": \"universal\", \"callback_url\": \"https://your.url\"}"

import requests

payload = {
    'target': 'universal',
    'url': 'https://ip.smartproxy.com',
    'callback_url': 'https://your.url'
}

response = requests.post(
    'https://scraper-api.smartproxy.com/v2/task',
    auth=('username', 'password'),
    json=payload
)
print(response.text)

<?php
$params = array(
    'url' => 'https://ip.smartproxy.com',
    'target' => 'universal',
    'callback_url' => 'https://your.url'
);

$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, 'https://scraper-api.smartproxy.com/v2/task');
curl_setopt($ch, CURLOPT_USERPWD, 'username:password');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_POSTFIELDS, json_encode($params));
curl_setopt($ch, CURLOPT_HTTPHEADER, array('Content-Type: application/json'));

$result = curl_exec($ch);
if (curl_errno($ch)) {
    echo 'Error: ' . curl_error($ch);
} else {
    echo $result;
}
curl_close($ch);
?>
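On the receiving side, a minimal callback listener can be sketched with Python's standard library. The id and status field names follow the example response shown in this section; the port and the exact shape of the callback body are assumptions:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def parse_callback(body: bytes) -> tuple:
    """Extract the task id and status from a JSON callback payload."""
    data = json.loads(body)
    return data.get("id"), data.get("status")


class CallbackHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON body the scraper posts to callback_url.
        length = int(self.headers.get("Content-Length", 0))
        task_id, status = parse_callback(self.rfile.read(length))
        print(f"task {task_id} is {status}")
        self.send_response(200)
        self.end_headers()


# To listen on port 8000 (callback_url would then point at this host):
# HTTPServer(("", 8000), CallbackHandler).serve_forever()
```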

Example of a response you will receive

{
  "id": "7039164056019693569",
  "status": "done",
  "target": "universal",
  "query": "",
  "url": "https://ip.smartproxy.com",
  "domain": "com",
  "num_pages": 10,
  "locale": null,
  "geo": null,
  "device_type": "desktop",
  "page_from": 1,
  "parse": 0,
  "output_schema": null,
  "headless": null,
  "priority": 0,
  "persist": true,
  "content_encoding": "utf-8",
  "created_at": "2023-03-08 09:24:52",
  "updated_at": "2023-03-08 09:24:52"
}

You can then use the "id" to retrieve the result of your task via this endpoint:

https://scraper-api.smartproxy.com/v2/task/{task_id}/results

For example, to retrieve the result from the above example, you would send a GET request to:
https://scraper-api.smartproxy.com/v2/task/7039164056019693569/results

