Leverage your custom-trained model for cloud-hosted inference.


Each trained model can be deployed and exposed as a REST API that you can use to make predictions from any device with an internet connection and a REST API client. The inference is done on the server, so you don't need to worry about the edge device's hardware capabilities.

We automatically scale this API up and down and load-balance it for you, so you can rest assured that your application will handle sudden spikes in traffic without paying for GPU time you're not using. Our hosted prediction API can handle even the most demanding production applications.

Code snippet

For your convenience, we've provided code snippets for calling this endpoint in JavaScript. If you need help integrating the inference API into your project, don't hesitate to contact us.



We're using axios to perform the POST request in this example, so first run `npm install axios` to install the dependency.

Inferring on a local image

const axios = require("axios");
const fs = require("fs");

// Read the local image and encode it as base64
const image = fs.readFileSync("IMAGE.jpg", {
  encoding: "base64"
});

axios({
  method: "POST",
  url: "YOUR_PREDICT_URL",
  data: {
    image: image
  },
  headers: {
    "content-type": "multipart/form-data",
    "authorization": "YOUR_TOKEN"
  }
})
  .then(function (response) {
    console.log(response.data);
  })
  .catch(function (error) {
    console.log(error.message);
  });

!!! information Note: Predict URL: "YOUR_PREDICT_URL" will be available in your Deployment section. Token: "YOUR_TOKEN" is the API key you already stored when creating it in the Settings section.
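The exact shape of the response depends on your model type, so check the output of your own deployment. As a minimal sketch, assuming the API returns a JSON body with a `predictions` array whose entries carry `class` and `confidence` fields (a hypothetical shape used here for illustration only), you could filter out low-confidence detections before using them:

```javascript
// Hypothetical response shape -- verify against your own deployment's output.
const response = {
  predictions: [
    { class: "helmet", confidence: 0.92 },
    { class: "helmet", confidence: 0.41 },
    { class: "person", confidence: 0.88 }
  ]
};

// Keep only predictions at or above a confidence threshold.
function filterByConfidence(predictions, threshold) {
  return predictions.filter((p) => p.confidence >= threshold);
}

const confident = filterByConfidence(response.predictions, 0.5);
console.log(confident.map((p) => p.class)); // ["helmet", "person"]
```

A simple threshold like this is often enough to discard noisy detections client-side without another round trip to the server.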