List ML Model Catalog

GET /cloud/v2/inference/models
curl --request GET \
  --url https://api.gcore.com/cloud/v2/inference/models \
  --header 'Authorization: <api-key>'
Example response:

{
  "category": "Text Classification",
  "default_flavor_id": "123e4567-e89b-12d3-a456-426614174111",
  "description": "My first model",
  "developer": "Stability AI",
  "documentation_page": "/docs",
  "eula_url": "https://example.com/eula",
  "example_curl_request": "curl -X POST http://localhost:8080/predict -d '{\"data\": \"sample\"}'",
  "has_eula": true,
  "id": "3fa85f64-5717-4562-b3fc-2c963f66afa6",
  "image_registry_id": "123e4567-e89b-12d3-a456-426614174999",
  "image_url": "registry.hub.docker.com/my_model:latest",
  "inference_backend": "torch",
  "inference_frontend": "gradio",
  "model_id": "mistralai/Pixtral-12B-2409",
  "name": "model1",
  "openai_compatibility": "full",
  "port": 8080,
  "version": "v0.1"
}

Authorizations

Authorization
string
header
required

API key for authentication. Make sure to include the word apikey, followed by a single space and then your token. Example: apikey 1234$abcdef
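
For example, the full header on the request above looks like the following sketch (the token value is the placeholder from the example):

curl --request GET \
  --url https://api.gcore.com/cloud/v2/inference/models \
  --header 'Authorization: apikey 1234$abcdef'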

Query Parameters

limit
integer

Limits the number of returned records. The maximum allowed value is 1000.

offset
integer

Offset value used to skip the first set of records in the result (useful for pagination together with limit).

order_by
string

Order the results by the specified field and direction, for example name.asc.
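
Putting the query parameters together, a paginated and sorted request might look like the following sketch (parameter values are illustrative):

curl --request GET \
  --url 'https://api.gcore.com/cloud/v2/inference/models?limit=10&offset=20&order_by=name.asc' \
  --header 'Authorization: apikey 1234$abcdef'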

Response

List of ML models.

description
string
required

Description of the model.

Examples:

"My first model"

id
string<uuid>
required

Model ID.

Examples:

"3fa85f64-5717-4562-b3fc-2c963f66afa6"

image_url
string
required

Image URL of the model.

Examples:

"registry.hub.docker.com/my_model:latest"

name
string
required

Name of the model.

Examples:

"model1"

port
integer
required

Port on which the model runs.

Examples:

8080

category
string | null

Category of the model.

Examples:

"Text Classification"

default_flavor_id
string<uuid> | null

Default flavor for the model.

Examples:

"123e4567-e89b-12d3-a456-426614174111"

developer
string | null

Developer of the model.

Examples:

"Stability AI"

documentation_page
string | null

Path to the documentation page.

Examples:

"/docs"

eula_url
string | null

URL to the EULA text.

Examples:

"https://example.com/eula"

example_curl_request
string | null

Example curl request to the model.

Examples:

"curl -X POST http://localhost:8080/predict -d '{\"data\": \"sample\"}'"

has_eula
boolean
default:false

Whether the model has an EULA.

Examples:

true

image_registry_id
string<uuid> | null

Image registry ID of the model.

Examples:

"123e4567-e89b-12d3-a456-426614174999"

inference_backend
string | null

Describes the underlying inference engine.

Examples:

"torch"

"tensorflow"

inference_frontend
string | null

Describes the model frontend type.

Examples:

"gradio"

"vllm"

"triton"

model_id
string | null

Model name used to perform the inference call.

Examples:

"mistralai/Pixtral-12B-2409"

openai_compatibility
string | null

OpenAI compatibility level.

Examples:

"full"

"partial"

"none"

version
string | null

Version of the model.

Examples:

"v0.1"