Multi Model Inference

This endpoint allows you to do inference on multiple models

Written by Per Näslund
Updated over 2 years ago

Do inference on multiple models. Note that the number of inferences debited is <number of texts> * <number of models called>.
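To make the billing rule concrete, here is a minimal sketch (the model IDs are placeholders) showing that sending 2 texts to 2 models debits 4 inferences:

```python
texts = ["Breakfast was not tasty", "breakfast was very good"]
model_settings = [{"model_id": 3}, {"model_id": 5}]

# Debited inferences = number of texts * number of models called
debited = len(texts) * len(model_settings)
print(debited)  # 4
```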

Resource URL

https://api.app.labelf.ai/v2/models/inference

Resource information

Response format: JSON

Requires authentication: Yes (Bearer token)

Path parameters

There are no path parameters.

JSON body parameters

model_settings (required)
Type: list of model_setting objects (structure described below)
Description: Specifies which models to use for this inference run.
Example: [{"model_id": 3}, {"model_id": 5}]

texts (required)
Type: list of strings
Description: The texts to run inference on.
Example: ["Breakfast was not tasty", "breakfast was very good"]

model_setting object JSON structure:

model_id (required)
Type: int
Description: The ID of the model you want to call.
Example: 5

max_predictions (not required)
Type: int
Description: The number of predictions to return. For example, setting this to 3 will yield the three class predictions with the highest probability. If you omit this parameter, you will get predictions for all classes.
Example: 3

label_filter (not required)
Type: list of strings
Description: If you only want predictions for certain classes and want to ignore all others, list those classes here. Partial matches are also included, so "po" will match the class "positive". Leaving this out will yield predictions for all classes.
Example: ["positive", "negative"]

read_full_text (not required)
Type: boolean (default: false)
Description: Set this to true if you want the inference model to run a sliding window over the full text instead of truncating it after 512 tokens (roughly 2000-4000 characters). Note that truncation is currently done during training, so results can be unexpected. (This will be upgraded as well.)
Example: true
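Putting the parameters above together, a request body using the optional settings can be built like this (a sketch; the model IDs are placeholders for your own models):

```python
import json

# Placeholder model IDs; use the IDs of your own Labelf models.
body = {
    "model_settings": [
        {
            "model_id": 33,
            "max_predictions": 3,          # return only the top-3 classes
            "label_filter": ["positive"],  # keep classes matching "positive"
            "read_full_text": True,        # sliding window over long texts
        },
        {"model_id": 35},                  # defaults: all classes, truncated input
    ],
    "texts": ["Breakfast was not tasty", "breakfast was very good"],
}

print(json.dumps(body, indent=2))
```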

Example Request

POST /v2/models/inference HTTP/1.1
Host: api.app.labelf.ai
Authorization: Bearer YOUR_BEARER_TOKEN
Content-Type: application/json;charset=UTF-8

{
  "model_settings": [{"model_id": 33, "max_predictions": 3}, {"model_id": 35}],
  "texts": ["Breakfast was not tasty"]
}

Example Curl Request

curl --location --request POST 'https://api.app.labelf.ai/v2/models/inference' \
--header 'Authorization: Bearer YOUR_BEARER_TOKEN' \
--header 'Content-Type: application/json' \
--data-raw '{
  "model_settings": [{"model_id": 33}, {"model_id": 35}],
  "texts": ["Breakfast was not tasty"]
}'
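The same request can also be made from Python using only the standard library. This is a sketch, not official client code; the token and model IDs are placeholders:

```python
import json
import urllib.request

# Placeholder token; replace with your own Labelf bearer token.
TOKEN = "YOUR_BEARER_TOKEN"

payload = {
    "model_settings": [{"model_id": 33}, {"model_id": 35}],
    "texts": ["Breakfast was not tasty"],
}

req = urllib.request.Request(
    "https://api.app.labelf.ai/v2/models/inference",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Content-Type": "application/json",
    },
    method="POST",
)

# To actually send the request and read the JSON response:
# with urllib.request.urlopen(req) as resp:
#     results = json.load(resp)
```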
Example Response
HTTP/1.1 200 OK
Status: 200 OK
Content-Type: application/json; charset=utf-8
...

[
  {
    "text": "this t-shirt fits me perfectly and looks great",
    "result": [
      {
        "predictions": [
          {"label": "positive", "score": 0.93},
          {"label": "neutral", "score": 0.03}
        ],
        "model": "sentiment model",
        "model_id": 33
      },
      {
        "predictions": [
          {"label": "does not mention size", "score": 0.93},
          {"label": "mentions size", "score": 0.17}
        ],
        "model": "mentions size model",
        "model_id": 35
      }
    ]
  }
]
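A response of this shape can be unpacked per text and per model. A sketch, assuming the response body has already been read into a string:

```python
import json

# Example response body, matching the structure returned by the endpoint.
raw = """[{"text": "this t-shirt fits me perfectly and looks great",
           "result": [
             {"predictions": [{"label": "positive", "score": 0.93},
                              {"label": "neutral", "score": 0.03}],
              "model": "sentiment model", "model_id": 33},
             {"predictions": [{"label": "does not mention size", "score": 0.93},
                              {"label": "mentions size", "score": 0.17}],
              "model": "mentions size model", "model_id": 35}]}]"""

for item in json.loads(raw):
    for model_result in item["result"]:
        # Pick the highest-scoring prediction from each model.
        top = max(model_result["predictions"], key=lambda p: p["score"])
        print(f'{model_result["model"]} (id {model_result["model_id"]}): '
              f'{top["label"]} ({top["score"]:.2f})')
```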