
Deploying Your Model

An overview of deploying a model and using its predictions

Written by Ted Tigerschiöld
Updated today

Deployment is the step where your model goes live — it starts classifying conversations in your dataset and its predictions become available across the platform. This article covers the deployment lifecycle, what to expect, and how to use your model's output.

Deploying from instructions

One of Labelf's key strengths is that you can deploy a model immediately after creating it, based on the task description and label definitions alone. There's no requirement to manually annotate data or fine-tune before going live.

This means you can go from "I want to classify customer issues" to seeing predictions across your dataset in minutes. If the results are good enough, you're done. If you want higher accuracy, you can always come back and improve the model later through prompt tuning or fine-tuning (see [Improving Your Model]).

Monitoring deployment

The Models page gives you a bird's-eye view of all your models and their statuses. The stats bar at the top shows how many models are in each stage at any given time.

You can also track individual model progress from the model's detail page. The Overview tab shows the model's current status badge (Ready, Deployed, Training, etc.) alongside key information like total items, labeled items, and model performance metrics. The Activity tab provides a timeline of all status changes — when the model started training, finished validating, and completed deployment — so you can track exactly what happened and when.

What happens when a model is deployed

Once a model reaches the Deployed status:

Predictions appear in Search — You can filter conversations by the model's predictions using the Model filter in Search. This lets you browse all conversations the model classified under a specific label.

Predictions are available in Dashboards — The model's classification data becomes available as a dimension in your dashboard charts. You can build charts that visualize the distribution of predictions over time, across agents, or by any other dimension.

The API endpoint becomes active — If you use the Labelf API, your deployed model is available for inference calls. You can find the endpoint details and a ready-to-use curl example on the API Management page (accessible from the workspace menu under API Keys).

New conversations are classified automatically — As new data flows into your dataset (via integrations, uploads, or API), the deployed model classifies those conversations automatically.
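Since the exact endpoint path, payload shape, and authentication header are workspace-specific (the API Management page shows a ready-to-use curl example), here is a hedged sketch of how such an inference call could be assembled in Python. The base URL, `/predict` path, and `texts` field are assumptions for illustration only; substitute the details from your API Management page:

```python
import json

# Hypothetical values -- the real endpoint URL, payload shape, and auth
# header are shown on the API Management page for your workspace.
API_BASE = "https://api.labelf.ai"  # assumed base URL


def build_inference_request(api_key: str, model_id: str, texts: list) -> dict:
    """Assemble the pieces of an inference call without sending it.

    The returned dict could be passed to an HTTP client, e.g.
    requests.post(request["url"], headers=request["headers"], data=request["data"]).
    """
    return {
        "url": f"{API_BASE}/v1/models/{model_id}/predict",  # assumed path
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "data": json.dumps({"texts": texts}),  # assumed payload field
    }


request = build_inference_request("YOUR_API_KEY", "model_123", ["Where is my refund?"])
print(request["url"])
```

Once a model is deployed, sending this request would return the model's predicted labels for each conversation text in the payload.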

The model detail page

Each model has a detail page where you can monitor performance, tune settings, and manage the model. The detail page has five tabs:

Overview — Shows key stats (total items, labeled items, number of labels), model performance metrics (accuracy, F1, precision, recall), label distribution with a donut chart, training progress, your labeling activity, connected datasets, and a prompt tuning guide.

Task Settings — Where you edit the model's task description and label definitions, run Analyze & Optimize, and view your prompt tuning history.

Metrics — Detailed performance metrics with Training/Validation toggle, confusion matrix, and per-label breakdowns.

Labeling History — A list of all conversations you've labeled for this model, with the ability to review and change labels.

Activity — A timeline of events for this model, including status changes, renames, and configuration updates.

The model detail page also provides quick-access buttons at the top: Start Labeling to begin annotating conversations, and Settings for additional configuration.

Model settings

The model's Overview tab displays a Model Information sidebar with key metadata and two important toggles:

Always Deployed — When enabled, the model remains visible in dashboards even when it's not in the "Deployed" state. This is useful if you want to keep using a model's predictions in dashboards while you retrain or fine-tune it.

Exportable — Controls whether this model can be exported to other systems.

You can also see the model's ID, creator, creation date, last update, and checkpoint status in this sidebar.

Retraining and redeployment

As you improve your model — whether through prompt tuning or fine-tuning — Labelf automatically retrains and redeploys the updated version. During retraining, the previous model version continues to serve predictions, so there's no downtime.
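The no-downtime behavior can be pictured as an atomic version swap: the old version keeps serving until the new one is fully trained, then a single switch makes the new version live. This is a conceptual sketch, not Labelf's actual implementation:

```python
import threading


class ModelRegistry:
    """Illustrative only: requests keep hitting the active version while a
    new version trains elsewhere; cutover is one locked assignment."""

    def __init__(self, version: str):
        self._lock = threading.Lock()
        self._active = version

    def predict(self, text: str) -> str:
        # Read the active version once; a concurrent swap can't interrupt
        # a prediction that has already started.
        with self._lock:
            model = self._active
        return f"{model}:{text}"

    def swap(self, new_version: str) -> None:
        # Only called once the retrained version is ready to serve.
        with self._lock:
            self._active = new_version


registry = ModelRegistry("v1")
print(registry.predict("hello"))  # served by v1
registry.swap("v2")               # redeploy: instant cutover
print(registry.predict("hello"))  # now served by v2
```

Because the swap is a single atomic assignment, there is never a window in which no model is available, which is why predictions stay live throughout retraining.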

Managing deployed models

Organizing models

As your workspace grows, use folders to keep your models organized. On the Models page, click New Folder to create a folder, then organize models into it. A common approach is to group models by topic — for example, putting all resolution-related models in a "Resolution" folder.

Searching and filtering models

Use the search bar on the Models page to find models by name. You can also use the Filters button and the sort options to navigate large model collections.

Model views

The Models page offers two views — toggle between them using the icons in the top-right of the model list:

  • List view — A compact list with label chips, status badges, and metadata

  • Grid view — Cards with more visual detail


Need help?

If you have questions about deployment, run into issues, or need help with the API, reach out to us at support@labelf.ai or use the chat widget in the bottom-right corner of the screen.
