The PREDICT-API service of the AI platform provides the API for retrieving typification predictions determined by the Machine Learning (ML) Pipeline.


The PREDICT-API service is a component of the Auto ML Platform. This platform is not included in yuuvis® Momentum installations and is available as a beta version only on request.

Characteristics

Service Name: predict-api
Public API: PREDICT-API Endpoints

Function

The PREDICT-API service is responsible for calling the appropriate machine learning models, improving and validating the results returned by the ML models, and finally creating responses for the calling party according to the rules set in the inference schema.

The endpoints of the PREDICT-API service are provided in a separate API that can be called by client applications.
>> PREDICT-API Endpoints
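
The sketch below illustrates how a client application might retrieve typification predictions from the service. The base URL, the /predict endpoint path, the request payload, and the response fields are assumptions for illustration only; refer to the PREDICT-API Endpoints page for the actual contract.

```python
# Minimal sketch of a client call to the PREDICT-API service.
# NOTE: The endpoint path, payload shape, and response fields are assumptions
# for illustration; consult the PREDICT-API Endpoints page for the real contract.
import requests

PREDICT_API_BASE_URL = "http://predict-api"  # assumed in-cluster service address


def get_type_predictions(text: str, token: str) -> list[dict]:
    """Send document text to the (assumed) prediction endpoint and return
    the list of typification predictions."""
    response = requests.post(
        f"{PREDICT_API_BASE_URL}/predict",            # hypothetical endpoint path
        json={"text": text},                          # hypothetical payload shape
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    response.raise_for_status()
    # Assumed response shape:
    # {"predictions": [{"objectTypeId": "...", "confidence": 0.9}, ...]}
    return response.json().get("predictions", [])


if __name__ == "__main__":
    for prediction in get_type_predictions("Invoice No. 4711 ...", token="<access token>"):
        print(prediction["objectTypeId"], prediction["confidence"])
```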

Requirements

The PREDICT-API service is a part of the Artificial Intelligence Platform and can run only in combination with the other included services.

If you want to use the PREDICT-API service for AI integration in a client application based on our client libraries (e.g., yuuvis® client as a reference implementation), the requirements of the involved services also have to be considered.

Installation

The AI platform services, including the PREDICT-API service, are not yet included in yuuvis® Momentum installations and are available only on request.

Configuration

The PREDICT-API service is managed, configured, and maintained via the command line application Kairos CLI.

In particular, the Inference Schema needs to be defined to match your client application.
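
To make the role of the inference schema more concrete, the following sketch mimics the kind of post-processing rules such a schema can express, e.g., a confidence threshold and a maximum number of suggestions. The rule names, values, and prediction structure are invented for illustration and do not reflect the actual schema format managed via the Kairos CLI.

```python
# Conceptual sketch only: shows how inference-schema-like rules could shape
# raw model output into a client-facing response. The rule names and the
# prediction structure are assumptions, not the actual schema format.
from dataclasses import dataclass


@dataclass
class InferenceRule:
    min_confidence: float   # drop predictions below this confidence (assumed rule)
    max_suggestions: int    # return at most this many suggestions (assumed rule)


def apply_rules(predictions: list[dict], rule: InferenceRule) -> list[dict]:
    """Filter and rank raw model predictions before returning them to the client."""
    kept = [p for p in predictions if p["confidence"] >= rule.min_confidence]
    kept.sort(key=lambda p: p["confidence"], reverse=True)
    return kept[: rule.max_suggestions]


raw = [
    {"objectTypeId": "invoice", "confidence": 0.91},
    {"objectTypeId": "contract", "confidence": 0.42},
    {"objectTypeId": "email", "confidence": 0.07},
]
print(apply_rules(raw, InferenceRule(min_confidence=0.3, max_suggestions=2)))
```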

Read on


PREDICT-API Endpoints



Kairos CLI App



ML Training Pipeline
