...
tag -n [MODEL NAME] -v [MODEL VERSION] -approachUsed [APPROACH USED VALUE] -dispName [DISPLAY NAME VALUE] -predType [PREDICTION TYPE VALUE] -predTypeLabel [PREDICTION TYPE LABEL VALUE]
...
Assigns initial tags to the registered model. Requires a MODEL NAME as well as MODEL VERSION for the model in the registry.
The following tags can also be defined:
APPROACH USED VALUE – value for the approach used
DISPLAY NAME VALUE – value for the display name
PREDICTION TYPE VALUE – value for the prediction type
PREDICTION TYPE LABEL VALUE – value for the prediction type label, e.g., INVOICE_COMPANY_NAME or CURR-DOM-GROSS for extraction, CLASSIFICATION for classification
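For illustration only, a call with hypothetical example values filled in (the model name and all tag values below are made-up examples, not defaults):

```
tag -n invoice-model -v 2 -approachUsed template-matching -dispName "Invoice Extractor" -predType extraction -predTypeLabel INVOICE_COMPANY_NAME
```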
...
Deploys the model on the Kubernetes cluster configured in the local .kube/config file.
Requires the following parameters:
REGISTERED MODEL NAME – name of the model in the model registry
REGISTERED MODEL VERSION – version of the model in the model registry
SERVICE/DEPLOYMENT NAME – name of the service and deployment on Kubernetes
IMAGE TAG NAME – name we want to give to the image
IMAGE PULL SECRET NAME – name of the image pull secret defined on Kubernetes cluster for a specific namespace
CONTAINER PORT VALUE – port defined inside the model container
SERVICE PORT VALUE – port we want to define for our service
MEMORY LIMITS VALUE – the amount of memory for our deployed model container (e.g., 5Gi), replaces value for limits:memory: defined in deploy manifest
MEMORY REQUEST VALUE – the amount of memory container requests in order to work (e.g., 4Gi), replaces value for requests:memory: defined in deploy manifest
URL TO DOCKER IMAGE REGISTRY – URL to docker image registry (e.g., https://docker.optimal-systems.org)
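As an illustration only: assuming the deploy command accepts its parameters via flags analogous to the other commands on this page, a deployment call could look like the following. Only -n, -v, and -deployName follow flag names documented elsewhere on this page; all other flag names and all values are assumptions:

```
deploy -n invoice-model -v 2 -deployName invoice-extraction-service \
  -imageTag invoice-model:2 -imagePullSecret registry-secret \
  -containerPort 8080 -servicePort 80 \
  -memLimit 5Gi -memRequest 4Gi \
  -registryUrl https://docker.optimal-systems.org
```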
...
Starts inference with a specific model. Requires a REGISTERED MODEL NAME and REGISTERED MODEL VERSION in order to start inference.
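For illustration, assuming the command follows the flag conventions of the other commands (the command word and flag names here are assumptions, and the model name is a made-up example):

```
start -n invoice-model -v 2
```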
...
delete -deployName [SERVICE/DEPLOYMENT NAME] -n [REGISTERED MODEL NAME] -v [REGISTERED MODEL VERSION]
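With hypothetical example values filled in (service and model names are made-up examples), a delete call might look like this:

```
delete -deployName invoice-extraction-service -n invoice-model -v 2
```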
...
Returns model performance metrics from previous requests (accuracy, recall, F1 score, response time, etc.). Requires the RUN ID of the model run you want to retrieve the performance for. Optionally, a START DATE can be defined from which model results are retrieved; if it is not defined, results starting with the first run are retrieved. Optionally, an END DATE can be defined until which model results are retrieved; if it is not defined, results up to the last run are retrieved.
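For illustration only — the command word, flag names, and date format below are all assumptions, since only the parameter names are documented here:

```
performance -runId 42 -startDate 2021-01-01 -endDate 2021-06-30
```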
...
Returns console output of the specified inference schema.
Optionally, an APP NAME can be specified via the option -appName
in order to get the inference schema of the specified app.
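As a sketch (the command word is an assumption; -appName is documented above, the app name is a made-up example):

```
schema get -appName invoiceclient
```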
...
Uploads an inference schema to the config API, overwriting the existing one. Requires the FILE PATH to the inference schema to be uploaded.
Optionally, an APP NAME can be specified via the option -appName
in order to upload the inference schema for the specified app.
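As a sketch (the command word and the flag for the file path are assumptions; -appName is documented above, the path and app name are made-up examples):

```
schema upload -filePath C:\schemas\inference-schema.json -appName invoiceclient
```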
...
Downloads an inference schema to the specified location. Requires a RETURN PATH where the inference schema will be saved.
Optionally, an APP NAME can be specified via the option -appName
in order to get the inference schema of the specified app.
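As a sketch (the command word and the flag for the return path are assumptions; -appName is documented above, the path and app name are made-up examples):

```
schema download -returnPath C:\schemas\ -appName invoiceclient
```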
Requirements
...
A command-line application for managing the Artificial Intelligence Platform.
...
Characteristics
Note: The app is a component of the Artificial Intelligence Platform. This platform is not included in yuuvis® Momentum installations and is available as a beta version only on request.
Function
The management of the Artificial Intelligence Platform is done via the command line application Kairos CLI. The following commands are available.
...
Log in to the Kairos CLI app to get access to the further commands. The login requires TENANT, USERNAME, and PASSWORD in any order. If you do not specify values for all options, an interactive dialog is started.
You will stay logged in until you enter the kairos logout command. Even if you close the terminal and open it again, you are still logged in and will be able to run all commands.
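For illustration only — the flag names and all values below are assumptions, since the documented behavior is simply that omitted options trigger a dialog:

```
kairos login -tenant mytenant -username admin -password secret
kairos logout
```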
...
Adds a new pipeline to the list. Requires a NAME for the new pipeline and a path to the new pipeline's ARTIFACT LOCATION.
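As a sketch (the command word and the flag for the artifact location are assumptions; the name and path are made-up examples):

```
add -n my-pipeline -artifactLocation s3://bucket/pipelines/my-pipeline
```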
...
build -n [NAME] -t [TYPE] -d [DEVICE OPTIONAL] -v [VERSION OPTIONAL]
...
Creates a docker image for an existing pipeline. Requires a NAME for the pipeline and a TYPE of the pipeline to build an image for.
Optionally, the DEVICE used in all runs for the build (CPU/GPU) can be specified via the option -d, and a manual VERSION tag can be specified via the option -v.
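Using the documented syntax with hypothetical example values (the pipeline name, type value, and version tag are made-up examples):

```
build -n my-pipeline -t extraction -d GPU -v 1.0.0
```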
...
Adds a new experiment to the list. Requires a NAME for the new experiment and a path to the new experiment's ARTIFACT LOCATION.
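As a sketch (the command word and the flag for the artifact location are assumptions; the name and path are made-up examples):

```
add -n my-experiment -artifactLocation s3://bucket/experiments/my-experiment
```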
...
Removes an experiment and its configurations. Requires the ID of the experiment (NOT the pipeline!) you want to delete.
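As a sketch (the command word and flag name are assumptions; the ID is a made-up example):

```
delete -id 7
```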
...
Returns console output with a list of all available registered models.
Optionally, a MODEL NAME can be specified via the option -n. The output will then be a list of all available versions of the corresponding model.
In addition to the option -n, the option -v can be used to specify a MODEL VERSION. The output will then be a list containing information on the specific model version.
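For illustration, the three levels of detail might be requested as follows (the command word and model name are assumptions; -n and -v are documented above):

```
list
list -n invoice-model
list -n invoice-model -v 2
```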
This service of the AI platform provides the API for the configuration of the AI platform, such as the schema that binds the ML extractors to fields in the object schema.
A Swagger UI is available via https://<host>/kairos-api/swagger-ui.html.
Characteristics
Service Name | kairos-api |
---|---|
Public API | KAIROS-API Endpoints |
Function
The KAIROS-API service is responsible for configuring the inference schema, which is in turn used by PREDICT-API to call the appropriate machine learning models for an object type.
The endpoints of the KAIROS-API service are provided in a dedicated API that is called by system operators to configure the system.
Requirements
The KAIROS-API service is a part of the Artificial Intelligence Platform and can run only in combination with the other included services.
Kairos CLI furthermore requires:
- Java 13+
The current beta version of Kairos CLI can be operated only on Windows systems.
Installation and Configuration
Use the terminal in administrator mode to have access to all system directories.
- If you were already using the Kairos CLI app before, uninstall the old version.
- Run the installation file for the current version of Kairos CLI.
- Finish the installation.
- Set the path of the Kairos CLI installation directory as the value of your system environment PATH variable.
Now, configure the custom properties for the Kairos CLI app:
...
Further requirements:
>> AI Platform Requirements
Configuration
The Inference Schema needs to be defined according to your client application.
Read on
...