A command-line application for managing the Artificial Intelligence Platform.
## Characteristics

### Beta Version
The app is a component of the Artificial Intelligence Platform. This platform is not included in yuuvis® Momentum installations and is available as a beta version only on request.
## Function

The Artificial Intelligence Platform is managed via the Kairos CLI command-line application. The following commands are available:
| Command | Sub-Command | Description | Authorization required |
|---|---|---|---|
| `kairos help` | | Lists all available commands and how to use them. | no |
| `kairos --help [COMMAND]` | | Lists the help page for the specified command. | no |
| `kairos login -t [TENANT] -u [USERNAME] -p [PASSWORD]` | | Logs in to the Kairos CLI app to get further access to commands. The login requires TENANT, USERNAME, and PASSWORD in any order. If you do not specify values for all options, a dialog is started. You stay logged in until you log out. | - |
| | | Logs out from the Kairos CLI app. | yes |
| `kairos pipeline` | | Manages local AI pipelines. | yes |
| | `list` | Returns console output with a list of pipelines. | |
| | `create -n [NAME] -l [ARTIFACT LOCATION]` | Adds a new pipeline to the list. Requires a NAME for the new pipeline and a path to the new pipeline's ARTIFACT LOCATION. | |
| | `delete -id [PIPELINEID]` | Removes a pipeline from the list. Requires the ID of the pipeline you want to delete. | |
| | | Creates a Docker image for an existing pipeline. Requires a NAME for the pipeline and a TYPE of the pipeline to build an image for. | |
| `kairos experiment` | | Runs and manages MLflow experiments. | yes |
| | `list` | Returns console output with a list of available experiments. | |
| | `create -n [NAME] -l [ARTIFACT LOCATION]` | Adds a new experiment to the list. Requires a NAME for the new experiment and a path to the new experiment's ARTIFACT LOCATION. | |
| | `run -n [NAME] -t [TYPE] -id [EXPERIMENTID]` | Executes and tracks an experiment run. Requires the NAME and TYPE of the pipeline to execute, and an EXPERIMENTID under which to track the run. | |
| | `delete -id [EXPERIMENTID]` | Removes an experiment and its configurations. Requires the ID of the experiment (NOT the pipeline!) you want to delete. | |
| `kairos model` | | Manages models. | yes |
| | `list -n [MODEL NAME] -v [MODEL VERSION]` | Returns console output with a list of all available registered models. Optionally, a MODEL NAME can be specified via the `-n` option and a MODEL VERSION via the `-v` option. | |
| | | Assigns initial tags to the registered model. Requires a MODEL NAME as well as a MODEL VERSION for the model in the registry. Further tags can also be defined. | |
| | `deploy -n [REGISTERED MODEL NAME] -v [REGISTERED MODEL VERSION] -deployName [SERVICE/DEPLOYMENT NAME] -dockerImage [IMAGE TAG NAME] -imagePS [IMAGE PULL SECRET NAME] -containerP [CONTAINER PORT VALUE] -serviceP [SERVICE PORT VALUE] -memLimits [MEMORY LIMITS VALUE] -memRequests [MEMORY REQUEST VALUE]` | Deploys the model on the Kubernetes cluster configured via the local kube config file. Requires the parameters listed in the Sub-Command column. | |
| | `startInference -n [REGISTERED MODEL NAME] -v [REGISTERED MODEL VERSION]` | Starts inference with a specific model. Requires the REGISTERED MODEL NAME and REGISTERED MODEL VERSION in order to start inference. | |
| | `stopInference -n [REGISTERED MODEL NAME] -v [REGISTERED MODEL VERSION]` | Stops inference with a specific model. Requires the REGISTERED MODEL NAME and REGISTERED MODEL VERSION of the model to be stopped. | |
| | | Deletes a deployed model (service and deployment) from the cluster. Requires the SERVICE/DEPLOYMENT NAME, REGISTERED MODEL NAME, and REGISTERED MODEL VERSION in order to delete the model from the cluster. | |
| | `metrics -runID [RUN ID]` | Returns model metrics. Requires the RUN ID of the model for which all metrics should be returned. | |
| | `performance -runID [RUN ID]` | Returns model performance in previous requests (accuracy, recall, F1, response time, ...). Requires the RUN ID of the model run you want to get performance figures for. Optionally, a START DATE can be defined from which model results are retrieved; if not defined, results starting with the first run are retrieved. Optionally, an END DATE can be defined until which model results are retrieved; if not defined, results up to the last run are retrieved. | |
| `kairos schema` | | Manages the inference schema. | yes |
| | `view -appName [APP NAME]` | Returns console output of the specified inference schema. Optionally, an APP NAME can be specified via the `-appName` option. | |
| | `store -fp [FILE PATH] -appName [APP NAME]` | Uploads an inference schema to the config API, overwriting any existing one. Requires the FILE PATH of the inference schema to be uploaded. Optionally, an APP NAME can be specified via the `-appName` option. | |
| | `download -rp [RETURN PATH] -appName [APP NAME]` | Downloads an inference schema to the specified location. Requires a RETURN PATH where the inference schema will be saved. Optionally, an APP NAME can be specified via the `-appName` option. | |
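A typical session might look like the following sketch. The tenant, names, IDs, and paths are placeholders, and the exact invocation form (sub-command passed directly after the parent command) is an assumption based on the table above:

```shell
# Log in once; subsequent commands reuse the session
kairos login -t mytenant -u admin -p secret

# Inspect and extend the local pipeline list
kairos pipeline list
kairos pipeline create -n invoice-classifier -l C:/kairos/artifacts/invoice-classifier

# Track a run of that pipeline under an existing experiment (ID 42 is a placeholder)
kairos experiment run -n invoice-classifier -t [TYPE] -id 42
```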
## Requirements
The Kairos CLI app is part of the Auto ML platform and can run only in combination with the other included components.
Kairos CLI furthermore requires:
- Java 13+
The current beta version of Kairos CLI can be operated only on Windows systems.
## Installation and Configuration
Use the terminal in administrator mode to have access to all system directories.
- If you have used a previous version of the Kairos CLI app, uninstall it first.
- Run the installation file for the current version of Kairos CLI.
- Finish the installation.
Set the path of the Kairos CLI installation directory as a value of your system PATH environment variable.
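On Windows, this can be done from the administrator command prompt, for example (the installation path below is an assumption; adjust it to your system):

```shell
:: Append the Kairos CLI directory to the machine-wide PATH (requires administrator mode).
:: Note: setx truncates values longer than 1024 characters; prefer the System
:: Properties > Environment Variables dialog if your PATH is already long.
setx /M PATH "%PATH%;C:\Program Files\kairos"
```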
Now, configure the custom properties for the Kairos CLI app:
- Create an `application.properties` file in the `<installationDirectoryPath>\app\config` directory on your system drive.
- Fill in your custom properties and save the file. File paths must use `/` instead of `\` as separators (e.g., `C:/Program Files/kairos/app/config/pipelines.json` instead of `C:\Program Files\kairos\app\config\pipelines.json`).
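A minimal sketch of such a file is shown below. The property key is purely illustrative (the actual Kairos property names are not documented here); it only demonstrates the forward-slash path format:

```properties
# <installationDirectoryPath>/app/config/application.properties
# NOTE: 'pipelines.location' is a hypothetical key used for illustration only;
# use the property names from your Kairos documentation.
pipelines.location=C:/Program Files/kairos/app/config/pipelines.json
```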
Note: May differ for Linux operating systems.