

Modification History

Name      | Date        | Product Version | Action
Goran     | 18 OCT 2021 | 2021 Autumn     | created
Agnieszka | 25 OCT 2021 | 2021 Winter     | rLANG
Antje     | 25 NOV 2022 | 2022 Winter     | remove beta label, update content



Excerpt

A set of APIs and tools that enables customers to set up and use AI functionalities by themselves.

...

The Artificial Intelligence Platform



Platform

...

Platform Concept

The Artificial Intelligence Platform is a set of APIs and tools that enables customers to set up and use AI functionalities by themselves, at low cost and with high performance.

...

Every part of the AI Platform can be deployed on-premises or in the cloud.

 


Platform Components

The Artificial Intelligence Platform consists of two main parts, the Training part and the Serving part, each with its own components.

Model Training

The Model Training part of the platform is responsible for storing data exported from yuuvis® Momentum (documents and metadata), preprocessing the exported data, training machine learning models, and evaluating them.

...

  • Exported data repository – a physical place where the exported data is stored (local storage, S3, Azure Blob Storage, etc.)
  • ML Training Pipeline – the component responsible for data preprocessing, AI model training, and model evaluation. Can be deployed on-premises or in the cloud.
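One plausible way to picture the exported data repository and the check step: each exported document sits next to a metadata file, and pairs without usable metadata are filtered out before training. The following is a minimal sketch; the file layout (`<id>.pdf` plus `<id>.metadata.json`) and all names are illustrative assumptions, not the actual yuuvis® Momentum export format:

```python
import json
from pathlib import Path
from tempfile import TemporaryDirectory

def pair_exports(export_dir: Path):
    """Pair each exported document with its metadata file.

    Hypothetical layout: <id>.pdf next to <id>.metadata.json.
    Returns (valid, orphans); orphans lack usable metadata.
    """
    valid, orphans = [], []
    for doc in sorted(export_dir.glob("*.pdf")):
        meta_file = doc.with_name(doc.stem + ".metadata.json")
        if not meta_file.exists():
            orphans.append(doc.name)
            continue
        metadata = json.loads(meta_file.read_text())
        if metadata:
            valid.append((doc.name, metadata))
        else:  # empty metadata counts as unusable
            orphans.append(doc.name)
    return valid, orphans

with TemporaryDirectory() as tmp:
    root = Path(tmp)
    (root / "inv-001.pdf").write_bytes(b"%PDF stub")
    (root / "inv-001.metadata.json").write_text(
        json.dumps({"supplier": "ACME", "amount": 119.0}))
    (root / "inv-002.pdf").write_bytes(b"%PDF stub")  # metadata missing
    valid, orphans = pair_exports(root)
    print(valid)
    print(orphans)  # ['inv-002.pdf']
```

Documents without metadata are reported rather than silently dropped, so they can be fixed or excluded explicitly before training.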

Model Serving

The Model Serving part of the platform is responsible for serving predictions to the calling application (yuuvis® Momentum client application, 3rd party applications, scanners, etc.).

There are two main components:

  • Model Serving Server – infrastructure that runs dockerized AI models performing classification or metadata extraction from documents. Can be deployed on-premises or in the cloud.
  • PREDICT-API Service – responsible for calling the appropriate machine learning models, aggregating, improving, and validating their results, and finally rendering the response to the calling application.
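Conceptually, the aggregation step could work like the sketch below: each model contributes per-field predictions with a confidence, and the service keeps the best value per field. The model names, score fields, and voting rule are illustrative assumptions, not the actual PREDICT-API logic:

```python
def aggregate_predictions(model_results, min_confidence=0.5):
    """Merge per-model metadata predictions into one response.

    model_results maps a model name to a dict of
    field -> (value, confidence). For each field, keep the
    highest-confidence value above min_confidence.
    """
    best = {}
    for model, fields in model_results.items():
        for field, (value, conf) in fields.items():
            if conf < min_confidence:
                continue  # drop low-confidence predictions
            if field not in best or conf > best[field][1]:
                best[field] = (value, conf)
    return {field: value for field, (value, _) in best.items()}

results = {
    "invoice-extractor-a": {"supplier": ("ACME GmbH", 0.92),
                            "amount": ("119.00", 0.40)},
    "invoice-extractor-b": {"supplier": ("ACME", 0.75),
                            "amount": ("119.00", 0.88)},
}
print(aggregate_predictions(results))
# {'supplier': 'ACME GmbH', 'amount': '119.00'}
```

A confidence threshold like this is one simple way the service could validate results before rendering them to the calling application.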

Platform Managing

The whole platform is configured using the KAIROS-API service, which manages the Inference Schema. This allows system integrators to configure the use of classifiers and metadata extractors for the whole system or individually for each tenant.
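The Inference Schema format is not specified here; as a rough illustration, it could map tenants and document types to the models to invoke, with tenant-specific entries overriding a system-wide default. All keys and model names below are hypothetical:

```python
# Hypothetical Inference Schema: which models serve which
# document type, per tenant, with a system-wide default.
INFERENCE_SCHEMA = {
    "system": {"invoice": ["classifier-v2", "invoice-extractor"]},
    "tenants": {
        "tenant-a": {"invoice": ["invoice-extractor-custom"]},
    },
}

def models_for(tenant, doc_type, schema=INFERENCE_SCHEMA):
    """Resolve models: tenant-specific entry first, then the
    system default; empty list if the type is not configured."""
    tenant_cfg = schema["tenants"].get(tenant, {})
    if doc_type in tenant_cfg:
        return tenant_cfg[doc_type]
    return schema["system"].get(doc_type, [])

print(models_for("tenant-a", "invoice"))  # tenant override
print(models_for("tenant-b", "invoice"))  # system default
```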

Infrastructure

All Serving and platform management components are dockerized and can be deployed to any infrastructure that supports Docker/Kubernetes, on-premises, or in the cloud. 

The Model Training and Model Serving parts are independent of each other and can be deployed to different infrastructures.

To support customers with sensitive data who do not want their documents to leave their premises, the model training can be executed in their own infrastructure. Once the models are dockerized, they can be used on-premises or in the cloud, according to the customers' needs.

Flow Description

Since this is an end-to-end platform, the flow will be best described using the most common use case: metadata extraction from invoices.

The whole process can be done in just 8 steps.

  1. Export documents and their metadata in a predefined format (e.g., 5,000 invoices received from different partners/suppliers).
  2. Check the exported documents and their metadata (remove those without metadata or with wrong metadata).
  3. Decide which metadata shall be extracted and train the corresponding ML Training Pipelines.
  4. Evaluate the model performance.
  5. Repeat steps 1 to 4 or 3 to 4 until you are satisfied with the results.
  6. Dockerize and deploy the models.
  7. Define the Inference Schema (set the usage of the model for the appropriate document type).
  8. Use the PREDICT-API Service to extract the metadata from new invoices in your client application.
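Steps 3 to 5 form a train-evaluate-repeat loop. The toy sketch below only simulates that loop; no real training happens, the scores are made up, and `train_pipeline` is a hypothetical stand-in, not a platform API:

```python
import random

def train_pipeline(examples, seed):
    """Stand-in for one ML Training Pipeline run (steps 3-4):
    returns a 'model' with a simulated evaluation score."""
    rng = random.Random(seed)
    return {"seed": seed, "f1": round(0.80 + rng.random() * 0.15, 3)}

# Step 5: repeat until the evaluation metric is satisfactory,
# then dockerize and deploy the winning model (step 6).
target_f1 = 0.90
model = None
for attempt in range(10):
    model = train_pipeline(examples=["inv-001", "inv-002"], seed=attempt)
    if model["f1"] >= target_f1:
        break
print(model)
```

In practice, the stopping criterion would be a real evaluation metric produced by step 4, and each iteration might also change the exported data or the training configuration.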


Read on

  • KAIROS-API Service
  • ML Training Pipeline
  • PREDICT-API Service
...