Platform Overview

A set of APIs and tools that enables customers to set up and use AI functionalities by themselves.

Beta Version

The Artificial Intelligence Platform is not included in yuuvis® Momentum installations but only available on request as a beta version.

Platform Concept

The Artificial Intelligence Platform is a set of APIs and tools that enables customers to set up and use AI functionalities by themselves, at low cost and with high performance.

Customers can export their own documents, train models using different algorithms provided by OPTIMAL SYSTEMS GmbH, evaluate trained models, and deploy them to make predictions.

Every part of the AI Platform can be deployed on-premises or in the cloud.

Platform Components

The Artificial Intelligence Platform consists of two main parts, the Model Training part and the Model Serving part, each with its own components.

Model Training

The Model Training part of the platform is responsible for storing exported data from yuuvis® Momentum (documents, metadata), preprocessing exported data, machine learning training, model evaluation, etc.

There are two main components:

  • Exported data repository – a physical place where exported data are stored (local storage, S3, Azure Blob Storage, etc.)
  • ML Training Pipeline – the component responsible for the preprocessing of data, machine learning training, model evaluation, etc. Can be deployed on-premises or in the cloud.
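To make the stages of the ML Training Pipeline concrete, here is a minimal sketch of the preprocessing/training/evaluation chain. The function names and the toy keyword-counting model are illustrative assumptions, not the pipeline's actual API or algorithms:

```python
# Illustrative sketch of the three stages an ML training pipeline chains
# together: preprocessing, training, evaluation. All names and the toy
# token-counting model are hypothetical, not the platform's real API.
from collections import Counter, defaultdict

def preprocess(text):
    # Normalize a raw document into a bag of lowercase tokens.
    return Counter(text.lower().split())

def train(documents):
    # "Train" a trivial model: per label, accumulate token counts.
    model = defaultdict(Counter)
    for text, label in documents:
        model[label] += preprocess(text)
    return model

def predict(model, text):
    # Score each label by token overlap with the document.
    tokens = preprocess(text)
    scores = {label: sum((counts & tokens).values())
              for label, counts in model.items()}
    return max(scores, key=scores.get)

def evaluate(model, documents):
    # Fraction of held-out documents classified correctly.
    hits = sum(predict(model, text) == label for text, label in documents)
    return hits / len(documents)

training_set = [
    ("invoice total amount due payment", "invoice"),
    ("delivery note shipment items", "delivery_note"),
]
model = train(training_set)
accuracy = evaluate(model, [("amount due on this invoice", "invoice")])
```

In the real pipeline these stages run against the exported data repository and the algorithms provided by OPTIMAL SYSTEMS GmbH; the sketch only shows how the stages hand data to one another.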

Model Serving

The Model Serving part of the platform is responsible for serving predictions to the calling application (yuuvis® Momentum client application, 3rd party applications, scanners, etc.).

There are two main components:

  • Model Serving Server – infrastructure that hosts machine learning models that make predictions. Can be deployed on-premises or in the cloud. 
  • PREDICT-API Service – responsible for calling appropriate machine learning models, aggregating, improving, and validating results, and finally rendering responses to the calling application.
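A calling application reaches the PREDICT-API over HTTP. The following sketch shows how such a request could be assembled; the endpoint path, payload fields, and document content are assumptions for illustration only, so consult the PREDICT-API Service documentation for the actual interface:

```python
# Sketch of how a client might call a predict endpoint of the PREDICT-API.
# The "/predict" path, the payload fields, and the sample content are
# hypothetical, chosen only to illustrate the request/response pattern.
import json
from urllib import request

def build_predict_request(base_url, object_id, content_base64):
    # Assemble an HTTP POST request carrying the document to classify.
    payload = {"objectId": object_id, "content": content_base64}
    return request.Request(
        url=f"{base_url}/predict",  # hypothetical endpoint path
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_predict_request("http://localhost:8080", "doc-42", "JVBERi0x...")
```

Sending the request (e.g. with `urllib.request.urlopen`) would then return the aggregated and validated prediction results rendered by the service.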

Platform Managing

The whole platform is managed using the Kairos CLI App.

Infrastructure

All platform components are dockerized and can be deployed to any infrastructure that supports Docker/Kubernetes, whether on-premises or in the cloud.

The Model Training and Model Serving parts are independent of each other and can be deployed to different infrastructures.

Customers with sensitive data who do not want their documents to leave their premises can run the model training in their own data center.

Flow Description

Since this is an end-to-end platform, the flow will be best described using the most common use case: metadata extraction from invoices.

The whole process can be done in just 11 steps.

  1. Export documents and their metadata (e.g., 5,000 invoices received from different partners/suppliers). 
  2. Check exported documents and their metadata (remove those without metadata or with wrong metadata).
  3. Select a suitable algorithm for training and set hyperparameters (one or more algorithms are available for each class of problems).
  4. Run the machine learning training. 
  5. Evaluate the model performance.
  6. Repeat steps 1 to 5 or 3 to 5 until you are satisfied with the results.
  7. Deploy the model to the serving infrastructure.
  8. Define inference schema (set usage of the model for the appropriate document type).
  9. Call predict endpoints of the PREDICT-API.
  10. Show results to the user.
  11. Collect and save feedback – users' choices should be saved for further improvement of the models.
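Step 8, defining the inference schema, maps each document type to the model that should serve it. A minimal sketch of what such a schema could express follows; the structure, model names, and version fields are illustrative assumptions, not the platform's actual schema format:

```python
# Hypothetical inference schema: which deployed model serves which
# document type. Structure and names are assumptions for illustration.
inference_schema = {
    "invoice": {"model": "invoice-extractor", "version": 3},
    "delivery_note": {"model": "delivery-extractor", "version": 1},
}

def resolve_model(schema, document_type):
    # Look up the model configured for a document type; None if unmapped.
    return schema.get(document_type)

entry = resolve_model(inference_schema, "invoice")
```

With a mapping like this in place, the PREDICT-API can route each incoming document (step 9) to the appropriate deployed model without the calling application knowing which model is used.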


Read on

Kairos CLI App

A command-line application for managing the Artificial Intelligence Platform. Keep reading

ML Training Pipeline

Responsible for preparing data for training, training machine learning models, evaluating trained models, and preparing them for deployment to production. Keep reading

PREDICT-API Service

The service of the AI Platform that provides the API for retrieving typification predictions determined by the Machine Learning (ML) Pipeline. Keep reading